Suicide watch on Facebook raises issues

Critics say social network's screening scheme could cause harm instead and lead to arrests

Telecommunications operator Courtney Davis and police officer Bruce Haire of Rock Hill, South Carolina. Facebook called Ms Davis about a man who was live-streaming a suicide attempt, helping Sergeant Haire find him. Some critics warn that Facebook is becoming an arbiter of users' mental distress without proving that its efforts are accurate, effective or safe. PHOTO: NYTIMES

NEW YORK • A police officer on the late shift in an Ohio town recently received an unusual call from Facebook. Earlier that day, a local woman had written a Facebook post saying she was walking home and intended to kill herself when she got there, according to a police report on the case.

Facebook called to warn the police department about the suicide threat. The officer who took the call quickly located the woman, but she denied having suicidal thoughts, the police report said.

Even so, the officer believed she might harm herself and told her that she must go to a hospital - either voluntarily or in police custody. He ultimately drove her to a hospital for a mental health work-up, an evaluation prompted by Facebook's intervention.

Police stations from Massachusetts to Mumbai have received similar alerts from Facebook over the past 18 months as part of what is likely the world's largest suicide threat screening and alert programme. The social network ramped up the effort after several people live-streamed their suicides on Facebook Live in 2017. It now uses algorithms and user reports to flag possible suicide threats.

Facebook's rise as a global arbiter of mental distress puts it in a tricky position at a time when it is under investigation for privacy lapses by regulators in the United States, Canada and the European Union - as well as facing heightened scrutiny for failing to respond quickly to election interference and ethnic hatred campaigns on its site.

Even as Facebook's chief executive Mark Zuckerberg has apologised for improper harvesting of user data, the company has been grappling with fresh revelations about special data-sharing deals with tech companies.

The anti-suicide campaign gives Facebook an opportunity to frame its work as a good news story.

Suicide is the second-leading cause of death among people aged 15 to 29 worldwide, according to the World Health Organisation. Some mental health experts and police officials said Facebook had aided officers in locating and stopping people who were clearly about to harm themselves.

Facebook has computer algorithms that scan the posts, comments and videos of users in the US and other countries for indications of immediate suicide risk. When a post is flagged, whether by the technology or by a concerned user, it moves to human reviewers at the company, who decide whether to call law enforcement.

"In the last year, we've helped first responders quickly reach around 3,500 people globally who needed help," Mr Zuckerberg wrote in a November post about the company's efforts.

But other mental health experts said Facebook's calls to police could also cause harm - such as unintentionally precipitating suicide, compelling non-suicidal people to undergo psychiatric evaluations, or prompting arrests or shootings.

And, they said, it is unclear whether the company's approach is accurate, effective or safe.

Facebook said that, for privacy reasons, it did not track the outcomes of its calls to police. And it has not disclosed exactly how its reviewers decide whether to call emergency responders.

Critics said Facebook had assumed the authority of a public health agency while protecting its process as if it were a corporate secret. "It's hard to know what Facebook is actually picking up on, what they are actually acting on, and are they giving the appropriate response to the appropriate risk," said Dr John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Centre in Boston. "It's black box medicine."

Facebook said it worked with suicide prevention experts to develop a programme to quickly connect users in distress with friends and send them contact information for help lines. It said experts also helped train dedicated Facebook teams, whose members have experience in law enforcement and crisis response, to review the most urgent cases.

In a forthcoming article in a Yale law journal, Mr Mason Marks, a health law scholar, argues that Facebook's suicide risk scoring software, along with its calls to police that may lead to mandatory psychiatric evaluations, constitutes the practice of medicine.

He said government agencies should regulate the programme, requiring Facebook to produce safety and effectiveness evidence. "In this climate in which trust in Facebook is really eroding, it concerns me that Facebook is just saying, 'Trust us here'," said Mr Marks, a fellow at Yale Law School and New York University School of Law.

A Facebook spokesman disagreed that the programme amounted to health screening, saying: "These are complex issues, which is why we have been working closely with experts."

NYTIMES


A version of this article appeared in the print edition of The Straits Times on January 02, 2019, with the headline Suicide watch on Facebook raises issues.