Facebook Expands Self-Harm Prevention Program That Monitors Users' 'Thoughts Of Suicide'

Facebook is expanding its artificial intelligence-based suicide prevention efforts. The company said today that it plans to eventually monitor and respond to suicidal intent on Facebook "worldwide", excluding the European Union.

Image: Getty

Today, Facebook's vice president of product management Guy Rosen published a blog post about the company's efforts to detect and report users who express "thoughts of suicide". The post states that Facebook is "[u]sing pattern recognition to detect posts or live videos where someone might be expressing thoughts of suicide, and to help respond to reports faster". But Rosen said the company is also relying on more human reviewers, who investigate suicide- and self-harm-related posts. This "Community Operations team" comprises thousands of workers, including a "dedicated group of specialists who have specific training in suicide and self harm". Reviewers use software that points them to portions of videos that show a spike in comments and reactions.
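To make that last point concrete, here is a minimal, hypothetical sketch of the kind of spike detection such reviewer tooling might perform. The bucket size, threshold and data shapes are assumptions for illustration only; Facebook has not published how its software actually works.

```python
# Hypothetical sketch: surface the portions of a video where comment or
# reaction activity spikes, so a reviewer can jump straight to them.
# Bucket size and threshold are illustrative assumptions.
from collections import Counter

def spike_segments(event_timestamps, bucket_secs=30, factor=2.0):
    """Return start times (in seconds) of video segments whose event
    count exceeds `factor` times the mean count across segments."""
    if not event_timestamps:
        return []
    buckets = Counter(int(t // bucket_secs) for t in event_timestamps)
    mean = sum(buckets.values()) / len(buckets)
    return sorted(b * bucket_secs for b, n in buckets.items()
                  if n > factor * mean)

# Example: comments cluster around the 90-second mark of the video.
comment_times = [5, 12, 33, 88, 90, 91, 92, 93, 94, 95, 180]
print(spike_segments(comment_times))  # -> [90]
```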

These workers also use pattern-recognition software to determine the order in which they should review flagged posts and videos, and which local authorities they should alert about potential suicide attempts. The flagging software looks for certain phrases, such as "Can I help?" and "Are you OK?"
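As a rough illustration of how phrase-based flagging and review ordering could work, here is a hypothetical sketch. The two phrases come from the article; the scoring scheme and data structures are invented for illustration and are not Facebook's actual system.

```python
# Hypothetical sketch: score posts whose comments contain concern
# phrases, then order the review queue so the highest-scoring posts
# reach reviewers first. Phrase weights are illustrative assumptions.
CONCERN_PHRASES = {"can i help": 2.0, "are you ok": 2.0}

def concern_score(comments):
    """Sum the weights of concern phrases found in a post's comments."""
    score = 0.0
    for comment in comments:
        lowered = comment.lower()
        for phrase, weight in CONCERN_PHRASES.items():
            if phrase in lowered:
                score += weight
    return score

def review_queue(flagged_posts):
    """Order flagged posts by descending concern score for review."""
    return sorted(flagged_posts,
                  key=lambda post: concern_score(post["comments"]),
                  reverse=True)

posts = [
    {"id": 1, "comments": ["nice photo"]},
    {"id": 2, "comments": ["Are you OK?", "Can I help?"]},
]
print([post["id"] for post in review_queue(posts)])  # -> [2, 1]
```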

Facebook did not respond to a Gizmodo request for more information on how the machine-learning software flags posts that might include suicidal thoughts, which could lead to people getting reported to their local authorities.

Facebook first announced in May that it was hiring 3000 people to help monitor the social networking site for comments, posts and videos about murder and suicide. Ryan Calo, a professor at the University of Washington who specialises in cyber law and privacy, expressed concern to Reuters over the notion of Facebook scanning comments and posts for harmful activity. "Once you open the door, you might wonder what other kinds of things we would be looking for," Calo told Reuters.

Rosen wrote that, in the last month, Facebook connected with first responders on more than 100 cases stemming from suicide-related Facebook posts. That figure does not include the reports that Facebook has received from concerned users. Facebook also produced a video of police officers in Chautauqua County, New York, discussing a case in which Facebook reported a post by a person threatening to commit suicide. According to the sheriff and deputy in the video, local authorities tracked the person down through a "mobile phone ping" and took her to a hospital.

Rosen wrote that Facebook is expanding its use of suicide-prevention AI outside of the United States and plans to eventually apply the system across the world, excluding the European Union. Facebook did not respond to a Gizmodo request for information on what other countries will be included in this effort.

If depression is affecting you or someone you know, call Lifeline on 13 11 14.

[Facebook Newsroom]

Comments

    But hate crime and religious extremism are OK to Facebook.

    People who tell other people they're going to commit suicide rarely do it. It's the depressed people who say nothing that are the ones to be concerned about. I know of four friends who committed suicide, and none of them ever let anyone know beforehand.

    It's only a matter of time before this is expanded to look out for people who might intend to harm others. I hate to use the words 'slippery slope', but it seems like it could be one.

    Next thing you know, we're locking people up for pre-crimes or thought crimes.

    As much as I'm for helping people who have suicidal thoughts, these types of programs can be easily exploited for nefarious means.

    Wow, the implications of this are scary bad. Also quite cynical to publicise it under the guise of suicide prevention. Yet another reason I don't use Facebook.
