Facebook is using artificial intelligence to improve its detection of posts from users who are vulnerable and may be at risk of suicide.
Earlier this year, the firm developed and tested algorithms for suicide detection, flagging posts and statuses made by people who appeared to need help. The rollout was initially limited to the US, but after a positive reception, Facebook will now extend it to other regions, with the exception of the EU due to privacy regulations.
“Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts. This is in addition to reports we received from people in the Facebook community,” Facebook’s Guy Rosen said, explaining the benefits of the programme. “We also use pattern recognition to help accelerate the most concerning reports. We’ve found these accelerated reports, which we have signalled require immediate attention, are escalated to local authorities twice as quickly as other reports.”
Facebook’s AI helps reduce response times for at-risk users by prioritising content it deems especially worrisome, so that human moderators can review it and take the most appropriate action depending on the level of risk.
“We provide people with a number of support options, such as the option to reach out to a friend, and even offer suggested text templates. We also suggest contacting a help line and offer other tips and resources for people to help themselves in that moment,” Rosen says.
Facebook is, for many, the centre of their digital lives. This is just one measure among many that the firm is now taking, after years of a lackadaisical attitude, to ensure that it handles that power responsibly.