Facebook explains how it uses AI to help suicide prevention
Facebook has been contending with the use of its platform by people experiencing suicidal ideation and mental illness, and today the firm shared some of the ways it uses AI to triage posts and connect those at risk with help and support.
The firm shared the challenges involved in this work in support of World Suicide Prevention Day.
“To train a machine learning classifier, you need to feed it tons of examples, both of what you’re trying to identify (positive examples) as well as what you’re not trying to identify (negative examples), so that the classifier learns to distinguish patterns between the two. (Check out the first video on this page for more on this concept.) Usually, you want thousands or even millions of examples in both categories,” Facebook’s Catherine Card, Director of Product Management, explained. “But when it comes to Facebook posts that contain suicidal expressions, the team had, thankfully, relatively few positive examples to work from. That sparse data set was dwarfed by the negative examples — that is, the entire universe of Facebook text posts that were not suicidal expressions. This imbalance only made the context challenge harder.”
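Card’s description maps onto the standard supervised-learning workflow. As a rough illustration only — the toy data, scikit-learn pipeline, and model choice below are our assumptions, not Facebook’s actual system — a binary text classifier trained on positive and negative examples might look like this:

```python
# A minimal sketch of the concept Card describes: training a binary text
# classifier from labeled positive and negative examples. The tiny toy
# data set and model here are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Positive examples: texts the classifier should flag (label 1).
# Negative examples: everything else (label 0).
texts = [
    "I can't go on anymore",          # positive example (toy)
    "thinking about ending it all",   # positive example (toy)
    "had a great day at the beach",   # negative example (toy)
    "anyone up for pizza tonight?",   # negative example (toy)
]
labels = [1, 1, 0, 0]

# TF-IDF turns raw text into numeric features; logistic regression
# then learns to separate the two classes.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# predict_proba yields a graded score rather than a hard yes/no,
# which suits triaging posts by level of risk.
print(model.predict_proba(["I feel like giving up"])[0][1])
```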
Facebook combined its few “positive” examples with a data set of negative examples: posts that moderators had reviewed and confirmed did not indicate that the user posed a risk to themselves.
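One common way to cope with this kind of imbalance — sketched below with assumed toy numbers and scikit-learn’s class weighting, not a description of Facebook’s actual method — is to weight the rare positive class more heavily so the sparse positive examples are not drowned out by the vast negative set:

```python
# A sketch of a standard class-imbalance technique: reweighting the rare
# positive class. The 10-to-10,000 ratio is an invented toy figure.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

labels = np.array([1] * 10 + [0] * 10_000)  # few positives, many negatives

# 'balanced' weights each class inversely to its frequency, so here a
# single positive example counts roughly 1,000x more than a negative one.
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=labels)
print(dict(zip([0, 1], weights)))  # approx {0: 0.5, 1: 500.5}

# The same weighting plugs directly into most scikit-learn estimators.
clf = LogisticRegression(class_weight="balanced")
```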
The algorithm uses both the post itself and the comments on it to determine a level of risk. A post whose comments ask for the poster’s location, for example, is treated with greater urgency. Even with all this, some people at risk can sadly still slip through the cracks.
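To illustrate how comment signals could escalate a post’s urgency, here is a hypothetical sketch; the function, keyword list, weighting, and thresholds are all invented for illustration and do not reflect Facebook’s implementation:

```python
# Hypothetical illustration: combining a post's own model score with
# cues from comments (such as people asking where the poster is) to
# raise the urgency of review. All names and numbers are invented.
URGENT_COMMENT_CUES = ("where are you", "what's your location", "call 911")

def risk_level(post_score: float, comments: list[str]) -> str:
    """Escalate a post's risk score when comments suggest acute concern."""
    urgent = any(
        cue in comment.lower()
        for comment in comments
        for cue in URGENT_COMMENT_CUES
    )
    score = min(1.0, post_score + (0.3 if urgent else 0.0))
    if score >= 0.8:
        return "high"      # e.g. routed to human reviewers immediately
    if score >= 0.5:
        return "elevated"
    return "low"

print(risk_level(0.6, ["Where are you right now??", "Please answer"]))  # high
```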
“Technology can’t replace people in the process, but it can be an aid to connect more people in need with compassionate help,” Card writes.
You can read the full post here.