Facebook Uses Artificial Intelligence To Spot Potentially Suicidal Users
Facebook already has some tools and services in place, though it is adding new ones, along with making additional resources available through Facebook Live and Messenger for friends of those who might be feeling suicidal. One example involves Facebook Live: if someone streaming a video says or does something that makes a viewer worry the person may be suicidal, the viewer can reach out directly and report the video to Facebook.
Reporting the video is not meant to get the person in trouble. Instead, it alerts Facebook that someone might be at risk of self-harm, and Facebook can then provide the person with a set of resources while they are streaming live. Those resources include the ability to reach out to a friend, contact a helpline, and view tips, among other tools.
An alternative would be to cut the stream entirely to keep viewers from witnessing a tragedy. However, Facebook has learned that this is not the best approach.
"Some might say we should cut off the live stream, but what we've learned is cutting off the stream too early could remove the opportunity for that person to receive help," Facebook Researcher Jennifer Guadagno told TechCrunch.
Facebook isn't going it alone. It worked with organizations such as the Crisis Text Line, the National Eating Disorders Association, and the National Suicide Prevention Lifeline to make these new tools and resources available to users. And of course Facebook tapped its technical prowess: the social network is using artificial intelligence and pattern recognition to determine whether a post signals suicidal intent.