Facebook admits its AI isn’t capable enough to stop extremist content from going Live

Facebook has admitted that it cannot reliably stop Facebook Live broadcasts that show extremist content, because there is too little similar footage available to train its AI on. Put simply, Facebook's AI is not yet capable of detecting extremist content and taking prompt action.

The terrorist attack, which took the lives of at least 49 people, is clearly one of the most tragic incidents in recent history; the country's Prime Minister described it as one of New Zealand's darkest days. And thanks to social media, the terrorist was even able to live-stream the gruesome attack.

Facebook earlier said that the Christchurch broadcast was watched by fewer than 200 people while it was live, with around 4,000 more views after the broadcast ended. In its defense, Facebook said that the live broadcast was not reported even a single time while it was airing, so it was unable to take prompt action.

The social media giant has since removed more than 1.5 million copies of the video from the platform. But the company will clearly need to push itself harder to address these kinds of failures, and it will remain under immense pressure to do so.

Via: Forbes

