Facebook to use AI to remove extremist content

SAN FRANCISCO • Responding to complaints that not enough is being done to keep extremist content off social media platforms, Facebook said that it would begin using artificial intelligence (AI) to help remove inappropriate content.

Facebook, which says it has 1.94 billion monthly active users worldwide, said it wants to be "a hostile place for terrorists".

AI will largely be used in conjunction with human moderators who review content on a case-by-case basis. But developers hope its use will be expanded over time, said Ms Monika Bickert, head of global policy management at Facebook.

One of the first applications for the technology is identifying content that clearly violates Facebook's terms of use, such as photos and videos of beheadings or other gruesome images, and stopping users from uploading them.

In a blog post on Thursday, Facebook described how an AI system would, over time, teach itself to identify key phrases that were previously flagged for being used to bolster a known terrorist group.

The same system, the company wrote, could learn to identify Facebook users who associate with clusters of pages or groups that promote extremist content, or who return to the site again and again, creating fake accounts in order to spread such content online.

"Ideally, one day our technology will address everything," Ms Bickert said. "It's in development right now." But human moderators, she added, are still needed to review content for context.

Mr Brian Fishman, Facebook's lead policy manager for counterterrorism, said the company has a team of 150 specialists working in 30 languages to conduct such reviews. They include academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, working exclusively or primarily on countering terrorism.

Facebook said it will grow its community operations teams around the world by 3,000 people over the next year. The teams will work 24 hours a day and in dozens of languages to review accounts or content that may violate its policies, including terror-related ones.

Facebook has been criticised for not doing enough to track content posted by extremist groups.

Last month, Prime Minister Theresa May of Britain announced that she would challenge Internet companies to do more to monitor and stop such groups. "We cannot allow this ideology the safe space it needs to breed," she said after the Manchester attack.

Mr J.M. Berger, a fellow with the International Centre for Counterterrorism at The Hague, said a large part of the challenge for companies such as Facebook is figuring out what qualifies as terrorism.


A version of this article appeared in the print edition of The Straits Times on June 17, 2017, with the headline 'Facebook to use AI to remove extremist content'.