SAN FRANCISCO • Some of the Web's biggest destinations for watching videos have quietly started using automation to remove extremist content from their sites, according to two people familiar with the process.
The move is a major step forward for Internet companies that are eager to eradicate violent propaganda from their sites and are under pressure to do so from governments around the world as attacks by extremists proliferate, from Syria to Belgium and the United States.
YouTube and Facebook are among the sites deploying systems to block or rapidly take down Islamic State in Iraq and Syria (ISIS) videos and other similar material, the sources said.
The technology was originally developed to identify and remove copyright-protected content on video sites. It looks for "hashes", a type of unique digital fingerprint that Internet companies automatically assign to specific videos, allowing all content with matching fingerprints to be removed rapidly.
Such a system would catch attempts to repost content already identified as unacceptable, but would not automatically block videos that have not been seen before.
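The matching process described above can be sketched in a few lines of Python. This is a simplified illustration, not the companies' actual systems: real deployments reportedly use proprietary fingerprinting that survives re-encoding, whereas the cryptographic hash used here only catches byte-identical reposts. The blocklist and video contents are hypothetical.

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Compute a digital fingerprint (hash) of a video file.
    SHA-256 is a stand-in for the proprietary fingerprints the article describes."""
    return hashlib.sha256(video_bytes).hexdigest()

# Hypothetical blocklist of hashes for content already flagged as unacceptable.
blocked_hashes = {fingerprint(b"previously-flagged-video")}

def should_block(upload: bytes) -> bool:
    """Block an upload only if its fingerprint matches known banned content.
    A never-before-seen video produces a new hash and passes through,
    matching the article's point that novel material is not auto-blocked."""
    return fingerprint(upload) in blocked_hashes

print(should_block(b"previously-flagged-video"))  # a repost is caught
print(should_block(b"brand-new-video"))           # unseen content passes
```

The design trade-off is visible here: hash matching is fast and requires no judgment at upload time, but every new piece of content must first be reviewed and added to the blocklist before reposts can be caught automatically.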
Use of the new technology is likely to be refined over time as Internet firms continue to discuss the issue internally and with competitors and other interested parties.
In late April, amid pressure from US President Barack Obama and other US and European leaders concerned about online radicalisation, Internet companies including Alphabet Inc's YouTube, Twitter Inc, Facebook Inc and CloudFlare held a call to discuss options to curtail the menace, according to one person on the call and three who were briefed on what was discussed.
The sources said that companies expressed wariness of letting an outside group decide what defined unacceptable content.
Other alternatives raised on the call included establishing a new industry-controlled non-profit organisation or expanding an existing industry-controlled one.
All the options discussed involved hashing technology.
Mr Seamus Hughes, the deputy director of George Washington University's Program on Extremism, said different web companies draw the line in different places when it comes to extremism.
Most have relied mainly on users to flag content that violates their terms of service, and many still do. Flagged material is then individually reviewed by human editors who delete postings found to be in violation.
The companies now using automation are not publicly discussing it, two sources said, in part out of concern that terrorists might learn how to manipulate their systems or that repressive regimes might insist the technology be used to censor opponents.
"There's no upside in these companies talking about it," said Mr Matthew Prince, the chief executive of content distribution company CloudFlare. "Why would they brag about censorship?"