OAKLAND (CALIFORNIA) • Alphabet's Google will implement more measures to identify and remove terrorist or violent extremist content on its video-sharing platform YouTube, the company said in a blog post on Sunday.
Google said it would place warnings on such videos and would not monetise, recommend or make them eligible for user endorsements, even if they do not clearly violate its policies.
It will also devote more engineering resources and expand its use of technology to help identify extremist videos, in addition to training new content classifiers to quickly identify and remove such content.
"While we and others have worked for years to identify and remove content that violates our policies... we, as an industry, must acknowledge that more needs to be done. Now," said Google's general counsel Kent Walker.
Google will expand its collaboration with counter-extremism groups to identify content that may be used to radicalise and recruit extremists. It will also reach potential Islamic State in Iraq and Syria recruits through targeted online advertising and redirect them to anti-terrorist videos in an effort to change their minds about joining.
Germany, France and Britain - where civilians have been killed and wounded in bombings and shootings by Islamist militants in recent years - have pressed social media platforms to do more to remove militant content and hate speech.
Facebook last Thursday offered additional insight into its efforts to remove terrorism content. It has ramped up its use of artificial intelligence, such as image matching and language understanding, to identify and remove content quickly, it said in a blog post.
REUTERS