YouTube says computers are catching problem videos

YouTube said it took down 8.28 million videos during the fourth quarter of 2017. PHOTO: REUTERS

SAN FRANCISCO (NYTIMES) - The vast majority of videos removed from YouTube toward the end of last year (2017) for violating the site's content guidelines had first been detected by machines instead of humans, the Google-owned company said on Monday (April 23).

YouTube said it took down 8.28 million videos during the fourth quarter of 2017, and about 80 per cent of those videos had initially been flagged by artificially intelligent computer systems.

The new data highlighted the significant role machines - not just users, government agencies and other organizations - are taking in policing the service as it faces increased scrutiny over the spread of conspiracy videos, fake news and violent content from extremist organizations.

Those videos are sometimes promoted by YouTube's recommendation system and unknowingly financed by advertisers, whose ads are placed next to them through an automated system.

This was the first time that YouTube had publicly disclosed the number of videos it removed in a quarter, making it hard to judge how aggressive the platform has previously been in removing content, or the extent to which computers played a part in making those decisions.

Figuring out how to remove unwanted videos - and balancing that with free speech - is a major challenge for the future of YouTube, said Eileen Donahoe, executive director at Stanford University's Global Digital Policy Incubator.

"It's basically free expression on one side and the quality of discourse that's beneficial to society on the other side," Donahoe said. "It's a hard problem to solve."

YouTube declined to disclose whether the number of videos it had removed had increased from the previous quarter or what percentage of its total uploads those 8.28 million videos represented. But the company said the takedowns represented "a fraction of a per cent" of YouTube's total views during the quarter.

Betting on improvements in artificial intelligence is a common Silicon Valley approach to dealing with problematic content; Facebook has also said it is counting on AI tools to detect fake accounts and fake news on its platform.

But critics have warned against depending too heavily on computers to replace human judgement.

It is not easy for a machine to tell the difference between, for example, a video of a real shooting and a scene from a movie. And some videos slip through the cracks.

Last year, parents complained that violent or provocative videos were finding their way onto YouTube Kids, an app that is supposed to contain only child-friendly content automatically filtered from the main YouTube site.

YouTube has contended that the volume of videos uploaded to the site is too great a challenge to rely on human monitors alone.

Still, in December, Google said it was hiring 10,000 people in 2018 to address policy violations across its platforms.

In a blog post on Monday, YouTube said it had filled most of the jobs allotted to it, hiring specialists with expertise in violent extremism, counterterrorism and human rights, and expanding its regional teams. It was not clear what YouTube's final share of the 10,000 hires would be.

YouTube said three-quarters of all videos flagged by computers had been removed before anyone had a chance to watch them.

The company's machines can detect when a person tries to upload a video that has already been taken down and will prevent that video from reappearing on the site.

With some videos containing nudity or misleading content, YouTube said, its computer systems are adept enough to delete them without requiring a human to review the decision.

The company said its machines are also getting better at spotting violent extremist videos, which tend to be harder to identify and have fairly small audiences.

At the start of 2017, before YouTube introduced so-called machine-learning technology to help computers identify videos associated with violent extremists, 8 per cent of videos flagged and removed for that kind of content had fewer than 10 views. In the first quarter of 2018, the company said, more than half of the videos flagged and removed for violent extremism had fewer than 10 views.

Even so, users still play a meaningful role in identifying problematic content. The top three reasons users flagged videos during the quarter involved content they considered sexual, misleading or spam, and hateful or abusive.

YouTube said users raised 30 million flags on roughly 9.3 million videos during the quarter. In total, 1.5 million videos were removed after first being flagged by users.
