WASHINGTON (WASHINGTON POST) - YouTube's defences against misinformation just backfired in a big way - and ended up contributing to baseless speculation online that the Notre-Dame cathedral fire on Monday (April 15) resulted from a terrorist attack.
As news organisations and others used the service to broadcast the collapse of the Paris cathedral's spire, YouTube's algorithms mistakenly displayed details about the Sept 11, 2001 terrorist attacks in New York in "information panels" below the videos.
While these fact-checking tools are designed to counter hoaxes, they likely fed false rumours online. People falsely claimed Muslim terrorists caused the incident, even as Paris officials said the fire was likely due to ongoing renovations and there was no sign of a terrorist attack.
And while the boxes noted the "extensive death and destruction" from attacks that took down New York's World Trade Center and killed thousands of people, there appeared to be few injured in the Paris fire.
Technology companies are increasingly promising investments in artificial intelligence, and algorithms will be a crucial component of their arsenal of tools to combat violent content, disinformation or other hoaxes.
But Monday's high-profile mistake - on the heels of another recent failure to quickly stop the spread of violent videos of the terrorist attack in New Zealand last month - underscores how this technology is still error-prone and unreliable.
And it is raising questions about the efficacy of leaving such decisions to machines. "At this point, nothing beats humans," Mr David Carroll, an associate professor of media design at the New School in New York and a critic of social media companies, told the Washington Post.
"Here's a case where you'd be hard-pressed to misclassify this particular example, while the best machines on the planet failed."
University of Washington professor Pedro Domingos, a machine-learning researcher, told The Post he wasn't surprised YouTube's algorithms made such a mistake. Algorithms do not have the comprehension of human context or common sense, which makes them seriously unprepared for news events.
"They have to depend on these algorithms, but they all have all sorts of failure modes. And they can't fly under the radar anymore," Prof Domingos said. "It's not just Whack-a-Mole. It's a losing game."
YouTube's mistake highlights the uphill challenge for companies under pressure from policymakers across the globe as they seek new ways to combat misinformation. YouTube began rolling out so-called information panels to provide factual information about hoaxes in recent months. The computer algorithms likely detected visual similarities between Monday's fire and the 9/11 tragedy, which is frequently a target of conspiracy theories on the service. BuzzFeed News reported that the widget appeared on at least three news organisations' streams.
"We are deeply saddened by the ongoing fire at the Notre-Dame cathedral," YouTube said in a statement to The Post. "Last year, we launched information panels with links to third-party sources like Encyclopaedia Britannica and Wikipedia for subjects subject to misinformation. These panels are triggered algorithmically and our systems sometimes make the wrong call. We are disabling these panels for live streams related to the fire."
YouTube was not the only platform that struggled in its response to the cathedral fire. Twitter was also racing to address the rapid spread of hoaxes and conspiracy theories on its own platform.
Ms Jane Lytvynenko of BuzzFeed News found numerous examples of fake claims about the fire on Monday afternoon on Twitter, including an account impersonating CNN that attributed the fire to terrorists, and a fake Fox News account that posted fabricated comments purporting to be from Democratic Representative Ilhan Omar. Both those examples were removed, Ms Lytvynenko reported.
In an interview, a Twitter spokesman said the company is reviewing reports of disinformation related to the fire. "The team is reviewing reports, and if they are in violation, suspending them per the Twitter Rules," the spokesman said. "Our focus continues to be detecting and removing coordinated attempts to manipulate the conversation at speed and scale."
The Verge's Casey Newton noted on Monday night that while none of the disinformation seemed to go viral immediately, there is still cause for concern.
"And even if you think some level of conspiracy theorising is inevitable after a catastrophe, it's possible to wish social media companies didn't so powerfully enable the spread of such theories," Mr Newton wrote.