NEW YORK • When they woke up and glanced at their phones last Monday morning, Americans may have been shocked to learn that the man behind the mass shooting in Las Vegas late on Sunday was an anti-Trump liberal who liked American television host Rachel Maddow and MoveOn.org (an American progressive, public policy group), that the Federal Bureau of Investigation had linked him to the Islamic State in Iraq and Syria (ISIS), and that mainstream news organisations were suppressing that he had recently converted to Islam.
They were shocking, gruesome revelations. They were also entirely false - and widely spread by Google and Facebook.
In Google's case, trolls from 4chan, a notoriously toxic online message board with a vocal far-right contingent, had spent the night scheming about how to pin the shooting on liberals.
One of their discussion threads, in which they wrongly identified the gunman, was picked up by Google's "top stories" module, and spent hours at the top of the site's search results for that man's name.
In Facebook's case, an official "safety check" page for the Las Vegas shooting prominently displayed a post from a site called "Alt-Right News". The post incorrectly identified the shooter and described him as a Trump-hating liberal. In addition, some users saw a story on a "trending topic" page on Facebook for the shooting that was published by Sputnik, a news agency controlled by the Russian government. The story's headline claimed, incorrectly, that the FBI had linked the shooter with the ISIS terror group.
Google and Facebook blamed algorithmic errors for the mistakes. Twitter said it was also stepping up efforts to weed out false reports on the shooting.
But this was no one-off incident.
Over the past few years, extremists, conspiracy theorists and government-backed propagandists have made a habit of swarming major news events, using search-optimised "keyword bombs" and algorithm-friendly headlines.
These organisations are skilled at reverse-engineering the ways that tech platforms parse information, and they benefit from a vast real-time amplification network that includes 4chan and Reddit as well as Facebook, Twitter and Google.
Even when these campaigns are thwarted, they often last hours or days - long enough to spread misleading information to millions of people.
The latest fake news flare-up came at an inconvenient time for companies like Facebook, Google and Twitter, which are already defending themselves from accusations that they have let malicious actors run rampant on their platforms.
Disasters, with their compelling stories, attract enormous attention. But much of what circulates in their wake is not real news. Here are some recent instances:
ROHINGYA REFUGEE CRISIS
It was hard to separate fact from fiction when photographs of burning houses in Myanmar fuelled debate over whether the Rohingya were setting fire to their own homes to draw attention to their plight.
"Photos of Bengalis torching their houses themselves," read one Facebook post, which newspaper reports followed up with an interview with a monk who reportedly witnessed the arson. But Internet sleuths soon figured out the photographs were fake.
MEXICO EARTHQUAKE
Mexicans were left baffled during the recent earthquake by news reports about a 12-year-old girl trapped in the rubble of a school destroyed in the 7.1-magnitude quake, which left more than 270 people dead.
News media urgently repeated the story of "Frida Sofia". After two days of rescue efforts, officials declared that all the schoolchildren had been accounted for, and that the girl never existed.
Some news outlets, reporting off social media, said a certain Dr Elena Orozco had asked for help on her mobile phone.
But Dr Orozco was safe and sound. On her Facebook page, she said: "I'm not under the rubble. Verify everything!"
HURRICANE HARVEY
Fake news abounded as Hurricane Harvey brought flooding to the Texas coast and Houston area.
One headline read: "Black Lives Matter thugs blocking emergency crews from reaching hurricane victims."
One website alone flashed the report to a million people via Facebook. News reports showed the opposite was true, with many activists reaching out to help victims.
Muslims were accused in some posts of refusing to let Harvey victims seek shelter in mosques. Again, the opposite was true and the imam featured in one such post said he had never visited Texas.
SOURCE: THE STRAITS TIMES, GUARDIAN, HUFFINGTON POST
Part of the problem is that these companies have largely evaded the responsibility of moderating the content that appears on their platforms, instead relying on rule-based algorithms to determine who sees what.
Facebook, for instance, previously had a team of trained news editors who chose which stories appeared in its trending topics section, a huge driver of traffic to news stories. But it disbanded the group and instituted an automated process last year, after reports surfaced that the editors were suppressing conservative news sites.
The change seems to have made the problem worse.
There is also a labelling issue. A Facebook user looking for news about the Las Vegas shooting on Monday morning, or a Google user searching for information about the wrongfully accused shooter, would have found posts from 4chan and Sputnik alongside articles by established news organisations like CNN and NBC News, with no obvious cues to indicate which ones came from reliable sources.
More thoughtful design could help solve this problem, and Facebook has already begun to label some disputed stories with the help of professional fact checkers.
But fixes that require identifying "reputable" news organisations are inherently risky because they open companies up to accusations of favouritism.
The automation of editorial judgment, combined with tech companies' reluctance to appear partisan, has created a lopsided battle between those who want to spread misinformation and those tasked with policing it.
Posting a malicious rumour on Facebook, or writing a false news story that is indexed by Google, is a nearly instantaneous process.
Removing such posts often requires human intervention.
This imbalance gives an advantage to rule-breakers, and makes it impossible for even an army of well-trained referees to keep up.
But just because the war against misinformation may be unwinnable does not mean it should be avoided. Roughly two-thirds of American adults get news from social media, which makes the methods these platforms use to vet and present information a matter of national importance.
Facebook, Twitter and Google are some of the world's richest and most ambitious companies, but they still have not shown that they are willing to bear the costs - or the political risks - of fixing the way misinformation spreads on their platforms. Tech companies should act decisively to prevent hoaxes and misinformation from spreading on their platforms.
Facebook and Google have spent billions of dollars developing virtual reality systems. They can spare a billion or two to protect actual reality.
NYTIMES, AGENCE FRANCE-PRESSE