Fake news spread by humans more than bots

Study also finds false stories about 70% more likely to be shared than true ones on Twitter

WASHINGTON • False news stories spread much more quickly and widely on Twitter than truthful ones, an imbalance driven more by people than automated "bot" accounts, researchers have found.

A study released by researchers at the Massachusetts Institute of Technology's (MIT) Media Lab on Thursday, which examined about 126,000 stories shared by some three million people on Twitter from 2006 to last year (2017), found that false news was about 70 per cent more likely to be retweeted by people than true news.

The study, published in the journal Science, was one of the most comprehensive efforts to date to assess the dynamics behind how false news circulates on social media.

Twitter and other social media companies such as Facebook have been under scrutiny by US lawmakers and international regulators for doing too little to prevent the spread of false content.

US officials have accused Russia of using social media to try to sow discord in the United States and meddle in the 2016 US presidential election.

The stories examined in the study were reviewed by six independent fact-checking organisations, including Snopes and Politifact, to assess their veracity.

False stories spread significantly more quickly and broadly than true stories in all categories of information, but this was more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends or financial information, said the researchers.

True stories were rarely retweeted by more than 1,000 people, but the top 1 per cent of false stories were routinely shared by 1,000 to 100,000 people. And it took true stories about six times as long as false ones to reach 1,500 people.

The researchers noted increases in false political stories during the 2012 and 2016 US presidential races. Although Twitter's allowance of bots has come under particular criticism, the MIT researchers found that these automated accounts accelerated true and false news equally, meaning people were more directly responsible for the spread of false news.

MIT Media Lab researcher and study lead author Soroush Vosoughi said people may be more likely to share false news because it is more surprising, in the same way that sensationalised "click bait" headlines garner more attention.

"One reason false news might be more surprising is, it goes against people's expectations of the world," said Mr Vosoughi in an interview. "If someone makes up a rumour that goes against what they expected, you are more likely to pass it forward."

While the study focused on Twitter, the researchers said that their findings would be likely to apply to other social media platforms, including Facebook, as well.

A Twitter spokesman declined to comment on the study's findings, but pointed to tweets by the company's chief executive Jack Dorsey last week pledging to "increase the collective health, openness and civility of public conversation, and to hold ourselves publicly accountable towards progress".

REUTERS, NYTIMES

A version of this article appeared in the print edition of The Straits Times on March 10, 2018, with the headline 'Fake news spread by humans more than bots'.