Twitter allows some users to flag 'misleading' content

Users can be more specific, flagging misleading tweets as potentially containing misinformation about "health", "politics" and "other".

SAN FRANCISCO (AFP) - Twitter has announced a new feature that allows users to flag content that could contain misinformation, a scourge that has only grown during the Covid-19 pandemic.

"We're testing a feature for you to report tweets that seem misleading - as you see them," the social network said on its safety and security account.

Starting from Tuesday (Aug 17), some users in the United States, South Korea and Australia will see an option to select "it's misleading" after clicking "report tweet".

Users can then be more specific, flagging the misleading tweet as potentially containing misinformation about "health", "politics" or "other".

The San Francisco-based company said: "We're assessing if this is an effective approach so we're starting small.

"We may not take action on and cannot respond to each report in the experiment, but your input will help us identify trends so that we can improve the speed and scale of our broader misinformation work."

Twitter, like Facebook and YouTube, regularly comes under fire from critics who say it does not do enough to fight the spread of misinformation.

But the platform does not have the resources of its Silicon Valley neighbours, and so often relies on experimental techniques that are less expensive than recruiting armies of moderators.

Such efforts have ramped up as Twitter toughened its misinformation rules during the Covid-19 pandemic and during the 2020 US presidential election between Mr Donald Trump and Mr Joe Biden.

For example, Twitter in March began blocking users who have been warned five times about spreading false information about vaccines.

And the network began flagging tweets from Mr Trump with banners warning of their misleading content during his 2020 re-election campaign, before the then President was banned from the website for posting incitements to violence and messages discrediting the election results.

Moderators are ultimately responsible for determining which content actually violates Twitter's terms of use, but the network has said it hopes to eventually use a system that relies on both human and automated analysis to detect suspicious posts.

Concern around Covid-19 vaccine misinformation has become so rampant that in July, Mr Biden said Facebook and other platforms were responsible for "killing" people in allowing false information around the vaccine shots to spread.

He later clarified the remarks, saying that the false information itself is what could harm or even kill those who believe it.