MENLO PARK, UNITED STATES - Facebook said Wednesday (July 18) it will start removing misinformation that could spark violence, a response to mounting criticism that the flow of rumours on its platform has led to physical harm to people in countries around the world, reports said.
The new policy is a shift in Facebook’s broader approach to misinformation, which until now has focused on suppressing the popularity of problematic content on the platform without scrubbing it entirely, the Wall Street Journal said.
But the company has also faced more questions about the platform’s role as a vector for false information that can inflame social tensions.
A Facebook spokeswoman said the company will implement the new policy first in Sri Lanka and later in Myanmar, two countries where some people and groups have used Facebook to spread rumours that have ultimately led to physical violence, the Wall Street Journal reported.
The attacks in those countries have garnered significant media attention.
Misinformation removed in Sri Lanka under the new policy included content falsely contending that Muslims were poisoning food given or sold to Buddhists, according to Facebook.
“There were instances of misinformation that didn’t violate our distinct community standards but that did contribute to physical violence in countries around the world,” Tessa Lyons, a product manager on Facebook’s news feed, told the WSJ, citing Sri Lanka and Myanmar specifically.
“This is a new policy created because of that feedback and those conversations.”
Facebook has been lambasted for allowing rumours and blatantly false information that may have contributed to violence to circulate, and many see the platform as having been used as a vehicle for spreading false information in recent years.
The social media giant has implemented a series of changes aimed at fighting use of the network to spread misinformation, from fabrications that incite violence to untruths that sway elections.
The new policy raises questions that company officials said are too early to answer, including who its partners will be and what the criteria will be to become one.
It also isn’t clear how those partners will determine whether content such as doctored photos, created or shared to ignite volatile situations in the real world, is false or could lead to violence. Nor was it clear how Facebook would ensure those organisations remain independent and relatively free from political bias.
Lyons said Facebook was in the early stages of creating these policies and didn’t have details to share publicly. In an interview with WSJ, she said Facebook will rely on outside organisations’ judgment because they have “local context and local expertise.”
Facebook has relied on third-party organisations to help it navigate other thorny issues in the past. In December 2016, while facing mounting pressure for allowing misinformation to proliferate on the platform during the US election, Facebook said it would team up with fact-checking organisations in the US to help suppress false news reports on the platform.
The organisations determine which claims are true and which are false. If enough of them rate a post as false, Facebook will lower that post’s ranking on the platform.