SAN FRANCISCO • Facebook has announced an expansion of several initiatives to combat the spread of misinformation on the network used by more than two billion people.
In a company blog post on Thursday, Facebook acknowledged that fake news reports and doctored content have increasingly become image-based in some countries, making it harder for readers to discern whether a photo or video related to a news event is authentic.
The social media company said it has expanded its fact-checking of traditional links posted on Facebook to photos and videos.
Partnering with third-party experts trained in visual verification, the firm will also flag images that have been posted on Facebook in a misleading context, such as a photo of a past natural disaster or shooting presented as a present-day event.
Facebook will also use machine-learning tools to identify duplicates of debunked stories that continue to pop up on the network.
The firm said more than a billion pictures, links, videos and messages are uploaded to the social platform each day, making fact-checking impractical to carry out through human review alone.
The automated tools will help the company find domains and links that are spreading claims that have already been proved false. Facebook has previously said it would use artificial intelligence to limit misinformation; the latest update applies that technology to finding duplicates of debunked claims.
Earlier this year, Facebook said it would start a new project to help provide independent research on social media's role in elections and within democracies.
The commission in charge of the elections research is hiring staff to run the initiative, will launch a website in the coming weeks, and will request proposals for research on the scale and effects of misinformation on Facebook.
"Over time, this externally-validated research will help keep us accountable and track our progress," Facebook said.
The other updates announced on Thursday include using machine learning to identify repeat misinformation offenders and expanding Facebook's fact-checking partnerships internationally.
Mike Ananny, a communications professor at the University of Southern California, said the updates are a step in the right direction, but that Facebook has not fully explained what it is doing to combat fake news, nor shared details about how its human-led and automated detection systems actually work.
Prof Ananny suggested that Facebook share the algorithms used by its machine-learning systems, the data those systems are trained on, and whether systemic errors have been identified within them.
"Facebook is on this complicated journey of trying to figure out what its responsibility is to journalism and to the public," he said. But he added that it is not clear how Facebook defines success in these efforts, an ambiguity that may help the company evade accountability.
The latest announcement is part of Facebook's efforts, which have spanned more than a year and a half, to grapple with fake accounts, disinformation and accountability on the network. "This effort will never be finished, and we have a lot more to do," Facebook said.