SAN FRANCISCO • A week after trying to reassure the public that it was "extremely unlikely" that hoaxes had changed the outcome of the United States presidential election, Facebook founder Mark Zuckerberg outlined several ways the firm might try to stop the spread of fake news on the platform in the future.
"We've been working on this problem for a long time and we take this responsibility seriously. We've made significant progress, but there is more work to be done," Mr Zuckerberg wrote in a post on his own Facebook page.
He then named seven approaches the company was considering to address the issue, including warning labels on false stories, easier user reporting methods and the integration of third-party verification.
"The problems here are complex, both technically and philosophically," he cautioned, repeating Facebook's long-standing aversion to becoming the "arbiters of truth".
Instead, it prefers to rely on third parties and users to make those distinctions. "We need to be careful not to discourage sharing of opinions or mistakenly restricting accurate content," he said.
While none of the listed ideas are particularly specific, his post does provide more details on the firm's thinking about the problem.
Facebook's concern with fake news predates the 2016 elections. Hoaxes have long plagued the site, whose algorithms reward content that users are likely to share, whether it is true or not.
Among the fake news reports that circulated ahead of the election were those erroneously alleging Pope Francis had endorsed Republican candidate Donald Trump and that a US federal agent who had been investigating Democratic candidate Hillary Clinton was found dead.
However, fake news - and specifically, Facebook's role in spreading it - became a story of wide interest just after the elections, when critics accused the platform of influencing voters by allowing political hoaxes to regularly go viral, particularly those favourable to Mr Trump, now the President-elect.
Mr Zuckerberg has strongly denied this, insisting that fake news "surely had no impact" on the outcome.
He did not contradict that denial in his post, but the post reflects Facebook's growing acknowledgment that it will have to do far more about the plague of hoaxes and fake stories on its platform.
Facebook had already announced it would crack down on fake news sites that use its ad services to profit from hoaxes.
One idea Mr Zuckerberg presented in his post indicates that the firm wants to go further in "disrupting fake news economics", including stricter ad policies and stronger "ad farm detection". Another idea promises better detection of misleading content.
News Feed can already make some guesses about whether a post is authentic or not based on the user behaviour around it.
Mr Zuckerberg said in his post that Facebook currently watches for things like "people sharing links to myth-busting sites such as Snopes" to decide if a post might be misleading or false.
He did not go into specifics about what more Facebook might be looking to do on this front.
Facebook also indicated it is trying to find ways to rely more on users and third parties to help flag and classify fake stories.
Mr Zuckerberg listed "stronger reporting" methods for users, as well as greater reliance on "third party verification" services such as fact-checking sites.
He also said Facebook was considering how to use third-party and user reports of fake news as a source for displaying warnings on fake or misleading content.
The firm would also work with third-party verification organisations and journalists on fact-checking efforts.
WASHINGTON POST, REUTERS