Tech giants Facebook, Google and Twitter yesterday argued against the need for additional legislation to tackle the threat of online untruths, saying they are already taking steps to address the issue.
The companies told the parliamentary Select Committee on deliberate online falsehoods that they have been investing heavily in technology and schemes.
This includes developing algorithms that can flag less trustworthy content and prioritise authoritative sources, as well as partnerships with non-profit organisations that help them identify and take down offensive material.
"Prescriptive legislation will not adequately address the issue effectively due to the highly subjective, nuanced and difficult task of discerning whether information is 'true' or 'false'," Mr Jeff Paine, managing director of the Asia Internet Coalition (AIC), wrote in his submission to the committee, adding later that multiple stakeholders have to be engaged instead of rushing to legislate. The AIC, an industry association of technology companies, counts LinkedIn and Apple among its members.
Ms Kathleen Reen, Twitter's director of public policy for Asia-Pacific, said in her written submission that "no single company, governmental or non-governmental actor should be the arbiter of truth".
However, when quizzed further by Select Committee members Law and Home Affairs Minister K. Shanmugam and Social and Family Development Minister Desmond Lee, Mr Paine conceded during yesterday's hearing that there could be gaps in Singapore's existing laws that prevent quick action from being taken against online falsehoods.
Speaking to a panel of representatives from Facebook, Twitter, Google and AIC, Mr Lee questioned the ability of technology companies to self-regulate.
He cited how YouTube has not completely removed a 2016 video by banned British white supremacist group National Action after more than eight months, even though British Home Affairs Select Committee chairman Yvette Cooper flagged it multiple times over the past year.
"Their experience is something that we look at with concern, being a much smaller jurisdiction... even in clear-cut cases, there has been inaction," Mr Lee said.
Mr Shanmugam noted that there can be a difference between what countries and social media platforms may tolerate.
He referred to a post on Twitter with the hashtag #DeportAllMuslims, which was accompanied by a graphic cartoon of a topless mother, surrounded by toddlers of varying ethnicities. The picture was titled "The New Europeans". The tweet had not been taken down even after being flagged, despite its offensive nature, he said.
"This was not a breach of Twitter's hateful conduct policy. If this is not a breach... I find it difficult to understand what else can be."
He told the tech industry representatives: "The various beautiful statements you made... (have) to be tested against reality... For us in Singapore, this is way beyond what we would tolerate."
Facebook's Asia-Pacific vice-president of public policy Simon Milner pointed to difficulties in coming up with policies to tackle deliberate online falsehoods.
He highlighted that a policy against online untruths will require due process, unlike "making a judgment on hate speech, or terrorism, or child sexual abuse - all the other areas of policy that we deal with".
"It is not that we are trying to abdicate our responsibilities, it is the particular notion of the kind of due process you require in order to be fair to people... that I think is more problematic for us than other policy areas," said Mr Milner.
He said this is why the platform considers using machine learning or proxies to nip the problem in the bud - a system that is still being tested - to be the right approach.