Technology companies may say they actively prevent the spread of inappropriate online content on their platforms, but there has been a "pattern of denial and inaction" instead, a United States digital forensics expert has argued.
"I reject this idea that they are aggressively going after this content... The actions simply don't support it," Professor Hany Farid of Dartmouth College told the Select Committee on deliberate online falsehoods via video conferencing yesterday.
Technology firms should do more to rein in abuses online, said Prof Farid, who argued that they "put in just enough to stave off the regulatory issues".
At the hearing, the committee put to him that social media platforms would disagree that they have been unresponsive.
Video-sharing site YouTube, for instance, said 70 per cent of videos that it found to have violent extremist content were removed within eight hours of being uploaded.
But Prof Farid asked whether YouTube had any idea of the extent of the problem, or how many such videos are on its site.
The academic, who has conducted research on extremist material appearing on YouTube, said such videos can remain online for hours, days or weeks.
In his written submission, he also noted how US-based technology firms "dragged their feet" for five years even though they were asked by the US Attorney-General to help counter the spread of child pornography online.
It was only from 2008 that things got moving, with software called PhotoDNA. This is now in use worldwide and has helped to remove tens of millions of images of child exploitation from online platforms, wrote Prof Farid, who had a hand in developing the solution.
While technology can help in the fight against inappropriate content, he said human oversight is still needed, as computer programs are not always accurate and need help to identify new illegal content.
Meanwhile, S. Rajaratnam School of International Studies senior fellow Benjamin Ang, whose submission focused on falsehoods that amount to national security threats, suggested setting up an independent body of non-governmental experts.
This group can help assess whether falsehoods are part of a larger disinformation operation. If so, a strategic rather than reactive response should be taken, he said in his written submission. This means not every story should be taken down or rebutted immediately, he added.
Mr Ang summed up the principles that should guide Singapore's response to falsehoods, which include building trust through transparent, continuous communication and building media literacy.
There are limitations to using legislation, he said, as laws can be circumvented, and online falsehoods can instead be used to build up the narrative that the Government is suppressing the truth.