Pattern of 'denial and inaction' in tech firms' response to misuse on platforms: US professor tells Select Committee on fake news

Professor Hany Farid of Dartmouth College disputed tech firms' claims that they actively prevent the spread of inappropriate content. Instead, he said, they have displayed a "pattern of denial and inaction".
Many inappropriate videos, for instance videos of extremist groups, can be found online for hours, days or even weeks, said Professor Hany Farid of Dartmouth College.
PHOTO: DARTMOUTH COLLEGE

SINGAPORE - While technology companies may say they actively prevent the spread of inappropriate online content on their platforms, there has been a "pattern of denial and inaction" instead, a United States digital forensics expert has argued.

"I reject this idea that they are aggressively going after this content... their actions simply don't support it," Professor Hany Farid of Dartmouth College told the Select Committee on deliberate online falsehoods via video conferencing on Tuesday (March 27).

The committee had put to him, for instance, that video-sharing site YouTube said 98 per cent of videos removed for violent extremism are now flagged by algorithms.

But what is the denominator, he asked: the 98 per cent refers only to videos that were removed, and says nothing about how many extremist videos YouTube never detects at all.
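His point can be made concrete with a toy calculation. All the numbers below are invented for illustration; they are not YouTube's figures.

    # Toy illustration (made-up numbers) of Prof Farid's "denominator" point:
    # "98% of removed videos were flagged by algorithms" describes how removals
    # were triggered, not how much extremist content was caught overall.
    total_extremist_videos = 10_000   # hypothetical: all extremist videos uploaded
    detected_and_removed = 2_000      # hypothetical: the ones the platform found
    flagged_by_algorithms = 1_960     # 98% of the removals came from algorithms

    share_of_removals_by_ai = flagged_by_algorithms / detected_and_removed
    recall = detected_and_removed / total_extremist_videos

    print(f"Flagged by algorithms (the quoted figure): {share_of_removals_by_ai:.0%}")
    print(f"Share of all extremist videos removed:     {recall:.0%}")
    # The quoted statistic can sit at 98% even while 80% of the content is
    # missed, because the denominator (total_extremist_videos) is unknown.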

Many inappropriate videos, for instance videos of extremist groups, can be found online for hours, days or even weeks, said Prof Farid, who is involved in efforts to counter the spread of extreme ideologies online.

The US academic was the first speaker on the sixth day of public hearings held by the committee. Civil society representatives, including activists Kirsten Han and Jolovan Wham, are also scheduled to speak later in the day.

The committee had earlier heard from representatives of global social media companies including Facebook and Twitter, as well as local media companies Singapore Press Holdings and Mediacorp. Various academics from Singapore and overseas had also appeared before the committee to share their views on what can be done to deal with online falsehoods.

On Tuesday, Prof Farid also pointed to how technology companies had "dragged their feet" when it comes to the problem of child pornography shared online.

In his written submission, he noted that between 2003 and 2008, the United States' Attorney-General called on major US-based tech companies to discuss the spread of child pornography online. But the Technology Coalition that was formed "did not develop or deploy an effective solution" in this period, he added.

It was only in 2008 that things got moving, with the development and deployment of software called PhotoDNA, which is now in worldwide use and has been effective in removing tens of millions of images of child exploitation from online platforms, he wrote.

Prof Farid had a hand in developing this solution, after being approached by Microsoft and the US National Center for Missing and Exploited Children.
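PhotoDNA's algorithm is proprietary, so the sketch below is not it. It uses a simple average hash, a common stand-in, to illustrate the underlying idea of robust hashing: reduce an image to a compact bit pattern and compare patterns by the number of differing bits, so that lightly modified copies of a known image still match.

    # A minimal sketch of the robust-hashing idea behind tools like PhotoDNA.
    # This is an illustrative average hash, not PhotoDNA's actual algorithm.

    def average_hash(pixels: list) -> int:
        """Hash a grayscale image (2-D list of 0-255 values) into a bit pattern."""
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        bits = 0
        for p in flat:
            bits = (bits << 1) | (1 if p >= mean else 0)
        return bits

    def hamming_distance(a: int, b: int) -> int:
        """Count the bits on which two hashes differ."""
        return bin(a ^ b).count("1")

    # Hypothetical tiny 4x4 "images": an original and a lightly altered copy.
    original = [[10, 200, 30, 220], [15, 210, 25, 230],
                [12, 205, 35, 225], [18, 215, 28, 235]]
    altered  = [[12, 198, 33, 218], [14, 212, 27, 228],
                [11, 203, 34, 227], [19, 213, 29, 233]]

    h1, h2 = average_hash(original), average_hash(altered)
    # A small distance threshold lets modified copies of a known image match
    # its database entry without matching unrelated images.
    print("match" if hamming_distance(h1, h2) <= 2 else "no match")

The key design choice is the matching threshold: the looser it is, the more modifications a copy can survive, but the more unrelated images risk matching, which is exactly the error-rate trade-off Prof Farid describes below.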

Likewise, technology companies have been slow in addressing the online recruitment and radicalisation of extremists across the globe, he noted.

Prof Farid called on technology companies to do more to rein in abuses.

"They put in just enough to stave off the regulatory issues", he argued.

The technology to detect and prevent inappropriate content from spreading exists, he said, noting the success of companies in going after unauthorised uploads of copyrighted content online.

While technology can help to detect and prevent inappropriate content from spreading online, this will likely still have to be paired with manual review by people, said Prof Farid.

This is because, at current accuracy rates, allowing programmes to run fully autonomously would be "prohibitive", as they could wrongly flag legitimate content as illegal, he said.

Currently, a small error rate, in the order of one in 100 billion, has to be accepted for illegal photos and videos, so that material that has been modified, for instance resized, can still be detected, he added.
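Some back-of-envelope arithmetic shows why such a rate still leaves matches manageable for human reviewers. The daily upload volume below is an assumption for illustration, not a figure from the hearing.

    # Back-of-envelope arithmetic (assumed volumes) for why a tiny error rate
    # still matters at platform scale, and why matches go to human review.
    error_rate = 1 / 100_000_000_000   # the "one in 100 billion" rate cited
    uploads_per_day = 2_000_000_000    # assumption: daily photo/video uploads

    expected_false_matches_per_day = uploads_per_day * error_rate
    print(f"Expected false matches per day: {expected_false_matches_per_day:.2f}")
    # About 0.02 a day: rare enough for reviewers to vet every match, but a
    # looser matching threshold (needed to catch heavier modifications) would
    # raise this quickly, which is why fully autonomous removal is, in Prof
    # Farid's word, "prohibitive".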

Outlining the difficulty of targeting falsehoods with a similar method, he also said that countries will need to come up with reasonable, transparent definitions of such content.

Modern artificial intelligence systems also need a "phenomenal amount of data" to be trained, and adversaries, recognising this, will change their tactics accordingly, meaning the challenge of combating falsehoods will be an ongoing one.
