Child abusers run rampant as tech firms look the other way

Tools to find, delete online abuse imagery not being well utilised

Sisters F and E (identities withheld) are survivors of child sexual abuse in the US whose photos and videos of their anguish have been preserved on online platforms. PHOTO: NYTIMES

NEW YORK • The two sisters live in fear of being recognised. One grew out her bangs and took to wearing hoodies. The other dyed her hair black. Both avoid looking the way they did as children.

Ten years ago, their father posted explicit photos and videos of them on the Internet; they were just seven and 11 at the time. Many of the images captured violent assaults, including him and another man drugging and raping the seven-year-old.

The men are now in prison, but in a cruel consequence of the digital era, their crimes are finding new audiences. The two sisters are among the first generation of child sexual abuse victims whose anguish has been preserved on the Internet.

This year alone, photos and videos of the sisters were found in more than 130 child sexual abuse investigations.

The digital trail of abuse haunts the sisters relentlessly, they said, as does the fear of a predator recognising them from the images.

"That's in my head all the time - knowing those pictures are out there," said E, the older sister, who is being identified only by her first initial to protect her privacy.

"Because of the way the Internet works, that's not something that's going to go away."

Images of horrific experiences like theirs are being recirculated across the Internet because search engines, social networks and cloud storage are rife with opportunities for criminals to exploit.

The scope of the problem is only starting to be understood because the technology industry has been more diligent in recent years in identifying online child sexual abuse material, with a record 45 million photos and videos flagged last year. But the same industry has consistently failed to take aggressive steps to shut it down, an investigation by The New York Times found.

The companies have the technical tools to stop the recirculation of abuse imagery by matching newly detected images against databases of the material. Yet the tech industry does not take full advantage of the tools.

Amazon, whose cloud storage services handle millions of uploads and downloads every second, does not even look for the imagery. Apple does not scan its cloud storage and encrypts its messaging app, making detection virtually impossible. Dropbox, Google and Microsoft's consumer products scan for illegal images, but only when someone shares them, not when they are uploaded. Other companies, including Snapchat and Yahoo, look for photos but not videos.

Facebook thoroughly scans its platforms, accounting for more than 90 per cent of the imagery flagged by tech companies last year, but the company is not using all available databases to detect the material. And Facebook has announced that the main source of the imagery, Facebook Messenger, will eventually be encrypted, vastly limiting detection.

Tech companies are far more likely to review photos, videos and other files on their platforms for facial recognition, malware detection and copyright enforcement. But some businesses said looking for abuse content is different because it can raise significant privacy concerns.

The main method for detecting the illegal imagery was created in 2009 by Microsoft and Mr Hany Farid, now a professor at the University of California, Berkeley. The software, known as PhotoDNA, enables computers to recognise photos, even altered ones, and compare them against databases of known illegal images.
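
PhotoDNA itself is proprietary, but the underlying idea of fingerprint matching can be illustrated with an open-source perceptual hash. The Python sketch below is a minimal illustration only, not PhotoDNA: it assumes the third-party imagehash library, a placeholder set of known-image fingerprints, and an illustrative distance threshold.

import imagehash
from PIL import Image

# Hypothetical fingerprint database: in practice these hex strings would
# be supplied by a clearinghouse; the entries here are placeholders.
KNOWN_HASHES = {imagehash.hex_to_hash(h) for h in [
    "d1c4f0e8b2a39587",
    "83b1e6d0c2a47f95",
]}

# Illustrative threshold: perceptual hashes of slightly altered copies of
# the same image differ in only a few bits, so matching uses Hamming
# distance rather than exact equality.
MAX_DISTANCE = 6

def matches_known_image(path: str) -> bool:
    """Return True if an image's fingerprint is close to a known one."""
    fingerprint = imagehash.phash(Image.open(path))
    return any(fingerprint - known <= MAX_DISTANCE for known in KNOWN_HASHES)

As the sketch suggests, the matching step itself is cheap; the hard part is that it can flag only what some organisation has already fingerprinted.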

But this technique is limited because no single authoritative list of known illegal material exists, allowing countless images to slip through the cracks. The most commonly used database is kept by a federally designated clearinghouse, which compiles digital fingerprints of images reported by US tech companies. Other organisations around the world maintain their own.

Even if there were a single list, however, it would not solve the problems of newly created imagery flooding the Internet or the surge in livestreamed abuse.

The Times created a computer program that scoured Bing and other search engines. The automated script repeatedly found images - dozens in all - that Microsoft's own PhotoDNA service flagged as known illicit content. Bing even recommended other search terms when a known child abuse website was entered into the search box. Similar searches by the Times on DuckDuckGo and Yahoo, which use Bing results, also returned known abuse imagery.

Paedophiles have used Bing to find illegal imagery and have also deployed the site's "reverse image search" feature, which retrieves pictures based on a sample photo.

The problem is not confined to search engines, however. Paedophiles often leverage multiple technologies and platforms, meeting on chat apps and sharing images on cloud storage.

In 2017, the tech industry approved a process for sharing video fingerprints to make it easier for all companies to detect illicit material. But the plan has gone nowhere.

The lack of action across the industry has allowed untold videos to remain on the Internet. Of the clearinghouse's 1.6 million fingerprints, fewer than 3 per cent are for videos.

None of the major tech companies is able to detect, much less stop, livestreamed abuse through automated imagery analysis.

And while Facebook, Google and Microsoft have said they are developing technologies that will find new photos and videos on their platforms, it could take years to reach the precision of fingerprint-based detection of known imagery.

NYTIMES


A version of this article appeared in the print edition of The Straits Times on November 11, 2019, with the headline Child abusers run rampant as tech firms look the other way.