Feature

Restore privacy with visual distortion

Created by a team from the National University of Singapore, the program is said to counter the facial-recognition algorithms of big tech firms

New research from the National University of Singapore (NUS) promises to restore privacy to individuals by making their online images unrecognisable even to the most advanced facial-recognition technologies.

The research, ongoing for more than six months, targets the facial-recognition algorithms of big tech firms such as Facebook and Google.

For instance, social media platforms Facebook and Instagram can automatically tag a user in photos. Google Photos can identify family members or friends in pictures and group photos based on themes.

The NUS technique, which has not been named, stops such artificial intelligence (AI) software from recognising specific facial attributes, such as gender and race, by introducing subtle visual distortions that do not affect image aesthetics discernible by human eyes.

Says Professor Mohan Kankanhalli, dean of the School of Computing at NUS, who led the research: "It's too late to stop people from posting photos on social media. However, the reliance on AI is something we can target."

The program aims to overcome the limitations of current visual distortion technologies, which ruin the aesthetics of photographs as the images need to be heavily altered to fool the machines.

The team at NUS developed a "human sensitivity map" that quantifies how humans react to visual distortion in different parts of an image across a variety of scenes.

The first phase of the research, which involved 234 participants and 860 images, determined that factors like illumination, texture, object sentiment and semantics are crucial to image perception.

Using this map, the team then fine-tuned its visual distortion AI technique. The algorithm inserts distortion into areas of low human sensitivity, which could be clothing in one image or skin tone in another, depending on how the objects in the image interact.
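The idea of confining perturbations to low-sensitivity regions can be illustrated with a minimal sketch. This is not the NUS code; the function name, the `strength` and `threshold` parameters, and the synthetic sensitivity map are all assumptions for illustration, standing in for the learned "human sensitivity map" described above.

```python
import numpy as np

def apply_masked_distortion(image, sensitivity_map, strength=8.0,
                            threshold=0.5, seed=0):
    """Add random perturbation only where human sensitivity is low.

    image: H x W x 3 float array with values in [0, 255].
    sensitivity_map: H x W array in [0, 1]; low values mark regions where
    humans are assumed less likely to notice distortion (a hypothetical
    stand-in for the NUS human sensitivity map).
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-strength, strength, size=image.shape)
    # Perturb only the pixels whose sensitivity falls below the threshold,
    # leaving visually sensitive regions untouched.
    mask = (sensitivity_map < threshold)[..., np.newaxis]
    distorted = image + noise * mask
    # Keep the result a valid image.
    return np.clip(distorted, 0.0, 255.0)
```

In practice the perturbation would be chosen adversarially against a recognition model rather than drawn at random; the sketch only shows how a sensitivity map gates where distortion is allowed to land.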

The source code of the algorithm is available on the NUS website for developers to incorporate into their apps.

Work is still ongoing to allow the NUS algorithm to be applied to all social media platforms, in what Prof Kankanhalli says is the "holy grail" of privacy protection.

The algorithms used by big tech firms are not disclosed, making it hard to target the AI behind facial recognition across platforms.

The NUS team overcomes this by working on the common attributes of such systems and making educated guesses on how the systems identify faces.

"Have we solved privacy problems with AI? No, but this is the first step," says Prof Kankanhalli.

Correction note: An earlier version of this story said the NUS-developed program is dubbed ncript. But NUS has since clarified the program has no name.  

A version of this article appeared in the print edition of The Straits Times on July 01, 2020, with the headline 'Restore privacy with visual distortion'.