NEW YORK • In recent years, companies have been prowling the Web for public photos associated with people's names that they can use to build enormous databases of faces and improve their facial recognition systems, adding to a growing sense that personal privacy is being lost, bit by digital bit.
A start-up called Clearview AI, for example, scraped billions of online photos to build a tool for police that could lead them from a face to a Facebook account, revealing a person's identity.
Now researchers are trying to foil those systems. A team of computer engineers at the University of Chicago has developed a tool that disguises photos with pixel-level changes that confuse facial recognition systems.
Named Fawkes in honour of the Guy Fawkes mask favoured by protesters worldwide, the software was made available to developers on the researchers' website last month.
Since being featured on Hacker News, it has been downloaded more than 50,000 times. The researchers are working on a free app version for non-coders, which they hope to make available soon.
The software is not intended to be just a one-off tool for privacy-loving individuals. If deployed across millions of images, it would be a broadside against facial recognition systems, poisoning the accuracy of the data sets they gather from the Web.
"Our goal is to make Clearview go away," said Professor Ben Zhao of the University of Chicago.
Fawkes alters an image - or "cloaks" it, in the researchers' parlance - by subtly changing some of the features that facial recognition systems depend on when they construct a person's face print.
In a research paper, reported earlier by OneZero, the team describes "cloaking" photos of actress Gwyneth Paltrow using actor Patrick Dempsey's face, so that a system learning what Paltrow looks like based on those photos would start associating her with some of the features of Dempsey's face. The changes, usually subtle and not perceptible to the naked eye, would prevent the system from recognising Paltrow when presented with a real, uncloaked photo of her.
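The idea described above can be sketched as a small optimisation problem: nudge an image's pixels so its feature embedding drifts toward a different face's embedding, while keeping every change within a tight visibility budget. The toy below is only an illustration of that idea, not the Fawkes implementation; it substitutes a random linear map for the real deep feature extractor, and the `cloak` routine, pixel budget and learning rate are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face-embedding network: a fixed random linear map
# from a 64-pixel "image" to an 8-dimensional feature vector.
W = rng.normal(size=(8, 64))

def embed(x):
    return W @ x

def cloak(source, target, budget=0.05, steps=500, lr=0.001):
    """Nudge `source` so its embedding moves toward `target`'s embedding,
    while clipping every per-pixel change to stay within `budget`."""
    delta = np.zeros_like(source)
    t_feat = embed(target)
    for _ in range(steps):
        # Gradient of ||embed(source + delta) - t_feat||^2 w.r.t. delta
        grad = 2 * W.T @ (embed(source + delta) - t_feat)
        delta -= lr * grad
        delta = np.clip(delta, -budget, budget)  # visibility bound
    return source + delta

source = rng.uniform(size=64)   # stand-in for the photo being protected
target = rng.uniform(size=64)   # stand-in for the unrelated target face
cloaked = cloak(source, target)

before = np.linalg.norm(embed(source) - embed(target))
after = np.linalg.norm(embed(cloaked) - embed(target))
print(after < before)  # the cloaked image's embedding sits closer to the target
```

A system that builds its template of the person from cloaked photos would therefore learn features pulled toward the target face, even though no pixel moved by more than the budget.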
In testing, the researchers were able to fool facial recognition systems from Amazon, Microsoft and Chinese tech company Megvii.
To test the tool, a New York Times reporter asked the team to cloak some images of her family and herself. She then uploaded the originals and the cloaked images to Facebook to see if they fooled the social network's facial recognition system.
It worked: Facebook tagged her in the original photo but did not recognise her in the cloaked version. However, the changes to the photos were noticeable to the naked eye.
The researchers had a few explanations for this. One is that the software is designed to match you with the face template of someone who looks as unlike you as possible, pulling from a database of celebrity faces.
That usually ends up being a person of the opposite sex, which leads to obvious problems.
"Women get moustaches, and guys get extra eyelashes or eye shadow," Prof Zhao said.
He is enthusiastic about what he calls "privacy armour" and previously helped design a bracelet that stops smart speakers from overhearing conversations.
The team says it plans to tweak the software so that it will no longer subtly change the sex of users.
Fawkes is not intended to stop a facial recognition system like Facebook's from recognising someone in a single photo. Instead, it aims to corrupt facial recognition systems more broadly, through an algorithmic attack known as data poisoning.
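Data poisoning here means that a system trained on cloaked photos learns the wrong template for a person, so a genuine, uncloaked photo no longer matches it. The sketch below is a toy of that effect only, assuming a random linear map as a stand-in for a face-embedding network and an exaggerated shift toward a decoy face so the outcome is visible in a few lines.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a face-embedding network: a fixed random linear map.
W = rng.normal(size=(8, 64))

def embed(x):
    return W @ x

def template(photos):
    # A toy recogniser "learns" a person by averaging their photo embeddings.
    return np.mean([embed(p) for p in photos], axis=0)

# Five clean "photos" of the same person (64-pixel toy images).
clean = [rng.uniform(size=64) for _ in range(5)]

# A decoy face, deliberately far away in this toy pixel space.
decoy = rng.uniform(size=64) + 2.0

# Poisoned scrape: every public photo has been shifted toward the decoy.
# (Real cloaks are subtle; the 70% blend here is exaggerated for clarity.)
poisoned = [0.3 * p + 0.7 * decoy for p in clean]

probe = clean[0]  # a real, uncloaked photo presented at recognition time
dist_clean = np.linalg.norm(embed(probe) - template(clean))
dist_poisoned = np.linalg.norm(embed(probe) - template(poisoned))
print(dist_clean < dist_poisoned)  # the poisoned model matches the real face worse
```

The more of a person's online photos are cloaked before scraping, the further the learned template drifts from their real face, which is why the researchers say the tool works best if adopted widely.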
The researchers said that, ideally, people would start cloaking all the images they uploaded.
But Clearview's chief executive, Mr Hoan Ton-That, ran a version of The New York Times' Facebook experiment on the Clearview app and said the technology did not interfere with his system.
In fact, he said his company could use images cloaked by Fawkes to improve its ability to make sense of altered images.
"In practice, it's almost certainly too late to perfect a technology like Fawkes and deploy it at scale," said Mr Ton-That.