Artists use tech weapons to thwart AI copycats

While the proliferation of artificial intelligence has undoubtedly been a boon, the use of artists' work to train models and replicate their styles has been an ethical minefield.

Artists under siege by artificial intelligence (AI) that studies their work, then replicates their styles, have teamed up with university researchers to stymie such copycat activity.

American illustrator Paloma McClain went into defence mode after learning that several AI models had been “trained” using her art, with no credit or compensation sent her way.

“It bothered me,” she said, adding: “I believe truly meaningful technological advancement is done ethically and elevates all people instead of functioning at the expense of others.”

The artist turned to Glaze, free software created by researchers at the University of Chicago, to protect her artwork.

The software essentially out-thinks AI models at their own training game, tweaking pixels in ways indiscernible to human viewers but which make a digitised piece of art appear dramatically different to AI.
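Glaze's actual algorithm is not spelled out here, but the constraint it exploits, that pixel values can shift slightly without a human noticing, can be illustrated with a toy sketch. The random noise below would not by itself fool a model; Glaze computes a carefully optimised perturbation instead. Every file name and value here is illustrative only (Python with NumPy and Pillow assumed).

```python
# Illustrative sketch only: a crude, hypothetical stand-in for the idea of
# imperceptible pixel perturbation. Glaze's real "cloak" is optimised
# against AI feature extractors; plain random noise is not.
import numpy as np
from PIL import Image

def add_imperceptible_noise(path_in: str, path_out: str, epsilon: int = 3) -> None:
    """Shift each pixel channel by at most +/-epsilon (out of 255),
    a change a human viewer is unlikely to notice."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
    cloaked = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(path_out)

# Hypothetical file names for illustration.
add_imperceptible_noise("artwork.png", "artwork_cloaked.png")
```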

Computer science professor Ben Zhao from the Glaze team said: “We’re basically providing technical tools to help protect human creators against invasive and abusive AI models.”

Created in just four months, Glaze was spun off from technology used to disrupt facial recognition systems.

“We were working at super-fast speed because we knew the problem was serious,” he said of rushing to defend artists from software imitators.

Generative AI giants have agreements to use data for training in some cases, but the majority of digital images, audio and text used to shape the way AI thinks has been scraped from the Internet without explicit consent.

Since its March release, Glaze has been downloaded more than 1.6 million times, said Professor Zhao. His team is working on an enhancement called Nightshade that notches up defences by confusing AI. For example, an image of a dog would be interpreted as a cat.
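Nightshade's technique is not detailed in this article, so the following is only a loose, hypothetical sketch of what "poisoning" means for a training set: image-caption pairs whose association has been corrupted. Nightshade itself alters the image pixels rather than the caption, but the effect on a model trained on enough such pairs is the same; the concepts "dog" and "cat" get crossed.

```python
# Toy sketch of the data-poisoning concept, not Nightshade's algorithm.
# If enough scraped training pairs teach a wrong association, a model
# trained on them learns that "dog" images look like cats.
training_data = [
    ("dog_001.png", "a photo of a dog"),   # hypothetical scraped pairs
    ("dog_002.png", "a photo of a dog"),
]

def poison(dataset):
    # Swap the concept the caption points at, so each scraped pair
    # miseducates any model trained on it.
    return [(img, caption.replace("dog", "cat")) for img, caption in dataset]

print(poison(training_data))
# [('dog_001.png', 'a photo of a cat'), ('dog_002.png', 'a photo of a cat')]
```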

Referring to the Internet, Ms McClain said: “I believe Nightshade will have a noticeable effect if enough artists use it and put enough poisoned images into the wild. According to its research, it wouldn’t take as many poisoned images as one might think.”

Prof Zhao said his team has been approached by several companies that want to use Nightshade. “The goal is for people to be able to protect their content, whether it’s individual artists or companies with a lot of intellectual property.”

Meanwhile, a separate start-up, Spawning, has developed the Kudurru software, which detects attempts to harvest large numbers of images from an online venue.

An artist can then block access or send images that do not match what is being requested, tainting the pool of data being used to teach AI what is what, said Spawning co-founder Jordan Meyer.
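Neither Kudurru's detection logic nor its decoy mechanism is described beyond this, so here is a hypothetical sketch of the idea as stated: count requests per client, and once a client looks like a bulk harvester, either refuse it or return images that do not match what was asked for. The threshold, file names and Flask framework are all assumptions for illustration.

```python
# Hypothetical sketch of the Kudurru idea as described above: spot clients
# pulling unusually many images, then block them or serve a decoy that
# taints the scraped dataset. Kudurru's real detection is not public here.
from collections import Counter
from flask import Flask, abort, request, send_file

app = Flask(__name__)
hits = Counter()          # requests seen per client IP
SCRAPE_THRESHOLD = 100    # assumed cutoff for "bulk harvesting"
SERVE_DECOY = True        # feed mismatched images instead of blocking

@app.route("/images/<name>")
def serve_image(name):
    ip = request.remote_addr
    hits[ip] += 1
    if hits[ip] > SCRAPE_THRESHOLD:
        if SERVE_DECOY:
            # Send an image that does not match what was requested.
            return send_file("decoy.png", mimetype="image/png")
        abort(403)  # or simply refuse access
    return send_file(f"gallery/{name}", mimetype="image/png")

if __name__ == "__main__":
    app.run()
```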

More than 1,000 websites have already been integrated into the Kudurru network.

Spawning has also launched haveibeentrained.com, a website that features an online tool for finding out whether digitised works have been fed into an AI model and allows artists to opt out of such use in the future.

As defences ramp up for images, researchers at Washington University in Missouri have developed AntiFake software to thwart AI copying voices.

AntiFake adds noises to digital recordings of people speaking that are inaudible to humans but make it “impossible to synthesise a human voice”, said Mr Zhiyuan Yu, the PhD student behind the project.
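As with the image tools, AntiFake's real method is an optimised adversarial perturbation rather than plain noise; the sketch below illustrates only the "inaudible to people" constraint it works within. The `soundfile` package, the noise level and the file names are assumptions, not details from the project.

```python
# Crude illustration of the general idea only. AntiFake computes an
# adversarial perturbation; plain low-level noise, as below, merely shows
# how small an audio change can be while remaining inaudible.
import numpy as np
import soundfile as sf  # assumed third-party package

def add_inaudible_noise(path_in: str, path_out: str, level: float = 1e-3) -> None:
    """Mix in noise at roughly -60 dB relative to full scale, far below
    what a listener would notice in recorded speech."""
    audio, sample_rate = sf.read(path_in)
    noise = np.random.normal(0.0, level, size=audio.shape)
    sf.write(path_out, np.clip(audio + noise, -1.0, 1.0), sample_rate)

# Hypothetical file names for illustration.
add_inaudible_noise("voice.wav", "voice_protected.wav")
```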

The program aims to go beyond just stopping the unauthorised training of AI to preventing the creation of “deepfakes” – bogus soundtracks or videos of celebrities, politicians, relatives or others showing them doing or saying something they did not.

A popular podcast recently reached out to the AntiFake team for help to stop its productions from being hijacked, according to Mr Yu.

The freely available software has so far been used for recordings of people speaking, but could also be applied to songs.

Mr Meyer contended: “The best solution would be a world in which all data used for AI is subject to consent and payment.

“We hope to push developers in this direction.” AFP
