Deepfaking it: America’s 2024 election collides with AI boom


SAN FRANCISCO - “I actually like Ron DeSantis a lot,” Mrs Hillary Clinton reveals in a surprise online endorsement video. “He’s just the kind of guy this country needs and I really mean that.”

US President Joe Biden unleashes a cruel rant at a transgender person. “You will never be a real woman,” he snarls.

Welcome to America’s 2024 presidential race, where reality is up for grabs.

The Clinton and Biden deepfakes – realistic yet fabricated videos created by artificial intelligence (AI) algorithms trained on copious online footage – are among thousands surfacing on social media, blurring fact and fiction in the polarised world of US politics.

While such synthetic media has been around for several years, it has been turbocharged over the past year by a slew of new “generative AI” tools such as Midjourney that make it cheap and easy to create convincing deepfakes, according to interviews with about two dozen specialists in fields including AI, online misinformation and political activism.

“It’s going to be very difficult for voters to distinguish the real from the fake. And you could just imagine how either Trump supporters or Biden supporters could use this technology to make the opponent look bad,” said Mr Darrell West, senior fellow at the Brookings Institution’s Centre for Technology Innovation, referring to former US president Donald Trump, who will vie with Mr DeSantis and others for the Republican nomination to face Mr Biden.

“There could be things that drop right before the election that nobody has a chance to take down,” said Mr West.

Tools that can generate deepfakes are being released with few or imperfect guard rails to prevent harmful misinformation as the tech sector engages in an AI arms race, said Mr Aza Raskin, co-founder of the Centre for Humane Technology, a non-profit that studies technology’s impact on society.

While major social media platforms like Facebook, Twitter and YouTube have made efforts to prohibit and remove deepfakes, their effectiveness at policing such content varies.

Deepfake Pence, not Trump

Three times as many video deepfakes of all kinds, and eight times as many voice deepfakes, have been posted online in 2023 compared with the same period in 2022, according to DeepMedia, a company working on tools to detect synthetic media.

About 500,000 video and voice deepfakes will be shared on social media sites globally in 2023, DeepMedia estimates. Cloning a voice used to cost US$10,000 (S$13,500) in server and AI-training costs up until late 2022, but start-ups now offer it for a few dollars, it said.

No one is certain where the generative AI road leads or how to effectively guard against its power for mass misinformation, said the people interviewed.

Industry leader OpenAI, which has changed the game in recent months with its release of ChatGPT and the updated model GPT-4, is itself grappling with the issue. Chief executive Sam Altman told the US Congress in May that election integrity was a “significant area of concern” and urged rapid regulation of the sector.

Unlike some smaller start-ups, OpenAI has taken steps to restrict use of its products in politics, according to a Reuters analysis of the terms of use of half a dozen leading companies offering generative AI services. The guard rails have gaps, though.

For example, it says it prohibits its image generator Dall-E from creating images of public figures – and indeed, when Reuters tried to create images of Mr Trump and Mr Biden, the request was blocked and a message appeared saying it “may not follow our content policy”.

Yet Reuters was able to create images of a dozen other US politicians, including former vice-president Mike Pence, who is weighing a White House run for 2024.

OpenAI restricts any “scaled” use of its products for political purposes. That bans the use of its AI to send out mass personalised e-mails to constituents, for example.

The firm, which is backed by Microsoft, explained its political policies in an interview but did not respond to requests for comment on enforcement gaps, such as its incomplete blocking of images of politicians.

Several smaller start-ups have no explicit restrictions on political content.

Midjourney, which launched in 2022, is the leading player in AI-generated images, with 16 million users on its Discord server. The app, with plans ranging from free to US$60 a month depending on factors such as picture quantity and speed, is a favourite of AI designers and artists as it can generate hyper-realistic images of celebrities and politicians, according to four AI researchers interviewed.

Midjourney did not respond to a request for comment for this article. During an online chat on Discord last week, chief executive David Holz said the firm would most likely make changes ahead of the election to combat misinformation.

It wants to cooperate on an industry solution to enable traceability of AI-generated images with a digital equivalent of watermarking and would consider blocking images of political candidates, he added.

Republican AI-generated ad

Some political players are seeking to use generative AI to soup up campaigns.

So far, the only high-profile AI-generated political advertisement was one published by the Republican National Committee (RNC) in April. The 30-second ad, which the RNC disclosed as being entirely generated by AI, used fake images to suggest a cataclysmic scenario should Mr Biden be re-elected, with China invading Taiwan and San Francisco shut down by crime.

The RNC did not respond to requests for comment on the ad or its use of AI. The Democratic National Committee declined to comment on its use of the technology.

Reuters polled all the Republican presidential campaigns on their use of AI. Most did not reply, although Ms Nikki Haley’s team said they were not using the technology, and candidate Perry Johnson’s campaign said it was using AI for “copy generation and iteration”, without giving further details.

The potential for generative AI to produce campaign e-mails, posts and ads is irresistible for some activists who feel the low-cost tech could level the playing field in elections.

Even deep in rural Hillsdale, Michigan, machine intelligence is on the march.

Mr Jon Smith, Republican chair for Michigan’s 5th congressional district, is holding several educational meetings so his allies can learn to use AI for social media and ad generation. “AI helps us play against the big cats. I see the biggest upswing in the local races. Someone who is 65 years old, a farmer and county commissioner, he could easily be primaried by a younger cat using the technology,” he said.

Political consultancies are also seeking to harness AI, further muddying the line between real and unreal.

Numinar Analytics, a political data firm that focuses on Republican clients, has begun experimenting with AI content generation for audio and images, and voice generation to potentially create personalised messaging in a candidate’s voice, said founder Will Long.

Democratic polling and strategy group Honan Strategy Group is, meanwhile, trying to develop an AI survey bot. It hopes to roll out a female bot in time for the 2023 municipal elections, chief executive Bradley Honan said, citing research that both men and women are more likely to speak to a female interviewer. REUTERS
