AI chatbots can influence voters, studies show


A partisan AI chatbot can influence voters’ political views with evidence-backed arguments – whether true or not.

PHOTO: ADOBE STOCK

  • AI chatbots can shift voters’ political views, with evidence-backed arguments – true or not – proving especially persuasive.
  • In studies using models such as GPT-4o, brief chats moved poll responses by up to 10 points, and some of the persuasive effect lasted a month.
  • Politeness and evidence were the chatbots’ key tactics, but their inaccuracies, especially in right-leaning content, raise concerns about AI’s influence on voters.


PARIS - A brief conversation with a partisan AI chatbot can influence voters’ political views, studies published on Dec 4 found, with evidence-backed arguments – true or not – proving particularly persuasive.

In experiments run ahead of the 2024 US presidential election, generative artificial intelligence models such as OpenAI’s GPT-4o and Chinese rival DeepSeek shifted supporters of Republican Donald Trump towards his Democratic opponent Kamala Harris by almost four points on a 100-point scale.

Ahead of 2025 elections in Canada and Poland, meanwhile, opposition supporters had their views shifted by up to 10 points after chatting with a bot programmed to persuade.

Those effects are enough to sway a significant proportion of voting decisions, said Cornell University professor David Rand, a senior author of the papers, which were published in the journals Science and Nature.

“When we asked how people would vote if the election were held that day... roughly one in 10 respondents in Canada and Poland switched,” he told AFP by email.

“About one in 25 in the US did the same,” he added, while noting that “voting intentions aren’t the same as actual votes” at the ballot box.

However, follow-ups with participants found that around half the persuasive effect remained after one month in Britain, while one-third remained in the US, Prof Rand said.

“In social science, any evidence of effects persisting a month later is comparatively rare,” he pointed out.

Being polite, giving proof

The studies found that the most common tactic used by chatbots to persuade was “being polite and providing evidence”, and that bots instructed not to use facts were far less persuasive.

Such results “go against the dominant narrative in political psychology, which holds that ‘motivated reasoning’ makes people ignore facts that conflict with their identities or partisan commitments”, Prof Rand said.

But the facts and evidence cited by the chatbots were not necessarily truthful.

While most of their fact-checked claims were accurate, “AIs advocating for right-leaning candidates made more inaccurate claims”, Prof Rand said.

This was “likely because the models mirror patterns in their training data, and numerous studies have found that right-leaning content on the internet tends to be more inaccurate”, he added.

The authors recruited thousands of participants for the experiments via online gig-work platforms, and warned them in advance that they would be speaking with an AI.

Prof Rand said that further work could investigate the “upper limit” of just how far AI can change people’s minds – and how newer models released since the fieldwork, such as GPT-5 or Google’s Gemini 3, would perform. AFP
