Disinformation now has a new channel – AI chatbots
By seeding the internet through a process of ‘LLM grooming’, governments could skew the responses of chatbots with their views delivered in subtler ways.
As generative artificial intelligence (AI) technologies rapidly evolve, most regular users assume that AI chatbots, and the programs that crawl the web indexing and collecting data on their behalf, differ only in efficiency: some are better than others at answering queries, and some “hallucinate” less than others.
Yet it is now becoming clear that AI bots are vulnerable not only to the inherent biases of those who wrote their programs, but also to political bias and manipulation by governments. What we get when we search online is therefore less an impartial summary of the available information than a skilfully curated narrative that may well skew reality, whether intentionally or not.

