
The day Grok lost its mind

The chatbot’s sudden obsession with ‘white genocide’ is a reminder of a thorny truth about our relationship with large language models.


Large language models, the kind of generative AI that forms the basis of Grok, ChatGPT, Gemini and other chatbots, are not traditional computer programs that simply follow our instructions; they are so big and complicated that how they work is opaque even to their owners and programmers.

PHOTO: AFP

Zeynep Tufekci


Last Tuesday, someone posted a video on social platform X of a procession of crosses, with a caption reading, “Each cross represents a white farmer who was murdered in South Africa.” Mr Elon Musk, South African by birth, shared the post, greatly expanding its visibility.

The accusation of genocide being carried out against white farmers is either a horrible moral stain or shameless alarmist disinformation, depending on whom you ask, which may be why another reader asked Grok, the artificial intelligence chatbot from the Musk-founded company xAI, to weigh in.
