Why AI hallucinations can be a good thing
Generative artificial intelligence should be welcomed as a giant mash-up machine to enhance creativity.
John Thornhill
The tendency of generative artificial intelligence (AI) systems to “hallucinate” – or simply make stuff up – can be zany and sometimes scary, as one New Zealand supermarket chain found to its cost. After Pak’nSave released its Savey Meal-bot in 2023 to offer thrifty shoppers recipe suggestions for leftover ingredients, the chatbot recommended that one customer make an “aromatic water mix” that would have produced chlorine gas.
Lawyers have also learnt to be wary of the output of generative AI models, given their ability to invent wholly fictitious cases. A recent Stanford University study of the responses generated by three state-of-the-art generative AI models to 200,000 legal queries found hallucinations were “pervasive and disturbing”. When asked specific, verifiable questions about random federal court cases, OpenAI’s ChatGPT 3.5 hallucinated 69 per cent of the time, while Meta’s Llama 2 model hallucinated 88 per cent of the time.

