
ChatGPT-5 hasn’t fully fixed its most concerning problem

The new version isn’t as cold and professional as you might think. It’s still possible to get attached.

A spokesman for OpenAI said the company was building tools that could detect if someone was experiencing mental distress, so ChatGPT could “respond in ways that are safe, helpful and supportive.”

Parmy Olson

Mr Sam Altman has a good problem. With 700 million people using ChatGPT every week – a number that could hit a billion before the year is out – he faced a backlash when he abruptly changed the product last week.

OpenAI’s innovator’s dilemma, one that has beset the likes of Alphabet’s Google and Apple, is that usage is now so entrenched that any change to the product must be made with the utmost care. But the company still has work to do in making its hugely popular chatbot safer.
