OpenAI’s Q* is alarming for a different reason

When AI systems start solving problems, the temptation to give them more responsibility is predictable, but this warrants greater caution.

The hype around OpenAI's Q* has boosted excitement about the company’s engineering prowess, just as it’s steadying itself from a failed board coup.

PHOTO: REUTERS

Parmy Olson

When news stories emerged last week that OpenAI had been working on a new AI model called Q* (pronounced “Q star”), some suggested this was a major step towards powerful, human-like artificial intelligence that could one day go rogue. What’s more certain: The hype around Q* has boosted excitement about the company’s engineering prowess, just as it’s steadying itself from a failed board coup.

Peaks of AI excitement about milestones have taken the public for a ride plenty of times before. The real warning we should take from Q* is the direction in which these systems are progressing. As they get better at reasoning, it will become more tempting to give such tools greater responsibilities. More than any concerns about AI annihilation, that alone should give us pause.
