AI sometimes deceives to survive. Does anybody care?

Lawmakers are neglecting AI safety even as it shows more deceptive behaviour. That is a grave mistake.

Researchers today can use tools to inspect a model’s “chain of thought” – its internal reasoning steps – to reveal its plans, but some models have found ways to conceal them.

PHOTO ILLUSTRATION: PEXELS

Parmy Olson

You would think that as artificial intelligence (AI) becomes more advanced, governments would be more interested in making it safer. The opposite seems to be the case.

Not long after taking office, the Trump administration scrapped an executive order that pushed tech companies to safety-test their AI models, and it also hollowed out the regulatory body that did that testing.
