Why it’s hard for humans to have the final say over AI

One danger is the inclination of humans to trust machines even when they are warned not to.

Human control is at the centre of the row between Anthropic and the US government over the use of AI in weapons systems.

PHOTO: REUTERS

Sarah O’Connor

The riskier the setting in which powerful AI systems are deployed, the more we seem to reach for an intuitive solution: that humans should always be the ones to make the final decisions.

In the context of war, the public and regulatory debate (and one source of the recent row between Anthropic and the US government) has focused on the seemingly binary distinction between fully autonomous weapons and those subject to “human control”. In the corporate world, too, the deployment of semi-autonomous agents has led companies to turn to experienced humans as the ultimate decision-makers. Amazon, for instance, has reportedly said that junior and mid-level software engineers must have more senior engineers sign off on AI-assisted changes.
