The tech giants have an interest in AI regulation
It is a way of holding back open-source proliferation
ChatGPT is an example of “generative” AI, which creates humanlike content based on its analysis of texts, images and sounds. PHOTO: REUTERS
The Economist
One of the joys of writing about business is that rare moment when you realise conventions are shifting in front of you. It sends a shiver down the spine. Vaingloriously, you start scribbling down every detail of your surroundings, as if you are drafting the opening lines of a bestseller.
It happened to your columnist recently in San Francisco, sitting in the pristine offices of Anthropic, a darling of the artificial intelligence (AI) scene. When Mr Jack Clark, one of Anthropic’s co-founders, drew an analogy between the Baruch Plan, a (failed) effort in 1946 to put the world’s atomic weapons under United Nations control, and the need for global coordination to prevent the proliferation of harmful AI, there was that old familiar tingle. When entrepreneurs compare their creations, even tangentially, to nuclear bombs, it feels like a turning point.