AI’s doomsday hype highlights the dark art of marketing
Artificial intelligence companies are using fear as the ultimate sales pitch.
Anthropic warns its forthcoming Mythos model can find flaws in an array of software programs, operating systems and browsers.
PHOTO: REUTERS
Since the beginning of the boom in generative artificial intelligence, technology leaders have talked up the dangers of the very systems they’re trying to sell. It’s a paradoxical marketing strategy and, unfortunately for some of the industries and companies deemed most vulnerable to disruption, it’s worked brilliantly. Fear, it turns out, is the ultimate sales pitch.
OpenAI chief executive Sam Altman used to say the technology behind ChatGPT could threaten human civilisation itself: “We face existential risk,” he said in 2023. That flavour of doom-mongering has become passé, judging by the new boogeymen Mr Altman and his peers have been pointing to lately: AI as a job killer and a threat to cybersecurity.
Anthropic CEO Dario Amodei warned in January that AI could wipe out half of all entry-level white-collar jobs in the next one to five years, destabilising economies and society. Now his company has cautioned that its forthcoming Mythos model can find flaws in an array of software programs, operating systems and browsers. Anthropic says the model is too dangerous to disseminate and that only a few pre-vetted companies, including Apple and Amazon, can access it.
The company’s evidence is unsettling. Mythos managed to find so-called zero-day vulnerabilities – previously unknown bugs that leave software vendors zero days to fix them – in several operating systems for servers and computers, potentially allowing someone to shut down systems or take control of them. In the hands of bad actors, that could cause havoc.
But publicising such hand-wringing (with a slick video and blog post, in Anthropic’s case) isn’t new. Back in 2019, OpenAI said it would hold its earlier GPT-2 model back from general release because it was too dangerous. Eventually, the company did distribute the software – and nothing happened, beyond stirring greater intrigue about what OpenAI was building.
Now AI labs are making their warnings more solution-oriented. Anthropic says it is partnering with tech firms on Project Glasswing to ensure software infrastructure can’t be hacked by Mythos once the model is released.
Mr Altman, meanwhile, proposes to tackle the jobs apocalypse with a grand design for industrial policy. On April 6, OpenAI published a 13-page document entitled Industry Policy for the Intelligence Age, suggesting, among other things, that governments create a public wealth fund to distribute cash to those disenfranchised by AI, and roll out robot taxes and a four-day workweek.
These aren’t terrible ideas, although South Korea is the only country so far to have attempted to introduce levies on automation, and the policy’s effectiveness is still unclear. And while it’s magnanimous of the AI labs to offer to solve job disruption and cyberthreats, Anthropic is also laying the groundwork for what it hopes will be a barnstorming initial public offering, while OpenAI likely wants to deflect attention from its management challenges, cash burn and a recent unflattering New Yorker profile of Mr Altman.
Perhaps that’s why some tech leaders seem to be trying to outdo each other on AI catastrophising. Microsoft’s former consumer AI lead Mustafa Suleyman said in February that all professional tasks could be automated by AI, and his timeline was even shorter than Mr Amodei’s – just 18 months.
Amid all this seemingly brutal honesty about AI comes the strong whiff of a marketing strategy. Studies that go as far back as the 1940s show you can make an argument more persuasive if you also acknowledge its weaknesses, especially with educated or sceptical audiences. Signalling truthfulness reduces suspicion and builds trust.
Mr Altman seems to know this well, having long leaned into controversy from when he ran his first start-up in his 20s and told a Wall Street Journal reporter all about the privacy risks of his flagship product to when he told a US Senate hearing more recently that AI could “go quite wrong”.
With no insight into the model itself, it’s difficult to gauge how far Anthropic’s warnings about Mythos veer into scaremongering. But the alleged white-collar job bloodbath is harder to square with the evidence. As I’ve argued before, national productivity statistics and labour market trends have yet to show any obvious dent from AI.
And a research briefing in March from advisory firm Oxford Economics suggests the doomsday predictions about AI destroying jobs en masse rely on a chain of assumptions. Instead, AI may assist rather than replace workers, the profits it generates may well be reinvested, and governments won’t just stand idly by if jobs are displaced. And, as with every previous tech revolution, new categories of career will emerge.
According to the same briefing, job hiring rates have actually risen alongside layoffs in the information sector, one of the most exposed to AI disruption. And 90 per cent of businesses have seen no impact from AI on employment at all in the last three years, according to a survey of 6,000 executives across the US and Europe published in March by the National Bureau of Economic Research.
Still, that hasn’t stopped several large companies, like Twitter co-founder Jack Dorsey’s Block, from co-opting the buzz around AI to justify layoffs – or Mr Altman himself from publicly cringing at their efforts to jump on the same narrative he has been riding. “There’s some AI washing where people are blaming AI for layoffs that they would otherwise do,” he complained at a conference earlier in 2026. It seems the dark art of AI marketing has many students. BLOOMBERG
Parmy Olson is a Bloomberg Opinion columnist covering technology.