Anthropic says it won’t give US military unconditional AI use
SAN FRANCISCO – AI company Anthropic said on Feb 26 that it would not give the US Defence Department unrestricted use of its technology despite being pressured to comply by the Pentagon.
“These threats do not change our position: We cannot in good conscience accede to their request,” Anthropic chief executive Dario Amodei said in a statement.
Washington had given the AI start-up until Feb 27 to agree to unconditional military use of its technology, even if it violates ethical standards at the company, or face being forced to comply under emergency federal powers.
Dr Amodei said Anthropic models have been deployed by the Pentagon and intelligence agencies to defend the country, but that it draws an ethical line regarding its use for mass surveillance of US citizens and fully autonomous weapons.
“Using these systems for mass domestic surveillance is incompatible with democratic values,” Dr Amodei said.
And leading AI systems are not yet reliable enough to be trusted to power deadly weapons without a human in ultimate control, he added.
“We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
After meeting with Anthropic early this week, the Pentagon delivered a stark ultimatum: agree to unrestricted military use of its technology by 5.01pm local time on Feb 27 or face being forced to comply under the Defence Production Act.
The Cold War-era law, last used during the Covid-19 pandemic, grants the federal government sweeping powers to compel private industry to prioritise national security needs.
The Pentagon also threatened to label Anthropic a supply chain risk, a designation usually reserved for firms from adversary countries, which could severely damage the company’s reputation and its ability to work with the US government.
A senior Pentagon official at the time pushed back on the company’s concerns, insisting the Defence Department had always operated within the law.
“Legality is the Pentagon’s responsibility as the end user,” the official said, adding that the department “has only given out lawful orders”.
Officials also confirmed that an exchange regarding intercontinental ballistic missiles had taken place between Anthropic and the Pentagon, underscoring the sensitivity of the applications at the heart of the dispute.
The Pentagon confirmed that Mr Elon Musk’s Grok system had been cleared for use in a classified setting, while other contracted companies – OpenAI and Google – were described as close to similar clearances, piling competitive pressure on Anthropic to fall in line.
Anthropic was contracted alongside those companies in 2025 to supply AI models for a range of military applications under a US$200 million (S$252.7 million) agreement.
Former OpenAI employees founded Anthropic in 2021 on the premise that AI development should prioritise safety – a philosophy that now puts it on a collision course with the Pentagon and the White House.
“Anthropic understands that the Department of War, not private companies, makes military decisions,” Dr Amodei said.
“However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” AFP