There are a lot of things that AI can do. It can sort out your shopping list, and it can keep your kids entertained when they’re mutinous by spinning up a tailor-made bedtime story for them. It can make you more efficient at work, and can help our government operate more effectively.
What is written about less, and what we need to shout louder about now, are the risks inherent in the militarisation of AI. In the past three months Donald Trump’s White House has reportedly used AI twice to effect regime change, or, in the most recent case in Iran, to get as close to doing so as possible, leaving it to rank-and-file Iranians to finish the job.
First, Anthropic’s Claude AI model, which most people use as a slightly more discerning alternative to ChatGPT, was supposedly used to both plan and execute the snatching of Nicolás Maduro from his compound in Venezuela, though it is unclear exactly how the model was deployed. Then this weekend we learned that the tool had been used again, to parse intelligence that aided the hugely damaging barrage of missiles that has rained down on Iran, apparently identifying targets and running simulations.
It’s hard to overstate how significant both moments are. AI has been used in the planning and execution of military operations that have led to an unknown number of casualties and roiled the Middle East.
If that makes you feel uneasy, you’re not alone. The CEO of Anthropic, Dario Amodei, has been embroiled in a very ugly public spat with the US president after he refused to relax two “red lines” for Claude: that it should not be used for mass domestic surveillance, nor to build fully autonomous weapons that select and engage targets without meaningful human control. OpenAI quickly swooped in and signed an agreement with the Pentagon, though it claims the terms of that deal actually give it stronger protections than the ones Anthropic wanted.
Regardless of the specific subclauses in the contract, it bears repeating: a tool that began public life as a chatty interface for summarising emails and helping you write a cover letter is now sitting somewhere along the chain that turns information into violence.
It used to be that questions such as “who should control AI, and what happens if it gets used militarily?” were debated in the abstract by academics on panels. There were worries, but they felt remote because they hadn’t come to fruition. When Maduro was swept up by special forces in January, and the bombs started dropping on Iran, apparently all with AI help, that calculus changed.
A basic principle of armed conflict has been that you wield big, scary weapons but never use them. They’re for deterrence. The theory of mutually assured destruction meant that people shied away from pushing the button on nuclear bombs. (Worryingly, early indications from war-game scenarios are that AI decision-makers are trigger-happy with nuclear weapons.)
Now that restraint is gone. More countries will use AI in their military planning and actions, and understandably so, because it has been shown to be effective, even though there are obvious moral questions when AI is used to make military decisions. When military historians look back at the last few months, it’s easy to imagine them concluding that this use of AI resembles the nuclear bombs dropped on Japan: a moment with a clear before and an unclear after.

So what can we do about it? Very little. We should have had a blanket ban on military AI. We’ve been creeping away from that for more than a decade, ever since Demis Hassabis took a principled stand and said he would only sell his company, DeepMind, to Google if it agreed not to allow the technology to be used militarily. Last year Google’s parent company, Alphabet, quietly dropped its promise that it wouldn’t use AI for weapons. And Trump’s actions have loudly blown a hole in the idea.
But now the international community needs to work hard to bring Trump back from the brink. Allies should put pressure on Trump’s White House not just to be responsible in its military use of AI, but to accept binding constraints: international commitments, transparent procurement standards and meaningful oversight, which others should also sign up to, rather than treating ethics as a brake on action. Because if the world’s most powerful military normalises consumer-grade AI models as part of regime-change operations, we will be through the looking glass on AI: in a whole new, altogether more dangerous world.
-
Chris Stokel-Walker is the author of TikTok Boom: The Inside Story of the World’s Favourite App
