“Never in the future will we move as slow as we are moving now,” the UN secretary-general, António Guterres, warned this week, addressing the urgent need to shape the use of artificial intelligence. The speed of technological development – as well as geopolitical turbulence – is collapsing the distinction between theoretical arguments and real-world events. A political row over the US military’s AI capabilities coincides with its unprecedented use in the Iran crisis.
The AI company Anthropic insisted that it could not remove safeguards preventing the Department of Defense from using its technology for domestic mass surveillance or autonomous lethal weapons. The Pentagon said it had no interest in such uses – but that such decisions should not be made by companies. Outrageously, the administration has not just dropped Anthropic but blacklisted it as a supply-chain risk. OpenAI stepped in, while insisting that it had maintained the red lines declared by Anthropic. Yet in an internal response to the user and employee backlash, its CEO, Sam Altman, acknowledged that it does not control the Pentagon’s use of its products and that the deal’s handling made OpenAI look “opportunistic and sloppy”.
But as Nicole van Rooijen, the executive director of Stop Killer Robots – which campaigns for human control in the use of force – has warned: “The issue is not just whether these weapons will be used, but how their precursor systems are already transforming the way wars are fought … Human control risks becoming an afterthought or a mere formality.”
The paradigm shift has already begun. Despite the row, Anthropic’s Claude has reportedly facilitated the massive and intensifying offensive which has already killed an estimated thousand-plus civilians in Iran. This is an era of bombing “quicker than the speed of thought”, experts told the Guardian this week, with AI identifying and prioritising targets, recommending weaponry and evaluating legal grounds for a strike.
AI is not a prerequisite for civilian deaths, military errors or unaccountability. The US defense secretary, Pete Hegseth, brags of loosening the rules of engagement. It is humans at the Pentagon who are dodging questions about the deaths of 165 schoolgirls in what appears to have been a US strike on a school in Iran on 28 February.
But – even without considering questions of AI inaccuracy and biases – the impacts are obvious to its users. One Israeli intelligence source observed of its use in the war on Gaza: “The targets never end. You have another 36,000 waiting.” Another said he spent 20 seconds assessing each target, stating: “I had zero added-value as a human, apart from being a stamp of approval.” Mass killing is eased in every sense, with further moral and emotional distancing, and reduced accountability.
Democratic oversight and multilateral constraints, instead of leaving decisions to entrepreneurs and defence departments, are essential. As the bombs rained on Iran, states met in Geneva to address lethal autonomous weapons systems; the draft text they considered would be a strong basis for a treaty that is sorely needed. Most governments want clear guidance on the military use of AI. It is the biggest players who resist – though they are at least in the room. The pace of AI-driven warfare means that caution can look like handing control to adversaries. Yet as tech workers and military officials themselves are realising, the dangers of uncontrolled expansion are far greater.
Do you have an opinion on the issues raised in this article? If you would like to submit a response of up to 300 words by email to be considered for publication in our letters section, please click here.