The Guardian view on AI in war: the Iran conflict shows that the paradigm shift has already begun
Summary
The UN and AI companies are debating who should control how artificial intelligence is used in military contexts, a question made urgent by the US military's use of AI during the Iran crisis. The AI company Anthropic refused to remove safeguards (safety features built into its models) that prevent the US Department of Defense from using its technology for mass surveillance or autonomous lethal weapons (weapons that can select and fire at targets without human control), while OpenAI later agreed to work with the Pentagon despite similar concerns. The article argues that decisions about military AI raise urgent questions of democratic oversight and international control, and should not be left to companies or governments alone.
Classification
Affected Vendors
Related Issues
First tracked: March 6, 2026 at 03:00 PM
Classified by LLM (prompt v3) · confidence: 85%