What does the US military’s feud with Anthropic mean for AI used in war?
Summary
Anthropic, an AI company, is in a dispute with the US military over safety restrictions on its Claude AI model. Anthropic refuses to allow the government to use Claude for domestic mass surveillance (monitoring citizens' communications without proper oversight) or for autonomous weapons systems (weapons that can select and attack targets without human control). In response, the Pentagon has designated Anthropic a supply chain risk (a company whose products pose a national security threat) for not agreeing to the government's demands. Anthropic plans to challenge the designation in court.
Original source: https://www.theguardian.com/technology/2026/mar/07/anthropic-claude-ai-pentagon-us-military
First tracked: March 7, 2026 at 11:00 AM
Classified by LLM (prompt v3) · confidence: 92%