Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks
Summary
Anthropic refused a Pentagon demand to remove safety guardrails (built-in safeguards that prevent harmful outputs) from its Claude AI model and permit unrestricted military use, despite the threat of losing a $200 million contract. The Department of Defense gave the company until Friday to comply or be designated a 'supply chain risk,' a label that could damage Anthropic's reputation and harm it financially.
Original source: https://www.theguardian.com/us-news/2026/feb/26/anthropic-pentagon-claude
First tracked: February 27, 2026 at 07:00 AM
Classified by LLM (prompt v3) · confidence: 92%