AI Safety Newsletter #69: Department of War, Anthropic, and National Security
Summary
The US Department of War designated Anthropic a "supply chain risk," a classification that bars a company's products from use in government contracts, after the company refused to remove safety restrictions from its AI model Claude, specifically rejecting military demands to enable fully autonomous weapons and domestic mass surveillance. Anthropic is challenging the designation in court, and legal experts question whether the Department of War has the authority to impose such restrictions outside of an actual contract dispute.
Original source: https://newsletter.safe.ai/p/ai-safety-newsletter-69-department
First tracked: March 13, 2026 at 12:00 PM
Classified by LLM (prompt v3) · confidence: 92%