Anthropic doesn’t trust the Pentagon, and neither should you
Summary
Anthropic, maker of the AI assistant Claude, is in a legal dispute with the Pentagon after being designated a supply chain risk, i.e., a company deemed a security threat to government operations. At the core is a disagreement over whether the U.S. government can be trusted to obey the law when using AI for surveillance, given a long history of government lawyers interpreting surveillance statutes to permit far more monitoring than their plain language appears to allow.
Original source: https://www.theverge.com/podcast/893370/anthropic-pentagon-ai-mass-surveillance-nsa-privacy-spying
First tracked: March 12, 2026 at 12:00 PM
Classified by LLM (prompt v3) · confidence: 85%