Judge rejects Pentagon's attempt to 'cripple' Anthropic
Summary
Anthropic won a legal ruling that prevents the Pentagon from immediately halting government use of its AI tools, including Claude, after the company refused contract terms it feared could enable mass surveillance and autonomous weapons. A federal judge found that the government's move appeared to be retaliation for Anthropic's free-speech concerns rather than a response to genuine security issues, noting that officials had publicly criticized the company as 'woke' instead of citing specific technical risks.
Original source: https://www.bbc.com/news/articles/cvg4p02lvd0o?at_medium=RSS&at_campaign=rss
First tracked: March 27, 2026 at 02:00 AM
Classified by LLM (prompt v3) · confidence: 92%