The Government Must Not Force Companies to Participate in AI-powered Surveillance
Summary
Anthropic, an AI company, refused to let the U.S. Department of Defense use its large language model (LLM, an AI trained on large amounts of text data) technology for surveillance, and the Pentagon retaliated by labeling the company a "supply chain risk." Anthropic is now asking the courts to block that designation, arguing that compelling a company to change its code violates the First Amendment. The article notes that the government already collects vast amounts of personal data and uses AI to analyze it, putting privacy and free speech at risk; companies should therefore be free to build guardrails (safety limits built into AI systems) without facing government punishment.
Original source: https://www.eff.org/deeplinks/2026/03/government-must-not-force-companies-participate-ai-powered-surveillance
First tracked: March 10, 2026 at 08:00 PM
Classified by LLM (prompt v3) · confidence: 85%