US military used Anthropic’s AI model Claude in Venezuela raid, report says
Summary
According to the Wall Street Journal, the US military used Claude, an AI model made by Anthropic, in an operation in Venezuela that involved airstrikes and resulted in 83 deaths. Such use would violate Anthropic's terms of use, which explicitly forbid using Claude for violence, weapons development, or surveillance.
Original source: https://www.theguardian.com/technology/2026/feb/14/us-military-anthropic-ai-model-claude-venezuela-raid
First tracked: February 14, 2026 at 03:00 PM
Classified by LLM (prompt v3) · confidence: 85%