Palantir is still using Anthropic's Claude as Pentagon blacklist plays out, CEO Karp says
Summary
Palantir continues to use Anthropic's Claude (a large language model, or LLM, which is AI software trained to understand and generate text) despite the Pentagon designating Anthropic a supply-chain risk (a vendor or product deemed potentially unreliable or unsafe for government use). The Department of Defense plans to phase out Anthropic's tools over six months, though exemptions may be granted for critical national security operations.
Solution / Mitigation
According to the source, the Department of Defense has given federal agencies six months to phase out Anthropic's products. An internal Pentagon memo states that exemptions will be considered for 'mission-critical activities' in rare circumstances where 'no viable alternative exists.' The DoD's Chief Technology Officer said the government will transition to other large language models, but cautioned that 'you can't just rip out a system that's deeply embedded overnight.'
Classification
Affected Vendors
Related Issues
Original source: https://www.cnbc.com/2026/03/12/karp-palantir-anthropic-claude-pentagon-blacklist.html
First tracked: March 12, 2026 at 12:00 PM
Classified by LLM (prompt v3) · confidence: 92%