Hidden Prompt Injections with Anthropic Claude
Summary
A researcher discovered that Anthropic's Claude AI model is vulnerable to hidden prompt injections using Unicode Tags code points (invisible characters that can carry secret instructions in text). Like ChatGPT before it, Claude can interpret these hidden instructions and follow them, even though users cannot see them on their screen. The researcher reported the issue to Anthropic, but the ticket was closed without further details provided.
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2024/claude-hidden-prompt-injection-ascii-smuggling/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 85%