AI threats in the wild: The current state of prompt injections on the web
Summary
Google's Threat Intelligence teams conducted a broad scan of the public web to find real-world examples of indirect prompt injection (IPI, where an AI system reads malicious instructions hidden in websites or documents instead of following a user's original request). The study found that most prompt injection detections on the web were actually false positives (harmless content like educational articles discussing the topic rather than actual attacks), making it difficult to identify genuine threats.
Classification
Affected Vendors
Related Issues
Original source: http://security.googleblog.com/2026/04/ai-threats-in-wild-current-state-of.html
First tracked: April 23, 2026 at 08:00 PM
Classified by LLM (prompt v3) · confidence: 92%