Malicious AI Prompt Injection Attacks Increasing, but Sophistication Still Low: Google
Summary
Google researchers found that indirect prompt injection attacks (hidden traps where malicious instructions in external data trick AI systems into bypassing their safety rules) on websites are increasing, with a 32% rise between November 2025 and February 2026, but current attacks remain relatively unsophisticated. The attacks they discovered fell into two categories: exfiltration attempts that try to steal data like IP addresses and credentials, and destruction attempts that aim to delete files, though neither showed advanced techniques. Researchers warn that while today's attacks are low in sophistication, the upward trend suggests the threat will soon grow in both scale and complexity.
Classification
Affected Vendors
Related Issues
CVE-2026-30308: In its design for automatic terminal command execution, HAI Build Code Generator offers two options: Execute safe comman
CVE-2026-40087: LangChain is a framework for building agents and LLM-powered applications. Prior to 0.3.84 and 1.2.28, LangChain's f-str
Original source: https://www.securityweek.com/malicious-ai-prompt-injection-attacks-increasing-but-sophistication-still-low-google/
First tracked: April 27, 2026 at 02:00 PM
Classified by LLM (prompt v3) · confidence: 92%