Prompt injection turned Google’s Antigravity file search into RCE
Summary
Security researchers found a prompt injection flaw (tricking an AI by hiding instructions in its input) in Google's Antigravity IDE that could bypass its Secure Mode sandbox protections and achieve RCE (remote code execution, where an attacker runs commands on a system they don't own). The vulnerability stemmed from insufficient input validation of the file search tool's Pattern parameter: attackers could inject malicious command-line flags, turning a simple file search into arbitrary code execution. Google acknowledged the issue in January and fixed it internally; Antigravity users are now protected without needing to take action.
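The article does not name the search binary Antigravity shells out to, so the sketch below is illustrative only: it assumes a ripgrep-like CLI invoked with a subprocess-style argument vector, and the function names and payload are hypothetical. It shows the class of bug described above: a user-controlled pattern placed in flag position gets parsed as an option (ripgrep's real `--pre` flag, for instance, runs an arbitrary preprocessor command on each file), whereas terminating option parsing with `--` forces the pattern to stay a literal search string.

```python
def build_search_argv_unsafe(pattern: str, path: str) -> list[str]:
    # Vulnerable construction: the user-controlled pattern lands in a
    # position where the CLI's option parser will still accept flags.
    # A "pattern" like --pre=/bin/sh is then treated as an option that
    # executes an attacker-chosen command, not as a regex to search for.
    return ["rg", pattern, path]


def build_search_argv_safe(pattern: str, path: str) -> list[str]:
    # Hardened construction: "--" ends option parsing (POSIX end-of-options
    # convention), so everything after it is read as a positional argument.
    # The injected flag is now just an odd-looking literal search pattern.
    return ["rg", "--", pattern, path]


# Hypothetical injected input standing in for a malicious Pattern value.
malicious = "--pre=/bin/sh"

print(build_search_argv_unsafe(malicious, "."))  # payload sits in flag position
print(build_search_argv_safe(malicious, "."))    # payload neutralized after "--"
```

The same end-of-options idiom applies to most Unix CLIs; the general fix is to never splice untrusted strings into an argument vector where they can be read as options, and to validate or reject leading-dash input outright when the tool offers no `--` delimiter.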
Solution / Mitigation
Google has already fixed the flaw internally. According to the source: 'Antigravity users need not do anything else to remain protected.' No user-side updates or patches are required.
Affected Vendors
Google
Related Issues
CVE-2026-30308: In its design for automatic terminal command execution, HAI Build Code Generator offers two options: Execute safe comman…
CVE-2026-40087: LangChain is a framework for building agents and LLM-powered applications. Prior to 0.3.84 and 1.2.28, LangChain's f-str…
Original source: https://www.csoonline.com/article/4161382/prompt-injection-turned-googles-antigravity-file-search-into-rce.html
First tracked: April 21, 2026 at 02:00 PM
Classified by LLM (prompt v3) · confidence: 95%