Google Fixes Critical RCE Flaw in AI-Based Antigravity Tool
Summary
Google discovered a critical flaw in its AI-based tool for filesystem operations where a prompt injection vulnerability (tricking an AI by hiding instructions in its input) allowed attackers to escape the sandbox (a restricted environment meant to contain the program) and execute arbitrary code on the system. The problem was caused by inadequate input sanitization (cleaning/filtering of user data), which failed to prevent malicious instructions from being processed.
Classification
Affected Vendors
Related Issues
CVE-2026-30308: In its design for automatic terminal command execution, HAI Build Code Generator offers two options: Execute safe comman
CVE-2026-40087: LangChain is a framework for building agents and LLM-powered applications. Prior to 0.3.84 and 1.2.28, LangChain's f-str
Original source: https://www.darkreading.com/vulnerabilities-threats/google-fixes-critical-rce-flaw-ai-based-antigravity-tool
First tracked: April 21, 2026 at 02:00 PM
Classified by LLM (prompt v3) · confidence: 85%