Hacking Gemini's Memory with Prompt Injection and Delayed Tool Invocation
Summary
Google's Gemini AI can be tricked into storing false information in a user's long-term memory through prompt injection (hidden malicious instructions embedded in documents) combined with delayed tool invocation (planting trigger words that cause the AI to execute commands later when the user unknowingly says them). An attacker can craft a document that appears normal but contains hidden instructions telling Gemini to save false information about the user if they respond with certain words like 'yes' or 'no' in the same conversation.
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2025/gemini-memory-persistence-prompt-injection/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 92%