ChatGPT: Hacking Memories with Prompt Injection
Summary
ChatGPT's new memory feature, which lets the AI remember information across different chat sessions for a more personalized experience, can be exploited through indirect prompt injection (tricking an AI by hiding malicious instructions in its input). Attackers could manipulate ChatGPT into storing false information, biases, or unwanted instructions by injecting commands through connected apps like Google Drive, uploaded documents, or web browsing features.
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2024/chatgpt-hacking-memories/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 92%