Exfiltrating Your ChatGPT Chat History and Memories With Prompt Injection
Summary
A researcher discovered that ChatGPT's 'safe URL' feature, which is supposed to prevent data theft, can be bypassed using prompt injection (tricking an AI by hiding malicious instructions in its input). By exploiting this bypass, an attacker can trick ChatGPT into sending sensitive information like your chat history and memories to a server they control, especially if you ask ChatGPT to process untrusted content like PDFs or websites.
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2025/chatgpt-chat-history-data-exfiltration/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 92%