Spyware Injection Into Your ChatGPT's Long-Term Memory (SpAIware)
Summary
Attackers can inject spyware into ChatGPT's Memory (a feature that stores information across chat sessions) through prompt injection (tricking an AI by hiding instructions in its input) on untrusted websites. Because Memory persists the attacker-controlled instructions across sessions, everything a user types in future conversations can be continuously exfiltrated. The vulnerability exploits a weakness where a security check called url_safe, which validates links before they are rendered, was performed only on the user's device rather than on OpenAI's servers. OpenAI has released a fix for the macOS app, and users should update to the latest version.
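To make the flaw concrete, the sketch below shows what a server-side url_safe-style check conceptually does: reject image links to non-allowlisted hosts before the client is ever asked to render them, so an attacker-supplied URL cannot smuggle conversation data out in its query string. This is a minimal illustration assuming an HTTPS-only allowlist policy; the function name, host list, and logic are hypothetical and not OpenAI's actual implementation.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts trusted to serve images.
ALLOWED_IMAGE_HOSTS = {"openai.com", "oaiusercontent.com"}

def is_url_safe(url: str) -> bool:
    """Return True only for https links to allowlisted hosts (or their subdomains)."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = parsed.hostname or ""
    # Accept an exact allowlisted host or any subdomain of one.
    return any(host == h or host.endswith("." + h) for h in ALLOWED_IMAGE_HOSTS)

# An attacker-controlled exfiltration URL is rejected, so the client never
# renders it and never leaks the data encoded in its query string.
print(is_url_safe("https://attacker.example/?q=secret"))    # False
print(is_url_safe("https://files.oaiusercontent.com/img"))  # True
```

The key point of the vulnerability is not the check itself but where it runs: enforced only in the client, it can be bypassed by any client (here, the macOS app) that fails to apply it, whereas a server-side check covers every client uniformly.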
Solution / Mitigation
OpenAI has released a fix for the macOS app. Ensure your app is updated to the latest version.
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2024/chatgpt-macos-app-persistent-data-exfiltration/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 92%