Exploit ChatGPT and Enter the Matrix to Learn about AI Security
Summary
A security researcher created a demonstration website that shows how indirect prompt injection (tricking an AI by hiding instructions in web content it reads) can be used to hijack ChatGPT when the browsing feature is enabled. The demo lets users explore various AI-based attacks, including data theft and manipulation of ChatGPT's responses, to raise awareness of these vulnerabilities.
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2023/chatgpt-vulns-enter-the-matrix/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 85%