CVE-2026-4399: Prompt injection vulnerability in 1millionbot Millie chatbot that occurs when a user manages to evade chat restrictions
Summary
A prompt injection vulnerability (a technique where attackers hide malicious instructions in their input to trick an AI) exists in the 1millionbot Millie chatbot, allowing users to bypass safety restrictions using Boolean logic tricks (phrasing questions to trigger 'true' responses that activate hidden commands). This could let attackers extract sensitive information, misuse the service, or access restricted features that the chatbot was designed to block.
Vulnerability Details
EPSS: 0.0%
March 31, 2026
Classification
Taxonomy References
Affected Vendors
Related Issues
Original source: https://nvd.nist.gov/vuln/detail/CVE-2026-4399
First tracked: March 31, 2026 at 08:07 AM
Classified by LLM (prompt v3) · confidence: 92%