Terminal DiLLMa: LLM-powered Apps Can Hijack Your Terminal Via Prompt Injection
Summary
LLMs (large language models) can emit ANSI escape codes: the control-character sequences that terminal emulators interpret to change how text is displayed and how the terminal behaves. When an LLM-powered application prints model output to a terminal without filtering it, an attacker can use prompt injection (hiding instructions in content the model processes) to make the model emit escape sequences that clear the screen, hide or spoof text, or exfiltrate clipboard data. The vulnerability affects LLM-integrated command-line tools and any application that does not filter or encode these control characters before displaying LLM output.
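The mitigation described above can be illustrated with a minimal sketch. The source does not prescribe a specific implementation; the function name and the regex below are illustrative assumptions, showing one way to strip ANSI escape sequences and stray control characters from untrusted LLM output before it reaches the terminal:

```python
import re

# Matches common ANSI escape sequences:
#   - CSI sequences (ESC [ ... final byte), e.g. colors, cursor movement, screen clearing
#   - OSC sequences (ESC ] ... BEL or ESC \), e.g. window title, clipboard writes
#   - other two-byte Fe escape sequences
ANSI_RE = re.compile(
    r"\x1b\[[0-?]*[ -/]*[@-~]"             # CSI
    r"|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)"  # OSC
    r"|\x1b[@-Z\\-_]"                      # other Fe sequences
)

def sanitize_llm_output(text: str) -> str:
    """Remove ANSI escape sequences and stray control characters
    from untrusted model output before printing it to a terminal."""
    text = ANSI_RE.sub("", text)
    # Drop any remaining C0 control characters except newline and tab.
    return "".join(ch for ch in text if ch in "\n\t" or ord(ch) >= 0x20)
```

For example, `sanitize_llm_output("ok \x1b[2J\x1b]0;evil\x07done")` removes both the screen-clear (CSI `2J`) and the title-setting OSC sequence, leaving only the printable text. An alternative defense is to encode rather than strip, e.g. with `repr()`, so suspicious output remains visible for inspection.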
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2024/terminal-dillmas-prompt-injection-ansi-sequences/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 92%