The Promptware Kill Chain
Summary
Attacks on AI language models have evolved beyond simple prompt injection (tricking an AI by hiding instructions in its input) into a more complex threat called "promptware," which follows a structured seven-step kill chain similar to traditional malware. The fundamental problem is that large language models (LLMs, AI systems trained on massive amounts of text) treat all input the same way, whether it's a trusted system command or untrusted data from a retrieved document, creating no architectural boundary between them.
Classification
Affected Vendors
Related Issues
Original source: https://www.schneier.com/blog/archives/2026/02/the-promptware-kill-chain.html
First tracked: February 16, 2026 at 11:00 AM
Classified by LLM (prompt v3) · confidence: 92%