Side-Channel Attacks Against LLMs
Summary
These three research papers describe side-channel attacks against large language models: attacks that exploit indirect information leaks, such as timing or packet sizes, rather than breaking encryption directly. Because LLM responses are typically streamed token by token, an attacker who can monitor encrypted network traffic may infer sensitive information about user conversations, such as the topic of a message, specific queries, or even personal data, by analyzing patterns in response times, packet sizes, or token counts produced during inference.
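To make the leak concrete, here is a minimal, hypothetical sketch of the packet-size channel. It assumes a streamed response where each token travels in its own encrypted record and ciphertext length equals plaintext length plus a fixed overhead; the `OVERHEAD` value and helper names are illustrative, not taken from the papers.

```python
# Hypothetical sketch: recovering a token-length "fingerprint" from observed
# encrypted packet sizes. Assumes one token per record and
# ciphertext_len = token_len + OVERHEAD (fixed record/tag bytes).

OVERHEAD = 29  # assumed fixed per-record overhead (illustrative)

def token_lengths_from_packets(packet_sizes):
    """Subtract the fixed overhead to recover per-token plaintext lengths."""
    return [size - OVERHEAD for size in packet_sizes]

def fingerprint(tokens):
    """A candidate response expressed as its sequence of token lengths."""
    return [len(t) for t in tokens]

def matches(packet_sizes, candidate_tokens):
    """Check whether observed traffic is consistent with a candidate reply."""
    return token_lengths_from_packets(packet_sizes) == fingerprint(candidate_tokens)

# Simulated capture: attacker observes record sizes for a streamed reply.
candidate = ["How", " to", " reset", " a", " password"]
observed = [len(t) + OVERHEAD for t in candidate]
print(matches(observed, candidate))  # True
```

Real traffic is noisier (coalesced records, variable overheads), so the papers rely on statistical models rather than exact matching, but the underlying signal is this length sequence.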
Solution / Mitigation
The source text proposes several mitigations but notes that none provides complete protection. Specific defenses include random padding (appending dummy bytes to obscure message lengths), token batching (grouping several tokens into each network message before sending), packet injection (inserting decoy packets into the stream), and iteration-wise token aggregation (combining token counts across processing steps). The papers also note that responsible disclosure and collaboration with LLM providers have led to initial countermeasures being deployed, though the authors conclude that providers need to do more to fully address these vulnerabilities.
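The first two defenses can be sketched in a few lines. This is an illustrative toy, not any provider's implementation: the `BATCH_SIZE` and `MAX_PAD` parameters are assumptions, and the padding here stands in for whatever the transport layer would add before encryption.

```python
import random

BATCH_SIZE = 4  # assumed number of tokens per record (illustrative)
MAX_PAD = 16    # assumed upper bound on random padding bytes (illustrative)

def batch_tokens(tokens, n=BATCH_SIZE):
    """Token batching: group n tokens per record so individual token
    boundaries and lengths are no longer visible on the wire."""
    return [tokens[i:i + n] for i in range(0, len(tokens), n)]

def pad_record(record_text, rng):
    """Random padding: append a random number of filler bytes so the
    ciphertext length is decoupled from the plaintext length."""
    pad = b"\x00" * rng.randint(0, MAX_PAD)
    return record_text.encode() + pad

rng = random.Random(0)  # seeded only to make the demo reproducible
tokens = ["How", " to", " reset", " a", " password"]
records = ["".join(batch) for batch in batch_tokens(tokens)]
padded = [pad_record(r, rng) for r in records]
print(len(records))               # 2 records instead of 5
print([len(p) for p in padded])   # lengths no longer map to token lengths
```

The trade-off the papers highlight applies here too: larger batches and more padding leak less but add latency and bandwidth overhead, which is why none of these defenses is complete on its own.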
Classification
Affected Vendors
Related Issues
Original source: https://www.schneier.com/blog/archives/2026/02/side-channel-attacks-against-llms.html
First tracked: February 17, 2026 at 11:00 AM
Classified by LLM (prompt v3) · confidence: 95%