CVE-2025-30165: vLLM is an inference and serving engine for large language models.
Summary
CVE-2025-30165 is a vulnerability in vLLM (a system for running large language models) that affects multi-node deployments using the V0 engine. The vulnerability exists because vLLM deserializes (converts from storage format back into usable data) incoming network messages using Python's pickle module, an unsafe format that allows an attacker who controls the byte stream to execute arbitrary code on secondary hosts. This could let an attacker compromise an entire vLLM deployment if they control the primary host or use network-level attacks such as ARP cache poisoning (redirecting network traffic to a malicious server).
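A minimal sketch of why pickle deserialization of untrusted data leads to code execution (the `Malicious` class and the printed message are illustrative, not taken from vLLM's code): any Python class can define `__reduce__` to return a callable that the unpickler invokes, so decoding attacker-controlled bytes runs attacker-chosen code.

```python
import pickle

class Malicious:
    """Illustrative payload class: __reduce__ tells the unpickler
    to call an arbitrary callable during deserialization."""

    def __reduce__(self):
        # Runs on the victim host when pickle.loads() is called;
        # a real attacker would invoke something far worse than print.
        return (print, ("arbitrary code executed during unpickling",))

# The "network message" an attacker would send to a secondary host.
payload = pickle.dumps(Malicious())

# Deserializing is not a passive decode: it executes the callable.
pickle.loads(payload)
```

This is why the Python documentation warns to never unpickle data from an untrusted or unauthenticated source; safe alternatives for network messages include JSON or msgpack, which decode only data, not code.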
Solution / Mitigation
The maintainers recommend that users ensure their environment is on a secure network. Additionally, the V0 engine has been off by default since v0.8.0, and the V1 engine is not affected by this issue.
Vulnerability Details
Severity: 8 (High)
EPSS: 1.3%
Classification
Affected Vendors
vLLM
Original source: https://nvd.nist.gov/vuln/detail/CVE-2025-30165
First tracked: February 15, 2026 at 08:44 PM
Classified by LLM (prompt v3) · confidence: 95%