CVE-2026-22807: vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loaded code from model repositories without verifying that the code was trustworthy, allowing arbitrary Python execution on the server when a model is loaded.
Summary
vLLM (a system for running and serving large language models) had a security flaw in versions 0.10.1 through 0.13.x: it automatically loaded code from model repositories without checking whether that code was trustworthy, allowing an attacker to execute arbitrary Python on the server when a model loads. Exploitation requires no API access and no requests to the server; the attacker only needs to control which model repository vLLM loads from.
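The advisory does not name the exact code path, but the description matches the well-known trust_remote_code hazard in the Hugging Face loading stack: a model repository can ship Python modules that are imported and executed on the host as soon as the model is loaded, before any inference request arrives. A minimal sketch of that general pattern, using the transformers loader directly rather than vLLM's internal path (the repository name is hypothetical):

    # Hypothetical illustration of the remote-code hazard, not vLLM's
    # internal code path. Loading a repository that ships custom Python
    # (e.g. a modeling_*.py file) imports and runs that code locally.
    from transformers import AutoModel

    model = AutoModel.from_pretrained(
        "attacker/evil-model",   # hypothetical attacker-controlled repo
        trust_remote_code=True,  # opts in to executing repo-supplied code
    )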
Solution / Mitigation
Upgrade to vLLM version 0.14.0, which fixes this issue.
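The advisory does not say whether affected versions honored an explicit opt-out, so treat the flag below as defense in depth, not a substitute for upgrading. A minimal sketch using vLLM's offline entry point, with a placeholder model name:

    # Pin the fixed release first:
    #   pip install "vllm>=0.14.0"
    from vllm import LLM

    llm = LLM(
        model="org/some-model",   # placeholder model identifier
        trust_remote_code=False,  # refuse to execute repo-supplied Python
    )

The OpenAI-compatible server exposes the same control as a --trust-remote-code flag, which is off by default; avoid passing it when serving models from repositories you do not control.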
Vulnerability Details
Severity: 8.8 (High)
EPSS: 0.1%
Original source: https://nvd.nist.gov/vuln/detail/CVE-2026-22807
First tracked: February 15, 2026 at 08:44 PM