CVE-2026-27893: vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model files ignore the user's setting that disables remote code execution, so malicious code in a model repository can still be executed even when that capability has been explicitly turned off. Version 0.18.0 contains a patch for this issue.
Summary
vLLM (a tool that runs and serves large language models) has a vulnerability in versions 0.10.1 through 0.17.x where two model files ignore a user's security setting that disables remote code execution (the ability to run code from outside sources). This means attackers could run malicious code through model repositories even when the user explicitly turned off that capability.
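For context, the setting being bypassed corresponds to the kind of "trust remote code" switch that vLLM exposes when loading models from a repository; the advisory does not name the exact option, so the flag shown below is an assumption. A minimal sketch of how a user would normally opt out of remote code execution when serving a model:

```python
# Minimal sketch, assuming the bypassed setting is vLLM's trust_remote_code flag
# (the advisory does not name the exact option). With the flag left False,
# custom Python code shipped inside a model repository should not be executed.
from vllm import LLM

llm = LLM(
    model="some-org/some-model",  # hypothetical model repository
    trust_remote_code=False,      # explicit opt-out of remote code execution
)
```

In affected versions, the two model files described above would still execute code from the repository even with this opt-out in place.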
Solution / Mitigation
Upgrade to version 0.18.0, which patches the issue.
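Where an immediate upgrade is not possible, it may help to check the installed version before serving models from untrusted repositories. A minimal sketch, assuming the package is installed under the name vllm and that the packaging library is available:

```python
# Minimal sketch: fail fast if the installed vLLM predates the patched release
# (0.18.0 per this advisory) before loading models from untrusted sources.
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("vllm"))
if installed < Version("0.18.0"):
    raise RuntimeError(
        f"vLLM {installed} is affected by CVE-2026-27893; upgrade to >= 0.18.0"
    )
```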
Vulnerability Details
CVSS v3.1 base score: 8.8 (High)
EPSS: 0.0%
CVSS vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Attack vector: Network
Attack complexity: Low
Privileges required: None
User interaction: Required
Published: March 26, 2026
Original source: https://nvd.nist.gov/vuln/detail/CVE-2026-27893