CVE-2026-24779: vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.14.1, a Server-Side Request Forgery (SSRF) vulnerability exists in its multimodal feature.
Summary
vLLM, a system for running and serving large language models, has a Server-Side Request Forgery (SSRF) vulnerability in its multimodal feature before version 0.14.1. In an SSRF attack, the attacker tricks a server into making requests to targets the attacker chooses. Here, the bug exists because two different Python libraries interpret backslashes in URLs differently: a crafted URL can pass the server's security check under one interpretation while the actual request, made under the other interpretation, is sent to an internal network system, potentially exposing data or causing failures.
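The advisory does not show vLLM's exact code, but the class of bug can be sketched with Python's standard library. The hostnames, allowlist, and helper functions below are illustrative assumptions: WHATWG-style parsers (used by browsers and some HTTP clients) treat `\` as `/`, ending the authority section, while `urllib.parse` follows RFC 3986 and treats `\` as an ordinary character, so everything before the last `@` becomes userinfo.

```python
from urllib.parse import urlparse

# Hypothetical allowlist a server might check before fetching a URL.
ALLOWED_HOSTS = {"trusted.example"}

def whatwg_hostname(url: str) -> str:
    # WHATWG-compliant parsers normalize "\" to "/", so the backslash
    # terminates the host portion of the URL.
    return urlparse(url.replace("\\", "/")).hostname

def rfc3986_hostname(url: str) -> str:
    # urllib.parse keeps "\" as a plain character inside the authority,
    # so the text before the last "@" is treated as userinfo.
    return urlparse(url).hostname

crafted = "http://trusted.example\\@169.254.169.254/latest/meta-data"

# A validator using WHATWG semantics sees the allowlisted host...
print(whatwg_hostname(crafted))   # trusted.example
# ...but a fetcher using urllib semantics contacts the internal host.
print(rfc3986_hostname(crafted))  # 169.254.169.254
```

The same string thus passes an allowlist check performed with one parser while the request goes to an internal address when issued with the other, which is the discrepancy the fix in 0.14.1 closes.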
Solution / Mitigation
Update to version 0.14.1, which contains a patch for the issue.
Vulnerability Details
CVSS: 7.1 (High)
EPSS: 0.0%
Original source: https://nvd.nist.gov/vuln/detail/CVE-2026-24779
First tracked: February 15, 2026 at 08:44 PM