CVE-2025-62164: vLLM is an inference and serving engine for large language models (LLMs). From versions 0.10.2 to before 0.11.1, a memory corruption vulnerability in the handling of user-supplied prompt embeddings can crash the server or potentially allow remote code execution.
Summary
vLLM versions 0.10.2 through 0.11.0 contain a flaw in how they process user-supplied prompt embeddings (numerical representations of text). A crafted serialized tensor can bypass validation during deserialization and cause memory corruption (writing data outside its intended location in memory), crashing the server or potentially enabling remote code execution (RCE), where an attacker runs commands on the host.
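The general failure mode here is accepting a serialized tensor whose declared shape is never checked against the data actually supplied, so a later write runs past the buffer. The sketch below illustrates the class of check involved using a hypothetical minimal wire format (4-byte rank, then dims, then raw float32 data); the format, names, and limits are illustrative and are not vLLM's actual implementation.

```python
import struct

# Cap on total elements, to bound allocation from untrusted input
# (an assumed illustrative limit, not a vLLM constant).
MAX_ELEMS = 1 << 24


def validate_tensor_payload(buf: bytes) -> tuple:
    """Reject payloads whose declared shape disagrees with the data length.

    Hypothetical header: little-endian u32 rank, then `rank` u32 dims,
    then raw float32 element data. Skipping checks like these is what
    turns a crafted header into an out-of-bounds write.
    """
    if len(buf) < 4:
        raise ValueError("truncated header")
    (ndim,) = struct.unpack_from("<I", buf, 0)
    if ndim == 0 or ndim > 8:
        raise ValueError("implausible rank")
    header_len = 4 + 4 * ndim
    if len(buf) < header_len:
        raise ValueError("truncated shape")
    shape = struct.unpack_from("<%dI" % ndim, buf, 4)
    elems = 1
    for d in shape:
        # Multiply with an overflow/limit check before trusting the shape.
        if d == 0 or elems * d > MAX_ELEMS:
            raise ValueError("shape overflow")
        elems *= d
    if len(buf) - header_len != 4 * elems:  # float32 = 4 bytes per element
        raise ValueError("declared shape does not match payload size")
    return shape
```

A consumer would call `validate_tensor_payload` before allocating or copying anything, so a header claiming a 2x3 tensor with only 8 bytes of data is rejected instead of driving a short read or a wild write.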
Solution / Mitigation
Update to vLLM version 0.11.1 or later. The source states: 'This issue has been patched in version 0.11.1.'
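A quick way to triage a deployment is to compare the installed version against the affected range (0.10.2 inclusive to 0.11.1 exclusive). This sketch assumes simple numeric versions like "0.11.0"; real packaging metadata is better handled with `packaging.version.parse`.

```python
def _as_tuple(version: str) -> tuple:
    """Parse a plain numeric version string like '0.11.1' into a tuple."""
    return tuple(int(part) for part in version.split("."))


def is_affected(version: str) -> bool:
    """True if the version falls in the vulnerable range [0.10.2, 0.11.1)."""
    return (0, 10, 2) <= _as_tuple(version) < (0, 11, 1)


def is_patched(version: str) -> bool:
    """True if the version is at or past the fixed release 0.11.1."""
    return _as_tuple(version) >= (0, 11, 1)
```

For example, `is_affected("0.11.0")` is true while `is_affected("0.11.1")` is false, matching the advisory's range.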
Vulnerability Details
CVSS base score: 8.8 (High)
EPSS: 0.1%
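For readers checking the 8.8 figure, the CVSS v3.1 base-score arithmetic can be reproduced directly. The vector used below (AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H) is one network-exploitable, scope-unchanged vector that yields 8.8; the authoritative vector for this CVE is on the NVD entry.

```python
# CVSS v3.1 base score, scope unchanged. Metric weights per the spec:
AV, AC, PR, UI = 0.85, 0.77, 0.62, 0.85   # Network / Low / Low (unchanged) / None
C = I = A = 0.56                          # High impact on each dimension

iss = 1 - (1 - C) * (1 - I) * (1 - A)     # Impact Sub-Score
impact = 6.42 * iss                       # scope-unchanged impact
exploitability = 8.22 * AV * AC * PR * UI


def roundup(x: float) -> float:
    """CVSS v3.1 'Roundup': smallest number to one decimal >= x."""
    i = round(x * 100000)
    if i % 10000 == 0:
        return i / 100000.0
    return (i // 10000 + 1) / 10.0


base = roundup(min(impact + exploitability, 10))  # 8.8
```

The low EPSS (0.1%) reflects predicted exploitation likelihood in the wild, which is a separate signal from CVSS severity.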
Classification
Affected Vendors
Related Issues
CVE-2022-29200: TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation …
CVE-2021-29541: TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a dereference of a null pointer …
Original source: https://nvd.nist.gov/vuln/detail/CVE-2025-62164
First tracked: February 15, 2026 at 08:37 PM
Classified by LLM (prompt v3) · confidence: 95%