CVE-2025-59425: vLLM is an inference and serving engine for large language models (LLMs). Before version 0.11.0rc2, the API key support
Summary
vLLM, a system for running and serving large language models, validated API keys (secret tokens that authenticate users) with a plain string comparison before version 0.11.0rc2. Such a comparison returns as soon as it reaches the first mismatching character, so the more leading characters an attacker guesses correctly, the longer the check takes. By measuring response times, an attacker could recover the key one character at a time (a timing attack) and bypass authentication entirely.
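As an illustration only (not vLLM's actual code), the following sketch contrasts the vulnerable pattern with a constant-time check using Python's standard library; function names are hypothetical:

```python
import hmac


def naive_check(provided: str, expected: str) -> bool:
    # Vulnerable pattern: `==` on strings short-circuits at the first
    # mismatching character, so response time leaks how many leading
    # characters of the guess are correct.
    return provided == expected


def constant_time_check(provided: str, expected: str) -> bool:
    # hmac.compare_digest compares the full inputs in time that does not
    # depend on *where* they differ, defeating this timing side channel.
    return hmac.compare_digest(provided.encode(), expected.encode())
```

The fix in 0.11.0rc2 addresses this class of issue; the general remedy is always to compare secrets with a constant-time primitive such as `hmac.compare_digest` rather than `==`.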
Solution / Mitigation
Update vLLM to version 0.11.0rc2 or later, which fixes the issue.
Vulnerability Details
CVSS: 7.5 (High)
EPSS: 0.4%
Related Issues
CVE-2022-21727: Tensorflow is an Open Source Machine Learning Framework. The implementation of shape inference for `Dequantize` is vulne
CVE-2026-22252: LibreChat is a ChatGPT clone with additional features. Prior to v0.8.2-rc2, LibreChat's MCP stdio transport accepts arbi
Original source: https://nvd.nist.gov/vuln/detail/CVE-2025-59425
First tracked: February 15, 2026 at 08:44 PM
Classified by LLM (prompt v3) · confidence: 95%