CVE-2026-22773: vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 to before 0.12.0, a 1x1 pixel image sent to a model using the Idefics3 vision component can crash the server via a dimension mismatch, resulting in denial of service.
Summary
vLLM is a serving engine for running large language models. In versions 0.6.4 through 0.11.x, an attacker can crash the server by sending a tiny 1x1 pixel image to a model that uses the Idefics3 vision component. The malformed input triggers a dimension mismatch (a size incompatibility between data structures) that terminates the entire service, denying it to all users.
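The failure mode can be illustrated with a minimal sketch. All names and values below are hypothetical, not vLLM's actual API: a patch-based vision preprocessor produces zero patches for a 1x1 input, and a downstream step that assumes at least one patch raises an unhandled exception that takes down the serving loop. A defensive check rejects undersized images at the request boundary instead:

```python
# Hypothetical sketch of the failure mode. Patch-based vision encoders
# (like the one Idefics3-style models use) split an image into fixed-size
# patches; the patch size below is illustrative, not vLLM's actual value.
PATCH_SIZE = 14

def count_patches(width: int, height: int, patch: int = PATCH_SIZE) -> int:
    # Integer division: a 1x1 image yields 0 patches along each axis.
    return (width // patch) * (height // patch)

def validate_image(width: int, height: int) -> None:
    # Reject undersized inputs per-request, so a malformed image fails
    # one request instead of crashing the whole server.
    if count_patches(width, height) == 0:
        raise ValueError(
            f"image {width}x{height} too small: needs at least "
            f"{PATCH_SIZE}x{PATCH_SIZE} pixels"
        )

validate_image(224, 224)      # a normal image passes
try:
    validate_image(1, 1)      # the CVE's trigger input
except ValueError as exc:
    print("rejected:", exc)
```

The point of the sketch is where the check lives: validating at the API boundary converts a server-wide crash into a per-request error response.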
Solution / Mitigation
This issue has been patched in version 0.12.0. Users should upgrade to vLLM version 0.12.0 or later.
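For operators auditing a deployment, a quick check is whether the installed version is at least the patched release. The sketch below uses a simplified numeric comparison for illustration; production code should use a packaging-grade version parser instead:

```python
# Minimal sketch: compare an installed vLLM version string against the
# patched release (0.12.0). Simplified parsing for illustration only;
# it assumes plain "X.Y.Z" version strings.
def parse_version(v: str) -> tuple[int, ...]:
    return tuple(int(part) for part in v.split(".")[:3])

def is_patched(installed: str, patched: str = "0.12.0") -> bool:
    return parse_version(installed) >= parse_version(patched)

print(is_patched("0.11.2"))  # False: still vulnerable
print(is_patched("0.12.0"))  # True: patched
```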
Vulnerability Details
Severity: 6.5 (Medium)
EPSS: 0.0%
Classification
Affected Vendors
Related Issues
CVE-2022-29200: TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implem…
CVE-2021-29541: TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a dereference of a null p…
Original source: https://nvd.nist.gov/vuln/detail/CVE-2026-22773
First tracked: February 15, 2026 at 08:44 PM