CVE-2025-29783: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. When vLLM is configured to use Mooncake, unsafe deserialization exposed over ZMQ/TCP allows attackers to execute remote code on distributed hosts. The vulnerability is fixed in vLLM 0.8.0.
Summary
CVE-2025-29783 is a remote code execution vulnerability in vLLM (a software engine for running large language models efficiently) that occurs when it is configured with Mooncake, a distributed system component. Attackers can exploit unsafe deserialization (the process of converting stored data back into usable objects) exposed over ZMQ/TCP (network communication protocols) to run arbitrary code on any connected system in a distributed deployment.
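The class of bug described above can be sketched in a few lines. The snippet below is a minimal, self-contained illustration of why deserializing untrusted bytes (e.g. with Python's `pickle`, the typical culprit in this class of vulnerability) is remote code execution: `pickle.loads` invokes a payload's `__reduce__` callable during deserialization, so attacker-controlled bytes run code before any type check can happen. The `Malicious` class and the `eval` payload are hypothetical stand-ins, not code from vLLM or Mooncake.

```python
import pickle

class Malicious:
    """Hypothetical attacker-crafted object: on unpickling, its
    __reduce__ result is called, executing attacker-chosen code."""
    def __reduce__(self):
        # eval(...) here stands in for arbitrary attacker code.
        return (eval, ("__import__('os').getpid()",))

# Bytes an attacker could send over an exposed ZMQ/TCP socket.
payload = pickle.dumps(Malicious())

# The vulnerable side: code executes *inside* loads, before the
# receiver ever inspects what the payload claims to be.
result = pickle.loads(payload)
print(type(result))  # the eval already ran by this point
```

This is why such endpoints must either avoid pickle for untrusted input (e.g. use a schema-validated format such as JSON or protobuf) or never be reachable from untrusted networks.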
Solution / Mitigation
This vulnerability is fixed in vLLM version 0.8.0. Users should upgrade to this version or later.
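A quick way to check whether a deployment is on a patched release is to compare the installed version against 0.8.0, the first fixed version per this advisory. The helper below is a sketch: it does naive dotted-number parsing and does not handle pre-release suffixes like `0.8.0rc1`.

```python
from importlib import metadata

MIN_SAFE = (0, 8, 0)  # first patched vLLM release per the advisory

def is_patched(version_str: str) -> bool:
    """Return True if a plain X.Y.Z vLLM version string is >= 0.8.0.
    Naive parsing: pre-release tags (e.g. '0.8.0rc1') are not handled."""
    parts = tuple(int(p) for p in version_str.split(".")[:3])
    return parts >= MIN_SAFE

print(is_patched("0.7.3"))  # a vulnerable release line
print(is_patched("0.8.0"))  # first fixed release

# Check the locally installed package, if any:
try:
    installed = metadata.version("vllm")
    print("installed vLLM:", installed, "patched:", is_patched(installed))
except metadata.PackageNotFoundError:
    print("vLLM not installed")
```

For production checks, a full version parser (e.g. the `packaging` library) is the safer choice.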
Vulnerability Details
CVSS score: 9 (Critical)
EPSS: 1.7%
Original source: https://nvd.nist.gov/vuln/detail/CVE-2025-29783
First tracked: February 15, 2026 at 08:44 PM
Classified by LLM (prompt v3) · confidence: 95%