CVE-2025-66448: vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vLLM has a critical remote code execution vulnerability.
Summary
vLLM (a tool for running large language models) versions before 0.11.1 contain a critical security flaw: loading a model's configuration can download and execute malicious code from the internet without the user's permission. An attacker can publish a model that appears safe but whose configuration references attacker-controlled code, which is fetched and run during loading, even when the user sets trust_remote_code=False, the security setting meant to prevent exactly this.
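The check that trust_remote_code=False is meant to enforce can be sketched as follows. This is a minimal illustration, not vLLM's actual code: load_config is a hypothetical helper, and it uses the fact that Hugging Face-style model configs declare custom model code via an "auto_map" entry.

```python
def load_config(config: dict, trust_remote_code: bool = False) -> dict:
    """Return a model config, refusing any that requires custom code
    unless the caller has explicitly opted in."""
    # Hugging Face-style configs reference downloadable custom code
    # through an "auto_map" entry; that code runs on the user's machine.
    if "auto_map" in config and not trust_remote_code:
        raise ValueError(
            "config requires custom remote code; refusing because "
            "trust_remote_code=False"
        )
    return config
```

The vulnerability is that, before 0.11.1, a path existed where the remote code ran regardless of this flag; the fix restores the intended refusal.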
Solution / Mitigation
This vulnerability is fixed in vLLM version 0.11.1. Users should update to this version or later.
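Assuming a pip-managed installation, the upgrade can be done with:

```shell
# Upgrade vLLM to the first patched release or later
pip install --upgrade "vllm>=0.11.1"
```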
Vulnerability Details
CVSS: 7.1 (High)
EPSS: 0.2%
Original source: https://nvd.nist.gov/vuln/detail/CVE-2025-66448
First tracked: February 15, 2026 at 08:44 PM