CVE-2025-24357: vLLM is a library for LLM inference and serving. vllm/model_executor/weight_utils.py implements hf_model_weights_iterator, which loads model checkpoints with torch.load.
Summary
vLLM loads model checkpoints downloaded from HuggingFace via torch.load, PyTorch's pickle-based loader. The vulnerable call leaves torch.load's safety check disabled (the weights_only parameter defaults to False), so the file is deserialized without restriction. Because pickle executes code while reconstructing objects, a maliciously crafted model file can run arbitrary code on the host during loading (deserialization of untrusted data).
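To see why unrestricted deserialization is dangerous, here is a minimal standard-library sketch (plain pickle, not vLLM or torch code): an attacker-controlled class uses __reduce__ to make the payload invoke an arbitrary callable the moment it is loaded.

```python
import pickle


class Malicious:
    # __reduce__ tells pickle how to rebuild the object on load.
    # An attacker can make it return any callable plus arguments,
    # which pickle invokes during deserialization.
    def __reduce__(self):
        return (eval, ("__import__('os').getcwd()",))


# The "model file" an attacker would ship.
payload = pickle.dumps(Malicious())

# Loading the payload executes eval(...) immediately, before any
# consumer code ever inspects the data. Here it merely calls
# os.getcwd(), but it could run any command.
result = pickle.loads(payload)
print(type(result))
```

torch.load uses the same pickle machinery under the hood, which is why loading an untrusted checkpoint with weights_only left at False is equivalent to running the model author's code.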
Solution / Mitigation
This vulnerability is fixed in v0.7.0. Users should upgrade to this version or later.
Vulnerability Details
CVSS: 7.5 (High)
EPSS: 0.9%
Original source: https://nvd.nist.gov/vuln/detail/CVE-2025-24357