CVE-2025-29770: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. The outlines library is one of the backends used by vLLM to provide structured outputs (a.k.a. guided decoding).
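
For context, guided decoding constrains the model's output to match a caller-supplied schema or regular expression, and the outlines backend compiles each such constraint into a grammar before generation. Below is a minimal sketch of how a client typically reaches this code path through vLLM's OpenAI-compatible API; the server URL, API key, and model name are placeholders for illustration, not values from the advisory:

```python
# Sketch: requesting outlines-backed guided decoding from a locally
# running vLLM OpenAI-compatible server (assumed at localhost:8000).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Each distinct constraint (regex, JSON schema, etc.) is compiled by the
# selected guided-decoding backend before tokens are generated.
completion = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Give me a 5-digit zip code."}],
    extra_body={
        # vLLM-specific extensions to the OpenAI API:
        "guided_regex": r"\d{5}",               # constrain output format
        "guided_decoding_backend": "outlines",  # route to the outlines backend
    },
)
print(completion.choices[0].message.content)
```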