CVE-2025-46560: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected.