CVE-2025-30202: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.5.2 and prior to 0.8.5 are vulnerable to denial of service via ZeroMQ sockets in multi-node deployments.