CVE-2026-24779: vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.14.1, a Server-Side Request Forgery (SSRF) vulnerability… | AI Sec Watch