All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
vLLM (a system for running large language models) versions 0.7.0 through 0.8.x have a bug in how they create hash values (fingerprints) for images. The hashing method only looks at the raw pixel data and ignores important image properties like width and height, so two different-sized images with the same pixels would create identical hash values. This can cause the system to incorrectly reuse cached results or expose data it shouldn't.
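The class of bug can be shown with a minimal sketch (not vLLM's actual code; the image representation and helper names here are hypothetical): hashing only the raw pixel bytes makes two images with identical bytes but different dimensions collide, while mixing the dimensions into the digest restores uniqueness.

```python
import hashlib

# Two "images" with identical raw bytes but different shapes.
pixels = bytes(range(16))  # 16 bytes of pixel data
img_4x4 = {"width": 4, "height": 4, "data": pixels}
img_2x8 = {"width": 2, "height": 8, "data": pixels}

def weak_hash(img):
    # Flawed scheme: only the pixel bytes are hashed, so shape is
    # not part of the image's identity and the two images collide.
    return hashlib.sha256(img["data"]).hexdigest()

def fixed_hash(img):
    # Fixed scheme: fold the dimensions into the digest before the data.
    h = hashlib.sha256()
    h.update(f'{img["width"]}x{img["height"]}|'.encode())
    h.update(img["data"])
    return h.hexdigest()

print(weak_hash(img_4x4) == weak_hash(img_2x8))    # True: collision
print(fixed_hash(img_4x4) == fixed_hash(img_2x8))  # False
```

The collision matters because a cache keyed on the weak hash would serve one user's 4x4 image where another user's 2x8 image was expected.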
Fix: This issue has been patched in version 0.9.0.
NVD/CVE Database
vLLM, an inference and serving engine for large language models, had a vulnerability in versions before 0.9.0 where timing differences in the PageAttention mechanism (a feature that speeds up processing by reusing matching text chunks) were large enough that attackers could detect and exploit them. This type of attack is called a timing side-channel attack, where an attacker learns information by measuring how long operations take.
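A toy model (not vLLM's implementation; the cache, prefix length, and delay are invented for illustration) shows why cache reuse leaks timing information: a request whose prefix is already cached skips the expensive prefill step, and an attacker can detect that from the response time.

```python
import time

# Toy prefix cache: serving a prompt whose prefix was seen before
# skips the simulated "prefill" work.
CACHE = set()

def serve(prompt: str) -> float:
    start = time.perf_counter()
    prefix = prompt[:8]
    if prefix not in CACHE:
        time.sleep(0.05)  # stand-in for the expensive prefill computation
        CACHE.add(prefix)
    return time.perf_counter() - start

# Victim submits a secret-bearing prompt, warming the cache.
serve("hunter2!: rest of the victim's prompt")

# Attacker probes candidate prefixes and compares response times.
t_hit = serve("hunter2!: attacker guess")    # cached prefix -> fast
t_miss = serve("password: attacker guess")   # uncached prefix -> slow
print(t_miss > t_hit)  # True: timing reveals which prefix was cached
```

The usual mitigations are to stop sharing the cache across trust boundaries or to pad timing so hits and misses are indistinguishable.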
A vulnerability (CVE-2025-5320) was found in Gradio, a web framework for building AI demos, affecting versions up to 5.29.1. An attacker could manipulate the localhost_aliases parameter in the CORS Handler (the component that controls which websites can access the application) to gain elevated privileges, though executing this attack is difficult and requires remote access.
CVE-2025-5277 is a command injection vulnerability (a flaw where an attacker can trick a program into running unwanted commands) in aws-mcp-server, an MCP server (a software tool that helps AI systems interact with AWS cloud services). An attacker can craft a malicious prompt that, when accessed by an MCP client (a program that connects to the server), executes arbitrary commands on the host system. The flaw carries a critical severity score of 9.4.
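The underlying pattern is generic, so it can be sketched without aws-mcp-server's actual code (the command and input below are invented): interpolating untrusted text into a shell command lets metacharacters like `;` chain extra commands, while passing an argument vector with `shell=False` treats the text as inert data.

```python
import subprocess

# Attacker-controlled fragment smuggled in via a crafted prompt.
user_input = "hello; touch /tmp/pwned"

# Vulnerable pattern: with shell=True, the ';' starts a second,
# attacker-chosen command.
# subprocess.run(f"echo {user_input}", shell=True)   # would run `touch`!

# Safer pattern: an argument vector with the default shell=False passes
# the text verbatim, so shell metacharacters have no special meaning.
out = subprocess.run(["echo", user_input], capture_output=True, text=True)
print(out.stdout.strip())  # hello; touch /tmp/pwned  (printed, not executed)
```

For an MCP server the same rule applies one level up: anything derived from a model prompt must be treated as untrusted input, never spliced into a shell string.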
This article describes a curated database of AI literacy training programs across Europe designed to help organizations and professionals comply with Article 4 of the EU AI Act (a regulation requiring organizations to build employee understanding of AI). The programs are selected based on whether they teach what AI is, its risks and benefits, and how to use it responsibly in the workplace.
CVE-2025-3893 is a SQL injection vulnerability (a type of attack where malicious code is inserted into a database query) in MegaBIP that occurs when users with high privileges edit pages and provide reasoning for their actions. The user input is not sanitized (cleaned of potentially harmful code), allowing attackers to manipulate the database. This vulnerability has a CVSS severity score of 8.6 (HIGH), indicating it is serious.
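The standard defense against this bug class is parameterized queries, sketched here with Python's sqlite3 (MegaBIP is not Python; the table and input are invented for illustration): placeholders keep attacker-supplied text as pure data, so it can never terminate the statement and inject SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER, note TEXT)")

# Attacker-supplied "reason" text trying to break out of the statement.
malicious = "x'); DROP TABLE pages; --"

# Vulnerable pattern: building the statement by string interpolation.
# conn.executescript(f"INSERT INTO pages VALUES (1, '{malicious}')")  # DON'T

# Safe pattern: placeholders bind the input purely as data.
conn.execute("INSERT INTO pages VALUES (?, ?)", (1, malicious))
row = conn.execute("SELECT note FROM pages").fetchone()
print(row[0])  # the hostile string is stored verbatim, not executed
```

Note that "high-privilege users only" does not make sanitization optional: stored hostile input can still reach other queries or users later.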
vLLM versions 0.6.5 through 0.8.4 have a vulnerability when using `PyNcclPipe` (a tool for peer-to-peer communication between multiple computers running the AI model) with the V0 engine. The issue is that a network communication interface called `TCPStore` was listening on all network connections instead of just the private network specified by the `--kv-ip` parameter, potentially exposing the system to unauthorized access.
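The difference between the flawed and fixed behavior comes down to the bind address, which a short stdlib sketch can show (this is not vLLM's code; port 0 asks the OS for any free port): binding to 0.0.0.0 listens on every interface, while binding to a specific address, as `--kv-ip` intends, limits exposure.

```python
import socket

# Flawed behavior: bound to the wildcard address, reachable from
# any network interface on the machine.
exposed = socket.socket()
exposed.bind(("0.0.0.0", 0))
exposed_addr = exposed.getsockname()[0]

# Intended behavior: bound to one specific (here, loopback) address only.
private = socket.socket()
private.bind(("127.0.0.1", 0))
private_addr = private.getsockname()[0]

print(exposed_addr, private_addr)  # 0.0.0.0 127.0.0.1
exposed.close()
private.close()
```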
Langroid, a Python framework for building AI applications, has a vulnerability in versions before 0.53.15 where the `LanceDocChatAgent` component uses pandas eval() (a function that executes Python code stored in strings) in an unsafe way, allowing attackers to run malicious commands on the host system. The vulnerability exists in the `compute_from_docs()` function, which processes user queries without proper protection.
Langroid, a Python framework for building LLM-powered applications, had a code injection vulnerability (CWE-94, a flaw where untrusted input can be executed as code) in its `TableChatAgent` component before version 0.53.15 because it used `pandas eval()` without proper safeguards. This could allow attackers to run arbitrary code if the application accepted untrusted user input.
ChatGPT through March 30, 2025, renders SVG documents (scalable vector graphics, a type of image format) directly in web browsers instead of displaying them as plain text, which allows attackers to inject HTML (the code that structures web pages) and potentially trick users through phishing attacks.
A vulnerability in the `preprocess_string()` function of the huggingface/transformers library (version v4.48.3) allows a ReDoS attack (regular expression denial of service, where a poorly written pattern causes the computer to do exponential amounts of work). An attacker can send specially crafted input with many newline characters that makes the function use excessive CPU, potentially crashing the application.
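The failure mode is easy to reproduce with an illustrative pattern (not the exact transformers regex): nested quantifiers force the engine to try exponentially many ways of splitting a non-matching input, so even a small payload measurably stalls, while a flattened pattern consumes each character once.

```python
import re
import time

# Backtracking-prone pattern: the nested quantifiers in (\n+)+ let the
# engine partition a run of newlines in exponentially many ways.
evil = re.compile(r"^(\n+)+x$")
payload = "\n" * 20 + "!"  # many newlines, then a character that never matches

start = time.perf_counter()
evil.match(payload)  # forces catastrophic backtracking before failing
slow = time.perf_counter() - start

# Mitigation: remove the nesting so there is only one way to consume input.
fixed = re.compile(r"^\n+x$")
start = time.perf_counter()
fixed.match(payload)
fast = time.perf_counter() - start

print(slow > fast)  # True even at this tiny size; the gap grows exponentially
```

Each extra newline roughly doubles the work for the vulnerable pattern, which is why "many newline characters" in the crafted input is enough to pin a CPU.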
CVE-2025-1975 is a vulnerability in Ollama server version 0.5.11 that allows an attacker to crash the server through a Denial of Service attack by sending specially crafted requests to the /api/pull endpoint (the function that downloads AI models). The vulnerability stems from improper validation of array index access (CWE-129, which means the program doesn't properly check whether it is trying to access memory locations that don't exist), triggered when an attacker spoofs the registry service and serves customized manifest content.
CVE-2025-4701 is a vulnerability in VITA-MLLM Freeze-Omni (versions up to 20250421) where improper input validation in the torch.load function of models/utils.py allows deserialization (reconstructing objects from stored data, which in Python's pickle-based formats can execute arbitrary code) of untrusted data through a manipulated file path argument. This vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 4.8 (medium severity) and can be exploited locally by users with basic privileges.
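Since torch.load uses pickle under the hood, the danger can be demonstrated with the stdlib alone (the Exploit class below is a deliberately harmless stand-in for a malicious checkpoint): unpickling runs attacker-chosen callables, and a restricted Unpickler that refuses global lookups, similar in spirit to torch.load's weights_only=True option, blocks the payload.

```python
import io
import pickle

class Exploit:
    # __reduce__ tells pickle how to rebuild the object; a malicious file
    # can name any callable here (print stands in for something harmful).
    def __reduce__(self):
        return (print, ("code executed during unpickling!",))

payload = pickle.dumps(Exploit())
pickle.loads(payload)  # the callable runs as a side effect of loading

# Mitigation sketch: refuse to resolve any global, so payloads that
# reference callables cannot be reconstructed at all.
class SafeUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

try:
    SafeUnpickler(io.BytesIO(payload)).load()
    blocked = False
except pickle.UnpicklingError as e:
    blocked = True
    print("rejected:", e)
```

In practice the guidance is the same as for any pickle-based format: only load model files from trusted sources, and prefer restricted loading modes when available.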
Fix: Update vLLM to version 0.9.0 or later, where the issue has been patched.
This article collection discusses security challenges in AI and cloud systems, particularly focusing on agentic AI (AI systems that can take autonomous actions). Key risks include jailbreaks (tricking AI systems into ignoring safety rules), prompt injection (hidden malicious instructions in AI inputs), and tool misuse by autonomous agents, which require contextual red teaming (security testing designed for specific use cases) rather than generic testing to identify real vulnerabilities.
Google released Veo 3, a frontier video generation model (an advanced AI system at the cutting edge of technology) that generates both video and audio with high quality and appears to be a marked improvement over existing systems. The model performs well on human preference benchmarks and may represent the point where video generation becomes genuinely useful rather than just a novelty. Additionally, Google announced several other AI improvements at its I/O 2025 conference, including Gemini 2.5 Pro and enhanced reasoning capabilities, while Anthropic released Claude Opus 4 and Claude Sonnet 4 with frontier-level performance.
ClickFix is a social engineering technique (a method that tricks people rather than exploiting technical vulnerabilities) that adversaries are adapting to attack computer-use agents (AI systems that can control computers by clicking and typing). The attack works by deceiving users into believing something is broken or needs verification, then tricking them into clicking buttons or running commands that compromise their system.
Fix: Version 5.20 of MegaBIP fixes this issue.
This content discusses security challenges in agentic AI (autonomous AI systems that can take actions independently), emphasizing that traditional jailbreak testing (attempts to trick AI into breaking its rules) misses real operational risks like tool misuse and data theft. The material suggests that contextual red teaming (security testing that simulates realistic attack scenarios in specific business environments) is needed to properly assess vulnerabilities in autonomous AI systems.
Fix: Update to vLLM version 0.8.5 or later. According to the source: "As of version 0.8.5, vLLM limits the `TCPStore` socket to the private interface as configured."
Fix: Upgrade to Langroid version 0.53.15 or later. The fix involves input sanitization (cleaning and filtering user input) to the affected function by default to block common attack vectors, along with added warnings in the project documentation about the risky behavior.
Fix: Upgrade to Langroid version 0.53.15 or later. According to the source, "Langroid 0.53.15 sanitizes input to `TableChatAgent` by default to tackle the most common attack vectors, and added several warnings about the risky behavior in the project documentation."
The Trump Administration cancelled the Biden-era AI Diffusion Rule, which had regulated exports of AI chips and AI models (software trained to perform tasks) to different countries. At the same time, the administration approved major sales of advanced AI chips to the UAE and Saudi Arabia, with deals including up to 500,000 chips per year to the UAE and 18,000 advanced chips to Saudi Arabia.
The article argues that using multiple specialized AI security models (each designed to detect a specific threat such as prompt injection, toxicity, or PII exposure) is more effective than using a single large model for all security tasks. Specialized models offer advantages including faster response times to new threats, easier management, better performance, lower costs, and greater resilience, because if one model fails, the others can still provide protection.
OpenAI announced a restructured plan in May 2025 that aims to preserve nonprofit control over the company's for-profit operations, replacing a December 2024 proposal that had faced criticism. The new plan would convert OpenAI Global LLC into a public-benefit corporation (PBC, a corporate structure designed to balance profit with charitable purpose) where the nonprofit would retain shareholder status and board appointment power, though critics argue this may not preserve the governance safeguards that existed in the original structure.