All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
LangChain AI version 0.3.51 contains an indirect prompt injection vulnerability (a technique where attackers hide malicious instructions in data like emails to trick AI systems) in its GmailToolkit component that could allow attackers to run arbitrary code through crafted emails. However, the supplier disputes this, arguing the actual vulnerability comes from user code that doesn't follow LangChain's security guidelines rather than from LangChain itself.
A sandbox escape vulnerability (a security flaw allowing code to break out of a restricted execution environment) was found in huggingface/smolagents version 1.14.0 that lets attackers bypass safety restrictions and achieve remote code execution (RCE, running commands on a system they don't own). The flaw is in the local_python_executor.py module, which failed to properly block Python code execution even though it had safety checks in place.
skops is a Python library for sharing scikit-learn machine learning models. Versions 0.11.0 and below have a flaw in MethodNode that allows attackers to access unexpected object fields using dot notation, potentially leading to arbitrary code execution (running any code on a system) when loading a model file.
skops is a Python library for sharing scikit-learn (a machine learning toolkit) based models. Versions 0.11.0 and below have a flaw in the OperatorFuncNode component that allows attackers to hide the execution of untrusted code, potentially leading to arbitrary code execution (running any commands on a system). This vulnerability can be exploited through code reuse attacks that make unsafe functions appear trustworthy.
A Linux kernel vulnerability allowed invalid access to graphics memory (framebuffer) when PCI host bridges relocated memory addresses during boot. The fix applies PCI address offsets to the framebuffer information stored in screen_info (a kernel data structure tracking display memory locations) so the kernel uses the correct updated memory addresses instead of the original boot-time addresses.
OpenAI Codex CLI versions before 0.9.0 have a security flaw where ripgrep (a command-line search tool) can be executed automatically without requiring user approval, even when security flags like --pre, --hostname-bin, or --search-zip are used. This means an attacker could potentially run ripgrep commands without proper user consent.
The AI Engine WordPress plugin (a tool that adds AI features to WordPress websites) has a security flaw in versions up to 2.9.4 where the simpleTranscribeAudio endpoint (a connection point for audio transcription) fails to check what types of file locations are allowed before accessing files. This allows attackers with basic user access to read any file on the web server and steal it through the plugin's OpenAI integration (connection to OpenAI's service).
Roo Code is an AI coding agent that runs inside code editors, but versions 3.23.18 and earlier have a vulnerability where it doesn't check for line breaks in commands, allowing attackers to bypass the allow-list (a list of approved commands) by hiding extra commands on new lines. The tool only checks the first line of input when deciding whether to run a command, so attackers can inject additional malicious commands after a line break.
Ollama version 0.6.7 has a cross-domain token exposure vulnerability (CVE-2025-51471) in its authentication system where attackers can steal authentication tokens and bypass access controls by sending a malicious realm value in a WWW-Authenticate header (a standard web authentication response) through the /api/pull endpoint. This allows remote attackers, who don't need existing access, to gain unauthorized entry to the system.
CVE-2025-51480 is a path traversal vulnerability (a flaw where attackers use special sequences like '../' to access files outside intended directories) in ONNX 1.17.0's save_external_data function that allows attackers to overwrite arbitrary files by supplying malicious file paths. The vulnerability bypasses the intended directory restrictions that should prevent this kind of file manipulation.
CVE-2025-51863 is a self XSS (cross-site scripting, where an attacker tricks a user into running malicious code on a website by injecting it into the page) vulnerability in ChatGPT Unli that was present through May 26, 2025. The vulnerability allows attackers to execute arbitrary code (run any commands they want) by uploading a specially crafted SVG file (a type of image format) to the chat interface.
Chaindesk has a stored XSS vulnerability (cross-site scripting, where malicious code is saved and runs in users' browsers) in its chat feature through May 26, 2025. An attacker can trick the AI agent's system prompt (the instructions that control how an LLM behaves) to output harmful scripts that execute when users view conversations, potentially stealing session tokens (security credentials that prove who you are) and taking over accounts.
CVE-2025-49747 is a missing authorization vulnerability (a flaw where a system fails to properly check if a user has permission to perform an action) in Azure Machine Learning that allows someone who already has some access to the system to gain elevated privileges, or higher levels of access, over a network.
CVE-2025-49746 is a vulnerability in Azure Machine Learning where improper authorization (CWE-285, a flaw in how the system checks who is allowed to do what) allows someone who already has legitimate access to gain higher-level privileges over a network. This is categorized as a privilege escalation attack, where an authorized user exploits a weakness to gain permissions they shouldn't normally have.
CVE-2025-47995 is a vulnerability in Azure Machine Learning that involves weak authentication (a system that doesn't properly verify user identity), allowing someone who already has some access to gain elevated privileges (higher-level permissions) over a network. The vulnerability has a CVSS 4.0 severity rating, though a full assessment from NIST has not yet been provided.
A Linux kernel bug in epoll (a system for monitoring multiple file descriptors) allows a use-after-free vulnerability (accessing memory that has already been freed) when the reference count is decremented before releasing a mutex (a lock that ensures only one thread accesses code at a time). The problem occurs when multiple threads drop their references nearly simultaneously, allowing one thread to free the memory while another is still using the mutex to clean up.
This research addresses the problem of stealing attacks against healthcare APIs (application programming interfaces, which are tools that let software systems communicate with each other), where attackers try to copy or extract data from medical AI models. The authors propose a defense strategy called "adaptive teleportation" that modifies incoming queries (requests) in clever ways to fool attackers while still allowing legitimate users to get accurate results from the healthcare API.
Fix: The source proposes 'adaptive teleportation of incoming queries' as the defense mechanism. According to the text, 'The adaptive teleportation operations are generated based on the formulated bi-level optimization target and follows the evolution trajectory depicted by the Wasserstein gradient flows, which effectively push attacking queries to cross decision boundary while constraining the deviation level of benign queries.' This approach 'provides misleading information on malicious queries while preserving model utility.' The authors validated this mechanism on three healthcare prediction tasks (inhospital mortality, bleed risk, and ischemic risk prediction) and found it 'significantly more effective to suppress the performance of cloned model while maintaining comparable serving utility compared to existing defense approaches.'
IEEE Xplore (Security & AI Journals)The Month of AI Bugs 2025 is an initiative to expose security vulnerabilities in agentic AI systems (AI that can take actions on its own), particularly coding agents, through responsible disclosure and public education. The campaign will publish over 20 blog posts demonstrating exploits, including prompt injection (tricking an AI by hiding malicious instructions in its input) attacks that can allow attackers to compromise a developer's computer without permission. While some vendors have fixed reported vulnerabilities quickly, others have ignored reports for months or stopped responding, and many appear uncertain how to address novel AI security threats.
Fix: The issue is resolved in version 1.17.0.
NVD/CVE DatabaseFix: This is fixed in version 12.0.0. Users should update to version 12.0.0 or later.
NVD/CVE DatabaseFix: Update to version 0.12.0, where this vulnerability is fixed.
NVD/CVE DatabaseFix: The helper function pcibios_bus_to_resource() performs the relocation of the screen_info framebuffer resource, and commit 78aa89d1dfba ("firmware/sysfb: Update screen_info for relocated EFI framebuffers") added code to update screen_info with the corrected addresses. This approach mirrors similar existing functionality in efifb (the EFI framebuffer driver).
NVD/CVE DatabaseFix: Update OpenAI Codex CLI to version 0.9.0 or later.
NVD/CVE DatabaseFix: This is fixed in version 3.23.19.
NVD/CVE DatabaseOWASP's Agentic Security Initiative has created a taxonomy (a classification system for threats and their fixes) that is now being used in real developer tools like PENSAR, SPLX.AI Agentic Radar, and AI&ME to help teams build and test secure agentic AI systems (AI systems that can take actions autonomously). This taxonomy is also informing the development of OWASP's Top 10 for Agentic AI, a list of the most critical security risks in this area.
Fix: Patches are available through pull requests #6959 and #7040 on the ONNX GitHub repository (https://github.com/onnx/onnx/pull/6959 and https://github.com/onnx/onnx/pull/7040).
NVD/CVE DatabaseFix: Fix this by moving the ep refcount dropping to outside the mutex, since the refcount itself is atomic (thread-safe without locks) and doesn't need mutex protection. As the source states: 'the refcount itself is atomic, and doesn't need mutex protection (that's the whole _point_ of refcounts: unlike mutexes, they are inherently about object lifetimes).'
NVD/CVE DatabaseMeta's new Llama 4 models (Scout and Maverick) were tested for security vulnerabilities using Protect AI's Recon tool, which runs 450+ attack prompts across six categories including jailbreaks (attempts to make AI ignore safety rules), prompt injection (tricking an AI by hiding instructions in its input), and evasion (using obfuscation to hide malicious requests). Both models received medium-risk scores (Scout: 58/100, Maverick: 52/100), with Scout showing particular vulnerability to jailbreak attacks at 67.3% success rate, though Maverick demonstrated better overall resilience.