All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
NVIDIA Triton Inference Server for Windows and Linux has a vulnerability in its Python backend that allows attackers to execute arbitrary code remotely by manipulating the model name parameter in model control APIs (functions that manage AI models). This vulnerability could lead to remote code execution (RCE, where an attacker runs commands on a system they don't own), denial of service (making the system unavailable), information disclosure (exposing sensitive data), and data tampering (modifying stored information).
NVIDIA Triton Inference Server has a vulnerability in its DALI backend (a component that processes data) where improper input validation (the failure to check if data is safe before using it) allows attackers to execute code on the system. The issue is classified as CWE-20, a common weakness type related to input validation problems.
picklescan is a tool that checks if pickle files (a Python format for storing objects) are safe before loading them, but versions up to 0.0.30 have a vulnerability where attackers can bypass these safety checks by giving a malicious pickle file a PyTorch-related file extension. When the tool incorrectly marks this file as safe and it gets loaded, the attacker's malicious code can run on the system.
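Why extension-based trust fails is easiest to see from pickle's own machinery: the `__reduce__` hook lets a file specify an arbitrary callable to invoke at load time, no matter what extension the file carries. A harmless sketch of the concept (using `len` where a real payload would use something like `os.system`; not picklescan's or PyTorch's code):

```python
import pickle

class Payload:
    # pickle calls whatever __reduce__ names at load time; a real attack
    # would return (os.system, ("malicious command",)) instead of len.
    def __reduce__(self):
        return (len, ("attacker-controlled",))

blob = pickle.dumps(Payload())
# Renaming these bytes to model.bin or *.pt changes nothing: the
# callable runs as soon as the data is unpickled.
result = pickle.loads(blob)
print(result)  # return value of the injected call: 19
```

Because the callable executes during deserialization itself, any scanner that approves a file and then lets it be unpickled must judge the bytes, not the filename.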
n8n, an open source workflow automation platform, has a stored XSS vulnerability (cross-site scripting, where malicious code is saved and runs in users' browsers) in versions 1.24.0 through 1.106.x. An authorized user can inject harmful JavaScript into the initialMessages field of the LangChain Chat Trigger node, and if public access is enabled, this code runs in the browsers of anyone visiting the public chat link, potentially allowing attackers to steal cookies or sensitive data through phishing.
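The standard mitigation for stored XSS of this kind is to encode user-supplied text before it reaches anyone's browser. A minimal stdlib sketch (the payload below is a hypothetical hostile `initialMessages` value, not one taken from the advisory):

```python
import html

# Hypothetical hostile field value: an image tag whose error handler
# would run script in a visitor's browser if rendered as raw HTML.
payload = "<img src=x onerror=alert(document.cookie)>"

# Escaping turns the markup into inert text before display.
print(html.escape(payload))
```

After escaping, the browser displays the angle brackets as literal characters instead of parsing a tag, so the injected handler never executes.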
CVE-2022-50326 is a memory leak (unused memory that is never freed) in the Linux kernel's airspy media driver. A previous update moved a variable called buf from the stack (temporary memory) to the heap (longer-term memory), but the code only freed this memory on error paths, not when the function succeeded, so the buffer leaked on every successful call.
A ReDoS vulnerability (regular expression denial of service, where specially crafted input causes a program's pattern-matching code to consume excessive CPU) was found in the Hugging Face Transformers library's number normalization feature. An attacker could send text with long digit sequences to crash or slow down text-to-speech and number processing tasks. The vulnerability affects versions up to 4.52.4.
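The failure mode behind this class of ReDoS bug can be reproduced with any nested-quantifier regex. A small sketch (this pattern is illustrative, not the regex Transformers actually uses):

```python
import re
import time

# Illustrative backtracking-prone pattern. Nested quantifiers like
# (\d+)+ force the engine to try exponentially many ways to split the
# digit run when the trailing character makes the match fail.
evil = re.compile(r"^(\d+)+$")

start = time.perf_counter()
evil.match("1" * 24 + "x")  # non-digit at the end => heavy backtracking
elapsed = time.perf_counter() - start
print(f"rejecting a 25-character input took {elapsed:.2f}s")
```

Growing the digit run by one roughly doubles the work, which is why short attacker-supplied strings are enough to pin a CPU.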
Langchaingo, a library for working with language models, uses jinja2 syntax (a templating language) to parse prompts, but the underlying gonja library it relies on supports file-reading commands like 'include' and 'extends'. This creates a server-side template injection vulnerability (SSTI, where an attacker tricks a server into executing unintended code by injecting malicious template syntax), allowing attackers to insert malicious statements into prompts to read sensitive files like /etc/passwd.
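The usual defense against SSTI is to keep untrusted text out of the template source entirely and pass it in as a variable, so the engine never parses attacker-supplied directives. A minimal sketch using Python's `string.Template` as a stand-in for jinja-style engines such as gonja:

```python
from string import Template

user_input = "{% include '/etc/passwd' %}"  # hostile template syntax

# Unsafe pattern: splicing user text into the template source means the
# engine will parse any directives the attacker embedded.
unsafe_source = f"Answer the question: {user_input}"

# Safer pattern: the template source is fixed; user text enters only as
# a *value*, so directives inside it are inert data.
prompt = Template("Answer the question: $question")
rendered = prompt.substitute(question=user_input)
print(rendered)  # the include directive survives only as literal text
```

The same separation applies to any templating engine: template source comes from the developer, values come from the user, and the two are never concatenated.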
Flowise, a tool for building custom AI workflows through a visual interface, has a critical security flaw in versions 3.0.5 and earlier where the password reset endpoint leaks sensitive information like reset tokens without requiring authentication. This allows attackers to take over any user account by generating a fake reset token and changing the user's password.
A ReDoS vulnerability (regular expression denial of service, where specially crafted input causes a program to use excessive CPU by making regex matching extremely slow) was found in Hugging Face Transformers library version 4.52.4, specifically in the MarianTokenizer's `remove_language_code()` method. The bug is triggered by malformed language code patterns that force inefficient regex processing, potentially crashing or freezing the system.
CVE-2025-55319 is a command injection vulnerability (a type of attack where an attacker inserts malicious commands into a program's input) in Agentic AI (an AI system that can perform tasks independently) and Visual Studio Code that allows an unauthorized attacker to execute code over a network. The vulnerability stems from improper handling of special characters in commands, which lets attackers run arbitrary code on affected systems.
Claude Code, an agentic coding tool (software that can write and execute code with some autonomy), had a vulnerability where a maliciously configured git user email could trigger arbitrary code execution (running unintended commands on a system) when the tool started up, before the user approved workspace access. This affected all versions before 1.0.105.
Claude Code, an agentic coding tool, had a command-parsing bug in versions before 1.0.105 that let attackers bypass the safety prompt (the confirmation step that asks the user before a command runs). Exploiting it requires the attacker to sneak malicious content into the user's conversation with Claude Code.
The npm package `interactive-git-checkout` (a command-line tool for switching between git branches) has a command injection vulnerability (a flaw where attackers can run malicious commands by inserting code into input fields) in versions up to 1.1.4 because it doesn't properly check the branch name before passing it to the git command.
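The general fix for this class of bug is to pass the untrusted value as a single argv element rather than interpolating it into a shell string. A hedged sketch (the hostile branch name is hypothetical, and this is not the package's actual code):

```python
import shlex

branch = "main; rm -rf ~"  # attacker-controlled branch name

# Vulnerable pattern: building a shell string, e.g.
#   subprocess.run(f"git checkout {branch}", shell=True)
# lets the ";" become a second command.

# Safer pattern: pass argv as a list so no shell parses the value, and
# add "--" so git cannot mistake the value for an option either.
safe_cmd = ["git", "checkout", "--", branch]
print(safe_cmd)  # the hostile string stays one inert argument

# If a shell string truly is unavoidable, quote the value first.
print(shlex.quote(branch))
```

With the list form, the kernel receives the branch name as one argument byte-for-byte, so shell metacharacters have no special meaning.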
MONAI, an AI toolkit for medical imaging, has a deserialization vulnerability (unsafe unpickling, where untrusted data is converted back into executable code) in versions up to 1.5.0 when loading pre-trained model checkpoints from external sources. While one part of the code uses secure loading (`weights_only=True`), other parts load checkpoints insecurely, allowing attackers to execute malicious code if a checkpoint contains intentionally crafted harmful data.
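The safe-loading idea behind `weights_only=True` can be sketched with the stdlib's restricted-unpickler hook; this is an illustration of the concept, not MONAI's or PyTorch's actual implementation:

```python
import io
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    """Reconstruct only globals on an explicit allowlist -- the same idea
    behind torch.load(weights_only=True), sketched with the stdlib."""
    ALLOWED = {("builtins", "list"), ("builtins", "dict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

# Ordinary data round-trips normally.
safe = AllowlistUnpickler(io.BytesIO(pickle.dumps([1, 2, 3]))).load()
print(safe)

try:
    # A pickle referencing any other callable is refused before it can
    # run; a pickled builtin stands in for a real payload here.
    AllowlistUnpickler(io.BytesIO(pickle.dumps(len))).load()
except pickle.UnpicklingError as e:
    print(e)
```

The key property is that `find_class` is consulted before the referenced object is ever constructed, so rejected globals never execute.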
Roo Code is an AI tool that helps developers write code directly in their editors, but versions 3.25.23 and older have a security flaw where npm install (a command that downloads and sets up code packages) is automatically approved without asking the user first. If a malicious repository's package.json file contains a postinstall script (code that runs automatically during package installation), it could execute harmful commands on the user's computer without their knowledge or consent.
Roo Code is an AI tool that helps developers write code directly in their editor, but versions 3.25.23 and earlier have a security flaw where attackers can bypass .rooignore (a file that tells Roo Code which files to ignore) using symlinks (shortcuts that point to other files). This allows someone with write access to the workspace to trick Roo Code into reading sensitive files like passwords or configuration files that should have been hidden.
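A common defense is to resolve symlinks before enforcing any path-based allow/deny rule. A small sketch of a workspace-containment check under that assumption (not Roo Code's implementation):

```python
import os
import tempfile

def is_within(workspace: str, requested: str) -> bool:
    # Resolve symlinks *before* comparing paths, so a link that points
    # outside the workspace cannot smuggle a protected file past the check.
    real_ws = os.path.realpath(workspace)
    real_req = os.path.realpath(os.path.join(workspace, requested))
    return os.path.commonpath([real_ws, real_req]) == real_ws

ws = tempfile.mkdtemp()                                # stand-in workspace
outside = os.path.join(tempfile.mkdtemp(), "credentials.txt")
open(outside, "w").write("hunter2")                    # sensitive file elsewhere
os.symlink(outside, os.path.join(ws, "innocent.txt"))  # link inside workspace

print(is_within(ws, "notes.txt"))     # True: resolves inside the workspace
print(is_within(ws, "innocent.txt"))  # False: the symlink escapes it
```

Checking the literal path instead of the resolved one is exactly the mistake a symlink bypass exploits: `innocent.txt` looks like it lives in the workspace until `realpath` follows the link.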
Roo Code is an AI tool that automatically writes code in your editor, but versions 3.25.23 and earlier have a security flaw where workspace configuration files (.code-workspace files that store project settings) aren't properly protected. An attacker using prompt injection (tricking the AI by hiding malicious instructions in its input) could trick the agent into writing harmful settings that execute as code when you reopen your project, potentially giving the attacker control of your computer.
Roo Code is an AI tool that helps developers write code automatically within their editors. In versions 3.26.6 and earlier, a GitHub workflow (an automated process that runs tasks in a repository) used unsanitized pull request metadata (information that wasn't checked for malicious content) in a privileged context, allowing remote code execution (RCE, where an attacker runs commands on a system they don't own) on the Actions runner (the machine that runs automated tasks). This could let attackers steal secrets, modify code, or completely compromise the repository.
Roo Code is an AI tool that automatically writes code in your editor, but versions before 3.26.0 have a security flaw in how it parses commands (reads and interprets instructions). If someone configures the tool to automatically run commands without checking them first, an attacker could trick it into running extra harmful commands by manipulating the input the AI receives.
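Auto-approval logic of this kind generally has to reject any input containing shell control operators before matching it against an allowlist. A hedged sketch (the allowlist and operator set are illustrative, not Roo Code's parser):

```python
import shlex

APPROVED = {"git", "ls"}  # hypothetical auto-approved commands

def is_auto_approvable(command: str) -> bool:
    # Refuse anything with operators that would chain extra commands,
    # then check that the program itself is on the allowlist.
    if any(op in command for op in (";", "&&", "||", "|", "$(", "`")):
        return False
    argv = shlex.split(command)
    return bool(argv) and argv[0] in APPROVED

print(is_auto_approvable("git status"))            # True
print(is_auto_approvable("git status; rm -rf ~"))  # False: chained command
```

The bug class arises when the approval check and the shell disagree about where one command ends and the next begins; rejecting control operators outright sidesteps that ambiguity.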
Fix: Update to version 1.107.0 or later. As a workaround, the affected chatTrigger node can be disabled.
NVD/CVE Database
Fix: Free buf in the success path as well, since the variable has no references elsewhere in the code. The patch is available at: https://git.kernel.org/stable/c/23bc5eb55f8c9607965c20d9ddcc13cb1ae59568 and https://git.kernel.org/stable/c/f4285dd02b6b2ca3435b65fb62c053dd9408fd71
Fix: Fixed in version 4.53.0 of the Hugging Face Transformers library.
Fix: Upgrade to version 3.0.6 or later, which includes commit 9e178d68873eb876073846433a596590d3d9c863 that secures password reset endpoints. The source also recommends: (1) never return reset tokens or account details in API responses; (2) send tokens only through the user's registered email; (3) make the forgot-password endpoint respond with a generic success message to prevent attackers from discovering which accounts exist; (4) require strong validation of reset tokens, including making them single-use, giving them a short expiration time, and tying them to the request origin; and (5) apply these same fixes to both cloud and self-hosted deployments.
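The token-handling recommendations above (unguessable, single-use, short-lived, never echoed in responses) can be sketched with the stdlib; this in-memory example is illustrative, not Flowise's patch:

```python
import hashlib
import secrets
import time

RESET_TTL = 15 * 60  # short expiry, in seconds
_tokens = {}         # sha256(token) -> (user_id, expires_at)

def issue_reset_token(user_id: str) -> str:
    token = secrets.token_urlsafe(32)  # cryptographically unguessable
    digest = hashlib.sha256(token.encode()).hexdigest()
    _tokens[digest] = (user_id, time.time() + RESET_TTL)
    # The raw token is emailed to the user; it never appears in an API
    # response, and only its hash is stored server-side.
    return token

def redeem_reset_token(token: str):
    digest = hashlib.sha256(token.encode()).hexdigest()
    entry = _tokens.pop(digest, None)  # pop => single-use
    if entry is None or entry[1] < time.time():
        return None
    return entry[0]

t = issue_reset_token("alice")
print(redeem_reset_token(t))  # the owning user, exactly once
print(redeem_reset_token(t))  # None: token already consumed
```

A production version would persist the hashes, bind tokens to the request origin per recommendation (4), and rate-limit the endpoint; the single-use-plus-expiry core stays the same.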
Fix: Update to version 4.53.0, where the vulnerability has been fixed. A patch is available at https://github.com/huggingface/transformers/commit/47c34fba5c303576560cb29767efb452ff12b8be.
Fix: Update Claude Code to version 1.0.105 or later. Users with automatic updates enabled will have received this fix automatically; those updating manually should upgrade now.
Fix: Update to version 1.0.105 or the latest version. Users with auto-update enabled have already received this fix automatically.
Fix: Commit 8dd832dd302af287a61611f4f85e157cd1c6bb41 fixes the issue. Users should update to a version that includes this commit.
Researchers studied how humans use two types of thinking (fast intuitive processing and slower logical reasoning) when looking at images, and tested whether AI systems like multimodal large language models (MLLMs, which process both text and images together) have similar abilities. They found that while MLLMs have improved at correcting intuitive errors, they still struggle with logical processing tasks that require deeper analysis, and segmentation models (AI systems that identify objects in images) make errors similar to human intuitive mistakes rather than using logical reasoning.
Fix: This is fixed in version 3.26.0.
Fix: This is fixed in version 3.26.0.
Fix: Update to version 3.26.0 or later, which fixes this issue.
Fix: Update to version 3.26.7.
Fix: Update to version 3.26.0 or later.