Security vulnerabilities, privacy incidents, safety concerns, and policy updates affecting LLMs and AI agents.
CVE-2025-46150 is a bug in PyTorch (a machine learning framework) versions before 2.7.0 where FractionalMaxPool2d (a function that reduces image dimensions) returns different outputs for identical inputs when torch.compile (a performance optimization tool) is used. This non-determinism is a correctness problem for machine learning models that need reproducible, reliable results.
Fix: Upgrade to PyTorch version 2.7.0 or later.
CVE-2025-46149 is a bug in PyTorch (a machine learning library) versions before 2.7.0 where the nn.Fold function crashes with an assertion error when compiled with Inductor (the default backend of torch.compile, PyTorch's code optimization tool). This is classified as a reachable assertion vulnerability: input can drive the code into an internal safety check that fails unexpectedly, aborting the process.
PyTorch versions up to 2.6.0 have a bug where the nn.PairwiseDistance function (a tool that calculates distances between pairs of data points) produces wrong answers when using the p=2 parameter in eager mode (the default execution method). This is a correctness issue, meaning the calculation gives incorrect numerical results rather than causing a security breach.
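Correctness bugs like this are typically caught by comparing the library operation against an independent reference implementation. A minimal sketch of that oracle pattern in plain Python (stdlib only; the comparison against torch.nn.PairwiseDistance itself is left as an assumption, since PyTorch is not imported here):

```python
import math

def reference_l2(a, b):
    # Independent p=2 (Euclidean) distance, usable as an oracle to
    # cross-check a library's pairwise-distance op.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Spot-check against known values; in a real test suite you would
# compare the library output to this reference within a tolerance.
assert math.isclose(reference_l2([0.0, 0.0], [3.0, 4.0]), 5.0)
assert math.isclose(reference_l2([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]), 0.0)
```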
Claude Code is a tool that uses AI to help write code, and it had a security flaw in versions before 1.0.39 where Yarn plugins (add-ons for a package manager) would run automatically when checking the version, bypassing Claude Code's trust dialog (a safety check asking users to confirm they trust a directory before working in it). This only affected users with Yarn versions 2.0 and newer, not those using the older Yarn Classic.
The huggingface/transformers library before version 4.53.0 has a vulnerability where malicious regular expressions (patterns used to match text) in certain settings can cause ReDoS (regular expression denial of service, a type of attack that makes a system use 100% CPU and become unresponsive). An attacker who can control these regex patterns in the AdamWeightDecay optimizer (a tool that helps train machine learning models) can make the system hang and stop working.
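The failure mode behind ReDoS is catastrophic backtracking: matching time grows exponentially with input length. A small illustration using a classic vulnerable pattern (not the actual regex from transformers):

```python
import re
import time

# Nested quantifiers like (a+)+ backtrack exponentially on a
# near-matching input; this is the mechanism behind ReDoS.
EVIL = re.compile(r"^(a+)+$")

def match_time(n: int) -> float:
    s = "a" * n + "b"  # trailing "b" forces the engine to try every split
    start = time.perf_counter()
    EVIL.match(s)
    return time.perf_counter() - start

# Time roughly doubles per extra character, so attacker-controlled
# input of modest length can pin a CPU core.
assert match_time(22) > match_time(8)
```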
Codex CLI (a coding tool from OpenAI that runs on your computer) versions 0.2.0 through 0.38.0 had a sandbox bug that allowed the AI model to trick the system into writing files and running commands outside the intended workspace folder. The sandbox (a restricted area meant to contain the tool's actions) derived its filesystem boundaries from paths the model supplied instead of validating them, so the workspace restriction could be bypassed, though network restrictions still held.
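The general defense against this class of escape is to canonicalize paths before checking containment. A minimal sketch of such a check (illustrative only; this is not Codex CLI's actual implementation, and the workspace path is made up):

```python
from pathlib import Path

def is_inside_workspace(workspace: Path, candidate: str) -> bool:
    # Resolve symlinks and ".." segments *before* comparing, so a
    # model-supplied path like "src/../../etc/passwd" cannot escape.
    root = workspace.resolve()
    target = (root / candidate).resolve()
    return target == root or root in target.parents

ws = Path("/tmp/project")  # hypothetical workspace root
assert is_inside_workspace(ws, "src/main.py")
assert not is_inside_workspace(ws, "../outside.txt")
assert not is_inside_workspace(ws, "src/../../etc/passwd")
assert not is_inside_workspace(ws, "/etc/passwd")
```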
Flowise is a tool with a visual interface for building customized AI workflows. Before August 2025, free-tier users on Flowise Cloud could access sensitive secrets (like API keys for OpenAI, AWS, and Google Cloud) belonging to other users through a Custom JavaScript Function node, exposing data across different user accounts. This cross-tenant data exposure vulnerability has been patched in the August 2025 update.
Flowise version 3.0.5 has a remote code execution (RCE, where an attacker can run commands on a system they don't own) vulnerability in its CustomMCP node. When users input configuration settings, the software unsafely executes the input as JavaScript code using the Function() constructor without checking if it's safe, allowing attackers to access dangerous system functions like running programs or reading files.
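The underlying anti-pattern is executing configuration as code. A Python analogue of JavaScript's `new Function(userInput)()` hazard, and the safer alternative of parsing configuration strictly as data:

```python
import json

untrusted = '{"endpoint": "https://example.com", "retries": 2}'

# Vulnerable pattern (analogue of the Function() constructor):
#   settings = eval(untrusted)   # executes whatever the user typed
#
# Safer: treat configuration as data, never as code.
settings = json.loads(untrusted)
assert settings["retries"] == 2
```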
Flowise version 3.0.5 contains a Server-Side Request Forgery vulnerability (SSRF, a flaw that lets attackers trick the server into making requests to internal networks on their behalf) in the /api/v1/fetch-links endpoint, allowing attackers to use the Flowise server as a proxy to access and explore internal web services. This vulnerability was patched in version 3.0.6.
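A common first-line SSRF mitigation for link-fetching endpoints is to resolve the target host and refuse private, loopback, and link-local addresses. A minimal sketch (an assumption-laden illustration, not the actual Flowise 3.0.6 patch; it does not handle DNS rebinding or HTTP redirects):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_publicly_routable(url: str) -> bool:
    # Resolve the host and reject non-global addresses so the server
    # cannot be used as a proxy into internal networks.
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if not ip.is_global:
            return False
    return True

assert not is_publicly_routable("http://127.0.0.1:8080/admin")
assert not is_publicly_routable("http://10.0.0.5/metadata")
```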
A vulnerability (CVE-2025-10772) was found in huggingface LeRobot versions up to 0.3.3 in the ZeroMQ Socket Handler (a tool for sending messages between programs), which allows attackers to bypass authentication (verification of who you are) when accessing the system from within a local network. The vendor was notified but did not respond with a fix.
A vulnerability in Keras (a machine learning library) allows attackers to run arbitrary code on a system by creating a malicious .keras model file that tricks the load_model function into disabling its safety protections, even when safe_mode is enabled. The attacker does this by embedding a command in the model's configuration file that turns off safe mode, then hiding executable code in a Lambda layer (a Keras feature that can contain custom Python code), allowing the malicious code to run when the model is loaded.
A vulnerability exists in Keras' Model.load_model method where specially crafted .h5 or .hdf5 model files (archive formats that store trained AI models) can execute arbitrary code on a system, even when safe_mode is enabled to prevent this. The attack works by embedding malicious pickled code (serialized Python code) in a Lambda layer, a Keras feature that allows custom Python functions, which bypasses the intended security protection.
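Both Keras attacks rest on the same primitive: deserializing a pickle can invoke arbitrary callables. A deliberately harmless demonstration of the mechanism (real exploits return an os.system-style callable instead):

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to reconstruct the object; an
    # attacker returns (dangerous_callable, args). Here the callable
    # is intentionally benign.
    def __reduce__(self):
        return (str.upper, ("pwned",))

blob = pickle.dumps(Payload())
# Merely *loading* the bytes runs the embedded call:
assert pickle.loads(blob) == "PWNED"
```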
Lobe Chat, an open-source AI chat framework, has a cross-site scripting vulnerability (XSS, where attackers inject malicious code into web pages) in versions before 1.129.4. When the app renders certain chat messages containing SVG images, it uses a method called dangerouslySetInnerHTML that doesn't filter the content, allowing attackers who can inject content into chat messages (through malicious websites, compromised servers, or tool integrations) to execute arbitrary script in the victim's browser session.
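The generic defense is to escape untrusted message content before it reaches the DOM instead of injecting raw HTML. A Python sketch of the same idea (in React the fix is to avoid dangerouslySetInnerHTML or sanitize with a library such as DOMPurify):

```python
import html

def render_message(user_text: str) -> str:
    # Escape untrusted content instead of interpolating raw HTML.
    return f'<p class="msg">{html.escape(user_text)}</p>'

payload = '<svg onload="alert(1)">'
rendered = render_message(payload)
assert "<svg" not in rendered   # markup neutralized
assert "&lt;svg" in rendered    # displayed as text instead
```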
CVE-2025-23336 is a vulnerability in NVIDIA Triton Inference Server (software that runs AI models on Windows and Linux) where an attacker could cause a denial of service (making the system unavailable) by loading a misconfigured model. The vulnerability stems from improper input validation (the system not properly checking whether data is safe before using it).
CVE-2025-23329 is a vulnerability in NVIDIA Triton Inference Server (a tool used to run AI models efficiently) on Windows and Linux where an attacker could damage data in memory by accessing a shared memory region used by the Python backend, potentially causing the service to crash. The vulnerability involves improper access control (failing to properly restrict who can access certain resources) and out-of-bounds writing (writing data to memory locations it shouldn't).
CVE-2025-23328 is a vulnerability in NVIDIA Triton Inference Server (software that runs AI models on Windows and Linux) where an attacker could send specially crafted input to cause an out-of-bounds write (writing data outside the intended memory location), potentially causing a denial of service (making the service unavailable). The vulnerability has a CVSS score of 4.0, indicating moderate severity.
NVIDIA Triton Inference Server for Windows and Linux has a vulnerability in its Python backend that allows attackers to execute arbitrary code remotely by manipulating the model name parameter in model control APIs (functions that manage AI models). This vulnerability could lead to remote code execution (RCE, where an attacker runs commands on a system they don't own), denial of service (making the system unavailable), information disclosure (exposing sensitive data), and data tampering (modifying stored information).
NVIDIA Triton Inference Server has a vulnerability in its DALI backend (a component that processes data) where improper input validation (the failure to check if data is safe before using it) allows attackers to execute code on the system. The issue is classified as CWE-20, a common weakness type related to input validation problems.
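Several of the Triton issues above reduce to untrusted strings (model names, model configurations) reaching privileged code paths without validation. A generic allow-list check of the kind used to harden such APIs (a sketch under stated assumptions, not NVIDIA's actual fix; the pattern and length limit are arbitrary choices):

```python
import re

# Accept only short, plain identifiers; everything else is rejected.
SAFE_NAME = re.compile(r"[A-Za-z0-9._-]{1,64}")

def validate_model_name(name: str) -> str:
    # Blocks path traversal and injection via the name parameter.
    if not SAFE_NAME.fullmatch(name):
        raise ValueError(f"rejected model name: {name!r}")
    return name

assert validate_model_name("resnet50_v1.5") == "resnet50_v1.5"
try:
    validate_model_name("../../etc/passwd")
except ValueError:
    pass
else:
    raise AssertionError("traversal should be rejected")
```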
picklescan is a tool that checks if pickle files (a Python format for storing objects) are safe before loading them, but versions up to 0.0.30 have a vulnerability where attackers can bypass these safety checks by giving a malicious pickle file a PyTorch-related file extension. When the tool incorrectly marks this file as safe and it gets loaded, the attacker's malicious code can run on the system.
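A scanner must judge pickle content, never the file extension — the bytes are identical whatever the file is named. The standard library's pickletools can enumerate the opcodes a pickle would execute; this is a much-simplified version of the kind of check picklescan performs:

```python
import pickle
import pickletools

# Opcodes that import or call objects; benign data pickles need none.
# (Coarse heuristic: it also flags many legitimate object pickles.)
RISKY = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def looks_risky(blob: bytes) -> bool:
    return any(op.name in RISKY for op, _arg, _pos in pickletools.genops(blob))

class Evil:
    def __reduce__(self):
        return (print, ("boom",))  # stand-in for a dangerous call

assert not looks_risky(pickle.dumps({"weights": [1.0, 2.0]}))
assert looks_risky(pickle.dumps(Evil()))  # flagged regardless of extension
```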
n8n, an open source workflow automation platform, has a stored XSS vulnerability (cross-site scripting, where malicious code is saved and runs in users' browsers) in versions 1.24.0 through 1.106.x. An authorized user can inject harmful JavaScript into the initialMessages field of the LangChain Chat Trigger node, and if public access is enabled, this code runs in the browsers of anyone visiting the public chat link, potentially allowing attackers to steal cookies or sensitive data through phishing.
Fix (CVE-2025-46149, PyTorch): Upgrade to PyTorch version 2.7.0 or later.
Fix (Claude Code): Update to version 1.0.39 or later. Users with auto-update enabled received the fix automatically; users updating manually should move to the latest version.
Fix (huggingface/transformers): Update to version 4.53.0 or later.
Fix (Codex CLI): Update to version 0.39.0 or later, which corrects the sandbox boundary validation: boundaries are now derived from where the user started the session, not from paths generated by the model. Users of the Codex IDE extension should update to version 0.4.12 immediately; users on 0.38.0 or earlier should update via their package manager or reinstall the latest release.
Fix (Flowise Cloud): Update to the August 2025 cloud-hosted Flowise release or later, which includes the patch for the cross-tenant secret exposure.
Fix (Flowise CustomMCP RCE): Patched in version 3.0.6.
Fix (Flowise SSRF): Update to version 3.0.6, which contains the patch.
Fix (Lobe Chat): Update to version 1.129.4 or later, where this vulnerability is fixed.
Fix (n8n): Update to version 1.107.0 or later. As a workaround, the affected chatTrigger node can be disabled.