aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,754 · Last 24 hours: 22 · Last 7 days: 174
Daily Briefing: Wednesday, April 1, 2026

Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that Claude Code's source code was accidentally leaked through an npm package containing a source map file, exposing nearly 2,000 TypeScript files and over 512,000 lines of code. Users who downloaded the affected version on March 31, 2026 may have received a trojanized HTTP client (compromised software) containing malware.


AI Tool Discovers Zero-Days in Vim and GNU Emacs Within Minutes: Researcher Hung Nguyen used Anthropic's Claude Code to quickly discover zero-day exploits (previously unknown security flaws) in Vim and GNU Emacs that would allow attackers to execute arbitrary code by tricking users into opening malicious files. Claude Code generated proof-of-concept exploits (working examples of attacks) within minutes, demonstrating how AI can accelerate vulnerability discovery.

Latest Intel

01

CVE-2025-62609: MLX is an array framework for machine learning on Apple silicon. Prior to version 0.29.4, there is a segmentation fault

security
Nov 21, 2025

MLX is an array framework for machine learning on Apple silicon that has a vulnerability where loading malicious GGUF files (a machine learning model format) causes a segmentation fault (a crash where the program tries to access invalid memory). The problem occurs because the code dereferences an untrusted pointer (uses a memory address without checking if it's valid) from an external library without validation.

Critical This Week (5 issues)
critical

GHSA-6vh2-h83c-9294: PraisonAI: Python Sandbox Escape via str Subclass startswith() Override in execute_code

CVE-2026-34938 · GitHub Advisory Database · Apr 1, 2026

Critical Python Sandbox Escape in PraisonAI: PraisonAI's `execute_code()` function can be bypassed by creating a custom string subclass with an overridden `startswith()` method, allowing attackers to run arbitrary OS commands on the host system (CVE-2026-34938). This is especially dangerous because many deployments auto-approve code execution, so attackers could trigger it silently through indirect prompt injection (sneaking malicious instructions into the AI's input).
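The bypass class described above can be shown in a few lines. This is an illustrative sketch, not PraisonAI's actual code: `SneakyStr`, `naive_guard`, and `hardened_guard` are hypothetical names, and the `"print("` allowlist is invented for the example. The point is that any guard calling a method on attacker-controlled objects inherits the attacker's implementation of that method.

```python
class SneakyStr(str):
    def startswith(self, *args, **kwargs):
        # Lie: claim the payload begins with an allowed prefix.
        return True

def naive_guard(code) -> bool:
    # Hypothetical allowlist check: only permit code starting with "print("
    return code.startswith("print(")

def hardened_guard(code) -> bool:
    # Reject str subclasses outright, then call the real str method.
    return type(code) is str and str.startswith(code, "print(")

payload = SneakyStr("__import__('os').system('id')")
print(naive_guard(payload))     # True  -- the guard is bypassed
print(hardened_guard(payload))  # False -- subclass rejected
```

The hardened variant checks `type(code) is str` rather than normalizing with `str(...)`, because an attacker who can override `startswith()` can usually override `__str__()` too.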


Multiple High-Severity Vulnerabilities in ONNX Format: ONNX (Open Neural Network Exchange, a standard format for sharing machine learning models) versions before 1.21.0 contain several high-severity vulnerabilities including path traversal via symlink (CVE-2026-27489, CVSS 8.7) and improper validation allowing attackers to craft malicious models that overwrite internal object properties (CVE-2026-34445). These flaws allow attackers to read arbitrary files outside intended directories or manipulate model behavior.

Fix: Update MLX to version 0.29.4 or later, where this issue has been patched.

NVD/CVE Database
02

CVE-2025-62608: MLX is an array framework for machine learning on Apple silicon. Prior to version 0.29.4, there is a heap buffer overflow

security
Nov 21, 2025

MLX is an array framework (a software library for handling arrays of data in machine learning) for Apple silicon computers. Before version 0.29.4, the software had a heap buffer overflow (a memory safety bug where the program reads beyond allocated memory) in its file-loading function when processing malicious NumPy .npy files (a common data format in machine learning), which could crash the program or leak sensitive information.
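The underlying pattern generalizes: never trust a length field read from an untrusted file. Below is a minimal defensive sketch in Python (MLX itself is C++; this is not its code) that parses the real .npy version 1.0 header layout, magic string, version bytes, and a little-endian uint16 header length, and rejects a file whose declared header length exceeds the actual buffer, which is exactly the check whose absence enables out-of-bounds reads.

```python
import struct

MAGIC = b"\x93NUMPY"  # the .npy format's magic string

def parse_npy_header(buf: bytes) -> str:
    """Return the header text of a v1.0 .npy buffer, with bounds checks."""
    if len(buf) < 10 or not buf.startswith(MAGIC):
        raise ValueError("not a .npy file")
    major, minor = buf[6], buf[7]
    if major != 1:
        raise ValueError(f"unsupported .npy version {major}.{minor}")
    # Untrusted length field: a uint16 immediately after the version bytes.
    (header_len,) = struct.unpack_from("<H", buf, 8)
    end = 10 + header_len
    if end > len(buf):  # the critical bounds check
        raise ValueError("declared header length exceeds buffer size")
    return buf[10:end].decode("latin-1")
```

A truncated file claiming a 9,999-byte header now fails with a clean `ValueError` instead of reading past the end of the allocation.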

Fix: Update MLX to version 0.29.4 or later, where the vulnerability has been patched.

NVD/CVE Database
03

Human-Inspired Scene Understanding: A Grounded Cognition Method for Unbiased Scene Graph Generation

research
Nov 21, 2025

Scene Graph Generation (SGG, a method that identifies objects and their relationships in images) is limited by long-tailed bias, where the AI model performs well on common relationships but poorly on rare ones. This paper proposes a Grounded Cognition Method (GCM) that mimics human thinking by using techniques like Out Domain Knowledge Injection to broaden visual understanding, a Semantic Group Aware Synthesizer to organize relationship categories, modality erasure (removing one type of input at a time) to improve robustness, and a Shapley Enhanced Multimodal Counterfactual module to handle diverse contexts.

IEEE Xplore (Security & AI Journals)
04

Rethinking Rotation-Invariant Recognition of Fine-Grained Shapes From the Perspective of Contour Points

research
Nov 21, 2025

This research addresses the problem of recognizing shapes that have been rotated at different angles in computer vision (the field of teaching computers to understand images). The authors propose a new method that focuses on analyzing the outline or contour points of shapes rather than individual pixels, and they use a special neural network module to identify geometric patterns in these contours while ignoring rotation. Their approach shows better results than previous methods, especially for complex shapes, and it works even when the contour data is slightly noisy or imperfect.

IEEE Xplore (Security & AI Journals)
05

CVE-2025-62426: vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, the /v1/c

security
Nov 21, 2025

vLLM is a tool that runs large language models and serves them to users. In versions 0.5.5 through 0.11.0, two API endpoints accept a parameter called chat_template_kwargs that isn't properly checked before being used, allowing attackers to send specially crafted requests that freeze the server and prevent other users' requests from being processed.
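The general mitigation for this class of bug is to validate request-supplied kwargs against an allowlist of plain values before they reach the rendering layer, instead of forwarding them blindly. A hedged sketch follows; it is not vLLM's actual patch, and the `ALLOWED_KEYS` set is hypothetical.

```python
# Hypothetical allowlist of template options a server chooses to expose.
ALLOWED_KEYS = {"add_generation_prompt", "enable_thinking"}

def sanitize_chat_template_kwargs(kwargs):
    """Return a cleaned copy of caller-supplied template kwargs,
    rejecting unknown keys and non-scalar values."""
    if not isinstance(kwargs, dict):
        raise ValueError("chat_template_kwargs must be an object")
    clean = {}
    for key, value in kwargs.items():
        if key not in ALLOWED_KEYS:
            raise ValueError(f"unexpected template kwarg: {key!r}")
        if not isinstance(value, (bool, int, float, str)):
            raise ValueError(f"unsupported value type for {key!r}")
        clean[key] = value
    return clean
```

Rejecting unknown keys (rather than silently dropping them) also surfaces probing attempts in server logs.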

Fix: Update to vLLM version 0.11.1 or later, where this issue has been patched.

NVD/CVE Database
06

CVE-2025-62372: vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can

security
Nov 21, 2025

vLLM (an inference and serving engine for large language models) versions 0.5.5 through 0.11.0 have a vulnerability where users can crash the engine by sending multimodal embedding inputs (data that combines multiple types of information, like images and text) with incorrect shape parameters, even if the model doesn't support such inputs. This bug has a CVSS score of 8.3 (a 0-10 scale measuring vulnerability severity), indicating it's a high-severity issue.
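The defensive pattern here is to reject inputs whose declared shape disagrees with the data actually sent, before any downstream code reshapes or indexes the buffer. A minimal sketch (not vLLM's actual code; `validate_embeddings` and its signature are invented for illustration):

```python
def validate_embeddings(rows, expected_dim):
    """Check every embedding row against the model's expected dimension,
    so a malformed request fails fast instead of crashing the engine."""
    if not rows:
        raise ValueError("empty embedding input")
    for i, row in enumerate(rows):
        if len(row) != expected_dim:
            raise ValueError(
                f"row {i} has dimension {len(row)}, expected {expected_dim}"
            )
    return len(rows)  # number of validated rows
```

The same check should also refuse embedding inputs entirely when the served model does not support them, per the advisory's description of the trigger.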

Fix: Upgrade to vLLM version 0.11.1 or later, where this issue has been patched.

NVD/CVE Database
07

CVE-2025-62164: vLLM is an inference and serving engine for large language models (LLMs). From versions 0.10.2 to before 0.11.1, a memor

security
Nov 21, 2025

vLLM versions 0.10.2 through 0.11.0 have a vulnerability in how they process user-supplied prompt embeddings (numerical representations of text). An attacker can craft malicious data that bypasses safety checks and causes memory corruption (writing data to the wrong location in computer memory), which can crash the system or potentially allow remote code execution (RCE, where an attacker runs commands on the server).

Fix: Update to vLLM version 0.11.1 or later, where this issue has been patched.

NVD/CVE Database
08

CVE-2025-64755: Claude Code is an agentic coding tool. Prior to version 2.0.31, due to an error in sed command parsing, it was possible

security
Nov 20, 2025

Claude Code is an agentic coding tool (a program that can write code automatically) that had a vulnerability before version 2.0.31 where a mistake in how it parsed sed commands (a tool for editing text) allowed attackers to bypass safety checks and write files anywhere on a computer system. This vulnerability has been fixed.
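The advisory does not publish Claude Code's parser, but the failure mode is a known class: guards that scan a sed script for "obvious" write commands can miss the `w` flag on an `s///` substitution, which also writes files (this flag is standard POSIX sed behavior). The guard below is a hypothetical illustration of that class, not Anthropic's code.

```python
import re

def naive_is_read_only(sed_script: str) -> bool:
    # Hypothetical guard: block the standalone `w file` command and -i edits.
    return (re.search(r"(^|;)\s*w\s", sed_script) is None
            and "-i" not in sed_script)

# Writes matching lines to /tmp/out even though no standalone `w` command
# or -i flag appears anywhere in the script:
sneaky = "s/root/ROOT/w /tmp/out"
print(naive_is_read_only(sneaky))  # True -- wrongly classified as read-only
```

The robust fix is to parse the sed grammar properly (or deny the `s///w` flag explicitly) rather than pattern-match on the raw command string.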

Fix: Update to version 2.0.31 or later, where the issue has been patched.

NVD/CVE Database
09

CVE-2025-64660: Improper access control in GitHub Copilot and Visual Studio Code allows an authorized attacker to execute code over a network

security
Nov 20, 2025

CVE-2025-64660 is an improper access control vulnerability (a flaw in how the software checks who is allowed to do what) in GitHub Copilot and Visual Studio Code that allows an authorized attacker to execute code over a network. It is scored under CVSS version 4.0 (a 0-10 scale measuring how serious a vulnerability is). In practice, someone with legitimate access to these tools could run malicious code remotely.

NVD/CVE Database
10

CVE-2025-65099: Claude Code is an agentic coding tool. Prior to version 1.0.39, when running on a machine with Yarn 3.0 or above, Claude

security
Nov 19, 2025

Claude Code, an agentic coding tool (software that can write and execute code), had a vulnerability before version 1.0.39 where it could run code from yarn plugins (add-ons for the Yarn package manager) before asking the user for permission, but only on machines with Yarn 3.0 or newer. This attack required tricking a user into opening Claude Code in an untrusted directory (a folder with malicious code).

Fix: Update Claude Code to version 1.0.39 or later, where this issue has been patched.

NVD/CVE Database
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162 · NVD/CVE Database · Mar 31, 2026
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026