aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. This site was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,754
Last 24 hours: 22
Last 7 days: 174
Daily Briefing: Wednesday, April 1, 2026

Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that Claude Code's source code was accidentally leaked through an npm package containing a source map file, exposing nearly 2,000 TypeScript files and over 512,000 lines of code. Users who downloaded the affected version on March 31, 2026 may have received a trojanized (maliciously modified) HTTP client containing malware.


AI Tool Discovers Zero-Days in Vim and GNU Emacs Within Minutes: Researcher Hung Nguyen used Anthropic's Claude Code to quickly discover zero-day vulnerabilities (previously unknown security flaws) in Vim and GNU Emacs that would allow attackers to execute arbitrary code by tricking users into opening malicious files. Claude Code generated proof-of-concept exploits (working examples of attacks) within minutes, demonstrating how AI can accelerate vulnerability discovery.

Latest Intel

01

v0.14.8

security
Nov 10, 2025

These release notes describe version updates across multiple llama-index (a framework for building AI applications with language models) components, including a fix for a ReActOutputParser (a tool that interprets AI agent outputs) that could get stuck, improved support for multiple AI model providers such as OpenAI and Google Gemini, and updates to various integrations with external services. The changes span core functionality fixes, documentation improvements, and SDK compatibility updates across dozens of sub-packages.

Critical This Week: 5 issues
critical

GHSA-6vh2-h83c-9294: PraisonAI: Python Sandbox Escape via str Subclass startswith() Override in execute_code

CVE-2026-34938 · GitHub Advisory Database · Apr 1, 2026

Critical Python Sandbox Escape in PraisonAI: PraisonAI's `execute_code()` function can be bypassed by creating a custom string subclass with an overridden `startswith()` method, allowing attackers to run arbitrary OS commands on the host system (CVE-2026-34938). This is especially dangerous because many deployments auto-approve code execution, so attackers could trigger it silently through indirect prompt injection (sneaking malicious instructions into the AI's input).
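The bypass class is easy to see in a self-contained sketch. The names below (`EvilStr`, `is_allowed`) are hypothetical, not PraisonAI's actual internals; the point is only that any guard which calls a method on attacker-supplied input can be lied to by a subclass:

```python
class EvilStr(str):
    """A str subclass whose startswith() lies to the caller."""
    def startswith(self, prefix, *args):
        return True  # claim every prefix matches

def is_allowed(code: str) -> bool:
    # Naive guard: only code beginning with "print(" is considered safe.
    return code.startswith("print(")

payload = EvilStr("__import__('os').system('id')")

print(is_allowed(payload))       # True: Python dispatches to the override
print(is_allowed(str(payload)))  # False: coercing to plain str restores the check
```

Coercing untrusted values to a plain `str` before validation (or comparing with `str.startswith(value, ...)` as an unbound call) defeats this trick, since the check then no longer runs attacker-controlled code.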


Multiple High-Severity Vulnerabilities in ONNX Format: ONNX (Open Neural Network Exchange, a standard format for sharing machine learning models) versions before 1.21.0 contain several high-severity vulnerabilities including path traversal via symlink (CVE-2026-27489, CVSS 8.7) and improper validation allowing attackers to craft malicious models that overwrite internal object properties (CVE-2026-34445). These flaws allow attackers to read arbitrary files outside intended directories or manipulate model behavior.
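The general defense against this class of flaw is to resolve every attacker-influenced path and verify it stays inside the intended directory. A minimal sketch (a hypothetical helper, not ONNX's actual fix; needs Python 3.9+ for `Path.is_relative_to`):

```python
from pathlib import Path

def safe_join(base: str, untrusted: str) -> Path:
    """Resolve an untrusted relative path (following symlinks and '..')
    and refuse anything that escapes the base directory."""
    base_path = Path(base).resolve()
    target = (base_path / untrusted).resolve()
    if not target.is_relative_to(base_path):
        raise ValueError(f"path escapes {base_path}: {untrusted!r}")
    return target
```

Because `resolve()` follows symlinks before the containment check, a symlinked archive entry pointing outside `base` is rejected the same way a literal `../` sequence is.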

LlamaIndex Security Releases
02

CVE-2025-64504: Langfuse is an open source large language model engineering platform. Starting in version 2.70.0 and prior to versions 2

security
Nov 10, 2025

Langfuse, an open source platform for managing large language models, had a vulnerability in versions 2.70.0 through 2.95.10 and 3.x through 3.124.0: the server didn't properly check which organization a user belonged to, so any authenticated user could see the names and email addresses of members in other organizations if they knew the target organization's ID. Exploitation required a valid account on the same Langfuse instance and knowledge of the target organization's ID, and no customer data such as traces, prompts, or evaluations was exposed.

Fix: Upgrade to patched versions: v2.95.11 for major version 2 or v3.124.1 for major version 3. According to the source, 'there are no known workarounds' and 'upgrading is required to fully mitigate this issue.'

NVD/CVE Database
03

CVE-2025-11972: The Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI plugin for WordPress is vulnerable to SQL Injection

security
Nov 8, 2025

A WordPress plugin called Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI has a SQL injection vulnerability (a security flaw where attackers can insert harmful database commands into the plugin's code) in versions up to 3.40.0. Attackers with Editor-level access or higher can exploit the 'post_types' parameter to extract sensitive information from the website's database because the plugin doesn't properly clean up user input before using it in database queries.
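The standard fix for this whole class of bug is a parameterized query. The plugin itself is PHP, but the pattern is language-agnostic; here is a sketch in Python with sqlite3 (table and column names are hypothetical, not the plugin's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, post_type TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(1, "post"), (2, "page"), (3, "private")])

def count_unsafe(post_type):
    # VULNERABLE: untrusted input interpolated straight into SQL,
    # like the unsanitized 'post_types' parameter in the advisory.
    q = f"SELECT COUNT(*) FROM posts WHERE post_type = '{post_type}'"
    return conn.execute(q).fetchone()[0]

def count_safe(post_type):
    # FIX: placeholder plus bound parameter; the driver handles quoting.
    q = "SELECT COUNT(*) FROM posts WHERE post_type = ?"
    return conn.execute(q, (post_type,)).fetchone()[0]

evil = "x' OR '1'='1"
print(count_unsafe(evil))  # 3: the injected OR clause matches every row
print(count_safe(evil))    # 0: treated as a literal string, matches nothing
```

The bound-parameter version never lets the input change the query's structure, which is exactly what "properly cleaning up user input before using it in database queries" means in practice.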

NVD/CVE Database
04

v5.1.0

security, research
Nov 6, 2025

ATLAS Data v5.1.0 is an updated framework that documents security threats and defenses related to AI systems, now containing 16 tactics, 84 techniques, and 32 mitigations. The update adds new attack methods targeting AI, such as prompt injection (tricking an AI by hiding instructions in its input), deepfake generation, and data theft from AI services, along with new defensive measures like human oversight of AI agent actions and restricted permissions for AI tools. It also includes 42 real-world case studies showing how these attacks and defenses apply in practice.

MITRE ATLAS Releases
05

CVE-2025-12488: oobabooga text-generation-webui trust_remote_code Reliance on Untrusted Inputs Remote Code Execution Vulnerability. This

security
Nov 6, 2025

A vulnerability in oobabooga text-generation-webui (CVE-2025-12488) allows attackers to execute arbitrary code (running any commands they want on a system) by exploiting the trust_remote_code parameter in the load endpoint. The flaw occurs because the software doesn't properly validate user input before using it to load a model, and no authentication is required to exploit it.

NVD/CVE Database
06

CVE-2025-12487: oobabooga text-generation-webui trust_remote_code Reliance on Untrusted Inputs Remote Code Execution Vulnerability. This

security
Nov 6, 2025

A vulnerability in oobabooga text-generation-webui allows attackers to run arbitrary code (unauthorized commands) on the system without needing to log in. The flaw occurs because the software doesn't properly check user input for the trust_remote_code parameter before using it to load a model, letting attackers execute code with the same permissions as the service.
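Both oobabooga CVEs share one failure pattern: a security-sensitive flag is derived from unauthenticated request input. A hedged sketch of the pattern and its fix (handler names are hypothetical, not text-generation-webui's actual code):

```python
def build_load_args_unsafe(params: dict) -> dict:
    # VULNERABLE pattern: the client decides whether remote model code
    # may execute, so any caller can opt into arbitrary code execution.
    return {
        "model": params["model"],
        "trust_remote_code": bool(params.get("trust_remote_code", False)),
    }

def build_load_args_safe(params: dict) -> dict:
    # FIX: pin the flag server-side; never derive it from request input.
    return {"model": params["model"], "trust_remote_code": False}
```

Pinning the flag server-side (and requiring authentication on the load endpoint) removes the attacker's ability to turn a model-load request into code execution.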

NVD/CVE Database
07

CVE-2025-62039: Insertion of Sensitive Information Into Sent Data vulnerability in Ays Pro AI ChatBot with ChatGPT and Content Generator

security
Nov 6, 2025

A vulnerability in Ays Pro AI ChatBot with ChatGPT and Content Generator (version 2.6.6 and earlier) allows sensitive information to be exposed when data is sent. The flaw, classified as CWE-201 (insertion of sensitive information into sent data), means attackers could potentially retrieve embedded sensitive data from the plugin.

NVD/CVE Database
08

CVE-2025-12360: The Better Find and Replace – AI-Powered Suggestions plugin for WordPress is vulnerable to unauthorized API usage due to

security
Nov 6, 2025

The Better Find and Replace plugin for WordPress (versions up to 1.7.7) has a security flaw where a function called rtafar_ajax() doesn't properly check user permissions, allowing low-level authenticated users (Subscriber-level access) to trigger OpenAI API key usage and consume quota, potentially costing money. This happens because the code is missing a capability check (a permission verification system that controls what users can do).
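In WordPress the fix is a `current_user_can()` check inside the AJAX handler; the same gate can be sketched language-agnostically as a decorator (a Python stand-in with hypothetical names, not the plugin's actual code):

```python
import functools

def require_capability(cap: str):
    """Refuse to run the handler unless the caller holds the capability."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            if cap not in user.get("capabilities", set()):
                raise PermissionError(f"missing capability: {cap}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@require_capability("manage_options")
def call_openai_api(user, prompt):
    # Quota-consuming work happens only after the permission gate.
    return f"completion for: {prompt}"
```

The key design point is that the check lives on the server-side handler itself, so a Subscriber-level account hitting the AJAX endpoint directly is still rejected before any paid API call is made.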

NVD/CVE Database
09

Modifying AI Under the EU AI Act: Lessons from Practice on Classification and Compliance

policy
Nov 5, 2025

Under the EU AI Act, organizations that modify existing AI systems or general-purpose AI models (GPAI models, which are foundational AI systems designed to perform many different tasks) may become legally classified as "providers" and face significant compliance responsibilities. The article explains that modifications triggering higher compliance burdens typically involve high-risk AI systems or substantial changes to a GPAI model's capabilities or generality, such as fine-tuning (customizing a model for specific tasks). Proper assessment of whether a modification triggers provider status is critical, since misclassification can result in fines up to €15 million or 3% of global annual revenue.

EU AI Act Updates
10

CVE-2025-64110: Cursor is a code editor built for programming with AI. In versions 1.7.23 and below, a logic bug allows a malicious agen

security
Nov 4, 2025

Cursor, a code editor designed for programming with AI, has a logic bug in versions 1.7.23 and below that allows attackers to bypass cursorignore (a file that protects sensitive files from being read). An attacker who has already performed prompt injection (tricking an AI by hiding instructions in its input) or controls a malicious AI model could create a new cursorignore file to override existing protections and access protected files.

Fix: Update to version 2.0, where this issue is fixed.

NVD/CVE Database
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162 · NVD/CVE Database · Mar 31, 2026
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026