aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,754
[LAST_24H]
29
[LAST_7D]
174
Daily BriefingWednesday, April 1, 2026
>

Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that Claude Code's source code was accidentally leaked through an npm package containing a source map file, exposing nearly 2,000 TypeScript files and over 512,000 lines of code. Users who downloaded the affected version on March 31, 2026 may have received a trojanized HTTP client (compromised software) containing malware.

>

AI Tool Discovers Zero-Days in Vim and GNU Emacs Within Minutes: Researcher Hung Nguyen used Anthropic's Claude Code to quickly discover zero-day exploits (previously unknown security flaws) in Vim and GNU Emacs that would allow attackers to execute arbitrary code by tricking users into opening malicious files. Claude Code generated proof-of-concept exploits (working examples of attacks) within minutes, demonstrating how AI can accelerate vulnerability discovery.

Latest Intel

page 147/276
VIEW ALL
01

CVE-2025-53066: Vulnerability in the Oracle Java SE, Oracle GraalVM for JDK, Oracle GraalVM Enterprise Edition product of Oracle Java SE

security
Oct 21, 2025

A vulnerability (CVE-2025-53066) exists in Oracle Java SE and related products, affecting multiple versions including Java 8, 11, 17, 21, and 25. An attacker with network access can exploit this flaw in the JAXP component (a Java library for processing XML data) without needing to log in, potentially gaining unauthorized access to sensitive data. The vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 7.5, indicating it is a serious threat.

Critical This Week5 issues
critical

GHSA-6vh2-h83c-9294: PraisonAI: Python Sandbox Escape via str Subclass startswith() Override in execute_code

CVE-2026-34938GitHub Advisory DatabaseApr 1, 2026
Apr 1, 2026
>

Critical Python Sandbox Escape in PraisonAI: PraisonAI's `execute_code()` function can be bypassed by creating a custom string subclass with an overridden `startswith()` method, allowing attackers to run arbitrary OS commands on the host system (CVE-2026-34938). This is especially dangerous because many deployments auto-approve code execution, so attackers could trigger it silently through indirect prompt injection (sneaking malicious instructions into the AI's input).

>

Multiple High-Severity Vulnerabilities in ONNX Format: ONNX (Open Neural Network Exchange, a standard format for sharing machine learning models) versions before 1.21.0 contain several high-severity vulnerabilities including path traversal via symlink (CVE-2026-27489, CVSS 8.7) and improper validation allowing attackers to craft malicious models that overwrite internal object properties (CVE-2026-34445). These flaws allow attackers to read arbitrary files outside intended directories or manipulate model behavior.

NVD/CVE Database
02

CVE-2025-60511: Moodle OpenAI Chat Block plugin 3.0.1 (2025021700) suffers from an Insecure Direct Object Reference (IDOR) vulnerability

security
Oct 21, 2025

The Moodle OpenAI Chat Block plugin version 3.0.1 has an IDOR vulnerability (insecure direct object reference, where a user can access resources by directly requesting them without proper permission checks). An authenticated student can bypass validation of the blockId parameter in the plugin's API and impersonate another user's block, such as an administrator's block, allowing them to execute queries with that block's settings, expose sensitive information, and potentially misuse API resources.

NVD/CVE Database
03

CVE-2025-49655: Deserialization of untrusted data can occur in versions of the Keras framework running versions 3.11.0 up to but not inc

security
Oct 17, 2025

CVE-2025-49655 is a vulnerability in Keras (a machine learning framework) versions 3.11.0 through 3.11.2 where deserialization (converting saved data back into usable form) of untrusted data can allow malicious code to run on a user's computer when they load a specially crafted Keras file, even if safe mode is enabled. This vulnerability affects both locally stored and remotely downloaded files.

Fix: Update Keras to version 3.11.3 or later. The GitHub pull request at https://github.com/keras-team/keras/pull/21575 contains the fix.

NVD/CVE Database
04

CVE-2025-62356: A path traversal vulnerability in all versions of the Qodo Qodo Gen IDE enables a threat actor to read arbitrary local f

security
Oct 17, 2025

CVE-2025-62356 is a path traversal vulnerability (a flaw that lets attackers access files outside intended directories) in all versions of Qodo Gen IDE that allows attackers to read any local files on a user's computer, both inside and outside their projects. The vulnerability can be exploited directly or through indirect prompt injection (tricking the AI by hiding malicious instructions in its input).

NVD/CVE Database
05

CVE-2025-62353: A path traversal vulnerability in all versions of the Windsurf IDE enables a threat actor to read and write arbitrary lo

security
Oct 17, 2025

CVE-2025-62353 is a path traversal vulnerability (a flaw that lets attackers access files outside intended directories) in all versions of Windsurf IDE that allows attackers to read and write any files on a user's computer. The vulnerability can be exploited directly or through indirect prompt injection (tricking the AI by hiding malicious instructions in its input).

NVD/CVE Database
06

AI Safety Newsletter #64: New AGI Definition and Senate Bill Would Establish Liability for AI Harms

policyindustry
Oct 16, 2025

The Senate introduced the AI LEAD Act, which would make AI companies legally liable for harms their systems cause, similar to how traditional product liability (the legal responsibility companies have when their products injure people) works for other products. The act would clarify that AI systems count as products subject to liability and would hold companies accountable if they failed to exercise reasonable care in designing the system, providing warnings, or if they sold a defective system. Additionally, China announced new export controls on rare earth metals (elements essential to semiconductors and AI hardware), which could disrupt global AI supply chains if strictly enforced.

Fix: The AI LEAD Act itself serves as the proposed solution: it would establish federal product liability for AI systems, clarify that AI companies are liable for harms if they fail to exercise reasonable care in design or warnings or breach warranties, allow deployers to be held liable for substantially modifying or dangerously misusing systems, prohibit AI companies from limiting liability through consumer contracts, and require foreign AI developers to register agents for service of process in the US before selling products domestically.

CAIS AI Safety Newsletter
07

v0.14.5

industry
Oct 15, 2025

LlamaIndex v0.14.5 is a release that fixes multiple bugs and adds new features across its ecosystem of AI/LLM tools. Changes include fixing duplicate node positions in documents, improving streaming functionality with AI providers like Anthropic and OpenAI, adding support for new AI models, and enhancing vector storage (database systems that store AI embeddings, which are numerical representations of text meaning) capabilities. The release also introduces new integrations, such as Sglang LLM support and SignNow MCP (model context protocol, a standard for connecting AI tools) tools.

LlamaIndex Security Releases
08

v5.0.0

securityresearch
Oct 15, 2025

ATLAS Data v5.0.0 introduces a new "Technique Maturity" field that categorizes AI attack techniques based on evidence level, ranging from feasible (proven in research) to realized (used in actual attacks). The release adds 11 new techniques covering AI agent attacks like context poisoning (injecting false information into an AI system's memory), credential theft from AI configurations, and prompt injection (tricking an AI by hiding malicious instructions in its input), plus updates to existing techniques and case studies.

MITRE ATLAS Releases
09

CVE-2025-36730: A prompt injection vulnerability exists in Windsurft version 1.10.7 in Write mode using SWE-1 model. It is possible to

security
Oct 14, 2025

A prompt injection vulnerability (tricking an AI by hiding instructions in its input) exists in Windsurf version 1.10.7 when using Write mode with the SWE-1 model. An attacker can create a specially crafted file name that gets added to the user's prompt, causing Windsurf to follow malicious instructions instead of the user's intended commands. The vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 4.6, classified as medium severity.

NVD/CVE Database
10

A Mathematical Certification for Positivity Conditions in Neural Networks With Applications to Partial Monotonicity and Trustworthy AI

researchsafety
Oct 14, 2025

This research presents LipVor, an algorithm that mathematically verifies whether a trained neural network (a computer model with interconnected nodes that learns patterns) follows partial monotonicity constraints, which means outputs change predictably with certain inputs. The method works by testing the network at specific points and using mathematical properties to guarantee the network behaves correctly across its entire domain, potentially allowing neural networks to be used in critical applications like credit scoring where trustworthiness and predictable behavior are required.

IEEE Xplore (Security & AI Journals)
Prev1...145146147148149...276Next
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026