aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,757
[LAST_24H]
23
[LAST_7D]
175
Daily BriefingThursday, April 2, 2026
>

Model Context Protocol Security Gaps Highlighted: MCP (a system that connects AI agents to data sources) has gained business adoption but faces serious risks including prompt injection (tricking an AI by hiding instructions in its input), token theft, and data leaks. Despite recent improvements like OAuth support and an official registry, organizations still lack adequate tools for access controls, authorization checks, and detailed logging to protect sensitive data.

Latest Intel

page 133/276
VIEW ALL
01

v0.14.12

security
Dec 29, 2025

This is a release of llama-index v0.14.12, a framework for building AI applications, containing various updates across multiple components including bug fixes, new features for asynchronous tool support, and improvements to integrations with services like OpenAI, Google, Anthropic, and various vector stores (databases that store numerical representations of data for AI searching). Key fixes address issues like crashes in logging, missing parameters in tool handling, and compatibility improvements for newer Python versions.

Critical This Week5 issues
critical

GHSA-6vh2-h83c-9294: PraisonAI: Python Sandbox Escape via str Subclass startswith() Override in execute_code

CVE-2026-34938GitHub Advisory DatabaseApr 1, 2026
Apr 1, 2026
LlamaIndex Security Releases
02

CVE-2025-67729: LMDeploy is a toolkit for compressing, deploying, and serving LLMs. Prior to version 0.11.1, an insecure deserialization

security
Dec 26, 2025

LMDeploy is a toolkit for compressing, deploying, and serving large language models (LLMs). Prior to version 0.11.1, the software had an insecure deserialization vulnerability (unsafe conversion of data back into executable code) where it used torch.load() without the weights_only=True parameter when opening model checkpoint files, allowing attackers to run arbitrary code on a victim's machine by tricking them into loading a malicious .bin or .pt model file.

Fix: This issue has been patched in version 0.11.1.

NVD/CVE Database
03

HGNN Shield: Defending Hypergraph Neural Networks Against High-Order Structure Attack

securityresearch
Dec 26, 2025

Hypergraph Neural Networks (HGNNs, which are AI models that learn from data where connections can link multiple items together instead of just pairs) can be weakened by structural attacks that corrupt their connections and reduce accuracy. HGNN Shield is a defense framework with two main components: Hyperedge-Dependent Estimation (which assesses how important each connection is within the network) and High-Order Shield (which detects and removes harmful connections before the AI processes data). Experiments show the framework improves performance by an average of 9.33% compared to existing defenses.

Fix: The HGNN Shield defense framework addresses the vulnerability through two modules: (1) Hyperedge-Dependent Estimation (HDE) that 'prioritizes vertex dependencies within hyperedges and adapts traditional connectivity measures to hypergraphs, facilitating precise structural modifications,' and (2) High-Order Shield (HOS) positioned before convolutional layers, which 'consists of three submodules: Hyperpath Cut, Hyperpath Link, and Hyperpath Refine' that 'collectively detect, disconnect, and refine adversarial connections, ensuring robust message propagation.'

IEEE Xplore (Security & AI Journals)
04

CVE-2025-68665: LangChain is a framework for building LLM-powered applications. Prior to @langchain/core versions 0.3.80 and 1.1.8, and

security
Dec 23, 2025

LangChain, a framework for building applications powered by LLMs (large language models), had a serialization injection vulnerability (a flaw where specially crafted data can be misinterpreted as legitimate code during the conversion of objects to JSON format) in its toJSON() method. The vulnerability occurred because the method failed to properly escape objects containing 'lc' keys, which LangChain uses internally to mark serialized objects, allowing attackers to trick the system into treating malicious user data as legitimate LangChain objects when deserializing (converting back from JSON format).

Fix: Update @langchain/core to version 0.3.80 or 1.1.8, and update langchain to version 0.3.37 or 1.2.3. According to the source: 'This issue has been patched in @langchain/core versions 0.3.80 and 1.1.8, and langchain versions 0.3.37 and 1.2.3.'

NVD/CVE Database
05

CVE-2025-68664: LangChain is a framework for building agents and LLM-powered applications. Prior to versions 0.3.81 and 1.2.5, a seriali

security
Dec 23, 2025

LangChain, a framework for building AI agents and applications powered by large language models, had a serialization injection vulnerability (a flaw in how it converts data to stored formats) in its dumps() and dumpd() functions before versions 0.3.81 and 1.2.5. The functions failed to properly escape dictionaries containing 'lc' keys, which LangChain uses internally to mark serialized objects, allowing attackers to trick the system into treating user-supplied data as legitimate LangChain objects during deserialization (converting stored data back into usable form).

Fix: Update to LangChain version 0.3.81 or version 1.2.5, where this issue has been patched.

NVD/CVE Database
06

CVE-2025-14930: Hugging Face Transformers GLM4 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability

security
Dec 23, 2025

A vulnerability in Hugging Face Transformers GLM4 allows attackers to run harmful code on a system by tricking users into opening a malicious file or visiting a malicious webpage. The problem occurs because the software doesn't properly check data when loading model weights (the numerical values that make the AI work), allowing deserialization of untrusted data (converting unsafe external files into code the system will execute).

NVD/CVE Database
07

CVE-2025-14929: Hugging Face Transformers X-CLIP Checkpoint Conversion Deserialization of Untrusted Data Remote Code Execution Vulnerabi

security
Dec 23, 2025

A vulnerability in Hugging Face Transformers' X-CLIP checkpoint conversion allows attackers to execute arbitrary code (running commands they choose on a system) by tricking users into opening malicious files or visiting malicious pages. The flaw occurs because the code doesn't properly validate checkpoint data before deserializing it (converting stored data back into usable objects), which lets attackers inject malicious code that runs with the same permissions as the application.

NVD/CVE Database
08

CVE-2025-14928: Hugging Face Transformers HuBERT convert_config Code Injection Remote Code Execution Vulnerability. This vulnerability a

security
Dec 23, 2025

A vulnerability in Hugging Face Transformers' HuBERT convert_config function allows attackers to execute arbitrary code (RCE, or remote code execution, where an attacker runs commands on a system) by tricking users into converting a malicious checkpoint (a saved model file). The flaw occurs because the function doesn't properly validate user input before using it to run Python code.

NVD/CVE Database
09

CVE-2025-14927: Hugging Face Transformers SEW-D convert_config Code Injection Remote Code Execution Vulnerability. This vulnerability al

security
Dec 23, 2025

Hugging Face Transformers (a popular library for working with AI language models) has a vulnerability in its SEW-D convert_config function that allows attackers to run arbitrary code (any commands they want) on a victim's computer. The flaw exists because the function doesn't properly check user input before using it to execute Python code, and an attacker can exploit this by tricking a user into converting a malicious checkpoint (a saved model file).

NVD/CVE Database
10

CVE-2025-14926: Hugging Face Transformers SEW convert_config Code Injection Remote Code Execution Vulnerability. This vulnerability allo

security
Dec 23, 2025

A vulnerability in Hugging Face Transformers (a popular AI library) allows attackers to run arbitrary code on a user's computer through a malicious checkpoint (a saved model file). The flaw exists in the convert_config function, which doesn't properly validate user input before executing it as Python code, meaning an attacker can trick a user into converting a malicious checkpoint to execute code with the user's permissions.

NVD/CVE Database
Prev1...131132133134135...276Next
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026