aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,718 · Last 24h: 39 · Last 7d: 176
Daily Briefing · Tuesday, March 31, 2026

OpenAI Closes Record $122 Billion Funding Round: OpenAI raised $122 billion at an $852 billion valuation, with backing from SoftBank, Amazon, and Nvidia. The company now serves 900 million weekly users and generates $2 billion in monthly revenue as it prepares for a potential IPO, despite not yet being profitable.


Multiple Critical FastGPT Vulnerabilities Disclosed: FastGPT versions before 4.14.9.5 contain three high-severity flaws including CVE-2026-34162 (unauthenticated proxy endpoint allowing unauthorized server-side requests), CVE-2026-34163 (SSRF vulnerability letting attackers scan internal networks and access cloud metadata), and issues with MCP tools endpoints that accept user URLs without validation.
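The SSRF class of flaw in CVE-2026-34163 arises when a server fetches attacker-supplied URLs without checking where they resolve. A minimal sketch of such a check follows; the function name and policy are illustrative, not FastGPT's actual fix, and a production defense would also need to handle redirects and DNS rebinding:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal_url(url: str) -> bool:
    """Return True if the URL's host resolves to a private, loopback,
    link-local, or reserved address (illustrative SSRF check)."""
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable -> reject
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # unresolvable -> reject
    for *_, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return True
    return False
```

Blocking the link-local range is what stops the cloud-metadata access described above, since the metadata service lives at 169.254.169.254.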


Latest Intel

01

AI Regulatory Sandbox Approaches: EU Member State Overview

policy
May 2, 2025

AI regulatory sandboxes are controlled testing environments where companies can develop and test AI systems with guidance from regulators before releasing them to the public, as required by the EU AI Act (EU's new rules for artificial intelligence). These sandboxes help companies understand what regulations they must follow, protect them from fines if they follow official guidance, and make it easier for small startups to enter the market. Each EU Member State must create at least one sandbox by August 2, 2026, though different countries are taking different approaches to organizing them.

Critical This Week · 5 issues
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162 · NVD/CVE Database · Mar 31, 2026

Claude SDK Filesystem Sandbox Escapes: Both TypeScript (CVE-2026-34451) and Python (CVE-2026-34452) versions of Claude SDK had vulnerabilities in their filesystem memory tools where attackers could use prompt injection or symlinks to access files outside intended sandbox directories, potentially reading or modifying sensitive data they shouldn't access.
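The underlying pattern here is generic: any tool that grants a model filesystem access must resolve symlinks and `..` components before enforcing the sandbox boundary, not after. A minimal sketch of such a check (names are illustrative; this is not the SDKs' actual patch):

```python
import os

def resolve_inside_sandbox(path: str, sandbox_root: str) -> str:
    """Resolve symlinks and '..' components, then verify the real path
    still lies under the sandbox root. Illustrative sketch only."""
    real = os.path.realpath(os.path.join(sandbox_root, path))
    root = os.path.realpath(sandbox_root)
    if os.path.commonpath([real, root]) != root:
        raise PermissionError(f"path escapes sandbox: {path}")
    return real
```

Checking the string prefix of the raw path instead of the resolved path is exactly what a planted symlink defeats.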


Axios npm Supply Chain Attack Impacts Millions: Attackers compromised the npm account of Axios' lead maintainer and published malicious versions containing a remote access trojan (malware that gives attackers control over infected systems). The library is downloaded 100 million times per week and used in 80% of cloud environments; the malicious versions were detected and removed within hours.


Claude AI Discovers RCE Bugs in Vim and Emacs: Claude AI helped identify remote code execution vulnerabilities (where attackers can run commands on systems they don't own) in Vim and GNU Emacs text editors that trigger simply by opening a malicious file, exploiting modeline handling in Vim and automatic Git operations in Emacs.

EU AI Act Updates
02

CVE-2025-46567: LLama Factory enables fine-tuning of large language models. Prior to version 1.0.0, a critical vulnerability exists in t

security
May 1, 2025

CVE-2025-46567 is a critical vulnerability in LLaMA-Factory (a tool for fine-tuning large language models) that exists before version 1.0.0. The vulnerability is in the `llamafy_baichuan2.py` script, which unsafely loads user-supplied files using `torch.load()` (a function that deserializes, or reconstructs, Python objects from saved data), allowing attackers to execute arbitrary commands by crafting a malicious file.

Fix: This issue has been patched in version 1.0.0. Users should upgrade to version 1.0.0 or later. A patch is available at: https://github.com/hiyouga/LLaMA-Factory/commit/2989d39239d2f46e584c1e1180ba46b9768afb2a
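The root cause is generic to pickle-based deserialization, which `torch.load()` uses under the hood: unpickling can invoke arbitrary callables chosen by whoever crafted the file. A benign stand-in demonstrating the mechanism (no torch dependency; the payload merely records that it ran):

```python
import pickle

executed = []

def record(msg):
    executed.append(msg)

class Payload:
    """Benign stand-in for a malicious pickle: __reduce__ tells the
    unpickler which callable to invoke at load time."""
    def __reduce__(self):
        # A real exploit would return something like (os.system, ("...",)).
        return (record, ("code ran at load time",))

blob = pickle.dumps(Payload())  # the "model file" an attacker would ship
pickle.loads(blob)              # merely loading it runs the payload
```

Beyond upgrading, recent PyTorch versions offer `torch.load(path, weights_only=True)`, which restricts deserialization to tensor data rather than arbitrary Python objects.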

NVD/CVE Database
03

CVE-2025-46560: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and p

security
Apr 30, 2025

vLLM (a system for running large language models efficiently) versions 0.8.0 through 0.8.4 contain a denial-of-service bug in how they process multimodal input (text, images, audio). Placeholder tokens (special markers like <|audio_|> that get expanded into repeated tokens) are replaced using a quadratic-time algorithm, meaning processing time grows with the square of the input size. Attackers can exploit this to crash or freeze the system with specially crafted inputs.

Fix: This issue has been patched in version 0.8.5.
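The performance trap is a common one: rebuilding the whole token list on every placeholder hit. A toy illustration of the quadratic shape and its linear fix (the marker and function names are illustrative, not vLLM's code):

```python
PLACEHOLDER = "<|audio|>"  # illustrative marker

def expand_quadratic(tokens, expansion):
    """Naive expansion: slicing copies the entire list on every
    placeholder hit, giving O(n^2) work on placeholder-heavy input."""
    out = list(tokens)
    i = 0
    while i < len(out):
        if out[i] == PLACEHOLDER:
            out = out[:i] + expansion + out[i + 1:]  # full copy each time
            i += len(expansion)
        else:
            i += 1
    return out

def expand_linear(tokens, expansion):
    """Single pass with append/extend: O(n) in the output size."""
    out = []
    for tok in tokens:
        if tok == PLACEHOLDER:
            out.extend(expansion)
        else:
            out.append(tok)
    return out
```

Both produce the same output; only the quadratic version becomes a DoS vector when an attacker packs the prompt with placeholders.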

NVD/CVE Database
04

CVE-2025-32444: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.6.5 and p

security
Apr 30, 2025

vLLM (a system for running AI models efficiently) versions 0.6.5 through 0.8.4 have a critical vulnerability when using mooncake integration. Attackers can execute arbitrary code remotely because the system uses pickle (an unsafe method for converting data into a format that can be transmitted) over unencrypted ZeroMQ sockets (communication channels) that listen to all network connections, making them easily accessible from the internet.

Fix: Update to vLLM version 0.8.5 or later, which has patched this vulnerability.
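The safer pattern for inter-node messaging is a serialization format that can only yield plain data, so a hostile peer cannot trigger code execution on decode. A minimal sketch using JSON (illustrative only, not vLLM's actual wire format):

```python
import json

def encode_msg(obj) -> bytes:
    """Serialize a message as JSON. Unlike pickle, decoding JSON can
    only produce dicts, lists, strings, and numbers, never live
    objects or function calls."""
    return json.dumps(obj).encode()

def decode_msg(data: bytes):
    """Deserialize a JSON message back into plain Python data."""
    return json.loads(data)
```

Encryption and authentication on the transport remain necessary; a safe codec only removes the deserialization-to-RCE path.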

NVD/CVE Database
05

CVE-2025-30202: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.5.2 and p

security
Apr 30, 2025

vLLM versions 0.5.2 through 0.8.4 have a security vulnerability in multi-node deployments where a ZeroMQ socket (a tool for sending messages between different computers) is left open to all network interfaces. An attacker with network access can connect to this socket to see internal vLLM data or deliberately slow down the system by connecting repeatedly without reading the data, causing a denial of service (making the system unavailable or very slow).

Fix: This issue has been patched in version 0.8.5. Update vLLM to version 0.8.5 or later.
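The fix pattern is to bind internal sockets to loopback (or a dedicated internal interface) rather than all interfaces. Sketched here with a plain TCP socket to stay dependency-free; ZeroMQ's `tcp://` endpoints accept the same address choice:

```python
import socket

def open_listener(loopback_only: bool = True) -> socket.socket:
    """Open a TCP listener bound either to loopback (safe default for
    internal traffic) or to all interfaces. Illustrative sketch."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # 0.0.0.0 is reachable from any host on the network; 127.0.0.1
    # restricts the socket to processes on the same machine.
    host = "127.0.0.1" if loopback_only else "0.0.0.0"
    s.bind((host, 0))  # port 0: let the OS pick a free port
    s.listen()
    return s
```

For genuinely multi-node deployments where loopback is not an option, the bind should target the cluster-internal interface and sit behind network-level access controls.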

NVD/CVE Database
06

CVE-2025-1194: A Regular Expression Denial of Service (ReDoS) vulnerability was identified in the huggingface/transformers library, spe

security
Apr 29, 2025

A ReDoS vulnerability (regular expression denial of service, where specially crafted text causes a regex to consume excessive CPU by repeatedly backtracking) was found in the huggingface/transformers library version 4.48.1, specifically in the GPT-NeoX-Japanese model's tokenizer. An attacker could exploit this by sending malicious input that causes the application to hang or crash due to high CPU usage.
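ReDoS patterns typically involve nested quantifiers, where a backtracking engine tries exponentially many ways to split a string that almost matches. An illustrative pattern (not the actual expression from the tokenizer):

```python
import re

evil = re.compile(r"^(a+)+$")   # backtracks ~2^n ways on "a" * n + "!"
safe = re.compile(r"^a+$")      # matches the same strings in linear time

def accepts(pattern, text: str) -> bool:
    """True if the whole text matches the anchored pattern."""
    return pattern.match(text) is not None
```

Mitigations include rewriting to remove the nested quantifier, bounding input length before matching, or switching to a non-backtracking engine such as the third-party `re2` bindings.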

NVD/CVE Database
07

AI Safety Newsletter #53: An Open Letter Attempts to Block OpenAI Restructuring

policy
Apr 29, 2025

Former OpenAI employees and experts published an open letter asking California and Delaware officials to block OpenAI's restructuring from a nonprofit organization into a for-profit company (a Public Benefit Corporation, which balances profit with public benefit). The letter argues that the restructuring would eliminate governance safeguards designed to prevent profit motives from influencing decisions about AGI (artificial general intelligence, highly autonomous systems that outperform humans at most economically valuable work), and would shift control away from a nonprofit board accountable to the public toward a board partly accountable to shareholders.

CAIS AI Safety Newsletter
08

Recap from OWASP Gen AI Security Project’s – NYC Insecure Agents Hackathon

security · research
Apr 25, 2025

AI agents (automated systems that can take actions based on AI decisions) are easy to build with modern tools, but they face several security threats. The OWASP Gen AI Security Project held a hackathon in New York where participants intentionally created insecure agents to identify common security problems.

OWASP GenAI Security
09

Providers of General-Purpose AI Models — What We Know About Who Will Qualify

policy
Apr 25, 2025

On April 22, 2025, the European AI Office published preliminary guidelines explaining which companies count as providers of GPAI models (general-purpose AI models, which are AI systems capable of performing many different tasks across various applications). The guidelines cover seven key topics, including defining what a GPAI model is, identifying who qualifies as a provider, handling open-source exemptions, and compliance requirements such as documentation, copyright policies, and security protections for higher-risk models.

EU AI Act Updates
10

Securing AI’s New Frontier: The Power of Open Collaboration on MCP Security

security · safety
Apr 22, 2025

As AI systems start connecting to real tools and databases through the Model Context Protocol (MCP, a system that lets AI models interact with external applications and data), new security risks appear that older security methods cannot fully handle. The OWASP GenAI Security Project has released research on how to secure MCP, offering defense-in-depth strategies (a layered security approach using multiple protective measures) to help developers build safer AI applications that can act independently in real time.

OWASP GenAI Security
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026