aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,718
Last 24 hours: 39
Last 7 days: 176
Daily Briefing: Tuesday, March 31, 2026

OpenAI Closes Record $122 Billion Funding Round: OpenAI raised $122 billion at an $852 billion valuation with backing from SoftBank, Amazon, and Nvidia. The company now serves 900 million weekly users and generates $2 billion in monthly revenue as it prepares for a potential IPO, despite not yet being profitable.


Multiple Critical FastGPT Vulnerabilities Disclosed: FastGPT versions before 4.14.9.5 contain three high-severity flaws including CVE-2026-34162 (unauthenticated proxy endpoint allowing unauthorized server-side requests), CVE-2026-34163 (SSRF vulnerability letting attackers scan internal networks and access cloud metadata), and issues with MCP tools endpoints that accept user URLs without validation.
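The SSRF and open-proxy issues above come down to fetching user-supplied URLs without validation. A minimal sketch of a pre-request check (illustrative only, not FastGPT's actual fix; a production check must also pin DNS resolution to defeat rebinding):

```python
import ipaddress
from urllib.parse import urlparse

# Hostnames that commonly resolve to internal services; illustrative only
BLOCKED_HOSTNAMES = {"localhost", "metadata.google.internal"}

def looks_like_ssrf(url: str) -> bool:
    host = urlparse(url).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Not a literal IP; a real check must also resolve DNS and
        # re-validate, or attackers can use rebinding domains.
        return host in BLOCKED_HOSTNAMES
    # Reject private, loopback, and link-local (cloud metadata) ranges
    return ip.is_private or ip.is_loopback or ip.is_link_local
```

The cloud-metadata case matters most here: 169.254.169.254 is link-local, which is exactly the address the FastGPT SSRF could be pointed at.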

Claude SDK Filesystem Sandbox Escapes: Both the TypeScript (CVE-2026-34451) and Python (CVE-2026-34452) versions of the Claude SDK had vulnerabilities in their filesystem memory tools: attackers could use prompt injection or symlinks to access files outside the intended sandbox directories, potentially reading or modifying sensitive data.

Axios npm Supply Chain Attack Impacts Millions: Attackers compromised the npm account of Axios' lead maintainer and published malicious versions containing a remote access trojan (malware that gives attackers control over infected systems). The library is downloaded 100 million times per week and used in 80% of cloud environments; the malicious versions were detected and removed within hours.

Claude AI Discovers RCE Bugs in Vim and Emacs: Claude helped identify remote code execution vulnerabilities (where attackers can run commands on systems they don't own) in the Vim and GNU Emacs text editors. The bugs trigger simply by opening a malicious file, exploiting modeline handling in Vim and automatic Git operations in Emacs.

Latest Intel

01

MITRE ATLAS v4.9.0

security, research
Apr 22, 2025

Version 4.9.0 is a release of the MITRE ATLAS framework, which documents attack techniques and defenses specific to AI systems. The update adds new attack methods such as reverse shells (unauthorized remote access to a system), model corruption, and supply chain attacks targeting AI tools, while also updating existing security techniques and adding real-world case studies of AI-related security breaches.

MITRE ATLAS Releases
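The Claude SDK sandbox escapes described in the briefing above are a classic path-containment failure. A minimal sketch of the usual defense (a hypothetical helper, not the SDK's actual patch): resolve symlinks and ".." first, then check containment.

```python
import os

def resolve_inside(sandbox: str, user_path: str) -> str:
    # Resolve symlinks and ".." BEFORE checking containment;
    # checking the raw string lets "sandbox/../etc/passwd" or a
    # symlink planted inside the sandbox escape it.
    real_sandbox = os.path.realpath(sandbox)
    candidate = os.path.realpath(os.path.join(real_sandbox, user_path))
    if os.path.commonpath([real_sandbox, candidate]) != real_sandbox:
        raise PermissionError(f"{user_path!r} escapes the sandbox")
    return candidate
```

Note that this only addresses the symlink vector; the prompt-injection vector additionally requires treating model-generated paths as untrusted input in the first place.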
02

AI Safety Newsletter #52: An Expert Virology Benchmark

safety, research
Apr 22, 2025

Researchers created the Virology Capabilities Test (VCT), a benchmark measuring how well AI systems can solve complex virology lab problems, and found that leading AI models like OpenAI's o3 now outperform human experts in specialized virology knowledge. This is concerning because virology knowledge has dual-use potential, meaning the same capabilities that could help prevent disease could also be misused by bad actors to develop dangerous pathogens.

Fix: The authors recommend that highly dual-use virology capabilities should be excluded from publicly-available AI systems, and know-your-customer mechanisms (verification processes to confirm who customers are and what they'll use the technology for) could ensure these capabilities remain accessible only to researchers in institutions with appropriate safety protocols. As a result of the paper, xAI has added new safeguards to their systems.

CAIS AI Safety Newsletter
03

CVE-2025-32434: PyTorch is a Python package that provides tensor computation with strong GPU acceleration and deep neural networks built

security
Apr 18, 2025

PyTorch (a Python package for machine learning computations) versions 2.5.1 and earlier contain a remote code execution (RCE, where an attacker can run commands on a system they don't own) vulnerability when loading models with torch.load, even when the weights_only=True option is set. The vulnerability stems from insecure deserialization (converting data back into objects without checking whether it is safe), which allows attackers who supply a malicious model file to execute arbitrary commands remotely.

Fix: This issue has been patched in version 2.6.0. Users should upgrade PyTorch to version 2.6.0 or later.

NVD/CVE Database
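The PyTorch advisory above is an instance of insecure deserialization. Python's pickle format, which torch.load builds on, can invoke arbitrary callables during loading; this self-contained demonstration (the `Evil` class is a hypothetical illustration, not the actual exploit) shows why loading an untrusted model file is equivalent to running its author's code:

```python
import pickle

class Evil:
    # __reduce__ tells pickle how to rebuild an object; an attacker
    # can return any callable plus arguments, which pickle invokes
    # at load time -- before the caller ever sees the "model".
    def __reduce__(self):
        return (eval, ("40 + 2",))

payload = pickle.dumps(Evil())  # what a malicious model file contains
result = pickle.loads(payload)  # attacker-chosen code runs here
print(result)                   # 42 -- eval() already executed
```

A real payload would return something like os.system instead of eval, which is why the only robust fix is a loader that never executes embedded callables.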
04

CVE-2025-32377: Rasa Pro is a framework for building scalable, dynamic conversational AI assistants that integrate large language models

security
Apr 18, 2025

Rasa Pro is a framework for building conversational AI assistants that use large language models. A vulnerability was found where voice connectors (tools that receive audio input) did not properly check user authentication even when security tokens were configured, allowing attackers to send voice data to the system without permission.

Fix: This issue has been patched in versions 3.9.20, 3.10.19, 3.11.7 and 3.12.6 for the audiocodes, audiocodes_stream, and genesys connectors. Update Rasa Pro to one of these versions or later.

NVD/CVE Database
05

OWASP Gen AI Security Project Announces Nine New Sponsors and Major RSA Conference Presence to Advance Generative AI Security

policy, industry
Apr 17, 2025

The OWASP Generative AI Security Project, part of the OWASP application-security nonprofit, announced nine new corporate sponsors to support its work on improving security for generative AI technologies. The sponsors, including ByteDance and Trend Micro, represent increased investment and momentum in making AI systems more secure.

OWASP GenAI Security
06

CVE-2025-3730: A vulnerability, which was classified as problematic, was found in PyTorch 2.6.0. Affected is the function torch.nn.func

security
Apr 16, 2025

PyTorch 2.6.0 contains a vulnerability in the torch.nn.functional.ctc_loss function (a component used for speech recognition tasks) that can cause denial of service (making the system unavailable). The vulnerability requires local access to exploit and has been publicly disclosed, though its actual existence is still uncertain.

Fix: Apply patch 46fc5d8e360127361211cb237d5f9eef0223e567. The project's security policy also recommends avoiding unknown models, which could have malicious effects.

NVD/CVE Database
07

CVE-2025-3677: A vulnerability classified as critical was found in lm-sys fastchat up to 0.2.36. This vulnerability affects the functio

security
Apr 16, 2025

A critical vulnerability (CVE-2025-3677) was found in lm-sys FastChat version 0.2.36 and earlier in the file apply_delta.py. The flaw involves deserialization (converting data back into code or objects, which can be dangerous if the data comes from an untrusted source) and can only be exploited by someone with local access to the affected system.

NVD/CVE Database
08

CVE-2025-31363: Mattermost versions 10.4.x <= 10.4.2, 10.5.x <= 10.5.0, 9.11.x <= 9.11.9 fail to restrict domains the LLM can request to

security
Apr 16, 2025

Mattermost (a team communication platform) versions 10.4.2 and earlier, 10.5.0 and earlier, and 9.11.9 and earlier don't properly block which websites their built-in AI tool can contact. This allows logged-in users to use prompt injection (tricking the AI by hiding instructions in their input) to steal data from servers that the Mattermost system can access.

NVD/CVE Database
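The Mattermost issue is a missing egress restriction on LLM-initiated requests. A minimal sketch of a host allowlist check (hypothetical allowlist; a production gate must also handle redirects and DNS rebinding):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com"}  # hypothetical: hosts the LLM may call

def llm_may_fetch(url: str) -> bool:
    parsed = urlparse(url)
    # Require HTTPS and an exact hostname match. urlparse strips
    # userinfo, so tricks like https://api.example.com@evil.test/
    # still resolve to hostname "evil.test" and are rejected.
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS
```

Without such a gate, prompt-injected tool calls can reach any server the Mattermost host can, which is exactly the exfiltration path the CVE describes.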
09

AI Safety Newsletter #51: AI Frontiers

policy, safety
Apr 15, 2025

The AI Safety Newsletter highlights the launch of AI Frontiers, a new publication featuring expert commentary on critical AI challenges including national security risks, resource access inequality, risk management approaches, and governance of autonomous systems (AI agents that can make decisions without human input). The newsletter presents diverse viewpoints on how society should navigate AI's wide-ranging impacts on jobs, health, and security.

CAIS AI Safety Newsletter
10

CVE-2025-3579: In versions prior to Aidex 1.7, an authenticated malicious user, taking advantage of an open registry, could execute una

security
Apr 15, 2025

In Aidex versions before 1.7, a logged-in attacker could exploit an open registry to run unauthorized commands on the system through prompt injection attacks (tricking the AI by hiding malicious instructions in user input) via the chat message endpoint. This allowed them to execute operating system commands, access databases, and invoke framework functions.

Fix: Update to Aidex version 1.7 or later.

NVD/CVE Database
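The Aidex flaw chained prompt injection into OS command execution. The usual mitigation is to never route model output to a shell, instead mapping it onto a fixed set of handlers; a minimal sketch with hypothetical actions (not Aidex's actual fix):

```python
# Hypothetical read-only actions the assistant may trigger
ALLOWED_ACTIONS = {
    "status": lambda: "all systems nominal",
    "version": lambda: "aidex 1.7",
}

def dispatch(model_output: str) -> str:
    # Model text selects a key from a closed set; it is never
    # interpolated into a shell command or database query, so
    # injected instructions cannot introduce new capabilities.
    action = ALLOWED_ACTIONS.get(model_output.strip())
    if action is None:
        raise PermissionError(f"action {model_output!r} is not allowed")
    return action()
```

This reduces the blast radius of a successful injection from "arbitrary OS commands" to "one of the pre-approved actions".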
Critical This Week

critical | Mar 30, 2026 | NVD/CVE Database
CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

critical | Mar 27, 2026 | NVD/CVE Database
CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

critical | Mar 27, 2026 | CSO Online
Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

critical | Mar 26, 2026 | CISA Known Exploited Vulnerabilities
CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability
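The MLflow entry above is a command injection bug. The generic defense is building argv lists instead of shell strings; a sketch (the CLI invocation is illustrative, not MLflow's patched code):

```python
def serve_argv(model_uri: str) -> list[str]:
    # The untrusted model_uri becomes a single argv element, e.g.
    # subprocess.run(serve_argv(uri), check=True). Interpolating it
    # into a shell=True command string is the pattern that command
    # injection exploits: shell metacharacters like ";" would then
    # terminate the command and start an attacker-chosen one.
    return ["mlflow", "models", "serve", "--model-uri", model_uri]
```

With argv lists, a hostile URI such as "m; rm -rf /" stays an inert argument rather than becoming a second shell command.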