aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,757
[LAST_24H]
23
[LAST_7D]
176
Daily BriefingThursday, April 2, 2026
>

Model Context Protocol Security Gaps Highlighted: MCP (a system that connects AI agents to data sources) has gained business adoption but faces serious risks including prompt injection (tricking an AI by hiding instructions in its input), token theft, and data leaks. Despite recent improvements like OAuth support and an official registry, organizations still lack adequate tools for access controls, authorization checks, and detailed logging to protect sensitive data.

Latest Intel

page 139/276
VIEW ALL
01

A Unified Decision Rule for Generalized Out-of-Distribution Detection

researchsafety
Critical This Week5 issues
critical

GHSA-6vh2-h83c-9294: PraisonAI: Python Sandbox Escape via str Subclass startswith() Override in execute_code

CVE-2026-34938GitHub Advisory DatabaseApr 1, 2026
Apr 1, 2026
Dec 9, 2025

This research paper addresses generalized out-of-distribution detection (OOD detection, where an AI system identifies inputs that are very different from its training data), which is important for AI systems used in safety-critical applications. Rather than focusing on designing better scoring functions, the authors propose a new decision rule called the generalized Benjamini Hochberg procedure that uses hypothesis testing (a statistical method for making decisions about data) to determine whether an input is out-of-distribution, and they prove this method controls false positive rates better than traditional threshold-based approaches.

IEEE Xplore (Security & AI Journals)
02

Side-Channel Analysis Based on Multiple Leakage Models Ensemble

researchsecurity
Dec 8, 2025

This research proposes a new framework for side-channel analysis (SCA, a type of attack that exploits physical information like power consumption or timing to break cryptography) by combining multiple different leakage models (ways of measuring how a cryptographic device leaks secrets) using ensemble learning (combining many weaker models into one stronger one). The framework improves how well attackers can recover secret keys by using deep learning with complementary information from different measurement approaches, and the authors prove mathematically that their ensemble model gets closer to the true secret distribution.

IEEE Xplore (Security & AI Journals)
03

CVE-2025-13922: The Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI plugin for WordPress is vulnerable to time-based bli

security
Dec 6, 2025

A WordPress plugin called AI Autotagger with OpenAI has a security flaw called time-based blind SQL injection (a technique where attackers sneak extra database commands into legitimate queries by exploiting how the software processes user input) in versions up to 3.40.1. Attackers with contributor-level access or higher can use this flaw to steal sensitive data from the database, slow down the website, or extract information through time-delay tricks.

NVD/CVE Database
04

CVE-2025-34291: Langflow versions up to and including 1.6.9 contain a chained vulnerability that enables account takeover and remote cod

security
Dec 5, 2025

Langflow versions up to 1.6.9 have a chained vulnerability that allows attackers to take over user accounts and run arbitrary code on the system. The flaw combines two misconfigurations: overly permissive CORS settings (CORS, or cross-origin resource sharing, allows webpages from different domains to access each other) that accept requests from any origin with credentials, and refresh token cookies (a token used to get new access credentials) set to SameSite=None, which allows a malicious webpage to steal valid tokens and impersonate a victim.

NVD/CVE Database
05

Homophily Edge Augment Graph Neural Network for High-Class Homophily Variance Learning

research
Dec 5, 2025

Graph Neural Networks (GNNs, machine learning models that work with interconnected data) perform poorly at detecting anomalies in graphs because of high Class Homophily Variance (CHV), meaning some node types cluster together while others scatter. The researchers propose HEAug, a new GNN model that creates additional connections between nodes that are similar in features but not originally linked, and adjusts its training process to avoid generating unwanted connections.

Fix: The proposed mitigation is the HEAug (Homophily Edge Augment Graph Neural Network) model itself. According to the source, it works by: (1) sampling new homophily adjacency matrices (connection patterns) from scratch using self-attention mechanisms, (2) leveraging nodes that are relevant in feature space but not directly connected in the original graph, and (3) modifying the loss function to punish the generation of unnecessary heterophilic edges by the model.

IEEE Xplore (Security & AI Journals)
06

CVE-2025-12189: The Bread & Butter: Gate content + Capture leads + Collect first-party data + Nurture with Ai agents plugin for WordPres

security
Dec 5, 2025

A WordPress plugin called 'The Bread & Butter' has a security flaw called CSRF (cross-site request forgery, where an attacker tricks someone into performing an unwanted action on a website) in versions up to 7.10.1321. The flaw exists in the image upload function because it lacks proper nonce validation (a security token that verifies a request is legitimate), allowing attackers to upload malicious files that could lead to RCE (remote code execution, where an attacker runs commands on the website) if they can trick an admin into clicking a malicious link.

NVD/CVE Database
07

The Normalization of Deviance in AI

safetyresearch
Dec 4, 2025

The AI industry is gradually accepting LLM (large language model) outputs as reliable without questioning them, similar to how NASA ignored warning signs before the Challenger disaster. This 'normalization of deviance' (accepting behavior that deviates from proper standards as normal) is particularly risky in agentic systems (AI systems that can take independent actions without human approval at each step), where unchecked LLM decisions could cause serious problems.

Embrace The Red
08

CVE-2025-66479: Anthropic Sandbox Runtime is a lightweight sandboxing tool for enforcing filesystem and network restrictions on arbitrar

security
Dec 4, 2025

Anthropic Sandbox Runtime is a tool that restricts what processes can access on a computer's filesystem (file storage) and network without needing containers (isolated computing environments). Before version 0.0.16, a bug prevented the network sandbox from working correctly when no allowed domains were specified, which could let code inside the sandbox make network requests it shouldn't be able to make.

Fix: A patch was released in v0.0.16 that fixes this issue.

NVD/CVE Database
09

v0.14.10

industry
Dec 4, 2025

Version 0.14.10 of llama-index-core added a mock function calling LLM (a simulated language model that can pretend to execute functions), while related packages fixed typos and added new integrations like Airweave tool support for advanced search capabilities. This is a routine software release with feature additions and bug fixes.

LlamaIndex Security Releases
10

CVE-2025-33211: NVIDIA Triton Server for Linux contains a vulnerability where an attacker may cause an improper validation of specified

security
Dec 3, 2025

NVIDIA Triton Server for Linux has a vulnerability where attackers can bypass input validation (improper validation of specified quantity in input) by sending malformed data. This flaw could allow an attacker to cause a denial of service attack (making a system unavailable to legitimate users).

NVD/CVE Database
Prev1...137138139140141...276Next
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026