aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,727
[LAST_24H]
39
[LAST_7D]
177
Daily BriefingWednesday, April 1, 2026
>

Attack Surface Management Tools Now Using AI Agents: A new buying guide highlights that Cyber Asset Attack Surface Management (CAASM) and External Attack Surface Management (EASM) tools are increasingly using agentic AI (AI systems that can take independent actions) to automatically find and reduce security risks across a company's digital resources.

Latest Intel

page 165/273
VIEW ALL
01

CVE-2025-54430: dedupe is a python library that uses machine learning to perform fuzzy matching, deduplication and entity resolution qui

security
Jul 30, 2025

The dedupe Python library (which uses machine learning for fuzzy matching, deduplication, and entity resolution on structured data) had a critical vulnerability in its GitHub Actions workflow that allowed attackers to trigger code execution by commenting @benchmark on pull requests, potentially exposing the GITHUB_TOKEN (a credential that grants access to modify repository contents) and leading to repository takeover.

Critical This Week5 issues
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026

Fix: This is fixed by commit 3f61e79.

NVD/CVE Database
02

CVE-2025-54381: BentoML is a Python library for building online serving systems optimized for AI apps and model inference. In versions 1

security
Jul 29, 2025

BentoML versions 1.4.0 to 1.4.19 have an SSRF vulnerability (server-side request forgery, where an attacker tricks a server into making requests to internal or restricted addresses) in their file upload feature. An unauthenticated attacker can exploit this to force the server to download files from any URL, including internal network addresses and cloud metadata endpoints (services that store sensitive information), without any validation.

Fix: Upgrade to version 1.4.19 or later, which contains a patch for the issue.

NVD/CVE Database
03

CVE-2025-46059: langchain-ai v0.3.51 was discovered to contain an indirect prompt injection vulnerability in the GmailToolkit component.

security
Jul 29, 2025

LangChain AI version 0.3.51 contains an indirect prompt injection vulnerability (a technique where attackers hide malicious instructions in data like emails to trick AI systems) in its GmailToolkit component that could allow attackers to run arbitrary code through crafted emails. However, the supplier disputes this, arguing the actual vulnerability comes from user code that doesn't follow LangChain's security guidelines rather than from LangChain itself.

NVD/CVE Database
04

Teleportation: Defense Against Stealing Attacks of Data-Driven Healthcare APIs

securityresearch
Jul 29, 2025

This research addresses the problem of stealing attacks against healthcare APIs (application programming interfaces, which are tools that let software systems communicate with each other), where attackers try to copy or extract data from medical AI models. The authors propose a defense strategy called "adaptive teleportation" that modifies incoming queries (requests) in clever ways to fool attackers while still allowing legitimate users to get accurate results from the healthcare API.

Fix: The source proposes 'adaptive teleportation of incoming queries' as the defense mechanism. According to the text, 'The adaptive teleportation operations are generated based on the formulated bi-level optimization target and follows the evolution trajectory depicted by the Wasserstein gradient flows, which effectively push attacking queries to cross decision boundary while constraining the deviation level of benign queries.' This approach 'provides misleading information on malicious queries while preserving model utility.' The authors validated this mechanism on three healthcare prediction tasks (inhospital mortality, bleed risk, and ischemic risk prediction) and found it 'significantly more effective to suppress the performance of cloned model while maintaining comparable serving utility compared to existing defense approaches.'

IEEE Xplore (Security & AI Journals)
05

The Month of AI Bugs 2025

securityresearch
Jul 28, 2025

The Month of AI Bugs 2025 is an initiative to expose security vulnerabilities in agentic AI systems (AI that can take actions on its own), particularly coding agents, through responsible disclosure and public education. The campaign will publish over 20 blog posts demonstrating exploits, including prompt injection (tricking an AI by hiding malicious instructions in its input) attacks that can allow attackers to compromise a developer's computer without permission. While some vendors have fixed reported vulnerabilities quickly, others have ignored reports for months or stopped responding, and many appear uncertain how to address novel AI security threats.

Embrace The Red
06

CVE-2025-5120: A sandbox escape vulnerability was identified in huggingface/smolagents version 1.14.0, allowing attackers to bypass the

security
Jul 27, 2025

A sandbox escape vulnerability (a security flaw allowing code to break out of a restricted execution environment) was found in huggingface/smolagents version 1.14.0 that lets attackers bypass safety restrictions and achieve remote code execution (RCE, running commands on a system they don't own). The flaw is in the local_python_executor.py module, which failed to properly block Python code execution even though it had safety checks in place.

Fix: The issue is resolved in version 1.17.0.

NVD/CVE Database
07

CVE-2025-54413: skops is a Python library which helps users share and ship their scikit-learn based models. Versions 0.11.0 and below co

security
Jul 26, 2025

skops is a Python library for sharing scikit-learn machine learning models. Versions 0.11.0 and below have a flaw in MethodNode that allows attackers to access unexpected object fields using dot notation, potentially leading to arbitrary code execution (running any code on a system) when loading a model file.

Fix: This is fixed in version 12.0.0. Users should update to version 12.0.0 or later.

NVD/CVE Database
08

CVE-2025-54412: skops is a Python library which helps users share and ship their scikit-learn based models. Versions 0.11.0 and below co

security
Jul 26, 2025

skops is a Python library for sharing scikit-learn (a machine learning toolkit) based models. Versions 0.11.0 and below have a flaw in the OperatorFuncNode component that allows attackers to hide the execution of untrusted code, potentially leading to arbitrary code execution (running any commands on a system). This vulnerability can be exploited through code reuse attacks that make unsafe functions appear trustworthy.

Fix: Update to version 0.12.0, where this vulnerability is fixed.

NVD/CVE Database
09

CVE-2025-54558: OpenAI Codex CLI before 0.9.0 auto-approves ripgrep (aka rg) execution even with the --pre or --hostname-bin or --search

security
Jul 25, 2025

OpenAI Codex CLI versions before 0.9.0 have a security flaw where ripgrep (a command-line search tool) can be executed automatically without requiring user approval, even when security flags like --pre, --hostname-bin, or --search-zip are used. This means an attacker could potentially run ripgrep commands without proper user consent.

Fix: Update OpenAI Codex CLI to version 0.9.0 or later.

NVD/CVE Database
10

CVE-2025-7780: The AI Engine plugin for WordPress is vulnerable to Sensitive Information Exposure in all versions up to, and including,

security
Jul 24, 2025

The AI Engine WordPress plugin (a tool that adds AI features to WordPress websites) has a security flaw in versions up to 2.9.4 where the simpleTranscribeAudio endpoint (a connection point for audio transcription) fails to check what types of file locations are allowed before accessing files. This allows attackers with basic user access to read any file on the web server and steal it through the plugin's OpenAI integration (connection to OpenAI's service).

NVD/CVE Database
Prev1...163164165166167...273Next
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026