aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,677
[LAST_24H]
23
[LAST_7D]
166
Daily BriefingMonday, March 30, 2026
>

Anthropic's Unreleased Cybersecurity Model Accidentally Exposed: A configuration error leaked details of Anthropic's powerful new AI model called Mythos, designed for cybersecurity use cases with advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The leak raises concerns because the model's improved vulnerability detection could enable more sophisticated cyberattacks, prompting Anthropic to plan a phased rollout to enterprise security teams first.

>

Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where attackers insert malicious commands into input that gets executed) in its model serving code when using `env_manager=LOCAL`, allowing attackers to execute arbitrary commands by manipulating dependency information in the `python_env.yaml` file without any safety checks. (CVE-2025-15379, Critical)

Latest Intel

page 205/268
VIEW ALL
01

CVE-2024-1455: A vulnerability in the langchain-ai/langchain repository allows for a Billion Laughs Attack, a type of XML External Enti

security
Mar 26, 2024

CVE-2024-1455 is a vulnerability in the langchain-ai/langchain repository that allows a Billion Laughs Attack, a type of XML External Entity (XXE) exploitation where an attacker nests multiple layers of entities within an XML document to make the parser consume excessive CPU and memory resources, causing a denial of service (DoS, where a system becomes unavailable to legitimate users).

Critical This Week5 issues
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
>

Multiple High-Severity Flaws in AI Agent Frameworks: CrewAI has several vulnerabilities including Docker fallback issues that enable RCE (remote code execution, where attackers run commands on systems they don't control) when containerization fails (CVE-2026-2287, CVE-2026-2275), while OpenClaw suffers from malicious plugin code execution during installation and sandbox bypass flaws that let agents access other agents' workspaces. SakaDev and HAI Build Code Generator can both be tricked through prompt injection (hiding malicious instructions in normal-looking input) to misclassify dangerous terminal commands as safe and execute them automatically (CVE-2026-30306, CVE-2026-30308).

>

ChatGPT Data Leakage Vulnerability Patched: OpenAI fixed a vulnerability that allowed attackers to secretly extract sensitive user data including conversation messages and uploaded files by exploiting a hidden DNS-based communication channel (covert data transmission using the Domain Name System) in ChatGPT's Linux runtime, bypassing all safety guardrails designed to prevent unauthorized data sharing.

Fix: A patch is available at https://github.com/langchain-ai/langchain/commit/727d5023ce88e18e3074ef620a98137d26ff92a3

NVD/CVE Database
02

The AI Office is hiring

policy
Mar 22, 2024

The European Commission is hiring AI specialists to work in the AI Office, which will enforce the EU's AI Act by overseeing compliance of general-purpose AI models (large AI systems available to the public). The office will have real regulatory powers to require companies to implement safety measures, restrict models, or remove them from the market, and will develop evaluation tools and benchmarks to identify dangerous AI behaviors.

EU AI Act Updates
03

CVE-2024-1727: A Cross-Site Request Forgery (CSRF) vulnerability in gradio-app/gradio allows attackers to upload multiple large files t

security
Mar 21, 2024

CVE-2024-1727 is a CSRF vulnerability (cross-site request forgery, where an attacker tricks a victim into making unintended requests) in Gradio that lets attackers upload large files to a victim's computer without permission. An attacker can create a malicious webpage that, when visited, automatically uploads files to the victim's system, potentially filling up their disk space and causing a denial of service (making the system unusable).

Fix: A patch is available at https://github.com/gradio-app/gradio/commit/84802ee6a4806c25287344dce581f9548a99834a

NVD/CVE Database
04

The AI Office: What is it, and how does it work?

policy
Mar 21, 2024

The European AI Office is a new EU regulator created to oversee general purpose AI (GPAI) models and systems, which are AI systems designed to perform a wide range of tasks, across all 27 EU Member States under the AI Act. It monitors compliance, analyzes emerging risks, develops evaluation capabilities, produces voluntary codes of practice for companies to follow, and coordinates enforcement between national regulators and international partners. The Office also supports small and medium businesses with compliance resources and oversees regulatory sandboxes, which are controlled environments where companies can test AI systems before full deployment.

EU AI Act Updates
05

CVE-2024-29037: datahub-helm provides the Kubernetes Helm charts for deploying Datahub and its dependencies on a Kubernetes cluster. Sta

security
Mar 20, 2024

A vulnerability in datahub-helm (Helm charts, which are templates for deploying applications on Kubernetes clusters) versions 0.1.143 through 0.2.181 allowed personal access tokens (credentials that grant access to the system) to be created using a publicly known default secret key instead of a random one. This meant attackers could potentially generate their own valid tokens to access DataHub instances if Metadata Service Authentication (a security feature) was enabled during a specific vulnerable time period.

Fix: Update to version 0.2.182, which contains a patch for this issue. As a workaround, reset the token signing key to be a random value, which will invalidate active personal access tokens.

NVD/CVE Database
06

CVE-2024-29018: Moby is an open source container framework that is a key component of Docker Engine, Docker Desktop, and other distribut

security
Mar 20, 2024

Moby (the container framework underlying Docker) has a bug in how it handles DNS requests from internal networks (networks isolated from external communication). When a container on an internal network needs to resolve a domain name, Moby forwards the request through the host's network namespace instead of the container's own network, which can leak data to external servers that an attacker controls. Docker Desktop is not affected by this issue.

Fix: Moby releases 26.0.0, 25.0.4, and 23.0.11 are patched to prevent forwarding any DNS requests from internal networks. As a workaround, run containers intended to be solely attached to internal networks with a custom upstream address, which will force all upstream DNS queries to be resolved from the container's network namespace.

NVD/CVE Database
07

CVE-2023-49785: NextChat, also known as ChatGPT-Next-Web, is a cross-platform chat user interface for use with ChatGPT. Versions 2.11.2

security
Mar 12, 2024

NextChat (also called ChatGPT-Next-Web) version 2.11.2 and earlier has two security flaws: SSRF (server-side request forgery, where attackers trick the server into making unwanted requests) and XSS (cross-site scripting, where attackers inject malicious code into web pages). These flaws let attackers read internal server data, make changes to it, hide their location by routing traffic through the app, or attack other targets on the internet.

Fix: According to the source: "Users may avoid exposing the application to the public internet or, if exposing the application to the internet, ensure it is an isolated network with no access to any other internal resources." The source also notes that as of publication, no patch is available.

NVD/CVE Database
08

CVE-2024-27565: A Server-Side Request Forgery (SSRF) in weixin.php of ChatGPT-wechat-personal commit a0857f6 allows attackers to force t

security
Mar 5, 2024

CVE-2024-27565 is a server-side request forgery (SSRF, a flaw that allows attackers to trick a server into making unwanted requests to other systems) vulnerability found in the weixin.php file of ChatGPT-wechat-personal at commit a0857f6. This vulnerability lets attackers force the application to make arbitrary requests on their behalf. The vulnerability has a CVSS 4.0 severity rating (a moderate score on a 0-10 scale measuring how serious a security flaw is).

NVD/CVE Database
09

CVE-2024-28088: LangChain through 0.1.10 allows ../ directory traversal by an actor who is able to control the final part of the path pa

security
Mar 4, 2024

LangChain versions up to 0.1.10 have a path traversal vulnerability (a flaw where an attacker can use ../ sequences to access files outside the intended directory) that allows someone controlling part of a file path to load configurations from anywhere instead of just the intended GitHub repository, potentially exposing API keys or enabling remote code execution (running malicious commands on a system). This bug affects how the load_chain function handles file paths.

Fix: A patch is available in langchain-core version 0.1.29 and later. Update to this version or newer to fix the vulnerability.

NVD/CVE Database
10

Who Am I? Conditional Prompt Injection Attacks with Microsoft Copilot

securityresearch
Mar 3, 2024

Attackers can create conditional prompt injection attacks (tricking an AI by hiding malicious instructions in its input that activate only for specific users) against Microsoft Copilot by leveraging user identity information like names and job titles that the AI includes in its context. A researcher demonstrated this by sending an email with hidden instructions that made Copilot behave differently depending on which person opened it, showing that LLM applications become more vulnerable as attackers learn to target specific users rather than all users equally.

Embrace The Red
Prev1...203204205206207...268Next
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026