aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch
The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,677 · Last 24 hours: 23 · Last 7 days: 167
Daily Briefing: Monday, March 30, 2026

Anthropic's Unreleased Cybersecurity Model Accidentally Exposed: A configuration error leaked details of Anthropic's powerful new AI model called Mythos, designed for cybersecurity use cases with advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The leak raises concerns because the model's improved vulnerability detection could enable more sophisticated cyberattacks, prompting Anthropic to plan a phased rollout to enterprise security teams first.


Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where attackers insert malicious commands into input that gets executed) in its model serving code when using `env_manager=LOCAL`, allowing attackers to execute arbitrary commands by manipulating dependency information in the `python_env.yaml` file without any safety checks. (CVE-2025-15379, Critical)
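The bug class is easy to see in miniature. The sketch below is hypothetical and is not MLflow's actual serving code: splicing a config-derived dependency string into a shell command line lets shell metacharacters smuggle in a second command, while passing an argument vector does not.

```python
import shlex

# A dependency string read from an attacker-controlled config file
# (standing in for a value parsed out of python_env.yaml).
dep = "numpy; touch /tmp/pwned"

# VULNERABLE pattern: interpolating the string into a shell command.
# A POSIX shell would see two commands, as a shell-like lexer shows:
cmd = f"pip install {dep}"
tokens = list(shlex.shlex(cmd, punctuation_chars=True))
assert ";" in tokens and "touch" in tokens  # a second command smuggled in

# Safe pattern: pass an argument vector so no shell parses the string;
# the payload becomes a single (invalid) package-name argument.
argv = ["pip", "install", "--", dep]
assert argv[-1] == "numpy; touch /tmp/pwned"
```

The argument-vector form is the standard remediation for this class of flaw: the untrusted string never reaches a shell parser at all.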

Latest Intel

01. CVE-2023-6753: Path Traversal in GitHub repository mlflow/mlflow prior to 2.9.2
security · Dec 13, 2023

CVE-2023-6753 is a path traversal vulnerability (a security flaw where an attacker can access files outside the intended directory by using special path characters) found in MLflow versions before 2.9.2. The vulnerability allows unauthorized access to restricted files on a system running the affected software.
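The standard guard against this class of flaw is to resolve the requested path and verify it still sits under the intended base directory. A minimal sketch (a generic helper, not MLflow's fix; requires Python 3.9+ for `is_relative_to`):

```python
from pathlib import Path

def safe_join(base: str, requested: str) -> Path:
    """Resolve a user-supplied relative path, refusing any escape from base."""
    base_dir = Path(base).resolve()
    target = (base_dir / requested).resolve()
    if not target.is_relative_to(base_dir):  # Python 3.9+
        raise PermissionError(f"path escapes serving root: {requested!r}")
    return target
```

Resolving before the containment check is what defeats `../` sequences and symlink tricks; inspecting the raw string is not enough.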


Multiple High-Severity Flaws in AI Agent Frameworks: CrewAI has several vulnerabilities including Docker fallback issues that enable RCE (remote code execution, where attackers run commands on systems they don't control) when containerization fails (CVE-2026-2287, CVE-2026-2275), while OpenClaw suffers from malicious plugin code execution during installation and sandbox bypass flaws that let agents access other agents' workspaces. SakaDev and HAI Build Code Generator can both be tricked through prompt injection (hiding malicious instructions in normal-looking input) to misclassify dangerous terminal commands as safe and execute them automatically (CVE-2026-30306, CVE-2026-30308).


ChatGPT Data Leakage Vulnerability Patched: OpenAI fixed a vulnerability that allowed attackers to secretly extract sensitive user data including conversation messages and uploaded files by exploiting a hidden DNS-based communication channel (covert data transmission using the Domain Name System) in ChatGPT's Linux runtime, bypassing all safety guardrails designed to prevent unauthorized data sharing.
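The advisory does not describe OpenAI's detection approach, but as a general defensive illustration, DNS tunneling is commonly flagged in egress logs by looking for long, high-entropy leftmost labels, since encoded payloads look nothing like ordinary hostnames. A minimal heuristic sketch (threshold and length cutoff are illustrative assumptions):

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a DNS label."""
    if not label:
        return 0.0
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in Counter(label).values())

def looks_like_exfil(hostname: str, threshold: float = 3.5) -> bool:
    # Encoded payloads (hex/base32 chunks) have near-uniform character
    # distributions, so long high-entropy leftmost labels are a classic
    # tunneling signature; real hostnames rarely look like this.
    label = hostname.split(".")[0]
    return len(label) >= 20 and label_entropy(label) >= threshold
```

A 32-character hex label scores at or near 4 bits per character and trips the check, while `updates.example.com` does not.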

Fix: Update MLflow to version 2.9.2 or later. A patch is available at https://github.com/mlflow/mlflow/commit/1c6309f884798fbf56017a3cc808016869ee8de4.

NVD/CVE Database
02. Malicious ChatGPT Agents: How GPTs Can Quietly Grab Your Data (Demo)
security · safety · Dec 12, 2023

A researcher demonstrated that malicious GPTs (custom ChatGPT agents) can secretly steal user data by embedding hidden images in conversations that send information to external servers, and can also trick users into sharing personal details like passwords. OpenAI's validation checks for publishing GPTs can be easily bypassed by slightly rewording malicious instructions, allowing harmful GPTs to be shared publicly, though the researcher reported these vulnerabilities to OpenAI in November 2023 without receiving a fix.
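A common mitigation on the rendering side (a generic sketch, not OpenAI's actual fix) is to refuse to render markdown images whose host is not on an allowlist, since each rendered image is an automatic outbound request whose URL can carry stolen data:

```python
import re
from urllib.parse import urlparse

MD_IMAGE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")

def strip_untrusted_images(markdown: str, allowed_hosts: set[str]) -> str:
    """Drop markdown images whose URL host is not allowlisted.

    Rendering model output as markdown turns every image into an
    automatic outbound request, and an injected image URL can carry
    conversation data in its path or query string. Refusing unknown
    hosts closes that channel.
    """
    def repl(match: re.Match) -> str:
        host = urlparse(match.group(1)).hostname or ""
        return match.group(0) if host in allowed_hosts else ""
    return MD_IMAGE.sub(repl, markdown)
```

Proxying allowed images through a trusted CDN, as some chat clients do, is the stronger variant of the same idea.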

Embrace The Red
03. CVE-2023-35625: Azure Machine Learning Compute Instance for SDK Users Information Disclosure Vulnerability
security · Dec 12, 2023

CVE-2023-35625 is a vulnerability in Azure Machine Learning Compute Instance that allows unauthorized users to access sensitive information through the SDK (software development kit, a collection of tools for building applications). The vulnerability is classified as an information disclosure issue, meaning private data could be exposed to people who shouldn't see it.

NVD/CVE Database
04. CVE-2023-6709: Improper Neutralization of Special Elements Used in a Template Engine in GitHub repository mlflow/mlflow prior to 2.9.2
security · Dec 12, 2023

CVE-2023-6709 is a vulnerability in MLflow (a machine learning tool) versions before 2.9.2 involving improper neutralization of special elements in a template engine (a system that generates text by filling in placeholders in templates). This weakness could potentially allow attackers to manipulate how the software processes certain input data.

Fix: Update MLflow to version 2.9.2 or later. A patch is available at https://github.com/mlflow/mlflow/commit/432b8ccf27fd3a76df4ba79bb1bec62118a85625.
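Template injection has a neat standard-library miniature in Python (a hypothetical illustration of the bug class, not MLflow's template code): `str.format` on an attacker-controlled template grants attribute traversal, while `string.Template` substitutes plain names only.

```python
from string import Template

class ServerConfig:
    """Stands in for an object the template is rendered against."""
    def __init__(self):
        self.secret_key = "hunter2"

cfg = ServerConfig()

def render_unsafe(template: str) -> str:
    # VULNERABLE: the template text is attacker-controlled, and
    # str.format grants attribute traversal on the bound object.
    return template.format(cfg=cfg)

def render_safe(template: str, **values: str) -> str:
    # string.Template substitutes plain $names only -- no attribute
    # access, no calls -- so special elements stay inert.
    return Template(template).safe_substitute(values)
```

A payload like `{cfg.secret_key}` leaks the secret through the unsafe renderer but comes out as inert text through the safe one.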

NVD/CVE Database
05. CVE-2023-6568: A reflected Cross-Site Scripting (XSS) vulnerability exists in the mlflow/mlflow repository, specifically within the han…
security · Dec 7, 2023

MLflow, an open-source machine learning platform, has a reflected XSS (cross-site scripting, where an attacker injects malicious JavaScript that runs in a victim's browser) vulnerability in how it handles the Content-Type header in POST requests. An attacker can craft a malicious Content-Type header that gets sent back to the user without proper filtering, allowing arbitrary JavaScript code to execute in the victim's browser.
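The generic fix for reflected XSS is to escape untrusted input before echoing it into HTML. A minimal sketch (a hypothetical handler, not MLflow's code):

```python
import html

def render_error(content_type: str) -> str:
    # Echoing a request header verbatim lets a crafted value such as
    # "<script>...</script>" execute in the victim's browser; escaping
    # turns the markup into inert text before it reaches the page.
    return f"<p>Unsupported Content-Type: {html.escape(content_type)}</p>"
```

Escaping at the output boundary, rather than trying to filter inputs, is the durable rule here.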

NVD/CVE Database
06. CVE-2023-43472: An issue in MLFlow versions 2.8.1 and before allows a remote attacker to obtain sensitive information via a crafted requ…
security · Dec 5, 2023

CVE-2023-43472 is a vulnerability in MLFlow (an open-source platform for managing machine learning workflows) versions 2.8.1 and earlier that allows a remote attacker to obtain sensitive information by sending a specially crafted request to the REST API (the interface that programs use to communicate with MLFlow). The vulnerability has a CVSS severity score of 4.0 (a moderate risk level on a scale of 0-10).

NVD/CVE Database
07. Ekoparty Talk - Prompt Injections in the Wild
security · research · Nov 28, 2023

A security researcher presented at Ekoparty 2023 about prompt injections (attacks where malicious instructions are hidden in inputs to trick an AI into misbehaving) found in real-world LLM applications and chatbots like ChatGPT, Bing Chat, and Google Bard, demonstrating various exploits and discussing mitigations. The talk covered both basic LLM concepts and deep dives into how these attacks work across different AI platforms.

Embrace The Red
08. CVE-2023-48299: TorchServe is a tool for serving and scaling PyTorch models in production. Starting in version 0.1.0 and prior to versio…
security · Nov 21, 2023

TorchServe (a tool for running PyTorch machine learning models as web services) versions before 0.9.0 had a ZipSlip vulnerability (a flaw where an attacker can extract files outside the intended folder by crafting malicious archive files), allowing attackers to upload harmful code disguised in publicly available models that could execute on machines running TorchServe. The vulnerability affected the model and workflow management API, which handles uploaded files.

Fix: Upgrade to TorchServe version 0.9.0 or later. The fix validates the file paths in zip archives before extracting them to prevent files from being placed in unintended filesystem locations.
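The validation the fix describes can be sketched generically. (Recent CPython versions of `zipfile` already strip `..` components during extraction, but an explicit pre-extraction check is portable across archive libraries and makes the rejection visible.)

```python
import os
import zipfile

def safe_extract(archive: zipfile.ZipFile, dest: str) -> None:
    """Extract only members whose resolved path stays inside dest."""
    dest_real = os.path.realpath(dest)
    for name in archive.namelist():
        target = os.path.realpath(os.path.join(dest_real, name))
        # commonpath collapses to dest_real only when target is inside it.
        if os.path.commonpath([dest_real, target]) != dest_real:
            raise ValueError(f"blocked ZipSlip entry: {name!r}")
    archive.extractall(dest_real)
```

Failing closed on the first bad member, before anything touches disk, is the point: a `../`-prefixed entry never gets a chance to land outside the destination.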

NVD/CVE Database
09. CVE-2023-46302: Apache Software Foundation Apache Submarine has a bug when serializing against yaml. The bug is caused by snakeyaml htt…
security · Nov 20, 2023

Apache Submarine has a security vulnerability in how it handles YAML (a data format language) requests because it uses an unsafe library called snakeyaml. When users send YAML data to the application through its REST API (a system for receiving web requests), the unsafe handling could allow attackers to execute malicious code.

Fix: Users should upgrade to Apache Submarine version 0.8.0, which fixes this issue by replacing snakeyaml with jackson-dataformat-yaml. If upgrading is not possible, users can cherry-pick (apply a specific code fix from) PR https://github.com/apache/submarine/pull/1054 and rebuild the submarine-server image.
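snakeyaml is a Java library, but Python's `pickle` is the closest standard-library analogue and shows why "safe" loaders (SnakeYAML's SafeConstructor, PyYAML's `safe_load`) restrict construction to plain data types. In the sketch below, deserialization itself invokes a callable chosen by whoever produced the bytes; the recorder function here is deliberately harmless.

```python
import pickle

calls = []

def record(tag):
    calls.append(tag)
    return tag

class Payload:
    # __reduce__ tells pickle what to call at load time. The sender of
    # the bytes controls this pair completely, so deserializing
    # untrusted input can invoke any importable callable -- here only
    # a harmless recorder.
    def __reduce__(self):
        return (record, ("deserialized!",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # runs record("deserialized!") during deserialization
```

Safe loaders close this hole by refusing to construct anything beyond lists, maps, strings, and numbers, which is exactly what the jackson-dataformat-yaml replacement enforces by default.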

NVD/CVE Database
10. CVE-2023-6020: LFI in Ray's /static/ directory allows attackers to read any file on the server without authentication.
security · Nov 16, 2023

CVE-2023-6020 is a local file inclusion (LFI, a vulnerability that lets attackers read files they shouldn't access) in Ray's /static/ directory that allows attackers to read any file on the server without needing to log in. The vulnerability stems from missing authorization checks (the system doesn't verify whether a user should have access before serving files).

NVD/CVE Database
Critical This Week

- CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis… (NVD/CVE Database, Mar 27, 2026)
- Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)
- CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)
- CISA: New Langflow flaw actively exploited to hijack AI workflows (BleepingComputer, Mar 26, 2026)