aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,669
[LAST_24H]
17
[LAST_7D]
162
Daily BriefingMonday, March 30, 2026
>

Anthropic's Leaked "Mythos" Model Raises Cybersecurity Concerns: An accidental configuration leak revealed Anthropic's unreleased Mythos model, which has advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The model's improved capability to find and exploit software vulnerabilities could enable more sophisticated cyberattacks, prompting Anthropic to plan a cautious rollout targeting enterprise security teams first.

>

Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where an attacker inserts malicious commands into input that gets executed) in its model serving code when deploying models with `env_manager=LOCAL`. The flaw reads dependency information from `python_env.yaml` and executes it in a shell without validation, allowing arbitrary command execution on deployment systems. (CVE-2025-15379, critical severity)

Latest Intel

page 214/267
VIEW ALL
01

CVE-2023-34239: Gradio is an open-source Python library that is used to build machine learning and data science. Due to a lack of path f

security
Jun 8, 2023

Gradio, an open-source Python library for building machine learning and data science applications, has a vulnerability where it fails to properly filter file paths and restrict which URLs can be proxied (accessed through Gradio as an intermediary), allowing unauthorized file access. This vulnerability affects input validation (the process of checking that data entering a system is safe and expected).

Critical This Week5 issues
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
>

Multiple High-Severity Vulnerabilities Found in CrewAI: CrewAI has several serious security flaws including two that enable RCE (remote code execution, where attackers run commands on systems they don't control) when Docker containerization fails and the system falls back to less secure sandbox settings. Additional vulnerabilities allow arbitrary file reading and SSRF (server-side request forgery, tricking a server into making unwanted requests) through improper validation in RAG search tools. (CVE-2026-2287, CVE-2026-2275, CVE-2026-2285, CVE-2026-2286)

>

LangChain Path Traversal Adds to AI Pipeline Security Woes: LangChain and LangGraph have critical flaws allowing attackers to steal sensitive data like API keys through improper input handling, including a new path traversal bug (CVE-2026-34070, CVSS 7.5) that lets attackers read arbitrary files. Maintainers have released fixes that need immediate application.

Fix: Users are advised to upgrade to version 3.34.0. The source notes there are no known workarounds for this vulnerability.

NVD/CVE Database
02

CVE-2023-34094: ChuanhuChatGPT is a graphical user interface for ChatGPT and many large language models. A vulnerability in versions 202

security
Jun 2, 2023

ChuanhuChatGPT (a graphical interface for ChatGPT and other large language models) has a vulnerability in versions 20230526 and earlier that allows attackers to access the config.json file (a configuration file storing sensitive settings) without permission when authentication is disabled, potentially exposing API keys (credentials that grant access to external services). The vulnerability allows attackers to steal these API keys from the configuration file.

Fix: The vulnerability has been fixed in commit bfac445. As a workaround, setting up access authentication (a login system that restricts who can access the software) can help mitigate the vulnerability.

NVD/CVE Database
03

CVE-2023-33979: gpt_academic provides a graphical interface for ChatGPT/GLM. A vulnerability was found in gpt_academic 3.37 and prior. T

security
May 31, 2023

gpt_academic (a tool that provides a graphical interface for ChatGPT/GLM) versions 3.37 and earlier have a vulnerability where the Configuration File Handler allows attackers to read sensitive files through the `/file` route because no files are protected from access. This can leak sensitive information from working directories to users who shouldn't have access to it.

Fix: A patch is available at commit 1dcc2873d2168ad2d3d70afcb453ac1695fbdf02. As a workaround, users can configure the project using environment variables instead of `config*.py` files, or use docker-compose installation (a tool for running containerized applications) to configure the project instead of configuration files.

NVD/CVE Database
04

ChatGPT Plugin Exploit Explained: From Prompt Injection to Accessing Private Data

securitysafety
May 28, 2023

ChatGPT plugins can be exploited through indirect prompt injections (attacks that hide malicious instructions in data the AI reads from external sources rather than directly from the user), which hackers have used to access private data through cross-plugin request forgery (a vulnerability where one plugin tricks another into performing unauthorized actions). The post documents a real exploit found in the wild and explains the security fix that was applied.

Embrace The Red
05

CVE-2023-32676: Autolab is a course management service that enables auto-graded programming assignments. A Tar slip vulnerability was fo

security
May 26, 2023

Autolab, a service that automatically grades programming assignments in courses, has a tar slip vulnerability (a flaw where extracted files can be placed outside their intended directory) in its assessment installation feature. An attacker with instructor permissions could upload a specially crafted tar file (a compressed archive format) with file paths like `../../../../tmp/tarslipped1.sh` to place files anywhere on the system when the form is submitted.

Fix: Upgrade to version 2.11.0 or later.

NVD/CVE Database
06

CVE-2023-2800: Insecure Temporary File in GitHub repository huggingface/transformers prior to 4.30.0.

security
May 18, 2023

CVE-2023-2800 is a vulnerability in the Hugging Face Transformers library (a popular tool for working with AI language models) prior to version 4.30.0 that involves insecure temporary files (CWE-377, a weakness where temporary files are created in ways that attackers could exploit). The vulnerability was discovered and reported through the huntr.dev bug bounty platform.

Fix: Update to version 4.30.0 or later. A patch is available at https://github.com/huggingface/transformers/commit/80ca92470938bbcc348e2d9cf4734c7c25cb1c43.

NVD/CVE Database
07

CVE-2023-2780: Path Traversal: '\..\filename' in GitHub repository mlflow/mlflow prior to 2.3.1.

security
May 17, 2023

MLflow (a tool for managing machine learning experiments) versions before 2.3.1 contain a path traversal vulnerability (CWE-29, a weakness where attackers can access files outside intended directories by using special characters like '..\'). This vulnerability could allow an attacker to read or manipulate files they shouldn't have access to.

Fix: Update MLflow to version 2.3.1 or later. A patch is available at https://github.com/mlflow/mlflow/commit/fae77a525dd908c56d6204a4cef1c1c75b4e9857.

NVD/CVE Database
08

ChatGPT Plugins: Data Exfiltration via Images & Cross Plugin Request Forgery

security
May 16, 2023

A malicious website can hijack a ChatGPT chat session and steal conversation history by controlling the data that plugins (add-ons that extend ChatGPT's abilities) retrieve. The post highlights that while plugins can leak data by receiving too much information, the main risk here is when an attacker controls what data the plugin pulls in, enabling them to extract sensitive information.

Embrace The Red
09

Indirect Prompt Injection via YouTube Transcripts

securitysafety
May 14, 2023

ChatGPT can access YouTube transcripts through plugins, which is useful but creates a security risk called indirect prompt injection (hidden instructions embedded in content that an AI reads and then follows). Attackers can hide malicious commands in video transcripts, and when ChatGPT reads those transcripts to answer user questions, it may follow the hidden instructions instead of the user's intended request.

Embrace The Red
10

Adversarial Prompting: Tutorial and Lab

securityresearch
May 12, 2023

This resource is a tutorial and lab (an interactive learning environment for hands-on practice) that teaches prompt injection, which is a technique for tricking AI systems by embedding hidden instructions in their input. The tutorial covers examples ranging from simple prompt engineering (getting an AI to change its output) to more complex attacks like injecting malicious code (HTML/XSS, which runs unwanted scripts in web browsers) and stealing data from AI systems.

Embrace The Red
Prev1...212213214215216...267Next
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026