aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,677
[LAST_24H]
25
[LAST_7D]
167
Daily BriefingMonday, March 30, 2026
>

Anthropic's Unreleased Cybersecurity Model Accidentally Exposed: A configuration error leaked details of Anthropic's powerful new AI model called Mythos, designed for cybersecurity use cases with advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The leak raises concerns because the model's improved vulnerability detection could enable more sophisticated cyberattacks, prompting Anthropic to plan a phased rollout to enterprise security teams first.

>

Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where attackers insert malicious commands into input that gets executed) in its model serving code when using `env_manager=LOCAL`, allowing attackers to execute arbitrary commands by manipulating dependency information in the `python_env.yaml` file without any safety checks. (CVE-2025-15379, Critical)

Latest Intel

page 211/268
VIEW ALL
01

CVE-2023-45063: Cross-Site Request Forgery (CSRF) vulnerability in ReCorp AI Content Writing Assistant (Content Writer, GPT 3 & 4, ChatG

security
Oct 12, 2023

A CSRF vulnerability (cross-site request forgery, where an attacker tricks a user into performing unwanted actions on a website they're logged into) was found in the ReCorp AI Content Writing Assistant plugin for WordPress in versions 1.1.5 and earlier. This flaw could allow attackers to exploit users of the plugin without their knowledge.

Critical This Week5 issues
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
>

Multiple High-Severity Flaws in AI Agent Frameworks: CrewAI has several vulnerabilities including Docker fallback issues that enable RCE (remote code execution, where attackers run commands on systems they don't control) when containerization fails (CVE-2026-2287, CVE-2026-2275), while OpenClaw suffers from malicious plugin code execution during installation and sandbox bypass flaws that let agents access other agents' workspaces. SakaDev and HAI Build Code Generator can both be tricked through prompt injection (hiding malicious instructions in normal-looking input) to misclassify dangerous terminal commands as safe and execute them automatically (CVE-2026-30306, CVE-2026-30308).

>

ChatGPT Data Leakage Vulnerability Patched: OpenAI fixed a vulnerability that allowed attackers to secretly extract sensitive user data including conversation messages and uploaded files by exploiting a hidden DNS-based communication channel (covert data transmission using the Domain Name System) in ChatGPT's Linux runtime, bypassing all safety guardrails designed to prevent unauthorized data sharing.

NVD/CVE Database
02

CVE-2023-44467: langchain_experimental (aka LangChain Experimental) in LangChain before 0.0.306 allows an attacker to bypass the CVE-202

security
Oct 9, 2023

CVE-2023-44467 is a vulnerability in LangChain Experimental (a library for building AI applications) before version 0.0.306 that allows attackers to bypass a previous security fix and run arbitrary code (unauthorized commands) on a system using the __import__ function in Python, which the pal_chain/base.py file failed to block.

Fix: Upgrade LangChain to version 0.0.306 or later. A patch is available at https://github.com/langchain-ai/langchain/commit/4c97a10bd0d9385cfee234a63b5bd826a295e483.

NVD/CVE Database
03

Microsoft Fixes Data Exfiltration Vulnerability in Azure AI Playground

security
Sep 29, 2023

LLM applications like chatbots are vulnerable to data exfiltration (unauthorized data theft) through image markdown injection, a technique where attackers embed hidden instructions in untrusted data to make the AI generate image tags that leak information. Microsoft patched this vulnerability in Azure AI Playground, though the source does not describe the specific technical details of their fix.

Embrace The Red
04

CVE-2023-43654: TorchServe is a tool for serving and scaling PyTorch models in production. TorchServe default configuration lacks proper

security
Sep 28, 2023

TorchServe (a tool for running PyTorch machine learning models as web services) has a vulnerability in its default configuration that fails to validate user inputs properly, allowing attackers to download files from any URL and save them to the server's disk. This could let attackers damage the system or steal sensitive information, affecting versions 0.1.0 through 0.8.1.

Fix: Upgrade to TorchServe release 0.8.2 or later, which includes a warning when the default value for allowed_urls is used. Users should also configure the allowed_urls setting and specify which model URLs are permitted.

NVD/CVE Database
05

Advanced Data Exfiltration Techniques with ChatGPT

security
Sep 28, 2023

An indirect prompt injection attack (tricking an AI into following hidden instructions in its input) can allow an attacker to steal chat data from ChatGPT users by either having the AI embed information into image URLs (image markdown injection, which embeds data into web links displayed as images) or convincing users to click malicious links. ChatGPT Plugins, which are add-ons that extend ChatGPT's functionality, create additional exfiltration risks because they have minimal security review before being deployed.

Embrace The Red
06

HITCON CMT 2023 - LLM Security Presentation and Trip Report

securityresearch
Sep 18, 2023

This article is a trip report from HITCON CMT 2023, a security conference in Taiwan, where the author attended talks on various topics including LLM security, reverse engineering with AI, and application exploits. Key presentations covered indirect prompt injections (attacks where malicious instructions are hidden in data fed to an AI system), Electron app vulnerabilities, and PHP security issues. The author gave a talk on indirect prompt injections and notes this technique could become a significant attack vector for AI-integrated applications like chatbots.

Embrace The Red
07

LLM Apps: Don't Get Stuck in an Infinite Loop! 💵💰

securitysafety
Sep 16, 2023

An attacker can use indirect prompt injection (tricking an AI by hiding malicious instructions in data it reads) to make an LLM call its own tools or plugins repeatedly in a loop, potentially increasing costs or disrupting service. While ChatGPT users are mostly protected by subscription pricing, call limits, and a manual stop button, this technique demonstrates a real vulnerability in how LLM applications handle recursive tool calls.

Embrace The Red
08

CVE-2023-41626: Gradio v3.27.0 was discovered to contain an arbitrary file upload vulnerability via the /upload interface.

security
Sep 15, 2023

Gradio version 3.27.0 has a security flaw that allows attackers to upload any type of file through the /upload interface without proper restrictions (CWE-434, unrestricted file upload with dangerous type). This means someone could potentially upload malicious files to a system running this vulnerable version.

NVD/CVE Database
09

CVE-2023-39631: An issue in LanChain-ai Langchain v.0.0.245 allows a remote attacker to execute arbitrary code via the evaluate function

security
Sep 1, 2023

CVE-2023-39631 is a code injection vulnerability (a flaw where an attacker can insert malicious code into a program) in Langchain version 0.0.245 that allows a remote attacker to execute arbitrary code through the evaluate function in the numexpr library (a Python tool for fast numerical expression evaluation). The vulnerability has a CVSS severity score of 4.0, indicating low to moderate risk.

NVD/CVE Database
10

v2: make download.sh executable (#695)

industry
Sep 1, 2023

This is a minor update to the Llama repository that makes download.sh (a script file used to download files) executable and adds error handling so the script stops running if it encounters a problem. The change was submitted as a pull request to improve the reliability of the download process.

Meta Llama Releases
Prev1...209210211212213...268Next
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026