aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,678
[LAST_24H]
22
[LAST_7D]
163
Daily BriefingMonday, March 30, 2026
>

Anthropic's Unreleased Cybersecurity Model Accidentally Exposed: A configuration error leaked details of Anthropic's powerful new AI model called Mythos, designed for cybersecurity use cases with advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The leak raises concerns because the model's improved vulnerability detection could enable more sophisticated cyberattacks, prompting Anthropic to plan a phased rollout to enterprise security teams first.

>

Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where attackers insert malicious commands into input that gets executed) in its model serving code when using `env_manager=LOCAL`, allowing attackers to execute arbitrary commands by manipulating dependency information in the `python_env.yaml` file without any safety checks. (CVE-2025-15379, Critical)

Latest Intel

page 200/268
VIEW ALL
01

CVE-2024-4858: The Testimonial Carousel For Elementor plugin for WordPress is vulnerable to unauthorized modification of data due to a

security
May 25, 2024

The Testimonial Carousel For Elementor WordPress plugin (versions up to 10.2.0) has a missing authorization check in the 'save_testimonials_option_callback' function, allowing unauthenticated attackers to modify data like OpenAI API keys without permission. This vulnerability is classified as CWE-862 (missing authorization, where a system doesn't verify that a user has permission to perform an action).

Critical This Week5 issues
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
>

Multiple High-Severity Flaws in AI Agent Frameworks: CrewAI has several vulnerabilities including Docker fallback issues that enable RCE (remote code execution, where attackers run commands on systems they don't control) when containerization fails (CVE-2026-2287, CVE-2026-2275), while OpenClaw suffers from malicious plugin code execution during installation and sandbox bypass flaws that let agents access other agents' workspaces. SakaDev and HAI Build Code Generator can both be tricked through prompt injection (hiding malicious instructions in normal-looking input) to misclassify dangerous terminal commands as safe and execute them automatically (CVE-2026-30306, CVE-2026-30308).

>

ChatGPT Data Leakage Vulnerability Patched: OpenAI fixed a vulnerability that allowed attackers to secretly extract sensitive user data including conversation messages and uploaded files by exploiting a hidden DNS-based communication channel (covert data transmission using the Domain Name System) in ChatGPT's Linux runtime, bypassing all safety guardrails designed to prevent unauthorized data sharing.

NVD/CVE Database
02

Robust governance for the AI Act: Insights and highlights from Novelli et al. (2024)

policy
May 24, 2024

This overview discusses the European AI Act and the governance framework needed to implement it, focusing on the European Commission's responsibilities and the AI Office. Key tasks include establishing guidelines for classifying high-risk AI systems, defining what counts as significant modifications (changes that alter a system's risk level), and setting standards for transparency and enforcement across EU member states.

Fix: The source suggests that the Commission should adopt 'predetermined change management plans akin to those in medicine' to assess modifications to AI systems. These plans would be documents outlining anticipated changes (such as performance adjustments or shifts in intended use) and the methods for evaluating whether those changes substantially alter the system's risk level. The source also recommends that standard fine-tuning of foundation models (training adjustments to pre-existing AI models) should not be considered a significant modification unless safety layers are removed or other actions clearly increase risk.

EU AI Act Updates
03

ChatGPT: Hacking Memories with Prompt Injection

securitysafety
May 22, 2024

ChatGPT's new memory feature, which lets the AI remember information across different chat sessions for a more personalized experience, can be exploited through indirect prompt injection (tricking an AI by hiding malicious instructions in its input). Attackers could manipulate ChatGPT into storing false information, biases, or unwanted instructions by injecting commands through connected apps like Google Drive, uploaded documents, or web browsing features.

Embrace The Red
04

CVE-2024-0453: The AI ChatBot plugin for WordPress is vulnerable to unauthorized modification of data due to a missing capability check

security
May 22, 2024

The AI ChatBot plugin for WordPress (up to version 5.3.4) has a security flaw where a function called openai_file_delete_callback lacks a capability check (verification that a user has permission to perform an action). This allows any authenticated user with subscriber-level access or higher to delete files from a connected OpenAI account without proper authorization.

NVD/CVE Database
05

CVE-2024-0452: The AI ChatBot plugin for WordPress is vulnerable to unauthorized modification of data due to a missing capability check

security
May 22, 2024

The AI ChatBot plugin for WordPress (up to version 5.3.4) has a missing capability check (a missing authorization check that verifies user permissions) in its file upload function, allowing authenticated users with basic subscriber access to upload files to a connected OpenAI account without proper permission verification. This vulnerability affects all versions through 5.3.4 and could let low-privilege attackers modify data on the linked OpenAI account.

NVD/CVE Database
06

CVE-2024-0451: The AI ChatBot plugin for WordPress is vulnerable to unauthorized access of data due to a missing capability check on th

security
May 22, 2024

The AI ChatBot plugin for WordPress has a security flaw in versions up to 5.3.4 where a function lacks a capability check (a security control that verifies a user has permission to perform an action). This allows authenticated users with subscriber-level access or higher to view files stored in a connected OpenAI account without authorization.

Fix: A patch is available at https://plugins.trac.wordpress.org/changeset/3089461/chatbot/trunk/includes/openai/qcld-bot-openai.php. Users should update their AI ChatBot plugin to a version after 5.3.4.

NVD/CVE Database
07

Machine Learning Attack Series: Backdooring Keras Models and How to Detect It

securityresearch
May 18, 2024

This post examines how attackers can insert hidden malicious code into machine learning models (a technique called backdooring) through supply chain attacks, specifically targeting Keras models (a popular framework for building AI systems). The authors demonstrate this attack and then explore tools that can detect when a model has been compromised in this way.

Embrace The Red
08

CVE-2024-4263: A broken access control vulnerability exists in mlflow/mlflow versions before 2.10.1, where low privilege users with onl

security
May 16, 2024

MLflow (a tool for managing machine learning experiments) versions before 2.10.1 have a broken access control vulnerability where users with only EDIT permissions can delete artifacts (saved files or data from experiments) they shouldn't be able to delete. The bug happens because the system doesn't properly check permissions when users request to delete artifacts, even though the documentation says EDIT users should only be able to read and update, not delete.

Fix: Update mlflow to version 2.10.1 or later.

NVD/CVE Database
09

CVE-2024-3848: A path traversal vulnerability exists in mlflow/mlflow version 2.11.0, identified as a bypass for the previously address

security
May 16, 2024

MLflow version 2.11.0 has a path traversal vulnerability (a security flaw where an attacker can access files outside intended directories) that bypasses a previous fix. An attacker can use a '#' character in artifact URLs to skip validation and read sensitive files like SSH keys and cloud credentials from the server's filesystem. The vulnerability exists because the application doesn't properly validate the fragment portion (the part after '#') of URLs before converting them to filesystem paths.

NVD/CVE Database
10

CVE-2024-4181: A command injection vulnerability exists in the RunGptLLM class of the llama_index library, version 0.9.47, used by the

security
May 16, 2024

A command injection vulnerability (a flaw that lets attackers run unauthorized commands) exists in the RunGptLLM class of the llama_index library version 0.9.47, which connects applications to language models. The vulnerability uses the eval function (a tool that executes text as code) unsafely, potentially allowing a malicious LLM provider to run arbitrary commands and take control of a user's machine.

Fix: This issue was fixed in version 0.10.13 of the llama_index library. Users should upgrade to version 0.10.13 or later.

NVD/CVE Database
Prev1...198199200201202...268Next
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026