aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,687
[LAST_24H]
18
[LAST_7D]
163
Daily BriefingTuesday, March 31, 2026
>

Critical Vulnerability in OpenAI Codex Allowed GitHub Token Compromise: Researchers discovered a critical vulnerability in OpenAI Codex (an AI system that generates code) that could have allowed attackers to steal GitHub tokens (secret credentials used to access GitHub accounts), potentially granting unauthorized access to code repositories and projects.

>

Google Cloud Vertex AI 'Double Agents' Vulnerability Exposed: Researchers found that AI agents on Google Cloud Platform's Vertex AI could be weaponized to secretly compromise systems due to excessive default permissions granted to service agents (special accounts that allow cloud services to access resources), enabling attackers to steal data and gain unauthorized infrastructure control. Google responded by revising their documentation to better explain resource and account usage.

Latest Intel

page 190/269
VIEW ALL
01

CVE-2024-7714: The AI ChatBot with ChatGPT and Content Generator by AYS WordPress plugin before 2.1.0 lacks sufficient access controls

security
Sep 27, 2024

A WordPress plugin called 'AI ChatBot with ChatGPT and Content Generator by AYS' (versions before 2.1.0) has a security flaw where it doesn't properly check who is allowed to perform certain actions. This means someone without a user account can disconnect the plugin from OpenAI (the AI service it relies on), effectively breaking the chatbot. The vulnerable actions include connecting, disconnecting, and saving feedback.

Critical This Week5 issues
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
>

EU AI Act Enforcement Begins August 2026: The EU AI Act requires providers of general-purpose AI models (GPAI, meaning large AI systems that can be adapted for many uses) to follow specific development and documentation rules starting August 2, 2025, with the European Commission beginning enforcement and potential fines one year later on August 2, 2026.

>

Prompt Injection Bypasses Chatbot Safety in 1millionbot Millie: A prompt injection vulnerability (a technique where attackers hide malicious instructions in their input to trick an AI) in the 1millionbot Millie chatbot allows users to bypass safety restrictions using Boolean logic tricks, potentially enabling extraction of sensitive information or access to blocked features (CVE-2026-4399, high severity).

Fix: Update the plugin to version 2.1.0 or later.

NVD/CVE Database
02

CVE-2024-7713: The AI ChatBot with ChatGPT and Content Generator by AYS WordPress plugin before 2.1.0 discloses the Open AI API Key, al

security
Sep 27, 2024

A WordPress plugin called 'AI ChatBot with ChatGPT and Content Generator by AYS' versions before 2.1.0 has a vulnerability where it exposes the OpenAI API key (a secret credential used to access OpenAI's services) in cleartext (unencrypted, readable form), allowing anyone without authentication (login access) to steal it. This vulnerability is tracked as CVE-2024-7713 and was reported on September 27, 2024.

NVD/CVE Database
03

CVE-2024-4099: An issue has been discovered in GitLab EE affecting all versions starting from 16.0 prior to 17.2.8, from 17.3 prior to

security
Sep 26, 2024

CVE-2024-4099 is a vulnerability in GitLab EE (a Git repository management tool) affecting versions 16.0-17.2.7, 17.3-17.3.3, and 17.4-17.4.0 where an AI feature failed to clean up unsanitized input, potentially allowing attackers to perform prompt injection (tricking the AI by hiding instructions in its input). The vulnerability has a CVSS score (a 0-10 severity rating) of 4.0, indicating low to moderate severity.

NVD/CVE Database
04

CVE-2024-45989: Monica AI Assistant desktop application v2.3.0 is vulnerable to Exposure of Sensitive Information to an Unauthorized Act

security
Sep 26, 2024

Monica AI Assistant desktop application v2.3.0 has a vulnerability where attackers can use prompt injection (tricking an AI by hiding instructions in its input) with a specially crafted image to steal sensitive chat data from the current session and send it to an attacker-controlled server. This flaw allows unauthorized people to access private information from users' conversations.

NVD/CVE Database
05

CVE-2024-6845: The Chatbot with ChatGPT WordPress plugin before 2.4.6 does not have proper authorization in one of its REST endpoint, a

security
Sep 25, 2024

The Chatbot with ChatGPT WordPress plugin before version 2.4.6 has a missing authorization flaw in one of its REST endpoints (a web interface for accessing the plugin's functions), which allows unauthenticated users (anyone without login credentials) to retrieve and decode an OpenAI API key (a secret credential that grants access to OpenAI's services). This vulnerability exposes the API key to attackers.

Fix: Update the Chatbot with ChatGPT WordPress plugin to version 2.4.6 or later.

NVD/CVE Database
06

CVE-2024-40442: An issue in Doccano Open source annotation tools for machine learning practitioners v.1.8.4 and Doccano Auto Labeling Pi

security
Sep 23, 2024

CVE-2024-40442 is a privilege escalation vulnerability (a security flaw where an attacker gains higher access levels than they should have) in Doccano v.1.8.4 and its Auto Labeling Pipeline module v.0.1.23. A remote attacker can exploit this weakness by sending a specially crafted REST request (a malicious command sent over the web), which involves improper code injection (inserting malicious code into the system).

NVD/CVE Database
07

CVE-2024-40441: An issue in Doccano Open source annotation tools for machine learning practitioners v.1.8.4 and Doccano Auto Labeling Pi

security
Sep 23, 2024

CVE-2024-40441 is a privilege escalation vulnerability (a bug that lets attackers gain higher-level access than they should have) in Doccano v.1.8.4, an open source tool for labeling data to train machine learning models, and its Auto Labeling Pipeline module v.0.1.23. A remote attacker can exploit this by manipulating the model_attribs parameter to escalate their privileges.

NVD/CVE Database
08

Spyware Injection Into Your ChatGPT's Long-Term Memory (SpAIware)

securitysafety
Sep 20, 2024

Attackers can inject spyware into ChatGPT's memory (a feature that stores information across chat sessions) through prompt injection (tricking an AI by hiding instructions in its input) on untrusted websites, allowing them to continuously steal everything a user types in future conversations. The vulnerability exploits a weakness where a security check called url_safe was performed only on the user's device rather than on OpenAI's servers, and becomes more dangerous when combined with the Memory feature that persists attacker-controlled instructions. OpenAI released a fix for the macOS app, and users should update to the latest version.

Fix: OpenAI released a fix for the macOS app last week. Ensure your app is updated to the latest version.

Embrace The Red
09

CVE-2024-46946: langchain_experimental (aka LangChain Experimental) 0.1.17 through 0.3.0 for LangChain allows attackers to execute arbit

security
Sep 19, 2024

LangChain Experimental versions 0.1.17 through 0.3.0 contain a vulnerability that allows attackers to execute arbitrary code (run malicious commands on a system) through a component called LLMSymbolicMathChain, which uses sympy.sympify (a function that evaluates mathematical expressions in an unsafe way). The root cause is improper input validation (failing to check that user input is safe before processing it).

NVD/CVE Database
10

CVE-2024-8939: A vulnerability was found in the ilab model serve component, where improper handling of the best_of parameter in the vll

security
Sep 17, 2024

A vulnerability in the ilab model serve component allows attackers to cause a Denial of Service (DoS, where a service becomes unavailable to legitimate users) by sending a large value for the best_of parameter to the vllm JSON web API (a web interface for accessing an LLM). The API doesn't properly manage timeouts or resource limits, so an attacker can exhaust system resources and crash the service.

NVD/CVE Database
Prev1...188189190191192...269Next
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026