aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,687 | Last 24 hours: 18 | Last 7 days: 165
Daily Briefing: Tuesday, March 31, 2026

Critical Vulnerability in OpenAI Codex Allowed GitHub Token Compromise: Researchers discovered a critical vulnerability in OpenAI Codex (an AI system that generates code) that could have allowed attackers to steal GitHub tokens (secret credentials used to access GitHub accounts), potentially granting unauthorized access to code repositories and projects.


Google Cloud Vertex AI 'Double Agents' Vulnerability Exposed: Researchers found that AI agents on Google Cloud Platform's Vertex AI could be weaponized to secretly compromise systems due to excessive default permissions granted to service agents (special accounts that allow cloud services to access resources), enabling attackers to steal data and gain unauthorized infrastructure control. Google responded by revising their documentation to better explain resource and account usage.

Latest Intel

01. CVE-2024-8768: empty prompt request crashes the vLLM API server

security
Sep 17, 2024

CVE-2024-8768 is a bug in vLLM (a library for running large language models) where sending an API request with an empty prompt crashes the server, causing a denial of service (making the service unavailable to users). The flaw is classified as a reachable assertion vulnerability, meaning the code hits an unexpected condition it wasn't designed to handle.
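The "reachable assertion" pattern is easiest to see in a short sketch (illustrative only, not vLLM's actual code): an assertion on request input takes down the whole server process, while explicit validation fails only the offending request.

```python
def handle_completion(prompt: str) -> dict:
    # Vulnerable pattern (illustrative): asserting on user input.
    # One empty-prompt request would kill the entire server process:
    #     assert len(prompt) > 0
    # Safer pattern: validate and return an error for that request only.
    if not prompt:
        return {"status": 400, "error": "prompt must not be empty"}
    return {"status": 200, "completion": f"(model output for {prompt!r})"}
```

The function name and response shape here are assumptions for illustration; the point is that input checks should produce error responses, not process-terminating assertions.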

Critical This Week (5 issues)

critical: CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code (NVD/CVE Database, Mar 30, 2026)

EU AI Act Enforcement Begins August 2026: The EU AI Act requires providers of general-purpose AI models (GPAI, meaning large AI systems that can be adapted for many uses) to follow specific development and documentation rules starting August 2, 2025, with the European Commission beginning enforcement and potential fines one year later on August 2, 2026.


Prompt Injection Bypasses Chatbot Safety in 1millionbot Millie: A prompt injection vulnerability (a technique where attackers hide malicious instructions in their input to trick an AI) in the 1millionbot Millie chatbot allows users to bypass safety restrictions using Boolean logic tricks, potentially enabling extraction of sensitive information or access to blocked features (CVE-2026-4399, high severity).

NVD/CVE Database
02. CVE-2024-5998: pickle deserialization in langchain's FAISS.deserialize_from_bytes

security
Sep 17, 2024

A vulnerability in langchain's FAISS.deserialize_from_bytes function allows deserialization of untrusted data using pickle (a Python library that converts data into a format that can be stored or transmitted), which can lead to arbitrary command execution through the os.system function. This affects the latest version of the product and is classified as CWE-502 (deserialization of untrusted data).

Fix: A patch is available at https://github.com/langchain-ai/langchain/commit/604dfe2d99246b0c09f047c604f0c63eafba31e7

NVD/CVE Database
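Why pickle deserialization of untrusted data is dangerous is clearer with a short demo: pickle lets an object nominate a callable to run at load time via __reduce__. This sketch uses the harmless builtin abs, but an attacker substitutes os.system in exactly the same way.

```python
import pickle

class Payload:
    # When the bytes are loaded, pickle calls the returned callable with
    # the given arguments. abs is harmless; the langchain report describes
    # the same mechanism delivering os.system instead.
    def __reduce__(self):
        return (abs, (-42,))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # deserialization itself runs abs(-42)
```

Nothing in the blob "looks like" code to a casual check; the execution happens as a side effect of loading, which is why pickle should never be fed untrusted bytes.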
03. CVE-2024-6587: server-side request forgery in berriai/litellm 1.38.10

security
Sep 13, 2024

CVE-2024-6587 is a server-side request forgery vulnerability (SSRF, a flaw that tricks a server into making requests to unintended locations) in litellm version 1.38.10 that lets users control where the application sends requests by setting the `api_base` parameter, potentially allowing attackers to intercept sensitive OpenAI API keys. A malicious user could redirect requests to their own domain and steal the API key, gaining unauthorized access to the OpenAI service.

Fix: A patch is available at https://github.com/berriai/litellm/commit/ba1912afd1b19e38d3704bb156adf887f91ae1e0

NVD/CVE Database
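The failure mode above is generic to any client that attaches a secret to a caller-controlled base URL. A hedged sketch (function and parameter names are illustrative, not litellm's API) of the leak and a host-allowlist mitigation:

```python
from urllib.parse import urlparse

def build_request(api_base: str, api_key: str) -> dict:
    # The client attaches the secret key to whatever host api_base points
    # at -- if a caller controls api_base, the key goes to their server.
    return {
        "url": f"{api_base}/v1/chat/completions",
        "headers": {"Authorization": f"Bearer {api_key}"},
    }

ALLOWED_HOSTS = {"api.openai.com"}  # mitigation: pin the upstream host

def is_allowed(api_base: str) -> bool:
    return urlparse(api_base).hostname in ALLOWED_HOSTS

req = build_request("https://attacker.example", "sk-secret")
```

Checking the parsed hostname against an allowlist before sending, rather than string-matching the raw URL, avoids bypasses via userinfo or path tricks.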
04. CVE-2024-45848: arbitrary code execution in MindsDB 23.12.4.0 through 24.7.4.1 (ChromaDB integration)

security
Sep 12, 2024

MindsDB versions 23.12.4.0 through 24.7.4.1 contain an arbitrary code execution vulnerability (the ability to run unwanted commands on a server) when the ChromaDB integration is installed. An attacker can craft a malicious 'INSERT' query containing Python code that gets executed on the server because the code is passed to an eval function (a function that runs text as if it were code).

NVD/CVE Database
05. CVE-2024-45846: arbitrary code execution in MindsDB 23.10.3.0 through 24.7.4.1 (Weaviate integration)

security
Sep 12, 2024

MindsDB versions 23.10.3.0 through 24.7.4.1 have a vulnerability that allows arbitrary code execution (running unauthorized commands on a server) when the Weaviate integration is installed. An attacker can exploit this by crafting a malicious SQL SELECT WHERE clause containing Python code, which gets executed through an eval function (a function that interprets and runs code as if it were written in the program).

NVD/CVE Database
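Both MindsDB code-execution entries above come down to passing attacker-supplied strings through eval. A minimal sketch of the difference between eval and ast.literal_eval, which parses only Python literals and rejects anything executable:

```python
import ast

# eval runs any expression -- a string like "__import__('os').system(...)"
# would execute. ast.literal_eval accepts only literals (lists, dicts,
# strings, numbers) and raises on everything else.
safe_value = ast.literal_eval("[1, 2, 3]")

try:
    ast.literal_eval("__import__('os').getcwd()")
    blocked = False
except ValueError:
    blocked = True  # call expressions are rejected as malformed nodes
```

Where query parameters only ever carry data, literal_eval (or a proper parser) removes the code-execution path entirely.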
06. CVE-2024-45855: untrusted deserialization in MindsDB 23.10.2.0 and newer ('inhouse' model finetune)

security
Sep 12, 2024

CVE-2024-45855 is a vulnerability in MindsDB (a platform for building AI applications) versions 23.10.2.0 and newer where deserialization of untrusted data (converting data from an external format into code without checking if it's safe) can occur. An attacker can upload a malicious 'inhouse' model and use the 'finetune' feature to run arbitrary code (any commands they want) on the server.

NVD/CVE Database
07. CVE-2024-45854: untrusted deserialization in MindsDB 23.10.3.0 and newer (describe query)

security
Sep 12, 2024

CVE-2024-45854 is a vulnerability in MindsDB (a platform for building AI applications) versions 23.10.3.0 and newer where deserialization of untrusted data (converting data from an external format back into executable code without checking if it's safe) allows an attacker to upload a malicious model that runs arbitrary code (any commands the attacker wants) on the server when a describe query is executed on it.

NVD/CVE Database
08. CVE-2024-45853: untrusted deserialization in MindsDB 23.10.2.0 and newer (prediction)

security
Sep 12, 2024

CVE-2024-45853 is a vulnerability in MindsDB (a platform for building AI applications) versions 23.10.2.0 and newer where deserialization of untrusted data (the process of converting received data back into usable objects without checking if it's safe) allows an attacker to upload a malicious model that runs arbitrary code on the server when making predictions. This is a serious flaw because it gives attackers full control to execute whatever commands they want on the affected system.

NVD/CVE Database
09. CVE-2024-45852: untrusted deserialization in MindsDB 23.3.2.0 and newer (malicious model upload)

security
Sep 12, 2024

CVE-2024-45852 is a vulnerability in MindsDB (a platform for building AI applications) versions 23.3.2.0 and newer that allows deserialization of untrusted data (converting untrusted incoming data back into executable code). An attacker can upload a malicious model that runs arbitrary code (any commands they choose) on the server when someone interacts with it.

NVD/CVE Database
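The four MindsDB deserialization entries above are one flaw class: uploaded model artifacts loaded with pickle. Where only plain data is expected, a restricted Unpickler that refuses every global lookup blocks callable-smuggling payloads (a general defensive sketch, not MindsDB's actual fix):

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    # Plain data (lists, dicts, strings, numbers) never needs find_class,
    # so refusing all global lookups blocks any payload that tries to
    # smuggle in a callable such as os.system.
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(blob: bytes):
    return SafeUnpickler(io.BytesIO(blob)).load()

data = safe_loads(pickle.dumps({"weights": [0.1, 0.2]}))  # plain data loads

try:
    safe_loads(pickle.dumps(SafeUnpickler))  # a class reference needs find_class
    blocked = False
except pickle.PickleError:
    blocked = True
```

Real model weights usually warrant a format that cannot carry code at all (e.g. a weights-only tensor format); the restricted unpickler is a stopgap when the pickle format cannot be replaced.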
10. CVE-2024-6846: missing access checks on REST routes in the Chatbot with ChatGPT WordPress plugin (before 2.4.5)

security
Sep 5, 2024

A security flaw was found in the Chatbot with ChatGPT WordPress plugin (versions before 2.4.5) where certain REST routes (endpoints that external programs use to interact with the plugin) did not properly check user permissions, allowing anyone without logging in to delete error and chat logs.

Fix: Update the Chatbot with ChatGPT WordPress plugin to version 2.4.5 or later.

NVD/CVE Database
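The plugin is PHP, but the missing-authorization pattern is language-independent. A hedged Python sketch (handler and field names are illustrative) of a destructive endpoint with and without the check:

```python
def delete_logs(request: dict) -> dict:
    # Vulnerable pattern: the handler performs the destructive action
    # without asking who is calling. The fix is an explicit permission
    # check before any state change -- on every route, not just the UI.
    if not request.get("user_can_manage_options"):
        return {"status": 401, "error": "authentication required"}
    return {"status": 200, "deleted": True}
```

The broader lesson from this entry: REST routes are a separate attack surface from the admin UI, and each route needs its own authorization check.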
critical: CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows; prior to version 1.9.0, the Agentic Assis... (NVD/CVE Database, Mar 27, 2026)

critical: Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)

critical: CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)

critical: CISA: New Langflow flaw actively exploited to hijack AI workflows (BleepingComputer, Mar 26, 2026)