aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher.

AI Sec Watch: the security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,687 · Last 24 hours: 29 · Last 7 days: 171
Daily Briefing: Tuesday, March 31, 2026

› Critical Vulnerability in OpenAI Codex Allowed GitHub Token Compromise: Researchers discovered a critical vulnerability in OpenAI Codex (an AI system that generates code) that could have allowed attackers to steal GitHub tokens (secret credentials used to access GitHub accounts), potentially granting unauthorized access to code repositories and projects.

› Google Cloud Vertex AI 'Double Agents' Vulnerability Exposed: Researchers found that AI agents on Google Cloud Platform's Vertex AI could be weaponized to secretly compromise systems due to excessive default permissions granted to service agents (special accounts that allow cloud services to access resources), enabling attackers to steal data and gain unauthorized infrastructure control. Google responded by revising its documentation to better explain resource and account usage.

Latest Intel

01

CVE-2024-37902: DeepJavaLibrary (DJL) is an Engine-Agnostic Deep Learning Framework in Java. DJL versions 0.1.0 through 0.27.0 do not pre…

security
Jun 17, 2024

DeepJavaLibrary (DJL), a framework for building deep learning applications in Java, has a path traversal vulnerability (CWE-22, a flaw where an attacker can access files outside intended directories) in versions 0.1.0 through 0.27.0. This flaw allows attackers to overwrite system files by inserting archived files from absolute paths into the system.
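This is a classic archive-extraction flaw: member paths are trusted as written. A minimal sketch of a guard against absolute paths and `..` escapes (illustrative only, not DJL's actual patch; the function name is made up):

```python
import os
import tarfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a tar archive, rejecting absolute paths and '..' escapes."""
    dest_dir = os.path.realpath(dest_dir)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            # Absolute member names are the flaw class described above: reject outright.
            if os.path.isabs(member.name):
                raise ValueError(f"absolute path in archive: {member.name}")
            target = os.path.realpath(os.path.join(dest_dir, member.name))
            # Reject entries that resolve outside the destination directory.
            if os.path.commonpath([dest_dir, target]) != dest_dir:
                raise ValueError(f"entry escapes destination: {member.name}")
        tar.extractall(dest_dir)
```

Validating every member before extracting anything avoids partially writing a malicious archive to disk.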

Fix: Upgrade to DJL version 0.28.0, or use the patched DJL Large Model Inference containers (version 0.27.0).

NVD/CVE Database

Critical This Week (5 issues)

critical · CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_…`
CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026

Daily Briefing (continued)

› EU AI Act Enforcement Begins August 2026: The EU AI Act requires providers of general-purpose AI models (GPAI, meaning large AI systems that can be adapted for many uses) to follow specific development and documentation rules starting August 2, 2025, with the European Commission beginning enforcement and potential fines one year later on August 2, 2026.

› Prompt Injection Bypasses Chatbot Safety in 1millionbot Millie: A prompt injection vulnerability (a technique where attackers hide malicious instructions in their input to trick an AI) in the 1millionbot Millie chatbot allows users to bypass safety restrictions using Boolean logic tricks, potentially enabling extraction of sensitive information or access to blocked features (CVE-2026-4399, high severity).
02

CVE-2024-38459: langchain_experimental (aka LangChain Experimental) before 0.0.61 for LangChain provides Python REPL access without an o…

security
Jun 16, 2024

A security vulnerability in LangChain Experimental (a Python library for building AI applications) before version 0.0.61 allows users to access a Python REPL (read-eval-print loop, an interactive environment where code can be run directly) without requiring explicit permission. This issue happened because a previous attempt to fix a related vulnerability (CVE-2024-27444) was incomplete.

Fix: Update langchain_experimental to version 0.0.61 or later; the patch is in commit ce0b0f22a175139df8f41cdcfb4d2af411112009.
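The broader lesson is that embedded code execution should be opt-in, never the default. A toy sketch of that gate (the class name and flag below are hypothetical; langchain_experimental's actual fix differs in detail):

```python
import contextlib
import io

class GuardedPythonREPL:
    """Toy REPL wrapper that refuses to run code unless explicitly enabled."""

    def __init__(self, allow_dangerous_code: bool = False):
        self.allow_dangerous_code = allow_dangerous_code

    def run(self, code: str) -> str:
        if not self.allow_dangerous_code:
            raise PermissionError(
                "arbitrary code execution disabled; opt in with allow_dangerous_code=True"
            )
        out = io.StringIO()
        with contextlib.redirect_stdout(out):
            exec(code, {})  # still full interpreter access once enabled
        return out.getvalue()
```

Even with the gate, exec() grants full code execution; the flag only makes that an explicit deployment decision instead of a silent default.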

NVD/CVE Database
03

GitHub Copilot Chat: From Prompt Injection to Data Exfiltration

security
Jun 15, 2024

GitHub Copilot Chat, a VS Code extension that lets users ask questions about their code by sending it to an AI model, was vulnerable to prompt injection (tricking an AI by hiding instructions in its input) attacks. When analyzing untrusted source code, attackers could embed malicious instructions in the code itself, which would be sent to the AI and potentially lead to data exfiltration (unauthorized copying of sensitive information).

Embrace The Red
04

CVE-2024-0103: NVIDIA Triton Inference Server for Linux contains a vulnerability where a user may cause an incorrect Initialization of…

security
Jun 13, 2024

CVE-2024-0103 is a vulnerability in NVIDIA Triton Inference Server for Linux where an incorrect initialization of resources caused by a network issue could allow a user to disclose sensitive information. Its severity was assessed under CVSS 4.0 (the current major version of the Common Vulnerability Scoring System, which rates flaws on a 0-10 scale).

NVD/CVE Database
05

CVE-2024-0095: NVIDIA Triton Inference Server for Linux and Windows contains a vulnerability where a user can inject forged logs and ex…

security
Jun 13, 2024

CVE-2024-0095 is a vulnerability in NVIDIA Triton Inference Server (software that runs AI models) for Linux and Windows that allows users to inject fake log entries and commands, potentially leading to code execution (running unauthorized programs), denial of service (making the system unavailable), privilege escalation (gaining higher access rights), information disclosure (exposing sensitive data), and data tampering (modifying information). The vulnerability stems from improper neutralization of log output, meaning the system doesn't properly sanitize or clean user input before adding it to logs.
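Improper neutralization of log output (CWE-117) is usually fixed by escaping newline characters before user input reaches the logger, so an attacker cannot forge extra log lines. A minimal sketch (the function name is illustrative, not Triton's fix):

```python
import logging

def sanitize_for_log(value: str) -> str:
    """Escape CR/LF so attacker input cannot forge additional log entries."""
    return value.replace("\r", "\\r").replace("\n", "\\n")

logging.basicConfig(format="%(levelname)s %(message)s")
log = logging.getLogger("inference")

# A forged-entry attempt: the newline would start a fake second log line.
user_model_name = "resnet50\nERROR admin login succeeded"
log.warning("request for model %s", sanitize_for_log(user_model_name))
# The log now shows one line with a literal "\n" instead of two entries.
```

Structured logging (e.g. JSON per record) achieves the same goal by construction, since field values are encoded rather than concatenated.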

NVD/CVE Database
06

CVE-2024-37014: Langflow through 0.6.19 allows remote code execution if untrusted users are able to reach the "POST /api/v1/custom_compo…"

security
Jun 10, 2024

Langflow versions up to 0.6.19 have a vulnerability that allows remote code execution (RCE, where attackers can run commands on a system they don't own) if untrusted users can access a specific API endpoint called POST /api/v1/custom_component and submit Python code through it. The vulnerability stems from code injection (CWE-94, where malicious code is inserted into a program), which happens because the application does not properly control how user-provided Python scripts are executed.
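For illustration, here is a best-effort AST screen over user-supplied Python. Note the heavy caveat: blocklists like this are routinely bypassed, and the robust fixes are authenticating the endpoint and not executing untrusted code at all; the names below are invented, not Langflow's code:

```python
import ast

# Best-effort screen before executing user-supplied Python. NOTE: AST
# blocklists are routinely bypassed; treat this as a speed bump only.
DISALLOWED = (ast.Import, ast.ImportFrom, ast.Attribute)

def screen_user_code(source: str) -> None:
    """Raise ValueError if the source contains an obviously dangerous node."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, DISALLOWED):
            raise ValueError(f"disallowed construct: {type(node).__name__}")

screen_user_code("1 + 1")  # accepted
try:
    screen_user_code("import os; os.system('id')")
except ValueError as e:
    print(e)  # disallowed construct: Import
```

Because `exec` of user input is remote code execution by definition, sandboxing (separate process, container, or no execution at all) is the defensible design, not input filtering.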

NVD/CVE Database
07

Why work at the EU AI Office?

policy
Jun 7, 2024

This article describes the EU AI Office, a newly established regulatory organization within the European Commission tasked with enforcing the AI Act (the world's first comprehensive binding AI regulation) across the European Union. Unlike other AI safety institutes in other countries, the EU AI Office has actual enforcement powers to require AI model providers to fix problems or remove non-compliant models from the market. The office will conduct model evaluations, investigate violations, and work with international partners to shape global AI governance standards.

EU AI Act Updates
08

CVE-2024-5206: A sensitive data leakage vulnerability was identified in scikit-learn's TfidfVectorizer, specifically in versions up to…

security · privacy
Jun 6, 2024

A vulnerability in scikit-learn's TfidfVectorizer (a tool that converts text into numerical data for machine learning) stored all words from training data in an attribute called `stop_words_`, instead of just the necessary ones, potentially leaking sensitive information like passwords or keys. The vulnerability affected versions up to 1.4.1.post1 but the risk depends on what type of data is being processed.

Fix: Fixed in version 1.5.0.
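The leak pattern itself is easy to reproduce without scikit-learn: fitted state that retains the training tokens it discarded. A stdlib stand-in (the class and attribute names mimic the sklearn API purely for illustration):

```python
class ToyVectorizer:
    """Stand-in for the flaw pattern: fitting keeps the tokens it pruned."""

    def fit(self, docs, min_df=2):
        counts = {}
        for doc in docs:
            for tok in set(doc.split()):
                counts[tok] = counts.get(tok, 0) + 1
        self.vocabulary_ = {t for t, c in counts.items() if c >= min_df}
        # Pruned tokens are retained "for introspection", including any
        # secret that appeared in the training text.
        self.stop_words_ = {t for t, c in counts.items() if c < min_df}
        return self

docs = ["user logged in", "user logged out", "password hunter2 leaked once"]
vec = ToyVectorizer().fit(docs)
assert "hunter2" in vec.stop_words_  # the rare secret token was retained

vec.stop_words_ = None  # scrub before pickling or sharing the fitted model
```

The takeaway generalizes: before shipping or serializing a fitted model, audit which attributes hold raw training data.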

NVD/CVE Database
09

CVE-2024-5187: A vulnerability in the `download_model_with_test_data` function of the onnx/onnx framework, version 1.16.0, allows for a…

security
Jun 6, 2024

A vulnerability in the ONNX framework (version 1.16.0) allows attackers to overwrite any file on a system by uploading a malicious tar file (a compressed archive format) with specially crafted paths. Because the vulnerable function doesn't check whether file paths are safe before extracting the tar file, attackers could potentially execute malicious code, delete important files, or compromise system security.

NVD/CVE Database
10

CVE-2024-4888: BerriAI's litellm, in its latest version, is vulnerable to arbitrary file deletion due to improper input validation on t…

security
Jun 6, 2024

BerriAI's litellm has a vulnerability (CVE-2024-4888) where the `/audio/transcriptions` endpoint improperly validates user input, allowing attackers to delete arbitrary files on the server without authorization. The flaw occurs because the code uses `os.remove()` (a function that deletes files) directly on user-supplied file paths, potentially exposing sensitive files like SSH keys or databases.
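The standard mitigation for this class is to resolve the user-supplied path and verify containment before deleting anything. A sketch under stated assumptions (the function and directory layout are hypothetical, not litellm's actual patch):

```python
import os

def delete_upload(user_path: str, allowed_dir: str) -> None:
    """Delete a file only if it resolves inside allowed_dir."""
    allowed = os.path.realpath(allowed_dir)
    # Resolve symlinks and '..' components before comparing.
    target = os.path.realpath(os.path.join(allowed, user_path))
    if os.path.commonpath([allowed, target]) != allowed:
        raise PermissionError(f"refusing to delete outside {allowed}: {user_path}")
    os.remove(target)
```

Using `os.path.realpath` before the containment check matters: a naive string prefix test can be defeated by symlinks or by sibling directories sharing a prefix.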

NVD/CVE Database
Critical This Week (continued)

critical · CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis…
CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026

critical · Attackers exploit critical Langflow RCE within hours as CISA sounds alarm
CSO Online · Mar 27, 2026

critical · CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability
CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026

critical · CISA: New Langflow flaw actively exploited to hijack AI workflows
BleepingComputer · Mar 26, 2026