aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,700
[LAST_24H]
23
[LAST_7D]
162
Daily BriefingTuesday, March 31, 2026
>

FastGPT Authentication Bypass Enables Server-Side Proxying: FastGPT versions before 4.14.9.5 have a critical vulnerability (CVE-2026-34162) where an HTTP testing endpoint lacks authentication and acts as an open proxy, letting unauthenticated attackers make requests on behalf of the FastGPT server. A separate high-severity SSRF vulnerability (CVE-2026-34163) in the same platform's MCP tools endpoints allows authenticated attackers to trick the server into scanning internal networks and accessing cloud metadata services.

>

Command Injection Flaws Hit MLflow and OpenAI Codex: MLflow's model serving feature has a high-severity command injection vulnerability (CVE-2026-0596) where attackers can insert shell commands through unsanitized model paths when `enable_mlserver=True`. Separately, researchers found a critical vulnerability in OpenAI Codex that could have allowed attackers to steal GitHub tokens (secret credentials for accessing repositories), which OpenAI has since patched.

Latest Intel

page 183/270
VIEW ALL
01

ChatGPT Operator: Prompt Injection Exploits & Defenses

securityresearch
Critical This Week5 issues
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026
>

Prompt Injection Bypasses Safety Controls in Multiple AI Tools: Multiple AI systems are vulnerable to prompt injection attacks (where attackers hide malicious instructions in input to trick the AI): the 1millionbot Millie chatbot (CVE-2026-4399) can be tricked using Boolean logic to bypass restrictions, Sixth's AI terminal tool (CVE-2026-30310) can be fooled into running dangerous commands without user approval, and CrewAI framework vulnerabilities allow attackers to chain exploits and escape sandboxes (restricted environments meant to contain AI actions).

>

Google Cloud Vertex AI Service Agents Had Excessive Default Permissions: Researchers found that AI agents running on Google Cloud's Vertex AI platform could be weaponized as "double agents" because the default service agent accounts (special accounts that run AI services) had excessive permissions, allowing attackers to steal credentials, access private code repositories, and reach internal infrastructure. Google responded by updating their documentation to better explain how Vertex AI uses resources and accounts.

Feb 17, 2025

ChatGPT Operator is an AI agent that can control web browsers to complete tasks, but it is vulnerable to prompt injection (tricking the AI by hiding malicious instructions in its input) that could allow attackers to steal data or perform unauthorized actions. OpenAI has implemented three defensive layers: user monitoring to watch what the agent does, inline confirmation requests within the chat asking the user to approve actions, and out-of-band confirmation requests that appear when the agent crosses website boundaries, though these mitigations are not foolproof.

Fix: OpenAI has implemented three primary mitigation techniques: (1) User Monitoring, where users are prompted to observe what Operator is doing, what text it types, and which buttons it clicks, likely based on a data classification model that detects sensitive information on screen; (2) Inline Confirmation Requests, where Operator asks the user within the chat conversation to approve certain actions or clarify requests before proceeding; and (3) Out-of-Band Confirmation Requests, which appear when Operator navigates across websites or performs complex actions, informing the user what is about to happen and giving them the option to pause or resume the operation.

Embrace The Red
02

CVE-2024-3303: An issue was discovered in GitLab EE affecting all versions starting from 16.0 prior to 17.6.5, starting from 17.7 prior

security
Feb 13, 2025

A vulnerability (CVE-2024-3303) was found in GitLab EE (a version control platform for managing code) that allows attackers to steal the contents of private issues through prompt injection (tricking the AI by hiding instructions in its input). The flaw affects multiple versions: 16.0 through 17.6.4, 17.7 through 17.7.3, and 17.8 through 17.8.1.

NVD/CVE Database
03

CVE-2024-53880: NVIDIA Triton Inference Server contains a vulnerability in the model loading API, where a user could cause an integer ov

security
Feb 12, 2025

NVIDIA Triton Inference Server has a vulnerability where loading a model with an extremely large file size causes an integer overflow or wraparound error (a type of bug where a number gets too big for its storage space and wraps around to an incorrect value), potentially causing a denial of service (making the system unavailable). The vulnerability exists in the model loading API (the interface used to load AI models into the server).

NVD/CVE Database
04

CVE-2024-12366: PandasAI uses an interactive prompt function that is vulnerable to prompt injection and run arbitrary Python code that c

security
Feb 11, 2025

PandasAI contains a vulnerability where its interactive prompt function can be exploited through prompt injection (tricking the AI by hiding instructions in its input), allowing attackers to run arbitrary Python code and achieve RCE (remote code execution, where an attacker can run commands on a system they don't own) instead of just getting explanations from the language model.

NVD/CVE Database
05

Hacking Gemini's Memory with Prompt Injection and Delayed Tool Invocation

securitysafety
Feb 10, 2025

Google's Gemini AI can be tricked into storing false information in a user's long-term memory through prompt injection (hidden malicious instructions embedded in documents) combined with delayed tool invocation (planting trigger words that cause the AI to execute commands later when the user unknowingly says them). An attacker can craft a document that appears normal but contains hidden instructions telling Gemini to save false information about the user if they respond with certain words like 'yes' or 'no' in the same conversation.

Embrace The Red
06

CVE-2025-25183: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed statements

security
Feb 7, 2025

vLLM, a system for running large language models efficiently, has a vulnerability where attackers can craft malicious input to cause hash collisions (when two different inputs produce the same fingerprint value), allowing them to reuse cached data (stored computation results) from previous requests and corrupt subsequent responses. Python 3.12 made hash values predictable, making this attack easier to execute intentionally.

Fix: This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability.

NVD/CVE Database
07

CVE-2025-24981: MDC is a tool to take regular Markdown and write documents interacting deeply with a Vue component. In affected versions

security
Feb 6, 2025

MDC is a tool that converts Markdown into documents that work with Vue components (a JavaScript framework for building user interfaces). In affected versions, the tool has a security flaw where it doesn't properly validate URLs in Markdown, allowing attackers to sneak in malicious JavaScript code by encoding it in a special format (hex-encoded HTML entities). This can lead to XSS (cross-site scripting, where unauthorized code runs in a user's browser) if the tool processes untrusted Markdown.

Fix: Upgrade to version 0.13.3 or later. The source states: 'This vulnerability has been addressed in version 0.13.3 and all users are advised to upgrade.'

NVD/CVE Database
08

CVE-2025-24357: vLLM is a library for LLM inference and serving. vllm/model_executor/weight_utils.py implements hf_model_weights_iterato

security
Jan 27, 2025

vLLM is a library that loads AI models from HuggingFace using a function that calls torch.load, a PyTorch function for loading model data. The problem is that torch.load is set to accept untrusted data without verification, which means if someone provides a malicious model file, it could run harmful code on the system during the loading process (deserialization of untrusted data, where code runs while converting saved data back into usable form).

Fix: This vulnerability is fixed in v0.7.0. Users should upgrade to this version or later.

NVD/CVE Database
09

CVE-2024-13698: The Jobify - Job Board WordPress Theme for WordPress is vulnerable to unauthorized access and modification of data due t

security
Jan 24, 2025

The Jobify WordPress theme (versions up to 4.2.7) has a missing authorization vulnerability that allows unauthenticated attackers to bypass security checks on two AI image functions. Attackers can exploit this to upload image files from arbitrary locations and generate AI images using the site's OpenAI API key without permission.

NVD/CVE Database
10

CVE-2025-23042: Gradio is an open-source Python package that allows quick building of demos and web application for machine learning mod

security
Jan 14, 2025

Gradio, an open-source Python package for building web applications around machine learning models, has a security flaw in its Access Control List (ACL, a system that controls which files users can access). Attackers can bypass this protection on Windows and macOS by changing the capitalization of file paths, since these operating systems treat uppercase and lowercase letters as the same in file names. This allows unauthorized access to sensitive files that should be blocked.

Fix: This issue has been addressed in release version 5.6.0. Users are advised to upgrade. There are no known workarounds for this vulnerability.

NVD/CVE Database
Prev1...181182183184185...270Next
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026