aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,700 · Last 24 hours: 26 · Last 7 days: 172
Daily Briefing: Tuesday, March 31, 2026

- FastGPT Authentication Bypass Enables Server-Side Proxying: FastGPT versions before 4.14.9.5 have a critical vulnerability (CVE-2026-34162) where an HTTP testing endpoint lacks authentication and acts as an open proxy, letting unauthenticated attackers make requests on behalf of the FastGPT server. A separate high-severity SSRF vulnerability (CVE-2026-34163) in the same platform's MCP tools endpoints allows authenticated attackers to trick the server into scanning internal networks and accessing cloud metadata services.

- Command Injection Flaws Hit MLflow and OpenAI Codex: MLflow's model serving feature has a high-severity command injection vulnerability (CVE-2026-0596) where attackers can insert shell commands through unsanitized model paths when `enable_mlserver=True`. Separately, researchers found a critical vulnerability in OpenAI Codex that could have allowed attackers to steal GitHub tokens (secret credentials for accessing repositories), which OpenAI has since patched.

- Prompt Injection Bypasses Safety Controls in Multiple AI Tools: Multiple AI systems are vulnerable to prompt injection attacks (where attackers hide malicious instructions in input to trick the AI): the 1millionbot Millie chatbot (CVE-2026-4399) can be tricked using Boolean logic to bypass restrictions, Sixth's AI terminal tool (CVE-2026-30310) can be fooled into running dangerous commands without user approval, and CrewAI framework vulnerabilities allow attackers to chain exploits and escape sandboxes (restricted environments meant to contain AI actions).

- Google Cloud Vertex AI Service Agents Had Excessive Default Permissions: Researchers found that AI agents running on Google Cloud's Vertex AI platform could be weaponized as "double agents" because the default service agent accounts (special accounts that run AI services) had excessive permissions, allowing attackers to steal credentials, access private code repositories, and reach internal infrastructure. Google responded by updating their documentation to better explain how Vertex AI uses resources and accounts.

Critical This Week (5 issues)

critical · CVE-2026-34162 · NVD/CVE Database · Mar 31, 2026
CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/…) lacks authentication and acts as an open proxy, letting unauthenticated attackers make requests on behalf of the FastGPT server.

Latest Intel (page 188 of 270)

01. CVE-2024-39720 (security, Oct 31, 2024)

A vulnerability in Ollama before version 0.1.46 allows an attacker to crash the application by uploading a malformed GGUF file (a model format file) using two HTTP requests and then referencing it in a custom Modelfile. This causes a segmentation fault (a type of crash where the program tries to access memory it shouldn't), making the application unavailable.

Fix: Update Ollama to version 0.1.46 or later.

NVD/CVE Database
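This crash class comes from a loader trusting file contents before parsing them. A minimal sketch of a pre-load sanity check, assuming the published GGUF layout (4-byte ASCII magic "GGUF" followed by a little-endian uint32 version; the accepted version range and the function name here are illustrative):

```python
import struct

GGUF_MAGIC = b"GGUF"

def validate_gguf_header(path: str) -> int:
    """Reject files that do not start with the GGUF magic bytes
    before handing them to a model loader. Header check only:
    it does not validate the rest of the file's structure."""
    with open(path, "rb") as f:
        head = f.read(8)
    if len(head) < 8 or head[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    version = struct.unpack("<I", head[4:8])[0]
    if not (1 <= version <= 3):  # versions published at time of writing
        raise ValueError(f"unsupported GGUF version {version}")
    return version
```

A check like this belongs at the upload boundary, before the file is ever referenced by a Modelfile.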
02. CVE-2024-39719 (security, Oct 31, 2024)

Ollama versions through 0.3.14 have a vulnerability where the api/create endpoint leaks information about which files exist on the server. When someone calls the CreateModel route with a path that doesn't exist, the server returns an error message saying "File does not exist", allowing attackers to probe the server's file system.

NVD/CVE Database
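The file-existence oracle exists because error responses differ depending on why a lookup failed. A minimal sketch of the standard mitigation, returning one generic error for every failure mode (names are illustrative, not Ollama's actual code):

```python
import os

def load_model_config(path: str) -> bytes:
    """Return file contents, raising the SAME generic error for every
    failure so responses don't reveal which paths exist on disk."""
    try:
        if not os.path.isfile(path):
            raise FileNotFoundError
        with open(path, "rb") as f:
            return f.read()
    except OSError:
        # Missing file, permission denied, malformed path: one message for all.
        raise ValueError("invalid model reference") from None
```

Logging can still record the real cause server-side; only the client-visible message needs to be uniform.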
03. CVE-2024-42835 (security, Oct 31, 2024)

Langflow v1.0.12 contains a remote code execution vulnerability (RCE, where an attacker can run commands on a system they don't own) in its PythonCodeTool component. This flaw allows attackers to execute arbitrary code through the tool. The vulnerability was publicly disclosed in October 2024.

NVD/CVE Database
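Components that execute model- or user-supplied Python are hard to make safe. A denylist screen like the sketch below (illustrative, not Langflow's fix) rejects the most obvious payloads, but it is only a screen: AST-level denylists are bypassable, and real isolation requires an OS-level sandbox or separate process with dropped privileges.

```python
import ast

BLOCKED_NODES = (ast.Import, ast.ImportFrom)
BLOCKED_NAMES = {"eval", "exec", "open", "__import__", "compile"}

def screen_untrusted_code(source: str) -> None:
    """Raise if the snippet imports modules or references known-dangerous
    builtins. A pre-filter, not a sandbox: bypasses exist."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, BLOCKED_NODES):
            raise ValueError("imports are not allowed")
        if isinstance(node, ast.Name) and node.id in BLOCKED_NAMES:
            raise ValueError(f"use of {node.id!r} is not allowed")
```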
04. CVE-2024-48063 (security, Oct 29, 2024)

PyTorch versions 2.4.1 and earlier contain a vulnerability in RemoteModule that allows RCE (remote code execution, where an attacker can run commands on a system they don't own) through deserialization of untrusted data. However, multiple parties dispute whether this is actually a security flaw, arguing it is intended behavior in PyTorch's distributed computing features (tools for running AI computations across multiple machines).

NVD/CVE Database
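The dispute aside, the underlying hazard is generic: Python pickle deserialization can execute code during loading. A minimal sketch of a restricted unpickler that only reconstructs an allowlisted set of types (the allowlist here is illustrative; for model weights specifically, formats like safetensors avoid pickle entirely):

```python
import io
import pickle

ALLOWED_GLOBALS = {
    ("builtins", "dict"),
    ("builtins", "list"),
    ("builtins", "str"),
    ("builtins", "int"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Any global lookup outside the allowlist is refused, which
        # blocks the classic __reduce__-based code-execution payloads.
        if (module, name) not in ALLOWED_GLOBALS:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    """Deserialize untrusted pickle data with a restricted unpickler."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```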
05. CVE-2024-8309 (security, Oct 29, 2024)

A vulnerability in langchain version 0.2.5's GraphCypherQAChain class allows attackers to use prompt injection (tricking an AI by hiding instructions in its input) to perform SQL injection attacks on databases. This can let attackers steal data, delete information, disrupt services, or access data they shouldn't have access to, especially in systems serving multiple users.

NVD/CVE Database
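Prompt injection cannot be reliably filtered out of model input, so the practical defense is limiting what an injected query can do. A sketch of a coarse write-clause check (the keyword list is illustrative; the robust fix is executing model-generated queries under a read-only database role):

```python
import re

WRITE_KEYWORDS = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|MERGE|SET|REMOVE|TRUNCATE)\b",
    re.IGNORECASE,
)

def assert_read_only(generated_query: str) -> str:
    """Refuse a model-generated SQL/Cypher query containing write clauses."""
    match = WRITE_KEYWORDS.search(generated_query)
    if match:
        raise ValueError(f"write clause {match.group(0)!r} in generated query")
    return generated_query
```

Keyword checks like this are defense in depth, not a substitute for least-privilege database credentials.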
06. CVE-2024-7774 (security, Oct 29, 2024)

CVE-2024-7774 is a path traversal vulnerability (a security flaw where attackers can access files outside the intended directory) in langchain-ai/langchainjs version 0.2.5 that allows attackers to save, overwrite, read, and delete files anywhere on a system. The vulnerability exists in the `getFullPath` method and related functions because they do not properly filter or validate user input before handling file paths.

Fix: A patch is available at https://github.com/langchain-ai/langchainjs/commit/a0fad77d6b569e5872bd4a9d33be0c0785e538a9

NVD/CVE Database
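The generic fix for this bug class is resolving the requested path and verifying it stays inside the intended base directory. A minimal sketch (in Python for consistency with the other examples here, though langchainjs is JavaScript; the check has the same shape in any language):

```python
import os

def safe_full_path(base_dir: str, user_path: str) -> str:
    """Join a user-supplied path onto base_dir, rejecting anything that
    resolves (after symlinks and '..' segments) outside base_dir."""
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, user_path))
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError("path escapes base directory")
    return candidate
```

Resolving with realpath before the containment check matters: comparing unresolved strings misses both `..` segments and symlink escapes.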
07. CVE-2024-7042 (security, Oct 29, 2024)

A vulnerability in the GraphCypherQAChain class of langchain-ai/langchainjs version 0.2.5 allows prompt injection (tricking an AI by hiding instructions in its input), which can lead to SQL injection (inserting malicious database commands). This vulnerability could allow attackers to manipulate data, steal sensitive information, delete data to cause service outages, or breach security in systems serving multiple users.

NVD/CVE Database
08. ZombAIs: From Prompt Injection to C2 with Claude Computer Use (security, safety, Oct 24, 2024)

Claude Computer Use is a new AI tool from Anthropic that lets Claude take screenshots and run commands on computers autonomously. The feature carries serious security risks because of prompt injection (tricking an AI by hiding malicious instructions in its input), which could allow attackers to make Claude execute unwanted commands on machines it controls.

Embrace The Red
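Until prompt injection has a real fix, agents that can run commands need an approval step the model cannot influence. A minimal sketch of a human-in-the-loop gate (the allowlist and function names are illustrative, not part of Computer Use):

```python
import shlex

SAFE_COMMANDS = {"ls", "pwd", "whoami"}  # illustrative read-only allowlist

def should_run(command: str, confirm=input) -> bool:
    """Auto-approve known-safe commands; everything else requires an
    explicit human 'yes' typed outside the model's control."""
    try:
        program = shlex.split(command)[0]
    except (ValueError, IndexError):
        return False
    if program in SAFE_COMMANDS and ";" not in command and "|" not in command:
        return True
    return confirm(f"Agent wants to run {command!r}. Type 'yes' to allow: ") == "yes"
```

The key design point is that the confirmation channel (a terminal prompt, a push notification) is out-of-band: injected text in a web page can ask the agent to run a command, but it cannot type "yes" for the user.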
09. CVE-2024-48142 (security, Oct 24, 2024)

CVE-2024-48142 is a prompt injection vulnerability (a technique where attackers hide malicious instructions in text sent to an AI) in Monica ChatGPT AI Assistant v2.4.0 that lets attackers steal all chat messages between a user and the AI through a specially crafted message.

NVD/CVE Database
10. CVE-2024-48140 (security, Oct 24, 2024)

A prompt injection vulnerability (tricking an AI by hiding instructions in its input) was found in Monica Your AI Copilot v6.3.0, a ChatGPT-powered browser extension. Attackers can exploit this flaw by sending a specially crafted message to access and steal all chat data between the user and the AI assistant, both from past conversations and future ones.

NVD/CVE Database
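Both Monica flaws exfiltrate chat data, and a common exfiltration channel for browser-based assistants is a model-emitted link or image URL that carries the stolen data to an attacker's server. A sketch of scrubbing unknown hosts from model output before rendering (the allowlist is illustrative):

```python
import re

URL_RE = re.compile(r"https?://[^\s)\"'<>]+", re.IGNORECASE)
TRUSTED_HOSTS = {"example.com"}  # illustrative allowlist

def strip_untrusted_links(model_output: str) -> str:
    """Replace links to unknown hosts so a prompt-injected response
    can't smuggle chat data out through a URL the client auto-loads."""
    def _scrub(match: re.Match) -> str:
        url = match.group(0)
        host = re.sub(r"^https?://", "", url, flags=re.IGNORECASE).split("/")[0]
        return url if host.lower() in TRUSTED_HOSTS else "[link removed]"
    return URL_RE.sub(_scrub, model_output)
```

Auto-rendered markdown images are the worst case, since they exfiltrate with zero user clicks; scrubbing before render closes that path for non-allowlisted hosts.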
critical · CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026
CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_…`

critical · CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026
CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis…

critical · CSO Online · Mar 27, 2026
Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

critical · CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026
CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability