aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,700 · Last 24 hours: 25 · Last 7 days: 171
Daily Briefing: Tuesday, March 31, 2026

- FastGPT Authentication Bypass Enables Server-Side Proxying: FastGPT versions before 4.14.9.5 have a critical vulnerability (CVE-2026-34162) where an HTTP testing endpoint lacks authentication and acts as an open proxy, letting unauthenticated attackers make requests on behalf of the FastGPT server. A separate high-severity SSRF vulnerability (CVE-2026-34163) in the same platform's MCP tools endpoints allows authenticated attackers to trick the server into scanning internal networks and accessing cloud metadata services.

- Command Injection Flaws Hit MLflow and OpenAI Codex: MLflow's model serving feature has a high-severity command injection vulnerability (CVE-2026-0596) where attackers can insert shell commands through unsanitized model paths when `enable_mlserver=True`. Separately, researchers found a critical vulnerability in OpenAI Codex that could have allowed attackers to steal GitHub tokens (secret credentials for accessing repositories), which OpenAI has since patched.

- Prompt Injection Bypasses Safety Controls in Multiple AI Tools: Multiple AI systems are vulnerable to prompt injection attacks (where attackers hide malicious instructions in input to trick the AI): the 1millionbot Millie chatbot (CVE-2026-4399) can be tricked using Boolean logic to bypass restrictions, Sixth's AI terminal tool (CVE-2026-30310) can be fooled into running dangerous commands without user approval, and CrewAI framework vulnerabilities allow attackers to chain exploits and escape sandboxes (restricted environments meant to contain AI actions).

- Google Cloud Vertex AI Service Agents Had Excessive Default Permissions: Researchers found that AI agents running on Google Cloud's Vertex AI platform could be weaponized as "double agents" because the default service agent accounts (special accounts that run AI services) had excessive permissions, allowing attackers to steal credentials, access private code repositories, and reach internal infrastructure. Google responded by updating their documentation to better explain how Vertex AI uses resources and accounts.

Latest Intel

01

CVE-2024-56516: free-one-api allows users to access large language model reverse engineering libraries through the standard OpenAI API format

security
Dec 30, 2024

free-one-api, a tool that lets users access large language model reverse engineering libraries (code or techniques to understand how AI models work) through OpenAI's API format, uses MD5 (a password hashing algorithm, or mathematical function to scramble passwords) to protect user passwords in versions 1.0.1 and earlier. MD5 is cryptographically broken (mathematically compromised and no longer secure), making it vulnerable to collision attacks (where attackers can forge different inputs that produce the same hash) and easy to crack with modern computers, putting user credentials at risk.

NVD/CVE Database
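As a minimal sketch of the fix direction (not free-one-api's actual code), passwords can be stored with a salted, deliberately slow key-derivation function from the Python standard library instead of MD5; the function names here are illustrative:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    # A random per-user salt defeats precomputed (rainbow-table) attacks,
    # and the high iteration count makes brute force expensive.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt + digest  # store the salt alongside the hash

def verify_password(password: str, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)
```

Unlike MD5, PBKDF2 (or Argon2/bcrypt where third-party libraries are acceptable) is designed to be slow, so leaked hashes cannot be cracked cheaply.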
02

CVE-2024-56800: Firecrawl is a web scraper that allows users to extract the content of a webpage for a large language model. Versions prior to 1.1.1 are vulnerable to server-side request forgery (SSRF).

security
Dec 30, 2024

Firecrawl, a web scraper that extracts webpage content for large language models, had a server-side request forgery vulnerability (SSRF, a flaw where an attacker tricks a server into making unwanted requests to internal networks) in versions before 1.1.1 that could expose local network resources. The cloud service was patched on December 27th, 2024, and the open-source version was patched on December 29th, 2024, with no user data exposed.

Fix: All open-source Firecrawl users should upgrade to v1.1.1. For the unpatched playwright services, users should configure a secure proxy by setting the `PROXY_SERVER` environment variable and ensure the proxy is configured to block all traffic to link-local IP addresses (see documentation for setup instructions).

NVD/CVE Database
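The link-local blocking that the fix describes can be sketched with the Python standard library; this is an illustrative guard, not Firecrawl's implementation, and the function name is an assumption:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to link-local, loopback, private, or
    reserved addresses, the ranges SSRF attacks typically target
    (e.g. the 169.254.169.254 cloud metadata endpoint)."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        # Resolve every address the hostname maps to, not just the first.
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_link_local or addr.is_loopback or addr.is_private or addr.is_reserved:
            return False
    return True
```

A production guard also has to re-check addresses at connection time (to resist DNS rebinding) and follow redirects with the same policy, which is why delegating to a filtering proxy, as the advisory suggests, is often simpler.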
03

CVE-2024-11896: The Text Prompter – Unlimited chatgpt text prompts for openai tasks plugin for WordPress is vulnerable to Stored Cross-Site Scripting.

security
Dec 24, 2024

A WordPress plugin called Text Prompter is vulnerable to stored cross-site scripting (XSS, a type of attack where harmful code is hidden in web pages and runs when users visit them) in all versions up to 1.0.7. Attackers with contributor-level access or higher can inject malicious scripts through the plugin's shortcode feature because the plugin does not properly filter or secure user input.

NVD/CVE Database
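The missing control in stored XSS cases like this is output escaping. A minimal sketch of the principle (the plugin itself is PHP, so this Python version and its function name are purely illustrative):

```python
import html

def render_shortcode_output(user_text: str) -> str:
    # Escape <, >, &, and quotes so contributor-supplied text is rendered
    # as inert text rather than parsed as markup: an injected <script>
    # tag becomes the harmless literal string "&lt;script&gt;".
    return '<div class="prompt">' + html.escape(user_text, quote=True) + "</div>"
```

In WordPress specifically, the equivalent is escaping shortcode output with functions such as `esc_html()` before echoing it into the page.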
04

Trust No AI: Prompt Injection Along the CIA Security Triad (Paper)

security, research
Dec 23, 2024

A new research paper examines prompt injection attacks (tricks where hidden instructions in user inputs manipulate AI systems) and how they can compromise the CIA triad (confidentiality, integrity, and availability, the three core principles of security). The paper includes real-world examples of these attacks against major AI vendors like OpenAI, Google, Anthropic, and Microsoft, and aims to help traditional cybersecurity experts better understand and defend against these emerging AI-specific threats.

Embrace The Red
05

Security ProbLLMs in xAI's Grok: A Deep Dive

security, research
Dec 16, 2024

A security researcher analyzed xAI's Grok chatbot (an AI assistant available through X and an API) for vulnerabilities and found multiple security issues, including prompt injection (tricking the AI by hiding instructions in user posts, images, and PDFs), data exfiltration (stealing information from the system), phishing attacks through clickable links, and ASCII smuggling (hiding invisible text to manipulate the AI's behavior). The researcher responsibly disclosed these findings to xAI.

Embrace The Red
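ASCII smuggling typically hides instructions in the invisible Unicode "Tags" block (U+E0000 through U+E007F), which mirrors ASCII but renders as nothing in most UIs. A minimal filter, as an illustrative defense rather than anything xAI ships:

```python
def strip_tag_characters(text: str) -> str:
    """Remove invisible Unicode 'Tags' block characters (U+E0000-U+E007F),
    which ASCII-smuggling attacks use to embed instructions that humans
    cannot see but the model still reads."""
    return "".join(ch for ch in text if not 0xE0000 <= ord(ch) <= 0xE007F)
```

Running untrusted input (posts, documents, OCR'd images) through a filter like this before it reaches the model removes one common smuggling channel, though it does not address visible prompt injection.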
06

CVE-2024-54306: Cross-Site Request Forgery (CSRF) vulnerability in KCT AIKCT Engine Chatbot, ChatGPT, Gemini, GPT-4o Best AI Chatbot allows unauthorized actions in versions up to 1.6.2.

security
Dec 13, 2024

A CSRF vulnerability (cross-site request forgery, where an attacker tricks a user into making unwanted requests on a website they're logged into) was found in the KCT AIKCT Engine Chatbot plugin affecting versions up to 1.6.2. The vulnerability allows attackers to perform unauthorized actions by exploiting this weakness in how the chatbot handles user requests.

NVD/CVE Database
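The standard defense against CSRF is a per-session anti-forgery token that the server generates, embeds in each form, and compares in constant time on submission. A minimal sketch assuming a dict-like session store (function names are illustrative, not the plugin's API):

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # An unguessable random token bound to the user's session; a
    # cross-site attacker cannot read it, so forged requests lack it.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # embedded in the form as a hidden field

def validate_csrf_token(session: dict, submitted: str) -> bool:
    expected = session.get("csrf_token", "")
    # compare_digest avoids leaking the token through timing differences.
    return bool(expected) and hmac.compare_digest(expected, submitted)
```

WordPress plugins get the same protection from nonces (`wp_nonce_field()` / `check_admin_referer()`); this class of CVE usually means those checks were missing on a state-changing endpoint.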
07

CVE-2024-12236: A security issue exists in Vertex Gemini API for customers using VPC-SC. By utilizing a custom crafted file URI for image input, an attacker could exfiltrate data outside the VPC-SC perimeter.

security
Dec 10, 2024

A security vulnerability in Google's Vertex Gemini API (a generative AI service) affects customers using VPC-SC (VPC Service Controls, a security tool that restricts data leaving a virtual private network). An attacker could craft a malicious file path that tricks the API into sending image data outside the security perimeter, bypassing the intended protections.

Fix: Google Cloud Platform implemented a fix to return an error message when a media file URL is specified in the fileUri parameter and VPC Service Controls is enabled. No further fix actions are needed.

NVD/CVE Database
08

Terminal DiLLMa: LLM-powered Apps Can Hijack Your Terminal Via Prompt Injection

security, research
Dec 6, 2024

LLMs (large language models) can output ANSI escape codes (special control characters that modify how terminal emulators display text and behave), and when LLM-powered applications print this output to a terminal without filtering it, attackers can use prompt injection (tricking an AI by hiding instructions in its input) to make the terminal execute harmful commands like clearing the screen, hiding text, or stealing clipboard data. The vulnerability affects LLM-integrated command-line tools and applications that don't properly handle or encode these control characters before displaying LLM output.

Embrace The Red
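A minimal sketch of the mitigation this research implies: strip CSI and OSC escape sequences from LLM output before it reaches the terminal. The regex and function name are illustrative, and a production filter would cover more sequence families:

```python
import re

# CSI sequences (ESC [ ... final byte) cover color changes, cursor
# movement, and screen clearing; OSC sequences (ESC ] ... BEL or ST)
# cover title changes and the clipboard-write trick (OSC 52).
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]|\x1b\].*?(?:\x07|\x1b\\)")

def sanitize_terminal_output(text: str) -> str:
    """Remove terminal escape sequences from untrusted LLM output."""
    return ANSI_ESCAPE.sub("", text)
```

CLI tools that print model output verbatim should run it through a filter like this (or display it in a context that never interprets escape codes at all).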
09

DeepSeek AI: From Prompt Injection To Account Takeover

security
Nov 29, 2024

A researcher discovered that DeepSeek-R1-Lite, a new AI reasoning model, is vulnerable to prompt injection (tricking an AI by hiding instructions in its input) combined with XSS (cross-site scripting, where malicious code runs in a user's browser). By uploading a specially crafted document with base64-encoded malicious code, an attacker could trick the AI into executing JavaScript that steals a user's session token (a credential stored in browser memory that proves who you are), leading to complete account takeover.

Embrace The Red
10

CVE-2024-32965: Lobe Chat is an open-source, AI chat framework. Versions of lobe-chat prior to 1.19.13 have an unauthorized SSRF vulnerability.

security
Nov 26, 2024

Lobe Chat, an open-source AI chat framework, has a vulnerability in versions before 1.19.13 that allows attackers to perform SSRF (server-side request forgery, where an attacker tricks a server into making unauthorized requests to other systems) without logging in. Attackers can exploit this to scan internal networks and steal sensitive information like API keys stored in authentication headers.

Fix: Upgrade to lobe-chat version 1.19.13 or later. According to the source, 'This issue has been addressed in release version 1.19.13 and all users are advised to upgrade.' There are no known workarounds for this vulnerability.

NVD/CVE Database
Critical This Week

critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code.

CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026

critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assistant…

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026

critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026

critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026