aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

[TOTAL_TRACKED]
3,710
[LAST_24H]
1
[LAST_7D]
1
Daily BriefingFriday, May 8, 2026
>

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271, CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).

>

ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

Latest Intel

page 309/371
VIEW ALL
01

CVE-2024-27565: A Server-Side Request Forgery (SSRF) in weixin.php of ChatGPT-wechat-personal commit a0857f6 allows attackers to force t

security
Mar 5, 2024

CVE-2024-27565 is a server-side request forgery (SSRF, a flaw that allows attackers to trick a server into making unwanted requests to other systems) vulnerability found in the weixin.php file of ChatGPT-wechat-personal at commit a0857f6. This vulnerability lets attackers force the application to make arbitrary requests on their behalf. The vulnerability has a CVSS 4.0 severity rating (a moderate score on a 0-10 scale measuring how serious a security flaw is).

>

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

>

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.

NVD/CVE Database
02

CVE-2024-28088: LangChain through 0.1.10 allows ../ directory traversal by an actor who is able to control the final part of the path pa

security
Mar 4, 2024

LangChain versions up to 0.1.10 have a path traversal vulnerability (a flaw where an attacker can use ../ sequences to access files outside the intended directory) that allows someone controlling part of a file path to load configurations from anywhere instead of just the intended GitHub repository, potentially exposing API keys or enabling remote code execution (running malicious commands on a system). This bug affects how the load_chain function handles file paths.

Fix: A patch is available in langchain-core version 0.1.29 and later. Update to this version or newer to fix the vulnerability.

NVD/CVE Database
03

Who Am I? Conditional Prompt Injection Attacks with Microsoft Copilot

securityresearch
Mar 3, 2024

Attackers can create conditional prompt injection attacks (tricking an AI by hiding malicious instructions in its input that activate only for specific users) against Microsoft Copilot by leveraging user identity information like names and job titles that the AI includes in its context. A researcher demonstrated this by sending an email with hidden instructions that made Copilot behave differently depending on which person opened it, showing that LLM applications become more vulnerable as attackers learn to target specific users rather than all users equally.

Embrace The Red
04

CVE-2024-2057: A vulnerability was found in LangChain langchain_community 0.0.26. It has been classified as critical. Affected is the f

security
Mar 1, 2024

A critical vulnerability was found in LangChain's langchain_community library version 0.0.26 in the TFIDFRetriever component (a tool that retrieves relevant documents for AI systems). The flaw allows server-side request forgery (SSRF, where an attacker tricks a server into making unwanted network requests on their behalf), and it can be exploited remotely.

Fix: Upgrading to version 0.0.27 addresses this issue.

NVD/CVE Database
05

AI Act Implementation: Timelines & Next steps

policy
Feb 28, 2024

The EU AI Act is a regulatory framework that requires companies to comply with rules for different types of AI systems on specific timelines, starting with prohibitions on the riskiest AI uses within 6 months and expanding to cover high-risk AI systems (such as those used in law enforcement, hiring, or education) by 24 months after the law takes effect. The article outlines key compliance deadlines, secondary laws the EU Commission might create to clarify the rules, and guidance documents to help organizations understand how to follow the AI Act.

EU AI Act Updates
06

CVE-2024-25723: ZenML Server in the ZenML machine learning package before 0.46.7 for Python allows remote privilege escalation because t

security
Feb 27, 2024

ZenML Server in the ZenML machine learning package before version 0.46.7 has a remote privilege escalation vulnerability (CVE-2024-25723), meaning an attacker can gain higher-level access to the system from a distance. The flaw exists in a REST API endpoint (a web-based interface for requests) that activates user accounts, because it only requires a valid username and new password to change account settings, without proper access controls checking who should be allowed to do this.

Fix: Update ZenML to version 0.46.7 or use one of the patched versions: 0.44.4, 0.43.1, or 0.42.2.

NVD/CVE Database
07

High-level summary of the AI Act

policy
Feb 27, 2024

The EU AI Act classifies AI systems by risk level, from prohibited (like social scoring systems that manipulate behavior) to minimal risk (unregulated). High-risk AI systems, such as those used in critical decisions affecting people's lives, face strict regulations requiring developers to provide documentation, conduct testing, and monitor for problems. General-purpose AI (large language models that can do many tasks) have lighter requirements unless they present systemic risk, in which case developers must test them against adversarial attacks (attempts to trick or break them) and report serious incidents.

EU AI Act Updates
08

CVE-2024-27444: langchain_experimental (aka LangChain Experimental) in LangChain before 0.1.8 allows an attacker to bypass the CVE-2023-

security
Feb 26, 2024

CVE-2024-27444 is a vulnerability in LangChain Experimental (a Python library for building AI applications) before version 0.1.8 that allows attackers to bypass a previous security fix and run arbitrary code (malicious commands they choose) by using Python's special attributes like __import__ and __globals__, which were not blocked by the pal_chain/base.py security checks.

Fix: Update to LangChain version 0.1.8 or later. A patch is available at https://github.com/langchain-ai/langchain/commit/de9a6cdf163ed00adaf2e559203ed0a9ca2f1de7.

NVD/CVE Database
09

CVE-2024-27133: Insufficient sanitization in MLflow leads to XSS when running a recipe that uses an untrusted dataset. This issue leads

security
Feb 23, 2024

MLflow, a machine learning platform, has a vulnerability where it doesn't properly clean user input from dataset tables, allowing XSS (cross-site scripting, where attackers inject malicious code into web pages). When someone runs a recipe using an untrusted dataset in Jupyter Notebook, this can lead to RCE (remote code execution, where an attacker can run commands on the user's computer).

Fix: A patch is available at https://github.com/mlflow/mlflow/pull/10893

NVD/CVE Database
10

CVE-2024-27132: Insufficient sanitization in MLflow leads to XSS when running an untrusted recipe. This issue leads to a client-side RC

security
Feb 23, 2024

MLflow has a vulnerability (CVE-2024-27132) where template variables are not properly sanitized, allowing XSS (cross-site scripting, where malicious code runs in a user's browser) when running an untrusted recipe in Jupyter Notebook. This can lead to client-side RCE (remote code execution, where an attacker can run commands on the user's computer) through insufficient input cleaning.

NVD/CVE Database
Prev1...307308309310311...371Next