aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. This site is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 5
Daily Briefing · Friday, May 8, 2026
>

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271, CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).

>

ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

>

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

>

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.

Critical This Week · 3 issues

high · GHSA-8g7g-hmwm-6rv2: n8n-mcp affected by path traversal, redirect-following SSRF, and telemetry payload exposure
GitHub Advisory Database · May 8, 2026

Latest Intel · page 32 of 371

01

Choco automates food distribution with AI agents

industry · Apr 26, 2026

Choco, an AI-powered food distribution platform serving over 100,000 buyers, uses OpenAI APIs to power AI agents that automate order processing from multiple input types (emails, texts, images, voice calls). OrderAgent and VoiceAgent convert unstructured customer inputs into structured ERP (enterprise resource planning, a system that manages business operations) orders by learning from each customer's ordering history, achieving up to a 50% reduction in manual work and error rates below 1–5%.

Fix: The source explicitly recommends three practices: (1) 'Start with evaluation from day one: Even a small ground-truth dataset (10–20 examples) enables teams to measure progress, validate improvements, and iterate with confidence.' (2) 'Invest in AI-native observability: Debugging AI systems requires more than traditional logs—capturing model inputs, outputs, and reasoning traces is essential to understand and improve performance.' (3) 'Set the right expectations early: Unlike deterministic software, LLMs are probabilistic. Educating teams and users on this difference is key to building trust and avoiding friction during adoption.'

OpenAI Blog
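The "evaluation from day one" recommendation needs very little machinery. A minimal Python sketch, where `extract_order` is a toy stand-in for whatever model-backed extraction step a real pipeline would call, and the two labeled examples stand in for a 10–20 example ground-truth set:

```python
# Toy stand-in for a model-backed extraction step: parse "<qty> x <item>".
def extract_order(text: str) -> dict:
    qty, _, item = text.partition(" x ")
    return {"item": item.strip().lower(), "qty": int(qty)}

# Tiny ground-truth dataset: (input text, expected structured order).
GROUND_TRUTH = [
    ("2 x Tomato Crate", {"item": "tomato crate", "qty": 2}),
    ("10 x olive oil", {"item": "olive oil", "qty": 10}),
]

def accuracy(fn, dataset) -> float:
    """Fraction of examples where the extractor matches the label exactly."""
    hits = sum(fn(text) == expected for text, expected in dataset)
    return hits / len(dataset)

print(accuracy(extract_order, GROUND_TRUTH))  # 1.0
```

Even a harness this small gives a number to re-check after every prompt or model change, which is the point of the recommendation.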
02

CVE-2026-7061: A weakness has been identified in Toowiredd chatgpt-mcp-server up to 0.1.0. Affected by this issue is some unknown funct

security
Apr 26, 2026

A vulnerability (CVE-2026-7061) was found in Toowiredd chatgpt-mcp-server version 0.1.0 that allows OS command injection (running unauthorized system commands on a server through malicious input) in the MCP/HTTP component. The flaw can be exploited remotely by attackers, and public exploit code is already available, but the developers have not yet responded to the security report.
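OS command injection of this kind is the classic shell-string-versus-argument-list mistake. A minimal Python sketch (illustrative only, not the affected project's code) of why the injection works and the standard defenses:

```python
import shlex
import subprocess

# Hostile input of the kind a command-injection flaw accepts.
user_input = "model.bin; rm -rf /"

# UNSAFE pattern: interpolating input into a shell string lets the ";"
# start a second command if this string were run with shell=True.
unsafe_cmd = f"ls {user_input}"  # never execute strings built like this

# Safe pattern: pass an argument list; the whole input stays one
# literal argument and shell metacharacters are inert.
result = subprocess.run(["echo", user_input], capture_output=True, text=True)
print(result.stdout.strip())  # model.bin; rm -rf /

# If a shell string is truly unavoidable, quote the input first.
quoted = f"echo {shlex.quote(user_input)}"  # echo 'model.bin; rm -rf /'
```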

NVD/CVE Database
03

Benchmarking the effectiveness of multi-agent LLMs in collaborative privacy threat modeling with LINDDUN GO

research · security
Apr 26, 2026

This research paper evaluates whether multiple AI agents working together can effectively help identify privacy threats in software systems using LINDDUN GO, a structured methodology for privacy threat modeling (a process of identifying ways a system could leak or misuse personal data). The study, published in July 2026, examines whether collaborative multi-agent LLM (large language model) systems can improve the quality and completeness of privacy threat identification compared to single AI agents or human analysis.
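One simple way multiple agents can improve completeness over a single run is to pool findings across independent runs and keep those that meet a quorum. The sketch below is illustrative only and not the paper's actual method; the finding labels are hypothetical:

```python
from collections import Counter

def pool_findings(runs: list[list[str]], quorum: int = 2) -> list[str]:
    """Keep privacy-threat findings reported by at least `quorum` agent runs."""
    votes = Counter(f for run in runs for f in set(run))
    return sorted(f for f, n in votes.items() if n >= quorum)

# Three independent "agent" runs over the same system description.
runs = [
    ["linkability:session-ids", "disclosure:logs"],
    ["disclosure:logs", "identifiability:ip"],
    ["disclosure:logs", "linkability:session-ids"],
]
print(pool_findings(runs))  # ['disclosure:logs', 'linkability:session-ids']
```

The quorum trades noise for completeness: single-run hallucinated threats are filtered out, while threats any two agents agree on survive.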

Elsevier Security Journals
04

Musk and Altman’s bitter feud over OpenAI to be laid bare in court

policy
Apr 26, 2026

Elon Musk is suing Sam Altman and OpenAI in court, claiming that Altman broke the company's original founding agreement. The lawsuit focuses on OpenAI's early years when it was started as a nonprofit, and the trial could influence the direction of AI development in the tech industry.

The Guardian Technology
05

CVE-2026-7020: A security flaw has been discovered in Ollama up to 0.20.2. This affects the function digestToPath of the file x/imagege

security
Apr 26, 2026

A security flaw called CVE-2026-7020 was found in Ollama versions up to 0.20.2 that allows path traversal (an attack where someone manipulates file paths to access files they shouldn't be able to reach) through the digestToPath function in the Tensor Model Transfer Handler component. An attacker can exploit this remotely, though it requires high complexity to perform, and the vulnerability details have been released publicly.
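The general defense against this bug class is to canonicalize the resolved path and verify it still lives under the intended base directory. A Python sketch of that check (illustrative only: Ollama is written in Go, and `digest_to_path` with its `BASE_DIR` is a hypothetical stand-in for the affected function):

```python
import os

BASE_DIR = "/var/lib/models"  # hypothetical model store

def digest_to_path(digest: str) -> str:
    """Resolve a digest to a file path, rejecting traversal attempts."""
    base = os.path.realpath(BASE_DIR)
    candidate = os.path.realpath(os.path.join(BASE_DIR, digest))
    # realpath collapses "../" sequences and symlinks; afterwards the
    # result must still sit under the base directory.
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"path traversal attempt: {digest!r}")
    return candidate

print(digest_to_path("sha256-abc123"))
try:
    digest_to_path("../../etc/passwd")
except ValueError as err:
    print("blocked:", err)
```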

NVD/CVE Database
06

GHSA-wg4g-395p-mqv3: n8n-MCP: Sensitive MCP tool-call arguments logged on authenticated requests in HTTP mode

security · privacy
Apr 25, 2026

n8n-mcp (a tool for connecting AI systems to external services) was logging sensitive information like passwords and API keys when running in HTTP mode (a way to communicate over the internet). When authenticated users made requests to call tools, their secret credentials were written to server logs before being hidden, which could expose them if logs were shared or accessed by unauthorized people. The issue only affected HTTP mode and required authentication, so it couldn't be exploited by random internet users.

Fix: Upgrade to n8n-mcp v2.47.13 or later using either `npx n8n-mcp@latest` (npm) or `docker pull ghcr.io/czlonkowski/n8n-mcp:latest` (Docker). The patch changes how tool arguments are logged by using a `summarizeToolCallArgs` function that records only the structure and size of data, never the actual secret values. As a temporary workaround if you cannot upgrade immediately: restrict HTTP port access through firewall or VPN, limit who can read server logs, or switch to stdio transport mode (`MCP_MODE=stdio`).
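The structure-and-size-only logging described in the fix can be sketched as follows. `summarize_args` is an illustrative reimplementation of the idea behind the patch's `summarizeToolCallArgs`, not the project's actual code:

```python
def summarize_args(args):
    """Record only the shape and size of tool-call arguments, never values."""
    if isinstance(args, dict):
        return {k: summarize_args(v) for k, v in args.items()}
    if isinstance(args, list):
        return f"list[{len(args)}]"
    if isinstance(args, str):
        return f"str[{len(args)}]"
    return type(args).__name__

# A log line built this way names the keys but leaks no secret values.
entry = summarize_args({"apiKey": "sk-secret", "ids": [1, 2, 3], "retry": True})
print(entry)  # {'apiKey': 'str[9]', 'ids': 'list[3]', 'retry': 'bool'}
```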

GitHub Advisory Database
07

GHSA-v4p8-mg3p-g94g: LiteLLM: Authenticated command execution via MCP stdio test endpoints

security
Apr 25, 2026

LiteLLM had a security flaw in two test endpoints (`POST /mcp-rest/test/connection` and `POST /mcp-rest/test/tools/list`) that allowed authenticated users to run arbitrary commands on the server. These endpoints accepted server configurations including command and arguments, and would execute them as subprocesses with the proxy's privileges, even for users with low-level permissions.

Fix: Fixed in version 1.83.7. Both test endpoints now require the `PROXY_ADMIN` role (a permission level for administrators only). As a temporary workaround, developers should block `POST /mcp-rest/test/connection` and `POST /mcp-rest/test/tools/list` at their reverse proxy or API gateway (the server that sits between users and the application to filter traffic).
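For deployments that cannot upgrade immediately, the blocking workaround can be sketched as a simple request filter. This is a generic WSGI-style illustration, not LiteLLM's own code; only the two paths come from the advisory:

```python
# The two endpoints the advisory says to block until upgrade.
BLOCKED = {
    "/mcp-rest/test/connection",
    "/mcp-rest/test/tools/list",
}

def is_blocked(method: str, path: str) -> bool:
    """True if the request targets one of the vulnerable test endpoints."""
    return method == "POST" and path in BLOCKED

def blocking_middleware(app):
    """Wrap a WSGI app so blocked requests get 403 before reaching it."""
    def wrapped(environ, start_response):
        if is_blocked(environ.get("REQUEST_METHOD", ""), environ.get("PATH_INFO", "")):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"endpoint disabled"]
        return app(environ, start_response)
    return wrapped

print(is_blocked("POST", "/mcp-rest/test/connection"))  # True
print(is_blocked("GET", "/health"))                     # False
```

The same deny-list belongs at whatever sits in front of the proxy (nginx, an API gateway), which is where the advisory recommends applying it.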

GitHub Advisory Database
08

AI talent war: Software industry is a new target as top executives jump ship to OpenAI

industry
Apr 25, 2026

Top software executives from companies like Salesforce, Snowflake, and Datadog are being recruited by AI companies OpenAI and Anthropic with large compensation packages, because these AI giants want their expertise in selling to enterprise customers (large organizations). This talent drain is part of a broader shift where AI companies are prioritizing business growth in the enterprise segment, which is more profitable, while traditional software companies are struggling with concerns that AI tools will disrupt their business models.

CNBC Technology
09

We tried out xAI's Grok chatbot while driving a Tesla in NYC. Here's what happened.

safety
Apr 25, 2026

Tesla and other automakers are integrating AI chatbots like Grok (xAI's conversational AI assistant) into vehicles to provide hands-free information access, but safety experts warn these tools create dangerous distractions for drivers. A Tesla owner demonstrated how engaging with Grok while driving—even with Tesla's partially automated driving system (FSD, or Full Self-Driving Supervised) active—caused him to lose attention to the road, raising concerns about driver distraction that isn't yet well understood.

CNBC Technology
10

Discord Sleuths Gained Unauthorized Access to Anthropic’s Mythos

security · privacy
Apr 25, 2026

A group of Discord users gained unauthorized access to Anthropic's Mythos Preview (a restricted AI model designed to find security vulnerabilities) by examining data from a breach of Mercor (an AI training startup) and making an educated guess about the model's online location based on Anthropic's known URL patterns. They exploited this access to build simple websites rather than conduct more harmful activities, potentially avoiding detection by Anthropic.

Wired (Security)
high · GHSA-cmrh-wvq6-wm9r: n8n-mcp webhook and API client paths has an authenticated SSRF
CVE-2026-44694 · GitHub Advisory Database · May 8, 2026

high · CVE-2026-41487: Langfuse is an open source large language model engineering platform. From version 3.68.0 to before version 3.167.0, the
CVE-2026-41487 · NVD/CVE Database · May 8, 2026