aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 71
Daily Briefing: Friday, May 8, 2026

- Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three flaws, two critical and one high severity, in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (run any commands the attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).
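The SQL-injection class behind CVE-2026-42208 can be illustrated with a minimal, self-contained sketch. This is not LiteLLM's actual code; the table, column, and key values are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keys (team TEXT, api_key TEXT)")
conn.execute("INSERT INTO keys VALUES ('red', 'sk-secret')")

def lookup_vulnerable(team: str):
    # UNSAFE: attacker-controlled input is spliced into the query string,
    # so team = "x' OR '1'='1" turns the WHERE clause into a tautology
    # and returns every stored credential.
    return conn.execute(
        f"SELECT api_key FROM keys WHERE team = '{team}'"
    ).fetchall()

def lookup_safe(team: str):
    # SAFE: the driver binds the value as data, never as SQL.
    return conn.execute(
        "SELECT api_key FROM keys WHERE team = ?", (team,)
    ).fetchall()

payload = "x' OR '1'='1"
print(lookup_vulnerable(payload))  # leaks all keys
print(lookup_safe(payload))        # matches nothing
```

Parameterized statements (the `?` placeholder) are the standard remediation for this bug class, independent of the specific fix LiteLLM shipped.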

- ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

Latest Intel

01

Shifting Budget Dynamics for Identity Security and AI Agents

policy, industry
Critical This Week (5 issues)

[critical] CVE-2026-42271: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.74.2 to before vers… (NVD/CVE Database, May 8, 2026)
- AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

- Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.

May 21, 2026

Enterprises are rapidly deploying AI agents (software systems that can act independently to complete tasks), and these agents need identity management (systems that verify who or what is accessing resources and what they're allowed to do). New research shows that budgeting for AI agent security differs significantly from how companies budget for traditional identity management projects.

Dark Reading
02

GHSA-8g7g-hmwm-6rv2: n8n-mcp affected by path traversal, redirect-following SSRF, and telemetry payload exposure

security
May 8, 2026

n8n-mcp versions before 2.50.1 had three security issues: unvalidated workflow IDs allowed attackers to bypass access controls and leak API keys, webhook URLs followed redirects to unintended hosts (SSRF, a type of attack where a server makes unwanted requests to other systems), and telemetry (usage data sent to developers) stored sensitive information like API keys without hiding it. The vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 8.3 and requires an authenticated attacker with access to the n8n API.

Fix: Upgrade to n8n-mcp version 2.50.1 or later. If upgrading is not immediately possible, the source provides these workarounds: for issues 1 and 2, restrict network access to the HTTP port through firewall rules or switch to stdio mode (a communication method that does not expose HTTP); for issue 3, set the environment variable `N8N_MCP_TELEMETRY_DISABLED=true` before starting the server, or run `npx n8n-mcp telemetry disable` once.
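The first issue (unvalidated workflow IDs) is an instance of a general pattern: identifiers taken from callers should be allowlist-validated before they reach file paths or API routes. A minimal sketch, assuming a conventional alphanumeric ID shape (the real n8n workflow ID format may differ):

```python
import re

# Assumed ID shape for illustration; confirm the actual n8n ID format before reuse.
WORKFLOW_ID = re.compile(r"[A-Za-z0-9_-]{1,64}")

def validate_workflow_id(workflow_id: str) -> str:
    # fullmatch rejects traversal payloads such as "../credentials"
    # and anything smuggling separators like "/", "?", or NUL bytes.
    if not WORKFLOW_ID.fullmatch(workflow_id):
        raise ValueError(f"invalid workflow id: {workflow_id!r}")
    return workflow_id
```

Validating against a strict allowlist is safer than trying to strip out known-bad sequences, which attackers routinely bypass with encoding tricks.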

GitHub Advisory Database
03

GHSA-cmrh-wvq6-wm9r: n8n-mcp webhook and API client paths have an authenticated SSRF

security
May 8, 2026

An authenticated SSRF (server-side request forgery, where an attacker tricks a server into making unwanted requests to internal services) vulnerability affects n8n-mcp's webhook and API client features. An attacker with access to the system can make the n8n-mcp host send HTTP requests to internal services or cloud credential endpoints that should be blocked, allowing them to steal credentials or enumerate internal systems.

Fix: Fixed in n8n-mcp@2.50.2. If you cannot upgrade immediately, the source suggests three workarounds: (1) Restrict network egress from the n8n-mcp host using a firewall or cloud security group to deny cloud metadata IPs (169.254.169.254, 169.254.170.2, 100.100.100.200, 192.0.0.192, and GCP metadata.google.internal) and RFC1918 networks; (2) Run in stdio mode instead of HTTP if multi-tenant mode is not needed; (3) Disable workflow management tools via `DISABLED_TOOLS=n8n_trigger_webhook_workflow,n8n_create_workflow,n8n_test_workflow` if not needed. Additionally, if N8N_API_URL points to localhost or a private network address, set `WEBHOOK_SECURITY_MODE=moderate` (allows localhost, blocks private networks and cloud metadata) or `WEBHOOK_SECURITY_MODE=permissive` (allows private networks too, only safe on trusted networks).
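An application-level counterpart to the firewall workaround above (a defense-in-depth sketch, not something the advisory prescribes) is to resolve each outbound hostname and refuse the metadata and private ranges it lists. Note this check alone does not defeat DNS rebinding, where a name re-resolves to a different address between check and use:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Metadata endpoints named in the advisory, plus RFC1918, loopback, link-local.
BLOCKED_NETWORKS = [
    ipaddress.ip_network(n)
    for n in (
        "169.254.0.0/16",      # link-local, incl. 169.254.169.254 / 169.254.170.2
        "100.100.100.200/32",  # Alibaba Cloud metadata
        "192.0.0.192/32",      # Oracle Cloud metadata
        "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",  # RFC1918
        "127.0.0.0/8",         # loopback
    )
]

def assert_egress_allowed(url: str) -> None:
    host = urlparse(url).hostname or ""
    if host == "metadata.google.internal":
        raise PermissionError("blocked metadata hostname")
    # Resolve before checking so DNS names cannot hide a private IP.
    # gethostbyname is IPv4-only; a production check must also cover IPv6.
    addr = ipaddress.ip_address(socket.gethostbyname(host))
    for net in BLOCKED_NETWORKS:
        if addr in net:
            raise PermissionError(f"blocked egress to {addr} ({net})")
```

Network-level egress filtering, as the advisory recommends, remains the stronger control; an in-process check like this only catches requests that pass through it.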

GitHub Advisory Database
04

Anthropic's Mythos set off a cybersecurity 'hysteria.' Experts say the threat was already here

security, industry
May 8, 2026

Anthropic released Mythos, an AI model that can find thousands of previously unknown software vulnerabilities (flaws in code that haven't been patched yet), which sparked concern among banks, governments, and tech companies about a new wave of AI-enabled cyberattacks. However, cybersecurity experts say this vulnerability-finding capability already exists in older, publicly available AI models from Anthropic and OpenAI, and can be achieved through orchestration (coordinating multiple tools or models to work together on a task).

CNBC Technology
05

CVE-2026-41487: Langfuse is an open source large language model engineering platform. From version 3.68.0 to before version 3.167.0, the…

security
May 8, 2026

Langfuse, an open source platform for managing large language models, had a role-based access control flaw (a security issue where user permissions weren't properly enforced) in versions 3.68.0 through 3.166.9 that allowed low-privileged project members to redirect API requests to attacker-controlled servers, potentially exposing sensitive API keys. The vulnerability required the attacker to already have basic access to a project as a member.

Fix: Update to Langfuse version 3.167.0 or later, where the issue has been patched.

NVD/CVE Database
06

Everybody wants to rule the AI world

industry
May 8, 2026

This article discusses the chaotic leadership transition at OpenAI in 2024, when Sam Altman was removed as CEO under unclear circumstances involving video calls and informal communications between current and former leadership. The situation's complexity is now being revealed through an ongoing legal dispute between Elon Musk and Altman.

The Verge (AI)
07

Claude in Chrome is taking orders from the wrong extensions

security
May 8, 2026

Claude in Chrome, Anthropic's browser extension, has a bug called ClaudeBleed that allows malicious extensions to hijack it and trick it into performing unauthorized actions like stealing files, sending emails, or stealing code from private repositories. The vulnerability exists because the extension trusts any script from its origin (claude.ai) without checking who is actually running it, breaking Chrome's normal security model. Anthropic released a partial fix in version 1.0.70 on May 6, but researchers found the vulnerability can still be exploited by switching the extension to privileged mode.

Fix: Anthropic released version 1.0.70 on May 6 with added security checks that prevent extensions from executing remote commands in standard mode. The company also stated that 'a fix that removes the affected message handler has been merged and will ship in an upcoming extension release,' though the source notes this promised fix did not fully materialize in version 1.0.70.

CSO Online
08

The Tech Download: Meta, Google enter AI agent race as ‘agentic wars’ heat up

industry, safety
May 8, 2026

Major tech companies like Meta and Google are racing to develop AI agents (AI tools that can perform tasks for users rather than just answer questions), following the viral success of OpenClaw earlier this year. While AI agents promise major business benefits through increased user engagement and revenue opportunities, significant security and governance challenges remain unresolved, particularly the risk of agents "doing the wrong thing" rather than just saying the wrong thing.

CNBC Technology
09

Your CTEM program is probably ignoring MCP. Here’s how to fix it

security, policy
May 8, 2026

Model Context Protocol (MCP, a plugin system that lets AI agents connect to external tools) has become a major security blind spot because organizations aren't scanning for or monitoring MCP risks, leaving them vulnerable to attacks that exploit supply chain vulnerabilities, exposed credentials, and malicious AI tool installations. The article highlights how attackers can compromise widely-trusted MCP packages (like the postmark-mcp npm package that exfiltrated emails from 300 organizations) and how developers often hardcode sensitive credentials into AI configurations, making MCP a vehicle for old attack types (like supply chain attacks and credential theft) to cause new damage.
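The hardcoded-credential problem the article describes is straightforward to screen for. A rough sketch that scans an MCP server config for credential-like environment values stored as literals; the `mcpServers` layout follows a common client convention but is an assumption here, and the server name and sample token are invented:

```python
import json
import re

# Hypothetical MCP client config; real layouts vary by client.
SAMPLE_CONFIG = """
{
  "mcpServers": {
    "postmark": {
      "command": "npx",
      "args": ["postmark-mcp"],
      "env": {"POSTMARK_SERVER_TOKEN": "pm-live-1234567890abcdef"}
    }
  }
}
"""

SECRET_KEY = re.compile(r"token|secret|key|password", re.IGNORECASE)

def find_hardcoded_secrets(config_text: str) -> list:
    """Flag env entries whose names look credential-like and whose
    values are literals rather than ${VAR} references."""
    findings = []
    servers = json.loads(config_text).get("mcpServers", {})
    for name, spec in servers.items():
        for env_name, value in spec.get("env", {}).items():
            if SECRET_KEY.search(env_name) and not value.startswith("${"):
                findings.append((name, env_name))
    return findings
```

A check like this belongs in pre-commit hooks or CI alongside a general secret scanner; it flags the config pattern, while moving the values into a secrets manager is the actual fix.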

CSO Online
10

Pen tests show AI security flaws far more severe than legacy software bugs

security, research
May 8, 2026

Penetration tests (security checks where experts try to break into systems) show that AI and large language model (LLM, advanced AI systems trained on huge amounts of text) systems have significantly more high-risk security flaws than traditional software, with 32% of AI findings rated high-risk compared to 13% for legacy systems. LLM vulnerabilities are also fixed less often, with only 38% of high-risk issues resolved, and experts attribute this to AI systems being deployed quickly without mature security controls, newer attack surfaces like prompt injection (tricking an AI by hiding instructions in its input), and unclear responsibility for fixing problems across teams.

CSO Online
[critical] CVE-2026-42203: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.80.5 to before vers… (NVD/CVE Database, May 8, 2026)

[critical] Gemini CLI Vulnerability Could Have Led to Code Execution, Supply Chain Attack (SecurityWeek, May 7, 2026)

[critical] GHSA-9h64-2846-7x7f: Axonflow fixed bugs by implementing multi-tenant isolation and access-control hardening (GitHub Advisory Database, May 6, 2026)

[critical] GHSA-gmvf-9v4p-v8jc: fast-jwt: JWT auth bypass due to empty HMAC secret accepted by async key resolver (CVE-2026-44351, GitHub Advisory Database, May 6, 2026)