aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

[TOTAL_TRACKED]
3,710
[LAST_24H]
1
[LAST_7D]
4
Daily BriefingFriday, May 8, 2026
>

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271, CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).

>

ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

Latest Intel

page 34/371
VIEW ALL
01

GHSA-rp7v-4384-hfrp: k8sGPT has Prompt Injection through its k8sGPT-Operator

security
Apr 24, 2026

This item describes a prompt injection vulnerability (tricking an AI by hiding malicious instructions in its input) in k8sGPT-Operator, a tool that helps manage Kubernetes clusters (container orchestration systems). The content explains the framework for measuring vulnerability severity through metrics like attack complexity and potential impact, but does not provide specific details about the vulnerability itself or how it works.

Critical This Week2 issues
high

GHSA-8g7g-hmwm-6rv2: n8n-mcp affected by path traversal, redirect-following SSRF, and telemetry payload exposure

GitHub Advisory DatabaseMay 8, 2026
May 8, 2026
>

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

>

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.

GitHub Advisory Database
02

GHSA-q5hj-mxqh-vv77: Claude Code: Trust Dialog Bypass via Git Worktree Spoofing Allows Arbitrary Code Execution

security
Apr 24, 2026

Claude Code had a security flaw where it checked a git worktree (a Git feature allowing multiple branch checkouts in separate directories) `commondir` file to decide if a folder was trustworthy, but didn't verify the file's contents. An attacker could create a malicious repository with a fake `commondir` file pointing to a folder the victim had previously trusted, tricking Claude Code into skipping its safety dialog and running malicious code from `.claude/settings.json` (a configuration file). This attack required the victim to clone the malicious repository and open it in Claude Code, and the attacker had to know a path the victim had already marked as safe.

Fix: Users on standard Claude Code auto-update have received this fix already. Users performing manual updates are advised to update to the latest version.

GitHub Advisory Database
03

GHSA-r75f-5x8p-qvmc: LiteLLM has SQL Injection in Proxy API key verification

security
Apr 24, 2026

LiteLLM's proxy API key verification has a SQL injection vulnerability (a type of attack where an attacker inserts malicious database commands into input fields). An unauthenticated attacker could send a specially crafted authorization header to exploit this flaw and potentially read or modify the proxy's database, gaining unauthorized access to stored credentials.

Fix: Fixed in version 1.83.7. The caller-supplied value is now always passed to the database as a separate parameter. Upgrade to 1.83.7 or later. Alternatively, if upgrading is not immediately possible, set `disable_error_logs: true` under `general_settings` to remove the path through which unauthenticated input reaches the vulnerable query.

GitHub Advisory Database
04

GHSA-mw35-8rx3-xf9r: Ray: Remote Code Execution via Parquet Arrow Extension Type Deserialization

security
Apr 24, 2026

Ray Data registers custom Arrow extension types (special data format handlers) globally in PyArrow, and when PyArrow reads a Parquet file (a data storage format) containing these types, it automatically deserializes metadata bytes using cloudpickle.loads(), which can execute arbitrary code. This vulnerability was reintroduced in July 2025 after a similar issue was supposedly fixed in May 2024, allowing attackers to run malicious code just by having Ray read a specially crafted Parquet file.

Hugging Face Security Advisories
05

GHSA-xqmj-j6mv-4862: LiteLLM: Server-Side Template Injection in /prompts/test endpoint

security
Apr 24, 2026

LiteLLM Proxy had a server-side template injection vulnerability (a security flaw where user input is processed as code rather than plain text) in its `/prompts/test` endpoint that allowed authenticated users to run arbitrary code within the proxy process and potentially access sensitive information like API keys or database credentials. The vulnerability affects any deployment running an affected version of LiteLLM Proxy.

Fix: Upgrade to version `1.83.7-stable` or later, which fixes the issue by switching the prompt template renderer to a sandboxed environment (a restricted area where code runs with limited permissions) that blocks the attack. If upgrading is not immediately possible, block the `POST /prompts/test` endpoint at your reverse proxy or API gateway, and review and rotate API keys that should not have access to prompt management routes.

GitHub Advisory Database
06

Glasswing Secured the Code. The Rest of Your Stack Is Still on You

security
Apr 24, 2026

Organizations often have forgotten software integrations, unauthorized IT systems (shadow IT), and now hidden AI tools and agents scattered across their networks that they don't fully track or manage. Attackers can exploit these overlooked systems without needing advanced AI models, making security harder when companies don't know what's running in their own infrastructure.

Dark Reading
07

Why Cybersecurity Must Rethink Defense in the Age of Autonomous Agents

securitypolicy
Apr 24, 2026

Agentic AI (artificial intelligence systems that can make decisions and take actions without human intervention) is becoming a major cybersecurity concern because the same capabilities that help defenders also empower attackers to launch autonomous, adaptive, and large-scale attacks. The industry is responding by treating AI systems as identities (entities with credentials and access permissions) rather than separate tools, and using identity threat detection to monitor their behavior for suspicious activity.

Fix: The source recommends treating agentic AI as an identity and using identity threat detection and risk mitigation solutions as the main defense. This approach combines adaptive verification, behavioral analytics, device intelligence, and risk scoring in a unified platform to enable behavioral visibility, risk-based controls, unified policy enforcement across human and machine identities, and lifecycle management to prevent orphaned or unmanaged agents.

SecurityWeek
08

The Download: supercharged scams and studying AI healthcare

securityindustry
Apr 24, 2026

Cybercriminals are increasingly using LLMs (large language models, AI systems trained on massive amounts of text) to launch faster and cheaper attacks, including phishing emails (deceptive messages designed to steal information), deepfakes (AI-generated fake videos or images), and automated vulnerability scans (tools that search for security weaknesses). Meanwhile, AI tools are being deployed in healthcare for tasks like note-taking, reviewing patient records, and interpreting medical images, but researchers still don't know whether using these tools actually leads to better health outcomes for patients.

MIT Technology Review
09

Elon Musk and Sam Altman’s court showdown will dish the dirt

policy
Apr 24, 2026

Elon Musk, who cofounded OpenAI but left after not becoming CEO, is suing the company and Sam Altman in a trial starting April 27th in Oakland, California. The lawsuit centers on claims that OpenAI committed fraud, though it also involves broader allegations of breach of contract and unfair business practices. This legal case is primarily about the conflict between Musk and Altman over control of the AI company.

The Verge (AI)
10

Bridging the AI Agent Authority Gap: Continuous Observability as the Decision Engine

securitysafety
Apr 24, 2026

AI agents create a security challenge called the 'Authority Gap' because they inherit permissions from the humans and systems that activate them, rather than having their own independent authority. The article argues that enterprises cannot safely govern AI agents unless they first reduce 'identity dark matter' (hidden credentials and unmanaged permissions scattered across systems) in their traditional users and software, and then use continuous observability (real-time monitoring of who is doing what) to dynamically control what authority agents receive based on who is delegating to them and the context of their actions.

The Hacker News
Prev1...3233343536...371Next
high

GHSA-cmrh-wvq6-wm9r: n8n-mcp webhook and API client paths has an authenticated SSRF

CVE-2026-44694GitHub Advisory DatabaseMay 8, 2026
May 8, 2026