aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

[TOTAL_TRACKED]
3,710
[LAST_24H]
1
[LAST_7D]
5
Daily BriefingFriday, May 8, 2026
>

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271, CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).

>

ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

Latest Intel

page 31/371
VIEW ALL
01

Microsoft patched an ‘agent-only’ role that was not

security
Apr 27, 2026

Microsoft's 'Agent ID Administrator' role, designed to let AI agents have controlled identities in Entra ID (Microsoft's identity management system), had a security flaw that let users take ownership of unrelated service principals (the tenant-specific identities that applications use to authenticate and access resources). This meant attackers could gain the same privileges as more powerful administrator roles and potentially take over the entire tenant (organization's cloud environment).

Critical This Week3 issues
high

GHSA-8g7g-hmwm-6rv2: n8n-mcp affected by path traversal, redirect-following SSRF, and telemetry payload exposure

GitHub Advisory DatabaseMay 8, 2026
May 8, 2026
>

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

>

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.

Fix: Microsoft patched the issue by blocking the Agent ID Administrator role from modifying non-agent service principals. The fix was fully rolled out by April 9, 2026, across all cloud environments.

CSO Online
02

The Download: DeepSeek’s latest AI breakthrough, and the race to build world models

industryresearch
Apr 27, 2026

DeepSeek released V4, a new AI model that can process longer text more efficiently and matches the performance of leading competitors from OpenAI, Anthropic, and Google while remaining open source. Researchers are increasingly focused on developing world models (AI systems that understand and can interact with the physical world, not just digital tasks) to overcome limitations of current language models and enable advances in robotics and physical tasks like laundry folding or navigation.

MIT Technology Review
03

Malicious AI Prompt Injection Attacks Increasing, but Sophistication Still Low: Google

securityresearch
Apr 27, 2026

Google researchers found that indirect prompt injection attacks (hidden traps where malicious instructions in external data trick AI systems into bypassing their safety rules) on websites are increasing, with a 32% rise between November 2025 and February 2026, but current attacks remain relatively unsophisticated. The attacks they discovered fell into two categories: exfiltration attempts that try to steal data like IP addresses and credentials, and destruction attempts that aim to delete files, though neither showed advanced techniques. Researchers warn that while today's attacks are low in sophistication, the upward trend suggests the threat will soon grow in both scale and complexity.

SecurityWeek
04

Mythos Changed the Math on Vulnerability Discovery. Most Teams Aren't Ready for the Remediation Side

securityindustry
Apr 27, 2026

Anthropic's Claude Mythos is an AI system that can discover vulnerabilities much faster than human teams, but organizations are unprepared for the remediation (fixing) side of the process. The real problem isn't finding vulnerabilities quickly, it's that most teams lack the infrastructure to triage, prioritize, and verify fixes once they're discovered, so faster discovery just creates a growing backlog of unfixed critical issues.

The Hacker News
05

AI is reshaping DevSecOps to bring security closer to the code

securityindustry
Apr 27, 2026

AI is transforming DevSecOps (the practice of integrating security into software development processes) by embedding security checks earlier in coding and automating vulnerability detection and fixes. The shift moves security from happening after code is written to happening during code generation itself, with AI tools providing secure coding guidance, scanning for vulnerabilities using reasoning rather than fixed rules, and suggesting automated fixes integrated directly into developer workflows.

CSO Online
06

The ‘manager of agents’: How AI evolves the SOC analyst role

industrysafety
Apr 27, 2026

Rather than eliminating SOC analyst jobs, agentic AI (AI systems that can independently execute tasks) is transforming entry-level analysts from performing repetitive investigative work into 'managers of agents' who oversee AI systems and make decisions based on their findings. The shift moves analysts from manually gathering evidence across multiple systems to reviewing AI-generated investigations and validating conclusions, allowing them to handle more alerts at a higher level of judgment.

CSO Online
07

Elon Musk and Sam Altman face off in court over OpenAI’s founding mission

policy
Apr 27, 2026

Elon Musk is suing Sam Altman and OpenAI, claiming they violated their founding agreement by converting OpenAI from a non-profit (an organization that doesn't aim to make money for owners) to a for-profit business. The lawsuit alleges fraud and breach of contract, with the trial beginning in Oakland, California, and expected to last two to three weeks.

The Guardian Technology
08

Announcing our partnership with the Republic of Korea

industrypolicy
Apr 27, 2026

Google DeepMind announced a partnership with South Korea's Ministry of Science and ICT to advance AI research and development in the country. The collaboration includes establishing an AI Campus in Seoul where Korean researchers can access Google's advanced AI models for breakthroughs in life sciences, weather, climate, and energy, while also supporting talent development through internships and scholarships.

DeepMind Safety Research
09

SBOMs into Agentic AIBOMs: Schema Extensions, Agentic Orchestration and Reproducibility Evaluation

research
Apr 27, 2026

This academic paper explores how Software Bill of Materials (SBOMs, detailed lists of all software components used in a project) can be extended to cover agentic AI systems (AI systems that can independently make decisions and take actions). The paper discusses schema extensions, how to organize and orchestrate these agentic components, and methods to evaluate whether AI systems produce reproducible results.

ACM Digital Library (TOPS, DTRAP, CSUR)
10

The next phase of the Microsoft OpenAI partnership

industry
Apr 27, 2026

Microsoft and OpenAI amended their partnership agreement to clarify their long-term relationship and how they will work together on AI development. Key changes include OpenAI gaining freedom to sell products through any cloud provider (not just Microsoft's Azure), Microsoft receiving a non-exclusive license to OpenAI's technology through 2032, and changes to how the companies share revenue. The amendment aims to give both companies flexibility while maintaining their collaborative work on building large-scale AI systems.

OpenAI Blog
Prev1...2930313233...371Next
high

GHSA-cmrh-wvq6-wm9r: n8n-mcp webhook and API client paths has an authenticated SSRF

CVE-2026-44694GitHub Advisory DatabaseMay 8, 2026
May 8, 2026
high

CVE-2026-41487: Langfuse is an open source large language model engineering platform. From version 3.68.0 to before version 3.167.0, the

CVE-2026-41487NVD/CVE DatabaseMay 8, 2026
May 8, 2026