aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an information systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 5
Daily Briefing · Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three flaws (two critical, one high severity) in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (run any commands they want) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).
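For quick triage, the affected range above can be checked against a deployed version. A minimal sketch, assuming only the version bounds stated in the advisory summary (the helper names are illustrative, not part of any tool):

```python
# Check whether a LiteLLM version string falls inside the affected range
# (1.74.2 through 1.83.6) for CVE-2026-42271, CVE-2026-42203, and
# CVE-2026-42208. Assumes plain dotted numeric versions.

def parse_version(v: str) -> tuple:
    """Split '1.74.2' into a comparable tuple (1, 74, 2)."""
    return tuple(int(part) for part in v.split("."))

AFFECTED_LOW = parse_version("1.74.2")
AFFECTED_HIGH = parse_version("1.83.6")

def is_affected(version: str) -> bool:
    return AFFECTED_LOW <= parse_version(version) <= AFFECTED_HIGH

print(is_affected("1.80.0"))  # True: inside the affected range
print(is_affected("1.84.0"))  # False: outside the affected range
```

Tuple comparison avoids the trap of comparing version strings lexically ("1.9" would otherwise sort above "1.74").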


ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

Latest Intel

01

Google employees ask Sundar Pichai to say no to classified military AI use

policy · safety
Critical This Week: 3 issues

High · GHSA-8g7g-hmwm-6rv2: n8n-mcp affected by path traversal, redirect-following SSRF, and telemetry payload exposure (GitHub Advisory Database, May 8, 2026)

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromise at scale.
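The hardcoded-credential risk called out in the MCP item above is easy to surface with even a naive scan. A minimal sketch (not from the article; the key patterns and sample config are illustrative assumptions, and real secret scanners use far richer rule sets):

```python
# Naive scan for hardcoded credentials in AI/MCP configuration text.
# The two patterns below are simplified examples of common key shapes.
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # common "sk-" API-key shape
    re.compile(r"""(?i)(api[_-]?key|token)["']?\s*[:=]\s*["'][^"']{12,}["']"""),
]

def find_secrets(text: str) -> list[str]:
    """Return every substring that matches a known secret pattern."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

# Hypothetical MCP server config with an embedded key.
config = '{"mcpServers": {"mail": {"env": {"API_KEY": "sk-abcdefghijklmnopqrstu"}}}}'
for hit in find_secrets(config):
    print(hit)
```

The real fix is keeping credentials out of configs entirely (environment injection or a secrets manager); a scan like this only catches what has already leaked into files.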

Apr 27, 2026

Over 600 Google employees, including many from DeepMind (Google's AI research lab), signed a letter asking CEO Sundar Pichai to prevent the Pentagon from using Google's AI models for classified purposes (secret military projects). The employees argue that the only way to ensure Google isn't associated with potential harms from such uses is to reject these classified projects entirely, since otherwise they could happen without employee knowledge or oversight.

The Verge (AI)
02

CVE-2026-7141: A vulnerability was found in vllm up to 0.19.0. The affected element is the function has_mamba_layers of the file vllm/v

security
Apr 27, 2026

A vulnerability was found in vllm (a language model serving framework) up to version 0.19.0 in the has_mamba_layers function, resulting in use of an uninitialized resource (memory that hasn't been set to a known value before use). An attacker can trigger the flaw remotely, though the attack is difficult to execute and requires high complexity.

Fix: apply patch 1ad67864c0c20f167929e64c875f5c28e1aad9fd.

NVD/CVE Database
03

OpenAI shakes up partnership with Microsoft, capping revenue share payments

industry
Apr 27, 2026

OpenAI and Microsoft announced a revised partnership agreement that allows OpenAI to cap its revenue share payments to Microsoft and serve customers through any cloud provider, not just Microsoft Azure. Previously, OpenAI was restricted to primarily using Microsoft's cloud services, but the new deal lets OpenAI work with competitors like Amazon and Google while maintaining Microsoft as its primary provider through 2030.

CNBC Technology
04

This bank CEO let his AI clone handle an earnings call — now he's signing an OpenAI deal

industry
Apr 27, 2026

Customers Bank CEO Sam Sidhu revealed that an AI clone (a digital voice generated to sound like him) delivered his prepared remarks during an earnings call, then announced a partnership with OpenAI to automate banking processes like loan approvals and account openings. The bank plans to deploy AI agents (software that can make decisions and take actions with minimal human input) across lending, deposits, and payments over the next 6-12 months, with goals including reducing loan processing time from 30-45 days to 7 days and account opening time to under 20 minutes.

CNBC Technology
05

Microsoft and OpenAI’s famed AGI agreement is dead

policy
Apr 27, 2026

Microsoft and OpenAI have removed a clause from their partnership agreement that previously governed what would happen if AGI (artificial general intelligence, an AI system that can do any intellectual task a human can do) was developed. Under the new terms, Microsoft remains OpenAI's primary cloud partner with first access to new products, but OpenAI now has freedom to use other cloud providers instead of being locked into Microsoft's Azure platform.

The Verge (AI)
06

Elon Musk and Sam Altman’s court battle over the future of OpenAI

policy
Apr 27, 2026

Elon Musk, a cofounder of OpenAI, is suing the company and its leaders Sam Altman and Greg Brockman, claiming they abandoned OpenAI's original mission to develop AI for humanity's benefit and shifted focus to profit instead. OpenAI counters that the lawsuit is a baseless attempt by Musk to harm a competitor to his own AI ventures. Musk is seeking the removal of Altman and Brockman, an end to OpenAI's nonprofit status, and up to $150 billion in damages.

The Verge (AI)
07

OpenAI available at FedRAMP Moderate

policy
Apr 27, 2026

OpenAI has received FedRAMP 20x Moderate authorization (a security certification that allows U.S. government agencies to use cloud services), making ChatGPT Enterprise and the API Platform available for federal use. This certification was achieved through a faster authorization process that emphasizes cloud-native security evidence and automated validation, allowing government agencies to access advanced AI capabilities like GPT-5.5 while meeting federal security and governance requirements.

OpenAI Blog
08

Qualcomm up 7% on report it’s partnering with OpenAI on smartphone AI chip

industry
Apr 27, 2026

Qualcomm is reportedly partnering with OpenAI and MediaTek to develop custom smartphone chips, with mass production expected in 2028. According to analyst Ming-Chi Kuo, OpenAI believes controlling both the operating system (the software that runs a device) and hardware will let it deliver comprehensive AI agent services (AI systems that can perform tasks autonomously) that use real-time smartphone data to improve performance.

CNBC Technology
09

Deepfake Voice Attacks are Outpacing Defenses: What Security Leaders Should Know

security · safety
Apr 27, 2026

Deepfake voice and video attacks (AI-generated replicas of real people) are becoming increasingly common and costly, with tools that require only three seconds of audio and cost almost nothing to create. Attackers target finance employees and IT staff by impersonating executives on calls or video meetings to authorize large money transfers or credential changes, and these attacks bypass traditional security tools because they rely on tricking people rather than exploiting software vulnerabilities. Organizations that have successfully stopped these attacks all used the same defense: training employees to pause and verify requests before acting on them.

Fix: per the source, "the organizations that have stopped these attacks all found the same answer: train your people to pause and verify before they act." No specific training program, tool, or technical mitigation is detailed.

BleepingComputer
10

Parsing Agentic Offensive Security's Existential Threat

safety · security
Apr 27, 2026

Observers worry that advanced frontier LLMs (large language models, AI systems trained on massive amounts of text) like Claude Mythos and GPT-5.5 could be misused to mount serious cyberattacks. Security researcher Ari Herbert-Voss suggests, however, that the situation also presents opportunities.

Dark Reading
Critical This Week (continued)

High · GHSA-cmrh-wvq6-wm9r: n8n-mcp webhook and API client paths have an authenticated SSRF (CVE-2026-44694; GitHub Advisory Database, May 8, 2026)

High · CVE-2026-41487: Langfuse is an open source large language model engineering platform. From version 3.68.0 to before version 3.167.0, the … (NVD/CVE Database, May 8, 2026)
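The redirect-following SSRF pattern in the n8n-mcp advisories above is worth unpacking: a fetcher that validates only the initial URL can still be redirected to an internal address. A minimal sketch of the per-hop host check such a fetcher needs (illustrative, not n8n-mcp's code; the addresses are examples):

```python
# Decide whether a URL's host resolves to an internal address. To resist
# redirect-based SSRF, this check must be repeated for every redirect hop,
# not just the first URL.
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal(url: str) -> bool:
    host = urlparse(url).hostname
    if host is None:
        return True  # fail closed on unparseable URLs
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return True  # fail closed on unresolvable hosts
    return addr.is_private or addr.is_loopback or addr.is_link_local

print(is_internal("http://127.0.0.1/admin"))           # True: loopback
print(is_internal("http://169.254.169.254/metadata"))  # True: link-local
```

A safer HTTP client disables automatic redirects and follows them in a loop, re-running this check on each Location header before the next request.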