aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

[TOTAL_TRACKED]
3,710
[LAST_24H]
1
[LAST_7D]
7
Daily BriefingFriday, May 8, 2026
>

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271, CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).

>

ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

Latest Intel

page 25/371
VIEW ALL
01

Webinar: How to Automate Exposure Validation to Match the Speed of AI Attacks

security
Apr 29, 2026

Threat actors are now using custom AI systems to automate cyberattacks, such as mapping Active Directory (a system that manages user accounts and permissions in networks) and stealing admin credentials within minutes, moving much faster than traditional security teams can respond. Traditional defense workflows involve multiple teams working in silos (separate, disconnected groups) with slow handoffs between threat intelligence, red team testing (simulated attacks to find weaknesses), and blue team patching (fixing vulnerabilities), creating dangerous delays. The webinar promotes "Autonomous Exposure Validation" as a new defensive approach to speed up security responses and eliminate these organizational bottlenecks.

Critical This Week4 issues
high

GHSA-8g7g-hmwm-6rv2: n8n-mcp affected by path traversal, redirect-following SSRF, and telemetry payload exposure

GitHub Advisory DatabaseMay 8, 2026
May 8, 2026
>

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

>

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.

The Hacker News
02

OpenAI looms over earnings from tech hyperscalers

industry
Apr 29, 2026

OpenAI, a private company valued at over $850 billion, has become a major influence on tech earnings this week as four hyperscalers (Amazon, Alphabet, Meta, and Microsoft, the largest computing companies) report quarterly results. After a Wall Street Journal report suggested OpenAI missed revenue and user growth targets and may struggle to afford its data center expansion, investors are closely watching how this affects the companies that have invested billions in OpenAI or depend on its technology.

CNBC Technology
03

Claude Mythos Has Found 271 Zero-Days in Firefox

securityindustry
Apr 29, 2026

Firefox discovered 271 zero-day vulnerabilities (previously unknown security flaws) using Claude Mythos Preview, an advanced AI model from Anthropic, with fixes included in Firefox 150. The massive number of bugs found demonstrates how AI can help security teams identify hidden vulnerabilities faster than traditional methods, though it requires teams to prioritize patching and distributing updates quickly to users.

Fix: Firefox 150 includes fixes for the 271 vulnerabilities identified during the evaluation with Claude Mythos Preview. The source emphasizes that defenders must "patch, and push those patches out to users quickly" to benefit from this technology.

Schneier on Security
04

GitHub rushed to fix a critical vulnerability in less than six hours

security
Apr 29, 2026

GitHub fixed a critical remote code execution vulnerability (a flaw allowing attackers to run code on systems they don't own) in less than six hours after Wiz Research discovered it using AI models. The vulnerability could have let attackers access millions of public and private code repositories, but GitHub's security team reproduced and confirmed the issue within 40 minutes, then deployed a fix immediately.

The Verge (AI)
05

General Motors is adding Gemini to four million cars

industry
Apr 29, 2026

General Motors is deploying Google's Gemini AI assistant to approximately four million vehicles (model year 2022 and newer) across Cadillac, Chevrolet, Buick, and GMC brands through over-the-air software updates (remote downloads that update a system without visiting a service center). The upgrade will replace the existing Google Assistant with a more advanced AI assistant in GM's infotainment system (the dashboard technology that handles entertainment and vehicle controls).

The Verge (AI)
06

Meet the AI jailbreakers: ‘I see the worst things humanity has produced’

securitysafety
Apr 29, 2026

Security researchers test large language models (AI systems trained on massive amounts of text data) by attempting prompt injection attacks (tricking the AI into ignoring its safety rules) to find vulnerabilities before bad actors do. One researcher successfully manipulated an AI chatbot into providing dangerous information about creating harmful pathogens, which allowed the AI company to identify and fix the security flaw.

The Guardian Technology
07

AWS leans on prior ingenuity to face future AI and quantum threats

securitypolicy
Apr 29, 2026

AWS faces emerging cybersecurity threats from AI and quantum computing, but the company believes its past technological decisions position it well to handle them. Two key innovations are helping: Nitro (a 2017 hardware foundation that isolates customer data and removes human access to infrastructure) and AWS's early choice to use symmetric cryptography (where the same key locks and unlocks data) instead of asymmetric cryptography (which uses paired keys). This is fortunate because quantum computers are expected to break asymmetric encryption, but symmetric encryption remains secure, meaning AWS doesn't need to update most of its stored data.

CSO Online
08

Cybersecurity in the Intelligence Age

policysecurity
Apr 29, 2026

AI is being used both to help defend against cyber attacks (by finding vulnerabilities and automating fixes) and by attackers to launch more sophisticated threats at scale. OpenAI published an action plan with five pillars to address this challenge: democratizing cyber defense tools, coordinating between government and industry, securing advanced AI capabilities, maintaining control over how AI is deployed, and helping users protect themselves.

OpenAI Blog
09

GHSA-88hf-wf7h-7w4m: OpenTelemetry's Zipkin remote endpoint cache could grow without bounds and increase memory pressure

security
Apr 28, 2026

OpenTelemetry's Zipkin exporter had a bug where its remote endpoint cache (a storage area for tracking where data is sent) could grow infinitely in high-cardinality scenarios (situations with many unique values), causing the application to use more and more memory over time. This could make the application slower or crash.

Fix: Introduce a bounded, thread-safe LRU cache (a cache that automatically removes the least recently used items when full) for remote endpoints and enforce a fixed maximum size to prevent unbounded growth. See PR #7081 in the opentelemetry-dotnet repository for the fix.

GitHub Advisory Database
10

Elon Musk appeared more petty than prepared

policy
Apr 28, 2026

N/A -- This article is about a legal case (Musk v. Altman) and courtroom testimony, not an AI or LLM technical issue.

The Verge (AI)
Prev1...2324252627...371Next
high

GHSA-cmrh-wvq6-wm9r: n8n-mcp webhook and API client paths has an authenticated SSRF

CVE-2026-44694GitHub Advisory DatabaseMay 8, 2026
May 8, 2026
high

CVE-2026-41487: Langfuse is an open source large language model engineering platform. From version 3.68.0 to before version 3.167.0, the

CVE-2026-41487NVD/CVE DatabaseMay 8, 2026
May 8, 2026
high

Claude in Chrome is taking orders from the wrong extensions

CSO OnlineMay 8, 2026
May 8, 2026