aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 8
Daily Briefing: Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271, CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).
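The unsandboxed-template class of bug behind the LiteLLM RCEs can be illustrated generically. This is not LiteLLM's code; the function names and template text below are made up for illustration of the pattern and its mitigation:

```python
# Generic illustration of unsandboxed template evaluation, not LiteLLM's code.
from string import Template


def render_unsafe(template: str) -> str:
    # Vulnerable pattern: evaluating template expressions directly means
    # a "template" like "__import__('os').system('id')" runs arbitrary
    # commands on the server. Shown only as the anti-pattern.
    return str(eval(template))


def render_safe(template: str, values: dict) -> str:
    # Mitigation: substitution-only rendering with no expression
    # evaluation, so attacker-supplied text stays inert data.
    return Template(template).safe_substitute(values)


print(render_safe("Answer as ${persona}.", {"persona": "a pirate"}))
```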


ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

Latest Intel

01

GHSA-r6jc-mpqw-m755: n8n has SQL Injection in Oracle Database Node via Limit Field

security
Apr 29, 2026

n8n, a workflow automation tool, had a SQL injection vulnerability (a type of attack where malicious SQL commands are inserted into input fields) in its Oracle Database node. The flaw allowed attackers to inject arbitrary SQL commands through the `Limit` field when external user input was used, potentially letting them steal data from the connected Oracle database.

Fix: The issue has been fixed in n8n versions 1.123.32, 2.17.4, and 2.18.1. Users should upgrade to one of these versions or later to remediate the vulnerability. If upgrading is not immediately possible, temporary mitigations include: limiting workflow creation and editing permissions to fully trusted users only, disabling the Oracle Database node by adding `n8n-nodes-base.oracleDatabase` to the `NODES_EXCLUDE` environment variable, and avoiding passing unvalidated external user input into the Oracle Database node's `Limit` field via expressions. The source notes these workarounds do not fully remediate the risk and should only be used as short-term measures.

GitHub Advisory Database

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.

Critical This Week · 4 issues

high · GHSA-8g7g-hmwm-6rv2: n8n-mcp affected by path traversal, redirect-following SSRF, and telemetry payload exposure
GitHub Advisory Database · May 8, 2026
02

Google Search queries hit an ‘all time high’ last quarter

industry
Apr 29, 2026

Google reported record-breaking search queries in Q1 2026, with CEO Sundar Pichai attributing the growth to AI investments and new AI experiences integrated into their products. The company saw 19% revenue growth in search, over 350 million paid subscriptions across services like Gemini App and YouTube, and Pichai highlighted this as their strongest quarter for consumer AI products.

The Verge (AI)
03

GHSA-55m9-299j-53c7: OneCollector exporter reads unbounded HTTP response bodies

security
Apr 29, 2026

The OneCollector exporter (a tool that sends telemetry data, which is information about how a program is running, to a backend server) has a flaw where it reads error responses from failed HTTP requests without limiting how much data it accepts. If an attacker controls the backend server or intercepts the connection, they can send an extremely large response that exhausts the application's memory and crashes it (a denial-of-service attack, where a system is made unavailable).

Fix: Update to the version with PR #4117 applied, which limits the number of bytes read from error response bodies to 4 MiB (mebibytes). Additionally, use network-level controls such as firewall rules, mTLS (mutual TLS, in which both sides of a connection authenticate with certificates), or a service mesh to prevent man-in-the-middle attacks on the configured backend/collector endpoint.
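The fix described above, capping how many bytes are read from an error body, can be sketched generically. The actual patch is in the .NET exporter; the Python below is only an illustration of the bounded-read idea:

```python
# Illustrative bounded read, not the exporter's actual .NET code.
import io

MAX_ERROR_BODY = 4 * 1024 * 1024  # 4 MiB cap, as in the described fix


def read_bounded(stream: io.BufferedIOBase, limit: int = MAX_ERROR_BODY) -> bytes:
    """Read at most `limit` bytes, so a hostile server cannot exhaust
    memory with an arbitrarily large error response."""
    data = stream.read(limit + 1)
    if len(data) > limit:
        # Oversized body: truncate instead of buffering it all.
        return data[:limit]
    return data


# A 5 MiB response body gets truncated to the 4 MiB cap.
body = read_bounded(io.BytesIO(b"x" * (5 * 1024 * 1024)))
print(len(body))
```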

GitHub Advisory Database
04

Where the goblins came from

safetyresearch
Apr 29, 2026

Starting with GPT-5.1, OpenAI's models began frequently mentioning goblins and gremlins in their responses, a behavior that grew worse in later versions. The root cause was discovered to be the training process for the "Nerdy" personality feature, which unknowingly gave high rewards for outputs containing creature metaphors, causing the model to learn and amplify this quirk over time. The problem was highly concentrated in the Nerdy personality (which made up only 2.5% of responses but accounted for 66.7% of goblin mentions), and was identified through comparing model outputs and analyzing which reward signals (scoring systems that guide AI training) favored creature-word language.
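The degree of concentration can be expressed as a lift ratio computed from the article's two percentages (the arithmetic below is illustrative, not OpenAI's analysis method):

```python
# Lift ratio from the figures quoted above; illustrative arithmetic only.
nerdy_share = 0.025   # Nerdy personality: 2.5% of all responses
goblin_share = 0.667  # but 66.7% of all goblin mentions

# How over-represented Nerdy outputs are among goblin mentions,
# relative to their share of overall responses.
lift = goblin_share / nerdy_share
print(f"Nerdy responses are ~{lift:.0f}x over-represented among goblin mentions")
```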

OpenAI Blog
05

Designing trust and safety into Amazon Bedrock powered applications

safetypolicy
Apr 29, 2026

This document outlines how to build safety and trust into AI applications using Amazon Bedrock (AWS's generative AI service) by following a responsible AI framework. Organizations that implement responsible AI practices see significant business benefits, including 82% improvement in employee trust and 25% increase in customer loyalty. Safety should be integrated throughout the AI development lifecycle across three phases: design and development (evaluating risks and building guardrails), deployment (implementing multiple layers of protection including red team testing, which simulates attacks to find vulnerabilities), and operations (continuous monitoring and adaptation as technology and usage patterns evolve).

Fix: The source text describes approaches rather than specific technical fixes. For the design and development phase, it recommends thoroughly evaluating safety risks, understanding application capabilities and limits, and building safety guardrails from the beginning. For deployment, it recommends implementing robust safety measures through multiple layers including comprehensive user training, proactive monitoring and review processes, clear safety protocols and user guidelines, and red team testing. For the operations phase, it recommends implementing real-time feedback mechanisms, conducting regular performance evaluations, and continuously monitoring for shifts in application usage or functions that could compromise safety.

AWS Security Blog
06

LLM 0.32a0 is a major backwards-compatible refactor

industry
Apr 29, 2026

LLM 0.32a0 is an alpha release that redesigns how the LLM Python library handles inputs and outputs to better support modern AI models. Instead of the old simple text-in, text-out model, it now represents conversations as sequences of messages (with user and assistant roles) and allows responses to contain different types of content, making it easier to work with APIs like OpenAI's chat completions.
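The message-list shape described above looks roughly like the following. This is a generic illustration of role-based messages with mixed content parts, in the style of chat-completions APIs, not the LLM library's actual classes or method names:

```python
# Generic illustration of the message-list conversation shape,
# not the LLM library's actual API.
conversation = [
    {"role": "user", "content": "What is a proxy server?"},
    {"role": "assistant", "content": [
        # Responses may carry multiple content parts of different types.
        {"type": "text", "text": "A proxy server forwards requests."},
    ]},
]


def legacy_text(conv) -> str:
    """Flatten a conversation back to the old text-in, text-out
    view (lossy: content types other than text are dropped)."""
    parts = []
    for msg in conv:
        c = msg["content"]
        if isinstance(c, list):
            c = " ".join(p.get("text", "") for p in c)
        parts.append(f'{msg["role"]}: {c}')
    return "\n".join(parts)


print(legacy_text(conversation))
```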

Simon Willison's Weblog
07

GHSA-vc24-j8c5-2vw4: OpenTelemetry.Resources.Azure has an unbounded HTTP response body read

security
Apr 29, 2026

OpenTelemetry.Resources.Azure has a vulnerability where it reads unlimited amounts of data from Azure VM metadata service responses into memory, allowing an attacker to cause the application to crash by sending extremely large responses (a denial of service attack where the system runs out of memory). This affects applications using the Azure VM resource detector that connect to a compromised or intercepted metadata endpoint.

Fix: Fixed in OpenTelemetry.Resources.Azure version 1.15.0-beta.2. The fix introduces limits to HttpClient requests so that response bodies are streamed rather than loaded entirely into memory, with responses greater than 4 MiB being ignored. As workarounds, you can disable the Azure VM resource detector or use network-level controls (firewall rules, mTLS, or service mesh) to prevent Man-in-the-Middle attacks on the Azure VM instance metadata endpoint.

GitHub Advisory Database
08

All the evidence unveiled so far in Musk v. Altman

industry
Apr 29, 2026

A legal trial between Elon Musk and Sam Altman is revealing documents from OpenAI's founding, including emails and corporate records that show Musk drafted much of OpenAI's early mission and structure, Nvidia provided computational resources, and early leaders had concerns about various aspects of the organization's direction. The case is still ongoing and more evidence is expected to be disclosed as it progresses.

The Verge (AI)
09

OpenAI’s subtle drift from Microsoft has become an aggressive move toward Amazon

industry
Apr 29, 2026

OpenAI has restructured its relationship with Microsoft multiple times in six months, most recently ending Microsoft's exclusive access to OpenAI's models and technology. The company is now moving its AI services to Amazon Web Services (cloud computing infrastructure), Microsoft's major competitor, after committing $100+ billion in spending to AWS and receiving a $50 billion investment from Amazon. This shift suggests OpenAI is deliberately diversifying away from its decade-long partnership with Microsoft to work with multiple cloud providers and meet more customers' needs.

CNBC Technology
10

Building the compute infrastructure for the Intelligence Age

industry
Apr 29, 2026

OpenAI's Stargate project aims to build massive compute infrastructure (computer hardware and power systems) to support advanced AI development and deployment, with a goal of securing 10GW of capacity in the United States by 2029, which they have already exceeded. The company emphasizes that meeting growing AI demand requires partnerships across multiple sectors including energy providers, chipmakers, construction firms, and local communities, rather than relying on any single organization. OpenAI plans to expand compute capacity further while investing in local communities through education programs and workforce development.

OpenAI Blog
high · GHSA-cmrh-wvq6-wm9r: n8n-mcp webhook and API client paths has an authenticated SSRF
CVE-2026-44694 · GitHub Advisory Database · May 8, 2026

high · CVE-2026-41487: Langfuse is an open source large language model engineering platform. From version 3.68.0 to before version 3.167.0, the
CVE-2026-41487 · NVD/CVE Database · May 8, 2026

high · Claude in Chrome is taking orders from the wrong extensions
CSO Online · May 8, 2026