aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 68
Daily Briefing: Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws affecting versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates that were evaluated without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).
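
If you run the LiteLLM proxy, a quick first step is checking whether the installed package falls inside the reported affected range. A minimal sketch, assuming the standard litellm PyPI distribution and the packaging library; it only flags the version range above and does not test exploitability:

```python
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

# Affected range reported for CVE-2026-42271, CVE-2026-42203 and CVE-2026-42208.
AFFECTED_MIN = Version("1.74.2")
AFFECTED_MAX = Version("1.83.6")

try:
    installed = Version(version("litellm"))
except PackageNotFoundError:
    print("litellm is not installed in this environment.")
else:
    if AFFECTED_MIN <= installed <= AFFECTED_MAX:
        print(f"litellm {installed} is within the reported affected range; "
              "review the advisories and upgrade the proxy.")
    else:
        print(f"litellm {installed} is outside the reported affected range.")
```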

ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.
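
One practical countermeasure implied here is auditing MCP client configurations for hardcoded credentials before they ship. A minimal sketch, assuming the common mcpServers/env JSON layout used by several MCP clients; the regex patterns and the scan_mcp_config helper are illustrative, not an existing scanner:

```python
import json
import re
import sys
from pathlib import Path

# Heuristic, illustrative patterns for secret-looking names and values.
SECRET_NAME = re.compile(r"key|token|secret|password", re.IGNORECASE)
SECRET_VALUE = re.compile(r"sk-[A-Za-z0-9]{16,}|://[^\s/@]+:[^\s/@]+@|[A-Za-z0-9_\-]{32,}")

def scan_mcp_config(path: Path) -> list[str]:
    """Flag hardcoded, secret-looking values in an MCP client config file."""
    findings = []
    config = json.loads(path.read_text())
    for server, spec in config.get("mcpServers", {}).items():
        for name, value in (spec.get("env") or {}).items():
            if isinstance(value, str) and value and (
                SECRET_NAME.search(name) or SECRET_VALUE.search(value)
            ):
                findings.append(f"{path}: server '{server}' hardcodes {name}")
    return findings

if __name__ == "__main__":
    for arg in sys.argv[1:]:
        for finding in scan_mcp_config(Path(arg)):
            print(finding)
```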

Latest Intel

01

llm-echo 0.5a0

industry
May 4, 2026

llm-echo 0.5a0 is a debug plugin (a tool that helps developers test code) for LLM that provides a fake AI model called "echo" for testing purposes instead of running a real LLM. The new version adds a "-o thinking 1" option to simulate reasoning blocks (the internal steps an AI uses to work through problems) and is compatible with LLM 0.32a0 and higher.

Simon Willison's Weblog
02

Anthropic Mythos spurs White House to weigh pre-release reviews for high-risk AI models

policy, security
May 4, 2026

The Trump administration is considering requiring advanced AI models to be reviewed before public release, particularly those capable of helping users find software vulnerabilities (weaknesses in code that attackers can exploit). This discussion was prompted by Anthropic's Mythos model, which can identify thousands of high-severity vulnerabilities better than most human programmers, though the company has not released it publicly and instead created Project Glasswing to give selected companies access for defensive purposes (finding and fixing vulnerabilities before attackers do).

CSO Online
03

GHSA-8pqq-224h-x875: ogham-mcp had credentials embedded in published PyPI sdists -- Neon postgres URLs and Voyage API key

security
May 4, 2026

Between February and April 2026, 22 versions of the ogham-mcp package were accidentally published to PyPI (the Python package repository) with embedded credentials, including database passwords for Neon postgres (a database service) and a Voyage AI API key (a token that grants access to an AI service). No evidence of actual misuse was found, and the maintainers have rotated all affected credentials.

Fix: Upgrade to v0.11.1 immediately by running: pip install --upgrade "ogham-mcp>=0.11.1". This version removes the leaked credentials and adds automated scanning to prevent future credential leaks. Users do not need to rotate credentials on their own end, as the exposed credentials belonged to the project maintainers, not to users.
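
The automated scanning mentioned above is typically a pre-publish check that fails the build when source files contain credential-shaped strings. A generic sketch of that idea; the patterns are illustrative and this is not ogham-mcp's actual tooling:

```python
import re
import sys
from pathlib import Path

# Illustrative patterns: database URLs with embedded passwords, quoted API keys.
PATTERNS = [
    re.compile(r"postgres(ql)?://[^\s:@/]+:[^\s@/]+@"),
    re.compile(r"(?i)(api[_-]?key|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
]

root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
hits = [
    str(path)
    for path in root.rglob("*.py")
    for pattern in PATTERNS
    if pattern.search(path.read_text(errors="ignore"))
]
if hits:
    print("Possible hardcoded credentials:", *sorted(set(hits)), sep="\n  ")
    sys.exit(1)  # block the release
```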

GitHub Advisory Database
04

New ways to buy ChatGPT ads

industry
May 4, 2026

OpenAI is expanding its ChatGPT advertising pilot by introducing new tools that make it easier for businesses to create and buy ads. Advertisers can now use a beta self-serve Ads Manager (a tool for setting up and managing ad campaigns) or work through partners, and can choose between cost-per-click (CPC, paying only when someone clicks an ad) or cost-per-mille (CPM, paying per 1,000 ad views) bidding options. The platform includes measurement tools that let advertisers see campaign performance without accessing user conversations, maintaining privacy.
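
For readers unfamiliar with the two bidding models, a toy comparison of how costs accrue under each (the figures are invented and are not OpenAI's pricing):

```python
# Hypothetical campaign figures to illustrate CPC vs. CPM billing.
impressions = 200_000          # times the ad is shown
clicks = 1_500                 # times the ad is clicked
cpc_bid = 1.20                 # dollars per click
cpm_bid = 8.00                 # dollars per 1,000 impressions

cpc_cost = clicks * cpc_bid                  # pay only when someone clicks
cpm_cost = (impressions / 1000) * cpm_bid    # pay per 1,000 views

print(f"CPC cost: ${cpc_cost:,.2f}")  # $1,800.00
print(f"CPM cost: ${cpm_cost:,.2f}")  # $1,600.00
```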

OpenAI Blog
05

Advancing youth safety and wellbeing in EMEA

safety, policy
May 4, 2026

OpenAI has published a European Youth Safety Blueprint with five practical pillars to help protect young people using AI, including age-appropriate safeguards, privacy-preserving age verification, and parental controls. The company is also funding 12 organizations across Europe, the Middle East, and Africa with €500,000 in grants to conduct research and programs on youth safety, AI literacy, and mental health support in real-world settings.

OpenAI Blog
06

OpenAI and PwC collaborate to reimagine the office of the CFO

industry
May 4, 2026

OpenAI and PwC are collaborating to help finance teams use AI agents (software programs that can autonomously perform tasks) to automate workflows, reduce manual work, and improve decision-making in finance departments. The partnership is building these agents based on real-world experience from OpenAI's own finance organization, where they have already seen results like processing 5 times more contracts with the same team size.

OpenAI Blog
07

CVE-2026-42092: titra is an open source time tracking project. In version 0.99.52, the globalsettings Meteor publication returns all glo

security
May 4, 2026

Titra, an open source time tracking application, has a vulnerability in version 0.99.52 where the globalsettings Meteor publication (a feature that broadcasts data to connected users) exposes sensitive configuration information like API keys without checking if the user has admin permissions. Any authenticated user (someone logged into the system) can access these secrets through DDP (the protocol Meteor uses to send data to clients).

NVD/CVE Database
08

CVE-2026-42440: OOM Denial of Service via Unbounded Array Allocation in Apache OpenNLP AbstractModelReader  Versions Affected:  before

security
May 4, 2026

Apache OpenNLP has a vulnerability where three methods in AbstractModelReader read count values from binary model files without checking if they're reasonable, allowing an attacker to trigger an OOM error (a crash caused by the program running out of memory) by creating a malicious .bin file with an extremely large count value. This denial of service (making a service unavailable) attack requires minimal file size and crashes the Java virtual machine early during model loading.

Fix: 2.x users should upgrade to 2.5.9. 3.x users should upgrade to 3.0.0-M3. The fix adds an upper bound check (default 10,000,000) on the three count fields before array allocation; values that are negative or exceed the bound throw an IllegalArgumentException and fail safely. Users who cannot upgrade immediately should treat all .bin model files as untrusted input unless their origin is verified, and avoid loading models from end users or third-party repositories without integrity checks. Deployments needing higher limits can set the OPENNLP_MAX_ENTRIES system property at JVM startup (e.g., -DOPENNLP_MAX_ENTRIES=50000000).
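
OpenNLP is a Java library, but the fix pattern described above is language-agnostic: validate an attacker-controlled count against an upper bound before allocating anything sized by it. A minimal Python sketch of the idea; the 10,000,000 limit mirrors the default bound described above, while the 4-byte field layout is illustrative rather than OpenNLP's actual model format:

```python
import struct

MAX_ENTRIES = 10_000_000  # upper bound applied before any allocation

def read_count(stream) -> int:
    """Read a 4-byte big-endian count and reject absurd values before allocating."""
    raw = stream.read(4)
    if len(raw) != 4:
        raise ValueError("truncated model file")
    (count,) = struct.unpack(">i", raw)
    if count < 0 or count > MAX_ENTRIES:
        raise ValueError(f"count {count} outside allowed range 0..{MAX_ENTRIES}")
    return count

# Only after validation is it safe to size storage from the count, e.g.:
# entries = [None] * read_count(model_stream)
```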

NVD/CVE Database
09

CVE-2026-42077: Evolver is a GEP-powered self-evolving engine for AI agents. Prior to version 1.69.3, a prototype pollution vulnerabilit

security
May 4, 2026

Evolver, a self-evolving engine for AI agents, had a prototype pollution vulnerability (a bug where attackers inject malicious properties into core JavaScript objects) in versions before 1.69.3. The flaw existed in functions that merged user data without blocking dangerous keys like __proto__ and constructor, allowing attackers to modify how all JavaScript objects behave.

Fix: Update to version 1.69.3, where this issue has been patched.

NVD/CVE Database
10

CVE-2026-42076: Evolver is a GEP-powered self-evolving engine for AI agents. Prior to version 1.69.3, a command injection vulnerability

security
May 4, 2026

Evolver, a tool that helps AI agents improve themselves, had a command injection vulnerability (a security flaw where attackers trick the system into running unauthorized commands) in versions before 1.69.3. The flaw was in the _extractLLM() function, which built shell commands using simple string concatenation without cleaning the input first, allowing attackers to execute arbitrary commands on the server when certain input contained shell metacharacters (special characters that have meaning to the command system).

Fix: This issue has been patched in version 1.69.3. Users should upgrade to version 1.69.3 or later.
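
Evolver is a JavaScript project, but the vulnerable and safe patterns are language-agnostic: never splice untrusted text into a shell command string; pass arguments as a list so the shell never interprets them. A generic Python sketch (the command and input are invented for illustration):

```python
import subprocess

user_input = "report.txt; rm -rf /"  # attacker-controlled text with shell metacharacters

# Vulnerable pattern: string concatenation handed to a shell.
# subprocess.run(f"cat {user_input}", shell=True)   # would execute the injected command

# Safer pattern: an argument list, no shell involved, the input stays a single argument.
subprocess.run(["cat", user_input], check=False)
```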

NVD/CVE Database
Critical This Week (5 issues)

critical

CVE-2026-42271: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.74.2 to before vers

CVE-2026-42271 · NVD/CVE Database · May 8, 2026

critical

CVE-2026-42203: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.80.5 to before vers

CVE-2026-42203 · NVD/CVE Database · May 8, 2026

critical

Gemini CLI Vulnerability Could Have Led to Code Execution, Supply Chain Attack

SecurityWeek · May 7, 2026

critical

GHSA-9h64-2846-7x7f: Axonflow fixed bugs by implementing multi-tenant isolation and access-control hardening

GitHub Advisory Database · May 6, 2026

critical

GHSA-gmvf-9v4p-v8jc: fast-jwt: JWT auth bypass due to empty HMAC secret accepted by async key resolver

CVE-2026-44351 · GitHub Advisory Database · May 6, 2026