aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

[TOTAL_TRACKED]
3,710
[LAST_24H]
1
[LAST_7D]
68
Daily BriefingFriday, May 8, 2026
>

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271, CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).

>

ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

Latest Intel

page 10/371
VIEW ALL
01

Introducing AI traffic analysis dashboards for AWS WAF

securityindustry
Critical This Week5 issues
critical

CVE-2026-42271: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.74.2 to before vers

CVE-2026-42271NVD/CVE DatabaseMay 8, 2026
May 8, 2026
>

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

>

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.

May 5, 2026

AWS has launched AI Traffic Analysis dashboards for AWS WAF (a web access control list, or tool that filters traffic to web applications), helping organizations understand and manage AI bot traffic that now makes up 30-60% of total web activity. The dashboard provides visibility into which AI bots are accessing applications, their intent (like data gathering or search indexing), and traffic patterns, integrated with AWS WAF Bot Control's detection of over 650 unique bots.

AWS Security Blog
02

OpenAI president’s ‘deeply personal’ diary becomes focus in Musk’s case against Altman

policy
May 5, 2026

Elon Musk is suing OpenAI's president Greg Brockman and CEO Sam Altman, claiming they violated OpenAI's founding agreement by converting it from a non-profit to a for-profit company while deceiving him about their intentions. During the trial's second week, Brockman's personal emails, texts, and diary entries became key evidence as Musk seeks to remove the executives, undo the restructuring, and obtain $134 billion to return to OpenAI's non-profit arm.

The Guardian Technology
03

Anthropic CEO warns of cyber ‘moment of danger’ as AI exposes thousands of vulnerabilities

securitypolicy
May 5, 2026

Anthropic's CEO warned that their latest AI model, Mythos, has discovered tens of thousands of software vulnerabilities (security weaknesses that attackers could exploit), creating an urgent window for organizations to patch them before rival AI systems catch up in about 6-12 months. The company is restricting access to Mythos because releasing information about unpatched vulnerabilities could allow criminals or hostile nations to exploit them, but leaders expressed conditional optimism that addressing this "moment of danger" correctly could lead to improved cybersecurity overall.

CNBC Technology
04

GHSA-fj4g-2p96-q6m3: Network-AI missing authentication on MCP HTTP endpoint, which allows unauthenticated privileged tool calls

security
May 5, 2026

The Network-AI project has a critical vulnerability where its MCP HTTP endpoint (a server that handles tool requests) accepts requests without any authentication checks, and binds to 0.0.0.0 (making it accessible from any network). This allows anyone who can reach the server to call privileged tools that can read and modify the system's configuration, control agents, create security tokens, and adjust budget limits.

GitHub Advisory Database
05

US to safety test new AI models from Google, Microsoft, xAI

policysafety
May 5, 2026

Google, Microsoft, and xAI have agreed to voluntarily submit their new AI models for safety testing by the US Department of Commerce's Center for AI Standards and Innovation (CAISI, a government agency focused on AI safety standards) before releasing them to the public. This expands earlier agreements with other AI companies and represents a shift toward safety oversight, even as the Trump administration has generally favored less regulation of AI development. The evaluations will assess the models' capabilities and security, with CAISI having already conducted 40 previous evaluations including some models that were not released publicly.

BBC Technology
06

CVE-2026-7847: A vulnerability was found in chatchat-space Langchain-Chatchat up to 0.3.1.3. The affected element is the function _get_

security
May 5, 2026

A vulnerability was found in Langchain-Chatchat (a chatbot framework) up to version 0.3.1.3 in the file upload handler component. The vulnerability involves insufficiently random values (meaning the system doesn't generate unpredictable numbers properly), which could be exploited by someone on the same local network, though the attack is difficult to carry out.

NVD/CVE Database
07

Trump admin moves further into AI oversight, will test Google, Microsoft and xAI models

policy
May 5, 2026

The U.S. government is increasing oversight of AI models through the Center for AI Standards and Innovation (CAISI, a government agency within the Department of Commerce), which has signed agreements to evaluate AI models from Google DeepMind, Microsoft, and xAI before they are released publicly. The White House is also considering creating a new working group to develop procedures for vetting AI models before public release, which might be established through an executive order (a direct presidential directive).

CNBC Technology
08

Major publishers sue Meta for copyright infringement over AI training

policysecurity
May 5, 2026

Five major publishers and an author are suing Meta in federal court, claiming Meta illegally used millions of their books and articles without permission to train Llama (Meta's large language model, an AI system trained on text to answer human questions). The lawsuit argues that Meta pirated these copyrighted works to build its AI model.

The Guardian Technology
09

OpenAI claims ChatGPT’s new default model hallucinates way less

safety
May 5, 2026

OpenAI released a new default model called GPT-5.5 Instant that the company claims produces fewer hallucinations (instances where an AI generates false or made-up information as if it were fact), particularly in high-stakes fields like medicine and law. According to OpenAI's internal testing, the new model generated 52.5% fewer hallucinated claims than the previous GPT-5.3 Instant model on difficult prompts.

The Verge (AI)
10

Book publishers sue Meta over AI’s ‘word-for-word’ copying

policysecurity
May 5, 2026

Meta is being sued by five major book publishers and an author who claim the company illegally copied their books and journal articles without permission to train its Llama AI model (a large language model that powers AI applications). The publishers allege Meta obtained copyrighted material from pirate websites, such as LibGen and Sci-Hub, and used it to train the AI system.

The Verge (AI)
Prev1...89101112...371Next
critical

CVE-2026-42203: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.80.5 to before vers

CVE-2026-42203NVD/CVE DatabaseMay 8, 2026
May 8, 2026
critical

Gemini CLI Vulnerability Could Have Led to Code Execution, Supply Chain Attack

SecurityWeekMay 7, 2026
May 7, 2026
critical

GHSA-9h64-2846-7x7f: Axonflow fixed bugs by implementing multi-tenant isolation and access-control hardening

GitHub Advisory DatabaseMay 6, 2026
May 6, 2026
critical

GHSA-gmvf-9v4p-v8jc: fast-jwt: JWT auth bypass due to empty HMAC secret accepted by async key resolver

CVE-2026-44351GitHub Advisory DatabaseMay 6, 2026
May 6, 2026