aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 71

Daily Briefing: Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271, CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).


ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.
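The SQL injection component of the LiteLLM advisory above (CVE-2026-42208) belongs to the classic string-concatenation class of bug. A minimal, self-contained sketch of the vulnerable pattern and its fix, using a hypothetical api_keys table rather than LiteLLM's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (team TEXT, key TEXT)")
conn.execute("INSERT INTO api_keys VALUES ('alpha', 'sk-secret')")

def lookup_unsafe(team: str):
    # Vulnerable pattern: attacker-controlled input is concatenated into SQL.
    # team = "x' OR '1'='1" makes the WHERE clause always true and
    # returns every stored credential.
    return conn.execute(
        "SELECT key FROM api_keys WHERE team = '" + team + "'"
    ).fetchall()

def lookup_safe(team: str):
    # Parameterized query: the input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT key FROM api_keys WHERE team = ?", (team,)
    ).fetchall()

print(lookup_unsafe("x' OR '1'='1"))  # leaks all keys
print(lookup_safe("x' OR '1'='1"))    # returns nothing
```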

Latest Intel

01

CVE-2026-32207: Improper neutralization of input during web page generation ('cross-site scripting') in Azure Machine Learning allows an

security
May 7, 2026

CVE-2026-32207 is a cross-site scripting vulnerability (XSS, where an attacker injects malicious code into a web page that gets executed in users' browsers) in Azure Machine Learning that allows an unauthorized attacker to perform spoofing (impersonating someone or something else) over a network. The vulnerability stems from improper handling of user input during web page generation.
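The "improper neutralization" at the root of this XSS class comes down to interpolating raw user input into generated HTML. A generic sketch of the vulnerable pattern and the standard escaping fix, not Azure Machine Learning's actual code:

```python
import html

user_input = '<img src=x onerror="alert(1)">'

# Vulnerable: raw input interpolated into the page becomes live markup,
# so the onerror handler executes in the victim's browser.
unsafe_page = "<p>Hello, " + user_input + "</p>"

# Neutralized: special characters are escaped, so the browser renders
# the payload as inert text instead of executing it.
safe_page = "<p>Hello, " + html.escape(user_input) + "</p>"

print(safe_page)
```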

Critical This Week (5 issues)

critical

CVE-2026-42271: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.74.2 to before vers

CVE-2026-42271 | NVD/CVE Database | May 8, 2026

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.


Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.
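One concrete defense against the hardcoded-credential pattern mentioned above is to keep secrets out of AI and MCP config files entirely and resolve them from the environment at runtime. A minimal sketch; the config keys and variable names here are hypothetical, not part of any real MCP server:

```python
import json
import os

# Hardcoded secrets end up in version control and package tarballs, where
# supply chain attackers (as in the postmark-mcp incident) can harvest
# them. Store only the *name* of the variable in the config and resolve
# the actual secret at runtime.
mcp_config = {
    "server": "email-tools",
    "api_key_env": "EMAIL_API_KEY",  # variable name, not the secret itself
}

def resolve_key(config: dict) -> str:
    key = os.environ.get(config["api_key_env"], "")
    if not key:
        raise RuntimeError(f"set {config['api_key_env']} in the environment")
    return key

os.environ["EMAIL_API_KEY"] = "example-token"  # normally set by a secret manager
print(json.dumps(mcp_config))  # safe to commit: contains no credential
print(resolve_key(mcp_config))
```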

NVD/CVE Database
02

CVE-2026-26164: Improper neutralization of special elements in output used by a downstream component ('injection') in M365 Copilot allow

security
May 7, 2026

CVE-2026-26164 is a vulnerability in Microsoft 365 Copilot caused by improper neutralization of special elements in output (a type of injection attack, where specially crafted input can be misinterpreted as commands). An attacker without authorization could exploit this to access and disclose information over a network.

NVD/CVE Database
03

CVE-2026-26129: Improper neutralization of special elements in M365 Copilot allows an unauthorized attacker to disclose information over

security
May 7, 2026

CVE-2026-26129 is a vulnerability in Microsoft 365 Copilot where improper neutralization of special elements (failure to safely handle certain characters or code) allows an unauthorized attacker to disclose information over a network. The vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 4.0, indicating moderate severity.

NVD/CVE Database
04

OpenAI rolls out new model for cybersecurity teams a month after Anthropic's Mythos debut

industry
May 7, 2026

OpenAI announced GPT-5.5-Cyber, a specialized version of its latest AI model designed for cybersecurity teams, which is being released in limited preview to vetted partners. Unlike the standard GPT-5.5 model, this version has relaxed safety restrictions to make it easier for security professionals to use it for tasks like vulnerability identification (finding weaknesses in software), patch validation (checking if security updates work), and malware analysis (studying malicious software). This release comes one month after rival Anthropic launched Claude Mythos, a similar AI tool also restricted to select cybersecurity organizations.

CNBC Technology
05

CVE-2026-41691: i18nextify is a JavaScript library that adds website internat

security
May 7, 2026

i18nextify is a JavaScript library that enables website internationalization (support for multiple languages) through a simple script tag. Versions before 3.0.5 have a URL-injection vulnerability (where attackers can manipulate URLs by injecting special characters) because the library doesn't properly validate language and namespace values before using them in web requests, allowing attackers to exploit this if an application accepts user input for language selection.

Fix: This issue is fixed in version 3.0.5. Users who cannot upgrade immediately can work around it by sanitizing lng / ns before they reach i18next: strip .., /, \, ?, #, %, whitespace, and control characters, and cap the length.
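The described workaround can be sketched as a small sanitizer. This illustrates the advisory's filtering rule in Python rather than the library's own JavaScript, and the length-cap value is an assumption since the advisory does not specify one:

```python
import re

# Characters the advisory says to strip from lng / ns before they reach
# i18next: "..", "/", "\", "?", "#", "%", whitespace, control characters.
# Removing ".", "/" and "\" individually also eliminates any ".." sequence.
FORBIDDEN = re.compile(r"[./\\?#%\s\x00-\x1f\x7f]")
MAX_LEN = 32  # a cap is advised but unspecified; 32 is an assumption

def sanitize(value: str) -> str:
    return FORBIDDEN.sub("", value)[:MAX_LEN]

print(sanitize("en-US"))          # en-US
print(sanitize("en/../passwd"))   # enpasswd
```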

NVD/CVE Database
06

Ollama vulnerability highlights danger of AI frameworks with unrestricted access

security
May 7, 2026

Ollama, a popular framework for running AI models locally, has a critical vulnerability (CVE-2026-7482, called Bleeding Llama) that allows attackers to steal sensitive data like passwords, chat messages, and system secrets from over 300,000 exposed servers. The flaw lets unauthenticated attackers upload a specially crafted file that tricks Ollama into reading memory beyond its intended boundaries, and the vulnerability is easy to exploit because Ollama has no authentication enabled by default.

Fix: Users should update to Ollama version 0.17.1, which includes a patch for this vulnerability. Additionally, deploy an authentication proxy or API gateway (a security layer that controls access) in front of all Ollama instances and never expose them to the internet without IP access filters and firewalls. If your Ollama server was internet-accessible, assume environment variables and secrets in memory may be compromised and rotate API keys, tokens, and credentials immediately. On local networks, Ollama servers should be isolated on secure network segments and behind firewalls.
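A quick way to audit your own hosts for the default no-authentication exposure is to probe Ollama's model-listing endpoint (GET /api/tags), which answers without credentials on a default install. A stdlib-only sketch; probe only infrastructure you are authorized to test:

```python
import urllib.error
import urllib.request

def ollama_is_open(host: str, port: int = 11434, timeout: float = 3.0) -> bool:
    """Return True if an Ollama API on host:port answers unauthenticated.

    /api/tags lists installed models and requires no credentials on a
    default install, which makes it a quick exposure check.
    """
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    # Audit your own hosts only; scanning third parties is not authorized.
    for host in ["127.0.0.1"]:
        print(host, "open" if ollama_is_open(host) else "closed/absent")
```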

CSO Online
07

French prosecutors escalate probe of Elon Musk and X to criminal investigation

safetypolicy
May 7, 2026

French prosecutors have escalated their investigation of Elon Musk and his social network X into a criminal probe, focusing on allegations of algorithmic manipulation (using computer programs to influence user feeds and information), spreading of nonconsensual sexually explicit deepfake images (synthetic media created without consent), and Holocaust denial content on X's AI chatbot Grok. Musk and former X CEO Linda Yaccarino were summoned to appear in April but declined to do so, and similar investigations are underway in other countries and by California authorities.

CNBC Technology
08

How to Disable Google's Gemini in Chrome

safetypolicy
May 7, 2026

Google's Chrome browser automatically downloaded and installed Gemini Nano, a local AI model (an AI that runs directly on your computer rather than in the cloud) taking up about 4 GB of space, without clear user notification. Many users were unaware of this installation until recent reports highlighted the issue, raising concerns about transparency in how tech companies roll out AI features.

Fix: To disable Gemini Nano, open Chrome on your computer, click the 'More' menu (three vertical dots) in the top right corner, go to Settings, then System, and toggle 'On-device AI' to off. According to Google, "Once disabled, the model will no longer download or update." However, the source notes that directly uninstalling the file from the directory will cause Chrome to silently redownload it when the browser restarts, so using the settings toggle is the proper method. Be aware that disabling this feature will prevent certain security functions like on-device scam detection from working.

Wired (Security)
09

When prompts become shells: RCE vulnerabilities in AI agent frameworks

securityresearch
May 7, 2026

AI agent frameworks like Semantic Kernel, LangChain, and CrewAI let AI models control tools and plugins (software add-ons that perform actions like running scripts or accessing databases), but researchers discovered that prompt injection (tricking an AI by hiding instructions in its input) can turn into RCE (remote code execution, where an attacker runs commands on a system they don't own). Two critical vulnerabilities in Microsoft's Semantic Kernel (CVE-2026-25592 and CVE-2026-26030) could allow attackers to execute code on a host machine through malicious prompts.

Fix: The source states that the two Semantic Kernel vulnerabilities "have since been fixed" following responsible disclosure with the maintainers, but it provides no patch versions, mitigation steps, or technical details for addressing them.
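A common mitigation pattern for this class of bug, independent of any specific framework, is to let the model select tools only from a fixed registry and never route model-generated text into eval(), exec(), or a shell. A minimal, hypothetical sketch of such a dispatcher:

```python
# The model may only *name* a tool from a fixed registry; arguments are
# coerced and validated by the tool itself, and no model output is ever
# interpreted as code. An injected prompt requesting "shell" or
# "python_eval" fails closed at the allowlist check.
ALLOWED_TOOLS = {
    "add": lambda a, b: float(a) + float(b),
    "upper": lambda s: str(s).upper(),
}

def dispatch(tool_name: str, *args):
    tool = ALLOWED_TOOLS.get(tool_name)
    if tool is None:
        raise PermissionError(f"tool {tool_name!r} is not allowlisted")
    return tool(*args)

print(dispatch("add", "2", "3"))   # 5.0
try:
    dispatch("shell", "rm -rf /")  # injected request: refused
except PermissionError as exc:
    print(exc)
```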

Microsoft Security Blog
10

llm-gemini 0.31

industry
May 7, 2026

A brief announcement of llm-gemini version 0.31, posted by Simon Willison on May 7, 2026. llm-gemini is a plugin that adds support for Google's Gemini models (large language models, AI systems trained on vast amounts of text data) to the llm command-line tool; the source post offers little technical detail about the release itself.

Simon Willison's Weblog
critical

CVE-2026-42203: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.80.5 to before vers

CVE-2026-42203 | NVD/CVE Database | May 8, 2026

critical

Gemini CLI Vulnerability Could Have Led to Code Execution, Supply Chain Attack

SecurityWeek | May 7, 2026

critical

GHSA-9h64-2846-7x7f: Axonflow fixed bugs by implementing multi-tenant isolation and access-control hardening

GitHub Advisory Database | May 6, 2026

critical

GHSA-gmvf-9v4p-v8jc: fast-jwt: JWT auth bypass due to empty HMAC secret accepted by async key resolver

CVE-2026-44351 | GitHub Advisory Database | May 6, 2026