aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 4
Daily Briefing: Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271, CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).
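The SQL injection class of flaw behind CVE-2026-42208 comes down to concatenating attacker input into a query string. A minimal, self-contained Python sketch of the pattern and its standard fix (parameterized queries); the table and values here are invented for illustration, not LiteLLM's actual schema or code:

```python
import sqlite3

# Toy schema standing in for a proxy server's stored API credentials.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (team TEXT, secret TEXT)")
conn.execute("INSERT INTO api_keys VALUES ('team-a', 'sk-aaa'), ('team-b', 'sk-bbb')")

def lookup_unsafe(team: str):
    # VULNERABLE: attacker-controlled input is spliced into the SQL text.
    return conn.execute(
        f"SELECT secret FROM api_keys WHERE team = '{team}'"
    ).fetchall()

def lookup_safe(team: str):
    # FIXED: parameterized query; the input is treated as data, not SQL.
    return conn.execute(
        "SELECT secret FROM api_keys WHERE team = ?", (team,)
    ).fetchall()

payload = "x' OR '1'='1"       # classic injection payload
print(lookup_unsafe(payload))  # dumps every stored secret
print(lookup_safe(payload))    # returns nothing
```

The unsafe version turns the `WHERE` clause into a tautology and leaks every row, which is exactly the "read or modify stored API credentials" impact described above.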


ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

Latest Intel

01

Microsoft now lets admins uninstall Copilot on enterprise devices

security, policy
Critical This Week (2 issues)

High: GHSA-8g7g-hmwm-6rv2: n8n-mcp affected by path traversal, redirect-following SSRF, and telemetry payload exposure (GitHub Advisory Database, May 8, 2026)

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.


Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.
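One concrete, scannable symptom called out above is hardcoded credentials sitting in MCP configuration files. A small Python sketch of the idea, flagging secret-looking env entries whose value is a literal string rather than an environment placeholder; the server names, keys, and values are hypothetical, and real scanners (or secret-detection tools) do far more:

```python
import json
import re

# Hypothetical MCP client config with one hardcoded credential.
config = json.loads("""
{
  "mcpServers": {
    "postmark": {"env": {"POSTMARK_API_KEY": "pm-live-1234567890abcdef"}},
    "search":   {"env": {"SEARCH_API_KEY": "${SEARCH_API_KEY}"}}
  }
}
""")

SECRET_NAME = re.compile(r"(key|token|secret|password)", re.IGNORECASE)

def find_hardcoded_secrets(cfg: dict, path: str = "") -> list:
    """Walk the config; flag secret-named string values that are
    literals rather than ${VAR} placeholders."""
    hits = []
    for name, value in cfg.items():
        here = f"{path}/{name}"
        if isinstance(value, dict):
            hits.extend(find_hardcoded_secrets(value, here))
        elif (isinstance(value, str) and SECRET_NAME.search(name)
                and not value.startswith("${")):
            hits.append(here)
    return hits

print(find_hardcoded_secrets(config))
# flags the postmark key; the ${SEARCH_API_KEY} placeholder passes
```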

Apr 24, 2026

Microsoft has released a new policy setting called RemoveMicrosoftCopilotApp that allows IT administrators to uninstall Copilot (an AI-powered digital assistant) from enterprise Windows devices, available after the April 2026 Patch Tuesday security update. The policy can be deployed through Group Policy or Policy CSP (configuration service provider, a system for managing Windows settings remotely) on devices managed by Microsoft Intune or SCCM (System Center Configuration Manager, enterprise management tools), and applies only to Windows 11 version 25H2 where users haven't launched Copilot in the last 28 days. Users can still reinstall Copilot if they choose to after it is uninstalled by the policy.

Fix: Deploy the RemoveMicrosoftCopilotApp policy via the Policy CSP at '/User/Vendor/MSFT/Policy/Config/WindowsAI/RemoveMicrosoftCopilotApp' or '/Device/Vendor/MSFT/Policy/Config/WindowsAI/RemoveMicrosoftCopilotApp' (these CSP paths are used by MDM tools such as Intune; an equivalent Group Policy setting is also available). When enabled, the policy uninstalls the Microsoft Copilot app from devices in the organization in a non-disruptive way. It applies to Enterprise, Professional, and Education client SKUs only.

BleepingComputer
02

Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US

security, policy
Apr 24, 2026

The Trump administration is announcing plans to prevent foreign companies, especially those in China, from using 'model extraction attacks' (techniques that steal capabilities from U.S.-made AI systems by training weaker AI models on the outputs of stronger ones) to copy American AI innovations. The administration says it will work with U.S. AI companies to identify these extraction activities, build defenses, and punish offenders, while Congress is also proposing legislation to identify and sanction foreign actors who extract features from closed-source U.S. AI models.

SecurityWeek
03

China’s DeepSeek previews new AI model a year after jolting US rivals 

industry
Apr 24, 2026

Chinese AI company DeepSeek released a preview of its new V4 model, which is open-source (publicly available code that anyone can use and modify) and claims to match the performance of closed-source (proprietary, not publicly available) AI systems from US companies like OpenAI and Google. The V4 model shows major improvements in coding tasks, which are important for AI agents (AI systems that can take actions independently), and works well with Chinese chip technology from Huawei.

The Verge (AI)
04

Cohere to acquire German AI company Aleph Alpha as it looks to expand in Europe

industry
Apr 24, 2026

Cohere, a Canadian AI company, announced plans to acquire German AI company Aleph Alpha to expand in Europe, with Aleph Alpha's backer Schwarz Group investing $600 million in Cohere's upcoming funding round. The acquisition aims to combine both companies' strengths to offer sovereign AI (customized AI systems that keep data and control within a specific country or region) to regulated sectors like government, finance, and defense, while giving European organizations alternatives to relying on single AI providers. The deal is expected to close in 2026, pending regulatory approval.

CNBC Technology
05

Copperhelm Raises $7 Million for Agentic Cloud Security Platform

industry
Apr 24, 2026

Copperhelm, an Israel-based startup, raised $7 million to develop an agentic cloud security platform, which uses AI agents (autonomous software programs that can make decisions and take actions independently) to monitor cloud environments, investigate threats, and automatically fix security problems in real time. The platform uses a proprietary component called Context Lake to help AI agents understand cloud data and make accurate security decisions, while keeping human security teams in control of the process. This approach is positioned as an alternative to manual cloud security work that typically requires large engineering teams.

SecurityWeek
06

LMDeploy CVE-2026-33626 Flaw Exploited Within 13 Hours of Disclosure

security
Apr 24, 2026

A serious flaw in LMDeploy (an open-source toolkit for deploying language models) called CVE-2026-33626 was exploited by attackers within 13 hours of being made public. The vulnerability is a server-side request forgery (SSRF, a weakness where a server is tricked into making requests to internal systems it shouldn't access) in the image-loading function that fails to block requests to private IP addresses, potentially letting attackers steal cloud credentials and access internal networks.

Fix: The vulnerability affects LMDeploy versions 0.12.0 and earlier with vision language support enabled. The source does not mention a patched version, update, or mitigation steps at the time of writing.
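The core defect described here, an image fetcher that does not block requests to private address ranges, has a well-known mitigation pattern. A minimal Python sketch of a pre-fetch check using the standard library; this is a generic illustration under stated assumptions, not LMDeploy's code, and production-grade fixes must also re-validate after redirects and pin the resolved IP to defeat DNS rebinding:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_image_url(url: str) -> bool:
    """Reject URLs whose host resolves to a private, loopback,
    link-local, or reserved address (e.g. the cloud metadata
    endpoint at 169.254.169.254)."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)  # all addresses for the name
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True

print(is_safe_image_url("http://169.254.169.254/latest/meta-data/"))  # False
print(is_safe_image_url("http://127.0.0.1:8000/x.png"))               # False
```

Blocking the link-local 169.254.0.0/16 range is what stops the cloud-credential theft scenario the article describes.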

The Hacker News
07

DeepSeek V4 - almost on the frontier, a fraction of the price

industry
Apr 24, 2026

DeepSeek released two new preview models, DeepSeek-V4-Pro and DeepSeek-V4-Flash, which use a Mixture of Experts architecture (a design where only some parts of the model activate for each task) and support 1 million token context (the amount of text the model can consider at once). These models are significantly cheaper than competitors like GPT and Claude, with DeepSeek-V4-Flash costing $0.14 per million input tokens compared to $0.20 for GPT-5.4 Nano, because DeepSeek focused on efficiency improvements that reduced computational requirements.
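The quoted per-token prices are easy to sanity-check with a little arithmetic. A quick Python sketch, assuming a hypothetical workload of 50 million input tokens (output-token prices are not quoted in the item):

```python
# USD per million input tokens, as quoted above.
PRICE_PER_M = {
    "DeepSeek-V4-Flash": 0.14,
    "GPT-5.4 Nano": 0.20,
}
tokens_m = 50  # hypothetical workload: 50M input tokens

for model, price in PRICE_PER_M.items():
    print(f"{model}: ${price * tokens_m:.2f}")

savings = (PRICE_PER_M["GPT-5.4 Nano"] - PRICE_PER_M["DeepSeek-V4-Flash"]) * tokens_m
share = savings / (PRICE_PER_M["GPT-5.4 Nano"] * tokens_m)
print(f"Savings: ${savings:.2f} ({share:.0%})")
```

At these list prices the Flash model comes out 30% cheaper per input token than the quoted GPT-5.4 Nano rate, before any difference in output pricing or quality.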

Simon Willison's Weblog
08

China's DeepSeek releases preview of long-awaited V4 model as AI race intensifies

industry
Apr 24, 2026

DeepSeek, a Chinese AI startup, released a preview of its V4 large language model, which is open source (meaning developers can download, run locally, and modify the code) and optimized for agent-based tasks like knowledge processing. The release intensifies competition in the AI sector, particularly between the U.S. and China, though it remains unclear which chips (processors used for training) were primarily used to build V4, given U.S. export restrictions on advanced Nvidia processors to China.

CNBC Technology
09

CVE-2026-6393: The BetterDocs plugin for WordPress is vulnerable to Missing Authorization in versions up to and including 4.3.11

security
Apr 24, 2026

The BetterDocs plugin for WordPress (versions up to 4.3.11) has a security flaw where the generate_openai_content_callback() function checks for a nonce (a security token that verifies a request is legitimate) but doesn't verify that the user has permission to perform the action. This allows any authenticated user with subscriber-level access or higher to make the plugin call OpenAI's AI service using the site owner's API key and paid quota, even though they shouldn't have that permission.
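The distinction at the heart of this flaw is that a nonce proves a request came from a legitimate page (CSRF protection), not that the user is allowed to perform the action. BetterDocs is PHP/WordPress code; the following is a self-contained Python sketch of the pattern only, with stubbed helpers and made-up capability names:

```python
VALID_NONCES = {"abc123"}

def verify_nonce(nonce: str) -> bool:
    return nonce in VALID_NONCES

def call_openai(prompt: str) -> dict:
    return {"result": f"generated content for: {prompt}"}  # stub, no real API call

def handle_generate_content(request: dict, user: dict) -> dict:
    # Nonce check (CSRF protection): this WAS present in the plugin.
    if not verify_nonce(request.get("nonce", "")):
        return {"error": "invalid nonce"}
    # Capability check: this was MISSING (the actual flaw). Without it,
    # any authenticated user, even a subscriber, reaches the OpenAI call
    # and spends the site owner's paid quota.
    if "manage_options" not in user.get("capabilities", set()):
        return {"error": "forbidden"}
    return call_openai(request["prompt"])

subscriber = {"capabilities": {"read"}}
print(handle_generate_content({"nonce": "abc123", "prompt": "hi"}, subscriber))
```

With the capability check in place, the subscriber's request is rejected even though the nonce is valid.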

NVD/CVE Database
10

CVE-2026-41318: AnythingLLM is an application that turns pieces of content into context that any LLM can use as references during chatting

security
Apr 24, 2026

AnythingLLM, an application that lets LLMs reference external documents during conversations, has a security flaw in versions before 1.12.1 where chart captions aren't properly filtered for malicious code. An attacker can inject harmful instructions (prompt injection, where hidden commands are slipped into LLM inputs) through shared documents or chart records to execute XSS (cross-site scripting, code that runs in other users' browsers without permission) when those users view the conversation.

Fix: Update to version 1.12.1 or later, which contains a patch for this issue.
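The generic defense against this class of stored XSS is escaping user-supplied text before it is embedded in HTML. A hypothetical Python helper illustrating the pattern with the standard library's `html.escape`; this is not AnythingLLM's actual code, and its real fix may differ:

```python
import html

def render_chart_caption(caption: str) -> str:
    """Escape a user-supplied chart caption before embedding it in
    HTML, so injected markup renders as inert text."""
    return f"<figcaption>{html.escape(caption)}</figcaption>"

malicious = '<img src=x onerror="alert(document.cookie)">'
print(render_chart_caption(malicious))
# the payload is rendered as text, not executed as markup
```

Because `<`, `>`, `&`, and quotes are converted to entities, a caption carrying injected instructions or script tags is displayed literally when another user views the shared conversation.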

NVD/CVE Database
High: GHSA-cmrh-wvq6-wm9r: n8n-mcp webhook and API client paths have an authenticated SSRF (CVE-2026-44694, GitHub Advisory Database, May 8, 2026)