The security intelligence platform for AI teams
AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.
Independent research. No sponsors, no paywalls, no conflicts of interest.
Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three flaws (two critical, one high severity) affecting versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (run any command they want) on the server by submitting malicious configurations or prompt templates that were processed without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).
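For readers unfamiliar with the SQL injection bug class, the sketch below illustrates it in miniature (this is a generic example, not LiteLLM's actual code, which the advisory does not show): splicing untrusted input into a query string lets an attacker rewrite the query, while a parameterized query treats the same input as plain data.

```python
import sqlite3

# Generic illustration of SQL injection (not LiteLLM's actual code).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keys (alias TEXT, secret TEXT)")
conn.execute("INSERT INTO keys VALUES ('prod', 'sk-secret')")

def lookup_unsafe(alias):
    # String interpolation lets a crafted alias rewrite the query itself.
    return conn.execute(
        f"SELECT secret FROM keys WHERE alias = '{alias}'").fetchall()

def lookup_safe(alias):
    # Parameterized query: the driver treats alias strictly as data.
    return conn.execute(
        "SELECT secret FROM keys WHERE alias = ?", (alias,)).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # dumps every stored secret
print(lookup_safe(payload))    # empty: no alias equals the literal payload
```

The unsafe version returns all rows because the payload turns the WHERE clause into a tautology; the safe version returns nothing, which is the correct answer for a nonexistent alias.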
ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.
AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.
Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.
Microsoft has released a new policy setting called RemoveMicrosoftCopilotApp that allows IT administrators to uninstall Copilot (an AI-powered digital assistant) from enterprise Windows devices, available after the April 2026 Patch Tuesday security update. The policy can be deployed through Group Policy or Policy CSP (configuration service provider, a system for managing Windows settings remotely) on devices managed by Microsoft Intune or SCCM (System Center Configuration Manager, enterprise management tools), and applies only to Windows 11 version 25H2 where users haven't launched Copilot in the last 28 days. Users can still reinstall Copilot after the policy removes it.
Fix: To enable the RemoveMicrosoftCopilotApp policy, configure the Policy CSP node at '/User/Vendor/MSFT/Policy/Config/WindowsAI/RemoveMicrosoftCopilotApp' or '/Device/Vendor/MSFT/Policy/Config/WindowsAI/RemoveMicrosoftCopilotApp' through your MDM tool (these are CSP paths, not Group Policy Editor locations), or deploy the equivalent Group Policy setting. When enabled, the policy uninstalls the Microsoft Copilot app from devices in the organization without disrupting users. This setting applies only to Enterprise, Professional, and Education client SKUs.
BleepingComputer