aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 6
Daily Briefing: Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three flaws rated critical or high severity in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (run any commands they want) on the server by submitting malicious configurations or prompt templates that were processed without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).
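The template-injection mechanism behind flaws of this class can be illustrated with a deliberately minimal renderer. This is not LiteLLM's actual code — the function names and template syntax are hypothetical — but it shows why any engine that evaluates expressions from an untrusted template without a sandbox hands the template author arbitrary code execution:

```python
import re
import string

# Hypothetical, simplified prompt-template renderer -- NOT LiteLLM's code.
# It evaluates {{ ... }} expressions with eval() and no sandbox, the same
# class of flaw described for CVE-2026-42271 / CVE-2026-42203.
def render_unsafe(template: str, variables: dict) -> str:
    def substitute(match: re.Match) -> str:
        # eval() runs arbitrary Python: an attacker-supplied template like
        # {{ __import__('os').system('id') }} would execute shell commands.
        return str(eval(match.group(1), {}, dict(variables)))
    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

# Benign use looks harmless:
print(render_unsafe("Hello {{ name }}", {"name": "Ada"}))  # Hello Ada

# But the template itself is code -- {{ 1 + 1 }} is evaluated, not copied:
print(render_unsafe("{{ 1 + 1 }}", {}))  # 2

# A safer pattern substitutes literal variables only, never evaluating:
def render_safe(template: str, variables: dict) -> str:
    return string.Template(template).safe_substitute(variables)

print(render_safe("Hello $name", {"name": "Ada"}))  # Hello Ada
```

The fix direction is the same regardless of the engine: treat templates as data to substitute into, never as expressions to evaluate, unless they run inside a genuine sandbox.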


ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.
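The underlying flaw class — authorizing a request by the page it claims to come from rather than by the verified identity of the caller — can be sketched abstractly. The message shape, field names, and allowlist below are hypothetical illustrations, not Anthropic's code:

```python
# Sketch of the ClaudeBleed flaw class: trusting a claimed page origin
# instead of a browser-attested caller identity. All names hypothetical.
TRUSTED_EXTENSION_IDS = {"extension-id-of-claude"}  # assumed allowlist

def handle_message_vulnerable(msg: dict) -> bool:
    # BAD: any extension can inject a script into a claude.ai tab and
    # send a message whose "origin" field says claude.ai.
    return msg.get("origin") == "https://claude.ai"

def handle_message_fixed(msg: dict) -> bool:
    # BETTER: check the caller identity attested by the browser, which
    # another extension cannot forge, not the claimed page origin.
    return msg.get("sender_id") in TRUSTED_EXTENSION_IDS

hijack = {"origin": "https://claude.ai", "sender_id": "evil-extension"}
print(handle_message_vulnerable(hijack))  # True  -- action allowed
print(handle_message_fixed(hijack))       # False -- hijack rejected
```

In real Chrome extensions the attested identity corresponds to the `sender` object the browser passes to message handlers; the point of the sketch is that the check must key on that, not on anything the message body asserts about itself.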

Latest Intel

01

Musk and Altman go to court

policy
Apr 28, 2026

Elon Musk and OpenAI are at trial over disputes about the early development of AI, including who deserves credit and financial compensation for the technology's creation. The case is expected to bring private communications from key figures in the AI industry into the public record over the coming weeks.

Critical This Week (3 issues)

high

GHSA-8g7g-hmwm-6rv2: n8n-mcp affected by path traversal, redirect-following SSRF, and telemetry payload exposure

GitHub Advisory Database · May 8, 2026

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.


Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.
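One concrete, scannable symptom mentioned above is hardcoded credentials in AI tool configurations. The sketch below is a heuristic illustration, not a complete secret scanner; the `mcpServers` layout mirrors the JSON config format several MCP clients use, and the key-name patterns are assumptions:

```python
import json
import re

# Heuristic sketch: flag literal secrets in an MCP server config.
# The "mcpServers" layout mirrors common MCP client configs; the
# key-name regex is an illustrative assumption, not a standard.
SECRET_KEY_RE = re.compile(r"(api[_-]?key|token|secret|password)", re.I)

def find_hardcoded_secrets(config: dict) -> list[str]:
    findings = []
    for name, server in config.get("mcpServers", {}).items():
        for key, value in server.get("env", {}).items():
            # Flag env vars whose name suggests a credential and whose
            # value is a literal rather than a ${VAR} reference.
            if SECRET_KEY_RE.search(key) and not value.startswith("${"):
                findings.append(f"{name}: env var {key} holds a literal secret")
    return findings

config = json.loads("""
{
  "mcpServers": {
    "postmark": {
      "command": "npx",
      "args": ["postmark-mcp"],
      "env": {"POSTMARK_API_TOKEN": "pm-live-abc123"}
    }
  }
}
""")
for finding in find_hardcoded_secrets(config):
    print(finding)
```

Even a crude check like this surfaces the pattern the postmark-mcp incident exploited: a trusted package plus a live credential sitting in plaintext config.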

The Verge (AI)
02

OpenAI's revenue, growth estimates fall short as company races toward IPO: Report

industry
Apr 28, 2026

OpenAI has failed to meet its own revenue and user growth targets, raising concerns about whether the company can afford its massive spending on data centers (facilities that house computing equipment). Finance Chief Sarah Friar worried the company might not be able to fund future computing agreements if the revenue slowdown continues, prompting executives to look for ways to cut costs.

CNBC Technology
03

Critical Cursor bug could turn routine Git into RCE

security
Apr 28, 2026

A critical vulnerability in Cursor IDE (a code editor with AI capabilities) allowed attackers to execute malicious code on a developer's machine by embedding harmful Git hooks (automated scripts that run during repository operations) in a fake repository. When Cursor's AI agent autonomously performed routine Git operations like checking out code, it would unknowingly trigger and run the attacker's malicious scripts, giving the attacker control over the developer's computer.

Fix: The flaw is patched in Cursor version 2.5. The source notes that 'Sandbox escape via writing .git configuration was possible in versions prior to 2.5,' so version 2.5 and later are not affected.
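As a defensive sketch — this is not Cursor's patch — an agent harness could refuse to run Git in a repository that ships active hooks. The helper below is an illustration using only the standard library; the hook-detection heuristic (executable, non-`.sample` files under `.git/hooks`) is an assumption about typical Git layouts:

```python
import os
import stat
import tempfile
from pathlib import Path

# Pre-flight check before an AI agent runs git in an untrusted repo:
# list active (non-sample) executable hooks that git would run during
# routine operations like checkout. Illustrative, not Cursor's fix.
def active_hooks(repo: Path) -> list[str]:
    hooks_dir = repo / ".git" / "hooks"
    if not hooks_dir.is_dir():
        return []
    return sorted(
        hook.name
        for hook in hooks_dir.iterdir()
        if hook.is_file()
        and hook.suffix != ".sample"   # git's inert example hooks
        and os.access(hook, os.X_OK)   # only executable files fire
    )

# Demo: a planted post-checkout hook, like the attack described above.
repo = Path(tempfile.mkdtemp())
hooks = repo / ".git" / "hooks"
hooks.mkdir(parents=True)
evil = hooks / "post-checkout"
evil.write_text("#!/bin/sh\necho pwned\n")
evil.chmod(evil.stat().st_mode | stat.S_IXUSR)

print(active_hooks(repo))  # ['post-checkout']
```

A related hardening option is pointing `core.hooksPath` at an empty directory for untrusted checkouts, so Git never consults the repository's own hooks.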

CSO Online
04

The Race Is on to Keep AI Agents From Running Wild With Your Credit Cards

policy, security
Apr 28, 2026

Agentic AI (AI systems that perform actions on behalf of humans) is growing in use, but it creates new security risks like agents being hijacked or tricked into unauthorized transactions. The FIDO Alliance (an industry group focused on authentication standards), along with Google and Mastercard, is launching working groups to develop security standards that will protect AI agent transactions using cryptographic tools (mathematical techniques that verify identity and prevent tampering) and authentication mechanisms that prevent phishing attacks.

Fix: Google is contributing the Agent Payments Protocol (AP2), which cryptographically verifies that a user intended an agent-initiated transaction to happen. Mastercard is contributing the Verifiable Intent framework (co-developed with Google), a secure mechanism for users to authorize and control agent actions. Together, these tools aim to provide cryptographic proof that transactions were authorized by the user while preserving privacy through selective disclosure, so each party in the payment ecosystem sees only the information relevant to it.
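The core idea — cryptographic proof that a user authorized an agent-initiated transaction — can be sketched in a few lines. AP2 itself uses public-key credentials and verifiable mandates; the HMAC below is a stdlib-only stand-in, and all field names are hypothetical:

```python
import hashlib
import hmac
import json

# Sketch of "proof of intent": the user's wallet signs a transaction
# mandate, and the payment network verifies the signature before
# honoring an agent-initiated charge. AP2 uses public-key credentials;
# HMAC with a shared key stands in here to stay stdlib-only.
USER_DEVICE_KEY = b"demo-shared-secret"  # hypothetical key material

def sign_mandate(mandate: dict) -> str:
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(USER_DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify_mandate(mandate: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_mandate(mandate), signature)

mandate = {"merchant": "example-store", "max_amount_usd": 50, "agent": "shopper-1"}
sig = sign_mandate(mandate)
print(verify_mandate(mandate, sig))             # True  -- user intent proven

tampered = {**mandate, "max_amount_usd": 5000}  # hijacked agent inflates amount
print(verify_mandate(tampered, sig))            # False -- altered mandate rejected
```

The point of binding the signature to the full mandate (merchant, limit, agent) is that a hijacked agent cannot change any term of the transaction without invalidating the user's proof of intent.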

Wired (Security)
05

Meta's new AI model shows early promise, but investors want to see Zuckerberg's strategy

industry
Apr 28, 2026

Meta launched Muse Spark, a new closed-source AI model (a large language model that processes and generates text), marking a shift from its previous open-source Llama models toward a paid subscription approach similar to competitors like OpenAI and Google. While Muse Spark shows competitive performance in text and vision tasks, investors are waiting to see Meta's strategy for driving consumer adoption and generating revenue beyond just improving its advertising business.

CNBC Technology
06

The Download: Musk and Altman’s legal showdown, and AI’s profit problem

industry, policy
Apr 28, 2026

This newsletter covers multiple AI developments including a legal battle between Elon Musk and OpenAI's leadership over the company's for-profit status, the gap between AI hype and actual profitability, and the rise of weaponized deepfakes (AI-generated fake videos or images used maliciously) that are spreading misinformation and harming vulnerable groups. The content also reports on business moves like OpenAI ending its exclusive partnership with Microsoft and various regulatory actions worldwide.

MIT Technology Review
07

Privacy-preserving for user-uploaded images and text in Vision-Language Models

privacy, research
Apr 28, 2026

Vision-language models (AI systems that process both images and text together) can leak private information from user-uploaded content, such as identifying people in photos or extracting sensitive text. This research examines privacy risks when users submit images and text to these models. The paper proposes privacy-preserving methods to protect user data while still allowing these AI systems to function effectively.

Elsevier Security Journals
08

A Survey of Algorithm Debt in Machine and Deep Learning Systems: Definition, Smells, and Future Work

research
Apr 28, 2026

This survey paper examines algorithm debt in machine learning and deep learning systems, which refers to the long-term costs and problems that accumulate when developers use suboptimal algorithms or methods in AI projects. The paper defines what algorithm debt is, identifies warning signs called 'smells' that indicate its presence, and discusses future research directions. Understanding algorithm debt helps developers recognize when quick, temporary solutions in AI projects create technical problems that become harder and more expensive to fix later.

ACM Digital Library (TOPS, DTRAP, CSUR)
09

Sevii Launches Cyber Swarm Defense to Make Agentic AI Security Costs Predictable

industry, security
Apr 28, 2026

CISOs (chief information security officers) struggle with unpredictable costs when using agentic AI (autonomous AI agents that can make decisions and take actions) for cybersecurity defense, since they are charged per AI token (a unit of text similar to a word) used, and attack volumes can spike unexpectedly. Sevii launched Cyber Swarm Defense, a new mode that charges by protected asset (like laptops or cloud servers) at a fixed yearly rate instead of per token, making defense costs predictable regardless of how many attacks occur. The system also includes governance controls that let security teams automatically remediate low-risk assets while keeping critical ones for human review.

Fix: Sevii's Cyber Swarm Defense (CSD) mode charges by asset protected at a firm fixed price (for example, $50 per year per laptop, identity, or cloud asset) rather than by AI token usage. The platform automatically scales up defensive agentic AI agents as needed during multiple simultaneous attacks without increasing costs. Customers can also use Sevii's Myrmidon Defense Technology to set remediation service level objectives, allowing automatic remediation of lower-value assets while keeping critical assets for manual remediation by in-house security experts.

SecurityWeek
10

Critical Unpatched Flaw Leaves Hugging Face LeRobot Open to Unauthenticated RCE

security
Apr 28, 2026

LeRobot, Hugging Face's open-source robotics platform, has a critical unpatched vulnerability (CVE-2026-25874, CVSS score 9.3) that allows unauthenticated attackers to execute arbitrary code by sending malicious data through unencrypted network connections. The flaw stems from unsafe deserialization (a process of converting data back into code without properly checking if it's trustworthy) using pickle, an unsafe data format, which enables attackers to compromise the server, steal sensitive data, or impact connected robots.

Fix: A fix is planned in version 0.6.0. The LeRobot team acknowledged the issue in January 2026 and noted that the vulnerable part of the codebase will need to be almost entirely refactored.
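The deserialization risk here is inherent to pickle itself: a pickle stream can encode a call to any importable function, so merely loading attacker-controlled bytes runs attacker-chosen code. The payload below calls a harmless function to keep the demonstration safe:

```python
import json
import pickle

# Why unpickling network data is dangerous: pickle can encode a call to
# any importable function, so deserialization itself executes code.
# Illustration only -- this payload calls a harmless function.
class Payload:
    def __reduce__(self):
        import os
        # On unpickling, os.getcwd() runs (an attacker would substitute
        # os.system or similar to get remote code execution).
        return (os.getcwd, ())

evil_bytes = pickle.dumps(Payload())
result = pickle.loads(evil_bytes)  # code ran during deserialization
print(type(result))  # <class 'str'> -- os.getcwd() was executed

# Safer pattern for robot telemetry: a schema-constrained format like
# JSON, which can only produce data, never execute code.
message = json.loads('{"joint_angles": [0.1, 0.2], "gripper": "open"}')
print(message["gripper"])  # open
```

This is why "unsafe deserialization using pickle" over an unauthenticated, unencrypted channel rates a 9.3: the attack requires nothing beyond sending bytes.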

The Hacker News
high

GHSA-cmrh-wvq6-wm9r: n8n-mcp webhook and API client paths have an authenticated SSRF

CVE-2026-44694 · GitHub Advisory Database · May 8, 2026

high

CVE-2026-41487: Langfuse is an open source large language model engineering platform. From version 3.68.0 to before version 3.167.0, the

CVE-2026-41487 · NVD/CVE Database · May 8, 2026