AI Sec Watch (aisecwatch.com)

The security intelligence platform for AI teams

Real-time AI security monitoring: tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an information systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Maintained by Truong (Jack) Luu, Information Systems Researcher

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing · Friday, May 8, 2026

- Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three flaws, two critical and one high severity, affecting versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (run any commands they want) on the server by submitting malicious configurations or prompt templates that were rendered without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high). A sketch of the template-injection class appears after this list.

- ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that lets malicious Chrome extensions hijack it and perform unauthorized actions such as exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller. Anthropic released a partial fix in version 1.0.70 on May 6, but researchers report it remains exploitable when the extension runs in privileged mode.

- AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that 32% of findings in AI and LLM systems are rated high-risk, compared with just 13% for traditional software, and that only 38% of high-risk AI issues get resolved. Security experts attribute the gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

- Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin standard that connects AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromise at scale. A config-scanning sketch appears after this list.
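To make the LiteLLM item concrete: the dangerous pattern is rendering attacker-controlled templates in an unsandboxed engine. Below is a minimal sketch of that vulnerability class in Jinja2, not LiteLLM's actual code; it shows how a default environment exposes Python internals while Jinja2's SandboxedEnvironment rejects the same template.

```python
# Minimal sketch of the unsandboxed-template vulnerability class.
# Illustrative only; this is NOT LiteLLM's code.
from jinja2 import Environment
from jinja2.sandbox import SandboxedEnvironment, SecurityError

# Attacker-controlled "prompt template" probing Python object internals,
# the first step of a classic server-side template injection chain.
malicious_template = "{{ ''.__class__.__mro__[1].__subclasses__() }}"

# Unsafe: a default environment happily renders it, leaking loaded classes.
leaked = Environment().from_string(malicious_template).render()
print("unsandboxed render leaked:", leaked[:60], "...")

# Safer: the sandbox refuses dunder-attribute access outright.
try:
    SandboxedEnvironment().from_string(malicious_template).render()
except SecurityError as err:
    print("sandbox blocked it:", err)
```

The same idea applies to any user-supplied configuration or template: treat it as code unless the renderer is sandboxed.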
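And for the MCP item: hardcoded credentials typically sit in plaintext "env" blocks of an MCP client's server configuration. A hedged sketch follows; the mcpServers/env layout is modeled on Claude Desktop's claude_desktop_config.json and the key patterns are assumptions, so adjust both for your client.

```python
# Hedged sketch: flag secret-looking literals in an MCP client config.
# Schema modeled on Claude Desktop's claude_desktop_config.json
# ("mcpServers" -> server -> "env"); adjust for your client.
import json
import re
from pathlib import Path

SECRET_KEY = re.compile(r"key|token|secret|password", re.IGNORECASE)

def find_hardcoded_secrets(config_path: str) -> list[tuple[str, str]]:
    config = json.loads(Path(config_path).read_text())
    findings = []
    for server, spec in config.get("mcpServers", {}).items():
        for env_name, env_value in spec.get("env", {}).items():
            # Literal values under secret-looking keys are findings;
            # indirections like "${POSTMARK_TOKEN}" are not.
            if SECRET_KEY.search(env_name) and not env_value.startswith("${"):
                findings.append((server, env_name))
    return findings

if __name__ == "__main__":
    for server, key in find_hardcoded_secrets("claude_desktop_config.json"):
        print(f"hardcoded credential: server={server!r} env key={key!r}")
```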

Latest Intel

01

Claude uncovers a 13‑year‑old ActiveMQ RCE bug within minutes

security, research
Apr 10, 2026

Claude, an AI assistant, discovered a critical remote code execution (RCE, where an attacker can run commands on a system they don't own) vulnerability in Apache ActiveMQ that had gone undetected for 13 years. The bug allows attackers to trick ActiveMQ's management API into loading a malicious file from the internet and executing arbitrary commands, especially if default login credentials are still in use. Claude identified the complete exploit chain in about 10 minutes, a task that would have taken a human researcher roughly a week.

Fix: CVE-2026-34197 has been addressed in newer ActiveMQ Classic releases (version 6.2.3 and 5.19.4). Users must upgrade to these patched versions to be protected.

CSO Online
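One fast triage step for the finding above: the exploit is easiest when the broker's web console still accepts the shipped admin/admin login (port 8161 by default). A minimal hedged sketch, only for brokers you are authorized to assess; the hostname is a placeholder:

```python
# Hedged sketch: check whether an ActiveMQ web console still accepts the
# shipped default credentials. Run only against systems you may assess.
import requests

def accepts_default_credentials(host: str, port: int = 8161) -> bool:
    url = f"http://{host}:{port}/admin/"
    try:
        resp = requests.get(url, auth=("admin", "admin"), timeout=5)
    except requests.RequestException:
        return False  # console unreachable or disabled
    # 200 means the defaults worked; 401/403 means they were changed.
    return resp.status_code == 200

if __name__ == "__main__":
    if accepts_default_credentials("broker.example.internal"):
        print("WARNING: default ActiveMQ console credentials accepted")
```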
02

Browser Extensions Are the New AI Consumption Channel That No One Is Talking About

security, policy
Apr 10, 2026

AI browser extensions are a major security blind spot in enterprises because they operate inside browsers with direct access to user data, passwords, and cookies while bypassing traditional security monitoring tools like DLP (data loss prevention, which blocks sensitive information from leaving a network) and SaaS logs. The report shows AI extensions are significantly riskier than regular extensions: they are 60% more likely to have CVEs (known software vulnerabilities), 3 times more likely to access cookies, and 6 times more likely to increase their permissions over time. Yet 99% of enterprise users have at least one extension installed, with little organizational visibility into which ones exist or what they can access (a permission-audit sketch follows this item).

The Hacker News
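A practical follow-up to the report above is inventorying what installed extensions can actually touch. The sketch below reads each Chrome extension's manifest.json and flags risky permissions; the profile path is a common Linux default and an assumption, so adjust it for your OS and profile.

```python
# Hedged sketch: flag installed Chrome extensions requesting risky
# permissions. The profile path is a common Linux default; adjust per OS.
import json
from pathlib import Path

EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"
RISKY = {"cookies", "history", "webRequest", "<all_urls>"}

def audit_extensions(ext_dir: Path = EXT_DIR) -> None:
    # On-disk layout is <extension-id>/<version>/manifest.json.
    for manifest_path in ext_dir.glob("*/*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(errors="ignore"))
        requested = set(manifest.get("permissions", [])) | set(
            manifest.get("host_permissions", [])
        )
        flagged = requested & RISKY
        if flagged:
            name = manifest.get("name", manifest_path.parts[-3])
            print(f"{name}: {sorted(flagged)}")

if __name__ == "__main__":
    audit_extensions()
```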
03

Sen. Sanders Talks to Claude About AI and Privacy

policy, safety
Apr 10, 2026

N/A: the provided content does not contain substantive information about a specific AI or LLM security issue. It appears to be metadata and navigation elements from Bruce Schneier's security blog, listing essay titles and tags rather than discussing an actual technical problem or vulnerability.

Schneier on Security
04

Microsoft starts removing Copilot buttons from Windows 11 apps

industry
Apr 10, 2026

Microsoft is removing Copilot buttons (shortcuts to access its AI assistant) from several Windows 11 apps, including Notepad and Snipping Tool, replacing them with alternative menus like "writing tools." The underlying AI features remain available, but the company is reducing the number of ways users can directly access Copilot across its applications.

The Verge (AI)
05

US summons bank bosses over cyber risks from Anthropic’s latest AI model

security, policy
Apr 10, 2026

US Treasury Secretary Scott Bessent summoned major American bank leaders to a meeting in Washington to discuss cybersecurity risks from Anthropic's new Claude Mythos AI model. Federal Reserve Chair Jerome Powell attended the meeting, which was called after Anthropic released the model and warned it poses unprecedented cybersecurity threats.

The Guardian Technology
06

CVE-2026-5998: A flaw has been found in zhayujie chatgpt-on-wechat CowAgent up to 2.0.4. This affects the function dispatch of the file

security
Apr 9, 2026

A path traversal vulnerability (a weakness that lets attackers access files outside their intended directory) was found in the chatgpt-on-wechat CowAgent software version 2.0.4 and earlier, specifically in the memory API endpoint where it processes a filename argument. This flaw can be exploited remotely by attackers, and proof-of-concept code has already been published online.

Fix: Upgrading to version 2.0.5 mitigates this issue. The patch identifier is 174ee0cafc9e8e9d97a23c305418251485b8aa89.

NVD/CVE Database
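The defect class in the CVE above has a standard defense: resolve the user-supplied filename and require the result to stay inside the intended directory before touching the filesystem. A generic hedged sketch follows (not the actual CowAgent patch; the base directory is made up):

```python
# Hedged sketch of the standard path-traversal defense: resolve the
# requested name and require it to stay inside the base directory.
# Generic illustration, not the actual CowAgent patch.
from pathlib import Path

BASE_DIR = Path("/var/lib/cowagent/memory").resolve()  # hypothetical location

def safe_read(filename: str) -> bytes:
    target = (BASE_DIR / filename).resolve()
    # Rejects escapes such as "../../etc/passwd" or absolute paths.
    if not target.is_relative_to(BASE_DIR):
        raise PermissionError(f"path traversal blocked: {filename!r}")
    return target.read_bytes()
```

Resolving before checking matters: comparing raw strings can be bypassed with ".." segments or symlinks, while comparing resolved paths cannot.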
07

Alibaba leads $290 million investment for building a new kind of AI model as LLM limits emerge

industry
Apr 9, 2026

Alibaba is investing $290 million in ShengShu, a startup developing world models (AI systems trained on videos and physical scenarios rather than just text) to better understand and replicate the real world. This shift reflects growing recognition that large language models (LLMs, which are AI trained mainly on text data) have limitations, and companies are now focusing on AI that can work with robots and other systems that need to understand physical reality.

CNBC Technology
08

OpenAI slams Anthropic in memo to shareholders as its leading AI rival gains momentum

industry
Apr 9, 2026

OpenAI sent a memo to investors criticizing Anthropic, its main rival in the AI market, saying Anthropic is limited by compute constraints (the computing power needed to train and run AI models). OpenAI claims it will have significantly more computing capacity than Anthropic by 2030, giving it a competitive advantage in developing more capable AI models and lowering costs. Both companies are competing intensely in the large language model (LLM, an AI trained on vast amounts of text to generate human-like responses) market and preparing for potential public stock offerings.

CNBC Technology
09

Brainstorming with ChatGPT

industry
Apr 9, 2026

This article describes how ChatGPT can help with brainstorming by quickly generating ideas, organizing them into clear themes, and turning rough directions into executable plans. The AI acts as a thought partner to overcome common brainstorming obstacles (too few or too many unstructured ideas) by expanding options, adding structure through frameworks, and helping test plans for weaknesses.

OpenAI Blog
10

Analyzing data with ChatGPT

industry
Apr 9, 2026

ChatGPT can analyze data files (like CSV or Excel spreadsheets) by letting you upload them and ask questions in plain language, helping you explore raw data and find insights without building formulas or dashboards manually. The tool is most useful early in analysis, when you're discovering patterns and anomalies, and it can generate visualizations and summaries to share with others. To get reliable results, you should frame your decision clearly, provide context about your data, ask for structured approaches rather than just answers, and verify key numbers before acting on the findings.

OpenAI Blog