aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24h: 1 · Last 7d: 1
Daily Briefing: Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three flaws, two critical and one high severity, in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).
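For readers unfamiliar with the SQL injection class behind CVE-2026-42208, here is a minimal, self-contained sketch of the vulnerable pattern and its fix. The table, column names, and key values are invented for illustration; this is not LiteLLM's code.

```python
import sqlite3

# Toy credential store standing in for a proxy's key database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (owner TEXT, key TEXT)")
conn.execute("INSERT INTO api_keys VALUES ('alice', 'sk-123')")

def lookup_unsafe(owner):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # so quote characters in the input become part of the query.
    return conn.execute(
        f"SELECT key FROM api_keys WHERE owner = '{owner}'"
    ).fetchall()

def lookup_safe(owner):
    # Parameterized: the driver treats the input strictly as data, never as SQL.
    return conn.execute(
        "SELECT key FROM api_keys WHERE owner = ?", (owner,)
    ).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # dumps every stored key
print(lookup_safe(payload))    # matches no rows
```

The classic `' OR '1'='1` payload turns the unsafe query's WHERE clause into a tautology, leaking every row; the parameterized version simply finds no owner with that literal name.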


ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

Latest Intel

01

CVE-2026-6129: A vulnerability was detected in zhayujie chatgpt-on-wechat CowAgent up to 2.0.4.

security
Apr 12, 2026

A vulnerability (CVE-2026-6129) was found in the CowAgent component of zhayujie's chatgpt-on-wechat software up to version 2.0.4, where missing authentication (failure to verify user identity) in the Agent Mode Service allows attackers to perform unauthorized actions remotely. The exploit is publicly available and the developers have not yet responded to the initial report of the problem.


AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.


Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.
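The hardcoded-credential risk described above can be checked for mechanically. Below is a hypothetical scan sketch; the config shape, server names, and field names are assumptions modeled loosely on common MCP client configs, not any specific tool's format.

```python
import json
import re

# Illustrative MCP-style client config with a secret embedded in plain text.
config = json.loads("""
{
  "mcpServers": {
    "postmark": {
      "command": "npx",
      "args": ["postmark-mcp"],
      "env": {"POSTMARK_API_TOKEN": "pm-live-abcdef123456"}
    }
  }
}
""")

# Env var names that usually indicate a credential rather than a setting.
SECRET_PATTERN = re.compile(r"(token|key|secret|password)", re.IGNORECASE)

def find_hardcoded_secrets(cfg):
    """Flag env vars in each configured MCP server whose names look like credentials."""
    findings = []
    for name, server in cfg.get("mcpServers", {}).items():
        for var, value in server.get("env", {}).items():
            if SECRET_PATTERN.search(var) and value:
                findings.append((name, var))
    return findings

print(find_hardcoded_secrets(config))  # [('postmark', 'POSTMARK_API_TOKEN')]
```

A name-based scan like this is a coarse first pass; it catches the "credentials committed in AI configs" pattern the briefing describes but says nothing about whether the package itself, like postmark-mcp, is trustworthy.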

NVD/CVE Database
02

The AI code wars are heating up

industry
Apr 12, 2026

GitHub Copilot, a tool that uses AI to autocomplete code as developers write it, was one of the earliest successful AI applications, debuting in spring 2021 through a Microsoft and OpenAI partnership, long before ChatGPT became widely known. The article discusses how AI code-writing tools have become increasingly important in the tech industry.

The Verge (AI)
03

CVE-2026-6126: A weakness has been identified in zhayujie chatgpt-on-wechat CowAgent 2.0.4.

security
Apr 12, 2026

CVE-2026-6126 is a missing authentication vulnerability in zhayujie chatgpt-on-wechat CowAgent version 2.0.4, affecting an administrative HTTP endpoint (a web-based control interface). An attacker can remotely exploit this flaw without needing valid credentials, and the exploit code has been publicly released.

NVD/CVE Database
04

Is AI the greatest art heist in history?

safety, policy
Apr 12, 2026

This article argues that generative AI (machine learning systems that create new content like images or text) is harming the art world by using artists' work without permission to train itself, similar to a large-scale theft. The piece describes widespread concerns about AI in 2026, including environmental damage from data centers (large facilities that store and process information), harmful effects on users' mental health, and job displacement, issues that artists had warned about earlier.

The Guardian Technology
05

AI companies know they have an image problem. Will funding policy papers and thinktanks dig them out?

policy
Apr 12, 2026

Major AI companies like OpenAI are investing in policy papers, think tanks, and public engagement efforts to improve their public image as polls show growing disapproval of AI technology. OpenAI recently released a policy paper on industrial policy and opened a Washington DC office with space for non-profits and policymakers to learn about their technology, as part of a broader strategy to reshape how people perceive the AI industry.

The Guardian Technology
06

‘Too powerful for the public’: Inside Anthropic’s bid to win the AI publicity war

industry
Apr 12, 2026

Anthropic announced it created a powerful AI model called Mythos that it decided not to release publicly, citing cybersecurity risks as the reason. The announcement drew significant attention from government officials and politicians, though some skeptics question whether the decision was genuinely about security concerns or a publicity strategy to attract investment.

The Guardian Technology
07

‘It has your name on it, but I don’t think it’s you’: how AI is impersonating musicians on Spotify

security, safety
Apr 11, 2026

AI bots are creating fake music and uploading it to Spotify under the names of real musicians, including famous artists like jazz pianist Jason Moran and rapper Drake. Spotify has acknowledged the problem, removing over 75 million spammy tracks in 12 months, and says it is developing a new tool that will let artists review and approve releases before they go live on the platform.

Fix: Spotify stated it is 'working on a new tool to give artists more control over what shows up under their name' that would 'let artists review and then approve or decline releases before they go live on the platform.' The company also said that 'estate or rights holders for a deceased artist can opt into the company's new tool if they have an account.' Additionally, Spotify noted it 'employs a range of safeguards to protect artists, including systems designed to detect and prevent unauthorized content, human review, and reporting and takedown processes.'

The Guardian Technology
08

Vibe check from inside one of AI industry's main events: 'Claude mania'

industry
Apr 11, 2026

At the HumanX AI conference in San Francisco, Anthropic's Claude Code (an AI coding agent, a tool that generates, edits and reviews code) has become the dominant topic in the AI industry, surpassing OpenAI's influence among executives and investors. Despite a legal dispute with the Department of Defense, Anthropic continues to gain momentum, with Claude Code generating over $2.5 billion in annualized revenue since its May 2025 public launch. The company's focus on coding rather than spreading resources across multiple AI products has positioned it well to capture enterprise contracts.

CNBC Technology
09

ChatGPT rolls out new $100 Pro subscription to challenge Claude

industry
Apr 10, 2026

OpenAI has launched a new $100 Pro subscription tier to compete with Claude's pricing and target coders and enterprises. The new Pro plan sits between the existing $20 Plus and $200 Pro Max tiers, offering 5x higher usage limits than Plus and access to advanced features like Codex (a code-generation tool), deep research, and GPT-5. OpenAI's strategy mirrors Anthropic's approach of offering a mid-tier subscription designed specifically for people doing complex, high-stakes work.

BleepingComputer
10

Man arrested after Sam Altman's house hit with Molotov cocktail, OpenAI headquarters threatened

security
Apr 10, 2026

A 20-year-old man was arrested after throwing a Molotov cocktail (a homemade incendiary weapon) at OpenAI CEO Sam Altman's home and then threatening arson at the company's San Francisco headquarters. No one was injured in the attack, and the suspect was taken into custody with charges pending. The incident occurred during a controversial period for OpenAI involving military partnerships and litigation.

CNBC Technology