AI Sec Watch

The security intelligence platform for AI teams

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Maintained by Truong (Jack) Luu, Information Systems Researcher.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 6
Daily Briefing: Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three flaws, two critical and one high severity, in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (run any commands the attacker wants) on the server by submitting malicious configurations or prompt templates that were evaluated without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).
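
A minimal sketch of the SQL injection class behind CVE-2026-42208, assuming Node.js with the better-sqlite3 package; the table and function names are invented for illustration, and this is not LiteLLM's actual code:

```typescript
import Database from 'better-sqlite3';

const db = new Database(':memory:');
db.exec(`CREATE TABLE api_keys (key_alias TEXT, secret TEXT)`);

// VULNERABLE: user input is pasted into the SQL string, so an alias like
//   ' OR '1'='1
// rewrites the query and dumps every stored credential.
function lookupUnsafe(alias: string) {
  return db.prepare(`SELECT secret FROM api_keys WHERE key_alias = '${alias}'`).all();
}

// SAFE: a parameterized query treats the input strictly as data, never as SQL.
function lookupSafe(alias: string) {
  return db.prepare('SELECT secret FROM api_keys WHERE key_alias = ?').all(alias);
}
```

LiteLLM itself is a Python codebase, so the point here is the pattern, not the language.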

ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.
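
The fix class here is caller verification. A sketch of what that looks like in a Manifest V3 background script, assuming the @types/chrome typings; the pinned origin and the handleCommand dispatcher are invented for illustration, and this is not Anthropic's actual patch:

```typescript
const TRUSTED_WEB_ORIGIN = 'https://claude.ai'; // the only origin allowed to issue commands

chrome.runtime.onMessageExternal.addListener((message, sender, sendResponse) => {
  // sender.id is set when the message comes from another extension rather than
  // a web page; refusing those blocks the co-installed-extension hijack.
  if (sender.id) return;
  // Web-page senders carry their origin (Chrome 80+); pin it to the expected site.
  if (sender.origin !== TRUSTED_WEB_ORIGIN) return;
  handleCommand(message, sendResponse);
});

// Hypothetical dispatcher standing in for the extension's real capabilities.
function handleCommand(msg: unknown, reply: (r: unknown) => void): void {
  reply({ ok: true, echoed: msg });
}
```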

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.
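
One concrete slice of the remediation is keeping secrets out of checked-in AI tool configurations. A minimal sketch of that idea, assuming Node.js; the config shape, package name, and variable names are invented for illustration rather than any particular MCP client's schema:

```typescript
// Hypothetical MCP server registration: the secret comes from the runtime
// environment (injected by a secret manager), never from a committed file.
interface McpServerEntry {
  command: string;
  args: string[];
  env: Record<string, string>;
}

function mailServer(): McpServerEntry {
  const token = process.env.MAIL_API_TOKEN;
  if (!token) throw new Error('MAIL_API_TOKEN is not set');
  return {
    command: 'npx',
    // Pin an exact, audited version in the lockfile: the postmark-mcp incident
    // above was a malicious package published under a trusted-sounding name.
    args: ['example-mail-mcp@1.4.2'], // hypothetical package and version
    env: { MAIL_API_TOKEN: token },
  };
}
```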

Critical This Week (3 issues)

high

GHSA-8g7g-hmwm-6rv2: n8n-mcp affected by path traversal, redirect-following SSRF, and telemetry payload exposure

GitHub Advisory Database, May 8, 2026

high

GHSA-cmrh-wvq6-wm9r: n8n-mcp webhook and API client paths have an authenticated SSRF

CVE-2026-44694, GitHub Advisory Database, May 8, 2026

high

CVE-2026-41487: Langfuse is an open source large language model engineering platform. From version 3.68.0 to before version 3.167.0, the

CVE-2026-41487, NVD/CVE Database, May 8, 2026

Latest Intel (page 29 of 371)

01

Meta, Google, OpenAI among Big Tech firms seeing top staff leaving to launch AI startups

industry
Apr 28, 2026

Top researchers from major AI companies like Google DeepMind, Meta, and OpenAI are leaving to start their own AI startups, which are raising hundreds of millions of dollars in funding. These new companies can focus on research areas that large tech firms deprioritize, such as new AI architectures and interpretability (understanding how AI systems make decisions), giving them a competitive advantage in the rapidly growing AI market.

CNBC Technology
02

Introducing talkie: a 13B vintage language model from 1930

research
Apr 27, 2026

Researchers have created talkie, a 13 billion-parameter language model (a neural network with 13 billion adjustable values) trained entirely on English text from before 1931 to study how AI performs on historical knowledge and invention tasks. The base model uses only out-of-copyright data, but the chat version required fine-tuning (additional training to adjust behavior) with help from modern AI systems like Claude, which introduced some knowledge from after 1931 that the researchers are working to eliminate.

Fix: The talkie team states they 'aspire to eventually move beyond this limitation' by using 'vintage base models themselves as judges to enable a fully bootstrapped era-appropriate post-training pipeline,' meaning they plan to use talkie's own historical knowledge rather than modern AI systems for future training adjustments. However, this is described as a future goal, not a solution currently implemented.

Simon Willison's Weblog
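
A rough sketch of what that bootstrapped pipeline could look like; the judge function, scoring scheme, and threshold below are invented for illustration and are not the talkie team's implementation:

```typescript
// Hypothetical era-appropriateness filter: a vintage base model scores each
// candidate fine-tuning example, and anything that looks post-1931 is dropped.
interface TrainingExample { prompt: string; response: string; }

// Stand-in for whatever inference endpoint serves the vintage judge model; it
// would return a 0-1 score for "could this have been written before 1931?".
async function vintageJudgeScore(text: string): Promise<number> {
  throw new Error('not implemented: wire this to your model server');
}

async function filterEraAppropriate(
  examples: TrainingExample[],
  threshold = 0.8, // arbitrary cutoff, for illustration only
): Promise<TrainingExample[]> {
  const kept: TrainingExample[] = [];
  for (const ex of examples) {
    const score = await vintageJudgeScore(`${ex.prompt}\n${ex.response}`);
    if (score >= threshold) kept.push(ex); // keep only era-plausible examples
  }
  return kept;
}
```
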
03

OpenAI models, Codex, and Managed Agents come to AWS

industry
Apr 27, 2026

OpenAI and AWS have expanded their partnership to make OpenAI's models, including GPT-5.5, available through Amazon Bedrock (AWS's managed service for using AI models). This integration lets enterprises use OpenAI's capabilities within their existing AWS security systems, workflows, and infrastructure, with three new offerings: OpenAI models on AWS, Codex (a coding assistant used by over 4 million people weekly) on AWS, and Amazon Bedrock Managed Agents for building AI agents that can execute multi-step workflows.

OpenAI Blog
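
For teams standardizing on AWS, the integration surface is the usual Bedrock runtime API. A minimal sketch using the AWS SDK for JavaScript v3 Converse API; the model identifier below is a placeholder, since the actual OpenAI model IDs are whatever the Bedrock console lists:

```typescript
import {
  BedrockRuntimeClient,
  ConverseCommand,
} from '@aws-sdk/client-bedrock-runtime';

// Region and credentials resolve from the usual AWS config/environment chain.
const client = new BedrockRuntimeClient({ region: 'us-east-1' });

async function ask(question: string): Promise<string | undefined> {
  const response = await client.send(
    new ConverseCommand({
      modelId: 'openai.gpt-5.5-v1', // placeholder; check the console for real IDs
      messages: [{ role: 'user', content: [{ text: question }] }],
    }),
  );
  return response.output?.message?.content?.[0]?.text;
}

ask('Which IAM permissions does this agent actually need?').then(console.log);
```
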
04

Our commitment to community safety

safety, policy
Apr 27, 2026

OpenAI describes its safety approach for ChatGPT to prevent misuse for violence, threats, or harm. The system is trained to distinguish between harmful requests and legitimate questions about violence for educational or historical reasons, while using detection systems and expert guidance to identify concerning patterns across conversations and take action like revoking access when needed.

OpenAI Blog
05

Elon Musk and Sam Altman are going to court over OpenAI’s future

policy
Apr 27, 2026

Elon Musk is suing OpenAI CEO Sam Altman and president Greg Brockman, alleging they deceived him into funding the company by promising to keep it as a nonprofit focused on beneficial AI, then secretly restructured it into a for-profit operation. The trial could determine whether OpenAI can operate as a for-profit company and may result in removing current leadership or forcing the company back to nonprofit status. The case highlights a fundamental conflict over OpenAI's mission: whether it should prioritize open-source AI for public benefit or operate for financial gain to fund more advanced development.

MIT Technology Review
06

CVE-2026-7178: A weakness has been identified in ChatGPTNextWeb NextChat up to 2.16.1. This affects the function storeUrl of the file a

security
Apr 27, 2026

A vulnerability (CVE-2026-7178) was found in ChatGPTNextWeb NextChat up to version 2.16.1 that allows server-side request forgery (SSRF, where an attacker tricks a server into making unwanted requests to other systems) through the storeUrl function in the Artifacts Endpoint. The flaw can be exploited remotely, and the attack code has been made public, though the project developers have not yet responded to the early notification.

NVD/CVE Database
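
This finding and the proxyHandler flaw in the next item sit in the same class: a server fetches a user-supplied URL without validating it. A hedged sketch of the usual guard, assuming Node.js 18+ for global fetch; the allowlist and function names are invented for illustration, not NextChat's actual patch:

```typescript
// Illustrative SSRF guard, not NextChat's code: parse and validate a
// user-supplied URL against an explicit allowlist before fetching server-side.
const ALLOWED_HOSTS = new Set(['files.example.com']); // hypothetical allowlist

function assertFetchableUrl(raw: string): URL {
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== 'https:') {
    throw new Error('only https is allowed'); // rules out file:, http://169.254.x.x, etc.
  }
  if (!ALLOWED_HOSTS.has(url.hostname)) {
    throw new Error(`host not allowed: ${url.hostname}`);
  }
  return url;
}

async function safeFetch(raw: string): Promise<Response> {
  // redirect: 'error' matters too: an allowed host could otherwise bounce the
  // server to an internal address (the n8n-mcp advisory above is that variant).
  return fetch(assertFetchableUrl(raw), { redirect: 'error' });
}
```

Even with this, DNS rebinding and redirect chains can slip past purely host-based checks, which is why egress should also be restricted at the network layer.
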
07

CVE-2026-7177: A security flaw has been discovered in ChatGPTNextWeb NextChat up to 2.16.1. Affected by this issue is the function prox

security
Apr 27, 2026

A security flaw has been found in ChatGPTNextWeb NextChat up to version 2.16.1 that allows server-side request forgery (SSRF, where an attacker tricks a server into making unwanted requests to other systems). The vulnerability exists in the proxyHandler function and can be exploited remotely, with public exploits already available. The developers have been notified but have not yet responded.

NVD/CVE Database
08

Canonical lays out a plan for AI in Ubuntu Linux

industry
Apr 27, 2026

Canonical, the company behind Ubuntu Linux (a popular operating system), plans to add AI features to its system over the next year. These features will work in two ways: some will improve existing system functions quietly in the background, while others will be designed specifically for users who want AI-powered tools and workflows. The features will include accessibility improvements like better speech-to-text conversion and other AI-powered capabilities.

The Verge (AI)
09

CVE-2026-7191: Arbitrary Code Execution via Sandbox Bypass in QnABot on AWS

security
Apr 27, 2026

QnABot on AWS (a conversational AI tool built with Amazon Lex and other AWS services) has a vulnerability where administrators can run arbitrary code (unintended commands) by exploiting improper use of the static-eval npm package through the Content Designer interface, potentially giving them access to sensitive backend resources like databases and environment variables that should be protected.

AWS Security Bulletins
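
The pattern is worth pausing on because it recurs across AI tooling: static-eval evaluates a JavaScript expression AST and is often treated as a sandbox, but it is not a security boundary. A hedged sketch of the risky shape, assuming Node.js; this is not QnABot's actual code:

```typescript
import { parse } from 'esprima';
// static-eval ships without type definitions; required here as plain CommonJS.
const evaluate = require('static-eval');

// RISKY: evaluating an attacker-influenced expression. static-eval blocks the
// obvious tricks, but crafted expressions have escaped it before; treat any
// input reaching this function as able to run arbitrary code.
function renderExpression(untrusted: string, vars: Record<string, unknown>) {
  const ast = (parse(untrusted).body[0] as any).expression;
  return evaluate(ast, vars);
}

// SAFER: don't evaluate expressions at all; resolve a plain variable name.
function renderVariable(name: string, vars: Record<string, unknown>) {
  return Object.prototype.hasOwnProperty.call(vars, name) ? vars[name] : undefined;
}
```
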
10

Tracking the history of the now-deceased OpenAI Microsoft AGI clause

policy
Apr 27, 2026

Microsoft and OpenAI had a contract clause stating that if AGI (artificial general intelligence, meaning AI systems that outperform humans at most economically valuable work) was achieved, Microsoft would lose its commercial rights to OpenAI's technology. On April 27, 2026, this clause effectively ended: Microsoft's license became non-exclusive, Microsoft stopped paying revenue shares to OpenAI, and its commercial rights now continue regardless of technological progress.

Simon Willison's Weblog