aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 7

Daily Briefing: Friday, May 8, 2026
- Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three flaws (two critical, one high severity) in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271, CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).

- ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

- AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

- Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.

Critical This Week: 4 issues

- High · GHSA-8g7g-hmwm-6rv2: n8n-mcp affected by path traversal, redirect-following SSRF, and telemetry payload exposure (GitHub Advisory Database, May 8, 2026)

Latest Intel (page 24 of 371)

01

Tumbler Ridge families are suing OpenAI

safety · policy

Apr 29, 2026

Seven families are suing OpenAI and its CEO after a school shooting in Tumbler Ridge, Canada, claiming the company failed to alert police about the shooter's suspicious ChatGPT activity. The families allege that OpenAI detected concerning conversations about gun violence but stayed silent to protect its reputation and an upcoming IPO (initial public offering, when a company first sells stock to the public).

The Verge (AI)
02

ChatGPT downloads are slowing — and may cause problems for OpenAI’s IPO

industry
Apr 29, 2026

ChatGPT is experiencing slower growth and rising uninstall rates, with users leaving the app or switching to competing chatbots. According to market data, uninstalls jumped 413 percent year-over-year in May following OpenAI's partnership with the Pentagon, while monthly user growth dropped from 168 percent in January to 78 percent in April.

The Verge (AI)
03

New Wave of DPRK Attacks Uses AI-Inserted npm Malware, Fake Firms, and RATs

security
Apr 29, 2026

Researchers discovered malicious code in npm packages (repositories where developers share reusable code) that were designed to steal cryptocurrency wallet credentials and funds. The attack, linked to North Korean hackers, used a two-layer approach where harmless-looking packages contained hidden dependencies that executed the actual malware, and the malicious packages mimicked the names of legitimate libraries to avoid detection.

The Hacker News
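The name-mimicry half of that technique can be screened for mechanically: compare each dependency name against well-known package names and flag near misses. A toy sketch of such a check, assuming a hypothetical shortlist of popular packages (real tooling would use registry download statistics and far larger lists):

```python
from difflib import SequenceMatcher

# Illustrative shortlist only; a real check would be driven by
# npm registry download stats, not a hardcoded list.
POPULAR_PACKAGES = ["ethers", "web3", "express", "lodash", "axios"]

def typosquat_suspects(name: str, threshold: float = 0.85) -> list[str]:
    """Return popular package names that `name` closely resembles
    without matching exactly -- a common typosquatting signal."""
    return [
        pkg for pkg in POPULAR_PACKAGES
        if pkg != name
        and SequenceMatcher(None, name, pkg).ratio() >= threshold
    ]

print(typosquat_suspects("etherss"))  # → ['ethers']
```

Exact matches are deliberately excluded, since those are the legitimate packages themselves; only near misses are suspicious.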
04

Wiz Code Week Recap: Securing AI Native Development

security · industry
Apr 29, 2026

AI models can now find and exploit software vulnerabilities faster than security teams can defend against them, creating urgent security challenges for AI-driven development. Wiz addressed this by launching an AI-BOM (a tool that automatically catalogs AI frameworks, models, and IDE extensions like GitHub Copilot and Cursor) to give security teams visibility into how AI tools interact with their data. It also embedded security guardrails directly into developer IDEs through plugins that catch hardcoded secrets, misconfigurations, and AI-specific risks like prompt injection (tricking an AI by hiding instructions in its input) before code is committed.

Fix: Wiz Code plugins for AI-native IDEs (like Claude Code and Cursor) embed security directly into development workflows using pre-commit hooks (automated checks that run before code is saved) to catch hardcoded secrets, IaC (infrastructure-as-code) misconfigurations, vulnerabilities, and AI-specific issues. Additionally, Wiz Skills allow developers to automatically pull active security issues from the Wiz Security Graph and apply fixes directly in the IDE using the Wiz Green Agent, which generates fixes based on full code-to-cloud context.

Wiz Research Blog
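The pre-commit pattern described above is straightforward to sketch. The following is an illustrative stand-alone scanner, not Wiz's actual plugin; the regex rules are simplified assumptions (production scanners ship hundreds of rules plus entropy checks to limit false positives):

```python
import re
import subprocess

# Simplified illustrative rules -- real secret scanners ship many more.
SECRET_PATTERNS = [
    ("aws-access-key-id", re.compile(r"AKIA[0-9A-Z]{16}")),
    ("private-key-block", re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----")),
    ("generic-api-key", re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S{16,}")),
]

def find_secrets(text: str) -> list[str]:
    """Return the names of all secret patterns matched in `text`."""
    return [name for name, pat in SECRET_PATTERNS if pat.search(text)]

def scan_staged() -> int:
    """Scan files staged for commit; a non-zero return blocks the commit
    when this is wired up as a .git/hooks/pre-commit script."""
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    hits = 0
    for path in staged:
        try:
            with open(path, encoding="utf-8", errors="ignore") as fh:
                findings = find_secrets(fh.read())
        except OSError:
            continue  # unreadable file; skip
        for name in findings:
            print(f"potential secret in {path}: {name}")
            hits += 1
    return 1 if hits else 0
```

Exiting non-zero from a pre-commit hook aborts the commit, which is the same enforcement point the IDE plugins described in the item use.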
05

Larry’s risky business

industry
Apr 29, 2026

Oracle, a traditional database company, has shifted its strategy toward AI, but rather than building its own foundation models (large language models like ChatGPT), it is positioning itself as a software-as-a-service provider (cloud-based software you access online) in the AI infrastructure space, betting on a specific version of AI's future as its traditional database business declines.

The Verge (AI)
06

K-TCDP: A Temporal Correlated DP Mechanism for LoRA Supervised Fine-Tuning

research · privacy
Apr 29, 2026

This research proposes K-TCDP (K-Temporal Correlated Differential Privacy), a new method for training large language models privately using LoRA (a technique that adds small trainable adapters to a model). Standard privacy-preserving training adds random noise that degrades model quality, but K-TCDP uses strategically correlated noise over time so that noise added in early steps can be partially canceled out by noise in later steps, improving model performance while maintaining privacy guarantees.

IEEE Xplore (Security & AI Journals)
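The cancellation idea can be illustrated with a toy simulation. This is a hand-rolled sketch of temporally correlated noise in general, not the paper's actual K-TCDP mechanism (which would be calibrated to LoRA gradients and a formal privacy accountant):

```python
import random

random.seed(0)

T = 1000        # number of training steps
SIGMA = 1.0     # per-step noise scale

# Independent noise: each step adds a fresh Gaussian sample.
indep = [random.gauss(0, SIGMA) for _ in range(T)]

# Toy "temporally correlated" noise: step t adds z_t - z_{t-1}, so the
# running sum telescopes to z_T - z_0 and earlier noise cancels out.
z = [random.gauss(0, SIGMA) for _ in range(T + 1)]
corr = [z[t + 1] - z[t] for t in range(T)]

sum_indep = abs(sum(indep))   # grows like sigma * sqrt(T)
sum_corr = abs(sum(corr))     # stays around sigma, regardless of T

print(f"|sum of independent noise| ~ {sum_indep:.2f}")
print(f"|sum of correlated noise|  ~ {sum_corr:.2f}")
```

Because the correlated stream telescopes, the total injected noise after T steps is just z_T - z_0, independent of T, while the independent stream's total grows like sqrt(T); real mechanisms exploit the same effect with more carefully designed correlation structures.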
07

Learning from the Vercel breach: Shadow AI & OAuth sprawl

security · privacy
Apr 29, 2026

When employees connect unapproved AI apps to work platforms like Google Workspace or Salesforce using OAuth (a system that lets apps access your accounts), they create persistent bridges that attackers can exploit if the AI app gets hacked. The Vercel breach showed this risk in action: an employee used a trial version of Context.ai without approval, and when Context.ai was compromised, attackers used the OAuth tokens (digital keys that grant access) to reach sensitive Vercel data like API keys and employee records.

BleepingComputer
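Auditing for this kind of OAuth sprawl starts with enumerating grants and flagging unapproved apps holding broad scopes. A minimal sketch over an exported grants list; the app names, scopes, and allow-list below are hypothetical, and a real audit would pull this data from the workspace admin console or API:

```python
# Hypothetical export of OAuth grants; entries are made up for illustration.
GRANTS = [
    {"app": "Context.ai (trial)", "user": "dev1@example.com",
     "scopes": ["drive.readonly", "gmail.send"]},
    {"app": "Calendar Sync", "user": "dev2@example.com",
     "scopes": ["calendar.readonly"]},
]

APPROVED_APPS = {"Calendar Sync"}          # apps that passed security review
HIGH_RISK_SCOPES = {"gmail.send", "drive.readonly", "admin.directory"}

def flag_risky_grants(grants):
    """Return grants from unapproved apps that hold high-risk scopes."""
    return [
        g for g in grants
        if g["app"] not in APPROVED_APPS
        and HIGH_RISK_SCOPES.intersection(g["scopes"])
    ]

for g in flag_risky_grants(GRANTS):
    print(f"revoke-candidate: {g['app']} granted {g['scopes']} by {g['user']}")
```

Flagged grants are exactly the "persistent bridges" the article describes: revoking the token severs the bridge even if the third-party app is later compromised.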
08

Taylor Swift deepfakes are pushing scams on TikTok

security · safety
Apr 29, 2026

Scammers are creating deepfakes (AI-generated fake videos that realistically mimic real people) of celebrities like Taylor Swift and Rihanna on TikTok to trick users into fake reward programs. These deepfakes often manipulate real footage with AI and use TikTok's official branding to appear legitimate, but they redirect users to third-party websites that steal personal information.

The Verge (AI)
09

CVE-2026-42249: Ollama for Windows contains a Remote Code Execution vulnerability in its update mechanism due to improper handling of at…

security
Apr 29, 2026

Ollama for Windows has a remote code execution vulnerability (the ability for an attacker to run commands on your computer) in its update system. The vulnerability happens because the application builds file paths using information from HTTP headers without checking if they're legitimate, allowing attackers to use path traversal sequences (like ../ to navigate directories) to write malicious executable files to dangerous locations like the Windows Startup folder. When combined with a missing signature verification flaw, an attacker can automatically execute malicious code without the user knowing.

NVD/CVE Database
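The defect class here, joining an attacker-influenced filename onto a base directory without a containment check, has a standard fix: resolve the combined path and verify it is still inside the intended directory. A generic sketch of that check (not Ollama's actual code):

```python
from pathlib import Path

def safe_join(base_dir: str, untrusted_name: str) -> Path:
    """Join `untrusted_name` onto `base_dir`, rejecting path traversal.

    Resolving the combined path collapses any ../ sequences, so a
    containment check on the resolved result catches escapes that would
    otherwise land files in locations like the Windows Startup folder.
    """
    base = Path(base_dir).resolve()
    candidate = (base / untrusted_name).resolve()
    if base != candidate and base not in candidate.parents:
        raise ValueError(f"path traversal attempt: {untrusted_name!r}")
    return candidate

# A header-supplied name like "update.exe" is fine; "../evil.exe" is not.
print(safe_join("/tmp/updates", "update.exe"))
```

The key detail is checking containment *after* resolution; a naive substring or prefix check on the raw string is bypassable with `..` sequences or mixed separators.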
10

CVE-2026-42248: Ollama for Windows does not perform integrity or authenticity verification of downloaded update executables. Unlike othe…

security
Apr 29, 2026

Ollama for Windows has a vulnerability (CVE-2026-42248) where it does not verify that downloaded updates are authentic and haven't been tampered with before installing them. Because Ollama automatically installs updates without asking the user, an attacker could trick the software into downloading and running malicious code without the user knowing.

NVD/CVE Database
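The missing control is a standard one: before executing a downloaded update, verify a cryptographic digest (or, better, a signature) against a value obtained over a trusted, authenticated channel. A minimal digest-check sketch, hypothetical rather than Ollama's actual updater:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path: str, expected_sha256: str) -> None:
    """Refuse to install an update whose digest does not match.

    `expected_sha256` must come from a trusted channel (e.g. a pinned
    HTTPS manifest), not from the same response that delivered the file.
    """
    actual = sha256_of(path)
    if actual != expected_sha256.lower():
        raise RuntimeError(
            f"update rejected: digest {actual} != expected {expected_sha256}"
        )
```

A digest check only defeats tampering if the expected value is authenticated separately; full signature verification (the flaw named in the CVE) additionally binds the file to the vendor's key.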
Critical This Week (continued)

- High · GHSA-cmrh-wvq6-wm9r: n8n-mcp webhook and API client paths have an authenticated SSRF (CVE-2026-44694, GitHub Advisory Database, May 8, 2026)

- High · CVE-2026-41487: Langfuse is an open source large language model engineering platform. From version 3.68.0 to before version 3.167.0, the… (NVD/CVE Database, May 8, 2026)

- High · Claude in Chrome is taking orders from the wrong extensions (CSO Online, May 8, 2026)