aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

Vulnerabilities · News · Research · Digest Archive · Newsletter Archive · Subscribe · Data Sources · Statistics · Dataset · API · Integrations · Widget · RSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 69
Daily Briefing · Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws affecting versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical). A separate SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).
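The SQL injection half of this disclosure comes down to query strings built by concatenation. Below is a minimal, self-contained illustration of the vulnerable pattern and its parameterized fix, using Python's sqlite3; the table and column names are invented for the demo and are not LiteLLM's actual schema.

```python
import sqlite3

def lookup_key_unsafe(conn, alias):
    # Vulnerable pattern: the alias is spliced into the SQL string, so input
    # like "x' OR '1'='1" changes the query's meaning.
    query = f"SELECT secret FROM api_keys WHERE alias = '{alias}'"
    return conn.execute(query).fetchall()

def lookup_key_safe(conn, alias):
    # Fixed pattern: a bound parameter is treated as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM api_keys WHERE alias = ?", (alias,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (alias TEXT, secret TEXT)")
conn.execute("INSERT INTO api_keys VALUES ('prod', 'sk-123'), ('dev', 'sk-456')")

# The injected input dumps every stored credential through the unsafe query...
leaked = lookup_key_unsafe(conn, "x' OR '1'='1")
# ...but matches nothing through the parameterized one.
safe = lookup_key_safe(conn, "x' OR '1'='1")
```

The same principle applies regardless of database driver: credentials should only ever reach the query engine as bound parameters.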


ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

Latest Intel

01

CVE-2020-15194: TensorFlow denial of service via missing shape validation in `SparseFillEmptyRowsGrad`

security
Sep 25, 2020

TensorFlow (an open-source machine learning library) before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, and 2.3.1 has a bug in the `SparseFillEmptyRowsGrad` function where it doesn't properly check the shape (dimensions) of one of its inputs called `grad_values_t`. An attacker could exploit this by sending invalid data to cause the program to crash, disrupting AI systems that use TensorFlow to serve predictions.

Fix: Update TensorFlow to version 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1 (or later), which contain the patch released in commit 390611e0d45c5793c7066110af37c8514e6a6c54.

NVD/CVE Database

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk, compared to just 13% for traditional software, and that only 38% of high-risk AI issues get resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector because organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromise at scale.

Critical This Week (5 issues)

critical

CVE-2026-42271: LiteLLM is a proxy server (AI Gateway) for calling LLM APIs in OpenAI (or native) format; versions 1.74.2 through 1.83.6 are affected (see the Daily Briefing above).

NVD/CVE Database · May 8, 2026
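The hardcoded-credential risk called out in the Model Context Protocol item above can be caught with a simple scan of AI client configurations. The sketch below walks a JSON config and flags strings matching common credential shapes; the config layout mirrors typical MCP client files, and the key patterns are illustrative assumptions, not an exhaustive ruleset.

```python
import json
import re

# Patterns for common credential shapes (illustrative, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{16,}"),   # OpenAI-style API keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access tokens
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
]

def scan_mcp_config(config_text):
    """Return (json_path, matched_text) pairs for hardcoded secrets."""
    findings = []

    def walk(node, path):
        if isinstance(node, dict):
            for key, value in node.items():
                walk(value, f"{path}.{key}")
        elif isinstance(node, list):
            for i, value in enumerate(node):
                walk(value, f"{path}[{i}]")
        elif isinstance(node, str):
            for pattern in SECRET_PATTERNS:
                if pattern.search(node):
                    findings.append((path, node))
                    break

    walk(json.loads(config_text), "$")
    return findings

# Hypothetical MCP client config with a hardcoded key in an env block.
config = """
{
  "mcpServers": {
    "mailer": {
      "command": "npx",
      "args": ["postmark-mcp"],
      "env": {"API_KEY": "sk-abcdef1234567890abcd"}
    }
  }
}
"""
findings = scan_mcp_config(config)
```

Production scanners would also check for secrets in environment files and shell history, and flag any credential that should live in a vault instead of a config.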
02

CVE-2020-15193: TensorFlow memory corruption via uninitialized memory use in `dlpack.to_dlpack`

security
Sep 25, 2020

TensorFlow versions before 2.2.1 and 2.3.1 have a vulnerability in the `dlpack.to_dlpack` function where it can be tricked into using uninitialized memory (memory that hasn't been set to a known value), leading to further memory corruption. The problem occurs because the code assumes the input is a TensorFlow tensor, but an attacker can pass in a regular Python object instead, causing a faulty type conversion that accesses memory incorrectly.

Fix: Upgrade to TensorFlow version 2.2.1 or 2.3.1, where the issue is patched in commit 22e07fb204386768e5bcbea563641ea11f96ceb8.

NVD/CVE Database
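The root cause of CVE-2020-15193 above is a trusted-type assumption: the converter reaches into an object's internals without checking what it actually is. The toy Python sketch below shows the pattern and the guard the patch conceptually adds; the class and function names are invented for illustration and are not TensorFlow's code.

```python
class Tensor:
    """Stand-in for a framework tensor object (illustrative only)."""
    def __init__(self, data):
        self.data = data

def to_dlpack_unchecked(obj):
    # Vulnerable pattern: assumes obj is a Tensor and reaches into its
    # internals without checking. In Python this silently yields a bad
    # value; the C++ analogue is a faulty cast that reads uninitialized
    # or unrelated memory.
    return getattr(obj, "data", None)

def to_dlpack_checked(obj):
    # Patched pattern: validate the dynamic type before converting.
    if not isinstance(obj, Tensor):
        raise TypeError(f"expected a Tensor, got {type(obj).__name__}")
    return obj.data

result = to_dlpack_checked(Tensor([1, 2, 3]))
wrong = to_dlpack_unchecked("not a tensor")   # no error, just a bad value
try:
    to_dlpack_checked("not a tensor")          # rejected up front
    rejected = False
except TypeError:
    rejected = True
```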
03

CVE-2020-15192: TensorFlow memory leak when passing a list of strings to `dlpack.to_dlpack`

security
Sep 25, 2020

TensorFlow versions before 2.2.1 and 2.3.1 have a memory leak (wasted computer memory that isn't freed) when users pass a list of strings to a function called `dlpack.to_dlpack`. The bug happens because the code doesn't properly check for error conditions during validation, so it continues running even when it should stop and clean up.

Fix: Update TensorFlow to version 2.2.1 or 2.3.1, which include the fix released in commit 22e07fb204386768e5bcbea563641ea11f96ceb8.

NVD/CVE Database
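The bug class behind CVE-2020-15192 above is continuing past a failed validation without releasing what was already allocated. Below is a toy Python model of the leak and its fix, with a list standing in for the allocator's bookkeeping; all names are invented for illustration.

```python
LIVE_ALLOCATIONS = []

def allocate(n):
    buf = [None] * n
    LIVE_ALLOCATIONS.append(id(buf))   # track the live buffer
    return buf

def release(buf):
    LIVE_ALLOCATIONS.remove(id(buf))

def validate(values):
    # Strings are not a supported dtype in this toy converter.
    return all(not isinstance(v, str) for v in values)

def convert_leaky(values):
    buf = allocate(len(values))
    if not validate(values):
        return None        # bug: early return without releasing buf
    return buf

def convert_fixed(values):
    buf = allocate(len(values))
    if not validate(values):
        release(buf)       # fix: check the status and clean up
        return None
    return buf

convert_leaky(["a", "b"])            # leaves one allocation live
leak_count = len(LIVE_ALLOCATIONS)
convert_fixed(["a", "b"])            # cleans up after itself
final_count = len(LIVE_ALLOCATIONS)  # unchanged: only the old leak remains
```

In garbage-collected Python the buffer would eventually be reclaimed; in C++, where TensorFlow's kernels live, the missing release is a genuine leak an attacker can drive repeatedly.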
04

CVE-2020-15191: TensorFlow unhandled null pointer when passing invalid arguments to `dlpack.to_dlpack`

security
Sep 25, 2020

TensorFlow versions before 2.2.1 and 2.3.1 have a bug where invalid arguments to `dlpack.to_dlpack` (a function that converts data between formats) cause the code to create null pointers (memory references that point to nothing) without properly checking for errors. This can lead to the program crashing or behaving unpredictably when it tries to use these invalid pointers.

Fix: Update TensorFlow to version 2.2.1 or 2.3.1, which contain the patch for this issue.

NVD/CVE Database
05

CVE-2020-15190: TensorFlow null pointer dereference in the `tf.raw_ops.Switch` operation

security
Sep 25, 2020

TensorFlow versions before 1.15.4, 2.0.3, 2.1.2, 2.2.1, and 2.3.1 have a bug in the `tf.raw_ops.Switch` operation where it tries to access a null pointer (a reference to nothing), causing the program to crash. The problem occurs because the operation outputs two tensors (data structures in machine learning frameworks) but only one is actually created, leaving the other as an undefined reference that shouldn't be accessed.

Fix: Update to TensorFlow version 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1 or later. The issue is patched in commit da8558533d925694483d2c136a9220d6d49d843c.

NVD/CVE Database
06

Participating in the Microsoft Machine Learning Security Evasion Competition - Bypassing malware models by signing binaries

securityresearch
Sep 22, 2020

This article describes a participant's experience in Microsoft and CUJO AI's Machine Learning Security Evasion Competition, where the goal was to modify malware samples to bypass machine learning models (AI systems trained to detect malicious files) while keeping them functional. The participant attempted two main evasion techniques: hiding data in binaries using steganography (concealing information within files), which had minimal impact, and signing binaries with fake Microsoft certificates using Authenticode (a digital signature system that verifies software authenticity), which showed more promise.

Embrace The Red
07

Machine Learning Attack Series: Backdooring models

securityresearch
Sep 18, 2020

This post discusses backdooring attacks on machine learning models, where an adversary gains access to a model file (the trained AI system used in production) and overwrites it with malicious code. The threat was identified during threat modeling, which is a security planning process where teams imagine potential attacks to prepare defenses. The post indicates it will cover attacks, mitigations, and how Husky AI was built to address this risk.

Embrace The Red
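One common mitigation for the model-file overwrite attack described above is to pin a hash of the trained artifact and refuse to load anything that does not match. The sketch below is a generic integrity check in Python, not the defense the post itself implements.

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def load_model_if_trusted(path, expected_sha256):
    """Refuse to load a model file whose hash differs from the pinned value."""
    if sha256_of(path) != expected_sha256:
        raise ValueError("model file hash mismatch; refusing to load")
    with open(path, "rb") as f:
        return f.read()  # real code would hand the bytes to the ML framework

# Demo: pin the hash at "training" time, then detect tampering.
with tempfile.NamedTemporaryFile(delete=False, suffix=".h5") as f:
    f.write(b"original model weights")
    model_path = f.name
pinned = sha256_of(model_path)

loaded = load_model_if_trusted(model_path, pinned)   # passes

with open(model_path, "wb") as f:                    # attacker overwrites file
    f.write(b"backdoored model weights")
try:
    load_model_if_trusted(model_path, pinned)
    detected = False
except ValueError:
    detected = True
os.remove(model_path)
```

The pinned hash must itself be stored somewhere the attacker cannot reach, such as a signed release manifest, or the check adds nothing.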
08

Machine Learning Attack Series: Perturbations to misclassify existing images

securityresearch
Sep 16, 2020

This post discusses a machine learning attack technique where researchers modify existing images through small changes (perturbations, or slight adjustments to pixels) to trick an AI model into misclassifying them. For example, they aim to alter a picture of a plush bunny so that an image recognition model incorrectly identifies it as a husky dog.

Embrace The Red
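On a purely linear model, the perturbation attack described above reduces to nudging each pixel in the direction of its weight, which makes the idea easy to demonstrate without an ML framework. The toy sketch below does exactly that; the weights, image size, and labels are invented, and real attacks compute gradients through a deep network.

```python
# Toy linear classifier: score = w . x + b, label "husky" when score > 0.
WEIGHTS = [0.5, -0.3, 0.8, -0.6, 0.2, -0.1, 0.4, -0.7]
BIAS = -0.5

def predict(pixels):
    score = sum(w * p for w, p in zip(WEIGHTS, pixels)) + BIAS
    return "husky" if score > 0 else "not_husky"

def perturb_toward_husky(pixels, epsilon):
    # FGSM-style step on a linear model: move each pixel in the direction
    # that raises the "husky" score (for a linear model the gradient with
    # respect to the input is just the weight vector), then clip to [0, 1].
    return [min(1.0, max(0.0, p + epsilon * (1.0 if w > 0 else -1.0)))
            for p, w in zip(pixels, WEIGHTS)]

image = [0.2] * 8                  # stand-in for the plush-bunny photo
before = predict(image)            # "not_husky"
adversarial = perturb_toward_husky(image, epsilon=0.6)
after = predict(adversarial)       # "husky": the perturbation flips the label
```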
09

Machine Learning Attack Series: Smart brute forcing

securityresearch
Sep 13, 2020

This post is part of a series about machine learning security attacks, with sections covering how an AI system called Husky AI was built and threat-modeled, plus investigations into attacks against it. The previous post demonstrated basic techniques to fool an image recognition model (a type of AI trained to identify what's in pictures) by generating images with solid colors or random pixels.

Embrace The Red
10

Machine Learning Attack Series: Brute forcing images to find incorrect predictions

researchsecurity
Sep 9, 2020

A researcher tested a machine learning model called Husky AI by creating simple test images (all black, all white, and random pixels) and sending them through an HTTP API to see if the model would make incorrect predictions. The white canvas image successfully tricked the model into incorrectly classifying it as a husky, demonstrating a perturbation attack (where slightly modified or unusual inputs fool an AI into making wrong predictions).

Embrace The Red
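The probing loop the researcher used is simple: generate trivial canvases and check each prediction. Below is a sketch in Python with a local stub standing in for the Husky AI HTTP endpoint; the stub is contrived so that a white canvas scores as a husky, mirroring the outcome reported in the post.

```python
import random

def stub_predict(pixels):
    # Local stand-in for the Husky AI HTTP endpoint (the post POSTs images
    # to a real API); contrived so brighter canvases score as huskies.
    brightness = sum(pixels) / len(pixels)
    return {"husky_probability": brightness * 0.9}

def solid_canvas(value, size=16):
    return [value] * size

def random_canvas(size=16, seed=42):
    rng = random.Random(seed)
    return [rng.random() for _ in range(size)]

# Probe the model with trivially constructed inputs, as in the post.
candidates = {
    "all_black": solid_canvas(0.0),
    "all_white": solid_canvas(1.0),
    "random_pixels": random_canvas(),
}
fooled = {name: stub_predict(img)["husky_probability"] > 0.5
          for name, img in candidates.items()}
# The all-white canvas crosses the threshold, as in the post's finding.
```

Against a live API, the loop body would serialize each canvas as a PNG and POST it, but the search strategy is identical.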
critical

CVE-2026-42203: LiteLLM is a proxy server (AI Gateway) for calling LLM APIs in OpenAI (or native) format; affected from version 1.80.5 (see the Daily Briefing above).

NVD/CVE Database · May 8, 2026

critical

Gemini CLI Vulnerability Could Have Led to Code Execution, Supply Chain Attack

SecurityWeek · May 7, 2026

critical

GHSA-9h64-2846-7x7f: Axonflow fixed bugs by implementing multi-tenant isolation and access-control hardening

GitHub Advisory Database · May 6, 2026

critical

GHSA-gmvf-9v4p-v8jc: fast-jwt: JWT auth bypass due to empty HMAC secret accepted by async key resolver

CVE-2026-44351 · GitHub Advisory Database · May 6, 2026