aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Tracked items: 3,710 total · 1 in the last 24 hours · 68 in the last 7 days
Daily Briefing: Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three flaws, two critical and one high severity, affecting versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates that were rendered without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical). Separately, a SQL injection flaw (malicious input spliced into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high severity).
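The SQL injection component (CVE-2026-42208) is the classic string-built-query pattern. A minimal sketch of the vulnerable shape and the parameterized fix, using Python's sqlite3 and hypothetical table and column names (not LiteLLM's actual schema):

```python
import sqlite3

# Hypothetical credential store, standing in for a proxy's API key table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE api_keys (team TEXT, secret TEXT)")
db.execute("INSERT INTO api_keys VALUES ('alpha', 'sk-alpha'), ('beta', 'sk-beta')")

def lookup_vulnerable(team: str):
    # String interpolation lets attacker-controlled input rewrite the query.
    return db.execute(
        f"SELECT secret FROM api_keys WHERE team = '{team}'"
    ).fetchall()

def lookup_safe(team: str):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return db.execute(
        "SELECT secret FROM api_keys WHERE team = ?", (team,)
    ).fetchall()

payload = "x' OR '1'='1"
print(lookup_vulnerable(payload))  # dumps every stored secret
print(lookup_safe(payload))        # returns nothing
```

The payload turns the WHERE clause into a tautology in the interpolated version; in the parameterized version it is just an unmatched team name.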


ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.
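The underlying mistake, authorizing a request by the origin it claims rather than by the caller's verified identity, generalizes beyond browser extensions. A framework-free Python sketch of strict sender verification (all names are hypothetical; in Chrome the verified caller identity is the extension ID the browser attaches to external messages):

```python
TRUSTED_EXTENSION_IDS = {"trusted-extension-id"}   # hypothetical allowlist
TRUSTED_ORIGIN = "https://claude.ai"

def authorize(sender: dict) -> bool:
    # Check the browser-verified caller identity first: an origin string
    # can be asserted by any extension relaying a message, but the sending
    # extension's ID cannot be forged.
    if sender.get("extension_id") is not None:
        return sender["extension_id"] in TRUSTED_EXTENSION_IDS
    return sender.get("origin") == TRUSTED_ORIGIN

# A hostile extension claiming to act for claude.ai is rejected:
assert not authorize({"extension_id": "evil-ext", "origin": "https://claude.ai"})
# A genuine page-script message from the trusted origin is accepted:
assert authorize({"origin": "https://claude.ai"})
```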

Latest Intel

01

CVE-2020-26269: In TensorFlow release candidate versions 2.4.0rc*, the general implementation for matching filesystem paths to globbing

security
Dec 10, 2020

TensorFlow's release candidate versions 2.4.0rc* contain a vulnerability in the code that matches filesystem paths to globbing patterns (a method of searching for files using wildcards), which can cause the program to read memory outside the bounds of an array holding directory information. The vulnerability stems from missing checks on assumptions made by the parallel implementation, but this issue only affects the development version and release candidates, not the final release.

Critical This Week (5 issues)

critical · CVE-2026-42271: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.74.2 to before vers… (NVD/CVE Database, May 8, 2026)

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.


Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system that connects AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, show how attackers exploit widely trusted MCP packages and hardcoded credentials in AI configurations to steal credentials and compromise downstream systems at scale.
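One of the cited root causes, hardcoded credentials in AI tool configurations, has a mechanical fix: resolve secrets from the environment at load time instead of committing them to a config file. A minimal sketch with a hypothetical variable name:

```python
import os

def load_mcp_token(var: str = "MCP_API_TOKEN") -> str:
    # Pull the credential from the environment instead of hardcoding it
    # inside the MCP server config; fail loudly if it is missing.
    token = os.environ.get(var)
    if not token:
        raise RuntimeError(f"{var} is not set; refusing to start")
    return token

os.environ["MCP_API_TOKEN"] = "example-token"  # stand-in for a real secret
print(load_mcp_token())  # example-token
```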

Fix: This is patched in version 2.4.0. The implementation was completely rewritten to fully specify and validate the preconditions.

NVD/CVE Database
02

CVE-2020-26268: In affected versions of TensorFlow the tf.raw_ops.ImmutableConst operation returns a constant tensor created from a memo

security
Dec 10, 2020

A bug in TensorFlow's tf.raw_ops.ImmutableConst operation (a function that creates constant tensors from memory-mapped files) crashes the Python interpreter when the tensor type is not an integer type, because the code tries to write to a memory region that should be read-only. The crash happens when the file is large enough to contain the tensor data, and manifests as a segmentation fault (an invalid memory access that kills the process).

Fix: This is fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0.

NVD/CVE Database
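The crash mechanism, writing through a read-only memory mapping, can be reproduced safely from Python, whose `mmap` checks the access mode and raises an exception instead of letting the process take the segmentation fault that the unchecked C++ write did:

```python
import mmap
import tempfile

rejected = False
with tempfile.TemporaryFile() as f:
    # Map a scratch file read-only, then attempt to write through it.
    f.write(b"\x00" * mmap.PAGESIZE)
    f.flush()
    mem = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    try:
        mem[0] = 1                     # write into a read-only mapping
    except TypeError as err:
        rejected = True                # Python guards the access mode
        print("rejected:", err)
    finally:
        mem.close()
```

In C++ the equivalent unchecked store faults at the hardware level, which is why the TensorFlow bug took down the whole interpreter.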
03

CVE-2020-26267: In affected versions of TensorFlow the tf.raw_ops.DataFormatVecPermute API does not validate the src_format and dst_form

security
Dec 10, 2020

CVE-2020-26267 is a vulnerability in TensorFlow where the tf.raw_ops.DataFormatVecPermute API (a function for converting data format layout) fails to check the src_format and dst_format inputs, leading to uninitialized memory accesses (using memory that hasn't been set to a known value), out-of-bounds reads (accessing data outside intended boundaries), and potential crashes. The vulnerability was patched across multiple TensorFlow versions.

Fix: This is fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0.

NVD/CVE Database
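The missing checks are easy to state concretely. A sketch of the same vector permutation with the validation the patch added (semantics modeled on the API's behavior, not TensorFlow's actual C++):

```python
def permute_vec(vec, src_format="NHWC", dst_format="NCHW"):
    # Validate both format strings before indexing: they must be
    # permutations of each other and match the vector's length. Skipping
    # exactly these checks caused the uninitialized/out-of-bounds reads.
    if sorted(src_format) != sorted(dst_format):
        raise ValueError("src_format and dst_format must be permutations")
    if len(vec) != len(src_format):
        raise ValueError("vector length must match format length")
    index = {axis: i for i, axis in enumerate(src_format)}
    return [vec[index[axis]] for axis in dst_format]

print(permute_vec([1, 224, 224, 3]))  # [1, 3, 224, 224]
```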
04

CVE-2020-26266: In affected versions of TensorFlow under certain cases a saved model can trigger use of uninitialized values during code

security
Dec 10, 2020

CVE-2020-26266 is a vulnerability in TensorFlow where saved models can accidentally use uninitialized values (memory locations that haven't been set to a starting value) during execution because certain floating point data types weren't properly initialized in the Eigen library (a math processing component). This is a use of uninitialized resource (CWE-908) type bug that could lead to unpredictable behavior when running affected models.

Fix: This vulnerability is fixed in TensorFlow versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0. Users should update to one of these patched versions.

NVD/CVE Database
05

CVE-2020-26271: In affected versions of TensorFlow under certain cases, loading a saved model can result in accessing uninitialized memo

security
Dec 10, 2020

TensorFlow has a vulnerability where loading a saved model can access uninitialized memory (data that hasn't been set to a known value) when building a computation graph. The bug occurs in the MakeEdge function, which connects parts of a neural network together, because it doesn't verify that array indices are valid before accessing them, potentially allowing attackers to leak memory addresses from the library.

Fix: This is fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0. Users should update to one of these patched versions.

NVD/CVE Database
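The fix pattern here is ordinary bounds checking before indexing. A toy graph in Python illustrating the checks a patched edge-construction routine performs (names and structure are illustrative, not TensorFlow's):

```python
class Graph:
    def __init__(self):
        self.nodes = []   # each node: number of output slots it exposes
        self.edges = []   # (src_node, src_slot, dst_node)

    def add_node(self, num_outputs: int) -> int:
        self.nodes.append(num_outputs)
        return len(self.nodes) - 1

    def make_edge(self, src: int, slot: int, dst: int) -> None:
        # Validate every index before touching the arrays; the unpatched
        # code indexed node and slot arrays without these checks.
        if not (0 <= src < len(self.nodes)) or not (0 <= dst < len(self.nodes)):
            raise IndexError("node index out of range")
        if not (0 <= slot < self.nodes[src]):
            raise IndexError("output slot out of range")
        self.edges.append((src, slot, dst))

g = Graph()
a, b = g.add_node(2), g.add_node(1)
g.make_edge(a, 1, b)          # valid
try:
    g.make_edge(a, 7, b)      # invalid slot, as from a malformed saved model
except IndexError as err:
    print("rejected:", err)
```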
06

CVE-2020-29374: An issue was discovered in the Linux kernel before 5.7.3, related to mm/gup.c and mm/huge_memory.c. The get_user_pages (

security
Nov 28, 2020

A bug was found in the Linux kernel before version 5.7.3 in the get_user_pages function (a mechanism that lets kernel code pin and access a process's memory pages), which could grant write access where only read access should be allowed for copy-on-write pages (memory shared between processes and copied only when one of them modifies it). Because the function did not properly enforce the read-only restriction, a process could gain unintended write access to pages it should only be able to read.

Fix: Update the Linux kernel to version 5.7.3 or later. A patch is available at https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=17839856fd588f4ab6b789f482ed3ffd7c403e1f. Debian users should refer to security updates referenced in the Debian mailing list announcements and DSA-5096.

NVD/CVE Database
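Copy-on-write itself is easy to observe from user space. A POSIX-only Python sketch: after `fork`, parent and child share pages until one of them writes, at which point the kernel gives the writer a private copy. This is the read/write separation the buggy `get_user_pages` path failed to respect:

```python
import os

data = bytearray(b"original")

pid = os.fork()
if pid == 0:
    # Child writes: the kernel copies the shared page before the write
    # lands, so the parent's view must stay intact.
    data[:] = b"modified"
    os._exit(0)

os.waitpid(pid, 0)
print(data.decode())  # parent still sees "original"
```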
07

Machine Learning Attack Series: Overview

securityresearch
Nov 26, 2020

This is an index page summarizing a series of blog posts about machine learning security from a red teaming perspective (testing a system by simulating attacker behavior). The posts cover ML basics, threat modeling, practical attacks like adversarial examples (inputs designed to fool AI models), model theft, backdoors (hidden malicious code inserted into models), and how traditional security attacks (like weak access control) also threaten AI systems.

Embrace The Red
08

Machine Learning Attack Series: Generative Adversarial Networks (GANs)

securityresearch
Nov 25, 2020

This post describes how Generative Adversarial Networks (GANs, a type of AI system where two neural networks compete to create realistic fake images) can be used to generate fake husky photos that trick an image recognition system called Husky AI into misclassifying them as real huskies. The author explains they investigated this attack method and references a GAN course to learn more about the technique.

Embrace The Red
09

Assuming Bias and Responsible AI

safetypolicy
Nov 24, 2020

AI and machine learning systems have caused serious problems in real-world situations, including Amazon's recruiting tool that discriminated against women, Microsoft's chatbot that became racist and sexist, IBM's cancer treatment recommendation system that doctors criticized, and Facebook's AI that made incorrect translations leading to someone's arrest. These examples show that AI systems can develop and spread biased predictions and failures with harmful consequences. The article highlights the importance of addressing bias when building and deploying AI systems responsibly.

Embrace The Red
10

CVE-2020-28975: svm_predict_values in svm.cpp in Libsvm v324, as used in scikit-learn 0.23.2 and other products, allows attackers to cau

security
Nov 21, 2020

A vulnerability in Libsvm v324 (a machine learning library used by scikit-learn 0.23.2) allows attackers to crash a program by sending a specially crafted machine learning model with an extremely large value in the _n_support array, causing a segmentation fault (a type of crash where the program tries to access memory it shouldn't). The scikit-learn developers noted this only happens if an application violates the library's API by modifying private attributes.

Fix: A patch is available in scikit-learn at commit 1bf13d567d3cd74854aa8343fd25b61dd768bb85 on GitHub, as referenced in the source material.

NVD/CVE Database
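The general defense against this class of bug is validating size fields from untrusted serialized models before allocating or indexing with them. A minimal sketch with hypothetical field names, not Libsvm's actual file format:

```python
def load_support_vectors(header: dict, max_n: int = 1_000_000) -> list:
    # Reject implausible counts before allocating or indexing; a crafted
    # model file can put any integer in this field.
    n = header.get("n_support")
    if not isinstance(n, int) or not (0 <= n <= max_n):
        raise ValueError(f"implausible n_support: {n!r}")
    return [0.0] * n

print(len(load_support_vectors({"n_support": 3})))  # 3
try:
    load_support_vectors({"n_support": 2**62})      # crafted, absurd count
except ValueError as err:
    print("rejected:", err)
```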
Critical This Week (continued)

critical · CVE-2026-42203: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.80.5 to before vers… (NVD/CVE Database, May 8, 2026)

critical · Gemini CLI Vulnerability Could Have Led to Code Execution, Supply Chain Attack (SecurityWeek, May 7, 2026)

critical · GHSA-9h64-2846-7x7f: Axonflow fixed bugs by implementing multi-tenant isolation and access-control hardening (GitHub Advisory Database, May 6, 2026)

critical · GHSA-gmvf-9v4p-v8jc: fast-jwt: JWT auth bypass due to empty HMAC secret accepted by async key resolver (CVE-2026-44351, GitHub Advisory Database, May 6, 2026)