aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

[TOTAL_TRACKED]
2,829
[LAST_24H]
3
[LAST_7D]
160
Daily BriefingMonday, April 6, 2026
>

Attackers Exploit AI Systems as Infrastructure for Attacks: Adversaries are increasingly abusing legitimate AI services for malicious operations, including poisoning MCP servers (tools that connect AI assistants to external services) in supply chains, using AI platforms like Claude and Copilot as command-and-control channels (hidden pathways for sending instructions to compromised systems), and hijacking AI agents (automated systems that perform tasks) to exfiltrate data or execute destructive actions. This represents an evolution beyond prompt injection (tricking an AI by hiding instructions in its input) toward sophisticated agent hijacking techniques.

>

AI Security Tools Create New Vendor Lock-In Risks: Commercial AI-powered security products are generating a distinct form of platform dependency through proprietary training data, vendor-specific threat intelligence feeds (collections of indicators showing cyber attacks), and specialized hardware requirements. Organizations face significant migration costs and technical barriers when attempting to switch providers.

Latest Intel

page 279/283
VIEW ALL
01

CVE-2020-15193: In Tensorflow before versions 2.2.1 and 2.3.1, the implementation of `dlpack.to_dlpack` can be made to use uninitialized

security
Sep 25, 2020

TensorFlow versions before 2.2.1 and 2.3.1 have a vulnerability in the `dlpack.to_dlpack` function where it can be tricked into using uninitialized memory (memory that hasn't been set to a known value), leading to further memory corruption. The problem occurs because the code assumes the input is a TensorFlow tensor, but an attacker can pass in a regular Python object instead, causing a faulty type conversion that accesses memory incorrectly.

Critical This Week5 issues
critical

GHSA-jjhc-v7c2-5hh6: LiteLLM: Authentication bypass via OIDC userinfo cache key collision

CVE-2026-35030GitHub Advisory DatabaseApr 3, 2026
Apr 3, 2026

Fix: Upgrade to TensorFlow version 2.2.1 or 2.3.1, where the issue is patched in commit 22e07fb204386768e5bcbea563641ea11f96ceb8.

NVD/CVE Database
02

CVE-2020-15192: In Tensorflow before versions 2.2.1 and 2.3.1, if a user passes a list of strings to `dlpack.to_dlpack` there is a memor

security
Sep 25, 2020

TensorFlow versions before 2.2.1 and 2.3.1 have a memory leak (wasted computer memory that isn't freed) when users pass a list of strings to a function called `dlpack.to_dlpack`. The bug happens because the code doesn't properly check for error conditions during validation, so it continues running even when it should stop and clean up.

Fix: Update TensorFlow to version 2.2.1 or 2.3.1, which include the fix released in commit 22e07fb204386768e5bcbea563641ea11f96ceb8.

NVD/CVE Database
03

CVE-2020-15191: In Tensorflow before versions 2.2.1 and 2.3.1, if a user passes an invalid argument to `dlpack.to_dlpack` the expected v

security
Sep 25, 2020

TensorFlow versions before 2.2.1 and 2.3.1 have a bug where invalid arguments to `dlpack.to_dlpack` (a function that converts data between formats) cause the code to create null pointers (memory references that point to nothing) without properly checking for errors. This can lead to the program crashing or behaving unpredictably when it tries to use these invalid pointers.

Fix: Update TensorFlow to version 2.2.1 or 2.3.1, which contain the patch for this issue.

NVD/CVE Database
04

CVE-2020-15190: In Tensorflow before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, the `tf.raw_ops.Switch` operation takes as input a

security
Sep 25, 2020

TensorFlow versions before 1.15.4, 2.0.3, 2.1.2, 2.2.1, and 2.3.1 have a bug in the `tf.raw_ops.Switch` operation where it tries to access a null pointer (a reference to nothing), causing the program to crash. The problem occurs because the operation outputs two tensors (data structures in machine learning frameworks) but only one is actually created, leaving the other as an undefined reference that shouldn't be accessed.

Fix: Update to TensorFlow version 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1 or later. The issue is patched in commit da8558533d925694483d2c136a9220d6d49d843c.

NVD/CVE Database
05

Participating in the Microsoft Machine Learning Security Evasion Competition - Bypassing malware models by signing binaries

securityresearch
Sep 22, 2020

This article describes a participant's experience in Microsoft and CUJO AI's Machine Learning Security Evasion Competition, where the goal was to modify malware samples to bypass machine learning models (AI systems trained to detect malicious files) while keeping them functional. The participant attempted two main evasion techniques: hiding data in binaries using steganography (concealing information within files), which had minimal impact, and signing binaries with fake Microsoft certificates using Authenticode (a digital signature system that verifies software authenticity), which showed more promise.

Embrace The Red
06

Machine Learning Attack Series: Backdooring models

securityresearch
Sep 18, 2020

This post discusses backdooring attacks on machine learning models, where an adversary gains access to a model file (the trained AI system used in production) and overwrites it with malicious code. The threat was identified during threat modeling, which is a security planning process where teams imagine potential attacks to prepare defenses. The post indicates it will cover attacks, mitigations, and how Husky AI was built to address this risk.

Embrace The Red
07

Machine Learning Attack Series: Perturbations to misclassify existing images

securityresearch
Sep 16, 2020

This post discusses a machine learning attack technique where researchers modify existing images through small changes (perturbations, or slight adjustments to pixels) to trick an AI model into misclassifying them. For example, they aim to alter a picture of a plush bunny so that an image recognition model incorrectly identifies it as a husky dog.

Embrace The Red
08

Machine Learning Attack Series: Smart brute forcing

securityresearch
Sep 13, 2020

This post is part of a series about machine learning security attacks, with sections covering how an AI system called Husky AI was built and threat-modeled, plus investigations into attacks against it. The previous post demonstrated basic techniques to fool an image recognition model (a type of AI trained to identify what's in pictures) by generating images with solid colors or random pixels.

Embrace The Red
09

Machine Learning Attack Series: Brute forcing images to find incorrect predictions

researchsecurity
Sep 9, 2020

A researcher tested a machine learning model called Husky AI by creating simple test images (all black, all white, and random pixels) and sending them through an HTTP API to see if the model would make incorrect predictions. The white canvas image successfully tricked the model into incorrectly classifying it as a husky, demonstrating a perturbation attack (where slightly modified or unusual inputs fool an AI into making wrong predictions).

Embrace The Red
10

Threat modeling a machine learning system

securityresearch
Sep 6, 2020

This post explains threat modeling for machine learning systems, which is a process to systematically identify potential security attacks. The author uses Microsoft's Threat Modeling tool and STRIDE (a framework categorizing threats into spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege) to identify vulnerabilities in a machine learning system called 'Husky AI', and notes that perturbation attacks (where attackers query the model to trick it into making wrong predictions) are a particular concern for ML systems.

Embrace The Red
Prev1...277278279280281...283Next
critical

CVE-2026-0545: In mlflow/mlflow, the FastAPI job endpoints under `/ajax-api/3.0/jobs/*` are not protected by authentication or authoriz

CVE-2026-0545NVD/CVE DatabaseApr 3, 2026
Apr 3, 2026
critical

GHSA-3hfp-gqgh-xc5g: Axios supply chain attack - dependency in @lightdash/cli may resolve to compromised axios versions

GitHub Advisory DatabaseApr 2, 2026
Apr 2, 2026
critical

GHSA-6vh2-h83c-9294: PraisonAI: Python Sandbox Escape via str Subclass startswith() Override in execute_code

CVE-2026-34938GitHub Advisory DatabaseApr 1, 2026
Apr 1, 2026
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026