aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. This site is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1

Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

Deep Learning With Data Privacy via Residual Perturbation

research · privacy
Nov 26, 2025

This research proposes a new method for protecting data privacy in deep learning (training AI models on sensitive data) by adding Gaussian noise (random values from a bell-curve distribution) to ResNets (a type of neural network with skip connections). The method aims to provide differential privacy (a mathematical guarantee that an individual's data cannot be easily identified from the model's results) while maintaining better accuracy and speed than existing privacy-protection techniques like DPSGD (differentially private stochastic gradient descent, a slower privacy-focused training method).
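The core idea can be shown with a toy sketch (an illustration of the general mechanism, not the paper's exact scheme): a residual block normally computes x + f(x), and Gaussian noise is added to the branch output so downstream computation never sees the exact activation.

```python
import random

def noisy_residual_block(x, f, sigma=0.1, rng=random.Random(0)):
    # A residual block computes x + f(x); adding Gaussian noise to the
    # branch output perturbs each activation, which is the basic lever
    # behind this style of differential-privacy guarantee.
    branch = f(x)
    return [xi + bi + rng.gauss(0.0, sigma) for xi, bi in zip(x, branch)]

out = noisy_residual_block([1.0, 2.0], lambda v: [0.5 * vi for vi in v])
```

The noise scale sigma trades privacy for accuracy: larger sigma hides individual contributions better but degrades model quality.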

IEEE Xplore (Security & AI Journals)
02

CVE-2025-62703: Fugue is a unified interface for distributed computing that lets users execute Python, Pandas, and SQL code on Spark, Dask, and Ray

security
Nov 25, 2025

Fugue is a tool that lets developers run Python, Pandas, and SQL code across distributed computing systems like Spark, Dask, and Ray. Versions 0.9.2 and earlier have a remote code execution vulnerability (RCE, where attackers can run arbitrary code on a victim's machine) in the RPC server because it deserializes untrusted data using cloudpickle.loads() without checking if the data is safe first. An attacker can send malicious serialized Python objects to the server, which will execute on the victim's machine.

Fix: This issue has been patched via commit 6f25326.
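Why deserializing untrusted bytes is remote code execution can be shown with the standard-library pickle module (cloudpickle behaves the same way; the payload function here is a benign stand-in for something like os.system):

```python
import pickle

executed = []

def attacker_payload(arg):
    # stands in for os.system("...") in a real attack
    executed.append(arg)

class Malicious:
    def __reduce__(self):
        # __reduce__ tells the unpickler to call an arbitrary callable
        # with arbitrary arguments during deserialization
        return (attacker_payload, ("attacker code ran",))

payload = pickle.dumps(Malicious())  # bytes an attacker could send over RPC
pickle.loads(payload)                # "server side": triggers the callable
```

This is why pickle-family loaders must never be fed network input; the safe pattern is a data-only format (JSON, msgpack) or signed payloads.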

NVD/CVE Database
03

CVE-2025-13380: The AI Engine for WordPress: ChatGPT, GPT Content Generator plugin for WordPress is vulnerable to Arbitrary File Read

security
Nov 25, 2025

A WordPress plugin called 'The AI Engine for WordPress: ChatGPT, GPT Content Generator' has a vulnerability that allows attackers with Contributor-level access or higher to read any file on the server. The problem exists because the plugin doesn't properly check file paths that users provide to certain functions (the 'lqdai_update_post' AJAX endpoint and the insert_image() function), which could expose sensitive information.
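The missing defense is a path-containment check. A generic Python sketch of the pattern (the plugin itself is PHP; this only illustrates the flaw class):

```python
import os
import tempfile

def safe_read(base_dir, user_path):
    # Resolve symlinks and ".." segments, then verify the result is still
    # inside base_dir before opening it. Skipping this check is what
    # enables arbitrary-file-read bugs like the one described above.
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, user_path))
    if target != base and not target.startswith(base + os.sep):
        raise PermissionError("path escapes the allowed directory")
    with open(target, "rb") as f:
        return f.read()

root = tempfile.mkdtemp()
with open(os.path.join(root, "ok.txt"), "wb") as f:
    f.write(b"public")
```

Note the check compares resolved paths, not raw strings, so `../../etc/passwd` and symlink tricks are both caught.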

NVD/CVE Database
04

Antigravity Grounded! Security Vulnerabilities in Google's Latest IDE

security
Nov 25, 2025

Google's new Antigravity IDE inherits multiple security vulnerabilities from the Windsurf codebase it was licensed from, including remote command execution (RCE, where an attacker can run commands on a system they don't own) via indirect prompt injection (tricking an AI by hiding instructions in its input), hidden instruction execution, and data exfiltration. The IDE's default setting allows the AI to automatically execute terminal commands without human review, relying on the language model's judgment to determine if a command is safe, which researchers have successfully bypassed with working exploits.
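The safer default the researchers argue for is a human-in-the-loop gate: the model may propose a command, but nothing runs until a person approves it. A minimal sketch (an assumption about the pattern, not Antigravity's actual code):

```python
import subprocess

def run_agent_command(argv, approve):
    # approve() is the human gate (an interactive prompt in a real tool);
    # the model's own judgment is never the sole authority on whether a
    # proposed command is safe to execute.
    if not approve(argv):
        return None  # blocked, never executed
    return subprocess.run(argv, capture_output=True)

blocked = run_agent_command(["rm", "-rf", "/tmp/project"], lambda a: False)
```

With auto-execution on, the `approve` step collapses into the model's self-assessment, which is exactly what the published exploits bypass.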

Embrace The Red
05

CVE-2025-65106: LangChain is a framework for building agents and LLM-powered applications. From versions 0.3.79 and prior and 1.0.0 to 1.0.6

security
Nov 21, 2025

LangChain, a framework for building AI agents and applications powered by large language models, has a template injection vulnerability (a security flaw where attackers can hide malicious code in text templates) in versions 0.3.79 and earlier and 1.0.0 through 1.0.6. Attackers can exploit this by crafting malicious template strings that access internal Python object data in ChatPromptTemplate and similar classes, particularly when an application accepts untrusted template input.

Fix: Update to LangChain version 0.3.80 or 1.0.7, where the vulnerability has been patched.
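The bug class is easiest to see with plain `str.format` (LangChain's templates are more elaborate, but the principle is the same): a placeholder can walk attribute chains on whatever objects the template is rendered against.

```python
class Config:
    api_key = "sk-secret"

class Request:
    def __init__(self, config):
        self._config = config

req = Request(Config())

# Attacker-supplied template: the placeholder traverses attributes of
# the object it is formatted with, leaking data the template author
# never intended to expose.
untrusted_template = "Hello {r._config.api_key}"
leaked = untrusted_template.format(r=req)
```

The general mitigation is to treat template strings as code: never render untrusted templates, or use an engine that forbids attribute traversal.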

NVD/CVE Database
06

CVE-2025-65946: Roo Code is an AI-powered autonomous coding agent that lives in users' editors. Prior to version 3.26.7, a validation error allowed automatic execution of commands not on the allow list

security
Nov 21, 2025

Roo Code is an AI-powered coding agent that runs inside code editors. Before version 3.26.7, a validation error allowed Roo to automatically execute commands that weren't on an allow list (a list of approved commands), which is a type of command injection vulnerability (where attackers trick a system into running unintended commands).

Fix: Update to version 3.26.7 or later, where this issue has been patched.
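A sound allow-list check validates the actual executable token and rejects shell metacharacters outright; lazy substring or prefix matching is the usual source of bypasses like this one. A generic sketch (not Roo Code's implementation):

```python
import shlex

ALLOWED = {"ls", "git", "npm"}
METACHARS = set(";|&`$<>")

def is_allowed(command_line):
    # Reject shell metacharacters first: shlex does not treat ";" as a
    # command separator, so "git status; rm -rf /" would otherwise pass
    # the executable-name check below.
    if set(command_line) & METACHARS:
        return False
    try:
        tokens = shlex.split(command_line)
    except ValueError:
        return False
    return bool(tokens) and tokens[0] in ALLOWED
```

Even better is to skip the shell entirely and execute an argv list directly, so there is no string for an attacker to smuggle syntax into.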

NVD/CVE Database
07

CVE-2025-65107: Langfuse is an open source large language model engineering platform. In versions from 2.95.0 to before 2.95.12 and from 3.17.0 to before 3.131.0

security
Nov 21, 2025

Langfuse, an open source platform for managing large language models, has a vulnerability in versions 2.95.0–2.95.11 and 3.17.0–3.130.x where attackers could take over user accounts if certain security settings are not configured. The attack works by tricking an authenticated user into clicking a malicious link (via CSRF, which is cross-site request forgery where an attacker tricks your browser into making unwanted requests, or phishing).

Fix: Update to Langfuse version 2.95.12 or 3.131.0, where the issue has been patched. Alternatively, as a workaround, set the AUTH_<PROVIDER>_CHECK configuration parameter.
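The defense class here is the synchronizer token: the server issues a per-session secret, embeds it in its own forms, and rejects any state-changing request that does not echo it back. A forged cross-site request cannot read the token, so it fails the check. A minimal sketch (an illustration of the pattern, not Langfuse's mechanism):

```python
import hmac
import secrets

# Per-session secret the server embeds in its own forms.
session_token = secrets.token_hex(16)

def handle_state_change(submitted_token):
    # compare_digest is a constant-time comparison, avoiding timing
    # side channels on the token check.
    if not hmac.compare_digest(submitted_token, session_token):
        return "rejected"
    return "accepted"
```

SameSite cookies and origin-header checks are complementary layers of the same defense.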

NVD/CVE Database
08

CVE-2025-12973: The S2B AI Assistant – ChatBot, ChatGPT, OpenAI, Content & Image Generator plugin for WordPress is vulnerable to arbitrary file upload

security
Nov 21, 2025

The S2B AI Assistant WordPress plugin (a tool that adds AI chatbot features to websites) has a vulnerability in versions up to 1.7.8 where it fails to check what type of files users are uploading. This allows editors and higher-level users to upload malicious files that could potentially let attackers run commands on the website server (remote code execution, or RCE).
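The missing control is an explicit allow list on upload types. A generic Python sketch (the plugin is PHP, and real hardening also checks MIME type and serves uploads from a non-executing directory):

```python
import os

ALLOWED_UPLOAD_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".pdf"}

def upload_allowed(filename):
    # Validate the final extension against an explicit allow list. The
    # absence of a check like this is what lets an editor upload a .php
    # file that the web server will later execute.
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_UPLOAD_EXTENSIONS
```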

NVD/CVE Database
09

CVE-2025-62609: MLX is an array framework for machine learning on Apple silicon. Prior to version 0.29.4, there is a segmentation fault when loading malicious GGUF files

security
Nov 21, 2025

MLX is an array framework for machine learning on Apple silicon that has a vulnerability where loading malicious GGUF files (a machine learning model format) causes a segmentation fault (a crash where the program tries to access invalid memory). The problem occurs because the code dereferences an untrusted pointer (uses a memory address without checking if it's valid) from an external library without validation.

Fix: This issue has been patched in version 0.29.4. Users should update MLX to version 0.29.4 or later.
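The missing step is range-checking every offset and length a file header supplies before using it. MLX is C++, where the bug manifests as an invalid dereference; the same validation pattern looks like this in a Python sketch:

```python
import struct

def read_chunk(buf):
    # Both offset and length come from the untrusted file, so they must
    # be validated against the real buffer size before any indexing:
    # this is the check whose absence causes crashes like the GGUF one.
    if len(buf) < 8:
        raise ValueError("truncated header")
    offset, length = struct.unpack("<II", buf[:8])
    if offset < 8 or offset + length > len(buf):
        raise ValueError("chunk out of bounds")
    return buf[offset:offset + length]

good = struct.pack("<II", 8, 4) + b"data"
bad = struct.pack("<II", 8, 4096) + b"data"
```

The same principle covers the .npy heap-overflow entry below: model and tensor files are attacker-controlled input and every size field in them is hostile until proven otherwise.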

NVD/CVE Database
10

CVE-2025-62608: MLX is an array framework for machine learning on Apple silicon. Prior to version 0.29.4, there is a heap buffer overflow when loading malicious NumPy .npy files

security
Nov 21, 2025

MLX is an array framework (a software library for handling arrays of data in machine learning) for Apple silicon computers. Before version 0.29.4, the software had a heap buffer overflow (a memory safety bug where the program reads beyond allocated memory) in its file-loading function when processing malicious NumPy .npy files (a common data format in machine learning), which could crash the program or leak sensitive information.

Fix: Update MLX to version 0.29.4 or later. The vulnerability has been patched in this version.

NVD/CVE Database