aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,741 · Last 24 hours: 21 · Last 7 days: 162
Daily Briefing · Wednesday, April 1, 2026

Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that Claude Code's source code (nearly 2,000 TypeScript files and over 512,000 lines of code) was accidentally exposed through an npm package containing a source map file, revealing internal features and creating security risks because attackers can study the system to bypass safeguards. Users who downloaded the affected version on March 31, 2026 may have received trojanized software (compromised code) containing malware.
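The incident above stems from a source-map file shipped inside a published npm package: a `.map` file with a `sourcesContent` field embeds the full original source, not just line mappings. A minimal, hypothetical sketch of how a team might scan an installed package for this kind of exposure (the paths and helper names here are illustrative, not part of any real tool):

```python
"""Scan an installed npm package directory for bundled source-map files.

A .map file that carries a 'sourcesContent' field embeds the complete
original source text, so shipping one in a published package exposes
the unminified code. This is an illustrative sketch, not an official tool.
"""
import json
from pathlib import Path


def find_source_maps(package_dir: str) -> list[str]:
    """Return relative paths of all *.map files under an installed package."""
    root = Path(package_dir)
    return sorted(str(p.relative_to(root)) for p in root.rglob("*.map"))


def has_inline_source_content(map_path: str) -> bool:
    """True if the source map embeds full source via 'sourcesContent'."""
    data = json.loads(Path(map_path).read_text(encoding="utf-8"))
    return bool(data.get("sourcesContent"))
```

Running `find_source_maps` against a `node_modules/<package>` directory before publishing (or after installing) would flag exactly the kind of file implicated here.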


AI Discovers Zero-Days in Vim and GNU Emacs Within Minutes: Researcher Hung Nguyen used Anthropic's Claude Code to quickly discover zero-day exploits (previously unknown security flaws) in Vim and GNU Emacs that allow attackers to execute arbitrary code (run their own commands) by tricking users into opening malicious files, with Claude Code generating working proof-of-concept attacks in minutes.

Google Addresses Vertex AI Security Issues After Weaponization Demo: Palo Alto Networks revealed security problems in Google Cloud Platform's Vertex AI (Google's service for building and deploying machine learning models) after researchers demonstrated how to weaponize AI agents (autonomous programs that perform tasks with minimal human input), prompting Google to begin addressing the disclosed issues.

Meta Smartglasses Raise Privacy Concerns with Built-in AI Recording: Meta's smartglasses include a built-in camera and AI assistant that can describe what the wearer sees and provide information, but they raise significant privacy concerns because they can record video of others without their knowledge or consent.

Critical This Week (5 issues)

critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162 · NVD/CVE Database · Mar 31, 2026

Latest Intel

01

Revealing the Risk of Hyper-Parameter Leakage in Deep Reinforcement Learning Models

security · research
Oct 6, 2025

Researchers discovered that hyper-parameters (settings that control how a deep reinforcement learning model learns and behaves) can be leaked from closed-box DRL models, meaning attackers can infer these hidden settings simply by observing how the model responds to different situations. Their attack, HyperInfer, inferred hyper-parameters with over 90% accuracy, showing that even access-restricted AI models may expose information that was meant to stay hidden.

IEEE Xplore (Security & AI Journals)
02

PrivESD: A Privacy-Preserving Cloud-Edge Collaborative Logistic Regression Model Over Encrypted Streaming Data

security · research
Oct 6, 2025

PrivESD is a new system that performs machine learning classification (logistic regression, a technique for categorizing data) directly on encrypted streaming data (continuously flowing information that has been scrambled for privacy) stored in the cloud. The system splits the computational work between cloud servers and edge devices (computers closer to where data originates) to reduce processing burden and privacy risk, and uses special encryption methods that allow values to be compared without revealing the underlying data.

IEEE Xplore (Security & AI Journals)
03

Hard Sample Mining: A New Paradigm of Efficient and Robust Model Training

research
Oct 6, 2025

Hard sample mining (HSM, a technique for selecting the most difficult training examples to focus a model's learning) has emerged as a method to improve how efficiently deep neural networks (AI systems based on interconnected layers inspired by brain neurons) train and make them more robust to errors. This survey article reviews different HSM approaches and explains how they help address training inefficiency and data distribution biases (when training data doesn't represent real-world scenarios fairly) in deep learning.

IEEE Xplore (Security & AI Journals)
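The core idea of hard sample mining is simple: rank training examples by how badly the model currently handles them and concentrate the next update on the hardest ones. A minimal sketch of that selection step, assuming per-sample losses have already been computed (the function name is illustrative, not from the paper):

```python
import numpy as np


def mine_hard_samples(losses: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k training samples with the highest loss.

    In an HSM-style training loop, these indices would select the batch
    (or reweight the samples) for the next gradient update, so the model
    spends its capacity on the examples it currently gets most wrong.
    """
    return np.argsort(losses)[::-1][:k]
```

For example, with per-sample losses `[0.1, 0.9, 0.5, 0.7]` and `k=2`, the miner picks samples 1 and 3. Variants in the survey differ mainly in how "hardness" is scored and how aggressively easy samples are discarded.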
04

Three-Dimensional Multiobject Tracking Based on Voxel Masking Encoder and Deep Hashing Paradigm

research
Oct 6, 2025

This paper presents a new system for 3-D multiobject tracking (MOT, a technique where AI follows multiple objects moving through 3-D space) used in autonomous vehicles to improve safety. The system uses a voxel masking encoder (a method that processes 3-D space divided into small cubes, focusing on important features while ignoring empty space) and deep hashing (a technique that converts objects into compact numerical codes for fast comparison) to better track distant objects, partially hidden objects, and similar-looking objects. The method was tested on the KITTI dataset (a standard collection of driving videos used to evaluate autonomous vehicle systems) and showed better tracking accuracy than existing methods.

IEEE Xplore (Security & AI Journals)
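The deep-hashing half of this pipeline can be illustrated in a few lines: object appearance features are binarized into compact codes, and tracks are matched to new detections by Hamming distance, which is much cheaper than comparing full float embeddings. A toy sketch under those assumptions (greedy matching; not the paper's actual association algorithm):

```python
import numpy as np


def to_binary_code(features: np.ndarray) -> np.ndarray:
    """Binarize a real-valued embedding into {0, 1} bits by sign."""
    return (features > 0).astype(np.uint8)


def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))


def match_detections(track_codes, det_codes, max_dist=2):
    """Greedily associate each existing track with its nearest detection.

    Returns {track_index: detection_index} for pairs whose Hamming
    distance is within max_dist; unmatched tracks are simply omitted.
    """
    matches = {}
    for ti, tc in enumerate(track_codes):
        dists = [hamming_distance(tc, dc) for dc in det_codes]
        best = int(np.argmin(dists))
        if dists[best] <= max_dist:
            matches[ti] = best
    return matches
```

Because codes are short bit vectors, this comparison scales to many simultaneous objects per frame, which is the practical appeal of hashing in multiobject tracking.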
05

FedMPS: Federated Learning in a Synergy of Multi-Level Prototype-Based Contrastive Learning and Soft Label Generation

research
Oct 6, 2025

FedMPS is a federated learning (FL, a technique where multiple computers train an AI model together without sharing raw data) framework that addresses performance problems caused by data heterogeneity (differences in data across participants). Instead of exchanging full model parameters, FedMPS transmits only prototypes (representative feature patterns) and soft labels (probability-based output predictions), which reduces communication costs and improves how well models learn from each other.

IEEE Xplore (Security & AI Journals)
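The prototype-exchange idea can be made concrete with a toy sketch: each client computes a per-class mean feature vector locally, and the server averages prototypes across clients instead of averaging full model weights. This is a simplified illustration of the general mechanism, not the FedMPS algorithm itself (which adds contrastive learning and soft labels on top):

```python
import numpy as np


def client_prototypes(features: np.ndarray, labels: np.ndarray) -> dict[int, np.ndarray]:
    """Per-class mean feature vector ('prototype') computed on one client.

    Only these small vectors leave the device; raw samples stay local.
    """
    return {int(c): features[labels == c].mean(axis=0) for c in np.unique(labels)}


def aggregate_prototypes(client_protos: list[dict[int, np.ndarray]]) -> dict[int, np.ndarray]:
    """Server side: average each class prototype across the clients that have it.

    Clients with heterogeneous data may hold different class subsets,
    so aggregation is done per class rather than assuming every client
    contributes every class.
    """
    merged: dict[int, list[np.ndarray]] = {}
    for protos in client_protos:
        for c, p in protos.items():
            merged.setdefault(c, []).append(p)
    return {c: np.mean(ps, axis=0) for c, ps in merged.items()}
```

Exchanging a handful of class vectors instead of millions of parameters is what yields the communication savings the abstract describes.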
06

Syntax-Oriented Shortcut: A Syntax Level Perturbing Algorithm for Preventing Text Data From Being Learned

research · security
Oct 6, 2025

Researchers created a method called UTE-SS (Unlearnable text examples generation via syntax-oriented shortcut) to protect text data from being used to train AI models without permission. The method adds small, hard-to-notice changes to text by altering its syntax (grammatical structure) so that language models learn misleading patterns instead of useful information, making the text data effectively useless for training.

IEEE Xplore (Security & AI Journals)
07

CVE-2025-61685: Mastra is a Typescript framework for building AI agents and assistants. Versions 0.13.8 through 0.13.20-alpha.0 are vuln

security
Oct 3, 2025

Mastra (a TypeScript framework for building AI agents and assistants) versions 0.13.8 through 0.13.20-alpha.0 have a directory traversal vulnerability, which means an attacker can bypass security checks to list files and folders in any directory on a user's computer, potentially exposing sensitive information. The flaw exists because while the code tries to prevent path traversal (unauthorized access to files through manipulated file paths) for reading files, a separate part of the code that suggests directories can be exploited to work around this protection.

Fix: This issue is fixed in version 0.13.20.

NVD/CVE Database
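The underlying mistake, as in many traversal bugs, is validating a path string without resolving it first. A minimal, hypothetical sketch of the kind of check involved (names and paths are illustrative; this is not Mastra's code): resolve `..` components, then confirm the result still sits inside the allowed root.

```python
from pathlib import Path


def is_within_root(requested: str, allowed_root: str) -> bool:
    """Resolve a requested path and confirm it stays inside the allowed root.

    Checking the raw string is not enough: 'sub/../../etc/passwd' passes a
    naive prefix test but escapes the root once '..' components resolve.
    Every code path that touches the filesystem (reads AND directory
    suggestions) must go through the same check, or one becomes a bypass.
    """
    if Path(requested).is_absolute():
        return False  # absolute paths would ignore the root entirely
    root = Path(allowed_root).resolve()
    target = (root / requested).resolve()
    return target == root or root in target.parents
```

The Mastra flaw illustrates the last comment: the read path enforced a check like this, but a separate directory-suggestion path did not, so the protection could be worked around.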
08

CVE-2025-59944: Cursor is a code editor built for programming with AI. Versions 1.6.23 and below contain case-sensitive checks in the wa

security
Oct 3, 2025

Cursor is a code editor designed for programming with AI help. Versions 1.6.23 and below have a security flaw where they use case-sensitive checks (checking uppercase and lowercase letters as different) to protect sensitive files, which allows attackers to use prompt injection (tricking the AI with hidden instructions) to modify these files and gain remote code execution (the ability to run commands on the victim's computer) on case-insensitive filesystems (systems that treat uppercase and lowercase letters the same).

Fix: This issue is fixed in version 1.7. Users should upgrade to version 1.7 or later.

NVD/CVE Database
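Why a case-sensitive check fails here can be shown in a few lines. On a case-insensitive filesystem (macOS and Windows defaults), `.App/Settings.JSON` and `.app/settings.json` are the same file, so an exact-string denylist misses the variant spelling. The file path below is hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical protected path; Cursor's actual protected files differ.
SENSITIVE = {".app/settings.json"}


def naive_is_protected(path: str) -> bool:
    """Case-sensitive comparison: '.App/Settings.JSON' slips past even
    though a case-insensitive filesystem opens the very same file."""
    return path in SENSITIVE


def robust_is_protected(path: str) -> bool:
    """Case-fold both sides before comparing, so spelling variants cannot
    bypass the check on case-insensitive filesystems."""
    return path.casefold() in {p.casefold() for p in SENSITIVE}
```

An attacker using prompt injection only needs the AI to write to the variant spelling; the naive check approves the write, and the filesystem delivers it to the protected file.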
09

CVE-2025-59829: Claude Code is an agentic coding tool. Versions below 1.0.120 failed to account for symlinks when checking permission de

security
Oct 3, 2025

Claude Code versions before 1.0.120 contained a flaw that let the tool bypass file access restrictions by following symlinks (shortcuts that point to other files). Even if a user blocked Claude Code from accessing a file, the tool could still read it through a symlink pointing at the blocked file.

Fix: Update Claude Code to version 1.0.120 or later. Users with automatic updates enabled will have received this fix automatically; users updating manually should upgrade to the latest version.

NVD/CVE Database
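The symlink bypass follows the same shape as the traversal and case-sensitivity bugs above: the deny list was compared against the path as written, not the file it actually points to. A minimal sketch of the fix, assuming a simple path-based deny list (the function name is illustrative):

```python
import os


def is_access_denied(path: str, denied: set[str]) -> bool:
    """Resolve symlinks before consulting the deny list.

    os.path.realpath follows any chain of symlinks to the real file, so a
    link pointing at a denied file is judged by its target. Comparing the
    symlink's own path instead would let it slip through, which is exactly
    the class of bypass fixed in Claude Code 1.0.120.
    """
    real = os.path.realpath(path)
    return real in {os.path.realpath(d) for d in denied}
```

Both sides are resolved so the check also holds when the deny-list entry itself sits behind a symlinked directory.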
10

CVE-2025-61593: Cursor is a code editor built for programming with AI. In versions 1.7 and below, a vulnerability in the way Cursor CLI

security
Oct 3, 2025

Cursor, a code editor designed for programming with AI, has a vulnerability in versions 1.7 and below where attackers can use prompt injection (tricking the AI by hiding instructions in its input) to modify sensitive configuration files and achieve remote code execution (RCE, where an attacker can run commands on a system they don't own). This vulnerability is especially dangerous on case-insensitive filesystems (systems that treat uppercase and lowercase letters as the same).

Fix: This issue is fixed in commit 25b418f, but has yet to be released as of October 3, 2025.

NVD/CVE Database
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026

critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026

critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026

critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026