aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,741 · Last 24 hours: 21 · Last 7 days: 162
Daily Briefing: Wednesday, April 1, 2026

- Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that Claude Code's source code (nearly 2,000 TypeScript files and over 512,000 lines of code) was accidentally exposed through an npm package containing a source map file, revealing internal features and creating security risks because attackers can study the system to bypass safeguards. Users who downloaded the affected version on March 31, 2026 may have received a trojanized (maliciously modified) copy containing malware.

- AI Discovers Zero-Days in Vim and GNU Emacs Within Minutes: Researcher Hung Nguyen used Anthropic's Claude Code to quickly discover zero-day exploits (previously unknown security flaws) in Vim and GNU Emacs that let attackers execute arbitrary code (run their own commands) by tricking users into opening malicious files; Claude Code generated working proof-of-concept attacks in minutes.

Latest Intel

01

CVE-2025-61592: Cursor is a code editor built for programming with AI. In versions 1.7 and below, automatic loading of project-specific configuration files can enable prompt injection leading to remote code execution.

security
Oct 3, 2025

Cursor, a code editor designed for AI-assisted programming, has a vulnerability in versions 1.7 and below where it automatically loads configuration files from project directories, which can be exploited by attackers. If a user runs Cursor's command-line tool (CLI) in a malicious repository, an attacker could use prompt injection (tricking the AI by hiding instructions in its input) combined with permissive settings to achieve remote code execution (the ability to run commands on the user's system without permission).
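Prompt injection of this kind typically rides in ordinary repository files (READMEs, configs, docs) that the editor feeds to the model as context. As a rough illustration only, a defensive pre-open scan might flag suspicious phrasing before a repository is trusted; the patterns and file types below are hypothetical heuristics, not Cursor's actual mitigation:

```python
import pathlib
import re

# Hypothetical red-flag phrases often seen in prompt-injection payloads.
SUSPICIOUS = re.compile(
    r"(ignore (all|previous) instructions|you are now|run the following command)",
    re.IGNORECASE,
)

def scan_repo(root):
    """Return paths of text-like files containing likely injection phrases."""
    hits = []
    for p in pathlib.Path(root).rglob("*"):
        if p.is_file() and p.suffix in {".md", ".txt", ".json", ".yaml", ".yml"}:
            try:
                text = p.read_text(errors="ignore")
            except OSError:
                continue
            if SUSPICIOUS.search(text):
                hits.append(str(p))
    return hits
```

A heuristic like this cannot catch obfuscated payloads, but it illustrates why untrusted repositories deserve review before an AI-enabled CLI runs inside them.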

Fix: The fix is available as patch 2025.09.17-25b418f. As of October 3, 2025, this patch has not yet been included in an official release version.

NVD/CVE Database

Daily Briefing (continued)

- Google Addresses Vertex AI Security Issues After Weaponization Demo: Palo Alto Networks disclosed security problems in Google Cloud Platform's Vertex AI (Google's service for building and deploying machine learning models) after researchers demonstrated how AI agents (autonomous programs that perform tasks with minimal human input) could be weaponized, prompting Google to begin addressing the disclosed issues.

- Meta Smartglasses Raise Privacy Concerns with Built-in AI Recording: Meta's smartglasses include a built-in camera and an AI assistant that can describe what the wearer sees and provide information, but they raise significant privacy concerns because they can record video of bystanders without their knowledge or consent.

Critical This Week (5 issues)

- critical · CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/ (NVD/CVE Database, Mar 31, 2026)
02

LlamaIndex v0.14.4

security
Oct 3, 2025

LlamaIndex released version 0.14.4 on September 24, 2025, with updates across multiple packages that integrate with different AI services and databases. Most updates fixed dependency issues with OpenAI libraries, while others added new features like support for Claude Sonnet 4.5 and structured outputs, and fixed bugs in areas like authorization headers and data fetching.

Fix: Update to version 0.14.4 and the corresponding versioned packages listed in the release notes (e.g., llama-index-llms-openai 0.6.1, llama-index-embeddings-text-embeddings-inference 0.4.2, llama-index-llms-ollama 0.7.4, and others) to receive the dependency fixes and bug fixes described.

LlamaIndex Security Releases
03

CVE-2025-61591: Cursor is a code editor built for programming with AI. In versions 1.7 and below, when MCP uses OAuth authentication, an attacker impersonating a trusted service can achieve remote code execution.

security
Oct 3, 2025

Cursor is a code editor that lets programmers work with AI assistance. In versions 1.7 and below, when using MCP (a system for connecting external tools to AI) with OAuth authentication (a login method), an attacker can trick Cursor into running malicious commands by pretending to be a trusted service, potentially giving them full control of the user's computer.

Fix: A patch is available at version 2025.09.17-25b418f. Users should update to this patched version to fix the vulnerability.

NVD/CVE Database
04

CVE-2025-61590: Cursor is a code editor built for programming with AI. Versions 1.6 and below are vulnerable to Remote Code Execution (RCE).

security
Oct 3, 2025

Cursor, a code editor designed for AI-assisted programming, has a critical vulnerability in versions 1.6 and below that allows remote code execution (RCE, where an attacker runs commands on your computer without permission). An attacker who gains control of the AI chat context (such as through a compromised MCP server, a tool that extends the AI's capabilities) can use prompt injection (tricking the AI by hiding malicious instructions in its input) to make Cursor modify workspace configuration files, bypassing an existing security protection and ultimately executing arbitrary code.

Fix: Update to version 1.7, which fixes this issue.

NVD/CVE Database
05

FedNK-RF: Federated Kernel Learning With Heterogeneous Data and Optimal Rates

research
Oct 3, 2025

This research paper proposes FedNK-RF, an algorithm for federated learning (a decentralized approach where multiple parties train AI models together while keeping their data private) that handles heterogeneous data (data that differs significantly across different sources). The algorithm uses random features and Nyström approximation (a mathematical technique that reduces computational errors) to improve accuracy while maintaining privacy protection, and the authors prove it achieves optimal performance rates.

IEEE Xplore (Security & AI Journals)
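The algorithm above builds on random-feature approximations of kernel methods. As a generic illustration of that building block (not the authors' FedNK-RF itself), a random Fourier feature map approximates the RBF kernel so that inner products of features stand in for expensive kernel evaluations:

```python
import numpy as np

def rff(X, n_features, gamma, rng):
    """Random Fourier features approximating k(x, y) = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # Frequencies sampled from the kernel's spectral density (a Gaussian).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
Z = rff(X, n_features=4000, gamma=0.5, rng=rng)

# Compare the feature-space inner products against the exact RBF kernel.
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
max_err = np.abs(Z @ Z.T - K_exact).max()
```

In a federated setting, each party can compute its feature matrix `Z` locally and share only aggregated statistics, which is what makes this family of approximations attractive for privacy-preserving kernel learning.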
06

CVE-2025-61589: Cursor is a code editor built for programming with AI. In versions 1.6 and below, Mermaid (a tool to render diagrams) allows embedded images to be displayed in the chat box, enabling data exfiltration via prompt injection.

security
Oct 3, 2025

Cursor, a code editor designed for programming with AI, has a vulnerability in versions 1.6 and below where Mermaid (a tool for rendering diagrams) can embed images that get displayed in the chat box. An attacker can exploit this through prompt injection (tricking the AI by hiding instructions in its input) to send sensitive information to an attacker-controlled server, or a malicious AI model might trigger this automatically.

Fix: This issue is fixed in version 1.7. Users should upgrade to version 1.7 or later.

NVD/CVE Database
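The exfiltration channel here is generic to any chat UI that renders remote images: an attacker-chosen image URL can carry secrets in its query string, and the act of fetching the image delivers them. A hedged sketch of one common countermeasure, an allowlist filter for markdown-style image links (illustrative only, not Cursor's 1.7 fix):

```python
import re
from urllib.parse import urlparse

# Matches markdown image syntax with an absolute http(s) URL.
IMG = re.compile(r"!\[[^\]]*\]\((https?://[^)\s]+)\)")

def strip_external_images(markdown, allowed_hosts=frozenset()):
    """Replace image links whose host is not on the allowlist."""
    def repl(m):
        if urlparse(m.group(1)).hostname in allowed_hosts:
            return m.group(0)
        return "[external image removed]"
    return IMG.sub(repl, markdown)
```

Blocking the fetch, rather than trying to detect the injected prompt, closes the channel regardless of how the malicious URL was produced.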
07

CVE-2025-59536: Claude Code is an agentic coding tool. Versions before 1.0.111 were vulnerable to Code Injection.

security
Oct 3, 2025

Claude Code (an AI tool that writes and runs code automatically) had a security flaw in versions before 1.0.111 where it could execute code from a project before the user confirmed they trusted the project. An attacker could exploit this by tricking a user into opening a malicious project directory.

Fix: Update Claude Code to version 1.0.111 or later. Users with auto-update enabled will have received this fix automatically; users performing manual updates should update to the latest version.

NVD/CVE Database
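The general pattern behind this class of fix, execute nothing from a project directory until the user has explicitly trusted it, can be sketched as a fail-closed allowlist gate. The file layout and names here are hypothetical, not Claude Code internals:

```python
import json
import os

def load_trusted(trust_file):
    """Read the set of trusted project paths; a missing file means nothing is trusted."""
    try:
        with open(trust_file) as f:
            return {os.path.realpath(p) for p in json.load(f)}
    except FileNotFoundError:
        return set()

def maybe_run_project_hooks(project_dir, trust_file, run_hooks):
    """Only invoke project-supplied code after an explicit trust decision."""
    if os.path.realpath(project_dir) not in load_trusted(trust_file):
        return False  # fail closed: prompt the user first, execute nothing
    run_hooks(project_dir)
    return True
```

The key property is ordering: the trust check happens before any project file can influence execution, which is exactly what the pre-1.0.111 bug got wrong.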
08

Privacy-Preserving Federated Learning Scheme With Mitigating Model Poisoning Attacks: Vulnerabilities and Countermeasures

security · research
Oct 2, 2025

Federated learning schemes (systems where multiple parties train AI models together while keeping data private) that use two servers for privacy protection were found to leak user data when facing model poisoning attacks (where malicious users deliberately corrupt the learning process). The researchers propose an enhanced framework called PBFL that uses Byzantine-robust aggregation (a method to safely combine data from untrusted sources), normalization checks, similarity measurements, and trapdoor fully homomorphic encryption (a technique for doing calculations on encrypted data without decrypting it) to protect privacy while defending against poisoning attacks.

Fix: The authors propose an enhanced privacy-preserving and Byzantine-robust federated learning (PBFL) framework that addresses the vulnerability. Key components include: a novel Byzantine-tolerant aggregation strategy with normalization judgment, cosine similarity computation, and adaptive user weighting; a dual-scoring trust mechanism and outlier suppression for detecting stealthy attacks; and two privacy-preserving subroutines (secure normalization judgment and secure cosine similarity measurement) that operate over encrypted gradients using a trapdoor fully homomorphic encryption scheme. According to theoretical analyses and experiments, this scheme guarantees security, convergence, and efficiency even with malicious users and one malicious server.

IEEE Xplore (Security & AI Journals)
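As a plaintext illustration of the cosine-similarity and normalization idea (the paper runs the equivalent checks over encrypted gradients; this sketch follows the general trust-score aggregation recipe popularized by FLTrust, not PBFL itself), a server can weight each client update by its clipped similarity to a trusted reference gradient:

```python
import numpy as np

def robust_aggregate(client_grads, server_grad, eps=1e-12):
    """Weight each client update by clipped cosine similarity to a trusted
    server gradient, and rescale accepted updates to the server's norm."""
    ref_norm = np.linalg.norm(server_grad)
    agg = np.zeros_like(server_grad, dtype=float)
    total = 0.0
    for g in client_grads:
        g = np.asarray(g, dtype=float)
        cos = float(g @ server_grad) / (np.linalg.norm(g) * ref_norm + eps)
        w = max(cos, 0.0)  # drop updates pointing away from the reference
        agg += w * g * (ref_norm / (np.linalg.norm(g) + eps))
        total += w
    return agg / (total + eps)
```

Normalizing each accepted update to the reference magnitude is what blunts scaling attacks, where a single poisoned client submits an enormous gradient to dominate a plain average.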
09

Data Aggregation Mechanisms With Dynamic Integrity Trustworthiness Evaluation Framework for Datacenters

research
Oct 2, 2025

This research proposes a data aggregation framework (a system for combining data from multiple sources) that evaluates how trustworthy different data sources are using dynamic Bayesian networks (a model that updates trust scores based on changing network behavior over time). The framework combines trust measurement with the minimum spanning tree protocol (an algorithm for efficient data routing) to improve how data centers process large amounts of information, achieving significant reductions in computational, communication, and storage costs.

IEEE Xplore (Security & AI Journals)
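Combining per-node trust scores with minimum-spanning-tree routing can be illustrated by penalizing links through poorly trusted nodes and then running a standard MST algorithm (Kruskal here). The weighting rule, cost divided by the lower endpoint trust, is an assumption for illustration, not the paper's exact formula:

```python
def kruskal_mst(n, edges):
    """edges: iterable of (weight, u, v). Returns (total_weight, chosen_edges)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, chosen = 0.0, []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
            chosen.append((u, v))
    return total, chosen

# Assumed rule: routing through well-trusted nodes is cheaper.
trust = [0.9, 0.5, 0.8, 0.95]
links = [(0, 1, 4.0), (0, 2, 2.0), (1, 2, 1.0), (2, 3, 3.0), (1, 3, 6.0)]
edges = [(cost / min(trust[u], trust[v]), u, v) for u, v, cost in links]
total, tree = kruskal_mst(4, edges)
```

Because trust scores in the framework are updated dynamically, the effective edge weights, and hence the aggregation tree, can shift as node behavior changes.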
10

An Algorithm for Persistent Homology Computation Using Homomorphic Encryption

research
Oct 1, 2025

This research presents a new method for performing topological data analysis (TDA, a technique that finds shape-based patterns in complex data) on encrypted information using homomorphic encryption (HE, a type of encryption that lets computers process data without decrypting it first). The authors adapted a fundamental TDA algorithm called boundary matrix reduction to work with encrypted data, proved it works correctly mathematically, and tested it using the OpenFHE framework to show it functions properly on real encrypted data.

IEEE Xplore (Security & AI Journals)
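The boundary-matrix reduction the authors adapt is, in the clear, a short column reduction over Z/2. A plaintext sketch of the standard persistence algorithm (the encrypted version in the paper replaces these set operations with homomorphic ones) applied to a filtered triangle:

```python
def low(col):
    """Index of the lowest (largest) nonzero row in a Z/2 column, or None."""
    return max(col) if col else None

def reduce_boundary(columns):
    """Standard persistence reduction: repeatedly add earlier columns (mod 2)
    until every nonzero column has a unique lowest entry."""
    cols = [set(c) for c in columns]
    pivot = {}  # lowest row index -> column that owns it
    for j, c in enumerate(cols):
        while c and low(c) in pivot:
            c ^= cols[pivot[low(c)]]  # symmetric difference == addition over Z/2
        if c:
            pivot[low(c)] = j
    return cols, pivot

# Filtered triangle: vertices 0-2, edges 3-5, filled face 6.
boundary = [set(), set(), set(), {0, 1}, {1, 2}, {0, 2}, {3, 4, 5}]
reduced, pairs = reduce_boundary(boundary)
```

The resulting pivot pairs encode the birth and death of topological features; here the third edge creates a loop that the face immediately fills in.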
- critical · CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_ (NVD/CVE Database, Mar 30, 2026)

- critical · CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis (NVD/CVE Database, Mar 27, 2026)

- critical · Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)

- critical · CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)