aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

[TOTAL_TRACKED]
3,710
[LAST_24H]
1
[LAST_7D]
1
Daily BriefingSunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

page 240/371
VIEW ALL
01

CVE-2025-64107: Cursor is a code editor built for programming with AI. In versions 1.7.52 and below, manipulating internal settings may

security
Nov 4, 2025

Cursor, a code editor designed for AI-assisted programming, had a security flaw in versions 1.7.52 and below where attackers could bypass safety checks on Windows machines. While the software blocked path manipulation (tricks to access files in unintended ways) using forward slashes and required human approval, the same trick using backslashes was not detected, potentially allowing an attacker with prompt injection access (hidden malicious instructions in AI inputs) to run arbitrary code and overwrite important files without permission.

Fix: This issue is fixed in version 2.0.

NVD/CVE Database
02

CVE-2025-64320: Improper Neutralization of Input Used for LLM Prompting vulnerability in Salesforce Agentforce Vibes Extension allows Co

security
Nov 4, 2025

CVE-2025-64320 is a code injection vulnerability in Salesforce Agentforce Vibes Extension that occurs because the software doesn't properly filter user input before sending it to an LLM (large language model), allowing attackers to inject malicious code. The vulnerability affects versions before 3.2.0 of the extension.

Fix: Update Salesforce Agentforce Vibes Extension to version 3.2.0 or later.

NVD/CVE Database
03

CVE-2025-10875: Improper Neutralization of Input Used for LLM Prompting vulnerability in Salesforce Mulesoft Anypoint Code Builder allow

security
Nov 4, 2025

CVE-2025-10875 is a vulnerability in Salesforce Mulesoft Anypoint Code Builder that allows improper neutralization of input used for LLM prompting (a technique where attackers manipulate AI system instructions through user input), leading to code injection (inserting malicious code into a system). This vulnerability affects versions of the software before 1.11.6.

Fix: Update Mulesoft Anypoint Code Builder to version 1.11.6 or later.

NVD/CVE Database
04

CVE-2025-12695: The overly permissive sandbox configuration in DSPy allows attackers to steal sensitive files in cases when users build

security
Nov 4, 2025

CVE-2025-12695 is a vulnerability in DSPy (a framework for building AI agents) where an overly permissive sandbox configuration (a restricted environment meant to limit what code can do) allows attackers to steal sensitive files when users build an AI agent that takes user input and uses the PythonInterpreter class (a tool that runs Python code). The vulnerability stems from improper isolation, meaning the sandbox doesn't adequately separate the untrusted code from the rest of the system.

NVD/CVE Database
05

CVE-2025-12156: The Ai Auto Tool Content Writing Assistant (Gemini Writer, ChatGPT ) All in One plugin for WordPress is vulnerable to un

security
Nov 4, 2025

A WordPress plugin called 'Ai Auto Tool Content Writing Assistant' (versions 2.0.7 to 2.2.6) has a security flaw where it doesn't properly check user permissions before allowing the save_post_data() function (a feature that stores post information) to run. This means even low-level users (Subscriber level and above) can create and publish posts they shouldn't be able to, allowing unauthorized modification of website content.

NVD/CVE Database
06

v0.14.7

industry
Oct 30, 2025

LlamaIndex released version 0.14.7 and several component updates that add new features and fix bugs across the platform. Key updates include integrations with tool-calling features for multiple AI models (Anthropic, Mistral, Ollama), new support for GitHub App authentication, and fixes for failing tests and documentation issues. These changes improve how LlamaIndex connects to different AI services and external tools.

LlamaIndex Security Releases
07

CVE-2025-12060: The keras.utils.get_file API in Keras, when used with the extract=True option for tar archives, is vulnerable to a path

security
Oct 30, 2025

Keras, a machine learning library, has a vulnerability in its keras.utils.get_file function when extracting tar archives (compressed file collections). An attacker can create a malicious tar file with special symlinks (shortcuts to files) that, when extracted, writes files anywhere on the system instead of just the intended folder, giving them unauthorized access to overwrite important system files.

Fix: Upgrade Keras to version 3.12 or later. The source notes that upgrading Python alone (even to versions like Python 3.13.4 that fix the underlying CVE-2025-4517 vulnerability) is not sufficient; the Keras upgrade is also required.

NVD/CVE Database
08

MaxDiv: Zero-Shot Machine Unlearning via Distributionally Divergent Erasing Samples

researchprivacy
Oct 30, 2025

This article presents MaxDiv, a technique for machine unlearning, which is the process of removing specific knowledge from an AI model after training to protect privacy, even when the original training data is no longer available. MaxDiv works by creating special synthetic data samples that have opposite characteristics to the data being forgotten, and it uses knowledge distillation (a technique where a model learns to replicate another model's behavior) to ensure important information isn't accidentally lost during the unlearning process.

IEEE Xplore (Security & AI Journals)
09

CVE-2025-11203: LiteLLM Information health API_KEY Information Disclosure Vulnerability. This vulnerability allows remote attackers to d

security
Oct 29, 2025

LiteLLM, a tool that helps developers use different AI models through one interface, has a vulnerability where the health endpoint (a checking tool that monitors system status) improperly exposes API_KEY information (secret credentials used to authenticate requests) to attackers who are already authenticated. An attacker with access could steal these stored credentials and use them to compromise the system further.

NVD/CVE Database
10

CVE-2025-11201: MLflow Tracking Server Model Creation Directory Traversal Remote Code Execution Vulnerability. This vulnerability allows

security
Oct 29, 2025

MLflow Tracking Server contains a directory traversal (a vulnerability where an attacker uses special path characters like '../' to access files outside the intended directory) vulnerability that allows unauthenticated attackers to execute arbitrary code on the server. The flaw stems from insufficient validation of file paths when handling model creation, letting attackers run commands with the privileges of the service account running MLflow.

NVD/CVE Database
Prev1...238239240241242...371Next