aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

CVE-2025-11972: The Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI plugin for WordPress is vulnerable to SQL Injection

security
Nov 8, 2025

A WordPress plugin called Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI has a SQL injection vulnerability (a security flaw where attackers can insert harmful database commands into the plugin's queries) in versions up to 3.40.0. Attackers with Editor-level access or higher can exploit the 'post_types' parameter to extract sensitive information from the website's database, because the plugin doesn't sanitize and escape user input before using it in SQL queries.

NVD/CVE Database
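A minimal Python sketch of the vulnerability class (the plugin itself is PHP, and the table and column names here are invented for illustration): interpolating a request parameter into SQL versus binding it as a parameter.

```python
import sqlite3

def find_posts_unsafe(conn, post_types):
    # Vulnerable pattern: the request parameter is pasted into the query
    # string, so a value like "post') UNION SELECT login, pw FROM users--"
    # is parsed as SQL, not as data.
    return conn.execute(
        f"SELECT id FROM posts WHERE type IN ('{post_types}')"
    ).fetchall()

def find_posts_safe(conn, post_types):
    # Parameterized query: one "?" placeholder per value; the driver binds
    # each value, so it can never change the query's structure.
    placeholders = ",".join("?" for _ in post_types)
    return conn.execute(
        f"SELECT id FROM posts WHERE type IN ({placeholders})",
        list(post_types),
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, type TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)", [(1, "post"), (2, "page")])
print(find_posts_safe(conn, ["post"]))  # [(1,)]
```

With the parameterized version, an injection payload passed as a value simply matches no rows instead of rewriting the query.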
02

ATLAS Data v5.1.0

security, research
Nov 6, 2025

ATLAS Data v5.1.0 is an updated framework that documents security threats and defenses related to AI systems, now containing 16 tactics, 84 techniques, and 32 mitigations. The update adds new attack methods targeting AI, such as prompt injection (tricking an AI by hiding instructions in its input), deepfake generation, and data theft from AI services, along with new defensive measures like human oversight of AI agent actions and restricted permissions for AI tools. It also includes 42 real-world case studies showing how these attacks and defenses apply in practice.

MITRE ATLAS Releases
03

CVE-2025-12488: oobabooga text-generation-webui trust_remote_code Reliance on Untrusted Inputs Remote Code Execution Vulnerability

security
Nov 6, 2025

A vulnerability in oobabooga text-generation-webui (CVE-2025-12488) allows attackers to execute arbitrary code (running any commands they want on a system) by exploiting the trust_remote_code parameter in the load endpoint. The flaw occurs because the software doesn't properly validate user input before using it to load a model, and no authentication is required to exploit it.

NVD/CVE Database
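The helper, endpoint shape, and allow-list below are invented for illustration, not text-generation-webui's actual API; the sketch shows the general fix for this class of flaw: derive `trust_remote_code` from server-side policy instead of honoring the request body.

```python
# Allow-list of model repos permitted to ship executable loader code —
# an assumed server-side policy, not a real text-generation-webui setting.
TRUSTED_REPOS = {"org/approved-model"}

def safe_load_args(request_json):
    """Build model-loader kwargs from an untrusted request body.

    The vulnerable pattern is forwarding request_json["trust_remote_code"]
    straight to the loader: with trust_remote_code=True, loading a model
    executes Python files shipped inside the model repository.
    """
    model = request_json.get("model", "")
    # Ignore whatever flag the client sent; compute it from server policy.
    return {"model": model, "trust_remote_code": model in TRUSTED_REPOS}

print(safe_load_args({"model": "evil/repo", "trust_remote_code": True}))
# {'model': 'evil/repo', 'trust_remote_code': False}
```

Authentication on the endpoint is still needed; this only removes the client's ability to opt the server into executing repo code.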
04

CVE-2025-12487: oobabooga text-generation-webui trust_remote_code Reliance on Untrusted Inputs Remote Code Execution Vulnerability

security
Nov 6, 2025

A vulnerability in oobabooga text-generation-webui allows attackers to run arbitrary code (unauthorized commands) on the system without needing to log in. The flaw occurs because the software doesn't properly check user input for the trust_remote_code parameter before using it to load a model, letting attackers execute code with the same permissions as the service.

NVD/CVE Database
05

CVE-2025-62039: Insertion of Sensitive Information Into Sent Data vulnerability in Ays Pro AI ChatBot with ChatGPT and Content Generator

security
Nov 6, 2025

A vulnerability in Ays Pro AI ChatBot with ChatGPT and Content Generator (version 2.6.6 and earlier) allows sensitive information to be exposed when data is sent. The flaw is classified as CWE-201 (insertion of sensitive information into sent data), meaning attackers could potentially retrieve embedded sensitive data from the plugin.

NVD/CVE Database
06

FUBA: Backdoor Federated Learning via Federated Unlearning

security, research
Nov 6, 2025

Researchers discovered a new attack called FUBA (federated unlearning backdoor attack) that exploits a privacy feature in federated learning (a technique where multiple parties train an AI model together without sharing their raw data). The attack uses malicious unlearning requests, which are supposed to let participants remove their data from a trained model, to secretly inject backdoors (hidden harmful behaviors) into the model instead. The attack is difficult to detect because it hides from existing security defenses.

IEEE Xplore (Security & AI Journals)
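FUBA is explicitly designed to evade realistic defenses, so the following is only a toy illustration of the kind of server-side sanity check involved, with all names and the threshold invented: reject an unlearning result whose effect on the global model is implausibly large, since removing one participant's data should nudge weights, not rewrite them.

```python
def l2(v):
    # Euclidean norm of a flat weight vector.
    return sum(x * x for x in v) ** 0.5

def vet_unlearning_update(global_w, proposed_w, max_rel_change=0.1):
    """Accept the post-unlearning weights only if they moved the global
    model by a small relative amount. Deliberately coarse — a real
    backdoor defense would inspect behavior, not just magnitude."""
    delta = [p - g for p, g in zip(proposed_w, global_w)]
    return l2(delta) <= max_rel_change * l2(global_w)

print(vet_unlearning_update([1.0, 2.0], [1.01, 1.98]))  # True  (plausible)
print(vet_unlearning_update([1.0, 2.0], [3.0, -1.0]))   # False (suspicious)
```

The paper's point is precisely that checks like this are insufficient, which is why the attack hides from existing defenses.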
07

CVE-2025-12360: The Better Find and Replace – AI-Powered Suggestions plugin for WordPress is vulnerable to unauthorized API usage

security
Nov 6, 2025

The Better Find and Replace plugin for WordPress (versions up to 1.7.7) has a security flaw where a function called rtafar_ajax() doesn't properly check user permissions, allowing low-level authenticated users (Subscriber-level access) to trigger OpenAI API key usage and consume quota, potentially costing money. This happens because the code is missing a capability check (a permission verification system that controls what users can do).

NVD/CVE Database
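A Python sketch of the missing pattern (WordPress plugins are PHP, where `current_user_can()` is the real gate; every name below is illustrative): refuse to run a quota-consuming handler unless the caller holds the required capability.

```python
import functools

def require_capability(cap):
    """Decorator standing in for a WordPress capability check: the
    handler runs only if the calling user holds the capability."""
    def wrap(handler):
        @functools.wraps(handler)
        def checked(user, *args, **kwargs):
            if cap not in user.get("capabilities", set()):
                raise PermissionError(f"requires {cap}")
            return handler(user, *args, **kwargs)
        return checked
    return wrap

@require_capability("manage_options")
def call_openai_autotag(user, text):
    # Spends the site owner's OpenAI API quota — admins only.
    return f"tagging {len(text)} chars"

admin = {"capabilities": {"manage_options"}}
print(call_openai_autotag(admin, "hello"))  # tagging 5 chars
```

The CVE exists because the AJAX handler skipped this gate, so any Subscriber-level account could burn the site owner's API quota.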
08

Modifying AI Under the EU AI Act: Lessons from Practice on Classification and Compliance

policy
Nov 5, 2025

Under the EU AI Act, organizations that modify existing AI systems or general-purpose AI models (GPAI models, which are foundational AI systems designed to perform many different tasks) may become legally classified as "providers" and face significant compliance responsibilities. The article explains that modifications triggering higher compliance burdens typically involve high-risk AI systems or substantial changes to a GPAI model's capabilities or generality, such as fine-tuning (customizing a model for specific tasks). Proper assessment of whether a modification triggers provider status is critical, since misclassification can result in fines up to €15 million or 3% of global annual revenue.

EU AI Act Updates
09

CVE-2025-64110: Cursor is a code editor built for programming with AI. In versions 1.7.23 and below, a logic bug allows a malicious agent to bypass cursorignore protections

security
Nov 4, 2025

Cursor, a code editor designed for programming with AI, has a logic bug in versions 1.7.23 and below that allows attackers to bypass cursorignore (a file that protects sensitive files from being read). An attacker who has already performed prompt injection (tricking an AI by hiding instructions in its input) or controls a malicious AI model could create a new cursorignore file to override existing protections and access protected files.

Fix: Update to version 2.0, where this issue is fixed.

NVD/CVE Database
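A minimal sketch of the design lesson, assuming invented names (this is not Cursor's implementation): a write gate for model-driven edits must protect the ignore file itself at least as strongly as the files it protects, or a prompt-injected agent can simply rewrite the rules.

```python
from pathlib import PurePosixPath

# Names of the rule files themselves — illustrative, not Cursor's full list.
PROTECTED_NAMES = {".cursorignore", ".cursorindexingignore"}

def agent_write_allowed(path, ignored_globs):
    """Gate for agent-initiated file writes: block writes both to paths
    matched by the ignore rules and to any ignore file itself."""
    p = PurePosixPath(path)
    if p.name in PROTECTED_NAMES:   # never let the agent edit the rules
        return False
    return not any(p.match(g) for g in ignored_globs)

print(agent_write_allowed("src/app.py", ["secrets/*"]))         # True
print(agent_write_allowed("sub/.cursorignore", ["secrets/*"]))  # False
```

Without the first check, an agent denied access to `secrets/*` could still create a fresh ignore file that drops that rule, which is the shape of the logic bug described above.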
10

CVE-2025-64108: Cursor is a code editor built for programming with AI. In versions 1.7.44 and below, various NTFS path quirks allow file-protection rules to be bypassed

security
Nov 4, 2025

Cursor, a code editor designed for AI-assisted programming, has a vulnerability in versions 1.7.44 and below where attackers can exploit NTFS path quirks (special behaviors of Windows file systems) to bypass file protection rules and overwrite files that normally require human approval, potentially leading to RCE (remote code execution, where an attacker can run commands on a system they don't own). This attack requires chaining with prompt injection (tricking an AI by hiding instructions in its input) or a malicious AI model, and only affects Windows systems using NTFS.

Fix: This issue is fixed in version 2.0. Users should upgrade to version 2.0 or later.

NVD/CVE Database
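A sketch of the quirk class, assuming workspace-relative paths and invented function names (a real product should ask the OS to resolve the final path rather than normalize strings itself): Win32 strips trailing dots and spaces, `file.txt::$DATA` is an alternate-data-stream name for `file.txt`, and NTFS matching is case-insensitive, so protection rules compared against the raw string can be dodged.

```python
def canonicalize_ntfs(name):
    """Collapse common NTFS/Win32 path quirks before matching a
    workspace-relative path against protection rules:
      - "app.exe. " opens app.exe (trailing dots/spaces stripped);
      - "notes.txt::$DATA" addresses the same bytes as notes.txt;
      - comparisons are case-insensitive.
    """
    if ":" in name:                 # drop an ADS suffix like ::$DATA
        name = name.split(":", 1)[0]
    parts = name.replace("\\", "/").split("/")
    return "/".join(p.rstrip(" .").lower() for p in parts)

print(canonicalize_ntfs("Config\\app.EXE. "))  # config/app.exe
print(canonicalize_ntfs("notes.txt::$DATA"))   # notes.txt
```

A rule engine that canonicalizes before matching treats all of these spellings as the same protected file, closing the "different string, same file" gap the CVE describes.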