AI Sec Watch

The security intelligence platform for AI teams

Real-time AI security monitoring: tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Maintained by Truong (Jack) Luu, Information Systems Researcher.

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an information systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

Page 233 of 371
01

CVE-2025-62994: Insertion of Sensitive Information Into Sent Data vulnerability in WP Messiah WP AI CoPilot ai-co-pilot-for-wp allows retrieval of embedded sensitive data

security
Dec 9, 2025

CVE-2025-62994 is a vulnerability in WP AI CoPilot (a WordPress plugin that adds AI assistance to WordPress sites) version 1.2.7 and earlier: sensitive information is inadvertently included in data the plugin sends, allowing attackers to retrieve embedded sensitive data that should not be exposed.

NVD/CVE Database
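The advisory doesn't name the leaking field, but the weakness class (CWE-201, sensitive information inserted into sent data) is easy to illustrate. A minimal Python sketch with entirely hypothetical field names, showing the leaky pattern and an allow-list fix:

```python
# Hypothetical sketch of the CWE-201 pattern behind CVE-2025-62994;
# field names are illustrative, not taken from the actual plugin.
import json

SENSITIVE_KEYS = {"api_key", "openai_token", "secret"}

def build_payload(settings: dict, prompt: str) -> str:
    # Vulnerable pattern: serializing the whole settings object leaks
    # any credentials stored alongside benign options.
    return json.dumps({"prompt": prompt, "settings": settings})

def build_payload_redacted(settings: dict, prompt: str) -> str:
    # Safer pattern: strip (or allow-list) sensitive fields before sending.
    safe = {k: v for k, v in settings.items() if k not in SENSITIVE_KEYS}
    return json.dumps({"prompt": prompt, "settings": safe})

if __name__ == "__main__":
    cfg = {"model": "gpt-4", "api_key": "sk-...", "temperature": 0.7}
    print(build_payload(cfg, "hi"))           # leaks api_key
    print(build_payload_redacted(cfg, "hi"))  # api_key removed
```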
02

AdaptiveShield: Dynamic Defense Against Decentralized Federated Learning Poisoning Attacks

security, research
Dec 9, 2025

Federated learning (a system where decentralized devices train a shared AI model together while keeping their data local) is vulnerable to poisoning attacks, in which malicious participants inject false data to corrupt the final model. This paper proposes AdaptiveShield, a defense that uses dynamic detection strategies to identify attackers, automatically adjusts its sensitivity thresholds to handle different attack types, limits the damage from missed attackers by adjusting hyperparameters (settings that control how the model learns), and hides user identities through a shuffling mechanism to protect privacy.

Fix: AdaptiveShield employs: (1) dynamic detection strategies that assess maliciousness and dynamically adjust detection thresholds to adapt to various attack scenarios; (2) dynamic hyperparameter adjustment to minimize negative impact from missed attackers and enhance robustness; and (3) a hierarchical shuffle mechanism to dissociate user identities from their uploaded local models, providing privacy protection.

IEEE Xplore (Security & AI Journals)
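The paper's detection pipeline is more involved, but the core move of component (1), scoring client updates each round and adapting the cut-off to the observed spread, can be sketched in a few lines. The median-distance score and mean-plus-k-sigma threshold below are illustrative stand-ins, not AdaptiveShield's actual statistics:

```python
# Minimal sketch of dynamic-threshold poisoning detection in federated
# learning; an illustration of the idea, not the paper's algorithm.
import numpy as np

def filter_updates(updates: list[np.ndarray], k: float = 2.0):
    # Score each client update by its distance to the coordinate-wise median.
    stacked = np.stack(updates)
    median = np.median(stacked, axis=0)
    scores = np.linalg.norm(stacked - median, axis=1)
    # Dynamic threshold: adapts each round to the spread of observed scores.
    thresh = scores.mean() + k * scores.std()
    kept = [u for u, s in zip(updates, scores) if s <= thresh]
    return kept, scores, thresh

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign = [rng.normal(0, 1, 100) for _ in range(9)]
    poisoned = [rng.normal(8, 1, 100)]   # a crude model-poisoning update
    kept, scores, t = filter_updates(benign + poisoned)
    print(f"kept {len(kept)}/10 updates, threshold={t:.2f}")
```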
03

Enhancing the Security of Large Character Set CAPTCHAs Using Transferable Adversarial Examples

research, security
Dec 9, 2025

Deep learning attacks have successfully cracked CAPTCHAs (automated tests that distinguish humans from bots) that use large character sets, especially those with alphabets from languages like Chinese. This paper proposes ACG (Adversarial Large Character Set CAPTCHA Generation), a framework that makes CAPTCHAs harder to attack by adding adversarial perturbations (intentional distortions that confuse AI recognition systems) through two modules: one that prevents character recognition and another that adds global visual noise, reducing attack success rates from 51.52% to 2.56%.

Fix: The paper proposes ACG (Adversarial Large Character Set CAPTCHA Generation) as a defense framework. According to the source, ACG uses 'a Fine-grained Generation Module, combining three novel strategies to prevent attackers from recognizing characters, and an Ensemble Generation Module to generate global perturbations in CAPTCHAs' to strengthen defense against recognition attacks and improve robustness against diverse detection architectures.

IEEE Xplore (Security & AI Journals)
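The basic mechanism ACG builds on, adding a small signed-gradient perturbation so a recognizer's score for the true character drops while the image stays legible to humans, can be shown with a toy linear classifier. Everything below is illustrative; the paper's two modules are far more elaborate:

```python
# Toy FGSM-style perturbation degrading an automated CAPTCHA recognizer.
import numpy as np

def fgsm_perturb(image, weights, true_label, eps=0.05):
    # For a linear classifier, the gradient of the true-class score w.r.t.
    # the input is just that class's weight vector, so stepping against its
    # sign lowers the score while keeping the change visually small.
    adv = image - eps * np.sign(weights[true_label])
    return np.clip(adv, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random(784)              # flattened 28x28 "character"
    W = rng.normal(size=(1000, 784))   # large character set: 1000 classes
    label = 42
    adv = fgsm_perturb(img, W, label)
    print("true-class score drop:", W[label] @ img - W[label] @ adv)
```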
04

Versatile Backdoor Attack With Visible, Semantic, Sample-Specific and Compatible Triggers

security, research
Dec 9, 2025

Researchers developed a new method for backdoor attacks (techniques that manipulate AI systems to behave in specific ways when exposed to hidden trigger patterns) that works better in real-world physical scenarios. The method, called VSSC triggers (Visible, Semantic, Sample-specific, and Compatible), uses large language models, generative models, and vision-language models in an automated pipeline to create stealthy triggers that can survive visual distortions and be deployed using real objects, making physical backdoor attacks more practical and systematic than manual methods.

IEEE Xplore (Security & AI Journals)
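For context, the classic patch-trigger version of backdoor poisoning fits in a few lines of Python. VSSC's contribution is replacing this crude stamped patch with semantic, sample-specific triggers generated by LLMs and generative models, so treat this only as a sketch of the attack class:

```python
# Minimal sketch of backdoor data poisoning with a visible patch trigger.
import numpy as np

def poison(images: np.ndarray, labels: np.ndarray, target: int, rate: float = 0.1):
    # Stamp a bright patch (the trigger) onto a fraction of images and
    # relabel them so the model learns trigger -> target class.
    poisoned = images.copy()
    new_labels = labels.copy()
    n = int(len(images) * rate)
    idx = np.random.default_rng(0).choice(len(images), n, replace=False)
    poisoned[idx, -4:, -4:] = 1.0   # 4x4 visible trigger in the corner
    new_labels[idx] = target
    return poisoned, new_labels

if __name__ == "__main__":
    imgs = np.zeros((100, 28, 28))
    labs = np.zeros(100, dtype=int)
    p_imgs, p_labs = poison(imgs, labs, target=7)
    print("poisoned samples:", (p_labs == 7).sum())
```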
05

Test-Time Correction: An Online 3D Detection System via Visual Prompting

research
Dec 9, 2025

This paper presents Test-Time Correction (TTC), a system that helps autonomous vehicles fix detection errors while driving, rather than waiting for retraining. TTC uses an Online Adapter module with visual prompts (image-based descriptions of objects derived from feedback like mismatches or user clicks) to continuously correct mistakes in real-time, allowing vehicles to adapt to new situations and improve safety without stopping to retrain the system.

IEEE Xplore (Security & AI Journals)
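As a rough mental model only (not the paper's architecture), test-time correction can be pictured as a buffer of "visual prompts" gathered from feedback and matched against new frames. The feature-vector version below is a deliberately simplified sketch:

```python
# Simplified sketch: feedback about missed objects is stored as "visual
# prompts" (plain feature vectors here) and matched against future frames.
# The real TTC system works on 3D detections with a learned Online Adapter.
import numpy as np

class PromptBuffer:
    def __init__(self, threshold: float = 0.9):
        self.prompts: list[np.ndarray] = []
        self.threshold = threshold

    def add_feedback(self, feature: np.ndarray):
        # e.g., derived from a user click on a missed object
        self.prompts.append(feature / np.linalg.norm(feature))

    def correct(self, region_features: list[np.ndarray]) -> list[bool]:
        # Flag regions whose features match a stored prompt.
        flags = []
        for f in region_features:
            f = f / np.linalg.norm(f)
            flags.append(any(float(p @ f) > self.threshold for p in self.prompts))
        return flags

if __name__ == "__main__":
    buf = PromptBuffer()
    buf.add_feedback(np.array([1.0, 0.0, 0.0]))   # user marks a missed object
    frames = [np.array([0.99, 0.05, 0.0]), np.array([0.0, 1.0, 0.0])]
    print(buf.correct(frames))                     # [True, False]
```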
06

A Unified Decision Rule for Generalized Out-of-Distribution Detection

research, safety
Dec 9, 2025

This research paper addresses generalized out-of-distribution detection (OOD detection, where an AI system identifies inputs that are very different from its training data), which is important for AI systems used in safety-critical applications. Rather than focusing on designing better scoring functions, the authors propose a new decision rule, the generalized Benjamini-Hochberg procedure, that uses hypothesis testing (a statistical method for making decisions about data) to determine whether an input is out-of-distribution, and they prove this method controls false positive rates better than traditional threshold-based approaches.

IEEE Xplore (Security & AI Journals)
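The classical Benjamini-Hochberg procedure the paper generalizes is compact enough to state in code. A sketch assuming conformal-style p-values computed against in-distribution calibration scores (the paper's generalized variant differs in its details):

```python
# Benjamini-Hochberg decision rule over OOD p-values.
import numpy as np

def ood_p_values(test_scores, calib_scores):
    # p-value: fraction of in-distribution calibration scores at least as
    # extreme as the test score (higher score = more OOD-like here).
    calib = np.sort(calib_scores)
    ranks = np.searchsorted(calib, test_scores)
    return (len(calib) - ranks + 1) / (len(calib) + 1)

def benjamini_hochberg(p, alpha=0.05):
    # Reject H0 ("in-distribution") for the largest k with p_(k) <= k*alpha/m.
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresh
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject   # True = flagged as out-of-distribution

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    calib = rng.normal(0, 1, 1000)   # in-distribution calibration scores
    test = np.concatenate([rng.normal(0, 1, 50), rng.normal(4, 1, 10)])
    flags = benjamini_hochberg(ood_p_values(test, calib))
    print("flagged OOD:", flags.sum())
```

Unlike a fixed score threshold, the cut-off here scales with how many discoveries are being made, which is what gives the procedure its false-positive-rate guarantee.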
07

Side-Channel Analysis Based on Multiple Leakage Models Ensemble

research, security
Dec 8, 2025

This research proposes a new framework for side-channel analysis (SCA, a type of attack that exploits physical information like power consumption or timing to break cryptography) by combining multiple different leakage models (ways of measuring how a cryptographic device leaks secrets) using ensemble learning (combining many weaker models into one stronger one). The framework improves how well attackers can recover secret keys by using deep learning with complementary information from different measurement approaches, and the authors prove mathematically that their ensemble model gets closer to the true secret distribution.

IEEE Xplore (Security & AI Journals)
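The simplest form of the idea, combining per-trace key-byte likelihoods from several leakage models by summing log-probabilities, looks like this; the paper learns the combination with deep models rather than averaging, so this is a baseline sketch only:

```python
# Ensembling leakage models in a side-channel attack: each model outputs a
# probability distribution over the 256 candidate key bytes per trace, and
# the ensemble sums log-likelihoods across models and traces.
import numpy as np

def ensemble_key_rank(model_probs: list[np.ndarray], true_key: int) -> int:
    # model_probs[i] has shape (n_traces, 256): per-trace key-byte
    # probabilities from leakage model i (e.g., Hamming weight vs. identity).
    log_scores = sum(np.log(p + 1e-12).sum(axis=0) for p in model_probs)
    ranking = np.argsort(log_scores)[::-1]
    return int(np.where(ranking == true_key)[0][0])  # rank 0 = key recovered

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Two toy "models", each slightly biased toward the true key byte 0x2A.
    def toy(bias):
        p = rng.random((500, 256))
        p[:, 0x2A] += bias
        return p / p.sum(axis=1, keepdims=True)
    print("ensemble rank of true key:", ensemble_key_rank([toy(0.05), toy(0.05)], 0x2A))
```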
08

CVE-2025-13922: The Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI plugin for WordPress is vulnerable to time-based blind SQL injection

security
Dec 6, 2025

A WordPress plugin called AI Autotagger with OpenAI has a security flaw called time-based blind SQL injection (a technique where attackers sneak extra database commands into legitimate queries by exploiting how the software processes user input) in versions up to 3.40.1. Attackers with contributor-level access or higher can use this flaw to steal sensitive data from the database, slow down the website, or extract information through time-delay tricks.

NVD/CVE Database
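The plugin is PHP, but the vulnerable pattern and the standard fix, parameterized queries, are language-agnostic. A Python/sqlite3 illustration:

```python
# Time-based blind SQLi arises from string-built queries; parameter
# binding keeps user input as data, never as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (id INTEGER, name TEXT)")

def vulnerable(tag: str):
    # String concatenation lets input like
    #   "x' OR (SELECT CASE WHEN ... THEN <slow expression> END)--"
    # smuggle time-consuming subqueries into the statement, so an attacker
    # can read data bit by bit from response delays.
    return conn.execute(f"SELECT id FROM tags WHERE name = '{tag}'").fetchall()

def safe(tag: str):
    # Placeholder binding: the driver escapes the value, injection is inert.
    return conn.execute("SELECT id FROM tags WHERE name = ?", (tag,)).fetchall()

print(safe("ai' OR '1'='1"))   # returns [], the injection attempt is inert
```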
09

CVE-2025-34291: Langflow versions up to and including 1.6.9 contain a chained vulnerability that enables account takeover and remote code execution

security
Dec 5, 2025

Langflow versions up to 1.6.9 have a chained vulnerability that allows attackers to take over user accounts and run arbitrary code on the system. The flaw combines two misconfigurations: overly permissive CORS settings (CORS, or cross-origin resource sharing, is the browser mechanism that controls which external origins may read a site's responses) that accept requests from any origin with credentials, and refresh token cookies (tokens used to obtain new access credentials) set to SameSite=None, which lets a malicious webpage steal valid tokens and impersonate a victim.

NVD/CVE Database
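Langflow is built on FastAPI, so the shape of the fix is standard CORS-middleware and cookie configuration. The snippet below is an illustrative hardening sketch, not Langflow's actual code or its patched settings:

```python
# Hedged sketch of avoiding the two misconfigurations in a FastAPI app.
from fastapi import FastAPI, Response
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Vulnerable combination: wildcard origins together with credentials lets a
# malicious page read authenticated responses cross-origin. Use an explicit
# origin allow-list instead of allow_origins=["*"] when credentials are on.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://app.example.com"],  # hypothetical trusted origin
    allow_credentials=True,
    allow_methods=["GET", "POST"],
    allow_headers=["Authorization", "Content-Type"],
)

@app.post("/login")
def login(response: Response):
    # SameSite=None lets cross-site pages send the cookie; "lax"/"strict"
    # plus Secure and HttpOnly breaks the token-theft chain described above.
    response.set_cookie(
        "refresh_token", "example-token",
        httponly=True, secure=True, samesite="lax",
    )
    return {"ok": True}
```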
10

Homophily Edge Augment Graph Neural Network for High-Class Homophily Variance Learning

research
Dec 5, 2025

Graph Neural Networks (GNNs, machine learning models that work with interconnected data) perform poorly at detecting anomalies in graphs because of high Class Homophily Variance (CHV), meaning some node types cluster together while others scatter. The researchers propose HEAug, a new GNN model that creates additional connections between nodes that are similar in features but not originally linked, and adjusts its training process to avoid generating unwanted connections.

Fix: The proposed mitigation is the HEAug (Homophily Edge Augment Graph Neural Network) model itself. According to the source, it works by: (1) sampling new homophily adjacency matrices (connection patterns) from scratch using self-attention mechanisms, (2) leveraging nodes that are relevant in feature space but not directly connected in the original graph, and (3) modifying the loss function to punish the generation of unnecessary heterophilic edges by the model.

IEEE Xplore (Security & AI Journals)
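HEAug samples its new edges with self-attention and a modified loss; the underlying move, connecting feature-similar nodes that lack an edge, can be sketched with a plain cosine-similarity threshold (illustrative only):

```python
# Toy homophily edge augmentation: connect nodes whose features are similar
# but that share no edge in the original graph.
import numpy as np

def augment_edges(features: np.ndarray, adj: np.ndarray, sim_thresh: float = 0.95):
    # Normalize rows, compute pairwise cosine similarity.
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = norm @ norm.T
    # Candidate edges: similar in feature space, absent from the graph.
    cand = (sim > sim_thresh) & (adj == 0)
    np.fill_diagonal(cand, False)
    return adj | cand   # augmented adjacency matrix

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = rng.normal(size=(6, 8))
    X[3] = X[0] + 0.01 * rng.normal(size=8)   # node 3 nearly duplicates node 0
    A = np.zeros((6, 6), dtype=bool)
    print("new edges:", np.argwhere(augment_edges(X, A)))
```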