aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

Vulnerabilities · News · Research · Digest Archive · Newsletter Archive · Subscribe · Data Sources · Statistics · Dataset · API · Integrations · Widget · RSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3224 items

CVE-2022-50661: In the Linux kernel, the following vulnerability has been resolved: seccomp: Move copy_seccomp() to no failure path.

info · vulnerability
security
Dec 9, 2025
CVE-2022-50661

A memory-leak vulnerability exists in the Linux kernel's seccomp (secure computing, a security feature that restricts which system calls a process can make) implementation: seccomp filter objects were not freed when process creation was aborted by a pending signal after the filter had already been copied. The fix moves the copy_seccomp() function to run after the signal check and adds a warning in free_task() to ensure filters are properly released during process cleanup.

Fix: Move copy_seccomp() to execute after the signal check in copy_process(), and add a WARN_ON_ONCE() in free_task() for future debugging. This ensures seccomp_filter_release() is called to decrement the filter's refcount in the failure path, preventing memory leaks.

NVD/CVE Database

CVE-2025-64671: Improper neutralization of special elements used in a command ('command injection') in Copilot allows an unauthorized attacker to execute code locally.

high · vulnerability
security
Dec 9, 2025
CVE-2025-64671

CVE-2025-64671 is a command injection vulnerability (a flaw where an attacker can inject malicious commands into input that gets executed) in Copilot that allows an unauthorized attacker to execute code locally on a system. The vulnerability stems from improper handling of special characters in commands, and Microsoft has documented it as a known issue.
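Command injection generally comes down to user input reaching a shell parser. A minimal Python sketch of the pattern and its standard mitigations (a generic illustration with hypothetical function names, not Copilot's code):

```python
import shlex

def shell_command_unsafe(filename: str) -> str:
    # UNSAFE: user input is spliced into a shell string, so an input like
    # "notes.txt; echo pwned" smuggles in a second command.
    return "cat " + filename

def shell_command_quoted(filename: str) -> str:
    # Safer when a shell string is unavoidable: shlex.quote() neutralizes
    # special characters such as ; | & and $.
    return "cat " + shlex.quote(filename)

def argv_safe(filename: str) -> list:
    # Safest: pass an argument list (e.g. to subprocess.run), so no shell
    # ever parses the input and metacharacters lose their meaning.
    return ["cat", filename]

malicious = "notes.txt; echo pwned"
print(shell_command_unsafe(malicious))  # cat notes.txt; echo pwned
print(shell_command_quoted(malicious))  # cat 'notes.txt; echo pwned'
print(argv_safe(malicious))
```

The argument-list form is preferred because it removes the shell from the picture entirely rather than trying to escape every dangerous character.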

CVE-2025-62994: Insertion of Sensitive Information Into Sent Data vulnerability in WP Messiah WP AI CoPilot ai-co-pilot-for-wp allows Retrieve Embedded Sensitive Data.

medium · vulnerability
security
Dec 9, 2025
CVE-2025-62994

CVE-2025-62994 is a vulnerability in WP AI CoPilot (a WordPress plugin that adds AI assistance to WordPress sites) version 1.2.7 and earlier, where sensitive information gets accidentally included when the plugin sends data. This allows attackers to retrieve embedded sensitive data that shouldn't be exposed.

HP-OTP: One-Time Password Scheme Based on Hardened Password

info · research · Peer-Reviewed
security

AdaptiveShield: Dynamic Defense Against Decentralized Federated Learning Poisoning Attacks

info · research · Peer-Reviewed
security

Enhancing the Security of Large Character Set CAPTCHAs Using Transferable Adversarial Examples

info · research · Peer-Reviewed
research

A Unified Decision Rule for Generalized Out-of-Distribution Detection

info · research · Peer-Reviewed
research

Versatile Backdoor Attack With Visible, Semantic, Sample-Specific and Compatible Triggers

info · research · Peer-Reviewed
security

Test-Time Correction: An Online 3D Detection System via Visual Prompting

info · research · Peer-Reviewed
research

Side-Channel Analysis Based on Multiple Leakage Models Ensemble

info · research · Peer-Reviewed
research

Teamwork Makes TEE Work: Open and Resilient Remote Attestation on Decentralized Trust

info · research · Peer-Reviewed
security

CVE-2025-40311: In the Linux kernel, the following vulnerability has been resolved: accel/habanalabs: support mapping cb with vmalloc-backed memory.

info · vulnerability
security
Dec 7, 2025
CVE-2025-40311

A bug in the Linux kernel's Habana Labs accelerator driver could cause a kernel crash when trying to map certain types of memory (specifically, memory allocated by dma_alloc_coherent, which is memory designed for direct hardware access) if IOMMU (input/output memory management unit, which controls how devices access system memory) is enabled. The kernel would crash because it tried to map vmalloc-backed memory (memory allocated from the virtual memory system) without the proper flags set.

CVE-2025-13922: The Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI plugin for WordPress is vulnerable to time-based blind SQL injection.

medium · vulnerability
security
Dec 6, 2025
CVE-2025-13922

A WordPress plugin called AI Autotagger with OpenAI has a security flaw called time-based blind SQL injection (a technique where attackers sneak extra database commands into legitimate queries by exploiting how the software processes user input) in versions up to 3.40.1. Attackers with contributor-level access or higher can use this flaw to steal sensitive data from the database, slow down the website, or extract information through time-delay tricks.
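The underlying mechanics of SQL injection, and the standard parameterized-query fix, can be shown in a few lines (a generic sqlite3 sketch, not the plugin's code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, tag TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)", [(1, "ai"), (2, "security")])

def search_unsafe(tag):
    # UNSAFE: string interpolation lets a payload like "x' OR '1'='1"
    # (or a time-delay expression) rewrite the query itself.
    query = f"SELECT id FROM posts WHERE tag = '{tag}'"
    return [row[0] for row in conn.execute(query)]

def search_safe(tag):
    # SAFE: the ? placeholder binds the input as a value, never as SQL.
    return [row[0] for row in conn.execute(
        "SELECT id FROM posts WHERE tag = ?", (tag,))]

payload = "x' OR '1'='1"
print(search_unsafe(payload))  # [1, 2] -- every row leaks
print(search_safe(payload))    # [] -- the payload is treated as an odd tag name
```

Time-based blind variants work the same way, except the injected SQL is a delay expression whose timing reveals data one bit at a time; parameterization blocks both.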

CVE-2025-34291: Langflow versions up to and including 1.6.9 contain a chained vulnerability that enables account takeover and remote code execution.

high · vulnerability
security
Dec 5, 2025
CVE-2025-34291 · EPSS: 13.3%

CVE-2025-66581: Frappe Learning Management System (LMS) is a learning system that helps users structure their content. Prior to 2.41.0, the server did not properly enforce permission checks on certain API endpoints.

medium · vulnerability
security
Dec 5, 2025
CVE-2025-66581

Frappe Learning Management System (LMS) had a vulnerability in versions before 2.41.0 where the server did not properly check user permissions, allowing low-privileged users like students to perform actions meant only for instructors or administrators by directly accessing the API (the interface that lets software communicate with other software). The flaw existed because permission checks only happened on the client side or in the user interface rather than on the server, which is easier to bypass.
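Server-side enforcement means the permission check runs inside the API handler itself, not in the UI. A hedged sketch of the pattern (require_role and delete_course are hypothetical names, not Frappe's API):

```python
from functools import wraps

class User:
    def __init__(self, name, role):
        self.name, self.role = name, role

def require_role(*allowed):
    # Runs on every call, server-side: hiding a button in the UI is
    # irrelevant because the endpoint itself refuses unauthorized users.
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if user.role not in allowed:
                raise PermissionError(f"{user.role} may not call {fn.__name__}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("instructor", "admin")
def delete_course(user, course_id):
    return f"{course_id} deleted by {user.name}"

print(delete_course(User("ada", "admin"), "cs101"))  # allowed
try:
    delete_course(User("sam", "student"), "cs101")   # blocked server-side
except PermissionError as exc:
    print("denied:", exc)
```

A student calling the API directly hits the same decorator as the UI does, which is the property the vulnerable versions lacked.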

Homophily Edge Augment Graph Neural Network for High-Class Homophily Variance Learning

info · research · Peer-Reviewed
research

CVE-2025-12189: The Bread & Butter: Gate content + Capture leads + Collect first-party data + Nurture with Ai agents plugin for WordPress is vulnerable to Cross-Site Request Forgery.

medium · vulnerability
security
Dec 5, 2025
CVE-2025-12189

A WordPress plugin called Bread & Butter has a CSRF flaw (cross-site request forgery, where an attacker tricks a logged-in user into performing an unwanted action on a website) in versions up to 7.10.1321. The flaw exists in the image upload function because it lacks proper nonce validation (a security token that verifies a request is legitimate), allowing attackers to upload malicious files that could lead to RCE (remote code execution, where an attacker runs commands on the server) if they can trick an administrator into clicking a malicious link.

The Normalization of Deviance in AI

info · news
safety · research

CVE-2025-66479: Anthropic Sandbox Runtime is a lightweight sandboxing tool for enforcing filesystem and network restrictions on arbitrary processes.

low · vulnerability
security
Dec 4, 2025
CVE-2025-66479

Anthropic Sandbox Runtime is a tool that restricts what processes can access on a computer's filesystem (file storage) and network without needing containers (isolated computing environments). Before version 0.0.16, a bug prevented the network sandbox from working correctly when no allowed domains were specified, which could let code inside the sandbox make network requests it shouldn't be able to make.
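The pattern described, an empty allowlist accidentally disabling filtering, is a classic fail-open bug. A hypothetical illustration (is_host_allowed is an invented name; this is not Anthropic's implementation):

```python
def is_host_allowed_buggy(host, allowed_domains):
    # BUG (fail open): with no allowed domains configured, the filter is
    # skipped entirely and every request goes through.
    if not allowed_domains:
        return True
    return host in allowed_domains

def is_host_allowed_fixed(host, allowed_domains):
    # FIX (fail closed): an empty allowlist means nothing is allowed.
    return host in allowed_domains

print(is_host_allowed_buggy("evil.example", []))  # True -- escapes the sandbox
print(is_host_allowed_fixed("evil.example", []))  # False -- denied
```

Security filters should default to denial, so that a missing or empty configuration restricts rather than permits.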

llama-index-core v0.14.10

info · news
industry
Dec 4, 2025

Version 0.14.10 of llama-index-core added a mock function calling LLM (a simulated language model that can pretend to execute functions), while related packages fixed typos and added new integrations like Airweave tool support for advanced search capabilities. This is a routine software release with feature additions and bug fixes.

Page 72 of 162
HP-OTP: One-Time Password Scheme Based on Hardened Password
security
Dec 9, 2025

One-Time Passwords (OTPs, temporary codes used in two-factor authentication to verify your identity) like HOTP and TOTP have vulnerabilities that let attackers bypass security if they steal the secret key stored on a device or server. This paper proposes HP-OTP, a new OTP scheme that combines your password with the device's unique identifier to make it harder for attackers to forge codes even if they compromise either the device or server.
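The weakness HP-OTP targets follows directly from how standard TOTP (RFC 6238) works: the code is a pure function of the shared secret and the clock, so anyone who steals the stored secret can mint valid codes. A minimal sketch:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    # HOTP over a time-derived counter, per RFC 6238 (SHA-1 variant).
    counter = unix_time // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Whoever holds `secret` -- the user's device, the server, or a thief --
# computes the identical code; that single point of failure is what
# HP-OTP aims to harden by also binding the password and device identity.
print(totp(b"12345678901234567890", 59))  # 287082 (RFC test secret, t=59)
```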

IEEE Xplore (Security & AI Journals)

AdaptiveShield: Dynamic Defense Against Decentralized Federated Learning Poisoning Attacks
research
Dec 9, 2025

Federated learning (a system where decentralized devices train a shared AI model together while keeping their data local) is vulnerable to poisoning attacks, where malicious participants inject false data to corrupt the final model. This paper proposes AdaptiveShield, a defense system that uses dynamic detection strategies to identify attackers, automatically adjusts its sensitivity thresholds to handle different attack types, reduces damage from missed attackers by adjusting hyperparameters (settings that control how the model learns), and hides user identities through a shuffling mechanism to protect privacy.

Fix: AdaptiveShield employs: (1) dynamic detection strategies that assess maliciousness and dynamically adjust detection thresholds to adapt to various attack scenarios; (2) dynamic hyperparameter adjustment to minimize negative impact from missed attackers and enhance robustness; and (3) a hierarchical shuffle mechanism to dissociate user identities from their uploaded local models, providing privacy protection.

IEEE Xplore (Security & AI Journals)

Enhancing the Security of Large Character Set CAPTCHAs Using Transferable Adversarial Examples
security
Dec 9, 2025

Deep learning attacks have successfully cracked CAPTCHAs (automated tests that distinguish humans from bots) that use large character sets, especially those with alphabets from languages like Chinese. This paper proposes ACG (Adversarial Large Character Set CAPTCHA Generation), a framework that makes CAPTCHAs harder to attack by adding adversarial perturbations (intentional distortions that confuse AI recognition systems) through two modules: one that prevents character recognition and another that adds global visual noise, reducing attack success rates from 51.52% to 2.56%.

Fix: The paper proposes ACG (Adversarial Large Character Set CAPTCHA Generation) as a defense framework. According to the source, ACG uses 'a Fine-grained Generation Module, combining three novel strategies to prevent attackers from recognizing characters, and an Ensemble Generation Module to generate global perturbations in CAPTCHAs' to strengthen defense against recognition attacks and improve robustness against diverse detection architectures.

IEEE Xplore (Security & AI Journals)

A Unified Decision Rule for Generalized Out-of-Distribution Detection
safety
Dec 9, 2025

This research paper addresses generalized out-of-distribution detection (OOD detection, where an AI system identifies inputs that are very different from its training data), which is important for AI systems used in safety-critical applications. Rather than focusing on designing better scoring functions, the authors propose a new decision rule, a generalized Benjamini-Hochberg procedure, that uses hypothesis testing (a statistical method for making decisions about data) to determine whether an input is out-of-distribution, and they prove this method controls false-positive rates better than traditional threshold-based approaches.
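For intuition, the classic Benjamini-Hochberg procedure (the starting point the paper generalizes; the generalized rule itself differs in its details) flags the largest set of small p-values whose sorted ranks clear the line alpha*k/m:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    # Return indices flagged (e.g. as out-of-distribution) while
    # controlling the false discovery rate at level alpha.
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0  # largest rank whose p-value sits under the BH line
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])  # reject everything up to that rank

pvals = [0.001, 0.009, 0.04, 0.20, 0.90]
print(benjamini_hochberg(pvals))  # [0, 1]: only the two smallest survive
```

Unlike a single fixed threshold, the cutoff adapts to how many tests are being run, which is what gives the procedure its error-rate guarantee.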

IEEE Xplore (Security & AI Journals)

Versatile Backdoor Attack With Visible, Semantic, Sample-Specific and Compatible Triggers
research
Dec 9, 2025

Researchers developed a new method for backdoor attacks (techniques that manipulate AI systems to behave in specific ways when exposed to hidden trigger patterns) that works better in real-world physical scenarios. The method, called VSSC triggers (Visible, Semantic, Sample-specific, and Compatible), uses large language models, generative models, and vision-language models in an automated pipeline to create stealthy triggers that can survive visual distortions and be deployed using real objects, making physical backdoor attacks more practical and systematic than manual methods.

IEEE Xplore (Security & AI Journals)

Test-Time Correction: An Online 3D Detection System via Visual Prompting
Dec 9, 2025

This paper presents Test-Time Correction (TTC), a system that helps autonomous vehicles fix detection errors while driving, rather than waiting for retraining. TTC uses an Online Adapter module with visual prompts (image-based descriptions of objects derived from feedback like mismatches or user clicks) to continuously correct mistakes in real-time, allowing vehicles to adapt to new situations and improve safety without stopping to retrain the system.

IEEE Xplore (Security & AI Journals)

Side-Channel Analysis Based on Multiple Leakage Models Ensemble
security
Dec 8, 2025

This research proposes a new framework for side-channel analysis (SCA, a type of attack that exploits physical information like power consumption or timing to break cryptography) by combining multiple different leakage models (ways of measuring how a cryptographic device leaks secrets) using ensemble learning (combining many weaker models into one stronger one). The framework improves how well attackers can recover secret keys by using deep learning with complementary information from different measurement approaches, and the authors prove mathematically that their ensemble model gets closer to the true secret distribution.
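One common way to combine leakage models is score-level fusion: each model outputs a probability per key-byte guess, and summed log-probabilities pick the consensus key. A toy sketch with made-up numbers (illustrating the general idea, not the paper's exact ensemble):

```python
import math

def fuse(per_model_scores):
    # per_model_scores: list of dicts mapping key guess -> probability.
    # Summing log-probabilities multiplies the models' independent votes.
    fused = {}
    for guess in per_model_scores[0]:
        fused[guess] = sum(math.log(m[guess]) for m in per_model_scores)
    return max(fused, key=fused.get)

# Two hypothetical leakage models scoring three key-byte candidates:
hamming_weight_model = {0x3C: 0.50, 0xA7: 0.30, 0x11: 0.20}
identity_model = {0x3C: 0.40, 0xA7: 0.45, 0x11: 0.15}
print(hex(fuse([hamming_weight_model, identity_model])))  # 0x3c
```

When the models leak complementary information, their combined ranking converges on the true key with fewer traces than either model alone.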

IEEE Xplore (Security & AI Journals)

Teamwork Makes TEE Work: Open and Resilient Remote Attestation on Decentralized Trust
Dec 8, 2025

Remote attestation (RA, the process of verifying that software running on a trusted computer processor is genuine and hasn't been tampered with) traditionally relies on a single central authority to verify trust, which creates security vulnerabilities. This paper introduces Janus, a new RA system that spreads trust across multiple parties using physical hardware features (PUF, or physically unclonable function, unique identifiers built into computer chips) and smart contracts (automated programs running on blockchain networks) to make the verification process more secure, flexible, and resistant to attacks.

IEEE Xplore (Security & AI Journals)

CVE-2025-40311 (accel/habanalabs)

Fix: The fix checks whether the address comes from the vmalloc range and, if so, sets the VM_MIXEDMAP flag on the VMA (virtual memory area, a region of a process's address space) before mapping it. This allows the memory to be mapped safely without triggering a kernel crash.

NVD/CVE Database

CVE-2025-34291 (Langflow)

Langflow versions up to 1.6.9 have a chained vulnerability that allows attackers to take over user accounts and run arbitrary code on the system. The flaw combines two misconfigurations: overly permissive CORS settings (CORS, or cross-origin resource sharing, controls which external websites may read a site's responses) that accept requests from any origin with credentials, and refresh-token cookies (tokens used to obtain new access credentials) set to SameSite=None, which together let a malicious webpage steal valid tokens and impersonate a victim.
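The dangerous combination is a server that reflects any Origin header while also allowing credentials. A simplified sketch of that pattern and the allowlist fix (not Langflow's code; TRUSTED_ORIGINS is a hypothetical configuration):

```python
TRUSTED_ORIGINS = {"https://app.example.com"}  # hypothetical allowlist

def cors_headers_unsafe(request_origin):
    # UNSAFE: echoing every origin with credentials enabled lets any
    # website read authenticated responses -- and a SameSite=None refresh
    # cookie rides along on those cross-site requests.
    return {
        "Access-Control-Allow-Origin": request_origin,
        "Access-Control-Allow-Credentials": "true",
    }

def cors_headers_safe(request_origin):
    # SAFE: only allowlisted origins may make credentialed requests.
    if request_origin in TRUSTED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Access-Control-Allow-Credentials": "true",
        }
    return {}  # no CORS grant; the browser blocks the cross-origin read

print(cors_headers_unsafe("https://evil.example"))
print(cors_headers_safe("https://evil.example"))  # {}
```

Setting the refresh cookie to SameSite=Lax or Strict closes the second half of the chain, since the cookie then stays home on cross-site requests.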

NVD/CVE Database

CVE-2025-66581 (Frappe LMS)

Fix: Update to version 2.41.0 or later, where this vulnerability is fixed.

NVD/CVE Database

Homophily Edge Augment Graph Neural Network for High-Class Homophily Variance Learning
research
Dec 5, 2025

Graph Neural Networks (GNNs, machine learning models that work with interconnected data) perform poorly at detecting anomalies in graphs because of high Class Homophily Variance (CHV), meaning some node types cluster together while others scatter. The researchers propose HEAug, a new GNN model that creates additional connections between nodes that are similar in features but not originally linked, and adjusts its training process to avoid generating unwanted connections.

Fix: The proposed mitigation is the HEAug (Homophily Edge Augment Graph Neural Network) model itself. According to the source, it works by: (1) sampling new homophily adjacency matrices (connection patterns) from scratch using self-attention mechanisms, (2) leveraging nodes that are relevant in feature space but not directly connected in the original graph, and (3) modifying the loss function to punish the generation of unnecessary heterophilic edges by the model.
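The core augmentation idea, linking nodes that are similar in feature space but unconnected in the original graph, can be sketched in plain Python (a drastically simplified toy using a cosine-similarity threshold in place of HEAug's learned self-attention sampling):

```python
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

def augment_edges(features, edges, threshold=0.95):
    # Propose homophily edges: node pairs with near-identical features
    # that the original graph leaves unconnected.
    existing = {frozenset(e) for e in edges}
    proposals = []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            if frozenset((i, j)) in existing:
                continue
            if cosine(features[i], features[j]) >= threshold:
                proposals.append((i, j))
    return proposals

feats = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]  # nodes 0 and 1 are alike
print(augment_edges(feats, edges=[(1, 2)]))  # [(0, 1)]
```

HEAug additionally penalizes the generation of heterophilic edges in its loss, which this fixed threshold does not capture.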

IEEE Xplore (Security & AI Journals)
The Normalization of Deviance in AI
Dec 4, 2025

The AI industry is gradually accepting LLM (large language model) outputs as reliable without questioning them, similar to how NASA ignored warning signs before the Challenger disaster. This 'normalization of deviance' (accepting behavior that deviates from proper standards as normal) is particularly risky in agentic systems (AI systems that can take independent actions without human approval at each step), where unchecked LLM decisions could cause serious problems.

Embrace The Red

CVE-2025-66479 (Anthropic Sandbox Runtime)

Fix: A patch was released in v0.0.16 that fixes this issue.

NVD/CVE Database
LlamaIndex Security Releases (source for the v0.14.10 release note above)