aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3223 items

HGNN Shield: Defending Hypergraph Neural Networks Against High-Order Structure Attack

Info · Research · Peer-Reviewed
security, research
Dec 26, 2025

Hypergraph Neural Networks (HGNNs, which are AI models that learn from data where connections can link multiple items together instead of just pairs) can be weakened by structural attacks that corrupt their connections and reduce accuracy. HGNN Shield is a defense framework with two main components: Hyperedge-Dependent Estimation (which assesses how important each connection is within the network) and High-Order Shield (which detects and removes harmful connections before the AI processes data). Experiments show the framework improves performance by an average of 9.33% compared to existing defenses.

Fix: The HGNN Shield defense framework addresses the vulnerability through two modules: (1) Hyperedge-Dependent Estimation (HDE) that 'prioritizes vertex dependencies within hyperedges and adapts traditional connectivity measures to hypergraphs, facilitating precise structural modifications,' and (2) High-Order Shield (HOS) positioned before convolutional layers, which 'consists of three submodules: Hyperpath Cut, Hyperpath Link, and Hyperpath Refine' that 'collectively detect, disconnect, and refine adversarial connections, ensuring robust message propagation.'
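The general idea of pruning suspicious connections before message propagation can be shown in a toy sketch. This is illustrative only, not the paper's HDE/HOS modules: `edge_score` and `shield` are hypothetical names, and the scoring rule (mean pairwise feature similarity within a hyperedge) is a stand-in for the paper's connectivity measures.

```python
# Toy sketch: score each hyperedge by how similar its member nodes'
# features are, and drop low-scoring hyperedges before aggregation.
# Illustrative only -- not the HGNN Shield paper's actual method.

def edge_score(edge, feats):
    # Mean pairwise dot product of the member nodes' feature vectors.
    members = [feats[v] for v in edge]
    pairs = [(a, b) for i, a in enumerate(members) for b in members[i + 1:]]
    return sum(sum(ai * bi for ai, bi in zip(a, b)) for a, b in pairs) / len(pairs)

def shield(hyperedges, feats, threshold=0.0):
    # Keep only hyperedges whose members look mutually consistent.
    return [e for e in hyperedges if edge_score(e, feats) >= threshold]

feats = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [-1.0, 0.0]}
clean = shield([(0, 1), (0, 2)], feats)  # (0, 2) links dissimilar nodes
```

A real defense would operate on the hypergraph's incidence structure rather than raw feature dot products, but the filter-before-convolution placement mirrors the HOS description above.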

IEEE Xplore (Security & AI Journals)

CVE-2025-68665: LangChain is a framework for building LLM-powered applications. Prior to @langchain/core versions 0.3.80 and 1.1.8, and langchain versions 0.3.37 and 1.2.3, the toJSON() method was vulnerable to serialization injection.

High · Vulnerability
security
Dec 23, 2025
CVE-2025-68665

LangChain, a framework for building applications powered by LLMs (large language models), had a serialization injection vulnerability (a flaw where specially crafted data can be misinterpreted as legitimate code during the conversion of objects to JSON format) in its toJSON() method. The vulnerability occurred because the method failed to properly escape objects containing 'lc' keys, which LangChain uses internally to mark serialized objects, allowing attackers to trick the system into treating malicious user data as legitimate LangChain objects when deserializing (converting back from JSON format).

CVE-2025-68664: LangChain is a framework for building agents and LLM-powered applications. Prior to versions 0.3.81 and 1.2.5, a serialization injection vulnerability existed in the dumps() and dumpd() functions.

Critical · Vulnerability
security
Dec 23, 2025
CVE-2025-68664

LangChain, a framework for building AI agents and applications powered by large language models, had a serialization injection vulnerability (a flaw in how it converts data to stored formats) in its dumps() and dumpd() functions before versions 0.3.81 and 1.2.5. The functions failed to properly escape dictionaries containing 'lc' keys, which LangChain uses internally to mark serialized objects, allowing attackers to trick the system into treating user-supplied data as legitimate LangChain objects during deserialization (converting stored data back into usable form).
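A common defensive pattern for this class of flaw is to refuse to revive marked objects unless their type is on an allowlist. The sketch below is a hypothetical helper, not the patched LangChain code: `safe_load`, the `"kind"` field, and the `ALLOWED` set are invented names.

```python
# Defensive sketch: only dicts whose "lc" marker names an allowlisted
# type are reconstructed; everything else stays inert data or is
# rejected outright. Hypothetical helper, not LangChain's patch.

ALLOWED = {"prompt_template", "chat_message"}

def safe_load(blob):
    if isinstance(blob, dict) and blob.get("lc") == 1:
        kind = blob.get("kind")
        if kind not in ALLOWED:
            raise ValueError(f"refusing to revive {kind!r}")
        return ("revived", kind)  # stand-in for constructing the object
    return blob  # ordinary data passes through untouched
```

Allowlisting is the standard mitigation when a serialization format must support reviving objects at all; the safer default is to treat every untrusted dict as data.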

CVE-2025-14930: Hugging Face Transformers GLM4 Deserialization of Untrusted Data Remote Code Execution Vulnerability.

Critical · Vulnerability
security
Dec 23, 2025
CVE-2025-14930

A vulnerability in Hugging Face Transformers GLM4 allows attackers to run harmful code on a system by tricking users into opening a malicious file or visiting a malicious webpage. The problem occurs because the software doesn't properly check data when loading model weights (the numerical values that make the AI work), allowing deserialization of untrusted data (converting unsafe external files into code the system will execute).
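The mechanism behind this whole class of checkpoint bugs is that pickle-style deserialization can invoke arbitrary callables. The snippet below demonstrates the generic Python behavior, not the specific Transformers code path; `os.getcwd` is a benign stand-in for what an attacker would substitute (e.g. `os.system`).

```python
# Why "deserialization of untrusted data" means code execution for
# pickle-style formats: unpickling calls whatever callable the
# payload names. Generic Python demo, not the Transformers code path.
import os
import pickle

class Payload:
    def __reduce__(self):
        # On unpickle, pickle CALLS this function. Here it is the
        # harmless os.getcwd; a real attack would use os.system etc.
        return (os.getcwd, ())

blob = pickle.dumps(Payload())   # what a malicious checkpoint contains
result = pickle.loads(blob)      # "loading the file" runs the callable
```

Note that no `Payload` object comes back at all; the load itself is the exploit, which is why advisories like this one only require the victim to open a file.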

CVE-2025-14929: Hugging Face Transformers X-CLIP Checkpoint Conversion Deserialization of Untrusted Data Remote Code Execution Vulnerability.

Critical · Vulnerability
security
Dec 23, 2025
CVE-2025-14929

A vulnerability in Hugging Face Transformers' X-CLIP checkpoint conversion allows attackers to execute arbitrary code (running commands they choose on a system) by tricking users into opening malicious files or visiting malicious pages. The flaw occurs because the code doesn't properly validate checkpoint data before deserializing it (converting stored data back into usable objects), which lets attackers inject malicious code that runs with the same permissions as the application.

CVE-2025-14928: Hugging Face Transformers HuBERT convert_config Code Injection Remote Code Execution Vulnerability.

Critical · Vulnerability
security
Dec 23, 2025
CVE-2025-14928

A vulnerability in Hugging Face Transformers' HuBERT convert_config function allows attackers to execute arbitrary code (RCE, or remote code execution, where an attacker runs commands on a system) by tricking users into converting a malicious checkpoint (a saved model file). The flaw occurs because the function doesn't properly validate user input before using it to run Python code.
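The bug class — executing attacker-controlled config text as Python — can be contrasted with parsing it as literals only. `convert_config_unsafe` and `convert_config_safe` are illustrative names, not the actual Transformers functions; `ast.literal_eval` is the standard library's safe alternative to `eval` for literal data.

```python
# Code-injection sketch: eval() runs arbitrary expressions, while
# ast.literal_eval() accepts only Python literals (numbers, strings,
# dicts, lists, ...) and rejects anything executable.
# Illustrative names, not the actual Transformers functions.
import ast

def convert_config_unsafe(text):
    return eval(text)  # attacker-controlled text runs as code

def convert_config_safe(text):
    return ast.literal_eval(text)  # literals only; calls raise ValueError
```

Where a converter genuinely needs structured input, parsing JSON or literals and validating the resulting fields avoids the injection entirely.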

CVE-2025-14927: Hugging Face Transformers SEW-D convert_config Code Injection Remote Code Execution Vulnerability.

Critical · Vulnerability
security
Dec 23, 2025
CVE-2025-14927

Hugging Face Transformers (a popular library for working with AI language models) has a vulnerability in its SEW-D convert_config function that allows attackers to run arbitrary code (any commands they want) on a victim's computer. The flaw exists because the function doesn't properly check user input before using it to execute Python code, and an attacker can exploit this by tricking a user into converting a malicious checkpoint (a saved model file).

CVE-2025-14926: Hugging Face Transformers SEW convert_config Code Injection Remote Code Execution Vulnerability.

Critical · Vulnerability
security
Dec 23, 2025
CVE-2025-14926

A vulnerability in Hugging Face Transformers (a popular AI library) allows attackers to run arbitrary code on a user's computer through a malicious checkpoint (a saved model file). The flaw exists in the convert_config function, which doesn't properly validate user input before executing it as Python code, meaning an attacker can trick a user into converting a malicious checkpoint to execute code with the user's permissions.

CVE-2025-14924: Hugging Face Transformers megatron_gpt2 Deserialization of Untrusted Data Remote Code Execution Vulnerability.

Critical · Vulnerability
security
Dec 23, 2025
CVE-2025-14924

A vulnerability in Hugging Face Transformers (a popular library for working with AI language models) allows attackers to run arbitrary code on a computer by tricking users into opening malicious files or visiting malicious websites. The flaw occurs because the software doesn't properly check data when loading saved model checkpoints (files that store a model's learned parameters), which lets attackers execute code by sending untrusted data through deserialization (the process of converting stored data back into usable objects).

CVE-2025-14921: Hugging Face Transformers Transformer-XL Model Deserialization of Untrusted Data Remote Code Execution Vulnerability.

Critical · Vulnerability
security
Dec 23, 2025
CVE-2025-14921

A vulnerability in Hugging Face Transformers' Transformer-XL model allows attackers to run arbitrary code (remote code execution) on a victim's computer by tricking them into opening a malicious file or visiting a malicious webpage. The flaw occurs because the software doesn't properly validate data when reading model files, allowing attackers to exploit the deserialization process (converting saved data back into objects that the program can use) to inject and execute malicious code.

CVE-2025-14920: Hugging Face Transformers Perceiver Model Deserialization of Untrusted Data Remote Code Execution Vulnerability.

Critical · Vulnerability
security
Dec 23, 2025
CVE-2025-14920

A vulnerability in Hugging Face Transformers' Perceiver model allows attackers to run malicious code on a user's computer by tricking them into opening a malicious file or visiting a harmful webpage. The flaw happens because the software doesn't properly check data when loading model files, allowing untrusted code to be executed (deserialization of untrusted data, where a program reconstructs objects from stored data without verifying they're safe).

CVE-2025-13707: Tencent HunyuanDiT model_resume Deserialization of Untrusted Data Remote Code Execution Vulnerability.

Critical · Vulnerability
security
Dec 23, 2025
CVE-2025-13707

Tencent HunyuanDiT (an AI image generation model) has a remote code execution vulnerability in its model_resume function that allows attackers to run arbitrary code if a user opens a malicious file or visits a malicious page. The flaw stems from improper validation of user input during deserialization (converting data from storage format back into usable objects), allowing attackers to execute code with root-level privileges.

CVE-2025-63664: Incorrect access control in the /api/v1/conversations/*/messages API of GT Edge AI Platform before v2.0.10-dev allows unauthorized users to view other users' message histories with AI agents.

High · Vulnerability
security
Dec 22, 2025
CVE-2025-63664

CVE-2025-63664 is a flaw in the GT Edge AI Platform (before version 2.0.10-dev) where incorrect access control in the /api/v1/conversations/*/messages API allows attackers without permission to view other users' message histories with AI agents. This is classified as improper access control (CWE-284, a category of security flaws where systems fail to properly restrict what users can access).
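The missing check is a standard object-level authorization step: resolve the resource's owner and compare it to the requester before returning data. The sketch below uses illustrative names (`CONVERSATIONS`, `get_messages`), not GT Edge's actual API.

```python
# Minimal sketch of the missing CWE-284 check: look up who owns the
# conversation and refuse the request if the requester doesn't match.
# Illustrative names, not the GT Edge AI Platform's real handlers.

CONVERSATIONS = {
    "c1": {"owner": "alice", "messages": ["hi agent"]},
}

def get_messages(conversation_id, requester):
    conv = CONVERSATIONS[conversation_id]
    if conv["owner"] != requester:
        # Without this branch, any authenticated user could read
        # any conversation -- the flaw described above.
        raise PermissionError("not your conversation")
    return conv["messages"]
```

The key point is that the check keys on the object's owner, not merely on whether the caller is logged in.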

Fix (CVE-2025-68665): Update @langchain/core to version 0.3.80 or 1.1.8, and update langchain to version 0.3.37 or 1.2.3. According to the source: 'This issue has been patched in @langchain/core versions 0.3.80 and 1.1.8, and langchain versions 0.3.37 and 1.2.3.'

NVD/CVE Database

Fix (CVE-2025-68664): Update to LangChain version 0.3.81 or version 1.2.5, where this issue has been patched.

NVD/CVE Database

Fix (CVE-2025-63664): Update GT Edge AI Platform to version 2.0.10-dev or later.

NVD/CVE Database

Large Language Models in Human Subject Research, and the Presence of Idiosyncratic Human Behaviors

Info · Research · Peer-Reviewed
research, safety
Dec 22, 2025

Large language models (LLMs, AI systems trained on huge amounts of text to generate human-like responses) can now mimic not just general human language but also unusual, individual-specific human behaviors. This ability could lead to LLMs being used more widely in research studies and potentially reduce the role of actual humans, which raises concerns about AI alignment (ensuring AI systems behave in ways humans intend and approve of) and how this technology affects society.

IEEE Xplore (Security & AI Journals)

Generative Artificial Intelligence: Ethical Challenges and Trust Mechanisms

Info · Research · Peer-Reviewed
research, safety
Dec 22, 2025

Generative AI (systems that create new text, images, or other content) is transforming many industries but raises ethical concerns such as data privacy (protecting personal information), bias (unfair treatment of certain groups), transparency (being open about how the AI works), and accountability (responsibility for the AI's actions). Researchers propose a trust framework based on transparency, fairness, accountability, and privacy to help ensure generative AI is developed and used responsibly.

IEEE Xplore (Security & AI Journals)

Cybersecurity Challenges for the Elderly: Vulnerabilities and Risks

Info · Research · Peer-Reviewed
security
Dec 22, 2025

Elderly people are increasingly using digital technology for communication and information access, but their limited cybersecurity knowledge makes them attractive targets for cybercriminals. The article examines common cybercrimes targeting seniors, the specific vulnerabilities that put them at risk, and existing approaches to reduce these dangers.

IEEE Xplore (Security & AI Journals)

The Impact of Artificial Intelligence in Protecting the Online Social Community From Cyberbullying

Info · Research · Peer-Reviewed
research, safety
Dec 22, 2025

Cyberbullying on social media is a growing problem that harms people's mental health, and traditional methods to stop it are no longer effective. This study examines how artificial intelligence can help protect online communities from cyberbullying by exploring different AI technologies, their uses, and the challenges involved. The goal is to understand how AI might create safer online environments.

IEEE Xplore (Security & AI Journals)

Slack Federated Adversarial Training

Info · Research · Peer-Reviewed
research, security
Dec 22, 2025

This research addresses a problem in federated learning (a method where multiple computers train an AI model together without sharing raw data) combined with adversarial training (a technique that makes AI models resistant to intentionally tricky inputs). The authors found that naively combining the two approaches causes accuracy to drop, because adversarial training amplifies differences in the data across clients and makes federated learning less effective. They propose SFAT (Slack Federated Adversarial Training), which uses a relaxation mechanism to adjust how clients' learning results are aggregated, reducing the harmful effects of data heterogeneity and improving overall performance.

IEEE Xplore (Security & AI Journals)

Exploring the Vulnerabilities of Federated Learning: A Deep Dive Into Gradient Inversion Attacks

Info · Research · Peer-Reviewed
security, research
Dec 22, 2025

Federated Learning (FL, a method where multiple computers train an AI model together without sharing raw data) can leak private information through gradient inversion attacks (GIA, techniques that reconstruct sensitive data from the mathematical updates used in training). This paper reviews three types of GIA methods and finds that while optimization-based GIA is the most practical, generation-based and analytics-based GIA have significant limitations; it also proposes a three-stage defense pipeline for FL frameworks.

Fix: The source mentions 'a three-stage defense pipeline to users when designing FL frameworks and protocols for better privacy protection,' but does not explicitly describe what this pipeline contains or how to implement it.

IEEE Xplore (Security & AI Journals)

Proactive Bot Detection Based on Structural Information Principles

Info · Research · Peer-Reviewed
research, security
Dec 22, 2025

This research proposes SIAMD, a framework for detecting social media bots (automated accounts that spread misinformation) before they cause harm. The system analyzes patterns in how user accounts interact with messages, uses structural entropy (a measure of uncertainty in data patterns) to identify bot-like behavior, and generates synthetic bot messages with large language models (AI systems trained on text data) to test and improve detection systems.

IEEE Xplore (Security & AI Journals)
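The gradient inversion risk described in the federated learning entry above can be seen in a minimal linear-model sketch. This is a toy illustration, not a method from the surveyed paper: for a linear layer y = W x with squared loss, the weight gradient is the rank-1 outer product (2·error) ⊗ x, so the private input x is recoverable up to scale from any nonzero gradient row.

```python
# Toy illustration of why raw shared gradients leak inputs: for a
# linear layer y = W x with squared loss, dL/dW_ij = 2 * err_i * x_j,
# a rank-1 matrix whose rows are all scalar multiples of the input x.
# Pure-Python sketch, not a method from the surveyed paper.

def weight_gradient(W, x, target):
    y = [sum(w * xj for w, xj in zip(row, x)) for row in W]
    err = [yi - ti for yi, ti in zip(y, target)]
    return [[2 * e * xj for xj in x] for e in err]

W = [[0.5, -1.0, 2.0], [1.5, 0.3, -0.7]]
x = [3.0, -2.0, 1.0]                       # the "private" training input
grad = weight_gradient(W, x, [0.0, 0.0])   # what a client would share

row = max(grad, key=lambda r: sum(v * v for v in r))  # any nonzero row
norm = sum(v * v for v in row) ** 0.5
direction = [v / norm for v in row]        # equals x / |x| up to sign
```

Real models need the optimization- or analytics-based reconstruction the paper surveys, but the linear case shows why defenses (noise, compression, secure aggregation) target the raw gradients themselves.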