All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Hypergraph Neural Networks (HGNNs, which are AI models that learn from data where connections can link multiple items together instead of just pairs) can be weakened by structural attacks that corrupt their connections and reduce accuracy. HGNN Shield is a defense framework with two main components: Hyperedge-Dependent Estimation (which assesses how important each connection is within the network) and High-Order Shield (which detects and removes harmful connections before the AI processes data). Experiments show the framework improves performance by an average of 9.33% compared to existing defenses.
Fix: The HGNN Shield defense framework addresses the vulnerability through two modules: (1) Hyperedge-Dependent Estimation (HDE) that 'prioritizes vertex dependencies within hyperedges and adapts traditional connectivity measures to hypergraphs, facilitating precise structural modifications,' and (2) High-Order Shield (HOS) positioned before convolutional layers, which 'consists of three submodules: Hyperpath Cut, Hyperpath Link, and Hyperpath Refine' that 'collectively detect, disconnect, and refine adversarial connections, ensuring robust message propagation.'
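As a toy illustration of the hypergraph setting, a hypergraph can be stored as a list of hyperedges, each linking several vertices, and each hyperedge given a connectivity score. The score below is a generic average-degree measure for illustration only, not the paper's Hyperedge-Dependent Estimation:

```python
# Toy hypergraph: each hyperedge is a set of vertices (not just a pair).
# The importance score is a generic illustration, not HGNN Shield's HDE.
from collections import Counter

hyperedges = [{"a", "b", "c"}, {"b", "c", "d"}, {"d", "e"}]

# Vertex degree = number of hyperedges a vertex belongs to.
degree = Counter(v for e in hyperedges for v in e)

def edge_score(edge):
    # Score a hyperedge by the average degree of its vertices: hyperedges
    # touching well-connected vertices matter more to message passing.
    return sum(degree[v] for v in edge) / len(edge)

scores = {tuple(sorted(e)): edge_score(e) for e in hyperedges}
```

A defense in this spirit would inspect low- or anomalously-scored hyperedges first when deciding which connections to cut or refine.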
IEEE Xplore (Security & AI Journals)
LangChain, a framework for building applications powered by LLMs (large language models), had a serialization injection vulnerability (a flaw where specially crafted data can be misinterpreted as legitimate code during the conversion of objects to JSON format) in its toJSON() method. The vulnerability occurred because the method failed to properly escape objects containing 'lc' keys, which LangChain uses internally to mark serialized objects, allowing attackers to trick the system into treating malicious user data as legitimate LangChain objects when deserializing (converting back from JSON format).
LangChain, a framework for building AI agents and applications powered by large language models, had a serialization injection vulnerability (a flaw in how it converts data to stored formats) in its dumps() and dumpd() functions before versions 0.3.81 and 1.2.5. The functions failed to properly escape dictionaries containing 'lc' keys, which LangChain uses internally to mark serialized objects, allowing attackers to trick the system into treating user-supplied data as legitimate LangChain objects during deserialization (converting stored data back into usable form).
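The class of bug in both LangChain advisories can be sketched in a few lines. This mirrors the described pattern (an unescaped marker key confusing the deserializer), not LangChain's actual implementation:

```python
# Illustrative sketch of serialization injection via an unescaped marker
# key; this mirrors the class of bug described, not LangChain's real code.
MARKER = "lc"  # key a (hypothetical) serializer uses to tag its own objects

def deserialize(blob):
    # Anything carrying the marker key is trusted as a framework object.
    if isinstance(blob, dict) and MARKER in blob:
        return f"reconstructed framework object of type {blob.get('type')}"
    return blob

def escape(value):
    # The fix: wrap user dicts that collide with the marker key so the
    # deserializer can no longer mistake them for trusted objects.
    if isinstance(value, dict) and MARKER in value:
        return {"escaped": value}
    return value

# Attacker-controlled dict smuggles in the marker key:
user_input = {"lc": 1, "type": "SecretLoader"}
print(deserialize(user_input))          # wrongly treated as trusted
print(deserialize(escape(user_input)))  # stays plain user data
```

The patched versions apply exactly this kind of escaping during serialization, so user data round-trips as data rather than as framework objects.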
A vulnerability in Hugging Face Transformers' GLM4 model allows attackers to run harmful code on a system by tricking users into opening a malicious file or visiting a malicious webpage. The problem occurs because the software doesn't properly check data when loading model weights (the numerical values that make the AI work), allowing deserialization of untrusted data (converting unsafe external files into code the system will execute).
A vulnerability in Hugging Face Transformers' X-CLIP checkpoint conversion allows attackers to execute arbitrary code (running commands they choose on a system) by tricking users into opening malicious files or visiting malicious pages. The flaw occurs because the code doesn't properly validate checkpoint data before deserializing it (converting stored data back into usable objects), which lets attackers inject malicious code that runs with the same permissions as the application.
A vulnerability in Hugging Face Transformers' HuBERT convert_config function allows attackers to execute arbitrary code (RCE, or remote code execution, where an attacker runs commands on a system) by tricking users into converting a malicious checkpoint (a saved model file). The flaw occurs because the function doesn't properly validate user input before using it to run Python code.
Hugging Face Transformers (a popular library for working with AI language models) has a vulnerability in its SEW-D convert_config function that allows attackers to run arbitrary code (any commands they want) on a victim's computer. The flaw exists because the function doesn't properly check user input before using it to execute Python code, and an attacker can exploit this by tricking a user into converting a malicious checkpoint (a saved model file).
A vulnerability in Hugging Face Transformers (a popular AI library) allows attackers to run arbitrary code on a user's computer through a malicious checkpoint (a saved model file). The flaw exists in the convert_config function, which doesn't properly validate user input before executing it as Python code, meaning an attacker can trick a user into converting a malicious checkpoint to execute code with the user's permissions.
A vulnerability in Hugging Face Transformers (a popular library for working with AI language models) allows attackers to run arbitrary code on a computer by tricking users into opening malicious files or visiting malicious websites. The flaw occurs because the software doesn't properly check data when loading saved model checkpoints (files that store a model's learned parameters), which lets attackers execute code by sending untrusted data through deserialization (the process of converting stored data back into usable objects).
A vulnerability in Hugging Face Transformers' Transformer-XL model allows attackers to run arbitrary code (remote code execution) on a victim's computer by tricking them into opening a malicious file or visiting a malicious webpage. The flaw occurs because the software doesn't properly validate data when reading model files, allowing attackers to exploit the deserialization process (converting saved data back into objects that the program can use) to inject and execute malicious code.
A vulnerability in Hugging Face Transformers' Perceiver model allows attackers to run malicious code on a user's computer by tricking them into opening a malicious file or visiting a harmful webpage. The flaw happens because the software doesn't properly check data when loading model files, allowing untrusted code to be executed (deserialization of untrusted data, where a program reconstructs objects from stored data without verifying they're safe).
Tencent HunyuanDiT (an AI image generation model) has a remote code execution vulnerability in its model_resume function that allows attackers to run arbitrary code if a user opens a malicious file or visits a malicious page. The flaw stems from improper validation of user input during deserialization (converting data from storage format back into usable objects), allowing attackers to execute code with root-level privileges.
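The checkpoint vulnerabilities above share one root cause: deserializing untrusted data. A stdlib-only sketch shows why pickle-based model files are dangerous, plus one mitigation pattern (an allow-listing unpickler, adapted from the pattern in the Python docs); in practice, prefer formats that cannot encode code, such as safetensors:

```python
# Why untrusted deserialization is remote code execution: a pickle can
# name any callable to run on load. Mitigation: restrict what the
# unpickler may resolve. Adapted from the pattern in the Python docs.
import io
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # A real exploit would name os.system; print stands in harmlessly.
        return (print, ("attacker code would run here",))

evil_bytes = pickle.dumps(MaliciousPayload())
pickle.loads(evil_bytes)  # default loader runs the callable immediately

class RestrictedUnpickler(pickle.Unpickler):
    ALLOWED = {("builtins", "dict"), ("builtins", "list")}
    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked: {module}.{name}")

try:
    RestrictedUnpickler(io.BytesIO(evil_bytes)).load()
except pickle.UnpicklingError as e:
    print(e)  # the callable is never resolved, so it never runs
```

Plain containers of numbers still load under the restricted unpickler; anything that resolves a callable is rejected before it can execute.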
CVE-2025-63664 is a flaw in the GT Edge AI Platform (before version 2.0.10-dev) where incorrect access control in the /api/v1/conversations/*/messages API allows attackers without permission to view other users' message histories with AI agents. This is classified as improper access control (CWE-284, a category of security flaws where systems fail to properly restrict what users can access).
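Flaws in this CWE-284 class come down to a handler that checks the path but not ownership. A hypothetical sketch of the missing check (endpoint shape and field names are illustrative, not GT Edge AI's actual API):

```python
# Hypothetical message-listing handler; the ownership comparison is the
# check that improper-access-control bugs like this one omit.
conversations = {"c1": {"owner": "alice", "messages": ["hi agent"]}}

def get_messages(user_id, conversation_id):
    convo = conversations.get(conversation_id)
    if convo is None:
        return 404, None
    if convo["owner"] != user_id:   # the missing authorization check
        return 403, None            # deny other users' message histories
    return 200, convo["messages"]
```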
Fix: Update @langchain/core to version 0.3.80 or 1.1.8, and update langchain to version 0.3.37 or 1.2.3. According to the source: 'This issue has been patched in @langchain/core versions 0.3.80 and 1.1.8, and langchain versions 0.3.37 and 1.2.3.'
NVD/CVE Database
Fix: Update to LangChain version 0.3.81 or version 1.2.5, where this issue has been patched.
NVD/CVE Database
Fix: Update GT Edge AI Platform to version 2.0.10-dev or later.
NVD/CVE Database
Large language models (LLMs, AI systems trained on huge amounts of text to generate human-like responses) can now mimic not just general human language but also unusual, individual-specific human behaviors. This ability could lead to LLMs being used more widely in research studies and potentially reduce the role of actual humans, which raises concerns about AI alignment (ensuring AI systems behave in ways humans intend and approve of) and how this technology affects society.
Generative AI (systems that create new text, images, or other content) is transforming many industries but raises ethical concerns like data privacy (protecting personal information), bias (unfair treatment of certain groups), transparency (being open about how the AI works), and accountability (responsibility for the AI's actions). Researchers propose a trust framework based on transparency, fairness, accountability, and privacy to help ensure generative AI is developed and used responsibly.
Elderly people are increasingly using digital technology for communication and information access, but their limited cybersecurity knowledge makes them attractive targets for cybercriminals. The article examines common cybercrimes targeting seniors, the specific vulnerabilities that put them at risk, and existing approaches to reduce these dangers.
Cyberbullying on social media is a growing problem that harms people's mental health, and traditional methods to stop it are no longer effective. This study examines how artificial intelligence can help protect online communities from cyberbullying by exploring different AI technologies, their uses, and the challenges involved. The goal is to understand how AI might create safer online environments.
This research addresses a problem in federated learning (a method where multiple computers train an AI model together without sharing raw data) combined with adversarial training (a technique that makes AI models resistant to intentionally tricky inputs). The authors found that simply combining these two approaches causes the model's accuracy to drop because adversarial training increases differences in the data across different computers, making the federated learning less effective. They propose SFAT (Slack Federated Adversarial Training), which uses a relaxation mechanism to adjust how the computers combine their learning results, reducing the harmful effects of data differences and improving overall performance.
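The relaxation idea can be sketched as a reweighted average: instead of weighting every client equally, clients whose local (adversarial) loss is lower get slightly more weight, damping the drift that adversarial training adds on heterogeneous data. This is a minimal illustration of the general idea, not SFAT's exact algorithm:

```python
# Illustrative slack-weighted aggregation (not the paper's algorithm).
# slack=0 recovers plain federated averaging; small slack values shift
# weight toward clients whose local loss is below the mean.
def slack_aggregate(client_updates, client_losses, slack=0.1):
    n = len(client_updates)
    base = [1.0 / n] * n
    mean_loss = sum(client_losses) / n
    weights = [b * (1 + slack * (mean_loss - l))
               for b, l in zip(base, client_losses)]
    total = sum(weights)
    weights = [w / total for w in weights]  # renormalize to sum to 1
    dim = len(client_updates[0])
    return [sum(w * u[i] for w, u in zip(weights, client_updates))
            for i in range(dim)]
```

Keep the slack small: a large value could drive a high-loss client's weight negative, which no longer describes an average.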
Federated Learning (FL, a method where multiple computers train an AI model together without sharing raw data) can leak private information through gradient inversion attacks (GIA, techniques that reconstruct sensitive data from the mathematical updates used in training). This paper reviews three types of GIA methods and finds that while optimization-based GIA is most practical, generation-based and analytics-based GIA have significant limitations, and proposes a three-stage defense pipeline for FL frameworks.
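Why gradients leak data at all: for a linear model with a bias term, the weight gradient is the private input scaled by the bias gradient, so the input can be read off exactly. Optimization-based GIA extends this idea to deep networks; the numbers below are illustrative:

```python
# Minimal sketch of gradient leakage for a one-layer model with bias.
# Loss = (w.x + b - t)^2; the client shares grad_w and grad_b only.
w = [0.3, -1.2, 0.7, 0.1]        # current model weights
b = 0.05                         # bias
x = [0.5, 2.0, -1.0, 0.25]       # client's PRIVATE training input
t = 1.0                          # client's private label

pred = sum(wi * xi for wi, xi in zip(w, x)) + b
grad_b = 2 * (pred - t)                 # d/db of the squared loss
grad_w = [grad_b * xi for xi in x]      # d/dw_i = grad_b * x_i

# The attacker sees only the gradients, yet recovers x exactly:
recovered = [gw / grad_b for gw in grad_w]
```

For deep models there is no closed form, so optimization-based GIA instead searches for a dummy input whose gradients match the shared ones.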
Fix: The source proposes 'a three-stage defense pipeline to users when designing FL frameworks and protocols for better privacy protection,' but does not explicitly describe what the pipeline contains or how to implement it.
IEEE Xplore (Security & AI Journals)
This research proposes SIAMD, a framework for detecting social media bots (automated accounts that spread misinformation) before they cause harm. The system analyzes patterns in how user accounts interact with messages, uses structural entropy (a measure of uncertainty in data patterns) to identify bot-like behavior, and generates synthetic bot messages with large language models (AI systems trained on text data) to test and improve detection systems.
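For reference, the simplest (one-dimensional) form of structural entropy is computed from vertex degrees; this sketch shows that baseline quantity only, not SIAMD's richer use of it:

```python
# One-dimensional structural entropy of an undirected graph:
# H = -sum_v (d_v / 2m) * log2(d_v / 2m), where m = number of edges.
import math

def structural_entropy(edges):
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    two_m = 2 * len(edges)
    return -sum((d / two_m) * math.log2(d / two_m)
                for d in degree.values())
```

Intuitively, the measure summarizes how a graph's connectivity is distributed, which is what makes it usable as a signal for bot-like interaction patterns.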