All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
AgentAPI (an HTTP interface for various AI coding assistants) versions 0.3.3 and below are vulnerable to a DNS rebinding attack (where an attacker tricks your browser into connecting to a malicious server that responds like your local machine), allowing unauthorized access to the /messages endpoint. This vulnerability can expose sensitive data stored locally, including API keys, file contents, and code the user was developing.
Fix: This issue is fixed in version 0.4.0.
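The advisory does not publish the patch itself; as a general illustration of how a local HTTP service can defend against DNS rebinding, here is a minimal Host-header allowlist sketch (the allowlist contents, port, and function name are hypothetical, not AgentAPI's actual fix):

```python
# DNS rebinding works because the attacker's page makes the browser send
# requests to the local server with the *attacker's* hostname in the Host
# header. Rejecting unexpected Host values blocks that path.

ALLOWED_HOSTS = {"localhost", "127.0.0.1", "[::1]"}  # hypothetical allowlist

def host_is_allowed(host_header: str) -> bool:
    """Return True if the Host header names an expected local origin."""
    host = host_header.strip().lower()
    # Strip an optional :port suffix (IPv6 literals keep their brackets).
    if host.startswith("["):
        host = host.split("]")[0] + "]"
    elif ":" in host:
        host = host.rsplit(":", 1)[0]
    return host in ALLOWED_HOSTS
```

A rebinding request still arrives over a local TCP connection, but it carries the attacker's domain in the Host header, so a check like this rejects it while ordinary requests to `127.0.0.1` pass.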
NVD/CVE Database
PyTorch version 2.7.0 has a vulnerability (CVE-2025-55560) that causes a Denial of Service (DoS, where a system becomes unavailable or unresponsive) when a model uses specific sparse tensor functions (torch.Tensor.to_sparse() and torch.Tensor.to_dense()) and is compiled by Inductor (PyTorch's code compilation tool). This issue stems from uncontrolled resource consumption, meaning the system uses up too many computing resources.
CVE-2025-55559 is a vulnerability in TensorFlow v2.18.0 where setting the padding parameter to 'valid' in tf.keras.layers.Conv2D (a layer used in neural networks for image processing) causes a Denial of Service (DoS, where a system becomes unavailable to users). The vulnerability is classified as uncontrolled resource consumption, meaning the system uses up resources like memory or CPU in an uncontrolled way.
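For context, 'valid' padding means no padding is added, so each spatial dimension shrinks by the kernel size. The standard output-size formula can be sketched as follows (a general illustration, not code from the CVE report):

```python
def conv_output_size(in_size: int, kernel: int, stride: int = 1) -> int:
    """Spatial output size of a convolution with 'valid' (no) padding."""
    if in_size < kernel:
        raise ValueError("'valid' padding requires input >= kernel size")
    # With no padding, the kernel fits (in_size - kernel) // stride + 1 times.
    return (in_size - kernel) // stride + 1
```

For example, a 28-pixel dimension convolved with a 3-wide kernel at stride 1 yields 26 output positions.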
CVE-2025-55558 is a buffer overflow (a memory safety error where data is written beyond the intended boundaries) in PyTorch version 2.7.0 that occurs when certain neural network operations are combined and compiled using Inductor, a code compiler. The flaw can cause a Denial of Service (making the service unavailable to users), though no CVSS severity score has been assigned yet.
PyTorch version 2.7.0 has a bug where a name error occurs when a model uses torch.cummin (a function that finds cumulative minimum values) and is compiled by Inductor (PyTorch's compiler for optimizing code). This causes a Denial of Service (DoS, where a system becomes unavailable to users).
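For reference, torch.cummin computes a running minimum along a dimension. A pure-Python equivalent of the 1-D case (illustrative only; it does not reproduce the Inductor compilation bug):

```python
def cummin(values):
    """Running (cumulative) minimum, mirroring what torch.cummin computes in 1-D."""
    out, current = [], float("inf")
    for v in values:
        current = min(current, v)  # each position holds the minimum seen so far
        out.append(current)
    return out
```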
TensorFlow v2.18.0 has a bug where the Embedding function (a neural network layer that converts words or items into numerical representations) produces random results when compiled, causing applications to behave unexpectedly. The issue is tracked as CVE-2025-55556 and has a severity rating that is still being assessed.
PyTorch version 2.8.0 contains an integer overflow vulnerability (a bug where a number gets too large for its storage space and wraps around to an incorrect value) in the torch.nan_to_num function when using the .long() method. The vulnerability is tracked as CVE-2025-55554, though a detailed severity rating has not yet been assigned by NIST.
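Conceptually, the overflow can arise because nan_to_num replaces infinities with the dtype's largest finite value (about 3.4e38 for float32), which is far outside the signed 64-bit range that .long() casts into. A stdlib sketch of two's-complement wrapping (an illustration of the failure mode, not PyTorch's internals):

```python
INT64_MIN, INT64_MAX = -(1 << 63), (1 << 63) - 1

def wrap_int64(x: int) -> int:
    """Interpret an arbitrary integer as a signed 64-bit two's-complement value."""
    x &= (1 << 64) - 1                      # keep only the low 64 bits
    return x - (1 << 64) if x > INT64_MAX else x

# A value like 3.4e38 cannot be represented in int64; a 64-bit cast can
# only keep the low bits, so the result "wraps" to an unrelated number.
wrapped = wrap_int64(int(3.4e38))
```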
CVE-2025-55553 is a syntax error in the proxy_tensor.py file of PyTorch version 2.7.0 that allows attackers to cause a Denial of Service (DoS, a type of attack where a system becomes unavailable to legitimate users). The vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 4.0, indicating moderate severity.
PyTorch v2.8.0 has a vulnerability (CVE-2025-55552) where two functions, torch.rot90 (which rotates arrays) and torch.randn_like (which generates random numbers matching a given shape), behave unexpectedly when used together, possibly due to integer overflow or wraparound (where numbers wrap around to negative values instead of staying large).
A vulnerability (CVE-2025-55551) exists in PyTorch version 2.8.0 in a math component called torch.linalg.lu that allows attackers to cause a Denial of Service (DoS, where a system becomes unavailable to users) by performing a slice operation (extracting a portion of data). The issue involves uncontrolled resource consumption (CWE-400, where a program uses too much memory or processing power without limits).
PyTorch versions before 3.7.0 have a bug where the decomposition of bernoulli_p (a sampling operation used by dropout layers) does not match the reference CPU implementation, producing incorrect results in nn.Dropout1d, nn.Dropout2d, and nn.Dropout3d when fallback_random=True (an Inductor setting intended to make compiled random number generation match uncompiled eager mode).
This paper presents EdFROST, a new threshold EdDSA (a cryptographic signature scheme used in distributed systems) protocol that detects malicious behavior more efficiently than previous methods while reducing computational overhead from zero-knowledge proofs (mathematical techniques that prove something is true without revealing how). The authors also propose a weighted threshold signature system that prevents powerful participants from dominating decisions and uses game theory (the study of strategic decision-making) with blockchain incentives to encourage honest behavior and resist DDoS attacks (attempts to overwhelm a system with traffic).
Fix: The source proposes EdFROST as the solution, which is described as being "unforgeable and supports identifiable aborts under a chosen-message attack." The paper also states that they "design a game-theoretic incentive model, implemented via tamper-proof chaincode, achieving rational identifiable aborts with a unique sequential equilibrium" to incentivize honest behavior, ensure efficient abort handling, and resist DDoS attacks. The authors note that "experimental results demonstrate that the EdFROST and chaincode are efficient and lightweight, making them well-suited for large-scale distributed systems."
IEEE Xplore (Security & AI Journals)
This research presents a method to classify encrypted internet traffic (HTTPS, a protocol that scrambles data sent over the internet) by reconstructing the original application data sizes hidden beneath encryption layers. The researchers developed an algorithm called LC-MRNN (Length-Correction Multiple Regression Neural Network, a type of machine learning model) to accurately restore these hidden data lengths, which helps network administrators and security teams identify what applications users are running, even when the actual data is encrypted.
Deep neural networks (DNNs, machine learning models with many layers that learn patterns from data) are vulnerable to adversarial attacks, where small, carefully crafted changes to input data trick the AI into making wrong predictions, especially in critical areas like self-driving cars. This paper presents AI-Shielder, a method that intentionally embeds backdoors (hidden pathways that alter how the model behaves) into neural networks to detect and block adversarial attacks while keeping the AI's normal performance intact. Testing shows AI-Shielder reduces successful attacks from 91.8% to 3.8% with only minor slowdowns.
Fix: AI-Shielder is the proposed solution presented in the paper. According to the results, it 'reduces the attack success rate from 91.8% to 3.8%, which outperforms the state-of-the-art works by 37.2%, with only a 0.6% decline in the clean data accuracy' and 'introduces only 1.43% overhead to the model prediction time, almost negligible in most cases.' The approach works by leveraging intentionally embedded backdoors to fail adversarial perturbations while maintaining original task performance.
IEEE Xplore (Security & AI Journals)
This research presents SEOMA, a new system for searchable encryption (SE, a method that lets users store encrypted data on servers while still being able to search it by keywords without revealing the data's contents). The system improves on existing approaches by supporting multiple users accessing the same data while also verifying that the data owner is legitimate and preventing malicious owners from uploading fake encrypted files. SEOMA uses attribute encryption (a technique that controls who can decrypt data based on their characteristics) and access control policies to manage which users can access what data, while using less computing power and bandwidth than previous solutions.
Machine unlearning (the process of removing a user's data from a trained AI model) needs verification to confirm that genuine user data was actually deleted, but current methods using backdoors (hidden triggers added to test if data is gone) can't properly verify removal of real user samples. This paper proposes SMS, or Self-Supervised Model Seeding, which embeds user-specific identifiers into the model's internal representation to directly link users' actual data with the model, enabling better verification that genuine samples were truly unlearned.
Healthcare organizations are collecting more patient data than ever, which creates privacy risks. This research proposes GFKMC (Generalization First k-Member Clustering), a new privacy method that protects patient identities by grouping similar records together while keeping the data useful for analysis, and it works better than older methods by losing less information when privacy protection is increased.
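The paper's GFKMC algorithm is not reproduced here; as a toy illustration of the generalization idea it builds on (coarsening quasi-identifiers so grouped records become indistinguishable), with hypothetical field names:

```python
def generalize_ages(records, bucket=10):
    """Replace exact ages with ranges so records sharing a bucket look alike."""
    out = []
    for rec in records:
        lo = (rec["age"] // bucket) * bucket
        gen = dict(rec)                      # leave the input records untouched
        gen["age"] = f"{lo}-{lo + bucket - 1}"
        out.append(gen)
    return out
```

Wider buckets give stronger privacy but lose more information, which is exactly the trade-off the paper measures.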
This research presents a method for detecting moving objects in encrypted video without decrypting it, protecting privacy when video processing is done in the cloud. The approach uses selective encryption (encrypting only certain parts of compressed video) and extracts motion information from encrypted video data, then applies deep learning with attention mechanisms (a technique that helps the AI focus on important regions) to identify moving objects even with incomplete information.
This paper presents ASGA, a method for creating adversarial attacks (small, crafted changes meant to trick AI models) on video action recognition systems (AI models that identify what actions people are performing in videos). The key innovation is that attackers can compute perturbations (the malicious changes) just once on important keyframes (selected frames that represent the video's content), then replicate these changes across the entire video, making the attack work even when the model samples frames differently and reducing computational cost.
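The ASGA method itself is not shown here, but the compute-once-then-replicate idea can be sketched as follows, with plain strings standing in for perturbation tensors and all names hypothetical:

```python
def replicate_perturbation(num_frames, keyframe_idx, perturbations):
    """Assign each frame the perturbation computed for its nearest preceding keyframe.

    keyframe_idx: sorted frame indices the perturbations were computed on
    (the first must be 0); perturbations: one perturbation per keyframe.
    """
    assert len(keyframe_idx) == len(perturbations) and keyframe_idx[0] == 0
    out, k = [], 0
    for f in range(num_frames):
        # Advance to the last keyframe at or before frame f.
        while k + 1 < len(keyframe_idx) and keyframe_idx[k + 1] <= f:
            k += 1
        out.append(perturbations[k])
    return out
```

The expensive step (computing a perturbation) runs once per keyframe, while every other frame reuses a copy, which is why the attack stays cheap and survives different frame-sampling schemes.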
This research studies federated learning (FL, a method where multiple devices collaboratively train an AI model without sending their data to a central server) on real IoT and edge devices (small computing devices like phones and sensors) rather than in simulated environments. The study examines how FL performs in realistic conditions, focusing on heterogeneous scenarios (situations where devices have different computing power, network speeds, and data types), and provides insights to help researchers and practitioners build more practical FL systems.