All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Machine unlearning allows AI models to forget the effects of specific training samples, but verifying whether this actually happened is difficult because existing checks (like backdoor attacks or membership inference attacks, which test if a model remembers data by trying to extract or manipulate it) can be fooled by a dishonest model provider who simply retrains the model to pass the test rather than truly unlearning. This paper proposes IndirectVerify, a formal verification method that uses pairs of connected samples (trigger samples that are unlearned and reaction samples that should be affected by that unlearning) with intentional perturbations (small changes to training data) to create indirect evidence that unlearning actually occurred, making it harder to fake.
CVE-2025-59286 is a command injection vulnerability (a flaw where an attacker can insert malicious commands by exploiting how special characters are handled) in Copilot that allows an unauthorized attacker to disclose information over a network. The vulnerability stems from improper neutralization of special elements used in commands. A CVSS score (a 0-10 rating of how severe a vulnerability is) has not yet been assigned by NIST.
CVE-2025-59272 is a command injection vulnerability (a flaw where an attacker can insert malicious commands into user input that gets executed by the system) in Copilot that allows an unauthorized attacker to disclose information locally. The vulnerability stems from improper handling of special characters in commands, and it carries a CVSS severity score of 4.0 (a moderate rating on the 0-10 scale).
CVE-2025-59252 is a command injection vulnerability (a flaw where an attacker can insert malicious commands into a system by exploiting improper handling of special characters) in Copilot that allows an unauthorized attacker to disclose information over a network. The vulnerability stems from improper neutralization of special elements used in commands. The CVSS severity score (a 0-10 rating of vulnerability severity) has not yet been assigned by NIST.
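The three Copilot CVEs above all trace back to improper neutralization of special elements in commands. As a generic illustration (not Copilot's actual code), the sketch below shows the two standard Python defenses: passing arguments as a list so the shell never interprets them, and quoting with `shlex.quote` when a shell string is unavoidable. The `grep` invocation is a hypothetical stand-in command.

```python
import shlex
import subprocess

def run_grep_safely(pattern: str, path: str) -> str:
    """Run a command on user input without shell interpretation.

    Passing arguments as a list (no shell=True) means characters like
    ';', '|', and '$' are treated as literal data, not as commands.
    """
    result = subprocess.run(
        ["grep", "-n", pattern, path],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

def build_shell_command(pattern: str, path: str) -> str:
    """When a shell string is unavoidable, quote every element."""
    return f"grep -n {shlex.quote(pattern)} {shlex.quote(path)}"
```

With quoting in place, an input like `foo; rm -rf /` stays a literal search pattern instead of becoming a second command.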
Flowise is a visual tool for building custom LLM (large language model) workflows, but versions before 3.0.8 have a path traversal vulnerability (a security flaw where attackers can access files outside intended directories) in its file read and write tools. Authenticated attackers could exploit this to read and write any files on the system, potentially leading to remote code execution (running malicious commands on the server).
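The standard defense against this class of path traversal bug is to resolve the user-supplied path and verify it still lands inside the permitted directory. A minimal sketch (the function name and sandbox layout are illustrative, not Flowise's patch):

```python
import os

def resolve_inside(base_dir: str, user_path: str) -> str:
    """Resolve a user-supplied relative path, rejecting escapes.

    realpath() collapses '..' segments and symlinks, so a request like
    '../../etc/passwd' resolves outside base_dir and is rejected
    before any file read or write happens.
    """
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, user_path))
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path escapes sandbox: {user_path!r}")
    return target
```

Checking the resolved path (rather than the raw string) matters because naive substring filters miss encodings and symlink tricks.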
Kilo Code versions up to 4.86.0 contain a vulnerability in the ClineProvider function that allows prompt injection (tricking an AI by hiding instructions in its input) through improper handling of special characters. The vulnerability can be exploited remotely and has already been made public.
A Server-Side Request Forgery (SSRF) vulnerability, a weakness that lets attackers trick a server into making unwanted requests to internal resources, exists in the MediaConnector class of the vLLM project's multimodal feature set. The vulnerability occurs in the load_from_url and load_from_url_async methods, which fetch media from user-provided URLs without properly checking which hosts are allowed, potentially allowing attackers to access internal network resources through the vLLM server.
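The missing check described above is host validation before fetching a user-provided URL. A generic SSRF guard (a sketch, not vLLM's actual patch) rejects non-HTTP schemes and hostnames that resolve to internal address ranges:

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def check_media_url(url: str) -> None:
    """Reject URLs that could reach internal services.

    Blocks non-HTTP schemes and hostnames that resolve to private,
    loopback, or link-local addresses; raises ValueError on failure.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"scheme not allowed: {parsed.scheme!r}")
    if not parsed.hostname:
        raise ValueError("URL has no hostname")
    for info in socket.getaddrinfo(parsed.hostname, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            raise ValueError(f"resolves to internal address: {addr}")
```

Resolving the hostname first is important: a public-looking DNS name can still point at `127.0.0.1` or a private subnet.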
A bug in the Linux kernel's CDC NCM network driver (cdc_ncm_check_tx_max function) caused a crash when dwNtbOutMaxSize (a device setting that specifies maximum transmission buffer size) was set to very low values. The problem occurred because memory allocated for network data packets (SKBs, which are data structures for handling network traffic) didn't have enough space for both the SKB header structures and the actual network data, causing the kernel to panic when trying to write data beyond the allocated bounds.
LLaMA-Factory, a library for customizing large language models, has a vulnerability in versions before 0.9.4 that allows authenticated users to exploit SSRF (server-side request forgery, where the server is tricked into making requests to unintended destinations) and LFI (local file inclusion, where attackers can read files directly from the server) by providing malicious URLs to the chat API. The vulnerability exists because the code doesn't validate URLs before making HTTP requests, allowing attackers to access sensitive internal services or read arbitrary files from the server.
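The LFI half of this bug comes from Python URL openers happily honoring `file://` URLs, so a "chat with this URL" feature doubles as an arbitrary-file reader. A minimal scheme check closes that path (the function name is a hypothetical stand-in, not LLaMA-Factory's code):

```python
from urllib.parse import urlparse
from urllib.request import urlopen

def fetch_chat_attachment(url: str, timeout: float = 5.0) -> bytes:
    """Fetch a user-supplied URL only over HTTP(S).

    Without the scheme check, urlopen('file:///etc/passwd') returns
    local file contents -- the LFI vector described above.
    """
    if urlparse(url).scheme not in ("http", "https"):
        raise ValueError("only http/https URLs are accepted")
    with urlopen(url, timeout=timeout) as resp:
        return resp.read()
```

A full fix would also apply the SSRF host checks shown earlier, since an `http://` URL can still target internal services.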
Researchers discovered a type of backdoor attack (hidden malicious instructions planted in AI systems) on multiagent reinforcement learning systems, where one adversary agent uses its actions to trigger hidden failures in other agents' decision-making policies. Unlike previous attacks that assumed unrealistic direct control over what victims observe, this attack is more practical because it works through normal agent interactions in partially observable environments (where agents cannot always see what others are doing). The researchers developed a training method to help adversary agents efficiently trigger these backdoors with minimal suspicious actions.
This article describes BMMA-GPT, a biometric authentication system that uses multiple forms of identification (like fingerprints and facial recognition) together with mathematical optimization to improve security and speed. The system uses a dual-threshold approach (two decision points to verify identity) and can be tailored to different organizational needs, achieving high accuracy while keeping verification time under 1.5 seconds.
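The dual-threshold idea can be sketched in a few lines: one threshold below which a match is refused, one above which it is accepted, and a middle band that triggers a fallback such as a second biometric factor. This is an illustrative sketch of the general pattern, not BMMA-GPT's implementation:

```python
def dual_threshold_decision(score: float, t_reject: float, t_accept: float) -> str:
    """Two-threshold biometric match decision.

    Scores at or above t_accept are accepted outright, scores below
    t_reject are refused, and the band in between triggers a step-up
    check (e.g. a second modality). Tuning the two thresholds lets a
    deployment trade accuracy against verification time.
    """
    if t_reject > t_accept:
        raise ValueError("t_reject must not exceed t_accept")
    if score >= t_accept:
        return "accept"
    if score < t_reject:
        return "reject"
    return "step-up"
```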
Researchers developed TabExtractor, a tool that can steal tabular models (AI systems trained on spreadsheet-like data) without needing access to the original training data or knowing how the model was built. The attack works by creating synthetic data samples and using a special neural network architecture called a contrastive tabular transformer (CTT, a type of AI that learns by comparing similar and different examples) to reverse-engineer a clone of the victim model that performs almost as well as the original. This research shows that tabular models face serious security risks from extraction attacks.
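The core extraction loop is: generate synthetic inputs, query the victim for labels, and fit a surrogate that reproduces them. The toy below shows that loop against a hidden one-dimensional threshold; TabExtractor itself trains a contrastive tabular transformer on real tabular models, so this is only a minimal stand-in for the attack pattern:

```python
import random

def victim_predict(x: float) -> int:
    """Black-box 'victim': the attacker sees labels only, not this rule."""
    return 1 if x > 0.37 else 0

def extract_threshold(n_queries: int = 2000, seed: int = 0) -> float:
    """Recover the victim's decision boundary from query access alone.

    Queries the victim on uniform synthetic samples, then places the
    surrogate boundary midway between the highest 0-labeled and the
    lowest 1-labeled query.
    """
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n_queries)]
    labels = [victim_predict(x) for x in xs]
    lo = max(x for x, y in zip(xs, labels) if y == 0)
    hi = min(x for x, y in zip(xs, labels) if y == 1)
    return (lo + hi) / 2
```

Even this trivial surrogate lands within a fraction of a percent of the hidden boundary after a few thousand queries, which is the intuition behind why extraction attacks scale to richer model classes.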
This research addresses privacy risks in decentralized optimization (where multiple networked computers work together to solve a problem without a central coordinator) by proposing ZS-DDAPush, an algorithm that adds mathematical noise structures to protect sensitive node information during communication. The key innovation is that ZS-DDAPush achieves privacy protection while maintaining the accuracy and efficiency of the optimization process, avoiding the typical trade-offs seen in other privacy methods like differential privacy (adding statistical noise to protect individual data) or encryption (scrambling data so only authorized parties can read it).
This research proposes a new method for deploying cyber deception (defensive tricks to confuse attackers) in networks by combining deep reinforcement learning (a type of AI that learns by trial and error) with game theory that accounts for time delays. The method uses an algorithm called proximal policy optimization (PPO, a technique for training AI to make optimal decisions) to figure out where and when to place deception resources, and tests show it outperforms existing approaches in handling complex network attacks.
This research paper proposes a new cryptographic method for secure data sharing in Internet of Vehicles (IoV, a system where vehicles communicate with each other and road infrastructure). The method uses Certificateless Signcryption (CLSC, a technique that encrypts data and verifies its authenticity without requiring traditional certificates) to allow one sender to securely share customized data with multiple specific receivers while keeping it hidden from others, even across different geographic regions. The proposed approach reduces computational complexity and includes privacy protections through pseudonym generation (creating fake identities).
This paper describes a new watermarking technique (a method to embed hidden ownership markers into AI models) that remains stable when models are fine-tuned (adjusted to perform new tasks) across different domains. The researchers propose a system that automatically adjusts synthetic training samples and watermark embedding based on the specific data, using out-of-distribution awareness (detecting when data differs significantly from expected patterns) to keep the watermark robust while maintaining the model's performance on its actual task.
This paper presents DynMD, a new machine learning model that uses Graph Neural Networks (GNNs, which are AI systems that analyze connected data points and their relationships) to detect malware by analyzing streaming behavioral data (information about what a program does over time). Unlike previous approaches that miss how malware behaviors connect over time, DynMD uses an energy-based method to better understand malware patterns and can detect threats 3.81 to 5.33 times faster than existing systems.
Mujaz is a system that uses natural language processing (NLP, the field of AI that helps computers understand human language) to automatically clean up and summarize vulnerability descriptions found in public databases. The system was trained on a collection of carefully labeled vulnerability summaries and uses pre-trained language models (AI systems trained on large amounts of text) to create clearer, more consistent descriptions that help developers and organizations understand and patch security issues more effectively.
Fix: Upgrade to Flowise version 3.0.8, which patches this vulnerability; the release is available at https://github.com/FlowiseAI/Flowise/releases/tag/flowise%403.0.8.
CVE-2025-5009 is a privacy bug in Google's Gemini iOS app where sharing a snippet of a conversation accidentally shared the entire conversation history through a public link instead of just the selected part. This exposed users' full conversation data, including private information they didn't intend to share.
Researchers developed BPDA, a method for finding security vulnerabilities in embedded firmware (software that runs on devices like routers and IoT devices) by tracking how user input flows through code to reach dangerous functions called sinks. The method is faster and more accurate than existing tools, discovering 163 real vulnerabilities including 34 previously unknown ones when tested on firmware from major manufacturers.
Fix: Apply the available patch.
Fix: Clamp dwNtbOutMaxSize to the valid range between USB_CDC_NCM_NTB_MIN_OUT_SIZE and CDC_NCM_NTB_MAX_SIZE_TX, ensuring enough memory is allocated to hold both the CDC network data and the SKB header structures without overflow.
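The clamping logic itself is a one-liner. The actual fix lives in the kernel's C driver; the sketch below only illustrates the range-clamp pattern it applies (constant values are placeholders, not the kernel's):

```python
def clamp_ntb_out_size(requested: int, min_size: int, max_size: int) -> int:
    """Force a device-reported buffer size into a driver-valid range.

    A malicious or buggy device advertising a tiny dwNtbOutMaxSize can
    no longer shrink the allocation below what the SKB headers need.
    """
    return max(min_size, min(requested, max_size))
```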
Fix: Update LLaMA-Factory to version 0.9.4 or later, which fixes the underlying issue.