All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Federated Learning (FL, a method where multiple computers train an AI model together without sharing raw data) can leak private information through gradient inversion attacks (GIA, techniques that reconstruct sensitive data from the mathematical updates used in training). This paper reviews three categories of GIA and finds that optimization-based GIA is the most practical, while generation-based and analytics-based GIA have significant limitations; it also proposes a three-stage defense pipeline for designing FL frameworks and protocols with better privacy protection.
Fix: The source mentions 'a three-stage defense pipeline to users when designing FL frameworks and protocols for better privacy protection,' but does not explicitly describe what this pipeline contains or how to implement it.
IEEE Xplore (Security & AI Journals)
Langflow, a tool for building AI-powered agents and workflows, has a vulnerability in versions before 1.7.0 where an attacker can specify any file path in a request to create or overwrite files anywhere on the server. The vulnerability exists because the server doesn't restrict or validate the file paths, allowing attackers to write files to sensitive locations like system directories.
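The underlying flaw is a missing path-containment check. A minimal sketch of the kind of validation that prevents this class of bug (the function name is illustrative, not Langflow's actual code):

```python
from pathlib import Path

def safe_resolve(base_dir: str, user_path: str) -> Path:
    """Resolve user_path inside base_dir, rejecting escapes.

    Raises ValueError if the resolved path falls outside base_dir,
    which catches both ../ sequences and absolute paths.
    """
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # Path.is_relative_to (Python 3.9+) compares the resolved paths,
    # so symlink and dot-dot tricks are evaluated before the check.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes base directory: {user_path}")
    return candidate
```

Any write handler would call this before touching the filesystem, so a request naming `/etc/cron.d/job` or `../../etc/passwd` is rejected instead of written.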
Langflow, a tool for building AI-powered agents and workflows, has a vulnerability in versions before 1.7.0 where its API Request component can make arbitrary HTTP requests to internal network addresses. An attacker with an API key could exploit this SSRF (server-side request forgery, where a server is tricked into making requests to unintended targets) to access sensitive internal resources like databases and metadata services, potentially stealing information or preparing further attacks.
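A generic mitigation for this class of SSRF is to resolve the requested host and refuse private, loopback, and link-local targets before issuing the request. A sketch of that check (not Langflow's actual patch):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal_url(url: str) -> bool:
    """Return True if the URL's host resolves to a private, loopback,
    link-local, or reserved address (e.g. the 169.254.169.254
    cloud metadata service), so the request should be refused."""
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable URL: treat as unsafe
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # unresolvable host: treat as unsafe
    for info in infos:
        # info[4][0] is the resolved address string in the sockaddr tuple
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return True
    return False
```

Checking the *resolved* addresses rather than the hostname string matters, since an attacker can register a public DNS name that points at an internal IP.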
CVE-2025-63389 is a critical vulnerability in Ollama (an AI platform) versions up to v0.12.3 where API endpoints (connection points for software communication) are exposed without authentication (verification of identity), allowing attackers to remotely perform unauthorized model management operations. The vulnerability stems from missing authentication checks on critical functions.
CVE-2025-62998 is a vulnerability in WP AI CoPilot (a WordPress plugin that adds AI features) versions 1.2.7 and earlier, where sensitive information can be unintentionally included in data sent from the plugin. This is classified as CWE-201 (insertion of sensitive information into sent data), meaning the plugin may leak private or confidential data to unintended recipients.
AnythingLLM v1.8.5 has a vulnerability in its /api/workspaces endpoint (a web address used to access workspace data) that skips authentication checks, allowing attackers without permission to see detailed information about all workspaces, including AI model settings, system prompts (instructions given to the AI), and other configuration details. This means someone could potentially discover sensitive workspace configurations without needing to log in.
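Bugs like this (and the Ollama issue above) come down to a handler that never verifies the caller before serializing sensitive data. A minimal sketch of the missing guard, with hypothetical names rather than AnythingLLM's actual code:

```python
# Stand-in for real session/token validation
VALID_TOKENS = {"Bearer secret-token"}

def require_auth(handler):
    """Decorator sketch: reject requests lacking a valid token
    before any workspace data is built or returned."""
    def wrapped(request):
        token = request.get("headers", {}).get("Authorization")
        if token not in VALID_TOKENS:
            return {"status": 401, "body": "unauthorized"}
        return handler(request)
    return wrapped

@require_auth
def list_workspaces(request):
    # Without the decorator, this endpoint would hand model settings,
    # system prompts, and other configuration to anonymous callers.
    return {"status": 200,
            "body": [{"name": "default", "model": "gpt-x"}]}
```

The key property is that the check runs on every code path into the endpoint; an endpoint that is merely omitted from a central auth list, as here, fails open.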
President Trump issued an executive order to prevent states from regulating AI by using federal tools like funding withholding and legal challenges, aiming to replace varied state rules with a single federal framework. The order directs federal agencies, including the Attorney General and Commerce Secretary, to challenge state AI laws they view as problematic, while the FTC and FCC will issue guidance on how existing federal laws apply to AI. This action follows a year where ambitious state AI safety proposals, like New York's RAISE Act (which would require AI labs to publish safety practices and report serious incidents), were either weakened or blocked.
A use-after-free vulnerability (UAF, a bug where code accesses memory that has already been freed) was found in the Linux kernel's ksmbd component. The problem occurred when ipc_msg_send_request() freed memory while handle_response() was simultaneously trying to write data to it, causing a crash. This happened because the two functions didn't use the same lock (ipc_msg_table_lock, a mechanism that prevents multiple tasks from accessing shared data at the same time) when accessing shared data.
This post describes a vulnerability in VirtualBox's NAT (network address translation, a mode that makes VM traffic look like it comes from the host computer) networking code, specifically in how it manages memory for packet data using a custom zone allocator. The vulnerability exists because safety checks that verify memory integrity use Assert() statements, which are disabled in the standard release builds of VirtualBox that users download, allowing potential exploitation.
This article explains race condition vulnerabilities (security gaps that occur when a system state changes between a security check and a resource access) in Windows and describes techniques to expand the narrow time window needed to exploit them. The author focuses on slowing down the Object Manager Namespace lookup process (the kernel system that finds named objects like files and events in Windows NT) by manipulating Symbolic Links (redirects in the object naming system) to create larger exploitation windows.
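The check-versus-use gap the article describes can be illustrated with a generic filesystem analogue (the article's Windows Object Manager and symbolic-link techniques are not reproduced here):

```python
import os

def read_if_allowed_racy(path):
    """TOCTOU pattern: the file can be swapped (e.g. replaced by a
    symlink to a secret) between the access() check and the open() use.
    That swap window is exactly what race-condition exploits widen."""
    if os.access(path, os.R_OK):      # check
        with open(path) as f:         # use (attacker's window is here)
            return f.read()
    return None

def read_if_allowed_safe(path):
    """Narrower alternative: open first, then perform any checks on the
    already-open descriptor (os.fstat etc.), so check and use refer to
    the same underlying object. O_NOFOLLOW also refuses symlinks where
    the platform supports it."""
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
    with os.fdopen(fd) as f:
        return f.read()
```

The safe variant does not eliminate every race, but it removes the name-based re-lookup that symbolic-link manipulation targets.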
Weaviate OSS (an open-source vector database) versions before 1.33.4 have a vulnerability where the fileName field is not properly validated in the transfer logic. An attacker who can call the GetFile method while a shard (a partition of the database) is paused and the FileReplicationService (the system that copies files) is accessible could read any files that the service has permission to access.
Weaviate OSS (an open-source vector database) before version 1.33.4 has a path traversal vulnerability (a bug where an attacker can access files outside the intended directory using tricks like ../../..) that allows attackers with database write access to escape the backup restore location and create or overwrite files elsewhere on the system. This could let attackers modify critical files within the application's permissions.
Fix: Update Langflow to version 1.7.0, which fixes the issue.
NVD/CVE Database
Fix: Update to version 1.7.0 or later, which contains a patch for this issue.
NVD/CVE Database
The article argues that while AI language models (LLMs, systems trained on large amounts of text to generate responses) and traditional programming languages both increase abstraction, they differ fundamentally in a critical way: compilers are deterministic (they reliably produce the same output every time), while LLMs are nondeterministic (they produce different outputs for the same input). This matters for software security and correctness because compilers preserve the programmer's intended meaning through the translation process, but LLMs cannot guarantee they will generate code that does what you actually need.
The AIBOM Generator, an open-source tool that creates an AI Software Bill of Materials (AIBOM, a structured document listing key information about an AI model like its data sources and configurations), has been moved to OWASP (a nonprofit focused on software security) to enable broader community collaboration and development. The tool helps organizations understand what's inside AI models, where they came from, and how trustworthy their documentation is, addressing a gap between rapid AI adoption and lagging transparency practices. The project is now part of the OWASP GenAI Security Project and will continue improving AI supply chain visibility through community-driven enhancements.
ChargerWhisper is a side-channel attack (a method that steals information by observing physical properties rather than breaking encryption) that uses high-frequency inaudible sounds produced by fast chargers to infer private user information. The attack works because electronic components in chargers vibrate at frequencies correlated with power output, which changes based on what activities users perform on their devices, allowing attackers to identify websites being visited or unlock PINs through acoustic analysis.
Researchers have developed a steganographic method (hiding secret data inside another medium) that embeds hidden messages into compressed neural network models (AI systems made smaller through techniques like quantization, pruning, or distillation). The approach allows a receiver with the correct extraction network to recover the hidden data while ordinary users remain unaware it exists, and the method maintains the model's performance in size, speed, and accuracy.
Fix: The fix involves three changes: (1) Taking ipc_msg_table_lock in ipc_msg_send_request() while validating entry->response, freeing it when invalid, and removing the entry from ipc_msg_table, (2) Returning the final entry->response pointer to the caller only after the hash entry is removed under the lock, and (3) Returning NULL in the error path to preserve the original API semantics. This ensures all accesses to entry->response are protected by the same lock, eliminating the race condition.
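The locking discipline in that fix can be modeled in a few lines. This toy Python version stands in for the kernel C code: a dict plays the role of ipc_msg_table and a threading.Lock plays ipc_msg_table_lock, so every access to an entry's response happens under the same lock.

```python
import threading

class IpcTable:
    """Toy model of the ksmbd fix: all reads and writes of
    entry["response"] happen under one shared lock."""

    def __init__(self):
        self._lock = threading.Lock()   # stands in for ipc_msg_table_lock
        self._table = {}                # stands in for ipc_msg_table

    def add_entry(self, handle):
        with self._lock:
            self._table[handle] = {"response": None}

    def handle_response(self, handle, data):
        # Writer side: only touch the entry while holding the lock,
        # so it cannot be freed out from under this write.
        with self._lock:
            entry = self._table.get(handle)
            if entry is not None:
                entry["response"] = data

    def send_request(self, handle):
        # Reader side: validate and detach the entry under the same
        # lock; the response is returned only after the hash entry is
        # removed, so no other thread can still reach it.
        with self._lock:
            entry = self._table.pop(handle, None)
            if entry is None or entry["response"] is None:
                return None   # error path: preserve original semantics
        return entry["response"]
```

Because removal and validation happen atomically under the lock, the use-after-free window between ipc_msg_send_request() and handle_response() is closed.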
NVD/CVE Database
This paper presents Srchpa, a privacy-preserving method for computing the shortest path (the most efficient route between two locations) between a user and a destination. Unlike traditional navigation systems where users must share their location with a server, Srchpa protects both the user's location data and the server's route information while requiring only a single round of communication (one back-and-forth exchange) instead of multiple interactions. The scheme is designed to work efficiently even on resource-limited devices like smartphones.
This research addresses backdoor attacks, where poisoned training data (maliciously altered samples inserted into a dataset) causes neural networks to behave incorrectly on specific inputs. The authors propose a defense method called Trap that detects poisoned samples early in training by recognizing they cluster separately from legitimate data, then removes the backdoor by retraining part of the model on relabeled poisoned samples, achieving very high attack detection rates with minimal accuracy loss.
Fix: The paper proposes detecting poisoned samples during early training stages and removing the backdoor by retraining the classifier part of the model on relabeled poisoned samples. The authors report their method reduced average attack success rate to 0.07% while only decreasing average accuracy by 0.33% across twelve attacks on four datasets.
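The detection idea, that poisoned samples cluster apart from legitimate data of the same class, can be illustrated with a simple distance-to-centroid outlier check (an illustration of separation-based detection only, not the paper's Trap algorithm):

```python
import math

def flag_outlier_cluster(features, labels, target_class, k=3.0):
    """Within one class, flag samples whose feature vectors sit far
    from the class centroid; poisoned samples tend to form their own
    cluster away from the legitimate ones."""
    cls = [f for f, l in zip(features, labels) if l == target_class]
    dim = len(cls[0])
    centroid = [sum(f[i] for f in cls) / len(cls) for i in range(dim)]
    dists = [math.dist(f, centroid) for f in cls]
    mean = sum(dists) / len(dists)
    std = math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))
    cutoff = mean + k * std   # flag anything k standard deviations out
    return [d > cutoff for d in dists]
```

Real defenses work on learned intermediate features rather than raw inputs, but the geometric intuition, a separate cluster sitting far from the class centroid, is the same.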
IEEE Xplore (Security & AI Journals)
Researchers found that text-to-image diffusion models (AI systems that generate images from text descriptions) can be attacked using backdoors, which are hidden triggers in text that make the model produce unwanted outputs. This paper proposes Dynamic Attention Analysis (DAA), a new detection method that tracks how the model's attention mechanisms (the parts of the AI that focus on relevant information) change over time, since backdoor attacks create different patterns than normal operation. The method achieved strong detection results, correctly identifying backdoored samples about 79% of the time.
Fix: Upgrade to Weaviate OSS version 1.33.4 or later.
NVD/CVE Database
Fix: Upgrade Weaviate OSS to version 1.33.4 or later.
NVD/CVE Database
Researchers studied an AI-driven metaverse prototype (a 3D virtual environment enhanced with multi-agent systems, or software that can act independently) designed to train cybersecurity professionals, gathering feedback from 53 experts. The study found that this technology could create personalized, scalable training experiences but identified implementation challenges and proposed six recommendations for organizations considering adopting it.