All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
vLLM, a system for running and serving large language models, had a security weakness in how it checked API keys (secret codes that authenticate users) before version 0.11.0rc2. The validation used a plain string comparison that stopped at the first mismatched character, so responses took measurably longer the more leading characters an attacker had guessed correctly. By timing those responses (a timing attack), an attacker could recover the key one character at a time, bypass authentication, and gain unauthorized access.
Fix: Update vLLM to version 0.11.0rc2 or later, which fixes the issue.
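The flaw and its standard remedy can be sketched in Python. The function names here are illustrative, not vLLM's actual code; the safe variant uses the standard library's constant-time comparison.

```python
import hmac

def check_api_key_vulnerable(supplied: str, expected: str) -> bool:
    # Naive comparison: the interpreter returns at the first mismatching
    # character, so response time leaks how many leading characters the
    # attacker guessed correctly.
    return supplied == expected

def check_api_key_safe(supplied: str, expected: str) -> bool:
    # hmac.compare_digest runs in time independent of where the inputs
    # differ, defeating the character-by-character timing attack.
    return hmac.compare_digest(supplied.encode(), expected.encode())
```

In practice, any secret comparison on a request path should go through a constant-time primitive like this rather than `==`.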
NVD/CVE Database
The HTMLSectionSplitter class in langchain-text-splitters version 0.3.8 has a vulnerability where it unsafely parses XSLT stylesheets (instructions that transform XML data), allowing attackers to read sensitive files like SSH keys or environment configurations without needing special access. This XXE (XML External Entity, a type of injection attack that exploits how XML parsers handle external files) attack works by default in older versions of the underlying lxml library and can still work in newer versions unless specific security controls are added.
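The real remediation is to configure the XML parser to forbid external entity resolution (for lxml, its entity-resolution and access-control options). As a hedged defense-in-depth illustration only, and not langchain's code, a minimal pre-check can reject stylesheets that declare entities before they ever reach an XSLT engine:

```python
import re

# Hypothetical pre-check: reject stylesheets containing DOCTYPE or
# external entity declarations. This complements, not replaces,
# hardening the XML parser itself.
_SUSPICIOUS = re.compile(rb"<!DOCTYPE|<!ENTITY|SYSTEM\s+[\"']", re.IGNORECASE)

def is_stylesheet_safe(xslt_bytes: bytes) -> bool:
    return _SUSPICIOUS.search(xslt_bytes) is None
```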
Flowise version 3.0.7 has a file upload vulnerability that lets authenticated users (people with login access) upload any file type without proper checks. Attackers can upload malicious Node.js web shells (programs that let someone run commands on a server remotely), which stay on the server and could lead to RCE (remote code execution, where an attacker runs commands on a system they don't own) if activated through admin mistakes or other vulnerabilities.
SillyTavern, a locally installed interface for interacting with text generation AI models and other AI tools, has a vulnerability in versions before 1.13.4 that allows DNS rebinding (a network attack where an attacker tricks your computer into connecting to a malicious server by manipulating domain name lookups) to let attackers install harmful extensions, steal chat conversations, or create fake login pages. The vulnerability affects the web-based user interface and is especially exploitable when the application is accessed over a local network without SSL (encrypted connections).
A vulnerability in the Linux kernel's bpf_sk_assign function (a BPF helper that assigns sockets to network packets) could cause a memory leak when unhashed UDP sockets (sockets not yet bound to a port) are used. The problem occurs because the function assumes a socket flag called SOCK_RCU_FREE stays constant, but this flag gets set when an unhashed socket is later bound to a port, breaking the function's memory management logic.
Mastra (a TypeScript framework for building AI agents and assistants) versions 0.13.8 through 0.13.20-alpha.0 have a directory traversal vulnerability, which means an attacker can bypass security checks to list files and folders in any directory on a user's computer, potentially exposing sensitive information. The flaw exists because while the code tries to prevent path traversal (unauthorized access to files through manipulated file paths) for reading files, a separate part of the code that suggests directories can be exploited to work around this protection.
Cursor is a code editor designed for programming with AI help. Versions 1.6.23 and below use case-sensitive checks (treating uppercase and lowercase letters as different) to protect sensitive files. On case-insensitive filesystems (which treat uppercase and lowercase letters as the same), attackers can use prompt injection (tricking the AI with hidden instructions) to modify those files under a differently cased name and gain remote code execution (the ability to run commands on the victim's computer).
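The flawed versus corrected comparison can be sketched as follows; the `PROTECTED` set and function names are hypothetical, not Cursor's code.

```python
# Hypothetical protected-path set, for illustration only.
PROTECTED = {".cursor/mcp.json"}

def is_protected_case_sensitive(path: str) -> bool:
    # Flawed check: ".Cursor/MCP.json" slips through on a case-insensitive
    # filesystem even though it names the same file on disk.
    return path in PROTECTED

def is_protected_normalized(path: str) -> bool:
    # Compare case-folded paths so every spelling variant of the same
    # file is caught on case-insensitive filesystems.
    return path.lower() in {p.lower() for p in PROTECTED}
```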
Claude Code versions before 1.0.120 had a security flaw where it could bypass file access restrictions by following symlinks (shortcuts that point to other files). Even if a user blocked Claude Code from accessing a file, the tool could still read it if there was a symlink pointing to that blocked file.
Cursor, a code editor designed for programming with AI, has a vulnerability in versions 1.7 and below where attackers can use prompt injection (tricking the AI by hiding instructions in its input) to modify sensitive configuration files and achieve remote code execution (RCE, where an attacker can run commands on a system they don't own). This vulnerability is especially dangerous on case-insensitive filesystems (systems that treat uppercase and lowercase letters as the same).
Federated learning (a way for multiple parties to train an AI model together without sharing their raw data with a central server) normally requires many communication rounds that waste bandwidth and can leak private information. Existing compression methods reduce communication but ignore privacy risks and fail when some clients disconnect. Octopus addresses these issues by using Sketch (a data compression technique) to compress gradients (the direction and size of updates to a model), adding protective masks around the compressed data, and including a strategy to handle disconnected clients.
Fix: Octopus employs Sketch to compress gradients and embeds masks for the compressed gradients to safeguard them while reducing communication overhead. The scheme proposes an anti-disconnection strategy to support model updates even when some clients are disconnected.
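The source names only "Sketch" as the compression primitive. A count sketch is one common instance; the pure-Python sketch below shows the compress/estimate idea (Octopus's protective masks and anti-disconnection strategy are omitted).

```python
import hashlib

def _hashes(i: int, width: int, seed: int) -> tuple[int, int]:
    # Derive a bucket index and a +/-1 sign for coordinate i.
    h = hashlib.sha256(f"{seed}:{i}".encode()).digest()
    bucket = int.from_bytes(h[:4], "big") % width
    sign = 1 if h[4] % 2 == 0 else -1
    return bucket, sign

def sketch(grad: list[float], width: int, seed: int = 0) -> list[float]:
    # Compress a gradient vector into `width` buckets (width << len(grad)).
    table = [0.0] * width
    for i, g in enumerate(grad):
        b, s = _hashes(i, width, seed)
        table[b] += s * g
    return table

def estimate(table: list[float], i: int, seed: int = 0) -> float:
    # Unbiased estimate of coordinate i; colliding coordinates add noise,
    # which is why sketches suit large, roughly sparse gradients.
    b, s = _hashes(i, len(table), seed)
    return s * table[b]
```

Clients would transmit only the `width`-sized table instead of the full gradient, which is where the bandwidth saving comes from.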
IEEE Xplore (Security & AI Journals)
Federated learning (a training method where multiple parties collaborate to build an AI model without sharing raw data) is vulnerable to model poisoning attacks (where attackers inject harmful updates during training to break the model). This paper proposes MSDFL and HMSDFL, new defensive approaches that strengthen models by improving their stability, meaning they become less sensitive to small changes in their internal parameters, making them more resistant to these poisoning attacks.
Fix: The source explicitly describes the solution: 'we introduce a new method named Model Stability Defense for Federated Learning (MSDFL), designed to fortify the defense of FL systems against model poisoning attacks. MSDFL utilizes a minmax optimization framework, which is fundamentally linked to empirical risk for exploring the effects of model perturbations. The core aim of our approach is to minimize the norm of the model-output Jacobian matrix without compromising predictive performance, thereby establishing defense through enhanced model stability.' The paper also proposes 'a refined version of MSDFL, named Holistic Model Stability Defense for Federated Learning (HMSDFL), which considers model stability across all output dimensions of the logits to effectively eradicate the disparity in model convergence speed induced by MSDFL.'
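One plausible formalization of the quoted description, with symbols of our own choosing (theta for parameters, delta for a bounded perturbation, J for the model-output Jacobian); the paper's exact objective may differ:

```latex
% Min-max robustness over bounded parameter perturbations, plus a
% penalty on the model-output Jacobian norm (the stability term):
\min_{\theta} \; \max_{\|\delta\| \le \epsilon} \; \hat{R}(\theta + \delta)
\;+\; \lambda \, \bigl\| J_{f_\theta}(x) \bigr\|
```

Here \(\hat{R}\) denotes the empirical risk and \(\lambda\) trades stability against predictive performance.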
IEEE Xplore (Security & AI Journals)
EEG-FE_rrRS is a biometric recognition system that uses brain wave signals (EEG, electroencephalogram) and a fuzzy extractor (a cryptographic tool that converts messy biometric data into secure, consistent digital codes) to create unique digital identities for users in applications like drones and virtual worlds. The system combines EEG signal processing with a fuzzy extractor framework and demonstrates high accuracy in recognizing individuals, achieving nearly perfect results on certain datasets.
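The paper's exact construction is not given in this summary. The classic code-offset fuzzy extractor, sketched here in pure Python with a 3x repetition code, shows the general idea: noisy biometric bits plus public helper data still reproduce a stable key.

```python
import hashlib
import secrets

REP = 3  # repetition factor: corrects 1 flipped bit per 3-bit block

def _encode(bits: list[int]) -> list[int]:
    # Repetition-code encoding: each secret bit becomes REP copies.
    return [b for b in bits for _ in range(REP)]

def _decode(bits: list[int]) -> list[int]:
    # Majority vote within each block of REP bits.
    return [1 if sum(bits[i:i + REP]) * 2 > REP else 0
            for i in range(0, len(bits), REP)]

def gen(w: list[int]) -> tuple[bytes, list[int]]:
    # Gen: pick a random secret r, publish helper = w XOR encode(r).
    # The helper alone reveals nothing useful about the key.
    assert len(w) % REP == 0
    r = [secrets.randbelow(2) for _ in range(len(w) // REP)]
    helper = [wi ^ ci for wi, ci in zip(w, _encode(r))]
    return hashlib.sha256(bytes(r)).digest(), helper

def rep(w_noisy: list[int], helper: list[int]) -> bytes:
    # Rep: w' XOR helper is a noisy codeword; decoding recovers r,
    # hence the same key, despite small biometric measurement noise.
    r = _decode([wi ^ hi for wi, hi in zip(w_noisy, helper)])
    return hashlib.sha256(bytes(r)).digest()
```

Real EEG systems quantize extracted signal features into the bit vector `w` and use much stronger error-correcting codes than repetition.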
Fix: The vulnerability has been patched in version 1.13.4. Users should update to this version. The fix includes a new server configuration setting called `hostWhitelist.enabled` in the config.yaml file or the `SILLYTAVERN_HOSTWHITELIST_ENABLED` environment variable that validates hostnames in incoming HTTP requests against an allowed list. The setting is disabled by default for backward compatibility, but users are encouraged to review their server configurations and enable this protection, especially if hosting over a local network without SSL.
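The essence of such a hostname allowlist can be sketched in Python (the allowlist entries and function name are illustrative, not SillyTavern's Node.js implementation):

```python
# Illustrative allowlist; real deployments would load this from config.
ALLOWED_HOSTS = {"localhost", "127.0.0.1"}

def host_allowed(host_header: str) -> bool:
    # Strip any port, then compare against the allowlist. Rejecting
    # unknown hostnames defeats DNS rebinding, where an attacker's domain
    # suddenly resolves to the victim's local server.
    hostname = host_header.rsplit(":", 1)[0].lower()
    return hostname in ALLOWED_HOSTS
```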
NVD/CVE Database
Millimeter-wave radars (mmWave, sensors that use radio waves to detect objects) used in autonomous vehicles can be tricked by attackers who send false signals to distort what the radar perceives, potentially causing dangerous driving behavior. AttackDeceiver is a new anti-spoofing system (a defense against false signal attacks) that uses a phase-shifted interleaving waveform (a specially designed radio signal pattern) to detect fake targets by comparing readings from two independent channels, and it also tricks adaptive attackers into creating unrealistic fake objects that are easier to identify.
Fix: The source describes the AttackDeceiver system itself as the mitigation. It works by comparing range and velocity estimates from two independent virtual channels to detect and mitigate spoofing attacks, and by inducing attackers to generate false targets with unrealistic velocity fluctuations that can be identified. The prototype achieved false target recall exceeding 97.9% and signal-to-interference-plus-noise ratio enhancement exceeding 13.46 dB.
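The cross-channel consistency idea can be sketched as a simple check; the tuple format and tolerances below are assumptions, not the paper's parameters.

```python
def is_spoofed(ch_a: tuple[float, float], ch_b: tuple[float, float],
               range_tol: float = 0.5, vel_tol: float = 0.3) -> bool:
    # Each channel reports (range_m, velocity_mps) for a detected target.
    # A genuine reflection produces consistent estimates on both channels;
    # an injected signal generally cannot match both simultaneously.
    (r_a, v_a), (r_b, v_b) = ch_a, ch_b
    return abs(r_a - r_b) > range_tol or abs(v_a - v_b) > vel_tol
```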
IEEE Xplore (Security & AI Journals)
Researchers demonstrated a flow correlation attack against Nym, a mixnet (a network system that hides which user is communicating with which destination by routing traffic through multiple nodes). By analyzing the pattern and rate of data packets, an attacker controlling entry and exit gateways can use a neural network (a machine learning model inspired by how brains process information) to match incoming flows with outgoing flows with very high accuracy. The study tested five defense strategies and found that using the right combination of countermeasures at appropriate scales can meaningfully reduce the attack's effectiveness.
Fix: The source states: 'the right choice and scale of countermeasure(s) can offer meaningful protection' and mentions that 'five evaluated defense strategies' were tested. However, the source does not explicitly specify which countermeasures to implement, their names, configuration details, or version updates. The text only notes that 'steps a mixnet such as Nym can take to make our attack both less likely and less accurate' exist but does not detail them.
IEEE Xplore (Security & AI Journals)
PrivESD is a new system that allows machine learning classification (logistic regression, a technique for categorizing data) to work on encrypted streaming data (continuously flowing information that's been scrambled for privacy) while stored in the cloud. The system splits the computational work between cloud servers and edge devices (computers closer to where data originates) to reduce processing burden and privacy risks, and uses special encryption methods that still allow the system to compare values without revealing the actual data.
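PrivESD's actual protocol relies on comparison-friendly encryption; as a hedged stand-in, additive secret sharing illustrates how the logistic-regression dot product can be split between edge and cloud so neither party alone sees the raw features.

```python
import math
import random

def share(x: float) -> tuple[float, float]:
    # Split a value into two additive shares: neither share alone
    # reveals x, but the two shares sum back to x.
    r = random.uniform(-100, 100)
    return r, x - r

def logistic_infer(features: list[float], weights: list[float]) -> float:
    # Edge and cloud each hold one share of every feature and compute a
    # partial weighted sum; only the combined score is ever revealed.
    edge_shares, cloud_shares = zip(*(share(x) for x in features))
    edge_partial = sum(w * s for w, s in zip(weights, edge_shares))
    cloud_partial = sum(w * s for w, s in zip(weights, cloud_shares))
    score = edge_partial + cloud_partial  # equals the plaintext dot product
    return 1.0 / (1.0 + math.exp(-score))
```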
Researchers discovered that hyper-parameters (settings that control how a deep reinforcement learning model learns and behaves) can be leaked from closed-box DRL models, meaning attackers can figure out these secret settings just by observing how the model responds to different situations. They created an attack called HyperInfer that successfully inferred hyper-parameters with over 90% accuracy, showing that even restricted AI models may expose information that was meant to stay hidden.
Researchers created a method called UTE-SS (Unlearnable text examples generation via syntax-oriented shortcut) to protect text data from being used to train AI models without permission. The method adds small, hard-to-notice changes to text by altering its syntax (grammatical structure) so that language models learn misleading patterns instead of useful information, making the text data effectively useless for training.
This paper presents a new system for 3-D multiobject tracking (MOT, a technique where AI follows multiple objects moving through 3-D space) used in autonomous vehicles to improve safety. The system uses a voxel masking encoder (a method that processes 3-D space divided into small cubes, focusing on important features while ignoring empty space) and deep hashing (a technique that converts objects into compact numerical codes for fast comparison) to better track distant objects, partially hidden objects, and similar-looking objects. The method was tested on the KITTI dataset (a standard collection of driving videos used to evaluate autonomous vehicle systems) and showed better tracking accuracy than existing methods.
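The deep-hashing matching step reduces to comparing binary codes by Hamming distance; a minimal sketch follows (track names, code width, and threshold are assumptions, not the paper's configuration).

```python
def hamming(a: int, b: int) -> int:
    # Number of differing bits between two binary hash codes.
    return bin(a ^ b).count("1")

def match_track(code: int, track_codes: dict[str, int], max_dist: int = 8):
    # Associate a detection's hash code with the closest existing track,
    # provided it is within max_dist bits; otherwise report no match
    # (which would spawn a new track).
    best = min(track_codes, key=lambda t: hamming(code, track_codes[t]),
               default=None)
    if best is not None and hamming(code, track_codes[best]) <= max_dist:
        return best
    return None
```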
FedMPS is a federated learning (FL, a technique where multiple computers train an AI model together without sharing raw data) framework that addresses performance problems caused by data heterogeneity (differences in data across participants). Instead of exchanging full model parameters, FedMPS transmits only prototypes (representative feature patterns) and soft labels (probability-based output predictions), which reduces communication costs and improves how well models learn from each other.
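The prototype exchange can be sketched as simple per-class feature averaging; this is a generic illustration, and FedMPS's actual aggregation and soft-label handling are richer.

```python
def client_prototypes(samples: list[tuple[list[float], int]]) -> dict[int, list[float]]:
    # Average the feature vectors of each class seen by this client.
    sums: dict[int, list[float]] = {}
    counts: dict[int, int] = {}
    for features, label in samples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        sums[label] = [a + b for a, b in zip(sums[label], features)]
        counts[label] += 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def aggregate(prototypes: list[dict[int, list[float]]]) -> dict[int, list[float]]:
    # Server side: average each class prototype across the clients that
    # reported it, instead of exchanging full model parameters.
    merged: dict[int, list[list[float]]] = {}
    for proto in prototypes:
        for label, vec in proto.items():
            merged.setdefault(label, []).append(vec)
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in merged.items()}
```

Because a prototype is just one vector per class, the upload is far smaller than a full set of model weights, which is the communication saving the summary describes.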
Hard sample mining (HSM, a technique for selecting the most difficult training examples to focus a model's learning) has emerged as a method to improve how efficiently deep neural networks (AI systems based on interconnected layers inspired by brain neurons) train and make them more robust to errors. This survey article reviews different HSM approaches and explains how they help address training inefficiency and data distribution biases (when training data doesn't represent real-world scenarios fairly) in deep learning.
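In its simplest form, hard sample mining keeps only the highest-loss examples for the next update; a minimal sketch:

```python
def mine_hard_samples(losses: list[float], k: int) -> list[int]:
    # Return indices of the k samples with the highest loss; these are
    # the "hard" examples the next training step concentrates on.
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return ranked[:k]
```

Surveyed variants differ mainly in how "hardness" is scored (loss, gradient norm, margin) and in how aggressively easy samples are discarded.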
Fix: Reject unhashed sockets in bpf_sk_assign(). This matches the behaviour of __inet_lookup_skb(), which bpf_sk_assign() is ultimately intended to mirror.
NVD/CVE Database
Fix: This issue is fixed in version 0.13.20.
NVD/CVE Database
Fix: This issue is fixed in version 1.7. Users should upgrade to version 1.7 or later.
NVD/CVE Database
Fix: Update Claude Code to version 1.0.120 or later. Users with automatic updates enabled will have received this fix automatically; users updating manually should upgrade to the latest version.
NVD/CVE Database
Fix: This issue is fixed in commit 25b418f, but has yet to be released as of October 3, 2025.
NVD/CVE Database