All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
GPT-SoVITS-WebUI, a tool for converting voices and generating speech from text, has an unsafe deserialization vulnerability (a flaw where untrusted data is converted back into code objects, potentially allowing attackers to run malicious code) in versions 20250228v3 and earlier. The vulnerability occurs because user-supplied file paths are directly passed to torch.load, a function that can execute arbitrary code during the deserialization process.
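`torch.load` uses Python's pickle format by default, and pickle lets a serialized object name any callable to run at load time. A minimal stdlib sketch of the underlying mechanism (no PyTorch required; the `Malicious` class and the `eval` payload are illustrative stand-ins for a real attack):

```python
import pickle

class Malicious:
    # __reduce__ tells pickle how to rebuild the object on load;
    # an attacker can make it call any function with any arguments.
    def __reduce__(self):
        return (eval, ("2 + 2",))  # stand-in for a real payload

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # runs eval("2 + 2") during deserialization
```

Loading checkpoints with `torch.load(path, weights_only=True)` (available in recent PyTorch releases) restricts deserialization to tensor data and avoids this code path.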
DSpace, an open-source application for storing and accessing digital files, has a vulnerability in versions before 7.6.4, 8.2, and 9.1 where it doesn't properly disable XML External Entity (XXE) injection (a technique where attackers embed malicious entity declarations in XML files to read sensitive files or steal data from the server). The vulnerability affects both the command-line import tool and the web interface's batch import feature, but only administrators can trigger it by importing archive files.
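A blunt mitigation sketch for untrusted XML (the DOCTYPE check and the `parse_untrusted` helper are illustrative, not DSpace's actual patch): external entities can only be declared inside a DTD, so rejecting any uploaded document that carries one blocks the payload shape shown below.

```python
import xml.etree.ElementTree as ET

XXE_PAYLOAD = """<?xml version="1.0"?>
<!DOCTYPE item [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<item>&xxe;</item>"""

def parse_untrusted(xml_text: str) -> ET.Element:
    # Crude screen: external entities require a DTD, so refuse
    # any imported document that declares one.
    if "<!DOCTYPE" in xml_text:
        raise ValueError("DTDs are not allowed in imported archives")
    return ET.fromstring(xml_text)
```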
A ReDoS (regular expression denial of service, where carefully designed text input causes a regex pattern to consume excessive CPU) vulnerability was found in the Hugging Face Transformers library's DonutProcessor class, affecting versions 4.50.3 and earlier. The vulnerable regex pattern can be exploited through crafted input strings to cause the system to slow down or crash, disrupting document processing tasks that use the Donut model.
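The failure mode can be reproduced with any regex whose nested quantifiers force exponential backtracking; the pattern below is a textbook example, not the actual Donut pattern:

```python
import re
import time

EVIL = re.compile(r"^(a+)+$")   # nested quantifiers: exponential backtracking
SAFE = re.compile(r"^a+$")      # matches the same strings in linear time

subject = "a" * 18 + "b"        # short non-matching input

start = time.perf_counter()
EVIL.match(subject)             # must try ~2**17 ways to split the "a" run
evil_time = time.perf_counter() - start

start = time.perf_counter()
SAFE.match(subject)
safe_time = time.perf_counter() - start
```

Typical fixes are rewriting the pattern without nested quantifiers or capping input length before matching.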
A WordPress plugin called 'Photos, Files, YouTube, Twitter, Instagram, TikTok, Ecommerce Contest Gallery' has a stored cross-site scripting (XSS) vulnerability (where an attacker hides malicious script in a page so that it runs when others view it) in versions up to 26.0.8. Attackers with Author-level permissions or higher can inject harmful scripts through the upload title field because the plugin doesn't properly sanitize and escape user input.
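The general remedy for stored XSS is to escape user-controlled text at output time; a minimal stdlib sketch (`render_title` is a hypothetical helper, not the plugin's code):

```python
import html

def render_title(user_title: str) -> str:
    # html.escape neutralizes <, >, &, and quotes, so injected
    # markup is displayed as text instead of executed.
    return f"<h2>{html.escape(user_title)}</h2>"
```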
CVE-2025-7021 is a vulnerability in OpenAI Operator SaaS on Web where an attacker can trick users into entering sensitive information like login credentials by creating a fake fullscreen interface that mimics browser controls and hides security warnings. The attacker overlays distracting elements (such as a fake cookie consent screen) to obscure notifications and deceive users into interacting with the malicious site. This vulnerability has a CVSS score of 6.9 (MEDIUM severity).
CVE-2025-38341 is a double free vulnerability (a bug where memory is freed twice, causing crashes or security issues) in the Linux kernel's fbnic ethernet driver that occurs when a function called fbnic_mbx_map_msg() fails to DMA-map (transfer data to hardware memory) a firmware message. The vulnerability arises because the function's design expects callers to free the message themselves on error, but some code paths may incorrectly free the message twice.
A vulnerability in the Linux kernel's SGX (Software Guard Extensions, a CPU feature that creates isolated execution areas) allows the system to attempt reclaiming memory pages that are already poisoned (marked as corrupted due to hardware errors). When the kernel tries to reclaim these poisoned pages using special CPU instructions like EWB (encrypt and write back), it can trigger machine check errors that crash the system, because SGX instructions cannot safely handle these hardware errors.
Roo Code is an AI tool that can write code automatically. Before version 3.22.6, if a user had auto-approved write permissions, an attacker could send prompts to the agent that would modify VS Code settings files (configuration files that control how the editor works) and run malicious code on the user's computer. For example, an attacker could change a PHP validation setting to point to a harmful command, then create a PHP file to execute it.
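A generic guard for auto-approved agent writes is to deny paths that escape the workspace or touch editor configuration; a sketch under those assumptions (the directory names and the `write_allowed` helper are illustrative, not Roo Code's actual fix):

```python
from pathlib import Path

PROTECTED_DIRS = {".vscode", ".git"}

def write_allowed(workspace: Path, target: Path) -> bool:
    # Resolve symlinks and ".." first, then require the target to
    # stay inside the workspace and outside editor config folders.
    try:
        rel = target.resolve().relative_to(workspace.resolve())
    except ValueError:
        return False
    return not any(part in PROTECTED_DIRS for part in rel.parts)
```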
Hugging Face Transformers versions up to 4.49.0 have a vulnerability in the `image_utils.py` file where URL validation (checking if a URL starts with certain text) can be tricked through URL username injection (adding fake credentials to a URL). Attackers can create fake URLs that look like they're from YouTube but actually point to malicious sites, risking phishing attacks, malware, or stolen data.
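Prefix checks fail because everything before `@` in a URL's authority is userinfo, not the host. A sketch of the flawed check versus a hostname comparison (the function names are illustrative):

```python
from urllib.parse import urlparse

TRICK_URL = "https://www.youtube.com@evil.example/watch?v=abc"

def naive_is_youtube(url: str) -> bool:
    # Vulnerable pattern: a plain string-prefix check.
    return url.startswith("https://www.youtube.com")

def strict_is_youtube(url: str) -> bool:
    # Parse first: in TRICK_URL the real host is evil.example and
    # "www.youtube.com" is just the userinfo before the "@".
    host = urlparse(url).hostname or ""
    return host == "youtube.com" or host.endswith(".youtube.com")
```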
A ReDoS vulnerability (regular expression denial of service, where specially crafted text causes a regex pattern to consume excessive CPU) was found in Hugging Face Transformers library version 4.49.0, specifically in code that filters Python try/except blocks. Attackers could exploit this to crash or slow down systems using the library, potentially disrupting model serving or supply chain processes.
A ReDoS vulnerability (regular expression denial of service, where specially crafted input causes a program to use excessive CPU by making the regex engine work inefficiently) was found in the Hugging Face Transformers library version 4.49.0, specifically in a function that reads configuration files. An attacker could send malicious input to make the application slow down or crash by exhausting its computing resources.
A ReDoS vulnerability (regular expression denial of service, where inefficient pattern matching causes a system to slow down or crash) was found in the Hugging Face Transformers library version 4.49.0. The problem is in a regex pattern called `SETTING_RE` that uses inefficient repetition, causing it to take exponentially longer when processing specially crafted input strings, which can make the application unresponsive or crash.
BerriAI litellm version 1.65.4 contains a SQL injection vulnerability (a type of attack where malicious SQL code is inserted into user inputs to manipulate database queries) in the /key/block endpoint. This weakness allows attackers to potentially access or modify database contents through this vulnerable endpoint.
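The standard remedy is parameterized queries; a minimal sqlite3 sketch (the schema and the `block_key` helper are illustrative, not litellm's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (token TEXT, blocked INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO api_keys (token, blocked) VALUES (?, 0)",
                 [("key-1",), ("key-2",)])

def block_key(token: str) -> int:
    # The "?" placeholder sends the value out of band, so input like
    # "' OR '1'='1" is matched as a literal token, never parsed as SQL.
    cur = conn.execute("UPDATE api_keys SET blocked = 1 WHERE token = ?",
                       (token,))
    return cur.rowcount
```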
The U.S. Senate voted 99-1 to strip a provision from a Republican bill that would have barred states from regulating AI as a condition of receiving federal broadband expansion funds. Senate rules had already weakened the provision by tying it to only $500 million in new funding rather than the full $42.45 billion in broadband funds, so states would have had little incentive to comply even if it had passed.
A vulnerability exists in Anthropic's deprecated Slack MCP Server (Model Context Protocol Server, a tool that lets AI agents interact with Slack) that allows attackers to steal sensitive data. When an AI agent processes untrusted input, an attacker can trick it into creating messages with malicious links that, when Slack's link preview bots automatically expand them, secretly send private data to the attacker's server without requiring any user action.
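One defense is to disable link previews on agent-posted messages so Slack's bots never fetch embedded URLs; a sketch that builds a payload using the unfurl options of Slack's chat.postMessage API (the `build_agent_message` helper and channel name are illustrative):

```python
def build_agent_message(text: str) -> dict:
    # Payload for Slack's chat.postMessage with unfurling off:
    # preview bots never request attacker-supplied URLs that a
    # prompt-injected agent may have woven into the text.
    return {
        "channel": "#agent-output",  # illustrative channel
        "text": text,
        "unfurl_links": False,
        "unfurl_media": False,
    }
```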
The @cyanheads/git-mcp-server (an MCP server, or a tool that lets AI systems interact with Git repositories) has a command injection vulnerability (a flaw where attackers can sneak extra system commands into input) in versions before 2.1.5. Because the server doesn't check user input before running system commands, attackers can execute arbitrary code on the server, or trick an AI client into running unwanted actions through indirect prompt injection (hiding malicious instructions in data the AI reads).
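The underlying fix pattern is to pass untrusted strings as discrete argv entries rather than through a shell; a stdlib sketch using echo as a stand-in for a git invocation:

```python
import subprocess

def run_tool(user_ref: str) -> str:
    # List form execs the program directly: no shell ever parses
    # the ";" in user_ref, so the injected command never runs.
    result = subprocess.run(
        ["echo", user_ref],  # stand-in for ["git", "log", user_ref]
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```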
dpkg-deb (a tool that extracts and manages Debian package files) fails to properly set permissions on temporary directories when unpacking package contents, potentially leaving temporary files behind. If an attacker repeatedly sends malicious packages or uses highly compressible files placed in directories that can't be deleted by regular users, this could fill up the disk and cause a denial of service (DoS, a situation where a system becomes unusable due to resource exhaustion).
Fix: The source explicitly states: 'The fix is included in DSpace 7.6.4, 8.2, and 9.1. Please upgrade to one of these versions.' For organizations unable to upgrade immediately, the source mentions: 'it is possible to manually patch the DSpace backend' and recommends administrators 'carefully inspect any SAF archives (they did not construct themselves) before importing' and 'affected external services can be disabled to mitigate the ability for payloads to be delivered via external service APIs.'
NVD/CVE Database
The EU published a General-Purpose AI Code of Practice in July 2025 to clarify how AI developers should comply with the EU AI Act's safety requirements, which had been ambiguously worded. The Code establishes a three-step framework for identifying, analyzing, and determining whether systemic risks (including CBRN threats, loss of control, cyber attacks, and harmful manipulation) are acceptable before deploying large AI models, along with requirements for continuous monitoring and incident reporting.
Fix: The EU General-Purpose AI Code of Practice provides a structured approach requiring GPAI providers to: (1) Identify potential systemic risks in four categories (CBRN, loss of control, cyber offense capabilities, and harmful manipulation), (2) Analyze each risk using model evaluations and third-party evaluators when necessary, (3) Determine whether risks are acceptable and implement safety and security mitigations if not, and (4) conduct continuous monitoring after deployment with strict incident reporting timelines.
CAIS AI Safety Newsletter
In Q2 2025, attackers exploited GPT-4.1 by embedding malicious hidden instructions within tool descriptions, a technique called tool poisoning (hiding harmful prompts inside the text that describes what a tool does). When the AI interacted with these poisoned tools, it unknowingly executed unauthorized actions and leaked sensitive data without the user's knowledge.
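A crude sketch of one countermeasure, screening tool descriptions for instruction-like phrases before registering the tool (the phrase list and helper are illustrative; real screening would need to be far broader):

```python
import re

INSTRUCTION_LIKE = re.compile(
    r"ignore (all|previous|prior) instructions"
    r"|do not (tell|show) the user"
    r"|send .{0,40} to https?://",
    re.IGNORECASE,
)

def vet_tool_description(description: str) -> bool:
    # Reject tool registrations whose descriptions read like
    # instructions to the model rather than documentation.
    return INSTRUCTION_LIKE.search(description) is None
```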
Fix: The source explicitly mentions these mitigations: implement strict validation and sanitization of tool descriptions, establish permissions and access controls for tool integrations, monitor AI behavior for anomalies during tool execution, and educate developers on secure integration practices. Developers must validate third-party tools and ensure descriptions are free of hidden prompts, and IT teams should audit AI tool integrations and monitor for unusual activity.
OWASP GenAI Security
Fix: Update the Hugging Face Transformers library to version 4.52.1 or later, as this version contains the fix for the vulnerability.
NVD/CVE Database
Fix: The Linux kernel project has released patches to fix this vulnerability. Three patch commits are available: https://git.kernel.org/stable/c/0a211e23852019ef55c70094524e87a944accbb5, https://git.kernel.org/stable/c/5bd1bafd4474ee26f504b41aba11f3e2a1175b88, and https://git.kernel.org/stable/c/670179265ad787b9fd8e701601914618b8927755. Users should apply the appropriate kernel update containing one of these patches.
NVD/CVE Database
Fix: Call sgx_unmark_page_reclaimable() to remove the affected EPC (Enclave Page Cache) page from sgx_active_page_list when a memory error is detected. This prevents the corrupted page from being considered for reclaiming and stops the system from attempting dangerous operations on it.
NVD/CVE Database
Google is automatically enabling its Gemini AI to access third-party apps like WhatsApp on Android devices, overriding previous user settings that blocked such access. Users who want to prevent this must take action, though Google's guidance on how to fully disable Gemini integrations is unclear and confusing, with the company stating that even when Gemini access is blocked, data is still stored for 72 hours.
Fix: According to a Tuta researcher cited in the article, disabling Gemini app activity is likely to prevent data collection beyond the 72-hour temporary storage period. Additionally, if the Gemini app is not already installed on a device, it will not be installed after the change takes effect.
Ars Technica (Security)
Fix: Update Roo Code to version 3.22.6 or later, where this vulnerability is fixed.
NVD/CVE Database
Fix: The issue is fixed in version 4.52.1. Update Hugging Face Transformers to version 4.52.1 or later.
NVD/CVE Database
Fix: Update to version 4.51.0, where the vulnerability is fixed.
NVD/CVE Database
Fix: Update to version 4.51.0, where the issue is resolved.
NVD/CVE Database
Fix: Update to version 4.51.0 or later, where the issue is fixed.
NVD/CVE Database
Fix: Update to version 2.1.5, where this issue has been patched.
NVD/CVE Database