All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
China's DeepSeek AI tool, which caused significant market disruption when it launched a year ago, is now being adopted by an increasing number of US companies. The episode discusses this growing trend of Chinese AI technology being integrated into American business operations.
BentoML, a Python library for serving AI models, had a vulnerability (before version 1.4.34) that allowed path traversal attacks (exploiting file path inputs to access files outside intended directories) through its configuration file. An attacker could craft a malicious configuration that, when a victim builds it, copies sensitive files such as SSH keys or passwords into the compiled application, exposing them when the artifact is shared or deployed.
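The underlying mistake in path traversal bugs like this one is joining a user-supplied path onto a base directory without checking where the result lands. A minimal, generic sketch of the standard guard (the `safe_join` name and base directory are illustrative, not BentoML's actual API):

```python
from pathlib import Path

def safe_join(base_dir: str, user_path: str) -> Path:
    """Resolve user_path against base_dir and reject any result that
    escapes base_dir (e.g. via '../' segments or symlinks)."""
    base = Path(base_dir).resolve()
    target = (base / user_path).resolve()
    if target != base and base not in target.parents:
        raise ValueError(f"path traversal attempt: {user_path!r}")
    return target
```

With this check in place, a config value like `../../home/user/.ssh/id_rsa` is rejected instead of being silently read into the build.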
This statement describes how U.S. Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have conducted surveillance and violated constitutional rights, including facial recognition scanning and warrantless home searches. The document argues these violations are systemic problems, citing recent deaths during enforcement actions and a leaked memo allowing searches based on administrative warrants (warrants issued by agency officials rather than judges) without judicial review.
The Kalrav AI Agent plugin for WordPress (versions up to 2.3.3) has a vulnerability in its file upload feature that fails to check what type of file is being uploaded. This allows attackers without user accounts to upload malicious files to the server, potentially leading to RCE (remote code execution, where an attacker can run commands on a system they don't own).
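The generic fix for unrestricted-upload bugs like this is a server-side allowlist of permitted file types. A minimal sketch with an illustrative allowlist (not the plugin's actual code, which is PHP):

```python
from pathlib import Path

# Illustrative allowlist; a real deployment would also verify the
# file's content type, not just its name.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".pdf"}

def is_allowed_upload(filename: str) -> bool:
    """Accept only filenames whose final extension is on the allowlist,
    so 'shell.php' and 'shell.png.php' are both rejected."""
    return Path(filename).suffix.lower() in ALLOWED_EXTENSIONS
```

Checking only the final suffix matters: `shell.png.php` ends in `.php` and is refused, whereas a naive substring check on `.png` would pass it.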
ChatterMate, a no-code AI chatbot framework (software that lets people build chatbots without writing code), has a security flaw in versions 1.0.8 and earlier where it accepts and runs malicious HTML/JavaScript code from user chat input. An attacker could send specially crafted code (like an iframe with a javascript: link) that executes in the user's browser and steals sensitive data such as localStorage tokens and cookies, which are used to keep users logged in.
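The standard defense against this kind of cross-site scripting is to escape user text before inserting it into HTML, so markup arrives in the browser as inert characters. A minimal Python sketch of the principle (ChatterMate is a larger framework; the `render_chat_message` helper here is purely illustrative):

```python
import html

def render_chat_message(user_text: str) -> str:
    """Escape HTML metacharacters in user input so a payload like
    '<iframe src="javascript:...">' is displayed as text, not executed."""
    return f"<div class='msg'>{html.escape(user_text)}</div>"
```

`html.escape` converts `<`, `>`, `&`, and quotes into entities, which is enough to neutralize injected tags and attributes in this context.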
Langflow contains a remote code execution (RCE, where an attacker can run commands on a system they don't own) vulnerability in its disk cache service that allows authenticated attackers to execute arbitrary code by sending maliciously crafted data that the system deserializes (converts from stored format back into usable objects) without proper validation. The flaw exploits insufficient checking of user-supplied input, letting attackers run code with the permissions of the service account.
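Why deserializing untrusted cache data is so dangerous can be shown in a few lines: Python's `pickle` invokes attacker-controlled callables during loading, while a data-only format such as JSON cannot. An illustrative sketch (not Langflow's actual cache code):

```python
import json
import pickle

class Exploit:
    # pickle calls __reduce__ while deserializing, so attacker-crafted
    # bytes can make pickle.loads run an arbitrary callable.
    def __reduce__(self):
        return (print, ("code ran during pickle.loads",))

malicious = pickle.dumps(Exploit())
pickle.loads(malicious)  # the side effect fires here, before any check

# json.loads, by contrast, can only ever produce plain data values:
data = json.loads('{"cached": [1, 2, 3]}')
```

This is why advisories for flaws like this one recommend data-only formats (or strict validation and authentication) for anything an attacker might be able to write into a cache.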
Langflow, a workflow automation tool, has a vulnerability where attackers can inject malicious Python code into Python function components and execute it on the server (RCE, or remote code execution). The severity and how it can be exploited depend on how Langflow is configured.
Langflow contains a remote code execution vulnerability (RCE, where an attacker can run commands on a system they don't own) in how it handles the exec_globals parameter at the validate endpoint, allowing unauthenticated attackers to execute arbitrary code with root-level privileges. The flaw stems from including functionality from an untrusted source without proper validation.
Langflow contains a vulnerability in its eval_custom_component_code function that allows attackers to execute arbitrary code (RCE, or remote code execution) without needing to log in. The flaw occurs because the function doesn't properly validate user input before executing it as Python code, letting attackers run any commands they want on the affected system.
Langflow has a critical vulnerability where attackers can execute arbitrary code (commands) on the server without needing to log in, by sending malicious input to the validate endpoint. The flaw occurs because the code parameter is not properly checked before being run as Python code, allowing an attacker to run commands with root-level permissions (the highest system access level).
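The four Langflow issues above share one root cause: user-supplied strings reaching `exec`/`eval` on the server. There is no safe way to run arbitrary user Python without strong isolation, so the real remedy is patching and authentication; as a purely illustrative sketch, an AST pre-check can at least reject the most obvious payloads. This is NOT a sandbox and is known to be bypassable:

```python
import ast

def reject_unsafe_code(source: str) -> None:
    """Illustrative pre-check only -- AST filtering is not a sandbox.
    It rejects the obvious payloads (imports, dunder access) that
    'run user Python on the server' endpoints would otherwise accept."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            raise ValueError("imports are not allowed")
        if isinstance(node, ast.Attribute) and node.attr.startswith("__"):
            raise ValueError("dunder attribute access is not allowed")
```

Even with such a filter, code that passes it should never run with root privileges, which is what made the `validate` endpoint flaws above critical.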
Ollama MCP Server contains a command injection vulnerability (a flaw where an attacker can insert malicious commands into user input that gets executed) in its execAsync method that allows unauthenticated attackers to run arbitrary code on the affected system. The vulnerability exists because the server doesn't properly validate user input before passing it to system commands, letting attackers execute code with the same privileges as the service running the server.
MCP Manager for Claude Desktop has a vulnerability where attackers can inject malicious commands into MCP config objects (configuration files that tell Claude how to use external tools) that aren't properly checked before being run as system commands. By tricking a user into visiting a malicious website or opening a malicious file, an attacker can break out of the sandbox (the restricted environment that limits what Claude can access) and run arbitrary code (any commands they want) on the computer.
A vulnerability in gemini-mcp-tool's execAsync method allows attackers to run arbitrary code (RCE, or remote code execution) on systems using this tool without needing to log in. The flaw occurs because the tool doesn't properly check user input before running system commands, letting attackers inject malicious commands.
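The `execAsync` flaws in Ollama MCP Server, MCP Manager, and gemini-mcp-tool are all shell command injection: attacker-controlled text is interpolated into a command string that a shell then parses. The standard remedy is to pass arguments as a list so no shell is involved. A Python sketch of the two patterns (the function names and the `echo` command are illustrative stand-ins, not these tools' real calls):

```python
import subprocess

def run_tool_unsafe(model_name: str) -> str:
    # Vulnerable pattern: the formatted string reaches a shell, so
    # model_name = "llama2; rm -rf ~" would run a second command.
    result = subprocess.run(f"echo pulling {model_name}",
                            shell=True, capture_output=True, text=True)
    return result.stdout

def run_tool_safe(model_name: str) -> str:
    # Argument-list form: model_name is a single argv entry and is
    # never interpreted by a shell, so ';' and '&&' stay literal.
    result = subprocess.run(["echo", "pulling", model_name],
                            capture_output=True, text=True)
    return result.stdout
```

In the safe form, a payload such as `llama2; whoami` is simply echoed back as text rather than executed.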
CVE-2026-24307 is a vulnerability in Microsoft 365 Copilot where improper validation of input (failure to check that data matches what the system expects) allows an attacker to access and disclose information over a network without authorization. The vulnerability has a CVSS score of 4.0 (a moderate severity rating on a 0-10 scale).
CVE-2026-21521 is a vulnerability in Microsoft Copilot where improper handling of escape sequences (special characters used to control how text is displayed or interpreted) allows an attacker to disclose information over a network without authorization. The vulnerability is classified as CWE-150 (improper neutralization of escape, meta, or control sequences) and was reported by Microsoft Corporation.
CVE-2026-21520 is a vulnerability in Microsoft Copilot Studio that allows an unauthenticated attacker to view sensitive information through a network-based attack. The vulnerability stems from improper handling of special characters in commands (command injection, where attackers manipulate input to execute unintended commands), and affects Copilot Studio's hosted service.
The White House digitally altered a photograph of an activist's arrest by darkening her skin and distorting her facial features to make her appear more distraught than in the original image posted by the Department of Homeland Security. AI detection tools confirmed the manipulation, raising concerns about how generative AI (systems that create images from text descriptions) and image editing technology can be misused by government to spread false information and reinforce racial stereotypes. The incident highlights the danger of deepfakes (realistic-looking fake media created with AI) and the importance of protecting citizens' right to independently document government actions.
AnythingLLM is an application that lets users feed documents into an LLM so it can reference them during conversations. Versions before 1.10.0 had a security flaw where the API key (QdrantApiKey) for Qdrant, the database that stores document information, could be retrieved without any authentication. With that key, attackers could read or modify all the documents and knowledge stored in the database, breaking the system's ability to search and retrieve information correctly.
Fix: Update AnythingLLM to version 1.10.0 or later. According to the source: 'Version 1.10.0 patches the issue.'
Source: NVD/CVE Database
Fix: Update BentoML to version 1.4.34 or later, which contains a patch for this issue.
Source: NVD/CVE Database
Fix: Congress must vote to reject any further funding of ICE and CBP, and rebuild the immigration enforcement system from the ground up to respect human rights and ensure real accountability for individual officers, their leadership, and the agency as a whole.
Source: EFF Deeplinks Blog
Fix: Update to version 1.0.9, where this issue has been fixed. The patch is available at https://github.com/chattermate/chattermate.chat/releases/tag/v1.0.9.
Source: NVD/CVE Database
This article argues that training AI models on copyrighted works should be protected as fair use (the legal right to use copyrighted material without permission for certain purposes like research or analysis), just as courts have previously allowed for search engines and other information technologies. The article contends that AI training is transformative because it extracts patterns from works rather than replacing them, and that expanding copyright restrictions on AI training could harm legitimate research practices in science and medicine.
Mobile super apps (large platforms that host smaller third-party applications, called miniapps, which share the same underlying services) create new security risks because multiple apps can access shared resources and data. Researchers studied how these ecosystems work, identified security vulnerabilities and potential abuses, and developed recommendations to make super app platforms safer while keeping them easy to use.