All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
CVE-2025-11200 is a vulnerability in MLflow that allows remote attackers to bypass authentication (gain access without logging in) because the system has weak password requirements (passwords that are too easy to guess or crack). Attackers can exploit this flaw to access MLflow installations without needing valid credentials.
Fix: A patch is available at the following GitHub commit: https://github.com/mlflow/mlflow/commit/1f74f3f24d8273927b8db392c23e108576936c54
Source: NVD/CVE Database
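Weak-password flaws of this kind are usually addressed by enforcing a server-side password policy at account creation. A minimal sketch of such a policy check, with hypothetical names and thresholds (this is not MLflow's actual implementation):

```python
import re

# Hypothetical server-side password policy; the minimum length and rules
# are illustrative assumptions, not MLflow's actual requirements.
MIN_LENGTH = 12

def validate_password(password: str) -> list[str]:
    """Return a list of policy violations; an empty list means acceptable."""
    problems = []
    if len(password) < MIN_LENGTH:
        problems.append(f"must be at least {MIN_LENGTH} characters")
    if not re.search(r"[A-Z]", password):
        problems.append("must contain an uppercase letter")
    if not re.search(r"[a-z]", password):
        problems.append("must contain a lowercase letter")
    if not re.search(r"\d", password):
        problems.append("must contain a digit")
    return problems

assert validate_password("Tr1cky-Passphrase") == []
assert "must be at least 12 characters" in validate_password("Ab1")
```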
CVE-2025-12058 is a vulnerability in Keras (a machine learning library) where the load_model method can be tricked into reading files from a computer's local storage or making network requests to external servers, even when the safe_mode=True security flag is enabled. The problem occurs because the StringLookup layer (a component that converts text into numbers) accepts file paths during model loading, and an attacker can craft a malicious .keras file (a model storage format) to exploit this weakness.
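A .keras file is a zip archive whose config.json describes the model's layers, so one defensive option before loading an untrusted model is to scan that config for values that look like file paths or URLs (such as a StringLookup vocabulary entry). The sketch below assumes this layout; the heuristics and function names are illustrative, not an official Keras API:

```python
import json
import zipfile

# Heuristic prefixes that suggest a config value points at the filesystem
# or the network; an assumption for illustration, not an exhaustive list.
SUSPICIOUS_PREFIXES = ("/", "file://", "http://", "https://", "\\\\")

def find_path_like_values(node, found=None):
    """Recursively collect config values that look like paths or URLs."""
    if found is None:
        found = []
    if isinstance(node, dict):
        for value in node.values():
            find_path_like_values(value, found)
    elif isinstance(node, list):
        for item in node:
            find_path_like_values(item, found)
    elif isinstance(node, str) and node.startswith(SUSPICIOUS_PREFIXES):
        found.append(node)
    return found

def audit_keras_archive(path):
    # .keras is a zip archive; config.json holds the layer configuration.
    with zipfile.ZipFile(path) as archive:
        config = json.loads(archive.read("config.json"))
    return find_path_like_values(config)

# Usage: refuse to call load_model if audit_keras_archive("model.keras")
# returns anything unexpected.
```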
Anthropic added network request capabilities to Claude's Code Interpreter, which creates a security risk for data exfiltration (unauthorized stealing of sensitive information). An attacker, either controlling the AI model or using indirect prompt injection (hidden malicious instructions in a document the AI processes), could abuse Anthropic's own APIs to steal data that a user has access to, rather than using typical methods like hidden links.
A Linux kernel vulnerability (CVE-2025-40058) affects Intel VT-d IOMMU (input/output memory management unit, a hardware component that manages memory access for devices) dirty page tracking. Dirty page tracking requires the IOMMU and CPU to keep memory synchronized, but if the IOMMU's page walk (the process of reading memory structure tables) is incoherent (not synchronized), the tracking fails and can cause non-recoverable faults.
Fix: Mark SSADS (support for dirty tracking) as supported only when both the ecap_slads and ecap_smpwc hardware capabilities are present, preventing the IOMMU from being configured for dirty page tracking when operating in incoherent mode.
Source: NVD/CVE Database
A SQL injection vulnerability (a type of attack where an attacker inserts malicious SQL code into an application) exists in LangGraph's SQLite storage system, specifically in version 2.0.10 of langgraph-checkpoint-sqlite. The vulnerability happens because the code directly combines user input with SQL commands instead of safely separating them, allowing attackers to steal sensitive data like passwords and API keys, and bypass security protections.
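A minimal sketch of the vulnerability class (not LangGraph's actual code): the first query concatenates attacker input into the SQL string, while the second passes it as a bound parameter so the driver treats it as an inert value. Table and column names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, payload TEXT)")
conn.execute("INSERT INTO checkpoints VALUES ('t1', 'secret-api-key')")

thread_id = "t1' OR '1'='1"  # attacker-controlled input

# Vulnerable pattern: user input concatenated into the SQL string.
# The injected quote closes the literal and the OR clause matches every row.
rows = conn.execute(
    f"SELECT payload FROM checkpoints WHERE thread_id = '{thread_id}'"
).fetchall()
print(rows)  # leaks every row, including other threads' data

# Safe pattern: a parameterized query keeps input and SQL separate;
# the driver treats the whole input as a single literal value.
rows = conn.execute(
    "SELECT payload FROM checkpoints WHERE thread_id = ?", (thread_id,)
).fetchall()
print(rows)  # [] -- no row has that literal thread_id
```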
LlamaIndex v0.14.6 is a software update released on October 26, 2025, that fixes various bugs across multiple components including support for parallel tool calls, metadata handling, embedding format compatibility, and SQL injection vulnerabilities (using parameterized queries instead of raw SQL string concatenation). The release also adds new features like async support for retrievers and integrations with new services like Helicone.
Fix: The source explicitly mentions one security fix: 'Replace raw SQL string interpolation with proper SQLAlchemy parameterized APIs in PostgresKVStore' (llama-index-storage-kvstore-postgres #20104). Users should update to v0.14.6 to receive this and other bug fixes. No other specific mitigation steps are described in the release notes.
Source: LlamaIndex Security Releases
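The quoted fix swaps raw string interpolation for SQLAlchemy's bound parameters. A generic sketch of that pattern (not the actual PostgresKVStore code; the table and keys are invented, and SQLite stands in for Postgres):

```python
from sqlalchemy import create_engine, text

# Bound parameters (:key, :value) replace raw string interpolation, so
# user-supplied values can never alter the SQL statement itself.
engine = create_engine("sqlite:///:memory:")  # stand-in for a Postgres URL

with engine.begin() as conn:
    conn.execute(text("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)"))
    # Unsafe would be: f"INSERT INTO kv VALUES ('{key}', '{value}')"
    # Safe: named bound parameters handled by the driver.
    conn.execute(
        text("INSERT INTO kv (key, value) VALUES (:key, :value)"),
        {"key": "user:1", "value": "hello"},
    )
    row = conn.execute(
        text("SELECT value FROM kv WHERE key = :key"), {"key": "user:1"}
    ).one()
    print(row.value)  # hello
```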
FastGPT, an AI Agent building platform, had a vulnerability in its workflow file reading node where network links were not properly verified, creating a risk of SSRF attacks (server-side request forgery, where an attacker tricks the server into making unwanted requests to other systems). The vulnerability affected versions before 4.11.1.
Fix: Update FastGPT to version 4.11.1 or later, as this issue has been patched in that version.
Source: NVD/CVE Database
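A common mitigation for this SSRF class is to validate outbound URLs before fetching them. A minimal sketch, not FastGPT's actual patch (a production guard would also pin the resolved address at connect time to defeat DNS rebinding):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Allow only http/https URLs that resolve to public addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # Reject internal targets the server should never be steered at.
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True

assert not is_safe_url("http://127.0.0.1:8080/admin")
assert not is_safe_url("file:///etc/passwd")
```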
Hugging Face Smolagents version 1.20.0 has an XPath injection vulnerability (a security flaw where attackers inject malicious expressions into XPath queries, which are used to search and navigate document structures) in its web browser function. The vulnerability exists because user input is inserted directly into XPath queries without being sanitized, allowing attackers to bypass search filters, access unintended data, and disrupt automated web tasks.
Fix: The issue is fixed in version 1.22.0; users should upgrade Hugging Face Smolagents to version 1.22.0 or later.
Source: NVD/CVE Database
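A minimal sketch of the vulnerability class using lxml (not Smolagents' actual code): string interpolation lets crafted input rewrite the query's logic, while an XPath variable keeps the input inert. The HTML and class names are invented for the example:

```python
from lxml import html

page = html.fromstring(
    "<ul><li class='public'>hello</li><li class='private'>secret</li></ul>"
)
user_input = "') or ('1'='1"  # attacker-controlled search text

# Vulnerable: f-string interpolation. The injected quote closes the string
# literal and the OR clause makes every node match, including 'private' ones.
matches = page.xpath(
    f"//li[@class='public' and contains(text(), '{user_input}')]"
)
print(len(matches))  # 2 -- the filter has been bypassed

# Safe: pass the input as an XPath variable ($q); it is always treated
# as a plain string value, never as query syntax.
matches = page.xpath(
    "//li[@class='public' and contains(text(), $q)]", q=user_input
)
print(len(matches))  # 0 -- no public item contains that literal text
```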
AI agents (software systems that take actions automatically) often execute pre-approved system commands like 'find' and 'grep' for efficiency, but attackers can bypass human approval protections through argument injection attacks (exploiting how command parameters are handled) to achieve remote code execution (RCE, where attackers run unauthorized commands on a system). The article identifies that while these systems block dangerous commands and disable shell operators, they fail to validate command argument flags, creating a common vulnerability across multiple popular AI agent products.
Fix: The article states that 'the impact from this vulnerability class can be limited through improved command execution design using methods like sandboxing (isolating code in a restricted environment) and argument separation.' It also mentions providing 'actionable recommendations for developers, users, and security engineers,' but the specific recommendations are not detailed in the provided excerpt.
Source: Trail of Bits Blog
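A minimal sketch of the bypass and one mitigation, with an invented allowlist (the article's specific products and recommendations are not reproduced here): find's -exec flag turns an "approved" read-only command into code execution unless the flags themselves are validated:

```python
import subprocess

# An agent-style binary allowlist: the shell is never invoked, and only
# "approved" commands may run. This alone does not stop argument injection.
approved = {"find", "grep"}

def run_agent_command(argv):
    if argv[0] not in approved:
        raise PermissionError(f"{argv[0]} is not approved")
    return subprocess.run(argv, capture_output=True, text=True)

# Passes the binary allowlist, yet spawns an arbitrary program via -exec:
run_agent_command(["find", ".", "-maxdepth", "0", "-exec", "id", ";"])

# One mitigation: also validate flags against a per-command allowlist
# (and separate options from user-supplied paths with "--").
SAFE_FLAGS = {"find": {"-name", "-type", "-maxdepth"},
              "grep": {"-r", "-n", "-i"}}

def run_hardened(argv):
    cmd = argv[0]
    if cmd not in approved:
        raise PermissionError(f"{cmd} is not approved")
    for arg in argv[1:]:
        if arg.startswith("-") and arg not in SAFE_FLAGS[cmd]:
            raise PermissionError(f"flag {arg} is not allowed for {cmd}")
    return subprocess.run(argv, capture_output=True, text=True)
```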
A vulnerability (CVE-2025-53066) exists in Oracle Java SE and related products, affecting multiple versions including Java 8, 11, 17, 21, and 25. An attacker with network access can exploit this flaw in the JAXP component (a Java library for processing XML data) without needing to log in, potentially gaining unauthorized access to sensitive data. The vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 7.5, indicating it is a serious threat.
The Moodle OpenAI Chat Block plugin version 3.0.1 has an IDOR vulnerability (insecure direct object reference, where a user can access resources by directly requesting them without proper permission checks). An authenticated student can bypass validation of the blockId parameter in the plugin's API and impersonate another user's block, such as an administrator's block, allowing them to execute queries with that block's settings, expose sensitive information, and potentially misuse API resources.
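The Moodle plugin itself is PHP, but the IDOR pattern is language-independent; below is a sketch in Python with entirely hypothetical names. The vulnerable handler trusts the client-supplied blockId, while the fixed one authorizes the object itself:

```python
# Hypothetical block store: block 2 belongs to an administrator and carries
# that block's API settings.
BLOCKS = {
    1: {"owner": "student42", "api_key": "sk-student"},
    2: {"owner": "admin", "api_key": "sk-admin"},
}

def query_llm(api_key, prompt):
    return f"(would call the chat API with key {api_key[:5]}...)"

def handle_chat_vulnerable(user, block_id, prompt):
    # No ownership check: any authenticated user can name the admin's block
    # and run queries with its settings and API key.
    block = BLOCKS[block_id]
    return query_llm(block["api_key"], prompt)

def handle_chat_fixed(user, block_id, prompt):
    # Authorize the referenced object, not just the request itself.
    block = BLOCKS.get(block_id)
    if block is None or block["owner"] != user:
        raise PermissionError("block does not belong to this user")
    return query_llm(block["api_key"], prompt)
```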
CVE-2025-49655 is a vulnerability in Keras (a machine learning framework) versions 3.11.0 through 3.11.2 where deserialization (converting saved data back into usable form) of untrusted data can allow malicious code to run on a user's computer when they load a specially crafted Keras file, even if safe mode is enabled. This vulnerability affects both locally stored and remotely downloaded files.
Fix: Update Keras to version 3.11.3 or later. The GitHub pull request at https://github.com/keras-team/keras/pull/21575 contains the fix.
Source: NVD/CVE Database
CVE-2025-62356 is a path traversal vulnerability (a flaw that lets attackers access files outside intended directories) in all versions of Qodo Gen IDE that allows attackers to read any local files on a user's computer, both inside and outside their projects. The vulnerability can be exploited directly or through indirect prompt injection (tricking the AI by hiding malicious instructions in its input).
CVE-2025-62353 is a path traversal vulnerability (a flaw that lets attackers access files outside intended directories) in all versions of Windsurf IDE that allows attackers to read and write any files on a user's computer. The vulnerability can be exploited directly or through indirect prompt injection (tricking the AI by hiding malicious instructions in its input).
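Both IDE vulnerabilities above belong to the same path traversal class, typically mitigated by resolving the requested path and checking containment before any read or write. A generic sketch (not either vendor's actual fix):

```python
import os

def resolve_inside(root: str, requested: str) -> str:
    """Resolve `requested` relative to `root`, refusing paths that escape it."""
    root = os.path.realpath(root)  # also resolves symlinks
    target = os.path.realpath(os.path.join(root, requested))
    if os.path.commonpath([root, target]) != root:
        raise PermissionError(f"{requested!r} escapes the workspace root")
    return target

# resolve_inside("/home/user/project", "src/main.py")      -> allowed
# resolve_inside("/home/user/project", "../../etc/passwd") -> PermissionError
```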
LlamaIndex v0.14.5 is a release that fixes multiple bugs and adds new features across its ecosystem of AI/LLM tools. Changes include fixing duplicate node positions in documents, improving streaming functionality with AI providers like Anthropic and OpenAI, adding support for new AI models, and enhancing vector storage (database systems that store AI embeddings, which are numerical representations of text meaning) capabilities. The release also introduces new integrations, such as Sglang LLM support and SignNow MCP (model context protocol, a standard for connecting AI tools) tools.
A new benchmark called the Remote Labor Index (RLI) measures whether AI systems can automate real computer work tasks across different professions, showing that current AI agents can only fully automate 2.5% of projects despite improving over time. Additionally, over 50,000 people, including top scientists and Nobel laureates, signed an open letter calling for a moratorium (temporary ban) on developing superintelligence (a hypothetical AI system far more capable than humans) until it can be proven safe and controllable.
This paper presents RINNs (reparameterizable integral neural networks), a new type of AI model designed to run efficiently on mobile devices with limited computing power. The key innovation is a reparameterization strategy that converts the complex mathematical structure used during training into a simpler feed-forward structure (a straightforward sequence of processing steps) at inference time, allowing these models to achieve high accuracy (79.1%) while running very fast (0.87 milliseconds) on mobile hardware.
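The paper's exact construction is not given here, but the general reparameterization idea can be shown in a toy example: parallel linear branches used during training collapse into a single feed-forward layer at inference, because linear maps add and compose. Everything below is a generic illustration, not the RINN method:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)

# Training-time structure: two parallel linear branches plus a skip connection.
W1 = rng.standard_normal((8, 8))
W2 = rng.standard_normal((8, 8))
y_train = W1 @ x + W2 @ x + x

# Inference-time structure: one merged weight matrix computing the same function.
W_merged = W1 + W2 + np.eye(8)
y_infer = W_merged @ x

assert np.allclose(y_train, y_infer)  # identical outputs, fewer operations
```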
The Senate introduced the AI LEAD Act, which would make AI companies legally liable for harms their systems cause, similar to how traditional product liability (the legal responsibility companies have when their products injure people) works for other products. The act would clarify that AI systems count as products subject to liability and would hold companies accountable if they failed to exercise reasonable care in designing the system, providing warnings, or if they sold a defective system. Additionally, China announced new export controls on rare earth metals (elements essential to semiconductors and AI hardware), which could disrupt global AI supply chains if strictly enforced.
Fix: The AI LEAD Act itself serves as the proposed solution: it would establish federal product liability for AI systems, clarify that AI companies are liable for harms if they fail to exercise reasonable care in design or warnings or breach warranties, allow deployers to be held liable for substantially modifying or dangerously misusing systems, prohibit AI companies from limiting liability through consumer contracts, and require foreign AI developers to register agents for service of process in the US before selling products domestically.
Source: CAIS AI Safety Newsletter
ATLAS Data v5.0.0 introduces a new "Technique Maturity" field that categorizes AI attack techniques based on evidence level, ranging from feasible (proven in research) to realized (used in actual attacks). The release adds 11 new techniques covering AI agent attacks like context poisoning (injecting false information into an AI system's memory), credential theft from AI configurations, and prompt injection (tricking an AI by hiding malicious instructions in its input), plus updates to existing techniques and case studies.