All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
A SQL injection vulnerability (a type of attack where an attacker inserts malicious SQL code into an application) exists in LangGraph's SQLite storage system, specifically in langgraph-checkpoint-sqlite version 2.0.10. The flaw arises because the code concatenates user input directly into SQL statements instead of binding it as parameters, allowing attackers to steal sensitive data such as passwords and API keys and to bypass security protections.
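The fix for this class of bug is to bind user input as a parameter rather than splice it into the SQL text. A minimal sketch of the pattern, using illustrative table and column names (not LangGraph's actual schema):

```python
import sqlite3

# Sketch of the vulnerability class, not LangGraph's actual code:
# interpolating user input into SQL lets crafted input alter the query,
# while a parameterized query keeps it as data.

def find_checkpoint_unsafe(conn: sqlite3.Connection, thread_id: str):
    # VULNERABLE: thread_id is spliced directly into the SQL text
    return conn.execute(
        f"SELECT value FROM checkpoints WHERE thread_id = '{thread_id}'"
    ).fetchall()

def find_checkpoint_safe(conn: sqlite3.Connection, thread_id: str):
    # SAFE: the driver binds thread_id as a value, never as SQL
    return conn.execute(
        "SELECT value FROM checkpoints WHERE thread_id = ?", (thread_id,)
    ).fetchall()
```

With the payload `x' OR '1'='1`, the unsafe version returns every row while the parameterized version matches nothing.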
LlamaIndex v0.14.6 is a software update released on October 26, 2025, that fixes various bugs across multiple components including support for parallel tool calls, metadata handling, embedding format compatibility, and a SQL injection vulnerability (fixed by using parameterized queries instead of raw SQL string concatenation). The release also adds new features like async support for retrievers and integrations with new services like Helicone.
FastGPT, an AI Agent building platform, had a vulnerability in its workflow file reading node where network links were not properly verified, creating a risk of SSRF attacks (server-side request forgery, where an attacker tricks the server into making unwanted requests to other systems). The vulnerability affected versions before 4.11.1.
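A common SSRF mitigation, sketched below as an illustration (this is not FastGPT's actual patch), is to resolve the requested host before fetching and refuse addresses in private, loopback, link-local, or reserved ranges:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Hedged sketch of SSRF input validation: resolve the target host and
# refuse internal addresses before the server fetches the URL.

def is_url_allowed(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Note that a complete defense also has to handle redirects and DNS rebinding, which a one-shot check like this does not cover.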
Hugging Face Smolagents version 1.20.0 has an XPath injection vulnerability (a security flaw where attackers can inject malicious code into XPath queries, which are used to search and navigate document structures) in its web browser function. The vulnerability exists because user input is directly inserted into XPath queries without being cleaned, allowing attackers to bypass search filters, access unintended data, and disrupt automated web tasks.
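The injection pattern can be shown with Python's standard-library XPath subset (an illustration of the class, not smolagents' actual code): interpolating user input into a predicate lets crafted input change the query, while running a fixed query and comparing values in code does not.

```python
import xml.etree.ElementTree as ET

DOC = ET.fromstring(
    "<users><user name='alice' role='admin'/>"
    "<user name='bob' role='guest'/></users>"
)

def find_user_unsafe(name: str):
    # VULNERABLE: name is interpolated into the XPath predicate,
    # so input like "alice'][@role='admin" rewrites the query
    return DOC.findall(f".//user[@name='{name}']")

def find_user_safe(name: str):
    # SAFE: fixed query, the comparison happens in Python
    return [u for u in DOC.findall(".//user") if u.get("name") == name]
```

ElementTree has no parameterized XPath, so the safe variant keeps the query static and treats the input purely as data.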
AI agents (software systems that take actions automatically) often execute pre-approved system commands like 'find' and 'grep' for efficiency, but attackers can bypass human approval protections through argument injection attacks (exploiting how command parameters are handled) to achieve remote code execution (RCE, where attackers run unauthorized commands on a system). The article identifies that while these systems block dangerous commands and disable shell operators, they fail to validate command argument flags, creating a common vulnerability across multiple popular AI agent products.
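The argument-separation idea mentioned above can be sketched as follows (an illustration of the defense, not any specific product's code): even with shell operators disabled, a "safe" command like grep can be abused through flags, so the wrapper rejects option-like operands and terminates option parsing with `--`.

```python
import subprocess

def run_grep(pattern: str, path: str) -> subprocess.CompletedProcess:
    # Coarse extra guard for tools that lack reliable "--" support:
    # refuse operands that would be parsed as flags.
    for arg in (pattern, path):
        if arg.startswith("-"):
            raise ValueError(f"option-like argument rejected: {arg!r}")
    # "--" tells grep that everything after it is an operand, never a flag,
    # and passing an argv list (no shell) rules out operator injection.
    return subprocess.run(
        ["grep", "--", pattern, path],
        capture_output=True, text=True
    )
```

Sandboxing the whole execution remains the stronger control; flag filtering alone is easy to get wrong per-tool.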
A vulnerability (CVE-2025-53066) exists in Oracle Java SE and related products, affecting multiple versions including Java 8, 11, 17, 21, and 25. An attacker with network access can exploit this flaw in the JAXP component (a Java library for processing XML data) without needing to log in, potentially gaining unauthorized access to sensitive data. The vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 7.5, indicating it is a serious threat.
The Moodle OpenAI Chat Block plugin version 3.0.1 has an IDOR vulnerability (insecure direct object reference, where a user can access resources by directly requesting them without proper permission checks). An authenticated student can bypass validation of the blockId parameter in the plugin's API and impersonate another user's block, such as an administrator's block, allowing them to execute queries with that block's settings, expose sensitive information, and potentially misuse API resources.
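The missing check is an ownership test on the referenced object. A minimal sketch in Python (the plugin itself is PHP; names here are illustrative): look the block up and verify it belongs to the requester before using its settings.

```python
# Hedged sketch of an IDOR fix: the block id supplied by the client is
# only honored if the block is owned by the authenticated user.

def get_block_for_user(blocks: dict, block_id: str, user_id: str) -> dict:
    block = blocks.get(block_id)
    if block is None or block["owner_id"] != user_id:
        raise PermissionError("blockId does not belong to the requesting user")
    return block
```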
CVE-2025-49655 is a vulnerability in Keras (a machine learning framework) versions 3.11.0 through 3.11.2 where deserialization (converting saved data back into usable form) of untrusted data can allow malicious code to run on a user's computer when they load a specially crafted Keras file, even if safe mode is enabled. This vulnerability affects both locally stored and remotely downloaded files.
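Since loading a crafted model file can execute code even with safe mode enabled, one generic mitigation (a sketch, not a Keras API) is to treat model files like executables and verify a known-good digest before handing them to the loader:

```python
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def load_if_trusted(path: str, expected_sha256: str) -> str:
    # Refuse to load any model file whose digest does not match a
    # pinned, known-good value (e.g. recorded at export time).
    digest = sha256_of(path)
    if digest != expected_sha256:
        raise ValueError(f"untrusted model file: sha256={digest}")
    # only now hand `path` to the model loader
    return path
```

This does not fix the deserialization flaw itself; upgrading remains the actual remedy, and pinning digests only helps when the trusted file was produced before any tampering.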
CVE-2025-62356 is a path traversal vulnerability (a flaw that lets attackers access files outside intended directories) in all versions of Qodo Gen IDE that allows attackers to read any local files on a user's computer, both inside and outside their projects. The vulnerability can be exploited directly or through indirect prompt injection (tricking the AI by hiding malicious instructions in its input).
CVE-2025-62353 is a path traversal vulnerability (a flaw that lets attackers access files outside intended directories) in all versions of Windsurf IDE that allows attackers to read and write any files on a user's computer. The vulnerability can be exploited directly or through indirect prompt injection (tricking the AI by hiding malicious instructions in its input).
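The missing control in both of the path traversal cases above is containment: resolve the requested path and confirm it stays inside the workspace root before reading or writing. A minimal sketch (illustrative, not either IDE's actual code):

```python
import os

def resolve_inside(root: str, requested: str) -> str:
    # Resolve symlinks and ".." segments, then check the result is
    # still under the workspace root.
    root = os.path.realpath(root)
    target = os.path.realpath(os.path.join(root, requested))
    if os.path.commonpath([root, target]) != root:
        raise PermissionError(f"path escapes workspace: {requested!r}")
    return target
```

Using `realpath` plus `commonpath` also catches the sibling-prefix trick (`/workspace-evil` passing a naive `startswith("/workspace")` check).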
LlamaIndex v0.14.5 is a release that fixes multiple bugs and adds new features across its ecosystem of AI/LLM tools. Changes include fixing duplicate node positions in documents, improving streaming functionality with AI providers like Anthropic and OpenAI, adding support for new AI models, and enhancing vector storage (database systems that store AI embeddings, which are numerical representations of text meaning) capabilities. The release also introduces new integrations, such as Sglang LLM support and SignNow MCP (model context protocol, a standard for connecting AI tools) tools.
pwn.college DOJO, an education platform for learning cybersecurity, had a vulnerability in its /workspace endpoint that allowed attackers to access other users' Windows virtual machines (VMs, which are simulated computers) without permission. The flaw occurred because the system retrieved user information from a URL parameter without checking if the requester had admin privileges, and it didn't verify passwords before granting access to a user's desktop, potentially allowing attackers to view and modify files on both Windows and Linux systems.
Creativeitem Academy LMS version 5.13 and earlier has a privilege escalation vulnerability (a security flaw where users gain unauthorized higher-level permissions) in the Api_instructor controller that allows regular authenticated users to access functions meant only for instructors without proper role validation (checks that verify what a user is allowed to do). This could let unauthorized users create and manage courses.
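Missing role validation is typically fixed with a guard that runs before every privileged endpoint. A minimal sketch in Python (the LMS itself is PHP; these names are illustrative):

```python
from functools import wraps

def require_role(role: str):
    # Decorator that rejects callers whose role does not match.
    def decorator(func):
        @wraps(func)
        def wrapper(user: dict, *args, **kwargs):
            if user.get("role") != role:
                raise PermissionError(f"{role} role required")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("instructor")
def create_course(user: dict, title: str) -> dict:
    return {"title": title, "owner": user["id"]}
```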
A prompt injection vulnerability (tricking an AI by hiding instructions in its input) exists in Windsurf version 1.10.7 when using Write mode with the SWE-1 model. An attacker can create a specially crafted file name that gets added to the user's prompt, causing Windsurf to follow malicious instructions instead of the user's intended commands. The vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 4.6, classified as medium severity.
EspoCRM (a customer relationship management application) versions before 9.1.9 have a vulnerability that lets attackers create new user accounts, including admin accounts, by combining stored SVG injection (hiding malicious code in image files) with lack of CSRF protection (missing checks to verify requests are legitimate). An attacker with editing permissions can embed a malicious link in a Knowledge Base article that, when clicked by an authenticated user, tricks their browser into creating an attacker-controlled account with chosen privileges.
text-generation-webui (an open-source tool for running large language models through a web interface) versions 3.13 and earlier contain a Local File Inclusion vulnerability (a flaw where an attacker can read files they shouldn't have access to) in the character picture upload feature. An attacker can upload a text file with a symbolic link (a shortcut to another file) pointing to sensitive files, and the application will expose those files' contents through the web, potentially revealing passwords and system settings.
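A defensive pattern for this class of bug (a sketch, not text-generation-webui's actual fix) is to refuse symbolic links outright and open files with `O_NOFOLLOW` so a link planted in the upload directory is never followed:

```python
import os

def read_upload(path: str) -> bytes:
    # Reject symlinks explicitly, then open with O_NOFOLLOW so a link
    # swapped in between check and open still cannot be followed
    # (O_NOFOLLOW is POSIX; not available on Windows).
    if os.path.islink(path):
        raise PermissionError(f"symlink upload rejected: {path!r}")
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    with os.fdopen(fd, "rb") as f:
        return f.read()
```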
Fix: The source explicitly mentions one security fix: 'Replace raw SQL string interpolation with proper SQLAlchemy parameterized APIs in PostgresKVStore' (llama-index-storage-kvstore-postgres #20104). Users should update to v0.14.6 to receive this and other bug fixes. No other specific mitigation steps are described in the release notes.
LlamaIndex Security Releases
Fix: Update FastGPT to version 4.11.1 or later, as this issue has been patched in that version.
NVD/CVE Database
Fix: The issue is fixed in version 1.22.0. Users should upgrade Hugging Face Smolagents to version 1.22.0 or later.
NVD/CVE Database
Fix: The article states that 'the impact from this vulnerability class can be limited through improved command execution design using methods like sandboxing (isolating code in a restricted environment) and argument separation.' The text also mentions providing 'actionable recommendations for developers, users, and security engineers,' but the specific recommendations are not detailed in the provided excerpt.
Trail of Bits Blog
Fix: Update Keras to version 3.11.3 or later. The GitHub pull request at https://github.com/keras-team/keras/pull/21575 contains the fix.
NVD/CVE Database
The Senate introduced the AI LEAD Act, which would make AI companies legally liable for harms their systems cause, similar to how traditional product liability (the legal responsibility companies have when their products injure people) works for other products. The act would clarify that AI systems count as products subject to liability and would hold companies accountable if they failed to exercise reasonable care in designing the system, providing warnings, or if they sold a defective system. Additionally, China announced new export controls on rare earth metals (elements essential to semiconductors and AI hardware), which could disrupt global AI supply chains if strictly enforced.
Fix: The AI LEAD Act itself is the proposed remedy. It would establish federal product liability for AI systems; clarify that AI companies are liable for harms when they fail to exercise reasonable care in design or warnings, or breach warranties; allow deployers to be held liable for substantially modifying or dangerously misusing systems; prohibit AI companies from limiting liability through consumer contracts; and require foreign AI developers to register agents for service of process in the US before selling products domestically.
CAIS AI Safety Newsletter
ATLAS Data v5.0.0 introduces a new "Technique Maturity" field that categorizes AI attack techniques based on evidence level, ranging from feasible (proven in research) to realized (used in actual attacks). The release adds 11 new techniques covering AI agent attacks like context poisoning (injecting false information into an AI system's memory), credential theft from AI configurations, and prompt injection (tricking an AI by hiding malicious instructions in its input), plus updates to existing techniques and case studies.
Fix: This issue has been patched in commit 467db0b9ea0d9a929dc89b41f6eb59f7cfc68bef. No known workarounds exist.
NVD/CVE Database
Fix: Update to EspoCRM version 9.1.9 or later, where this issue has been patched.
NVD/CVE Database
This research presents LipVor, an algorithm that mathematically verifies whether a trained neural network (a computer model with interconnected nodes that learns patterns) follows partial monotonicity constraints, which means outputs change predictably with certain inputs. The method works by testing the network at specific points and using mathematical properties to guarantee the network behaves correctly across its entire domain, potentially allowing neural networks to be used in critical applications like credit scoring where trustworthiness and predictable behavior are required.
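The point-plus-Lipschitz idea can be illustrated in one dimension (a simplification of the paper's method, with made-up names): if the derivative at each sample point exceeds the Lipschitz constant of the derivative times the radius of the region that point covers, the derivative stays positive on that whole region, certifying monotonicity there.

```python
def certify_monotone_1d(deriv, lipschitz: float, points, radius: float) -> bool:
    # deriv: callable returning d f / d x at a point.
    # points must cover the domain with intervals of the given radius;
    # lipschitz bounds how fast deriv itself can change.
    return all(deriv(x) > lipschitz * radius for x in points)
```

For f(x) = x^2 on [1, 2] (deriv 2x, Lipschitz constant 2), two sample points with radius 0.25 certify monotonicity; on [-1, 1] the check correctly fails.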
Fix: Update to version 3.14, where this vulnerability is fixed.
NVD/CVE Database