All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
ZenML Server in the ZenML machine learning package before version 0.46.7 has a remote privilege escalation vulnerability (CVE-2024-25723), allowing an attacker to gain higher-level access to the system remotely. The flaw is in a REST API endpoint (a web-based interface for requests) that activates user accounts: it requires only a valid username and a new password to change account settings, with no access controls verifying that the caller is allowed to do so.
Fix: Update ZenML to version 0.46.7 or use one of the patched versions: 0.44.4, 0.43.1, or 0.42.2.
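The access-control gap can be illustrated with a minimal, hypothetical sketch (not ZenML's actual code or API — the handler names, user store, and token field are made up): an activation handler that trusts a username and new password alone, next to a variant that demands a per-account secret.

```python
# Hypothetical sketch of the bug class — not ZenML's actual code.
USERS = {"alice": {"active": False, "password": None, "activation_token": "tok-123"}}

def activate_user_unsafe(username: str, new_password: str) -> bool:
    """Vulnerable pattern: anyone who knows a valid username can set the password."""
    user = USERS.get(username)
    if user is None:
        return False
    user["password"] = new_password  # no check that the caller owns this account
    user["active"] = True
    return True

def activate_user_safe(username: str, new_password: str, token: str) -> bool:
    """Fixed pattern: require a secret, per-account activation token."""
    user = USERS.get(username)
    if user is None or token != user["activation_token"]:
        return False
    user["password"] = new_password
    user["active"] = True
    return True
```

In a real service the token comparison should also be constant-time (e.g. `hmac.compare_digest`) and the token single-use.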
NVD/CVE Database
A vulnerability in the Linux kernel's queued read-write lock mechanism allowed a race condition where a reader could modify a value while a writer thought it had acquired the lock. The problem occurred because the writer's lock acquisition wasn't properly ordered with respect to the atomic compare-and-exchange operation (cmpxchg, a CPU instruction that compares and swaps values atomically), creating a window where reads could see stale data before the write lock was truly secured.
The EU AI Act classifies AI systems by risk level, from prohibited (like social scoring systems that manipulate behavior) to minimal risk (unregulated). High-risk AI systems, such as those used in critical decisions affecting people's lives, face strict regulations requiring developers to provide documentation, conduct testing, and monitor for problems. General-purpose AI models (large language models that can do many tasks) have lighter requirements unless they present systemic risk, in which case developers must test them against adversarial attacks (attempts to trick or break them) and report serious incidents.
CVE-2024-27444 is a vulnerability in LangChain Experimental (a Python library for building AI applications) before version 0.1.8 that allows attackers to bypass a previous security fix and run arbitrary code (malicious commands they choose) by using Python's special attributes like __import__ and __globals__, which were not blocked by the pal_chain/base.py security checks.
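The bypass works because denylist-style filtering of generated code is brittle: Python's dunder builtins let code reach dangerous functionality without naming anything on the list. A minimal illustration (the denylist and checker here are hypothetical, not LangChain's actual code):

```python
import ast

BLOCKED = {"exec", "eval", "os", "subprocess"}  # hypothetical denylist

def naive_is_safe(code: str) -> bool:
    """Reject code that names a blocked identifier — and nothing else."""
    return not any(
        isinstance(node, ast.Name) and node.id in BLOCKED
        for node in ast.walk(ast.parse(code))
    )

# Names none of the blocked identifiers, yet imports os all the same:
payload = "__import__('os').getcwd()"
assert naive_is_safe(payload)  # the filter waves it through
# eval(payload) would now execute attacker-chosen code despite the check
```

This is why the patched versions block dunder names during inspection; upgrading to 0.1.8 or later is the real fix.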
MLflow, a machine learning platform, has a vulnerability where it doesn't properly sanitize user input displayed in dataset tables, allowing XSS (cross-site scripting, where attackers inject malicious code into web pages). When someone runs a recipe using an untrusted dataset in Jupyter Notebook, this can lead to RCE (remote code execution, where an attacker can run commands on the user's computer).
MLflow has a vulnerability (CVE-2024-27132) where template variables are not properly sanitized, allowing XSS (cross-site scripting, where malicious code runs in a user's browser) when running an untrusted recipe in Jupyter Notebook. This can lead to client-side RCE (remote code execution, where an attacker can run commands on the user's computer) through insufficient input cleaning.
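Independent of MLflow's specific patch, the standard defense against this XSS class is to escape untrusted values before they are interpolated into HTML. A generic sketch using Python's standard library (the `render_cell` helper is illustrative, not MLflow's code):

```python
# Generic XSS mitigation sketch — not MLflow's actual patch.
import html

def render_cell(value: str) -> str:
    """Escape a dataset-supplied value before it reaches the page."""
    return html.escape(value, quote=True)

malicious = '<img src=x onerror="alert(1)">'
safe = render_cell(malicious)
# '&lt;img src=x onerror=&quot;alert(1)&quot;&gt;' — rendered as text, not markup
```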
ONNX (a machine learning model format library) versions 1.15.0 and earlier have an out-of-bounds read vulnerability (accessing memory outside intended boundaries) caused by an off-by-one error in the ONNX_ASSERT and ONNX_ASSERTM functions, which handle string copying. This flaw could allow attackers to read sensitive data from memory.
ONNX (a machine learning model format) versions 1.15.0 and earlier contain a directory traversal vulnerability (a security flaw where an attacker can access files outside the intended directory) in the external_data field of tensor proto (a data structure component). This vulnerability bypasses a previous security patch, allowing attackers to potentially access files they shouldn't be able to reach.
A bug in the Linux kernel's NVMe over TCP (nvmet-tcp, a protocol for storage communication) can cause a kernel panic (system crash) when a host computer sends an H2CData command with an invalid DATAL (data length) value. The crash happens in the nvmet_tcp_build_pdu_iovec() function, which processes incoming network packets.
CVE-2023-30767 is a vulnerability in Intel's Optimization for TensorFlow before version 2.13.0 caused by improper buffer restrictions (inadequate checks on how much data can be written to a memory area). An authenticated user with local access to a system could exploit this flaw to gain higher privilege levels than they should have.
ChatGPT's Code Interpreter (a sandbox environment that runs code) was not properly isolated between different GPTs, meaning files uploaded to one GPT were visible and could be modified by other GPTs used by the same person, creating a security risk where malicious GPTs could steal or overwrite sensitive files. OpenAI addressed this vulnerability in May 2024.
Researchers discovered ASCII Smuggling, a technique using Unicode Tags Block characters (special Unicode codes that mirror ASCII but stay invisible in UI elements) to hide prompt injections (tricky instructions hidden in AI input) that large language models interpret as regular text. This attack is particularly dangerous for LLMs because they can both read these hidden messages and generate them in responses, enabling more sophisticated attacks beyond traditional methods like XSS (cross-site scripting, injecting malicious code into websites) and SSRF (server-side request forgery, tricking a server into making unauthorized requests).
CVE-2024-0964 is a vulnerability in Gradio (an AI tool library) where an attacker can remotely read files from a server by sending a specially crafted JSON request. The flaw exists because Gradio doesn't properly limit which files users can access through its API, allowing attackers to bypass directory restrictions and read sensitive files they shouldn't be able to reach.
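A common mitigation for this class of flaw, shown here as a generic sketch rather than Gradio's actual patch, is to resolve every requested path and verify it still lies under the permitted root before serving it:

```python
# Generic path-traversal defense sketch — not Gradio's code.
import os

def is_within_root(root: str, requested: str) -> bool:
    """Resolve symlinks and '..' segments, then confirm containment."""
    root_real = os.path.realpath(root)
    target = os.path.realpath(os.path.join(root_real, requested))
    return os.path.commonpath([root_real, target]) == root_real

# is_within_root("/srv/files", "reports/a.csv")   -> True  (stays inside)
# is_within_root("/srv/files", "../../etc/passwd") -> False (escapes the root)
```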
Autolab, a web-based course management system that automatically grades programming assignments, contained path traversal vulnerabilities (a type of bug where attackers can access files outside the intended directory) in versions before 2.12.0, allowing instructors to read arbitrary files on the system. The vulnerability affects the assessment functionality, and no workaround is available.
LlamaIndex (a tool for building AI applications with custom data) versions up to 0.9.34 have a SQL injection vulnerability (a flaw where attackers can insert malicious database commands into normal text input) in the Text-to-SQL feature. Attackers can run harmful SQL commands, such as deleting database tables, by hiding them in plain-English requests.
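One widely used mitigation when executing model-generated SQL, sketched generically here (this is not LlamaIndex's API), is to run it through a read-only gate. SQLite's authorizer callback can deny everything except reads, so a hidden `DROP TABLE` fails instead of destroying data:

```python
# Generic read-only gate for model-generated SQL — not LlamaIndex's code.
import sqlite3

READ_ONLY_ACTIONS = {sqlite3.SQLITE_SELECT, sqlite3.SQLITE_READ}

def read_only_authorizer(action, arg1, arg2, db_name, trigger):
    """Permit only SELECT statements and column reads; deny all writes/DDL."""
    return sqlite3.SQLITE_OK if action in READ_ONLY_ACTIONS else sqlite3.SQLITE_DENY

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
conn.set_authorizer(read_only_authorizer)

rows = conn.execute("SELECT name FROM users").fetchall()  # allowed
try:
    conn.execute("DROP TABLE users")                      # denied by authorizer
except sqlite3.DatabaseError as exc:
    print("blocked:", exc)
```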
LlamaHub (a library for loading plugins) versions before 0.0.67 have a vulnerability in how they handle OpenAPI and ChatGPT plugin loaders that allows attackers to execute arbitrary code (run any code they choose on a system). The problem is that the code uses unsafe YAML parsing instead of safe_load (a secure function that prevents malicious code in configuration files).
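The difference between the two parsing modes is easy to demonstrate. The sketch below assumes PyYAML is installed; the document and tag are illustrative, not LlamaHub's actual loader code:

```python
# Unsafe-YAML bug class sketch — not LlamaHub's code. Requires PyYAML.
import yaml

doc = "!!python/object/apply:os.getcwd []"

# Permissive parsing executes os.getcwd() while merely "loading data":
result = yaml.load(doc, Loader=yaml.UnsafeLoader)
print("unsafe load ran code, returned:", result)

# safe_load refuses the tag instead of constructing arbitrary objects:
try:
    yaml.safe_load(doc)
except yaml.YAMLError as exc:
    print("safe_load rejected it:", type(exc).__name__)
```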
A researcher discovered that Amazon Q for Business was vulnerable to an indirect prompt injection attack (a technique where an attacker hides malicious instructions in data that gets fed to an AI), which could trick the AI into outputting markdown tags that render as hyperlinks. This allowed attackers to steal sensitive data from victims by embedding malicious links in uploaded files. Amazon identified and fixed the vulnerability after the researcher reported it.
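A common defensive pattern against this exfiltration channel — shown as a hypothetical sketch, not Amazon's actual fix, with a made-up allowlist host — is to scrub links and images in model output whose targets are not explicitly trusted before rendering markdown:

```python
# Hypothetical markdown link scrubber — not Amazon Q's actual mitigation.
import re
from urllib.parse import urlparse

ALLOWED_HOSTS = {"docs.example.com"}  # hypothetical allowlist

LINK_RE = re.compile(r"!?\[([^\]]*)\]\(([^)\s]+)[^)]*\)")

def scrub_markdown_links(text: str) -> str:
    """Keep links to allowlisted hosts; reduce everything else to plain text."""
    def repl(m):
        host = urlparse(m.group(2)).hostname or ""  # relative URLs are stripped too
        return m.group(0) if host in ALLOWED_HOSTS else m.group(1)
    return LINK_RE.sub(repl, text)
```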
Fix: Switching the cmpxchg to use acquire semantics (memory ordering guarantees that prevent certain CPU operations from being reordered) addresses the issue. After this change, the atomic_cond_read can be switched to use relaxed semantics (a faster version without strict ordering guarantees), as the cmpxchg now provides the necessary ordering.
NVD/CVE Database
Fix: Update to LangChain Experimental version 0.1.8 or later. A patch is available at https://github.com/langchain-ai/langchain/commit/de9a6cdf163ed00adaf2e559203ed0a9ca2f1de7.
NVD/CVE Database
Fix: A patch is available at https://github.com/mlflow/mlflow/pull/10893
NVD/CVE Database
Fix: Raise a fatal error if DATAL isn't consistent with the packet size. Additionally, the PDU (protocol data unit, the structure holding network data) length should never exceed the MAXH2CDATA parameter that was communicated to the host in nvmet_tcp_handle_icreq().
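The same validation idea, sketched generically in Python rather than kernel C (the 4-byte header layout and the MAXH2CDATA value here are illustrative, not the real wire format):

```python
# Generic length-field validation sketch — not the actual nvmet-tcp code.
import struct

MAXH2CDATA = 64 * 1024  # hypothetical negotiated limit

def parse_data_pdu(packet: bytes) -> bytes:
    """Reject a PDU whose declared length disagrees with reality."""
    if len(packet) < 4:
        raise ValueError("truncated header")
    (datal,) = struct.unpack_from("<I", packet, 0)  # declared data length
    payload = packet[4:]
    if datal != len(payload):
        raise ValueError("DATAL inconsistent with packet size")
    if datal > MAXH2CDATA:
        raise ValueError("DATAL exceeds negotiated maximum")
    return payload
```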
NVD/CVE Database
A researcher discovered a vulnerability in Google Gemini where attackers can hide instructions in emails that trick the AI into automatically calling external tools (called Extensions) without the user's knowledge. When a user asks the AI to analyze a malicious email, the AI follows the hidden instructions and invokes the tool, which is a form of request forgery (making unauthorized requests on behalf of the user).
Fix: Update Intel Optimization for TensorFlow to version 2.13.0 or later.
NVD/CVE Database
Fix: OpenAI addressed this vulnerability in May 2024. Additionally, the source recommends: 'Disable Code Interpreter in private GPTs with private knowledge files (as they will be accessible to other GPTs)' and notes that 'when creating a new GPT Code Interpreter is off by default' as one change OpenAI made. Users should avoid uploading sensitive files to Code Interpreter and use third-party GPTs with caution, especially those with Code Interpreter enabled.
Embrace The Red
Fix: As a developer, a possible mitigation is to remove Unicode Tags Block text on the way in and out (meaning filter it both when users send input to your LLM and when the LLM sends responses back to users). Additionally, test your own LLM applications for this new attack vector to identify vulnerabilities.
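Such a filter is straightforward because the Unicode Tags Block occupies a single code-point range (U+E0000–U+E007F). A minimal sketch:

```python
# Strip Unicode Tags Block code points (U+E0000–U+E007F) from text,
# applied to both model input and model output.
def strip_unicode_tags(text: str) -> str:
    return "".join(ch for ch in text if not 0xE0000 <= ord(ch) <= 0xE007F)

visible = "Hello"
hidden = "".join(chr(0xE0000 + ord(c)) for c in "do evil")  # invisible payload
assert strip_unicode_tags(visible + hidden) == "Hello"
```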
Embrace The Red
A researcher discovered that Anthropic's Claude AI model is vulnerable to hidden prompt injections using Unicode Tags code points (invisible characters that can carry secret instructions in text). Like ChatGPT before it, Claude can interpret these hidden instructions and follow them, even though users cannot see them on their screen. The researcher reported the issue to Anthropic, but the ticket was closed without further details provided.
Fix: A patch is available at https://github.com/gradio-app/gradio/commit/d76bcaaaf0734aaf49a680f94ea9d4d22a602e70, which addresses the path traversal vulnerability (CWE-22, improper limitation of pathname access).
NVD/CVE Database
Google Bard gained a code interpreter feature that lets it run Python code to create charts and perform calculations. The feature works by executing code in a sandboxed environment (an isolated virtual computer), which users can trigger by asking Bard to visualize data or plot results. While exploring this sandbox, the author found it to be somewhat unreliable and less capable than similar features in other AI systems, with limited ability to run arbitrary programs.
Fix: Upgrade to Autolab version 2.12.0 or later, which contains a patch for this vulnerability.
NVD/CVE Database
Fix: Upgrade LlamaHub to version 0.0.67 or later, as indicated by the release notes and patch references in the source.
NVD/CVE Database