Security vulnerabilities, privacy incidents, safety concerns, and policy updates affecting LLMs and AI agents.
LlamaIndex versions up to 0.12.2 contain a vulnerability where the VannaPack VannaQueryEngine takes user prompts, converts them to SQL statements, and runs them without any limit on the computing resources they consume. An attacker can exploit this by submitting prompts that trigger expensive SQL operations, exhausting the system's CPU or memory (a denial-of-service attack, where a service becomes unavailable).
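The general mitigation is to cap how much work any one generated query may do. A minimal sketch using SQLite's progress handler (this is an illustrative pattern, not VannaQueryEngine's actual code): the handler fires every `n` virtual-machine instructions, and returning a non-zero value aborts the statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The progress handler runs every `n` SQLite virtual-machine
# instructions; a non-zero return value aborts the running statement,
# capping how much CPU any single generated query can burn.
conn.set_progress_handler(lambda: 1, 500_000)

# An adversarial query of the kind a hostile prompt could produce:
# an unbounded recursive CTE that never terminates on its own.
expensive = """
WITH RECURSIVE c(x) AS (SELECT 1 UNION ALL SELECT x + 1 FROM c)
SELECT count(*) FROM c
"""

try:
    conn.execute(expensive).fetchone()
    aborted = False
except sqlite3.OperationalError:  # SQLite reports "interrupted"
    aborted = True
```

With the budget in place, the runaway query fails quickly with an error instead of monopolizing CPU or memory.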
LlamaIndex versions up to 0.11.6 contain a vulnerability where the BGEM3Index.load_from_disk() function uses pickle.load() (a Python method that converts stored data back into objects) to read files from a user-provided directory without checking if they're safe. An attacker could provide a malicious pickle file that executes arbitrary code (runs any commands they want) when a victim loads the index from disk.
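To see why `pickle.load()` on untrusted files is dangerous, consider how unpickling works: a `__reduce__` method tells the unpickler "reconstruct this object by calling this function with these arguments", so the callable runs during deserialization. A minimal sketch with a deliberately benign payload (the class name and payload here are illustrative, not taken from BGEM3Index):

```python
import pickle

class Payload:
    def __reduce__(self):
        # Benign stand-in: the unpickler will call eval("6 * 7").
        # A real exploit would reference something like os.system
        # with an attacker-chosen command instead.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())

# The "victim" only deserializes -- yet eval() runs during the load.
result = pickle.loads(blob)
```

This is why untrusted index directories should never be deserialized with pickle; safer formats (JSON, or data-only serializers) do not execute code while loading.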
LLaMA-Factory, a library for training large language models, has a remote code execution vulnerability (RCE, where attackers can run malicious code on a victim's computer) in versions up to 0.9.3. Attackers can exploit this by uploading a malicious checkpoint file through the web interface, and the victim won't know they've been compromised because the vulnerable code loads files without proper safety checks.
A vulnerability in the LangChainLLM class (a component for running language models in the llama_index library) version 0.12.5 allows attackers to cause a denial of service (DoS, where a system becomes unresponsive). If a thread (a lightweight process running code in parallel) terminates unexpectedly before executing the language model prediction, the code lacks error handling and enters an infinite loop (code that never stops repeating). The condition can be triggered by providing incorrectly typed input.
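The underlying pattern is waiting forever on a result a dead worker will never deliver. A minimal sketch of the fix: report worker exceptions through a queue and bound the wait with a timeout (illustrative helper, not llama_index's LangChainLLM code):

```python
import queue
import threading

def run_prediction(fn, timeout: float = 5.0):
    """Run `fn` on a worker thread and wait for its result with a
    timeout, instead of spinning forever if the worker dies first."""
    results: queue.Queue = queue.Queue()

    def worker():
        try:
            results.put(("ok", fn()))
        except Exception as exc:  # crash is reported, not swallowed
            results.put(("error", exc))

    threading.Thread(target=worker, daemon=True).start()

    # queue.Empty here means the worker vanished without reporting --
    # a bounded, visible failure rather than an infinite loop.
    status, value = results.get(timeout=timeout)
    if status == "error":
        raise value
    return value

result = run_prediction(lambda: 42)

try:
    run_prediction(lambda: 1 / 0)  # worker raises; caller sees it
    propagated = False
except ZeroDivisionError:
    propagated = True
```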
CVE-2024-12911 is a vulnerability in the `default_jsonalyzer` function of `JSONalyzeQueryEngine` in the llama_index library that allows attackers to perform SQL injection (inserting malicious SQL commands) through prompt injection (embedding hidden instructions in the AI's input). This can lead to arbitrary file creation and denial-of-service attacks (making a system unavailable by overwhelming it).
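The standard guard against this class of bug is parameter binding, which keeps attacker-controlled values as data rather than splicing them into SQL text. A minimal sketch (not the library's code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# A value an attacker smuggled in via prompt injection:
hostile = "Alice'); DROP TABLE users; --"

# Unsafe pattern (string formatting) would splice `hostile` into the
# SQL text:  f"INSERT INTO users VALUES ('{hostile}')"
# Safe pattern: the ? placeholder binds it strictly as a value.
conn.execute("INSERT INTO users VALUES (?)", (hostile,))

rows = conn.execute("SELECT name FROM users").fetchall()
```

The table survives and the hostile string is stored verbatim as one row; nothing in it is ever interpreted as SQL.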
Haystack is a framework for building applications with LLMs (large language models) and AI tools, but versions before 2.3.1 have a critical vulnerability where attackers can execute arbitrary code if they can create and render Jinja2 templates (text templates that can embed executable expressions). This affects Haystack clients that allow users to create and run Pipelines, which are workflows that process data through multiple steps.
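Template injection works because a template language lets attacker-controlled text reach object attributes and, from there, dangerous internals. A stdlib-only analogy of the same bug class (Jinja2 itself ships a `SandboxedEnvironment` for untrusted templates; the class and secret below are invented for illustration): Python's `str.format` follows attribute lookups, while `string.Template` does plain substitution only.

```python
import string

class Config:
    SECRET_KEY = "hunter2"  # not meant to be user-visible

cfg = Config()

# Untrusted "template" text traverses attributes via format():
hostile = "Hello {c.__class__.SECRET_KEY}"
leaked = hostile.format(c=cfg)  # the secret escapes

# string.Template only substitutes named values -- no attribute access,
# so the same hostile text could not reach internals.
safe = string.Template("Hello $name").substitute(name="world")
```

The lesson carries over directly: never render untrusted text with a full-powered template engine; use a sandboxed or substitution-only renderer.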
A CSRF vulnerability (cross-site request forgery, where an attacker tricks a user's browser into making unwanted requests on their behalf) exists in the 'Servers Configurations' function of parisneo/lollms-webui versions 9.6 and later, affecting services like XTTS and vLLM that lack CSRF protection. Attackers can exploit this to deceive users into installing unwanted packages without their knowledge or consent.
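The usual fix is a per-session CSRF token that a cross-site form cannot read or forge. A minimal sketch of HMAC-based token issuance and verification (hypothetical helpers, not lollms-webui's actual code):

```python
import hashlib
import hmac
import secrets

# Server-side secret, generated per deployment (illustrative).
CSRF_SECRET = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    """Bind a token to the session so another site can't forge it."""
    return hmac.new(CSRF_SECRET, session_id.encode(),
                    hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, token: str) -> bool:
    """Constant-time comparison against the expected token."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, token)

# A state-changing endpoint (e.g. "install package") would require
# this token in a header or form field before acting.
token = issue_csrf_token("session-123")
```

A forged cross-site request carries the victim's cookies automatically, but it cannot include a valid token, so verification fails and the action is refused.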
A command injection vulnerability (a flaw that lets attackers run unauthorized commands) exists in the RunGptLLM class of the llama_index library version 0.9.47, which connects applications to language models. The vulnerable code passes untrusted input to the eval function (a tool that executes text as code), potentially allowing a malicious LLM provider to run arbitrary commands and take control of a user's machine.
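The safe alternative to `eval` for parsing provider responses is a parser that accepts only data, never code, such as `json.loads` or `ast.literal_eval`. A minimal sketch (the response text is invented for illustration):

```python
import ast
import json

# Response text from an (untrusted) model provider:
untrusted = '{"completion": "hello"}'

# Unsafe: eval(untrusted) would execute any Python expression the
# provider chose to send, e.g. __import__("os").system(...).

# Safer: json.loads parses data only.
data = json.loads(untrusted)

# ast.literal_eval likewise accepts only literals and rejects
# function calls outright:
probe = '__import__("os").getcwd()'
try:
    ast.literal_eval(probe)
    rejected = False
except (ValueError, SyntaxError):
    rejected = True
```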
A vulnerability was found in the `safe_eval` function of the `llama_index` package that allows prompt injection (tricking an AI by hiding instructions in its input) to execute arbitrary code (running code an attacker chooses). The flaw stems from insufficient input validation: the package does not properly check the data passed to it, allowing attackers to bypass the safety restrictions that `safe_eval` was meant to enforce.
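A more robust approach than filtering strings is to validate the parse tree against an explicit allowlist before evaluating. The sketch below is a hypothetical replacement illustrating the idea (it is not llama_index's `safe_eval`), restricted here to plain arithmetic:

```python
import ast

# Allowlist of AST node types for plain arithmetic -- everything else
# (calls, attribute access, subscripts, names) is rejected up front.
ALLOWED_NODES = (
    ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
    ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Pow, ast.USub,
)

def safer_eval(expr: str):
    """Validate the parse tree, then evaluate with builtins stripped."""
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED_NODES):
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
    return eval(compile(tree, "<expr>", "eval"), {"__builtins__": {}}, {})

value = safer_eval("2 * (3 + 4)")

try:
    safer_eval("__import__('os').system('echo pwned')")
    blocked = False
except ValueError:
    blocked = True
```

Because validation happens on the syntax tree rather than the raw string, there is no string pattern an attacker can craft to smuggle a function call past the check.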
LlamaIndex (a tool for building AI applications with custom data) versions up to 0.9.34 have a SQL injection vulnerability (a flaw where attackers can insert malicious database commands into normal text input) in its Text-to-SQL feature. This allows attackers to run harmful SQL commands, such as deleting database tables, by hiding them in plain-English requests.
LlamaHub (a library for loading plugins) versions before 0.0.67 have a vulnerability in how they handle OpenAPI and ChatGPT plugin loaders that allows attackers to execute arbitrary code (run any code they choose on a system). The problem is that the code uses unsafe YAML parsing instead of safe_load (a secure function that prevents malicious code in configuration files).
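The difference between the two YAML entry points is easy to demonstrate (this sketch assumes PyYAML is available): `yaml.safe_load` builds only plain data types, while permissive loaders honor tags such as `!!python/object/apply` that can call arbitrary Python functions during parsing.

```python
import yaml  # PyYAML; assumed available for this sketch

doc = "plugin: demo\nretries: 3"

# safe_load constructs only plain data (dicts, lists, strings, ints).
config = yaml.safe_load(doc)

# A hostile document using a Python-object tag; permissive loaders
# would call os.getcwd() while parsing, but safe_load rejects the tag.
hostile = "!!python/object/apply:os.getcwd []"
try:
    yaml.safe_load(hostile)
    blocked = False
except yaml.YAMLError:
    blocked = True
```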
Fix: Update to version 0.9.4, which contains a fix for the issue.
Fix: The vulnerability is fixed in version 0.5.1 of llama_index. Users should upgrade to this version or later.
Fix: The vulnerability has been fixed in Haystack version 2.3.1. Users should upgrade to this version or later.
Fix: This issue was fixed in version 0.10.13 of the llama_index library. Users should upgrade to version 0.10.13 or later.
Fix: Upgrade LlamaHub to version 0.0.67 or later, as indicated by the release notes and patch references in the source.
Source: NVD/CVE Database