Security vulnerabilities, privacy incidents, safety concerns, and policy updates affecting LLMs and AI agents.
Auto-GPT versions before 0.4.3 have a security flaw where the docker-compose.yml file (a configuration file that sets up Docker containers) is mounted into the container without write protection. If an attacker tricks Auto-GPT into running malicious code through the `execute_python_file` or `execute_python_code` commands, they can overwrite this file and gain control of the host system (the main computer running Auto-GPT) when it restarts.
Fix: Update to Auto-GPT version 0.4.3 or later, as the issue has been patched in that version.
A SQL injection vulnerability (a type of attack where an attacker inserts malicious SQL commands into input fields) exists in langchain versions before v0.0.247 in the SQLDatabaseChain component, allowing remote attackers to obtain sensitive information from databases. (Source: NVD/CVE Database)
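The class of flaw can be illustrated with a small, hypothetical sketch (using Python's built-in sqlite3, not LangChain's actual code): splicing untrusted text into a SQL string lets the input rewrite the query, while a bound parameter keeps it inert.

```python
import sqlite3

# Illustrative only -- not LangChain's SQLDatabaseChain code.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# Vulnerable: the input becomes part of the SQL text, so the quote and
# OR clause are parsed as SQL and the condition matches every row.
unsafe_sql = f"SELECT secret FROM users WHERE name = '{user_input}'"
leaked = conn.execute(unsafe_sql).fetchall()   # every secret leaks

# Safer: the input is bound as a parameter and never parsed as SQL, so
# the literal string "alice' OR '1'='1" simply matches no user.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
```

The same principle applies to any layer that builds SQL from model or user input: keep data out of the statement text.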
CVE-2023-36188 is a vulnerability in langchain version 0.0.64 that allows a remote attacker to execute arbitrary code (running commands they shouldn't be able to run) through the PALChain parameter in Python's exec method. This is a type of injection attack (CWE-74, where an attacker tricks a system by inserting malicious code into input that gets processed as commands).
CVE-2023-36258 is a vulnerability in LangChain before version 0.0.236 that allows an attacker to execute arbitrary code (run any commands they want on a system) by exploiting the ability to use Python functions like os.system, exec, or eval (functions that can run code dynamically). This is a code injection vulnerability (CWE-94, where attackers trick a program into running unintended code).
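A minimal, hypothetical sketch of the safer pattern: treat model output as data rather than code. Python's `ast.literal_eval` accepts only literals and raises on anything executable (the strings below are illustrative, not taken from LangChain).

```python
import ast

# Illustrative model output that tries to smuggle in executable code.
model_output = "__import__('os').system('echo pwned')"

# eval(model_output) would run the shell command. ast.literal_eval only
# accepts literals (numbers, strings, tuples, lists, dicts, booleans)
# and raises ValueError on function calls, imports, or attribute access.
try:
    value = ast.literal_eval(model_output)
except (ValueError, SyntaxError):
    value = None  # reject output that is not a plain literal

plain = ast.literal_eval("[1, 2, 3]")  # harmless literals still parse
```

This is not a universal fix (some chains legitimately need computation), but it removes the `os.system`/`exec`/`eval` escape hatch for components that only need structured data back.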
Langchain version 0.0.171 has a vulnerability that allows arbitrary code execution (running uncontrolled commands on a system) through its load_prompt function. The vulnerability was reported in June 2023, but the source material does not detail how it works or give a severity rating.
Langchain versions before v0.0.225 contained a remote code execution (RCE, where attackers can run commands on a system they don't own) vulnerability in the JiraAPIWrapper component that allowed attackers to execute arbitrary code through specially crafted input.
Gradio, an open-source Python library for building machine learning and data science applications, has a vulnerability where it fails to properly filter file paths and restrict which URLs can be proxied (accessed through Gradio as an intermediary), allowing unauthorized file access. This vulnerability affects input validation (the process of checking that data entering a system is safe and expected).
ChuanhuChatGPT (a graphical interface for ChatGPT and other large language models) has a vulnerability in versions 20230526 and earlier that allows attackers to access the config.json file (a configuration file storing sensitive settings) without permission when authentication is disabled, potentially exposing API keys (credentials that grant access to external services). The vulnerability allows attackers to steal these API keys from the configuration file.
gpt_academic (a tool that provides a graphical interface for ChatGPT/GLM) versions 3.37 and earlier have a vulnerability where the Configuration File Handler allows attackers to read sensitive files through the `/file` route because no files are protected from access. This can leak sensitive information from working directories to users who shouldn't have access to it.
Autolab, a service that automatically grades programming assignments in courses, has a tar slip vulnerability (a flaw where extracted files can be placed outside their intended directory) in its assessment installation feature. An attacker with instructor permissions could upload a specially crafted tar file (a compressed archive format) with file paths like `../../../../tmp/tarslipped1.sh` to place files anywhere on the system when the form is submitted.
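A hedged sketch of one mitigation, assuming a Python service extracting uploaded tars (the helper names and paths below are illustrative, not Autolab's code): resolve every member's path against the destination before extracting anything.

```python
import io
import os
import tarfile

def is_within(base: str, name: str) -> bool:
    """True if archive member `name` would extract inside `base`."""
    base = os.path.abspath(base)
    target = os.path.abspath(os.path.join(base, name))
    return os.path.commonpath([base, target]) == base

def safe_extract(tar: tarfile.TarFile, dest: str) -> None:
    # Check every member first; refuse the whole archive on any escape.
    for member in tar.getmembers():
        if not is_within(dest, member.name):
            raise ValueError(f"blocked path traversal: {member.name}")
    tar.extractall(dest)

# Build a malicious archive in memory with the traversal path from above.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo("../../../../tmp/tarslipped1.sh")
    payload = b"#!/bin/sh\n"
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

with tarfile.open(fileobj=buf) as tar:
    try:
        safe_extract(tar, "extract_dest")  # never writes the file
        blocked = False
    except ValueError:
        blocked = True
```

Checking all members before writing any of them avoids leaving a half-extracted archive behind when a bad path is found.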
CVE-2023-2800 is a vulnerability in the Hugging Face Transformers library (a popular tool for working with AI language models) prior to version 4.30.0 that involves insecure temporary files (CWE-377, a weakness where temporary files are created in ways that attackers could exploit). The vulnerability was discovered and reported through the huntr.dev bug bounty platform.
MLflow (a tool for managing machine learning experiments) versions before 2.3.1 contain a path traversal vulnerability (CWE-29, a weakness where attackers can access files outside intended directories by using special characters like '..\'). This vulnerability could allow an attacker to read or manipulate files they shouldn't have access to.
CVE-2023-30172 is a directory traversal vulnerability (a flaw where attackers can access files outside the intended folder by manipulating file paths) in the /get-artifact API method of MLflow platform versions up to v2.0.1. Attackers can exploit the path parameter to read arbitrary files stored on the server.
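The general defense can be sketched as resolving the user-supplied path under the storage root and rejecting anything that escapes it. `ARTIFACT_ROOT` and the function below are assumptions for illustration, not MLflow's actual code (requires Python 3.9+ for `Path.is_relative_to`).

```python
from pathlib import Path

ARTIFACT_ROOT = Path("/srv/mlflow/artifacts")  # assumed storage root

def resolve_artifact(path_param: str) -> Path:
    """Resolve a user-supplied artifact path, rejecting traversal."""
    root = ARTIFACT_ROOT.resolve()
    candidate = (root / path_param).resolve()
    if not candidate.is_relative_to(root):  # Python 3.9+
        raise PermissionError(f"path escapes artifact root: {path_param}")
    return candidate

ok = resolve_artifact("run1/model.pkl")         # stays under the root
try:
    resolve_artifact("../../../../etc/passwd")  # traversal attempt
    blocked = False
except PermissionError:
    blocked = True
```

Resolving before comparing is the key step: a prefix check on the raw string would be fooled by `..` segments and symlinks.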
The AI ChatBot WordPress plugin before version 4.4.9 has two security flaws in its code that handles OpenAI settings. First, it lacks authorization checks (meaning it doesn't verify who should be allowed to make changes), allowing even low-privilege users like subscribers to modify settings. Second, it's vulnerable to CSRF (cross-site request forgery, where an attacker tricks a logged-in user into making unwanted changes) and stored XSS (cross-site scripting, where malicious code gets saved and runs when others view the page).
CVE-2023-2356 is a relative path traversal vulnerability (a flaw that lets attackers access files outside their intended directory by manipulating file paths) found in MLflow versions before 2.3.1. This weakness could allow attackers to read or access files they shouldn't be able to reach on systems running the affected software.
IBM Watson Machine Learning on Cloud Pak for Data versions 4.0 and 4.5 has a vulnerability called SSRF (server-side request forgery, where an attacker tricks the system into making unauthorized network requests on their behalf). An authenticated attacker could exploit this to discover network details or launch other attacks.
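A common SSRF mitigation is to resolve the requested hostname and refuse private, loopback, and link-local addresses before fetching. The sketch below is a minimal, generic Python illustration, not IBM's implementation.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, or link-local
    addresses -- the internal targets SSRF typically probes."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable hostnames are rejected
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True
```

Checking every resolved address (not just the first) matters, since an attacker-controlled DNS name can return a mix of public and internal records.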
MindsDB, a platform for building AI solutions, has a vulnerability in older versions where it unsafely extracts files from remote archives using `tarfile.extractall()` (a Python function that unpacks compressed files). An attacker could exploit this to overwrite any file that the server can access, similar to known attacks called TarSlip or ZipSlip (path traversal attacks, where files are extracted to unexpected locations).
CVE-2023-28312 is an information disclosure vulnerability in Azure Machine Learning, meaning unauthorized people could access sensitive data they shouldn't be able to see. The vulnerability involves improper access control (CWE-284, a weakness where the system doesn't properly check who is allowed to access what), and it was reported by Microsoft.
CVE-2023-29374 is a vulnerability in LangChain versions up to 0.0.131 where the LLMMathChain component is vulnerable to prompt injection attacks (tricking an AI by hiding instructions in its input), allowing attackers to execute arbitrary code through Python's exec method. This is a code execution vulnerability that could allow an attacker to run malicious commands on a system running the affected software.
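Instead of handing model output to `exec`, a math chain can evaluate a whitelisted arithmetic AST. The sketch below is a hypothetical alternative for illustration, not LangChain's actual fix.

```python
import ast
import operator

# Map whitelisted AST operator nodes to their implementations.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_math(expr: str):
    """Evaluate plain arithmetic without exec()/eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval"))

result = safe_math("2 + 3 * 4")  # ordinary arithmetic works
try:
    safe_math("__import__('os').system('id')")  # calls are rejected
    blocked = False
except ValueError:
    blocked = True
```

Because anything outside the whitelist raises, an injected prompt that coaxes the model into emitting `__import__` or attribute access simply fails to evaluate.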
MindsDB, an open source machine learning platform, has a vulnerability where it unsafely unpacks tar files (compressed archives) using a function that doesn't check if extracted files stay in the intended folder. An attacker could create a malicious tar file with a specially crafted filename (like `../../../../etc/passwd`) that tricks the system into writing files to sensitive system locations, potentially overwriting important system files on the server running MindsDB.
Fix: Update langchain to version v0.0.247 or later.
Fix: A patch is available at https://github.com/hwchase17/langchain/pull/6003
Fix: Upgrade LangChain to version 0.0.236 or later.
Fix: Update Langchain to v0.0.225 or later, where the fix is included.
Fix: Upgrade to version 3.34.0. The source notes there are no known workarounds for this vulnerability.
Fix: The vulnerability has been fixed in commit bfac445. As a workaround, setting up access authentication (a login system that restricts who can access the software) can mitigate the vulnerability.
Fix: A patch is available at commit 1dcc2873d2168ad2d3d70afcb453ac1695fbdf02. As a workaround, users can configure the project with environment variables instead of `config*.py` files, or use a docker-compose installation (a tool for running containerized applications) to configure the project instead of configuration files.
Fix: Upgrade to version 2.11.0 or later.
Fix: Update to version 4.30.0 or later. A patch is available at https://github.com/huggingface/transformers/commit/80ca92470938bbcc348e2d9cf4734c7c25cb1c43.
Fix: Update MLflow to version 2.3.1 or later. A patch is available at https://github.com/mlflow/mlflow/commit/fae77a525dd908c56d6204a4cef1c1c75b4e9857.
Fix: Update the AI ChatBot WordPress plugin to version 4.4.9 or later.
Fix: Update MLflow to version 2.3.1 or later. A patch is available at https://github.com/mlflow/mlflow/commit/f73147496e05c09a8b83d95fb4f1bf86696c6342.
Fix: Upgrade to release 23.2.1.0 or later. The source states there are no known workarounds for this vulnerability, so updating is the only mitigation mentioned.
Fix: A patch is available at https://github.com/hwchase17/langchain/pull/1119
Fix: This issue has been addressed in version 22.11.4.3. Users are advised to upgrade. Users unable to upgrade should avoid ingesting archives from untrusted sources.