All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
A researcher discovered that Amazon Q for Business was vulnerable to an indirect prompt injection attack (a technique where an attacker hides malicious instructions in data that gets fed to an AI), which could trick the AI into outputting markdown tags that render as hyperlinks. This allowed attackers to steal sensitive data from victims by embedding malicious links in uploaded files. Amazon identified and fixed the vulnerability after the researcher reported it.
NVIDIA Triton Inference Server for Linux and Windows has a vulnerability (CVE-2023-31036) that occurs when launched with the non-default --model-control explicit option, allowing attackers to use path traversal (exploiting how file paths are handled to access unintended directories) through the model load API. A successful attack could lead to code execution (running unauthorized commands), denial of service (making the system unavailable), privilege escalation (gaining higher access levels), information disclosure (exposing sensitive data), and data tampering (modifying files).
CVE-2023-7215 is a cross-site scripting (XSS) vulnerability, a type of attack where malicious code gets injected into a webpage that a user views in their browser, found in Chanzhaoyu chatgpt-web version 2.11.1. An attacker can exploit this by manipulating the Description argument with malicious image code, and the attack can be performed remotely over the internet. The vulnerability has been publicly disclosed and may already be in use by attackers.
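The standard defense against this class of XSS is to escape user-supplied text (such as the Description field here) before it is embedded in HTML. A minimal sketch using Python's standard library, not chatgpt-web's actual fix:

```python
import html

def render_description(description: str) -> str:
    """Escape user-supplied text before embedding it in HTML.

    A payload such as <img src=x onerror=alert(1)> is neutralized
    because the angle brackets become HTML entities that the browser
    displays as text instead of parsing as markup.
    """
    return html.escape(description, quote=True)

payload = "<img src=x onerror=alert(1)>"
print(render_description(payload))
# &lt;img src=x onerror=alert(1)&gt;
```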
CVE-2023-7018 is a deserialization of untrusted data vulnerability (a flaw where an AI library unsafely processes data from untrusted sources) in the Hugging Face Transformers library before version 4.36. This weakness could potentially allow an attacker to execute malicious code through specially crafted input.
OpenAI has begun addressing a data exfiltration vulnerability (where attackers steal user data) in ChatGPT that exploits image markdown rendering during prompt injection attacks (tricking an AI by hiding instructions in its input). The company implemented a client-side validation check called 'url_safe' on the web app that blocks images from suspicious domains, though the fix is incomplete and attackers can still leak small amounts of data through workarounds.
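A check of this shape can be sketched as a scheme-and-hostname allowlist. This is only an illustration of the technique: the hostnames below are hypothetical, and OpenAI's actual url_safe logic is not public.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of image hosts (placeholder domains).
ALLOWED_IMAGE_HOSTS = {"cdn.example.com", "images.example.com"}

def url_safe(url: str) -> dict:
    """Return {"safe": bool} for an image URL, mimicking the shape of
    the validation response described above. Only https URLs whose
    hostname is on the allowlist are treated as safe to render."""
    parsed = urlparse(url)
    safe = parsed.scheme == "https" and parsed.hostname in ALLOWED_IMAGE_HOSTS
    return {"safe": safe}

print(url_safe("https://cdn.example.com/cat.png"))    # {'safe': True}
print(url_safe("https://attacker.example/leak?d=x"))  # {'safe': False}
```

An allowlist on the rendering side stops markdown-image exfiltration because the attacker's collection server can never appear on the list, no matter what the injected prompt asks the model to emit.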
CVE-2023-6730 is a deserialization of untrusted data vulnerability (a security flaw where a program unsafely reconstructs objects from untrusted input, potentially allowing attackers to execute malicious code) found in the Hugging Face Transformers library before version 4.36. The vulnerability has a CVSS score of 4.0, which indicates a moderate severity level (a 0-10 rating of how severe a vulnerability is).
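Both Transformers deserialization issues above stem from unpickling untrusted data. A minimal defensive sketch (illustrative, not the library's actual fix) is a restricted unpickler that refuses to resolve globals, which is how pickle exploits smuggle in attacker-chosen callables:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global while unpickling.

    Deserialization exploits work by shipping a pickle whose
    reconstruction imports and calls an attacker-chosen callable
    (e.g. os.system); blocking find_class removes that gadget.
    """
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"global '{module}.{name}' is forbidden")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

print(restricted_loads(pickle.dumps([1, 2, 3])))  # [1, 2, 3]

# A pickle that needs to import a callable is rejected.
try:
    restricted_loads(pickle.dumps(len))  # resolves builtins.len on load
except pickle.UnpicklingError as exc:
    print("blocked:", exc)
```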
CVE-2023-6909 is a path traversal vulnerability (a security flaw where an attacker can access files outside their intended directory using special characters like '..\'). It affects MLflow versions before 2.9.2 in the mlflow/mlflow GitHub repository. The vulnerability was discovered and reported through the huntr.dev bug bounty platform.
CVE-2023-6831 is a path traversal vulnerability (a flaw where an attacker can access files outside the intended directory by using special characters like '..\') in MLflow versions before 2.9.2 that allows attackers to manipulate file paths and access restricted files they shouldn't be able to reach.
CVE-2023-6572 is a command injection vulnerability (a security flaw where an attacker can run unauthorized commands) in the Gradio application (a tool for building AI demos) in versions prior to the patch on the main branch. The vulnerability results from improper handling of special characters, which could allow attackers to execute arbitrary commands on affected systems.
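Command injection generally arises when user input is interpolated into a shell command string. A generic mitigation sketch (not Gradio's actual patch): pass the command as an argument list so the shell never parses the input. The example assumes a POSIX system with `ls` available.

```python
import subprocess

def run_ls(user_path: str) -> str:
    """Run a command with an argument list instead of a shell string.

    With shell=True, a value like 'x; echo injected' would be parsed
    by the shell and the second command would run; as a list element
    it reaches the program as one literal argument and cannot inject
    extra commands.
    """
    result = subprocess.run(
        ["ls", "--", user_path],  # '--' also stops option injection
        capture_output=True, text=True)
    return result.stdout

# The shell metacharacters arrive as one harmless (nonexistent) filename.
print(run_ls("x; echo injected"))
```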
CVE-2023-6753 is a path traversal vulnerability (a security flaw where an attacker can access files outside the intended directory by using special path characters) found in MLflow versions before 2.9.2. The vulnerability allows unauthorized access to restricted files on a system running the affected software.
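The recurring MLflow path traversal items above all reduce to joining a user-supplied path onto a base directory without a containment check. A generic sketch of the defense (illustrative, not MLflow's patch) resolves the combined path and verifies it stays under the base:

```python
import os

def safe_join(base_dir: str, user_path: str) -> str:
    """Resolve a user-supplied relative path and verify it stays
    inside base_dir; '..' sequences that escape the base are rejected."""
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, user_path))
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path escapes base directory: {user_path!r}")
    return target

print(safe_join("/srv/artifacts", "run1/model.pkl"))
try:
    safe_join("/srv/artifacts", "../../etc/passwd")
except ValueError as exc:
    print("rejected:", exc)
```

Comparing resolved paths (rather than scanning for '..' substrings) also catches encoded or symlinked variants that string filters miss.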
CVE-2023-35625 is a vulnerability in Azure Machine Learning Compute Instance that allows unauthorized users to access sensitive information through the SDK (software development kit, a collection of tools for building applications). The vulnerability is classified as an information disclosure issue, meaning private data could be exposed to people who shouldn't see it.
CVE-2023-6709 is a vulnerability in MLflow (a machine learning tool) versions before 2.9.2 involving improper neutralization of special elements in a template engine (a system that generates text by filling in placeholders in templates). In this class of weakness, often called server-side template injection, attacker-controlled input that reaches a template can cause the engine to evaluate expressions it was never meant to run.
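The class of bug can be illustrated in pure Python (this is a generic demonstration of template injection, not MLflow's specific template engine): str.format performs attribute lookups, so an attacker-controlled format string can walk objects and leak data, while string.Template only substitutes named values. The AppConfig class and its secret are hypothetical.

```python
import string

class AppConfig:
    SECRET_KEY = "s3cr3t"  # hypothetical sensitive value

cfg = AppConfig()

# Unsafe: feeding attacker-controlled text to str.format lets the
# attacker access attributes of any object passed in.
malicious = "hello {c.SECRET_KEY}"
print(malicious.format(c=cfg))  # hello s3cr3t  -- data leaked

# Safer: string.Template only substitutes named values and performs
# no attribute access or expression evaluation.
tpl = string.Template("hello $name")
print(tpl.safe_substitute(name="world"))  # hello world
```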
TorchServe (a tool for running PyTorch machine learning models as web services) versions before 0.9.0 had a ZipSlip vulnerability (a flaw where an attacker can extract files outside the intended folder by crafting malicious archive files), allowing attackers to upload harmful code disguised in publicly available models that could execute on machines running TorchServe. The vulnerability affected the model and workflow management API, which handles uploaded files.
A researcher discovered that LLMs like ChatGPT can be tricked through prompt injection (hiding malicious instructions in input text) by using invisible Unicode characters from the Tags Unicode Block (a section of the Unicode standard containing special code points). The proof-of-concept demonstrated how invisible instructions embedded in pasted text caused ChatGPT to perform unintended actions, such as generating images with DALL-E.
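One mitigation is to strip Tags-block code points from text before it reaches the model. A minimal sketch (a hypothetical input filter, not any vendor's actual defense) that both builds such a hidden payload and removes it:

```python
def strip_tag_characters(text: str) -> str:
    """Remove code points from the Unicode Tags block (U+E0000-U+E007F),
    which render as invisible but can smuggle instructions to an LLM."""
    return "".join(ch for ch in text if not 0xE0000 <= ord(ch) <= 0xE007F)

# Hide the word 'hi' by shifting each ASCII character into the Tags block.
hidden = "".join(chr(0xE0000 + ord(c)) for c in "hi")
poisoned = "Please summarize this text." + hidden

print(len(poisoned) - len(strip_tag_characters(poisoned)))  # 2
print(strip_tag_characters(poisoned))  # Please summarize this text.
```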
A security researcher presented at the 37th Chaos Communication Congress about Large Language Models Application Security and prompt injection (tricking an AI by hiding instructions in its input). The talk covered security research findings and was made available in video and slide formats for public access.
Gradio is a Python package for building web demos of machine learning models. Versions before 4.11.0 had a file traversal vulnerability (a weakness that lets attackers read files they shouldn't access) in the `/file` route, allowing attackers to view arbitrary files on machines running publicly accessible Gradio apps if they knew the file paths.
Fix: Update Gradio to version 4.11.0 or later, where this issue has been patched.
NVD/CVE Database
Fix: Update to Transformers version 4.36 or later. A patch is available at the GitHub commit: https://github.com/huggingface/transformers/commit/1d63b0ec361e7a38f1339385e8a5a855085532ce
NVD/CVE Database
Fix: OpenAI implemented a mitigation by adding a client-side validation API call (url_safe endpoint) that checks whether image URLs are safe before rendering them. The validation returns {"safe":false} to prevent rendering images from malicious domains. However, the source explicitly notes this is not a complete fix and suggests OpenAI should additionally "limit the number of images that are rendered per response to just one or maybe a handful maximum" to further reduce bypass techniques. The source also notes the current iOS version 1.2023.347 (16603) does not yet have these improvements.
Embrace The Red
Fix: Update MLflow to version 2.9.2 or later. A patch is available at the GitHub commit: https://github.com/mlflow/mlflow/commit/1da75dfcecd4d169e34809ade55748384e8af6c1
NVD/CVE Database
Fix: Update MLflow to version 2.9.2 or later. A patch is available at https://github.com/mlflow/mlflow/commit/1da75dfcecd4d169e34809ade55748384e8af6c1.
NVD/CVE Database
Fix: A patch is available at the GitHub commit: https://github.com/gradio-app/gradio/commit/5b5af1899dd98d63e1f9b48a93601c2db1f56520. Users should update to the main branch or apply this commit to fix the vulnerability.
NVD/CVE Database
Fix: Update MLflow to version 2.9.2 or later. A patch is available at https://github.com/mlflow/mlflow/commit/1c6309f884798fbf56017a3cc808016869ee8de4.
NVD/CVE Database
A researcher demonstrated that malicious GPTs (custom ChatGPT agents) can secretly steal user data by embedding hidden images in conversations that send information to external servers, and can also trick users into sharing personal details like passwords. OpenAI's validation checks for publishing GPTs can be easily bypassed by slightly rewording malicious instructions, allowing harmful GPTs to be shared publicly, though the researcher reported these vulnerabilities to OpenAI in November 2023 without receiving a fix.
Fix: Update MLflow to version 2.9.2 or later. A patch is available at https://github.com/mlflow/mlflow/commit/432b8ccf27fd3a76df4ba79bb1bec62118a85625.
NVD/CVE Database
MLflow, an open-source machine learning platform, has a reflected XSS (cross-site scripting, where an attacker injects malicious JavaScript that runs in a victim's browser) vulnerability in how it handles the Content-Type header in POST requests. An attacker can craft a malicious Content-Type header that gets sent back to the user without proper filtering, allowing arbitrary JavaScript code to execute in the victim's browser.
CVE-2023-43472 is a vulnerability in MLflow (an open-source platform for managing machine learning workflows) versions 2.8.1 and earlier that allows a remote attacker to obtain sensitive information by sending a specially crafted request to the REST API (the interface that programs use to communicate with MLflow). The vulnerability has a CVSS severity score of 4.0 (a moderate risk level on a scale of 0-10).
A security researcher presented at Ekoparty 2023 about prompt injections (attacks where malicious instructions are hidden in inputs to trick an AI into misbehaving) found in real-world LLM applications and chatbots like ChatGPT, Bing Chat, and Google Bard, demonstrating various exploits and discussing mitigations. The talk covered both basic LLM concepts and deep dives into how these attacks work across different AI platforms.
Fix: Upgrade to TorchServe version 0.9.0 or later. The fix validates the file paths in zip archives before extracting them to prevent files from being placed in unintended filesystem locations.
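Path validation of this shape can be sketched with the standard library (this is an illustration of the technique, not TorchServe's actual code; recent Python zipfile versions also sanitize member names on extraction, but the explicit pre-check mirrors the fix described):

```python
import io
import os
import zipfile

def safe_extract(zip_bytes: bytes, dest_dir: str) -> list[str]:
    """Extract an archive only after verifying that every member's
    resolved path stays under dest_dir (the ZipSlip precondition)."""
    dest = os.path.realpath(dest_dir)
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for member in zf.namelist():
            target = os.path.realpath(os.path.join(dest, member))
            if os.path.commonpath([dest, target]) != dest:
                raise ValueError(f"blocked zip-slip entry: {member!r}")
        zf.extractall(dest)
        return zf.namelist()

# A malicious archive with a '../' entry is rejected before extraction.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("../evil.sh", "echo pwned")
try:
    safe_extract(buf.getvalue(), "/tmp/models")
except ValueError as exc:
    print(exc)
```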
NVD/CVE Database