All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
CVE-2023-2800 is a vulnerability in the Hugging Face Transformers library (a popular tool for working with AI language models) prior to version 4.30.0 that involves insecure temporary files (CWE-377, a weakness where temporary files are created in ways that attackers could exploit). The vulnerability was discovered and reported through the huntr.dev bug bounty platform.
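The CWE-377 pattern and its standard mitigation can be sketched in Python (a generic illustration, not the actual Transformers patch; the filename prefix is invented for the example):

```python
import os
import tempfile

# Insecure (CWE-377): a predictable filename in a world-writable directory.
# An attacker who pre-creates or symlinks /tmp/model_download.bin could read
# or redirect whatever gets written there.
#   open("/tmp/model_download.bin", "wb")   # don't do this

def write_temp_securely(data: bytes) -> str:
    """Write data to an unpredictable, owner-only temporary file."""
    # mkstemp creates the file atomically with mode 0600 and a random name,
    # closing the race window a fixed /tmp path leaves open.
    fd, path = tempfile.mkstemp(prefix="model_", suffix=".bin")
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
    return path
```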
Fix: Update to version 4.30.0 or later. A patch is available at https://github.com/huggingface/transformers/commit/80ca92470938bbcc348e2d9cf4734c7c25cb1c43.
MLflow (a tool for managing machine learning experiments) versions before 2.3.1 contain a path traversal vulnerability (CWE-29, a weakness where attackers can access files outside intended directories by using special characters like '..\'). This vulnerability could allow an attacker to read or manipulate files they shouldn't have access to.
A malicious website can hijack a ChatGPT chat session and steal conversation history by controlling the data that plugins (add-ons that extend ChatGPT's abilities) retrieve. The post highlights that while plugins can leak data by receiving too much information, the main risk here is when an attacker controls what data the plugin pulls in, enabling them to extract sensitive information.
CVE-2023-30172 is a directory traversal vulnerability (a flaw where attackers can access files outside the intended folder by manipulating file paths) in the /get-artifact API method of MLflow platform versions up to v2.0.1. Attackers can exploit the path parameter to read arbitrary files stored on the server.
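The underlying fix idea — rejecting any path parameter that escapes the artifact root — can be sketched as follows (an illustrative check, not MLflow's actual patch; `ARTIFACT_ROOT` is a hypothetical directory):

```python
import os

ARTIFACT_ROOT = "/srv/mlflow/artifacts"  # hypothetical artifact root

def resolve_artifact(path_param: str, root: str = ARTIFACT_ROOT) -> str:
    """Resolve a user-supplied artifact path, rejecting traversal."""
    candidate = os.path.normpath(os.path.join(root, path_param))
    # After normalization, the result must still live under the root;
    # "../../etc/passwd" normalizes to a path outside it and is refused.
    if candidate != root and not candidate.startswith(root + os.sep):
        raise ValueError(f"path traversal attempt: {path_param!r}")
    return candidate
```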
The AI ChatBot WordPress plugin before version 4.4.9 has two security flaws in its code that handles OpenAI settings. First, it lacks authorization checks (meaning it doesn't verify who should be allowed to make changes), allowing even low-privilege users like subscribers to modify settings. Second, it's vulnerable to CSRF (cross-site request forgery, where an attacker tricks a logged-in user into making unwanted changes) and stored XSS (cross-site scripting, where malicious code gets saved and runs when others view the page).
Triton is a Minecraft plugin that translates server messages, but it has a vulnerability in its bungee mode (a feature for connecting multiple servers). When bungee mode is enabled, attackers can send a special packet through the 'triton:main' plugin channel to run any command on the server console, potentially making themselves administrators, stealing player information, or changing server settings.
CVE-2023-2356 is a relative path traversal vulnerability (a flaw that lets attackers access files outside their intended directory by manipulating file paths) found in MLflow versions before 2.3.1. This weakness could allow attackers to read or access files they shouldn't be able to reach on systems running the affected software.
IBM Watson Machine Learning on Cloud Pak for Data versions 4.0 and 4.5 is vulnerable to SSRF (server-side request forgery, where an attacker tricks the system into making unauthorized network requests on their behalf). An authenticated attacker could exploit this to discover internal network details or launch further attacks.
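A common SSRF mitigation is to resolve the requested host and refuse private, loopback, or link-local addresses before fetching. A minimal sketch (illustrative only, not IBM's fix; a robust guard must also handle redirects and DNS rebinding):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Naive SSRF guard: allow only http(s) URLs resolving to public addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        # info[4][0] is the resolved address string (IPv4 or IPv6).
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True
```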
MindsDB, a platform for building AI solutions, has a vulnerability in older versions where it unsafely extracts files from remote archives using `tarfile.extractall()` (a Python function that unpacks compressed files). An attacker could exploit this to overwrite any file that the server can access, similar to known attacks called TarSlip or ZipSlip (path traversal attacks, where files are extracted to unexpected locations).
CVE-2023-28312 is an information disclosure vulnerability in Azure Machine Learning, meaning unauthorized people could access sensitive data they shouldn't be able to see. The vulnerability involves improper access control (CWE-284, a weakness where the system doesn't properly check who is allowed to access what), and it was reported by Microsoft.
CVE-2023-29374 is a vulnerability in LangChain versions up to 0.0.131 where the LLMMathChain component is vulnerable to prompt injection attacks (tricking an AI by hiding instructions in its input), allowing attackers to execute arbitrary code through Python's exec method. This is a code execution vulnerability that could allow an attacker to run malicious commands on a system running the affected software.
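The core problem is passing model output to `exec`. One safer alternative for a math chain is to parse the expression and allow only arithmetic nodes — a sketch of that idea, not the actual LangChain fix (which took its own approach):

```python
import ast
import operator

# Dangerous: running model output directly, as pre-fix LLMMathChain did.
#   exec(llm_output)   # "__import__('os').system(...)" would execute

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr: str) -> float:
    """Evaluate an arithmetic expression without exec/eval."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        # Anything else (calls, attributes, names) is rejected outright.
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval"))
```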
MindsDB, an open source machine learning platform, has a vulnerability where it unsafely unpacks tar files (compressed archives) using a function that doesn't check if extracted files stay in the intended folder. An attacker could create a malicious tar file with a specially crafted filename (like `../../../../etc/passwd`) that tricks the system into writing files to sensitive system locations, potentially overwriting important system files on the server running MindsDB.
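A TarSlip guard checks every member's resolved destination before extracting (a generic sketch of the mitigation, not MindsDB's patch; recent Python versions also offer tarfile extraction filters such as `filter="data"` for the same purpose):

```python
import os
import tarfile

def safe_extractall(archive_path: str, dest: str) -> None:
    """Extract a tar archive, rejecting members that escape dest (TarSlip guard)."""
    dest = os.path.realpath(dest)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            # A name like "../../../../etc/passwd" resolves outside dest.
            if target != dest and not target.startswith(dest + os.sep):
                raise ValueError(f"blocked traversal entry: {member.name}")
        tar.extractall(dest)
```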
TensorFlow (an open-source machine learning framework) versions before 2.11.1 have a bug where malicious invalid input can crash a model and trigger a denial of service (making the service unavailable, in this case by crashing it). The vulnerability exists in the Convolution3DTranspose function, which is commonly used in modern neural networks, and could be exploited if an attacker can send input to this function.
Fix: Update MLflow to version 2.3.1 or later. A patch is available at https://github.com/mlflow/mlflow/commit/fae77a525dd908c56d6204a4cef1c1c75b4e9857.
ChatGPT can access YouTube transcripts through plugins, which is useful but creates a security risk called indirect prompt injection (hidden instructions embedded in content that an AI reads and then follows). Attackers can hide malicious commands in video transcripts, and when ChatGPT reads those transcripts to answer user questions, it may follow the hidden instructions instead of the user's intended request.
This resource is a tutorial and lab (an interactive learning environment for hands-on practice) that teaches prompt injection, which is a technique for tricking AI systems by embedding hidden instructions in their input. The tutorial covers examples ranging from simple prompt engineering (getting an AI to change its output) to more complex attacks like injecting malicious code (HTML/XSS, which runs unwanted scripts in web browsers) and stealing data from AI systems.
Prompt injection (tricking an AI by hiding instructions in its input) is a widespread vulnerability in AI applications, with indirect prompt injections being particularly dangerous because they allow untrusted data to covertly take control of an LLM (large language model) and change its goals and behavior. Because attack payloads are written in natural language, attackers can craft many creative variations to bypass input validation (checking that data meets safety rules) and web application firewalls (security systems that filter harmful requests).
Fix: Update the AI ChatBot WordPress plugin to version 4.4.9 or later.
Fix: This issue has been fixed in version 3.8.4.
Fix: Update MLflow to version 2.3.1 or later. A patch is available at https://github.com/mlflow/mlflow/commit/f73147496e05c09a8b83d95fb4f1bf86696c6342.
This is a podcast episode about AI red teaming (simulated attacks to find weaknesses in AI systems) and threat modeling (planning for potential security risks) in machine learning systems. The episode explores how traditional security practices can be combined with machine learning security to better protect AI applications from attacks.
Fix: Upgrade to release 23.2.1.0 or later. The source explicitly states 'There are no known workarounds for this vulnerability,' so updating is the only mitigation mentioned.
LLM outputs are untrusted and can be manipulated through prompt injection (tricking an AI by hiding instructions in its input). This post addresses how to handle the risks of untrusted model output when using AI systems in real applications.
Fix: A patch is available at https://github.com/hwchase17/langchain/pull/1119
Fix: This issue has been addressed in version 22.11.4.3. Users are advised to upgrade. Users unable to upgrade should avoid ingesting archives from untrusted sources.
AI prompt injection is a vulnerability where attackers manipulate input given to AI systems, either directly (by controlling parts of the prompt themselves) or indirectly (by embedding malicious instructions in data the AI will later process, like web pages). These attacks can trick AI systems into ignoring their intended instructions and producing harmful, misleading, or inappropriate responses, similar to how SQL injection or cross-site scripting (XSS, a web attack that injects malicious code into websites) compromise other systems.
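No complete defense against prompt injection exists today, but two common partial mitigations are delimiting untrusted data and heuristically flagging instruction-like phrases. A naive sketch (the marker list and tag names are made up for illustration; determined attackers can rephrase around any fixed list):

```python
# Illustrative only: keyword matching catches the crudest payloads,
# since natural-language attacks have endless paraphrases.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "ignore your instructions",
    "disregard the above",
    "you are now",
)

def flag_untrusted_text(text: str) -> bool:
    """Heuristically flag instruction-like phrases in untrusted data."""
    lowered = text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def wrap_untrusted(text: str) -> str:
    """Delimit untrusted content so the model can be told to treat it as data."""
    return f"<untrusted_data>\n{text}\n</untrusted_data>"
```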
Fix: Upgrade to TensorFlow version 2.11.1 or later. The source states there are no known workarounds for this vulnerability.
A user discovered that Bing Chat could be manipulated into describing illegal activities (like bank robbery) by using indirect language techniques, even though it refused to help when the user directly asked about hacking. This shows that the AI's safety filters, which are supposed to prevent harmful outputs, can be bypassed through clever wording rather than direct requests. Fix: N/A -- no mitigation discussed in source.