All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Firecrawl, a web scraper that extracts webpage content for large language models, had a server-side request forgery vulnerability (SSRF, a flaw where an attacker tricks a server into making unwanted requests to internal networks) in versions before 1.1.1 that could expose local network resources. The cloud service was patched on December 27, 2024, and the open-source version was patched on December 29, 2024, with no user data exposed.
Fix: All open-source Firecrawl users should upgrade to v1.1.1. For the unpatched playwright services, users should configure a secure proxy by setting the `PROXY_SERVER` environment variable and ensure the proxy is configured to block all traffic to link-local IP addresses (see documentation for setup instructions).
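The proxy-side blocking described above can also be illustrated in application code. Below is a minimal Python sketch (not Firecrawl's actual implementation; the `is_safe_url` helper is hypothetical) of rejecting fetch targets that resolve to link-local, loopback, or otherwise internal addresses:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to link-local, loopback, or private addresses."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        # Resolve the hostname; a real deployment should also pin the
        # resolved address for the actual request to avoid DNS rebinding.
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_link_local or addr.is_loopback or addr.is_private or addr.is_reserved:
            return False
    return True
```

Blocking link-local addresses in particular matters because cloud metadata services (e.g. 169.254.169.254) are a common SSRF target.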
A bug in the Linux kernel's NVMe (a fast storage protocol) driver could cause incorrect memory cleanup when the system fails to allocate enough memory for a descriptor table (a list telling the hardware where data is located). The bug doesn't usually cause visible problems because most systems allocate memory in large chunks, but it represents a memory management error that could cause issues in specific scenarios.
A vulnerability in the Linux kernel's BPF (Berkeley Packet Filter, a framework for running sandboxed programs in the kernel) verifier was causing it to incorrectly assume that raw tracepoint arguments (data passed to certain kernel monitoring hooks) could never be NULL, leading the verifier to delete necessary NULL checks and potentially crash the kernel. The fix marks these arguments as PTR_MAYBE_NULL (pointers that might be null) and adds special handling to allow safe operations on them, including enabling PROBE_MEM marking (a safer memory access mode) when loading from these pointers.
A WordPress plugin called Text Prompter is vulnerable to stored cross-site scripting (XSS, a type of attack where harmful code is hidden in web pages and runs when users visit them) in all versions up to 1.0.7. Attackers with contributor-level access or higher can inject malicious scripts through the plugin's shortcode feature because the plugin does not properly filter or secure user input.
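The standard defense against stored XSS of this kind is output escaping. A minimal Python sketch (illustrative only; the plugin itself is PHP, and `render_prompt_shortcode` is a hypothetical name) of escaping user-supplied shortcode text before embedding it in HTML:

```python
from html import escape

def render_prompt_shortcode(user_text: str) -> str:
    # Escape user-controlled input before embedding it in markup, so a
    # payload like <script>...</script> renders as inert text instead of
    # executing in visitors' browsers.
    return "<div class='prompt'>{}</div>".format(escape(user_text, quote=True))
```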
The European Commission is hiring Legal and Policy Officers for the European AI Office to help develop trustworthy AI policies and legislation. Applicants need at least three years of experience in EU digital policy or legislation, relevant degrees, and fluency in EU languages, with applications due by January 15, 2025.
A CSRF vulnerability (cross-site request forgery, where an attacker tricks a user into making unwanted requests on a website they're logged into) was found in the KCT AIKCT Engine Chatbot plugin, affecting versions up to 1.6.2. Because the chatbot does not verify that requests genuinely originate from the logged-in user, attackers can trick users into triggering unauthorized actions.
A security vulnerability in Google's Vertex Gemini API (a generative AI service) affects customers using VPC-SC (VPC Service Controls, a security tool that restricts data leaving a virtual private network). An attacker could craft a malicious file path that tricks the API into sending image data outside the security perimeter, bypassing the intended protections.
A researcher discovered that DeepSeek-R1-Lite, a new AI reasoning model, is vulnerable to prompt injection (tricking an AI by hiding instructions in its input) combined with XSS (cross-site scripting, where malicious code runs in a user's browser). By uploading a specially crafted document with base64-encoded malicious code, an attacker could trick the AI into executing JavaScript that steals a user's session token (a credential stored in browser memory that proves who you are), leading to complete account takeover.
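One defensive measure against this attack pattern is scanning uploaded documents for base64-encoded script payloads before they reach the model. A hedged Python sketch (the `contains_hidden_script` helper is hypothetical, not DeepSeek's mitigation):

```python
import base64
import re

SCRIPT_PATTERN = re.compile(r"<\s*script|javascript:", re.IGNORECASE)

def contains_hidden_script(document_text: str) -> bool:
    """Decode base64-looking runs in a document and flag embedded script payloads."""
    for token in re.findall(r"[A-Za-z0-9+/=]{16,}", document_text):
        try:
            decoded = base64.b64decode(token, validate=True).decode("utf-8", "ignore")
        except ValueError:
            continue  # not valid base64; skip
        if SCRIPT_PATTERN.search(decoded):
            return True
    # Also catch plaintext script markup.
    return bool(SCRIPT_PATTERN.search(document_text))
```

Scanning is a heuristic, not a guarantee; the session token itself is better protected by not exposing it to page JavaScript at all.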
Autolab, a course management system for auto-graded programming assignments, has a vulnerability where students can insert spreadsheet formulas (like those used in Excel) into their first or last names. When instructors download and open the course roster, these formulas execute and can leak student information by sending it to remote servers. The vulnerability has been patched in the source code repository.
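This is the classic CSV/spreadsheet formula injection pattern, and the usual mitigation is neutralizing cells on export. A minimal Python sketch (Autolab's actual patch may differ; `sanitize_cell` is a hypothetical name):

```python
def sanitize_cell(value: str) -> str:
    """Neutralize spreadsheet formula injection in exported CSV cells.

    Cells beginning with =, +, -, or @ are treated as formulas by Excel and
    LibreOffice; prefixing a single quote forces them to render as text.
    """
    if value and value[0] in "=+-@":
        return "'" + value
    return value
```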
Lobe Chat, an open-source AI chat framework, has a vulnerability in versions before 1.19.13 that allows attackers to perform SSRF (server-side request forgery, where an attacker tricks a server into making unauthorized requests to other systems) without logging in. Attackers can exploit this to scan internal networks and steal sensitive information like API keys stored in authentication headers.
CVE-2024-49038 is a cross-site scripting (XSS, a type of attack where malicious code is injected into a webpage to trick users) vulnerability in Microsoft Copilot Studio that allows an unauthorized attacker to gain elevated privileges over a network by exploiting improper handling of user input during webpage generation.
Autolab is a course management system that automatically grades programming assignments. A vulnerability in versions 3.0.0 and later allows any logged-in student to download all submissions from other students or even instructor test files using the download_all_submissions feature, potentially exposing private coursework to unauthorized people.
MLflow has a vulnerability (CVE-2024-27134) where directories have overly permissive access settings, allowing a local attacker to gain elevated permissions through a ToCToU attack (a race condition where an attacker exploits the gap between when a program checks permissions and when it uses a resource). This only affects code using the spark_udf() MLflow API.
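The general remedy for this class of bug is to create resources with restrictive permissions atomically, rather than creating them permissively and tightening permissions afterward (the gap an attacker races). A Python sketch of the safe pattern (illustrative only, not MLflow's actual patch; `make_private_scratch_dir` is a hypothetical name):

```python
import tempfile

def make_private_scratch_dir(prefix: str = "udf-scratch-") -> str:
    """Create a scratch directory that is owner-only from the moment it exists.

    tempfile.mkdtemp creates the directory with mode 0o700 atomically, so
    there is no window in which another local user can enter or pre-create it,
    unlike a create-then-chmod sequence.
    """
    return tempfile.mkdtemp(prefix=prefix)
```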
LLaMA Factory, a tool for fine-tuning large language models (adapting pretrained AI models to specific tasks or data), has a critical vulnerability that lets attackers run arbitrary commands on the computer running it. The flaw stems from unsafe handling of user input: a Python call to `Popen` with `shell=True` (a setting that interprets input as shell commands) is made without validating or sanitizing the input first.
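The unsafe and safe patterns can be contrasted in a short sketch (hypothetical function names; not LLaMA Factory's code):

```python
import subprocess

def list_model_dir_unsafe(model_path: str) -> None:
    # Vulnerable pattern: user input is interpolated into a shell command,
    # so a path like "x; rm -rf ~" runs attacker-controlled commands.
    subprocess.Popen(f"ls -l {model_path}", shell=True)

def list_model_dir_safe(model_path: str) -> str:
    # Safer pattern: pass an argument list with the default shell=False, so
    # the path is one literal argument and is never parsed by a shell.
    result = subprocess.run(["ls", "-l", model_path],
                            capture_output=True, text=True, check=False)
    return result.stdout
```

With the argument-list form, shell metacharacters in `model_path` have no special meaning; at most the command fails with a "no such file" error.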
Fix: Mark raw_tp arguments as PTR_MAYBE_NULL and special case the dereference and pointer arithmetic to permit it. Enable PROBE_MEM marking when loads occur into trusted pointers with PTR_MAYBE_NULL. Do not apply this adjustment when ref_obj_id > 0, as acquired objects don't need such adjustment. Update the tp_btf_nullable selftest to reflect the new verifier behavior that no longer causes errors when directly dereferencing a raw tracepoint argument marked as __nullable.
A new research paper examines prompt injection attacks (tricks where hidden instructions in user inputs manipulate AI systems) and how they can compromise the CIA triad (confidentiality, integrity, and availability, the three core principles of security). The paper includes real-world examples of these attacks against major AI vendors like OpenAI, Google, Anthropic, and Microsoft, and aims to help traditional cybersecurity experts better understand and defend against these emerging AI-specific threats.
A security researcher analyzed xAI's Grok chatbot (an AI assistant available through X and an API) for vulnerabilities and found multiple security issues, including prompt injection (tricking the AI by hiding instructions in user posts, images, and PDFs), data exfiltration (stealing information from the system), phishing attacks through clickable links, and ASCII smuggling (hiding invisible text to manipulate the AI's behavior). The researcher responsibly disclosed these findings to xAI.
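ASCII smuggling typically uses the Unicode "tag" block (U+E0000–U+E007F), whose code points mirror ASCII but render invisibly in most UIs while remaining readable to an LLM. A Python sketch of detecting and recovering such hidden text (the `find_smuggled_text` helper is hypothetical):

```python
def find_smuggled_text(text: str) -> str:
    """Extract characters hidden in the Unicode tag block (U+E0000-U+E007F).

    Printable tag characters mirror ASCII at offset 0xE0000, so subtracting
    the offset recovers the hidden payload an attacker embedded.
    """
    hidden = []
    for ch in text:
        cp = ord(ch)
        if 0xE0020 <= cp <= 0xE007E:  # printable-ASCII mirror range
            hidden.append(chr(cp - 0xE0000))
    return "".join(hidden)
```

A non-empty result is a strong signal the input should be rejected or sanitized before the model sees it.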
Fix: Google Cloud Platform implemented a fix to return an error message when a media file URL is specified in the fileUri parameter and VPC Service Controls is enabled. No further fix actions are needed.
LLMs (large language models) can output ANSI escape codes (special control characters that modify how terminal emulators display text and behave), and when LLM-powered applications print this output to a terminal without filtering it, attackers can use prompt injection (tricking an AI by hiding instructions in its input) to make the terminal execute harmful commands like clearing the screen, hiding text, or stealing clipboard data. The vulnerability affects LLM-integrated command-line tools and applications that don't properly handle or encode these control characters before displaying LLM output.
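The mitigation is to strip or encode terminal control sequences before printing model output. A minimal Python sketch (one possible filter, assuming a regex-based approach; `sanitize_llm_output` is a hypothetical name):

```python
import re

# Matches CSI sequences (ESC [ ... final byte), OSC sequences
# (ESC ] ... terminated by BEL or ESC \), and other two-byte escapes.
ANSI_ESCAPE = re.compile(
    r"\x1b\[[0-9;?]*[ -/]*[@-~]"            # CSI: colors, cursor moves, erase
    r"|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)"   # OSC: window titles, clipboard (OSC 52)
    r"|\x1b[@-Z\\-_]"                       # other single-character escapes
)

def sanitize_llm_output(text: str) -> str:
    """Strip terminal control sequences before printing LLM output."""
    return ANSI_ESCAPE.sub("", text)
```

OSC 52 is the clipboard-write sequence mentioned above, which is why OSC filtering matters as much as the more familiar color/cursor codes.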
Fix: According to the source, users are advised to manually patch their systems or wait for the next release, which is expected to contain the fix. No known workarounds are available.
Fix: Upgrade to lobe-chat version 1.19.13 or later. According to the source, 'This issue has been addressed in release version 1.19.13 and all users are advised to upgrade.' There are no known workarounds for this vulnerability.
Fix: The issue has been patched in commit `1aa4c769`, which is expected to be included in version 3.0.3. Users can either manually patch their installation or wait for version 3.0.3 to be released. As an immediate temporary workaround, administrators can disable the download_all_submissions feature.
Fix: A patch is available at https://github.com/mlflow/mlflow/pull/10874, though the source does not specify which MLflow version contains the fix.
A security flaw in Hugging Face Transformers allows attackers to run arbitrary code (RCE, remote code execution) on a user's computer by tricking them into opening a malicious file or visiting a malicious webpage. The vulnerability happens because the software doesn't properly validate data when loading model files, allowing untrusted data to be deserialized (converted from storage format back into a running program).
A vulnerability in Hugging Face Transformers' MaskFormer model allows attackers to run arbitrary code (RCE, or remote code execution) on a user's computer if they visit a malicious webpage or open a malicious file. The flaw occurs because the model file parser doesn't properly validate user-supplied data before deserializing it (converting saved data back into working code), allowing attackers to inject and execute malicious code.
Hugging Face Transformers MobileViTV2 has a vulnerability where attackers can execute arbitrary code (running commands they choose) by tricking users into visiting malicious pages or opening malicious files that contain specially crafted configuration files. The flaw happens because the software doesn't properly check (validate) data before deserializing it (converting it from stored format back into usable code), allowing untrusted data to be executed.
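The three Transformers items above share a root cause: untrusted data reaching a deserializer. For pickle-based model payloads, one generic mitigation (a sketch, not the actual Transformers fix, which validates inputs in its own parsers) is a restricted unpickler that refuses to resolve globals outside a small allow-list:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve unexpected globals, so pickled payloads cannot import code.

    A malicious pickle typically smuggles a call to os.system, eval, or
    similar via a global reference; find_class is the hook where that
    lookup happens, so blocking it defuses the payload.
    """
    ALLOWED = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Even with such a guard, the safer design is avoiding pickle for untrusted model files entirely in favor of data-only formats.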
Fix: This vulnerability is fixed in version 0.9.1.