All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
LibreChat, a ChatGPT alternative with extra features, had a vulnerability in versions before 0.8.1-rc2 where an authenticated user could exploit the "Actions" feature by uploading malicious OpenAPI specs (interface documents that describe how to connect to external services) to perform SSRF (server-side request forgery, where the server itself is tricked into accessing restricted URLs on the attacker's behalf). This could allow attackers to reach sensitive services like cloud metadata endpoints that are normally hidden from regular users.
Fix: Update LibreChat to version 0.8.1-rc2 or later, where this issue has been patched.
Keras version 3.11.3 has a path traversal vulnerability (a security flaw where attackers can write files outside the intended directory) in the keras.utils.get_file() function when extracting tar archives (compressed file formats). The function fails to properly validate file paths during extraction, allowing an attacker to write files anywhere on the system, potentially compromising it or executing malicious code.
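The underlying hazard is that tar archive members can carry absolute or `..`-relative paths, so naive extraction writes wherever the archive says. A minimal defensive-extraction sketch (illustrative, not the Keras patch; `safe_extract` is a hypothetical helper):

```python
import os
import tarfile

def safe_extract(tar: tarfile.TarFile, dest: str) -> None:
    """Extract only members whose resolved paths stay inside dest;
    raise on any traversal attempt before touching the filesystem."""
    dest = os.path.realpath(dest)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(dest, member.name))
        # A member named '../../etc/cron.d/evil' would resolve outside dest.
        if not target.startswith(dest + os.sep):
            raise ValueError(f"blocked path traversal: {member.name!r}")
    tar.extractall(dest)
```

Recent Python versions offer similar protection natively via tarfile extraction filters, e.g. `tar.extractall(dest, filter="data")`.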
The AI ChatBot with ChatGPT and Content Generator plugin for WordPress (versions up to and including 2.7.0) has a missing authorization check (a security control that verifies a user has permission to perform an action) in its 'ays_chatgpt_save_wp_media' function, allowing unauthenticated attackers to upload media files without logging in.
CVE-2025-13378 is a vulnerability in the AI ChatBot with ChatGPT and Content Generator plugin for WordPress that allows SSRF (server-side request forgery, where an attacker tricks a server into making unwanted network requests on their behalf). The flaw affects versions 2.6.9 and earlier.
Ray, an AI compute engine, had a critical vulnerability before version 2.52.0 that allowed attackers to run code on a developer's computer (RCE, or remote code execution) through Firefox and Safari browsers. The vulnerability exploited a weak security check that only looked at the User-Agent header (a piece of information browsers send to websites) combined with DNS rebinding attacks (tricks that redirect browser requests to unexpected servers), allowing attackers to compromise developers who visited malicious websites or ads.
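The flaw here is that the User-Agent header is useless for this check: under DNS rebinding, the request comes from the victim's own browser with a perfectly normal User-Agent. What does change is the Host header, which is why local development servers typically validate it against an allowlist. A hedged sketch of that idea (not Ray's actual fix; names are illustrative):

```python
# DNS rebinding lets a page served from attacker.example re-resolve its
# domain to 127.0.0.1, so the victim's browser then sends requests to the
# local server with "Host: attacker.example". Checking the Host header
# blocks this; checking User-Agent does not (the browser's UA is the
# same either way).
ALLOWED_HOSTS = {"localhost", "127.0.0.1"}

def is_trusted_request(host_header: str) -> bool:
    host = host_header.split(":", 1)[0].lower()  # strip any :port suffix
    return host in ALLOWED_HOSTS
```

The same pattern appears in many dev tools (e.g. allowed-hosts settings in web frameworks), precisely because rebinding attacks target anything listening on localhost.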
The mistral-dashboard plugin for OpenStack (a cloud computing platform) has a local file inclusion vulnerability (a flaw that lets attackers read files they shouldn't access) in its 'Create Workbook' feature, which could expose sensitive file contents on the affected system.
Fugue is a tool that lets developers run Python, Pandas, and SQL code across distributed computing systems like Spark, Dask, and Ray. Versions 0.9.2 and earlier have a remote code execution vulnerability (RCE, where attackers can run arbitrary code on a victim's machine) in the RPC server because it deserializes untrusted data using cloudpickle.loads() without checking if the data is safe first. An attacker can send malicious serialized Python objects to the server, which will execute on the victim's machine.
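The root cause generalizes to all pickle-family deserializers, cloudpickle included: the byte stream itself names a callable and arguments, and loading the data invokes them. A harmless stdlib-pickle demonstration of why calling loads() on untrusted bytes is code execution (the eval call stands in for an attacker's os.system):

```python
import pickle

class Payload:
    """__reduce__ tells pickle how to 'rebuild' an object: a (callable, args)
    pair that pickle.loads invokes blindly. An attacker can name any
    importable callable -- os.system, subprocess.call, etc."""
    def __reduce__(self):
        # Harmless stand-in for a malicious command:
        return (eval, ("21 * 2",))

blob = pickle.dumps(Payload())   # what an attacker would send over RPC
result = pickle.loads(blob)      # deserializing *executes* eval("21 * 2")
print(result)  # 42 -- arbitrary code ran; no Payload object came back
```

This is why the fix for such issues is typically to authenticate the channel or switch to a data-only format (JSON, msgpack) rather than to sanitize pickles, which is not reliably possible.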
A WordPress plugin called 'The AI Engine for WordPress: ChatGPT, GPT Content Generator' has a vulnerability that allows attackers with Contributor-level access or higher to read any file on the server. The problem exists because the plugin doesn't properly check file paths that users provide to certain functions (the 'lqdai_update_post' AJAX endpoint and the insert_image() function), which could expose sensitive information.
Google's new Antigravity IDE inherits multiple security vulnerabilities from the Windsurf codebase it was licensed from, including remote command execution (RCE, where an attacker can run commands on a system they don't own) via indirect prompt injection (tricking an AI by hiding instructions in its input), hidden instruction execution, and data exfiltration. The IDE's default setting allows the AI to automatically execute terminal commands without human review, relying on the language model's judgment to determine if a command is safe, which researchers have successfully bypassed with working exploits.
LangChain, a framework for building AI agents and applications powered by large language models, has a template injection vulnerability (a security flaw where attackers can hide malicious code in text templates) in versions 0.3.79 and earlier and 1.0.0 through 1.0.6. Attackers can exploit this by crafting malicious template strings that access internal Python object data in ChatPromptTemplate and similar classes, particularly when an application accepts untrusted template input.
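The underlying mechanism can be seen with plain str.format, which resolves dotted attribute lookups inside placeholders: a user-supplied template can walk from a formatted object to data it was never meant to see. Illustrative only; the class names are hypothetical and this is not LangChain's exact code path:

```python
# str.format follows attribute chains written inside {...} placeholders,
# so template text is effectively a (limited) expression language.

class Config:
    def __init__(self):
        self.api_key = "sk-secret"   # sensitive internal state

class User:
    pass

user = User()
user.config = Config()

# A benign-looking template supplied by an attacker:
template = "Hello {u.config.api_key}"
print(template.format(u=user))  # leaks: Hello sk-secret
```

The standard mitigation is to treat templates as trusted code, not user input, or to restrict substitution to flat, explicitly whitelisted variables.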
Roo Code is an AI-powered coding agent that runs inside code editors. Before version 3.26.7, a validation error allowed Roo to automatically execute commands that weren't on an allow list (a list of approved commands), which is a type of command injection vulnerability (where attackers trick a system into running unintended commands).
Langfuse, an open source platform for managing large language models, has a vulnerability in versions 2.95.0–2.95.11 and 3.17.0–3.130.x where attackers could take over user accounts if certain security settings are not configured. The attack works by tricking an authenticated user into clicking a malicious link (via CSRF, which is cross-site request forgery where an attacker tricks your browser into making unwanted requests, or phishing).
The S2B AI Assistant WordPress plugin (a tool that adds AI chatbot features to websites) has a vulnerability in versions up to 1.7.8 where it fails to check what type of files users are uploading. This allows editors and higher-level users to upload malicious files that could potentially let attackers run commands on the website server (remote code execution, or RCE).
MLX is an array framework for machine learning on Apple silicon that has a vulnerability where loading malicious GGUF files (a machine learning model format) causes a segmentation fault (a crash where the program tries to access invalid memory). The problem occurs because the code dereferences an untrusted pointer (uses a memory address without checking if it's valid) from an external library without validation.
MLX is an array framework (a software library for handling arrays of data in machine learning) for Apple silicon computers. Before version 0.29.4, the software had a heap buffer overflow (a memory safety bug where the program reads beyond allocated memory) in its file-loading function when processing malicious NumPy .npy files (a common data format in machine learning), which could crash the program or leak sensitive information.
Fix: Update to version 2.7.1 or later, which includes a fix for the missing authorization check as shown in the changeset referenced in the vulnerability report.
Fix: The vulnerability was fixed in version 2.7.1, as shown by the changeset comparison between version 2.6.9 and version 2.7.1 of the admin file in the WordPress plugin repository.
Fix: Update to Ray version 2.52.0 or later, as this issue has been patched in that version.
This research proposes a new method for protecting data privacy in deep learning (training AI models on sensitive data) by adding Gaussian noise (random values from a bell-curve distribution) to ResNets (a type of neural network with skip connections). The method aims to provide differential privacy (a mathematical guarantee that an individual's data cannot be easily identified from the model's results) while maintaining better accuracy and speed than existing privacy-protection techniques like DPSGD (differentially private stochastic gradient descent, a slower privacy-focused training method).
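For context, the classic (ε, δ) Gaussian mechanism that such methods build on adds noise whose scale is calibrated to the query's sensitivity. A minimal sketch of the generic mechanism (not the paper's ResNet-specific scheme):

```python
import math
import random

def gaussian_mechanism(value: float, sensitivity: float,
                       epsilon: float, delta: float) -> float:
    """Release value + Gaussian noise satisfying (epsilon, delta)-DP.
    Standard calibration: sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon
    (valid for epsilon < 1). Generic illustration, not the paper's method."""
    sigma = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return value + random.gauss(0.0, sigma)
```

Smaller ε (stronger privacy) yields larger σ and noisier outputs, which is exactly the accuracy/privacy trade-off the paper aims to improve on.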
Fix: This issue has been patched via commit 6f25326.
Fix: Update to LangChain version 0.3.80 or 1.0.7, where the vulnerability has been patched.
Fix: Update to version 3.26.7 or later, where this issue has been patched.
Fix: Update to Langfuse version 2.95.12 or 3.131.0, where the issue has been patched. Alternatively, as a workaround, set the AUTH_<PROVIDER>_CHECK configuration parameter.
Fix: This issue has been patched in version 0.29.4; update MLX to version 0.29.4 or later.
Fix: Update MLX to version 0.29.4 or later, where the vulnerability has been patched.
This paper presents mathematical approaches to solve Shape-from-Template (SfT, reconstructing a 3D object's shape from a single image using a known template) and Non-Rigid Structure-from-Motion (NRSfM, figuring out how a flexible object moves and its 3D structure from video). The researchers use Semi-Definite Programming (SDP, a mathematical optimization technique for solving certain types of problems) to find solutions that work with different types of object deformation models, requiring only point correspondences (matching points between images) rather than additional impractical assumptions.
Scene Graph Generation (SGG, a method that identifies objects and their relationships in images) is limited by long-tailed bias, where the AI model performs well on common relationships but poorly on rare ones. This paper proposes a Grounded Cognition Method (GCM) that mimics human thinking by using techniques like Out Domain Knowledge Injection to broaden visual understanding, a Semantic Group Aware Synthesizer to organize relationship categories, modality erasure (removing one type of input at a time) to improve robustness, and a Shapley Enhanced Multimodal Counterfactual module to handle diverse contexts.
This research addresses the problem of recognizing shapes that have been rotated at different angles in computer vision (the field of teaching computers to understand images). The authors propose a new method that focuses on analyzing the outline or contour points of shapes rather than individual pixels, and they use a special neural network module to identify geometric patterns in these contours while ignoring rotation. Their approach shows better results than previous methods, especially for complex shapes, and it works even when the contour data is slightly noisy or imperfect.