aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3230 items

CVE-2025-66201: LibreChat is a ChatGPT clone with additional features. Prior to version 0.8.1-rc2, LibreChat is vulnerable to Server-Side Request Forgery.

high · vulnerability
security
Nov 29, 2025
CVE-2025-66201

LibreChat, a ChatGPT alternative with extra features, had a vulnerability in versions before 0.8.1-rc2 where an authenticated user could exploit the "Actions" feature by uploading malicious OpenAPI specs (interface documents that describe how to connect to external services) to perform SSRF (server-side request forgery, where the server itself is tricked into accessing restricted URLs on the attacker's behalf). This could allow attackers to reach sensitive services like cloud metadata endpoints that are normally hidden from regular users.

Fix: Update LibreChat to version 0.8.1-rc2 or later, where this issue has been patched.

NVD/CVE Database
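The standard mitigation for this class of SSRF is to resolve the target host before fetching and reject private, loopback, and link-local addresses (which covers cloud metadata endpoints such as 169.254.169.254). A minimal sketch, assuming nothing about LibreChat's actual code; the function name and the extra denylist are illustrative:

```python
import ipaddress
import socket
from urllib.parse import urlparse

BLOCKED_HOSTS = {"metadata.google.internal"}  # hypothetical extra denylist

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, or link-local addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    if parsed.hostname.lower() in BLOCKED_HOSTS:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 80)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # 169.254.169.254 (cloud metadata) is link-local, so it is caught here
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True
```

Note that a check like this is still subject to DNS rebinding between check and fetch; a robust client pins the resolved IP when it opens the connection.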

CVE-2025-12638: Keras version 3.11.3 is affected by a path traversal vulnerability in the keras.utils.get_file() function when extracting tar archives.

high · vulnerability
security
Nov 28, 2025
CVE-2025-12638

Keras version 3.11.3 has a path traversal vulnerability (a security flaw where attackers can write files outside the intended directory) in the keras.utils.get_file() function when extracting tar archives (compressed file formats). The function fails to properly validate file paths during extraction, allowing an attacker to write files anywhere on the system, potentially compromising it or executing malicious code.
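This bug class is avoided by validating each archive member's resolved destination before extraction. A hedged sketch of the general defense (not Keras's actual patch; the function name is illustrative):

```python
import os
import tarfile

def safe_extract(archive_path: str, dest: str) -> None:
    """Extract a tar archive, rejecting members that would escape dest."""
    dest = os.path.realpath(dest)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            # realpath collapses '..' segments, so a name like '../../etc/x'
            # resolves outside dest and fails the containment check below
            if os.path.commonpath([dest, target]) != dest:
                raise ValueError(f"blocked traversal via member: {member.name}")
            if member.issym() or member.islnk():
                raise ValueError(f"blocked link member: {member.name}")
        # on Python 3.12+, tar.extractall(dest, filter="data") adds further checks
        tar.extractall(dest)
```

On current Python, passing `filter="data"` to `extractall` is the simplest fix, since it rejects traversal, absolute paths, and dangerous link members by default.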

CVE-2025-13381: The AI ChatBot with ChatGPT and Content Generator by AYS plugin for WordPress is vulnerable to unauthorized access due to a missing authorization check.

medium · vulnerability
security
Nov 27, 2025
CVE-2025-13381

The AI ChatBot with ChatGPT and Content Generator plugin for WordPress has a missing authorization check (a security control that verifies a user has permission to perform an action) in its 'ays_chatgpt_save_wp_media' function, allowing unauthenticated attackers to upload media files without logging in. All versions through 2.7.0 are affected.

CVE-2025-13378: The AI ChatBot with ChatGPT and Content Generator by AYS plugin for WordPress is vulnerable to Server-Side Request Forgery.

medium · vulnerability
security
Nov 27, 2025
CVE-2025-13378

CVE-2025-13378 is an SSRF vulnerability (server-side request forgery, where an attacker tricks a server into making unwanted network requests on their behalf) in the AI ChatBot with ChatGPT and Content Generator plugin for WordPress, affecting versions 2.6.9 and earlier.

CVE-2025-62593: Ray is an AI compute engine. Prior to version 2.52.0, developers working with Ray as a development tool can be exploited via malicious websites.

critical · vulnerability
security
Nov 26, 2025
CVE-2025-62593

Ray, an AI compute engine, had a critical vulnerability before version 2.52.0 that allowed attackers to run code on a developer's computer (RCE, or remote code execution) through Firefox and Safari browsers. The vulnerability exploited a weak security check that only looked at the User-Agent header (a piece of information browsers send to websites) combined with DNS rebinding attacks (tricks that redirect browser requests to unexpected servers), allowing attackers to compromise developers who visited malicious websites or ads.
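The textbook defense against DNS rebinding is to validate the Host header against an allowlist of names the local service actually expects, since a User-Agent check cannot distinguish a rebinding attack from a legitimate request. A minimal sketch under that assumption (not Ray's actual patch; the allowlist contents are illustrative):

```python
ALLOWED_HOSTS = {"localhost", "127.0.0.1", "::1"}  # hosts the local dashboard expects

def host_is_allowed(host_header: str) -> bool:
    """DNS-rebinding defense: accept only expected Host header values.

    A rebinding attack serves a page from attacker.example, then re-points
    that DNS name at 127.0.0.1; the browser's requests reach the local
    service carrying Host: attacker.example, which this check rejects.
    """
    host = host_header.strip().lower()
    if host.startswith("["):          # IPv6 literal such as [::1]:8265
        host = host[1:].split("]", 1)[0]
    else:
        host = host.split(":", 1)[0]  # strip an optional port
    return host in ALLOWED_HOSTS
```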

CVE-2021-4472: The mistral-dashboard plugin for OpenStack has a local file inclusion vulnerability through the 'Create Workbook' feature.

medium · vulnerability
security
Nov 26, 2025
CVE-2021-4472

The mistral-dashboard plugin for OpenStack (a cloud computing platform) has a local file inclusion vulnerability (a flaw that lets attackers read files they shouldn't access) in its 'Create Workbook' feature, which could expose sensitive file contents on the affected system.

v5.1.1 (MITRE ATLAS release)

info · research · Industry
industry

Deep Learning With Data Privacy via Residual Perturbation

info · research · Peer-Reviewed
research

CVE-2025-62703: Fugue is a unified interface for distributed computing that lets users execute Python, Pandas, and SQL code on Spark, Dask, and Ray.

high · vulnerability
security
Nov 25, 2025
CVE-2025-62703

Fugue is a tool that lets developers run Python, Pandas, and SQL code across distributed computing systems like Spark, Dask, and Ray. Versions 0.9.2 and earlier have a remote code execution vulnerability (RCE, where attackers can run arbitrary code on a victim's machine) in the RPC server because it deserializes untrusted data using cloudpickle.loads() without checking if the data is safe first. An attacker can send malicious serialized Python objects to the server, which will execute on the victim's machine.
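Unpickling untrusted bytes is unsafe because a pickle payload can name importable callables such as os.system. The stdlib documents a hardening pattern of overriding Unpickler.find_class; a minimal sketch of that pattern is below. Note this is general illustration, not Fugue's fix: cloudpickle payloads serialize code by value, so no allowlist makes them safe, and the real remedy is to authenticate the RPC channel or avoid deserializing untrusted input entirely.

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global during unpickling.

    Pickle payloads gain code execution by importing callables (e.g.
    os.system); blocking find_class entirely limits accepted input to
    plain data such as numbers, strings, lists, and dicts.
    """
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")

def safe_loads(data: bytes):
    """Deserialize plain-data pickles only (illustrative helper)."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```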

CVE-2025-13380: The AI Engine for WordPress: ChatGPT, GPT Content Generator plugin for WordPress is vulnerable to Arbitrary File Read.

medium · vulnerability
security
Nov 25, 2025
CVE-2025-13380

A WordPress plugin called 'The AI Engine for WordPress: ChatGPT, GPT Content Generator' has a vulnerability that allows attackers with Contributor-level access or higher to read any file on the server. The problem exists because the plugin doesn't properly check file paths that users provide to certain functions (the 'lqdai_update_post' AJAX endpoint and the insert_image() function), which could expose sensitive information.

Antigravity Grounded! Security Vulnerabilities in Google's Latest IDE

high · news
security
Nov 25, 2025

Google's new Antigravity IDE inherits multiple security vulnerabilities from the Windsurf codebase it was licensed from, including remote command execution (RCE, where an attacker can run commands on a system they don't own) via indirect prompt injection (tricking an AI by hiding instructions in its input), hidden instruction execution, and data exfiltration. The IDE's default setting allows the AI to automatically execute terminal commands without human review, relying on the language model's judgment to determine if a command is safe, which researchers have successfully bypassed with working exploits.

CVE-2025-65106: LangChain is a framework for building agents and LLM-powered applications. Versions 0.3.79 and prior, and 1.0.0 through 1.0.6, are vulnerable to template injection.

high · vulnerability
security
Nov 21, 2025
CVE-2025-65106

LangChain, a framework for building AI agents and applications powered by large language models, has a template injection vulnerability (a security flaw where attackers can hide malicious code in text templates) in versions 0.3.79 and earlier and 1.0.0 through 1.0.6. Attackers can exploit this by crafting malicious template strings that access internal Python object data in ChatPromptTemplate and similar classes, particularly when an application accepts untrusted template input.
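To see the bug class (illustrated here with Python's built-in str.format, not LangChain's actual template engine), note that format fields can walk object attributes, so an untrusted string used as the template can read internal state. The class and attribute names below are hypothetical:

```python
class Secrets:
    """Stand-in object with a sensitive attribute (hypothetical)."""
    api_key = "sk-demo-not-real"

def render_unsafe(template: str, **vals) -> str:
    # BUG CLASS: the untrusted string is used AS the template, so a field
    # like {cfg.api_key} can traverse into object attributes
    return template.format(**vals)

def render_safe(user_text: str) -> str:
    # FIX: untrusted input is passed as a VALUE into a fixed template,
    # so braces in it are never interpreted as format fields
    return "User said: {msg}".format(msg=user_text)

leaked = render_unsafe("Hello {cfg.api_key}", cfg=Secrets())
```

The general rule is the same one the advisory implies: treat untrusted text as data to substitute, never as the template itself.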

CVE-2025-65946: Roo Code is an AI-powered autonomous coding agent that lives in users' editors. Prior to version 3.26.7, a validation error allowed commands outside the allow list to execute automatically.

high · vulnerability
security
Nov 21, 2025
CVE-2025-65946

Roo Code is an AI-powered coding agent that runs inside code editors. Before version 3.26.7, a validation error allowed Roo to automatically execute commands that weren't on an allow list (a list of approved commands), which is a type of command injection vulnerability (where attackers trick a system into running unintended commands).
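Allow-list checks on shell commands are easy to get wrong: a raw prefix test approves both "gitx" and "git status && rm -rf ~". A minimal sketch of a stricter check, matching the parsed first token exactly and rejecting shell metacharacters (the allow list contents are illustrative; this is not Roo Code's implementation):

```python
import shlex

ALLOWED_COMMANDS = {"git", "npm", "ls"}  # hypothetical allow list

def command_is_allowed(command: str) -> bool:
    """Approve a command only if its parsed first token is allow-listed.

    Rejecting metacharacters first blocks chaining ("a && b", "a; b"),
    pipes, and substitution; shlex.split then yields the real argv[0],
    so "gitx" does not pass as a prefix match for "git".
    """
    if any(ch in command for ch in ";|&$`<>\n"):
        return False
    try:
        tokens = shlex.split(command)
    except ValueError:  # unbalanced quotes and similar parse errors
        return False
    return bool(tokens) and tokens[0] in ALLOWED_COMMANDS
```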

CVE-2025-65107: Langfuse is an open source large language model engineering platform. In versions from 2.95.0 to before 2.95.12 and from 3.17.0 to before 3.131.0, account takeover was possible under certain configurations.

medium · vulnerability
security
Nov 21, 2025
CVE-2025-65107

Langfuse, an open source platform for managing large language models, has a vulnerability in versions 2.95.0–2.95.11 and 3.17.0–3.130.x where attackers could take over user accounts if certain security settings are not configured. The attack works by tricking an authenticated user into clicking a malicious link (via CSRF, which is cross-site request forgery where an attacker tricks your browser into making unwanted requests, or phishing).

CVE-2025-12973: The S2B AI Assistant – ChatBot, ChatGPT, OpenAI, Content & Image Generator plugin for WordPress is vulnerable to arbitrary file upload.

high · vulnerability
security
Nov 21, 2025
CVE-2025-12973

The S2B AI Assistant WordPress plugin (a tool that adds AI chatbot features to websites) has a vulnerability in versions up to 1.7.8 where it fails to check what type of files users are uploading. This allows editors and higher-level users to upload malicious files that could potentially let attackers run commands on the website server (remote code execution, or RCE).

CVE-2025-62609: MLX is an array framework for machine learning on Apple silicon. Prior to version 0.29.4, there is a segmentation fault when loading malicious GGUF files.

high · vulnerability
security
Nov 21, 2025
CVE-2025-62609

MLX is an array framework for machine learning on Apple silicon. Before version 0.29.4, loading malicious GGUF files (a machine learning model format) causes a segmentation fault (a crash where the program tries to access invalid memory) because the code dereferences an untrusted pointer (uses a memory address without checking whether it is valid) from an external library without validation.

CVE-2025-62608: MLX is an array framework for machine learning on Apple silicon. Prior to version 0.29.4, there is a heap buffer overflow when loading malicious NumPy .npy files.

critical · vulnerability
security
Nov 21, 2025
CVE-2025-62608

MLX is an array framework (a software library for handling arrays of data in machine learning) for Apple silicon computers. Before version 0.29.4, the software had a heap buffer overflow (a memory safety bug where the program reads beyond allocated memory) in its file-loading function when processing malicious NumPy .npy files (a common data format in machine learning), which could crash the program or leak sensitive information.
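Overflows in file loaders typically come from trusting a length field in the header. MLX's loader is C++, but the missing bounds check can be illustrated in Python against the real .npy layout (magic b"\x93NUMPY", a two-byte version, then a little-endian header length); the function name is illustrative:

```python
import struct

def read_npy_header(blob: bytes) -> str:
    """Parse a .npy header defensively, validating lengths before use."""
    if len(blob) < 10 or blob[:6] != b"\x93NUMPY":
        raise ValueError("not a .npy file")
    major, minor = blob[6], blob[7]
    if major == 1:
        (hlen,) = struct.unpack_from("<H", blob, 8)   # uint16 header length
        start = 10
    elif major in (2, 3):
        if len(blob) < 12:
            raise ValueError("truncated header")
        (hlen,) = struct.unpack_from("<I", blob, 8)   # uint32 header length
        start = 12
    else:
        raise ValueError(f"unsupported .npy version {major}.{minor}")
    if start + hlen > len(blob):
        # this is the bounds check an overflowing parser skips: the declared
        # header length must fit inside the data actually present
        raise ValueError("declared header length exceeds file size")
    return blob[start:start + hlen].decode("latin1")
```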

Convex Solutions to SfT and NRSfM Under Algebraic Deformation Models

info · research · Peer-Reviewed
research

Human-Inspired Scene Understanding: A Grounded Cognition Method for Unbiased Scene Graph Generation

info · research · Peer-Reviewed
research

Rethinking Rotation-Invariant Recognition of Fine-Grained Shapes From the Perspective of Contour Points

info · research · Peer-Reviewed
research
Page 74 of 162
NVD/CVE Database (CVE-2025-12638)

Fix (CVE-2025-13381): Update to version 2.7.1 or later, which includes a fix for the missing authorization check, as shown in the changeset referenced in the vulnerability report.

NVD/CVE Database (CVE-2025-13381)

Fix (CVE-2025-13378): Fixed in version 2.7.1, as shown by the changeset comparison between versions 2.6.9 and 2.7.1 of the admin file in the WordPress plugin repository.

NVD/CVE Database (CVE-2025-13378)

Fix (CVE-2025-62593): Update to Ray version 2.52.0 or later, as this issue has been patched in that version.

NVD/CVE Database (CVE-2025-62593)
NVD/CVE Database (CVE-2021-4472)
MITRE ATLAS Releases (v5.1.1)
Nov 26, 2025
Deep Learning With Data Privacy via Residual Perturbation
privacy
Nov 26, 2025

This research proposes a new method for protecting data privacy in deep learning (training AI models on sensitive data) by adding Gaussian noise (random values from a bell-curve distribution) to ResNets (a type of neural network with skip connections). The method aims to provide differential privacy (a mathematical guarantee that an individual's data cannot be easily identified from the model's results) while maintaining better accuracy and speed than existing privacy-protection techniques like DPSGD (differentially private stochastic gradient descent, a slower privacy-focused training method).

IEEE Xplore (Security & AI Journals)
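As a toy illustration of the residual-perturbation idea above (not the paper's actual algorithm: the list-based "layer", `transform`, and `sigma` are placeholder assumptions), noise is injected on the skip path of a residual connection:

```python
import random

def perturbed_residual(x, transform, sigma, rng=None):
    """Toy residual connection: y_i = x_i + f(x_i) + Gaussian noise.

    Injecting Gaussian noise into the residual path is the mechanism the
    paper builds its differential-privacy argument on; a larger sigma
    gives stronger privacy at the cost of accuracy.
    """
    rng = rng or random.Random(0)
    return [xi + transform(xi) + rng.gauss(0.0, sigma) for xi in x]
```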

Fix (CVE-2025-62703): This issue has been patched via commit 6f25326.

NVD/CVE Database (CVE-2025-62703)
NVD/CVE Database (CVE-2025-13380)
Embrace The Red (Antigravity IDE report)

Fix (CVE-2025-65106): Update to LangChain version 0.3.80 or 1.0.7, where the vulnerability has been patched.

NVD/CVE Database (CVE-2025-65106)

Fix (CVE-2025-65946): Update Roo Code to version 3.26.7 or later. According to the source, 'This issue has been patched in version 3.26.7.'

NVD/CVE Database (CVE-2025-65946)

Fix (CVE-2025-65107): Update to Langfuse version 2.95.12 or 3.131.0, where the issue has been patched. Alternatively, as a workaround, set the AUTH_<PROVIDER>_CHECK configuration parameter.

NVD/CVE Database (CVE-2025-65107)
NVD/CVE Database (CVE-2025-12973)

Fix (CVE-2025-62609): Update MLX to version 0.29.4 or later; the issue has been patched in that version.

NVD/CVE Database (CVE-2025-62609)

Fix (CVE-2025-62608): Update MLX to version 0.29.4 or later; the vulnerability has been patched in that version.

NVD/CVE Database (CVE-2025-62608)
Convex Solutions to SfT and NRSfM Under Algebraic Deformation Models
Nov 21, 2025

This paper presents mathematical approaches to solve Shape-from-Template (SfT, reconstructing a 3D object's shape from a single image using a known template) and Non-Rigid Structure-from-Motion (NRSfM, figuring out how a flexible object moves and its 3D structure from video). The researchers use Semi-Definite Programming (SDP, a mathematical optimization technique for solving certain types of problems) to find solutions that work with different types of object deformation models, requiring only point correspondences (matching points between images) rather than additional impractical assumptions.

IEEE Xplore (Security & AI Journals)
Human-Inspired Scene Understanding: A Grounded Cognition Method for Unbiased Scene Graph Generation
Nov 21, 2025

Scene Graph Generation (SGG, a method that identifies objects and their relationships in images) is limited by long-tailed bias, where the AI model performs well on common relationships but poorly on rare ones. This paper proposes a Grounded Cognition Method (GCM) that mimics human thinking by using techniques like Out Domain Knowledge Injection to broaden visual understanding, a Semantic Group Aware Synthesizer to organize relationship categories, modality erasure (removing one type of input at a time) to improve robustness, and a Shapley Enhanced Multimodal Counterfactual module to handle diverse contexts.

IEEE Xplore (Security & AI Journals)
Rethinking Rotation-Invariant Recognition of Fine-Grained Shapes From the Perspective of Contour Points
Nov 21, 2025

This research addresses the problem of recognizing shapes that have been rotated at different angles in computer vision (the field of teaching computers to understand images). The authors propose a new method that focuses on analyzing the outline or contour points of shapes rather than individual pixels, and they use a special neural network module to identify geometric patterns in these contours while ignoring rotation. Their approach shows better results than previous methods, especially for complex shapes, and it works even when the contour data is slightly noisy or imperfect.

IEEE Xplore (Security & AI Journals)