All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
NVIDIA Triton Server for Linux has a vulnerability where attackers can bypass input validation (improper validation of a specified quantity in input) by sending malformed data. This flaw could allow an attacker to cause a denial of service (making the system unavailable to legitimate users).
NVIDIA Triton Inference Server has a vulnerability (CVE-2025-33201) where an attacker can send extremely large data payloads to bypass safety checks, potentially crashing the service and making it unavailable to legitimate users (a denial of service attack). The vulnerability stems from improper validation of unusual or exceptional input conditions.
MCP Server Kubernetes (a tool that lets software manage Kubernetes clusters, which are systems for running containerized applications) has a vulnerability in versions before 2.9.8 where the exec_in_pod tool accepts user commands without checking them first. When commands are provided as strings, they go directly to shell interpretation (sh -c, a command processor) without validation, allowing attackers to inject malicious shell commands either directly or through prompt injection (tricking an AI into running hidden instructions in its input).
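The anti-pattern behind this class of bug is passing a raw command string to `sh -c`. The sketch below is illustrative only, not the plugin's actual code; `build_exec_args` is a hypothetical helper showing the safer argv-based approach:

```python
import shlex

def build_exec_args(command: str) -> list[str]:
    """Safer pattern: tokenize the command ourselves, reject shell
    metacharacters, and execute the argv list directly (shell=False),
    so nothing is ever re-interpreted by a shell."""
    argv = shlex.split(command)
    banned = set(";|&`$<>(){}")
    for token in argv:
        if banned & set(token):
            raise ValueError(f"shell metacharacter rejected in {token!r}")
    return argv

# Vulnerable pattern (what 'sh -c' amounts to):
#   subprocess.run(["sh", "-c", command])   # 'date; rm -rf /' runs BOTH commands
# Safer pattern:
#   subprocess.run(build_exec_args(command))  # argv executed directly, no shell
print(build_exec_args("kubectl get pods -n default"))
```

Executing an argv list rather than a shell string means injected metacharacters become literal arguments instead of new commands, which also blunts prompt-injection payloads that smuggle `;`- or `$( )`-style suffixes into the command string.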
Claude Code is an agentic coding tool (software that can write and run code automatically) that had a vulnerability before version 1.0.93 where errors in parsing shell commands (instructions to a computer's operating system) allowed attackers to bypass read-only protections and execute arbitrary code if they could add untrusted content to the tool's input. This vulnerability (command injection, or tricking the tool into running unintended commands) had a CVSS score (0-10 severity rating) of 8.7, marking it as high-risk.
A WordPress plugin called 'Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI' has a time-based SQL injection vulnerability (a security flaw where attackers can insert malicious database commands through user input) in its "getTermsForAjax" function in versions up to 3.40.1. Authenticated users with contributor-level access or higher can exploit this flaw to extract sensitive information from the website's database because the plugin doesn't properly validate user input before using it in database queries.
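The underlying anti-pattern is string-concatenating user input into SQL. This is a hedged illustration using SQLite with invented table and function names (the plugin itself is PHP/MySQL, where a time-based payload would use something like `SLEEP(5)`, which SQLite lacks):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE terms (id INTEGER, name TEXT)")
conn.execute("INSERT INTO terms VALUES (1, 'news'), (2, 'howto')")

def get_terms_vulnerable(term: str):
    # Anti-pattern: the input becomes part of the SQL text itself, so a
    # payload can rewrite the query (or, time-based, make it stall).
    return conn.execute(
        "SELECT id, name FROM terms WHERE name = '" + term + "'"
    ).fetchall()

def get_terms_safe(term: str):
    # Parameterized query: the driver sends the value separately from the
    # SQL text, so it can never change the query's structure.
    return conn.execute(
        "SELECT id, name FROM terms WHERE name = ?", (term,)
    ).fetchall()

print(get_terms_safe("news"))        # [(1, 'news')]
print(get_terms_safe("' OR 1=1--"))  # [] -- treated as a literal string
```

With the concatenated version, the input `' OR 1=1--` turns the query into `... WHERE name = '' OR 1=1--'`, returning every row; the parameterized version matches nothing because the payload is just a string value.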
A WordPress plugin called AI Autotagger with OpenAI has a security flaw in versions up to 3.40.1 where it fails to properly check if users have permission to perform certain actions. This authorization bypass (a failure to verify that someone is allowed to do something) allows authenticated attackers with basic subscriber-level access to merge or delete taxonomy terms (categories and tags used to organize content) that they shouldn't be able to modify.
LlamaIndex released version 0.14.9 with updates across multiple components, including bug fixes for vector stores (systems that store converted data in a format AI models can search), support for new AI models like Claude Opus 4.5 and GPT-5.1, and improvements to integrations with services like Azure, Bedrock, and Qdrant. The release addresses issues with memory management, async operations (non-blocking code that runs in parallel), and various database connectors.
vLLM (a tool for running large language models) versions before 0.11.1 have a critical security flaw where loading a model configuration can execute malicious code from the internet without the user's permission. An attacker can create a fake model that appears safe but secretly downloads and runs harmful code from another location, even when users try to block remote code by setting trust_remote_code=False (a security setting meant to prevent exactly this).
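In Hugging Face-style configs, models that need custom Python code advertise it via an `auto_map` entry in `config.json`; `trust_remote_code=False` is supposed to refuse such configs. The sketch below is a hypothetical loader-side gate showing the check the flag is meant to enforce, not vLLM's or Transformers' actual code:

```python
def load_model_config(config: dict, trust_remote_code: bool = False) -> dict:
    """Refuse any config that references downloadable custom code unless the
    caller has explicitly opted in. 'auto_map' is how Hugging Face-style
    configs point at custom model classes shipped alongside the weights."""
    if not trust_remote_code and "auto_map" in config:
        raise PermissionError(
            "config requests custom code but trust_remote_code=False"
        )
    return config

# A plain config loads; one that pulls in custom code is rejected.
print(load_model_config({"model_type": "llama"}))
```

The CVE describes the failure mode where this gate is bypassed: a crafted config gets its custom code fetched and executed even though the caller set the flag to False.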
A vulnerability (CVE-2025-49642) in Zabbix Agent on AIX systems allows local users with write access to the /home/cecuser directory to hijack library loading, potentially gaining unauthorized access or modifying the system. This is rated as medium severity (CVSS score of 5.8, a 0-10 vulnerability rating scale) and exploits untrusted search paths (directories the system checks when looking for required files).
LibreChat, a ChatGPT alternative with extra features, had a vulnerability in versions before 0.8.1-rc2 where an authenticated user could exploit the "Actions" feature by uploading malicious OpenAPI specs (interface documents that describe how to connect to external services) to perform SSRF (server-side request forgery, where the server itself is tricked into accessing restricted URLs on the attacker's behalf). This could allow attackers to reach sensitive services like cloud metadata endpoints that are normally hidden from regular users.
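A common mitigation for this SSRF class is an egress filter that resolves each outbound URL and rejects internal destinations. This is a minimal sketch (function name invented, not LibreChat's code):

```python
import ipaddress
import socket
from urllib.parse import urlparse

BLOCKED_NETS = [
    ipaddress.ip_network("127.0.0.0/8"),     # loopback
    ipaddress.ip_network("10.0.0.0/8"),      # RFC 1918 private ranges
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("169.254.0.0/16"),  # link-local, incl. cloud metadata
]

def is_url_allowed(url: str) -> bool:
    """Resolve the URL's host and reject anything landing on an internal
    address, e.g. the cloud metadata endpoint http://169.254.169.254/."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return not any(addr in net for net in BLOCKED_NETS)
```

Note that a check like this is still subject to DNS rebinding and time-of-check/time-of-use races unless the resolved IP is pinned and reused for the actual request.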
Keras version 3.11.3 has a path traversal vulnerability (a security flaw where attackers can write files outside the intended directory) in the keras.utils.get_file() function when extracting tar archives (compressed file formats). The function fails to properly validate file paths during extraction, allowing an attacker to write files anywhere on the system, potentially compromising it or executing malicious code.
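The generic defense against tar path traversal is to resolve each member's destination path and refuse anything that escapes the target directory. A minimal sketch (the helper name is invented, and a full fix must also handle symlink members):

```python
import os

def escaping_names(member_names, dest):
    """Return the archive member names whose resolved extraction path would
    land outside `dest` (e.g. '../../etc/cron.d/evil' or '/etc/passwd')."""
    dest_real = os.path.realpath(dest)
    bad = []
    for name in member_names:
        target = os.path.realpath(os.path.join(dest, name))
        if os.path.commonpath([dest_real, target]) != dest_real:
            bad.append(name)
    return bad

# Usage: check member names before calling TarFile.extractall(dest), and
# abort if escaping_names([m.name for m in tar.getmembers()], dest) is non-empty.
```

On Python 3.12+ the standard library offers `TarFile.extractall(..., filter="data")`, which rejects traversal, absolute paths, and dangerous member types during extraction.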
The AI ChatBot with ChatGPT and Content Generator plugin for WordPress has a missing authorization check (a security control that verifies a user has permission to perform an action) in its 'ays_chatgpt_save_wp_media' function, allowing unauthenticated attackers to upload media files without logging in. The vulnerability affects all versions up to and including 2.7.0.
CVE-2025-13378 is a vulnerability in the AI ChatBot with ChatGPT and Content Generator plugin for WordPress that allows SSRF (server-side request forgery, where an attacker tricks a server into making unwanted network requests on their behalf). The vulnerability report references affected code in versions 2.6.9 and earlier.
Ray, an AI compute engine, had a critical vulnerability before version 2.52.0 that allowed attackers to run code on a developer's computer (RCE, or remote code execution) through Firefox and Safari browsers. The vulnerability exploited a weak security check that only looked at the User-Agent header (a piece of information browsers send to websites) combined with DNS rebinding attacks (tricks that redirect browser requests to unexpected servers), allowing attackers to compromise developers who visited malicious websites or ads.
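The robust counterpart to a User-Agent check is validating the Host header: DNS rebinding leaves the attacker's hostname in Host, while User-Agent is identical for legitimate and rebound browser requests. A hypothetical sketch (not Ray's actual code; port 8265 is assumed as the dashboard's default):

```python
def is_request_trusted(headers: dict) -> bool:
    """Reject requests whose Host header is not a name this server is
    actually meant to be reached at. A DNS-rebound browser request still
    carries the attacker's hostname here, so it fails this check, whereas
    a User-Agent check cannot tell the two apart."""
    allowed_hosts = {"localhost", "127.0.0.1"}  # names the dashboard serves
    host = headers.get("Host", "").split(":")[0].lower()
    return host in allowed_hosts
```

Checking Host (or requiring an auth token) closes the rebinding hole because the attacker controls the DNS answer but not the hostname the browser writes into the request.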
The mistral-dashboard plugin for OpenStack (a cloud computing platform) has a local file inclusion vulnerability (a flaw that lets attackers read files they shouldn't access) in its 'Create Workbook' feature, which could expose sensitive file contents on the affected system.
Fix: Update MCP Server Kubernetes to version 2.9.8 or later, where this vulnerability is fixed.
Source: NVD/CVE Database
Fix: Update Claude Code to version 1.0.93 or later, where this vulnerability is fixed.
Source: NVD/CVE Database
Fix: A patch is available. According to the source, users should update to the version fixed in the GitHub commit referenced at https://github.com/TaxoPress/TaxoPress/commit/5eb2cee861ebd109152eea968aca0259c078c8b0.
Source: NVD/CVE Database
Smart grids (power distribution systems that communicate usage data electronically) currently use classical public-key cryptosystems (encryption methods based on mathematical problems that are hard to solve) to protect power consumption information, but quantum computing threatens to break these systems. This paper proposes QC-EAM, a new security model using quantum encryption and quantum Fourier transformation (a quantum algorithm for processing data) to protect smart grid communications, tested on IBM's quantum computing platform.
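The paper's QC-EAM construction is not detailed in the source, but the quantum Fourier transform it builds on has a simple matrix form, F[j][k] = ω^(jk)/√N with ω = e^(2πi/N) and N = 2^n. A stdlib-only sketch that builds the matrix and verifies it is a valid (unitary) quantum gate:

```python
import cmath

def qft_matrix(n_qubits: int):
    """QFT unitary on n qubits: F[j][k] = omega**(j*k) / sqrt(N),
    where omega = exp(2*pi*i/N) and N = 2**n_qubits."""
    N = 2 ** n_qubits
    omega = cmath.exp(2j * cmath.pi / N)
    return [[omega ** (j * k) / N ** 0.5 for k in range(N)] for j in range(N)]

def is_unitary(M, tol=1e-9):
    """Check M @ M^dagger == I, the defining property of a quantum gate."""
    N = len(M)
    for a in range(N):
        for b in range(N):
            s = sum(M[a][k] * M[b][k].conjugate() for k in range(N))
            expected = 1.0 if a == b else 0.0
            if abs(s - expected) > tol:
                return False
    return True

print(is_unitary(qft_matrix(2)))
```

Unitarity follows because row a dotted with the conjugate of row b sums the geometric series (1/N)Σ_k ω^((a-b)k), which is 1 when a = b and 0 otherwise.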
Researchers discovered a serious weakness in tools designed to detect third-party libraries (external code that apps use) in Android applications. They created LibPass, an attack method that generates tricked versions of apps that can fool these detection tools into missing dangerous or non-compliant libraries, with success rates reaching up to 99%. The study reveals that current detection tools are not robust enough to withstand intentional attacks, which puts users at risk since unsafe libraries could hide inside apps.
Fix: This vulnerability is fixed in vLLM version 0.11.1. Users should update to this version or later.
Source: NVD/CVE Database
The Center for AI Safety launched an AI Dashboard that evaluates frontier AI models (the most advanced AI systems currently available) on capability and safety benchmarks, ranking them across text, vision, and risk categories. The Risk Index specifically measures how likely models are to exhibit dangerous behaviors like dual-use biology assistance (helping with harmful biological research), jailbreaking vulnerability (susceptibility to tricks that bypass safety features), overconfidence, deception, and harmful actions, with Claude Opus 4.5 currently scoring safest at 33.6 on a 0-100 scale (lower is safer). The dashboard also tracks industry progress toward broader automation milestones like AGI (artificial general intelligence, systems that can perform any intellectual task) and self-driving vehicles.
Current password strength meters in IoT systems (internet-connected devices) incorrectly rate passwords as secure when they contain certain number patterns, causing users to create passwords that are actually weak. Researchers discovered that numbers in passwords follow predictable semantic patterns (like common sequences or meaningful digit combinations), which attackers can exploit using improved PCFG attacks (a method that guesses passwords by learning common patterns from leaked databases). The study proposes updating password strength meters to account for these digit patterns when evaluating password security.
Fix: The source proposes "a feasible scheme to improve the password strength meter for IoT systems based on the high-frequency semantic characteristics of digit segments" but does not provide specific implementation details, code, or concrete steps in the text provided.
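The source gives no implementation, but the kinds of high-frequency digit patterns it describes can be flagged mechanically. This is an illustrative stand-in, hardcoding a few semantic patterns (years, repeats, sequences) where the paper's approach would learn frequencies from leaked-password corpora via PCFG:

```python
import re

def weak_digit_segments(password: str) -> list[str]:
    """Flag digit runs that follow common semantic patterns an attacker's
    dictionary would try early: recent years, repeated digits, and
    ascending/descending sequences."""
    flagged = []
    for seg in re.findall(r"\d{2,}", password):
        if re.fullmatch(r"(19|20)\d{2}", seg):              # looks like a year
            flagged.append(seg)
        elif len(set(seg)) == 1:                             # e.g. '8888'
            flagged.append(seg)
        elif seg in "01234567890" or seg in "09876543210":   # runs like '1234'
            flagged.append(seg)
    return flagged
```

A meter following the paper's proposal would lower its score for any password where this list is non-empty, instead of treating all digits as equally random.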
Source: IEEE Xplore (Security & AI Journals)
AI-generated image forgeries created by tools like GANs (generative adversarial networks, AI models that create fake images) are hard to detect reliably, especially when facing new types of fakes or noisy images. Researchers found that forgery detectors fail because of frequency bias (a tendency to focus on certain patterns in image data while ignoring others), and they developed a frequency alignment method that can either attack these detectors or strengthen them by removing differences between real and fake images in how they look at the frequency level.
Fix: The source proposes a two-step frequency alignment method to remove the frequency discrepancy between real and fake images. According to the text, this method 'can serve as a strong black-box attack against forgery detectors in the anti-forensic context or, conversely, as a universal defense to improve detector reliability in the forensic context.' The authors developed corresponding attack and defense implementations and demonstrated their effectiveness across twelve detectors, eight forgery models, and five evaluation metrics.
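The authors' method operates on 2-D image spectra and its exact steps are not in the source, but the core idea of frequency alignment can be illustrated on a 1-D signal: transplant the magnitude spectrum of a "real" signal onto a "fake" one while keeping the fake's phases. A toy stdlib-only sketch:

```python
import cmath

def dft_magnitudes(signal):
    """Magnitude spectrum of a 1-D signal via a direct DFT."""
    N = len(signal)
    return [
        abs(sum(signal[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)))
        for k in range(N)
    ]

def frequency_align(fake, real):
    """Give `fake` the magnitude spectrum of `real` (keeping `fake`'s
    phases), so a detector keyed on spectral magnitudes can no longer
    tell them apart. Toy 1-D analogue of frequency alignment."""
    N = len(fake)
    F = [sum(fake[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
         for k in range(N)]
    R = [sum(real[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
         for k in range(N)]
    aligned = [abs(R[k]) * cmath.exp(1j * cmath.phase(F[k])) for k in range(N)]
    # Inverse DFT; the result is real because real inputs give Hermitian spectra.
    return [
        (sum(aligned[k] * cmath.exp(2j * cmath.pi * k * n / N)
             for k in range(N)) / N).real
        for n in range(N)
    ]
```

After alignment, any detector that relies only on magnitude-spectrum differences sees identical statistics for the two signals, which is the attack direction; applying the same alignment during training is the defense direction.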
Source: IEEE Xplore (Security & AI Journals)
Fix: Update LibreChat to version 0.8.1-rc2 or later, where this issue has been patched.
Source: NVD/CVE Database
Fix: Update to version 2.7.1 or later, which includes a fix for the missing authorization check as shown in the changeset referenced in the vulnerability report.
Source: NVD/CVE Database
Fix: The vulnerability was fixed in version 2.7.1, as shown by the changeset comparison between version 2.6.9 and version 2.7.1 of the admin file in the WordPress plugin repository.
Source: NVD/CVE Database
Fix: Update to Ray version 2.52.0 or later, as this issue has been patched in that version.
Source: NVD/CVE Database