All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
OpenAirInterface CN5G AMF (a component that handles network requests) versions 2.0.1 and earlier contain a logic error in how they process JSON-formatted requests. Unauthorized attackers can exploit this flaw by sending malicious JSON data to the AMF's SBI interface (the system's network communication endpoint) to cause a denial of service (making the service unavailable to legitimate users).
OpenAirInterface CN5G AMF (a software component for handling mobile network communications) version 2.1.9 and earlier contains a buffer overflow vulnerability (a memory safety bug where data exceeds allocated space) in how it processes NAS messages (protocol messages used in mobile networks). Remote attackers without authorization can exploit this by sending an unusually long IMSI string (a mobile subscriber identifier) through port N1, potentially crashing the system or running malicious code.
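The general defense against this class of bug is to validate the length of an attacker-supplied field before it ever reaches a fixed-size buffer. A minimal Python sketch of that check (hypothetical code, not OpenAirInterface's actual C parser; 3GPP caps an IMSI at 15 digits):

```python
MAX_IMSI_DIGITS = 15  # 3GPP limit on IMSI length

def parse_imsi(raw: str) -> str:
    """Validate content and length up front -- the bounds check the
    vulnerable NAS message parser was missing."""
    if not raw.isdigit():
        raise ValueError("IMSI must be numeric")
    if len(raw) > MAX_IMSI_DIGITS:
        raise ValueError(f"IMSI longer than {MAX_IMSI_DIGITS} digits")
    return raw
```

With this check in place, an unusually long IMSI string is rejected at parse time instead of overflowing a buffer downstream.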
A WordPress plugin called 'Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI' has a security flaw (CWE-862, missing authorization) in versions up to 3.41.0 that allows contributors and higher-level users to add or remove taxonomy terms (tags and categories) on any post, even ones they don't own, due to missing permission checks. This vulnerability affects authenticated users who have contributor-level access or above.
Anthropic's MCP TypeScript SDK (a toolkit for building AI applications) versions up to 1.25.1 has a ReDoS vulnerability (regular expression denial of service, where a maliciously designed input causes the regex parser to work extremely hard and freeze the system) in its UriTemplate class. An attacker can send a specially crafted URI (web address) that makes the Node.js process (the JavaScript runtime environment) consume excessive CPU and stop responding, causing the application to crash or become unavailable.
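ReDoS exploits patterns with nested quantifiers, where a near-miss input forces the engine to try exponentially many backtracking paths. A small Python demonstration of the effect (an illustrative pattern, not the actual UriTemplate regex):

```python
import re
import time

# Nested quantifiers: the classic catastrophic-backtracking shape.
# (Illustrative only -- not the UriTemplate pattern from the SDK.)
EVIL_PATTERN = re.compile(r"^(a+)+$")

def match_time(n: int) -> float:
    """Time a failing match on n 'a's plus one non-matching character,
    which forces the engine to explore every backtracking path."""
    text = "a" * n + "!"
    start = time.perf_counter()
    assert EVIL_PATTERN.match(text) is None
    return time.perf_counter() - start

# Each extra 'a' roughly doubles the work, so cost grows exponentially.
t_small = match_time(10)
t_large = match_time(20)
```

A few dozen extra characters are enough to pin a CPU core for seconds, which is exactly how a crafted URI can make a single-threaded Node.js process stop responding.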
A security vulnerability (CVE-2025-15453) exists in Milvus versions up to 2.6.7 in the expr.Exec function, where an attacker can manipulate the code argument to trigger deserialization (converting untrusted data back into executable code), allowing remote exploitation with user credentials. The vulnerability has been publicly disclosed and is rated as medium severity (CVSS 5.3).
Langflow, a tool for building AI-powered agents and workflows, has a security flaw in versions before 1.7.0.dev45 where some API endpoints (the interfaces that software uses to communicate and request data) are missing authentication controls (checks to verify who is using them). This allows anyone without a login to access private user conversations, transaction histories, and delete messages. The vulnerability affects endpoints that handle sensitive personal data and system operations.
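The fix for this class of flaw is to gate every sensitive endpoint behind an authentication check before the handler runs. A minimal, framework-agnostic Python sketch of that guard pattern (all names hypothetical, not Langflow's actual code):

```python
from functools import wraps

class AuthError(Exception):
    """Raised when a request carries no valid credentials."""

# Hypothetical token store standing in for a real session/API-key check.
VALID_TOKENS = {"secret-token-123"}

def require_auth(handler):
    """Reject requests with a missing or unknown token before the
    handler ever touches sensitive data -- the check the vulnerable
    endpoints lacked."""
    @wraps(handler)
    def wrapper(request: dict, *args, **kwargs):
        if request.get("authorization") not in VALID_TOKENS:
            raise AuthError("401 Unauthorized")
        return handler(request, *args, **kwargs)
    return wrapper

@require_auth
def get_messages(request: dict) -> list:
    # Data that previously leaked to unauthenticated callers.
    return ["private conversation history"]
```

In a real web framework the same idea appears as middleware or a route dependency; the key point is that the check runs on every endpoint that reads or mutates user data.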
MessagePack for Java has a denial-of-service vulnerability in versions before 0.9.11 where specially crafted .msgpack files can trick the library into allocating massive amounts of memory. When the library deserializes (reads and converts) these files, it blindly trusts the size information in EXT32 objects (an extension data type) and tries to allocate a byte array matching that size, which can be impossibly large, causing the Java program to run out of memory and crash.
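The underlying defense is simple: never allocate based on a length field you have not checked against the bytes actually present. A Python sketch of that validation for a msgpack-style EXT32 object (marker byte 0xC9, 4-byte big-endian length, 1-byte extension type; the size cap below is an illustrative choice, not the library's):

```python
import struct

MAX_EXT_LEN = 16 * 1024 * 1024  # sketch: cap payloads at 16 MiB

def read_ext32(buf: bytes, offset: int) -> bytes:
    """Parse an ext32 header and return the payload, validating the
    declared length instead of trusting it blindly."""
    if buf[offset] != 0xC9:
        raise ValueError("not an ext32 object")
    (declared_len,) = struct.unpack_from(">I", buf, offset + 1)
    payload_start = offset + 6  # marker + 4-byte length + 1-byte type
    available = len(buf) - payload_start
    if declared_len > available:
        raise ValueError(f"declared length {declared_len} exceeds "
                         f"{available} remaining bytes")
    if declared_len > MAX_EXT_LEN:
        raise ValueError("ext32 payload larger than configured limit")
    return buf[payload_start:payload_start + declared_len]
```

A hostile file can declare up to 2^32 - 1 bytes in that 4-byte field; checking it against the remaining input (and an absolute cap) turns the out-of-memory crash into a clean parse error.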
A missing authorization vulnerability (CWE-862, a weakness where the system fails to check if a user has permission to access something) was found in the Recorp AI Content Writing Assistant plugin for WordPress, affecting versions up to 1.1.7. This flaw allows attackers to exploit incorrectly configured access control, meaning they could potentially access features or data they shouldn't be able to reach.
CVE-2025-62116 is a missing authorization vulnerability (a security flaw where the software fails to check if a user has permission to perform an action) in Quadlayers AI Copilot that affects versions up to 1.4.7. The vulnerability allows attackers to exploit incorrectly configured access control security levels, meaning they may be able to access or perform actions they shouldn't be allowed to.
A major copyright case is now before the Supreme Court, asking whether internet service providers (ISPs) must act as copyright enforcers by cutting off users' internet access based on accusations alone. A lower court ruled that ISPs could be held liable for copyright infringement by their customers, which could lead to entire households, schools, and libraries losing internet access due to one person's alleged infringement, especially harming low-income and underserved communities.
A vulnerability in the Linux kernel's TLS (Transport Layer Security, a protocol that encrypts network traffic) implementation could cause threads to hang indefinitely on a lock called tx_lock. An adversarial receiver could keep the RWIN (receive window, which controls how much data can be sent) at 0 for extended periods, preventing a thread holding tx_lock from making progress and potentially blocking it for hours.
A data race vulnerability (a situation where two parts of a program access the same data simultaneously without protection) was found in the Linux kernel's RDMA/irdma driver, where completion statistics were being read and written from different processor cores at the same time. The fix converts the completion statistics into an atomic variable (a thread-safe data type that ensures safe updates across multiple processors), preventing data corruption and compiler optimization issues.
This is a release of llama-index v0.14.12, a framework for building AI applications, containing various updates across multiple components including bug fixes, new features for asynchronous tool support, and improvements to integrations with services like OpenAI, Google, Anthropic, and various vector stores (databases that store numerical representations of data for AI searching). Key fixes address issues like crashes in logging, missing parameters in tool handling, and compatibility improvements for newer Python versions.
The Electronic Frontier Foundation (EFF) received thousands of media mentions in 2025 while advocating for digital civil liberties, particularly regarding surveillance technologies like ALPRs (automated license plate readers, which scan vehicle plates automatically) and police use of doorbell cameras. The organization also pursued lawsuits challenging government data sharing and privacy violations, and spoke out against age-verification laws that threaten privacy and free expression.
LMDeploy is a toolkit for compressing, deploying, and serving large language models (LLMs). Prior to version 0.11.1, the software had an insecure deserialization vulnerability (unsafe conversion of data back into executable code) where it used torch.load() without the weights_only=True parameter when opening model checkpoint files, allowing attackers to run arbitrary code on a victim's machine by tricking them into loading a malicious .bin or .pt model file.
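LMDeploy's patch passes weights_only=True so torch.load will not execute arbitrary pickled objects. The same allowlisting idea can be shown framework-free with Python's own pickle module, which documents restricting which globals an Unpickler may resolve (a sketch of the principle, not LMDeploy's code):

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global, so a payload that smuggles in a
    callable such as os.system fails to load.  torch.load(...,
    weights_only=True) applies a similar allowlist to checkpoints."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"global '{module}.{name}' is forbidden")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain containers of primitives need no globals and round-trip fine.
payload = pickle.dumps({"weights": [1.0, 2.0, 3.0]})
```

Tensors and plain data deserialize normally, while any object that would import and invoke code is rejected, which is exactly the property a malicious .bin or .pt file violates under the unsafe default.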
Fix: A patch is planned for the next Milvus release, 2.6.8.
NVD/CVE Database
Fix: Update to version 1.7.0.dev45 or later, which contains a patch for this vulnerability.
NVD/CVE Database
Fix: Update to version 0.9.11 or later, which fixes the vulnerability.
NVD/CVE Database
This research paper studies diffusion models, a type of AI used to generate images and audio, as a statistical method for density estimation (learning the probability distribution of data). The authors show that when data has a factorizable structure (meaning it can be broken into independent low-dimensional components, like in Bayesian networks), diffusion models can efficiently learn this structure and achieve optimal performance using a specially designed sparse neural network architecture (one where most connections between neurons are inactive).
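A toy illustration of what "factorizable" buys you (constructed for this digest, not taken from the paper): when components are independent, the joint density is a product of low-dimensional factors, so log-densities simply add and each factor can be modeled by a small, sparse sub-network.

```python
import math

def gauss_logpdf(x: float, mu: float, sigma: float) -> float:
    """Log-density of a 1-D Gaussian N(mu, sigma^2)."""
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))

def joint_logpdf(x1: float, x2: float) -> float:
    """Factorizable toy density p(x1, x2) = p1(x1) * p2(x2):
    each factor only sees its own low-dimensional component,
    which is the structure a sparse architecture can exploit."""
    return gauss_logpdf(x1, 0.0, 1.0) + gauss_logpdf(x2, 3.0, 0.5)
```

Estimating each low-dimensional factor separately sidesteps the curse of dimensionality that a fully dense model of the joint distribution would face.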
Hallucinations (instances where Large Language Models generate false or misleading content) are a safety problem for AI applications. The paper introduces UQLM, a Python package that uses uncertainty quantification (UQ, a statistical technique for measuring how confident a model is in its answer) to detect when an LLM is likely hallucinating by assigning confidence scores between 0 and 1 to responses.
Fix: The source describes UQLM as 'an off-the-shelf solution for UQ-based hallucination detection that can be easily integrated to enhance the reliability of LLM outputs.' No specific implementation steps, code examples, or version details are provided in the source text.
JMLR (Journal of Machine Learning Research)
This research studies how to predict whether borrowers on micro-lending platforms (small-loan services) will default (fail to repay their loans) by examining their call activity and social media behavior. The study analyzed over 154,000 loans from Indonesian platforms and found that frequent calls and stable calling patterns suggest lower default risk, while frequent social media activity and stable social media patterns actually indicate higher default risk. These findings suggest that micro-lending platforms could improve their credit assessment models (systems for deciding who gets loans) by combining both types of behavioral data.
This research studied what makes knowledge workers (people whose jobs involve handling information) want to use ChatGPT at work, using technology affordance and constraints theory (a framework explaining how tools enable certain actions while limiting others). The study found that ChatGPT's benefits like automation, information quality, and productivity boost adoption, but concerns about risk and lack of regulation reduce it. Personal innovativeness (how open someone is to new ideas) and supportive workplace culture help workers embrace ChatGPT despite their concerns.
This presentation covers security vulnerabilities found in agentic systems, which are AI agents (systems that can take actions autonomously) that can use computers and write code. The talk includes demonstrations of exploits discovered during the Month of AI Bugs, a security research initiative focused on finding bugs in AI systems.
Fix: Use interruptible sleep where possible and reschedule the work if it can't take the lock. The fix has been applied in multiple kernel commits available at kernel.org (commit hashes: 1f800f6aae57d2d8f63d32fff383017cbc11cf65, 7123a4337bf73132bbfb5437e4dc83ba864a9a1e, bde541a57b4204d0a800afbbd3d1c06c9cdb133f, be5d5d0637fd88c18ee76024bdb22649a1de00d6, ccf1ccdc5926907befbe880b562b2a4b5f44c087, and f3221361dc85d4de22586ce8441ec2c67b454f5d).
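The shape of the fix, in user-space terms, is to wait for the lock only for a bounded period and reschedule the work if the peer is stalling, rather than blocking unconditionally. A Python analogue of that pattern (a sketch of the idea, not the kernel code):

```python
import threading
import queue

tx_lock = threading.Lock()
work_queue: "queue.Queue[str]" = queue.Queue()

def try_send(job: str, timeout: float = 0.1) -> bool:
    """Analogue of the tx_lock fix: wait a bounded time for the lock;
    if it stays held (e.g. the receiver pins RWIN at 0), requeue the
    job for a later attempt instead of hanging indefinitely."""
    if tx_lock.acquire(timeout=timeout):
        try:
            return True            # ...transmit while holding the lock...
        finally:
            tx_lock.release()
    work_queue.put(job)            # reschedule rather than block forever
    return False
```

The bounded, interruptible wait means an adversarial peer can delay a transmission but can no longer park a thread on the lock for hours.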
NVD/CVE Database
Fix: Make the completion statistics an atomic variable so that updates to them are always seen coherently. This also prevents the load/store tearing bugs that compiler optimizations could otherwise introduce.
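Python has no C-style atomic integers, so the equivalent of the kernel's atomic counter is a lock that covers both updates and reads, ruling out torn (partially visible) values. A minimal sketch of that pattern (illustrative only, not the irdma driver code):

```python
import threading

class AtomicCounter:
    """Lock-guarded counter standing in for an atomic variable: every
    update and read happens under the same lock, so no core ever
    observes a half-written value."""
    def __init__(self) -> None:
        self._value = 0
        self._lock = threading.Lock()

    def add(self, n: int = 1) -> None:
        with self._lock:
            self._value += n

    def read(self) -> int:
        with self._lock:
            return self._value

completions = AtomicCounter()

def worker(updates: int) -> None:
    for _ in range(updates):
        completions.add()

threads = [threading.Thread(target=worker, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Four threads performing 10,000 increments each land on exactly 40,000, whereas unsynchronized concurrent updates to a shared counter can lose increments.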
NVD/CVE Database
Fix: This issue has been patched in version 0.11.1.
NVD/CVE Database