All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Cursor, a code editor designed for AI-assisted programming, has a vulnerability in versions 1.7 and below where it automatically loads configuration files from project directories, which can be exploited by attackers. If a user runs Cursor's command-line interface (CLI) in a malicious repository, an attacker could use prompt injection (tricking the AI by hiding instructions in its input) combined with permissive settings to achieve remote code execution (the ability to run commands on the user's system without permission).
Fix: A patch is available as 2025.09.17-25b418f. As of October 3, 2025, this patch had not yet been included in an official release version.
NVD/CVE Database
LlamaIndex released version 0.14.4 on September 24, 2025, with updates across multiple packages that integrate with different AI services and databases. Most updates fixed dependency issues with OpenAI libraries, while others added new features like support for Claude Sonnet 4.5 and structured outputs, and fixed bugs in areas like authorization headers and data fetching.
Cursor is a code editor that lets programmers work with AI assistance. In versions 1.7 and below, when using MCP (a system for connecting external tools to AI) with OAuth authentication (a login method), an attacker can trick Cursor into running malicious commands by pretending to be a trusted service, potentially giving them full control of the user's computer.
Cursor, a code editor designed for AI-assisted programming, has a critical vulnerability in versions 1.6 and below that allows remote code execution (RCE, where an attacker runs commands on your computer without permission). An attacker who gains control of the AI chat context (such as through a compromised MCP server, a tool that extends the AI's capabilities) can use prompt injection (tricking the AI by hiding malicious instructions in its input) to make Cursor modify workspace configuration files, bypassing an existing security protection and ultimately executing arbitrary code.
Cursor, a code editor designed for programming with AI, has a vulnerability in versions 1.6 and below where Mermaid (a tool for rendering diagrams) can embed images that get displayed in the chat box. An attacker can exploit this through prompt injection (tricking the AI by hiding instructions in its input) to send sensitive information to an attacker-controlled server, or a malicious AI model might trigger this automatically.
Claude Code (an AI tool that writes and runs code automatically) had a security flaw in versions before 1.0.111 where it could execute code from a project before the user confirmed they trusted the project. An attacker could exploit this by tricking a user into opening a malicious project directory.
A bug in the Linux kernel's KASAN (a memory safety tool) caused memory allocation functions to ignore the caller's gfp_mask (a flag controlling how memory should be allocated), always using GFP_KERNEL instead. This created a mismatch with vmalloc() (virtual memory allocation), which supports GFP_NOFS and GFP_NOIO flags that prevent certain types of I/O operations, and could cause deadlocks when filesystems like XFS tried to allocate memory with these restrictions.
Fix: Update to version 0.14.4 and the corresponding versioned packages listed in the release notes (e.g., llama-index-llms-openai 0.6.1, llama-index-embeddings-text-embeddings-inference 0.4.2, llama-index-llms-ollama 0.7.4, and others) to receive the dependency fixes and bug fixes described.
LlamaIndex Security Releases
Fix: A patch is available at version 2025.09.17-25b418f. Users should update to this patched version to fix the vulnerability.
NVD/CVE Database
Fix: Update to version 1.7, which fixes this issue.
NVD/CVE Database
This research paper identifies security weaknesses in a previous key exchange protocol (a method for two systems to securely agree on a shared secret) used in smart grids, specifically showing it is vulnerable to offline password-guessing and key compromise impersonation attacks (where an attacker tricks one party into thinking they are the other party). The authors propose a new, lightweight protocol that fixes these issues by using the Solana blockchain to manage keys and requiring smart meters to perform only simple operations like hashing (converting data into fixed-size codes) and encryption.
Fix: The paper proposes a decentralized ultra-lightweight AKE (authenticated key exchange) protocol that leverages the public Solana blockchain to enhance transparency and enable simple key revocation, with the SMD (smart metering device) performing only hashing, symmetric encryption/decryption, and physical unclonable function operations. However, this is a research proposal rather than a patch or update to existing software, so no software mitigation version or download link is provided.
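The summary does not give the protocol's message flow, but the class of operations it restricts the SMD to can be illustrated. The following is a minimal sketch under stated assumptions, not the paper's protocol: `puf_response` stands in for a hardware PUF (simulated here with HMAC, which a real PUF would not use), and all function names, nonces, and the key-derivation step are hypothetical.

```python
import hashlib
import hmac
import os

def puf_response(challenge: bytes, device_secret: bytes) -> bytes:
    # Toy stand-in for a physical unclonable function: in real hardware the
    # response comes from device-specific physical variation, not a stored key.
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def derive_session_key(puf_out: bytes, server_nonce: bytes, meter_nonce: bytes) -> bytes:
    # Session key derived by hashing the PUF response with both nonces --
    # the SMD only needs hashing and symmetric operations, no public-key math.
    return hashlib.sha256(puf_out + server_nonce + meter_nonce).digest()

device_secret = os.urandom(32)
challenge = os.urandom(16)
server_nonce, meter_nonce = os.urandom(16), os.urandom(16)

# Any party that can reproduce the PUF response derives the same session key.
k1 = derive_session_key(puf_response(challenge, device_secret), server_nonce, meter_nonce)
k2 = derive_session_key(puf_response(challenge, device_secret), server_nonce, meter_nonce)
```

The point of the sketch is the cost profile: one HMAC and one hash per key agreement, which matches the "ultra-lightweight" claim in spirit.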
IEEE Xplore (Security & AI Journals)
This research proposes Leaper, a framework that helps mobile workers in crowdsourcing tasks (where many people contribute data from their phones) protect their location privacy while still completing work. The system uses differential privacy (a mathematical technique that adds noise to data to prevent identifying individuals) and k-anonymity (mixing a person's data with others so they can't be singled out) to obfuscate, or hide, each worker's actual location, and then compensates workers fairly based on the privacy risk they accept.
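The summary does not specify Leaper's noise mechanism; a common differential-privacy primitive for location data is the planar Laplace mechanism from geo-indistinguishability, sketched below under that assumption. The function name and coordinates are illustrative, not from the paper.

```python
import math
import random

def planar_laplace_noise(lat: float, lon: float, epsilon: float) -> tuple[float, float]:
    # Planar Laplace mechanism: perturb a 2D point with noise in a uniformly
    # random direction, with radial distance drawn from Gamma(2, 1/epsilon).
    # Smaller epsilon means more noise and therefore stronger privacy.
    theta = random.uniform(0.0, 2.0 * math.pi)    # uniform direction
    r = random.gammavariate(2, 1.0 / epsilon)     # radial distance
    return lat + r * math.cos(theta), lon + r * math.sin(theta)

random.seed(0)
true_loc = (40.7128, -74.0060)
reported = planar_laplace_noise(*true_loc, epsilon=0.5)
```

The worker would report `reported` to the platform instead of `true_loc`; the paper's compensation step would then scale payment with the chosen `epsilon`.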
This research paper proposes FedNK-RF, an algorithm for federated learning (a decentralized approach where multiple parties train AI models together while keeping their data private) that handles heterogeneous data (data that differs significantly across different sources). The algorithm uses random features and Nyström approximation (a mathematical technique that reduces computational errors) to improve accuracy while maintaining privacy protection, and the authors prove it achieves optimal performance rates.
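FedNK-RF's exact construction is not given in the summary, but the random-features idea it builds on can be sketched: random Fourier features approximate an RBF kernel with an explicit finite-dimensional feature map (Rahimi-Recht style), so parties can share small feature representations rather than raw kernel evaluations. This is a generic illustration, not the paper's algorithm; all names and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_fourier_features(X: np.ndarray, n_features: int, gamma: float) -> np.ndarray:
    # Approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2) via
    # z(x) = sqrt(2/D) * cos(W^T x + b), with W ~ N(0, 2*gamma) and b uniform.
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, n_features=2000, gamma=0.5)
K_approx = Z @ Z.T  # inner products of features approximate the kernel matrix
K_exact = np.exp(-0.5 * np.square(X[:, None] - X[None, :]).sum(-1))
```

The approximation error shrinks roughly as 1/sqrt(D); Nystrom approximation, which the paper pairs with random features, instead builds the map from a sampled subset of the data.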
Fix: This issue is fixed in version 1.7. Users should upgrade to version 1.7 or later.
NVD/CVE Database
Fix: Update Claude Code to version 1.0.111 or later. Users with auto-update enabled will have received this fix automatically; users performing manual updates should update to the latest version.
NVD/CVE Database
Federated learning schemes (systems where multiple parties train AI models together while keeping data private) that use two servers for privacy protection were found to leak user data when facing model poisoning attacks (where malicious users deliberately corrupt the learning process). The researchers propose an enhanced framework called PBFL that uses Byzantine-robust aggregation (a method to safely combine data from untrusted sources), normalization checks, similarity measurements, and trapdoor fully homomorphic encryption (a technique for doing calculations on encrypted data without decrypting it) to protect privacy while defending against poisoning attacks.
Fix: The authors propose an enhanced privacy-preserving and Byzantine-robust federated learning (PBFL) framework that addresses the vulnerability. Key components include: a novel Byzantine-tolerant aggregation strategy with normalization judgment, cosine similarity computation, and adaptive user weighting; a dual-scoring trust mechanism and outlier suppression for detecting stealthy attacks; and two privacy-preserving subroutines (secure normalization judgment and secure cosine similarity measurement) that operate over encrypted gradients using a trapdoor fully homomorphic encryption scheme. According to theoretical analyses and experiments, this scheme guarantees security, convergence, and efficiency even with malicious users and one malicious server.
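A plaintext sketch of the aggregation logic described above (norm clipping, cosine similarity to a robust reference, down-weighting misaligned updates) follows; in PBFL these steps run over encrypted gradients under FHE, which is not shown. The function name, the median-based reference, and the thresholds are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def robust_aggregate(grads: np.ndarray) -> np.ndarray:
    # 1) Normalization judgment: clip each update to the median update norm,
    #    so a single huge poisoned gradient cannot dominate the average.
    norms = np.linalg.norm(grads, axis=1) + 1e-12
    ref_norm = np.median(norms)
    clipped = grads * np.minimum(1.0, ref_norm / norms)[:, None]
    # 2) Cosine similarity of each clipped update to a robust reference
    #    direction (coordinate-wise median of the clipped updates).
    reference = np.median(clipped, axis=0)
    ref_unit = reference / (np.linalg.norm(reference) + 1e-12)
    cos = clipped @ ref_unit / (np.linalg.norm(clipped, axis=1) + 1e-12)
    # 3) Adaptive weighting: drop updates pointing away from the consensus.
    weights = np.clip(cos, 0.0, None)
    weights = weights / (weights.sum() + 1e-12)
    return weights @ clipped

honest = np.tile([1.0, 1.0], (8, 1)) + 0.01 * np.random.default_rng(0).normal(size=(8, 2))
poisoned = np.array([[-50.0, -50.0]])   # update pointing opposite the consensus
agg = robust_aggregate(np.vstack([honest, poisoned]))
```

With the poisoned update clipped and then assigned zero weight (its cosine similarity to the consensus is negative), the aggregate stays near the honest direction.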
IEEE Xplore (Security & AI Journals)
This research proposes a data aggregation framework (a system for combining data from multiple sources) that evaluates how trustworthy different data sources are using dynamic Bayesian networks (a model that updates trust scores based on changing network behavior over time). The framework combines trust measurement with the minimum spanning tree protocol (an algorithm for efficient data routing) to improve how data centers process large amounts of information, achieving significant reductions in computational, communication, and storage costs.
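The dynamic Bayesian trust model itself cannot be reproduced from the summary, but the minimum-spanning-tree routing step can be sketched with Kruskal's algorithm, using hypothetical trust scores turned into edge costs (higher trust, lower cost). The trust values and topology below are invented for illustration.

```python
def mst_kruskal(n: int, edges: list[tuple[float, int, int]]) -> list[tuple[int, int]]:
    # Kruskal's minimum spanning tree with a union-find structure: scan edges
    # in increasing cost order, keeping each edge that joins two components.
    parent = list(range(n))

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Hypothetical trust scores in (0, 1]; cost = 1 / trust favors trusted links.
trust = {(0, 1): 0.9, (1, 2): 0.8, (0, 2): 0.3, (2, 3): 0.95}
edges = [(1.0 / t, u, v) for (u, v), t in trust.items()]
tree = mst_kruskal(4, edges)
```

The low-trust link (0, 2) is excluded: the spanning tree reaches every node over the most trusted available paths, which is the routing behavior the framework relies on.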
This paper addresses the lack of technical tools for regulating high-risk AI systems by proposing SFAIR (Secure Framework for AI Regulation), a system that automatically tests whether an AI meets regulatory standards. The framework uses a temporal self-replacement test (similar to certification exams for human operators) to measure an AI's operational qualification score, and protects itself using encryption, randomization, and real-time monitoring to prevent tampering.
Fix: The paper proposes SFAIR as a comprehensive framework for securing AI regulation. Key technical safeguards mentioned include: randomization, masking, encryption-based schemes, and real-time monitoring to secure SFAIR operations. Additionally, the framework leverages AMD's Secure Encrypted Virtualization-Encrypted State (SEV-ES, a processor-level security technology that encrypts AI system memory) for enhanced security. The source code of SFAIR is made publicly available.
IEEE Xplore (Security & AI Journals)
This research identifies how microarchitectural website-fingerprinting attacks (timing-based methods where attackers on the same computer can learn what websites a victim visits) actually work by pinpointing four main sources of information leakage: core contention (competition for processor cores), interrupts (signals that pause processing), frequency scaling (changing processor speed), and cache eviction (removing data from fast memory). The researchers developed a framework to measure how much each leakage source contributes to these attacks and demonstrated that controlling these sources can prevent the attacks entirely.
Fix: The source demonstrates that leakage can be 'completely mitigated by controlling these sources' (core contention, interrupts, frequency scaling, and cache eviction), but does not specify the concrete technical steps, configuration changes, or software updates needed to implement such controls in practice.
IEEE Xplore (Security & AI Journals)
This research presents a new method for performing topological data analysis (TDA, a technique that finds shape-based patterns in complex data) on encrypted information using homomorphic encryption (HE, a type of encryption that lets computers process data without decrypting it first). The authors adapted a fundamental TDA algorithm called boundary matrix reduction to work with encrypted data, proved it works correctly mathematically, and tested it using the OpenFHE framework to show it functions properly on real encrypted data.
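The boundary matrix reduction the authors encrypt is a standard persistent-homology algorithm, shown here in plaintext over GF(2) (columns as sets of row indices, addition as symmetric difference); the homomorphic version built on OpenFHE is not reproduced, and the example complex is illustrative.

```python
def reduce_boundary_matrix(columns: list[set[int]]) -> list[set[int]]:
    # Standard left-to-right reduction over GF(2): while a column shares its
    # lowest (largest) row index with an earlier reduced column, add that
    # column to it. GF(2) column addition is the symmetric difference of sets.
    low_to_col = {}            # lowest row index -> the reduced column owning it
    reduced = []
    for col in columns:
        col = set(col)
        while col and max(col) in low_to_col:
            col ^= low_to_col[max(col)]
        if col:
            low_to_col[max(col)] = col
        reduced.append(col)
    return reduced

# Filtered triangle: vertices 0,1,2; edges 3=(0,1), 4=(1,2), 5=(0,2); face 6.
boundary = [set(), set(), set(), {0, 1}, {1, 2}, {0, 2}, {3, 4, 5}]
R = reduce_boundary_matrix(boundary)
```

Column 5 reduces to zero (the three edges form a cycle), and the positions of the lowest entries in `R` encode the birth-death pairs of the persistence diagram; the paper's contribution is carrying out exactly these column additions on HE ciphertexts.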
Fix: The patch fixes the issue by: extending kasan_populate_vmalloc() and helpers to accept and respect gfp_mask; passing gfp_mask down to alloc_pages_bulk() and __get_free_page() functions; enforcing GFP_NOFS/NOIO semantics using memalloc_*_save()/restore() wrapper calls around apply_to_page_range(); and updating the call sites in vmalloc.c and the percpu allocator accordingly.
NVD/CVE Database
This research study examines how immersive experiences in the metaverse (virtual shared digital spaces accessed through VR or similar technology) affect user emotions and behavior. The researchers found that when users experience focused immersion, enjoyment, and telepresence (the feeling of being physically present in a digital environment), they develop stronger feelings of awe and attachment to virtual places, which in turn increases how engaged they become with the platform.
This academic paper argues that companies should view cybersecurity not just as a defensive cost (like insurance to prevent losses), but as a strategic investment that creates business value and competitive advantages. The paper offers guidance to information systems leaders on how organizations can benefit financially and operationally by practicing strong cybersecurity.
This source describes a three-layer model for digital transformation in organizations, based on a case study of automotive supplier Continental AG. The model emphasizes that successful digital transformation requires simultaneous changes across IT systems, work practices (how employees actually do their jobs), and mindset evolution (how people think about their work), with these layers reinforcing each other.
Many companies find it difficult to scale AI systems (machine learning models that learn patterns from data) globally because these systems make existing technology management problems worse and introduce new challenges. Based on a study of how industrial company Siemens AG handles this, the source identifies five critical risks in managing AI technology and offers recommendations for successfully deploying AI systems across an entire organization.
This research presents CAGE, a system that adds support for confidential accelerators (specialized processing hardware like GPUs and FPGAs) to Arm CCA (Confidential Computing Architecture, which creates isolated execution regions called realms for protecting sensitive data). The system uses a novel shadow task mechanism and memory isolation to protect data confidentiality and integrity without requiring hardware changes, achieving this with only moderate performance overhead.