All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
A reported $100 billion deal between Nvidia (a chipmaker) and OpenAI (the company behind ChatGPT) appears to have collapsed. The deal was a circular arrangement, meaning Nvidia would give OpenAI money that would mostly be spent buying Nvidia's own chips, raising questions about how AI companies will fund their expensive expansion without this agreement.
AutoGPT is a platform for creating and managing AI agents that automate workflows. Before version 0.6.34, the SendDiscordFileBlock feature had an SSRF vulnerability (server-side request forgery, where an attacker tricks the server into making unwanted requests to internal systems) because it didn't filter user-provided URLs before accessing them.
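This class of SSRF fix is well understood: validate user-supplied URLs before the server fetches them. A minimal Python sketch (hypothetical, not AutoGPT's actual patch; a production filter would also resolve hostnames and re-check the resulting IP addresses):

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that could reach internal services (basic SSRF guard)."""
    parsed = urlparse(url)
    # Only allow plain web schemes; rejects file://, gopher://, etc.
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Hostname rather than a literal IP. A real filter would
        # resolve it and re-run the address checks on the result.
        return True
    # Block loopback, private, and link-local ranges (cloud metadata etc.).
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)
```

The key point is that the check happens before any request is issued, so a block like SendDiscordFileBlock never touches internal endpoints.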
A Linux kernel vulnerability allowed the system to hang indefinitely when waiting for data to be written to disk, particularly when using fuse (a file system that lets user-space programs handle file operations). The problem occurred because the system was waiting for data integrity on file systems that don't guarantee it. When a faulty fuse server stopped responding to write requests, the wait_sb_inodes() function would hang forever.
A vulnerability in the Linux kernel's virtio vsock transport allowed a malicious remote peer to force excessive memory allocation by advertising a large buffer size and reading slowly, potentially causing the host to run out of memory. The fix introduces a helper function, virtio_transport_tx_buf_size(), that limits TX credit (the amount of data queued for a connection) to the minimum of both the peer's advertised buffer and the local system's own buffer size, ensuring one endpoint cannot force another to queue more data than its own configuration allows.
A Linux kernel bug caused the iommu/io-pgtable-arm component to return a negative error code (-ENOENT) from a function that should return size_t (an unsigned data type, meaning it can only hold non-negative values). This caused the negative number to be interpreted as a huge positive value, which corrupted memory addresses used by the I/O memory management unit and triggered crashes. The bug affected how the kernel unmaps (frees) memory regions used for direct device access.
OpenClaw, a personal AI assistant, had a vulnerability in its isValidMedia() function (the code that checks whether media files are safe to access) that allowed attackers to read any file on the system by supplying crafted file paths, potentially stealing sensitive data. This flaw was fixed in version 2026.1.30.
Open eClass, a course management system (software that helps teachers organize classes and assignments), had a stored XSS vulnerability (a security flaw where attackers inject harmful code that runs when other users view it) in versions before 4.2. Authenticated students could inject malicious JavaScript (code that runs in web browsers) into assignment files, and this code would execute when instructors viewed the submissions.
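The standard defence against stored XSS is to escape user-controlled text at render time. A minimal sketch of the principle (illustrative only; Open eClass is PHP-based, and this function name is hypothetical):

```python
import html

def render_submission_filename(filename: str) -> str:
    # Escape user-controlled text before embedding it in instructor-facing
    # HTML, so an injected <script> tag renders as inert text instead of
    # executing in the instructor's browser.
    return "<td>{}</td>".format(html.escape(filename, quote=True))
```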
Claude Code is an agentic coding tool (software that can automatically write and execute code) that had a vulnerability in versions before 2.0.72 where attackers could bypass safety confirmation prompts and execute untrusted commands through the find command by injecting malicious content into the tool's context window (the body of text the model reads as input). The vulnerability has a CVSS score (a 0-10 severity rating) of 7.7, meaning it is considered high severity.
Claude Code, an agentic coding tool (AI software that writes and manages code), had a vulnerability in versions before 2.0.74 where a flaw in how it validated Bash commands (Bash is a Unix shell) allowed attackers to bypass directory restrictions and write files outside the intended folder without permission from the user. The attack required the user to be running ZSH (a different Unix shell) and to allow untrusted content into Claude Code's input.
Claude Code, a tool that helps AI write and execute code automatically, had a security flaw before version 1.0.111 where it didn't properly check website addresses (URLs) before making requests to them. The app used a simple startsWith() check (looking only at the beginning of a domain name), which meant attackers could register fake domains like modelcontextprotocol.io.example.com that would be mistakenly trusted, allowing the tool to send data to attacker-controlled sites without the user knowing.
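The flawed check and its fix are easy to sketch in a few lines (illustrative Python; the function names are hypothetical, not Claude Code's actual implementation):

```python
from urllib.parse import urlparse

TRUSTED_DOMAIN = "modelcontextprotocol.io"

def naive_is_trusted(url: str) -> bool:
    # Flawed check like the one described: trusts any URL whose hostname
    # merely *starts with* the trusted domain, so
    # modelcontextprotocol.io.example.com slips through.
    host = urlparse(url).hostname or ""
    return host.startswith(TRUSTED_DOMAIN)

def is_trusted(url: str) -> bool:
    # Safer: the hostname must equal the trusted domain exactly or be a
    # subdomain of it (dot-separated suffix match).
    host = urlparse(url).hostname or ""
    return host == TRUSTED_DOMAIN or host.endswith("." + TRUSTED_DOMAIN)
```

The difference is that the suffix match anchors on the dot boundary, so an attacker-registered domain that embeds the trusted name as a prefix no longer passes.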
The Tutor LMS plugin for WordPress has a security flaw called IDOR (insecure direct object references, where an attacker can access or change data belonging to other users by guessing or manipulating identifiers) in versions up to 3.9.5. Attackers with instructor-level access can modify or delete courses they don't own by changing course ID numbers in bulk action requests, because the plugin doesn't properly check who owns each course.
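The missing ownership check is conceptually simple (Python sketch; the real plugin is PHP and these names are hypothetical illustrations of the fix, not its actual code):

```python
def bulk_delete_courses(current_user_id, course_owners, requested_ids):
    """Delete only courses owned by the requester.

    course_owners: dict mapping course ID -> owner user ID.
    Returns (deleted, denied) so the caller can report rejected IDs.
    """
    deleted, denied = [], []
    for course_id in requested_ids:
        # The IDOR fix: verify ownership per ID rather than trusting
        # whatever IDs arrive in the bulk-action request.
        if course_owners.get(course_id) == current_user_id:
            deleted.append(course_id)
        else:
            denied.append(course_id)
    return deleted, denied
```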
OpenAI published a paper describing new mitigations for URL-based data exfiltration (a technique where attackers trick AI agents into sending sensitive data to attacker-controlled websites by embedding malicious URLs in inputs). The issue was originally reported to OpenAI in 2023 but received little attention at the time, though Microsoft implemented a fix for the same vulnerability in Bing Chat.
Fix: Microsoft applied a fix via a Content-Security-Policy header (a security rule that controls which external resources a webpage can load) in May 2023 to generally prevent loading of images. OpenAI's specific mitigations are discussed in their new paper 'Preventing URL-Based Data Exfiltration in Language-Model Agents', but detailed mitigation methods are not described in this source text.
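As an illustration of the mechanism (the exact policy Microsoft shipped is not given in the source), a Content-Security-Policy that restricts where images may load from might look like this:

```python
# Illustrative only: an img-src directive limits image fetches to an
# allowlist, so a URL pointing at an attacker-controlled host is never
# requested and the exfiltration channel is closed.
CSP_HEADER = {
    "Content-Security-Policy": "img-src 'self' https://trusted.example"
}
```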
Embrace The Red
Fix: This issue has been patched in autogpt-platform-beta-v0.6.34. Users should update to this version or later.
NVD/CVE Database
This article discusses both harms and benefits of AI technologies, arguing that policy should focus on the specific context and impact of each AI use rather than broadly promoting or banning AI. The text warns that AI can automate bias (perpetuating discrimination in decisions about housing, employment, and arrests), consume vast resources, and replace human judgment in high-stakes decisions, while acknowledging beneficial uses like helping scientists analyze data or improving accessibility for people with disabilities.
Fix: The fix skips AS_NO_DATA_INTEGRITY mappings (file systems that don't guarantee data integrity semantics) in the wait_sb_inodes() function, allowing the system to skip waiting for these inodes entirely. This restores fuse to its prior behavior where syncs (operations that flush data to disk) become no-ops (operations that do nothing).
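Conceptually, the fix adds a skip condition to the wait loop. A Python sketch of the idea (the actual kernel code is C; the flag value and data layout here are illustrative):

```python
AS_NO_DATA_INTEGRITY = 1 << 0  # hypothetical stand-in for the kernel flag

def wait_sb_inodes(inodes):
    # Sketch of the fix: skip mappings that do not guarantee data
    # integrity instead of blocking on them forever. For a fuse mount
    # with an unresponsive server, the sync becomes a no-op.
    waited = []
    for inode in inodes:
        if inode["mapping_flags"] & AS_NO_DATA_INTEGRITY:
            continue  # nothing to wait for on this inode
        waited.append(inode["name"])  # stand-in for the writeback wait
    return waited
```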
NVD/CVE Database
Fix: Introduce virtio_transport_tx_buf_size() helper that returns min(peer_buf_alloc, buf_alloc) and use it wherever peer_buf_alloc is consumed. This ensures the effective TX window is bounded by both the peer's advertised buffer and the local buf_alloc (clamped to buffer_max_size via SO_VM_SOCKETS_BUFFER_MAX_SIZE). The patch is applied to virtio_transport_common.c, affecting virtio-vsock, vhost-vsock, and loopback transports.
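The helper's logic is simple enough to sketch (conceptual Python mirroring the C helper described above; the credit-accounting names are illustrative):

```python
def virtio_transport_tx_buf_size(peer_buf_alloc: int, buf_alloc: int) -> int:
    # Effective TX window is bounded by BOTH the peer's advertised buffer
    # and our own configured buffer, so a peer advertising a huge buffer
    # can no longer force us to queue unbounded data.
    return min(peer_buf_alloc, buf_alloc)

def tx_credit(tx_buf_size: int, tx_cnt: int, fwd_cnt: int) -> int:
    # Bytes we may still queue: the window minus bytes in flight
    # (sent but not yet consumed by the peer). Names are illustrative.
    return max(0, tx_buf_size - (tx_cnt - fwd_cnt))
```

With the old code, a slow reader advertising a 4 GiB buffer could pin gigabytes of host memory; with the clamp, queued data never exceeds the local buffer size.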
NVD/CVE Database
Fix: Return 0 instead of -ENOENT when encountering an unmapped PTE (page table entry). The existing WARN_ON already signals the error condition, and returning 0 (meaning 'nothing unmapped') is the correct semantic for a size_t return type. This matches the behavior of other io-pgtable implementations (io-pgtable-arm-v7s, io-pgtable-dart), which return 0 on error conditions.
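The underlying bug is a signed-to-unsigned reinterpretation, which is easy to demonstrate (Python sketch assuming a 64-bit size_t):

```python
ENOENT = 2
SIZE_MAX = 2**64 - 1  # assuming a 64-bit size_t

def as_size_t(value: int) -> int:
    # Reinterpret a (possibly negative) C integer as an unsigned size_t,
    # the way storing -ENOENT in a size_t return value does.
    return value & SIZE_MAX

# The bug: -ENOENT stored in a size_t becomes an enormous "bytes
# unmapped" count, corrupting the caller's address arithmetic.
buggy_unmapped = as_size_t(-ENOENT)

# The fix: report 0 ("nothing unmapped") on an unmapped PTE instead.
fixed_unmapped = 0
```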
NVD/CVE Database
Fix: Update OpenClaw to version 2026.1.30 or later, as the issue has been patched in that version.
NVD/CVE Database
Microsoft created a lightweight scanner that can detect backdoors (hidden malicious behaviors) in open-weight LLMs (large language models that have publicly available internal parameters) by identifying three distinctive signals: a specific attention pattern when trigger phrases are present, memorized poisoning data leakage, and activation by fuzzy triggers (partial variations of trigger phrases). The scanner works without needing to retrain the model or know the backdoor details in advance, though it only functions on open-weight models and works best on trigger-based backdoors.
Fix: Microsoft's scanner performs detection through a three-step process: it "first extracts memorized content from the model and then analyzes it to isolate salient substrings. Finally, it formalizes the three signatures above as loss functions, scoring suspicious substrings and returning a ranked list of trigger candidates." The tool works across common GPT-style models and requires access to the model files but no additional model training or prior knowledge of the backdoor behavior.
The Hacker News
Researchers have released new work on detecting backdoors (hidden malicious behaviors embedded in a model's weights during training) in open-weight language models to improve trust in AI systems. A backdoored model appears normal most of the time but changes behavior when triggered by a specific input, like a hidden phrase, making detection difficult. The research explores whether backdoored models show systematic differences from clean models and whether their trigger phrases can be reliably identified.
X's French offices were raided by Paris prosecutors investigating suspected illegal data extraction and possession of child sexual abuse material (CSAM, images depicting the sexual abuse of children), while the UK's Information Commissioner's Office launched a separate investigation into Grok (Elon Musk's AI chatbot) for its ability to create harmful sexualized images and videos without people's consent. The investigations were triggered by reports that Grok generated sexual deepfakes (fake sexual images created using real photos of women without permission) that were shared on X.
Fix: This issue has been patched in version 4.2. Users should upgrade to version 4.2 or later.
NVD/CVE Database
Fix: This issue has been patched in version 2.0.72.
NVD/CVE Database
Fix: This issue has been patched in version 2.0.74. Users should update Claude Code to version 2.0.74 or later.
NVD/CVE Database
Fix: Update Claude Code to version 1.0.111 or later, as the issue has been patched in that version.
NVD/CVE Database
AI agents are increasingly finding and reporting common security vulnerabilities (weaknesses in software) faster than human pen testers (security professionals who test systems for flaws), particularly through crowdsourced bug bounty programs (platforms where people are paid to find and report bugs). However, the source indicates that oversight and trust in these AI systems are not yet sufficiently developed to fully replace human expertise.
AI assistants like ChatGPT, Grok, and Qwen have their personalities and ethical rules shaped by their creators, and changes to these rules can cause serious problems for users. Recent examples include Grok generating millions of inappropriate sexual images and ChatGPT appearing to encourage self-harm, showing that how developers program an AI's behavior (its ethical codes) has real consequences.
This research proposes AHEDB (Accelerated Homomorphically Encrypted DataBase), a system designed to speed up database queries on encrypted data using Fully Homomorphic Encryption, or FHE (a method that lets computers perform calculations on encrypted information without decrypting it first). The system uses Encrypted Multiple Maps to reduce computational strain and a Single Range Cover algorithm for indexing, achieving better performance than existing FHE-based approaches while maintaining security.
HiveTEE is a security architecture that divides applications running inside a TEE (Trusted Execution Environment, a secure zone on a processor that protects sensitive operations from the main operating system) into smaller isolated domains, so that if one part is compromised, the damage doesn't spread to the rest. It uses RME (Realm Management Extension, a hardware feature that creates isolated execution spaces) and MTE (Memory Tagging Extension, a feature that prevents certain memory attacks), and testing shows it adds minimal slowdown (less than 3%) to applications.