All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
DDEV, a local development tool, has a ZipSlip vulnerability (a path traversal flaw where attackers use special path names like '../' to escape the intended extraction directory) in its archive extraction functions. When DDEV extracts tar or zip archives from remote sources, it doesn't validate file paths, allowing attackers to write files anywhere on a developer's machine by crafting malicious archives.
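The class of fix here is a pre-extraction path check. Below is a minimal Python sketch of that check, assuming a tar archive; `safe_extract` is an illustrative helper, not DDEV's actual patch (DDEV itself is written in Go):

```python
import os
import tarfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a tar archive, rejecting entries that escape dest_dir."""
    dest = os.path.realpath(dest_dir)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            # Resolve the entry's final on-disk path; a name such as
            # '../../etc/cron.d/x' normalizes to a path outside dest.
            target = os.path.realpath(os.path.join(dest, member.name))
            if os.path.commonpath([dest, target]) != dest:
                raise ValueError(f"path traversal entry: {member.name!r}")
        tar.extractall(dest)
```

On Python 3.12+, `tarfile`'s built-in `extractall(dest, filter="data")` performs equivalent checks, including for symlink members, without hand-rolled validation.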
A bug in uutils coreutils (a set of basic Unix utilities) causes the printenv tool to silently skip environment variables (settings that programs use) containing invalid UTF-8 byte sequences (non-standard character encodings), rather than displaying them. This allows attackers to hide malicious environment variables like LD_PRELOAD (which can inject libraries into programs) from administrators and security tools that rely on printenv to inspect the system.
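To see why byte-level handling matters, here is a small Python illustration (POSIX-only, since `os.environb` does not exist on Windows); the `LD_PRELOAD` value is a deliberately invalid UTF-8 example, not a real library path:

```python
import os

# Environment entries are raw bytes on POSIX; this value is not
# valid UTF-8, which is exactly the kind of entry the buggy
# printenv silently skipped.
os.environb[b"LD_PRELOAD"] = b"/tmp/\xffevil.so"

for name, value in os.environb.items():
    # Decode losslessly: backslashreplace escapes the bad bytes
    # instead of dropping the variable, so nothing stays hidden.
    print(name.decode("utf-8", "backslashreplace"),
          value.decode("utf-8", "backslashreplace"), sep="=")
```

An inspection tool that enumerates the environment bytewise like this still surfaces the planted `LD_PRELOAD`, where the buggy `printenv` omits it.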
Anthropic released Mythos Preview, an AI model designed to find and fix security vulnerabilities (weaknesses in software that attackers could exploit), and several US federal agencies are using it. However, CISA (the Cybersecurity and Infrastructure Security Agency, which is America's main government cybersecurity coordinator) reportedly does not have access to the tool, while other agencies like the Commerce Department and NSA do.
Google's Gemini AI can now generate summaries and transcripts not just for Google Meet video calls, but also for in-person meetings, Zoom calls, and Microsoft Teams meetings. The feature, previously available only to early testers on Android devices, now works for both scheduled and impromptu meetings, and an in-person meeting can be transitioned to a video call if remote participants need to join.
Anthropic, the company behind the Claude chatbot, has decided not to release its new AI model, called Mythos, to the public due to cybersecurity risks. The company is investigating a report that unauthorized people may have gained access to Mythos, raising concerns about whether tech companies can adequately protect their most powerful AI systems from being misused.
OpenAI introduced ChatGPT for Clinicians, a free AI tool designed to help doctors, nurse practitioners, and pharmacists with clinical tasks like documentation, medical research, and patient care consultation. The tool includes advanced AI models, trusted medical search powered by peer-reviewed sources, and optional HIPAA compliance (a federal privacy law for healthcare data) support, with conversations kept private and not used to train the AI.
The engram HTTP server (a local application running on your computer) had a critical security flaw that allowed any website you visited to steal your private knowledge graph data and inject persistent malicious instructions into your AI coding assistant. This happened because the server had no password protection by default and accepted requests from any website origin under a permissive CORS policy (cross-origin resource sharing, the mechanism that controls which websites can talk to your local applications).
InstructLab has a security flaw in its `linux_train.py` script that automatically trusts code from external model sources without verification (`trust_remote_code=True`). An attacker could trick users into downloading a malicious model from HuggingFace (a popular AI model repository) and running training commands, allowing the attacker to execute arbitrary Python code and take over the entire system.
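With the Hugging Face `transformers` API, the unsafe pattern and a safer counterpart look roughly like this; the repository name and revision are placeholders, and this is a sketch of the general fix rather than InstructLab's actual patch:

```python
from transformers import AutoModelForCausalLM

# Unsafe: trust_remote_code=True executes arbitrary Python bundled
# inside the downloaded model repository with the user's privileges.
# model = AutoModelForCausalLM.from_pretrained(
#     "some-org/some-model", trust_remote_code=True)

# Safer: keep the library default of refusing repo-supplied code,
# and pin an exact revision so a later push to the repository
# cannot silently swap in new files.
model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-model",      # placeholder repo id
    trust_remote_code=False,    # the transformers default
    revision="0123abc",         # placeholder commit hash
)
```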
A vulnerability in the Linux kernel's SMC (Shared Memory Communications) networking code allows a double-free memory error when the `tee()` function duplicates splice pipe buffers. When two pipes share the same `smc_spd_priv` pointer (a data structure tracking buffer metadata), releasing both pipes causes the same object to be freed twice, leading to a use-after-free bug (accessing memory that has already been freed) and potential kernel crashes.
A race condition vulnerability exists in the Linux kernel's packet networking code where `packet_release()` can leave a dangling pointer in a fanout group's array (a data structure for managing network packet distribution). The problem occurs because `NETDEV_UP` (a network device startup event) can re-register a socket into the array after `packet_release()` begins cleanup but before it finishes, creating a use-after-free bug (accessing memory that has been freed).
Robinhood Ventures Fund I, an investment vehicle that lets regular traders buy into private companies, invested $75 million in OpenAI, the AI company behind ChatGPT. This gives retail investors (non-professional traders) access to ownership stakes in one of the most influential artificial intelligence companies, reflecting growing investor demand for exposure to leading AI firms.
Hackers have infected a legitimate Android payment app called HandyPay with malware (trojanized code, meaning legitimate software modified with malicious additions) to steal NFC data (near field communication, the technology that powers tap-to-pay) and PINs, allowing them to clone payment cards and drain accounts. The attackers likely used generative AI to help create the malware, as evidenced by emoji markers in the code that are typical of AI-generated text. The malware is being distributed through fake websites impersonating a Brazilian lottery and a spoofed Google Play store, targeting Android users in Brazil.
Anthropic is investigating a claim that unauthorized users accessed Claude Mythos, an advanced AI security tool that the company considers too dangerous to release publicly. The unauthorized access likely occurred through misuse of credentials by someone with legitimate access to Anthropic's systems through a third-party vendor, rather than through a traditional hack (an external break-in to a computer system). The incident raises concerns about whether large AI companies can adequately control access to their most powerful models.
Researchers have developed a fingerprint-based watermarking technique to protect and track natural language processing models (AI systems trained to understand and generate text) that operate as black boxes (systems where users cannot see how internal decisions are made). This method allows owners to prove they created a model and trace where it has been used or copied without permission.
AI models can now autonomously discover vulnerabilities and create working exploits, which compresses the time between when a weakness is found and when it's attacked. However, the same AI capabilities that help attackers can also help defenders by accelerating vulnerability discovery and reducing response time. Microsoft is partnering with AI model providers and using tools like advanced models to identify security issues faster and deploy fixes through their existing update processes.
Fix: Microsoft states it will incorporate advanced AI models directly into its Security Development Lifecycle (SDL) to identify vulnerabilities and develop mitigations and updates. Mitigations are handled through the Microsoft Security Response Center (MSRC) processes, including Update Tuesday (the regular monthly security update distribution) and out-of-band updates when needed. For customers using Microsoft PaaS and SaaS cloud services, mitigations and updates are applied automatically. For customers deploying on their own infrastructure, staying current on all security updates is described as a fundamental requirement. Microsoft will also deploy detections to Microsoft Defender when updates are released and share details through the Microsoft Active Protections Program (MAPP) to help partners mitigate risk.
Microsoft Security Blog
Fix: Upgrade to `engramx@2.0.2` or later. This version applies the following fixes: (1) requires authentication (Bearer token or HttpOnly cookie) on all non-public routes, (2) removes the wildcard CORS policy entirely and requires explicit opt-in via `ENGRAM_ALLOWED_ORIGINS`, (3) validates the Host and Origin headers to prevent DNS rebinding attacks, (4) enforces `Content-Type: application/json` on data modifications to block CSRF vectors, and (5) protects the UI bootstrap with `Sec-Fetch-Site` validation to prevent cross-origin probing.
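A minimal Python sketch of the Host/Origin and Content-Type checks described above, assuming the standard-library `http.server`; the port and handler are illustrative and this is not engram's code, though `ENGRAM_ALLOWED_ORIGINS` is the opt-in variable named in the advisory:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGINS = set(
    filter(None, os.environ.get("ENGRAM_ALLOWED_ORIGINS", "").split(","))
)
ALLOWED_HOSTS = {"localhost:7741", "127.0.0.1:7741"}  # illustrative port

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Host allowlist defeats DNS rebinding: the browser sends the
        # attacker's hostname, which will not match a local entry.
        if self.headers.get("Host") not in ALLOWED_HOSTS:
            return self.send_error(403, "bad Host header")
        # Origins must be explicitly opted in; no wildcard fallback.
        origin = self.headers.get("Origin")
        if origin is not None and origin not in ALLOWED_ORIGINS:
            return self.send_error(403, "origin not allowed")
        # Requiring JSON blocks simple-form CSRF, which can only send
        # form-encoded or text/plain bodies without a CORS preflight.
        content_type = self.headers.get("Content-Type", "")
        if content_type.split(";")[0].strip() != "application/json":
            return self.send_error(415, "expected application/json")
        self.send_response(204)
        self.end_headers()

HTTPServer(("127.0.0.1", 7741), Handler).serve_forever()
```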
GitHub Advisory Database
Meta is installing a tool called Model Capability Initiative (MCI) on US employees' computers that records their activity, including mouse movements, clicks, keystrokes, and screenshots from work apps and websites. This recorded data will be used to train Meta's AI agents to perform computer tasks more like humans do, though Meta states the data won't be used to evaluate employee job performance.
Fix: The `.get` callback is invoked by both `tee(2)` and `splice_pipe_to_pipe()` for partial transfers; both will now return `-EFAULT`. Users who need to duplicate SMC socket data must use a copy-based read path.
NVD/CVE Database
Fix: `po->num` is set to zero in `packet_release()` while `bind_lock` is held, which prevents `NETDEV_UP` from re-linking the socket and closes the race window.
NVD/CVE Database
AI agents (AI systems that can retrieve data, use tools, and perform actions automatically) introduce new security challenges because traditional access control (rules about who can use a system) isn't enough. Google Cloud's Gemini Enterprise Agent Platform offers a centralized control point that provides identity management, access control, policy enforcement, and observability (the ability to see and monitor what's happening) to secure how these agents operate.
This academic survey article examines how AI is being used to improve security in edge computing (processing data on devices near users rather than in distant data centers), while also exploring the new threats that arise when combining AI with edge systems. The article covers both the security challenges unique to AI-enhanced edge environments and potential approaches to address them, looking toward future developments in this field.
Fix: Android provides some protection through security alerts. When a user tries to download the trojanized app from a browser, Android automatically blocks the install and shows a prompt requiring manual permission to allow installation from that source. ESET researchers also shared a list of indicators of compromise (files, hashes, network indicators, and MITRE ATT&CK mappings) in a dedicated GitHub repository to support detection efforts.
CSO Online
A tool called Claude Mythos discovered 271 security vulnerabilities (weak points that could be exploited) in Firefox, Mozilla's web browser. According to Mozilla, all of these flaws could have also been found by a highly skilled human security researcher, suggesting the AI tool didn't discover anything that experienced humans couldn't find.
On January 31, 2026, researchers found that Moltbook, a social network for AI agents, exposed 35,000 email addresses and 1.5 million agent API tokens, including plaintext third-party credentials like OpenAI API keys, because its database was unencrypted. The core risk is a "toxic combination," where an AI agent or integration bridges two or more applications through OAuth grants (permission frameworks allowing apps to access each other) or API connections, and each application owner reviews only their own side, missing the security risks created by the bridge itself.
Fix: The source suggests shifting review processes from inside each app to between them, recommending four specific areas: (1) maintain a non-human identity inventory treating every AI agent, bot, MCP server (modular tools that extend AI capabilities), and OAuth integration the same as user accounts with owners and review dates, (2) flag new write scopes (permissions to modify data) on identities that already hold read scopes (permissions to view data) in different apps before approval, (3) create a review trail for every connector linking two systems that names both sides and the trust relationship between them, and (4) monitor long-lived tokens whose activity has drifted from their original scopes.
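As a sketch of check (2), a cross-app read-plus-write flag might look like the following; the `Grant` model and the `read:`/`write:` scope naming are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Grant:
    identity: str    # non-human identity: agent, bot, MCP server, OAuth app
    app: str
    scopes: set[str]

def toxic_combinations(existing: list[Grant], requested: Grant) -> list[str]:
    """Flag a new write scope when the same identity already holds read
    scopes in a *different* app -- the bridge single-app reviews miss."""
    if not any(s.startswith("write:") for s in requested.scopes):
        return []
    return [
        g.app for g in existing
        if g.identity == requested.identity
        and g.app != requested.app
        and any(s.startswith("read:") for s in g.scopes)
    ]

# Example: an agent that can already read CRM data asks to send email.
grants = [Grant("agent-42", "crm", {"read:contacts"})]
print(toxic_combinations(grants, Grant("agent-42", "email", {"write:send"})))
# -> ['crm']: review the crm-to-email bridge before approving.
```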
The Hacker News