aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

4488 items

GHSA-x2xq-qhjf-5mvg: DDEV has ZipSlip path traversal in tar and zip archive extraction

medium · vulnerability
security
Apr 22, 2026
CVE-2026-32885

DDEV, a local development tool, has a ZipSlip vulnerability (a path traversal flaw where attackers use special path names like '../' to escape the intended extraction directory) in its archive extraction functions. When DDEV extracts tar or zip archives from remote sources, it doesn't validate file paths, allowing attackers to write files anywhere on a developer's machine by crafting malicious archives.
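The standard mitigation for this class of bug is to resolve every member path against the extraction root before writing anything to disk. A minimal Python sketch of that check (illustrative only; DDEV itself is written in Go, and `safe_extract` is a hypothetical helper, not DDEV's code):

```python
import os
import tarfile

def safe_extract(tar: tarfile.TarFile, dest: str) -> None:
    """Extract a tar archive, rejecting members that would escape dest.

    This is the generic defense against ZipSlip-style path traversal:
    resolve each member path against the destination directory and
    verify the result is still inside it before extracting.
    """
    dest_root = os.path.realpath(dest)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(dest_root, member.name))
        if target != dest_root and not target.startswith(dest_root + os.sep):
            raise ValueError(f"blocked traversal attempt: {member.name!r}")
    tar.extractall(dest_root)
```

A member named `../evil.txt` resolves outside `dest_root` and is rejected before any file is written; ordinary members extract normally.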

GitHub Advisory Database

Fingerprint-based watermarking for protecting and tracing black-box NLP models

info · research · Peer-Reviewed
security

CVE-2026-35366: The printenv utility in uutils coreutils fails to display environment variables containing invalid UTF-8 byte sequences.

medium · vulnerability
security
Apr 22, 2026
CVE-2026-35366

A bug in uutils coreutils (a set of basic Unix utilities) causes the printenv tool to silently skip environment variables (settings that programs use) containing invalid UTF-8 byte sequences (non-standard character encodings), rather than displaying them. This allows attackers to hide malicious environment variables like LD_PRELOAD (which can inject libraries into programs) from administrators and security tools that rely on printenv to inspect the system.
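The underlying pitfall is decoding the environment as strict UTF-8 before displaying it. A Python sketch of a byte-level audit that cannot be evaded this way, using the POSIX-only `os.environb` mapping (`audit_env`, `find_suspicious`, and the watchlist are illustrative helpers, not part of any real tool):

```python
import os

def audit_env() -> dict:
    """Snapshot every environment variable at the byte level.

    Tools that decode the environment as strict UTF-8 can silently skip
    variables whose values contain invalid byte sequences, hiding entries
    such as LD_PRELOAD from an audit. os.environb (POSIX only) exposes
    the raw bytes, so nothing is dropped regardless of encoding.
    """
    return dict(os.environb)

def find_suspicious(env: dict) -> list:
    """Flag loader-related variables that can inject code into programs."""
    watchlist = (b"LD_PRELOAD", b"LD_LIBRARY_PATH", b"LD_AUDIT")
    return [k for k in env if k in watchlist]
```

Because matching happens on bytes, a value like `b"/tmp/evil.so\xff"` (invalid UTF-8) is still surfaced to the auditor.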

AI-powered defense for an AI-accelerated threat landscape

info · news
security · policy

Anthropic’s Mythos rollout has missed America’s cybersecurity agency

info · news
industry
Apr 22, 2026

Anthropic released Mythos Preview, an AI model designed to find and fix security vulnerabilities (weaknesses in software that attackers could exploit), and several US federal agencies are using it. However, CISA (the Cybersecurity and Infrastructure Security Agency, which is America's main government cybersecurity coordinator) reportedly does not have access to the tool, while other agencies like the Commerce Department and NSA do.

Google Meet will take AI notes for in-person meetings too

info · news
industry
Apr 22, 2026

Google's Gemini AI can now generate summaries and transcripts not just for Google Meet video calls, but also for in-person meetings, Zoom calls, and Microsoft Teams meetings. The feature, which was previously only available to early testers on Android devices, now works for both scheduled and impromptu meetings, and can be transitioned to a video call if remote participants need to join.

What is Mythos AI and why could it be a threat to global cybersecurity?

info · news
security
Apr 22, 2026

Anthropic, the company behind the Claude chatbot, has decided not to release its new AI model, Mythos, to the public due to cybersecurity risks. The company is investigating a report that unauthorized people may have gained access to Mythos, raising concerns about whether tech companies can adequately protect their most powerful AI systems from misuse.

Making ChatGPT better for clinicians

info · news
industry
Apr 22, 2026

OpenAI introduced ChatGPT for Clinicians, a free AI tool designed to help doctors, nurse practitioners, and pharmacists with clinical tasks like documentation, medical research, and patient care consultation. The tool includes advanced AI models, trusted medical search powered by peer-reviewed sources, and optional HIPAA compliance (a federal privacy law for healthcare data) support, with conversations kept private and not used to train the AI.

GHSA-2r2p-4cgf-hv7h: engram: HTTP server CORS wildcard + auth-off-by-default enables CSRF graph exfiltration and persistent indirect prompt injection

high · vulnerability
security
Apr 22, 2026

The engram HTTP server (a local application running on your computer) had a critical security flaw where it allowed any website you visited to steal your private knowledge graph data and inject persistent malicious instructions into your AI coding assistant. This happened because the server had no password protection by default and accepted requests from any website origin (CORS, or cross-origin resource sharing, which controls what websites can talk to your local applications).
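A minimal sketch of the two missing controls, assuming a hypothetical local server that checks an explicit `Origin` allowlist and a bearer token on every request (names like `ALLOWED_ORIGINS` and `authorize` are illustrative, not engram's actual API):

```python
from http import HTTPStatus

ALLOWED_ORIGINS = {"http://localhost:3000"}  # explicit opt-in, never "*"
API_TOKEN = "change-me"                      # placeholder; load from config

def authorize(headers: dict) -> tuple:
    """Gate a request to a local HTTP server.

    Mirrors the two hardening steps a local tool needs: (1) never accept
    an unknown cross-site Origin, and (2) require authentication even
    though the server only listens on localhost, since any web page the
    user visits can still send it requests.
    """
    origin = headers.get("Origin")
    if origin is not None and origin not in ALLOWED_ORIGINS:
        return (False, HTTPStatus.FORBIDDEN)
    auth = headers.get("Authorization", "")
    if auth != f"Bearer {API_TOKEN}":
        return (False, HTTPStatus.UNAUTHORIZED)
    return (True, HTTPStatus.OK)
```

With a wildcard CORS policy and no token, both checks are effectively absent, which is exactly the combination that made cross-site exfiltration possible here.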

Now Meta will track what employees do on their computers to train its AI agents

info · news
privacy · industry

CVE-2026-6859: A flaw was found in InstructLab. The `linux_train.py` script hardcodes `trust_remote_code=True` when loading models from

high · vulnerability
security
Apr 22, 2026
CVE-2026-6859

InstructLab has a security flaw in its `linux_train.py` script that automatically trusts code from external model sources without verification (trust_remote_code=True). An attacker could trick users into downloading a malicious model from HuggingFace (a popular AI model repository) and running training commands, allowing the attacker to execute arbitrary Python code and take over the entire system.
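A defensive pattern for this class of flaw is to gate the flag before it ever reaches the model loader. A hedged Python sketch (the allowlist, repo names, and helper are hypothetical, not InstructLab's code):

```python
# Hypothetical allowlist of repositories vetted for remote code execution.
TRUSTED_REPOS = {"example-org/vetted-model"}

def safe_load_kwargs(repo_id: str, **kwargs) -> dict:
    """Sanitize keyword arguments before passing them to a model loader.

    Hardcoding trust_remote_code=True lets a malicious model repository
    ship arbitrary Python that runs at load time. A safer default is to
    refuse the flag unless the repository is explicitly allowlisted.
    """
    if kwargs.get("trust_remote_code") and repo_id not in TRUSTED_REPOS:
        raise PermissionError(
            f"refusing trust_remote_code=True for unvetted repo {repo_id!r}"
        )
    return dict(kwargs)
```

The sanitized kwargs would then be forwarded to whatever loader the training script uses; any unvetted repository requesting remote-code trust fails closed.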

CVE-2026-31507: In the Linux kernel, the following vulnerability has been resolved: net/smc: fix double-free of smc_spd_priv when tee()

info · vulnerability
security
Apr 22, 2026
CVE-2026-31507

A vulnerability in the Linux kernel's SMC (Shared Memory Communications) networking code allows a double-free memory error when the tee() function duplicates splice pipe buffers. When two pipes share the same smc_spd_priv pointer (a data structure tracking buffer metadata), releasing both pipes causes the same object to be freed twice, leading to a use-after-free bug (accessing memory that has already been freed) and potential kernel crashes.
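The general shape of such bugs, and the usual cure, can be shown with a toy Python model: two owners share one object, and reference counting makes only the last release actually free it (purely illustrative, not kernel code, and not the specific fix shipped for this CVE):

```python
class SharedPriv:
    """Toy model of two pipes sharing one smc_spd_priv-like object.

    The bug pattern is that both owners free the shared object on
    release. A reference count (the kernel's refcount_t idiom) ensures
    the object is freed exactly once, by whichever owner releases last.
    """
    def __init__(self):
        self.refs = 1
        self.freed = False

    def get(self):
        """Take an additional reference (a second owner appears)."""
        self.refs += 1
        return self

    def put(self):
        """Drop a reference; free only when the last one is gone."""
        if self.freed:
            raise RuntimeError("use-after-free: object already released")
        self.refs -= 1
        if self.refs == 0:
            self.freed = True  # stand-in for the real free
```

Without the count, the second owner's release would hit already-freed memory, which is the double-free/use-after-free condition described above.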

CVE-2026-31504: In the Linux kernel, the following vulnerability has been resolved: net: fix fanout UAF in packet_release() via NETDEV_

info · vulnerability
security
Apr 22, 2026
CVE-2026-31504

A race condition vulnerability exists in the Linux kernel's packet networking code where `packet_release()` can leave a dangling pointer in a fanout group's array (a data structure for managing network packet distribution). The problem occurs because `NETDEV_UP` (a network device startup event) can re-register a socket into the array after `packet_release()` begins cleanup but before it finishes, creating a use-after-free bug (accessing memory that has been freed).

From Access Control to Outcome Control: Securing AI Agents with Check Point and Google Cloud

info · news
security · policy

Retail traders can now get long OpenAI as Robinhood's venture fund takes a stake

info · news
industry
Apr 22, 2026

Robinhood Ventures Fund I, an investment vehicle that lets regular traders buy into private companies, invested $75 million in OpenAI, the AI company behind ChatGPT. This gives retail investors (non-professional traders) access to ownership stakes in one of the most influential artificial intelligence companies, reflecting growing investor demand for exposure to leading AI firms.

AI-Enhanced Cybersecurity in Edge Computing: Threats, Solutions, and Future Directions

info · research · Peer-Reviewed
security

NFC tap-to-pay gets tapped by hackers

medium · news
security
Apr 22, 2026

Hackers have infected a legitimate Android payment app called HandyPay with malware (trojanized code, meaning legitimate software modified with malicious additions) to steal NFC data (near field communication, the technology that powers tap-to-pay) and PINs, allowing them to clone payment cards and drain accounts. The attackers likely used generative AI to help create the malware, as evidenced by emoji markers in the code that are typical of AI-generated text. The malware is being distributed through fake websites impersonating a Brazilian lottery and a spoofed Google Play store, targeting Android users in Brazil.

Claude Mythos Finds 271 Firefox Vulnerabilities

info · news
security · research

Toxic Combinations: When Cross-App Permissions Stack into Risk

high · news
security · safety

Anthropic investigating claim of unauthorised access to Mythos AI tool

medium · news
security
Apr 22, 2026

Anthropic is investigating a claim that unauthorized users accessed Claude Mythos, an advanced AI security tool that the company considers too dangerous to release publicly. The unauthorized access likely occurred through misuse of credentials by someone with legitimate access to Anthropic's systems through a third-party vendor, rather than through a traditional hack (a deliberate attempt to break into a computer system). The incident raises concerns about whether large AI companies can adequately control access to their most powerful models.

research
Apr 22, 2026

Researchers have developed a fingerprint-based watermarking technique to protect and track natural language processing models (AI systems trained to understand and generate text) that operate as black boxes (systems where users cannot see how internal decisions are made). This method allows owners to prove they created a model and trace where it has been used or copied without permission.

Elsevier Security Journals
NVD/CVE Database
Apr 22, 2026

AI models can now autonomously discover vulnerabilities and create working exploits, which compresses the time between when a weakness is found and when it's attacked. However, the same AI capabilities that help attackers can also help defenders by accelerating vulnerability discovery and reducing response time. Microsoft is partnering with AI model providers and using tools like advanced models to identify security issues faster and deploy fixes through their existing update processes.

Fix: Microsoft states it will incorporate advanced AI models directly into its Security Development Lifecycle (SDL) to identify vulnerabilities and develop mitigations and updates. Mitigations are handled through the Microsoft Security Response Center (MSRC) processes, including Update Tuesday (the regular monthly security update distribution) and out-of-band updates when needed. For customers using Microsoft PaaS and SaaS cloud services, mitigations and updates are applied automatically. For customers deploying on their own infrastructure, staying current on all security updates is described as a fundamental requirement. Microsoft will also deploy detections to Microsoft Defender when updates are released and share details through the Microsoft Active Protections Program (MAPP) to help partners mitigate risk.

Microsoft Security Blog
The Verge (AI)
The Verge (AI)
The Guardian Technology
OpenAI Blog

Fix: Upgrade to `engramx@2.0.2` or later. This version applies the following fixes: (1) requires authentication (Bearer token or HttpOnly cookie) on all non-public routes, (2) removes the wildcard CORS policy entirely and requires explicit opt-in via `ENGRAM_ALLOWED_ORIGINS`, (3) validates the Host and Origin headers to prevent DNS rebinding attacks, (4) enforces `Content-Type: application/json` on data modifications to block CSRF vectors, and (5) protects the UI bootstrap with `Sec-Fetch-Site` validation to prevent cross-origin probing.

GitHub Advisory Database
Apr 22, 2026

Meta is installing a tool called Model Capability Initiative (MCI) on US employees' computers that records their activity, including mouse movements, clicks, keystrokes, and screenshots from work apps and websites. This recorded data will be used to train Meta's AI agents to perform computer tasks more like humans do, though Meta states the data won't be used to evaluate employee job performance.

The Verge (AI)
NVD/CVE Database

Fix: The `.get` callback is invoked by both `tee(2)` and `splice_pipe_to_pipe()` for partial transfers; both now return `-EFAULT`. Users who need to duplicate SMC socket data must use a copy-based read path.

NVD/CVE Database

Fix: `po->num` is set to zero in `packet_release()` while `bind_lock` is held, preventing `NETDEV_UP` from re-linking the socket and thereby closing the race window.
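The fix's logic can be sketched as a toy Python race guard, where the release path and the re-registration path take the same lock and re-registration checks the cleared protocol number (class and method names are illustrative, not kernel code):

```python
import threading

class PacketSocket:
    """Toy model of the packet_release()/NETDEV_UP race and its fix.

    The re-registration path (NETDEV_UP) checks the protocol number
    under the same lock that the release path uses to clear it, so a
    socket being torn down can never be re-linked into the fanout
    array after cleanup has begun.
    """
    def __init__(self, num: int):
        self.bind_lock = threading.Lock()
        self.num = num
        self.linked = False

    def release(self):
        with self.bind_lock:
            self.num = 0        # the fix: clear num under bind_lock
            self.linked = False

    def netdev_up(self):
        with self.bind_lock:
            if self.num != 0:   # refuses sockets mid-release
                self.linked = True
```

Because both paths serialize on `bind_lock` and `netdev_up` tests `num`, the dangling-pointer window described above cannot reopen.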

NVD/CVE Database
Apr 22, 2026

AI agents (AI systems that can retrieve data, use tools, and perform actions automatically) introduce new security challenges because traditional access control (rules about who can use a system) isn't enough. Google Cloud's Gemini Enterprise Agent Platform offers a centralized control point that provides identity management, access control, policy enforcement, and observability (the ability to see and monitor what's happening) to secure how these agents operate.

Check Point Research
CNBC Technology
research
Apr 22, 2026

This academic survey article examines how AI is being used to improve security in edge computing (processing data on devices near users rather than in distant data centers), while also exploring the new threats that arise when combining AI with edge systems. The article covers both the security challenges unique to AI-enhanced edge environments and potential approaches to address them, looking toward future developments in this field.

ACM Digital Library (TOPS, DTRAP, CSUR)

Fix: Android provides some protection through security alerts. When a user tries to download the trojanized app from a browser, Android automatically blocks the install and shows a prompt requiring manual permission to allow installation from that source. ESET researchers also shared a list of indicators (files, hashes, network indicators, and MITRE ATT&CK maps) in a dedicated GitHub repository to support detection efforts.

CSO Online
Apr 22, 2026

A tool called Claude Mythos discovered 271 security vulnerabilities (weak points that could be exploited) in Firefox, Mozilla's web browser. According to Mozilla, all of these flaws could have also been found by a highly skilled human security researcher, suggesting the AI tool didn't discover anything that experienced humans couldn't find.

SecurityWeek
Apr 22, 2026

On January 31, 2026, researchers found that Moltbook, a social network for AI agents, exposed 35,000 email addresses and 1.5 million agent API tokens, including plaintext third-party credentials such as OpenAI API keys, because its database was unencrypted. The core risk is a "toxic combination," where an AI agent or integration bridges two or more applications through OAuth grants (permission frameworks allowing apps to access each other) or API connections, and each application owner reviews only their own side, missing the security risks created by the bridge itself.

Fix: The source suggests shifting review processes from inside each app to between them, recommending four specific areas: (1) maintain a non-human identity inventory treating every AI agent, bot, MCP server (modular tools that extend AI capabilities), and OAuth integration the same as user accounts with owners and review dates, (2) flag new write scopes (permissions to modify data) on identities that already hold read scopes (permissions to view data) in different apps before approval, (3) create a review trail for every connector linking two systems that names both sides and the trust relationship between them, and (4) monitor long-lived tokens whose activity has drifted from their original scopes.
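Point (2) can be sketched as a small check over a non-human identity inventory; the data shape and the `read:`/`write:` scope-naming convention are assumptions for illustration, not any vendor's schema:

```python
def toxic_combinations(inventory: dict) -> list:
    """Flag identities holding read scopes in one app and write scopes
    in a different app, the cross-app "bridge" pattern described above.

    `inventory` maps identity -> app -> set of granted scope strings.
    """
    flagged = []
    for identity, apps in inventory.items():
        reads = {app for app, scopes in apps.items()
                 if any(s.startswith("read") for s in scopes)}
        writes = {app for app, scopes in apps.items()
                  if any(s.startswith("write") for s in scopes)}
        # Toxic: write access in one app paired with read access in another.
        if any(w != r for w in writes for r in reads):
            flagged.append(identity)
    return flagged
```

An agent that can read CRM contacts and send mail through a second app is flagged, while a bot with read and write scopes inside a single app is not, matching the article's point that the risk lives between applications rather than inside one.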

The Hacker News
BBC Technology