aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

4482 items

Google to invest up to $40 billion in Anthropic as search giant spreads its AI bets

info · news
industry
Apr 24, 2026

Google is investing up to $40 billion in Anthropic, an AI company that competes with OpenAI, with an initial $10 billion upfront and the remaining $30 billion dependent on performance milestones. This investment is part of a broader partnership that includes providing Anthropic with computing resources and cloud infrastructure access. The funding addresses Anthropic's need to expand its infrastructure to handle growing demand for its Claude AI assistant.

CNBC Technology

GHSA-rp7v-4384-hfrp: k8sGPT has Prompt Injection through its k8sGPT-Operator

high · vulnerability
security
Apr 24, 2026

This advisory describes a prompt injection vulnerability (tricking an AI by hiding malicious instructions in its input) in k8sGPT-Operator, a tool that helps manage Kubernetes clusters (container orchestration systems). The advisory text explains how severity is scored through metrics like attack complexity and potential impact, but does not give specific details about the vulnerability itself or how it works.

GHSA-q5hj-mxqh-vv77: Claude Code: Trust Dialog Bypass via Git Worktree Spoofing Allows Arbitrary Code Execution

high · vulnerability
security
Apr 24, 2026
CVE-2026-40068

Claude Code had a security flaw where it checked a git worktree (a Git feature allowing multiple branch checkouts in separate directories) `commondir` file to decide if a folder was trustworthy, but didn't verify the file's contents. An attacker could create a malicious repository with a fake `commondir` file pointing to a folder the victim had previously trusted, tricking Claude Code into skipping its safety dialog and running malicious code from `.claude/settings.json` (a configuration file). This attack required the victim to clone the malicious repository and open it in Claude Code, and the attacker had to know a path the victim had already marked as safe.
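A minimal sketch of the trust-check flaw, in Python with hypothetical names (the real logic lives in Claude Code, not here): trust is decided from a string the attacker wrote into the `commondir` file, so pointing it at an already-trusted path passes the check without any verification that the directory actually belongs to that repository.

```python
import tempfile
import pathlib

# Paths the victim has previously marked as safe (hypothetical).
trusted = {"/home/victim/projects/safe-repo/.git"}

def naive_is_trusted(worktree_git_dir: pathlib.Path) -> bool:
    # Flaw: the trust decision is based on attacker-controlled file
    # contents, with no check that commondir really points back here.
    common = (worktree_git_dir / "commondir").read_text().strip()
    return common in trusted

# Attacker ships a repository whose commondir names a trusted path.
repo = pathlib.Path(tempfile.mkdtemp()) / "evil" / ".git"
repo.mkdir(parents=True)
(repo / "commondir").write_text("/home/victim/projects/safe-repo/.git\n")

print(naive_is_trusted(repo))  # True, so the safety dialog is skipped
```

The fix direction implied by the advisory is to verify the relationship (for example, that the resolved common directory actually lists this worktree) rather than trusting the string alone.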

GHSA-82j2-j2ch-gfr8: rustls-webpki: Denial of service via panic on malformed CRL BIT STRING

high · vulnerability
security
Apr 24, 2026

A bug in rustls-webpki (a Rust library for validating certificates) causes the program to crash when processing a malformed CRL (certificate revocation list, a list of revoked digital certificates) with a specially crafted BIT STRING (a data structure in certificate formats). The crash happens in the `bit_string_flags()` function when it tries to access an array element that doesn't exist, but only affects applications that explicitly enable CRL checking and load CRL data from untrusted sources.

GHSA-r75f-5x8p-qvmc: LiteLLM has SQL Injection in Proxy API key verification

critical · vulnerability
security
Apr 24, 2026

LiteLLM's proxy API key verification has a SQL injection vulnerability (a type of attack where an attacker inserts malicious database commands into input fields). An unauthenticated attacker could send a specially crafted authorization header to exploit this flaw and potentially read or modify the proxy's database, gaining unauthorized access to stored credentials.
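A hypothetical key-lookup query, sketched with stdlib `sqlite3`, illustrating the class of bug (LiteLLM's actual query and schema are not shown here): building SQL by string interpolation versus binding the caller-supplied value as a separate parameter, which is what the 1.83.7 fix does.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keys (token TEXT, owner TEXT)")
conn.execute("INSERT INTO keys VALUES ('sk-good', 'alice')")

def verify_unsafe(token: str):
    # Attacker-controlled header text becomes part of the SQL statement.
    sql = f"SELECT owner FROM keys WHERE token = '{token}'"
    return conn.execute(sql).fetchone()

def verify_safe(token: str):
    # The value is passed to the database as a bound parameter, so it
    # can never change the statement's structure.
    return conn.execute(
        "SELECT owner FROM keys WHERE token = ?", (token,)
    ).fetchone()

payload = "' OR '1'='1"        # classic injection string
print(verify_unsafe(payload))  # ('alice',) -- authentication bypassed
print(verify_safe(payload))    # None -- payload treated as a literal
```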

GHSA-mw35-8rx3-xf9r: Ray: Remote Code Execution via Parquet Arrow Extension Type Deserialization

high · vulnerability
security
Apr 24, 2026
CVE-2026-41486

Ray Data registers custom Arrow extension types (special data format handlers) globally in PyArrow, and when PyArrow reads a Parquet file (a data storage format) containing these types, it automatically deserializes metadata bytes using cloudpickle.loads(), which can execute arbitrary code. This vulnerability was reintroduced in July 2025 after a similar issue was supposedly fixed in May 2024, allowing attackers to run malicious code just by having Ray read a specially crafted Parquet file.
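The danger of `cloudpickle.loads()` on attacker-controlled bytes can be shown with stdlib `pickle`, which cloudpickle wraps: a crafted payload's `__reduce__` hook runs an arbitrary callable at deserialization time. Here the payload only calls `eval` on harmless arithmetic; a real exploit would invoke something like `os.system`.

```python
import pickle

class Malicious:
    # __reduce__ tells pickle to call eval("40 + 2") when the bytes are
    # loaded; the attacker chooses both the callable and its arguments.
    def __reduce__(self):
        return (eval, ("40 + 2",))

payload = pickle.dumps(Malicious())

# Deserializing untrusted bytes executes the attacker's callable.
result = pickle.loads(payload)
print(result)  # 42
```

This is why metadata read from a Parquet file, which an attacker can fully control, must never flow into a pickle-family loader.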

GHSA-xqmj-j6mv-4862: LiteLLM: Server-Side Template Injection in /prompts/test endpoint

high · vulnerability
security
Apr 24, 2026

LiteLLM Proxy had a server-side template injection vulnerability (a security flaw where user input is processed as code rather than plain text) in its `/prompts/test` endpoint that allowed authenticated users to run arbitrary code within the proxy process and potentially access sensitive information like API keys or database credentials. The vulnerability affects any deployment running an affected version of LiteLLM Proxy.
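A stdlib-only analogy of the template-injection class (LiteLLM's real renderer and its sandbox are not reproduced here): rendering an untrusted string with `str.format` lets it traverse object attributes and leak data, while `string.Template` only performs plain placeholder substitution, which is the spirit of the sandboxed-renderer fix.

```python
from string import Template

SECRET_KEY = "sk-live-123"  # hypothetical credential held by the server

class Config:
    secret = SECRET_KEY

def render_unsafe(user_template: str) -> str:
    # The user's template is interpreted, so "{cfg.secret}" walks into
    # server-side objects.
    return user_template.format(cfg=Config())

def render_safe(user_template: str) -> str:
    # Plain substitution: no attribute access, no code-like evaluation.
    return Template(user_template).safe_substitute(name="world")

print(render_unsafe("{cfg.secret}"))  # sk-live-123 -- data leaked
print(render_safe("$name"))           # world
```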

GHSA-xff3-5c9p-2mr4: New API: Stripe Webhook Signature Bypass via Empty Secret Enables Unlimited Quota Fraud

high · vulnerability
security
Apr 24, 2026
CVE-2026-41432

A critical vulnerability allows attackers to forge Stripe webhook events (messages confirming payments) and illegally credit their accounts with quota without paying, because the system uses an empty default secret key and doesn't verify which payment method was actually used. Three compounding flaws enable this: the webhook handler accepts empty secrets, signature verification can be bypassed with an empty key, and the system fulfills orders from any payment gateway when it receives a forged Stripe webhook.
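A sketch of the first two flaws, assuming a standard HMAC-SHA256 webhook scheme (the project's actual verification code is not shown here): if verification proceeds with an empty secret, the attacker can compute the same MAC over a forged body, so the check verifies nothing. The fix is to refuse verification outright when no secret is configured.

```python
import hmac
import hashlib

def verify(payload: bytes, signature: str, secret: str) -> bool:
    if not secret:
        return False  # the vulnerable code path skipped this guard
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

forged = b'{"type": "checkout.session.completed"}'
# With an empty secret, the attacker forges a "valid" signature:
attacker_sig = hmac.new(b"", forged, hashlib.sha256).hexdigest()

print(verify(forged, attacker_sig, ""))         # False: empty secret rejected
print(verify(forged, attacker_sig, "whsec_x"))  # False: MAC does not match
```

The third flaw (fulfilling orders regardless of gateway) still needs its own check that the webhook's payment method matches the order's.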

CVE-2026-31645: In the Linux kernel, the following vulnerability has been resolved: net: lan966x: fix page pool leak in error paths la

info · vulnerability
security
Apr 24, 2026
CVE-2026-31645

A vulnerability in the Linux kernel's lan966x network driver causes memory leaks when certain initialization functions fail. Specifically, a page pool (a memory management structure that pre-allocates memory pages for efficient network operations) is created but not properly cleaned up if later operations fail, wasting system memory.
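The leak pattern, sketched in Python rather than the driver's C (names here are illustrative, not the kernel's): a resource is created early in initialization, and every later error path must release it, which is what the missing `page_pool_destroy()` calls do in the fix.

```python
class PagePool:
    live = 0  # count of pools that were created but never destroyed

    def __init__(self):
        PagePool.live += 1

    def destroy(self):
        PagePool.live -= 1

def init_leaky():
    pool = PagePool()
    # A later initialization step fails...
    raise RuntimeError("later init step failed")  # pool never destroyed

def init_fixed():
    pool = PagePool()
    try:
        raise RuntimeError("later init step failed")
    except RuntimeError:
        pool.destroy()  # release on the error path, then propagate
        raise

try:
    init_leaky()
except RuntimeError:
    pass
print(PagePool.live)  # 1 -- one pool leaked

try:
    init_fixed()
except RuntimeError:
    pass
print(PagePool.live)  # 1 -- the fixed path released its pool, no new leak
```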

CVE-2026-31552: In the Linux kernel, the following vulnerability has been resolved: wifi: wlcore: Return -ENOMEM instead of -EAGAIN if

info · vulnerability
security
Apr 24, 2026
CVE-2026-31552

A bug in the Linux kernel's WiFi driver (wlcore) causes an infinite loop and system freeze when memory allocation fails during packet transmission. The driver incorrectly returns -EAGAIN (a 'try again' error code) instead of -ENOMEM (an 'out of memory' error code) when there isn't enough buffer space, which tricks the system into repeatedly retrying the same packet in a tight loop while holding a lock (mutex, a mechanism that prevents multiple parts of code from running simultaneously).
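A toy retry loop mirroring the bug, in Python rather than the driver's C (the iteration cap exists only so the demo terminates; the real driver spins forever): reporting a permanent failure as "try again" makes the caller retry the same packet indefinitely, while reporting "out of memory" lets it drop the packet and move on.

```python
EAGAIN, ENOMEM = "EAGAIN", "ENOMEM"

def allocate_buggy():
    return EAGAIN  # wrong: the failure is permanent, not transient

def allocate_fixed():
    return ENOMEM  # caller drops the packet and the loop ends

def tx_loop(allocate, max_iters=5):
    iters = 0
    while iters < max_iters:  # the real driver has no such cap
        iters += 1
        if allocate() == EAGAIN:
            continue          # retry the same packet, holding the lock
        break                 # ENOMEM (or success): stop retrying
    return iters

print(tx_loop(allocate_buggy))  # 5 -- hits the cap; uncapped, it never exits
print(tx_loop(allocate_fixed))  # 1 -- terminates immediately
```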

Glasswing Secured the Code. The Rest of Your Stack Is Still on You

info · news
security
Apr 24, 2026

Organizations often have forgotten software integrations, unauthorized IT systems (shadow IT), and now hidden AI tools and agents scattered across their networks that they don't fully track or manage. Attackers can exploit these overlooked systems without needing advanced AI models, making security harder when companies don't know what's running in their own infrastructure.

Why Cybersecurity Must Rethink Defense in the Age of Autonomous Agents

info · news
security · policy

The Download: supercharged scams and studying AI healthcare

info · news
security · industry

Elon Musk and Sam Altman’s court showdown will dish the dirt

info · news
policy
Apr 24, 2026

Elon Musk, who cofounded OpenAI but left after not becoming CEO, is suing the company and Sam Altman in a trial starting April 27th in Oakland, California. The lawsuit centers on claims that OpenAI committed fraud, though it also involves broader allegations of breach of contract and unfair business practices. This legal case is primarily about the conflict between Musk and Altman over control of the AI company.

Bridging the AI Agent Authority Gap: Continuous Observability as the Decision Engine

info · news
security · safety

Microsoft now lets admins uninstall Copilot on enterprise devices

info · news
security · policy

Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US

info · news
security · policy

China’s DeepSeek previews new AI model a year after jolting US rivals 

info · news
industry
Apr 24, 2026

Chinese AI company DeepSeek released a preview of its new V4 model, which is open-source (publicly available code that anyone can use and modify) and claims to match the performance of closed-source (proprietary, not publicly available) AI systems from US companies like OpenAI and Google. The V4 model shows major improvements in coding tasks, which are important for AI agents (AI systems that can take actions independently), and works well with Chinese chip technology from Huawei.

Prestigious photo contest answers ‘what is a photo?’

info · news
industry
Apr 24, 2026

The World Press Photo competition, a prestigious photojournalism award, has established rules about the use of generative AI (software that creates images from text descriptions) to determine eligibility for entries. The 2026 winning photograph, "Separated by ICE" by Carol Guzy, had to comply with these AI-related rules, reflecting the competition's effort to define what qualifies as authentic photography in an era where AI-generated images are becoming common.

Cohere to acquire German AI company Aleph Alpha as it looks to expand in Europe

info · news
industry
Apr 24, 2026

Cohere, a Canadian AI company, announced plans to acquire German AI company Aleph Alpha to expand in Europe, with Aleph Alpha's backer Schwarz Group investing $600 million in Cohere's upcoming funding round. The acquisition aims to combine both companies' strengths to offer sovereign AI (customized AI systems that keep data and control within a specific country or region) to regulated sectors like government, finance, and defense, while giving European organizations alternatives to relying on single AI providers. The deal is expected to close in 2026, pending regulatory approval.

Page 17 of 225
GitHub Advisory Database

Fix: Users on standard Claude Code auto-update have received this fix already. Users performing manual updates are advised to update to the latest version.

GitHub Advisory Database
GitHub Advisory Database

Fix: Fixed in version 1.83.7. The caller-supplied value is now always passed to the database as a separate parameter. Upgrade to 1.83.7 or later. Alternatively, if upgrading is not immediately possible, set `disable_error_logs: true` under `general_settings` to remove the path through which unauthenticated input reaches the vulnerable query.

GitHub Advisory Database
Hugging Face Security Advisories

Fix: Upgrade to version `1.83.7-stable` or later, which fixes the issue by switching the prompt template renderer to a sandboxed environment (a restricted area where code runs with limited permissions) that blocks the attack. If upgrading is not immediately possible, block the `POST /prompts/test` endpoint at your reverse proxy or API gateway, and review and rotate API keys that should not have access to prompt management routes.

GitHub Advisory Database
GitHub Advisory Database

Fix: Add the missing page_pool_destroy() calls in both error paths to properly clean up the page pool when initialization fails.

NVD/CVE Database

Fix: Return -ENOMEM instead of -EAGAIN when pskb_expand_head() fails in wl1271_tx_allocate() and wl1271_prepare_tx_frame() functions, so the packet is dropped and the loop terminates properly.

NVD/CVE Database
Dark Reading
Apr 24, 2026

Agentic AI (artificial intelligence systems that can make decisions and take actions without human intervention) is becoming a major cybersecurity concern because the same capabilities that help defenders also empower attackers to launch autonomous, adaptive, and large-scale attacks. The industry is responding by treating AI systems as identities (entities with credentials and access permissions) rather than separate tools, and using identity threat detection to monitor their behavior for suspicious activity.

Fix: The source recommends treating agentic AI as an identity and using identity threat detection and risk mitigation solutions as the main defense. This approach combines adaptive verification, behavioral analytics, device intelligence, and risk scoring in a unified platform to enable behavioral visibility, risk-based controls, unified policy enforcement across human and machine identities, and lifecycle management to prevent orphaned or unmanaged agents.

SecurityWeek
Apr 24, 2026

Cybercriminals are increasingly using LLMs (large language models, AI systems trained on massive amounts of text) to launch faster and cheaper attacks, including phishing emails (deceptive messages designed to steal information), deepfakes (AI-generated fake videos or images), and automated vulnerability scans (tools that search for security weaknesses). Meanwhile, AI tools are being deployed in healthcare for tasks like note-taking, reviewing patient records, and interpreting medical images, but researchers still don't know whether using these tools actually leads to better health outcomes for patients.

MIT Technology Review
The Verge (AI)
Apr 24, 2026

AI agents create a security challenge called the 'Authority Gap' because they inherit permissions from the humans and systems that activate them, rather than having their own independent authority. The article argues that enterprises cannot safely govern AI agents unless they first reduce 'identity dark matter' (hidden credentials and unmanaged permissions scattered across systems) in their traditional users and software, and then use continuous observability (real-time monitoring of who is doing what) to dynamically control what authority agents receive based on who is delegating to them and the context of their actions.

The Hacker News
Apr 24, 2026

Microsoft has released a new policy setting called RemoveMicrosoftCopilotApp that allows IT administrators to uninstall Copilot (an AI-powered digital assistant) from enterprise Windows devices, available after the April 2026 Patch Tuesday security update. The policy can be deployed through Group Policy or Policy CSP (configuration service provider, a system for managing Windows settings remotely) on devices managed by Microsoft Intune or SCCM (System Center Configuration Manager, enterprise management tools), and applies only to Windows 11 version 25H2 where users haven't launched Copilot in the last 28 days. Users can still reinstall Copilot if they choose to after it is uninstalled by the policy.

Fix: Enable the RemoveMicrosoftCopilotApp policy via the Policy CSP at either '/User/Vendor/MSFT/Policy/Config/WindowsAI/RemoveMicrosoftCopilotApp' or '/Device/Vendor/MSFT/Policy/Config/WindowsAI/RemoveMicrosoftCopilotApp' on MDM-managed devices, or through the equivalent Group Policy setting. When enabled, this policy uninstalls the Microsoft Copilot app from devices in the organization in a non-disruptive way. The setting applies to Enterprise, Professional, and Education client SKUs only.

BleepingComputer
Apr 24, 2026

The Trump administration is announcing plans to prevent foreign companies, especially those in China, from using 'model extraction attacks' (techniques that steal capabilities from U.S.-made AI systems by training weaker AI models on the outputs of stronger ones) to copy American AI innovations. The administration says it will work with U.S. AI companies to identify these extraction activities, build defenses, and punish offenders, while Congress is also proposing legislation to identify and sanction foreign actors who extract features from closed-source U.S. AI models.

SecurityWeek
The Verge (AI)
The Verge (AI)
CNBC Technology