All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Meta has signed a multiyear agreement with Nvidia to buy millions of processors (CPUs and GPUs, which are specialized chips for computing tasks) for its data centers that run AI systems. This deal includes Nvidia's Grace and Vera CPUs and Blackwell and Rubin GPUs, with plans to add next-generation Vera CPUs in 2027. Nvidia claims these chips will improve performance-per-watt (how much computing work gets done per unit of electricity used) in Meta's data centers.
Anthropic released Claude Sonnet 4.6, a new AI model that performs similarly to the more expensive Opus 4.5 while keeping Sonnet's cheaper pricing ($3 per million input tokens, $15 per million output tokens). The model has a knowledge cutoff (the date of information it was trained on) of August 2025 and supports up to 200,000 input tokens by default, with the option to use 1 million tokens in beta at higher cost.
The Feishu extension in OpenClaw had a vulnerability where the `sendMediaFeishu` function could be tricked into reading files directly from a computer's filesystem by treating attacker-controlled file paths as input. An attacker who could influence how the tool behaves (either directly or through prompt injection, where hidden instructions are hidden in the AI's input) could steal sensitive files like `/etc/passwd`.
OpenClaw versions before 2026.2.13 logged WebSocket request headers (like Origin and User-Agent) without cleaning them up, allowing attackers to inject malicious text into logs. If those logs are later read by an LLM (large language model, an AI system that processes text) for tasks like debugging, the attacker's injected text could trick the AI into doing something unintended (a technique called indirect prompt injection or log poisoning).
Google has announced that Google I/O 2026, its annual developer conference, will be held May 19-20 in Mountain View, California, with both in-person and online attendance options. The company plans to showcase AI advances and product updates across its services, including Gemini (Google's AI assistant) and Android, through keynotes, demos, and interactive sessions.
A BBC program discusses engaging chatbots and interviews NVIDIA about AI chat technology, exploring how to make AI conversations sound more human and examining emotional connections between people and AI systems. The program also covers how new technology is assisting stroke survivors.
This BBC Radio program discusses engaging chatbots and AI chat technology, including conversations with NVIDIA about making AI sound more human and exploring emotional connections with AI. The episode also covers how new technology is assisting stroke survivors.
Skill-scanner versions 1.0.1 and earlier have a vulnerability in their API Server (a network interface that lets external programs communicate with the software) where the server is incorrectly exposed to multiple network interfaces without proper authentication. An attacker could send requests to this server to cause a denial of service attack (making it unavailable by exhausting its resources) or upload files to unintended locations on the device.
A Pterodactyl Panel (server management software) API has a missing authorization check that allows any user with a node secret token (a credential for accessing a specific server cluster) to retrieve configuration data and manipulate servers on other nodes that they shouldn't have access to. This vulnerability requires an attacker to first obtain a node token, but once they do, they can access sensitive server information, installation scripts containing secrets, and even delete servers on other nodes.
Gogs, a self-hosted Git service, has a vulnerability where anyone can upload files without logging in if the RequireSigninView setting is disabled (which is the default). Attackers can upload arbitrary files to the server by obtaining a CSRF token (a security token to prevent cross-site request forgery) from the homepage and using it with the /issues/attachments or /releases/attachments endpoints, potentially filling up disk space, hosting malware, or abusing the server as a public file storage service.
OpenClaw's Slack integration had a vulnerability where Slack channel descriptions could be injected into the AI model's system prompt (the instructions that tell the AI how to behave). This allowed attackers to use prompt injection (tricking an AI by hiding instructions in its input) to potentially trigger unintended actions or expose data if tool execution was enabled.
Anthropic released Claude Sonnet 4.6, a new AI model that performs better at coding, computer use, and data processing tasks, making it the default option for free and paid users. This launch reflects the intense competition in the AI industry, with Anthropic releasing two major models in less than two weeks to keep pace with rivals like OpenAI and Google.
Figma has partnered with Anthropic to launch a feature called 'Code to Canvas' that converts AI-generated code (from tools like Claude Code) into editable designs within Figma's platform. This allows teams to take working interfaces created by AI agents, refine them, compare options, and make design decisions together in Figma, bridging the gap between AI coding tools and design workflows.
WordPress has introduced a new AI assistant that lets users edit their websites by typing natural language requests (instructions written in plain English rather than code) instead of manually making changes. The AI can edit and translate text, generate and modify images, and adjust site elements like creating pages or changing fonts, accessible through the site editor sidebar and block notes feature (a commenting tool added in WordPress 6.9).
Anthropic released Sonnet 4.6, an updated version of its mid-size AI model with improvements in coding, instruction-following, and computer use (the ability to interact with computer interfaces). The new model features a context window (the amount of text an AI can read and remember at once) of 1 million tokens, double the previous size, allowing it to process entire codebases or dozens of research papers in one request.
Dell RecoverPoint for Virtual Machines (RP4VMs) has a vulnerability where passwords are hard-coded (built directly into the software rather than created by users), allowing attackers without authorization to remotely access the system and gain root-level persistence (permanent control of the computer). This vulnerability is currently being actively exploited by attackers.
Fix: Apply mitigations per vendor instructions (see Dell support documentation at https://www.dell.com/support/kbdoc/en-us/000426773/dsa-2026-079 and https://www.dell.com/support/kbdoc/en-us/000426742/recoverpoint-for-vms-apply-the-remediation-script-for-dsa), follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. Due date: 2026-02-21.
CISA Known Exploited VulnerabilitiesGitLab has a server-side request forgery vulnerability (SSRF, a flaw that allows attackers to make requests to internal networks on behalf of the server) that can be triggered when webhook functionality is enabled. This vulnerability is actively being exploited by attackers in the wild.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
CISA Known Exploited VulnerabilitiesTesla is adding Grok, an AI chatbot from Elon Musk's company xAI, to its vehicle infotainment systems (the dashboard computers that control entertainment and information) in the U.K. and nine other European markets. However, Grok has faced multiple regulatory investigations across Europe and Asia because it lacks safety guardrails, allowing users to create deepfake explicit images (fake videos or photos that look real but are computer-generated) of real people without consent, generate hate speech, and interact inappropriately with minors. Safety researchers also worry that adding chatbots to cars creates a "distraction layer" that could increase driver distraction while driving.
Fix: Upgrade to OpenClaw version 2026.2.14 or newer. The fix removes direct local file reads and routes media loading through hardened helpers that enforce local-root restrictions.
GitHub Advisory DatabaseFix: Upgrade to `openclaw@2026.2.13` or later. Alternatively, if you cannot upgrade immediately, the source mentions two workarounds: treat logs as untrusted input when using AI-assisted debugging by sanitizing and escaping them, and do not auto-execute instructions derived from logs; or restrict gateway network access and apply reverse-proxy limits on header size.
GitHub Advisory DatabaseCyberattacks are accelerating due to AI, with threat actors moving from initial system access to stealing data in as little as 72 minutes, but most successful attacks exploit basic security failures like weak authentication (verification of user identity), poor visibility into systems, and misconfigured security tools rather than sophisticated exploits. Identity management is a critical weakness, with excessive permissions affecting 99% of analyzed cloud accounts and identity-based attacks playing a role in 90% of incidents investigated.
Fix: Palo Alto Networks launched Unit 42 XSIAM 2.0 (an expanded managed SOC service, which is a Security Operations Center or team that monitors and responds to threats), which the company claims includes complete onboarding, threat hunting and response, and faster modeling of attack patterns compared to traditional SOCs.
CSO OnlineFix: Update to Skill-scanner version 1.0.2 or later, which contains the fix for this vulnerability.
GitHub Advisory DatabaseFix: Upgrade to openclaw version 2026.2.3 or later. If you do not use the Slack integration, no action is required.
GitHub Advisory DatabaseResearchers discovered that AI assistants like Microsoft Copilot and Grok, which can browse the web and fetch URLs, can be abused as command-and-control (C2) proxies, a stealthy communication channel that lets attackers send commands to malware and receive data back while blending in with normal business communications. This technique, which requires the attacker to have already compromised a machine, works without needing API keys or accounts, making traditional security measures like key revocation ineffective. The attack demonstrates how AI tools can be weaponized beyond just generating malware, but also as intelligent intermediaries that help attackers adapt their strategies in real time based on information from the compromised system.