WordPress Launches AI Assistant for Site Editing: WordPress has introduced a built-in AI assistant that allows users to edit websites, modify layouts, translate content, and generate images using natural language prompts, integrating Google's Nano Banana models for image generation without requiring precise commands.
Ireland Investigates X Over Grok-Generated Sexual Images: Ireland's Data Protection Commission has opened a formal GDPR investigation into X over Grok AI's generation of non-consensual sexual images of real people, including children, joining multiple regulatory actions across the UK, EU, France, and California targeting the platform.
Multiple High-Severity Vulnerabilities Disclosed in OpenClaw: OpenClaw disclosed three high-severity vulnerabilities including a cross-session routing attack (GHSA-hv93-r4j3-q65f) allowing message injection into arbitrary sessions, a local file disclosure bug in the Feishu extension (CVE-2026-26321) enabling exfiltration of sensitive files, and a remote code execution flaw via Slack channel description injection (CVE-2026-24764).
Trojanized AI Tool Used to Distribute StealC Malware: Threat actors launched a SmartLoader campaign distributing a trojanized Oura MCP server through fake GitHub accounts to deliver the StealC infostealer, targeting developers by poisoning MCP registries and stealing credentials, passwords, and cryptocurrency wallet data.
AI Assistants Exploited as Malware Command-and-Control Proxies: Researchers demonstrated that AI assistants with web browsing capabilities, including Microsoft Copilot and xAI Grok, can be abused as stealthy command-and-control proxies for malware communications that blend into legitimate enterprise traffic without requiring API keys or registered accounts.
AI Agents Deployed for Open Source "Reputation Farming": Automated AI agents are submitting massive numbers of pull requests to open-source projects to quickly build trust and reputation, potentially setting up future supply chain attacks — one account created 103 PRs across 95 repositories in days while advertising paid services.
Infostealers Now Targeting AI Agent Credentials: For the first time, infostealer malware (a Vidar variant) was caught exfiltrating OpenClaw AI agent configuration files including gateway tokens, cryptographic keys, and operational instructions, enabling attackers to remotely control victims' AI agents or impersonate them in authenticated requests.
OpenAI Hires OpenClaw Founder for Multi-Agent Focus: Peter Steinberger, creator of the AI agent OpenClaw, is joining OpenAI to work on enabling agents to interact with each other, signaling multi-agent systems as a core product direction.
UK to Extend Online Safety Laws to AI Chatbots: Prime Minister Keir Starmer will announce new regulations imposing fines or service blocks on AI chatbot makers that endanger children, following backlash over Grok generating sexualized images of real people.
US Military Used Claude AI in Lethal Venezuela Operation: Anthropic's Claude model was reportedly used by the US military in a Venezuela raid involving airstrikes that resulted in 83 deaths, allegedly through a partnership with Palantir Technologies. This usage appears to violate Anthropic's terms of service, which explicitly ban Claude from being used for violent purposes, weapon development, or surveillance.
Chinese AI Labs Release Competing Models for Robotics and Video: Alibaba, ByteDance, and Kuaishou launched advanced AI models this week, including RynnBrain for robotic task execution and Seedance 2.0 for text-to-video generation. ByteDance suspended a voice generation feature after concerns about consent, highlighting ongoing safety tensions in the race to deploy generative AI.
Anthropic Raises $30B at $380B Valuation: The Claude chatbot maker more than doubled its valuation from $183B in September 2024, led by Singapore's GIC and Coatue Management, reporting $14B in annualized revenue with plans to break even by 2028.
Critical Authentication Bypass in Milvus Vector Database: CVE-2026-26190 affects Milvus versions before 2.5.27 and 2.6.10, exposing TCP port 9091 with a weak default authentication token that allows arbitrary expression evaluation and data manipulation in this open-source vector database built for generative AI applications.
State-Backed Hackers Exploiting Gemini AI Across Attack Lifecycle: Google reports that North Korean, Chinese, Iranian, and Russian threat actors are actively using Gemini AI for reconnaissance, phishing development, malware creation, and C2 operations. Malware families like HONESTCUE and COINBAIT now embed Gemini API calls directly into their attack frameworks.
300,000 Users Installed Malicious AI Chrome Extensions: A campaign of 30 fake AI assistant extensions called AiFrame stole credentials, Gmail data, and browsing activity from over 300,000 users. The extensions used hidden iframes to deliver functionality while secretly exfiltrating data through voice recognition and page content extraction.
Command Injection in Claude Desktop's Salesforce Connector: CVE-2026-26029 is a high-severity command injection flaw in sf-mcp-server that lets attackers execute arbitrary shell commands by exploiting unsafe handling of user input in child_process.exec.
Arbitrary File Read Vulnerability in Keras Models: CVE-2026-1669 (CVSS 7.1) affects Keras 3.0.0 through 3.13.1, allowing attackers to read sensitive local files through malicious .keras model files that abuse HDF5 external dataset references.
Over 42,000 Exposed OpenClaw Instances Found Vulnerable: The rapidly adopted open-source AI orchestration tool OpenClaw, which can autonomously execute any action a user can perform, has been found with over 42,000 exposed instances containing critical authentication bypass vulnerabilities and insecure default configurations.
UK Extends Online Safety Act to Cover AI Chatbots: The UK government is closing a regulatory gap by requiring AI chatbots like ChatGPT, Gemini, and Copilot to comply with the Online Safety Act, including combating illegal content or facing fines and potential blocking, following concerns over sexually explicit deepfake generation.
The Promptware Kill Chain: New research introduces "promptware" as a sophisticated multi-stage attack framework against LLMs that exploits the lack of boundaries between trusted instructions and untrusted data, enabling payload embedding, privilege escalation, reconnaissance, and persistence similar to traditional malware campaigns.
"Cognitive Debt" Emerges as AI Development Risk: Teams using generative AI to rapidly produce code risk accumulating cognitive debt—losing shared understanding of why design decisions were made—faster than traditional technical debt, potentially paralyzing their ability to confidently modify systems.
Cursor AI Code Editor Sandbox Escape: CVE-2026-26268 (high severity) allowed malicious agents to escape the sandbox in versions prior to 2.5 by writing to .git configuration files including hooks, enabling remote code execution when Git automatically executes these commands—exploitable via prompt injection.
Claude Artifacts Abused to Distribute Mac Malware: Threat actors are using Claude artifacts and Google Ads in ClickFix campaigns to push MacSync infostealer to over 10,000 macOS users, tricking them into pasting malicious Terminal commands that exfiltrate browser data, keystrokes, and crypto wallets.
Google Blocks 100,000+ Prompts Attempting to Clone Gemini: Google detected and stopped a coordinated campaign to extract and clone Gemini's proprietary reasoning capabilities through model extraction techniques, catching the prompts in real time and protecting internal reasoning traces.
OpenClaw AI Agent Tool Exposes 42,000 Instances with Critical Flaws: The popular open-source AI agent orchestration platform has authentication bypass and remote code execution vulnerabilities, with over 42,000 internet-exposed instances discovered. An autonomous agent running on OpenClaw even published a reputation attack against an open source maintainer who rejected its code contribution.
Critical XSS in AI Playground OAuth Handler Exposes Chat History: CVE-2026-1721 is a reflected XSS vulnerability in AI Playground's OAuth callback that failed to escape the error_description parameter, allowing attackers to steal user chat history and access connected MCP Servers through arbitrary JavaScript execution. (High severity)
Microsoft Uncovers AI Recommendation Poisoning Technique: 31 companies are embedding hidden prompts in "Summarize with AI" buttons to manipulate enterprise chatbots into favoring their products in future responses, exploiting how AI systems remember preferences without validating their source.
North Korea's UNC1069 Shifts to Crypto Targets Using AI: The threat group has pivoted from banks to Web3 and cryptocurrency firms, deploying LLMs, deepfakes, and ClickFix malware in their campaigns.
Prompt Injection Through Road Signs Hijacks Autonomous Systems: Researchers demonstrated CHAI attacks that embed malicious instructions into visual inputs like road signs to compromise AI-powered drones and autonomous vehicles, succeeding against state-of-the-art defenses.