All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Google announced Nano Banana 2, a new image generation model (software that creates images from text descriptions) that produces more realistic images faster than previous versions. The model will become the default option across Google's Gemini app, Search, and other tools, and can maintain consistency for up to five characters and 14 objects in a single image. All images generated will include a SynthID watermark (a digital marker identifying AI-created content) and support C2PA Content Credentials (an industry standard for tracking media authenticity).
Google has released Nano Banana 2, a more powerful version of its AI image generation model that is now available to free users instead of just paid subscribers. This update brings advanced image generation features that were previously exclusive to the paid Pro version, allowing users to create complex images faster and more cheaply by combining real-time information and web search capabilities.
A security flaw in n8n's GitHub Webhook Trigger node allowed attackers to forge webhook messages without proper authentication. The node failed to verify HMAC-SHA256 signatures (a cryptographic check that confirms a message came from GitHub), so anyone knowing the webhook URL could send fake requests and trigger workflows with whatever data they wanted.
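The missing check is straightforward to implement: GitHub sends the payload's HMAC-SHA256 digest in the `X-Hub-Signature-256` header, and the receiver recomputes it with the shared webhook secret. A minimal sketch of that verification (function name and secret are illustrative, not n8n's actual code):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """Return True only if signature_header matches HMAC-SHA256(secret, payload)."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest performs a constant-time comparison, avoiding timing leaks
    return hmac.compare_digest(expected, signature_header)
```

Any request whose signature fails this check should be rejected before the workflow runs; without it, possession of the webhook URL alone is enough to inject arbitrary payloads.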
n8n (a workflow automation tool) had a SQL injection vulnerability (a type of attack where specially crafted input tricks a database into running unintended commands) in its MySQL, PostgreSQL, and Microsoft SQL nodes. Attackers who could create or edit workflows could inject malicious SQL code through table or column names because these nodes didn't properly escape identifier values when building database queries.
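The root cause is that identifiers (table and column names) cannot be passed as bound query parameters the way values can, so they must be validated and quoted separately. A minimal sketch of the general defense, using SQLite for illustration (the helper name and allowlist pattern are assumptions, not n8n's fix):

```python
import re
import sqlite3

def quote_identifier(name: str) -> str:
    """Validate and quote a table/column name before interpolating it into SQL."""
    # Reject anything that is not a plain identifier; bound parameters
    # cannot be used here, so an allowlist check is the safety net.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        raise ValueError(f"illegal identifier: {name!r}")
    return '"' + name + '"'

conn = sqlite3.connect(":memory:")
conn.execute(f"CREATE TABLE {quote_identifier('users')} (id INTEGER)")
# A hostile "table name" such as 'users (id); DROP TABLE users; --'
# raises ValueError instead of being executed.
```

Without a check like this, a crafted table name is spliced verbatim into the query string, which is exactly the injection path described above.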
CVE-2026-3071 is a vulnerability in Flair (a machine learning library) versions 0.4.1 and later that allows arbitrary code execution (running unauthorized commands on a system) when loading a malicious model file. The problem occurs because the LanguageModel class deserializes untrusted data (converts data from an external file without checking if it's safe), which can be exploited by attackers who provide specially crafted model files.
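The underlying mechanism is Python's pickle protocol (which PyTorch-based model files have historically used): an object's `__reduce__` method lets the serialized file name an arbitrary callable to invoke at load time. A deliberately benign sketch of the technique, with `eval("2 + 2")` standing in for what a real exploit would run (e.g. `os.system`):

```python
import pickle

class MaliciousPayload:
    # __reduce__ tells pickle which callable to invoke when the file is loaded.
    # An attacker controls both the callable and its arguments.
    def __reduce__(self):
        return (eval, ("2 + 2",))  # benign stand-in for os.system("...")

blob = pickle.dumps(MaliciousPayload())
result = pickle.loads(blob)  # eval("2 + 2") runs during deserialization
```

This is why loading a model file from an untrusted source is equivalent to running that source's code; mitigations include only loading models from trusted publishers or using loaders that restrict deserialization to plain tensor data.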
Norway's $2 trillion sovereign wealth fund (Norges Bank Investment Management) is using Anthropic's Claude AI model, a large language model (an AI trained on vast text data to generate human-like responses), to screen investments for ethical and governance risks. The AI tool scans companies for potential issues like forced labor or corruption within 24 hours of investment, helping the fund identify and sell risky positions before broader market awareness, with particular value for researching smaller companies in emerging markets where local language news coverage is limited.
Anthropic has revived Claude 3 Opus, a retired AI model, to write a weekly newsletter called Claude's Corner on Substack where it will share creative content and insights. Anthropic staff will review and publish each post without editing the AI's writing, though the company reserves the right to remove content that meets unspecified criteria.
A study found that ChatGPT Health, a feature that lets users connect their medical records to get health advice, failed to recommend hospital visits in over half of cases where they were medically necessary and often missed signs of suicidal ideation (thoughts of suicide). Experts worry this could cause serious harm or death, since over 40 million people ask ChatGPT for health advice daily.
Trace, a new startup, raised $3 million to help companies deploy AI agents more effectively by providing them with proper context about the company's existing tools and workflows. The company builds a knowledge graph (a structured map of how data and systems connect) from a company's email, Slack, and other tools, then uses this context to automatically create step-by-step workflows that assign tasks to both AI agents and human workers. This approach aims to solve a major barrier to enterprise AI adoption, which is the difficulty of setting up and integrating AI agents into complex business environments.
Figma is integrating OpenAI's Codex, an AI coding tool, to let users create and edit designs while working in their coding environments. The integration uses Figma's MCP (Model Context Protocol, a standardized way for AI models to access external tools and data) server to let users move easily between design files and code, allowing both engineers and designers to work more collaboratively without switching between separate applications.
Anthropic discovered and fixed security vulnerabilities in Claude (an AI assistant) that could allow attackers to silently compromise developer computers through specially crafted configuration files. Security researchers at Check Point showed how these flaws could be exploited in real-world attacks.
The article argues that the cybersecurity industry's strategy of relying on employees as a 'last line of defense' is fundamentally flawed, comparing it to asking untrained farmers to repel professional soldiers. The real human layer in security should be the trained security professionals (like CISOs and SOC analysts), not regular employees, because user reporting systems create noise that overwhelms security teams rather than improving defense.
Google API keys that were originally created as public identifiers for Google Maps became dangerous security risks when Google enabled the Gemini API on the same projects, because Gemini keys can access private files and make billable requests, yet developers were never notified of this privilege change. Truffle Security discovered nearly 3,000 exposed API keys in web archives that could access Gemini, including some belonging to Google itself, highlighting how a service upgrade unexpectedly transformed harmless public keys into secret credentials.
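Scanners find such keys because Google API keys follow a well-known shape: the literal prefix `AIza` followed by 35 URL-safe characters. A minimal sketch of that heuristic (this is a common community pattern, not necessarily Truffle Security's exact tooling):

```python
import re

# Heuristic: "AIza" prefix plus 35 URL-safe characters (39 characters total).
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z\-_]{35}")

def find_google_api_keys(text: str) -> list[str]:
    """Return all strings in text that look like Google API keys."""
    return GOOGLE_API_KEY_RE.findall(text)
```

Running a pattern like this over archived HTML and JavaScript is how thousands of "public" Maps keys, now silently upgraded to Gemini access, can be harvested at scale.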
Nvidia CEO Jensen Huang argued that markets are wrong to fear AI agents will destroy software companies, saying instead that AI agents are 'tool users' that will rely on existing enterprise software tools like Excel, ServiceNow, and SAP to become more productive. Huang's comments came after Nvidia reported strong earnings and raised its revenue forecast, though some analysts warn that certain software companies could still face serious challenges as AI automates workflows and lowers barriers for new competitors.
Nvidia CEO Jensen Huang downplayed concerns about a dispute between the U.S. Defense Department and Anthropic, a company that makes Claude (a large language model, or LLM). The disagreement centers on whether Anthropic's AI tools can be used for autonomous weapons (weapons that make decisions without human control) and mass surveillance, with the Defense Department demanding unrestricted use while Anthropic seeks limitations.
Langflow, a tool for building AI-powered agents and workflows, had a vulnerability in versions before 1.8.0 where the CSV Agent node automatically enabled a dangerous Python execution feature. This allowed attackers to run arbitrary Python and operating system commands on the server through prompt injection (tricking the AI by hiding instructions in its input), resulting in RCE (remote code execution, where an attacker can run commands on a system they don't own).
Fix: The issue has been fixed in n8n versions 2.5.0 and 1.123.15. Users should upgrade to one of these versions or later to remediate the vulnerability. If upgrading is not immediately possible, administrators should consider these temporary mitigations: (1) Limit workflow creation and editing permissions to fully trusted users only, and (2) Restrict network access to the n8n webhook endpoint to known GitHub webhook IP ranges. The source notes these workarounds do not fully remediate the risk and should only be used as short-term measures.
GitHub Advisory Database
Fix: The issue has been fixed in n8n version 2.4.0. Users should upgrade to this version or later to remediate the vulnerability. If upgrading is not immediately possible, administrators should: (1) Limit workflow creation and editing permissions to fully trusted users only, or (2) Disable the MySQL, PostgreSQL, and Microsoft SQL nodes by adding `n8n-nodes-base.mySql`, `n8n-nodes-base.postgres`, and `n8n-nodes-base.microsoftSql` to the `NODES_EXCLUDE` environment variable. These workarounds do not fully remediate the risk and should only be used as short-term mitigation measures.
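The node-disabling workaround can be applied at startup. A sketch of what that might look like; the exact value format (a JSON array passed as a string) should be confirmed against your n8n version's environment-variable documentation:

```shell
# Temporary mitigation: disable the affected database nodes until upgrade.
export NODES_EXCLUDE='["n8n-nodes-base.mySql","n8n-nodes-base.postgres","n8n-nodes-base.microsoftSql"]'
n8n start
```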
GitHub Advisory Database
Attackers are breaking into systems and moving through networks much faster than before, with some reaching data theft in just 4-6 minutes, compared with a 2025 average of 29 minutes. They're achieving this speed by reusing stolen but legitimate login credentials, using AI tools to automate attacks, and evading malware detection by relying on normal system administration tools instead. The bulletin also describes specific threats such as ResidentBat (Android spyware targeting journalists), phishing attacks impersonating cryptocurrency services, and Kali Linux's new integration of Claude (an AI system) to execute hacking commands.
Hackers are compromising networks much faster in 2025, taking an average of only 29 minutes to gain full access compared to 83 minutes in 2024, with the fastest recorded time being just 27 seconds. The main driver of this acceleration is attackers' increased use of AI tools, particularly among state-sponsored and criminal groups, who have boosted their activity by 89 percent. Examples include LLM-based malware (malware built on large language models, AI systems trained on vast amounts of text) for automating information gathering, and AI-generated scripts for extracting credentials and covering attackers' tracks.
Large language models (LLMs, AI systems trained on text data) are very bad at generating passwords because they create predictable patterns instead of truly random ones. The study found that Claude, an LLM, always started passwords with an uppercase G followed by 7, avoided repeating characters, never used the * symbol, and repeated the same password 36% of the time across 50 attempts. This is a serious problem because autonomous AI agents (AI systems that act without human control) will need to create accounts and authenticate themselves, but the passwords they generate are weak and easy to crack.
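The fix is to never ask a language model for randomness in the first place: credentials should come from a cryptographically secure random number generator. A minimal sketch using Python's standard library (alphabet and length are illustrative choices):

```python
import secrets
import string

# Draw each character from a CSPRNG; contrast with the predictable
# LLM-generated passwords described above (fixed prefixes, repeats).
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Generate a uniformly random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

An AI agent that needs to create accounts should call a generator like this (or, better, a password manager's API) rather than producing the password text itself.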
RSA 2026 will focus on five cybersecurity trends, including AI-SOCs (security operations centers using autonomous agents to handle alert triage and incident response), CTEM (continuous threat exposure management, which gives organizations a complete view of their assets and vulnerabilities to prioritize risk), and cyber resilience (the ability to anticipate, withstand, recover from, and adapt to attacks). Security leaders should approach these trends with cautious skepticism, asking tough questions about vendor claims and ensuring strong data foundations before adopting new tools.
Fix: Google is working to revoke affected keys. Developers should also audit their own API keys to verify none are affected by this issue.
Simon Willison's Weblog
Fix: Langflow version 1.8.0 fixes the issue.
NVD/CVE Database