aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3144 items

Google launches Nano Banana 2 model with faster image generation

info · news
industry
Feb 26, 2026

Google announced Nano Banana 2, a new image generation model (software that creates images from text descriptions) that produces more realistic images faster than previous versions. The model will become the default option across Google's Gemini app, Search, and other tools, and can maintain consistency for up to five characters and 14 objects in a single image. All images generated will include a SynthID watermark (a digital marker identifying AI-created content) and support C2PA Content Credentials (an industry standard for tracking media authenticity).

TechCrunch

Google’s Nano Banana 2 brings advanced AI image tools to free users

info · news
industry
Feb 26, 2026

Google has released Nano Banana 2, a more powerful version of its AI image generation model that is now available to free users instead of just paid subscribers. This update brings advanced image generation features that were previously exclusive to the paid Pro version, allowing users to create complex images faster and more cheaply by combining real-time information and web search capabilities.

GHSA-mqpr-49jj-32rc: n8n: Webhook Forgery on Github Webhook Trigger

medium · vulnerability
security
Feb 26, 2026

A security flaw in n8n's GitHub Webhook Trigger node allowed attackers to forge webhook messages without proper authentication. The node failed to verify HMAC-SHA256 signatures (a cryptographic check that confirms a message came from GitHub), so anyone knowing the webhook URL could send fake requests and trigger workflows with whatever data they wanted.
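The missing check is straightforward to implement. Below is a minimal sketch of GitHub's documented X-Hub-Signature-256 scheme (the function name and secret are illustrative, not n8n's actual code): the receiver recomputes an HMAC-SHA256 of the raw request body under the shared webhook secret and compares it to the header in constant time.

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """Reject webhook deliveries whose X-Hub-Signature-256 header does not
    match an HMAC-SHA256 of the raw request body under the shared secret."""
    if not signature_header or not signature_header.startswith("sha256="):
        return False
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the digest via timing.
    return hmac.compare_digest(expected, signature_header)

# A request forged without knowledge of the secret fails the check:
body = b'{"action": "opened"}'
good = "sha256=" + hmac.new(b"s3cret", body, hashlib.sha256).hexdigest()
print(verify_github_signature(b"s3cret", body, good))          # True
print(verify_github_signature(b"s3cret", body, "sha256=0000")) # False
```

Without this check, as the advisory describes, anyone who learns the webhook URL can trigger workflows with arbitrary payloads.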

GHSA-f3f2-mcxc-pwjx: n8n: SQL Injection in MySQL, PostgreSQL, and Microsoft SQL nodes

medium · vulnerability
security
Feb 26, 2026

n8n (a workflow automation tool) had a SQL injection vulnerability (a type of attack where specially crafted input tricks a database into running unintended commands) in its MySQL, PostgreSQL, and Microsoft SQL nodes. Attackers who could create or edit workflows could inject malicious SQL code through table or column names because these nodes didn't properly escape identifier values when building database queries.
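Identifiers cannot be passed as bind parameters the way values can, so they need separate handling. A hedged sketch of the general technique (not n8n's actual fix; the function name is illustrative): allow-list the identifier's characters, then quote it, while values still go through ordinary bind parameters.

```python
import re

def quote_identifier(name: str) -> str:
    """Validate and quote a table/column name for SQL string building.
    Values belong in bind parameters; identifiers cannot be bound, so
    they must be allow-listed and quoted instead."""
    # Strict allow-list: letters, digits, underscores; no leading digit.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        raise ValueError(f"illegal identifier: {name!r}")
    # ANSI double-quote wrapping; doubling embedded quotes is redundant
    # under the allow-list but kept as defense in depth.
    return '"' + name.replace('"', '""') + '"'

query = f"SELECT * FROM {quote_identifier('users')} WHERE id = %s"
print(query)  # SELECT * FROM "users" WHERE id = %s

try:
    quote_identifier('users"; DROP TABLE users; --')
except ValueError as err:
    print("rejected:", err)
```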

CVE-2026-3071: Deserialization of untrusted data in the LanguageModel class of Flair, versions 0.4.1 to latest, allows arbitrary code execution

high · vulnerability
security
Feb 26, 2026
CVE-2026-3071

CVE-2026-3071 is a vulnerability in Flair (a machine learning library) versions 0.4.1 and later that allows arbitrary code execution (running unauthorized commands on a system) when loading a malicious model file. The problem occurs because the LanguageModel class deserializes untrusted data (converts data from an external file without checking if it's safe), which can be exploited by attackers who provide specially crafted model files.
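A toy demonstration of why deserializing untrusted data is dangerous: Python's pickle, which PyTorch-based checkpoint loading commonly uses under the hood, will invoke a callable chosen by whoever produced the file. The class name and the attacker payload below are invented for illustration; loading alone is enough to run the code.

```python
import pickle
import sys

class MaliciousCheckpoint:
    # pickle calls __reduce__ to decide how to serialize this object;
    # on load, the returned callable is invoked with the given arguments.
    def __reduce__(self):
        return (exec, ("import sys; sys._pwned = 'attacker code ran'",))

blob = pickle.dumps(MaliciousCheckpoint())   # the "model file" on disk
pickle.loads(blob)                           # the victim merely loads it...
print(sys._pwned)                            # attacker code ran
```

General mitigations (not advisory-specific guidance) include loading checkpoints with mechanisms that refuse arbitrary object construction, such as PyTorch's `weights_only` loading mode or tensor-only formats like safetensors.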

The world's biggest sovereign wealth fund is using Anthropic's Claude AI model to screen investments for ethical issues

info · news
industry
Feb 26, 2026

Norway's $2 trillion sovereign wealth fund (Norges Bank Investment Management) is using Anthropic's Claude AI model, a large language model (an AI trained on vast text data to generate human-like responses), to screen investments for ethical and governance risks. The AI tool scans companies for potential issues like forced labor or corruption within 24 hours of investment, helping the fund identify and sell risky positions before broader market awareness, with particular value for researching smaller companies in emerging markets where local language news coverage is limited.

ThreatsDay Bulletin: Kali Linux + Claude, Chrome Crash Traps, WinRAR Flaws, LockBit & 15+ Stories

info · news
security · industry

Anthropic gives its retired Claude AI a Substack 

info · news
industry
Feb 26, 2026

Anthropic has revived Claude 3 Opus, a retired AI model, to write a weekly newsletter called Claude's Corner on Substack where it will share creative content and insights. Anthropic staff will review and publish each post without editing the AI's writing, though the company reserves the right to remove content that meets unspecified criteria.

‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies

info · news
safety
Feb 26, 2026

A study found that ChatGPT Health, a feature that lets users connect their medical records to get health advice, failed to recommend hospital visits in over half of cases where they were medically necessary and often missed signs of suicidal ideation (thoughts of suicide). Experts worry this could cause serious harm or death, since over 40 million people ask ChatGPT for health advice daily.

Trace raises $3M to solve the AI agent adoption problem in enterprise

info · news
industry
Feb 26, 2026

Trace, a new startup, raised $3 million to help companies deploy AI agents more effectively by providing them with proper context about the company's existing tools and workflows. The company builds a knowledge graph (a structured map of how data and systems connect) from a company's email, Slack, and other tools, then uses this context to automatically create step-by-step workflows that assign tasks to both AI agents and human workers. This approach aims to solve a major barrier to enterprise AI adoption, which is the difficulty of setting up and integrating AI agents into complex business environments.

Figma partners with OpenAI to bake in support for Codex

info · news
industry
Feb 26, 2026

Figma is integrating OpenAI's Codex, an AI coding tool, to let users create and edit designs while working in their coding environments. The integration uses Figma's MCP (Model Context Protocol, a standardized way for AI models to access external tools and data) server to let users move easily between design files and code, allowing both engineers and designers to work more collaboratively without switching between separate applications.

Claude Code Flaws Exposed Developer Devices to Silent Hacking

high · news
security
Feb 26, 2026

Anthropic discovered and fixed security vulnerabilities in Claude (an AI assistant) that could allow attackers to silently compromise developer computers through specially crafted configuration files. Security researchers at Check Point showed how these flaws could be exploited in real-world attacks.

Hackers are compromising ever faster

info · news
security · industry

LLMs Generate Predictable Passwords

medium · news
safety · security

The farmers and the mercenaries: Rethinking the ‘human layer’ in security

info · news
security
Feb 26, 2026

The article argues that the cybersecurity industry's strategy of relying on employees as a 'last line of defense' is fundamentally flawed, comparing it to asking untrained farmers to repel professional soldiers. The real human layer in security should be the trained security professionals (like CISOs and SOC analysts), not regular employees, because user reporting systems create noise that overwhelms security teams rather than improving defense.

5 trends that should top CISOs' RSA 2026 agendas

info · news
security · industry

Google API Keys Weren't Secrets. But then Gemini Changed the Rules.

high · news
security
Feb 25, 2026

Google API keys originally created as public identifiers for Google Maps became dangerous security risks when Google enabled the Gemini API on the same projects: Gemini keys can access private files and make billable requests, yet developers were never notified of this privilege change. Truffle Security discovered nearly 3,000 exposed API keys in web archives that could access Gemini, including some belonging to Google itself, showing how a service upgrade unexpectedly transformed harmless public keys into secret credentials.
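One way to tell whether an exposed key has quietly gained Gemini access is to try the Gemini API's public model-listing call: an accepted key returns a normal response, a rejected key returns an HTTP error. This sketch assumes the `v1beta` ListModels endpoint from Google's Generative Language API documentation; it is an illustration, not the tooling Truffle Security used.

```python
import urllib.error
import urllib.parse
import urllib.request

GEMINI_LIST_MODELS = "https://generativelanguage.googleapis.com/v1beta/models"

def gemini_models_url(api_key: str) -> str:
    """Build the ListModels URL, authenticating with the key as a query
    parameter (the same way web pages historically embedded Maps keys)."""
    return GEMINI_LIST_MODELS + "?" + urllib.parse.urlencode({"key": api_key})

def key_reaches_gemini(api_key: str, timeout: float = 10.0) -> bool:
    """True if the Gemini API accepts the key, meaning a key minted as a
    public Maps identifier has become a billable secret."""
    try:
        with urllib.request.urlopen(gemini_models_url(api_key), timeout=timeout):
            return True                      # 2xx: the key can call Gemini
    except urllib.error.HTTPError:
        return False                         # 400/403: rejected or API disabled
```

Only run such a check against keys you own or are authorized to test.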

Nvidia’s Jensen Huang says markets ‘got it wrong’ on AI threat to software companies

info · news
industry
Feb 25, 2026

Nvidia CEO Jensen Huang argued that markets are wrong to fear AI agents will destroy software companies, saying instead that AI agents are 'tool users' that will rely on existing enterprise software tools like Excel, ServiceNow, and SAP to become more productive. Huang's comments came after Nvidia reported strong earnings and raised its revenue forecast, though some analysts warn that certain software companies could still face serious challenges as AI automates workflows and lowers barriers for new competitors.

Nvidia’s Huang says any Pentagon–Anthropic rift is 'not the end of the world'

info · news
policy
Feb 25, 2026

Nvidia CEO Jensen Huang downplayed concerns about a dispute between the U.S. Defense Department and Anthropic, a company that makes Claude (a large language model, or LLM). The disagreement centers on whether Anthropic's AI tools can be used for autonomous weapons (weapons that make decisions without human control) and mass surveillance, with the Defense Department demanding unrestricted use while Anthropic seeks limitations.

CVE-2026-27966: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.8.0, the CSV Agent node automatically enabled a dangerous Python execution feature

critical · vulnerability
security
Feb 25, 2026
CVE-2026-27966

Langflow, a tool for building AI-powered agents and workflows, had a vulnerability in versions before 1.8.0 where the CSV Agent node automatically enabled a dangerous Python execution feature. This allowed attackers to run arbitrary Python and operating system commands on the server through prompt injection (tricking the AI by hiding instructions in its input), resulting in RCE (remote code execution, where an attacker can run commands on a system they don't own).
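A toy model of this vulnerability class, not Langflow's actual code: an agent that executes model-produced Python must make that capability opt-in, because prompt-injected input can steer the model's output to arbitrary code. All names below (`toy_csv_agent`, `allow_dangerous_code`) are hypothetical.

```python
def toy_csv_agent(model_output: str, allow_dangerous_code: bool = False):
    """Toy stand-in for an LLM agent with a Python execution tool.
    Instructions hidden in a CSV can become `model_output`, so running
    it must be explicitly opted into, never enabled by default."""
    if not allow_dangerous_code:
        raise PermissionError("code execution disabled; enable explicitly")
    scope: dict = {}
    exec(model_output, scope)    # the agent's code-execution tool
    return scope.get("result")

# Injected instructions in the data became model output:
payload = "import os; result = os.getcwd()"
try:
    toy_csv_agent(payload)               # blocked by the safe default
except PermissionError as err:
    print("blocked:", err)
```

The fix described in the advisory follows the same principle: the dangerous execution path is no longer switched on automatically.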

Sources and fixes

Google's Nano Banana 2 brings advanced AI image tools to free users
The Verge (AI)

GHSA-mqpr-49jj-32rc: n8n: Webhook Forgery on Github Webhook Trigger
GitHub Advisory Database
Fix: The issue has been fixed in n8n versions 2.5.0 and 1.123.15. Users should upgrade to one of these versions or later to remediate the vulnerability. If upgrading is not immediately possible, administrators should consider these temporary mitigations: (1) limit workflow creation and editing permissions to fully trusted users only, and (2) restrict network access to the n8n webhook endpoint to known GitHub webhook IP ranges. The source notes these workarounds do not fully remediate the risk and should only be used as short-term measures.

GHSA-f3f2-mcxc-pwjx: n8n: SQL Injection in MySQL, PostgreSQL, and Microsoft SQL nodes
GitHub Advisory Database
Fix: The issue has been fixed in n8n version 2.4.0. Users should upgrade to this version or later to remediate the vulnerability. If upgrading is not immediately possible, administrators should: (1) limit workflow creation and editing permissions to fully trusted users only, or (2) disable the MySQL, PostgreSQL, and Microsoft SQL nodes by adding `n8n-nodes-base.mySql`, `n8n-nodes-base.postgres`, and `n8n-nodes-base.microsoftSql` to the `NODES_EXCLUDE` environment variable. These workarounds do not fully remediate the risk and should only be used as short-term mitigation measures.

CVE-2026-3071 (Flair)
NVD/CVE Database

The world's biggest sovereign wealth fund is using Anthropic's Claude AI model to screen investments for ethical issues
CNBC Technology

ThreatsDay Bulletin: Kali Linux + Claude, Chrome Crash Traps, WinRAR Flaws, LockBit & 15+ Stories
The Hacker News, Feb 26, 2026
Attackers are breaking into systems and moving through networks much faster than before, with some reaching data theft in just 4-6 minutes compared to 29 minutes on average in 2025. They're achieving this speed by reusing stolen, legitimate login credentials, using AI tools to automate attacks, and avoiding malware detection by relying on normal system administration tools instead. The bulletin also describes specific threats like ResidentBat (Android spyware targeting journalists), phishing attacks impersonating cryptocurrency services, and Kali Linux now integrating Claude (an AI system) to execute hacking commands.

Anthropic gives its retired Claude AI a Substack
The Verge (AI)

'Unbelievably dangerous': experts sound alarm after ChatGPT Health fails to recognise medical emergencies
The Guardian Technology

Trace raises $3M to solve the AI agent adoption problem in enterprise
TechCrunch

Figma partners with OpenAI to bake in support for Codex
TechCrunch

Claude Code Flaws Exposed Developer Devices to Silent Hacking
SecurityWeek

Hackers are compromising ever faster
CSO Online, Feb 26, 2026
Hackers are compromising networks much faster in 2025, taking an average of only 29 minutes to gain full access compared to 83 minutes in 2024, with the fastest recorded time being just 27 seconds. The main reason for this acceleration is the increased use of AI tools by attackers, particularly state-sponsored and criminal groups who have boosted their activity by 89 percent, with examples including LLM-based malware (AI models trained on large amounts of text data) for automating information gathering and AI-generated scripts for extracting credentials and covering their tracks.

LLMs Generate Predictable Passwords
Schneier on Security, Feb 26, 2026
Large language models (LLMs, AI systems trained on text data) are very bad at generating passwords because they create predictable patterns instead of truly random ones. The study found that Claude, an LLM, always started passwords with an uppercase G followed by 7, avoided repeating characters, never used the * symbol, and repeated the same password 36% of the time across 50 attempts. This is a serious problem because autonomous AI agents (AI systems that act without human control) will need to create accounts and authenticate themselves, but the passwords they generate are weak and easy to crack.

The farmers and the mercenaries: Rethinking the 'human layer' in security
CSO Online

5 trends that should top CISOs' RSA 2026 agendas
CSO Online, Feb 26, 2026
RSA 2026 will focus on five cybersecurity trends, including AI-SOCs (security operations centers using autonomous agents to handle alert triage and incident response), CTEM (continuous threat exposure management, which gives organizations a complete view of their assets and vulnerabilities to prioritize risk), and cyber resilience (the ability to anticipate, withstand, recover from, and adapt to attacks). Security leaders should approach these trends with cautious skepticism, asking tough questions about vendor claims and ensuring strong data foundations before adopting new tools.

Google API Keys Weren't Secrets. But then Gemini Changed the Rules.
Simon Willison's Weblog
Fix: Google is working to revoke affected keys. Additionally, Google recommends checking your own API keys to verify none of yours are affected by this issue.

Nvidia's Jensen Huang says markets 'got it wrong' on AI threat to software companies
CNBC Technology

Nvidia's Huang says any Pentagon–Anthropic rift is 'not the end of the world'
CNBC Technology

CVE-2026-27966 (Langflow)
NVD/CVE Database
Fix: Version 1.8.0 fixes the issue.
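The behaviour described under "LLMs Generate Predictable Passwords" has a simple alternative: agents can draw passwords from a cryptographically secure random number generator instead of asking a model. A minimal sketch using Python's standard library:

```python
import secrets
import string

def random_password(length: int = 20) -> str:
    """Generate a password from a CSPRNG. Unlike the LLM output in the
    study (fixed 'G7' prefix, frequent repeats), each character is drawn
    uniformly and independently."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# 50 draws yield 50 distinct passwords, versus a 36% repeat rate for the LLM:
passwords = {random_password() for _ in range(50)}
print(len(passwords))
```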