aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

4464 items

GHSA-f77h-j2v7-g6mw: n8n Vulnerable to Hijacking of Unauthenticated Chat Execution

medium · vulnerability · security
Apr 29, 2026
CVE-2026-42228

n8n's Chat Trigger feature had a security flaw where the `/chat` WebSocket endpoint (a communication channel) didn't check if users were authorized to access workflow executions. An attacker who could guess a valid execution ID (a unique identifier for a running workflow instance) could connect to an unprotected chat workflow, intercept prompts meant for legitimate users, and inject their own commands to change how the workflow behaves.

Fix: The issue has been fixed in n8n versions 1.123.32, 2.17.4, and 2.18.1. Users should upgrade to one of these versions or later. As a temporary workaround, administrators can enable authentication on all Chat Trigger nodes by setting the Authentication field to `n8n User Auth` rather than `None`, though this does not fully eliminate the risk.

GitHub Advisory Database

GHSA-mp4j-h6gh-f6mp: n8n has SQL Injection in SeaTable Node

medium · vulnerability · security
Apr 29, 2026
CVE-2026-42229

A SQL injection (inserting malicious code into database queries) flaw in n8n's SeaTable node allowed attackers to manipulate search and row retrieval operations when user-controlled input was passed into the node without proper safeguards, potentially exposing unintended database rows. The vulnerability required a specific workflow setup where external input from sources like forms or webhooks was directly used in search parameters.
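The standard defense is parameterized queries, where user input is bound as data rather than concatenated into the SQL text. A minimal illustration using Python's stdlib `sqlite3` (not n8n's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rows (id INTEGER, name TEXT)")
conn.execute("INSERT INTO rows VALUES (1, 'a'), (2, 'b')")

def search(conn, term):
    # Placeholder binding: the driver sends `term` as data, never as SQL text.
    return conn.execute("SELECT id FROM rows WHERE name = ?", (term,)).fetchall()

# A classic injection payload now matches nothing instead of altering the query.
assert search(conn, "' OR '1'='1") == []
assert search(conn, "a") == [(1,)]
```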

GHSA-f6x8-65q6-j9m9: n8n has Open Redirect in MCP OAuth Consent Flow

medium · vulnerability · security
Apr 29, 2026
CVE-2026-42230

n8n's MCP OAuth consent flow allowed attackers to register arbitrary redirect URLs (destinations where users are sent after denying permission) without authentication. An attacker could trick a user into clicking a malicious link; when the user clicked "Deny" on the consent dialog, they were redirected to the attacker's website instead of a legitimate page. This could be used for phishing (tricking users into giving up sensitive information).
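The usual fix for open redirects is to validate the destination against an allowlist before redirecting. A hedged sketch (hostnames are hypothetical, and this is not n8n's implementation):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"app.example.com"}  # hypothetical allowlist of trusted hosts

def safe_redirect(url: str, fallback: str = "/") -> str:
    parsed = urlparse(url)
    # Relative paths (no scheme, no host) stay on the current site.
    if not parsed.scheme and not parsed.netloc:
        return url
    # Absolute URLs must be http(s) and point at an allowlisted host.
    if parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS:
        return url
    return fallback  # anything else falls back to a known-safe page
```

Note that scheme-relative URLs like `//evil.test/x` carry a netloc without a scheme, so the first branch must check both; they fall through to the fallback here.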

GHSA-r6jc-mpqw-m755: n8n has SQL Injection in Oracle Database Node via Limit Field

medium · vulnerability · security
Apr 29, 2026
CVE-2026-42233

n8n, a workflow automation tool, had a SQL injection vulnerability (a type of attack where malicious SQL commands are inserted into input fields) in its Oracle Database node. The flaw allowed attackers to inject arbitrary SQL commands through the `Limit` field when external user input was used, potentially letting them steal data from the connected Oracle database.
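Numeric clauses like `Limit` often cannot be bound as query parameters, so the common defense is strict type coercion and range checking before interpolation. An illustrative sketch (not n8n's code; `FETCH FIRST n ROWS ONLY` is Oracle's row-limiting syntax):

```python
def build_limited_query(base: str, limit) -> str:
    # LIMIT/FETCH counts usually cannot be sent as bound parameters, so
    # coerce to int and range-check before interpolating into the SQL text.
    n = int(limit)  # raises ValueError on payloads like "10; DROP TABLE users"
    if not (1 <= n <= 10_000):
        raise ValueError("limit out of range")
    return f"{base} FETCH FIRST {n} ROWS ONLY"
```

Because `int()` rejects anything that is not a plain integer literal, injected SQL in the limit field fails closed instead of reaching the database.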

GHSA-hp3c-vfpm-q4f7: n8n has SQL Injection in Snowflake and MySQL Nodes

medium · vulnerability · security
Apr 29, 2026
CVE-2026-42237

n8n's Snowflake and MySQL v1 nodes have a SQL injection vulnerability (a type of attack where malicious SQL code is inserted into input fields) because they directly insert user-controlled table and column names into database queries without proper protection. An attacker who can create workflows could use this to steal, change, or delete data in the connected database.
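Table and column names cannot be sent as bound parameters, so the typical defense is to validate identifiers against a strict pattern and quote them before interpolation. A generic sketch (not the actual n8n fix):

```python
import re

# Conservative identifier pattern: letters, digits, underscore, must not
# start with a digit. Anything else is rejected outright.
IDENT_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def quote_ident(name: str) -> str:
    # Identifiers (table/column names) cannot be bound as parameters, so
    # validate against a strict pattern and double-quote the result.
    if not IDENT_RE.match(name):
        raise ValueError(f"illegal identifier: {name!r}")
    return f'"{name}"'
```

An injection attempt such as `users"; DROP TABLE x--` fails the pattern check instead of being spliced into the query.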

Google Search queries hit an ‘all time high’ last quarter

info · news · industry
Apr 29, 2026

Google reported record-breaking search queries in Q1 2026, with CEO Sundar Pichai attributing the growth to AI investments and new AI experiences integrated into their products. The company saw 19% revenue growth in search, over 350 million paid subscriptions across services like Gemini App and YouTube, and Pichai highlighted this as their strongest quarter for consumer AI products.

GHSA-55m9-299j-53c7: OneCollector exporter reads unbounded HTTP response bodies

medium · vulnerability · security
Apr 29, 2026
CVE-2026-41484

The OneCollector exporter (a tool that sends telemetry data, which is information about how a program is running, to a backend server) has a flaw where it reads error responses from failed HTTP requests without limiting how much data it accepts. If an attacker controls the backend server or intercepts the connection, they can send an extremely large response that exhausts the application's memory and crashes it (a denial-of-service attack, where a system is made unavailable).
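The general defense, for this advisory and the similar OpenTelemetry.Resources.Azure one, is to cap how many bytes are read from an untrusted response before buffering. A language-neutral sketch in Python (the affected exporters are .NET; the 4 MiB figure matches the cap described in the advisories' fixes):

```python
import io

MAX_ERROR_BODY = 4 * 1024 * 1024  # 4 MiB cap, as in the described fixes

def read_bounded(stream, limit: int = MAX_ERROR_BODY) -> bytes:
    # Read at most `limit` bytes from the response stream; a hostile server
    # streaming gigabytes can no longer exhaust the application's memory.
    data = stream.read(limit + 1)
    if len(data) > limit:
        return data[:limit]  # truncate oversized bodies instead of buffering them
    return data
```

Whether the oversized body is truncated (as here) or discarded entirely (as the Azure detector fix does) is a policy choice; the essential property is that memory use is bounded regardless of what the server sends.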

Where the goblins came from

low · news · safety · research

Designing trust and safety into Amazon Bedrock powered applications

info · news · safety · policy

LLM 0.32a0 is a major backwards-compatible refactor

info · news · industry
Apr 29, 2026

LLM 0.32a0 is an alpha release that redesigns how the LLM Python library handles inputs and outputs to better support modern AI models. Instead of the old simple text-in, text-out model, it now represents conversations as sequences of messages (with user and assistant roles) and allows responses to contain different types of content, making it easier to work with APIs like OpenAI's chat completions.
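The messages-with-roles shape the release describes can be illustrated with a tiny sketch (illustrative only; these are not LLM's actual classes or API):

```python
from dataclasses import dataclass

# Hypothetical minimal shape for a role-tagged conversation, in the spirit
# of the redesign the post describes (not LLM's real data model).
@dataclass
class Message:
    role: str      # "system" | "user" | "assistant"
    content: str

conversation = [
    Message("user", "What is n8n?"),
    Message("assistant", "A workflow automation tool."),
    Message("user", "Is it open source?"),
]

def to_chat_format(msgs):
    # The list-of-dicts shape expected by chat-completion style APIs.
    return [{"role": m.role, "content": m.content} for m in msgs]
```

The contrast with the old text-in, text-out model is that each turn keeps its role, so multi-turn state and mixed content types have a natural place to live.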

llm 0.32a0

info · news · industry
Apr 29, 2026

This is a brief announcement about llm version 0.32a0, posted by Simon Willison on April 29, 2026. The post appears to be part of a monthly briefing series covering important LLM developments, with an option for readers to sponsor the author for curated updates.

GHSA-vc24-j8c5-2vw4: OpenTelemetry.Resources.Azure has an unbounded HTTP response body read

medium · vulnerability · security
Apr 29, 2026
CVE-2026-41483

OpenTelemetry.Resources.Azure has a vulnerability where it reads unlimited amounts of data from Azure VM metadata service responses into memory, allowing an attacker to cause the application to crash by sending extremely large responses (a denial of service attack where the system runs out of memory). This affects applications using the Azure VM resource detector that connect to a compromised or intercepted metadata endpoint.

All the evidence unveiled so far in Musk v. Altman

info · news · industry
Apr 29, 2026

A legal trial between Elon Musk and Sam Altman is revealing documents from OpenAI's founding, including emails and corporate records that show Musk drafted much of OpenAI's early mission and structure, Nvidia provided computational resources, and early leaders had concerns about various aspects of the organization's direction. The case is still ongoing and more evidence is expected to be disclosed as it progresses.

OpenAI’s subtle drift from Microsoft has become an aggressive move toward Amazon

info · news · industry
Apr 29, 2026

OpenAI has restructured its relationship with Microsoft multiple times in six months, most recently ending Microsoft's exclusive access to OpenAI's models and technology. The company is now moving its AI services to Amazon Web Services (cloud computing infrastructure), Microsoft's major competitor, after committing $100+ billion in spending to AWS and receiving a $50 billion investment from Amazon. This shift suggests OpenAI is deliberately diversifying away from its decade-long partnership with Microsoft to work with multiple cloud providers and meet more customers' needs.

Building the compute infrastructure for the Intelligence Age

info · news · industry
Apr 29, 2026

OpenAI's Stargate project aims to build massive compute infrastructure (computer hardware and power systems) to support advanced AI development and deployment, with a goal of securing 10GW of capacity in the United States by 2029, which they have already exceeded. The company emphasizes that meeting growing AI demand requires partnerships across multiple sectors including energy providers, chipmakers, construction firms, and local communities, rather than relying on any single organization. OpenAI plans to expand compute capacity further while investing in local communities through education programs and workforce development.

Tumbler Ridge families are suing OpenAI

info · news · safety · policy

ChatGPT downloads are slowing — and may cause problems for OpenAI’s IPO

info · news · industry
Apr 29, 2026

ChatGPT is experiencing slower growth and rising uninstall rates, with users leaving the app or switching to competing chatbots. According to market data, uninstalls jumped 413 percent year-over-year in May following OpenAI's partnership with the Pentagon, while monthly user growth dropped from 168 percent in January to 78 percent in April.

New Wave of DPRK Attacks Uses AI-Inserted npm Malware, Fake Firms, and RATs

high · news · security
Apr 29, 2026

Researchers discovered malicious code in npm packages (repositories where developers share reusable code) that were designed to steal cryptocurrency wallet credentials and funds. The attack, linked to North Korean hackers, used a two-layer approach where harmless-looking packages contained hidden dependencies that executed the actual malware, and the malicious packages mimicked the names of legitimate libraries to avoid detection.

Wiz Code Week Recap: Securing AI Native Development

info · news · security · industry

Larry’s risky business

info · news · industry
Apr 29, 2026

Oracle, a traditional database company, has shifted its business strategy to focus on AI rather than building its own foundation models (large language models like ChatGPT). Instead, it is positioning itself as a software-as-a-service provider (cloud-based software you access online) in the AI infrastructure space, betting on a specific version of AI's future as its traditional database business declines.

Page 10 of 224

Fix (GHSA-mp4j-h6gh-f6mp, SeaTable SQL injection): The issue has been fixed in n8n versions 1.123.32, 2.17.4, and 2.18.1. Users should upgrade to one of these versions or later. If upgrading is not immediately possible, temporary mitigations include: restricting workflow creation and editing permissions to trusted users only; disabling the SeaTable node by adding `n8n-nodes-base.seaTable` to the `NODES_EXCLUDE` environment variable; and avoiding unvalidated external user input in SeaTable node parameters.

Source: GitHub Advisory Database

Fix (GHSA-f6x8-65q6-j9m9, MCP OAuth open redirect): The issue has been fixed in n8n versions 1.123.32, 2.17.4, and 2.18.1. Users should upgrade to one of these versions or later. If upgrading is not immediately possible, administrators can restrict network access to the n8n instance so that untrusted users cannot reach the MCP OAuth endpoints, or limit access to the instance to fully trusted users only. However, the source notes these workarounds do not fully remediate the risk and should only be used as short-term measures.

Source: GitHub Advisory Database

Fix (GHSA-r6jc-mpqw-m755, Oracle Database node): The issue has been fixed in n8n versions 1.123.32, 2.17.4, and 2.18.1. Users should upgrade to one of these versions or later. If upgrading is not immediately possible, temporary mitigations include: limiting workflow creation and editing permissions to fully trusted users only; disabling the Oracle Database node by adding `n8n-nodes-base.oracleDatabase` to the `NODES_EXCLUDE` environment variable; and avoiding passing unvalidated external user input into the node's `Limit` field via expressions. The source notes these workarounds do not fully remediate the risk and should only be used as short-term measures.

Source: GitHub Advisory Database

Fix (GHSA-hp3c-vfpm-q4f7, Snowflake and MySQL nodes): The issue has been fixed in n8n versions 1.123.32, 2.17.4, and 2.18.1. Users should upgrade to one of these versions or later. If immediate upgrade is not possible, temporary workarounds include: limiting workflow creation and editing permissions to trusted users only; migrating from the legacy MySQL v1 node to the MySQL v2 node, which escapes identifiers (protection against SQL injection); disabling the Snowflake node by adding `n8n-nodes-base.snowflake` to the `NODES_EXCLUDE` environment variable; and avoiding passing unvalidated external user input into table name, column name, or update key fields in the affected nodes. The source notes these workarounds do not fully remediate the risk and should only be used as short-term measures.

Source: GitHub Advisory Database
Source (Google Search queries hit an 'all time high' last quarter): The Verge (AI)

Fix (GHSA-55m9-299j-53c7, OneCollector exporter): Update to the version with PR #4117 applied, which limits the number of bytes read from error response bodies to 4 MiB (mebibytes). Additionally, use network-level controls such as firewall rules, mTLS (mutual TLS, a security protocol for encrypting connections), or a service mesh to prevent man-in-the-middle attacks on the configured backend/collector endpoint.

Source: GitHub Advisory Database
Where the goblins came from (Apr 29, 2026): Starting with GPT-5.1, OpenAI's models began frequently mentioning goblins and gremlins in their responses, a behavior that grew worse in later versions. The root cause was the training process for the "Nerdy" personality feature, which unknowingly gave high rewards to outputs containing creature metaphors, causing the model to learn and amplify this quirk over time. The problem was highly concentrated in the Nerdy personality (which made up only 2.5% of responses but accounted for 66.7% of goblin mentions) and was identified by comparing model outputs and analyzing which reward signals (scoring systems that guide AI training) favored creature-word language.
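The concentration statistic can be read as a simple over-representation ("lift") ratio, computed directly from the two figures in the post:

```python
# The post's concentration figures expressed as an over-representation
# ("lift") ratio: the Nerdy personality's share of goblin mentions divided
# by its share of overall responses.
share_of_responses = 0.025  # Nerdy produced 2.5% of responses...
share_of_mentions = 0.667   # ...but accounted for 66.7% of goblin mentions

lift = share_of_mentions / share_of_responses
print(f"{lift:.1f}x over-represented")  # prints: 26.7x over-represented
```

A lift near 1 would mean the quirk was spread evenly across personalities; a lift near 27 is what justified pinning the root cause on that one reward pathway.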

Source: OpenAI Blog
Designing trust and safety into Amazon Bedrock powered applications (Apr 29, 2026): This article outlines how to build safety and trust into AI applications using Amazon Bedrock (AWS's generative AI service) by following a responsible AI framework. Organizations that implement responsible AI practices report significant business benefits, including an 82% improvement in employee trust and a 25% increase in customer loyalty. Safety should be integrated throughout the AI development lifecycle across three phases: design and development (evaluating risks and building guardrails), deployment (implementing multiple layers of protection, including red team testing, which simulates attacks to find vulnerabilities), and operations (continuous monitoring and adaptation as technology and usage patterns evolve).

Fix: The source describes approaches rather than specific technical fixes. For design and development, it recommends thoroughly evaluating safety risks, understanding application capabilities and limits, and building safety guardrails from the beginning. For deployment, it recommends multiple layers of protection: comprehensive user training, proactive monitoring and review processes, clear safety protocols and user guidelines, and red team testing. For operations, it recommends real-time feedback mechanisms, regular performance evaluations, and continuous monitoring for shifts in application usage or function that could compromise safety.

Source: AWS Security Blog
Source (both llm 0.32a0 posts): Simon Willison's Weblog

Fix (GHSA-vc24-j8c5-2vw4, OpenTelemetry.Resources.Azure): Fixed in OpenTelemetry.Resources.Azure version 1.15.0-beta.2. The fix limits HttpClient reads so that response bodies are streamed rather than loaded entirely into memory, with responses larger than 4 MiB being ignored. As workarounds, you can disable the Azure VM resource detector or use network-level controls (firewall rules, mTLS, or a service mesh) to prevent man-in-the-middle attacks on the Azure VM instance metadata endpoint.

Source: GitHub Advisory Database
Source (All the evidence unveiled so far in Musk v. Altman): The Verge (AI)
Source (OpenAI's subtle drift from Microsoft has become an aggressive move toward Amazon): CNBC Technology
Source (Building the compute infrastructure for the Intelligence Age): OpenAI Blog
Tumbler Ridge families are suing OpenAI (Apr 29, 2026): Seven families are suing OpenAI and its CEO after a school shooting in Tumbler Ridge, Canada, claiming the company failed to alert police about the shooter's suspicious ChatGPT activity. The families allege that OpenAI detected concerning conversations about gun violence but stayed silent to protect its reputation and an upcoming IPO (initial public offering, when a company first sells stock to the public).

Source: The Verge (AI)
Source (ChatGPT downloads are slowing — and may cause problems for OpenAI's IPO): The Verge (AI)
Source (New Wave of DPRK Attacks Uses AI-Inserted npm Malware, Fake Firms, and RATs): The Hacker News
Wiz Code Week Recap: Securing AI Native Development (Apr 29, 2026): AI models can now find and exploit software vulnerabilities faster than security teams can defend against them, creating urgent security challenges for AI-driven development. Wiz responded by launching an AI-BOM (a tool that automatically catalogs AI frameworks, models, and IDE extensions such as GitHub Copilot and Cursor) to give security teams visibility into how AI tools interact with their data, and by embedding security guardrails directly into developer IDEs through plugins that catch hardcoded secrets, misconfigurations, and AI-specific risks like prompt injection (tricking an AI by hiding instructions in its input) before code is committed.

Fix: Wiz Code plugins for AI-native IDEs (like Claude Code and Cursor) embed security directly into development workflows using pre-commit hooks (automated checks that run before code is saved) to catch hardcoded secrets, IaC (infrastructure-as-code) misconfigurations, vulnerabilities, and AI-specific issues. Additionally, Wiz Skills allow developers to automatically pull active security issues from the Wiz Security Graph and apply fixes directly in the IDE using the Wiz Green Agent, which generates fixes based on full code-to-cloud context.
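A pre-commit secret check of the kind described can be sketched as a few regex rules (the patterns here are illustrative examples, not Wiz's actual detection rules, which the source does not publish):

```python
import re

# Hypothetical pre-commit check in the spirit of the described plugins:
# scan staged text for obvious hardcoded-secret shapes. Real scanners use
# far more extensive rule sets plus entropy analysis.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r'(?i)(api[_-]?key|secret)\s*=\s*["\'][^"\']{16,}["\']'),
]

def find_secrets(text: str) -> list[str]:
    # Return every substring that matches a known secret pattern.
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]
```

Wired into a pre-commit hook, a non-empty result would block the commit and surface the offending line to the developer before the secret ever reaches the repository.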

Source: Wiz Research Blog
Source (Larry's risky business): The Verge (AI)