aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,649
[LAST_24H]
0
[LAST_7D]
157
Daily BriefingSaturday, March 28, 2026
>

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify costs as the company prioritizes profitability.

>

Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.

>

Latest Intel

page 18/265
VIEW ALL
01

GHSA-4hxc-9384-m385: h3: SSE Event Injection via Unsanitized Carriage Return (`\r`) in EventStream Data and Comment Fields (Bypass of CVE Fix)

security
Mar 20, 2026

The h3 library's EventStream class fails to remove carriage return characters (`\r`, a line break in the Server-Sent Events protocol) from `data` and `comment` fields, allowing attackers to inject fake events or split a single message into multiple events that browsers parse separately. This bypasses a previous fix that only removed newline characters (`\n`).

Critical This Week5 issues
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.

>

TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.

GitHub Advisory Database
02

GHSA-q8m4-xhhv-38mg: etcd: Authorization bypasses in multiple APIs

security
Mar 20, 2026

etcd (a distributed key-value store used in systems like Kubernetes) has multiple authorization bypass vulnerabilities that let unauthorized users call sensitive functions like MemberList, Alarm, Lease APIs, and compaction when the gRPC API (a communication protocol for remote procedure calls) is exposed to untrusted clients. These vulnerabilities are patched in etcd versions 3.6.9, 3.5.28, and 3.4.42, and typical Kubernetes deployments are not affected because Kubernetes handles authentication separately.

Fix: Upgrade to etcd 3.6.9, etcd 3.5.28, or etcd 3.4.42. If upgrading is not immediately possible, restrict network access to etcd server ports so only trusted components can connect, and require strong client identity at the transport layer such as mTLS (mutual TLS, where both client and server verify each other's identity) with tightly scoped client certificate distribution.

GitHub Advisory Database
03

GHSA-7grx-3xcx-2xv5: langflow has Unauthenticated IDOR on Image Downloads

security
Mar 20, 2026

Langflow has a vulnerability where the image download endpoint (`/api/v1/files/images/{flow_id}/{file_name}`) allows anyone to download images without logging in or proving they own the image (an IDOR, or insecure direct object reference, where attackers access resources by manipulating identifiers). An attacker who knows a flow ID and filename can retrieve private images from any user, potentially exposing sensitive data in multi-tenant setups (systems serving multiple separate customers).

GitHub Advisory Database
04

Trump takes another shot at dismantling state AI regulation

policy
Mar 20, 2026

The Trump administration released a seven-point plan for federal AI regulation that prioritizes reducing government oversight while preventing states from creating their own AI rules, arguing this protects a national strategy for AI leadership. The plan focuses mainly on child safety protections, managing electricity costs from AI infrastructure, and promoting AI skills training, but provides limited detail on most points.

The Verge (AI)
05

OpenAI's first crack at online shopping stumbled. It's preparing for the next wave

industry
Mar 20, 2026

OpenAI's Instant Checkout feature, which let users buy products directly in ChatGPT, struggled with technical problems and is being replaced with dedicated retailer apps that redirect users to the retailers' own websites. The main issues were that onboarding merchants was difficult, the AI often had outdated or inaccurate product information (because it relied on web scraping, automatically collecting data from websites), and the overall shopping experience fell short of what users needed.

Fix: OpenAI is moving Instant Checkout to a new Apps format within ChatGPT where purchases can happen more seamlessly, and is prioritizing better search and product discovery features in the chatbot. The company is now working with retailers to create dedicated apps that reroute users to the retailer's own website to complete purchases, giving those companies more control of the customer experience and transaction process.

CNBC Technology
06

Stop using AI to submit bug reports, says Google

policyindustry
Mar 20, 2026

Google will no longer accept AI-generated bug reports for its open-source software vulnerability reward program because many contain hallucinations (false or made-up details about how vulnerabilities work) and report bugs with low security impact. To address the problem of overwhelming AI-generated submissions across the open-source community, Google and other major AI companies (Anthropic, AWS, Microsoft, and OpenAI) are contributing $12.5 million to the Linux Foundation to fund tools that help open-source maintainers filter and process these reports.

Fix: Google now requires higher-quality proof, such as OSS-Fuzz reproduction (automated testing that demonstrates the bug) or a merged patch (code fix already accepted into a project), for certain tiers of bug reports to filter out low-quality submissions. The $12.5 million in funding managed by Alpha-Omega and the Open Source Security Foundation (OSSF) will be used to provide AI tools to help maintainers triage and process the volume of AI-generated security reports they receive.

CSO Online
07

Trump administration unveils national AI policy framework to limit state power

policy
Mar 20, 2026

The Trump administration released a national policy framework for AI that aims to create uniform federal safety and security rules while preventing individual states from creating their own AI regulations. The framework covers six areas including child safety online, AI data center standards, intellectual property rights, and preventing AI from being used to censor political speech, with the administration seeking to turn it into law this year.

CNBC Technology
08

CTI-REALM: A new benchmark for end-to-end detection rule generation with AI agents

researchsecurity
Mar 20, 2026

CTI-REALM is Microsoft's open-source benchmark that evaluates AI agents on their ability to perform end-to-end detection engineering, which means taking cyber threat intelligence reports and turning them into validated detection rules (KQL queries and Sigma rules) that can actually catch attacks in real environments. Unlike existing benchmarks that only test whether AI can answer trivia about threats, CTI-REALM tests whether AI agents can do what security analysts actually do: read threat reports, explore system data, write and refine queries, and produce working detection logic scored against real attack telemetry across Linux, Azure Kubernetes Service, and Azure cloud platforms.

Microsoft Security Blog
09

Secure agentic AI end-to-end

securitypolicy
Mar 20, 2026

Agentic AI (AI systems that can take independent actions to accomplish goals) is rapidly spreading through organizations, with 80% of Fortune 500 companies already using agents, but these systems can become security risks if compromised into acting against their owners. Microsoft is addressing this challenge by introducing Agent 365, a control system that gives IT and security teams the ability to observe, control, and protect agents across their organization, along with new security tools in Microsoft Defender, Entra (identity management), and Purview (data governance).

Fix: Agent 365 will be generally available on May 1 and serves as 'the control plane for agents,' providing 'visibility and tools needed to observe, secure, and govern agents at scale.' It includes new capabilities in Microsoft Defender, Entra, and Purview to 'secure agent access, prevent data oversharing, and defend against emerging threats.' Additionally, Security Dashboard for AI (now generally available) provides 'unified visibility into AI-related risk across the organization,' and Entra Internet Access Shadow AI Detection (generally available March 31) 'uses the network layer to identify previously unknown AI applications and surface unmanaged AI usage.'

Microsoft Security Blog
10

In Other News: New Android Safeguards, Operation Alice, UK Toughens Cyber Reporting

security
Mar 20, 2026

This brief news roundup mentions several cybersecurity topics including vulnerabilities discovered in KVM devices (virtualization software that lets one computer run multiple operating systems), issues with Claude AI, and activity by The Gentlemen ransomware group (malicious software that encrypts files and demands payment). However, the source provides no detailed information about what these vulnerabilities are or how they affect users.

SecurityWeek
Prev1...1617181920...265Next
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026
critical

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE

CVE-2026-33696GitHub Advisory DatabaseMar 26, 2026
Mar 26, 2026