aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,649
[LAST_24H]
0
[LAST_7D]
157
Daily BriefingSaturday, March 28, 2026
>

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify costs as the company prioritizes profitability.

>

Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.

>

Latest Intel

page 24/265
VIEW ALL
01

DLSS 5: Has Nvidia’s AI graphics technology gone too far?

industry
Mar 18, 2026

Nvidia has released DLSS 5, a new 3D guided neural rendering model (an AI system that generates realistic graphics in real-time) that can alter a game's lighting and materials during gameplay. Many gamers have criticized the technology for changing how games look in ways they didn't expect, with complaints that it distorts character appearances and doesn't respect the original artists' creative vision.

Critical This Week5 issues
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.

>

TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.

The Verge (AI)
02

Reco targets AI agent blind spots with new security capability

securityindustry
Mar 18, 2026

Reco, a SaaS security platform, launched "Reco AI Agent Security" on March 18 to address "agent sprawl," the problem of autonomous AI agents (like Copilot and ChatGPT integrations) accessing sensitive data and taking actions across multiple systems without human oversight. The new tool gives security teams visibility and control over these AI agents by using behavior-based detection (analyzing API call patterns and workflow signatures) instead of traditional connection-based methods, identifying risks like agents with excessive permissions or misconfigured access to customer data.

Fix: Reco AI Agent Security is explicitly designed as the mitigation. According to the source, the offering provides: (1) AI agent discovery through multi-layered detection that analyzes API call patterns and service account activity to identify autonomous behavior; (2) risk analysis by correlating activity across applications and recognizing workflow signatures of automation tools like n8n, Zapier, and Make; and (3) governance and control over all AI agents operating in the SaaS ecosystem. The platform tracks OAuth connections, analyzes decision-making patterns that indicate autonomous action, and monitors cross-application activity to identify agents that traditional SSPM tools miss.

CSO Online
03

Claude Code Security and Magecart: Getting the Threat Model Right

security
Mar 18, 2026

Magecart attacks (malicious code injected into e-commerce sites to steal payment data) often hide in third-party resources like images or scripts that never enter a company's code repository, making them invisible to static analysis tools like Claude Code Security. Claude Code Security is designed to scan code you own, so it cannot detect malicious code injected at runtime through compromised external libraries, CDNs (content delivery networks that distribute files globally), or data hidden in binary files like favicons, which means repository-based scanning has a fundamental blind spot for this attack class.

The Hacker News
04

We asked experts about the most responsible ways to use AI tools – here’s what they said

safety
Mar 18, 2026

The article discusses expert advice on responsible AI tool use, emphasizing that people should use AI as a brainstorming partner and for organizing information, but should not let it replace their own decision-making. A 2025 survey shows that one-third of US adults use ChatGPT, with particularly high adoption among people under 30.

The Guardian Technology
05

Can you prove the person on the other side is real?

securitysafety
Mar 18, 2026

Synthetic identity fraud, where criminals create fake people using AI-generated documents and deepfakes (realistic fake videos or audio), is becoming a major threat in estate and identity verification work. Traditional security checks that look at device fingerprints or typing patterns are no longer reliable because AI can now imitate these signals. The text explains that the real challenge by 2026 will be distinguishing legitimate people from manufactured personas, especially in high-stakes situations involving inheritance and family claims.

Fix: The source suggests moving from asking "Who is this?" to a more forensic approach: "How did this identity—and its digital footprint—come to exist?" This shift means prioritizing provenance (where the identity originated), issuer verification (confirming documents are real), and cross-channel consistency (checking if the person's presence makes sense across multiple systems) over accepting surface-level plausibility. However, the text does not provide specific technical implementations or detailed steps for executing this approach.

CSO Online
06

China’s ‘AI tigers’ see shares surge after Nvidia CEO touts OpenClaw as ‘next ChatGPT’

industry
Mar 18, 2026

Chinese AI companies saw significant stock gains after Nvidia CEO Jensen Huang praised OpenClaw, an open-source AI agent (a program that can perform tasks independently), as "the next ChatGPT." Companies like MiniMax and Zhipu, which are among China's leading AI developers building large language models (AI systems trained on huge amounts of text to understand and generate language), have integrated OpenClaw into their products and are launching their own versions based on it.

CNBC Technology
07

CISOs rethink their data protection strategies

securitypolicy
Mar 18, 2026

CISOs (Chief Information Security Officers, the top security leaders at companies) are updating their data protection strategies because employees are rapidly sharing company data with AI tools, including public models like ChatGPT, creating new security risks. A CISO at a law firm added a new protection layer that classifies data based on whether it can be safely used with AI and invested in new monitoring tools, while also regularly evaluating new technologies to ensure controls keep pace with AI innovations.

Fix: The source describes one organization's approach: add a protection layer that classifies and tags data based on whether it could be used with AI and in what circumstances, invest in new tools to support that layer, monitor the vendor landscape for emerging capabilities, and evaluate new technologies being deployed to determine whether new controls are needed for them. However, no specific technical solutions, patches, or vendor recommendations are explicitly named in the source text.

CSO Online
08

Meta's Manus launches desktop app to bring its AI agent onto personal devices amid OpenClaw craze

industrysafety
Mar 18, 2026

Meta-owned Manus launched a desktop application with a feature called 'My Computer' that allows its AI agent (a program that can complete complex, multi-step tasks automatically) to access and control files, tools, and applications directly on a user's computer, rather than only working in the cloud. This move competes with OpenClaw, a free, open-source AI agent that similarly runs on local devices. Experts have raised security and privacy concerns about giving AI agents local device access, but Manus addressed this by requiring explicit user approval before the agent executes tasks.

Fix: Manus's mitigation for security and privacy risks includes a control mechanism requiring explicit user approval before task execution. According to Manus, users can choose "Allow Once" for individual review of each action or "Always Allow" for trusted, recurring actions, keeping users "firmly in control."

CNBC Technology
09

OWASP GenAI Security Project Expands AI Security Frameworks Ahead of RSA 2026, Celebrates Continued Sponsor Support

securitypolicy
Mar 18, 2026

The OWASP GenAI Security Project, an open-source community focused on AI security, announced expansion of its resources and frameworks with over 25,000 members contributing practical guidance and tools. The project is being highlighted at the RSA 2026 conference, indicating growing industry adoption of AI security best practices.

OWASP GenAI Security
10

Survey on Learning-based Dynamic Fault Localization: From Traditional Machine Learning to Large Language Models

research
Mar 18, 2026

This survey examines methods for automatically finding bugs in software code by using machine learning and AI models, tracing the evolution from traditional machine learning techniques to modern large language models (LLMs, which are AI systems trained on vast amounts of text data). The research covers how these AI-based approaches learn patterns to pinpoint where faults occur in code, making debugging faster and more efficient than manual inspection.

ACM Digital Library (TOPS, DTRAP, CSUR)
Prev1...2223242526...265Next
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026
critical

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE

CVE-2026-33696GitHub Advisory DatabaseMar 26, 2026
Mar 26, 2026