aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,649
[LAST_24H]
1
[LAST_7D]
158
Daily BriefingSaturday, March 28, 2026
>

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify costs as the company prioritizes profitability.

>

Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.

>

Latest Intel

page 16/265
VIEW ALL
01

Sen. Warren questions DOD about Anthropic blacklist that 'appears to be retaliation'

policysafety
Critical This Week5 issues
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.

>

TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.

Mar 23, 2026

Senator Elizabeth Warren is questioning the Department of Defense's decision to blacklist AI company Anthropic as a "supply chain risk," calling it retaliation after the company refused to let the DOD use its AI models for fully autonomous weapons or domestic mass surveillance. Anthropic has filed a lawsuit against the Trump administration, while OpenAI has secured a DOD contract despite similar concerns from lawmakers about whether safeguards exist to prevent the technology from being used for mass surveillance or autonomous weapons.

CNBC Technology
02

Introducing Wiz Agents & Workflows: Security at the Speed of AI

securityindustry
Mar 23, 2026

Wiz has introduced AI agents and workflows designed to help security teams respond to threats faster by automating investigation and remediation tasks. The system uses three specialized agents—Red (finds vulnerabilities), Blue (investigates threats), and Green (fixes issues)—that work together in a continuous loop to detect, analyze, and resolve security risks at machine speed rather than relying on manual human work.

Wiz Research Blog
03

We Found Eight Attack Vectors Inside AWS Bedrock. Here's What Attackers Can Do with Them

security
Mar 23, 2026

AWS Bedrock is Amazon's platform for building AI applications that connect foundation models (pre-trained AI systems) to enterprise data and systems like Salesforce and SharePoint. Researchers discovered eight attack vectors that allow attackers to exploit this connectivity, including log manipulation (hiding their tracks in audit logs), knowledge base compromise (stealing enterprise data), agent hijacking (taking control of autonomous AI agents), and prompt poisoning (corrupting AI instructions).

The Hacker News
04

The insider threat rises again

securitypolicy
Mar 23, 2026

Insider threats (security risks from people inside an organization) are becoming more common and damaging, with 42% of organizations reporting increased malicious insider incidents and an average cost of $13.1 million per incident. These threats come from both intentional bad actors and careless mistakes, and are worsened by new technologies like AI agents (software that can act independently with system access), remote work, and economic pressure on employees.

CSO Online
05

New CrowdStrike Innovations Secure AI Agents and Govern Shadow AI Across Endpoints, SaaS, and Cloud

securityindustry
Mar 23, 2026

Organizations deploying AI tools and agents are creating new security vulnerabilities, particularly through attacks like indirect prompt injection (tricking an AI by hiding malicious instructions in its input) and agentic tool chain attacks (compromising the sequence of tools an AI agent uses). CrowdStrike is addressing this gap by expanding its Falcon platform with new AI detection and response capabilities that monitor desktop AI applications, discover shadow AI (unauthorized AI tools), and detect threats across endpoints, cloud, and SaaS environments.

Fix: CrowdStrike Falcon AIDR is extending runtime threat detection to desktop AI applications (ChatGPT, Gemini, Claude, DeepSeek, Microsoft Copilot, O365 Copilot, GitHub Copilot, and Cursor) with visibility into prompt content and the ability to detect prompt attacks and data leaks. The capability is currently in pre-beta and will be generally available in Q2. Additionally, AI Discovery in CrowdStrike Falcon Exposure Management, now generally available, automatically discovers AI-related components running on endpoints in real time, including AI apps, agents, LLM (large language model) runtimes, MCP (Model Context Protocol) servers, and IDE extensions.

CrowdStrike Blog
06

AI influencer awards season is upon us

industry
Mar 22, 2026

AI influencers are becoming a serious commercial industry, with new awards like an 'AI Personality of the Year' contest emerging alongside AI beauty pageants and music competitions. The contest, backed by companies like OpenArt, Fanvue, and ElevenLabs, aims to recognize the creative work and growing cultural influence of AI influencers.

The Verge (AI)
07

Experimenting with Starlette 1.0 with Claude skills

industry
Mar 22, 2026

Starlette 1.0 was released in March 2026 with breaking changes from previous versions, notably replacing the old on_startup and on_shutdown parameters with a new lifespan mechanism (an async context manager for managing app startup and shutdown). Since LLMs were trained on older Starlette code, the author created a Skill (a custom knowledge document that Claude can reference) by having Claude clone the Starlette repository, build documentation with code examples, and add it to their Claude chat so the AI could generate working Starlette 1.0 code.

Fix: The source explicitly mentions the solution implemented: creating a Skill document. The author states "I decided to see if I could get this working with a Skill" and describes the process: "Clone Starlette from GitHub...Build a skill markdown document for this release which includes code examples of every feature." They then used the "Copy to your skills" button to add this skill to their Claude chat, enabling Claude to generate correct Starlette 1.0 code in subsequent conversations.

Simon Willison's Weblog
08

An efficient hierarchical secret sharing for privacy-preserving distributed gradient descent algorithm

securityprivacy
Mar 22, 2026

This research paper describes a method for protecting privacy in distributed gradient descent (a technique where multiple computers work together to train AI models by each processing part of the data). The authors propose using hierarchical secret sharing (a cryptographic approach where information is split into pieces distributed across multiple parties, so no single party can see the complete data) to keep individual data private while still allowing the AI training process to work efficiently.

Elsevier Security Journals
09

Why Spotify AI more than music will be the secret to keeping subscribers

industry
Mar 22, 2026

Spotify is investing heavily in AI-powered music discovery tools, including a new ChatGPT integration and a Prompted Playlist feature that let users describe what they want to hear through conversation rather than traditional buttons. Spotify executives say these AI features are key to keeping subscribers engaged as music catalogs become similar across streaming apps, with their interactive AI DJ feature already used by 90 million subscribers.

CNBC Technology
10

Musk says he’s building Terafab chip plant in Austin, Texas

industry
Mar 22, 2026

Elon Musk announced plans to build a Terafab chip manufacturing plant in Austin, Texas, jointly operated by Tesla and SpaceX to produce chips for robotics, AI, and space data centers. Musk and other industry leaders are concerned that chip makers cannot produce enough chips fast enough to meet growing demand from the AI industry, though building a chip fabrication plant requires billions of dollars, many years, and specialized equipment.

The Verge (AI)
Prev1...1415161718...265Next
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026
critical

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE

CVE-2026-33696GitHub Advisory DatabaseMar 26, 2026
Mar 26, 2026