aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

[TOTAL_TRACKED]
3,710
[LAST_24H]
1
[LAST_7D]
1
Daily BriefingSaturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

page 119/371
VIEW ALL
01

We Found Eight Attack Vectors Inside AWS Bedrock. Here's What Attackers Can Do with Them

security
Mar 23, 2026

AWS Bedrock is Amazon's platform for building AI applications that connect foundation models (pre-trained AI systems) to enterprise data and systems like Salesforce and SharePoint. Researchers discovered eight attack vectors that allow attackers to exploit this connectivity, including log manipulation (hiding their tracks in audit logs), knowledge base compromise (stealing enterprise data), agent hijacking (taking control of autonomous AI agents), and prompt poisoning (corrupting AI instructions).

The Hacker News
02

The insider threat rises again

securitypolicy
Mar 23, 2026

Insider threats (security risks from people inside an organization) are becoming more common and damaging, with 42% of organizations reporting increased malicious insider incidents and an average cost of $13.1 million per incident. These threats come from both intentional bad actors and careless mistakes, and are worsened by new technologies like AI agents (software that can act independently with system access), remote work, and economic pressure on employees.

CSO Online
03

New CrowdStrike Innovations Secure AI Agents and Govern Shadow AI Across Endpoints, SaaS, and Cloud

securityindustry
Mar 23, 2026

Organizations deploying AI tools and agents are creating new security vulnerabilities, particularly through attacks like indirect prompt injection (tricking an AI by hiding malicious instructions in its input) and agentic tool chain attacks (compromising the sequence of tools an AI agent uses). CrowdStrike is addressing this gap by expanding its Falcon platform with new AI detection and response capabilities that monitor desktop AI applications, discover shadow AI (unauthorized AI tools), and detect threats across endpoints, cloud, and SaaS environments.

Fix: CrowdStrike Falcon AIDR is extending runtime threat detection to desktop AI applications (ChatGPT, Gemini, Claude, DeepSeek, Microsoft Copilot, O365 Copilot, GitHub Copilot, and Cursor) with visibility into prompt content and the ability to detect prompt attacks and data leaks. The capability is currently in pre-beta and will be generally available in Q2. Additionally, AI Discovery in CrowdStrike Falcon Exposure Management, now generally available, automatically discovers AI-related components running on endpoints in real time, including AI apps, agents, LLM (large language model) runtimes, MCP (Model Context Protocol) servers, and IDE extensions.

CrowdStrike Blog
04

AI influencer awards season is upon us

industry
Mar 22, 2026

AI influencers are becoming a serious commercial industry, with new awards like an 'AI Personality of the Year' contest emerging alongside AI beauty pageants and music competitions. The contest, backed by companies like OpenArt, Fanvue, and ElevenLabs, aims to recognize the creative work and growing cultural influence of AI influencers.

The Verge (AI)
05

Experimenting with Starlette 1.0 with Claude skills

industry
Mar 22, 2026

Starlette 1.0 was released in March 2026 with breaking changes from previous versions, notably replacing the old on_startup and on_shutdown parameters with a new lifespan mechanism (an async context manager for managing app startup and shutdown). Since LLMs were trained on older Starlette code, the author created a Skill (a custom knowledge document that Claude can reference) by having Claude clone the Starlette repository, build documentation with code examples, and add it to their Claude chat so the AI could generate working Starlette 1.0 code.

Fix: The source explicitly mentions the solution implemented: creating a Skill document. The author states "I decided to see if I could get this working with a Skill" and describes the process: "Clone Starlette from GitHub...Build a skill markdown document for this release which includes code examples of every feature." They then used the "Copy to your skills" button to add this skill to their Claude chat, enabling Claude to generate correct Starlette 1.0 code in subsequent conversations.

Simon Willison's Weblog
06

An efficient hierarchical secret sharing for privacy-preserving distributed gradient descent algorithm

securityprivacy
Mar 22, 2026

This research paper describes a method for protecting privacy in distributed gradient descent (a technique where multiple computers work together to train AI models by each processing part of the data). The authors propose using hierarchical secret sharing (a cryptographic approach where information is split into pieces distributed across multiple parties, so no single party can see the complete data) to keep individual data private while still allowing the AI training process to work efficiently.

Elsevier Security Journals
07

Why Spotify AI more than music will be the secret to keeping subscribers

industry
Mar 22, 2026

Spotify is investing heavily in AI-powered music discovery tools, including a new ChatGPT integration and a Prompted Playlist feature that let users describe what they want to hear through conversation rather than traditional buttons. Spotify executives say these AI features are key to keeping subscribers engaged as music catalogs become similar across streaming apps, with their interactive AI DJ feature already used by 90 million subscribers.

CNBC Technology
08

Musk says he’s building Terafab chip plant in Austin, Texas

industry
Mar 22, 2026

Elon Musk announced plans to build a Terafab chip manufacturing plant in Austin, Texas, jointly operated by Tesla and SpaceX to produce chips for robotics, AI, and space data centers. Musk and other industry leaders are concerned that chip makers cannot produce enough chips fast enough to meet growing demand from the AI industry, though building a chip fabrication plant requires billions of dollars, many years, and specialized equipment.

The Verge (AI)
09

AI was everywhere at gaming’s big developer conference — except the games

industry
Mar 22, 2026

At the Game Developers Conference, AI tools were heavily promoted for creating game content, NPCs (non-player characters, the computer-controlled characters in games), and automating quality assurance tasks, but these AI systems were largely absent from actual commercial games being released. The gap between AI hype in the gaming industry and its real-world implementation in finished games remains significant.

The Verge (AI)
10

CVE-2026-4538: A vulnerability was identified in PyTorch 2.10.0. The affected element is an unknown function of the component pt2 Loadi

security
Mar 22, 2026

PyTorch 2.10.0 contains a vulnerability in its pt2 Loading Handler component that allows unsafe deserialization (loading data in a way that can execute unintended code) through an unknown function. The vulnerability can only be exploited locally (by someone with access to the affected computer), but an exploit is publicly available, and the PyTorch team has not yet responded to the initial report.

NVD/CVE Database
Prev1...117118119120121...371Next