aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

[TOTAL_TRACKED]
3,710
[LAST_24H]
1
[LAST_7D]
1
Daily BriefingSaturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

page 114/371
VIEW ALL
01

The AI Hype Index: AI goes to war

industrypolicy
Mar 25, 2026

This article summarizes recent developments in AI, including controversies over weaponizing AI models like Claude, major user departures from ChatGPT, and large protests against AI in London. On a lighter note, AI agents (software programs that can act independently to accomplish tasks) are becoming popular online, with companies hiring their creators and developing quirky applications where AI agents appear to develop their own beliefs and philosophies.

MIT Technology Review
02

AI is breaking traditional security models — Here’s where they fail first

securityindustry
Mar 25, 2026

Traditional enterprise security relied on slow, manual processes where vulnerabilities were discovered through periodic scans, then triaged and fixed in a delayed workflow. AI and LLM-based systems are breaking this model by automating triage (the process of sorting and prioritizing findings), delivering vulnerabilities with full context and demanding immediate action, which forces organizations to rethink who is responsible for fixes and how quickly decisions happen. This shift also makes accountability explicit rather than implicit, requiring security teams to transition from handling individual findings to overseeing AI decision-making accuracy and approving exceptions.

CSO Online
03

How Charlotte AI AgentWorks Fuels Security's Agentic Ecosystem

industrysecurity
Mar 25, 2026

Modern cybersecurity operations face attacks that happen in seconds, overwhelming traditional human-centered defenses. CrowdStrike introduced Charlotte AI AgentWorks and Charlotte Agentic SOAR, two interconnected systems that use AI agents (autonomous software that can reason and take actions) to work alongside security analysts, automating routine tasks while keeping humans in control through oversight and guardrails.

CrowdStrike Blog
04

OpenAI ends Disney partnership as it closes Sora video-making app

industry
Mar 25, 2026

OpenAI has shut down Sora, its AI video-generation app (software that creates realistic videos from text descriptions), less than two years after launch, to focus on other projects like robotics and autonomous AI agents. The closure ends both the consumer app and professional platform, though image-making tools in ChatGPT remain unaffected. Disney, which had recently licensed its intellectual property (creative works and characters owned by a company) to Sora in a landmark deal, said it will now explore partnerships with other AI platforms.

BBC Technology
05

Introducing the OpenAI Safety Bug Bounty program

securitysafety
Mar 24, 2026

OpenAI has launched a Safety Bug Bounty program to identify AI abuse and safety risks in its products, complementing its existing Security Bug Bounty program. The new program focuses on issues like prompt injection (tricking an AI by hiding instructions in its input) that hijacks AI agents to perform harmful actions, unauthorized feature access, and proprietary information leaks, even if they don't qualify as traditional security vulnerabilities. Researchers can submit reports on reproducible safety issues that pose plausible and material harm to users.

OpenAI Blog
06

Auto mode for Claude Code

safetysecurity
Mar 24, 2026

Anthropic introduced auto mode for Claude Code, a new permissions system where Claude automatically decides whether to allow actions with safeguards in place. A separate classifier model (Claude Sonnet 4.6) reviews each action before it runs to block requests that go beyond the task scope, target untrusted infrastructure, or appear malicious, using customizable default filters that cover allowed operations like read-only requests and local file work, while blocking risky actions like force-pushing to git repositories or executing external code.

Simon Willison's Weblog
07

CSA Launches CSAI Foundation for AI Security

policysecurity
Mar 24, 2026

The Cloud Security Alliance has created a new nonprofit organization called the CSAI Foundation to help manage and secure autonomous AI agents (AI systems that can make decisions and take actions on their own). The foundation will use risk intelligence (methods to identify and understand potential dangers) and certification (official verification of safety standards) to govern these AI ecosystems.

Dark Reading
08

OpenAI shutters AI video generator Sora in abrupt announcement

industry
Mar 24, 2026

OpenAI abruptly shut down Sora, its AI video generator tool (software that creates realistic videos from text descriptions), just six months after launching it as a standalone app in 2024. The company announced the closure on social media, thanking users who created and shared videos with the platform.

The Guardian Technology
09

OpenAI shutters short-form video app Sora as company reels in costs

industry
Mar 24, 2026

OpenAI shut down its Sora app, a tool that let users generate short videos (create videos from text descriptions) and remix videos from other users, just six months after launching it despite reaching one million downloads. The company is cutting costs to justify its $730 billion valuation and focus on high-productivity business uses, particularly competing in the enterprise (business) market rather than consumer applications.

CNBC Technology
10

CVE-2026-24158: NVIDIA Triton Inference Server contains a vulnerability in the HTTP endpoint where an attacker may cause a denial of ser

security
Mar 24, 2026

CVE-2026-24158 is a vulnerability in NVIDIA Triton Inference Server's HTTP endpoint that allows attackers to cause a denial of service (temporarily making a service unavailable) by sending a large compressed payload. The vulnerability stems from improper memory allocation (CWE-789, where a system reserves too much memory based on untrusted input).

NVD/CVE Database
Prev1...112113114115116...371Next