aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24h: 1
Last 7d: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

CVE-2026-24141: NVIDIA Model Optimizer for Windows and Linux contains a vulnerability in the ONNX quantization feature, where a user could trigger unsafe deserialization by providing a specially crafted input file

security
Mar 24, 2026

NVIDIA Model Optimizer for Windows and Linux has a vulnerability in its ONNX quantization feature (a technique that makes AI models smaller and faster by reducing precision) where unsafe deserialization (unsafely converting data from a file into program objects) can occur when a user provides a specially crafted input file. A successful attack could allow an attacker to execute code, gain higher privileges, change data, or steal information.

NVD/CVE Database
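Unsafe deserialization of model files usually comes down to Python's pickle format, which many checkpoint and model formats use internally and which can run arbitrary code at load time. A minimal illustrative sketch of the mechanism (the `Payload` class is hypothetical, not taken from the advisory):

```python
import pickle

class Payload:
    """Illustrative malicious object: __reduce__ tells pickle to call an
    arbitrary callable with arbitrary arguments during deserialization."""
    def __reduce__(self):
        # A real exploit would return something like (os.system, ("...",)).
        return (print, ("code executed during load",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # merely *loading* the data runs the attacker's callable
# prints: code executed during load
```

This is why loading model files from untrusted sources is treated as code execution. PyTorch's `torch.load`, for example, accepts `weights_only=True` to restrict what the unpickler may construct, and formats such as safetensors avoid pickle entirely.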
02

CVE-2025-33254: NVIDIA Triton Inference Server contains a vulnerability where an attacker may cause internal state corruption. A successful exploit may lead to denial of service

security
Mar 24, 2026

NVIDIA Triton Inference Server has a vulnerability (CVE-2025-33254) where an attacker can corrupt internal state, a condition that occurs when data becomes inconsistent or broken, potentially causing a denial of service (making a service unavailable to legitimate users). The vulnerability is caused by a race condition (a bug that happens when multiple processes access shared data at the same time without proper coordination).

NVD/CVE Database
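Race conditions of this kind are generic concurrency bugs, not specific to Triton. A minimal sketch (illustrative only, not Triton's code) of how an unsynchronized read-modify-write corrupts shared state, and how a lock prevents it:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    """Read-modify-write with no synchronization: two threads can read the
    same value and one write is lost (the 'internal state corruption')."""
    global counter
    for _ in range(n):
        tmp = counter      # read
        counter = tmp + 1  # write; another thread may have updated in between

def safe_increment(n):
    """The same update under a lock: interleavings can no longer lose writes."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000, deterministic because every update holds the lock
```

Swapping `safe_increment` for `unsafe_increment` makes the final count nondeterministic, which in a server process surfaces as corrupted state or a crash rather than a clean error.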
03

CVE-2025-33244: NVIDIA APEX for Linux contains a vulnerability where an unauthorized attacker could cause a deserialization of untrusted data

security
Mar 24, 2026

NVIDIA APEX for Linux has a vulnerability where attackers can deserialize untrusted data (process data from untrusted sources, potentially running malicious code hidden in that data), affecting PyTorch versions earlier than 2.6. A successful attack could allow code execution, denial of service (making a system unavailable), privilege escalation (gaining higher access levels), data tampering, and information disclosure.

NVD/CVE Database
04

CVE-2025-33238: NVIDIA Triton Inference Server Sagemaker HTTP server contains a vulnerability where an attacker may cause an exception.

security
Mar 24, 2026

CVE-2025-33238 is a vulnerability in NVIDIA Triton Inference Server's Sagemaker HTTP server that allows an attacker to trigger an exception, potentially causing a denial of service (DoS, where a system becomes unavailable to legitimate users). The underlying issue involves a race condition (a timing flaw when multiple processes access shared resources without proper protection).

NVD/CVE Database
05

Baltimore is first U.S. city to sue over Grok deepfake porn as legal pressure mounts on Musk's xAI

safety, policy
Mar 24, 2026

Baltimore has become the first major U.S. city to sue Elon Musk's xAI over its Grok image generator, which can create deepfakes (AI-manipulated videos or images that realistically fake someone's appearance or actions) depicting non-consensual sexual content involving women and children. The lawsuit claims xAI violated consumer protection laws by marketing Grok and X as safe while allowing mass creation of non-consenting intimate images (sexually explicit content created without permission) and child sexual abuse material. Baltimore is asking the court to force xAI to stop targeting its residents, redesign its platforms to prevent exploitation, and change its marketing practices.

CNBC Technology
06

Anthropic and Pentagon face off in court over ban on company’s AI model

policy, security
Mar 24, 2026

Anthropic, an AI company, is suing the US Department of Defense in federal court to challenge a ban on government use of its Claude AI chatbot after the company refused to allow the technology to be used in autonomous weapons systems (machines that can make lethal decisions without human control) and mass surveillance. The Defense Secretary declared Anthropic a supply chain risk (a company considered unsafe to do business with), which the company argues will cause massive financial and business harm.

The Guardian Technology
07

OpenAI just gave up on Sora and its billion-dollar Disney deal

industry
Mar 24, 2026

OpenAI has discontinued Sora, its video generation tool (AI that creates videos from text descriptions), along with the standalone app and developer API access that launched in late 2024. This shutdown affects a major licensing deal with Disney announced just months earlier, in which Disney had agreed to invest $1 billion in OpenAI.

The Verge (AI)
08

Arm’s first CPU ever will plug into Meta’s AI data centers later this year

industry
Mar 24, 2026

Arm, a UK chip design company, is manufacturing its first CPU (central processing unit, the main processor in a computer), called the Arm AGI CPU, designed specifically for inference (running trained AI models to produce outputs). Meta will be the first customer, using the chip in its data centers alongside processors from other companies such as Nvidia and AMD to power AI tools.

The Verge (AI)
09

Baltimore sues Elon Musk’s AI company over Grok’s fake nude images

safety, policy
Mar 24, 2026

Baltimore's mayor and city council sued Elon Musk's xAI company, claiming that its Grok chatbot (an AI assistant designed for general conversation) violated consumer protection laws by creating nonconsensual sexualized images. The lawsuit argues that xAI deceptively marketed Grok and its platform X without disclosing the risks and potential harms users could face.

The Guardian Technology
10

Why Agentic AI Systems Need Better Governance – Lessons from OpenClaw

safety, policy
Mar 24, 2026

Agentic AI systems (AI that can independently take actions rather than just make suggestions) are becoming more powerful by gaining direct access to computer systems, creating new governance challenges. The article uses OpenClaw as a case study to illustrate why better oversight and control mechanisms are needed as these autonomous systems become more capable and integrated into real-world operations.

SecurityWeek