aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

[TOTAL_TRACKED]
3,710
[LAST_24H]
1
[LAST_7D]
1
Daily BriefingSaturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

page 85/371
VIEW ALL
01

Iran threatens OpenAI’s Stargate data center in Abu Dhabi

security
Apr 6, 2026

Iran's Islamic Revolutionary Guard Corps (IRGC, a military organization) published a video threatening to destroy OpenAI's Stargate data center in Abu Dhabi if the US attacks Iran's power plants. The threat was posted to social media on April 3rd and specifically showed images of OpenAI's $30 billion facility under construction in the United Arab Emirates.

The Verge (AI)
02

Google DeepMind Researchers Map Web Attacks Against AI Agents

securityresearch
Apr 6, 2026

Researchers at Google DeepMind have identified a vulnerability called 'AI Agent Traps' that allows attackers to manipulate and exploit AI agents (autonomous programs that can browse the web and take actions) by hosting malicious web content designed to deceive them. This research maps out how these attacks work against AI systems that visit websites.

SecurityWeek
03

Shadow AI in Healthcare Is Here to Stay

securitypolicy
Apr 6, 2026

Healthcare workers are increasingly using AI tools on their own to handle heavy workloads, and organizations cannot stop this trend. The source emphasizes that healthcare organizations should strengthen their security practices to reduce the damage if these unsanctioned AI tools are compromised or misused.

Dark Reading
04

OWASP GenAI Security Project Gets Update, New Tools Matrix

securitypolicy
Apr 6, 2026

OWASP (Open Web Application Security Project, a standards group for security best practices) has updated its generative AI security guidance to address 21 identified risks in AI systems. The update recommends that companies use separate but coordinated defense strategies tailored specifically for generative AI (AI that creates text, images, or code) and agentic AI (AI that can take actions independently).

Dark Reading
05

Announcing the OpenAI Safety Fellowship

researchpolicy
Apr 6, 2026

OpenAI is launching a Safety Fellowship program (September 2026 to February 2027) for external researchers to conduct independent studies on safety and alignment (making sure AI systems behave as intended and don't cause harm) of advanced AI systems. Fellows will work on topics like safety evaluation, ethics, robustness, privacy protection, and oversight of AI agents, receiving mentorship, compute resources, and a monthly stipend while producing research outputs like papers or datasets.

OpenAI Blog
06

6 ways attackers abuse AI services to hack your business

security
Apr 6, 2026

Attackers are increasingly exploiting legitimate AI systems and services instead of using traditional malware, a trend called "living off the AI land." Examples include poisoning MCP servers (tools that connect AI assistants to external services) in supply chains, abusing AI platforms like Claude and Copilot as command-and-control channels (hidden pathways for sending malicious instructions), and hijacking AI agents (automated systems that perform tasks) to extract sensitive data or perform destructive actions. The shift represents a fundamental change in AI security threats, moving beyond simple prompt injection (tricking an AI by hiding instructions in its input) to more sophisticated agent hijacking (taking control of automated AI systems).

CSO Online
07

Escaping the COTS trap

policysecurity
Apr 6, 2026

Commercial off-the-shelf software (COTS, meaning ready-made software products sold online or in stores) initially seems attractive because it deploys quickly and costs less than custom development, but organizations often get trapped when they want to switch platforms, as their systems become deeply entangled with the vendor's technology. AI-powered security tools are creating a new type of lock-in by relying on proprietary training data, vendor-specific threat intelligence feeds (collections of indicators showing cyber attacks), and specialized hardware, making it expensive and difficult to migrate away.

CSO Online
08

How China fell for a lobster: What an AI assistant tells us about Beijing's ambition

industry
Apr 5, 2026

OpenClaw, an open-source AI assistant built by an Austrian developer, sparked a major trend in China in March 2024 because it can be customized to work with Chinese AI models, unlike Western tools like ChatGPT that are inaccessible there. Users enthusiastically adapted OpenClaw's code to create personalized versions they called "lobsters," using them for tasks like e-commerce product listings, stock analysis, and productivity, with some claiming dramatic efficiency gains. The phenomenon reflects China's broader push to develop and embrace AI technology, driven by government support and the success of homegrown platforms like DeepSeek.

BBC Technology
09

I let Gemini in Google Maps plan my day and it went surprisingly well

industry
Apr 5, 2026

Google has integrated Gemini (an AI assistant that's built into Google services) into Google Maps, allowing it to help plan daily itineraries by suggesting nearby locations. The author tested this feature by having Gemini plan a full day around their city and found it effective, discovering both obvious and unexpected recommendations for places to visit.

The Verge (AI)
10

CVE-2026-5530: A flaw has been found in Ollama up to 18.1. This issue affects some unknown processing of the file server/download.go of

security
Apr 4, 2026

A vulnerability (CVE-2026-5530) has been discovered in Ollama up to version 18.1 that allows attackers to perform SSRF (server-side request forgery, where an attacker tricks a server into making unwanted requests on their behalf) through the Model Pull API component. The flaw can be exploited remotely by authenticated users, and the vendor has not responded to disclosure attempts.

NVD/CVE Database
Prev1...8384858687...371Next