aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. This site, built by an Information Systems security researcher, helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

Faster attacks and ‘recovery denial’ ransomware reshape threat landscape

security · industry
Mar 23, 2026

A 2026 Mandiant security report shows that attackers are operating faster and more collaboratively, with hand-offs between threat groups now happening in 22 seconds instead of 8+ hours. Attackers are shifting tactics away from email phishing (6% of attacks) toward voice phishing (11%) and other interactive social engineering, while increasingly targeting recovery systems through 'recovery denial' ransomware to prevent organizations from restoring after breaches.

CSO Online
02

Varonis Atlas: Securing AI and the Data That Powers It

security · industry
Mar 23, 2026

Varonis Atlas is an AI security platform that helps organizations discover, monitor, and protect AI systems across their enterprise, from custom AI models to chatbots and AI agents. The platform addresses a major security gap: most organizations don't know which AI systems they have, what data those systems can access, or whether they're compliant with regulations, creating risks since AI agents can read and modify data at machine speed. Atlas covers the entire AI security lifecycle through features like continuous AI discovery, posture management (vulnerability and misconfiguration assessment), runtime protection, and compliance reporting.

BleepingComputer
03

Confronting the CEO of the AI company that impersonated me

safety · privacy
Mar 23, 2026

Grammarly (now part of Superhuman) launched a feature called Expert Review in August that used AI to create cloned versions of real journalists and writers, including the interviewer, without their permission, to provide writing suggestions. The feature drew backlash and legal action.

Fix: Superhuman responded by first offering an email-based opt-out, then killing the feature entirely.

The Verge (AI)
04

CLIP-ADA: CLIP-Guided Artifact-Invariant Generalizable Synthetic Image Detection

research
Mar 23, 2026

This research paper presents CLIP-ADA, a method for detecting synthetic images (fake images created by AI generators) that works better across different types of generators and artifacts. The method analyzes how CLIP (a vision-language model that understands both images and text) processes images at different levels, then uses this understanding to train detectors that rely less on specific artifact patterns and more on general forensic features, achieving over 6% better accuracy on unseen synthetic images.

IEEE Xplore (Security & AI Journals)
05

SRAP: Robust and Transferable Self-Reversible Adversarial Patch for Image Privacy Protection

research · security
Mar 23, 2026

Researchers developed SRAP (Self-Reversible Adversarial Patch), a technique that creates adversarial patches (small, intentionally corrupted image regions designed to fool AI models) that can be reversed back to the original image while protecting privacy. The method improves two key weaknesses in existing adversarial patches: transferability (working across different AI models, achieving up to 90% success rate) and robustness (resisting image processing and defensive techniques), and demonstrates an 88% attack success rate against commercial AI services.

IEEE Xplore (Security & AI Journals)
06

You Built the Brain. Now Protect It.

security · industry
Mar 23, 2026

As companies convert traditional data centers into AI factories (facilities that produce and run large language models, or LLMs) to generate revenue and gain competitive advantages, they face new security risks. Check Point has created a blueprint architecture (a detailed security design plan) to help enterprises protect these AI data centers as the market grows from $236 billion in 2025 to a projected $934 billion by 2030.

Check Point Research
07

Check Point at RSAC – How We’re Helping Our Customers Secure their AI Transformation

security · policy
Mar 23, 2026

Companies are quickly adopting AI tools to improve productivity and gain business advantages, but this creates new security risks. AI tools often access sensitive company data like customer records and emails, and employees may use LLMs (large language models, AI systems trained on huge amounts of text) without approval, risking accidental leaks of confidential information.

Check Point Research
08

The Download: animal welfare gets AGI-pilled, and the White House unveils its AI policy

policy · industry
Mar 23, 2026

This newsletter covers multiple AI-related developments, including animal welfare advocates exploring how artificial general intelligence (AGI, a theoretical AI system that can learn and perform any intellectual task) might reduce animal suffering, the White House unveiling a light-touch AI regulation framework, and various corporate moves like OpenAI adding ads to free ChatGPT and the Pentagon adopting Palantir's AI for military targeting. The article also discusses Elon Musk being found liable for misleading Twitter investors and a case where an Australian woman's experimental brain implant was removed against her wishes despite significantly improving her quality of life.

MIT Technology Review
09

Sen. Warren questions DOD about Anthropic blacklist that 'appears to be retaliation'

policy · safety
Mar 23, 2026

Senator Elizabeth Warren is questioning the Department of Defense's decision to blacklist AI company Anthropic as a "supply chain risk," calling it retaliation after the company refused to let the DOD use its AI models for fully autonomous weapons or domestic mass surveillance. Anthropic has filed a lawsuit against the Trump administration, while OpenAI has secured a DOD contract despite similar concerns from lawmakers about whether safeguards exist to prevent the technology from being used for mass surveillance or autonomous weapons.

CNBC Technology
10

Introducing Wiz Agents & Workflows: Security at the Speed of AI

security · industry
Mar 23, 2026

Wiz has introduced AI agents and workflows designed to help security teams respond to threats faster by automating investigation and remediation tasks. The system uses three specialized agents—Red (finds vulnerabilities), Blue (investigates threats), and Green (fixes issues)—that work together in a continuous loop to detect, analyze, and resolve security risks at machine speed rather than relying on manual human work.

Wiz Research Blog