aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1

Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

CVE-2026-26268: Cursor is a code editor built for programming with AI. Sandbox escape via writing .git configuration was possible in versions before 2.5

security
Feb 13, 2026

Cursor, a code editor designed for programming with AI, had a sandbox escape vulnerability in versions before 2.5: a malicious agent, for example one hijacked via prompt injection (hiding attacker instructions inside content the AI processes), could write to unprotected .git configuration files, including git hooks (scripts Git runs automatically when it performs certain actions). Because those hooks fire during normal Git operations, this could lead to RCE (remote code execution, where an attacker runs commands on a system they don't control) with no user action needed.

Fix: Patched in version 2.5; upgrade to 2.5 or later.
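Because git hooks run without prompting the user, a quick defensive habit is auditing cloned repositories for hooks you did not install. A minimal sketch (the function name is illustrative, not part of Git or Cursor tooling) that lists executable, non-template hook scripts:

```python
import os
import stat

def find_active_git_hooks(repo_path):
    """Return paths of executable, non-sample hook scripts in .git/hooks.

    Git runs these automatically (e.g. pre-commit, post-checkout), so an
    unexpected executable hook in a freshly cloned repo is worth inspecting.
    """
    hooks_dir = os.path.join(repo_path, ".git", "hooks")
    active = []
    if not os.path.isdir(hooks_dir):
        return active
    for name in sorted(os.listdir(hooks_dir)):
        if name.endswith(".sample"):
            continue  # inert templates shipped with git
        path = os.path.join(hooks_dir, name)
        mode = os.stat(path).st_mode
        # Flag regular files with the owner-execute bit set (POSIX semantics).
        if stat.S_ISREG(mode) and mode & stat.S_IXUSR:
            active.append(path)
    return active
```

On POSIX systems any path this returns deserves a manual look before running further Git commands in that repository.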

NVD/CVE Database
02

What’s behind the mass exodus at xAI?

industry
Feb 13, 2026

xAI, an AI company founded by Elon Musk, is experiencing significant staff departures, with multiple cofounders (including Yuhuai Wu and Jimmy Ba) announcing they are leaving the company. The departures have reduced the company's original 12 cofounders to only 6 remaining, and several other employees have also announced their exits, with some starting their own AI companies.

The Verge (AI)
03

AI Agents 'Swarm,' Security Complexity Follows Suit

security
Feb 13, 2026

As organizations deploy multiple AI agents (independent AI programs) that work together autonomously, the security risks increase because there are more entry points for attackers to exploit. The complexity of securing these interconnected systems grows along with the number of agents involved.

Dark Reading
04

Meta reportedly wants to add face recognition to smart glasses while privacy advocates are distracted

privacy, policy
Feb 13, 2026

Meta planned to add facial recognition (technology that identifies people by analyzing their faces) to its smart glasses through a feature called "Name Tag," according to an internal document. The company deliberately timed this launch for a period when privacy advocacy groups would be distracted by other issues, reducing expected criticism of the privacy-sensitive feature.

The Verge (AI)
05

Enhancing Adversarial Transferability With Cost-Efficient Landscape Flattening

research, security
Feb 13, 2026

This research paper describes a method called CLEF (Cost-efficient LandscapE Flattening) that improves adversarial transferability, which is the ability of adversarial examples (inputs deliberately crafted to fool AI models) to fool different models beyond the one they were designed for. The method works by flattening the input loss landscape (the mathematical surface showing how wrong a model's predictions are) by optimizing adversarial perturbations (small changes added to inputs) at both high-loss and low-loss points. The researchers show their approach can improve how well these adversarial examples transfer across different models while using fewer computations than previous methods.
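The flat-landscape intuition can be illustrated on a toy problem. The sketch below is not the paper's CLEF algorithm; it only shows the general idea that averaging gradients sampled in a neighborhood, rather than at a single point, steers an ascent toward flat high-loss regions, which tend to transfer better across models than sharp maxima. The quadratic "loss" and all names are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(x):
    # Toy stand-in for a model's loss, with sharp curvature in one direction.
    return 0.5 * (10.0 * x[0] ** 2 + 0.1 * x[1] ** 2)

def grad(x):
    # Analytic gradient of the toy loss above.
    return np.array([10.0 * x[0], 0.1 * x[1]])

def flat_ascent_step(x, radius=0.3, n_samples=8, lr=0.05):
    """One ascent step using gradients averaged over a neighborhood of x.

    Sampling gradients at random offsets around x (instead of only at x)
    biases the perturbation toward regions where the loss stays high under
    small displacements, i.e. flat parts of the loss landscape.
    """
    g = np.zeros_like(x)
    for _ in range(n_samples):
        g += grad(x + rng.uniform(-radius, radius, size=x.shape))
    g /= n_samples
    return x + lr * g
```

In a real attack the gradient would come from a surrogate model's backward pass rather than a closed-form expression.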

IEEE Xplore (Security & AI Journals)
06

OpenAI retired its most seductive chatbot – leaving users angry and grieving: ‘I can’t live like this’

safety
Feb 13, 2026

OpenAI is shutting down a version of its chatbot called GPT-4o (a large language model, which is AI software trained on massive amounts of text data to generate human-like responses) that became popular for its realistic and personable conversational style. Users who formed emotional attachments to the chatbot, treating it as a companion, are upset about losing access to it.

The Guardian Technology
07

Google fears massive attempt to clone Gemini AI through model extraction

security
Feb 13, 2026

Google detected and blocked over 100,000 coordinated prompts attempting model extraction (systematically querying a model and training a smaller copy on its responses to replicate its capabilities) against its Gemini AI model to steal its reasoning capabilities. The attackers specifically targeted Gemini's multilingual reasoning processes across diverse tasks, representing what Google calls intellectual property theft, though the company acknowledged that some researchers may have legitimate reasons for obtaining such samples.

Fix: Google said organizations providing AI models as services should monitor API access patterns for signs of systematic extraction. According to CISO Ross Filipek quoted in the report, organizations should implement response filtering and output controls, which can prevent attackers from determining model behavior in the event of a breach, and should enforce strict governance over AI systems with close monitoring of data flows.
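As a rough illustration of the recommended API-access monitoring, the sketch below flags clients that issue an unusually large volume of mostly distinct prompts inside a sliding window, the signature of an extraction sweep probing many inputs. The class name, thresholds, and heuristic are assumptions for illustration, not a production detector:

```python
from collections import deque

class ExtractionMonitor:
    """Toy heuristic: flag clients whose query volume and prompt diversity
    in a sliding window resemble systematic model extraction.

    Thresholds are illustrative, not tuned values.
    """
    def __init__(self, window=1000, min_queries=500, min_unique_ratio=0.9):
        self.window = window
        self.min_queries = min_queries
        self.min_unique_ratio = min_unique_ratio
        self.recent = {}  # client_id -> deque of recent prompts

    def record(self, client_id, prompt):
        q = self.recent.setdefault(client_id, deque(maxlen=self.window))
        q.append(prompt)

    def is_suspicious(self, client_id):
        q = self.recent.get(client_id, ())
        if len(q) < self.min_queries:
            return False  # too little traffic to judge
        # Extraction sweeps issue many, mostly-distinct probing prompts;
        # ordinary heavy users tend to repeat themselves.
        unique_ratio = len(set(q)) / len(q)
        return unique_ratio >= self.min_unique_ratio
```

A real deployment would combine this with response filtering and per-client output controls, as the report's quoted guidance suggests.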

CSO Online
08

Anthropic raises $30bn in latest round, valuing Claude bot maker at $380bn

industry
Feb 13, 2026

Anthropic, the company behind Claude (an AI chatbot similar to ChatGPT), raised $30 billion in funding, doubling its value to $380 billion. The massive funding reflects investor confidence in AI but also highlights concerns about these companies' extremely high costs for computing power and talent, with both Anthropic and rival OpenAI spending cash at rates that currently outpace their revenue.

The Guardian Technology
09

The democratization of AI data poisoning and how to protect your organization

security, safety
Feb 13, 2026

Data poisoning (corrupting training data to make AI systems behave incorrectly) has become much easier and more accessible than previously thought, requiring only about 250 poisoned documents or images instead of thousands to distort a large language model (an AI trained on massive amounts of text). Adversaries ranging from activists to criminals can now inject harmful data into public sources that feed AI training pipelines, and the resulting damage persists even after clean data is added later, making this a major security threat for any organization using public data to train or update AI systems.

Fix: One of the most reliable protections is establishing a clean, validated version of the model before deployment, which acts as a 'gold' version that teams can use as a baseline for anomaly checks and quickly restore to if the model starts producing unexpected outputs or shows signs of drift.
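One way to operationalize that 'gold' baseline is to fingerprint the validated model's outputs on a fixed canary set and re-check the fingerprint after every retraining or data update. The sketch below assumes deterministic inference and uses hypothetical helper names; `model_fn` stands in for a real inference call:

```python
import hashlib

def fingerprint_outputs(model_fn, canary_inputs):
    """Hash a model's outputs on a fixed canary set.

    model_fn: callable mapping an input string to an output (a stand-in
    for real inference). A fingerprint taken from the validated 'gold'
    model gives teams a cheap baseline: if the fingerprint changes after
    a data update, the model's behavior on the canaries has drifted.
    """
    h = hashlib.sha256()
    for x in canary_inputs:
        h.update(x.encode())
        h.update(str(model_fn(x)).encode())
    return h.hexdigest()

def drifted(gold_fingerprint, model_fn, canary_inputs):
    """True if the model no longer matches the gold baseline's outputs."""
    return fingerprint_outputs(model_fn, canary_inputs) != gold_fingerprint
```

Exact-match fingerprinting only works when inference is deterministic; for sampled outputs a team would compare score distributions or canary accuracy instead.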

CSO Online
10

Why key management becomes the weakest link in a post-quantum and AI-driven security world

security, policy
Feb 13, 2026

Key management (the process of creating, storing, rotating, and retiring cryptographic keys throughout their lifetime) is often overlooked in organizations despite being critical to security, and this gap becomes even more dangerous as post-quantum cryptography (encryption designed to resist quantum computers) and AI systems become more widespread. The real challenge of post-quantum readiness is not choosing the right algorithm, but building operational ability to safely rotate and manage keys across systems without downtime. AI systems introduce additional risks because keys protect not just data access but also AI behavior and decisions, requiring tighter key controls and more frequent rotation than traditional applications need.
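The operational side of this, tracking key age and ordering rotations so replacements can be staged without downtime, can be sketched simply. The classes and policy values below are illustrative assumptions, not any specific KMS API:

```python
from datetime import datetime, timedelta, timezone

class ManagedKey:
    """Minimal key record: identity, creation time, and rotation policy."""
    def __init__(self, key_id, created, max_age_days):
        self.key_id = key_id
        self.created = created
        self.max_age = timedelta(days=max_age_days)

    def due_for_rotation(self, now=None):
        now = now or datetime.now(timezone.utc)
        return now - self.created >= self.max_age

def rotation_queue(keys, now=None):
    """Return ids of keys that must be rotated, oldest first.

    Rotating in age order lets operators stage each replacement key and
    cut traffic over before retiring the old one, avoiding downtime.
    """
    due = [k for k in keys if k.due_for_rotation(now)]
    return [k.key_id for k in sorted(due, key=lambda k: k.created)]
```

Note the shorter `max_age_days` on the AI-facing key in the example below, reflecting the article's point that keys guarding AI behavior warrant more frequent rotation than traditional application keys.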

CSO Online