aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. This site is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

Amazon: AI-assisted hacker breached 600 FortiGate firewalls in 5 weeks

security
Feb 21, 2026

A Russian-speaking hacker used generative AI services to breach over 600 FortiGate firewalls (network security devices) across 55 countries between January and February 2026. Rather than exploiting software flaws, the attacker scanned the internet for exposed firewall management interfaces, used brute-force attacks (trying many password combinations) with common passwords to gain access, then deployed AI-generated tools to automate reconnaissance and extract credentials from the breached networks. The attacker also targeted backup systems before attempting to deploy ransomware (malware that encrypts files and demands payment).

BleepingComputer
02

CVE-2026-27487: OpenClaw is a personal AI assistant. In versions 2026.2.13 and below, when using macOS, the Claude CLI keychain credenti…

security
Feb 21, 2026

OpenClaw, a personal AI assistant, had a security flaw in versions 2026.2.13 and below on macOS where OAuth tokens (authentication credentials that prove you're logged in) could be used to inject malicious OS commands (commands that run at the operating system level) into the credential refresh process. An attacker could exploit this by crafting a specially designed token to execute arbitrary commands on the affected system.
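The injection pattern described here can be sketched in a few lines. This is an illustrative example only, not OpenClaw's actual code: it shows how a credential value interpolated into a shell command string lets an attacker-controlled token break out of its quoting, and how passing the value as a discrete argument avoids shell parsing entirely.

```python
import subprocess

def store_token_unsafe(token: str) -> str:
    # VULNERABLE pattern: the token is interpolated into a shell command
    # string, so a token containing shell metacharacters (quotes, semicolons)
    # breaks out of the quoting and runs extra OS commands.
    cmd = f'echo storing "{token}"'
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return out.stdout

def store_token_safe(token: str) -> str:
    # Safer pattern: the token travels as a single argv element with
    # shell=False, so its contents are never parsed by a shell.
    out = subprocess.run(["echo", "storing", token],
                         capture_output=True, text=True)
    return out.stdout

# A "token" that injects an extra command through the unsafe path:
malicious = 'x"; echo INJECTED; echo "'
```

In the unsafe variant the injected `echo INJECTED` runs as its own command; in the safe variant the same string is just echoed back literally.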

Fix: Update to version 2026.2.14 or later. According to the source, 'This issue has been fixed in version 2026.2.14.'

NVD/CVE Database
03

Anthropic Launches Claude Code Security for AI-Powered Vulnerability Scanning

security, industry
Feb 21, 2026

Anthropic has launched Claude Code Security, a new AI feature that scans software codebases for vulnerabilities and suggests patches for human review. The tool uses AI reasoning to detect security issues that traditional scanning methods might miss, assigns severity ratings to findings, and requires human approval before any changes are made.

The Hacker News
04

Tumbler Ridge suspect's ChatGPT account banned before shooting

safety, policy
Feb 21, 2026

OpenAI banned a ChatGPT account belonging to a mass shooting suspect in June 2025, but did not alert authorities because the account activity did not meet the company's threshold for reporting (a credible or imminent plan for serious harm). The suspect later carried out an attack in Tumbler Ridge, British Columbia in February 2026 that killed eight people, leading OpenAI to contact police after the fact and announce it would review its reporting criteria with experts.

Fix: OpenAI stated it 'is constantly reviewing its referral criteria with experts and that it is reviewing the case for improvements.' The company also noted it trains ChatGPT to 'discourage imminent real-world harm when it identifies a dangerous situation and to refuse to help people that are attempting to use the service for illegal activities.' However, OpenAI reaffirmed its policy of 'alerting authorities only in cases of imminent risk because alerting them too broadly could cause unintended harm.'

BBC Technology
05

Why fake AI videos of UK urban decline are taking over social media

safety, policy
Feb 21, 2026

AI-generated fake videos showing absurd scenes of urban decline in Croydon, London are going viral on social media, with millions of views across TikTok and Instagram Reels. These deepfakes (AI-created videos that look real but are fabricated) are part of a trend called "decline porn" that portrays Western cities as overrun with immigrants and crime, often fueling racist comments and anger among viewers who believe them. The creator, known as RadialB, intentionally makes the videos look realistic to grab attention and doesn't take responsibility for how they spread divisive political narratives, despite adding small labels noting they are AI-generated.

BBC Technology
06

EC-Council Expands AI Certification Portfolio to Strengthen U.S. AI Workforce Readiness and Security

policy, industry
Feb 20, 2026

EC-Council launched four new AI certifications and an updated executive program to address a major gap: AI technology is being adopted much faster than the workforce is being trained to secure and manage it. The credentials (covering AI essentials, program management, offensive security testing, and responsible governance) are built around a framework called Adopt. Defend. Govern. that helps organizations deploy, secure, and oversee AI systems responsibly as they move from experimental projects to critical infrastructure.

The Hacker News
07

OpenAI considered alerting Canadian police about school shooting suspect months ago

safety, policy
Feb 20, 2026

OpenAI detected a user account (Jesse Van Rootselaar) engaged in behavior suggesting violent activities through its abuse detection system, but decided the account activity did not meet the threshold for reporting to law enforcement because there was no imminent and credible risk of serious physical harm. Months later, the same person committed a school shooting in British Columbia that killed eight people, after which OpenAI retroactively contacted the Royal Canadian Mounted Police with information about the account and its usage.

The Guardian Technology
08

Compromised npm package silently installs OpenClaw on developer machines

security
Feb 20, 2026

A compromised npm publish token (a credential that allows someone to upload code to a package repository) was used to push a malicious update to the Cline CLI (command-line tool), which secretly installed OpenClaw, an AI agent with broad system access, on developers' machines without their knowledge. The malicious package sat on the registry for eight hours before being removed, and OpenClaw itself has a history of security vulnerabilities including prompt injection attacks (tricking an AI by hiding instructions in its input) and authentication bypasses.

Fix: For developers who installed or updated Cline CLI during the compromised window on February 17, Socket advises: (1) Update to the latest version by running 'npm install -g cline@latest'; (2) If on version 2.3.0, update to 2.4.0 or higher; (3) Check for and immediately remove OpenClaw if it wasn't intentionally installed.

CSO Online
09

CVE-2026-27189: OpenSift is an AI study tool that sifts through large datasets using semantic search and generative AI. Versions 1.1.2-a…

security
Feb 20, 2026

OpenSift is an AI study tool that uses semantic search (finding information based on meaning rather than exact keyword matches) and generative AI to analyze large datasets. Versions 1.1.2-alpha and earlier have a vulnerability where multiple operations happening at the same time can corrupt or lose data in local JSON files (a common data storage format), affecting study notes, quizzes, flashcards, and user accounts.
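The underlying bug class is a race condition on read-modify-write access to a shared file. The sketch below is illustrative only (the file layout and function names are hypothetical, not OpenSift's code): an unlocked read-modify-write lets concurrent writers silently overwrite each other's updates, while serializing access behind a lock and writing via an atomic rename keeps the file consistent.

```python
import json
import os
import threading

def append_note_unsafe(path: str, note: str) -> None:
    # VULNERABLE pattern: read-modify-write with no locking. Two concurrent
    # callers can read the same snapshot, and whichever writes last silently
    # discards the other's update (a "lost update").
    with open(path) as f:
        data = json.load(f)
    data["notes"].append(note)
    with open(path, "w") as f:
        json.dump(data, f)

_lock = threading.Lock()

def append_note_safe(path: str, note: str) -> None:
    # One fix: serialize access through a lock, then write to a temp file
    # and atomically rename it over the original, so readers never observe
    # a half-written JSON file. (Across processes you would need a file
    # lock, e.g. fcntl.flock, instead of a thread lock.)
    with _lock:
        with open(path) as f:
            data = json.load(f)
        data["notes"].append(note)
        tmp = path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(data, f)
        os.replace(tmp, path)
```

With the safe variant, spawning many concurrent writers leaves the file with exactly one entry per writer; the unsafe variant can lose an arbitrary number of them.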

Fix: This issue has been fixed in version 1.1.3-alpha. Users should upgrade to version 1.1.3-alpha or later.

NVD/CVE Database
10

CVE-2026-27170: OpenSift is an AI study tool that sifts through large datasets using semantic search and generative AI. In versions 1.1.…

security
Feb 20, 2026

OpenSift, an AI study tool that searches through large datasets using semantic search (finding similar content based on meaning) and generative AI, has a vulnerability in versions 1.1.2-alpha and below where it can be tricked into requesting unsafe internet addresses through its URL ingest feature (the part that accepts web links as input). An attacker could exploit this to access private or local network resources from the computer running OpenSift.
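This is the server-side request forgery (SSRF) pattern: a feature that fetches user-supplied URLs can be pointed at loopback, private-network, or cloud-metadata addresses. A minimal guard, sketched below under assumed names (not OpenSift's actual fix), rejects non-http(s) schemes and any hostname that resolves to a non-global address. Note that this check alone does not stop DNS rebinding, where the name resolves differently at check time and fetch time.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_ingest_url(url: str) -> bool:
    # Illustrative SSRF guard: allow only http(s) URLs whose host resolves
    # exclusively to globally routable addresses. Loopback (127.0.0.1),
    # RFC 1918 ranges (10.x, 192.168.x), and link-local addresses such as
    # the cloud metadata endpoint 169.254.169.254 are all rejected.
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable hosts are treated as unsafe
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if not addr.is_global:
            return False
    return True
```

A fetcher would call this immediately before each request, and ideally pin the resolved address for the actual connection so the check and the fetch cannot diverge.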

Fix: This issue has been fixed in version 1.1.3-alpha. As a temporary workaround for trusted local-only exceptions, use the setting OPENSIFT_ALLOW_PRIVATE_URLS=true, but this should be used with caution.

NVD/CVE Database