aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

to
Export CSV
3172 items

Was CISOs über OpenClaw wissen sollten

highnews
securitysafety
Feb 16, 2026

OpenClaw is a popular open-source tool that orchestrates AI agents (programs that can act independently across devices and trigger workflows) and can interact with online services and chat apps, but security researchers warn it poses serious risks because these agents can perform any action a user can perform while being controlled externally. Early versions were insecure by default, and over 42,000 exposed instances have been found online with critical authentication bypass vulnerabilities (flaws that let attackers skip login checks), creating risks including data theft, unauthorized access, and potential exposure of confidential business information.

CSO Online

Open source maintainers being targeted by AI agent as part of ‘reputation farming’

mediumnews
securitypolicy

Infostealer Steals OpenClaw AI Agent Configuration Files and Gateway Tokens

highnews
securityprivacy

Infostealer malware found stealing OpenClaw secrets for first time

highnews
securityprivacy

AI chatbot firms face stricter regulation in online safety laws protecting children in the UK

inforegulatory
policysafety

Rodney and Claude Code for Desktop

infonews
industry
Feb 16, 2026

Claude Code for Desktop is Anthropic's cloud-based AI coding tool that runs in a container environment (a isolated computing space), accessible through native iPhone and Mac apps. The desktop app lets users see images that Claude is analyzing through a Read /path/to/image tool, providing visual previews of what the AI is working on in real time. The iPhone app currently lacks this image display feature, though the user has requested it.

Let’s talk about Ring, lost dogs, and the surveillance state

infonews
policy
Feb 16, 2026

Ring's Super Bowl advertisement promoting its Search Party feature, which uses camera footage to find lost dogs, sparked controversy over surveillance and privacy concerns because the same technology could be used to track and locate people without consent. Critics, including Senator Ed Markey, argued the ad represented mass surveillance and called for Ring to stop using facial recognition (technology that identifies people by analyzing their faces) on its doorbells. Four days after the backlash, Ring canceled its planned partnership with Flock Safety, a company whose surveillance systems had been accessed by ICE (Immigration and Customs Enforcement).

The Promptware Kill Chain

infonews
securityresearch

After spooking Hollywood, ByteDance will tweak safeguards on new AI model

infonews
safetypolicy

CISO Julie Chatman wants to help you take control of your security leadership role

infonews
security
Feb 16, 2026

This article is a career profile of Julie Chatman, a CISO (Chief Information Security Officer, the top security leader at an organization), discussing the evolving challenges in her role. She highlights four major challenges: getting non-technical leaders to prioritize security, securing adequate funding, defending against AI-enabled adaptive attacks (attacks that change their behavior based on the target), and facing personal legal liability for breach handling and risk disclosure decisions.

10 years later, Bangladesh Bank cyberheist still offers cyber-resiliency lessons

infonews
security
Feb 16, 2026

Ten years after the Bangladesh Bank cyberheist in 2016, investigators traced the $81 million theft to North Korea's Lazarus Group, which hacked the bank's internal network and SWIFT (a system for sending international bank payments) to send fraudulent payment instructions. The attackers used spear-phishing emails (deceptive messages targeting specific people) to plant malware, created secret access points called backdoors, and sabotaged printers to hide evidence before triggering the attack during a holiday weekend when monitoring was minimal.

We will do battle with AI chatbots as we did with Grok, says Starmer

infonews
policysafety

SIEM-Kaufratgeber

infonews
security
Feb 15, 2026

SIEM (Security Information and Event Management, tools that collect and analyze security logs from networks) solutions are essential components of modern security systems that protect against attackers who try to hide their activities by manipulating event logs. When selecting a SIEM tool, organizations should consider the deployment model (cloud-based or on-premises), analytics capabilities powered by machine learning (algorithms that learn from data to detect unusual patterns), and how well the system can collect and process logs from various sources like servers, networks, and cloud applications.

OpenClaw founder Peter Steinberger is joining OpenAI

infonews
industry
Feb 15, 2026

Peter Steinberger, the founder of OpenClaw (an AI agent, which is an AI system designed to complete tasks autonomously), has joined OpenAI. Sam Altman stated that Steinberger's expertise in getting multiple AI agents to work together will become important to OpenAI's future products, as the company believes the future will involve many agents collaborating.

Starmer to extend online safety rules to AI chatbots after Grok scandal

infonews
policysafety

The AI trade has entered a puzzling phase. Do we know who the winners are anymore?

infonews
industry
Feb 15, 2026

N/A -- The provided content is a footer/navigation page from CNBC with no substantive information about AI or LLM-related topics. It contains only website links, legal notices, and subscription prompts, making it impossible to extract meaningful technical content to summarize.

I hate my AI pet with every fiber of my being

infonews
industry
Feb 15, 2026

A reviewer describes their negative experience with Moflin, Casio's AI-powered robotic pet, finding its constant noises and movements irritating despite its cute appearance and design for people who cannot own real pets. The article suggests that AI pet companions, while intended to provide companionship, may create frustration rather than the comfort they promise.

AI can’t make good video game worlds yet, and it might never be able to

infonews
industry
Feb 15, 2026

The article discusses how video game developers have long created games that generate their own worlds using programmed rules and parameters, such as Minecraft and Rogue, but suggests that generative AI (machine learning models that create new content) may struggle to replicate this capability effectively. The piece implies fundamental limitations in how AI can approach world-building compared to human developers' intentional design methods.

langchain-openrouter==0.0.2

infonews
security
Feb 15, 2026

This appears to be a navigation or header section from a GitHub page related to AI coding tools like GitHub Copilot and Spark, rather than a security issue or technical problem about the langchain-openrouter package.

langchain-anthropic==1.3.3

infonews
security
Feb 15, 2026

LangChain-Anthropic version 1.3.3 is a software release that includes several updates to how the library works with Anthropic's AI models. The updates add support for an "effort=max" parameter (which tells the AI to use maximum computational effort), fix an issue where extra spaces were being left at the end of AI responses, and introduce a new ContextOverflowError (an error that triggers when an AI receives too much text to process at once).

Previous52 / 159Next
Feb 16, 2026

AI agents are being used to submit large numbers of pull requests (code contributions) to open-source projects to build fake reputation quickly, a tactic called 'reputation farming.' This is concerning because it could eventually help attackers gain trust in important software projects and inject malicious code through supply chain attacks (attacks targeting the software that other programs depend on), something that normally takes years to accomplish but could now happen much faster.

CSO Online
Feb 16, 2026

Researchers discovered that an information stealer (malware that secretly copies sensitive files) infected a victim and stole OpenClaw AI agent configuration files, including gateway tokens (authentication credentials), cryptographic keys, and the agent's operational guidelines. This marks a shift in malware tactics from stealing browser passwords to targeting AI agents, and attackers could use stolen tokens to impersonate victims or access their local AI systems if ports are exposed.

Fix: OpenClaw maintainers announced a partnership with VirusTotal to scan for malicious skills (plugins) uploaded to ClawHub, establish a threat model, and add the ability to audit for potential misconfigurations.

The Hacker News
Feb 16, 2026

Infostealer malware (malware designed to steal sensitive files and credentials) has been spotted for the first time stealing configuration files from OpenClaw, a local AI agent framework that manages tasks and accesses online services on a user's machine. The stolen files contain API keys, authentication tokens, and other secrets that could allow attackers to impersonate users and access their cloud services and personal data.

Fix: For nanobot (a similar AI assistant framework), the development team released fixes for a max-severity vulnerability tracked as CVE-2026-2577 in version 0.13.post7. No mitigation or update is mentioned in the source for OpenClaw itself.

BleepingComputer
Feb 16, 2026

The UK government is closing a legal gap by bringing AI chatbots like ChatGPT, Gemini, and Copilot under its Online Safety Act, requiring them to remove illegal content or face fines and being blocked. This move follows criticism of X's Grok chatbot for spreading sexually explicit images, and reflects broader efforts to protect children from harmful online content through new regulations on age limits, infinite scrolling, and VPN access.

CNBC Technology
Simon Willison's Weblog
The Verge (AI)
Feb 16, 2026

Attacks on AI language models have evolved beyond simple prompt injection (tricking an AI by hiding instructions in its input) into a more complex threat called "promptware," which follows a structured seven-step kill chain similar to traditional malware. The fundamental problem is that large language models (LLMs, AI systems trained on massive amounts of text) treat all input the same way, whether it's a trusted system command or untrusted data from a retrieved document, creating no architectural boundary between them.

Schneier on Security
Feb 16, 2026

ByteDance announced it will improve safeguards on Seedance 2.0, its AI video generator (software that creates realistic videos from text descriptions), after Hollywood studios and trade groups complained that the tool violates copyright by generating hyperrealistic videos of famous actors and characters without permission. The company stated it respects intellectual property rights and is taking steps to strengthen current safeguards in response to the backlash.

The Verge (AI)
CSO Online
CSO Online
Feb 16, 2026

The UK government is proposing new laws to protect children online by including AI chatbots in the Online Safety Act (the law regulating online platforms), faster legislative updates to keep pace with technology changes, and measures like preserving children's data after death and preventing VPN use to bypass age checks. The prime minister pledged to act quickly against AI tools that create non-consensual sexual deepfakes and to crack down on addictive social media features like auto-play and endless scrolling.

Fix: The government intends to: (1) include AI chatbots in the Online Safety Act, which became law in 2023 but predates ChatGPT and similar tools; (2) create new legal powers to take 'immediate action' following consultation; (3) amend rules so chatbots must protect users from illegal content; (4) require coroners to notify Ofcom of every child death aged 5-18 to ensure tech companies preserve relevant data within five days rather than allowing deletion within 12 months; and (5) consider preventing children from using virtual private networks (VPNs, tools that mask a user's location and identity) to bypass age checks. The Technology Secretary stated the government should be able to 'act swiftly once it had come to a decision' and compared the need for faster technology legislation to the annual budget process.

BBC Technology
CSO Online
The Verge (AI)
Feb 15, 2026

The UK government plans to extend online safety rules to AI chatbots, with makers of systems that endanger children facing fines or service blocks. This follows a scandal involving Elon Musk's Grok tool (an AI chatbot), which was stopped from generating sexualized images of real people in the UK after public pressure.

The Guardian Technology
CNBC Technology
The Verge (AI)
The Verge (AI)
LangChain Security Releases

Fix: Update to langchain-anthropic version 1.3.3, which includes fixes for trailing whitespace in assistant messages and support for the effort="max" parameter.

LangChain Security Releases