aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

to
Export CSV
1245 items

Let’s talk about Ring, lost dogs, and the surveillance state

infonews
policy
Feb 16, 2026

Ring's Super Bowl advertisement promoting its Search Party feature, which uses camera footage to find lost dogs, sparked controversy over surveillance and privacy concerns because the same technology could be used to track and locate people without consent. Critics, including Senator Ed Markey, argued the ad represented mass surveillance and called for Ring to stop using facial recognition (technology that identifies people by analyzing their faces) on its doorbells. Four days after the backlash, Ring canceled its planned partnership with Flock Safety, a company whose surveillance systems had been accessed by ICE (Immigration and Customs Enforcement).

The Verge (AI)

The Promptware Kill Chain

infonews
securityresearch

After spooking Hollywood, ByteDance will tweak safeguards on new AI model

infonews
safetypolicy

CISO Julie Chatman wants to help you take control of your security leadership role

infonews
security
Feb 16, 2026

This article is a career profile of Julie Chatman, a CISO (Chief Information Security Officer, the top security leader at an organization), discussing the evolving challenges in her role. She highlights four major challenges: getting non-technical leaders to prioritize security, securing adequate funding, defending against AI-enabled adaptive attacks (attacks that change their behavior based on the target), and facing personal legal liability for breach handling and risk disclosure decisions.

10 years later, Bangladesh Bank cyberheist still offers cyber-resiliency lessons

infonews
security
Feb 16, 2026

Ten years after the Bangladesh Bank cyberheist in 2016, investigators traced the $81 million theft to North Korea's Lazarus Group, which hacked the bank's internal network and SWIFT (a system for sending international bank payments) to send fraudulent payment instructions. The attackers used spear-phishing emails (deceptive messages targeting specific people) to plant malware, created secret access points called backdoors, and sabotaged printers to hide evidence before triggering the attack during a holiday weekend when monitoring was minimal.

We will do battle with AI chatbots as we did with Grok, says Starmer

infonews
policysafety

SIEM-Kaufratgeber

infonews
security
Feb 15, 2026

SIEM (Security Information and Event Management, tools that collect and analyze security logs from networks) solutions are essential components of modern security systems that protect against attackers who try to hide their activities by manipulating event logs. When selecting a SIEM tool, organizations should consider the deployment model (cloud-based or on-premises), analytics capabilities powered by machine learning (algorithms that learn from data to detect unusual patterns), and how well the system can collect and process logs from various sources like servers, networks, and cloud applications.

OpenClaw founder Peter Steinberger is joining OpenAI

infonews
industry
Feb 15, 2026

Peter Steinberger, the founder of OpenClaw (an AI agent, which is an AI system designed to complete tasks autonomously), has joined OpenAI. Sam Altman stated that Steinberger's expertise in getting multiple AI agents to work together will become important to OpenAI's future products, as the company believes the future will involve many agents collaborating.

Starmer to extend online safety rules to AI chatbots after Grok scandal

infonews
policysafety

The AI trade has entered a puzzling phase. Do we know who the winners are anymore?

infonews
industry
Feb 15, 2026

N/A -- The provided content is a footer/navigation page from CNBC with no substantive information about AI or LLM-related topics. It contains only website links, legal notices, and subscription prompts, making it impossible to extract meaningful technical content to summarize.

I hate my AI pet with every fiber of my being

infonews
industry
Feb 15, 2026

A reviewer describes their negative experience with Moflin, Casio's AI-powered robotic pet, finding its constant noises and movements irritating despite its cute appearance and design for people who cannot own real pets. The article suggests that AI pet companions, while intended to provide companionship, may create frustration rather than the comfort they promise.

AI can’t make good video game worlds yet, and it might never be able to

infonews
industry
Feb 15, 2026

The article discusses how video game developers have long created games that generate their own worlds using programmed rules and parameters, such as Minecraft and Rogue, but suggests that generative AI (machine learning models that create new content) may struggle to replicate this capability effectively. The piece implies fundamental limitations in how AI can approach world-building compared to human developers' intentional design methods.

langchain-openrouter==0.0.2

infonews
security
Feb 15, 2026

This appears to be a navigation or header section from a GitHub page related to AI coding tools like GitHub Copilot and Spark, rather than a security issue or technical problem about the langchain-openrouter package.

langchain-anthropic==1.3.3

infonews
security
Feb 15, 2026

LangChain-Anthropic version 1.3.3 is a software release that includes several updates to how the library works with Anthropic's AI models. The updates add support for an "effort=max" parameter (which tells the AI to use maximum computational effort), fix an issue where extra spaces were being left at the end of AI responses, and introduce a new ContextOverflowError (an error that triggers when an AI receives too much text to process at once).

langchain-openai==1.1.9

lownews
security
Feb 15, 2026

LangChain's OpenAI integration released version 1.1.9, which fixes a bug where URLs in images weren't being properly cleaned up when the system counted how many tokens (units of text that an AI processes) were being used. The update also adds better error handling for when a prompt (input text to an AI) becomes too long to process.

langchain-core==1.2.13

infonews
security
Feb 15, 2026

This is a release announcement for langchain-core version 1.2.13, a software package that provides core functionality for building applications with language models. The release includes documentation improvements, a new OpenRouter provider package, and a code style update.

langchain-openrouter==0.0.1: feat(openrouter): add `langchain-openrouter` provider package (#35211)

infonews
security
Feb 15, 2026

LangChain added a new official package called langchain-openrouter that wraps the OpenRouter Python SDK (a library for accessing different AI models through one interface). This package, which includes a ChatOpenRouter component, handles capabilities that the existing ChatOpenAI component intentionally does not support.

No swiping involved: the AI dating apps promising to find your soulmate

infonews
industry
Feb 15, 2026

New AI-powered dating apps like Fate are emerging that use agentic AI (AI systems that can take actions and make decisions autonomously) and LLMs (large language models, the technology behind systems like ChatGPT) to match users based on personality similarity rather than superficial rankings, and some offer AI coaching to help users have better conversations. These startups aim to address problems with existing dating apps that use algorithmic ranking systems like Elo scores (ratings originally designed for chess) and are criticized for profiting by keeping users on the platform longer.

How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt

infonews
researchsafety

US military used Anthropic’s AI model Claude in Venezuela raid, report says

infonews
securitypolicy
Previous42 / 63Next
Feb 16, 2026

Attacks on AI language models have evolved beyond simple prompt injection (tricking an AI by hiding instructions in its input) into a more complex threat called "promptware," which follows a structured seven-step kill chain similar to traditional malware. The fundamental problem is that large language models (LLMs, AI systems trained on massive amounts of text) treat all input the same way, whether it's a trusted system command or untrusted data from a retrieved document, creating no architectural boundary between them.

Schneier on Security
Feb 16, 2026

ByteDance announced it will improve safeguards on Seedance 2.0, its AI video generator (software that creates realistic videos from text descriptions), after Hollywood studios and trade groups complained that the tool violates copyright by generating hyperrealistic videos of famous actors and characters without permission. The company stated it respects intellectual property rights and is taking steps to strengthen current safeguards in response to the backlash.

The Verge (AI)
CSO Online
CSO Online
Feb 16, 2026

The UK government is proposing new laws to protect children online by including AI chatbots in the Online Safety Act (the law regulating online platforms), faster legislative updates to keep pace with technology changes, and measures like preserving children's data after death and preventing VPN use to bypass age checks. The prime minister pledged to act quickly against AI tools that create non-consensual sexual deepfakes and to crack down on addictive social media features like auto-play and endless scrolling.

Fix: The government intends to: (1) include AI chatbots in the Online Safety Act, which became law in 2023 but predates ChatGPT and similar tools; (2) create new legal powers to take 'immediate action' following consultation; (3) amend rules so chatbots must protect users from illegal content; (4) require coroners to notify Ofcom of every child death aged 5-18 to ensure tech companies preserve relevant data within five days rather than allowing deletion within 12 months; and (5) consider preventing children from using virtual private networks (VPNs, tools that mask a user's location and identity) to bypass age checks. The Technology Secretary stated the government should be able to 'act swiftly once it had come to a decision' and compared the need for faster technology legislation to the annual budget process.

BBC Technology
CSO Online
The Verge (AI)
Feb 15, 2026

The UK government plans to extend online safety rules to AI chatbots, with makers of systems that endanger children facing fines or service blocks. This follows a scandal involving Elon Musk's Grok tool (an AI chatbot), which was stopped from generating sexualized images of real people in the UK after public pressure.

The Guardian Technology
CNBC Technology
The Verge (AI)
The Verge (AI)
LangChain Security Releases

Fix: Update to langchain-anthropic version 1.3.3, which includes fixes for trailing whitespace in assistant messages and support for the effort="max" parameter.

LangChain Security Releases

Fix: Update to langchain-openai version 1.1.9 or later. The fix for URL sanitization when counting image tokens is included in this release.

LangChain Security Releases
LangChain Security Releases
LangChain Security Releases
The Guardian Technology
Feb 15, 2026

Cognitive debt (the loss of shared understanding in developers' minds about how a system works) is becoming a bigger problem than technical debt (poorly written code) when using generative AI and agentic AI (AI systems that can take actions autonomously). Even if AI produces clean code, developers may lose track of why design decisions were made or how different parts connect, making it impossible to understand or modify the system confidently.

Simon Willison's Weblog
Feb 14, 2026

According to the Wall Street Journal, Claude (an AI model made by Anthropic) was used by the US military in an operation in Venezuela involving airstrikes and resulting in 83 deaths. This violates Anthropic's terms of use, which explicitly forbid Claude from being used for violence, weapons development, or surveillance.

The Guardian Technology