aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1254 items

AI can’t make good video game worlds yet, and it might never be able to

info · news
industry
Feb 15, 2026

The article discusses how video game developers have long created games that generate their own worlds using programmed rules and parameters, such as Minecraft and Rogue, but suggests that generative AI (machine learning models that create new content) may struggle to replicate this capability effectively. The piece implies fundamental limitations in how AI can approach world-building compared to human developers' intentional design methods.

The Verge (AI)

langchain-openrouter==0.0.2

info · news
security
Feb 15, 2026

This appears to be a navigation or header section from a GitHub page related to AI coding tools like GitHub Copilot and Spark, rather than a security issue or technical problem with the langchain-openrouter package.

LangChain Security Releases

langchain-anthropic==1.3.3

info · news
security
Feb 15, 2026

LangChain-Anthropic version 1.3.3 is a software release that includes several updates to how the library works with Anthropic's AI models. The updates add support for an "effort=max" parameter (which tells the AI to use maximum computational effort), fix an issue where extra spaces were being left at the end of AI responses, and introduce a new ContextOverflowError (an error that triggers when an AI receives too much text to process at once).

Fix: Update to langchain-anthropic version 1.3.3, which includes fixes for trailing whitespace in assistant messages and support for the effort="max" parameter.

LangChain Security Releases

langchain-openai==1.1.9

low · news
security
Feb 15, 2026

LangChain's OpenAI integration released version 1.1.9, which fixes a bug where URLs in images weren't being properly cleaned up when the system counted how many tokens (units of text that an AI processes) were being used. The update also adds better error handling for when a prompt (input text to an AI) becomes too long to process.

Fix: Update to langchain-openai version 1.1.9 or later. The fix for URL sanitization when counting image tokens is included in this release.

LangChain Security Releases

langchain-core==1.2.13

info · news
security
Feb 15, 2026

This is a release announcement for langchain-core version 1.2.13, a software package that provides core functionality for building applications with language models. The release includes documentation improvements, a new OpenRouter provider package, and a code style update.

LangChain Security Releases

langchain-openrouter==0.0.1: feat(openrouter): add `langchain-openrouter` provider package (#35211)

info · news
security
Feb 15, 2026

LangChain added a new official package called langchain-openrouter that wraps the OpenRouter Python SDK (a library for accessing different AI models through one interface). This package, which includes a ChatOpenRouter component, handles capabilities that the existing ChatOpenAI component intentionally does not support.

LangChain Security Releases

No swiping involved: the AI dating apps promising to find your soulmate

info · news
industry
Feb 15, 2026

New AI-powered dating apps like Fate are emerging that use agentic AI (AI systems that can take actions and make decisions autonomously) and LLMs (large language models, the technology behind systems like ChatGPT) to match users based on personality similarity rather than superficial rankings, and some offer AI coaching to help users have better conversations. These startups aim to address problems with existing dating apps that use algorithmic ranking systems like Elo scores (ratings originally designed for chess) and are criticized for profiting by keeping users on the platform longer.

The Guardian Technology

How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt

info · news
research · safety
Feb 15, 2026

Cognitive debt (the loss of shared understanding in developers' minds about how a system works) is becoming a bigger problem than technical debt (poorly written code) when using generative AI and agentic AI (AI systems that can take actions autonomously). Even if AI produces clean code, developers may lose track of why design decisions were made or how different parts connect, making it impossible to understand or modify the system confidently.

The Guardian Technology

US military used Anthropic's AI model Claude in Venezuela raid, report says

info · news
security · policy
Feb 14, 2026

According to the Wall Street Journal, Claude (an AI model made by Anthropic) was used by the US military in an operation in Venezuela involving airstrikes and resulting in 83 deaths. This violates Anthropic's terms of use, which explicitly forbid Claude from being used for violence, weapons development, or surveillance.

Simon Willison's Weblog

It's been a big — but rocky — week for AI models from China. Here's what's happened

info · news
industry
Feb 14, 2026

Chinese tech companies Alibaba, ByteDance, and Kuaishou released new AI models this week that compete with Western AI tools in robotics and video generation. Alibaba's RynnBrain helps robots understand and interact with physical objects by tracking time and location, while ByteDance's Seedance 2.0 generates realistic videos from text prompts. However, ByteDance suspended Seedance's voice generation feature after concerns emerged that it was creating voices without the consent of the people whose images were used.

CNBC Technology

Anthropic's public benefit mission

info · news
policy
Feb 13, 2026

Anthropic is a public benefit corporation (a company legally structured to serve public interest, not just shareholders) that has stated its mission as developing AI responsibly for humanity's benefit. The company's official incorporation documents show this mission statement has remained consistent from 2021 to 2024, with only minor wording updates.

Simon Willison's Weblog

The evolution of OpenAI's mission statement

info · news
policy · industry
Feb 13, 2026

This article tracks how OpenAI's official mission statement, filed annually with the IRS (the U.S. tax authority), changed between 2016 and 2024. Over time, OpenAI removed mentions of openly sharing capabilities, dropped the phrase "as a whole" from "benefit humanity," shifted from wanting to "help" build safe AI to committing to "develop and responsibly deploy" it themselves, and eventually cut the mission down to a single sentence focused on ensuring artificial general intelligence (AI systems designed to handle any task a human can) benefits all of humanity, while notably removing any mention of safety.

Simon Willison's Weblog

Anthropic got an 11% user boost from its OpenAI-bashing Super Bowl ad, data shows

info · news
industry
Feb 13, 2026

Anthropic's Super Bowl advertisement criticizing OpenAI's decision to add ads to ChatGPT resulted in an 11% increase in daily active users for Claude (Anthropic's chatbot), outperforming competing AI chatbots from OpenAI, Google, and Meta. The ad campaign reflects growing competition between AI companies as they vie for users and enterprise customers ahead of potential future public offerings.

CNBC Technology

Claude LLM artifacts abused to push Mac infostealers in ClickFix attack

high · news
security
Feb 13, 2026

Threat actors are abusing Claude artifacts (AI-generated content shared publicly on claude.ai) and Google Ads to trick macOS users into running malicious commands that install MacSync infostealer malware (software that steals sensitive data like passwords and crypto wallets). Over 10,000 users have viewed these fake guides disguised as legitimate tools like DNS resolvers or HomeBrew package managers.

Fix: Users are advised to exercise caution and avoid running Terminal commands they don't fully understand. Kaspersky researchers note that a straightforward check is to ask the chatbot, in the same conversation, whether the commands it provided are safe.

BleepingComputer

Researchers unearth 30-year-old vulnerability in libpng library

info · news
security
Feb 13, 2026

Researchers discovered a heap buffer overflow (a type of memory corruption flaw where data overflows a temporary memory area) in libpng, a widely-used library for reading and editing PNG image files, that existed for 30 years. The vulnerability in the png_set_quantize function could cause crashes or potentially allow attackers to extract data or execute remote code (run commands on a victim's system), but exploitation requires careful preparation and the flaw is rarely triggered in practice. The flaw affects all libpng versions before 1.6.55.

Fix: The vulnerability is fixed in libpng version 1.6.55.

CSO Online

Battling bots face off in cybersecurity arena

info · news
research · industry
Feb 13, 2026

Wiz created a benchmark suite of 257 real-world cybersecurity challenges across five areas (zero-day discovery, CVE detection, API security, web security, and cloud security) to test which AI agents perform best at cybersecurity tasks. The benchmark runs tests in isolated Docker containers (sandboxed environments that prevent interference with the main system) and scores agents based on their ability to detect vulnerabilities and security issues, with Claude Code performing best overall.

CSO Online

Anthropic taps ex-Microsoft CFO, Trump aide Liddell for board

info · news
industry
Feb 13, 2026

Anthropic, a startup known for developing Claude (an AI assistant), appointed Chris Liddell, a former Microsoft CFO and Trump administration official, to its board of directors. This move may help improve Anthropic's relationship with the Trump administration, which previously criticized the company for its stance on AI regulation.

CNBC Technology

What's behind the mass exodus at xAI?

info · news
industry
Feb 13, 2026

xAI, an AI company founded by Elon Musk, is experiencing significant staff departures, with multiple cofounders (including Yuhuai Wu and Jimmy Ba) announcing they are leaving the company. The departures have reduced the company's original 12 cofounders to only 6 remaining, and several other employees have also announced their exits, with some starting their own AI companies.

The Verge (AI)

AI is indeed coming – but there is also evidence to allay investor fears

info · news
industry
Feb 13, 2026

New AI tools are becoming more powerful, causing investors to worry that AI might eliminate many white-collar jobs (office-based positions requiring advanced skills) or reduce company profits across industries like law, finance, and logistics. However, the article notes that expert opinions are divided about how serious this threat actually is, with some evidence suggesting investor fears may be overstated.

The Guardian Technology

AI Agents 'Swarm,' Security Complexity Follows Suit

info · news
security
Feb 13, 2026

As organizations deploy multiple AI agents (independent AI programs) that work together autonomously, the security risks increase because there are more entry points for attackers to exploit. The complexity of securing these interconnected systems grows along with the number of agents involved.

Dark Reading
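The overflow-error pattern described in the release can be sketched in plain Python. This is an illustration only, not the actual langchain-anthropic code: the class name mirrors the ContextOverflowError the release notes mention, but the token estimator and `check_context` helper are assumptions made for the example.

```python
class ContextOverflowError(Exception):
    """Raised when a prompt exceeds the model's context window."""


def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(text) // 4)


def check_context(prompt: str, context_window: int = 200_000) -> int:
    # Fail fast with a dedicated error type instead of sending an
    # oversized prompt to the model and getting an opaque API error.
    tokens = estimate_tokens(prompt)
    if tokens > context_window:
        raise ContextOverflowError(
            f"prompt is ~{tokens} tokens; limit is {context_window}"
        )
    return tokens
```

A dedicated exception type lets calling code catch overflow specifically (to truncate or summarize the prompt) rather than matching on generic API error strings.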
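The kind of URL cleanup the release describes can be shown with the standard library. This is a hedged sketch of the general technique, not the actual langchain-openai fix: normalize an image URL by dropping its query string and fragment before the URL feeds into token accounting.

```python
from urllib.parse import urlsplit, urlunsplit


def sanitize_image_url(url: str) -> str:
    # Keep scheme, host, and path; discard query string and fragment,
    # which often carry signed tokens that vary per request.
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))
```

Stripping the volatile parts means two fetches of the same image (with different signed query tokens) are counted once instead of twice.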
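ClickFix lures typically rely on a "pipe a download straight into a shell" one-liner. A minimal heuristic for spotting that pattern, written for this digest and not taken from any vendor's tooling, might look like:

```python
import re

# Matches commands that fetch content with curl/wget and pipe it
# directly into a shell (optionally via sudo), e.g.:
#   curl -fsSL https://.../install.sh | bash
RISKY = re.compile(r"(curl|wget)\b[^|;&]*\|\s*(sudo\s+)?(ba|z|)sh\b")


def looks_risky(command: str) -> bool:
    """Return True if the command pipes a download into a shell."""
    return bool(RISKY.search(command))
```

A heuristic like this only flags one lure shape; it is no substitute for the underlying advice of not running commands you don't understand.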
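Since the flaw affects every release before 1.6.55, the exposure check reduces to a dotted-version comparison. A minimal sketch (helper names are ours, not libpng's):

```python
def parse_version(v: str) -> tuple[int, ...]:
    # "1.6.54" -> (1, 6, 54); tuples compare element-wise.
    return tuple(int(part) for part in v.split("."))


def is_vulnerable(version: str, fixed: str = "1.6.55") -> bool:
    """True if the given libpng version predates the 1.6.55 fix."""
    return parse_version(version) < parse_version(fixed)
```

Comparing tuples rather than raw strings avoids the classic pitfall where "1.6.9" sorts after "1.6.55" lexicographically.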
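The growth in entry points is easy to quantify. If all n agents can talk to one another, the number of distinct agent-to-agent channels an attacker could target is n(n − 1)/2, which grows quadratically; a back-of-the-envelope helper:

```python
def pairwise_channels(n_agents: int) -> int:
    # Each unordered pair of agents is one potential channel
    # (and thus one potential attack path): n choose 2.
    return n_agents * (n_agents - 1) // 2
```

Going from 2 agents to 10 takes the channel count from 1 to 45, which is why security complexity outpaces the headcount of agents themselves.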