aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1245 items

Mistral AI buys Koyeb in first acquisition to back its cloud ambitions

info · news
industry
Feb 17, 2026

Mistral AI, a French company developing large language models (LLMs, AI systems trained on huge amounts of text data), has acquired Koyeb, a startup that helps developers deploy AI applications without managing server infrastructure (a method called serverless computing). This acquisition allows Mistral to expand beyond just building AI models into offering complete cloud infrastructure services, including helping customers run AI models on their own hardware and optimize performance.

TechCrunch

Running AI models is turning into a memory game

info · news
industry
Feb 17, 2026

AI companies are facing a major challenge managing memory (the high-speed storage that holds data a computer needs right now) as they scale up their systems, with DRAM chip prices jumping 7x in the past year. Companies are adopting strategies like prompt caching (temporarily storing input data to reuse it cheaply) to reduce costs, but optimizing memory usage involves complex tradeoffs, such as deciding how long to keep data cached and managing what gets removed when new data arrives. The companies that master memory orchestration (coordinating how data moves through different storage systems) will be able to run queries more efficiently and gain a competitive advantage.

TechCrunch
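The tradeoffs described above — how long to keep cached data and what to evict when new data arrives — can be sketched as a tiny prompt-prefix cache with a time-to-live and an LRU eviction policy. This is a toy illustration, not any provider's actual implementation; the class and parameter names are invented.

```python
import time
from collections import OrderedDict

class PromptCache:
    """Toy prompt-prefix cache: illustrates the two knobs in the article,
    retention time (ttl_seconds) and eviction when full (LRU)."""

    def __init__(self, max_entries: int = 128, ttl_seconds: float = 300.0):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._store = OrderedDict()  # prefix -> (cached_state, stored_at)

    def get(self, prefix: str):
        entry = self._store.get(prefix)
        if entry is None:
            return None
        state, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[prefix]          # expired: drop and recompute
            return None
        self._store.move_to_end(prefix)      # mark as recently used
        return state

    def put(self, prefix: str, state) -> None:
        if prefix in self._store:
            self._store.move_to_end(prefix)
        self._store[prefix] = (state, time.monotonic())
        while len(self._store) > self.max_entries:
            self._store.popitem(last=False)  # evict least-recently-used
```

Raising `ttl_seconds` or `max_entries` saves recomputation at the cost of holding more DRAM — the same tension the article describes at datacenter scale.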

WordPress.com adds an AI Assistant that can edit, adjust styles, create images, and more

info · news
industry
Feb 17, 2026

WordPress.com has added a built-in AI assistant that helps website owners make changes to their sites using natural language commands (instructions written in plain English rather than technical code). The assistant can modify layouts and styles, create or edit images using Google's Gemini AI models, rewrite content, and provide editing suggestions, though it only works with block themes (a modern WordPress design system) and is opt-in unless you use WordPress.com's AI website builder.

TechCrunch

Alibaba unveils Qwen3.5 as China’s chatbot race shifts to AI agents

info · news
industry
Feb 17, 2026

Alibaba has released Qwen3.5, a new AI model series that comes in both an open-weight version (downloadable and runnable on users' own computers) and a hosted version (running on Alibaba's servers), featuring improved performance, multimodal capabilities (the ability to understand text, images, and video together), and support for AI agents (systems that can independently complete multi-step tasks with minimal human supervision). The release reflects intensifying competition in China's AI market, as multiple Chinese companies race to develop agent capabilities similar to those recently released by American AI companies like Anthropic and OpenAI.

CNBC Technology

As AI jitters rattle IT stocks, Infosys partners with Anthropic to build ‘enterprise-grade’ AI agents

info · news
industry
Feb 17, 2026

Infosys, a major Indian IT services company, has partnered with Anthropic to build AI agents (autonomous systems that can independently handle complex tasks) using Anthropic's Claude models integrated into Infosys's Topaz AI platform. These agents are designed to automate workflows in industries like banking and manufacturing, though the partnership comes amid concerns that AI tools will disrupt India's labor-intensive IT services sector. Infosys is already using Anthropic's Claude Code tool internally to write and test code, with AI services currently generating about $275 million in quarterly revenue for the company.

TechCrunch

SmartLoader Attack Uses Trojanized Oura MCP Server to Deploy StealC Infostealer

high · news
security
Feb 17, 2026

Cybersecurity researchers discovered a SmartLoader campaign where attackers created fake GitHub accounts and a trojanized Model Context Protocol server (a tool that connects AI assistants to external data and services) posing as an Oura Health tool to distribute StealC infostealer malware. The attackers spent months building credibility by creating fake contributors and repositories before submitting the malicious server to legitimate registries, targeting developers whose systems contain valuable data like API keys and cryptocurrency wallet credentials.
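One practical defense against this kind of trojanized-package campaign is to pin each reviewed release of a community tool to a checksum and refuse to install anything that doesn't match. A minimal Python sketch — the artifact name below is hypothetical, and the digest shown is just a placeholder (it happens to be the SHA-256 of an empty file):

```python
import hashlib

# Hypothetical allowlist: maps an MCP server artifact you have reviewed
# to the SHA-256 digest of that exact release. Name is invented; the
# digest is a placeholder (the SHA-256 of an empty file).
PINNED_DIGESTS = {
    "oura-mcp-server-1.2.0.tar.gz":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large archives aren't read into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(name: str, path: str) -> bool:
    """Refuse anything not on the reviewed allowlist, or whose
    digest doesn't match the pinned value."""
    expected = PINNED_DIGESTS.get(name)
    return expected is not None and sha256_of(path) == expected
```

Checksum pinning doesn't replace reviewing the server's code, but it blocks the swap of a reviewed release for a trojanized one — the exact move this campaign relied on.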

Fix: Organizations are advised to inventory installed MCP servers, establish a formal security review before installation, verify the origin of MCP servers, and monitor for suspicious egress traffic and persistence mechanisms.

The Hacker News

Side-Channel Attacks Against LLMs

info · news
security · research
Feb 17, 2026

Three research papers describe side-channel attacks (exploiting indirect information leaks like timing or packet sizes rather than breaking encryption directly) against large language models. By monitoring encrypted network traffic, attackers can infer sensitive information about user conversations, such as the topic of messages, specific queries, or even personal data, by analyzing patterns in response times, packet sizes, or token counts from the model's inference process.

Fix: The papers propose several mitigations but note that none provides complete protection: random padding (adding fake data to obscure patterns), token batching (grouping tokens together before sending), packet injection (inserting extra packets), and iteration-wise token aggregation (combining token counts across processing steps). Responsible disclosure and collaboration with LLM providers have led to initial countermeasures, though the authors conclude that providers need to do more to fully address these vulnerabilities.

Schneier on Security

Could Bill Gates and political tussles overshadow AI safety debate in Delhi?

info · news
policy · industry
Feb 17, 2026

The AI Impact Summit in India this week brings together tech leaders, politicians, and scientists to discuss how to guide AI development globally, but the event risks being overshadowed by political tensions and competing interests between Western powers and the Global South. India faces significant challenges in AI adoption: major AI chatbots like ChatGPT and Claude don't support most of India's languages, and AI data workers there earn less than £4,000 per year while Western AI companies are valued in the hundreds of billions, creating inequality in how AI's benefits are distributed worldwide.

BBC Technology

Samsung is slopping AI ads all over its social channels

info · news
industry
Feb 17, 2026

Samsung has been posting videos on YouTube, Instagram, and TikTok that were created or edited using generative AI (software that creates images, video, or text from text descriptions), including promotional videos for its upcoming Galaxy S26 smartphones. The company disclosed the AI usage in fine print at the bottom of some videos, though the AI-generated nature of the content is visually apparent.

The Verge (AI)

Ireland now also investigating X over Grok-made sexual images

info · news
safety · policy
Feb 17, 2026

Ireland's Data Protection Commission has launched a formal investigation into X for using its Grok AI tool to generate non-consensual sexual images of real people, including children, and will examine whether the company violated GDPR (General Data Protection Regulation, EU rules protecting personal data) requirements. The investigation joins similar probes by UK and other authorities, with potential fines of up to 4% of X's global revenue across all EU member states, and focuses on whether X properly assessed risks and followed data protection principles before deploying Grok.

BleepingComputer

With CISOs stretched thin, re-envisioning enterprise risk may be the only fix

info · news
policy · industry
Feb 17, 2026

CISOs (chief information security officers, the top security executives at companies) report that their roles have become unmanageable because companies keep adding responsibilities without providing more staff or budget. A survey found that 52% of CISOs say their scope is no longer fully manageable; they now oversee everything from traditional security tasks to AI governance, third-party risk management, and disaster recovery, often with the same teams they had five years ago.

Fix: According to cybersecurity consultant Brian Levine, the role itself must be redesigned: 'The solution isn't to find superhuman CISOs. It's to redesign the role, distribute responsibility, and give them the authority to match the accountability. Until boards rebalance that equation, CISOs will continue to feel like they're set up to fail.'

CSO Online

Why 2025's agentic AI boom is a CISO's worst nightmare

info · news
security · safety
Feb 17, 2026

By late 2025, standard RAG systems (retrieval-augmented generation, where an AI pulls in external documents to answer questions) were failing at high rates, pushing companies toward agentic AI (autonomous systems that can plan and execute tasks independently). While agentic systems solve those reliability problems, they create a critical security risk: they can autonomously execute malicious instructions, which threatens enterprise security.

CSO Online

Cohere launches a family of open multilingual models

info · news
industry
Feb 17, 2026

Cohere launched Tiny Aya, a family of open-weight (publicly available) multilingual AI models that support over 70 languages and can run on everyday devices like laptops without internet access. The models include regional variants optimized for different language groups, such as South Asian languages like Hindi and Bengali, and are available for developers to download and customize.

TechCrunch

Claims that AI can help fix climate dismissed as greenwashing

info · news
policy · industry
Feb 17, 2026

Tech companies are being accused of greenwashing (falsely claiming environmental benefits) by conflating traditional machine learning (a type of AI that learns patterns from data) with energy-intensive generative AI (systems that create new text, images, or video). A report analyzing 154 statements found that most claims about AI helping combat climate change refer to older, less resource-heavy machine learning methods rather than the modern chatbots and image generators that consume massive amounts of electricity in data centers.

The Guardian Technology

What CISOs should know about OpenClaw

high · news
security · safety
Feb 16, 2026

OpenClaw is a popular open-source tool that orchestrates AI agents (programs that can act independently across devices and trigger workflows) and can interact with online services and chat apps, but security researchers warn it poses serious risks because these agents can perform any action a user can perform while being controlled externally. Early versions were insecure by default, and over 42,000 exposed instances have been found online with critical authentication-bypass vulnerabilities (flaws that let attackers skip login checks), creating risks including data theft, unauthorized access, and potential exposure of confidential business information.

CSO Online

Open source maintainers being targeted by AI agents as part of 'reputation farming'

medium · news
security · policy
Feb 16, 2026

AI agents are being used to submit large numbers of pull requests (code contributions) to open-source projects to build fake reputation quickly, a tactic called 'reputation farming.' This is concerning because it could eventually help attackers gain trust in important software projects and inject malicious code through supply chain attacks (attacks targeting the software that other programs depend on), something that normally takes years to accomplish but could now happen much faster.

CSO Online

Infostealer Steals OpenClaw AI Agent Configuration Files and Gateway Tokens

high · news
security · privacy
Feb 16, 2026

Researchers discovered that an information stealer (malware that secretly copies sensitive files) infected a victim and stole OpenClaw AI agent configuration files, including gateway tokens (authentication credentials), cryptographic keys, and the agent's operational guidelines. This marks a shift in malware tactics from stealing browser passwords to targeting AI agents; attackers could use stolen tokens to impersonate victims or access their local AI systems if ports are exposed.

Fix: OpenClaw maintainers announced a partnership with VirusTotal to scan for malicious skills (plugins) uploaded to ClawHub, establish a threat model, and add the ability to audit for potential misconfigurations.

The Hacker News

Infostealer malware found stealing OpenClaw secrets for first time

high · news
security · privacy
Feb 16, 2026

Infostealer malware (malware designed to steal sensitive files and credentials) has been spotted for the first time stealing configuration files from OpenClaw, a local AI agent framework that manages tasks and accesses online services on a user's machine. The stolen files contain API keys, authentication tokens, and other secrets that could allow attackers to impersonate users and access their cloud services and personal data.

Fix: For nanobot (a similar AI assistant framework), the development team released fixes for a max-severity vulnerability tracked as CVE-2026-2577 in version 0.13.post7. No mitigation or update is mentioned in the source for OpenClaw itself.

BleepingComputer

AI chatbot firms face stricter regulation in online safety laws protecting children in the UK

info · regulatory
policy · safety
Feb 16, 2026

The UK government is closing a legal gap by bringing AI chatbots like ChatGPT, Gemini, and Copilot under its Online Safety Act, requiring them to remove illegal content or face fines and blocking. The move follows criticism of X's Grok chatbot for spreading sexually explicit images, and reflects broader efforts to protect children from harmful online content through new rules on age limits, infinite scrolling, and VPN access.

CNBC Technology

Rodney and Claude Code for Desktop

info · news
industry
Feb 16, 2026

Claude Code for Desktop is Anthropic's cloud-based AI coding tool that runs in a container environment (an isolated computing space), accessible through native iPhone and Mac apps. The desktop app lets users see the images Claude is analyzing through a Read /path/to/image tool, providing real-time visual previews of what the AI is working on. The iPhone app currently lacks this image-display feature, though the author has requested it.

Simon Willison's Weblog