All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Mistral AI, a French company developing large language models (LLMs, AI systems trained on huge amounts of text data), has acquired Koyeb, a startup that helps developers deploy AI applications without managing server infrastructure (a method called serverless computing). This acquisition allows Mistral to expand beyond just building AI models into offering complete cloud infrastructure services, including helping customers run AI models on their own hardware and optimize performance.
Pterodactyl Panel has a security flaw where SFTP sessions (file transfer connections) stay active even after a user account is deleted or their password is changed, allowing continued access to server files with revoked credentials. This prevents administrators from immediately stopping access when they suspect a security breach, potentially allowing unauthorized people to read, modify, or delete files.
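The underlying defect class (sessions that outlive the credentials that created them) is commonly fixed by tying each live session to a per-user credential version and re-checking it on every operation. The sketch below is purely illustrative, not Pterodactyl's code; the class and method names (`CredentialStore`, `SftpSession`) are hypothetical:

```python
# Illustrative sketch (not Pterodactyl's actual code): tie live SFTP
# sessions to a per-user credential version so that deleting a user or
# changing a password invalidates open sessions on the next check.

class CredentialStore:
    def __init__(self):
        self._versions = {}          # user -> credential version

    def add_user(self, user):
        self._versions[user] = 1

    def change_password(self, user):
        self._versions[user] += 1    # bump version on any credential change

    def delete_user(self, user):
        self._versions.pop(user, None)

    def version(self, user):
        return self._versions.get(user)  # None once the user is deleted


class SftpSession:
    def __init__(self, user, store):
        self.user = user
        self.store = store
        self.version_at_login = store.version(user)

    def is_valid(self):
        # Re-checked before every file operation, not just at login.
        return self.store.version(self.user) == self.version_at_login


store = CredentialStore()
store.add_user("alice")
session = SftpSession("alice", store)
print(session.is_valid())   # True: credentials unchanged

store.change_password("alice")
print(session.is_valid())   # False: session should be terminated
```

The vulnerable behavior corresponds to checking credentials only at login; the fix is the repeated `is_valid()` check before each file operation.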
AI companies are facing a major challenge managing memory (the high-speed storage that holds data a computer needs right now) as they scale up their systems, with DRAM chip prices jumping 7x in the past year. Companies are adopting strategies like prompt caching (temporarily storing input data to reuse it cheaply) to reduce costs, but optimizing memory usage involves complex tradeoffs, such as deciding how long to keep data cached and managing what gets removed when new data arrives. The companies that master memory orchestration (coordinating how data moves through different storage systems) will be able to run queries more efficiently and gain a competitive advantage.
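The two tradeoffs named above (how long to keep data cached, and what gets removed when new data arrives) map to a TTL plus an eviction policy. A minimal sketch, with illustrative names and an LRU policy chosen for the example (the article does not specify which policy providers use):

```python
# Minimal prompt-cache sketch: TTL controls how long entries stay cached,
# and LRU eviction decides what is removed when new data arrives. All
# names are illustrative, not any provider's real API.
import time
from collections import OrderedDict

class PromptCache:
    def __init__(self, max_entries=2, ttl_seconds=300.0):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._entries = OrderedDict()  # prompt prefix -> (result, stored_at)

    def get(self, prefix, now=None):
        now = time.monotonic() if now is None else now
        entry = self._entries.get(prefix)
        if entry is None:
            return None
        result, stored_at = entry
        if now - stored_at > self.ttl:      # expired: drop and report a miss
            del self._entries[prefix]
            return None
        self._entries.move_to_end(prefix)   # mark as recently used
        return result

    def put(self, prefix, result, now=None):
        now = time.monotonic() if now is None else now
        self._entries[prefix] = (result, now)
        self._entries.move_to_end(prefix)
        if len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)  # evict least recently used

cache = PromptCache(max_entries=2, ttl_seconds=300)
cache.put("system-prompt-A", "kv-state-A", now=0)
cache.put("system-prompt-B", "kv-state-B", now=1)
cache.get("system-prompt-A", now=2)                # touch A, so B is now LRU
cache.put("system-prompt-C", "kv-state-C", now=3)  # evicts B
print(cache.get("system-prompt-B", now=4))    # None: evicted
print(cache.get("system-prompt-A", now=400))  # None: expired (TTL is 300)
```

Even this toy version shows the tension the article describes: a longer TTL and larger capacity raise hit rates but hold expensive memory longer.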
OpenClaw had a vulnerability where its hook endpoint (`POST /hooks/agent`) accepted session keys (identifiers for conversation contexts) directly from user requests, allowing someone with a valid hook token to inject messages into any session they could guess or derive. This could poison conversations with malicious prompts that persist across multiple turns. The vulnerability affected versions 2.0.0-beta3 through 2026.2.11.
WordPress.com has added a built-in AI assistant that helps website owners make changes to their sites using natural language commands (instructions written in plain English rather than technical code). The assistant can modify layouts and styles, create or edit images using Google's Gemini AI models, rewrite content, and provide editing suggestions, though it only works with block themes (a modern WordPress design system) and is opt-in unless you use WordPress.com's AI website builder.
Alibaba has released Qwen3.5, a new AI model series that comes in both an open-weight version (downloadable and runnable on users' own computers) and a hosted version (running on Alibaba's servers), featuring improved performance, multimodal capabilities (ability to understand text, images, and video together), and support for AI agents (systems that can independently complete multi-step tasks with minimal human supervision). The release reflects intensifying competition in China's AI market, as multiple Chinese companies are racing to develop agent capabilities similar to those recently released by American AI companies like Anthropic and OpenAI.
Infosys, a major Indian IT services company, has partnered with Anthropic to build AI agents (autonomous systems that can independently handle complex tasks) using Anthropic's Claude models integrated into Infosys's Topaz AI platform. These agents are designed to automate workflows in industries like banking and manufacturing, though the partnership comes amid concerns that AI tools will disrupt India's labor-intensive IT services sector. Infosys is already using Anthropic's Claude Code tool internally to write and test code, with AI services currently generating about $275 million in quarterly revenue for the company.
Cybersecurity researchers discovered a SmartLoader campaign where attackers created fake GitHub accounts and a trojanized Model Context Protocol server (a tool that connects AI assistants to external data and services) posing as an Oura Health tool to distribute StealC infostealer malware. The attackers spent months building credibility by creating fake contributors and repositories before submitting the malicious server to legitimate registries, targeting developers whose systems contain valuable data like API keys and cryptocurrency wallet credentials.
Samsung has been posting videos on YouTube, Instagram, and TikTok that were created or edited using generative AI (software that creates images, video, or text from text descriptions), including promotional videos for its upcoming Galaxy S26 smartphones. The company disclosed the AI usage in fine print at the bottom of some videos, though the AI-generated nature of the content is visually apparent.
Cohere launched Tiny Aya, a family of open-weight (publicly available) multilingual AI models that support over 70 languages and can run on everyday devices like laptops without internet access. The models include regional variants optimized for different language groups, such as South Asian languages like Hindi and Bengali, and are available for developers to download and customize.
Fix: Update to OpenClaw version 2026.2.12 or later. The fix includes: rejecting the `sessionKey` parameter by default unless explicitly enabled with `hooks.allowRequestSessionKey=true`, adding a `hooks.defaultSessionKey` option for fixed routing, and adding `hooks.allowedSessionKeyPrefixes` to restrict which session keys can be used. The recommended secure configuration disables `allowRequestSessionKey`, sets `defaultSessionKey` to "hook:ingress", and restricts prefixes to ["hook:"].
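The routing logic those options imply can be sketched as follows. The config key names (`hooks.allowRequestSessionKey`, `hooks.defaultSessionKey`, `hooks.allowedSessionKeyPrefixes`) and values come from the advisory; the function itself and its flat-dict config format are illustrative, not OpenClaw's source:

```python
# Sketch of the post-fix session-key handling: default-deny the
# caller-supplied key, fall back to a fixed routing key, and even when
# opted in, restrict accepted keys to an allowed prefix.

def resolve_session_key(request_session_key, config):
    allow_request = config.get("hooks.allowRequestSessionKey", False)
    default_key = config.get("hooks.defaultSessionKey", "hook:ingress")
    allowed_prefixes = config.get("hooks.allowedSessionKeyPrefixes", ["hook:"])

    # Default-deny: ignore caller-supplied keys unless explicitly enabled.
    if request_session_key is None or not allow_request:
        return default_key

    # Even when enabled, only accept keys under an allowed prefix, so a
    # hook token cannot inject into arbitrary user sessions.
    if any(request_session_key.startswith(p) for p in allowed_prefixes):
        return request_session_key
    return default_key

secure = {"hooks.allowRequestSessionKey": False}
print(resolve_session_key("user:alice-chat", secure))   # hook:ingress

opted_in = {"hooks.allowRequestSessionKey": True}
print(resolve_session_key("hook:payments", opted_in))   # hook:payments
print(resolve_session_key("user:alice-chat", opted_in)) # hook:ingress
```

With the recommended configuration, every hook message lands in the fixed "hook:ingress" session regardless of what the request claims.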
Source: GitHub Advisory Database

Fix: Organizations should inventory installed MCP servers, establish a formal security review before installation, verify the origin of MCP servers, and monitor for suspicious egress traffic and persistence mechanisms.
Source: The Hacker News

These three research papers describe side-channel attacks (exploiting indirect information leaks like timing or packet sizes rather than breaking encryption directly) against large language models. Attackers can monitor encrypted network traffic and infer sensitive information about user conversations, such as the topic of messages, specific queries, or even personal data, by analyzing patterns in response times, packet sizes, or token counts from the model's inference process.
Fix: The source text proposes several mitigations but notes that none provides complete protection. Specific defenses mentioned include: random padding (adding fake data to obscure patterns), token batching (grouping tokens together before sending), packet injection (inserting extra packets), and iteration-wise token aggregation (combining token counts across processing steps). The papers also note that responsible disclosure and collaboration with LLM providers has led to initial countermeasures being implemented, though the authors conclude that providers need to do more work to fully address these vulnerabilities.
Source: Schneier on Security

The AI Impact Summit in India this week brings together tech leaders, politicians, and scientists to discuss how to guide AI development globally, but the event risks being overshadowed by political tensions and competing interests between Western powers and the Global South. India faces significant challenges in AI adoption, including that major AI chatbots like ChatGPT and Claude don't support most of India's languages, and AI data workers there earn less than £4,000 per year while Western AI companies are valued in the hundreds of billions, creating inequality in how AI benefits are distributed worldwide.
Ireland's Data Protection Commission has launched a formal investigation into X for using its Grok AI tool to generate non-consensual sexual images of real people, including children, and will examine whether the company violated GDPR (General Data Protection Regulation, EU rules protecting personal data) requirements. This investigation joins similar probes by UK and other authorities, with potential fines up to 4% of X's global revenue across all EU member states. The investigation focuses on whether X properly assessed risks and followed data protection principles before deploying Grok.
CISOs (chief information security officers, the top security executives at companies) report that their roles have become unmanageable because companies keep adding responsibilities without giving them more staff or budget. A survey found that 52% of CISOs say their scope is no longer fully manageable, and they now oversee everything from traditional security tasks to AI governance, third-party risk management, and disaster recovery, often with the same teams they had five years ago.
Fix: According to cybersecurity consultant Brian Levine, the solution requires redesigning the role by distributing responsibility across multiple people and giving CISOs the authority to match their accountability. Levine states: "The solution isn't to find superhuman CISOs. It's to redesign the role, distribute responsibility, and give them the authority to match the accountability. Until boards rebalance that equation, CISOs will continue to feel like they're set up to fail."
Source: CSO Online

By late 2025, standard RAG systems (retrieval-augmented generation, where an AI pulls in external documents to answer questions) are failing at high rates, pushing companies toward agentic AI (autonomous systems that can plan and execute tasks independently). While agentic systems solve reliability problems, they create a critical security risk: they can autonomously execute malicious instructions, which threatens enterprise security.
Tech companies are being accused of greenwashing (falsely claiming environmental benefits) by conflating traditional machine learning (a type of AI that learns patterns from data) with energy-intensive generative AI (systems that create new text, images, or video). A report analyzing 154 statements found that most claims about AI helping combat climate change refer to older, less resource-heavy machine learning methods rather than the modern chatbots and image generators that consume massive amounts of electricity in data centers.
TeamT5 ThreatSonar Anti-Ransomware has a vulnerability where it doesn't properly check uploaded files, allowing attackers with admin access to upload malicious files and run dangerous commands on the server. This vulnerability is currently being exploited by real attackers in the wild.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services (a set of government cybersecurity rules), or discontinue use of the product if mitigations are unavailable. The deadline to address this is 2026-03-10.
Source: CISA Known Exploited Vulnerabilities

Google Chromium contains a use-after-free vulnerability (a bug where software tries to access memory that has already been freed, potentially causing crashes or allowing attackers to run malicious code) in its CSS component (cascading style sheets, the code that controls how web pages look) that could let remote attackers corrupt heap memory (a region of computer memory used for dynamic storage) through a specially crafted HTML page. This vulnerability affects multiple browsers built on Chromium, including Chrome, Edge, and Opera, and is currently being actively exploited by attackers.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. Reference the Chrome releases blog at https://chromereleases.googleblog.com/2026/02/stable-channel-update-for-desktop_13.html for specific patching details.
Source: CISA Known Exploited Vulnerabilities

Zimbra Collaboration Suite (ZCS), an email and collaboration platform, has a server-side request forgery vulnerability (SSRF, where an attacker tricks the server into making unauthorized requests to internal systems) if the WebEx zimlet, a plugin that adds functionality, is installed and zimlet JSP (Java Server Pages, a way to generate dynamic web content) is enabled. This vulnerability is currently being exploited by attackers in real-world attacks.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
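Beyond the vendor patch, a common generic SSRF defense (separate from Zimbra's actual fix, and sketched here only to illustrate the vulnerability class) is to resolve any user-influenced URL before fetching it and refuse private, loopback, or link-local addresses:

```python
# Generic SSRF guard sketch: resolve the target host and reject
# addresses that would let a request reach internal systems.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url):
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False  # unresolvable hosts are rejected, not fetched
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

print(is_safe_url("http://127.0.0.1:7071/admin"))  # False: loopback
print(is_safe_url("http://10.0.0.5/metadata"))     # False: private range
```

Note this sketch checks a single resolution; production guards must also pin the resolved address for the actual request to avoid DNS-rebinding races.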
Source: CISA Known Exploited Vulnerabilities

Microsoft Windows Video ActiveX Control (a reusable software component for video handling) contains a remote code execution vulnerability (a flaw that lets attackers run commands on a victim's computer without permission). An attacker can exploit this by tricking a user into viewing a malicious webpage, which could then execute code with the same permissions as the logged-in user. This vulnerability is currently being exploited by attackers in the wild.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
Source: CISA Known Exploited Vulnerabilities