All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
NiceGUI has a cross-site scripting (XSS) vulnerability where several APIs that run methods on client-side elements use an unsafe `eval()` function (which executes arbitrary code from a string), allowing attackers to inject malicious JavaScript through user input like URL parameters. Additionally, some methods use string interpolation instead of proper escaping, making it easier for attackers to break out of intended contexts and inject code that can steal cookies, manipulate the page, or perform actions as the victim.
Fix: Use `json.dumps()` for proper escaping of method and property names in `run_method()` and `get_computed_prop()`, and remove the `eval()` fallback from the `runMethod()` function in `nicegui.js` so that invalid method names raise an error instead of being executed as code. Code that previously relied on passing JavaScript functions as method names should use `ui.run_javascript()` instead, for example: `row = await ui.run_javascript(f'return getElement({grid.id}).api.getDisplayedRowAtIndex(0).data')` instead of `row = await grid.run_grid_method('g => g.getDisplayedRowAtIndex(0).data')`.
GitHub Advisory DatabaseCursor, an AI coding tool startup, announced updates to its AI agents (software that can complete tasks automatically on a user's behalf) that allow them to test changes, run multiple tasks in parallel on cloud-based virtual machines (remote computers), and work across different platforms like Slack and GitHub. The update aims to help Cursor compete with rivals like OpenAI and Anthropic in the rapidly growing market for AI-powered coding assistants.
OpenAI's COO Brad Lightcap stated that AI has not yet been widely adopted into enterprise business processes at scale, despite powerful AI systems being available to individual users. To address this, OpenAI launched a new platform called OpenAI Frontier, which allows enterprises to build and manage agents (AI systems that can perform tasks autonomously) and helps complex organizations integrate AI into their workflows by measuring success through business outcomes rather than just user seat licenses.
Anthropic announced new partnerships and updates to Claude (its AI assistant), allowing companies to integrate it into enterprise software tools like Slack, Gmail, and Salesforce. This announcement reassured investors that AI won't completely replace existing software systems, causing software and cybersecurity stocks to rebound after recent declines driven by fears that AI tools could disrupt traditional software businesses.
Anthropic announced updates to Claude Cowork, an AI tool that helps with office tasks, allowing it to connect with popular apps like Google Workspace, Docusign, and WordPress through new plug-ins. These plug-ins can automate work across different fields such as HR, design, and finance, and Claude can now handle multi-step tasks across Excel and PowerPoint by passing context between the two applications.
Oura, a health tracking company, released a custom AI model designed specifically for women's health questions, powering its chatbot called Oura Advisor. The model uses established medical research reviewed by doctors and combines it with users' biometric data (measurements like heart rate and sleep patterns) to provide personalized guidance on topics like menstrual cycles and menopause. The company emphasizes the model is hosted on its own servers and designed to be supportive rather than replace actual medical doctors.
Anthropic announced a new enterprise agents program that lets companies deploy pre-built AI agents (software programs that can perform tasks autonomously) to handle common business work like financial research and HR tasks. The program includes a plugin system, pre-made agents for specific departments, and integrations with tools like Gmail and DocuSign, along with controls that corporate IT departments need for managing software safely.
Anthropic has released new connectors and plugins for Claude Cowork, its AI productivity tool for office workers, allowing organizations to integrate it with existing software like Google Drive and Gmail. The update marks Claude Cowork's transition from a research project to an enterprise-grade product, with customizable plugins designed to encode institutional knowledge and workflows across different business domains.
Claude Code is a developer tool created by Anthropic that has unexpectedly become popular with non-developers across various industries who have learned to access their terminal (the text-based interface for giving computer commands) to build projects. The tool has achieved significant product-market fit (strong demand and adoption), though the article questions whether users will eventually move beyond using the terminal interface.
ProducerAI, an AI platform that helps musicians generate sounds, create lyrics, and remix songs using artificial intelligence, is being acquired by Google and will be integrated into Google Labs. The platform will now use Google's new Lyria 3 music-making AI model instead of its original AI system.
New Relic launched a no-code AI agent platform designed specifically for data observability, allowing companies to deploy and manage AI agents that monitor data systems to catch bugs before they cause problems. The platform supports the model context protocol (MCP, a system that connects AI applications to external data sources) and integrates with other New Relic tools. The company also released new tools for OpenTelemetry (OTel, an open-source observability framework that helps track how software performs), allowing enterprises to manage OTel data streams alongside other data sources in a single place to reduce fragmentation problems.
A new supply chain attack called 'Sandworm_Mode' has been discovered in NPM (Node Package Manager, a repository where developers download code libraries). The malicious code spreads automatically like a worm, corrupts AI assistants that might use the infected code, steals sensitive information, and includes a destructive mechanism that can cause damage when activated.
Nimble, a startup that raised $47 million in funding, has developed a platform using AI agents to search the web in real time, validate results, and structure them into organized tables that work like databases. The company addresses a key problem with AI agents: while they can search and analyze web data, they often return plain text results and suffer from hallucinations (when an AI confidently produces false information), making it difficult for enterprises to use web data reliably alongside their existing data systems.
Attackers can hide malicious instructions in GitHub Issues (bug reports or comments on a code repository) that GitHub Copilot (an AI coding assistant) automatically processes when a developer launches a Codespace (a cloud-based development environment) from that issue. This can lead to unauthorized takeover of the repository.
A vulnerability called RoguePilot in GitHub Codespaces allowed attackers to inject hidden malicious instructions into GitHub issues, which GitHub Copilot (an AI code assistant) would automatically execute when a developer opened a Codespace from that issue, potentially leaking the GITHUB_TOKEN (a credential that grants access to repositories). The flaw is an example of prompt injection (tricking an AI by hiding instructions in its input), and attackers could hide their malicious prompts using HTML comments to avoid detection.
Fix: The vulnerability has since been patched by Microsoft following responsible disclosure.
The Hacker NewsMicrosoft is expanding data loss prevention (DLP, rules that block AI from accessing sensitive documents) controls to protect files stored on local devices, not just in cloud storage like SharePoint or OneDrive. The change, rolling out between March and April 2026, will prevent the Microsoft 365 Copilot AI assistant from reading or processing documents marked as confidential. This update addresses a recent bug where Copilot Chat accidentally read confidential emails despite DLP protections being active.
Fix: Microsoft will deploy the enhancement through the Augmentation Loop (AugLoop, an Office component that helps Copilot access documents) between late March and late April 2026. The fix enables Office clients to provide sensitivity labels directly to AugLoop rather than requiring a call to Microsoft Graph using file URLs, allowing DLP enforcement to apply uniformly across all storage locations, including local files. Organizations with DLP policies already configured to block Copilot from processing sensitivity-labeled content will have this protection automatically enabled without requiring administrative action or changes.
BleepingComputerAI agents in enterprises now perform critical operations like provisioning infrastructure and approving transactions, but they are often not governed as distinct identities—instead inheriting broad privileges from their creators. Traditional identity and access management (IAM, the systems that control who can access what) is insufficient because AI agents are dynamic and can take unpredictable paths to achieve their goals, so a new approach called intent-based permissioning is needed, which checks not just who the agent is but why it is requesting access and whether that purpose justifies the action at that moment.
When users send prompts to LLM services like ChatGPT, sensitive personal information (such as names, addresses, or ID numbers) can leak out, even when basic privacy protections are used. This paper presents Rap-LI, a framework that identifies which parts of a user's input contain sensitive data and applies stronger privacy protection to those specific parts, rather than treating all data equally.
Gradient leakage attacks (methods that steal private data by analyzing the mathematical updates sent between computers in federated learning, where AI training happens across multiple devices) pose privacy risks in federated learning systems. Researchers discovered that different layers of neural networks (sections that process information at different stages) leak different amounts of private information, so they created Layer-Specific Gradient Protection (LSGP), which applies stronger privacy protection to layers that leak more sensitive data rather than protecting all layers equally.
Deep neural networks can be attacked through backdoors, where attackers secretly poison training data to make the model misclassify certain inputs while appearing normal otherwise. This paper proposes Cert-SSBD, a defense method that uses randomized smoothing (adding random noise to samples) with sample-specific noise levels, optimized per sample using stochastic gradient ascent, combined with a new certification approach to make models more resistant to these attacks.
Fix: The proposed Cert-SSBD method addresses the issue by employing stochastic gradient ascent to optimize the noise magnitude for each sample, applying this sample-specific noise to multiple poisoned training sets to retrain smoothed models, aggregating predictions from multiple smoothed models, and introducing a storage-update-based certification method that dynamically adjusts each sample's certification region to improve certification performance.
IEEE Xplore (Security & AI Journals)