aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher.

AI Sec Watch: the security intelligence platform for AI teams.

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,650 · Last 24 hours: 1 · Last 7 days: 155
Daily Briefing · Sunday, March 29, 2026

Bluesky Launches AI-Powered Feed Customization Tool: Bluesky released Attie, an AI assistant that lets users create custom content feeds by describing what they want in plain language rather than adjusting technical settings. The tool runs on Claude (Anthropic's language model) and will integrate into apps built on Bluesky's AT Protocol.

Latest Intel

01

CVE-2025-15060: claude-hovercraft executeClaudeCode Command Injection Remote Code Execution Vulnerability

security
Mar 16, 2026

CVE-2025-15060 is a remote code execution vulnerability in claude-hovercraft that allows attackers to run arbitrary code without needing to log in. The flaw exists in the executeClaudeCode method, which fails to properly validate user input before using it in a system call (a request to run operating system commands), allowing attackers to inject malicious commands.
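The failure pattern described above can be sketched in a few lines. This is an illustrative example, not the actual claude-hovercraft code: the vulnerable function name and command are stand-ins showing how an unvalidated string reaching a shell call enables injection, and the usual mitigations.

```python
import shlex
import subprocess

def run_unsafe(user_input: str) -> None:
    # VULNERABLE pattern (never called here): shell=True interpolates the
    # attacker-controlled string, so input like "foo; rm -rf /" appends
    # extra shell commands to the intended one.
    subprocess.run(f"some-cli {user_input}", shell=True)

def build_safe_argv(user_input: str) -> list[str]:
    # Mitigation: pass arguments as a list with no shell, so the input is
    # always a single argument and metacharacters lose their meaning.
    return ["some-cli", user_input]

# If a shell is unavoidable, shlex.quote() forces the input into one token:
payload = "report.txt; cat /etc/passwd"
print(shlex.quote(payload))  # the whole payload becomes one quoted argument
```

The list-argv form is generally preferred over quoting, since it removes the shell from the path entirely rather than trying to sanitize input for it.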

Critical This Week (5 issues)

critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis…

NVD/CVE Database · Mar 27, 2026
02

CVE-2025-14287: A command injection vulnerability exists in mlflow/mlflow versions before v3.7.0, specifically in the `mlflow/sagemaker` module.

security
Mar 16, 2026

MLflow versions before v3.7.0 contain a command injection vulnerability (a flaw where attackers insert malicious commands into input that gets executed) in the sagemaker module. An attacker can exploit this by passing a malicious container image name through the `--container` parameter, which the software unsafely inserts into shell commands and runs, allowing arbitrary command execution on affected systems.

Fix: Update MLflow to version v3.7.0 or later.
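The class of check whose absence enabled this bug is input validation before shell use. A minimal sketch, assuming a strict allowlist pattern for container image names; the regex here is an illustration, not MLflow's actual fix:

```python
import re

# Assumed-safe shape for an image reference: lowercase repository path
# segments separated by . _ / -, with an optional tag after a colon.
IMAGE_NAME_RE = re.compile(r"^[a-z0-9]+([._/-][a-z0-9]+)*(:[A-Za-z0-9._-]+)?$")

def is_safe_image_name(name: str) -> bool:
    """Reject anything containing shell metacharacters, spaces, etc."""
    return bool(IMAGE_NAME_RE.fullmatch(name))

print(is_safe_image_name("mlflow/sagemaker-serving:2.1"))  # accepted
print(is_safe_image_name("img; curl evil.sh | sh"))        # rejected
```

Validating against a known-good grammar (default deny) is more robust than trying to strip out known-bad characters.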

NVD/CVE Database
03

⚡ Weekly Recap: Chrome 0-Days, Router Botnets, AWS Breach, Rogue AI Agents & More

security
Mar 16, 2026

This week's security news includes Google patching two actively exploited Chrome vulnerabilities in the graphics and JavaScript engines that could allow code execution, Meta discontinuing encrypted messaging on Instagram, and law enforcement disrupting botnets (malware networks that hijack routers) like SocksEscort and KadNap that were being used for fraud and illegal proxy services. A threat actor also exploited a compromised npm package (a JavaScript code library) to breach an AWS cloud environment and steal data.

Fix: Google addressed the Chrome vulnerabilities in versions 146.0.7680.75/76 for Windows and macOS, and 146.0.7680.75 for Linux.

The Hacker News
04

Shadow AI is everywhere. Here’s how to find and secure it.

security · policy
Mar 16, 2026

Shadow AI refers to AI tools used throughout an organization without IT oversight or approval, creating security and governance challenges. The source describes Nudge Security as a platform that addresses this by providing continuous discovery of AI apps and user accounts, monitoring for sensitive data sharing in AI conversations, and tracking which AI tools have access to company data through integrations.

Fix: According to the source, Nudge Security delivers mitigation through: (1) a lightweight IdP (identity provider, the system that manages user identities) integration with Microsoft 365 or Google Workspace that takes less than 5 minutes to enable, which analyzes machine-generated emails to detect new AI accounts and tool adoption; (2) a browser extension for real-time monitoring of risky behaviors and alerts when sensitive data (PII, secrets, financial info) is shared with AI tools; (3) tracking of SaaS-to-AI integrations and their access scopes; and (4) configurable alerts for new AI tools or policy violations.
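The email-analysis idea in step (1) can be illustrated with a toy scanner. The domain list, field names, and matching logic below are assumptions for demonstration only; a real product like the one described uses far richer signals than a welcome-email keyword.

```python
# Hypothetical list of AI-service sender domains to watch for.
AI_SERVICE_DOMAINS = {"openai.com", "anthropic.com", "midjourney.com"}

def detect_ai_signups(messages: list[dict]) -> list[str]:
    """Return sender domains whose messages look like AI-tool welcome emails."""
    hits = []
    for msg in messages:
        domain = msg["from"].rsplit("@", 1)[-1].lower()
        if domain in AI_SERVICE_DOMAINS and "welcome" in msg["subject"].lower():
            hits.append(domain)
    return hits

inbox = [
    {"from": "no-reply@openai.com", "subject": "Welcome to ChatGPT"},
    {"from": "billing@example.com", "subject": "Invoice #42"},
]
print(detect_ai_signups(inbox))  # ['openai.com']
```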

BleepingComputer
05

From Promise to Peril: Rethinking Cybersecurity Red and Blue Teaming in the Age of LLMs

research · security
Mar 16, 2026

This article examines how large language models (AI systems trained on huge amounts of text data) can be used in cybersecurity red teaming (simulated attacks to test defenses) and blue teaming (defensive security work), mapping their abilities to established security frameworks. However, LLMs struggle in difficult, real-world situations because they have limitations like hallucinations (generating false information confidently), poor memory of long conversations, and gaps in logical reasoning.

IEEE Xplore (Security & AI Journals)
06

Nurturing agentic AI beyond the toddler stage

safety · policy
Mar 16, 2026

Autonomous AI agents (AI systems that operate independently to complete complex tasks with minimal human oversight) have advanced rapidly, creating new governance challenges because they can operate at machine speed without humans in the loop to approve each decision. Unlike traditional chatbots where humans reviewed outputs before consequential actions, agents now directly modify enterprise systems and data, making organizations legally liable for any harm caused (similar to how parents are responsible for their children's actions). Without building governance rules directly into the code that controls these agents' permissions and actions, organizations face significant risks from drift (where agents behave differently than intended) and unauthorized access to critical systems.
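"Building governance rules directly into the code" can be as simple as a default-deny policy gate that every agent action must pass through. A minimal sketch; the action names and policy sets are invented for illustration:

```python
# Explicit allowlist: low-risk actions the agent may take autonomously.
ALLOWED_ACTIONS = {"read_record", "draft_email"}
# High-impact actions that require a human in the loop before execution.
REQUIRES_HUMAN = {"delete_record", "transfer_funds"}

def gate(action: str) -> str:
    """Decide whether an agent-requested action runs, escalates, or is blocked."""
    if action in ALLOWED_ACTIONS:
        return "allow"
    if action in REQUIRES_HUMAN:
        return "escalate"  # queue for human approval instead of running
    return "deny"          # default-deny catches drift into novel behavior

print(gate("read_record"))     # allow
print(gate("transfer_funds"))  # escalate
print(gate("drop_database"))   # deny
```

The default-deny branch is the governance point: an agent that drifts into actions nobody enumerated is stopped rather than silently permitted.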

MIT Technology Review
07

Why Security Validation Is Becoming Agentic

security · industry
Mar 16, 2026

Organizations typically use separate security tools (BAS tools, pentesting products, vulnerability scanners) that don't communicate with each other, creating blind spots because attackers chain multiple vulnerabilities together in coordinated operations. The article proposes that agentic AI (autonomous AI agents that can plan, execute, and reason through complex tasks without human direction at each step) should be applied to security validation to create a unified, continuous system that combines adversarial perspective (how attackers get in), defensive perspective (whether defenses stop them), and risk perspective (which exposures actually matter).

The Hacker News
08

Open VSX extensions hijacked: GlassWorm malware spreads via dependency abuse

security
Mar 16, 2026

Threat actors are spreading GlassWorm malware through Open VSX extensions (add-ons for the VS Code editor) by abusing dependency relationships, a feature that automatically installs other extensions when one is installed. Instead of hiding malware in every extension, attackers create legitimate-looking extensions that gain user trust, then update them to depend on separate extensions containing the malware loader, making the attack harder to detect.

Fix: As of March 13, Open VSX has removed the majority of the transitively malicious extensions. Socket researchers recommend treating extension dependencies with the same scrutiny typically applied to software packages, monitoring extension updates, auditing dependency relationships, and restricting installation to trusted publishers where possible.
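One of the recommended steps, auditing dependency relationships, is mechanically simple: VS Code and Open VSX extensions declare dependencies in their package.json under `extensionDependencies`, as `publisher.name` identifiers. A sketch that flags dependencies from untrusted publishers; the allowlist is an assumption for illustration:

```python
import json

TRUSTED_PUBLISHERS = {"ms-python", "redhat"}  # example allowlist

def audit_manifest(manifest_json: str) -> list[str]:
    """Return declared extension dependencies whose publisher is not allowlisted."""
    manifest = json.loads(manifest_json)
    flagged = []
    for dep in manifest.get("extensionDependencies", []):
        publisher = dep.split(".", 1)[0]  # IDs look like "publisher.name"
        if publisher not in TRUSTED_PUBLISHERS:
            flagged.append(dep)
    return flagged

manifest = '{"name": "helper", "extensionDependencies": ["ms-python.python", "shady-pub.loader"]}'
print(audit_manifest(manifest))  # ['shady-pub.loader']
```

Because the GlassWorm campaign put the loader in a *dependency* rather than the installed extension itself, this kind of transitive check catches what a scan of the top-level extension would miss.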

CSO Online
09

OpenAI’s adult mode will reportedly be smutty, not pornographic

safety
Mar 16, 2026

OpenAI is developing an "adult mode" for ChatGPT that will allow users to generate text conversations with adult themes, described as "smut" rather than pornography. The feature will initially support only text and will not generate images, voice, or video content. OpenAI claims to have reduced "serious mental health issues" in its AI model enough to safely relax safety restrictions (the guardrails that prevent the AI from producing certain types of content) for this feature.

The Verge (AI)
10

GenAI Security as a Checklist

policy · security
Mar 15, 2026

OWASP, a nonprofit cybersecurity organization, has published a checklist to help companies secure their use of generative AI and LLMs (large language models, which are AI systems trained on massive amounts of text to understand and generate human language). The checklist covers areas including understanding competitive and adversarial risks, threat modeling (identifying how attackers might exploit AI systems), maintaining an inventory of AI tools and assets, and ensuring proper governance and security controls are in place.

CSO Online
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026

critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CISA Known Exploited Vulnerabilities · Mar 26, 2026

critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputer · Mar 26, 2026

critical

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE

CVE-2026-33696 · GitHub Advisory Database · Mar 26, 2026