aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3115 items

The Download: The Pentagon’s new AI plans, and next-gen nuclear reactors

info · news
security · policy
Mar 18, 2026

The Pentagon is planning to create secure environments where AI companies can train their models on classified military data, embedding sensitive intelligence such as surveillance reports into the AI systems themselves and bringing these companies closer to classified information than ever before. This represents a major shift from the current practice of using AI models like Claude in classified settings, and it introduces unique security risks by allowing models to learn from, rather than merely access, classified data.

MIT Technology Review

DLSS 5: Has Nvidia’s AI graphics technology gone too far?

info · news
industry
Mar 18, 2026

Nvidia has released DLSS 5, a new 3D guided neural rendering model (an AI system that generates realistic graphics in real-time) that can alter a game's lighting and materials during gameplay. Many gamers have criticized the technology for changing how games look in ways they didn't expect, with complaints that it distorts character appearances and doesn't respect the original artists' creative vision.

Reco targets AI agent blind spots with new security capability

info · news
security · industry

Claude Code Security and Magecart: Getting the Threat Model Right

info · news
security
Mar 18, 2026

Magecart attacks (malicious code injected into e-commerce sites to steal payment data) often hide in third-party resources such as images or scripts that never enter a company's code repository, making them invisible to static analysis tools like Claude Code Security. Because Claude Code Security is designed to scan code you own, it cannot detect malicious code injected at runtime through compromised external libraries, CDNs (content delivery networks that distribute files globally), or data hidden in binary files like favicons. Repository-based scanning therefore has a fundamental blind spot for this attack class.
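The blind spot is easy to see in miniature: a repository scan only ever inspects first-party sources, while the page a customer's browser actually renders may pull scripts from hosts the repo never mentions. A minimal sketch of a runtime-side complement, comparing a rendered page's external script hosts against an allowlist (the function names and the allowlist are illustrative, not part of any product mentioned here):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptSrcCollector(HTMLParser):
    """Collect the hosts of all external <script src=...> tags in a page."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src") or ""
            host = urlparse(src).netloc
            if host:  # relative srcs (first-party) have no netloc and are skipped
                self.hosts.add(host)

def unexpected_script_hosts(rendered_html: str, allowlist: set) -> set:
    """Flag script hosts in the *live* page that a repo scan never saw."""
    parser = ScriptSrcCollector()
    parser.feed(rendered_html)
    return parser.hosts - allowlist
```

A skimmer injected via a compromised CDN would surface here as an unexpected host, even though no line of it ever appears in the repository that a static scanner reads.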

Micron rides memory price spike into earnings with stock up 62%, drubbing its tech peers

info · news
industry
Mar 18, 2026

Micron Technology's stock has surged 62% in 2026 due to a severe shortage of memory chips (computer components that store data temporarily) needed for AI graphics processing units (GPUs, specialized chips that power artificial intelligence). The shortage is driven by massive demand from cloud companies like Amazon and Google building AI data centers, and SK Hynix estimates the memory crunch will continue for another four to five years, pushing prices higher across the industry.

We asked experts about the most responsible ways to use AI tools – here’s what they said

info · news
safety
Mar 18, 2026

The article discusses expert advice on responsible AI tool use, emphasizing that people should use AI as a brainstorming partner and for organizing information, but should not let it replace their own decision-making. A 2025 survey shows that one-third of US adults use ChatGPT, with particularly high adoption among people under 30.

Can you prove the person on the other side is real?

info · news
security · safety

China’s ‘AI tigers’ see shares surge after Nvidia CEO touts OpenClaw as ‘next ChatGPT’

info · news
industry
Mar 18, 2026

Chinese AI companies saw significant stock gains after Nvidia CEO Jensen Huang praised OpenClaw, an open-source AI agent (a program that can perform tasks independently), as "the next ChatGPT." Companies like MiniMax and Zhipu, which are among China's leading AI developers building large language models (AI systems trained on huge amounts of text to understand and generate language), have integrated OpenClaw into their products and are launching their own versions based on it.

CISOs rethink their data protection strategies

info · news
security · policy

Meta's Manus launches desktop app to bring its AI agent onto personal devices amid OpenClaw craze

info · news
industry · safety

OWASP GenAI Security Project Expands AI Security Frameworks Ahead of RSA 2026, Celebrates Continued Sponsor Support

info · research · industry
security

Survey on Learning-based Dynamic Fault Localization: From Traditional Machine Learning to Large Language Models

info · research · peer-reviewed
research

The Best Hacker Movies ("Die besten Hacker-Filme")

info · news
security
Mar 18, 2026

This is a curated list of hacker-themed films arranged chronologically, from WarGames (1983) to Live Free or Die Hard (2007), intended for security professionals who enjoy cinema. The article provides plot summaries, genres, and review scores from multiple sources for each film, along with a tongue-in-cheek warning that the list may cause procrastination.

CVE-2025-66376: Synacor Zimbra Collaboration Suite (ZCS) Cross-Site Scripting Vulnerability

info · vulnerability
security
Mar 17, 2026
CVE-2025-66376 · Actively Exploited

CVE-2026-20963: Microsoft SharePoint Deserialization of Untrusted Data Vulnerability

info · vulnerability
security
Mar 17, 2026
CVE-2026-20963

Microsoft SharePoint has a deserialization of untrusted data vulnerability (a flaw where the software unsafely processes data from untrusted sources, allowing attackers to inject malicious code). An unauthorized attacker can exploit this over a network to execute code on affected systems. This vulnerability is currently being actively exploited by real attackers.
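SharePoint's flaw lives in .NET serialization, but the bug class is language-agnostic. Python's `pickle` shows in a few lines why deserializing untrusted bytes amounts to code execution (the `Payload` class below is a deliberately benign stand-in for an attacker-crafted object; it is illustrative, not related to the SharePoint exploit itself):

```python
import pickle

class Payload:
    """Unpickling this object invokes an attacker-chosen callable."""
    def __reduce__(self):
        # A real exploit would return something like (os.system, ("...",));
        # this harmless version just calls list(("pwned",)).
        return (list, (("pwned",),))

attacker_bytes = pickle.dumps(Payload())
# The deserializer itself runs the attacker's callable:
result = pickle.loads(attacker_bytes)
```

The safe pattern is to accept only data-only formats such as JSON from untrusted peers: `json.loads` can build lists and dicts but can never invoke a callable.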

Nvidia CEO Jensen Huang says OpenClaw is 'definitely the next ChatGPT'

info · news
industry
Mar 17, 2026

Nvidia CEO Jensen Huang highlighted OpenClaw, an open-source autonomous AI agent platform (a system that can complete tasks and make decisions with minimal human input, unlike traditional chatbots), calling it "the next ChatGPT" and a major breakthrough in AI interaction. Nvidia launched NemoClaw, an enterprise version of OpenClaw that adds security, scalability, and oversight tools to make these autonomous agents safe for real-world business use, addressing concerns about security, privacy, and control as these systems gain the ability to act independently.

The Pentagon is planning for AI companies to train on classified data, defense official says

info · news
policy · security

OpenAI preps for IPO by end of year, tells employees ChatGPT must be 'productivity tool'

info · news
industry
Mar 17, 2026

OpenAI is preparing for an initial public offering (IPO, where a private company sells shares to the public) potentially by the end of 2026, with leadership telling employees that ChatGPT must focus on being a productivity tool for businesses. The company is shifting strategy to convert its 900 million weekly users into enterprise customers and has scaled back its infrastructure spending targets from $1.4 trillion to $600 billion by 2030 to present a more realistic financial picture to investors.

GHSA-2cpp-j2fc-qhp7: AWS API MCP File Access Restriction Bypass

medium · vulnerability
security
Mar 17, 2026
CVE-2026-4270

The AWS API MCP Server (a tool that lets AI assistants interact with AWS services) has a vulnerability in versions 0.2.14 through 1.3.8 where attackers can bypass file access restrictions and read files they shouldn't be able to access, even when the server is configured to block file operations or limit them to a specific directory.
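The advisory summary does not spell out the bypass mechanics, but the standard defense for this bug class is to canonicalize the path *after* joining it to the base directory and then check containment. A hedged sketch of that pattern (not the MCP server's actual code; `safe_resolve` is an invented name):

```python
from pathlib import Path

def safe_resolve(base_dir: str, requested: str) -> Path:
    """Resolve a user-supplied path and refuse anything outside base_dir.

    Resolving after joining defeats '../' sequences, absolute-path
    arguments, and symlink tricks -- the usual ways file-access
    restrictions get bypassed. Requires Python 3.9+ (is_relative_to).
    """
    base = Path(base_dir).resolve()
    target = (base / requested).resolve()
    if not target.is_relative_to(base):
        raise PermissionError(f"{requested!r} escapes {base_dir!r}")
    return target
```

Checking the raw string (for example, rejecting paths containing `..` before resolution) is the fragile version of this check, which is why canonicalize-then-compare is the usual recommendation.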

GHSA-9x67-f2v7-63rw: AVideo vulnerable to unauthenticated SSRF via HTTP redirect bypass in LiveLinks proxy

high · vulnerability
security
Mar 17, 2026
CVE-2026-33039

AVideo's LiveLinks proxy endpoint validates URLs to block requests to internal networks, but only checks the initial URL. When a URL redirects (sends back a `Location` header pointing elsewhere), the code follows the redirect without re-validating the new target, letting attackers reach internal services like cloud metadata or private networks. The endpoint is also completely unauthenticated, so anyone can access it.
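The fix pattern for this class of SSRF is to stop trusting the first URL and re-run the network check on every hop. A minimal sketch, assuming a pluggable `opener` callback for testability rather than AVideo's real PHP code, and a deliberately simplified internal-address check:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal(url: str) -> bool:
    """True if the URL's host resolves to a private/loopback/link-local address."""
    host = urlparse(url).hostname or ""
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return True  # fail closed on unresolvable hosts
    return addr.is_private or addr.is_loopback or addr.is_link_local

def fetch_with_validation(url, opener, max_redirects=5):
    """Follow redirects manually, re-checking EVERY hop (the step AVideo skipped).

    `opener` takes a URL and returns (status, location, body).
    """
    for _ in range(max_redirects + 1):
        if is_internal(url):
            raise PermissionError(f"blocked internal target: {url}")
        status, location, body = opener(url)
        if status in (301, 302, 303, 307, 308) and location:
            url = location  # re-validated on the next pass through the loop
            continue
        return body
    raise RuntimeError("too many redirects")
```

A production defense would also cover IPv6 ranges and DNS-rebinding (validating the resolved address actually used for the connection), which this sketch omits.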

The Verge (AI)
Mar 18, 2026

Reco, a SaaS security platform, launched "Reco AI Agent Security" on March 18 to address "agent sprawl," the problem of autonomous AI agents (like Copilot and ChatGPT integrations) accessing sensitive data and taking actions across multiple systems without human oversight. The new tool gives security teams visibility and control over these AI agents by using behavior-based detection (analyzing API call patterns and workflow signatures) instead of traditional connection-based methods, identifying risks like agents with excessive permissions or misconfigured access to customer data.

Fix: Reco AI Agent Security is explicitly designed as the mitigation. According to the source, the offering provides: (1) AI agent discovery through multi-layered detection that analyzes API call patterns and service account activity to identify autonomous behavior; (2) risk analysis by correlating activity across applications and recognizing workflow signatures of automation tools like n8n, Zapier, and Make; and (3) governance and control over all AI agents operating in the SaaS ecosystem. The platform tracks OAuth connections, analyzes decision-making patterns that indicate autonomous action, and monitors cross-application activity to identify agents that traditional SSPM tools miss.
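Reco's detection logic is proprietary, but one behavior-based signal of the kind described (analyzing API call patterns for autonomous behavior) can be illustrated with timing regularity: scripted agents tend to fire requests at near-constant intervals, while human activity is bursty. A toy heuristic, not Reco's algorithm:

```python
from statistics import mean, pstdev

def looks_automated(timestamps, cv_threshold=0.1):
    """Toy behavioral signal: a low coefficient of variation in the gaps
    between API calls hints at a machine driver rather than a human user."""
    if len(timestamps) < 3:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    return m > 0 and pstdev(gaps) / m < cv_threshold
```

Real products combine many such signals (OAuth grants, service-account activity, cross-application workflow signatures), since timing alone is trivial to evade.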

CSO Online
The Hacker News
CNBC Technology
The Guardian Technology
Mar 18, 2026

Synthetic identity fraud, where criminals create fake people using AI-generated documents and deepfakes (realistic fake videos or audio), is becoming a major threat in estate and identity verification work. Traditional security checks that look at device fingerprints or typing patterns are no longer reliable because AI can now imitate these signals. The text explains that the real challenge by 2026 will be distinguishing legitimate people from manufactured personas, especially in high-stakes situations involving inheritance and family claims.

Fix: The source suggests moving from asking "Who is this?" to a more forensic approach: "How did this identity—and its digital footprint—come to exist?" This shift means prioritizing provenance (where the identity originated), issuer verification (confirming documents are real), and cross-channel consistency (checking if the person's presence makes sense across multiple systems) over accepting surface-level plausibility. However, the text does not provide specific technical implementations or detailed steps for executing this approach.

CSO Online
CNBC Technology
Mar 18, 2026

CISOs (Chief Information Security Officers, the top security leaders at companies) are updating their data protection strategies because employees are rapidly sharing company data with AI tools, including public models like ChatGPT, creating new security risks. A CISO at a law firm added a new protection layer that classifies data based on whether it can be safely used with AI and invested in new monitoring tools, while also regularly evaluating new technologies to ensure controls keep pace with AI innovations.

Fix: The source describes one organization's approach: add a protection layer that classifies and tags data based on whether it could be used with AI and in what circumstances, invest in new tools to support that layer, monitor the vendor landscape for emerging capabilities, and evaluate new technologies being deployed to determine whether new controls are needed for them. However, no specific technical solutions, patches, or vendor recommendations are explicitly named in the source text.

CSO Online
Mar 18, 2026

Meta-owned Manus launched a desktop application with a feature called 'My Computer' that allows its AI agent (a program that can complete complex, multi-step tasks automatically) to access and control files, tools, and applications directly on a user's computer, rather than only working in the cloud. This move competes with OpenClaw, a free, open-source AI agent that similarly runs on local devices. Experts have raised security and privacy concerns about giving AI agents local device access, but Manus addressed this by requiring explicit user approval before the agent executes tasks.

Fix: Manus's mitigation for security and privacy risks includes a control mechanism requiring explicit user approval before task execution. According to Manus, users can choose "Allow Once" for individual review of each action or "Always Allow" for trusted, recurring actions, keeping users "firmly in control."
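The "Allow Once" / "Always Allow" control described by Manus maps onto a simple approval-gate pattern. A hypothetical sketch (`ApprovalGate` and its callback protocol are invented for illustration and are not Manus's implementation):

```python
class ApprovalGate:
    """Every agent action needs an explicit human decision; only 'always' persists."""

    def __init__(self, ask):
        self.ask = ask                # callback: action -> "once" | "always" | "deny"
        self.always_allowed = set()

    def execute(self, action, run):
        if action in self.always_allowed:
            return run()              # previously trusted, no re-prompt
        decision = self.ask(action)
        if decision == "always":
            self.always_allowed.add(action)
        elif decision != "once":
            raise PermissionError(f"user denied: {action}")
        return run()
```

The design choice worth noting is the default: nothing runs without a decision, and the persistent grant is opt-in per action rather than global.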

CNBC Technology
policy
Mar 18, 2026

The OWASP GenAI Security Project, an open-source community focused on AI security, announced expansion of its resources and frameworks with over 25,000 members contributing practical guidance and tools. The project is being highlighted at the RSA 2026 conference, indicating growing industry adoption of AI security best practices.

OWASP GenAI Security
Mar 18, 2026

This survey examines methods for automatically finding bugs in software code by using machine learning and AI models, tracing the evolution from traditional machine learning techniques to modern large language models (LLMs, which are AI systems trained on vast amounts of text data). The research covers how these AI-based approaches learn patterns to pinpoint where faults occur in code, making debugging faster and more efficient than manual inspection.

ACM Digital Library (TOPS, DTRAP, CSUR)
CSO Online

Zimbra Collaboration Suite (ZCS) has a cross-site scripting vulnerability (XSS, a type of attack where malicious code runs in a user's browser) in its Classic UI that allows attackers to exploit CSS @import directives (special commands that load external stylesheets) in email HTML. This vulnerability is currently being actively exploited by attackers in real-world attacks.
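On the defensive side, mail renderers commonly strip active CSS constructs from untrusted HTML before display, since an `@import` lets an email's stylesheet pull attacker-controlled content at render time. A naive illustration of that idea (a real sanitizer would parse the CSS rather than rely on a regex):

```python
import re

# Matches @import directives, tolerating whitespace/case tricks like "@ IMPORT".
IMPORT_RE = re.compile(r"@\s*import\b[^;]*;?", re.IGNORECASE)

def strip_css_imports(email_html: str) -> str:
    """Drop CSS @import directives from untrusted email HTML before rendering."""
    return IMPORT_RE.sub("", email_html)
```

This is one narrow filter, not a full defense; mail clients typically combine it with a whitelist-based HTML/CSS sanitizer.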

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. The due date for remediation is 2026-04-01.

CISA Known Exploited Vulnerabilities

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. Due date: 2026-03-21.

CISA Known Exploited Vulnerabilities

Fix: Nvidia addressed risks with NemoClaw by building "guardrails, including privacy protections, oversight tools, and enterprise-grade security to ensure these agents can be deployed safely at scale."

CNBC Technology
Mar 17, 2026

The Pentagon is planning to let AI companies train their models on classified military data in secure facilities, which would allow the AI to learn from and embed sensitive intelligence like surveillance reports. While this could make AI systems more accurate for military tasks, experts warn it creates risks: classified information that the AI learns could accidentally be shared with people or military departments that shouldn't have access to it, potentially endangering operatives or exposing secrets.

MIT Technology Review
CNBC Technology

Fix: Upgrade to version 1.3.9 or later.

GitHub Advisory Database
GitHub Advisory Database