All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
The Department of Defense has designated Anthropic (an AI company) as a "supply-chain risk" after the company refused to give the military unrestricted access to its AI systems, specifically declining to allow mass surveillance of Americans or autonomous weapons that can fire without human oversight. Hundreds of tech workers from major firms have signed an open letter opposing this designation, arguing it punishes the company for declining a contract and sets a dangerous precedent that could force other companies to accept government demands or face retaliation. The designation is not yet final, as the government must complete a risk assessment and notify Congress before it takes effect, and Anthropic says it will challenge the designation in court.
Google Chrome had a security flaw (CVE-2026-0628, a CVSS score of 8.8, which measures vulnerability severity from 0-10) that allowed malicious browser extensions to gain unauthorized access to the Gemini Live panel, a built-in AI assistant, and perform privileged actions like accessing cameras, microphones, and local files. The vulnerability was caused by insufficient policy enforcement in the WebView tag (a component that displays web content), which let attackers inject malicious code into pages that should have been protected.
Nvidia is investing $4 billion total ($2 billion each) into two companies, Lumentum and Coherent, that develop photonics technology (devices like optical transceivers and lasers that move data using light). These technologies could make AI data centers more energy-efficient and allow faster data transfer between components, building on Nvidia's previous acquisition of Mellanox to strengthen its networking capabilities.
Anthropic's Claude AI experienced elevated errors and degraded performance on Monday, particularly affecting Claude Opus 4.6 (the latest version of their AI model). The company identified the issues and worked on fixes, with some problems on claude.ai and related services being resolved.
A security flaw in Chrome's Gemini Live feature (Google's AI assistant) could allow malicious browser extensions (add-ons that modify Chrome's behavior) to take control of the AI tool, spy on users, and steal their files. The vulnerability created a serious risk for anyone using this feature with untrusted extensions installed.
Nvidia is investing $4 billion total ($2 billion each) in two U.S. companies, Lumentum and Coherent, that develop photonics technologies (systems using light for sensing and data transfer). These investments include multi-billion dollar purchase commitments and aim to support Nvidia's AI infrastructure expansion by securing advanced optical and laser components needed for large-scale AI data centers.
A vulnerability in OpenClaw allowed malicious websites to connect to the OpenClaw gateway (a system that manages AI agents) on localhost (a computer's own network), guess passwords through brute force attacks (trying many password combinations rapidly), and take control of AI agents. This exposed AI systems to unauthorized hijacking from untrusted websites.
Anthropic's Claude service experienced a widespread outage on Monday morning, affecting Claude.ai and Claude Code (though the Claude API remained functional), with most users encountering errors during login. The company identified the issue was related to login and logout systems and stated it was implementing a fix, though no root cause or technical details were disclosed.
Claude, an AI assistant made by Anthropic, experienced a widespread outage on March 2, 2026, affecting users across all platforms including web, mobile, and API (the interface developers use to connect to the service). Users reported failed requests, timeouts (when the system doesn't respond in time), and inconsistent responses, with the company still investigating the cause as of the last update.
A high-severity vulnerability (CVE-2026-0628) in Google Chrome's Gemini AI feature allowed malicious extensions with basic permissions to hijack the Gemini panel and gain unauthorized access to sensitive resources like the camera, microphone, screenshots, and local files. Google released a fix in early January 2026, and the vulnerability highlights how integrating AI directly into browsers creates new security risks when AI components have overly broad access to the browser environment.
A bug in Google's Gemini AI Panel allowed attackers to escalate privileges (gain higher-level access to a system), violate user privacy during browsing, and access sensitive resources. The vulnerability created a security risk by opening a door for unauthorized control of the system.
True cybersecurity culture is about real behaviors and decisions people make under pressure, not awareness campaigns or posters. The article argues that most organizations accidentally train employees to ignore security by rewarding speed over safety, creating confusing policies, making secure processes difficult, and failing to acknowledge security concerns, then suggests fixing this by redesigning workflows to make secure choices the easiest and most obvious option.
Fix: Google patched the vulnerability in Chrome version 143.0.7499.192/.193 for Windows/Mac and 143.0.7499.192 for Linux in early January 2026.
The Hacker NewsFix: According to the status updates mentioned: an issue with Claude Opus 4.6 had 'a fix was in the works' as of 10:49 a.m. ET, and issues on claude.ai, console, and claude code were reported as 'resolved' as of 10:47 a.m. ET.
CNBC TechnologyDeepfakes and injection attacks (where attackers substitute fake video or audio into a system's input stream) are increasingly being used to bypass identity verification systems in critical moments like bank account opening, remote hiring, and account recovery. Traditional deepfake detection alone is insufficient because attackers can either create high-quality synthetic media or completely bypass the camera sensor using injection attacks, so organizations need to validate entire identity sessions end-to-end, including device integrity and user behavior signals, rather than just checking if a face looks real.
OpenAI negotiated with the Pentagon to use its AI systems for military purposes, while Anthropic refused and was blacklisted for rejecting two uses: domestic mass surveillance (monitoring Americans without individual consent) and lethal autonomous weapons (AI systems that can kill targets without a human making the final decision). OpenAI's CEO claimed to have found a way to maintain safety limits in the company's military contract, though the article does not detail what those specific terms are.
OwnerHunter is a system that uses large language models (AI trained on vast amounts of text) to identify who owns a website by analyzing webpage content across multiple languages. It improves on older methods that struggled when webpages listed many names or were written in non-English languages, using strategies like checking multiple sources on a page and verifying results to accurately determine the true owner.
OpenAI secured a deal with the U.S. Department of Defense after the Trump administration forced federal agencies to stop using Anthropic's AI technology, citing disagreements over how the Pentagon wanted to use the artificial intelligence startup's systems. OpenAI's CEO Sam Altman stated that his company shares the same ethical boundaries (called guardrails, which are safety limits built into AI systems) as Anthropic regarding how the technology should be used.
Anti-AI protest groups organized a march in London on February 28 with a couple hundred protesters expressing concerns about generative AI (AI systems trained on large amounts of data to generate text, images, or other content), ranging from job displacement and harmful content to existential risks. The protest represents a significant growth in organized anti-AI activism, with groups like Pause AI expanding rapidly since their 2023 founding to mobilize larger crowds around concerns that researchers have documented about AI systems like ChatGPT and Gemini.
Researchers demonstrated that LLMs (large language models, AI systems trained on vast amounts of text) can effectively de-anonymize people by identifying them from their anonymous online posts across platforms like Hacker News, Reddit, and LinkedIn. By analyzing just a handful of comments, these AI systems can infer personal details like location, occupation, and interests, then search the web to match and identify the anonymous user with high accuracy across tens of thousands of candidates.
Fix: Google released a fix in early January 2026. Additionally, Palo Alto Networks' Prisma Browser is mentioned as a product designed to prevent extension-based attacks like this vulnerability.
Palo Alto Unit 42AI is developing faster than government regulation can keep up, creating risks like chatbots giving harmful advice to teens and potential misuse for creating biological weapons. Unlike industries such as nuclear power or pharmaceuticals, AI companies are not required to disclose safety problems or undergo independent testing before releasing new models to the public. The author argues that independent oversight of AI platforms is necessary to protect people's rights and safety.
Security leaders (CISOs, who oversee an organization's security strategy) face pressure to enable innovation like AI adoption while reducing risk and staying within budget constraints. The source argues that well-governed innovation actually reduces risk by preventing uncontrolled tool sprawl and shadow IT (unauthorized software systems), but unmanaged innovation creates fragile systems that increase damage from security incidents. The key is bringing discipline to experimentation by automating routine tasks, giving teams ownership of meaningful improvements with clear end goals, and using AI strategically only where it changes the risk equation without creating new vulnerabilities.
Fix: The source recommends: 'Make the secure path the easiest path. People choose defaults. Give them good ones. Create golden paths for common work. Secure templates. Approved tools. Automated guardrails. Self-service access with sane limits.' The text also advises organizations to 'Remove friction. Clarify choices. Make it hard to do the wrong thing by accident and easy to make the best possible decision.'
CSO Online