All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Gradio (an open-source Python package for building web interfaces quickly) has a vulnerability in versions before 6.7 that, on Windows with Python 3.13 or newer, allows attackers to read any file from the server by exploiting a flaw in how the software checks whether file paths are absolute (starting from the root directory). The vulnerability exists because Python 3.13 changed how it defines absolute paths on Windows, breaking Gradio's protection against path traversal (accessing files outside intended directories).
Fix: Update Gradio to version 6.7 or later, which fixes the issue.
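The root cause described above, a path check whose meaning shifted between Python versions, can be sidestepped by testing containment after resolution instead of testing absoluteness. A minimal sketch, not Gradio's actual code; `is_safe_path` and the directory names are hypothetical:

```python
from pathlib import Path

def is_safe_path(base_dir: str, requested: str) -> bool:
    """Allow `requested` only if it resolves to a path inside `base_dir`.

    Checking containment after resolution avoids relying on
    os.path.isabs, whose Windows semantics changed in Python 3.13.
    """
    base = Path(base_dir).resolve()
    # If `requested` is absolute, joining discards `base`, and the
    # containment check below rejects the result anyway.
    target = (base / requested).resolve()
    return target == base or base in target.parents
```

On Python 3.9+ the final check can also be written as `target.is_relative_to(base)`.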
NVD/CVE Database: Gradio, a Python package for building web interfaces, has a security flaw in versions 4.16.0 through 6.5.x where it automatically enables fake OAuth routes (authentication shortcuts) that accidentally expose the server owner's Hugging Face access token (a credential used to authenticate with Hugging Face services) to anyone who visits the login page. An attacker can steal this token because the session cookie (a small file storing login information) is signed with a hardcoded secret, making it easy to decode.
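Why a hardcoded signing secret defeats session-cookie protection can be shown with a small HMAC sketch. This is illustrative only, not Gradio's implementation; the secret value, payload field, and helper names are hypothetical:

```python
import base64
import hashlib
import hmac
import json
from typing import Optional

# Hypothetical hardcoded secret, illustrating the class of bug:
# anyone who reads it from the source can mint valid cookies.
HARDCODED_SECRET = b"gradio-demo-secret"

def sign_cookie(payload: dict, key: bytes) -> str:
    """Serialize and HMAC-sign a session payload (sketch only)."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_cookie(cookie: str, key: bytes) -> Optional[dict]:
    """Return the payload if the signature checks out, else None."""
    body, sig = cookie.rsplit(".", 1)
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))

# With the secret known, an attacker forges a cookie the server accepts:
forged = sign_cookie({"hf_token": "attacker-chosen-value"}, HARDCODED_SECRET)
assert verify_cookie(forged, HARDCODED_SECRET) == {"hf_token": "attacker-chosen-value"}
```

The fix is the usual one: generate the signing key at install or startup time and keep it out of the source tree.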
President Trump directed federal agencies to stop using Anthropic's AI products and gave them six months to phase out usage, following the company's dispute with the Department of Defense. The Secretary of Defense designated Anthropic a supply-chain risk to national security, meaning military contractors can no longer do business with the company, because Anthropic refused to let its AI models be used for mass domestic surveillance or fully autonomous weapons (systems that make decisions and take action without human control).
President Trump ordered federal agencies to stop using Claude (an AI system made by Anthropic) after the company's CEO refused to sign a military agreement that would allow unlimited use of their technology. The disagreement centers on whether Anthropic's AI should be available for all military purposes, including domestic surveillance.
A software developer who was skeptical about AI coding agents discovered they have become significantly more capable, using them to build increasingly complex projects including a Rust implementation of machine learning algorithms. The developer notes that recent AI coding models (like Opus 4.6 and Codex 5.3) are dramatically better than earlier versions, but this improvement is hard to communicate publicly without sounding like promotional hype.
Google's API keys (simple identifiers that were designed only for billing purposes) unexpectedly gained the ability to authenticate access to private Gemini AI project data without any warning to developers. Researchers found 2,863 exposed keys that could let attackers steal files, datasets, and documents, or rack up expensive bills by running the AI model repeatedly.
Despite strong earnings and growth forecasts, Nvidia's stock fell 6% this week as investors worry that spending by tech companies on AI infrastructure will peak soon and competition is increasing. Major AI companies like OpenAI and Meta are now diversifying away from Nvidia's GPUs (graphics processing units, specialized chips for AI computations) by adopting alternative chips from companies like Amazon, Google, and Advanced Micro Devices.
ChatGPT has reached 900 million weekly active users and 50 million paying subscribers, with OpenAI reporting that subscriber growth accelerated significantly in early 2026. The company announced a $110 billion funding round, one of the largest private funding rounds ever, with major investments from Amazon, Nvidia, and SoftBank at a $730 billion valuation.
Anthropic is offering free access to Claude Max (their $200/month AI assistant plan) for six months to open source maintainers who meet specific criteria: primary maintainers of public repositories with 5,000+ GitHub stars or 1 million+ monthly NPM downloads, with recent commits or reviews in the last three months. The program accepts up to 10,000 contributors, and maintainers who don't quite meet the stated criteria can still apply and explain their importance to the ecosystem.
Perplexity has launched Computer, an agentic tool (software that can independently execute complex tasks) that combines 19 different AI models to handle workflows like data collection, analysis, and report creation. The tool runs in the cloud and is available only to subscribers of Perplexity Max (the $200/month tier), though a planned demo was canceled hours before a press event due to flaws discovered in the product.
Anthropic, an AI company, is refusing the Pentagon's demands for unrestricted access to its AI technology, specifically opposing its use for domestic mass surveillance (tracking citizens without limits) and fully autonomous weapons (weapons that make kill decisions without human control). Over 300 Google employees and 60 OpenAI employees signed an open letter supporting Anthropic's stance, and leaders at both companies have informally expressed sympathy for Anthropic's position. Even so, the Pentagon has threatened to declare Anthropic a security risk or invoke the Defense Production Act (a law allowing the government to force companies to produce needed goods) if it doesn't comply.
Fix (Gradio OAuth token exposure): Update to Gradio version 6.6.0, which fixes the issue.
NVD/CVE Database: Anthropic, maker of the AI chatbot Claude, refused the Pentagon's demand to allow unrestricted military use of its technology, citing concerns about safeguards against mass surveillance and autonomous weapons (systems that make decisions without human control). President Trump ordered all federal agencies to stop using Anthropic's technology in response, escalating a public dispute within the AI industry about balancing national security needs with AI safety protections.
Fix (exposed Google Gemini API keys): Site administrators should check the GCP console for keys that allow the Generative Language API and look for unrestricted keys marked with a yellow warning icon. Exposed keys should be rotated or regenerated (replaced with new ones) with a grace period to avoid breaking apps that use the old keys. Google's roadmap includes making API keys created through AI Studio default to Gemini-only access, as well as blocking leaked keys and notifying customers when they are detected.
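As a triage aid, candidate Google API keys can be grepped out of code and logs by their well-known shape: the literal prefix `AIza` followed by 35 URL-safe characters. A minimal sketch; `find_candidate_keys` is a hypothetical helper, and a regex hit is only a candidate to investigate, never proof of a live credential:

```python
import re

# Google API keys share a recognizable format: "AIza" followed by
# 35 characters from the URL-safe set [0-9A-Za-z_-].
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_candidate_keys(text: str) -> list:
    """Return candidate Google API key strings found in `text`."""
    return GOOGLE_API_KEY_RE.findall(text)
```

Any hit should be cross-checked against the GCP console and rotated if it is unrestricted or unaccounted for.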
CSO Online: AI assistants designed to find security vulnerabilities (weaknesses in software that attackers can exploit) are not yet reliable enough for professional use, despite their potential to help find bugs faster. Experts say current AI tools have problems with both accuracy and speed, making them unsuitable for businesses and developers who need dependable security scanning.
OpenAI CEO Sam Altman publicly supported rival company Anthropic in its dispute with the US Department of Defense over AI tool usage, stating that OpenAI shares Anthropic's refusal to allow certain uses such as domestic surveillance and autonomous offensive weapons. The Pentagon has threatened Anthropic with retaliation, including invoking the Defense Production Act (a law that lets the government compel companies to prioritize production for national defense) or labeling the company a supply-chain risk, but Anthropic maintains its position on restricting potentially harmful applications.
OpenAI CEO Sam Altman sent an internal memo to staff expressing support for rival company Anthropic in a dispute with the Pentagon over AI model usage, stating that both companies oppose using AI for mass surveillance or fully autonomous weapons. About 70 OpenAI employees signed an open letter supporting Anthropic, which has a deadline to decide whether to allow the Department of Defense unrestricted access to its AI models. Altman indicated OpenAI is negotiating with the Pentagon to deploy its own models in classified environments while maintaining ethical boundaries around domestic surveillance and autonomous offensive weapons.
In the memo, Altman proposed that OpenAI would ask for a contract with the Pentagon that covers "any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons." He also stated the company would "build technical safeguards and deploy personnel to ensure things are working correctly" in classified environments.
CNBC Technology: In a deposition for his lawsuit against OpenAI, Elon Musk claimed that his company xAI prioritizes AI safety better than OpenAI, and that ChatGPT has caused mental health harms including suicides while Grok has not. Musk's lawsuit challenges OpenAI's transition from a nonprofit to a for-profit company, arguing that commercial interests compromise safety priorities, though xAI itself has faced safety issues including the generation of non-consensual intimate images by Grok.
Anthropic and the U.S. Department of Defense are in conflict over how the military can use Anthropic's AI models. Anthropic refuses to allow its AI for mass surveillance of Americans or fully autonomous weapons (systems that select and fire at targets without human decision-makers), while the Pentagon argues it should be permitted to use the technology for any lawful purpose. The core dispute is whether the companies that build powerful AI systems or the government that deploys them should control how those systems are used.
Anthropic is refusing to accept new Pentagon contract terms that would remove safety restrictions (guardrails, the built-in limits on what an AI model will do) from its AI models, which would allow uses like mass surveillance of Americans and fully autonomous lethal weapons (weapons that can select and fire at targets without human control). Despite pressure from the Pentagon, including threats to label Anthropic a supply chain risk (a designation suggesting it poses a national security threat), CEO Dario Amodei says the company will not compromise on these ethical boundaries, while competitors OpenAI and xAI have reportedly agreed to the terms.
The Pentagon is pressuring Anthropic (an AI company) to remove safety restrictions on its technology or face being labeled a 'supply chain risk' that could cost it billions in contracts. The pressure includes demands for military access to the AI for surveillance and autonomous weapons systems, raising concerns among tech workers about how their work might be used.
The U.S. Department of Defense is in a standoff with Anthropic, an AI company, over whether the company will remove safeguards from its AI models to allow military uses like mass domestic surveillance and fully autonomous weapons (systems that can make combat decisions without human control). This conflict highlights a major shift in power: private companies now control cutting-edge AI technology rather than governments, forcing the Pentagon to negotiate with industry over how AI will be deployed in national security and warfare.