All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
OpenAI announced a $110 billion funding round led by Amazon ($50 billion), Nvidia ($30 billion), and SoftBank ($30 billion), raising the company's valuation to $730 billion. Beyond the investment, Amazon committed to an expanded $100 billion partnership over eight years to use AWS (Amazon Web Services, Amazon's cloud computing platform) as OpenAI's exclusive cloud provider and to develop customized AI models for Amazon's applications.
Samsung's Galaxy S26 phones include useful new features like a Privacy Display on the Ultra model, but the new camera features are described as problematic and concerning rather than helpful upgrades. The camera issues are discussed on The Vergecast podcast, though the article does not provide specific technical details about what makes them problematic.
OpenAI has secured $110 billion in new funding from Amazon ($50 billion), Nvidia ($30 billion), and SoftBank ($30 billion), bringing the company's valuation to $730 billion. The investment includes plans for custom AI models and reflects confidence in OpenAI's ChatGPT platform, which has over 900 million weekly active users and 50 million consumer subscribers.
Anthropic, an AI startup, faces a Friday deadline to allow the U.S. Department of Defense to use its AI models without restrictions, or face severe penalties like being labeled a 'supply chain risk' (a designation that blocks government contractors from using the company's technology). The company has refused, saying it won't agree to uses it believes could undermine democracy, such as fully autonomous weapons or domestic mass surveillance, leaving it to weigh its reputation for responsible AI against the loss of significant military contracts and revenue.
OpenAI has secured $110 billion in private funding from major investors including Amazon ($50 billion), Nvidia ($30 billion), and SoftBank ($30 billion), making it one of the largest private funding rounds ever. The company plans to use this capital to scale its AI infrastructure globally, including building new runtime environments on Amazon's cloud services and committing to use significant computing power from both Amazon and Nvidia. This funding round reflects OpenAI's goal to move frontier AI (advanced AI systems at the cutting edge of research) from research phase into widespread daily use across the world.
Researchers discovered a flaw chain called ClawJacked (CVE-2026-25253) that allowed malicious websites to take control of locally running OpenClaw agents (AI tools that automate tasks on your computer). The attack exploited a design flaw: the OpenClaw gateway trusted any connection arriving via localhost (your own computer) and accepted WebSocket connections (direct communication channels) initiated by external websites loaded in the browser, letting attackers brute-force passwords without any rate limiting and gain full access to the agent's capabilities, credentials, and data.
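The source does not include the gateway's code, but the two missing controls it describes (checking which web origin opened the WebSocket, and rate-limiting authentication attempts) are easy to sketch. The snippet below is an illustrative TypeScript example using Node's `ws` package; the port, origin list, attempt limit, and `verifyPassword` helper are all assumptions, not OpenClaw's actual implementation.

```typescript
// Illustrative sketch only: demonstrates the two controls the writeup says
// were missing, rejecting WebSocket upgrades initiated by unknown web origins
// and rate-limiting authentication so passwords cannot be brute-forced.
import { WebSocketServer } from "ws";

const ALLOWED_ORIGINS = new Set(["http://localhost:8080"]); // assumption: origin of the local UI
const MAX_ATTEMPTS = 5;                                      // illustrative limit
const attempts = new Map<string, number>();                  // failed auth attempts per remote address

const wss = new WebSocketServer({ port: 18789 });            // port number is made up

wss.on("connection", (socket, req) => {
  const origin = req.headers.origin ?? "";
  // A browser always sends the page's origin; a connection opened by a
  // random external website will not match the local UI and is dropped.
  if (!ALLOWED_ORIGINS.has(origin)) {
    socket.close(1008, "origin not allowed");
    return;
  }

  socket.on("message", (raw) => {
    const addr = req.socket.remoteAddress ?? "unknown";
    const msg = JSON.parse(raw.toString());

    if (msg.type === "auth") {
      const failed = attempts.get(addr) ?? 0;
      if (failed >= MAX_ATTEMPTS) {
        // Without a cap like this, an attacker can try passwords as fast as
        // the socket allows -- the brute-force path described above.
        socket.close(1008, "too many attempts");
        return;
      }
      if (!verifyPassword(msg.password)) {
        attempts.set(addr, failed + 1);
        socket.send(JSON.stringify({ ok: false }));
        return;
      }
      attempts.delete(addr);
      socket.send(JSON.stringify({ ok: true }));
    }
  });
});

// Placeholder: stands in for however the real gateway checks credentials.
function verifyPassword(candidate: string): boolean {
  return candidate === process.env.GATEWAY_PASSWORD;
}
```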
Ransomware attackers are shifting from loud, disruptive attacks toward stealthy, long-term infiltration tactics where they quietly steal data for extortion rather than encrypting it. They're using defense evasion (techniques to avoid detection) and persistence mechanisms to stay hidden, routing their command-and-control traffic (communications between attackers and compromised systems) through legitimate business services like OpenAI and AWS to blend in with normal activity. Attackers are also chaining multiple vulnerabilities together in coordinated exploitation rather than treating each weakness as an isolated entry point.
Burger King is deploying an AI chatbot powered by OpenAI (the company behind ChatGPT) that listens to employee headsets at hundreds of US locations to monitor whether workers use polite words like 'please' and 'thank you.' The company says the system, called BK Assistant, will help understand service patterns, though the announcement has sparked criticism from workers.
Anthropic CEO Dario Amodei stated the company will not allow the U.S. Department of Defense to use its AI models without restrictions on fully autonomous weapons and mass domestic surveillance, despite Pentagon threats to label the company a supply chain risk or invoke the Defense Production Act. The DoD counters that it only wants to use the models for lawful purposes and has given Anthropic until Friday evening to agree to unrestricted access, with competing AI companies like OpenAI and Google already accepting these terms.
Anthropic rejected the Pentagon's demands for unrestricted access to its AI system, refusing to agree to two specific uses: mass surveillance of Americans and lethal autonomous weapons (weapons that can kill targets without human oversight). The refusal came just before a deadline set by Defense Secretary Pete Hegseth, who wanted to renegotiate AI contracts with the military.
Anthropic's CEO Dario Amodei refused the Pentagon's demand for unrestricted access to the company's AI systems, citing two concerns: mass surveillance of Americans and fully autonomous weapons (weapons that make decisions with no human oversight). The Pentagon threatened to label Anthropic a security risk or use the Defense Production Act (a law giving the president power to force companies to prioritize defense production) to force compliance, but Amodei said the company would work with the military under its proposed safeguards or help transition to another provider if the Pentagon chose to end the relationship.
Microsoft is previewing Copilot Tasks, an AI system that runs on Microsoft's cloud servers to complete repetitive work for you, such as scheduling appointments or creating study plans, while you use your own device for other tasks. You can describe what you want using plain English and set the tasks to run once, on a schedule, or repeatedly, and the AI will send you a report when finished.
A vulnerability in n8n's Zendesk Trigger node (a tool that automatically starts workflows when Zendesk sends data) allows attackers to forge webhook requests, meaning they can trigger workflows with fake data because the node doesn't verify the HMAC-SHA256 signature (a cryptographic check that confirms a message is authentic). This lets anyone who knows the webhook URL send malicious payloads to the connected workflow.
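To make the missing check concrete, here is a minimal sketch of HMAC-SHA256 webhook verification using Node's built-in crypto module. The header names follow Zendesk's documented signing scheme (a base64-encoded signature plus a timestamp header) but should be treated as assumptions here, as should the `SIGNING_SECRET` environment variable; this is not the n8n patch itself.

```typescript
// Minimal sketch of HMAC-SHA256 webhook verification: recompute the signature
// from the shared secret and compare it to the header the sender provides.
import { createHmac, timingSafeEqual } from "crypto";
import type { IncomingHttpHeaders } from "http";

function isAuthenticWebhook(headers: IncomingHttpHeaders, rawBody: string): boolean {
  const signature = headers["x-zendesk-webhook-signature"];
  const timestamp = headers["x-zendesk-webhook-signature-timestamp"];
  if (typeof signature !== "string" || typeof timestamp !== "string") {
    return false; // unsigned request: exactly what the vulnerable node accepts
  }

  // Recompute the HMAC over timestamp + body with the shared signing secret.
  const expected = createHmac("sha256", process.env.SIGNING_SECRET ?? "")
    .update(timestamp + rawBody)
    .digest();
  const received = Buffer.from(signature, "base64");

  // Constant-time comparison avoids leaking how many bytes matched.
  return received.length === expected.length && timingSafeEqual(received, expected);
}

// Usage: reject the request before handing the payload to the workflow, e.g.
// if (!isAuthenticWebhook(req.headers, rawBody)) respond with 401.
```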
This article briefly mentions several cyber security developments, including OpenAI taking action against malicious uses of AI, a hacker group claiming to have breached Odido (a telecommunications company), and a spyware tool called Predator that can bypass iOS security indicators (the visual signals that show when an app is accessing your device's features).
Claude Code, an AI tool for writing software, generated excitement when it was released, but researchers studying it have found that its actual performance and security capabilities are not as impressive as initial claims suggested. The article indicates that people were too optimistic about what the tool could do.
Anthropic, an AI startup, is refusing to let the U.S. Defense Department use its AI models without restrictions on fully autonomous weapons (weapons that make decisions without human control) and mass domestic surveillance. The Pentagon wants unlimited use of Anthropic's models and set a deadline for the company to agree, threatening to label them a supply chain risk (a company whose failure could disrupt critical systems) if they don't comply.
Anthropic, an AI company, is in a dispute with the Pentagon over safeguards for its Claude AI system. The company is asking for specific guarantees that Claude won't be used for mass surveillance (monitoring large populations without consent) of Americans or in fully autonomous weapons (military systems that make lethal decisions without human control).
Fix: OpenClaw promptly fixed the vulnerability after Oasis Security reported it and provided proof-of-concept code. No additional details about the specific fix are provided in the source text.
CSO Online: LLMs are being used in security in three ways: as productivity tools for analysts, as embedded components in security products, and as targets for attackers to manipulate or steal. The same capabilities that help security teams (like summarizing incidents or drafting detection logic) can also enable attackers to create convincing phishing emails or extract sensitive information if the LLM is poorly integrated. To use LLMs defensively without creating new vulnerabilities, security teams should treat LLM output as untrusted, start with narrow, easy-to-verify use cases, and design systems with three layers of constraints: limited model capabilities, restricted data access, and human approval for any actions that change system state.
Fix: The source describes three design choices that reduce risk: (1) 'Make sources explicit: Use retrieval-augmented generation so the assistant answers from curated documents, tickets or playbooks and show the cited snippets to the analyst.' (2) 'Keep the model out of the blast radius: The model should not hold secrets. Use short-lived credentials, scoped tokens and brokered access to tools.' (3) 'Gate actions: Anything that changes a system state (blocking, quarantining, deleting, emailing) should require human approval or a separate policy engine.' The source also recommends starting with a 'narrow set of workflows where the output is advisory and easy to verify' before expanding capabilities.
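The 'gate actions' design choice is the easiest to picture in code. Below is a minimal TypeScript sketch, not taken from the source, of a wrapper that lets advisory output run directly but holds any state-changing action (blocking, quarantining, emailing) for human approval; the action names and the `requestHumanApproval` and `performAction` helpers are hypothetical placeholders.

```typescript
// Hypothetical sketch of an action gate: LLM output is treated as a proposal,
// and anything that changes system state must be approved by a human (or a
// separate policy engine) before it runs.
type ProposedAction =
  | { kind: "summarize"; incidentId: string }          // advisory: read-only
  | { kind: "block_ip"; ip: string }                   // changes firewall state
  | { kind: "quarantine_host"; hostname: string }      // changes endpoint state
  | { kind: "send_email"; to: string; body: string };  // leaves the environment

const STATE_CHANGING = new Set(["block_ip", "quarantine_host", "send_email"]);

async function executeProposal(action: ProposedAction): Promise<string> {
  if (STATE_CHANGING.has(action.kind)) {
    // Never let model output trigger these directly; queue them for a human.
    const approved = await requestHumanApproval(action);
    if (!approved) {
      return `rejected: ${action.kind}`;
    }
  }
  // Scoped, short-lived credentials belong here, never inside the model.
  return performAction(action);
}

// Placeholders standing in for an approval workflow and the real tooling.
async function requestHumanApproval(action: ProposedAction): Promise<boolean> {
  console.log("awaiting analyst approval for", action);
  return false; // default-deny until a human explicitly approves
}
async function performAction(action: ProposedAction): Promise<string> {
  return `executed: ${action.kind}`;
}
```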
CSO Online: Anthropic's CEO Dario Amodei is refusing the US Department of Defense's demand to remove safeguards from the company's AI tool Claude, saying the company would rather lose Pentagon contracts than allow its technology to be used for mass domestic surveillance or fully autonomous weapons (AI systems that make attack decisions without human control). The Pentagon has threatened to remove Anthropic from its supply chain and invoke the Defense Production Act if the company doesn't comply.
Anthropic refused a Pentagon demand to remove safety precautions (safeguards built into AI systems to prevent harmful outputs) from its Claude AI model and allow unrestricted military use, despite threats to cancel a $200 million contract and damage the company's reputation. The Department of Defense demanded compliance by Friday or would label Anthropic a 'supply chain risk,' a designation that could harm the company financially.
Fix: The issue has been fixed in n8n versions 2.6.2 and 1.123.18. Users should upgrade to one of these versions or later to remediate the vulnerability. If upgrading is not immediately possible, administrators should limit workflow creation and editing permissions to fully trusted users only, and restrict network access to the n8n webhook endpoint to known Zendesk IP ranges. The source notes these workarounds do not fully remediate the risk and should only be used as short-term mitigation measures.
GitHub Advisory Database