New tools, products, platforms, funding rounds, and company developments in AI security.
This article summarizes recent developments in AI, including controversies over weaponizing AI models like Claude, major user departures from ChatGPT, and large protests against AI in London. On a lighter note, AI agents (software programs that can act independently to accomplish tasks) are becoming popular online, with companies hiring their creators and developing quirky applications where AI agents appear to develop their own beliefs and philosophies.
OpenAI has shut down Sora, its AI video-generation app (software that creates realistic videos from text descriptions), less than two years after launch, to focus on other projects like robotics and autonomous AI agents. The closure ends both the consumer app and professional platform, though image-making tools in ChatGPT remain unaffected. Disney, which had recently licensed its intellectual property (creative works and characters owned by a company) to Sora in a landmark deal, said it will now explore partnerships with other AI platforms.
OpenAI abruptly shut down Sora, its AI video generator tool (software that creates realistic videos from text descriptions), just six months after launching it as a standalone app in 2024. The company announced the closure on social media, thanking users who created and shared videos with the platform.
OpenAI shut down its Sora app, a tool that let users generate short videos (create videos from text descriptions) and remix videos from other users, just six months after launching it despite reaching one million downloads. The company is cutting costs to justify its $730 billion valuation and focus on high-productivity business uses, particularly competing in the enterprise (business) market rather than consumer applications.
OpenAI has discontinued Sora, its video generation tool (AI that creates videos from text descriptions), along with the standalone app and developer API access that launched in late 2024. This shutdown affects a major licensing deal with Disney announced just months earlier, in which Disney had agreed to invest $1 billion in OpenAI.
Arm, a UK chip design company, is manufacturing its first CPU (central processing unit, the main processor in a computer) called the Arm AGI CPU, designed specifically for inference (running AI models in the cloud). Meta will be the first customer, using this chip in its data centers alongside processors from other companies like Nvidia and AMD to power AI tools.
OpenAI is launching a redesigned shopping feature in ChatGPT that lets users find and compare products by uploading images or describing items with budget and preference details, replacing its failed Instant Checkout feature that allowed direct purchases within the app. The company improved the underlying speed, relevance, and product coverage while allowing merchants to share product feeds directly with OpenAI rather than handling transactions themselves. Retailers like Target, Sephora, and Nordstrom now support this product discovery experience, and merchants can also build custom apps within ChatGPT for more control over their sales process.
Google and OpenAI are adding shopping features to their AI chatbots (Gemini and ChatGPT), allowing users to browse and buy products directly within the AI interface. Google partnered with Gap Inc to let Gemini purchase clothing from Gap, Old Navy, Banana Republic, and Athleta, while OpenAI updated ChatGPT's shopping interface.
Anthropic, maker of Claude AI, is asking a federal judge to temporarily block the Pentagon's ban on its technology, which the Department of Defense designated as a supply chain risk (a classification meaning the technology supposedly threatens U.S. national security). The company argues the ban is retaliation for demanding the Pentagon not use Claude for autonomous weapons or mass surveillance, and says it could lose billions in business without court intervention.
Gap is partnering with Google's Gemini to let shoppers buy Gap products directly within the AI platform, making it the first major fashion company to offer this type of integration. When Gemini recommends Gap products while answering customer questions like 'what should I wear to a job interview?', shoppers can complete their purchase through Google Pay without leaving the platform. Gap provides product details to Gemini in advance rather than letting it crawl the website, so Gap can control accuracy and customer data.
Two major prediction market platforms, Kalshi and Polymarket (websites where users bet on future events), announced new rules to ban insider trading (when people with special access to non-public information trade unfairly). The platforms added these restrictions after senators proposed legislation that could limit the prediction market industry.
Modern cybersecurity operations face attacks that happen in seconds, overwhelming traditional human-centered defenses. CrowdStrike introduced Charlotte AI AgentWorks and Charlotte Agentic SOAR, two interconnected systems that use AI agents (autonomous software that can reason and take actions) to work alongside security analysts, automating routine tasks while keeping humans in control through oversight and guardrails.
OpenAI has launched a Safety Bug Bounty program to identify AI abuse and safety risks in its products, complementing its existing Security Bug Bounty program. The new program focuses on issues like prompt injection (tricking an AI by hiding instructions in its input) that hijacks AI agents to perform harmful actions, unauthorized feature access, and proprietary information leaks, even if they don't qualify as traditional security vulnerabilities. Researchers can submit reports on reproducible safety issues that pose plausible and material harm to users.
Anthropic introduced auto mode for Claude Code, a new permissions system where Claude automatically decides whether to allow actions with safeguards in place. A separate classifier model (Claude Sonnet 4.6) reviews each action before it runs to block requests that go beyond the task scope, target untrusted infrastructure, or appear malicious, using customizable default filters that cover allowed operations like read-only requests and local file work, while blocking risky actions like force-pushing to git repositories or executing external code.
The Cloud Security Alliance has created a new nonprofit organization called the CSAI Foundation to help manage and secure autonomous AI agents (AI systems that can make decisions and take actions on their own). The foundation will use risk intelligence (methods to identify and understand potential dangers) and certification (official verification of safety standards) to govern these AI ecosystems.
Anthropic, an AI company, is suing the US Department of Defense in federal court to challenge a ban on government use of its Claude AI chatbot after the company refused to allow the technology to be used in autonomous weapons systems (machines that can make lethal decisions without human control) and mass surveillance. The Defense Secretary declared Anthropic a supply chain risk (a company considered unsafe to do business with), which the company argues will cause massive financial and business harm.
Baltimore's mayor and city council sued Elon Musk's xAI company, claiming that its Grok chatbot (an AI assistant designed for general conversation) violated consumer protection laws by creating nonconsensual sexualized images. The lawsuit argues that xAI deceptively marketed Grok and its platform X without disclosing the risks and potential harms users could face.
Agentic AI systems (AI that can independently take actions rather than just make suggestions) are becoming more powerful by gaining direct access to computer systems, creating new governance challenges. The article uses OpenClaw as a case study to illustrate why better oversight and control mechanisms are needed as these autonomous systems become more capable and integrated into real-world operations.
A subscriber-only eBook discusses whether society is adequately prepared for the growing autonomy being given to AI agents, featuring expert perspectives on potential risks. The content suggests that continuing on the current development path without proper safeguards could pose serious existential concerns.
Fix: OpenAI shifted its approach by moving away from direct transaction handling through Instant Checkout and instead focusing on product discovery. Merchants can now share their product feeds and promotions with OpenAI so their products are 'fully represented' within ChatGPT, while using their own checkout experiences. Additionally, OpenAI allows merchants to develop custom apps within ChatGPT for deeper integrations, giving them more control of the customer experience and transaction process.
CNBC TechnologyAI agents (software systems that can reason, act, and interact with other systems) need to align four layers of intent: what the user wants to accomplish, what the developer designed the agent to do, what role it plays in an organization, and what organizational policies it must follow. When these intent layers are properly aligned, agents deliver useful results while staying within security and compliance boundaries, preventing misuse and building trust.
Fix: Kalshi implemented specific bans: political candidates cannot trade on their own campaigns, and people involved in college or professional sports cannot trade contracts related to sports they play or work for. Both platforms also added new surveillance tools to monitor trading activity.
The Guardian Technology