New tools, products, platforms, funding rounds, and company developments in AI security.
This talk covers how software developers are adopting AI coding agents, from asking simple questions in ChatGPT to having agents write entire programs. The speaker emphasizes that trusting output from AI models (like Claude Opus) requires pairing it with test-driven development (TDD, a practice where you write tests before the actual code) and manual testing, since automated tests alone don't guarantee the software will actually run correctly.
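As a rough illustration of the TDD loop the talk recommends, here is a minimal Python sketch; the slugify function, its behaviour, and the tests are hypothetical examples, not taken from the talk.

```python
# Step 1: write the tests first, describing the behaviour you expect.
# slugify() and its expected behaviour are hypothetical, for illustration only.
import unittest


def slugify(title: str) -> str:
    # Step 2: write (or accept from an AI coding agent) just enough code to pass.
    return "-".join(title.lower().split())


class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("  AI   coding  agents "), "ai-coding-agents")


if __name__ == "__main__":
    # Step 3: run the tests. Manual testing of the real program still follows,
    # since passing tests alone don't prove the software runs correctly.
    unittest.main()
```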
Major AI infrastructure projects like OpenAI's Stargate datacentre (a massive computing facility where AI systems run) are facing financial and timeline challenges, with OpenAI backing away from parts of a planned $500 billion expansion in Texas. The article suggests that massive investments in datacentres and AI chips represent a significant economic gamble, with the UK potentially at particular risk if this 'AI bubble' deflates.
Microsoft is planning to release Gaming Copilot, an AI assistant that helps players when they get stuck in games, on current-generation Xbox consoles later this year. The assistant, which responds to voice commands, has already been tested in beta versions on Xbox's mobile app, Windows 11, and Xbox Ally handhelds, and Microsoft plans to expand it to additional gaming services.
Nvidia is shifting focus toward CPUs (central processing units, the main general-purpose chips in computers) alongside its famous GPUs (graphics processing units) because agentic AI (AI systems that autonomously complete tasks by orchestrating multiple agents working together) requires significant general computing power to move data and coordinate workflows. The company is unveiling new CPU details at its GTC conference, with demand from major partners like Meta driving a predicted doubling of the CPU market from $27 billion in 2025 to $60 billion by 2030.
Anthropic has made 1M context (the ability to process 1 million tokens, which are small units of text that AI models break language into) generally available for its Opus 4.6 and Sonnet 4.6 models at standard pricing, with no additional charge for using the full window. This differs from competitors like OpenAI and Gemini, which charge premium rates when token usage exceeds certain thresholds (200,000 tokens for Gemini 3.1 Pro and 272,000 for GPT-5.4).
This content consists of letters to an editor about family quizzes and avoiding AI chatbots. One letter mentions that submitting gibberish to chatbots can circumvent them and quickly connect users to human support staff.
Anthropic, an AI company, is in a legal dispute with the Pentagon over restrictions on how its AI models can be used, specifically trying to prevent deployment in domestic mass surveillance or fully autonomous lethal weapons (AI systems that make kill decisions without human control). The conflict highlights a shift in the tech industry's approach to military AI, with companies like Google previously refusing military partnerships, but now facing pressure to work with the Pentagon under the Trump administration.
Researchers discovered Slopoly, a backdoor malware (a hidden entry point into a system) likely created using an LLM (large language model, an AI trained on text data), that was deployed in ransomware attacks by the financially motivated group Hive0163. The malware uses a command-and-control framework (a central server that sends instructions to compromised systems) to steal data and maintain access, and its AI-generated code shows unusual features like detailed comments and clear variable names that are rare in human-written malware, suggesting that attackers are using AI tools to speed up custom malware creation.
Facebook Marketplace is introducing AI-powered features to help sellers work more efficiently, including an auto-reply tool that uses Meta AI to automatically respond to common questions about whether items are still available. Sellers can toggle this feature on when creating a listing, and the AI will draft editable responses that sellers can customize before sending.
Rajesh Jha, a top Microsoft executive who oversaw Office and has worked at the company for over 35 years, is retiring in July. His departure is significant because Microsoft is trying to integrate AI models from companies like OpenAI and Anthropic into products like 365 Copilot (an AI assistant add-on for Microsoft 365 business subscriptions), and his leadership will be split among four other executives reporting directly to CEO Satya Nadella.
Webflow, a website-building platform, has acquired Vidoso, an AI content-generation startup that uses large language models (AI systems trained on text data to generate new text) to help companies create marketing materials like images, videos, and blog posts. The acquisition aims to help Webflow expand its marketing capabilities and address a key problem: frontier models (the most capable general-purpose AI models, trained on broad internet data) create generic content without understanding a company's specific brand rules and approval workflows.
Google and Samsung announced that Gemini, their AI assistant, can now automate tasks by controlling apps on your behalf through a virtual interface, starting with food delivery and rideshare services. Users can give simple text prompts and Gemini will interact with these apps to complete actions like ordering food or booking rides, which is a capability AI assistants have long promised but rarely delivered.
OpenClaw, an open-source AI agent, has critical security flaws that could let attackers trick it into leaking sensitive data through prompt injection (embedding malicious instructions in web content to manipulate the AI). The platform's weak default security settings and high system privileges create additional risks, including accidental data deletion, malicious code installation through skill repositories, and exploitation of known vulnerabilities that could compromise entire business systems.
Fix: To counter these risks, users and organizations are advised to strengthen network controls, avoid exposing OpenClaw's default management port to the internet, isolate the service in a container, avoid storing credentials in plaintext, download skills only from trusted channels, disable automatic skill updates, and keep the agent up to date.
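One of these recommendations, keeping the management port off the internet, can be illustrated generically. The Python sketch below binds a hypothetical admin endpoint to the loopback interface only; the port number and handler are illustrative assumptions, not OpenClaw's actual configuration.

```python
# Generic illustration: bind a local management/admin service to the loopback
# interface so it is never reachable from other machines or the internet.
# The port and handler are hypothetical, not taken from OpenClaw.
from http.server import HTTPServer, BaseHTTPRequestHandler


class AdminHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"admin interface: local access only\n")


if __name__ == "__main__":
    # 127.0.0.1 restricts the listener to this machine; binding to 0.0.0.0
    # would expose the port on every network interface.
    HTTPServer(("127.0.0.1", 8080), AdminHandler).serve_forever()
```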
ServiceNow's CEO warns that AI agents (software programs that can perform tasks independently) automating work could push college graduate unemployment into the mid-30s within a few years, making it harder for entry-level workers to stand out. Multiple major tech companies are already using AI to cut jobs and reduce hiring costs, affecting both technical roles like coding and white-collar positions across industries.
The US Department of War designated Anthropic as a 'supply chain risk' (a classification that prevents a company from being used in government contracts) after the company refused to remove safety restrictions on its AI model Claude, specifically rejecting military demands to enable fully autonomous weapons and domestic mass surveillance. Anthropic is challenging this designation in court, and legal experts question whether the Department of War has the authority to impose such restrictions outside of actual contract disputes.
The US military is considering using generative AI systems (AI models that can create text and analyze data) to help rank military targets and recommend which ones to strike, with human officials making final decisions. The Pentagon is also favoring OpenAI's ChatGPT and xAI's Grok for these high-stakes military applications, while facing criticism from officials who claim that Anthropic's Claude would negatively affect the defense supply chain.
Major technology companies are offering extremely high salaries to attract top AI researchers, causing many academics to leave universities for industry jobs. This "AI brain drain" is particularly affecting young, highly-cited researchers and threatens academia's ability to conduct research driven by curiosity rather than profit, as well as its role in providing independent ethical review. However, research shows that scientific breakthroughs actually come from large collaborative teams rather than individual geniuses, making the tech industry's focus on poaching individual top talent misguided.
Onyx Security, a new startup, has received $40 million in funding to build a control plane (a central layer for monitoring and managing systems) that helps organizations monitor and manage autonomous AI agents (AI systems that can perform tasks independently without constant human direction) and speed up their adoption.
The US military may use generative AI chatbots (AI systems trained on large amounts of text data to have conversations) to rank and prioritize target lists for human review, according to a Pentagon official. These systems, which could include OpenAI's ChatGPT or xAI's Grok, would work alongside existing military AI tools like Maven (a system using computer vision to analyze drone footage) to speed up targeting decisions. However, while generative AI outputs are easy to access, they are harder to verify than traditional military AI systems, raising concerns as the Pentagon faces scrutiny over recent military strikes.
OpenAI CEO Sam Altman met with lawmakers including Senator Mark Kelly to discuss the company's defense contract with the Department of Defense, particularly concerns about how AI systems could be used in warfare and surveillance. The meeting highlighted disagreements between AI companies and the military over safeguards, with Kelly stating that Congress plans to draft legislation creating guardrails (safety boundaries) around government AI contracts, since the technology is advancing faster than lawmakers can regulate it.