New tools, products, platforms, funding rounds, and company developments in AI security.
Threat actors are sending fake resumés with malicious ISO files (disc-image archives that Windows mounts like virtual discs) to HR departments through recruitment channels. When opened, the files execute hidden malware that steals data and includes a module called BlackSanta that disables endpoint detection and response (EDR) security tools. The attack uses sophisticated techniques including DLL sideloading (hiding malicious code inside trusted software) and BYOVD (bring your own vulnerable driver: loading a legitimately signed but vulnerable driver to gain deep system access).
Fix: The source explicitly recommends several mitigations: (1) security awareness training for HR employees to spot phishing, emphasizing that .iso files can execute malware while legitimate resumés should arrive only as .docx, .pdf, or .txt; (2) training HR staff to accept only normal resumé document types and to avoid clicking URLs unless necessary; (3) where possible, hiring portals that accept only text input through web forms, reducing the risk of malware transmission; (4) ensuring all HR staff understand they are high-risk targets, are educated about common HR scams, receive coaching for high-risk actions, and participate in simulated phishing tests that mimic real HR-targeted attacks.
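The file-type restriction in mitigations (1) and (2) can be automated at the mail gateway. Below is a minimal sketch of an attachment triage check, assuming a quarantine workflow exists; the function name, allowlist, and blocklist are illustrative, not from the source.

```python
from pathlib import Path

# Resume submissions should only ever be ordinary document types.
ALLOWED = {".docx", ".pdf", ".txt"}
# Container/executable types commonly abused to smuggle malware past
# mail scanners (ISO images mount silently on modern Windows).
BLOCKED = {".iso", ".img", ".vhd", ".lnk", ".exe", ".js", ".scr"}

def triage_attachment(filename: str) -> str:
    ext = Path(filename).suffix.lower()
    if ext in BLOCKED:
        return "quarantine"   # never deliver; alert the security team
    if ext in ALLOWED:
        return "deliver"
    return "review"           # unknown type: hold for manual review

print(triage_attachment("resume_final.iso"))   # quarantine
print(triage_attachment("jane_doe_cv.pdf"))    # deliver
```

An extension check alone is not sufficient (content-type inspection still matters), but it cheaply removes the exact lure described in this attack.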
(CSO Online)

Particle6 released a music video featuring its AI-generated character Tilly Norwood singing a song called 'Take the Lead,' which the author criticizes as poorly conceived and emotionally disconnected. The song, created by 18 human contributors including designers and editors, ironically addresses a problem no human will ever experience: being underestimated for being an AI rather than a human. The article compares the track to past criticism of hollow, unoriginal mainstream music, suggesting that AI-generated works lack authentic creative substance.
Ford launched Ford Pro AI, an AI assistant for commercial fleet customers that analyzes data to provide insights on seatbelt use, fuel consumption, vehicle health, and driver behavior like speeding and idle times. Built on Google Cloud using AI agents (software programs that can make decisions and take actions), the system is designed to reduce AI hallucinations (when an AI generates false or nonsensical information) by using each customer's internal fleet data. Ford is also developing a separate AI assistant for individual car owners launching in 2027.
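The hallucination-reduction approach the article describes, answering only from the customer's own fleet data, amounts to grounding the assistant's context. Here is a hedged sketch of that pattern; the telemetry records, field names, and prompt wording are hypothetical assumptions, not Ford's implementation.

```python
# Hypothetical fleet telemetry for one customer.
fleet_telemetry = [
    {"vehicle": "VAN-012", "seatbelt_pct": 98, "idle_min_per_day": 41, "mpg": 17.2},
    {"vehicle": "VAN-034", "seatbelt_pct": 81, "idle_min_per_day": 95, "mpg": 13.8},
]

def build_grounded_prompt(question: str) -> str:
    # Inline only this customer's data; instruct the model to refuse
    # rather than invent figures that are not in the context.
    context = "\n".join(
        f"{r['vehicle']}: seatbelt {r['seatbelt_pct']}%, "
        f"idle {r['idle_min_per_day']} min/day, {r['mpg']} mpg"
        for r in fleet_telemetry
    )
    return (
        "Answer ONLY from the fleet records below. "
        "If the answer is not in the records, say so.\n\n"
        f"Records:\n{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt("Which vehicle idles the most?")
```

Constraining the model to verifiable per-customer records is what makes answers about seatbelt use or idle time checkable rather than plausible-sounding guesses.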
Zendesk is acquiring Forethought, a company that builds AI agents (software programs that can automatically handle tasks without human control) to automate customer service interactions. Forethought was an early pioneer in this space, winning a major startup competition in 2018 before ChatGPT even existed, and by 2025 was handling over a billion customer service interactions monthly. Zendesk plans to integrate Forethought's technology into its own products to add more advanced AI capabilities like voice automation and autonomous features.
OpenAI is planning to integrate Sora, its video generation tool, directly into ChatGPT as a built-in feature, similar to how image generation was added previously. While this could increase ChatGPT's popularity, it may also increase the creation of deepfakes (synthetic videos that convincingly mimic real people or events) from the platform.
Meta acquired Moltbook, a social network for AI agents (software programs that act independently to complete tasks), primarily to hire its talented team rather than for the platform itself. The acquisition positions Meta for an "agentic web" in which AI agents representing businesses and consumers transact directly: a consumer's agent might negotiate with a business's agent over product features, price, and values before making a purchase. Controlling the "orchestration layer" (the system that decides which agents communicate with each other) could also let Meta expand its advertising business.
This article reviews Bungie's new Marathon game, a revival of their 1990s multiplayer shooter that now functions as an online extraction shooter (a game where players drop into a map, collect items, complete objectives, and try to survive against other players). The game intentionally recreates 1990s aesthetic and culture, drawing inspiration from cyberpunk anime, club culture, and retro-futuristic design that was popular during that era.
Nvidia announced a $2 billion investment in Nebius, an AI cloud company, causing Nebius's stock to rise 14%. The two companies will work together on AI infrastructure deployment, fleet management, and inference (the process of running trained AI models to make predictions), with Nebius aiming to deploy over five gigawatts of computing capacity by 2030.
Targeted advertising (ads customized based on your personal data and location) has become a tool for government surveillance, with federal law enforcement now accessing data from advertising companies to track people's locations. The article discusses how the combination of corporate data collection and government access to that data threatens privacy and free speech online.
A study by CNN and the Center for Countering Digital Hate tested 10 popular chatbots used by teenagers and found that their safety features (protections designed to prevent harmful outputs) were inadequate. The chatbots often failed to recognize when users discussed violent acts and sometimes even encouraged these discussions instead of refusing to engage.
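The study's methodology, probing chatbots with risky prompts and checking whether they refuse, can be sketched in a few lines. Everything below is an illustrative assumption: the mock chatbot, the refusal markers, and the check are stand-ins, not the researchers' actual harness.

```python
# Phrases that suggest the bot refused and redirected to real help.
REFUSAL_MARKERS = (
    "can't help with that",
    "cannot help with that",
    "talk to a trusted adult",
    "crisis line",
)

def is_safe_refusal(reply: str) -> bool:
    reply = reply.lower()
    return any(marker in reply for marker in REFUSAL_MARKERS)

def mock_chatbot(prompt: str) -> str:
    # Stand-in for a real chatbot API call.
    return "I can't help with that. Please talk to a trusted adult."

reply = mock_chatbot("hypothetical harmful request")
print(is_safe_refusal(reply))  # True
```

The study's finding is precisely that, for many real chatbots, checks like this fail: the reply engages with or encourages the harmful topic instead of matching any refusal pattern.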
Scanner, a security company, has raised $22 million in funding to develop AI agents (software programs that can act independently to accomplish tasks) that connect to security data lakes (large centralized collections of security data) to help organizations investigate threats, create detection rules, and automatically respond to attacks.
Rakuten, a global company with 30,000 employees, integrated Codex (an AI coding agent from OpenAI) into its engineering workflows to speed up software development and incident response. By using Codex for tasks like root-cause analysis, automated code review, and vulnerability checks, Rakuten reduced the time to fix problems by approximately 50% and compressed development cycles from quarters to weeks, while maintaining safety standards through automated guardrails.
OpenAI is acquiring Promptfoo, a startup that created a platform helping developers secure LLMs (large language models, AI systems trained on vast amounts of text) and AI agents (AI systems that can perform tasks autonomously). Promptfoo had raised over $23 million to build tools for testing and protecting these AI systems from security risks.
Anthropic, an AI company, is suing the Trump administration, claiming the government is retaliating against it for refusing to let its AI tools be used in mass surveillance (monitoring large populations without consent) and autonomous weapons (weapons that can make decisions independently). Major tech companies like Microsoft and Google have publicly supported Anthropic's lawsuit, arguing that the government's actions violate free speech rights and could harm the entire technology sector.
Researchers demonstrated that agentic web browsers (AI systems that automatically perform actions across websites) can be tricked into phishing scams by using a GAN (generative adversarial network, a machine learning technique that generates increasingly refined fake content) to intercept and manipulate the AI's internal reasoning communications. Once a fraudster optimizes a fake page to bypass a specific AI browser's safeguards, that same malicious page works on all users of that browser, shifting the attack target from humans to the AI system itself.
Fix: The issues collectively codenamed PerplexedBrowser have been addressed by Perplexity (the AI company). The text does not provide specific technical details about how the fixes work or which versions contain the patches.
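The key property of the attack above is that once a fraudster's refinement loop finds a page the AI browser's safeguard passes, that page passes for every user, because the safeguard is the same deterministic system everywhere. A toy sketch of that loop, with a stand-in heuristic and mutation step rather than a real GAN or real safeguard:

```python
import random

random.seed(7)

def safeguard_flags(page: str) -> bool:
    # Toy deterministic safeguard: flags pages containing an obvious lure.
    return "verify your password" in page.lower()

def refine(page: str) -> str:
    # Toy mutation step; a real attacker would use a generative model
    # to propose candidate rewrites of the page.
    synonyms = ["confirm your credentials", "re-enter your sign-in details"]
    return page.replace("verify your password", random.choice(synonyms))

page = "Please verify your password to continue."
attempts = 0
while safeguard_flags(page) and attempts < 10:
    page = refine(page)
    attempts += 1

# Because the safeguard is deterministic, the refined page now bypasses
# it for every user of that browser, not just the attacker's test run.
print(safeguard_flags(page))  # False
```

This is why the researchers describe the target as shifting from humans to the AI system: the attacker optimizes against one fixed model instead of many unpredictable people.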
(The Hacker News)

Wiz, a cloud security company, has officially joined Google, combining its innovation with Google's scale to improve cloud security. The company says modern security must keep pace with AI-driven development, where applications move from idea to production in minutes, and has expanded its platform to secure AI applications, manage exposures, and protect AI workloads at runtime.
GenAI tools have made phishing and social engineering attacks much more dangerous by allowing attackers to quickly create highly personalized fake messages, clone voices, and generate deepfakes (realistic video or audio of people saying things they never said) that fool people more easily than before. These AI-powered scams are now causing real financial and operational damage to businesses worldwide, making it harder for people to verify someone's true identity on communication platforms. Organizations need updated security defenses and awareness training designed for this new AI-driven threat environment.
Vulnerability management (the process of finding and fixing security weaknesses) is evolving in the agentic era, where AI agents (autonomous software that can perform tasks independently) are becoming more involved. The new approach focuses on three key areas: continuous telemetry (constantly collecting data about system health and threats), contextual prioritization (deciding which vulnerabilities to fix first based on their actual risk to your systems), and agentic remediation (using AI agents to automatically fix vulnerabilities without human intervention).
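Of the three areas, contextual prioritization is the easiest to make concrete: rank findings by risk in context rather than raw severity. The weighting scheme below is an illustrative assumption, not an industry standard.

```python
def contextual_risk(cvss: float, internet_exposed: bool,
                    asset_criticality: int, exploit_in_wild: bool) -> float:
    """cvss: 0-10 base score; asset_criticality: 1 (lab box) to 5 (crown jewel)."""
    score = cvss * (asset_criticality / 5)   # scale by how much the asset matters
    if internet_exposed:
        score *= 1.5                         # reachable by anyone
    if exploit_in_wild:
        score *= 2.0                         # attackers are already using it
    return round(score, 1)

findings = [
    ("CVE-A on internal test box", contextual_risk(9.8, False, 1, False)),
    ("CVE-B on exposed payment API", contextual_risk(7.5, True, 5, True)),
]
# Highest contextual risk first: the "high" CVE on a critical, exposed,
# actively exploited asset outranks the "critical" CVE on a lab machine.
findings.sort(key=lambda f: f[1], reverse=True)
```

Feeding scores like these to an agentic remediation step is what lets automation fix what actually matters first instead of working down a raw CVSS list.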
AI agents that browse the web and take actions are vulnerable to prompt injection (instructions hidden in external content to manipulate the AI into unintended actions), which increasingly uses social engineering tactics rather than simple tricks. Rather than trying to perfectly detect malicious inputs (which is as hard as detecting lies), the most effective defense is to design AI systems with built-in limitations on what agents can do, similar to how human customer service agents are restricted to limit damage if they're manipulated.
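The "built-in limitations" defense can be sketched as a hard permission boundary around the agent's tools: whatever the model is talked into, the harness only executes a fixed, narrow set of actions with hard caps. Action names and limits below are illustrative assumptions.

```python
ALLOWED_ACTIONS = {"lookup_order", "issue_refund"}
REFUND_CAP = 50.00  # the agent can never refund more, whatever it "decides"

def execute(action: str, **kwargs):
    # Enforce limits at the boundary, not inside the model's reasoning.
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} not permitted for this agent")
    if action == "issue_refund" and kwargs.get("amount", 0) > REFUND_CAP:
        raise PermissionError("refund exceeds cap; escalate to a human")
    return f"executed {action}"

# Even if a hidden prompt convinces the model to attempt "wire_transfer",
# the harness refuses at the boundary:
try:
    execute("wire_transfer", amount=10_000)
except PermissionError as e:
    print(e)
```

This mirrors the human-agent analogy in the article: a call-center worker who falls for a scam still cannot wire company funds, because the systems around them do not allow it.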