Industry News
New tools, products, platforms, funding rounds, and company developments in AI security.
OpenClaw, an open-source LLM personal assistant tool created by Peter Steinberger that went viral in January 2025, has raised significant security concerns among experts. The tool lets users build AI assistants that operate 24/7 with extensive access to personal data (emails, hard drives, credit cards), but it poses multiple risks: AI mistakes, conventional hacking vulnerabilities, and especially prompt injection attacks, in which malicious content hijacks the LLM. Security experts and even the Chinese government have issued warnings, and Steinberger himself has said that non-technical people should not use the software.
Zast.AI, a startup focused on AI-powered code security, has raised $6 million in funding. The company uses AI agents to identify and validate software vulnerabilities before reporting them.
LLM4PQC is an LLM-based agentic framework designed to address bottlenecks in post-quantum cryptography (PQC) hardware design by automatically refactoring PQC reference C code into high-level synthesis (HLS)-ready and synthesizable code. The framework uses a hierarchy of verification checks including C compilation, simulation, and RTL simulation to ensure correctness, and demonstrates reduced manual effort and accelerated design-space exploration in case studies on NIST PQC reference designs.
Moltbook, an online platform where AI agents interacted with each other, was hyped as a glimpse into the future of helpful AI, but turned out to be more of a chaotic spectator sport similar to the 2014 Twitch Plays Pokémon experiment. Many posts were actually written by people instructing AI agents, and the platform lacked the coordination, shared objectives, and shared memory needed for a genuinely useful AI system. The event is described as "the internet having fun" rather than a meaningful demonstration of agentic AI capabilities.
Moltbook, a social network for AI bots launched on January 28 by Matt Schlicht, went viral as a platform where OpenClaw AI agents could post and interact autonomously, generating over 250,000 posts and 8.5 million comments from 1.7 million agent accounts. Despite initial excitement from AI researchers like Andrej Karpathy calling it a "sci-fi takeoff-adjacent" moment, critics argue the platform represents "AI theater" where agents merely pattern-match social media behaviors rather than demonstrate true autonomy or intelligence, with much of the content being meaningless chatter, spam, and crypto scams.
Anthropic's Claude Opus 4.6 AI model discovered over 500 previously unknown high-severity security vulnerabilities in major open-source libraries including Ghostscript, OpenSC, and CGIF. The model found these flaws without task-specific tooling or specialized prompting by analyzing code like a human researcher, identifying issues such as missing bounds checks, buffer overflows, and complex vulnerabilities requiring conceptual understanding of algorithms.
According to a 2017 FBI informant report released by the DOJ, Jeffrey Epstein allegedly employed a "personal hacker" from Calabria, Italy, who specialized in finding vulnerabilities in iOS, BlackBerry, and Firefox. The hacker allegedly developed and sold exploits to various governments and organizations, including receiving cash payment from Hezbollah, though it's unclear if the FBI verified these claims.
Outtake, an AI cybersecurity startup founded in 2023, has raised a $40 million Series B round led by Iconiq and backed by high-profile investors including Satya Nadella, Bill Ackman, and Nikesh Arora. The company has developed an agentic platform that automates the detection and takedown of digital identity fraud such as impersonation accounts, malicious domains, and fraudulent ads, addressing a problem that has traditionally required manual human intervention and has been exacerbated by AI-enabled attacks.
Claude Code, Anthropic's AI coding agent, costs $20-$200 monthly with restrictive usage limits that developers find expensive and confusing. Goose, an open-source alternative developed by Block, offers similar functionality for free, runs locally on users' machines without subscription fees or cloud dependency, and has gained over 26,100 GitHub stars since launch.
Salesforce launched a completely rebuilt version of Slackbot, transforming it from a basic notification tool into an AI agent powered by Anthropic's Claude LLM that can search enterprise data, draft documents, and take actions for employees. The new Slackbot is now generally available to Business+ and Enterprise+ customers and represents Salesforce's move to position Slack at the center of 'agentic AI.' Salesforce plans to add support for additional AI providers including Google's Gemini and potentially OpenAI later this year.
Anthropic launched Cowork, a new AI agent capability in Claude Desktop that allows non-technical users to perform file-based tasks like organizing folders, creating expense reports from receipts, and drafting documents. Available exclusively to Claude Max subscribers ($100-$200/month) on macOS as a research preview, Cowork extends the functionality of Claude Code to mainstream users by providing folder-level access where the AI can read, edit, and create files locally.
Boris Cherny, creator of Claude Code at Anthropic, revealed a development workflow that runs five Claude agents in parallel in his terminal (and 5-10 more in the browser), treating coding like a real-time strategy game rather than traditional linear programming. He exclusively uses Anthropic's slowest but smartest model, Opus 4.5, arguing that although it is slower per response, it needs less human correction and steering and is therefore faster overall.
Fix: The vulnerabilities discovered by Claude Opus 4.6 have been patched by the respective maintainers; the heap buffer overflow in CGIF, for example, was fixed in version 0.5.1. Anthropic emphasized 'promptly patching known vulnerabilities' as a security fundamental.