The security intelligence platform for AI teams
AI security threats move fast and get buried under hype and noise. This platform, built by an Information Systems Security researcher, helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.
Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that nearly 2,000 TypeScript files (over 512,000 lines of code) from Claude Code were accidentally published to the npm registry, the main JavaScript package repository, revealing internal features and allowing attackers to study how to bypass safeguards. Users who downloaded the affected package during a specific window on March 31, 2026 may also have received malware-infected software.
Google Addresses Vertex AI Security Issues After Weaponization Demo: Palo Alto Networks researchers demonstrated how to weaponize AI agents (autonomous programs that perform tasks with minimal human input) on Google Cloud's Vertex AI platform, prompting Google to begin remediating the disclosed vulnerabilities.
Meta Smartglasses Raise Privacy Concerns with Covert Recording: Meta's smartglasses feature a built-in camera and AI assistant that can describe surroundings and answer questions, but they raise significant privacy issues because they can record video of bystanders without their knowledge or consent.
Researchers have developed a method to hide secret data inside large language models (AI systems trained on massive amounts of text) by encoding information into the model's parameters during training. The hidden data doesn't interfere with the model's normal functions like text classification or generation, but authorized users with a secret key can extract the concealed information, enabling covert communication. The method leverages transformers (the neural network architecture behind modern AI language models) and their self-attention mechanisms (components that help the model focus on relevant parts of input) to achieve high capacity for hidden data while remaining undetectable.
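To make the core idea concrete, here is a minimal toy sketch of parameter-space steganography: a secret key seeds a pseudorandom selection of weight positions, and each payload bit is written into the parity of a finely quantized weight, perturbing values too little to affect model behavior. This is an illustrative simplification, not the researchers' actual training-time method; the `STEP` size, function names, and parity scheme are assumptions for the example.

```python
import numpy as np

STEP = 1e-6  # assumed quantization step: small enough not to disturb model behavior


def embed(weights, bits, key):
    """Hide bits in pseudorandomly chosen weights (selection seeded by the key)."""
    w = weights.copy().ravel()
    rng = np.random.default_rng(key)
    idx = rng.choice(w.size, size=len(bits), replace=False)
    for i, bit in zip(idx, bits):
        q = int(round(w[i] / STEP))
        if (q & 1) != bit:   # force the quantized weight's parity to match the bit
            q += 1
        w[i] = q * STEP      # perturbation is at most 1.5 * STEP per weight
    return w.reshape(weights.shape)


def extract(weights, n_bits, key):
    """Recover bits by regenerating the same key-derived index sequence."""
    w = weights.ravel()
    rng = np.random.default_rng(key)
    idx = rng.choice(w.size, size=n_bits, replace=False)
    return [int(round(w[i] / STEP)) & 1 for i in idx]
```

Without the key, an observer sees only ordinary-looking weights; with it, the same index sequence can be regenerated and the payload read back out, which mirrors the covert-channel property described in the research.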