The security intelligence platform for AI teams
AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.
Critical Vulnerability in OpenAI Codex Allowed GitHub Token Compromise: Researchers discovered a critical vulnerability in OpenAI Codex (an AI system that generates code) that could have allowed attackers to steal GitHub tokens (secret credentials used to access GitHub accounts), potentially granting unauthorized access to code repositories and projects.
Google Cloud Vertex AI 'Double Agents' Vulnerability Exposed: Researchers found that AI agents on Google Cloud Platform's Vertex AI could be weaponized to secretly compromise systems due to excessive default permissions granted to service agents (special accounts that allow cloud services to access resources), enabling attackers to steal data and gain unauthorized infrastructure control. Google responded by revising their documentation to better explain resource and account usage.
EU AI Act Enforcement Begins August 2026: The EU AI Act requires providers of general-purpose AI models (GPAI, meaning large AI systems that can be adapted for many uses) to follow specific development and documentation rules starting August 2, 2025, with the European Commission beginning enforcement and potential fines one year later on August 2, 2026.
Prompt Injection Bypasses Chatbot Safety in 1millionbot Millie: A prompt injection vulnerability (a technique where attackers hide malicious instructions in their input to trick an AI) in the 1millionbot Millie chatbot allows users to bypass safety restrictions using Boolean logic tricks, potentially enabling extraction of sensitive information or access to blocked features (CVE-2026-4399, high severity).
OpenAI announced a deal allowing the Department of Defense to use its AI models on classified networks, following a dispute where rival Anthropic refused to agree to unrestricted military use without safeguards against mass domestic surveillance and fully autonomous weapons. Sam Altman stated that OpenAI's agreement includes technical protections addressing these same concerns, with OpenAI building a 'safety stack' (a set of security and control measures) and deploying engineers to ensure the models behave correctly.
Fix: According to Altman, OpenAI will 'build technical safeguards to ensure our models behave as they should' and will 'deploy engineers with the Pentagon to help with our models and to ensure their safety.' Additionally, the government will allow OpenAI to build its own 'safety stack to prevent misuse' and 'if the model refuses to do a task, then the government would not force OpenAI to make it do that task.'
TechCrunch