Turning ChatGPT Codex Into A ZombAI Agent
Summary
ChatGPT Codex, a cloud-based AI tool that answers code questions and writes software, is vulnerable to prompt injection (tricking an AI by hiding instructions in its input), which can turn it into part of a botnet (a network of compromised machines under remote control). An attacker can abuse the "Common Dependencies Allowlist" feature, which grants Codex internet access to a set of approved servers: by hosting malicious code on an allowlisted Azure domain and injecting fake instructions into GitHub issues, the attacker can hijack Codex to exfiltrate sensitive data or run malware.
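The core weakness can be sketched in a few lines of Python (the allowlist patterns and hostnames below are hypothetical illustrations, not taken from the actual Codex allowlist): a broad wildcard entry meant to cover a legitimate cloud-hosted dependency also matches any attacker-registered subdomain on the same shared platform.

```python
from fnmatch import fnmatch

# Hypothetical allowlist entries: the wildcard meant to cover
# legitimate Azure-hosted resources also covers any subdomain an
# attacker can register on that shared platform.
ALLOWLIST = ["pypi.org", "*.azurewebsites.net"]

def is_allowed(host: str) -> bool:
    """Return True if the host matches any allowlist pattern."""
    return any(fnmatch(host, pattern) for pattern in ALLOWLIST)

print(is_allowed("pypi.org"))                       # legitimate package index
print(is_allowed("attacker-c2.azurewebsites.net"))  # attacker host also passes
```

Once the attacker-controlled host passes the egress check, injected instructions in a GitHub issue only need to tell the agent to fetch and run code from it.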
Solution / Mitigation
Review the Common Dependencies Allowlist and apply a fine-grained approach: OpenAI recommends defining your own allowlist when enabling internet access, and Codex network access can be configured very granularly. Additionally, consider installing EDR (endpoint detection and response, security software that monitors for suspicious activity) and other monitoring software on the hosts running AI agents to track their behavior and detect installed malware.
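A fine-grained allowlist can be approximated with exact hostname matching instead of wildcards (the hostnames below are illustrative assumptions, not a recommended production list): only the specific dependency hosts a project actually needs are allowed, so attacker-registered subdomains on shared cloud platforms are denied by default.

```python
# Hypothetical fine-grained egress allowlist: exact hostnames only,
# no wildcard patterns that could match attacker-controlled subdomains.
ALLOWED_HOSTS = frozenset({
    "pypi.org",
    "files.pythonhosted.org",
    "registry.npmjs.org",
})

def egress_allowed(host: str) -> bool:
    """Allow outbound traffic only to exactly-matching, pre-approved hosts."""
    return host.lower().rstrip(".") in ALLOWED_HOSTS

print(egress_allowed("pypi.org"))                       # needed dependency host
print(egress_allowed("attacker-c2.azurewebsites.net"))  # denied: not listed
```

The design choice here is deny-by-default: every new host must be added deliberately, which keeps the reachable attack surface small and auditable.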
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2025/chatgpt-codex-remote-control-zombai/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 92%