Introducing the OpenAI Safety Bug Bounty program
Summary
OpenAI has launched a Safety Bug Bounty program to identify AI abuse and safety risks in its products, complementing its existing Security Bug Bounty program. The new program focuses on issues like prompt injection (tricking an AI by hiding instructions in its input) that hijacks AI agents to perform harmful actions, unauthorized feature access, and proprietary information leaks, even if they don't qualify as traditional security vulnerabilities. Researchers can submit reports on reproducible safety issues that pose plausible and material harm to users.
Classification
Affected Vendors
Related Issues
Original source: https://openai.com/index/safety-bug-bounty
First tracked: March 25, 2026 at 02:00 PM
Classified by LLM (prompt v3) · confidence: 92%