OpenAI offers up to $25,000 for universal jailbreaks of GPT-5.5's biosecurity safeguards
Why it matters
OpenAI has launched a Bio Bug Bounty program alongside GPT-5.5: a targeted red-teaming challenge that asks security researchers to find universal jailbreaks of the model's biosecurity safeguards. Rewards reach $25,000 USD for the most severe categories of findings.
What is a “bio bug bounty”?
Classic bug bounty programs have existed for decades in the software industry — companies like Google, Microsoft, and Meta pay external researchers to find vulnerabilities in their products. OpenAI maps that idea onto AI safety, but with a focus on one specific domain: biosecurity.
The reason is clear: advanced language models can discuss scientific topics in detail, including microbiology, genetic engineering, and compound synthesis. That makes them dual-use technology. The same capabilities that accelerate the development of new therapies, vaccines, and diagnostics could also lower the barrier to developing biological weapons.
Why biology specifically?
Regulators have identified biosecurity as a priority. US Executive Order 14110, issued in 2023, explicitly lists biological risks as a category requiring special attention from frontier AI labs. The EU AI Act likewise treats the ability to facilitate the development of CBRN threats (chemical, biological, radiological, nuclear) as a marker of systemic risk in general-purpose models.
Frontier labs have responded with policies of their own. Anthropic's Responsible Scaling Policy defines AI Safety Levels (ASL), under which models showing "significantly elevated risk" around biosecurity require additional safeguards before deployment. Google DeepMind takes a similar approach with its Frontier Safety Framework. OpenAI's Bio Bug Bounty belongs to the same family of proactive initiatives.
What is a “universal jailbreak”?
A classic jailbreak is a specific prompt that bypasses guardrails in one scenario. A universal jailbreak is a more robust technique that works across a wide range of scenarios and topics: once found, it can be reused against many different forms of harmful queries.
These are precisely the techniques most valuable to attackers, which is why OpenAI wants to surface them before they fall into the hands of malicious actors. The $25,000 reward signals how seriously the company treats that risk.
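To make the distinction concrete, here is a minimal sketch of how a red-teamer might score a candidate jailbreak for universality. Everything in it is hypothetical: query_model() stands in for any chat-model API call, the test topics are benign placeholders rather than real harmful queries, and none of this reflects OpenAI's actual grading criteria.

```python
# Hypothetical sketch: scoring a candidate jailbreak for "universality"
# by measuring how often it defeats guardrails across a set of topics.

from typing import Callable

# Benign placeholder topics; a real evaluation would use a curated,
# access-controlled set of policy-violating test queries.
TEST_TOPICS = [
    "topic_a",
    "topic_b",
    "topic_c",
]

def is_refusal(response: str) -> bool:
    """Crude refusal detector; real evaluations use trained classifiers."""
    markers = ("i can't", "i cannot", "i won't", "unable to help")
    return any(marker in response.lower() for marker in markers)

def bypass_rate(jailbreak_template: str, query_model: Callable[[str], str]) -> float:
    """Fraction of test topics on which the template defeats the guardrails.

    A scenario-specific jailbreak scores high on one topic and low on the
    rest; a universal one scores high across the whole held-out set.
    """
    bypasses = 0
    for topic in TEST_TOPICS:
        prompt = jailbreak_template.format(topic=topic)  # template contains "{topic}"
        if not is_refusal(query_model(prompt)):
            bypasses += 1
    return bypasses / len(TEST_TOPICS)
```

A template that scores near 1.0 on a held-out topic set is behaving universally rather than exploiting a single scenario, and that is exactly the failure mode the bounty targets.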
Who can participate?
The program is open to red-teamers, biosecurity researchers, AI safety experts, and the broader security community. Other labs run similar efforts: Anthropic maintains both internal and external red-teaming processes, and Google DeepMind works with external consultants.
For researchers and security professionals, this is a concrete opportunity: frontier-lab bounty programs offer both income and reputational benefit. The Bio Bug Bounty is currently one of the few programs with such a clearly defined domain focus and reward amount.
Full participation terms, responsible disclosure rules, and technical documentation are available on the program’s official page.
This article was generated using artificial intelligence from primary sources.