OpenAI: GPT-5.5 and GPT-5.5-Cyber expand the Trusted Access for Cyber program
OpenAI is expanding the Trusted Access for Cyber (TAC) program to thousands of verified defensive researchers and hundreds of teams protecting critical software infrastructure. The program introduces GPT-5.5 with reduced restrictions, and the specialized GPT-5.5-Cyber for reverse engineering and malicious software analysis.
This article was generated using artificial intelligence from primary sources.
On May 7, 2026, OpenAI announced a significant expansion of the Trusted Access for Cyber (TAC) program, placing GPT-5.5 and the specialized GPT-5.5-Cyber at the center of its strategy for supporting the defensive cyber ecosystem. The program opens to thousands of verified defensive researchers and hundreds of teams protecting critical software infrastructure.
What distinguishes GPT-5.5 within the TAC program from the public version?
Two weeks before this announcement, OpenAI released GPT-5.5 to all ChatGPT Plus, Pro, Business, and Enterprise users. The version available through TAC has reduced restrictions in cyber domains, permitting deeper analysis of security scenarios, more detailed security reporting, and assistance in engineering defensive tools. OpenAI runs this permissive variant through Codex, giving verified teams direct access to advanced capabilities within their development environment.
What is new in the GPT-5.5-Cyber model?
GPT-5.5-Cyber is intended exclusively for defensive researchers at the highest tier of TAC verification and is the first OpenAI model explicitly designated for combined offensive-defensive security work. The specific use cases OpenAI cites include writing proof-of-concept exploits for discovered vulnerabilities, running simulations of an organization’s security posture, bug hunting, studying malicious software, and reverse engineering attacks. Fewer restrictions mean the model can directly generate and analyze content that the public model would refuse, but only within the closed channel of approved users.
Why is this step strategically significant?
The announcement fits into OpenAI’s broader positioning that the defensive side of the cyber ecosystem is structurally behind attackers and that scaled AI tools can reverse that asymmetry. By scaling the program to thousands of defenders and hundreds of teams, OpenAI moves from a pilot phase to an operational network where the defense of critical infrastructure, financial systems, and software supply chains gains specialized tools. The step also sets an industry precedent for managed access to models with reduced restrictions tied to strict identity verification.
Frequently Asked Questions
- Who can access the GPT-5.5-Cyber model?
- Only defensive teams and researchers verified at the highest tier of the OpenAI Trusted Access for Cyber program; they receive a version of the model with fewer guardrails than the public release.
- What does GPT-5.5-Cyber specifically enable?
- Writing proof-of-concept exploits for discovered bugs, running security posture simulations, reverse engineering attacks, and analyzing malicious software.
- What is the scope of the program?
- OpenAI is scaling it to thousands of individual defensive researchers and hundreds of teams responsible for protecting critical software.
Related news
OpenAI: how to run Codex safely in production — sandbox, approvals and agent telemetry
arXiv:2605.04572: SQSD reveals that even benign fine-tuning undermines model safety
arXiv:2605.04019: automated red teaming agent achieves 85% success rate against Meta Llama Scout with 45+ attacks and 450+ transformations