OpenAI launches Workspace Agents in ChatGPT: Codex-powered agents for enterprise teams
Why it matters
OpenAI introduced Workspace Agents, Codex-powered AI agents integrated directly into the ChatGPT interface. The agents run in the cloud, automate complex workflows, and help enterprise teams scale work through connected tools with an emphasis on cross-application security.
The agents are powered by OpenAI's Codex model and are designed to automate complex workflows for enterprise teams. Rather than living in a separate application, they run inside the same interface users already use for conversations with ChatGPT.
The announcement marks a clear shift in OpenAI's positioning: ChatGPT is no longer just a Q&A chatbot, but a platform that runs agents capable of completing concrete business tasks without constant user supervision.
What are Workspace Agents?
Workspace Agents are Codex-powered agents that execute in the cloud and can independently complete multi-step tasks. Codex is OpenAI’s model specialized for code and tool execution, giving agents the ability to manage files, call APIs, and combine results from multiple sources.
The key difference from classic assistants lies in duration and autonomy. Instead of requiring the user to respond to each follow-up question, the agent can take on a task and decide on its own which steps need to be taken. Cloud execution means the user doesn't have to keep their computer on or run local tools.
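The autonomy described above can be sketched as a generic plan-act loop. This is purely illustrative, not OpenAI's actual implementation: the planner, the step names, and the tool runner below are all hypothetical stand-ins for the model-driven planning and cloud tool execution the announcement describes.

```python
# Illustrative plan-act loop: the agent repeatedly chooses its own next
# step until the goal is reached, instead of pausing for user input after
# every turn. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str
    steps_done: list = field(default_factory=list)

def plan_next_step(task: Task):
    """Stand-in planner: a real agent would ask the model for the next action."""
    plan = ["read_source_files", "call_reporting_api", "write_summary"]
    for step in plan:
        if step not in task.steps_done:
            return step
    return None  # no steps left: the task is complete

def execute(step: str) -> str:
    """Stand-in for cloud-side tool execution (files, APIs, etc.)."""
    return f"{step}: ok"

def run_agent(task: Task) -> list:
    results = []
    # The loop, not the user, decides when the task is finished.
    while (step := plan_next_step(task)) is not None:
        results.append(execute(step))
        task.steps_done.append(step)
    return results

results = run_agent(Task(goal="weekly report"))
```

The point of the sketch is the control flow: the user hands over a goal once, and the loop drives all intermediate steps to completion.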
Why is this announcement important?
Until now, ChatGPT was primarily a chat interface, and OpenAI offered agentic capabilities through the Assistants API or Operator. Integrating agents directly into enterprise ChatGPT means teams already paying for a subscription get agents as part of the basic package.
For the market, this means direct pressure on competitors like Microsoft Copilot, Google’s Gemini for Workspace, and startups building agent platforms from the ground up. OpenAI is using the ChatGPT base of hundreds of millions of users as a distribution channel for agents — an advantage smaller players can hardly replicate.
The focus on enterprise team workflows also shows that OpenAI recognizes where the most money is made from agents — not with individual consumers, but in large organizations that need to automate repetitive processes.
How does OpenAI address security between applications?
The biggest challenge for agents working across multiple tools is the question of trust and authorization. If an agent can read email, write to a CRM, and run scripts, a single wrongly worded instruction or prompt injection can lead to data leakage or unauthorized actions.
In the announcement, OpenAI emphasizes cross-app security as one of the foundations of Workspace Agents. Although the details are not publicly elaborated, the focus suggests mechanisms that limit what an agent is allowed to do in each connected application and how data is passed between them.
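One common way to enforce such limits is a per-application permission policy that gates every tool call. Since OpenAI has not published its actual mechanism, the policy table, app names, and gate function below are hypothetical, shown only to make the idea concrete.

```python
# Hypothetical per-app permission policy: each connected application exposes
# only the actions the policy explicitly allows; everything else is rejected
# before the tool call ever runs. Not OpenAI's actual mechanism.
POLICY = {
    "email": {"read"},           # agent may read mail, never send
    "crm":   {"read", "write"},  # full access to CRM records
    "shell": set(),              # script execution disabled entirely
}

class ActionNotAllowed(Exception):
    """Raised when a tool call falls outside the configured policy."""

def authorize(app: str, action: str) -> None:
    # Unknown apps default to an empty permission set (deny by default).
    allowed = POLICY.get(app, set())
    if action not in allowed:
        raise ActionNotAllowed(f"{action!r} on {app!r} is not permitted")

def agent_call(app: str, action: str, payload: str) -> str:
    authorize(app, action)  # gate every tool call before executing it
    return f"{app}.{action}({payload!r}) executed"
```

Deny-by-default gating of this kind also limits the blast radius of prompt injection: even a successfully injected instruction cannot make the agent perform an action the policy never granted.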
What does this mean for companies outside the US?
For organizations already using ChatGPT Enterprise or Team subscriptions, Workspace Agents open up the possibility of automation without additional development. Smaller companies without their own development teams get ready-to-use agents in the same interface their employees already know.
It is important to note, however, that enterprise agents require careful data preparation and clearly defined access policies. Without clean processes and defined boundaries, agents can amplify existing disorder rather than reduce it.
Related news
Anthropic: Memory for Managed Agents in public beta — AI agents that remember context between sessions
GitHub: Cloud agent sessions now available directly from issues and project views
ArXiv SWE-chat — a dataset of real developer interactions with AI coding agents in production