6 Sources
[1]
OpenAI updates ChatGPT with Codex-powered 'workspace agents' for teams - 9to5Mac
OpenAI launch week continues today with the introduction of workspace agents in ChatGPT. The company describes the new feature as an evolution of GPTs, custom tools that ChatGPT users can build within the app. Last year, OpenAI introduced the ability for anyone to create a custom GPT through ChatGPT. Custom GPTs are dedicated tools that you create inside ChatGPT for specific purposes. Starting today, workspace agents are here to upgrade the experience and replace custom GPTs. "Teams can now create shared agents that handle complex tasks and long-running workflows, all while operating within the permissions and controls set by their organization," OpenAI says. Codex technology powers the new workspace agents, OpenAI says. Workspace agents are an evolution of GPTs. Powered by Codex, they can take on many of the tasks people already do at work -- from preparing reports, to writing code, to responding to messages. They run in the cloud, so they can keep working even when you're not. They're also designed to be shared within an organization, so teams can build an agent once, use it together in ChatGPT or Slack, and improve it over time. The new feature is part of a wider push to make AI tools both user-specific and always active. Workspace agents arrive today as a research preview feature through ChatGPT Business, Enterprise, Edu, and Teachers plans, the company says. In the future, OpenAI will also make it possible to convert custom GPTs directly into workspace agents.
[2]
OpenAI unveils Workspace Agents, a successor to custom GPTs for enterprises that can plug directly into Slack, Salesforce and more
OpenAI introduced a new paradigm and product today that is likely to have huge implications for enterprises seeking to adopt and control fleets of AI agent workers. Called "Workspace Agents," OpenAI's new offering essentially allows users on its ChatGPT Business ($20 per user per month) and variably priced Enterprise, Edu and Teachers subscription plans to design or select from pre-existing agent templates that can take on work tasks across third-party apps and data sources including Slack, Google Drive, Microsoft apps, Salesforce, Notion, Atlassian Rovo, and other popular enterprise applications. Put simply: these agents can be created and accessed from ChatGPT, but users can also add them to third-party apps like Slack, communicate with them across disparate channels, ask them to use information from the channel they're in and other third-party tools and apps, and the agents will go off and do work like drafting emails to the entire team, selected members, or pull data and make presentations. Human users can trust that the agent will manage all this complexity and complete the task as requested, even if the user who requested it leaves. It's the end of "babysitting" agents and the start of letting them go off and get shit done for your business -- according to your defined business processes and permissions, of course. The product experience appears centered on the Agents tab in the ChatGPT sidebar, where teams can discover and manage shared agents. This functions as a kind of team directory: a place where agents built by coworkers can be reused across a workspace. The broader idea is that AI becomes less of an individual productivity trick and more of a shared organizational resource. In this sense, OpenAI is targeting one of office work's oldest pain points: the handoff between people, systems, and steps in a process. OpenAI says workspace agents will be free for the next two weeks, until May 6, 2026, after which credit-based pricing will begin. 
The company also says more capabilities are on the way, including new triggers to start work automatically, better dashboards, more ways for agents to take action across business tools, and support for workspace agents in its AI code generation app, Codex. For more information on how to get started building and using them, OpenAI recommends heading over to its online academy page and its help desk documentation. The most significant shift in this announcement is the move away from purely session-based interaction. Workspace agents are powered by Codex -- the cloud-based, partially open-source AI coding harness that OpenAI has been aggressively expanding in 2026 -- which gives them access to a workspace for files, code, tools, and memory. OpenAI says the agents can do far more than answer a prompt. They can write or run code, use connected apps, remember what they have learned, and continue work across multiple steps. That description lines up closely with the capabilities OpenAI shipped into Codex just six days ago, including background computer use, more than 90 new plugins spanning tools like Atlassian Rovo, CircleCI, GitLab, Microsoft Suite, Neon by Databricks, and Render, plus image generation, persistent memory, and the ability to schedule future work and wake up on its own to continue across days or weeks. Workspace agents inherit that plumbing. When one pulls a Friday metrics report, it is effectively spinning up a Codex cloud session with the right tools attached, running code to fetch and transform data, rendering charts, writing the narrative, and persisting what it learned for next week. When that same agent is deployed to a Slack channel, it is a Codex instance listening for mentions and threading its work back in. This is the technical decision enterprise buyers should focus on.
Building an agent on a code-execution substrate rather than a pure LLM-call-and-response loop is what gives workspace agents the ability to do real work -- transforming a CSV, reconciling two systems of record, generating a chart that is actually correct -- rather than describing what the work would look like. In earlier AI assistant models, progress paused when the user stopped interacting. Workspace agents change that by running in the cloud and supporting long-running workflows. Teams can also set them to run on a schedule. That means a recurring reporting agent can pull data on a set cadence, generate charts and summaries, and share the results with a team without anyone manually kicking off the process. Here at VentureBeat, we analyze story traffic and user return rate on a weekly basis -- exactly the kind of recurring, multi-step, multi-source task that could theoretically be automated with a single workspace agent. Any enterprise with a weekly reporting rhythm pulling from dynamic data sources is likely to find a use for these agents. Agents also retain memory across runs. OpenAI says they can be guided and corrected in conversation, so they improve the more a team uses them. Over time they start to reflect how a team actually works -- its processes, its standards, its preferred ways of handling recurring jobs -- which is a meaningfully different proposition from the static instruction-set GPTs that preceded them. OpenAI's claim is that agents should gather information and take action where work already happens, rather than forcing teams into a separate interface. That point becomes clearest in the Slack examples. OpenAI's launch materials show a product-feedback agent operating inside a channel named #user-insights, answering a question about recent mobile-app feedback with a themed summary pulled from multiple sources. 
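The recurring reporting flow described above (pull data on a cadence, transform it, write a summary) can be made concrete with a small sketch. This is not OpenAI's workspace-agent API, which is not public in this form; it is a generic, stdlib-only Python illustration of the kind of multi-step job such an agent automates. The function name and sample data are invented for illustration.

```python
import csv
import io
from statistics import mean

def weekly_report(csv_text: str) -> str:
    """Toy stand-in for a recurring reporting agent: parse raw
    traffic data, compute metrics, and write a short narrative."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    views = [int(r["views"]) for r in rows]
    top = max(rows, key=lambda r: int(r["views"]))
    return (
        f"Weekly traffic report: {len(rows)} stories, "
        f"{sum(views)} total views (avg {mean(views):.0f}). "
        f"Top story: {top['title']!r}."
    )

# Hypothetical input; a real agent would fetch this from an
# analytics connector rather than a hard-coded string.
sample = "title,views\nAgents launch,900\nPricing update,300\n"
print(weekly_report(sample))
```

In a deployed agent, the equivalent steps would run inside a scheduled cloud session, with the result posted to Slack or email rather than printed.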
The company's demo lineup walks through a sample team directory of agents: Spark for lead qualification and follow-up, Slate for software-request review, Tally for metrics reporting, Scout for product feedback routing, Trove for third-party vendor risk, and Angle for marketing and web content. OpenAI also shared more functional examples its own teams use internally -- a Software Reviewer that checks employee requests against approved-tools policy and files IT tickets; an accounting agent that prepares parts of month-end close including journal entries, balance-sheet reconciliations, and variance analysis, with workpapers containing underlying inputs and control totals for review; and a Slack agent used by the product team that answers employee questions, links relevant documentation, and files tickets when it surfaces a new issue. In a sense, it is a continuation of the philosophy OpenAI espoused for individuals with last week's Codex desktop release: the agent joins the workflow where work is already happening, draws in context from the surrounding apps, takes action where permitted, and keeps moving. Workspace agents are not a standalone launch. They sit inside a roughly 12-month arc in which OpenAI has been systematically rebuilding ChatGPT, the API, and the developer platform around agents. Workspace agents are explicitly positioned by OpenAI as an evolution of its custom GPTs, introduced in late 2023, which gave users a way to create customized versions of ChatGPT for particular roles and use cases. However, OpenAI now says it is deprecating the custom GPT standard for organizations at a yet-to-be-determined future date, and will require Business, Enterprise, Edu and Teachers users to convert their GPTs into workspace agents. Individuals who have made custom GPTs can continue using them for the foreseeable future, according to our sources at the company.
In October 2025, OpenAI introduced AgentKit, a developer-focused suite that includes Agent Builder, a Connector Registry, and ChatKit for building, deploying, and optimizing agents. In February 2026, it introduced Frontier, an enterprise platform focused on helping organizations manage AI coworkers with shared business context, execution environments, evaluation, and permissions. Workspace agents arrive as the no-code, in-product entry point that sits on top of that stack -- even if OpenAI does not explicitly describe the architectural relationship in its materials. The subtext across all three launches is the same: OpenAI has decided that the future of ChatGPT-for-work is fleets of permissioned agents, not single chat windows -- and that GPTs, its first attempt at letting businesses customize ChatGPT, were not enough. Because workspace agents can act across business systems, OpenAI puts heavy emphasis on governance. Admins can control who is allowed to build, run, and publish agents, and which tools, apps, and actions those agents can reach. The role-based controls are more granular than the ones most custom-GPT rollouts ever had: admins can toggle, per role, whether members can browse and run agents, whether they can build them, whether they can publish to the workspace directory, and -- separately -- whether they can publish agents that authenticate using personal credentials. That last setting is the risky case, and OpenAI explicitly recommends keeping it narrowly scoped. Authentication itself comes in two flavors, and the choice has real consequences. In end-user account mode, each person who runs the agent authenticates with their own credentials, so the agent only ever sees what that individual is allowed to see. In agent-owned account mode, the agent uses a single shared connection so users don't have to authenticate at run time. 
OpenAI's documentation strongly recommends service accounts rather than personal accounts for the shared case, and flags the data-exfiltration risk of publishing an agent that authenticates as its creator. Write actions -- sending email, editing a spreadsheet, posting a message, filing a ticket -- default to Always ask, requiring human approval before the agent executes. Builders can relax specific actions to "Never ask" or configure a custom approval policy, but the default posture is human-in-the-loop. OpenAI also claims built-in safeguards against prompt-injection attacks, where malicious content in a document or web page tries to hijack an agent. The claim is welcome but not yet proven in the wild. For organizations that want deeper visibility, OpenAI says its Compliance API surfaces every agent's configuration, updates, and run history. Admins can suspend agents on the fly, and OpenAI says an admin-console view of every agent built across the organization, with usage patterns and connected data sources, is coming soon. Two caveats worth flagging for security-sensitive buyers: workspace agents are off by default at launch for ChatGPT Enterprise workspaces pending admin enablement, and they are not available at all to Enterprise customers using Enterprise Key Management (EKM). OpenAI also ships an analytics dashboard aimed at helping teams understand how their agents are being used. Screenshots in the launch materials show measures like total runs, unique users, and an activity feed of recent runs, including one by a user named Ethan Rowe completing a run in a #b2b-sales channel. The mockup detail supports OpenAI's broader point: the company wants organizations to measure not just whether agents exist, but whether they are being used. The clearest early-adopter signal in the launch itself comes from Rippling. 
Ankur Bhatt, who leads AI Engineering at the HR platform, says workspace agents shortened the traditional development cycle enough that a sales consultant was able to build a sales agent without an engineering team. "It researches accounts, summarizes Gong calls, and posts deal briefs directly into the team's Slack room," Bhatt says. "What used to take reps 5-6 hours a week now runs automatically in the background on every deal." OpenAI's announcement names SoftBank Corp., Better Mortgage, BBVA, and Hibob as additional early testers. Workspace agents do not land in a vacuum. They land in the middle of a broader OpenAI push -- through AgentKit, through Frontier, through the Codex overhaul -- to make agents more persistent, more connected, and more useful inside real organizational workflows. They also land in a deeply crowded field: Microsoft Copilot Studio is wired into the Microsoft 365 base, Google is pushing Agentspace, Salesforce has rebuilt itself as agent infrastructure with Agentforce, and Anthropic recently introduced Claude Managed Agents, all different flavors of similar ideas -- agents that cut across your apps and tools, take actions on schedules repeatedly as desired, and retain some degree of memory, context, and permissions and policies. But this launch matters because it turns OpenAI's strategy into something concrete for the teams already paying for ChatGPT, and because it quietly retires the product those teams were most recently told to standardize on. If workspace agents live up to the pitch -- shared, reusable, scheduled, permissioned coworkers that follow approved processes and keep work moving when their human is offline -- it would mark a meaningful change in what workplace software does. Less passive software waiting for input, more active systems helping teams coordinate, execute, and move faster together. The era of the digital coworker has begun. And, on OpenAI's plans at least, the era of the custom GPT is ending.
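The human-in-the-loop default described in this article (write actions set to "Always ask" unless a builder explicitly relaxes a specific action to "Never ask") follows a simple pattern. The sketch below is a generic illustration of that posture, not OpenAI's implementation; the policy table, action names, and function signature are hypothetical.

```python
from typing import Callable

# Hypothetical policy table mirroring the described posture: write
# actions default to requiring approval unless explicitly relaxed.
APPROVAL_POLICY = {
    "send_email": "always_ask",   # default for write actions
    "file_ticket": "never_ask",   # relaxed by the agent's builder
}

def execute_action(action: str, approve: Callable[[str], bool]) -> str:
    # Unknown actions fall back to the safe default: ask a human.
    policy = APPROVAL_POLICY.get(action, "always_ask")
    if policy == "always_ask" and not approve(action):
        return f"{action}: blocked (approval denied)"
    return f"{action}: executed"

# An approver that rejects every request it is asked about.
deny_all = lambda action: False
print(execute_action("send_email", deny_all))   # requires approval
print(execute_action("file_ticket", deny_all))  # runs without asking
```

The design point is that the dangerous direction (executing without asking) requires an explicit opt-out per action, which is the posture OpenAI describes for write operations.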
[3]
ChatGPT workspace agents turn AI into a team member
OpenAI is pushing ChatGPT beyond just answering questions, and this latest update makes that shift pretty obvious. With workspace agents, ChatGPT is starting to look less like a chatbot and more like a full-blown work assistant.
What are workspace agents in ChatGPT, and how do they work?
OpenAI has introduced workspace agents, which are essentially shared AI agents designed to handle complex, multi-step tasks across teams. Unlike regular prompts, these agents don't just respond once and stop. They can plan, execute, and continue working in the background, even after the user steps away. They run in the cloud, meaning they can keep processing workflows, updating outputs, and handling tasks over time without constant input. What makes them different is how deeply they integrate into workflows. These agents can access files, run code, connect to tools, and even operate across platforms like ChatGPT and Slack.
Why is OpenAI turning ChatGPT into a team assistant?
This feels like a natural next step in the AI race. Tools like ChatGPT have already become essential for writing, coding, and research. Workspace agents take that further by automating entire workflows instead of just assisting with parts of them. For example, a team could create a shared agent that tracks feedback, summarizes reports, responds to internal queries, and even flags issues automatically. Instead of multiple people doing repetitive tasks, the agent handles it continuously in the background. There is also a strong collaboration angle here. These agents are designed to be shared within organizations, meaning teams can build one workflow and reuse it across projects, improving it over time instead of starting from scratch each time. Of course, this is still early. These agents operate within permissions, require setup, and are meant to assist rather than replace human decision-making. But the direction is clear. ChatGPT is no longer just something that helps you think.
It is slowly becoming something that works alongside you.
[4]
OpenAI Launches Workspace Agents Feature in ChatGPT - Decrypt
The feature is free in research preview until May 6, when credit-based pricing begins. OpenAI is pushing ChatGPT beyond the chat box with the launch of "workspace agents," a new feature that lets businesses automate recurring tasks even when employees are offline. Announced on Wednesday, OpenAI said in a post that, unlike the custom GPTs users have built in the past, workspace agents are powered by OpenAI's Codex model and run as persistent assistants that can connect to external apps, retain information across projects, and complete multi-step workflows without repeated prompts. "Workspace agents are an evolution of GPTs. Powered by Codex, they can take on many of the tasks people already do at work -- from preparing reports, to writing code, to responding to messages," OpenAI said. "They run in the cloud, so they can keep working even when you're not. They're also designed to be shared within an organization, so teams can build an agent once, use it together in ChatGPT or Slack, and improve it over time." According to OpenAI, users can create an AI agent from a new tab in ChatGPT by describing a desired workflow. ChatGPT then helps map the process, connect tools, and test the new agent. Once active, the agents can run on schedules or respond to specific triggers. "AI has already helped people work faster on their own, but many of the most important workflows inside an organization depend on shared context, handoffs, and decisions across teams," OpenAI said in a statement. "Workspace agents are designed for that kind of work: they can gather context from the right systems, follow team processes, ask for approval when needed, and keep work moving across tools." The new feature comes as the race to develop agentic AI enters a new and heavily funded phase, with tech giants including Google, Microsoft, and Amazon investing billions to build autonomous systems capable of completing tasks with limited human oversight. 
As experts continue to warn about the dangers of prompt injection and other cybersecurity threats, OpenAI said companies can limit what data and tools the agents can access, require human approval for sensitive actions, and monitor for prompt injection attacks. Workspace agents are available now in research preview for ChatGPT Business, Enterprise, Edu, and Teachers plans. OpenAI said the feature will remain free until May 6, 2026, before moving to a credit-based pricing model. While OpenAI said its own teams are already using the technology, the company emphasized that GPTs will remain available, adding that "we'll make it easy to convert GPTs into workspace agents."
[5]
OpenAI subscribers get new 'workspace agents' to automate complex tasks across teams - SiliconANGLE
OpenAI Group PBC said today it's pushing ChatGPT outside of its usual chat interface with the launch of "workspace agents," which is a new feature that allows business users to automate recurring tasks, even when their human employees aren't online. In a blog post today, OpenAI explained that its workspace agents are powered by the Codex model and designed to run as "persistent assistants." They can connect to third-party software applications, retain context across projects and perform multistep workflows without the need for repeated prompting. They're meant to be the next evolution of the company's "GPTs," which are specialized, no-code versions of ChatGPT tailored by their users to perform specific tasks, hobbies and workflows. "Powered by Codex, they can take on many of the tasks people already do at work -- from preparing reports, to writing code, to responding to messages," the company wrote. "They run in the cloud, so they can keep working even when you're not. They're also designed to be shared within an organization, so teams can build an agent once, use it together in ChatGPT or Slack, and improve it over time." OpenAI said users can create workspace agents through a new tab in the ChatGPT interface. All they have to do is describe the desired workflow they want it to perform. ChatGPT will then map the process it's going to use, connect the required tools and test the agent to make sure it does the job correctly. Once the user is satisfied, they can activate the new agent and set it to run on a schedule or respond to a specific trigger. The company explained that AI already helps people to work faster, but it's still a work in progress on more complex tasks that depend on shared context, handoffs and decisions made across teams. "Workspace agents are designed for that kind of work," it said.
"They can gather context from the right systems, follow team processes, ask for approval when needed and keep work moving across tools." OpenAI debuted the new feature at a time when the race to develop agentic AI tools is heating up, with rivals such as Google LLC, Microsoft Corp. and Amazon Web Services Inc. all investing billions of dollars in an effort to create their own autonomous systems for work. The company also faces substantial pressure from Anthropic PBC, which is widely regarded as having taken the lead in the agentic AI race thanks to tools such as Claude Code and Cowork. There are still risks to using autonomous AI agents, but OpenAI said it's taking steps to ensure workspace agents won't be targeted by prompt injection attacks and other threats. To protect them, it's giving companies the ability to limit what kind of data and tools the agents can access. Users can also set their agents to require approval before performing sensitive actions. The workspace agents are available now as a research preview for ChatGPT Business, Edu, Enterprise and Teachers subscribers. They're free to use between now and May 6, after which point the feature will shift to a credit-based pricing model. OpenAI said custom GPTs will still remain available, with the company promising to develop a mechanism that will allow customers to "convert GPTs into workspace agents" at a later date.
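The activation step described above (set an agent to run on a schedule or respond to a specific trigger) amounts to a small dispatch decision. The sketch below is a hypothetical, simplified Python illustration of that pattern, not OpenAI's actual scheduler; the agent names and configuration fields are invented.

```python
import datetime as dt

# Hypothetical registry: each agent runs either on a weekday schedule
# or in response to a named trigger event.
AGENTS = {
    "metrics-report": {"schedule": "friday"},
    "feedback-triage": {"trigger": "new_feedback"},
}

def due_agents(now: dt.datetime, events: set[str]) -> list[str]:
    """Return the agents that should run, given the time and any fired triggers."""
    weekday = now.strftime("%A").lower()
    runnable = []
    for name, cfg in AGENTS.items():
        if cfg.get("schedule") == weekday or cfg.get("trigger") in events:
            runnable.append(name)
    return runnable

friday = dt.datetime(2026, 5, 1)  # 2026-05-01 falls on a Friday
print(due_agents(friday, set()))             # scheduled agent only
print(due_agents(friday, {"new_feedback"}))  # schedule plus trigger
```

A production system would persist run state and wake agents in the cloud, but the schedule-or-trigger branching is the core of what users configure.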
[6]
OpenAI brings Codex-powered workspace agents to ChatGPT: How they work
OpenAI has launched a new feature called workspace agents in ChatGPT. These agents are designed for organisations where multiple people collaborate on shared tasks. They can continue running in the background, even when users are not actively using ChatGPT. OpenAI describes workspace agents as an evolution of GPTs. 'Powered by Codex, they can take on many of the tasks people already do at work -- from preparing reports, to writing code, to responding to messages,' OpenAI said in a blogpost. 'They run in the cloud, so they can keep working even when you're not. They're also designed to be shared within an organisation, so teams can build an agent once, use it together in ChatGPT or Slack, and improve it over time.' Also read: Anthropic investigates alleged unauthorised access to its Mythos AI model: Here is what happened Workspace agents are powered by Codex and can take care of tasks such as writing reports, generating code, replying to messages, and organising information. They operate in the cloud. Users can create an agent by simply describing a task or uploading a file. ChatGPT then helps turn that into a workflow by defining steps, connecting tools and adding required actions. They are built to handle real-world team workflows. For instance, an agent can collect data from different sources, process it, and prepare outputs like emails or reports. It can also ask for approval before taking important actions, ensuring users stay in control. Admins also get access to monitoring tools that show how agents are being used. They can control access, manage permissions and ensure sensitive data is protected. Built-in safeguards also help prevent issues like misuse or harmful instructions. Also read: OpenAI CEO Sam Altman takes dig at Anthropic Mythos AI, calls it fear-based marketing Workspace agents are currently available in research preview for ChatGPT Business, Enterprise, Edu and Teachers plans. 
Admins in Enterprise and Edu plans can enable them using role-based controls. The feature is free to use until May 6, 2026. After that, OpenAI plans to introduce a credit-based pricing model.
OpenAI introduced workspace agents, a Codex-powered evolution of custom GPTs that transforms ChatGPT into a persistent team assistant. These cloud-based agents can automate complex tasks, integrate with Slack and Salesforce, and continue working even when users are offline. Available now in research preview for business users, free until May 6, 2026.
OpenAI has unveiled workspace agents, marking a significant shift in how ChatGPT functions within enterprise environments. The new feature represents an evolution of custom GPTs, transforming the AI chatbot into a shared organizational resource that can handle long-running workflows and automate complex tasks without constant human oversight [1]. Unlike traditional custom GPTs that respond to individual prompts, workspace agents are powered by Codex and run persistently in the cloud, continuing work even when employees are offline [4].

Source: 9to5Mac
The feature arrives as part of OpenAI's broader push to make AI for teams more collaborative and autonomous. Teams can now create shared agents that handle everything from preparing reports and writing code to responding to messages across multiple platforms [1]. These AI work assistants are designed to be built once and reused across an organization, improving over time as teams refine their workflows [3].

Source: SiliconANGLE
What sets workspace agents apart is their deep integration with third-party applications. Enterprise users can deploy these agents across popular collaboration tools including Slack, Google Drive, Microsoft apps, Salesforce, Notion, and Atlassian Rovo [2]. The agents can be created and accessed from ChatGPT, but also added directly to platforms like Slack, where they can communicate across disparate channels, pull data from multiple sources, and complete tasks like drafting team emails or generating presentations [2].

This integration capability addresses one of office work's persistent challenges: the handoff between people, systems, and process steps. Users can trust that agents will manage complexity and complete tasks as requested, even after the person who initiated the work has left [2]. The Codex model gives these agents access to a workspace for files, code, tools, and memory, enabling them to write or run code, use connected apps, remember what they've learned, and continue work across multiple steps [2].

The technical architecture behind workspace agents represents a fundamental shift from session-based interaction to persistent execution. Building agents on a code-execution substrate rather than a pure language model loop enables them to perform real work -- transforming CSV files, reconciling systems of record, or generating accurate charts -- rather than simply describing what the work would look like [2].

Teams can set agents to run on schedules, enabling recurring tasks like weekly reporting that pulls data on a set cadence, generates charts and summaries, and shares results without manual initiation [2]. The agents retain memory across runs and can be guided and corrected through conversation, making them adaptable to changing business needs [2]. This capability to automate workflows means teams can focus on strategic decisions while agents handle repetitive, multi-step processes [3].

Source: VentureBeat
As the race to develop agentic AI intensifies, with competitors like Google, Microsoft, and Amazon investing billions in autonomous systems, OpenAI has implemented security measures to address concerns about prompt injection attacks and data security [4]. Companies can limit what data and tools agents can access, require human approval for sensitive actions, and monitor for injection attacks [4]. These controls ensure agents operate within the permissions and processes set by their organization [1].

Workspace agents launched as a research preview available to ChatGPT Business, Enterprise, Edu, and Teachers plan subscribers [5]. ChatGPT Business costs $20 per user per month, while Enterprise, Edu, and Teachers plans have variable pricing [2]. The feature remains free until May 6, 2026, after which credit-based pricing will begin [2]. OpenAI plans to add new capabilities including triggers to start work automatically, improved dashboards, more ways for agents to take action across business tools, and support for workspace agents in its AI code generation app, Codex [2]. Custom GPTs will remain available, with OpenAI promising to make it easy to convert GPTs into workspace agents [4].
Summarized by Navi
Topics: Policy and Regulation, Technology, Business and Economy