Nvidia gives 10,000 employees early access to GPT-5.5 through OpenAI Codex on Blackwell systems

Reviewed by Nidhi Govil


Nvidia has deployed OpenAI Codex powered by GPT-5.5 to over 10,000 employees across engineering, legal, marketing, finance, and HR departments. Running on Nvidia's GB200 NVL72 Blackwell systems, the agentic coding application delivers a 35x reduction in cost per million tokens and a 50x boost in token output per second per megawatt compared with prior-generation systems. Employees report debugging cycles that once took days now close in hours, with some calling the results "mind-blowing" and "life-changing."

Nvidia Deploys GPT-5.5 Across Enterprise Workforce

Nvidia has rolled out OpenAI Codex powered by GPT-5.5 to more than 10,000 employees in one of the largest internal deployments of a frontier AI model at enterprise scale [1]. The agentic coding application now spans departments including engineering, product, legal, marketing, finance, sales, HR, operations, and developer programs, marking a shift from traditional chatbot implementations to AI agents that actively perform work [4].

Source: TechSpot


Jensen Huang, Nvidia's founder and CEO, urged employees to embrace the technology in an internal email, writing, "Let's jump to lightspeed. Welcome to the age of AI" [1]. Huang emphasized the distinction between passive tools and active assistance, noting that "Chatbots answer questions. Agents do work" [2]. Following the successful internal deployment to 10,000 employees, OpenAI Codex has been made available to Nvidia's entire workforce [5].

Significant Cost Reduction and Efficiency Improvements in Software Development

The deployment runs on Nvidia's GB200 NVL72 rack-scale systems, which deliver 35x lower cost per million tokens and 50x higher token output per second per megawatt compared to prior-generation systems [1]. These economics make AI agents viable at enterprise scale, addressing one of the primary barriers to widespread adoption of frontier models in business environments [2].
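Taken at face value, the two multipliers compound in different directions: cost per token falls while throughput per unit of power rises. A back-of-the-envelope sketch makes the arithmetic concrete; the baseline figures below are hypothetical placeholders, not numbers from the article.

```python
# Back-of-the-envelope sketch of the efficiency claims: 35x lower cost per
# million tokens and 50x higher token throughput per second per megawatt
# versus prior-generation systems. Baseline values are hypothetical.

baseline_cost_per_mtok = 35.0          # $ per million tokens (hypothetical)
baseline_tok_per_s_per_mw = 20_000.0   # tokens/s per megawatt (hypothetical)

cost_reduction = 35   # 35x lower cost per million tokens
throughput_gain = 50  # 50x higher tokens/s per MW

new_cost_per_mtok = baseline_cost_per_mtok / cost_reduction
new_tok_per_s_per_mw = baseline_tok_per_s_per_mw * throughput_gain

print(f"cost: ${baseline_cost_per_mtok:.2f} -> ${new_cost_per_mtok:.2f} per 1M tokens")
print(f"throughput: {baseline_tok_per_s_per_mw:,.0f} -> {new_tok_per_s_per_mw:,.0f} tokens/s per MW")
```

Under these assumed baselines, cost drops from $35.00 to $1.00 per million tokens while throughput rises to a million tokens per second per megawatt, which is the sense in which such economics make always-on agents affordable at enterprise scale.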

Source: TechRadar


Engineers who have been using the GPT-5.5-powered tool for several weeks report measurable efficiency gains in how they build and maintain software projects. Debugging cycles that once stretched across days are now closing in hours, and experimentation that previously required weeks is turning into overnight progress in complex, multi-file codebases [1]. Teams are shipping end-to-end features from natural-language prompts with stronger reliability and fewer wasted cycles than with earlier models [2]. Employees have described the results as "mind-blowing" and "life-changing" [3].

Secure Cloud Virtual Machines and Enterprise Architecture

The rollout employs an enterprise architecture where OpenAI Codex agents run in sandboxed cloud virtual machines, enabling them to work with real company data without exposing it externally [1]. Each AI agent runs on a dedicated cloud virtual machine, with the Codex desktop app connecting via Secure Shell (SSH) to approved VMs [4].
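The one-agent-per-VM pattern described above can be sketched minimally: each agent session is bound to a dedicated sandbox host, and the client builds an SSH invocation only for hosts on an approved list. The hostnames, approval list, and helper below are hypothetical illustrations of the pattern, not Nvidia's actual tooling.

```python
from dataclasses import dataclass

# Sketch of the one-agent-per-dedicated-VM layout: a desktop client reaches
# each agent's sandbox over SSH, and only pre-approved hosts are reachable.
# All hostnames and the approval list are hypothetical.

APPROVED_VMS = {"codex-vm-001.internal", "codex-vm-002.internal"}

@dataclass(frozen=True)
class AgentSession:
    agent_id: str
    vm_host: str

    def ssh_command(self, user: str = "codex") -> list[str]:
        """Build the SSH invocation for this agent's VM (not executed here)."""
        if self.vm_host not in APPROVED_VMS:
            raise ValueError(f"VM {self.vm_host!r} is not on the approved list")
        return ["ssh", f"{user}@{self.vm_host}"]
```

Binding each agent to its own VM keeps a misbehaving agent's blast radius to a single disposable sandbox, which is the security property the architecture is after.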

From a policy perspective, Nvidia says the deployment is governed by a zero-data-retention model and read-only integrations with production systems [1]. AI agents interact with those systems through command-line tools and "Skills", the same internal toolkit used to run automation workflows across the company, which provides an additional control layer over what Codex can execute [1]. These agents can read company data but cannot directly modify or delete it, addressing security concerns central to agentic deployments [4].
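The read-only constraint can be illustrated as a command gate that lets read-style commands through and rejects anything that mutates state. This is a minimal sketch of the general idea only; the command lists and the `run_readonly` helper are hypothetical, and Nvidia's actual Skills toolkit is not public.

```python
# Minimal sketch of a read-only command gate, illustrating the kind of
# control layer the article describes. The allow list and helper functions
# are hypothetical examples, not Nvidia's internal tooling.

READ_ONLY_COMMANDS = {"cat", "ls", "grep", "head", "git log", "git diff"}

def is_allowed(command: str) -> bool:
    """Allow a command only if its leading token(s) match the read-only list."""
    for allowed in READ_ONLY_COMMANDS:
        if command == allowed or command.startswith(allowed + " "):
            return True
    return False

def run_readonly(command: str) -> str:
    """Gate an agent-issued command before execution (execution stubbed out)."""
    if not is_allowed(command):
        raise PermissionError(f"blocked mutating or unknown command: {command!r}")
    return f"would execute: {command}"
```

A default-deny allow list like this is the usual shape of such a gate: anything not explicitly recognized as a read (including `rm`, `git push`, or an unknown binary) is refused before it ever reaches a production system.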

Collaboration Between Nvidia and OpenAI on Nvidia Blackwell Infrastructure

Nvidia frames the GPT-5.5 deployment as the latest step in a more-than-decade-long collaboration between Nvidia and OpenAI across hardware, software, and model deployment [1]. The partnership dates back to 2016, when Jensen Huang delivered an Nvidia DGX-1 AI supercomputer to OpenAI's San Francisco headquarters [1].

Source: TweakTown


More recently, the collaboration produced what Nvidia calls a major milestone: the joint bring-up of the first GB200 NVL72 100,000-GPU cluster, which completed multiple large-scale training runs and "set a new benchmark for system-level reliability at frontier scale" [1]. GPT-5.5 is the product of that Nvidia Blackwell infrastructure running at full strength [1]. Huang noted in his communications that "Codex runs on Nvidia... trained on Nvidia Blackwell, inferencing on Nvidia AI infrastructure" [5].

OpenAI CEO Sam Altman confirmed the rollout, stating that the companies tested deploying Codex across an entire organization [5]. OpenAI president Greg Brockman indicated the company is exploring similar deployments with other enterprises [5]. Nvidia has also established a Codex Lab with OpenAI to support internal adoption, with training sessions planned for employees in the coming weeks [5]. The deployment matters because it demonstrates how frontier models can scale across entire enterprises when paired with appropriate infrastructure and security controls, potentially setting a template for other organizations considering similar implementations.


TheOutpost.ai


© 2026 Triveous Technologies Private Limited