5 Sources
[1]
OpenAI debuts GPT-5.5; Nvidia gives 10,000 employees early access through Codex
What just happened? Nvidia has rolled out OpenAI's latest frontier model internally, giving more than 10,000 employees access to GPT-5.5 through OpenAI's Codex agentic coding application. The deployment spans engineering, product, legal, marketing, finance, sales, HR, operations, and developer programs, with employees using Codex for both knowledge work and software development.

Codex now runs on GPT-5.5 hosted on Nvidia's GB200 NVL72 rack-scale systems. Nvidia says the systems deliver 35x lower cost per million tokens and 50x higher token output per second per megawatt than prior-generation systems, economics it argues make frontier-model inference viable at enterprise scale.

Inside Nvidia, engineers have been using the GPT-5.5-powered Codex app for several weeks, and the company says the gains are already measurable in how they build and maintain software. Debugging work that used to take days is now being completed within hours, and experiments that previously required weeks are progressing overnight in complex, multi-file codebases, according to the Nvidia post. Teams are also using natural-language prompts to deliver end-to-end features with greater reliability and fewer wasted cycles than earlier models, the company says.

Nvidia founder and CEO Jensen Huang urged employees to make use of Codex in an internal email. "Let's jump to lightspeed. Welcome to the age of AI," he wrote.

The rollout uses an enterprise architecture in which Codex agents run in sandboxed cloud virtual machines, enabling them to work with real company data without exposing it externally. Each agent runs on a dedicated cloud virtual machine, with the Codex desktop app connecting via Secure Shell (SSH) to approved VMs. From a policy perspective, Nvidia says the deployment is governed by a zero-data-retention model and read-only integrations into production systems.
Agents interact with those systems over command-line tools and "Skills" - the same internal toolkit used to run automation workflows across the company, which provides an additional control layer on what Codex can execute. Nvidia frames the GPT-5.5 deployment as the latest step in a more-than-decade-long collaboration with OpenAI across hardware, software and model deployment. Their collaboration goes back to 2016, when Huang delivered an Nvidia DGX-1 AI supercomputer to OpenAI's San Francisco headquarters. More recently, that collaboration produced what Nvidia calls a major milestone: the joint bring-up of the first GB200 NVL72 100,000-GPU cluster, which completed multiple large-scale training runs and "set a new benchmark for system-level reliability at frontier scale." GPT-5.5, the company concludes, "is the product of that infrastructure running at full strength."
[2]
'It was awesome to see it work': OpenAI deploys GPT-5.5 Codex across Nvidia Blackwell systems - 50x efficiency boost and 35x cost reduction makes AI 'viable at enterprise scale'
* Nvidia says OpenAI Codex deployment offers 50x increase in token output
* Offers 35x reduction in cost compared to GPT-4o
* 10,000 Nvidia employees now operate Codex across HR, legal, engineering and more

OpenAI has deployed its Codex model, powered by GPT-5.5, across Nvidia in an AI-native enterprise deployment. The model is now available to over 10,000 Nvidia employees, and has reportedly resulted in a 35x reduction in cost and a 50x increase in token output per megawatt compared to GPT-4o. Codex has already delivered "mind-blowing" and "life-changing" results, according to Nvidia employees.

'Frontier-model inference viable at enterprise scale'

In a blog post, Nvidia says, "Debugging cycles that once stretched across days are closing in hours. Experimentation that previously required weeks is turning into overnight progress in complex, multi-file codebases. Teams are shipping end-to-end features from natural-language prompts, with stronger reliability and fewer wasted cycles than earlier models." The model has been deployed on Nvidia's GB200 NVL72 rack-scale systems, and this, Nvidia says, is what has made the huge savings in cost and token output possible.

The Nvidia-Codex adoption moves beyond the typical AI deployments most businesses have seen. Nvidia CEO Jensen Huang said, "Chatbots answer questions. Agents do work." Codex's original function was to aid GitHub users as a Copilot-style coding assistant, but it can now operate as an agentic assistant that performs tasks across non-technical departments such as product, legal, marketing, finance, sales, HR, and operations.

An email from Jensen Huang, shared by Sam Altman on X, says, "Incredibly excited to all [that] will use Codex to accelerate and do work never possible before. Please convey my congratulations to your team on showing the world the frontier yet again. And thank them again for inventing GPT that gave us the springboard to reason, plan, use tools, and beyond."
The deployment of Codex on Nvidia systems comes as OpenAI's rivalry with firms such as Anthropic is reaching new heights, especially as Anthropic's Claude Mythos draws widespread attention. While both models have wide-ranging and potentially revolutionary applications, Anthropic's Mythos focuses on patching zero-day vulnerabilities at scale (as demonstrated by Firefox), whereas OpenAI's GPT-5.5 Codex can handle real company data in a secure cloud sandbox, allowing employees to move beyond chatbots and interact with an AI agent capable of securely automating entire workflows.
[3]
Nvidia rolls out GPT-5.5-based Codex to 10,000 of its employees, who apparently all think it's 'mind-blowing' and 'life-changing'
To paraphrase some lyrics from my misspent youth pining for a Hot Topic, 'this ain't a scene, it's a goddamn AI arms race'. The preview of DeepSeek V4 is now live, and given that the AI giant was reportedly not so keen to grant Nvidia and AMD early access to the new model, the American AI industry has been equally keen to outpace China-based businesses any way it can. Case in point: Nvidia is really pushing OpenAI's GPT-5.5.

DeepSeek wouldn't give Nvidia the sneak peek, so 10,000 of its employees got an early look at OpenAI's latest frontier model instead. The company went with a widespread rollout of Codex, specifically, which is OpenAI's agentic coding application powered by GPT-5.5. This has apparently resulted in big efficiency wins. "Debugging cycles that once stretched across days are closing in hours," says Nvidia. The Nvidia blog continues in a similarly effusive tone: "Experimentation that previously required weeks is turning into overnight progress in complex, multi-file codebases. Teams are shipping end-to-end features from natural-language prompts, with stronger reliability and fewer wasted cycles than earlier models."

As with anything agentic, security was top of mind for the rollout. So, to "ensure maximum security and auditability" (and presumably to stop Codex from getting into anything it shouldn't), all participating Nvidia employees were given a cloud-based virtual machine to run the AI agent from. From there, Codex can read company data but can't directly edit (or indeed, delete) it. To oversimplify, they locked the bot in a virtual plexiglass box.

The blog post also surfaces similarly gushing praise from employees, with choice quotes calling Codex's results "mind-blowing" and "life-changing".
Codex was rolled out across plenty of company departments besides engineering and development, with folks working in "product, legal, marketing, finance, sales, HR, and operations" also getting to grips with the application (personally, I would love to hear the honest Codex thoughts of a grumpy office admin).

Such enthusiasm from Nvidia is no surprise, though. For one thing, the two companies have an ongoing relationship to the tune of billions (though Nvidia has scaled down what was originally a $100 billion investment in OpenAI, to a slightly less eye-watering $30 billion more recently). Second, GPT-5.5 runs on Nvidia GB200 NVL72 rack-scale systems. This set-up is apparently "capable of delivering 35x lower cost per million tokens and 50x higher token output per second per megawatt compared with prior-generation systems," making it an economically appealing enterprise model.

Company head Jensen Huang quipped to OpenAI CEO Sam Altman over email, "Fire up those Blackwells. We need more tokens!"
[4]
NVIDIA deploys GPT-5.5-powered Codex to 10,000 employees, with engineers calling results 'mind-blowing'
NVIDIA has rolled out OpenAI's latest frontier model across its global workforce, with CEO Jensen Huang calling it a milestone in the "age of AI." Over 10,000 NVIDIA employees in engineering, product, legal, finance, and marketing were given early access to Codex, an agentic coding tool powered by GPT-5.5.

Engineers have been using the GPT-5.5-powered Codex for several weeks now, reporting big efficiency gains in software development and maintenance. "Debugging cycles that once stretched across days are closing in hours," says NVIDIA. Teams are also using natural-language prompts to deliver end-to-end features more reliably and with fewer wasted cycles than prior models.

Huang encouraged employees to adopt Codex in an internal email, describing AI agents as teammates boosting productivity across all roles, not just engineering. "Chatbots answer questions. Agents do work," Huang wrote. "Let's jump to lightspeed. Welcome to the age of AI." Employees have echoed that sentiment, describing Codex as both "mind-blowing" and "life-changing."

As with anything agentic, security has been a central focus of the rollout. NVIDIA notes the deployment uses a zero-data-retention model with read-only integrations. AI agents interact via command-line tools and internal "Skills," providing extra control over what Codex can access or execute. The system runs on an enterprise architecture where Codex agents operate in sandboxed cloud virtual machines. These agents can read company data but cannot directly modify or delete it. Each agent runs on a dedicated VM, with the Codex desktop app connecting to approved environments via Secure Shell (SSH).

NVIDIA frames the GPT-5.5 deployment as the latest step in its long-running collaboration with OpenAI across hardware, software, and model deployment. The partnership, valued in the billions, has also led to the bring-up of the first GB200 NVL72 cluster, a 100,000-GPU system powering GPT-5.5 and other frontier models.
NVIDIA claims these systems deliver up to 35x lower cost per million tokens and 50x higher token throughput per megawatt compared to previous generations. OpenAI announced GPT-5.5 on Thursday, less than two months after releasing GPT-5.4. The company says its latest model improves on coding, computer use, and deeper research capabilities. GPT-5.5 is rolling out to OpenAI's paid subscribers, including Plus, Pro, Business, and Enterprise users, across ChatGPT and its coding assistant Codex.
[5]
Nvidia rolls out OpenAI's Codex AI agent to all employees - The Economic Times
Nvidia has rolled out OpenAI's Codex AI agent, powered by the GPT-5.5 model, to its entire workforce, according to an internal email from chief executive Jensen Huang. Huang said Codex is now available to all employees after early access was given to around 10,000 staff across functions such as engineering, product, legal, marketing and sales. "OpenAI's Codex, powered by GPT-5.5, has launched and is available to every NVIDIAN," Huang wrote in the email. He added that employees who tested the system reported strong results, with some calling it "mind-blowing" and "life-changing".

Codex is an AI agent designed to assist with tasks such as coding, planning and workflow execution. Nvidia said it is being used beyond software teams, including business and operations functions. Huang said the system runs on Nvidia's Blackwell infrastructure and reflects closer collaboration with OpenAI. "Codex runs on Nvidia... trained on Nvidia Blackwell, inferencing on Nvidia AI infrastructure," he wrote.

OpenAI chief executive Sam Altman confirmed the rollout in a post on X, saying the companies tested deploying Codex across an entire organisation. OpenAI president Greg Brockman said the company is exploring similar deployments with other enterprises. Nvidia has also set up a Codex Lab with OpenAI to support internal adoption, with training sessions planned for employees in the coming weeks.

The development comes shortly after OpenAI's coding assistant Codex crossed over four million weekly developers. The company added one million users in two weeks after reporting a three million count earlier this month. To deepen Codex's enterprise integration, the San Francisco-based company launched Codex Labs this month, a hands-on programme where its Codex experts work directly with organisations.
Seven global systems integrators (GSIs), including Accenture, Capgemini, Cognizant, Infosys and Tata Consultancy Services (TCS) have already enrolled.
Nvidia has deployed OpenAI Codex powered by GPT-5.5 to over 10,000 employees across engineering, legal, marketing, finance, and HR departments. Running on Nvidia's GB200 NVL72 Blackwell systems, the agentic coding application delivers a 35x cost reduction and 50x efficiency boost compared to previous models. Employees report debugging cycles that once took days now close in hours, with some calling the results "mind-blowing" and "life-changing."
Nvidia has rolled out OpenAI Codex powered by GPT-5.5 to more than 10,000 employees in what represents one of the largest internal deployments of a frontier AI model at enterprise scale [1]. The agentic coding application now spans departments including engineering, product, legal, marketing, finance, sales, HR, operations, and developer programs, marking a shift from traditional chatbot implementations to AI agents that actively perform work [4].

Source: TechSpot
Jensen Huang, Nvidia's founder and CEO, urged employees to embrace the technology in an internal email, writing, "Let's jump to lightspeed. Welcome to the age of AI" [1]. Huang emphasized the distinction between passive tools and active assistance, noting that "Chatbots answer questions. Agents do work" [2]. Following the successful internal deployment to 10,000 employees, OpenAI Codex has been made available to Nvidia's entire workforce [5].

The deployment runs on Nvidia's GB200 NVL72 rack-scale systems, which deliver 35x lower cost per million tokens and 50x higher token output per second per megawatt compared to prior-generation systems [1]. These economics make AI viable at enterprise scale, addressing one of the primary barriers to widespread adoption of frontier models in business environments [2].
Source: TechRadar
Engineers who have been using the GPT-5.5-powered tool for several weeks report measurable efficiency gains in how they build and maintain software. Debugging cycles that once stretched across days are now closing in hours, while experimentation that previously required weeks is turning into overnight progress in complex, multi-file codebases [1]. Teams are shipping end-to-end features from natural-language prompts with stronger reliability and fewer wasted cycles than earlier models [2]. Employees have described the results as "mind-blowing" and "life-changing" [3].

The rollout employs an enterprise architecture where OpenAI Codex agents run in sandboxed cloud virtual machines, enabling them to work with real company data without exposing it externally [1]. Each AI agent runs on a dedicated cloud virtual machine, with the Codex desktop app connecting via Secure Shell (SSH) to approved VMs [4].

From a policy perspective, Nvidia says the deployment is governed by a zero-data-retention model and read-only integrations into production systems [1]. AI agents interact with those systems over command-line tools and "Skills", the same internal toolkit used to run automation workflows across the company, providing an additional control layer on what Codex can execute [1]. These agents can read company data but cannot directly modify or delete it, addressing security concerns central to agentic deployments [4].
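Nvidia has not published how its control layer is implemented, but the general pattern it describes (an agent that may read data but never modify it, with a gate on what it can execute) can be sketched as a simple command allowlist. Everything below is an illustration of that pattern, not Nvidia's actual "Skills" toolkit; the command set and function names are invented.

```python
# Minimal sketch of a read-only command gate for a sandboxed agent.
# Hypothetical: illustrates the pattern described above, not Nvidia's
# actual "Skills" implementation. The allowlist is invented.
import shlex

READ_ONLY_COMMANDS = {"cat", "ls", "grep", "head", "tail", "find"}

def is_allowed(command_line: str) -> bool:
    """Return True only if every stage of a piped command is read-only."""
    for stage in command_line.split("|"):
        tokens = shlex.split(stage)
        # Reject empty stages and any command outside the allowlist.
        if not tokens or tokens[0] not in READ_ONLY_COMMANDS:
            return False
        # Reject shell redirections that could write to disk.
        if any(t in {">", ">>"} for t in tokens):
            return False
    return True

assert is_allowed("grep -r 'latency' logs/ | head -n 20")
assert not is_allowed("rm -rf build/")
```

A real deployment would enforce this server-side (and pair it with OS-level controls such as read-only mounts and restricted SSH entry points), since a client-side check alone can be bypassed; the sketch only shows where the policy decision sits.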
Nvidia frames the GPT-5.5 deployment as the latest step in a more-than-decade-long collaboration between Nvidia and OpenAI across hardware, software, and model deployment [1]. The partnership dates back to 2016, when Jensen Huang delivered an Nvidia DGX-1 AI supercomputer to OpenAI's San Francisco headquarters [1].
Source: TweakTown
More recently, the collaboration produced what Nvidia calls a major milestone: the joint bring-up of the first GB200 NVL72 100,000-GPU cluster, which completed multiple large-scale training runs and "set a new benchmark for system-level reliability at frontier scale" [1]. GPT-5.5 is the product of that Nvidia Blackwell infrastructure running at full strength [1]. Huang noted in his communications that "Codex runs on Nvidia... trained on Nvidia Blackwell, inferencing on Nvidia AI infrastructure" [5].

OpenAI CEO Sam Altman confirmed the rollout, stating the companies tested deploying Codex across an entire organization [5]. OpenAI president Greg Brockman indicated the company is exploring similar deployments with other enterprises [5]. Nvidia has also established a Codex Lab with OpenAI to support internal adoption, with training sessions planned for employees in the coming weeks [5]. This deployment matters because it demonstrates how frontier models can scale across entire enterprises when paired with appropriate infrastructure and security measures, potentially setting a template for other organizations considering similar implementations.

Summarized by Navi