3 Sources
[1]
NVIDIA and Google Cloud Collaborate to Advance Agentic and Physical AI
Companies can build AI factories with NVIDIA Vera Rubin-powered A5X instances scaling up to nearly 1 million Rubin GPUs, Gemini on Google Distributed Cloud, confidential NVIDIA Blackwell GPUs and agentic AI built on Gemini Enterprise Agent Platform with NVIDIA Nemotron and NeMo.

NVIDIA and Google Cloud have collaborated for more than a decade, co-engineering a full-stack AI platform that spans every technology layer -- from performance-optimized libraries and frameworks to enterprise-grade cloud services. This foundation enables developers, startups and enterprises to push agentic and physical AI out of the lab and into production -- from agents that manage complex workflows to robots and digital twins on the factory floor.

At Google Cloud Next this week in Las Vegas, the partnership reaches a new milestone, with advancements to expand Google Cloud AI Hypercomputer for AI factories that will power the next frontier of agentic and physical AI. These include the new NVIDIA Vera Rubin-powered A5X bare-metal instances; a preview of Google Gemini on Google Distributed Cloud running on NVIDIA Blackwell and NVIDIA Blackwell Ultra GPUs; confidential VMs with NVIDIA Blackwell GPUs; and agentic AI on Gemini Enterprise Agent Platform with NVIDIA Nemotron open models and the NVIDIA NeMo framework.

Next-Generation Infrastructure: From NVIDIA Blackwell to Vera Rubin

At Google Cloud Next, Google announced A5X powered by NVIDIA Vera Rubin NVL72 rack-scale systems, which -- through extreme codesign across chips, systems and software -- deliver up to 10x lower inference cost per token and 10x higher token throughput per megawatt than the prior generation. A5X will use NVIDIA ConnectX-9 SuperNICs, combined with next-generation Google Virgo networking, scaling to up to 80,000 NVIDIA Rubin GPUs within a single-site cluster and up to 960,000 NVIDIA Rubin GPUs in a multisite cluster, enabling customers to run their largest AI workloads on NVIDIA-optimized infrastructure.
"At Google Cloud, we believe the next decade of AI will be shaped by customers' ability to run their most demanding workloads on a truly integrated, AI‑optimized infrastructure stack," said Mark Lohmeyer, vice president and general manager of AI and computing infrastructure at Google Cloud. "By combining Google Cloud's scalable infrastructure and managed AI services with NVIDIA's industry‑leading platforms, systems and software, we're giving customers flexibility to train, tune and serve everything from frontier and open models to agentic and physical AI workloads -- while optimizing for performance, cost and sustainability." Google Cloud's broad NVIDIA Blackwell portfolio ranges from A4 VMs with NVIDIA HGX B200 systems to rack-scale A4X VMs with NVIDIA GB200 NVL72 and A4X Max NVIDIA GB300 NVL72 systems, all the way to fractional G4 VMs with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. Customers can right-size their acceleration capabilities, whether using multiple interconnected NVL72 racks that scale out to tens of thousands of NVIDIA Blackwell GPUs, a single rack that can scale up to 72 Blackwell GPUs with fifth-generation NVIDIA NVLink and NVLink 5 Switch, or just one-eighth of a GPU. This comprehensive platform helps teams optimize every workload, from mixture-of-experts reasoning, multimodal inference and data processing to complex simulations for the next frontier of physical AI and robotics. Leading frontier AI labs are already putting this infrastructure to work. Thinking Machines Lab is scaling its Tinker application programming interface (API) on A4X Max VMs with GB300 NVL72 systems to accelerate training, while OpenAI is running large‑scale inference on NVIDIA GB300 (A4X Max VMs) and GB200 NVL72 systems (A4X VMs) on Google Cloud for some of its most demanding inference workloads, including for ChatGPT. 
Secure AI Wherever It Needs to Run: Sovereign and Confidential

Google Gemini models running on NVIDIA Blackwell and Blackwell Ultra GPUs are now in preview on Google Distributed Cloud, so customers can bring Google's frontier models wherever their most sensitive data resides. NVIDIA Confidential Computing with the NVIDIA Blackwell platform enables Gemini models to run in a protected environment where prompts and fine-tuning data stay encrypted and can't be seen or altered by unauthorized parties, including the infrastructure operators.

In the public cloud, the preview of Confidential G4 VMs with NVIDIA RTX PRO 6000 Blackwell GPUs brings these protections to multi-tenant environments -- helping safeguard prompts, AI models and data so customers in regulated industries can access the power of AI without compromising on security or performance. This is the first confidential computing offering of NVIDIA Blackwell GPUs in the cloud, giving Google Cloud customers a new foundation for secure, high-performance AI.

Open Models and APIs for Agentic AI

The NVIDIA platform on Google Cloud is optimized to run every kind of model -- from Google's frontier Gemini and Gemma families to NVIDIA Nemotron open models and the broader open-weight ecosystem -- equipping developers to build agentic AI systems that reason, plan and act. NVIDIA Nemotron 3 Super is available on Gemini Enterprise Agent Platform, giving developers a direct path to discovering, customizing and deploying NVIDIA-optimized reasoning and multimodal models for agentic workflows.

Google Cloud and NVIDIA are also making it easier to train and customize open models at scale. Managed Training Clusters on Gemini Enterprise Agent Platform introduced a new managed reinforcement learning (RL) API built with NVIDIA NeMo RL for accelerating RL training at scale while automating cluster sizing, failure recovery and job execution, so teams can focus on agent behavior and model quality instead of infrastructure management.
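The managed RL API handles the infrastructure; the loop it scales up is the familiar policy-gradient recipe. As a purely illustrative, self-contained sketch (a toy REINFORCE update on a two-armed bandit, not the NeMo RL API; real RL post-training operates on LLM policies with learned reward signals):

```python
# Toy REINFORCE loop: illustrative only, not the NeMo RL API.
# All names here are made up for the example.
import math
import random

random.seed(0)
logits = [0.0, 0.0]        # "policy parameters" for two actions
true_reward = [0.2, 0.8]   # hidden reward probability of each arm

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

lr, baseline = 0.5, 0.5
for _ in range(2000):
    probs = softmax(logits)
    # sample an action from the current policy, observe a reward
    a = 0 if random.random() < probs[0] else 1
    r = 1.0 if random.random() < true_reward[a] else 0.0
    # REINFORCE: push up log-prob of actions that beat the baseline
    for i in range(2):
        grad = (1.0 if i == a else 0.0) - probs[i]
        logits[i] += lr * (r - baseline) * grad

print(softmax(logits))  # the policy should now strongly prefer arm 1
```

A managed service adds what this toy omits: distributed rollout generation, reward-model scoring, checkpointing and recovery across thousands of accelerators.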
Cybersecurity leader CrowdStrike uses NVIDIA NeMo open libraries such as NeMo Data Designer, NeMo Automodel and NeMo Megatron Bridge to generate synthetic data and fine-tune Nemotron and other open large language models for domain-specific cybersecurity. Running on Managed Training Clusters on Gemini Enterprise Agent Platform with NVIDIA Blackwell GPUs, these capabilities accelerate threat detection, investigation and response.

Building the Future of Industrial and Physical AI

Building industrial and physical AI at scale demands powerful hardware and a combination of open models, libraries and frameworks to develop complex end-to-end workflows. The NVIDIA AI infrastructure, open models and physical AI libraries available on Google Cloud are mainstreaming industrial and physical AI applications, enabling customers to simulate, optimize and automate real-world workflows.

Solutions from leading industrial software providers, including Cadence and Siemens Digital Industries Software, are now available on Google Cloud, accelerated on NVIDIA AI infrastructure. These applications are powering the next generation of design, engineering and manufacturing for everything from chips to autonomous vehicles, robotics, aerospace platforms, heavy machinery and large-scale production systems.

With NVIDIA Omniverse libraries and the open-source NVIDIA Isaac Sim robotics simulation framework available on Google Cloud Marketplace, developers can build physically accurate digital twins and develop custom robotics simulation pipelines to train, simulate and validate robots before real-world deployment. NVIDIA NIM microservices for models like NVIDIA Cosmos Reason 2 can be deployed to Google Vertex AI and Google Kubernetes Engine.
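Once a NIM microservice is running on Vertex AI or GKE, applications reach it over an OpenAI-compatible HTTP API. A minimal client sketch (the service host and model identifier below are placeholders that depend on your deployment):

```python
# Build an OpenAI-style chat-completions request for a deployed NIM
# endpoint. The base URL and model id are placeholders, not real values.
import json
from urllib import request

NIM_BASE_URL = "http://nim-cosmos-reason.example.internal:8000"  # placeholder

def build_chat_request(prompt: str, model: str = "nvidia/cosmos-reason2"):
    """Assemble the HTTP request; sending it is left to the caller."""
    payload = {
        "model": model,  # placeholder model id
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return request.Request(
        f"{NIM_BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Describe the hazards visible on this factory floor.")
print(req.full_url)
# Send with: resp = request.urlopen(req)
#            json.load(resp)["choices"][0]["message"]["content"]
```

In production the endpoint would sit behind a GKE Service or Vertex AI endpoint with authentication rather than a bare internal hostname.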
This enables robots and vision AI agents to see, reason and act in the physical world like humans, powering use cases such as automated data curation and annotation, advanced robot planning and reasoning, and intelligent video analytics agents for real-time insights and decision-making. Together, these technologies help developers seamlessly move from computer-aided design to living industrial digital twins and AI-driven robots, accelerating processes from design sign-off to factory optimization on the NVIDIA platform running on Google Cloud.

Proven Impact: From Startups to Global Enterprises

Global enterprises, AI labs and high-growth startups are using NVIDIA and Google Cloud's co-engineered platform to move from prototyping to production faster, including Snap, Schrödinger and Salesforce. Snap is cutting the cost of large-scale A/B testing by shifting data pipelines to GPU-accelerated Spark on Google Cloud. Schrödinger is shrinking weekslong drug discovery simulations into just hours with NVIDIA accelerated computing on Google Cloud.

Startups are orchestrating the next wave of AI innovation -- building new agents and AI-native applications using NVIDIA accelerated computing on Google Cloud. As part of a broader ecosystem highlighted through NVIDIA Inception and Google for Startups, CodeRabbit and Factory are using NVIDIA Nemotron-based models on Google Cloud to power code review and autonomous software development agents, while Aible, Mantis AI, Photoroom and Baseten are building enterprise data, video intelligence, generative imagery and managed inference solutions on the full-stack NVIDIA platform on Google Cloud. More than 90,000 developers have become part of the joint NVIDIA and Google Cloud developer community in just over a year, tapping this platform to build and scale new AI applications.
In addition, NVIDIA has been honored at Next as Google Cloud Partner of the Year in two categories -- AI Global Technology Partner and Infra Modernization Compute -- in recognition of deep technical expertise and go-to-market alignment. Together, NVIDIA and Google Cloud are giving customers a cloud‑scale platform to turn experimental agents and simulations into production systems that review code, secure fleets, enable new AI applications and optimize factories in the real world. Learn more about the companies' collaboration by attending NVIDIA sessions, demos and workshops at Google Cloud Next.
[2]
Google Bets On The Agentic AI Era With Its AI Hypercomputer, Merges 8th-Gen TPUs, NVIDIA Rubin, & Axion CPUs Together
Google has announced the AI Hypercomputer, which brings together the TPUv8 series, NVIDIA Rubin, and Axion CPUs to power the Agentic AI era.

Gone are the days of supercomputers; the Agentic AI era will be all about hypercomputers, which combine various compute options to deliver customers the most flexible and performant AI architecture ever built. Today, at Google's Cloud Next 26 event, the company formally announced its AI Hypercomputer. The new high-performance computing datacenter for Agentic AI houses an advanced, purpose-built architecture that unifies performance-optimized hardware for compute, storage, networking, open software, and ML frameworks.

To make Google's AI Hypercomputer possible, the company had to go above and beyond. It will house Google's latest custom TPUv8 series and Axion Cloud CPUs, and will also deploy NVIDIA Rubin GPUs. Today's announcement also comes with the launch of Google's 8th Gen TPU lineup, which comes in two flavors: the TPU 8t and the TPU 8i.

Google TPU 8t - Training Chip

The Google TPU 8t chip is designed as a training powerhouse, reducing the deployment of frontier models from months to weeks. The chip offers the highest possible compute throughput, shared memory, and interchip bandwidth in the most power-efficient package ever built. The TPU 8t chip has a total FP4 compute capacity of 121 Exaflops per pod, 2.84x higher than Ironwood.

The second chip, TPU 8i, is designed for inference and pairs an incredible 288 GB of HBM memory with 384 MB of on-chip SRAM, a 3x boost in capacity over the previous generation. With SRAM that large, models can be kept active entirely on the chip. The TPU 8i chip has a total FP8 compute capacity of 331.8 Exaflops per pod, 6.74x higher than Ironwood.
When it comes to generation-over-generation improvements, the TPU 8t training chip offers a 2.7x better performance-per-dollar improvement over Ironwood "TPUv7" in large-scale training, while the TPU 8i inference chip offers an 80% performance-per-dollar improvement over Ironwood in low-latency targets for MoE models. Both chips also deliver twice the performance per watt, which is vital for AI TCO. And both support Google's 4th Gen liquid cooling technology, which can sustain compute and performance densities not possible with air cooling.

Rounding out the main highlights of the Google AI Hypercomputer: Google Cloud will also be one of the first AI infrastructures to offer NVIDIA VR200 (Vera Rubin) accelerators. The Rubin GPUs will be paired with Google's brand-new Virgo network, offering massive-scale training clusters alongside Google's own 8th Gen TPU family. The Google AI Hypercomputer will be used by several customers, including big names such as the US DOE, Boston Dynamics, Citadel Securities, Thinking Machines Lab, and Axia Energy.
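As a quick arithmetic sanity check, the per-pod figures and generation-over-generation multipliers quoted above imply a prior-generation (Ironwood) baseline of roughly 42.6 Exaflops per pod on the training side and 49.2 Exaflops per pod on the inference side. These baselines are derived from the article's own numbers, not official specs:

```python
# Derive the implied Ironwood ("TPUv7") per-pod baselines from the
# quoted TPU 8t/8i figures. The inputs come from the article; the
# outputs are back-of-the-envelope arithmetic, not official specs.

TPU_8T_FP4_EXAFLOPS_PER_POD = 121.0   # training chip, per article
TPU_8T_GAIN_OVER_IRONWOOD   = 2.84

TPU_8I_FP8_EXAFLOPS_PER_POD = 331.8   # inference chip, per article
TPU_8I_GAIN_OVER_IRONWOOD   = 6.74

implied_train = TPU_8T_FP4_EXAFLOPS_PER_POD / TPU_8T_GAIN_OVER_IRONWOOD
implied_infer = TPU_8I_FP8_EXAFLOPS_PER_POD / TPU_8I_GAIN_OVER_IRONWOOD

print(f"Implied Ironwood baseline (training):  {implied_train:.1f} EF/pod")
print(f"Implied Ironwood baseline (inference): {implied_infer:.1f} EF/pod")
```

Note the two baselines differ; the quoted multipliers compare against different Ironwood operating points (training vs. low-latency inference), so they are not directly interchangeable.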
[3]
NVIDIA and Google Cloud expand AI collaboration with new infrastructure By Investing.com
Investing.com -- NVIDIA and Google Cloud announced on Wednesday an expansion of their partnership to advance agentic and physical AI capabilities, introducing new infrastructure and services at Google Cloud Next in Las Vegas.

The companies unveiled NVIDIA Vera Rubin-powered A5X bare-metal instances, which can scale up to 960,000 NVIDIA Rubin GPUs in a multisite cluster. The A5X instances use NVIDIA ConnectX-9 SuperNICs combined with Google Virgo networking, scaling to up to 80,000 NVIDIA Rubin GPUs within a single-site cluster. The new systems deliver up to 10 times lower inference cost per token and 10 times higher token throughput per megawatt compared to the prior generation, according to the announcement.

Google Cloud's NVIDIA Blackwell portfolio includes A4 VMs with NVIDIA HGX B200 systems, rack-scale A4X VMs with NVIDIA GB200 NVL72 and A4X Max NVIDIA GB300 NVL72 systems, and fractional G4 VMs with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. OpenAI is running large-scale inference on NVIDIA GB300 and GB200 NVL72 systems on Google Cloud for some of its inference workloads, including for ChatGPT. Thinking Machines Lab is scaling its Tinker API on A4X Max VMs with GB300 NVL72 systems.

Google Gemini models running on NVIDIA Blackwell and Blackwell Ultra GPUs are now in preview on Google Distributed Cloud. The companies also introduced Confidential G4 VMs with NVIDIA RTX PRO 6000 Blackwell GPUs, marking the first confidential computing offering of NVIDIA Blackwell GPUs in the cloud.

NVIDIA Nemotron 3 Super is now available on Gemini Enterprise Agent Platform. Google Cloud and NVIDIA introduced a new managed reinforcement learning API built with NVIDIA NeMo RL for accelerating training at scale. CrowdStrike uses NVIDIA NeMo open libraries to generate synthetic data and fine-tune Nemotron and other open large language models for cybersecurity applications, running on Managed Training Clusters on Gemini Enterprise Agent Platform with NVIDIA Blackwell GPUs.
Solutions from Cadence and Siemens Digital Industries Software are now available on Google Cloud, accelerated on NVIDIA AI infrastructure. NVIDIA Omniverse libraries and the NVIDIA Isaac Sim robotics simulation framework are available on Google Cloud Marketplace. NVIDIA received Google Cloud Partner of the Year recognition in two categories: AI Global Technology Partner and Infra Modernization Compute.
NVIDIA and Google Cloud announced an expanded AI collaboration at Google Cloud Next in Las Vegas, introducing next-generation infrastructure for agentic AI. The partnership brings NVIDIA Vera Rubin-powered A5X instances that can scale up to 960,000 GPUs, delivering up to 10x lower inference cost per token and 10x higher token throughput per megawatt than the prior generation.
NVIDIA and Google Cloud announced a major expansion of their decade-long partnership at Google Cloud Next in Las Vegas this week, unveiling next-generation infrastructure designed to accelerate agentic and physical AI from development into production [1]. The expanded AI collaboration introduces NVIDIA Vera Rubin-powered A5X bare-metal instances capable of scaling to 80,000 NVIDIA Rubin GPUs within a single-site cluster and up to 960,000 NVIDIA Rubin GPUs in a multisite cluster [3]. This massive AI infrastructure represents a significant leap in computing capacity, enabling customers to run their largest AI workloads on performance-optimized systems.
Source: NVIDIA
The new A5X instances utilize NVIDIA ConnectX-9 SuperNICs combined with Google's next-generation Virgo networking, delivering up to 10x lower inference cost per token and 10x higher token throughput per megawatt compared to the prior generation [1]. This dramatic improvement stems from extreme codesign across chips, systems and software within the NVIDIA Vera Rubin NVL72 rack-scale systems. Mark Lohmeyer, vice president and general manager of AI and computing infrastructure at Google Cloud, emphasized that "the next decade of AI will be shaped by customers' ability to run their most demanding workloads on a truly integrated, AI-optimized infrastructure stack" [1].

Google formally announced its AI Hypercomputer at Google Cloud Next, a high-performance computing datacenter that unifies performance-optimized hardware for compute, storage, networking, open software, and ML frameworks [2]. The system houses Google's latest custom TPUv8 series alongside Axion CPUs and NVIDIA Rubin GPUs. The TPU 8t training chip offers 121 Exaflops per pod of FP4 compute capacity, representing a 2.84x increase over the previous Ironwood generation, while the TPU 8i inference chip delivers 331.8 Exaflops per pod of FP8 compute, a 6.74x improvement [2].
Source: Wccftech
OpenAI is already running large-scale inference workloads on NVIDIA GB300 and GB200 NVL72 systems on Google Cloud for ChatGPT and other demanding applications [3]. Thinking Machines Lab is scaling its Tinker API on A4X Max VMs with GB300 NVL72 systems to accelerate training [1]. Google Cloud's comprehensive NVIDIA Blackwell portfolio ranges from A4 VMs with NVIDIA HGX B200 systems to rack-scale A4X VMs with NVIDIA GB200 NVL72 and A4X Max NVIDIA GB300 NVL72 systems, extending to fractional G4 VMs with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs [1].
Google Gemini models running on NVIDIA Blackwell and Blackwell Ultra GPUs are now in preview on Google Distributed Cloud, enabling customers to deploy frontier models wherever sensitive data resides [1]. The companies introduced Confidential G4 VMs with NVIDIA RTX PRO 6000 Blackwell GPUs, marking the first confidential computing offering of NVIDIA Blackwell GPUs in the cloud [3]. This enables Google Gemini to run in protected environments where prompts and fine-tuning data stay encrypted, helping regulated industries access AI without compromising security or performance.

The partnership extends to agentic AI development with NVIDIA Nemotron 3 Super now available on Gemini Enterprise Agent Platform [3]. Google Cloud and NVIDIA introduced a managed reinforcement learning API built with NVIDIA NeMo RL for accelerating training at scale. CrowdStrike uses NVIDIA NeMo open libraries to generate synthetic data and fine-tune Nemotron for cybersecurity applications on Managed Training Clusters [3]. For physical AI applications, NVIDIA Omniverse libraries and the NVIDIA Isaac Sim robotics simulation framework are now available on Google Cloud Marketplace, enabling development of digital twins and robotics applications [3]. Major customers including the US Department of Energy, Boston Dynamics, Citadel Securities, and Axia Energy are already utilizing the AI Hypercomputer for their training clusters and inference workloads [2].

Summarized by Navi