4 Sources
[1]
Arm joins NVLink Fusion ecosystem -- Arm's clients to get access to Nvidia GPUs
Arm and Nvidia announced at the Supercomputing '25 conference that Arm had joined the NVLink Fusion ecosystem, marking a major advance for the technology, which is now supported by two major microarchitecture developers and four CPU developers in total. For Nvidia, this means that Arm's customers will develop processors that can work with Nvidia's AI accelerators, while Arm will also be able to design CPUs that could compete against Nvidia's own or Intel's processors in Nvidia-based systems.

"Arm is integrating NVLink IP so that their customers can build their CPU SoCs to connect Nvidia GPUs," said Dion Harris, the head of data center product marketing at Nvidia. "With NVLink Fusion, hyperscalers can significantly reduce design complexity, save development costs, and reach the market faster. The addition of Arm customers provides more options for specialized semi-custom infrastructure."

Arm is a large company with diverse businesses, including ISA and IP licensing and the development of custom CPUs and systems-on-chips (SoCs) for large customers. NVLink Fusion support brings benefits to each of these businesses. As an IP provider, Arm gets a major new competitive lever in the data-center market by supporting NVLink Fusion. By integrating NVLink IP directly into its architecture portfolio, Arm can offer its licensees a ready-made pathway to build CPUs that plug natively into Nvidia's AI accelerator ecosystem. In theory, this makes Arm-based designs far more attractive to hyperscalers and sovereign cloud builders who want custom CPUs alongside compatibility with market-leading Nvidia GPUs for AI and HPC. Previously, Nvidia's Grace CPUs were the only processors that connected to Nvidia GPUs over NVLink. While Nvidia mentions Arm only as an IP provider, Arm also benefits as a developer of its own CPUs aimed at hyperscalers and sovereign organizations: specifically, it gains the ability to compete directly inside Nvidia-based systems.
With native NVLink Fusion integration, future Arm-designed server CPUs can compete head-to-head with Nvidia's Grace and Vera, as well as Intel's Xeon, in systems where Nvidia GPUs are the central compute element. With NVLink Fusion, Arm CPUs can become first-class participants in rack-scale NVLink solutions, assuming that Nvidia allows this to happen, which is not guaranteed.

NVLink Fusion support also strengthens Arm's position as an ISA licensor, as it makes the Arm architecture inherently more attractive to hyperscalers and chip designers who want custom CPUs tightly integrated with Nvidia GPUs. By ensuring that Arm-based CPU designs can work with Nvidia GPUs over the coherent NVLink fabric -- rather than being limited to PCIe -- Arm gains ecosystem gravity and 'future-proof' relevance that competing ISAs like x86 and RISC-V cannot match today. This poses risks to both AMD and Intel: the former has shown little interest in supporting NVLink, while the latter is years away from building custom NVLink-supporting Xeon CPUs for Nvidia's rack-scale systems. Then again, chip development cycles and other factors matter here: by the time Arm-based CPUs with NVLink are ready, Intel's custom Xeon CPUs may be ready as well.

Arm's support for NVLink Fusion benefits Nvidia by massively expanding the pool of CPUs that can serve natively in Nvidia-centric AI systems using NVLink, without Nvidia having to build all those CPUs itself. By enabling Arm licensees -- such as Google, Meta, and Microsoft -- to integrate NVLink directly into their SoCs, Nvidia ensures that future Arm-based processors will be either architected around Nvidia GPUs or at least compatible with them. On the one hand, this could reduce the appeal of open alternatives like UALink; on the other, it could reduce the appeal of AI accelerators from companies like AMD, Broadcom, and Tenstorrent in general.
As an added bonus, it also strengthens Nvidia's position in sovereign AI projects that use Arm CPUs (at least in the next few years): governments and cloud providers that want custom Arm CPUs for control-plane or data-loading tasks can now adopt them without leaving Nvidia's GPU ecosystem. All in all, Arm's addition to the NVLink ecosystem is a win for Arm, Nvidia, and many of their partners, but it could pose serious risks for AMD, Intel, and Broadcom.
[2]
Arm custom chips get a boost with Nvidia partnership
Jensen Huang, CEO of Nvidia, reacts during the 2025 Asia-Pacific Economic Cooperation (APEC) CEO Summit in Gyeongju, South Korea, October 31, 2025.

Arm on Monday said that central processing units based on its technology will be able to integrate with AI chips using Nvidia's NVLink Fusion technology. The move will make it easier for customers of both companies who prefer a custom approach to their infrastructure -- namely hyperscalers -- to pair Arm-based Neoverse CPUs with Nvidia's dominant graphics processing units. It's the latest example of Nvidia using dealmaking to partner with nearly every major technology company as it finds itself at the center of the AI industry.

The announcement signals that Nvidia is opening up its NVLink platform to integrate with a wide variety of custom chips, instead of forcing customers to use its CPUs. Nvidia currently sells an AI product called Grace Blackwell that pairs multiple GPUs with an Nvidia-branded Arm-based CPU. Other configurations include servers that use CPUs from Intel or Advanced Micro Devices. But Microsoft, Amazon, and Google are all developing or deploying Arm-based CPUs in their clouds to give them more control over their setups and reduce their costs.

Arm doesn't make CPUs, but it licenses the instruction set technology that those chips need. The company also sells designs that allow partners to more quickly build Arm-based chips. As part of Monday's announcement, Arm said that custom Neoverse chips will include a new protocol that will allow them to move data seamlessly with GPUs.

The CPU has historically been the most important part of a server. But generative AI infrastructure is built around the AI accelerator chip, which in most cases is an Nvidia GPU. As many as eight GPUs can be paired with a CPU in an AI server. In September, Nvidia said it would invest $5 billion in Intel, the leading CPU maker. A key part of the deal was to enable Intel CPUs to integrate into AI servers using Nvidia's NVLink technology.
Nvidia reached an agreement to buy Arm for $40 billion in 2020, but the deal fell apart in 2022 because of regulatory issues in the U.S. and U.K. As of February, Nvidia held a small stake in Arm, which is majority-owned by SoftBank. Meanwhile, SoftBank liquidated its entire stake in Nvidia earlier this month; SoftBank is backing the OpenAI Stargate project, which plans to use Arm technology in addition to chips from Nvidia and AMD.
[3]
Arm and Nvidia link up to let hyperscalers build custom AI servers
Arm announced on Monday that its Neoverse CPUs will integrate with Nvidia's AI chips through NVLink Fusion technology, enabling hyperscalers to pair Arm-based processors with Nvidia graphics processing units in custom infrastructure setups. The integration simplifies the process for customers preferring tailored infrastructure, particularly hyperscalers, of combining Arm-based Neoverse CPUs directly with Nvidia's dominant GPUs. Hyperscalers -- large-scale cloud operators -- often design custom systems to optimize performance and costs in data centers supporting AI workloads.

Nvidia employs partnerships across the technology sector amid its pivotal position in the AI industry. The announcement indicates that Nvidia is opening its NVLink platform to various custom chips, rather than requiring customers to adopt its own CPUs. Nvidia currently markets Grace Blackwell, an AI product that links multiple GPUs with an Nvidia-branded Arm-based CPU. Separate server configurations incorporate CPUs from Intel or Advanced Micro Devices, providing options for diverse hardware combinations in AI environments.

Microsoft, Amazon, and Google develop or deploy Arm-based CPUs within their cloud platforms to enhance control over configurations and lower expenses. These companies integrate such processors to customize data center operations, aligning hardware with specific workload demands in cloud computing services.

Arm does not manufacture CPUs itself; it licenses its instruction set technology, essential for building compatible chips, and provides designs that accelerate partner development of Arm-based processors.
Under Monday's announcement, custom Neoverse chips incorporate a new protocol for seamless data movement between CPUs and GPUs, facilitating efficient communication in high-performance computing tasks. In traditional servers, the CPU served as the primary component, but generative AI infrastructure centers on AI accelerator chips, predominantly Nvidia GPUs, with configurations supporting up to eight GPUs paired with a single CPU. This structure prioritizes accelerator performance for processing intensive AI models and data.

In September, Nvidia committed $5 billion to Intel, the leading CPU manufacturer. A core element of this investment enables Intel CPUs to connect with Nvidia's NVLink technology in AI servers, broadening compatibility options.

Nvidia agreed to acquire Arm for $40 billion in 2020, but regulators in the U.S. and U.K. blocked the deal in 2022. As of February, Nvidia retained a small stake in Arm, which SoftBank majority-owns. Earlier this month, SoftBank sold its entire Nvidia stake. SoftBank supports OpenAI's Stargate project, which is planned to incorporate Arm technology alongside chips from Nvidia and AMD.
[4]
NVIDIA & Arm Partner To Bring NVLink Fusion Support On Neoverse Platforms, Hi-Bandwidth Interconnect For AI Data Centers
NVIDIA & Arm have strengthened their partnership with the announcement of NVLink Fusion support for Neoverse AI data center platforms.

NVIDIA NVLink Fusion Now Available on Arm Neoverse AI Data Centers, Accelerating AI With More Bandwidth & Inter-Chip Coherency

NVIDIA Press Release: AI is reshaping data centers in a once-in-a-generation architectural shift, where efficiency per watt defines success. At the center is Arm Neoverse, deployed in over a billion cores and projected to reach 50% hyperscaler market share by 2025. Every major provider -- AWS, Google, Microsoft, Oracle, and Meta -- is building on Neoverse, underscoring its role in powering AI at scale. To meet surging demand, Arm is extending Neoverse with NVIDIA NVLink Fusion, the high-bandwidth, coherent interconnect first pioneered with Grace Blackwell. NVLink Fusion links CPUs, GPUs, and accelerators into one unified rack-scale architecture, removing memory and bandwidth bottlenecks that limit AI performance. Connected with Arm's AMBA CHI C2C protocol, it ensures seamless data movement between Arm-based CPUs and partners' preferred accelerators. Together, Arm and NVIDIA are setting a new standard for AI infrastructure, enabling ecosystem partners to build differentiated, energy-efficient systems that accelerate innovation across the AI era.

"Folks building their own Arm CPU, or using an Arm IP, can actually have access to NVLink Fusion, be able to connect that Arm CPU to an Nvidia GPU or to the rest of the NVLink ecosystem, and that's happening at the racks and scale-up infrastructure," said Ian Buck of NVIDIA.

Arm Press Release: Two years ago, Arm and NVIDIA achieved an industry first with the NVIDIA Grace Hopper platform and NVIDIA NVLink, delivering coherent CPU-GPU integration that redefined high-performance computing.
To continue innovating at this pace, the ecosystem needs choice and flexibility -- and NVLink Fusion gives partners the ability to connect Arm-based compute with their preferred accelerators through a coherent, high-bandwidth interface. The strong momentum and sustained customer demand for Grace Blackwell are now fueling the expansion of NVLink Fusion across the full Neoverse ecosystem, enabling partners to build differentiated, energy-efficient AI systems on Arm that meet the performance and scalability demands of the AI era. Ecosystem partners are adopting NVLink Fusion to remove memory and bandwidth bottlenecks that limit AI system performance.

NVIDIA NVLink Fusion was built to interface with AMBA CHI C2C (Coherent Hub Interface Chip-to-Chip) -- a technology invented by Arm that provides the critical protocol definition for a coherent, high-bandwidth connection between CPUs and accelerators. Building on this foundation, Arm is enabling the Neoverse platform with the latest edition of the AMBA CHI C2C protocol -- ensuring C2C compatibility with NVIDIA NVLink Fusion -- so that Neoverse-based SoCs can move data seamlessly between Arm-based CPUs and partners' preferred accelerators. The result is quicker integration, faster time to market, higher-bandwidth accelerated compute, and greater flexibility for ecosystem partners building next-generation AI systems. The Arm-NVIDIA partnership continues to grow, driving new levels of co-design and collaboration that deliver intelligence per watt, shaping the architecture of the AI era.
Arm integrates with Nvidia's NVLink Fusion technology, allowing hyperscalers to build custom AI servers combining Arm-based Neoverse CPUs with Nvidia GPUs. This partnership expands options for AI infrastructure while strengthening both companies' positions in the data center market.
Arm and Nvidia announced at the Supercomputing '25 conference that Arm has joined the NVLink Fusion ecosystem, marking a significant advancement in AI infrastructure development [1]. The integration enables central processing units based on Arm's technology to connect seamlessly with Nvidia's AI accelerators through NVLink Fusion technology [2].

The partnership centers on integrating NVLink IP directly into Arm's architecture portfolio, allowing licensees to build CPUs that connect natively with Nvidia's GPU ecosystem [1]. Custom Neoverse chips will incorporate a new protocol that enables seamless data movement between CPUs and GPUs, utilizing Arm's AMBA CHI C2C (Coherent Hub Interface Chip-to-Chip) protocol for coherent, high-bandwidth connections [4].

"Arm is integrating NVLink IP so that their customers can build their CPU SoCs to connect Nvidia GPUs," said Dion Harris, head of data center product marketing at Nvidia [1]. This integration removes memory and bandwidth bottlenecks that traditionally limit AI system performance.

The collaboration particularly benefits hyperscalers -- large-scale cloud operators including Microsoft, Amazon, and Google -- who are developing or deploying Arm-based CPUs in their cloud platforms [3]. These companies can now pair Arm-based Neoverse CPUs directly with Nvidia's dominant graphics processing units in custom infrastructure setups, providing enhanced control over configurations while reducing costs. With NVLink Fusion, hyperscalers can significantly reduce design complexity, save development costs, and reach the market faster [1]. The technology enables up to eight GPUs to be paired with a single CPU in AI server configurations, prioritizing accelerator performance for processing intensive AI models.

For Arm, the partnership provides a major competitive advantage in the data center market. As an IP provider, Arm gains the ability to offer licensees a ready-made pathway to build CPUs compatible with market-leading Nvidia GPUs for AI and high-performance computing applications [1]. This makes Arm-based designs more attractive to hyperscalers and sovereign cloud builders seeking custom CPU solutions.

For Nvidia, the partnership massively expands the pool of CPUs that can serve natively in Nvidia-centric AI systems without requiring Nvidia to build all processors itself [1]. By enabling Arm licensees to integrate NVLink directly into their systems-on-chips, Nvidia ensures future Arm-based processors will be architected around or compatible with Nvidia GPUs.

The announcement represents Nvidia's strategy of opening its NVLink platform to integrate with various custom chips rather than forcing customers to use its CPUs exclusively [2]. Nvidia currently sells Grace Blackwell, an AI product pairing multiple GPUs with an Nvidia-branded Arm-based CPU, alongside configurations using Intel or AMD CPUs.

This development could pose significant challenges to competitors, particularly AMD and Intel, as it reduces the appeal of alternative interconnect solutions and strengthens Nvidia's position in sovereign AI projects [1]. In September, Nvidia invested $5 billion in Intel to enable Intel CPUs to integrate into AI servers using NVLink technology, demonstrating the strategic importance of these partnerships [3].