Nvidia's $2 billion Marvell investment turns competitor into ecosystem partner via NVLink Fusion

Reviewed by Nidhi Govil


Nvidia has invested $2 billion in Marvell Technology, pulling one of the largest custom AI chip designers into its NVLink Fusion ecosystem. The deal ensures that Marvell's custom accelerators for hyperscalers like Amazon, Google, and Microsoft remain dependent on Nvidia's infrastructure, effectively turning potential competition into a revenue stream through mandatory platform components.

Nvidia Invests in Marvell to Expand AI Infrastructure Control

Nvidia announced a $2 billion investment in Marvell Technology, establishing a strategic partnership that integrates the custom chip designer into Nvidia's NVLink Fusion ecosystem [1][2]. The deal positions Nvidia to capture revenue from custom AI accelerators that hyperscalers commission specifically to reduce their dependence on Nvidia GPUs [4]. Marvell's stock surged nearly 13 percent following the announcement, while Nvidia shares rose 5.6 percent [4]. The move marks a significant expansion of Nvidia's influence across AI infrastructure, from silicon design to AI networking technology.

Source: Analytics Insight


NVLink Fusion Creates Proprietary Interconnect Ecosystem

NVLink Fusion, first announced in May 2025, enables heterogeneous AI infrastructure in which non-Nvidia accelerators communicate with Nvidia GPUs, CPUs, and networking hardware over a high-bandwidth, low-latency fabric [1][5]. The architecture requires every NVLink Fusion platform to include at least one Nvidia product, whether a CPU, GPU, or switch [1]. This design ensures that Marvell-designed ASICs using the fabric generate Nvidia revenue regardless of their intended purpose [4]. Under the partnership, Marvell will provide custom XPUs and NVLink Fusion-compatible scale-up networking, while Nvidia supplies Vera CPUs, ConnectX NICs, BlueField DPUs, NVLink interconnect, and Spectrum-X switches [1].

Custom AI Accelerators Now Tied to Nvidia Platform

Marvell is one of the two dominant custom ASIC design houses, alongside Broadcom, with clients including AWS, Microsoft, and Google [1]. The company helps these hyperscalers develop alternatives to Nvidia AI chips, including Amazon's Trainium series, which makes this investment particularly strategic [1][2]. Marvell's custom AI XPU business generated $1.5 billion in fiscal 2026 revenue and is expected to double by fiscal 2028, with 18 active custom silicon projects, including 12 devices for Amazon, Google, Microsoft, and Meta [4]. The company reported $8.2 billion in revenue for fiscal year 2026, with data center revenue accounting for more than 74 percent of the total [1][5].

Source: Benzinga


Silicon Photonics and AI Data Centers Drive Collaboration

The two companies will also collaborate on silicon photonics and AI-RAN infrastructure for 5G and 6G networks [1][2]. Marvell's acquisition of Celestial AI late last year added photonic fabric technology to its portfolio, and this deal places that capability inside Nvidia's ecosystem [1]. Silicon photonics uses light instead of copper to move data between chips at the speeds next-generation AI clusters demand [4]. Jensen Huang, founder and CEO of Nvidia, stated: "The inference inflection has arrived. Token generation demand is surging, and the world is racing to build AI factories" [5]. Matt Murphy, chairman and CEO of Marvell, emphasized that "by connecting Marvell's leadership in high-performance analog, optical DSP, silicon photonics and custom silicon to Nvidia's expanding AI ecosystem through NVLink Fusion, we are enabling customers to build scalable, efficient AI infrastructure."

Source: Market Screener


Strategic Pattern of $2 Billion Investments Emerges

Nvidia has made a series of $2 billion investments in recent months, including stakes in CoreWeave, Nebius, Synopsys, Coherent, and Lumentum [4]. Each targets a different layer of the AI infrastructure being built at unprecedented speed, from cloud providers to chip design tools and high-speed optical interconnects [4]. The common thread is that each investment makes the recipient more dependent on Nvidia's platform, while Nvidia gains both financial exposure to and architectural influence over potential competitors [4]. Analysts describe the approach as a toll booth on custom ASICs: Nvidia collects revenue even when customers build alternatives to its GPUs [4].

NVLink Fusion Ecosystem Grows While UALink Faces Challenges

Marvell joins an NVLink Fusion ecosystem that has expanded steadily since launch. Samsung Foundry joined in October to offer design-to-manufacturing support for NVLink-compatible custom chips, while Arm entered in November, enabling its licensees to build CPUs with native NVLink connectivity [5]. Nvidia rivals AMD, Intel, and Broadcom remain absent, instead backing the open UALink standard as a competing rack-scale interconnect [5]. However, UALink faces what analysts describe as a crisis of the commons: competing priorities among members and a 128G specification launch that trails the pace of accelerator deployment [4]. NVLink Fusion is becoming the default interconnect for custom AI silicon, not because it is open, but because Nvidia's software ecosystem, particularly CUDA, makes it the path of least resistance for customers who need hardware to work immediately [4].

TheOutpost.ai

© 2026 Triveous Technologies Private Limited