Google in talks with Marvell Technology to develop two AI chips, diversifying chip suppliers

Reviewed by Nidhi Govil


Google is negotiating with Marvell Technology to design two new AI chips—a memory processing unit and an inference-optimized TPU—adding a third partner to its custom silicon supply chain alongside Broadcom. The move signals Google's strategy to diversify chip suppliers as inference workloads become the dominant compute cost in AI.

Google Expands Custom AI Chip Strategy with Marvell Technology

Google is in deal talks with Marvell Technology to develop two AI chips designed to run AI models more efficiently, according to a report by The Information citing two people with direct knowledge of the discussions [1]. The potential partnership would involve creating a memory processing unit to complement Google's existing Tensor Processing Unit (TPU) infrastructure and a new TPU built specifically for running AI models during inference [2]. The companies aim to finalize the design of the memory processing unit as soon as next year before handing it off for test production [4].

Source: Wccftech

Marvell Technology shares jumped 7% in premarket trading on Monday following the news, a move that would add more than $9 billion to the company's $122.15 billion market value if the gains held [5]. The stock has rallied approximately 50% year to date, including a 30% gain in April alone [3]. Meanwhile, Broadcom shares fell nearly 2% Monday as investors assessed the implications of Google potentially diversifying its chip design partnerships [2].

Diversifying Beyond Broadcom Without Replacement

The discussions came days after Broadcom, Google's primary custom chip partner, announced a long-term agreement to design and supply TPUs and networking components through 2031 [3]. The timing suggests Google is not replacing Broadcom but adding a third design partner to a chip supply chain that already includes Broadcom for high-performance chip variants, MediaTek for cost-optimized variants at 20% to 30% lower cost, and TSMC for fabrication [3]. This strategy represents diversification rather than substitution, mirroring how automotive companies manage component suppliers to prevent any single vendor from gaining excessive leverage.

Google deploys TPUs both for training AI models and for responding to user queries, a process known as inference, and has historically worked with Broadcom to design its chips [1]. The move to diversify chip suppliers comes amid surging demand for custom AI chips as businesses seek alternatives to Nvidia's pricey processors [5]. Big Tech companies such as Google and Facebook parent Meta are moving fast to reduce their dependence on external chip suppliers by expanding their custom chip efforts [1].

Source: ET

Inference Emerges as Dominant AI Workload Driver

Google's seventh-generation TPU, Ironwood, debuted this month as what the company calls "the first Google TPU for the age of inference," delivering ten times the peak performance of the TPU v5p and scaling to 9,216 liquid-cooled chips in a superpod producing 42.5 FP8 exaflops [3]. Google plans to build millions of Ironwood units this year, and the Marvell-designed AI inference chips would supplement rather than replace Ironwood, potentially targeting different workload profiles or cost points [3].
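As a rough sanity check on the superpod figures above, dividing the quoted 42.5 FP8 exaflops across 9,216 chips implies on the order of 4.6 petaflops of FP8 throughput per Ironwood chip. This is a back-of-the-envelope calculation from the numbers in this article, not an official per-chip specification:

```python
# Back-of-the-envelope: per-chip FP8 throughput implied by the superpod figures.
SUPERPOD_CHIPS = 9216            # liquid-cooled chips per superpod (quoted above)
SUPERPOD_FP8_EXAFLOPS = 42.5     # aggregate FP8 performance (quoted above)

# Convert exaflops (1e18) to petaflops (1e15) per chip.
per_chip_pflops = SUPERPOD_FP8_EXAFLOPS * 1e18 / SUPERPOD_CHIPS / 1e15
print(f"Implied per-chip FP8 throughput: {per_chip_pflops:.2f} PFLOPS")
```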

The shift from training to inference as the primary demand driver is reshaping the chip market. Training a frontier AI model is a one-time event requiring enormous compute for weeks or months, while inference runs continuously, serving every query from every user [3]. As AI products reach hundreds of millions of users, inference becomes the dominant expense, making purpose-built inference silicon a competitive advantage that general-purpose GPUs cannot match on cost or efficiency. TPU sales have become a key driver of growth in Google's cloud revenue as the company aims to show investors that its AI investments are generating returns [4].

Marvell's Growing Custom Silicon Business

Marvell's data center revenue reached a record $6.1 billion in its fiscal year ending February 2026, out of total revenue of $8.2 billion, up 42% year over year [3]. The chip designer runs a custom silicon business with a $1.5 billion annual run rate across 18 cloud-provider design wins, building custom AI chips for Amazon's Trainium processors, Microsoft's Maia AI accelerator, and Meta's data processing unit, in addition to existing work with Google on the Axion Arm CPU [3].

Source: Benzinga

Both Marvell and its larger rival Broadcom help clients design chips, as growing adoption of AI tools boosts demand for the specialized processors used in the advanced data centers powering AI workloads [5]. The two act in a design-services role, helping customers translate chip designs into silicon and providing back-end support before processors are sent for manufacturing at fabrication plants run by companies like Taiwan Semiconductor Manufacturing Company [2].

Nvidia invested $2 billion in Marvell at the end of March, partnering through NVLink Fusion to integrate Marvell's custom chips and networking with Nvidia's interconnect fabric and positioning the company at the intersection of both the GPU and ASIC ecosystems [3]. In December 2025, Marvell acquired Celestial AI for up to $5.5 billion, gaining photonic interconnect technology for what CEO Matt Murphy called "the industry's most complete connectivity platform for AI and cloud customers" [3]. Murphy is targeting 20% market share in custom AI chips and expects roughly 30% year-over-year revenue growth in fiscal 2027.

Market Implications and Competitive Landscape

The custom ASIC market is projected to grow 45% in 2026 and reach $118 billion by 2033 as AI hardware development accelerates [3]. Last week, Meta extended its deal with Broadcom to produce several generations of custom AI processors; the social media giant paid Broadcom $2.3 billion last year for AI chip design and related services. Meta also committed to deploying 1 gigawatt of its own custom MTIA chips using Broadcom technology [2].

Marvell trades at 33.35 times its estimated earnings for the next 12 months, compared with 27.84 for Broadcom. The average rating among the 44 analysts covering Marvell is "buy," with a median price target of $125 [5]. AI lab Anthropic uses a range of chips, including Google's in-house TPUs, to develop and run its AI software and chatbot Claude, demonstrating the broader ecosystem's reliance on these custom AI accelerators [5].
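For context, the forward earnings multiples quoted above imply that Marvell trades at roughly a 20% valuation premium to Broadcom. A simple calculation from the article's figures (not an independent valuation):

```python
# Forward P/E premium of Marvell over Broadcom, from the multiples quoted above.
MRVL_FWD_PE = 33.35   # Marvell, next-12-months earnings multiple
AVGO_FWD_PE = 27.84   # Broadcom, next-12-months earnings multiple

premium_pct = (MRVL_FWD_PE / AVGO_FWD_PE - 1) * 100
print(f"Marvell forward P/E premium over Broadcom: {premium_pct:.1f}%")
```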

© 2026 TheOutpost.AI All rights reserved