MatX AI chip startup raises $500 million to challenge Nvidia's dominance in LLM processing


MatX, an AI chip startup founded by former Google engineers, has secured $500 million in Series B funding led by Jane Street and Situational Awareness. The company aims to deliver processors 10 times better at training and running large language models than Nvidia's GPUs, with plans to begin shipping in 2027.

MatX Secures $500 Million in Series B Funding to Challenge Nvidia

MatX, an AI chip startup founded by two former Google engineers, has raised $500 million in Series B funding led by Jane Street and Situational Awareness, an investment fund formed by former OpenAI researcher Leopold Aschenbrenner [1]. The round also attracted participation from Marvell Technology, NFDG, Spark Capital, and Stripe co-founders Patrick Collison and John Collison. While the company declined to disclose its exact valuation, it confirmed the figure now stands at several billion dollars, a significant leap from the $300 million valuation that followed its approximately $100 million Series A round led by Spark Capital in 2024 [1].

Source: The Register


Former Google Engineers Target LLM Optimization

MatX was co-founded in 2023 by Reiner Pope and Mike Gunter, both alumni of Google's semiconductor business. Pope led AI software development for Google's TPUs, the tech giant's proprietary AI chips, while Gunter served as a lead designer of the TPU hardware before both departed in 2022 [1]. Their mission centers on creating a processor specifically optimized for large language models, with the ambitious goal of making it 10 times better than Nvidia's GPUs at training LLMs and delivering results [1]. The startup joins a growing number of companies vying for a piece of the artificial intelligence chip market long dominated by Nvidia.

Source: Bloomberg


Blending SRAM and HBM for Dual Performance Advantages

The MatX One chip employs a novel architecture that Pope describes as a "splittable systolic array," which allows the chip to configure its circuits dynamically to match the datasets being processed [4]. The design strategy differentiates MatX from competitors by combining two distinct memory approaches. Currently, Nvidia and Google rely heavily on high-bandwidth memory (HBM) to build chips that handle the massive volume of calculations needed to train AI models, while other companies have used static RAM (SRAM) to process individual user queries faster for inference workloads.

Source: TechCrunch


MatX One will store most model weights in SRAM cells, a high-speed memory variety often embedded directly into chips that provides lower latency than other types of RAM [4]. Meanwhile, the chip will use HBM to store KV cache data, which tracks a model's state across sessions [3]. "Our position is that it is actually possible to do both in the same product and you get a much better product as a result," Pope explained. The company expects its chip to deliver more than 2,000 tokens per second for large 100-layer mixture-of-experts models [3].
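The rationale for that split can be illustrated with a back-of-the-envelope sketch: model weights are a fixed-size block that is read on every decoding step (suiting fast on-chip SRAM), while the KV cache grows with every generated token (suiting capacious HBM). The sketch below is purely conceptual, not MatX's actual design; the parameter count, layer count, head counts, and byte sizes are all illustrative assumptions.

```python
# Conceptual sketch (not MatX's design): contrasting the memory profile of
# model weights (fixed size, read every step) with the KV cache (grows with
# every generated token). All numbers below are illustrative assumptions.

BYTES_PER_PARAM = 2                      # assuming fp16/bf16 weights
params = 70e9                            # hypothetical 70B-parameter model
weight_bytes = params * BYTES_PER_PARAM  # constant throughout decoding

def kv_cache_bytes(tokens, layers=100, kv_heads=8, head_dim=128,
                   bytes_per_elem=2):
    # Per token, each layer stores one key vector and one value vector,
    # each of size kv_heads * head_dim elements.
    return tokens * layers * kv_heads * head_dim * 2 * bytes_per_elem

for t in (1_000, 100_000):
    print(f"{t:>7} tokens: weights {weight_bytes / 1e9:.0f} GB (fixed), "
          f"KV cache {kv_cache_bytes(t) / 1e9:.2f} GB (grows)")
```

Under these assumptions the weights stay at 140 GB regardless of context length, while the KV cache scales linearly from under half a gigabyte at 1,000 tokens to roughly 41 GB at 100,000 tokens, which is the asymmetry a split SRAM/HBM design would exploit.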

Manufacturing Plans and Market Entry Timeline

The new funding will enable MatX to produce its chips with TSMC, with plans to start shipping in 2027 [1]. The startup expects to complete the final design of its chip this year and hopes to begin the tape-out process, the final step of the semiconductor development workflow, within that timeframe [4]. The capital injection is specifically intended to help MatX reserve manufacturing capacity and parts to ensure rapid scaling once ready, according to co-founder Gunter. "This round puts us almost on the same footing as the players that have a huge amount of money," Gunter noted.

Investor Confidence and Market Positioning

Aschenbrenner expressed strong confidence in MatX's approach, stating the chip is uniquely suited to handle pretraining and reinforcement learning, two key techniques for developing AI models, and "has a real shot at becoming the most important AI chip company of this generation". Jane Street added that the investment "highlights our commitment to supporting transformative technology companies". MatX said its tests show the proposed chip can outperform Nvidia's upcoming Rubin Ultra product on computing performance per square millimeter.

The startup currently employs about 100 people and is hiring rapidly for engineering roles, though it remains less focused on building a large sales force, as its goal centers on selling to a select few leading AI labs. Top AI developers, including OpenAI and Anthropic, increasingly rely on a mix of chipmakers and cloud providers to meet their computing needs, potentially opening doors for newer alternatives to compete with Nvidia. Pope remains mindful of the challenges ahead, noting that ramping up manufacturing will present difficulties, particularly with memory components in short supply, and that success requires matching the market on multiple fronts while leading on at least one.
