4 Sources
[1]
Nvidia challenger AI chip startup MatX raised $500M
MatX, a chip startup founded by two former Google hardware engineers, has raised a $500 million Series B led by Jane Street and Situational Awareness, an investment fund formed by former OpenAI researcher Leopold Aschenbrenner. The company's goal is to make its processors 10 times better than Nvidia's GPUs at training LLMs and delivering results (inference). Other investors in the round include Marvell Technology, NFDG, Spark Capital, and Stripe co-founders Patrick Collison and John Collison, the startup's co-founder and CEO Reiner Pope announced Tuesday in a post on LinkedIn.

Although the company didn't release its latest valuation, Etched, MatX's closest competitor, raised a $500 million round at a $5 billion valuation, Bloomberg reported last month. Etched didn't immediately respond to a request for comment. MatX's latest round comes more than a year after its Series A of about $100 million, which was led by Spark Capital. TechCrunch earlier reported that the 2024 round valued the startup at more than $300 million.

Before co-founding MatX in 2023, Pope led AI software development for Google's TPUs, the tech giant's proprietary AI chips. His co-founder, Mike Gunter, was a lead designer of the TPU hardware before leaving to launch the startup. The new funding will help MatX produce its chips with TSMC, with plans to start shipping them in 2027.
[2]
AI Chip Startup MatX Raises $500 Million to Compete With Nvidia
MatX, an AI chip startup founded by two alumni of Google's semiconductor business, has raised more than $500 million in a new funding round to produce hardware that competes with Nvidia Corp. The financing was led by Jane Street and Situational Awareness, the investment firm founded by former OpenAI researcher Leopold Aschenbrenner. Other backers include Marvell Technology Inc., venture firms NFDG and Spark Capital, as well as Stripe Inc. co-founders Patrick and John Collison. The company declined to disclose the exact valuation, but said it's now valued at several billion dollars.

MatX is one of a growing number of upstarts vying for a piece of the artificial intelligence chips market long dominated by Nvidia's graphics processing units. The startup was founded by Reiner Pope, who worked on software for Google's chips and AI models, and Mike Gunter, a hardware engineer for the search giant's tensor processing unit chips. Pope and Gunter left Google in 2022 with the goal of creating a better chip from scratch that focuses solely on running large language models, the technology that underpins today's AI chatbots.

Their goal is to design a new line of competitive hardware that blends two distinct approaches used by other chipmakers. Currently, Nvidia and Google rely heavily on high-bandwidth memory to build chips that handle the massive volume of calculations needed to train AI models. Other chip companies have used static random access memory to process individual user queries faster, catering to the surging demand for inference, or running AI models after they've been trained. "Our position is that it is actually possible to do both in the same product and you get a much better product as a result," said Pope, who serves as the company's chief executive officer.

The startup expects to complete the final design of its chip this year and hopes to begin shipping in 2027. MatX plans to work with Taiwan Semiconductor Manufacturing Co. to make the product. The new funding is intended to help it reserve manufacturing capacity and parts to ensure it can ship rapidly once ready, Gunter said. "It lets us compete on kind of equal grounds with the largest companies in the way that they can scale very quickly," he said. "This round puts us almost on the same footing as the players that have a huge amount of money."

Jane Street said in a statement that the investment "highlights our commitment to supporting transformative technology companies." Aschenbrenner, meanwhile, said MatX's chip is uniquely suited to handle pretraining and reinforcement learning, two key techniques for developing AI models. It "has a real shot at becoming the most important AI chip company of this generation," he said. MatX has about 100 employees and is hiring quickly for engineering roles. The startup is less focused on building up a large salesforce, as its goal is to sell to the select few leading AI labs.
Top AI developers, including OpenAI and Anthropic, are increasingly relying on a mix of multiple chipmakers and cloud providers to meet their computing needs, potentially opening the door to newer alternatives. MatX said its tests show the proposed chip can outperform Nvidia's upcoming Rubin Ultra product, based on a measure of computing performance per square millimeter.

Still, Pope is mindful of the uphill battle to credibly rival Nvidia. Like all companies vying to sell AI chips, MatX will need to correctly predict where the market is going and what types of AI models gain popularity, then tailor its hardware accordingly, he said. Ramping up manufacturing will also present challenges, particularly with memory components in short supply. "You need to match what is in the market on all of maybe five different important aspects and you need to be far ahead on at least one of them," Pope said. "A lot of the startup mentality is minimum viable product will be ahead on one thing and ignore everything else. That seems to have not really won in the market."
[3]
AI chip startups soak up $1.1B in VC funding this week
Fears of an AI bubble haven't tempered vulture capitalists' enthusiasm for silicon: AI chip startups collectively walked away with more than a billion dollars of new capital on Tuesday, showing that venture capitalists are still excited about the opportunity to challenge Nvidia's dominance despite all the talk of an AI bubble.

MatX, which was founded in 2022 by Google engineers Reiner Pope and Mike Gunter, received the lion's share of the cash. The startup raked in $500 million in a Series B funding round led by VC firms Jane Street and Situational Awareness LP. The startup aims to put out its first chip, an LLM-optimized accelerator called the MatX One, later this year. While many AI startups like Groq, d-Matrix, and SambaNova have prioritized inference, MatX says its chip will do it all: pre-training, reinforcement learning, and inference prefill and decode. Beyond this, concrete details on the chip remain thin. The company boasts the chip's split systolic array will deliver the highest "FLOPS per mm2" and scale to "hundreds of thousands of chips."

Speaking of scale, the company is certainly going to need plenty of silicon if it wants to cram the latest LLMs into SRAM. Compared to the HBM used by AMD or Nvidia, SRAM is orders of magnitude faster. MatX expects its first chip will be able to deliver more than 2,000 tokens a second for a large 100-layer mixture-of-experts model. The downside to SRAM is it isn't very space-efficient. The largest dies today can only fit a few hundred megabytes of the stuff and still have room for compute. MatX isn't the first to go down this road. Cerebras got around this constraint by building a wafer-sized chip, while for Groq it was a numbers game: want to run a bigger model? Just add (hundreds) more chips. MatX appears to be borrowing that same strategy.

However, unlike Groq or Cerebras, MatX will also use HBM -- just not to store model weights if it can avoid it. Instead, MatX says, the HBM will be used to store the model's key-value (KV) caches, which keep track of a model's state across sessions. (You can think of KV caches as the model's short-term memory.) By combining SRAM and HBM in this way, MatX believes its chip will be able to achieve both the sheer throughput of GPUs and the speed of SRAM-based designs. And now it's got more money to try and prove it out.

Also on Tuesday, Dutch startup Axelera announced it had raised a quarter billion dollars in a new funding round led by Innovation Industries to advance the development of its low-power RISC-V based AI accelerators. Unlike MatX, which is going straight for Nvidia's jugular, Axelera's ambitions are much more modest. Its Europa and Metis AI accelerators are designed primarily for power-constrained edge workloads, like computer vision and robotics. But by focusing on the edge first, the company aims to develop a compute architecture that can scale efficiently to take on any AI/ML task, whether it's running at the edge or in the datacenter.

The company has already shown progress toward this end. Axelera's latest chip, Europa, boasts up to 629 TOPS of INT8, fed by 64GB of DRAM good for 200 GB/s of bandwidth. In terms of compute, this puts it on par with an Nvidia A100 while using less than a sixth the power at 45 watts. Having said that, it still trails the nearly six-year-old accelerator in memory capacity (80GB of HBM2E) and bandwidth (2 TB/s). And that's for a chip that's still primarily designed for the edge. The company is working on a new chip designed to provide even greater performance.
That chip, codenamed Titania, is being developed in partnership with the EU's EuroHPC Digital Autonomy with RISC-V in Europe (DARE) program, which seeks a domestic alternative to US chips for supercomputing. Finally, SambaNova received a $350 million cash infusion from Vista Equity, Cambium Capital, and Intel's investment fund to bring its next-gen dataflow accelerators to market. We've got all the details here, but in short, the funding was announced alongside a multi-year collab that'll see the chip startup cram Chipzilla's Xeons into its AI servers. The company also disclosed a new AI accelerator, the SN50, which will be deployed by SoftBank in its Japanese datacenters starting later this year. ®
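To make the SRAM capacity constraint described in the MatX portion of the excerpt above concrete, here is a minimal back-of-envelope sketch in Python. The die SRAM size, parameter count, and weight precision below are illustrative assumptions, not MatX, Groq, or Cerebras figures.

```python
import math

# Rough back-of-envelope: how many SRAM-first chips does it take just to
# hold an LLM's weights on-die? All numbers are illustrative assumptions,
# not vendor specifications.

def chips_needed_for_weights(n_params: float,
                             bytes_per_param: float,
                             sram_bytes_per_chip: float) -> int:
    """Minimum chip count if every weight must live in on-die SRAM."""
    total_bytes = n_params * bytes_per_param
    return math.ceil(total_bytes / sram_bytes_per_chip)

if __name__ == "__main__":
    params = 70e9           # hypothetical 70B-parameter model
    bytes_per_param = 1     # assume 8-bit (FP8/INT8) weights
    sram_per_chip = 256e6   # "a few hundred megabytes" of SRAM per die

    print(chips_needed_for_weights(params, bytes_per_param, sram_per_chip))
    # ~274 chips just for the weights, which is why SRAM-first designs
    # end up scaling out to hundreds of chips per model.
```

Under these assumed numbers, holding one mid-sized model entirely in SRAM already takes a few hundred dies, which is the trade-off the excerpt describes Groq accepting and MatX trying to soften by spilling the KV cache to HBM.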
[4]
Chip startup MatX raises $500M to speed up large language models - SiliconANGLE
MatX Inc., a chip startup founded by former Google LLC engineers, has raised $500 million in funding to bring its first product to market. Jane Street and Situational Awareness led the Series B investment. MatX stated today that it was joined by more than a half dozen others, including chipmaker Marvell Technology Inc. and Stripe Inc.'s co-founders. The startup previously raised over $100 million from a consortium that included many of the same backers.

MatX is developing a processor optimized to run large language models. The company says that the chip, which is called the MatX One, will provide higher throughput than today's graphics cards. Hundreds of thousands of MatX One accelerators can be linked together into a cluster to run large-scale training and inference workloads.

Many artificial intelligence processors implement a circuit design known as a systolic array. It's a collection of relatively simple, identical computing modules linked together by a network. Each module performs a small portion of the calculations involved in processing an AI prompt. MatX One is based on an architecture the company calls a splittable systolic array. The name hints that the chip may be capable of splitting its systolic arrays into multiple smaller ones. That approach makes it possible to tailor the configuration of a chip's circuits to the datasets they process, which boosts efficiency.

MatX One will store most model weights, the settings that determine how an LLM processes prompts, in SRAM cells. SRAM is a high-speed memory variety that is often embedded directly into chips next to their logic circuits. The technology provides lower latency than other types of RAM, which speeds up processing. MatX One will use slower, higher-capacity memory called HBM to store KV cache data. A KV cache is a mechanism that LLMs use to speed up processing. It reduces the need to repeat frequently occurring calculations by caching their results, which saves time.

A series of research blog posts on MatX's website hints that its chip will also support other performance optimization methods. One post reveals that the company has been working to combine two of the most popular methods, speculative decoding and blockwise sparse attention. The former technology speeds up prompt response generation, while the latter increases the efficiency of LLMs' attention mechanism.

"The chip combines the low latency of SRAM-first designs with the long-context support of HBM," MatX co-founder and chief executive officer Reiner Pope wrote in a blog post today. "These elements, plus a fresh take on numerics, deliver higher throughput on LLMs than any announced system, while simultaneously matching the latency of SRAM-first designs." The company will use its newly raised capital to finalize the design of its chip. MatX hopes to complete the tape-out process, the final step of the semiconductor development workflow, within a year.
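The SRAM-for-weights, HBM-for-KV-cache split described above hinges on how the two data structures behave during decoding: weight matrices are fixed and reread on every token, while the key-value cache grows with context length. The toy single-head attention loop below is a generic NumPy sketch of KV caching, not MatX's design; all shapes, sizes, and names are illustrative.

```python
import numpy as np

# Toy single-head attention decode loop illustrating a KV cache.
# Generic sketch, not MatX's architecture: the weights stay constant
# (a natural fit for fast on-die SRAM), while the K/V cache grows with
# every generated token (a natural fit for larger, slower HBM).

d = 64                        # head dimension (illustrative)
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

k_cache, v_cache = [], []     # "short-term memory": one entry per past token

def decode_step(x: np.ndarray) -> np.ndarray:
    """Process one new token embedding, reusing cached keys and values."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    k_cache.append(k)
    v_cache.append(v)
    K = np.stack(k_cache)             # (seq_len, d) -- grows each step
    V = np.stack(v_cache)
    scores = K @ q / np.sqrt(d)       # attend over the whole history
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

for _ in range(8):                    # decode 8 tokens
    out = decode_step(rng.standard_normal(d))

print(len(k_cache), out.shape)        # cache holds 8 entries; output shape (64,)
```

The point of the sketch is the asymmetry: without the cache, every step would recompute keys and values for the entire history, which is the repeated work the KV cache exists to avoid.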
MatX, an AI chip startup founded by two former Google engineers, has raised $500 million in Series B funding led by Jane Street and Situational Awareness, an investment fund formed by former OpenAI researcher Leopold Aschenbrenner [1]. The round also attracted participation from Marvell Technology, NFDG, Spark Capital, and Stripe co-founders Patrick Collison and John Collison. While the company declined to disclose its exact valuation, it confirmed the figure now stands at several billion dollars, a significant leap from its $300 million valuation following its approximately $100 million Series A round led by Spark Capital in 2024 [1].
MatX was co-founded in 2023 by Reiner Pope and Mike Gunter, both alumni of Google's semiconductor business. Pope led AI software development for Google's TPUs, the tech giant's proprietary AI chips, while Gunter served as a lead designer of the TPU hardware before they departed in 2022 [1]. Their mission centers on creating a processor specifically optimized for large language models, with the ambitious goal of making it 10 times better at training LLMs and delivering results than Nvidia's GPUs [1]. The startup joins a growing number of companies vying for a piece of the artificial intelligence chips market long dominated by Nvidia.
The MatX One chip employs a novel architecture that Pope describes as a "splittable systolic array," which allows the chip to configure its circuits dynamically to match the datasets being processed [4]. The design strategy differentiates MatX from competitors by combining two distinct memory approaches. Currently, Nvidia and Google rely heavily on HBM to build chips that handle the massive volume of calculations needed to train AI models, while other companies have used SRAM to process individual user queries faster for inference workloads.
MatX One will store most model weights in SRAM cells, a high-speed memory variety often embedded directly into chips that provides lower latency than other types of RAM [4]. Meanwhile, the chip will use HBM to store KV cache data, which tracks a model's state across sessions [3]. "Our position is that it is actually possible to do both in the same product and you get a much better product as a result," Pope explained. The company expects its chip to deliver more than 2,000 tokens per second for large 100-layer mixture-of-experts models [3].
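To give a rough sense of why memory placement drives tokens-per-second figures like the one above, the sketch below applies a simple roofline-style estimate for memory-bandwidth-bound decoding: each generated token must stream the active weights and the KV cache through the memory system once. Every number here (active parameter count, cache size, bandwidths) is an illustrative assumption, not a MatX, Nvidia, or benchmark figure.

```python
# Roofline-style estimate of single-stream decode throughput for a
# memory-bandwidth-bound accelerator. Each generated token streams the
# active weights plus the KV cache once, so faster memory for the hot
# working set translates directly into tokens per second.
# All numbers are illustrative assumptions, not vendor specifications.

def decode_tokens_per_sec(active_param_bytes: float,
                          kv_cache_bytes: float,
                          mem_bandwidth_bytes_per_sec: float) -> float:
    bytes_per_token = active_param_bytes + kv_cache_bytes
    return mem_bandwidth_bytes_per_sec / bytes_per_token

if __name__ == "__main__":
    active_params = 20e9 * 1   # hypothetical MoE with 20B active params, 8-bit weights
    kv_cache = 2e9             # hypothetical 2 GB KV cache for a long context

    hbm_bw = 4e12              # ~4 TB/s, ballpark for a modern HBM stack
    sram_bw = 40e12            # on-die SRAM assumed an order of magnitude faster

    print(f"HBM-bound:  {decode_tokens_per_sec(active_params, kv_cache, hbm_bw):.0f} tok/s")
    print(f"SRAM-bound: {decode_tokens_per_sec(active_params, kv_cache, sram_bw):.0f} tok/s")
```

The model is deliberately crude (it ignores compute limits, batching, and interconnect), but it illustrates the lever MatX is pulling: keeping the per-token working set in the fastest memory tier raises the throughput ceiling by roughly the ratio of the bandwidths.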
The new funding will enable MatX to produce its chips with TSMC, with plans to start shipping in 2027 [1]. The startup expects to complete the final design of its chip this year and hopes to begin the tape-out process, the final step of the semiconductor development workflow, within that timeframe [4]. The capital injection is specifically intended to help MatX reserve manufacturing capacity and parts to ensure rapid scaling once ready, according to co-founder Gunter. "This round puts us almost on the same footing as the players that have a huge amount of money," Gunter noted.

Aschenbrenner expressed strong confidence in MatX's approach, stating the chip is uniquely suited to handle pretraining and reinforcement learning, two key techniques for developing AI models, and "has a real shot at becoming the most important AI chip company of this generation." Jane Street added that the investment "highlights our commitment to supporting transformative technology companies." MatX said its tests show the proposed chip can outperform Nvidia's upcoming Rubin Ultra product based on computing performance per square millimeter.
The startup currently employs about 100 people and is hiring rapidly for engineering roles, though it remains less focused on building a large salesforce, as its goal centers on selling to the select few leading AI labs. Top AI developers, including OpenAI and Anthropic, are increasingly relying on a mix of multiple chipmakers and cloud providers to meet their computing needs, potentially opening doors for newer alternatives to compete with Nvidia. Pope remains mindful of the challenges ahead, noting that ramping up manufacturing will present difficulties, particularly with memory components in short supply, and that success requires matching the market on multiple fronts while leading on at least one.
Summarized by
Navi