5 Sources
[1]
Intel-Backed SambaNova Raises Cash, Touts SoftBank Chip Contract
Intel will combine SambaNova's offerings with its own chips in products for data center customers as part of a "multiyear collaboration", with the goal of serving customers until its own AI accelerators are ready. Artificial intelligence chip startup SambaNova Systems Inc. raised $350 million in a new funding round and said that customers such as SoftBank Group Corp. are preparing to use its latest technology in data centers. As part of the fundraising announcement, SambaNova introduced a chip called SN50 that's designed to run as much as five times faster than rival processors. The startup is working with investor Intel Corp. to speed adoption of the technology, which will ship to customers later this year. SambaNova is trying to carve out a piece of the massive market for AI computing -- an area currently dominated by Nvidia Corp. There's growing interest in alternatives to Nvidia's AI accelerators, the chips that help develop and run artificial intelligence models, especially if they can offer a cost advantage. "Customers are asking for more choice and more efficient ways to scale AI," Kevork Kechichian, general manager of Intel's data center group, said in a statement. As part of a "multiyear collaboration," Intel will combine SambaNova's offerings with its own chips in products for data center customers. SoftBank will be the first customer to deploy the SN50 platform, using it to power AI computing in Japan, SambaNova said. The Japanese company already uses older SambaNova products. The Series E funding round was led by Vista Equity Partners and Cambium Capital, with Battery Ventures, Mayfield Capital and other new investors participating. The financing was oversubscribed, with "strong participation" from Intel Capital, the chipmaker's venture arm, SambaNova said. Intel Chief Executive Officer Lip-Bu Tan also serves as chairman of SambaNova, and the latest fundraising followed stalled talks about the chip giant acquiring the startup. 
Intel said it took steps to make sure it wouldn't face questions about conflicts of interest during the discussions. "Lip-Bu Tan recused himself from this process and Kevork Kechichian was the executive sponsor," the company said in an emailed statement. The San Jose, California-based startup declined to give its valuation in the deal. A person familiar with the matter said it was above $2 billion. That would still be well below SambaNova's valuation in 2021, when a round led by SoftBank Group Corp.'s Vision Fund 2 put the business's worth at around $5 billion. For Intel, the combination of its processors and SambaNova's offerings will help serve customers until its own AI accelerators are ready. The interim arrangement is meant to keep its technology relevant during an explosive build-out of AI capacity.
[2]
SambaNova raises $350M with Intel backing
Upstart's 5th-gen RDU aims to undercut Nvidia's B200 on speed and cost AI infrastructure company SambaNova has raised $350 million to advance its dataflow architecture, which it pitches as an alternative to GPU-based AI systems. Some of the money came from Intel Capital, scotching rumors Chipzilla wanted to buy SambaNova. Other participants in this funding round include Vista Equity, Cambium Capital, and several other VC funds that expect strong returns when SambaNova brings its latest generation of reconfigurable dataflow units (RDUs) to market. Intel will get especially close to the upstart with a "multi-year" collaboration that aims to provide customers an alternative to GPUs for generative AI deployments. Naturally, that means SambaNova's new RDUs will use Xeon CPUs, but, beyond that, the alliance will include hardware-software co-design. "We've got a product that's very competitive. They've got scale; they've got capital; they've got customers that we can collaborate on," SambaNova CEO Rodrigo Liang told El Reg. Intel is not just off the pace in the generative AI arena - the giant has arguably missed the boat entirely following repeated missteps with its datacenter GPU and Gaudi product lines. "As we evolve and expand our AI engagements from edge to cloud, we're addressing these needs in multiple ways to remain a key player in the ecosystem and protect and grow market share," Kevork Kechichian, EVP of Intel's Datacenter Group, said in a statement. SambaNova expects to ship its SN50 accelerators later this year with Japan's SoftBank already signed up as one of the startup's first customers. The new chip represents a significant upgrade over SambaNova's 2024-vintage SN40L. The company says the SN50 will deliver a 2.5x higher 16-bit floating-point performance and 5x higher perf at FP8. That works out to 1.6 and 3.2 petaFLOPS respectively. 
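The quoted multipliers and peak figures can be sanity-checked in a few lines. This is a sketch; the SN40L baselines below are back-calculated from the article's numbers, not stated anywhere, and assume the multipliers are relative to SN40L peak throughput at each precision:

```python
# SN50 peak throughput figures quoted above, in petaFLOPS.
SN50_FP16_PFLOPS = 1.6
SN50_FP8_PFLOPS = 3.2

# Claimed uplift over the previous-generation SN40L.
FP16_UPLIFT = 2.5
FP8_UPLIFT = 5.0

# Back-calculate the implied SN40L baselines.
sn40l_fp16 = SN50_FP16_PFLOPS / FP16_UPLIFT
sn40l_fp8 = SN50_FP8_PFLOPS / FP8_UPLIFT

print(f"Implied SN40L: {sn40l_fp16:.2f} PFLOPS FP16, {sn40l_fp8:.2f} PFLOPS FP8")
# → Implied SN40L: 0.64 PFLOPS FP16, 0.64 PFLOPS FP8
```

Both multipliers back out to the same 0.64-petaFLOPS baseline, suggesting FP8 on the older part ran no faster than FP16.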
SambaNova says its signature three-tier memory hierarchy, which allows it to swap between models in a fraction of a second and efficiently offload key-value caches, remains largely unchanged. Each RDU features 432 MB of on-chip SRAM, 64 GB of HBM2E memory good for 1.8 TB/s of bandwidth, and between 256 GB and 2 TB of DDR5 memory. Flexibility on the latter point will no doubt win SambaNova some points considering the skyrocketing price of memory. HBM2E might seem like an odd choice given its age, but Liang is keen to ensure his company can ship product at a time of rising memory prices. "From a cost perspective, it's important to make sure that we don't get into a supply chain fight," he said. While a big improvement over its predecessor, the SN50 doesn't look all that impressive on paper, at least compared to modern GPUs. It'll deliver about 64 percent of the dense FP8 compute, a third of the HBM capacity, and less than a quarter of the memory bandwidth of Nvidia's nearly two-year-old Blackwell generation. However, it's important to remember that "peak" advertised FLOPS and bandwidth aren't the same thing as achievable FLOPS or bandwidth. SambaNova argues that its dataflow architecture, which aims to reduce data movement overheads by overlapping computation and communication, allows it to use fewer, less powerful accelerators. In the case of the SN50, SambaNova boasts it can deliver up to five times higher per-user generation speed compared to Nvidia's B200. SambaNova's claims would be hard to swallow if it weren't already one of the highest performing inference providers. According to Artificial Analysis, SambaNova's SN40L accelerators are able to serve up LLMs like the 230 billion parameter MiniMax M2 model at up to 378 tokens per second, more than a hundred tokens per second faster than the next closest GPU-based inference provider. Having said that, GPU-based inference platforms are catching up as Nvidia's NVL72 racks see wider adoption. 
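The ratios against Blackwell quoted above can be reproduced from the SN50 figures in the article. Note the B200 numbers below are assumptions (the commonly quoted per-GPU figures that the article's percentages imply: roughly 5 petaFLOPS dense FP8, 192 GB of HBM3e, 8 TB/s of bandwidth), not figures from the article itself:

```python
# SN50 figures from the article.
sn50 = {"fp8_dense_pflops": 3.2, "hbm_gb": 64, "mem_bw_tbps": 1.8}

# Assumed per-GPU B200 figures (not stated in the article).
b200 = {"fp8_dense_pflops": 5.0, "hbm_gb": 192, "mem_bw_tbps": 8.0}

for key in sn50:
    ratio = sn50[key] / b200[key]
    print(f"{key}: SN50/B200 = {ratio:.3f}")
# fp8_dense_pflops: 0.640, hbm_gb: 0.333, mem_bw_tbps: 0.225
# -- matching the "about 64 percent", "a third", and "less than a
#    quarter" figures cited above.
```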
SambaNova's performance also varies from model to model, so it is not a clear leader in all scenarios. We should also note that Nvidia seems to have gotten the memo on dataflow, having acquihired Groq's engineering team and licensed its architecture late last year. While SambaNova says it doesn't need ultra-dense racks to be competitive, the company has designed its new architecture to scale. For the SN50, a single inference worker can now scale across up to 256 accelerators, more than 3.5x the number found in Nvidia's NVL72 rack. But with just 16 air-cooled RDUs and 15-30 kW per rack, SambaNova isn't packaging its chips nearly as densely. This larger scale-up domain is aided by faster interconnects. SambaNova tells us it equipped each RDU with 2.2 TB/s of bidirectional chip-to-chip bandwidth via a switched fabric. Inference performance isn't SambaNova's only shtick. The large pool of DDR5 memory available to each accelerator enables SambaNova to quickly move customer models and key-value caches - essentially the model's short-term memory - in and out of memory in a matter of milliseconds. "As we move into the world of agents, one of the things that you're starting to see is the customization of these models is causing these racks to run really inefficiently," Liang said. "Everybody wants their own models, but they don't use their own models to the same level that a shared model would be used." In other words, when everyone is accessing a common model, it's relatively easy to maintain high utilization, but when everyone is running their own model, this becomes much more difficult for service providers to manage. "The economics for every player today are not as good as they need to be for scale," Liang said. "What we spent the better part of 2025 doing was actually getting the product to the point where, per rack, we had the right economics for inference so that service providers could actually make a profit serving tokens." 
Having accomplished this, Liang reckons SambaNova's focus moving forward will be on selling infrastructure rather than following companies like Groq down the path of building a dedicated inference cloud. ®
[3]
Intel partners with AI chip startup SambaNova after acquisition talks reportedly failed
In addition to running Intel, Lip-Bu Tan is chairman of artificial intelligence chipmaker SambaNova, which he first invested in eight years ago. Now Intel is pumping money into the startup as it tries to take on industry leader Nvidia. SambaNova, a maker of chips for running generative AI models, has agreed to adopt Intel server chips and graphics cards in a multiyear collaboration, according to a Tuesday release. Intel is also participating in a $350 million funding round, after initially investing in SambaNova in 2019. For years, Nvidia's graphics processing units have been the silicon of choice for AI model companies like Anthropic and OpenAI, which kickstarted the AI boom with the launch of ChatGPT in late 2022. While Nvidia has been the leading beneficiary of the AI craze and is now the world's most valuable publicly traded company, Intel's revenue has declined for four straight years. Intel is now assembling its own graphics card, Tan said at a Cisco event earlier this month. The stock is up 75% in the past year, largely thanks to massive investments from the U.S. government and Nvidia. But to become a real player in the AI chip market, Intel needs help. The company previously looked at buying SambaNova for $1.6 billion but talks fell apart, Bloomberg reported in January. SambaNova counts Hugging Face, Meta and major AI labs as customers. With the new partnership, Intel and SambaNova will work together on sales and marketing to boost adoption. "We're not doing all this overnight," Rodrigo Liang, SambaNova's co-founder and CEO, told CNBC in an interview. "It's not like we're showing up tomorrow with all these things ready, but it's something that we are doing some good planning work to make sure that we're actually working this out, and then we're delivering more streamlined solutions." Intel and SambaNova declined to comment on whether the two companies discussed an acquisition. Tan became SambaNova's chairman in 2017. 
Walden International, the venture firm he started in 1987, placed an early bet on the startup, alongside Google's venture arm. Tan recused himself from discussions about the collaboration, an Intel spokesperson said. SambaNova is touting a new chip called the SN50 that Liang said delivers higher performance than the GPUs in Nvidia's B200 systems, based on Blackwell, and provides more computing power for the same price. The company says customers can connect up to 256 of the processors together. SoftBank, a leading investor in OpenAI, will deploy the SN50 and is an existing SambaNova customer, the startup said. Liang said SambaNova is well aware of Nvidia's dominance, but he sees room for other players. "I think we have to be realistic about the fact that Nvidia is pervasive today, as far as kind of what people like to use, but there are new technologies that are coming out that are going to run those really, really efficiently," he said. "And if you route that traffic in a heterogeneous environment, like most people are starting to do, you know you're going to get much better value out of the infrastructure." SambaNova wants to expand its own cloud for running AI models and is looking to sell clusters that companies can run in their data centers. SambaNova's other investors include private equity firm Vista Equity Partners, Battery Ventures, Cambium Capital, Qatar Investment Authority, Seligman Ventures and T. Rowe Price.
[4]
SambaNova steps up its challenge to Nvidia with new chip, $350M funding and a powerful ally in Intel - SiliconANGLE
Chipmaker SambaNova Systems Inc. unveiled its most advanced artificial intelligence processor today as it closed on a bumper $350 million late-stage round of funding from Vista Equity Partners, Cambium Capital and others. Alongside the funding, SambaNova said it's going to collaborate with Intel Corp. on the development of new, high-performance and cost-effective systems for AI inference. The collaboration is intended to give enterprises an alternative to the graphics processing units that power most workloads today. SambaNova's $350 million Series E round saw "strong participation" from Intel's investment arm, Intel Capital. A host of other investors joined in too, including Assam Ventures, Battery Ventures, Gulf Development Public Company Limited, Mayfield Capital, QIA, Saudi First Data, Seligman Ventures, T. Rowe Price, &E, 8Square, Atlantic Bridge, BlackRock, GV, Nepenthe, Nuri Capital and Redline Capital. SambaNova has positioned itself as a rival to Nvidia. It develops high-performance computer chips that lend themselves to AI model training and inference. Its chips can be accessed via the cloud or deployed on-premises in its own hardware appliances. One of its biggest selling points is power efficiency: SambaNova claims that its chips can generate more tokens per kilowatt hour than comparable processors from rivals. The new SN50 chip announced today promises to improve inference performance dramatically. According to SambaNova, it delivers five times more compute and four times greater networking bandwidth than its previous-generation SN40L chipset. Customers will be able to link up to 256 accelerators over a blazing fast, multi-terabit-per-second interconnect, enabling them to support much bigger and longer-context AI models with greater throughput and responsiveness, without escalating their compute costs, the company said.
SambaNova said the SN50 differs wildly from Nvidia's GPUs. Technically speaking, it's a Reconfigurable Dataflow Unit, or RDU, essentially a specialized AI accelerator chip that's more similar to Google LLC's tensor processing units or Amazon Web Services Inc.'s Trainium chips. It's designed for high-performance training and inference of massive, trillion-parameter large language models. The SN50 is based on a three-tier memory architecture that can support AI models with up to 10 trillion parameters and 10-million-token context lengths, enabling deeper reasoning and more intelligent autonomous systems than previously possible. It also claims a lower cost per token, thanks to its resident multimodel memory and agentic caching capabilities that optimize power efficiency. The SN50 is targeted at applications such as AI voice assistants that demand ultra-low latency to run in real time. It'll be able to power thousands of simultaneous sessions, the company said. "AI is no longer a contest to build the biggest model," said SambaNova co-founder and Chief Executive Rodrigo Liang. "The real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud." Liang previously appeared on SiliconANGLE Media's livestreaming studio theCUBE, where he spoke in depth about the company's RDU architecture and why it excels at AI inference. Liang said SambaNova can already light up data centers now, but in order to keep doing so in the future, it has decided to work much more closely with Intel. The collaboration sees Intel invest in the startup to accelerate the deployment of a new, Intel-powered "AI cloud" that's based on the existing SambaNova Cloud platform. Intel will enhance SambaNova Cloud with its Xeon central processing units to help create a more efficient infrastructure that's optimized for multimodal large language models.
Intel's Xeon CPUs excel at general-purpose operations and managing system operations, while the SN50 is optimized for the rapid processing of large datasets and performing complex calculations. Combining them in a single cloud would allow more efficient task distribution, improving latency, throughput and the overall performance of AI workloads. Intel said it will be able to accelerate SambaNova's cloud expansion in other ways too, by providing reference architectures and deployment blueprints, and through its partnerships with software vendors and systems integrators. Once it's ready, Intel and SambaNova plan to co-market and co-sell the new platform by leveraging Intel's existing relationships with enterprises and its partner channels. The partnership holds a lot of promise for both companies. SambaNova can benefit from Intel's global reach and manufacturing base to scale its AI processors, while Intel is getting the chance to finally make its mark on a market that has largely passed it by. Until now, Intel has been unable to compete with Nvidia and other chipmakers, such as Advanced Micro Devices Inc., in the AI industry. SambaNova's powerful SN50 chips, coupled with Intel's Xeon processors, can potentially change that story. Constellation Research analyst Holger Mueller said it's still possible for Intel, with the help of SambaNova, to make a splash in the AI chip market. "Nvidia gets all of the attention and has most of the market share, but AI models don't actually care about who makes the chip they're running on," he said. "They care about performance. If SambaNova and Intel's inference platform is competitive, the biggest challenge will be to show companies that's the case and convince them to use it, instead of Nvidia's GPUs." It's thought that the companies have been planning this collaboration for some time. Indeed, reports emerged in December that Intel was even considering buying SambaNova outright. 
Bloomberg said the company was mulling an offer in the region of $1.6 billion. It's not known if Intel actually tabled such an offer, but it seems unlikely SambaNova would have agreed, for the amount was only a third of what it was valued at following its previous funding round in 2021. Kevork Kechichian, executive vice president and general manager of Intel's data center group, said there's a fantastic opportunity in the AI data center market. "Customers want more choice and more efficient ways to scale AI," he said. "By combining Intel's leadership in compute, networking and memory with SambaNova's full-stack AI systems and inference cloud platform, we are delivering a compelling option for organizations looking for GPU alternatives."
[5]
Intel Inks 'Multiyear' AI Inference Deal With SambaNova After Acquisition Talks End
With Intel's venture capital arm backing SambaNova Systems' new $350 million funding round, the AI chip startup plans to tap into Intel's 'global enterprise, cloud and partner channels' to drive sales of joint offerings for 'cloud-scale AI inference.' Intel plans to tap into its "enterprise, cloud and partner channels" for a new "multiyear strategic collaboration" it has entered with AI chip startup SambaNova Systems after acquisition talks between the two companies recently ended. SambaNova announced the partnership Tuesday alongside a $350 million Series E funding round, which it said received "strong participation" from Intel Capital, and the unveiling of its next-generation SN50 AI chip, which it said can outperform rival products. The funding round was led by private equity firms Vista Equity Partners and Cambium Capital. Calling the SN50 the "most efficient chip for agentic AI," the San Jose, Calif.-based startup said that the chip, set to ship later this year, is up to five times faster than competitive chips and can run agentic AI workloads at three times lower costs than GPUs. "AI is no longer a contest to build the biggest model," Rodrigo Liang, co-founder and CEO of SambaNova, said in a statement. "With the SN50 and our deep collaboration with Intel, the real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud." SambaNova made the funding and partnership announcements after Bloomberg reported last month that discussions for Intel to acquire the startup stalled. The publication first reported on acquisition talks between the two companies last October. The startup's spokesperson said the acquisition deal is "not in discussion at this stage." An Intel spokesperson declined to comment on the matter.
The Intel representative said that the company's strategic partnership with SambaNova is meant to complement its AI infrastructure strategy, which spans from Xeon CPUs to GPUs. Last year, the semiconductor giant committed to a data center GPU road map with an annual release cadence before hiring a new GPU chief architect in January. "Customers are asking for more choice and more efficient ways to scale AI," Kevork Kechichian, the head of Intel's Data Center Group, said in a statement. "By combining Intel's leadership in compute, networking and memory with SambaNova's full-stack AI systems and inference cloud platform, we are delivering a compelling option for organizations looking for GPU alternatives to deploy advanced AI at scale." Intel CEO Lip-Bu Tan has served as chairman of SambaNova's board since it was founded in 2017, according to his LinkedIn profile. His investment firms, Celesta Capital and Walden International, have been longtime investors. While SambaNova's spokesperson declined to say if the firms participated in the Series E funding round or currently hold stakes in the startup, the representative called them "long-standing investors." SambaNova said the funding round along with the new Intel partnership to "deliver cloud-scale AI inference," which it called a multibillion-dollar market opportunity, will help with the SN50's production ramp and distribution. The multiyear collaboration between SambaNova and Intel will focus on the delivery of "high-performance, cost-efficient AI inference solutions for AI-native companies, model providers, enterprises and government organizations around the world." This will involve the expansion of SambaNova's vertically integrated AI cloud platform using Intel's Xeon CPUs, which it said will be "supported by reference architectures, deployment blueprints and partnerships with systems integrators and software vendors." 
The startup said the combination of its systems with Intel's CPUs, accelerators and networking technologies will power "scalable, production-ready inference for reasoning, code generation, multimodal applications and agentic workflows." The two companies plan to engage in co-selling and co-marketing activities for these offerings, with Intel expected to tap into its "global enterprise, cloud and partner channels to accelerate adoption across the AI ecosystem." SambaNova said the first SN50 customer is Japanese investment giant SoftBank Group, which plans to integrate the chip into next-generation AI data centers in Japan. In addition to owning large stakes in Intel and rival Arm, SoftBank last year acquired Ampere Computing, which designs Arm-compatible CPUs, and AI chip startup Graphcore the year before as part of a new AI infrastructure push. SambaNova said that the SN50 uses its Reconfigurable Dataflow Unit (RDU) architecture to enable "ultra-low latency" for "real-time responsiveness" in applications like voice assistants and "power thousands of simultaneous AI sessions with consistent high performance." The startup has previously said that one strength of the RDU architecture is its ability to combine multiple operations into a single kernel call, eliminating additional overhead associated with launching multiple kernel calls and accelerating performance as a result. It also said that the SN50 enables "higher hardware utilization" to lower the cost of generating tokens and improve return on investment for AI inference. In addition, SambaNova said that the SN50 uses three tiers of memory -- SRAM, HBM and DDR -- to offer "breakthrough model capacity," enabling it to run models with more than 10 trillion parameters and context lengths of more than 10 million tokens. This three-tier memory architecture is optimized by the chip's "resident multi-model memory and agentic caching" to cut infrastructure costs for enterprise-scale AI deployments, according to the startup.
While the startup didn't provide more details about the SN50's three-tier memory architecture, it has explained how each memory tier is used for its previous-generation SN40L chip: DDR provides capacity for hosting hundreds of models and the ability to quickly switch them out on a single socket, HBM "holds the currently running model and caches others," and distributed SRAM "enables high operational intensity through spatial kernel fusion and bank-level parallelism." The SambaNova spokesperson said its competitive claims about performance and total cost of ownership are "based on internal benchmarking of SN50 against widely deployed, current-generation GPU systems running large language models." While the performance boost claim is based on "end-to-end throughput gains on latency-sensitive inference workloads," the claim around lower costs is based on "system-level modeling across representative production deployments, incorporating hardware, power, cooling, networking, and sustained utilization," the representative added.
AI chip startup SambaNova Systems secured $350 million in Series E funding led by Vista Equity Partners and Cambium Capital, with strong participation from Intel Capital. The company unveiled its SN50 chip, claiming five times faster performance than rivals, and announced a multiyear strategic collaboration with Intel to deliver AI inference solutions as an alternative to Nvidia's dominance in the market.
SambaNova Systems has closed a $350 million Series E funding round led by Vista Equity Partners and Cambium Capital, with strong participation from Intel Capital, the chipmaker's venture arm [1]. The oversubscribed round included investments from Battery Ventures, Mayfield Capital, and other new investors, positioning the AI chip startup to accelerate production and distribution of its latest technology [2]. While SambaNova declined to disclose its valuation, a person familiar with the matter indicated it exceeded $2 billion, though still below the $5 billion valuation achieved in 2021 when SoftBank Group's Vision Fund 2 led a previous round [1].
Intel has entered into a multiyear strategic collaboration with SambaNova to deliver cloud-scale AI inference solutions, marking a shift after acquisition talks between the companies reportedly stalled at $1.6 billion in January [3]. The partnership will combine SambaNova's offerings with Intel's Xeon CPUs, accelerators, and networking technologies to serve data center customers seeking alternatives to GPU-based AI infrastructure [5]. Intel will tap into its global enterprise, cloud, and partner channels to accelerate adoption across the AI ecosystem through co-selling and co-marketing activities [4].

SambaNova unveiled its SN50 chip, a Reconfigurable Dataflow Unit (RDU) designed to deliver up to five times faster performance than competitive processors and run agentic AI workloads at three times lower costs than GPUs [1]. The AI chip features a three-tier memory architecture with 432 MB of on-chip SRAM, 64 GB of HBM2E memory delivering 1.8 TB/s bandwidth, and between 256 GB and 2 TB of DDR5 memory [2]. The company says the SN50 delivers 2.5 times higher 16-bit floating-point performance and five times higher performance at FP8 compared to its predecessor, working out to 1.6 and 3.2 petaFLOPS respectively [2]. The chip is set to ship later this year, with SoftBank Group already signed up as the first customer to deploy the SN50 platform in next-generation AI data centers in Japan [5].
SambaNova is carving out space in the massive market for AI computing currently dominated by Nvidia, which has become the world's most valuable publicly traded company thanks to the AI boom [3]. The startup's dataflow architecture aims to reduce data movement overheads by overlapping computation and communication, allowing it to use fewer, less powerful AI accelerators while maintaining competitive performance [2]. According to Artificial Analysis, SambaNova's SN40L accelerators already serve up large language models like the 230 billion parameter MiniMax M2 model at up to 378 tokens per second, more than a hundred tokens per second faster than the next closest GPU-based inference provider [2]. Customers including Hugging Face, Meta, and major AI labs already use SambaNova's technology [3].

For Intel, the strategic collaboration with SambaNova serves as a bridge to keep its technology relevant during the explosive build-out of AI capacity while its own AI accelerators are still in development [1]. Intel CEO Lip-Bu Tan, who has served as chairman of SambaNova since 2017 and first invested in the company eight years ago through his venture firms Walden International and Celesta Capital, recused himself from discussions about the collaboration to avoid conflicts of interest [1]. Kevork Kechichian, general manager of Intel's data center group, said customers are asking for more choice and more efficient ways to scale AI, driving demand for alternatives to GPUs in generative AI deployments [2]. The partnership gives Intel a chance to make its mark on a market that has largely passed it by after repeated missteps with its datacenter GPU and Gaudi product lines [2].

Summarized by Navi