SambaNova raises $350M, partners with Intel to deploy SN50 chip claiming 5x speed over Nvidia B200

Reviewed by Nidhi Govil


SambaNova secured $350 million in Series E funding led by Vista Equity Partners and announced a multi-year strategic collaboration with Intel to deploy AI inference solutions. The startup unveiled its SN50 AI chip, designed specifically for inference workloads, claiming it delivers five times faster performance and three times better efficiency than Nvidia's B200. SoftBank will be the first customer to deploy the technology in its Japanese data centers.


SambaNova Secures $350 Million Funding and Unveils SN50 AI Chip

SambaNova Systems has raised $350 million in a Series E funding round led by Vista Equity Partners and Cambium Capital, with significant participation from Intel Capital, marking a rare move by Vista beyond its traditional enterprise software focus [2][4]. The round was oversubscribed, with Battery Ventures, Mayfield Capital, and other new investors participating, and values the company above $2 billion, still below its 2021 valuation of around $5 billion [2]. The proceeds will fund expansion of the new SN50 AI chip, scaling of the SambaCloud platform, and deeper enterprise software integrations [4].

Intel Partnership Aims to Challenge Nvidia's Dominance

SambaNova announced a multi-year strategic collaboration with Intel to build large-scale AI inference infrastructure around Intel Xeon platforms and SambaNova AI accelerators [1]. The partnership will combine SambaNova's offerings with Intel's own chips in products for data center customers, serving as an interim solution until Intel's own AI accelerators are ready [2]. Intel CEO Lip-Bu Tan, who also serves as chairman of SambaNova after first investing in the startup eight years ago, recused himself from the partnership discussions to avoid a conflict of interest [2][5]. The collaboration comes after acquisition talks between the two companies stalled; Intel had initially discussed acquiring the startup for roughly $1.6 billion, including debt [4].

New AI Accelerator Built Specifically for Agentic Inference Workloads

The SN50 AI chip represents a fundamental shift in focus toward inference rather than training, targeting real-time applications such as voice assistants with an emphasis on low latency, memory bandwidth, and power consumption [1]. The dual-chiplet processor is based on SambaNova's Reconfigurable Dataflow Unit (RDU) architecture and features a three-tier memory subsystem of SRAM, HBM, and DDR5, designed to keep multiple models resident for rapid hot-swapping [1]. Each RDU carries 432 MB of on-chip SRAM, 64 GB of HBM2E delivering 1.8 TB/s of bandwidth, and between 256 GB and 2 TB of DDR5 memory [3]. The new chip delivers 2.5x higher 16-bit floating-point performance and 5x higher FP8 performance, working out to 1.6 and 3.2 petaFLOPS respectively [3].
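The FP16 and FP8 figures above are internally consistent if both multiples are measured against the same baseline chip, an assumption on our part, since the article does not name the reference part (presumably the prior-generation accelerator). A quick sanity check:

```python
# Sanity check of the quoted SN50 peak-throughput figures.
# ASSUMPTION: the 2.5x FP16 and 5x FP8 multiples share one baseline
# (likely the prior-generation part; the article does not say which).

quoted_fp16_pflops = 1.6  # SN50 peak FP16, per the article
quoted_fp8_pflops = 3.2   # SN50 peak FP8, per the article

implied_baseline = quoted_fp16_pflops / 2.5  # 0.64 PFLOPS at FP16
# The same baseline times the 5x FP8 multiple reproduces the FP8 figure:
assert abs(implied_baseline * 5 - quoted_fp8_pflops) < 1e-9
print(f"implied baseline: {implied_baseline:.2f} PFLOPS")
```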

Performance Claims Position SN50 Against Nvidia B200

SambaNova claims the SN50 accelerator offers five times the compute of competing offerings, positioning it specifically against the Nvidia B200, with three times better efficiency and a threefold lower cost of ownership for inference compared to GPU-based systems [1]. According to SemiAnalysis's InferenceX benchmark results at FP8 precision, a Llama 3.3 70B model with 1K input and 1K output tokens achieves 895 tokens per second per user on the SN50, versus 184 tokens per second per user on Nvidia's B200 [1]. Rodrigo Liang, co-founder and CEO of SambaNova, stated that "AI is no longer a contest to build the biggest model," emphasizing that "the real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud" [1].
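The quoted per-user throughput numbers can be checked against the headline claim directly; both figures are taken verbatim from the benchmark cited above:

```python
# Per-user decode throughput from the quoted InferenceX results
# (Llama 3.3 70B, FP8, 1K input / 1K output tokens).
sn50_tok_per_s = 895
b200_tok_per_s = 184

speedup = sn50_tok_per_s / b200_tok_per_s
print(f"SN50 vs B200: {speedup:.2f}x")  # 4.86x, close to the ~5x headline claim
```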

Rack-Scale AI Solutions Target Enterprise and Government Markets

SambaNova positions the SN50 primarily as part of its SambaRack SN50 rack-scale solution rather than as a standalone processor [1]. Each 20 kW SambaRack SN50 packs 16 SN50 RDU processors, and 16 racks can interconnect up to 256 accelerators over a multi-terabyte-per-second fabric, with each RDU offering 2.2 TB/s of bidirectional chip-to-chip bandwidth via a switched fabric [1][3]. The company emphasizes that at 20 kW per rack the system operates within existing data center power envelopes and relies on air cooling, eliminating the need for liquid cooling or data center modifications [1]. A 256-accelerator cluster is designed for extremely large models, including configurations exceeding 10 trillion parameters and context windows of more than 10 million tokens, a capability SambaNova positions as essential for reasoning-heavy and multi-model agentic AI workloads [1].
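The rack figures above imply the cluster-level totals below; note that the power and memory sums are derived here as back-of-envelope arithmetic and are not vendor-quoted numbers:

```python
# Cluster sizing implied by the SambaRack SN50 figures in the article.
# Derived totals (power, aggregate memory) are our arithmetic, not vendor specs.
rdus_per_rack = 16
racks_per_cluster = 16
rack_power_kw = 20
hbm_per_rdu_gb = 64
ddr5_max_per_rdu_tb = 2

rdus = rdus_per_rack * racks_per_cluster              # 256 accelerators
cluster_power_kw = racks_per_cluster * rack_power_kw  # 320 kW per cluster
cluster_hbm_tb = rdus * hbm_per_rdu_gb / 1024         # 16 TB of HBM2E
cluster_ddr5_tb = rdus * ddr5_max_per_rdu_tb          # up to 512 TB of DDR5
print(rdus, cluster_power_kw, cluster_hbm_tb, cluster_ddr5_tb)
```

At FP8 (one byte per parameter), a 10-trillion-parameter model needs roughly 10 TB for weights alone, which fits within the cluster's 16 TB of aggregate HBM; that rough arithmetic is consistent with the >10T-parameter claim, though real deployments also need capacity for KV caches and activations.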

SoftBank as First Customer Signals Market Validation

SoftBank will be the first customer to deploy the SN50 platform, using it to power AI computing within its data centers in Japan [2][4]. The Japanese company already uses older SambaNova products, signaling continued confidence in the technology [2]. SambaNova expects to ship SN50 accelerators later this year, with full SambaRack SN50 systems available in the second half of 2026; pricing remains undisclosed [1][3]. The joint effort between Intel and SambaNova targets AI inference solutions for AI-native companies, model providers, enterprises, and government organizations worldwide, leveraging Intel's traditional customer relationships [1]. SambaNova's dataflow architecture, which aims to reduce data-movement overheads by overlapping computation and communication, already places it among the highest-performing inference providers: its SN40L accelerators can serve models such as the 230-billion-parameter MiniMax M2 at up to 378 tokens per second [3]. The company counts Hugging Face, Meta, and major AI labs as customers, and through hardware-software co-design with Intel aims to provide cost-effective alternatives in an AI chip market dominated by GPUs [3][5]. Kevork Kechichian, general manager of Intel's data center group, noted that "customers are asking for more choice and more efficient ways to scale AI," highlighting growing demand for alternatives in generative AI deployments [2].
