4 Sources
[1]
AI chip startup Rebellions raises $400 million at $2.3B valuation in pre-IPO round | TechCrunch
Fresh off a successful Series C funding round in November, the South Korean fabless AI chip startup Rebellions has raised an additional $400 million. The latest funding infusion, which comes before a planned IPO later this year, was led by the Mirae Asset Financial Group and the Korea National Growth Fund. It also comes as the company engages in an aggressive expansion effort, with recently announced plans to grow its presence not only in Asia but also in the Middle East and the U.S.

Founded in 2020, Rebellions develops and designs AI chips while outsourcing their fabrication. The startup's chips are designed for inference, the compute necessary for AI models to respond to user queries. Inference has grown in importance as LLMs have matured and begun to see widespread commercial deployment.

The company closed $124 million in a Series B in 2024. Then, in November, Rebellions raised an additional $250 million during its Series C. As of today, the company's total fundraising haul stands at $850 million, $650 million of which was raised in the last six months. Meanwhile, the startup's valuation sits at approximately $2.34 billion, the company said Monday.

In addition to the funding round, Rebellions also announced the release of two new products, RebelRack and RebelPOD, which are described as AI infrastructure platforms. RebelRack represents a production-ready unit of inference compute, while RebelPOD "integrates multiple racks into a scalable cluster designed for large-scale AI deployment," the company said.

In a conversation with TechCrunch, Rebellions' Chief Business Officer Marshall Choy, who is leading the company's global expansion efforts, said it had recently established entities in the U.S., Japan, Saudi Arabia, and Taiwan. Choy said the company was building out its ecosystem of technology partners in the U.S., where it plans to court cloud providers, government agencies, telecom operators, and neoclouds.
He declined to comment on IPO timing. "AI is now measured by its ability to operate in the real world at scale, under power constraints, and with clear economic return," said Sunghyun Park, co-founder and CEO of Rebellions. "That shifts the center of gravity toward inference infrastructure and software that makes that infrastructure usable." Rebellions is one of a new generation of chip startups that have sought to challenge NVIDIA's once iron-clad dominance within the chip industry. As that dominance has begun to wane, other major tech companies like AWS, Meta and Google -- along with the new generation of startups -- have also sought to produce their own chips.
[2]
Rebellions eyes global expansion with rack-scale AI platform
SK Telecom-backed AI chip startup Rebellions has raised $400 million in a pre-IPO funding round to support its global expansion with a new rack-scale compute platform aimed at enterprises and sovereign clouds. Founded in late 2020, the startup produces AI accelerators that have been deployed in numerous applications in the South Korean domestic market.

Initially, "we focused a great deal on telcos, service providers, and enterprise-end users within the Korean market," Rebellions chief business officer Marshall Choy told El Reg. "We built up use cases around everything from call centers and customer service to CCTV surveillance for the national highway system." "We're in a very strong position to take those learnings, capabilities, and improvements we've done over the years and bring that out to other regions, outside of Korea, as less of a fresh start, but more of a rinse and repeat type of motion," he added.

Following the introduction of its Rebel Quad accelerators, since rebranded as the Rebel100, the company has turned its attention to the rest of the world. Over the past few months, Rebellions has opened offices in Japan, Saudi Arabia, Taiwan, and the US, where it hopes to win over enterprises with its new RebelRack and RebelPods.

Before looking at the racks, let's talk about the chips themselves. Our sibling site The Next Platform dug into the Rebel100 last winter, but at a high level, the chip looks quite similar to Nvidia's H200 accelerators from late 2023. According to Rebellions, the processor is capable of a petaFLOP of dense 16-bit floating point math or double that at FP8. However, unlike the H200, which used a monolithic compute die fabbed at TSMC, Rebellions' latest processor uses a chiplet architecture with four compute dies manufactured and packaged by Samsung. That processor is fed by four HBM3e stacks totaling 144 GB of capacity and 4.8 TB/s of aggregate bandwidth.
While the smaller compute dies and reliance on Samsung should help with yields and spare Rebellions from competing for TSMC's limited fab and packaging capacity, the company still needs to source HBM from somewhere. Memory is already in short supply, and HBM is among the scarcest. This is where being a South Korean company with close ties to both the SK chaebol and Samsung comes in handy. SK Hynix and Samsung are the largest suppliers of HBM in the world. Last we heard, Rebellions was sourcing its HBM from Samsung, but in a pinch it shouldn't have to fight that hard to get SK Hynix to kick in some capacity.

The chip itself is currently being packaged as a PCIe card with a 600 watt TDP, rather than the OAM or SXM modules we've become accustomed to. Rebellions' reference design calls for eight of these cards to be crammed into a single air-cooled node. High-efficiency, standard form factors such as 19-inch chassis and air cooling were key design points for Rebellions, as they meant the system could be deployed into existing enterprise datacenters, something that can't be said of Nvidia's latest generation of liquid-cooled Rubin GPUs.

The RebelRack will feature four of these nodes, each connected via quad-400 Gbps networking, for a total of 32 accelerators and 64 petaFLOPS of FP8 compute, 4.6 TB of HBM3e, and 153.6 TB/s of aggregate memory bandwidth. For larger deployments, Rebellions is also developing what it calls the RebelPod, which can scale from eight to 128 nodes, each with eight Rebel100 accelerators interconnected using 800 Gbps Ethernet. "Right now, people think of rack level. I think we're going to be thinking, in a few days from now, about row level and datacenter level," Choy said.

Compared to GPU systems, this isn't a lot of networking. Most HGX systems now feature at least one 800 Gbps NIC per GPU. Choy tells us that going forward, the network fabric is going to be a major focus for the company.
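The rack-level figures quoted above follow directly from the per-chip Rebel100 specs. A quick sanity check of that arithmetic (all input numbers are from the article; nothing here is new data):

```python
# Per-chip Rebel100 specs as reported in the article.
CHIP_FP8_PFLOPS = 2.0    # 1 petaFLOP dense FP16, doubled at FP8
CHIP_HBM_GB = 144        # four HBM3e stacks per chip
CHIP_HBM_BW_TBS = 4.8    # aggregate memory bandwidth per chip, TB/s
CHIPS_PER_NODE = 8       # eight PCIe cards per air-cooled node
NODES_PER_RACK = 4       # RebelRack = four nodes

chips = CHIPS_PER_NODE * NODES_PER_RACK        # 32 accelerators
fp8_pflops = chips * CHIP_FP8_PFLOPS           # 64 petaFLOPS FP8
hbm_tb = chips * CHIP_HBM_GB / 1000            # 4.608 TB, quoted as 4.6 TB
hbm_bw_tbs = chips * CHIP_HBM_BW_TBS           # 153.6 TB/s aggregate

print(f"{chips} chips, {fp8_pflops} PF FP8, "
      f"{hbm_tb:.3f} TB HBM3e, {hbm_bw_tbs:.1f} TB/s")
```

The totals check out: 32 accelerators, 64 petaFLOPS of FP8, roughly 4.6 TB of HBM3e, and 153.6 TB/s of bandwidth per rack.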
As we've seen with other rack-scale systems from AMD and Nvidia, compute and networking are only two pieces of the puzzle; you also need software that can stitch everything together cohesively. Rebellions' software stack is nothing exotic. We're told the platform runs on open source frameworks like vLLM, PyTorch, and Triton. For disaggregated inference, it's using llm-d, another open source framework that enables compute-heavy prefill operations on one set of accelerators and memory bandwidth-heavy decode operations on another.

"Everything's open source, from vLLM compiler all the way up to the very highest level of stack, Red Hat, OpenShift, and everything in between," Choy said. "If you've used any of these technologies in any other context, you already know how to use Rebellions."

We've heard similar claims from chipmakers before that haven't ended up being quite so easy to use. However, Rebellions is a member of the PyTorch Foundation, something that can't be said of many AI chip startups.

Rebellions' latest funding round, which was led by Mirae Asset Financial Group and the Korea National Growth Fund, comes as the company eyes an initial public offering, another feat that few other AI chipmakers have undertaken.
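The prefill/decode split that llm-d enables can be sketched in a few lines. This is a toy illustration of the concept only, not the llm-d API: the pool names, `KVCache` class, and helper functions are all hypothetical stand-ins, and real token sampling is replaced with placeholder strings.

```python
# Toy sketch of disaggregated inference: the compute-bound prefill pass
# runs on one pool of accelerators and produces a KV cache, which is then
# handed off to a separate, memory-bandwidth-bound decode pool.
# NOT the llm-d API; pool names and helpers are illustrative only.
from dataclasses import dataclass, field

@dataclass
class KVCache:
    tokens: list = field(default_factory=list)  # tokens processed so far

def prefill(prompt_tokens, pool="prefill-pool"):
    # One compute-heavy pass over the whole prompt, building the
    # KV cache the decode stage will reuse.
    return KVCache(tokens=list(prompt_tokens))

def decode(cache, max_new_tokens, pool="decode-pool"):
    # Autoregressive generation: each step reads the full KV cache
    # (bandwidth-bound) and appends one new token to it.
    out = []
    for i in range(max_new_tokens):
        nxt = f"tok{i}"  # placeholder for a real sampled token
        cache.tokens.append(nxt)
        out.append(nxt)
    return out

cache = prefill(["Why", "disaggregate", "?"])   # on the prefill pool
generated = decode(cache, max_new_tokens=3)     # on the decode pool
print(generated)
```

The payoff of the split is that each pool can be sized and provisioned for its bottleneck: FLOPS for prefill, memory bandwidth for decode.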
[3]
Samsung-backed AI chip firm Rebellions raises $400 million ahead of IPO
The Rebel-Quad is the second-generation product from Rebellions and is made up of four Rebel AI chips. Rebellions, a South Korean firm, is looking to rival companies like Nvidia in AI chips. South Korean AI chip startup Rebellions said Monday it has raised $400 million as it looks to expand into the U.S. market ahead of a public listing. Mirae Asset Financial Group and the Korea National Growth Fund, an investment vehicle of the South Korean government, led the round, which values Rebellions at $2.34 billion. Rebellions is one of the many semiconductor startups looking to capitalize on demand for AI chips and investor appetite for companies that are fueling the build-out of infrastructure for the technology. Sunghyun Park, CEO of Rebellions, told CNBC that the money will be used to expand into the U.S. "Our main target right now is big labs," Park said, naming companies like Meta and xAI as target customers, rather than hyperscalers like Amazon and Microsoft. Park added that Rebellions currently has some active proof-of-concept trials with customers in the U.S. The CEO also said the company is preparing for an initial public offering, as CNBC previously reported, but declined to give any specifics on the timeline or listing location.
[4]
Rebellions lands $400M in funding to lead the South Korean revolt against Nvidia chips - SiliconANGLE
Cloud-native artificial intelligence inference chip startup Rebellions Inc. is upping its game in an effort to compete better with rivals such as Nvidia Corp. and Advanced Micro Devices Inc. by raising a massive $400 million in what it calls a "pre-IPO funding round." Mirae Asset Financial Group and the Korea National Growth Fund led the investment in the Seoul-based startup. It comes just months after the company raised $250 million in a Series C round in September. It brings its total amount of funding since inception to $850 million, while its value has soared to $2 billion.

Rebellions is one of a handful of well-funded South Korean chip startups trying to take on AI chip giant Nvidia, and today's round may just give it an edge over its domestic rivals, which include Furiosa AI Inc. and DeepX Co. Ltd. The company is the designer of several high-performance inference chips that can be used to run trained AI models in production. Though Nvidia is still believed to dominate the inference market, it's coming under pressure from all sides, with U.S. chipmakers such as Cerebras Systems Inc. also vying for a piece of the pie.

Now, Rebellions wants to tap into the U.S. appetite for AI chips too. The money from today's round will be used to fund its U.S. expansion, and with that in mind it has also hired a new chief business officer, Marshall Choy, to lead its operations in that country.

Like many of Nvidia's rivals, Rebellions aims to make AI inference more economical and powerful. As generative AI models increase in complexity, the cost of running them has increased dramatically, especially when using graphics processing units. Nvidia's GPUs might be powerful, but they consume vast amounts of energy, and procuring them at large scale is prohibitively expensive for many companies.
That's why the limiting factor in AI adoption is no longer model capability, but the cost of the infrastructure required to make the most of them. Rebellions co-founder and Chief Executive Sunghyun Park says it's time for the AI industry to move beyond the "silicon-only" mindset, pointing to the friction that exists when integrating AI hardware with software. "AI is now measured by its ability to operate in the real world at scale, under power constraints and with clear economic return," he said. "That shifts the center of gravity toward inference infrastructure and software that makes the infrastructure useful."

In an interview with SiliconANGLE, Choy said the company is differentiating itself through a "software-centric" approach to AI hardware. Its chip hardware is built atop a cloud-native stack that's powered by Kubernetes. Because the stack is open source, it can work with most popular AI developer frameworks, such as PyTorch, Hugging Face and the vLLM inference engine, Choy said. By optimizing its infrastructure for these frameworks, he added, its neural processing units can run inference more efficiently than is possible with traditional GPUs.

The approach also speeds up iteration for customers, Choy said. "Our use of open source software without forks enables an identical experience for admins, cloud ops, developers and end users," he explained. "This enables our evaluation and adoption cycles to be much faster than competitive systems in the market today."

To accelerate this vision, the company has launched two new vertically integrated infrastructure offerings, called RebelRack and RebelPOD. The first is a "production-ready" unit of inference compute, while the latter clusters multiple RebelRacks together for larger deployments. The system is powered by the company's Rebel100 NPUs, chiplet-based processors designed to deliver superior performance per watt compared with GPUs.
Rebellions' focus on software also makes it a more complete systems company, Choy said, as opposed to traditional chipmakers that leave it to developers to figure out how to get their AI to run on their hardware. "Our comprehensive approach to delivering to the market's needs is exemplified by the announcement of the new RebelRack and RebelPOD systems, available today, far in advance of other pre-announced offerings by the competition," he insisted.

There's a perception in some quarters that breaking into the U.S. market is more challenging because of American companies' preference for Nvidia's GPUs, but Choy believes that most data center operators will be open-minded about what competitors have to offer. "While some companies may have an Nvidia-first bias, very few have an Nvidia-only policy," he said. "There is growing recognition that the future of AI computing is going to be heterogeneous, with customers looking to match different accelerators and GPU architectures to particular classes of models, applications and use cases, as well as pricing tiers."

With $650 million raised in the last six months, Rebellions is clearly in a hurry to make its mark. It has previously said it's working toward an initial public offering, and it's notable that today's round was described as a "pre-IPO" raise. But while the company is happy to state its future intentions, Choy declined to mention any specifics regarding when it might go public, saying the focus for now is strictly on scaling its business and growing its revenue.
South Korean AI chip startup Rebellions has secured $400 million in a pre-IPO funding round led by Mirae Asset Financial Group and Korea National Growth Fund, bringing its valuation to $2.34 billion. The company announced new RebelRack and RebelPOD infrastructure platforms while expanding into the U.S., Japan, Saudi Arabia, and Taiwan to challenge Nvidia's dominance in AI inference chips.
South Korean AI chip startup Rebellions has raised $400 million in a pre-IPO round led by Mirae Asset Financial Group and the Korea National Growth Fund, pushing its valuation to approximately $2.34 billion [1][3]. Founded in 2020, Rebellions has now raised $850 million in total funding, with $650 million coming in just the last six months [1]. The funding round comes as the company prepares for an initial public offering later this year, though CEO Sunghyun Park declined to provide specific timing or listing location details [3].
Source: The Register
The fresh capital will fuel Rebellions' global expansion efforts, with the company recently establishing entities in the U.S., Japan, Saudi Arabia, and Taiwan [1]. CEO Sunghyun Park told CNBC that Rebellions is targeting major AI labs including Meta and xAI rather than hyperscalers like Amazon and Microsoft [3]. The company currently has active proof-of-concept trials with customers in the U.S. [3]. In the U.S. market, Rebellions plans to court cloud providers, government agencies, telecom operators, and neoclouds, according to Chief Business Officer Marshall Choy, who is leading the expansion [1].

Alongside the funding announcement, Rebellions unveiled two new products: RebelRack and RebelPOD, described as AI infrastructure platforms designed for enterprise deployment [1]. The RebelRack features four nodes with eight Rebel100 accelerators each, totaling 32 accelerators and 64 petaFLOPS of FP8 compute, 4.6 TB of HBM3e memory, and 153.6 TB/s of aggregate memory bandwidth [2]. The rack-scale AI platform uses quad-400 Gbps networking to connect nodes [2]. For larger deployments, RebelPOD can scale from eight to 128 nodes interconnected using 800 Gbps Ethernet [2]. The system is designed around high-efficiency, standard 19-inch chassis and air cooling, enabling deployment into existing enterprise datacenters without the liquid cooling requirements of Nvidia's latest Rubin GPUs [2].
Source: SiliconANGLE
Rebellions is positioning itself through a software-centric approach built on open-source frameworks including vLLM, PyTorch, and Triton [2][4]. The platform runs on a cloud-native stack powered by Kubernetes, compatible with popular AI developer frameworks such as Hugging Face and the vLLM inference engine [4]. Choy emphasized that "everything's open source, from vLLM compiler all the way up to the very highest level of stack, Red Hat, OpenShift, and everything in between" [2]. For disaggregated inference, the platform uses llm-d, an open-source framework that enables compute-heavy prefill operations on one set of accelerators and memory bandwidth-heavy decode operations on another [2]. Rebellions is a member of the PyTorch Foundation, distinguishing it from many AI chip startups [2].
The Rebel100 processor at the heart of these systems is capable of a petaFLOP of dense 16-bit floating point math or double that at FP8 [2]. Unlike Nvidia's H200, which uses a monolithic compute die fabbed at TSMC, the Rebel100 employs a chiplet architecture with four compute dies manufactured and packaged by Samsung [2]. The processor is fed by four HBM3e stacks totaling 144 GB of capacity and 4.8 TB/s of aggregate bandwidth [2]. Being a South Korean company with close ties to both the SK chaebol and Samsung provides advantages in sourcing HBM from SK Hynix and Samsung, the world's largest HBM suppliers [2]. The chip is currently packaged as a PCIe card with a 600 watt TDP, with eight cards per air-cooled node [2].

Rebellions develops and designs AI chips for inference, the compute necessary for AI models to respond to user queries, while outsourcing their fabrication [1]. Inference has grown in importance as large language models have matured and begun to see widespread commercial deployment [1]. Sunghyun Park stated that "AI is now measured by its ability to operate in the real world at scale, under power constraints, and with clear economic return," adding that this shifts the center of gravity toward inference infrastructure and software that makes that infrastructure usable [1]. As generative AI models increase in complexity, the cost of running them has increased dramatically, especially when using graphics processing units [4]. Rebellions is one of several well-funded South Korean chip startups trying to challenge Nvidia, including domestic rivals Furiosa AI Inc. and DeepX Co. Ltd. [4].

Summarized by Navi