3 Sources
[1]
Former Google, Meta executives raise $100 million for high-capacity AI servers startup
Majestic develops its entire chip system, much like Nvidia, but each of the startup's servers may replace up to 10 of today's leading racks, the co-founders said.

Three former Meta and Google silicon executives on Monday announced they've raised a total of $100 million to build technology they say will reduce cloud companies' spend on data center buildouts. Called Majestic Labs, the startup's co-founders are Ofer Shacham, Sha Rabii and Masumi Reynders, all of whom spent years working together leading silicon products at Meta and Google.

Majestic's patent-pending silicon design architecture includes 1,000 times the memory of a typical enterprise-grade server, the co-founders told CNBC. Majestic develops its entire chip system, much like Nvidia, but each of the startup's servers may replace up to 10 of today's leading racks, they said.

The company closed its $71 million Series A funding round in September, led by Bow Wave Capital. Among its other investors is Lux Capital. Prior to Monday's announcement, the co-founders had been quietly working on the startup since late 2023.

Majestic's funding announcement comes as major tech companies raise their capital expenditures, primarily for data center infrastructure. Alphabet, Meta, Microsoft and Amazon each lifted their guidance for capital expenditures in October, and they collectively expect that number to reach more than $380 billion this year.

While the majority of large language models and AI workloads have relied on Nvidia's graphics processing units, or GPUs, more companies are entering the fold. Google last week announced Ironwood, its latest tensor processing unit, or TPU, which artificial intelligence startup Anthropic plans to use for its Claude model.
[2]
AI Server Startup Majestic Labs Comes Out of Stealth With $100 Mn Funding | AIM
Its patent-pending architecture promises 1,000 times the memory capacity of a conventional enterprise server.

Three former Google and Meta silicon executives have raised $100 million to launch Majestic Labs, a startup developing high-capacity AI servers designed to cut data centre costs for hyperscalers. Founded by Ofer Shacham, Sha Rabii, and Masumi Reynders, Majestic Labs is building patent-pending silicon architecture that promises up to 1,000 times the memory capacity of conventional enterprise servers. The company says its system can replace up to ten existing server racks, potentially reducing power, cooling, and space requirements for large AI data centres.

Majestic's $71 million Series A, led by Bow Wave Capital with participation from Lux Capital, closed in September. The company has been quietly operating since late 2023 and now employs fewer than 50 people split between Los Altos, California, and Tel Aviv, Israel. Prototypes are expected in 2027, with pre-orders already under discussion.

"We're not trying to replace GPUs across the board -- we're solving for memory-intensive AI workloads where the fixed compute-to-memory ratio becomes a constraint," Shacham, Majestic's CEO, told CNBC. The startup is targeting hyperscalers and enterprises in data-heavy industries such as finance and pharmaceuticals.

The timing aligns with surging global investment in data centre infrastructure. Tech giants Alphabet, Meta, Microsoft, and Amazon together expect capital expenditures to exceed $380 billion in 2025, much of it driven by AI workloads. Majestic's architecture aims to "collapse" multiple racks of hardware into a single server, which the founders say could dramatically increase efficiency. "NVIDIA makes excellent GPUs and has driven incredible AI innovation," Shacham added, "but our goal is to complement that by solving for memory bottlenecks."

The trio's collaboration dates back two decades. Reynders joined Google in 2003 and later became director of product management for silicon. Rabii, who sold his chip design startup Arda Technologies to Google in 2011, led the Argos video chip team at YouTube. Shacham, who sold Chip Genesis to Google in 2013, oversaw silicon design for consumer hardware. In 2018, the three moved to Meta, where they founded the Facebook Agile Silicon Team (FAST). After layoffs at Reality Labs in 2023, they regrouped to start Majestic Labs, focusing on one of AI's biggest infrastructure bottlenecks -- memory.

"We've been friends and colleagues for a long time, so this notion of working together and doing something exciting has always been in the periphery," Reynders said.
[3]
Majestic Labs launches with $100M to build memory-dense AI servers - SiliconANGLE
Artificial intelligence infrastructure startup Majestic Labs Inc. launched today with $100 million in initial funding. The company raised the bulk of the capital, $71 million, through a Series A round led by Bow Wave Capital. Lux Capital led Majestic Labs' earlier seed investment. The hardware maker is also backed by SBI, Upfront, Grove Ventures, Hetz Ventures, QP Ventures, Aidenlair Global and TAL Ventures.

Majestic Labs is led by chip engineering executives Ofer Shacham, Masumi Reynders and Sha Rabii. Shacham, the company's Chief Executive Officer, earlier led the Google LLC team that develops chips for the search giant's consumer devices. Reynders and Rabii also held senior roles in the Alphabet Inc. unit's semiconductor group.

Large language models generate a significant volume of temporary data while processing prompts. As a result, they have to be deployed on servers with a memory pool large enough to accommodate that data. Moreover, LLMs require the ability to move data between a server's memory pool and graphics cards at high speed. Increasing an AI cluster's memory capacity often requires companies to add servers, which in turn necessitates deploying more supporting equipment such as cooling systems. That significantly increases hardware costs.

Majestic Labs is tackling the challenge with a server that it says will include as much memory as 10 racks. A single server requires only a small fraction of the cooling equipment that those racks would, which decreases procurement costs and power usage.

Majestic Labs says that its system contains a "custom accelerator" chip and a likewise internally developed memory interface module. The server can be equipped with up to 128 terabytes of RAM. Majestic Labs didn't specify what variety of RAM its system contains but described the technology as "extremely fast, power-efficient, high-bandwidth memory," which hints it might be using HBM. HBM is a particularly performant type of RAM that is ubiquitous in AI clusters. A standard RAM chip comprises a single layer of memory cells. An HBM module, in turn, contains upwards of a dozen memory cell layers stacked atop one another. The layers are linked together by millions of microscopic, copper-filled vertical interconnects called through-silicon vias.

Increasing the amount of memory in an AI cluster's servers can boost LLM performance. However, the size of the performance gain varies based on the inference task the model performs. The computations an LLM carries out between the moment a user enters a prompt and the first output token are influenced more by graphics card performance than by memory speed. In contrast, the calculations used to generate subsequent output tokens can be significantly accelerated by adding more memory to an AI cluster.

Majestic Labs' server can not only run inference workloads but also train new LLMs. "Our systems support vastly more users per server and shorten training time, lifting AI workloads to new heights both on-premises and in the cloud," Rabii said.

CNBC reported that Majestic Labs plans to sell its hardware to hyperscale data center operators, financial institutions and pharmaceutical companies. The startup plans to ship its first servers in 2027. In the meantime, Majestic Labs will work to grow its engineering team, enhance the software stack that ships with its server and raise additional funding.
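The memory pressure described in the SiliconANGLE piece comes largely from the key-value (KV) cache that transformer-based LLMs keep for every token of every active session (the "temporary data" mentioned above). The rough Python sketch below is not based on any Majestic Labs figures; it uses hypothetical model parameters to show how that cache grows with context length and how quickly it fills a memory pool as concurrent users are added.

```python
# Back-of-the-envelope KV-cache sizing for transformer inference.
# All model figures below are hypothetical and chosen only to illustrate scale;
# they are not Majestic Labs' (or any vendor's) published specifications.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   context_tokens: int, bytes_per_value: int = 2) -> int:
    """Bytes of KV cache for one session: a K and a V tensor per layer,
    each of shape [context_tokens, n_kv_heads * head_dim]."""
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_value
    return per_token * context_tokens

# Hypothetical 70B-class model served in 16-bit precision.
per_user = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128,
                          context_tokens=32_000)
print(f"KV cache per user at a 32k-token context: {per_user / 2**30:.1f} GiB")

# How many concurrent sessions fit in a given memory pool, ignoring model
# weights and runtime overhead for simplicity.
for pool_tib in (1, 128):  # a large conventional server vs. a 128 TB-class pool
    sessions = (pool_tib * 2**40) // per_user
    print(f"{pool_tib:>4} TiB pool -> ~{sessions:,} concurrent 32k-token sessions")
```

Under these assumptions a single long-context session consumes on the order of 10 GiB of cache before any model weights are counted, which is why memory capacity, rather than raw compute, quickly caps how many users one box can serve.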
Three former Google and Meta silicon executives have raised $100 million for Majestic Labs, a startup developing high-capacity AI servers with 1,000 times more memory than conventional enterprise servers. The company aims to reduce data center costs by replacing up to 10 server racks with a single unit.
Three former Google and Meta silicon executives have emerged from stealth mode with Majestic Labs, a startup that has raised $100 million to develop next-generation AI servers designed to address critical infrastructure bottlenecks in artificial intelligence workloads [1]. The company is led by CEO Ofer Shacham, alongside co-founders Sha Rabii and Masumi Reynders, all of whom bring decades of experience from leading silicon development teams at major tech companies [2].
The founding team's collaboration spans over two decades, with their professional relationship beginning at Google in the early 2000s. Reynders joined Google in 2003 and later became director of product management for silicon, while Rabii sold his chip design startup Arda Technologies to Google in 2011 and subsequently led the Argos video chip team at YouTube [2]. Shacham, who sold Chip Genesis to Google in 2013, oversaw silicon design for consumer hardware before the trio moved to Meta in 2018 to establish the Facebook Agile Silicon Team (FAST) [2].

Majestic Labs' patent-pending silicon design architecture represents a significant departure from conventional server design, promising up to 1,000 times the memory capacity of typical enterprise-grade servers [1]. The company's approach involves developing its entire chip system, similar to Nvidia's integrated approach, but with a focus on dramatically increasing memory density and efficiency [1].
The startup's servers can be equipped with up to 128 terabytes of RAM and feature a custom accelerator chip alongside an internally-developed memory interface module [3]. While Majestic Labs hasn't specified the exact type of RAM technology used, the company describes it as "extremely fast, power-efficient, high-bandwidth memory," suggesting the potential use of High Bandwidth Memory (HBM) technology [3].
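The "extremely fast, high-bandwidth" wording matters because, during token generation, throughput is usually limited by how quickly model weights and cached state can be streamed out of memory rather than by arithmetic. The sketch below estimates that traffic for a single decoding stream using purely hypothetical figures (a 70B-parameter model in 16-bit precision and roughly 10 GB of cached context); it is an illustration of the general constraint, not a statement about Majestic's hardware.

```python
# Rough decode-phase memory-traffic estimate: generating each token requires
# streaming (roughly) the full model weights plus the session's cached state.
# All figures are illustrative assumptions, not vendor specifications.

def required_bandwidth_gbs(weight_bytes: float, cache_bytes: float,
                           tokens_per_second: float) -> float:
    """Approximate bytes/second read from memory for one decoding stream."""
    return (weight_bytes + cache_bytes) * tokens_per_second / 1e9

weights = 70e9 * 2   # hypothetical 70B-parameter model in 16-bit: ~140 GB
cache = 10e9         # ~10 GB of cached keys/values for a long context

for tps in (10, 50):  # target tokens per second for a single stream
    gbs = required_bandwidth_gbs(weights, cache, tps)
    print(f"{tps:>3} tok/s -> ~{gbs:,.0f} GB/s of memory bandwidth")
```

Batching amortizes the weight reads across concurrent requests, but even so the estimates land well above the few hundred GB/s a conventional DDR-based server typically delivers, which is why an HBM-class memory system would be a natural fit for the workloads the company describes.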
The company's technology directly addresses one of the most significant challenges in AI infrastructure: memory bottlenecks in large language model deployments. As Shacham explained, "We're not trying to replace GPUs across the board -- we're solving for memory-intensive AI workloads where the fixed compute-to-memory ratio becomes a constraint" [2]. Each Majestic server is designed to replace up to 10 conventional server racks, potentially offering substantial reductions in power consumption, cooling requirements, and physical space utilization [2].
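The "up to 10 racks" comparison can be sanity-checked with simple arithmetic. The sketch below uses entirely hypothetical hardware figures (192 GB of memory per accelerator, 8 accelerators per server, 8 servers per rack) rather than anything Majestic Labs or its competitors have published; it only shows how a 128 TB memory pool maps onto rack counts when memory can be bought only alongside compute, which is the "fixed compute-to-memory ratio" Shacham refers to.

```python
# How many conventional accelerator racks does a 128 TB memory pool imply?
# Every hardware figure here is a hypothetical placeholder for illustration.

TARGET_POOL_TB = 128      # memory pool of the reported Majestic-class server
GPU_MEMORY_GB = 192       # hypothetical on-package memory per accelerator
GPUS_PER_SERVER = 8
SERVERS_PER_RACK = 8

pool_gb = TARGET_POOL_TB * 1000
servers = pool_gb / (GPU_MEMORY_GB * GPUS_PER_SERVER)
racks = servers / SERVERS_PER_RACK

print(f"Servers needed for {TARGET_POOL_TB} TB of accelerator memory: {servers:.0f}")
print(f"Racks needed at {SERVERS_PER_RACK} servers per rack: {racks:.1f}")
# -> roughly 83 servers, i.e. on the order of 10 racks -- the scale of
#    consolidation the founders describe.
```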
The timing of this technology development aligns with the massive infrastructure investments being made across the tech industry. Major companies including Alphabet, Meta, Microsoft, and Amazon have collectively raised their capital expenditure guidance, with expectations to exceed $380 billion in 2025, primarily driven by AI workload requirements [1].
Majestic Labs completed its $71 million Series A funding round in September, led by Bow Wave Capital with participation from Lux Capital, which also led the company's earlier seed investment [3]. Additional investors include SBI, Upfront, Grove Ventures, Hetz Ventures, QP Ventures, Aidenlair Global, and TAL Ventures [3].

The company, which has been operating in stealth mode since late 2023, currently employs fewer than 50 people across offices in Los Altos, California, and Tel Aviv, Israel [2]. Majestic Labs is targeting hyperscale data center operators, financial institutions, and pharmaceutical companies as primary customers for its memory-dense server solutions [3].
Summarized by Navi