4 Sources
[1]
Broadcom takes on Nvidia with Thor Ultra, an open 800G Ethernet chip for AI clusters
Connecting the dots: Thor Ultra's distinction lies not in raw throughput but in how it operationalizes open Ethernet for the AI era. By adhering to the UEC specification, it gives data center operators a way to scale AI workloads without being locked into a single vendor's networking ecosystem. This positioning - combining ultra-high bandwidth, programmability, and open interoperability - makes Thor Ultra less about competing with Nvidia on speed and more about redefining how AI fabrics are designed and standardized at hyperscale.

Broadcom has introduced a new networking chip called Thor Ultra, designed to compete directly with Nvidia in powering data movement across AI clusters. It is the first 800G Ethernet network interface card built to the open Ultra Ethernet Consortium (UEC) specification, targeting one of the biggest bottlenecks in large language model training: high-bandwidth, low-latency interconnects at data center scale.

Thor Ultra moves beyond traditional RDMA architectures with a suite of capabilities that make Ethernet viable for trillion-parameter workloads. Its packet-level multipathing and out-of-order packet delivery features let networks dynamically balance load and sustain throughput across congested fabrics - tasks once dependent on costly, proprietary interconnects. Selective retransmission mechanisms and fully programmable congestion control algorithms further improve link utilization, helping systems manage the unpredictable traffic patterns characteristic of distributed AI training environments.

At the hardware level, Thor Ultra's 800G line rate doubles the throughput of the previous generation and integrates 200G and 100G PAM4 SerDes options with what Broadcom claims is the industry's lowest bit error rate.
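As a rough illustration of how out-of-order delivery pairs with selective retransmission, the Python sketch below is a toy model, not Broadcom's hardware logic (the class, window size, and sequence numbers are invented for illustration). It shows a receiver buffering packets in any arrival order and re-requesting only the missing sequence numbers, rather than the whole tail after a gap:

```python
# Illustrative sketch (not Broadcom's implementation): a receiver that
# accepts packets out of order and requests selective retransmission of
# only the missing sequence numbers.

class OutOfOrderReceiver:
    def __init__(self, window_size):
        self.window_size = window_size
        self.received = {}      # seq -> payload, buffered out of order
        self.next_expected = 0  # first seq not yet delivered in order

    def on_packet(self, seq, payload):
        """Buffer any packet inside the window, regardless of arrival order."""
        if self.next_expected <= seq < self.next_expected + self.window_size:
            self.received[seq] = payload

    def missing(self):
        """Sequence numbers to selectively retransmit: only the gaps below
        the highest packet seen, not everything after the first gap
        (as a go-back-N scheme would retransmit)."""
        if not self.received:
            return []
        highest = max(self.received)
        return [s for s in range(self.next_expected, highest)
                if s not in self.received]

    def deliver_in_order(self):
        """Release the contiguous prefix to the application."""
        out = []
        while self.next_expected in self.received:
            out.append(self.received.pop(self.next_expected))
            self.next_expected += 1
        return out

rx = OutOfOrderReceiver(window_size=8)
for seq in (0, 2, 3, 5):           # packets 1 and 4 lost or reordered
    rx.on_packet(seq, f"pkt{seq}")
print(rx.missing())                 # -> [1, 4]: only the gaps are re-requested
print(rx.deliver_in_order())        # -> ['pkt0']: contiguous prefix so far
```

The point of the sketch is the bandwidth saving: with four paths spraying packets, reordering is routine, and retransmitting only true gaps keeps links busy with new data instead of duplicates.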
The NIC supports PCIe Gen6 x16 connectivity and provides line-rate encryption and decryption via PSP offload, an architectural decision intended to free XPUs from compute-intensive security workloads that can add latency. Secure boot and firmware attestation extend the trusted computing boundary all the way to the NIC.

Although Broadcom's Tomahawk and Jericho series have long dominated intra-data-center switching, Thor Ultra represents the company's clearest effort yet to redefine the NIC as a programmable extension of the AI fabric rather than a passive endpoint. The combination of a programmable congestion-control pipeline and support for packet trimming and congestion signaling with Tomahawk 5 and 6 underscores a vertically optimized (yet still open) architecture that stands in sharp contrast to Nvidia's tightly coupled, proprietary networking stack.
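To see why a PCIe Gen6 x16 host interface matters for an 800G NIC, a back-of-the-envelope calculation helps. The figures below come from the public PCIe 6.0 specification, not from Broadcom, and the FLIT efficiency is approximate:

```python
# Back-of-the-envelope check (assumed PCIe 6.0 figures, not Broadcom data):
# why an 800G NIC needs a PCIe Gen6 x16 host interface.

GEN6_GT_PER_LANE = 64           # PCIe 6.0: 64 GT/s per lane via PAM4 signaling
LANES = 16
FLIT_EFFICIENCY = 236 / 256     # ~236 payload bytes per 256-byte FLIT (approx.)

raw_gbps = GEN6_GT_PER_LANE * LANES            # 1024 Gb/s per direction
usable_gbps = raw_gbps * FLIT_EFFICIENCY       # ~944 Gb/s after encoding
line_rate_gbps = 800                           # Thor Ultra Ethernet line rate

print(f"usable host bandwidth ~ {usable_gbps:.0f} Gb/s per direction")
print(f"headroom over 800G    ~ {usable_gbps - line_rate_gbps:.0f} Gb/s")

# For comparison, a Gen5 x16 link (32 GT/s/lane, 128b/130b encoding) yields
# roughly 504 Gb/s usable, which cannot sustain an 800G line rate.
```

Under these assumptions, Gen6 x16 leaves roughly 140 Gb/s of headroom above the 800G line rate, while the previous PCIe generation falls well short, which is consistent with the article's framing of Gen6 as a prerequisite rather than a luxury.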
[2]
Broadcom's Thor Ultra just made PCIe Gen6 Ethernet real for AI tools
Traditional RDMA meets its replacement as Thor Ultra redefines packet control precision

Broadcom has introduced the Thor Ultra, described as the industry's first 800G AI Ethernet network interface card (NIC). The card is designed for the large data center environments that power advanced AI tools and workloads involving hundreds of thousands of XPUs. Using PCIe Gen6 x16, Thor Ultra improves throughput for Ethernet cards and targets AI data centers where performance and scalability are essential.

The Broadcom Thor Ultra 800G Ethernet card addresses several limitations of traditional RDMA, which lacks multipathing and scalable congestion control. Broadcom's redesign adds "Packet-Level Multipathing" and "Out-of-Order Packet Delivery," enabling more efficient use of network resources. Selective retransmission and programmable congestion control have also been added to reduce latency and data loss in high-load environments.

The approach follows the open Ultra Ethernet Consortium (UEC) specification rather than a proprietary model, allowing customers to connect to any compatible network switch or XPU without being tied to a single vendor's infrastructure. The Thor Ultra supports 200G or 100G PAM4 SerDes and long-reach passive copper, while offering one of the lowest bit error rates in the industry to minimize connection instability. Its PCIe Gen6 x16 interface lets it move data at the speeds AI-driven workloads demand. The card can also perform line-rate encryption and decryption with PSP offload, which reduces computational strain on the host system. Secure boot and signed firmware strengthen device trust, while packet trimming and congestion signaling align it with Broadcom's Tomahawk 5 and 6 switches.

Broadcom describes Thor Ultra as a cornerstone of its Ethernet AI networking strategy, joining products such as Tomahawk Ultra and Jericho 4.
The company says that together, these components enable a scalable, open ecosystem for high-performance AI computing. Still, it remains to be seen how well Thor Ultra integrates with other vendors' technologies, or whether its benefits outweigh the cost and complexity of deployment in non-specialized settings. Broadcom has begun sampling Thor Ultra, suggesting adoption by early partners and AI data center operators may begin soon.

Via TechPowerUp
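The difference between classic flow-level ECMP hashing and the packet-level multipathing described above can be sketched in a few lines of Python. This is a toy model with invented path counts and flow tuples, not Broadcom's silicon:

```python
# Illustrative contrast (assumptions, not Broadcom internals): flow-level
# ECMP pins each flow to one path, so a few large AI "elephant" flows can
# saturate a single link, while packet-level multipathing sprays every
# packet across all paths, keeping the fabric evenly loaded.

import hashlib

PATHS = 4  # invented path count for the toy fabric

def flow_level_path(src, dst, port):
    """Classic ECMP: hash the flow tuple once; every packet follows it."""
    h = hashlib.sha256(f"{src}-{dst}-{port}".encode()).digest()
    return h[0] % PATHS

def packet_level_path(seq):
    """Per-packet spraying, here simplified to round-robin by sequence."""
    return seq % PATHS

# One big training flow of 1000 packets:
flow_load = [0] * PATHS
pkt_load = [0] * PATHS
for seq in range(1000):
    flow_load[flow_level_path("gpu0", "gpu7", 4791)] += 1
    pkt_load[packet_level_path(seq)] += 1

print("flow-level ECMP load:", flow_load)   # all 1000 packets on one path
print("packet-level load:   ", pkt_load)    # 250 packets on each path
```

The trade-off the articles describe follows directly: spraying balances load but scrambles arrival order, which is why packet-level multipathing only works when paired with the out-of-order delivery and selective retransmission features covered above.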
[3]
Broadcom to launch new networking chip, as battle with Nvidia intensifies - The Economic Times
Broadcom is launching a new networking chip on Tuesday that will help companies build artificial intelligence computing systems by stringing together hundreds of thousands of chips that crunch data, deepening its rivalry with Nvidia. The chip, called the Thor Ultra, enables computing infrastructure operators to deploy far more chips than they otherwise could, allowing them to build and run the large models used to power AI apps such as ChatGPT.

The Thor Ultra will battle Nvidia's network interface chips and aims to further entrench Broadcom's control of network communications inside data centers designed for AI applications. It comes after Broadcom on Monday unveiled a deal to roll out 10 gigawatts' worth of custom chips for ChatGPT maker OpenAI beginning in the second half of 2026, challenging Nvidia's grip on the AI accelerator market.

AI represents a big opportunity for Broadcom. Chief Executive Hock Tan said late last year that the market the company is going after for its various AI chips is in the range of $60 billion to $90 billion in 2027, divided between its networking chips and the data center processors it helps Alphabet's Google and OpenAI make. Broadcom reported AI revenue of $12.2 billion in fiscal 2024. In September, it announced a new, unnamed $10 billion customer for its custom data center AI chips.

The Thor Ultra chip, part of Broadcom's growing catalog of networking chips, operates as a critical link between an AI system and the rest of the data center. The networking chips help data center operators move information around inside a facility. "In the distributed computing system, network plays an extremely important role in building these large clusters," Ram Velaga, a Broadcom senior vice president, told Reuters.
"So I'm not surprised that anybody who's in the GPU business wants to make sure that they are participating in the networking."

Focusing on chip design

While networking chips are crucial to the company's plans, the AI chips Broadcom helps design for large cloud computing companies such as Google are also lucrative. Broadcom has worked on multiple generations of Google's Tensor processor, which Google began designing more than a decade ago. The Tensor chips have generated billions of dollars in revenue for Broadcom, according to analyst estimates.

During a September tour of Broadcom's San Jose network chip-testing labs, where engineers work on current and forthcoming designs of its networking chips, company executives detailed the measures taken to construct and test new networking processors. Broadcom's engineers doubled the bandwidth on Thor Ultra compared with the prior version and put the chips through rigorous testing and evaluation from the earliest stages of production.

To make a chip like the Thor Ultra or its flagship Tomahawk series of networking switches, the engineers build an entire system around the chip. With the hardware system team, the engineers discuss what kind of package the chip uses, how much power it will need and how much heat it will emit, Velaga said. Broadcom does not sell servers itself but offers its customers the designs for the components and systems it creates for testing, giving them a reference point for building the networking infrastructure around the chip.

"For every dollar we invest in our silicon, there is at least $6 to $10 that our ecosystem partners are investing," Velaga said. "So a lot of our focus is on design, as if we are almost ready to take it to production."
[4]
Broadcom to launch networking chip Thor Ultra; unveils Wi-Fi 8 solutions
Broadcom (NASDAQ:AVGO) is launching a new networking chip called Thor Ultra, which will help companies build AI computing systems by stringing together hundreds of thousands of chips that process data, heating up the race with Nvidia (NVDA), Reuters reported.

Thor Ultra intensifies competition by enabling broader deployment and could deepen Broadcom's role in data center networking against Nvidia. The chip doubles previous bandwidth and supports large-scale AI clusters with advanced engineering for efficiency, power, and heat management. Separately, Broadcom's Wi-Fi 8 products are tailored for high performance, low latency, and predictability, addressing AI edge network demands in smartphones, IoT, and related devices.
Broadcom introduces Thor Ultra, an 800G Ethernet chip designed for AI clusters, intensifying competition with Nvidia. This open-standard chip aims to redefine data center networking for AI applications.
Broadcom has introduced Thor Ultra, an 800G Ethernet network interface card (NIC) designed to transform data movement across AI clusters [1][2]. The new chip marks a significant milestone in the company's efforts to compete with Nvidia in the rapidly expanding AI infrastructure market.

Thor Ultra's technical specifications set it apart from traditional networking solutions: an 800G line rate that doubles the previous generation's throughput, a PCIe Gen6 x16 host interface, 200G and 100G PAM4 SerDes options with what Broadcom claims is the industry's lowest bit error rate, line-rate encryption and decryption via PSP offload, and secure boot with signed firmware [1][2].

The chip also introduces several features designed to address the unique challenges of AI workloads: packet-level multipathing, out-of-order packet delivery, selective retransmission, and programmable congestion control [1][2].

Broadcom positions Thor Ultra as a cornerstone of its Ethernet AI networking strategy [2]. By adhering to the UEC specification, Thor Ultra offers data center operators a way to scale AI workloads without being locked into a single vendor's ecosystem [1]. This approach stands in contrast to Nvidia's tightly coupled, proprietary networking stack [1].

The launch comes at a critical time for Broadcom, as the company seeks to capitalize on the growing AI market. CEO Hock Tan has estimated that the addressable market for Broadcom's AI chips could reach $60 billion to $90 billion by 2027 [3].

As Broadcom begins sampling Thor Ultra, the industry anticipates its adoption among early partners and AI data center operators [2]. The chip's success could reshape the AI networking market by offering a compelling alternative to proprietary solutions. However, questions remain about Thor Ultra's integration with other vendors' technologies and whether its benefits will outweigh the cost and complexity of deployment in non-specialized settings [2]. As the battle between Broadcom and Nvidia intensifies, the ultimate impact on the AI infrastructure landscape remains to be seen.