2 Sources
[1]
No-Nvidias networking club seeks open GPU interconnect tech
Ultra Accelerator Link consortium promises its 200 gigabit-per-second-per-lane spec will debut in Q1 2025

The Ultra Accelerator Link Consortium - an alliance of enterprise tech vendors that pointedly excludes Nvidia because it wants a shared standard for accelerator-to-accelerator links - has opened its doors and promised to deliver a spec in the first quarter of 2025.

The Consortium announced its existence in May, and promised to "define and establish an open industry standard that will enable AI accelerators to communicate more effectively." The group's members include AMD, AWS, Broadcom, Cisco, Google, HPE, Intel, Meta and Microsoft - a Who's Who of AI, other than Nvidia.

Why exclude the market leader? Nvidia's networking business has quietly grown to over a $14 billion annual run rate - a figure that, among datacenter networking vendors, only the likes of Cisco and Huawei can match. Plenty of that revenue comes from Nvidia's proprietary InfiniBand and NVLink GPU-to-GPU connection offerings - which aren't easily accessed by rival vendors.

When UALink proclaims it wants "an interconnect based upon open standards [to] enable system OEMs, IT professionals and system integrators to create a pathway for easier integration, greater flexibility and scalability of their AI-connected datacenters," it's therefore both declaring its members' ambition and pointing out that vendors and buyers alike generally appreciate an open alternative to proprietary products.

Vendors like open standards because they enable them to sell stuff. As the enterprise tech vendor community watches Nvidia dominate AI, many players would very much like to sell more stuff, but know it's too late to create their own tech. Buyers also like a contested market because that tends to drive down prices.

Which brings us to Tuesday's announcement that the UALink Consortium has done enough paperwork to have been incorporated, and is open to other entities joining the group.

The org also teased its forthcoming version 1.0 spec, which it claims "will enable up to 200Gbit/sec per lane scale-up connection for up to 1024 accelerators within an AI pod." That's rather faster than the 112Gbit/sec possible over Nvidia's NVLink, and also leaves PCIe 5 eating dust.

The consortium promised the spec will emerge in a form fit for general review in the first quarter of 2025. But even then it will still be just an idea, and whatever emerges in early 2025 will be many months or years away from appearing in hardware.

Which is why Nvidia CEO Jensen Huang dismissed UALink as a threat last May during Taiwan's Computex exhibition, when he said "By the time the first gen of UALink comes out, we will be at NVLink seven or eight."

Which is not to say Nvidia is necessarily hostile to open standards: the acceleration champ this week proudly pointed to its Spectrum X take on Ethernet being used by the 100,000-GPU AI training cluster built by Elon Musk's xAI.

UALink Consortium chair Kurtis Bowman offered a canned quote to the effect that "The release of the UALink 1.0 specification in Q1 2025 represents an important milestone as it will establish an open industry standard enabling AI accelerators and switches to communicate more effectively, expand memory access to meet large AI model requirements and demonstrate the benefits of industry collaboration."
[2]
UALink Consortium officially incorporates -- NVLink competitor headed by AMD and Intel opens doors to contributor members
The Ultra Accelerator Link (UALink) Consortium has officially incorporated, which means it's now a legal entity. The Consortium seeks to create a new standard for high-speed, low-latency communication across servers in AI data centers, and has board members from AMD, Intel, Meta, Hewlett-Packard Enterprise, Amazon Web Services (AWS), Astera Labs, Cisco, Google, and Microsoft. The Consortium is also looking for new contributor members.

UALink seeks to become the industry's open standard for scale-up connections of many AI accelerators, and a competitor to Nvidia's proprietary NVLink. NVLink is Nvidia's solution for GPU-to-GPU communication within servers or pods of servers; for higher-level scaling, Nvidia relies on InfiniBand, another effectively Nvidia-owned communication technology. InfiniBand is being challenged by newcomer Ultra Ethernet, another major consortium of tech giants creating an open standard to counter Nvidia's dominance.

Willie Nelson, president of the UALink Consortium, is also opening the doors for new companies and groups to join the party. "Interested companies are encouraged to join as Contributor members to support our mission: establishing an open and high-performance accelerator interconnect for AI workloads."

UALink's 1.0 spec is coming to members this year. The standard will enable up to 200Gbps per lane connection for up to 1,024 accelerators within an AI pod. Assuming Nvidia HGX-style servers with 8 AI accelerators inside, UALink could connect up to 128 of these machines in a pod. It is most likely that UALink will often be used at a smaller scale, however, with pods of around eight servers communicating with each other via UALink and further upscaling being handled by Ultra Ethernet.

Consortium members will gain access to the spec this year, with a general review opening up in Q1 2025. The UALink standard's release in Q1 2025 will line up with the release of version 1 of Ultra Ethernet, and AMD recently announced the industry's first Ultra Ethernet-ready 400GbE card. UALink and Ultra Ethernet are both made up of industry giants seeking to dethrone Nvidia, and are almost assured to succeed as open standards in the AI data center space thanks to their wide levels of support.

"The work being done by the companies in UALink to create an open, high performance and scalable accelerator fabric is critical for the future of AI," said Forrest Norrod, executive VP and general manager of the Data Center Solutions Group at AMD.

Samsung is likely to be among the first Contributor members of the Consortium, as it announced its intentions to join back in June. The Ultra Ethernet Consortium will also likely see a lot of crossover, and industry giants such as Baidu, Dell, Huawei, IBM, Nokia, Lenovo, Supermicro, and Tencent joined it as contributors in the last few months. Nvidia is expected to remain outside of the Consortium, as its NVLink and InfiniBand technologies are proprietary and have already seen widespread use thanks to the company's dominance of the AI data center market.
The Ultra Accelerator Link (UALink) Consortium, comprising major tech companies, has officially incorporated to develop an open standard for high-speed AI accelerator interconnects, aiming to compete with Nvidia's proprietary NVLink technology.
The Ultra Accelerator Link (UALink) Consortium, a coalition of enterprise tech giants, has officially incorporated as a legal entity. This marks a significant step in its mission to establish an open industry standard for AI accelerator interconnects [1][2]. The consortium, which notably excludes Nvidia, includes industry leaders such as AMD, AWS, Broadcom, Cisco, Google, HPE, Intel, Meta, and Microsoft.
The formation of UALink is seen as a direct challenge to Nvidia's stronghold in the AI networking market. Nvidia's proprietary technologies, including InfiniBand and NVLink, have helped push its networking business to an annual run rate of over $14 billion [1]. The consortium aims to create an open alternative to these proprietary products, potentially disrupting Nvidia's market dominance.
UALink has announced plans to release its version 1.0 specification in the first quarter of 2025. The consortium claims the specification will enable up to 200 Gbit/s per lane scale-up connections for up to 1,024 accelerators within an AI pod [1][2].
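To make those headline numbers concrete, here is a minimal back-of-the-envelope sketch of the pod sizing they imply. The eight-accelerator server mirrors the HGX-style example cited in source [2]; the per-accelerator lane count is a hypothetical assumption for illustration only, since the announcement quotes only per-lane speed.

```python
# Rough pod-sizing arithmetic based on the figures quoted above.
# Known from the announcement: up to 200 Gbit/s per lane, up to 1,024
# accelerators per AI pod. The 8-accelerator server follows the HGX-style
# example in source [2]; the 4-lane count per accelerator is a purely
# hypothetical assumption, not a spec value.

LANE_GBPS = 200          # UALink 1.0: up to 200 Gbit/s per lane
MAX_ACCELERATORS = 1024  # per AI pod
ACCELS_PER_SERVER = 8    # HGX-style server (example from source [2])
LANES_PER_ACCEL = 4      # hypothetical, for illustration only

servers_per_pod = MAX_ACCELERATORS // ACCELS_PER_SERVER  # -> 128 servers
per_accel_gbps = LANE_GBPS * LANES_PER_ACCEL             # -> 800 Gbit/s

print(f"Servers per pod (at {ACCELS_PER_SERVER} accelerators each): {servers_per_pod}")
print(f"Per-accelerator bandwidth at {LANES_PER_ACCEL} lanes: {per_accel_gbps} Gbit/s")
```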
The development of an open standard for AI interconnects could have far-reaching implications for the industry, including easier integration, greater flexibility and scalability for AI datacenters, and a more contested market that tends to drive down prices [1].
UALink is actively seeking new contributor members to support its mission. Willie Nelson, president of the UALink Consortium, has encouraged interested companies to join [2]. The consortium's board already includes representatives from major tech companies, and Samsung has expressed interest in joining as a contributor member [2].
Alongside UALink, another industry initiative called Ultra Ethernet is challenging Nvidia's dominance in data center networking. AMD recently announced the industry's first Ultra Ethernet-ready 400GbE card, with the Ultra Ethernet standard also set for release in Q1 2025 [2].
Despite these developments, Nvidia remains confident in its position. CEO Jensen Huang dismissed UALink as a threat, stating, "By the time the first gen of UALink comes out, we will be at NVLink seven or eight" [1]. However, Nvidia has shown some openness to industry standards, as evidenced by its Spectrum X Ethernet technology being used in Elon Musk's xAI 100,000-GPU AI training cluster [1].
As the AI industry continues to evolve rapidly, the incorporation of UALink and the development of open standards like Ultra Ethernet signal a potential shift in the landscape of AI interconnect technologies. The coming years will likely see increased competition and innovation in this critical area of AI infrastructure.