6 Sources
[1]
Apple joins UALink Consortium for open-standard AI accelerator interconnections
Ultra Accelerator Link Consortium (UALink) on Tuesday said that Alibaba Cloud, Apple, and Synopsys have been elected to its board of directors, which allows the companies to influence the development of UALink, a technology designed to enable connectivity of AI and HPC accelerators and meant to compete against Nvidia's NVLink. The announcement underscores Apple's interest in datacenter connectivity for AI and may indicate that the company is working on a datacenter product for AI acceleration.

The UALink specification version 1.0, due in the first quarter of 2025, will enable connection of up to 1,024 accelerators within an AI computing pod over a low-latency network at a speed of 200 Gb/s per lane. The specification allows for direct data transfers between the memory attached to processors, which is particularly important for AI training workloads. The new standard is an open industry standard backed by AMD, Broadcom, Cisco, Google, HPE, Intel, Meta, and Microsoft, all well-known developers of AI hardware and software. There are 65 companies behind UALink.

"UALink shows great promise in addressing connectivity challenges and creating new opportunities for expanding AI capabilities and demands," said Becky Loop, Director of Platform Architecture at Apple. "Apple has a long history of pioneering and collaborating on innovations that drive our industry forward, and we are excited to join the UALink Board of Directors."

Apple has never confirmed plans to develop its own datacenter-class processors for AI, but rumors about such intentions have been floating around for a while. The company could follow in the footsteps of industry peers like Google and Facebook and work with Broadcom to design custom datacenter accelerators for training and inference. Such an approach would enable the company to tailor its AI capabilities to its actual needs and save on expensive third-party accelerators like those from AMD and Nvidia, as well as on power and software stack costs. For now, this is pure speculation, but joining the UALink Consortium indicates that the company is interested in datacenter-grade connectivity, which certainly hints at the development of in-house processors.

Alibaba Cloud, just like other major cloud service providers, is interested in developing its own AI hardware, so joining the UALink Consortium is a natural fit for the company. "Alibaba Cloud believes that driving AI computing accelerator scale-up interconnection technology by defining core needs and solutions from the perspective of cloud computing and applications has significant value in building the competitiveness of intelligent computing supernodes," said Qiang Liu, VP of Alibaba Cloud, GM of Alibaba Cloud Server Infrastructure. "The UALink consortium, as a leader in the interconnect field of AI accelerators, has brought together key members from the AI infrastructure industry to work together to define interconnect protocol which is natively designed for AI accelerators, driving innovation in AI infrastructure. This will strongly promote the innovation of AI infrastructure and improve the execution efficiency of AI workloads, contributing to the establishment of an open and innovative industry ecosystem."

Synopsys does not develop its own AI accelerators, but the company licenses physical IP to its customers, so it is important for Synopsys to be a part of the team behind UALink.
"UALink will be critical in addressing the performance and bandwidth communication demands of hyperscale data centers, enabling the high-speed interconnects needed to scale up pods and clusters," said Richard Solomon, UALink Board Member and Sr. Staff Product Manager, Synopsys. "As the leading provider of best-in-class interface IP solutions, Synopsys is committed to contributing our expertise to the UALink Consortium to develop high-speed standards enabling the world's fastest AI accelerator architectures."
[2]
Apple joins UALink group tasked with taking on Nvidia's AI hardware dominance - 9to5Mac
Apple has officially gained a board seat on the Ultra Accelerator Link Consortium, a group of more than 65 members developing next-generation AI accelerator architecture. Apple's involvement will allow the company to influence the new standard and push for its adoption. That basically means that Apple will be involved in creating and promoting a standard for connecting a bunch of GPUs together in data centers powering AI tasks.

"UALink shows great promise in addressing connectivity challenges and creating new opportunities for expanding AI capabilities and demands," said Becky Loop, Director of Platform Architecture at Apple. "Apple has a long history of pioneering and collaborating on innovations that drive our industry forward, and we're excited to join the UALink Board of Directors."

What exactly is UALink? Simply put, UALink is an industry effort to compete with Nvidia's dominance in AI hardware. Specifically, the standard aims to compete with Nvidia's proprietary NVLink technology. This piece from Tom's Hardware last fall explains: UALink seeks to become the industry's open standard for scale-up connections of many AI accelerators, and to become a competitor with Nvidia's proprietary NVLink. NVLink, Nvidia's solution for GPU-to-GPU communication in servers or pods of servers, uses InfiniBand (another effectively Nvidia-owned communication technology) for higher-level scaling. InfiniBand is being challenged by newcomer Ultra Ethernet, another major consortium of tech giants creating an open standard to counter Nvidia's dominance.

Backed by AMD and Intel, the UALink Consortium includes promoter members such as AWS, Google, Meta, and Microsoft. Now Apple is involved as well. "The initial 1.0 version 200Gbps UALink specification enables the connection of up to 1K accelerators within an AI pod, and is based on the IEEE P802.3dj PHY Layer," according to the UALink Consortium. "The specification will be available to Contributor Members in 2024, and will be released to the public during the first quarter of 2025."
[3]
Apple joins standards consortium to improve AI server hardware
Apple has joined the board of directors for the Ultra Accelerator Link Consortium, giving it more of a say in how the architecture for AI server infrastructure will evolve. The Ultra Accelerator Link Consortium (UALink) is an open industry standard group for the development of UALink specifications. As a potential key element in the development of artificial intelligence models and accelerators, the standards it produces could be massively beneficial to the future of AI itself.

On Tuesday, it was announced that three more members have been elected to the consortium's board. Apple was one of the trio, alongside Alibaba and Synopsys. The consortium has grown to more than 65 member companies since its incorporation in October 2024.

"UALink shows great promise in addressing connectivity challenges and creating new opportunities for expanding AI capabilities and demands," said Becky Loop, Director of Platform Architecture at Apple. "Apple has a long history of pioneering and collaborating on innovations that drive our industry forward, and we're excited to join the UALink Board of Directors." UALink Consortium Board Chair Kurtis Bowman welcomed the three companies to the board. "The continued support for the Consortium will help accelerate adoption of this key industry standard, defining the next-generation interconnect for AI workloads," he said.

UALink is described as a "high-speed, scale-up accelerator interconnect technology that advances next-generation AI cluster performance." The consortium tasks itself with developing the technical specifications for the interconnects that reside between AI accelerators, or GPUs. In short, the interconnects provide high-bandwidth connectivity between two processing components, minimizing bottlenecks and enabling fast communications. In this case, the goal is to allow multiple GPUs or AI chips to communicate with each other with minimal lag, so they can work together as if they were one larger chip. This is similar in concept to the interconnect Apple uses in its Apple Silicon Ultra chips to connect two Max chips together.

The concept, when it comes to UALink and AI servers, is that the interconnect would tie multiple chips together. As UALink describes it, "hundreds of accelerators in a pod," with the interconnect also enabling simple load and store semantics "with software coherency." In simple terms, UALink envisions using an interconnect to connect many AI chips and GPUs with extremely fast communications between the components, all so they can work faster for AI development and processing.

Currently, the group intends to issue the UALink 1.0 specification in the first quarter of 2025. It is expected to allow for up to 200Gbps of bandwidth per lane, with the possibility of connecting up to 1,024 accelerators in an AI pod.

As a company forging ahead in the world of AI development, in part through its introduction of Apple Intelligence, Apple has a vested interest in guiding AI developments. Indeed, there are multiple aspects at play that Apple can take advantage of as part of the UALink board. The most obvious is developing high-performance AI-chip servers. It has already considered using various systems to develop the AI models used in its products, but better hardware can speed up learning processes, or allow more processes to take place simultaneously. Ultimately, this can save money on resources, or maintain the same spend while seeing more benefits.
This wouldn't just be for model training purposes, as the improved servers using the interconnects could also be used for cloud-based queries. Apple tries to perform its processing on-device, but it also employs servers for tougher off-device queries. With faster servers, these queries could be answered more quickly, or with more processing applied, than at present.

There may also be an element relating to on-device processing. While the technology is intended for high numbers of components communicating with each other, Apple could feasibly use what it has learned for its own hardware. Aside from the interconnect in the Ultra chips, Apple also relies heavily on high-speed connectivity in its chips in general. Optimizing how its system-on-chip designs work will make them higher-performing, which will benefit end users more directly. This last goal could be extremely useful for future chips, but whether it will be used in Apple Silicon isn't clear at this time. The certain near-term use will be for server hardware. Even so, with the first-generation specification arriving in months, it may still be a very long time before the interconnects become more commonly used in the AI field.
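The "load and store semantics" described above can be made more concrete with a small, purely conceptual Python sketch. This is not the UALink API (no public programming interface is described in these articles); the PodMemory and Accelerator classes and every method name below are hypothetical, and the example only illustrates the general idea of peers reading and writing a shared pool of memory instead of exchanging explicit copies.

# Conceptual illustration only: these classes and method names are hypothetical
# and do not correspond to any real UALink software interface.

class PodMemory:
    """Toy stand-in for memory that every accelerator in a pod can address."""
    def __init__(self):
        self._cells = {}

    def store(self, address, value):
        # "Store" semantics: the writer places data directly at an address.
        self._cells[address] = value

    def load(self, address):
        # "Load" semantics: any reader fetches directly from that address.
        return self._cells.get(address)


class Accelerator:
    """Toy accelerator that treats pod memory as if it were local."""
    def __init__(self, name, pod_memory):
        self.name = name
        self.mem = pod_memory

    def produce(self, address, value):
        # No explicit send/receive pair: the producer simply stores the result.
        self.mem.store(address, value)

    def consume(self, address):
        return self.mem.load(address)


pod = PodMemory()
a0 = Accelerator("accel-0", pod)
a1 = Accelerator("accel-1", pod)

a0.produce(0x1000, "partial results from accel-0")
print(a1.consume(0x1000))  # accel-1 reads the data without an explicit copy step

"Software coherency" in the consortium's description is generally understood to mean that the software stack, rather than a hardware coherence protocol, is responsible for making sure readers see up-to-date data; in this toy example that responsibility would fall on whoever orders the produce and consume calls.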
[4]
Apple joins consortium to help develop next-gen AI data center tech | TechCrunch
Apple has joined a consortium creating next-gen technology to link together chips in AI data centers. The consortium, the Ultra Accelerator Link Consortium, is developing a standard called UALink, which connects the AI accelerator chips found within a growing number of server farms. As of Tuesday, Apple is a member of the consortium's board, along with Alibaba and semiconductor company Synopsys.

In a statement, Apple director of platform architecture Becky Loop said that UALink "shows great promise" in addressing connectivity challenges and creating new opportunities for expanding AI capabilities and demands. "Apple has a long history of pioneering and collaborating on innovations that drive our industry forward," Loop added, "and we're excited to join the UALink board of directors."

UALink aims to connect chips ranging from GPUs to custom-designed solutions to speed up the training, fine-tuning, and running of AI models. Based on open standards including AMD's Infinity Fabric, the first UALink products are expected to launch in the next couple of years. Intel, AMD, Google, AWS, Microsoft, and Meta are among the UALink consortium's members. Nvidia, which is by far the largest producer of AI accelerators, is not. That's perhaps because Nvidia offers its own proprietary interconnect tech, NVLink, for linking chips within a data center cluster.

Apple's participation in UALink comes as the company increases its investments in infrastructure to support Apple Intelligence, its suite of AI product features. According to The Wall Street Journal, Apple is developing a new server chip to improve the efficiency of its AI data centers. Some of Apple Intelligence's capabilities have been met with mixed reviews. (My colleague Sarah Perez called them "boring and practical.") Last week, Apple said it would update one of those capabilities, AI-summarized news alerts, after users reported seeing inaccurate headlines, including that tennis star Rafael Nadal had come out as gay.
[5]
UALink Archives - 9to5Mac
[6]
New High-Speed Technology Paves Way for Apple Intelligence Upgrades
Apple has backed a technology that could contribute to future Apple Intelligence advancements. The consortium behind the "Ultra Accelerator Link" or "UALink" technology today announced that Apple, Alibaba, and Synopsys have joined its Board of Directors. The three companies will contribute to further development of the technology.

UALink is described as a "high-speed, scale-up interconnect for next-generation AI cluster performance." The consortium is aiming to release UALink 1.0 in the first quarter of 2025, and it will enable data speeds of "up to 200Gbps per lane." The technology could pave the way for faster and more efficient Apple Intelligence servers.

"UALink shows great promise in addressing connectivity challenges and creating new opportunities for expanding AI capabilities and demands," said Becky Loop, Director of Platform Architecture at Apple. Loop added that "Apple has a long history of pioneering and collaborating on innovations that drive our industry forward."

It is not clear if or when Apple will adopt the technology, or if it is merely interested in helping advance the overall AI industry. Apple Intelligence servers are currently powered by the M2 Ultra chip, and they are expected to start using M4 series chips this year. In what could be an eventual move away from Mac chips for server use, The Information recently reported that Apple has been developing a new AI server chip that will offer even faster performance for AI workloads.
Apple has joined the board of directors of the Ultra Accelerator Link Consortium (UALink), positioning itself to influence the development of open-standard AI accelerator interconnections. This move signals Apple's growing interest in datacenter AI technologies and potential plans for in-house AI hardware development.
Apple has secured a position on the board of directors of the Ultra Accelerator Link Consortium (UALink), a group dedicated to developing next-generation AI accelerator architecture. This move, announced on Tuesday, places Apple alongside Alibaba Cloud and Synopsys as newly elected board members [1][2][3].
UALink is an open industry standard for scale-up connections of AI accelerators, designed to compete with Nvidia's proprietary NVLink technology. The consortium, which now boasts over 65 member companies, aims to create a high-speed, low-latency network for connecting AI and HPC accelerators [1][2].
The upcoming UALink specification version 1.0, expected in the first quarter of 2025, promises several advancements:
- Data rates of up to 200 Gb/s per lane
- Connection of up to 1,024 accelerators within a single AI computing pod
- Direct, low-latency data transfers between the memory attached to accelerators, which is particularly important for AI training workloads
- A physical layer based on IEEE P802.3dj
Becky Loop, Director of Platform Architecture at Apple, expressed enthusiasm about joining the UALink Board of Directors, stating, "UALink shows great promise in addressing connectivity challenges and creating new opportunities for expanding AI capabilities and demands" [1][2][3][4].
Apple's participation in UALink raises speculation about the company's future plans in AI hardware:
- Apple has never confirmed plans for datacenter-class AI processors, but rumors of such intentions have circulated for a while
- The company could follow peers like Google and Meta in working with a partner such as Broadcom to design custom accelerators for training and inference
- Apple is reportedly developing a new server chip aimed at improving the efficiency of its AI data centers
UALink's development is seen as a direct challenge to Nvidia's dominance in AI hardware. The consortium includes major tech players such as AMD, Intel, Google, Meta, Microsoft, and now Apple [2][4].
While the immediate focus is on server hardware, Apple's involvement in UALink could potentially influence its on-device processing capabilities:
- UALink's approach is similar in concept to the interconnect Apple already uses to join two Max chips into an Ultra chip
- Lessons from high-speed, low-latency interconnects could inform future Apple Silicon designs, though it is not yet clear whether they will be applied there
As Apple continues to invest in its Apple Intelligence suite and reportedly develops new server chips for AI data centers, its participation in UALink signals a strategic move to shape the future of AI infrastructure [4][5].