3 Sources
[1]
Nvidia to boost AI server racks to megawatt scale, increasing power delivery by five times or more
Nvidia is developing a new power infrastructure called the 800 V HVDC architecture to deliver the power requirements of 1 MW server racks and beyond, with plans to deploy it by 2027. According to Nvidia, the current 54V DC power distribution system is already reaching its limit as racks begin to exceed 200 kilowatts. As AI chips become more powerful and demand more electricity, these existing systems will no longer be able to keep up, forcing data centers to build new solutions so that their electrical circuits are not overwhelmed.

For example, Nvidia says that its GB200 NVL72 or GB300 NVL72 needs around eight power shelves. With 54V DC power distribution, the power shelves would consume 64 U of rack space, more than the average server rack can accommodate. Nvidia also says that delivering 1 MW over 54V DC requires a 200 kg copper busbar -- meaning a gigawatt AI data center, which many companies are now racing to build, would need 500,000 tons of copper. That is nearly half of the U.S.'s total copper output in 2024, and that's just for one site.

So, instead of the 54V DC system, which is installed directly at the server cabinet, Nvidia is proposing the 800 V HVDC architecture, which will connect near the site's 13.8kV AC power source. Aside from freeing up space in the server racks, this will streamline power delivery and make power transmission within the data center more efficient. It will also remove the multiple AC-to-DC and DC-to-DC conversions used in the current system, which add complexity. The 800 V HVDC approach also reduces the system current for the same power load, potentially increasing the total wattage delivered by up to 85% without the need to upgrade the conductor. "With lower current, thinner conductors can handle the same load, reducing copper requirements by 45%," said the company. "Additionally, DC systems eliminate AC-specific inefficiencies, such as skin effect and reactive power losses, further improving efficiency."

According to Digitimes [machine translated], the AI giant is working with Infineon, Texas Instruments, and Navitas to help develop this system. The partners are expected to deploy wide-bandgap semiconductors such as gallium nitride (GaN) and silicon carbide (SiC) to achieve the high power densities needed by these powerful AI systems. The 800 V HVDC transition is a technical challenge that data centers must solve for power efficiency, especially as they start to breach 1 GW capacity and more. This solution should help reduce wasted power, which, in turn, would reduce operating costs.
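To make the current numbers concrete, here is a minimal back-of-the-envelope sketch (not from Nvidia) using only P = V × I; the 1 MW rack power and the 54 V and 800 V bus levels come from the article, and everything else is an assumption for illustration.

```python
# Rough sketch: why a higher distribution voltage cuts current for the
# same rack power. Only P = V * I is used; the 1 MW figure and the two
# bus voltages are taken from the article, nothing here is an NVIDIA spec.

RACK_POWER_W = 1_000_000  # 1 MW rack, per the article

for bus_voltage in (54, 800):
    current = RACK_POWER_W / bus_voltage  # I = P / V
    print(f"{bus_voltage:>4} V bus -> {current:,.0f} A")

# Approximate output:
#   54 V bus -> 18,519 A
#  800 V bus -> 1,250 A
# The roughly 15x drop in current is what lets thinner busbars carry the
# same power; the article's specific 45% copper and 85% headroom figures
# depend on NVIDIA's own conductor and topology assumptions.
```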
[2]
NVIDIA's new 800V high-voltage DC power distribution system will fuel next-gen data centers
NVIDIA's new 800V high-voltage DC power distribution system completely moves PSU modules out of server racks, leaving more room for compute and networking modules. NVIDIA recently announced that it would be adopting a new 800V high-voltage DC power distribution system, which will power the waves of next-generation data centers.

In a new post on X by insider @Jukanlosreve, we're hearing that three semiconductor companies have been officially named as NVIDIA partners in developing this new 800V high-voltage system, referred to as the "three major power IC players": Infineon, Texas Instruments (TI), and Navitas. NVIDIA's new 800V high-voltage DC power distribution system has an extremely complex design, requiring partners to offer robust and diverse power solutions to help NVIDIA achieve its lofty goals. NVIDIA is likely preparing to introduce more power semiconductors in the future, but in the early stages of this new ultra-high-end battleground, these three companies will hold an important competitive edge.

The new 800V high-voltage DC system will see the complete move of the power supply modules out of the server racks themselves, freeing up internal space for more compute and networking modules and maximizing compute density. In its first phase, PSUs will be placed beside the rack in a "sidecar" configuration; over time, NVIDIA's goal is to gradually integrate these power modules into a centralized power delivery system for the entire AI data center. This means the power supply solution must not only achieve far higher power density than existing technologies, but also offer end-to-end integration, extending from the power grid infrastructure all the way into the internal architecture of the AI data center.

South Korean media outlet The Bell reports that NVIDIA's new technological direction presents a "very high competitive barrier" that requires both an expansive product lineup, extending from power grid infrastructure to PSUs, BBUs, and voltage converters on processor boards, and the technical and manufacturing capabilities to deliver it. NVIDIA will need many years to get its proposed 800V high-voltage DC system fully deployed, and the transition from sidecar configurations to fully centralized architectures may take even longer. The general industry consensus is that NVIDIA's new architecture represents a significant step up in complexity compared to current AI data centers. The AI data center power semiconductor supply chain is already dominated by top-tier companies, so smaller PMIC companies will find it harder to enter this new market; their best opportunities may come from traditional data centers, where there is still room to compete in the server power supply market.
[3]
Texas Instruments Collaborates with NVIDIA to Revolutionize AI Data Center Power Distribution
With the growth of AI, the power required per data center rack is predicted to increase from 100 kW today to more than 1 MW in the near future. To power a 1 MW rack, today's 48V distribution system would require almost 450 lbs of copper, making it physically impossible for a 48V system to scale power delivery to support computing needs in the long term. The new 800V high-voltage DC power-distribution architecture will provide the power density and conversion efficiency that future AI processors require, while minimizing the growth of the power supply's size, weight and complexity. This 800V architecture will enable engineers to scale power-efficient racks as data-center demand evolves.

"A paradigm shift is happening right in front of our eyes," said Jeffrey Morroni, director of power management research and development at Kilby Labs and a TI Fellow. "AI data centers are pushing the limits of power to previously unimaginable levels. A few years ago, we faced 48V infrastructures as the next big challenge. Today, TI's expertise in power conversion combined with NVIDIA's AI expertise are enabling 800V high-voltage DC architectures to support the unprecedented demand for AI computing."
NVIDIA is developing a new 800V high-voltage DC power distribution system to meet the increasing power demands of AI server racks, aiming to boost efficiency and reduce copper usage in data centers.
NVIDIA, the AI giant, is set to revolutionize power distribution in data centers with its new 800V high-voltage DC (HVDC) architecture. This innovative system aims to meet the escalating power demands of AI server racks, which are expected to reach megawatt scale in the near future 1.
As AI chips become increasingly powerful, they require more electricity, pushing current power distribution systems to their limits. NVIDIA reports that the existing 54V DC power distribution system is already struggling as racks begin to exceed 200 kilowatts 1. The company's GB200 NVL72 or GB300 NVL72 systems, for instance, would need around eight power shelves, consuming 64 U of rack space, more than an average server rack can accommodate 1.
The proposed 800V HVDC system offers several benefits over the current 54V DC system: it frees up rack space by moving power conversion out of the cabinet, eliminates several AC-to-DC and DC-to-DC conversion stages, lowers system current for the same power load (potentially delivering up to 85% more wattage over the same conductors), cuts copper requirements by roughly 45%, and avoids AC-specific losses such as skin effect and reactive power 1.
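As a rough illustration of the copper argument, the sketch below estimates the relative busbar cross-section a 1 MW rack would need at 54 V versus 800 V; the current-density constant and the busbar_area_mm2 helper are assumptions made for illustration, not NVIDIA figures, and the article's 45% number also reflects design details not modeled here.

```python
# Illustrative sketch only: relative conductor cross-section at a fixed,
# assumed current density, to show why copper requirements shrink at 800 V.

RACK_POWER_W = 1_000_000          # 1 MW rack, per the article
CURRENT_DENSITY_A_PER_MM2 = 2.0   # assumed busbar current density (illustrative)

def busbar_area_mm2(bus_voltage_v: float) -> float:
    """Cross-section needed to carry the rack current at the assumed density."""
    current_a = RACK_POWER_W / bus_voltage_v   # I = P / V
    return current_a / CURRENT_DENSITY_A_PER_MM2

area_54v = busbar_area_mm2(54)
area_800v = busbar_area_mm2(800)
print(f"54 V busbar area : {area_54v:,.0f} mm^2")         # ~9,259 mm^2
print(f"800 V busbar area: {area_800v:,.0f} mm^2")        # ~625 mm^2
print(f"area ratio       : {area_54v / area_800v:.1f}x")  # ~14.8x

# Copper mass for a run of fixed length scales with cross-section, which is
# the mechanism behind the copper savings NVIDIA and TI describe; the exact
# percentages depend on their own conductor and topology choices.
```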
NVIDIA is not working alone on this ambitious project. The company has partnered with semiconductor giants Infineon, Texas Instruments, and Navitas to develop the 800V HVDC system 1 3. These collaborations are crucial for addressing the complex design challenges and achieving the high power densities required for next-generation AI systems.
The implementation of this new power distribution system is expected to occur in phases: power supply units will first sit beside the rack in a "sidecar" configuration, then gradually be integrated into a centralized power delivery system serving the entire AI data center, with deployment planned by 2027 1 2.
Jeffrey Morroni, director of power management R&D at Texas Instruments' Kilby Labs, calls this a "paradigm shift," emphasizing how AI data centers are pushing power limits to unprecedented levels 3.
While this new architecture represents a significant advancement, it also presents challenges: the design is far more complex than today's 54V systems, suppliers need product lineups spanning from grid infrastructure to PSUs, battery backup units, and board-level voltage converters, and full deployment, including the move from sidecar power to fully centralized architectures, is expected to take many years 2.
As AI continues to drive technological advancements, NVIDIA's 800V HVDC architecture stands poised to play a crucial role in shaping the future of data center power distribution, enabling the next generation of AI computing at unprecedented scales.