Curated by THEOUTPOST
On Tue, 15 Oct, 12:01 AM UTC
6 Sources
[1]
NVIDIA CEO Jensen Huang calls Tesla and SpaceX boss Elon Musk 'superhuman'
Elon Musk's new xAI supercomputer, codenamed Colossus, was built using a cluster of 100,000 NVIDIA H100 AI GPUs. During an interview on the Bg2 Pod, NVIDIA CEO Jensen Huang said that what Elon and xAI have done is nothing short of extraordinary: "As far as I know, there's only one person in the world who could do that; Elon is singular in his understanding of engineering and construction and large systems and marshalling resources; it's just unbelievable". The 19 days did not cover the whole project: Money Control reports the entire build spanned 122 days, with Musk noting back in June that hardware installation had begun, and the 19-day figure referring to the stretch from hardware installation to Colossus's first training run, a process Jensen said usually requires 4 entire years. Huang said that he was impressed with xAI's engineering, networking, infrastructure, and software teams, calling them all "extraordinary". The NVIDIA CEO said that with its 100,000 AI GPUs, xAI's new Colossus supercomputer has become "easily the fastest supercomputer on the planet as one cluster". Jensen added that it would normally take far longer to get a new supercomputer up and running: "a supercomputer that you would build would take normally 3 years to plan and then they deliver the equipment and it takes 1 year to get it all working".
[2]
Elon Musk's xAI built a 100,000-GPU supercluster in just 19 days - normally takes years
Crazy: Few would dispute that Elon Musk is driven. Despite his various detractors, the entrepreneur has built Tesla and SpaceX into major competitors, if not leaders, in their respective industries. That success has come alongside various side endeavors like Neuralink and the Twitter-to-X transition. Now, his xAI team has gotten an AI supercluster up and running in just a few weeks. Elon Musk and his xAI team have seemingly done the impossible. The company built a supercluster of 100,000 Nvidia H200 GPUs in only 19 days, a feat Nvidia CEO Jensen Huang called "superhuman." Huang shared the incredible story in an interview with the Tesla Owners Silicon Valley group on X. According to Huang, constructing a supercomputer of this size would take most crews around four years: three years of planning and one year of shipping, installation, and operational setup. However, in less than three weeks, Musk and his team managed the entire process, from concept to full functionality. The xAI supercluster even completed its first AI training run shortly after the cluster was powered up. Huang was almost at a loss for words, struggling to build a head of steam before describing it. "First of all, [stammers] some [stammers] 19 days is incredible ... Do you know how many days 19 days is? It's just a couple of weeks. And the mountain of technology, if you were ever to see it, is unbelievable ... What they achieved is singular. Never been done before. A supercomputer [of comparable size] that you would build, would take, normally, three years to plan - and then they deliver the equipment, and it takes one year to get it all working." Huang conveyed his respect for Musk's engineering expertise, noting the challenges of integrating Nvidia's cutting-edge hardware. "The number of wires that goes into one node ... the back of a computer is all wires," Huang remarked, noting that networking Nvidia equipment involves a different level of complexity than traditional hyperscale data centers.
The project required installing the GPUs as well as building, and securing permits for, an entirely new "X factory," equipped with advanced cooling systems and power infrastructure to support seamless operation of as many as 200,000 GPUs. The coordination between Musk's engineers and Nvidia's team was another monumental feat, ensuring that hardware and infrastructure were delivered, installed, and synchronized flawlessly. "This level of integration has never been done before, and it may not be done again anytime soon," Huang remarked. The supercluster represents a massive leap in AI infrastructure, positioning xAI as a significant competitor in AI research and development. With the computational power now available to it, Musk's teams could significantly accelerate projects involving advanced neural networks, deep learning, and natural language processing.
[3]
NVIDIA CEO Jensen Huang talks about Elon Musk building world's largest supercomputer
NVIDIA CEO Jensen Huang has sat down for a long-form conversation in which he discussed NVIDIA's dominance in the AI market and how AI will continue to be adopted into our daily lives. The conversation begins with Huang explaining that AI models are going to become more sophisticated and will eventually evolve into a personal assistant everyone will have access to in their pocket. Huang doesn't give a timeframe for when that will happen but does say it will arrive in some form or another "soon". Given the context of the conversation, it can be assumed that this AI's level of sophistication would be far superior to anything currently available that claims to be an AI personal assistant; an example would be the coming Siri overhaul with Apple Intelligence. Huang, CEO of NVIDIA, the company powering the push into AI technologies through its impressive GPUs, touched on the recent purchase of 100,000 H100 GPUs by xAI, Elon Musk's AI company. Huang explains there is "only one person in the world" who could build the world's most powerful supercomputer in just nineteen days. The NVIDIA CEO gave more clarity on how short a time it took Musk and his team to stand up the supercomputer, saying that from the moment the concept was approved, through construction of the factory, shipping of NVIDIA's hardware, and software tuning, to the first training run on the finished cluster, just nineteen days elapsed.
[4]
Elon Musk took 19 days to set up 100,000 Nvidia H200 GPUs; process normally takes 4 years
Elon Musk and the team behind xAI have achieved an engineering marvel, setting up a supercluster of 100,000 H200 GPUs in just 19 days. Nvidia CEO Jensen Huang told the story of Elon Musk's incredible installation prowess to members of the Tesla Owners Silicon Valley group on X. Huang describes Musk's 19-day feat with awe and respect, calling the effort "superhuman". The team at xAI purportedly went from the "concept" phase to a fully operational installation of Nvidia's "gear" in less than three weeks, and that includes completing xAI's first AI training run on the newly built supercluster. From start to finish, the process involved building the massive X factory where the GPUs would reside and equipping the entire facility with liquid cooling and power to make all 200,000 GPUs operational. That's not to mention all of the coordination between Nvidia's and Elon Musk's engineering teams to get all of the hardware and infrastructure shipped and installed precisely and in a coordinated manner. For perspective, Huang states that it takes a typical crew around four years to do what Elon Musk and his team did in 19 days: three of those years would be dedicated to planning, while the last year would be spent shipping the equipment, installing it, and getting it all working. Huang also goes into detail about how complex the networking is on Nvidia's hardware, explaining that networking Nvidia's gear isn't like networking traditional data center servers: "The number of wires that goes in one node ... the back of a computer is all wires." Elon Musk's integration of 100,000 H200 GPUs has "never been done before" (according to Jensen Huang) and probably won't be duplicated by another company, at least not for a very long time.
[5]
Nvidia CEO Jensen Huang Praises Elon Musk For Achieving Something With xAI In 19 Days That Usually Takes At Least A Year: 'Singular In His Understanding Of Engineering' - NVIDIA (NASDAQ:NVDA)
In an episode of the Bg2 Pod, Nvidia Corporation's NVDA CEO Jensen Huang shared his thoughts on a variety of subjects, including Tesla and SpaceX CEO Elon Musk's xAI. What Happened: The podcast, posted on Sunday, features a discussion between Altimeter Capital's CEO Brad Gerstner and partner Clark Tang with Huang. During the conversation, the Nvidia CEO was asked about xAI's achievement of constructing a large coherent supercluster in Memphis in a matter of months. "Elon is singular in this understanding of engineering and construction and large systems and marshaling resources," he said. The Nvidia CEO also praised the engineering, networking, and infrastructure teams at xAI, stating that the integration of technology and software was "incredible." "Just to put in perspective, 100,000 GPUs, that's, you know, easily the fastest supercomputer on the planet as one cluster. A supercomputer that you would build would take normally three years to plan. And then they deliver the equipment and it takes one year to get it all working," Huang stated, adding, "We're talking about 19 days." Why It Matters: In July this year, xAI initiated the training of the Memphis Supercluster with 100,000 Nvidia H100 GPUs, making it the most powerful AI training cluster in the world. Previously, it was reported that Musk and Oracle's Larry Ellison had implored Huang for additional GPUs during a dinner meeting. The discussion on the Bg2 Pod further highlights the strong relationship between Musk and Huang, which was evident when Musk praised Huang's work ethic earlier in July. Huang has also previously voiced his appreciation for Musk's efforts, especially in the area of self-driving vehicles.
[6]
'As far as I know, there's only one person in the world who could do that.' Nvidia's CEO praises Elon Musk for a 'superhuman' feat
Naturally, the fact that X has bought hundreds of millions of dollars worth of GPUs has nothing to do with such praise. No siree. I'm not a cynical person by nature. I fully understand how the trillion-dollar tech industry operates. I know that it never pays to say anything negative about a customer who is looking to spend countless dollars more. But sometimes, just sometimes, a CEO will decide to say something that just makes me go "Oh, come on! Really?" and in an interview with a technology investment firm, Nvidia's boss did just that. The statement in question can be heard in a snippet of the interview posted by X channel StockMKTNewz (via Wccftech) when Jen-Hsun Huang was asked for his thoughts on xAI's recent expansion of its Colossus supercomputer build that took just 17 days to complete. "Just building a massive factory, liquid-cooled, energized, permitted in the short time that was done...I mean that is, like, superhuman. And, as far as I know, there's only one person in the world who could do that. You know, I mean, Elon is singular in this understanding of engineering and construction and large systems, and marshalling resources. It's unbelievable," Huang said. Really? Only one person in the world? Just one? Sure, the teams involved do deserve a huge amount of admiration for putting the whole thing together and having it run its first training session in just over two weeks. That's seriously impressive. But to suggest that this only came about because of the one and only Elon Musk seems... well, to paraphrase Huang himself, it seems unbelievable. To be fair to Nvidia's CEO, it's possible that he was referring to the fact that xAI is currently the biggest purchaser of Hopper-powered AI chips and Musk is perhaps the most vocal proponent of AI at the moment, alongside OpenAI's Sam Altman. 
But I do think it's unfair to place all the credit for the expansion work solely on Musk, without mentioning all of the effort by the planners, designers, engineers, and software developers involved. And to be frank, it's borderline ridiculous to suggest that nobody else in the world could achieve such a feat. Not that I should be surprised, because if there's one thing the tech world is especially consistent at, it's CEOs being all bonhomie. Like Huang and Zuckerberg. Huang and Sutskever. Musk and Huang, again. And it's absolutely got nothing to do with the fact that an awful lot of money is being spent by such companies on Nvidia's hardware. Definitely, 100%, certifiably not. Now, I'm off to test more new chips from a well-known vendor. Wonder if its CEO will call my efforts superhuman?
Nvidia CEO Jensen Huang lauds Elon Musk and xAI for constructing a supercomputer with 100,000 GPUs in just 19 days, a feat that typically takes years to accomplish.
In a remarkable display of engineering prowess, Elon Musk's artificial intelligence company xAI has accomplished what many in the tech industry are calling a "superhuman" feat. The company successfully built and deployed a supercomputer cluster comprising 100,000 Nvidia H100 GPUs in just 19 days, a process that typically takes around four years to complete [1][2].
Nvidia CEO Jensen Huang, in an interview with the Tesla Owners Silicon Valley group, expressed his amazement at xAI's achievement. Huang stated, "What they achieved is singular. Never been done before," highlighting the extraordinary nature of the accomplishment [2]. He further elaborated on the typical timeline for such projects:
"A supercomputer [of comparable size] that you would build, would take, normally, three years to plan - and then they deliver the equipment, and it takes one year to get it all working" [2].
Codenamed "Colossus," xAI's supercomputer is now considered "easily the fastest supercomputer on the planet as one cluster," according to Huang [1]. The project involved not only installing the GPUs but also constructing an entirely new "X factory" equipped with advanced cooling systems and power infrastructure [2].
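Huang's "fastest supercomputer on the planet" claim is plausible from simple arithmetic. A minimal back-of-envelope sketch, assuming roughly 989 TFLOPS of dense BF16 per H100 SXM GPU (a vendor peak figure; real sustained training throughput is considerably lower due to utilization and networking overheads):

```python
# Rough estimate of Colossus's aggregate peak AI compute.
# Assumption: ~989 TFLOPS dense BF16 per H100 SXM GPU (vendor peak spec,
# not sustained throughput) -- illustrative only.
GPUS = 100_000
BF16_TFLOPS_PER_GPU = 989

# Convert aggregate TFLOPS to exaFLOPS (1 EFLOPS = 1,000,000 TFLOPS).
aggregate_eflops = GPUS * BF16_TFLOPS_PER_GPU / 1_000_000
print(f"~{aggregate_eflops:.0f} EFLOPS of peak BF16 compute")  # ~99 EFLOPS
```

Even allowing for the gap between peak and sustained numbers, that is two orders of magnitude beyond the roughly 1-2 exaFLOPS (FP64) of the top traditional supercomputers, which is why the "as one cluster" framing is generally accepted.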
Huang emphasized the complexity of the project, particularly in terms of networking and integration. He noted, "The number of wires that goes into one node ... the back of a computer is all wires," highlighting the intricate nature of connecting Nvidia's cutting-edge hardware [2][4].
The coordination between Musk's engineers and Nvidia's team was crucial in ensuring that hardware and infrastructure were delivered, installed, and synchronized flawlessly within the short timeframe [2].
With this new supercomputer, xAI has positioned itself as a significant competitor in AI research and development. The computational power now available to Musk's team could significantly accelerate projects involving advanced neural networks, deep learning, and natural language processing [2].
Huang praised Musk's exceptional abilities, stating, "Elon is singular in his understanding of engineering and construction and large systems and marshalling resources; it's just unbelievable" [1]. This sentiment was echoed across multiple sources, emphasizing Musk's unique capacity to drive such ambitious projects to completion [3][5].
During the conversation, Huang also touched upon the future of AI, predicting that AI models will eventually evolve into sophisticated personal assistants accessible to everyone. While he didn't provide a specific timeline, he suggested that this development would occur "soon" [3].
As the AI landscape continues to evolve rapidly, the achievement of xAI in constructing this supercomputer cluster marks a significant milestone in the field, potentially setting new standards for the speed and scale of AI infrastructure development.
© 2025 TheOutpost.AI All rights reserved