Curated by THEOUTPOST
On Mon, 22 Jul, 4:03 PM UTC
10 Sources
[1]
Musk promises "world's most powerful AI" later this year | Digital Trends
Tesla CEO and Twitter/X owner Elon Musk announced Monday that his AI startup, xAI, has officially begun training on its Memphis supercomputer, which he describes as "the most powerful AI training cluster in the world." Once it is fully operational, Musk plans to use it to build the "world's most powerful AI by every metric" by December of this year, which presumably will be Grok 3.

"Nice work by @xAI team, @X team, @Nvidia & supporting companies getting Memphis Supercluster training started at ~4:20am local time. With 100k liquid-cooled H100s on a single RDMA fabric, it's the most powerful AI training cluster in the world!" — Elon Musk (@elonmusk) July 22, 2024

xAI's "Gigafactory of Compute," where the supercomputer is housed, is located in a former Electrolux production facility in Memphis, Tennessee, and was announced just last month. Per Musk, the training cluster will utilize 100,000 of Nvidia's H100 GPUs, which are based on the Hopper microarchitecture, in a network roughly four times larger than the current state-of-the-art clusters. Those include the 60,000-Intel-GPU Aurora at Argonne National Laboratory, the roughly 38,000-AMD-GPU Frontier at Oak Ridge, and Microsoft's Eagle, which runs 14,400 Nvidia H100 GPUs.

Opening this training facility constitutes the largest capital investment by a new-to-market company in Memphis' history, according to Greater Memphis Chamber President and CEO Ted Townsend. The supercomputer will be used "to fuel and fund the AI space for all of his [Musk's] companies first, obviously with Tesla and SpaceX," he said. "If you can imagine the computational power necessary to place humans on the surface of Mars, that is going to happen here in Memphis."

However, despite the multibillion-dollar investment by xAI, the facility is only expected to generate a few hundred local jobs. What's more, the "[Tennessee Valley Authority] does not have a contract in place with xAI," per a report from WREG. They "are working with xAI and our partners at [Memphis Light, Gas and Water] on the details of the proposal and electricity demand needs." The TVA also pointed out that any project over 100 megawatts (MW) needs its approval to connect to the state's power grid. Musk's facility could draw up to 150MW during peak usage, MLGW President Doug McGowen estimates.
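For a quick sense of scale, the GPU counts above can be compared directly in a few lines of Python. This is only an accelerator-count comparison using the figures reported here; it says nothing about actual training throughput, since the machines use different GPU vendors and generations.

```python
# GPU counts as reported above. The machines use different vendors and
# generations, so this compares raw accelerator counts, not performance.
MEMPHIS_GPUS = 100_000  # Nvidia H100

rivals = {
    "Aurora (Argonne, Intel GPUs)": 60_000,
    "Frontier (Oak Ridge, AMD GPUs)": 38_000,
    "Eagle (Microsoft, Nvidia H100)": 14_400,
}

for name, count in rivals.items():
    print(f"Memphis vs {name}: {MEMPHIS_GPUS / count:.1f}x the GPU count")

# Memphis vs Aurora (Argonne, Intel GPUs): 1.7x the GPU count
# Memphis vs Frontier (Oak Ridge, AMD GPUs): 2.6x the GPU count
# Memphis vs Eagle (Microsoft, Nvidia H100): 6.9x the GPU count
```

By GPU count alone, Memphis is under twice the size of Aurora but roughly seven times the size of Eagle, the only other all-H100 system listed, so the "roughly four times larger" figure depends heavily on the comparison point.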
[2]
Elon Musk's xAI Powers Up 100K Nvidia GPUs to Train Grok
Billionaire entrepreneur Elon Musk's startup xAI has begun training its next AI model at the company's new supercomputing facility in Memphis, Tennessee, the executive confirmed early Monday morning. "Nice work by xAI team, X team, Nvidia & supporting companies getting Memphis Supercluster training started at ~4:20 a.m. local time," Musk wrote on X, formerly Twitter, which he also owns. "With 100k liquid-cooled H100s on a single RDMA fabric, it's the most powerful AI training cluster in the world!"

Nvidia's H100 GPUs are designed for training AI models, which require tons of energy and computing power to operate. Now, Musk has a massive computer cluster of over 100,000 GPUs to train the next version of xAI's quirky chatbot, Grok.

Last month, the economic development group Greater Memphis Chamber confirmed that xAI's computing facility, dubbed the "Gigafactory of Compute," was in the works. The organization explained that the facility would take over a former manufacturing facility. Back in May, xAI said it had secured $6 billion in funding to pursue AI development. xAI currently has six job listings for its Memphis supercomputing site for positions such as fiber foreman, network engineer, and project manager, to name a few.

While the Greater Memphis Chamber has praised xAI's decision to open up shop in the area, other locals have expressed concerns about the facility's energy and water consumption. The Memphis Community Against Pollution group, along with two other environmental groups, warned that the computing facility creates a significant "energy burden." "xAI is also expected to need at least one million gallons of water per day for its cooling towers," the groups said in an open letter last month. "We encourage xAI to support investment in a City of Memphis wastewater reuse system to reduce strain on our water supply." The CEO of Memphis Light, Gas, and Water estimated xAI's Memphis facility may draw up to 150 megawatts of electricity, roughly the amount needed to power 100,000 homes. "People are afraid. They're afraid of what's possibly going to happen with the water and they are afraid about the energy supply," Memphis City Council member Pearl Walker said last week.

While certainly massive, xAI's Memphis site isn't necessarily the largest computing facility in the world. Tech giants like Microsoft, Google, and Meta are also using data centers to train and operate their AI models, with Meta CEO Mark Zuckerberg vowing to acquire 350,000 Nvidia H100s this year.

Musk has previously said that xAI plans to release Grok 2 next month, but it's unclear whether or how it might use this supercomputing cluster. Grok 3, however, will be training on the 100,000 H100s in Memphis, and is slated for release by the end of this year.
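The "100,000 homes" equivalence quoted by both the utility and local groups is easy to sanity-check. A minimal sketch, where the average-household figure is a commonly cited approximation and not a number from the article:

```python
# Sanity-check of the "150 MW ≈ 100,000 homes" comparison quoted above.
# Assumption (not from the article): an average US household draws roughly
# 1.2 kW on average (~10,500 kWh per year).
FACILITY_MW = 150
AVG_HOME_KW = 1.2  # assumed average household draw

homes = FACILITY_MW * 1_000 / AVG_HOME_KW
print(f"{FACILITY_MW} MW ≈ {homes:,.0f} average homes")  # ≈ 125,000 homes

# Inverting the article's own figure instead:
implied_kw_per_home = FACILITY_MW * 1_000 / 100_000
print(f"Implied average draw: {implied_kw_per_home} kW per home")  # 1.5 kW
```

Either way, the comparison holds for average household consumption rather than peak demand.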
[3]
Elon Musk announces 'most powerful' AI training cluster in the world
The multi-company leader announced today on X that xAI -- which offers large language models known as Grok and the chatbot of the same name through X to paid subscribers -- has begun training on the "most powerful AI training cluster in the world," the so-called Memphis Supercluster in Memphis, Tennessee. According to local news outlet WREG, the supercluster is located in the southwestern part of the city and "will be the largest capital investment by a new-to-market company in the city's history." Yet xAI does not yet have a contract in place with local utility Tennessee Valley Authority, which requires one before it will provide electricity to projects in excess of 100 megawatts.

Chock full of Nvidia H100s

Notwithstanding, Musk further detailed that the cluster consists of 100,000 liquid-cooled H100 graphics processing units (GPUs), the chips offered by Nvidia starting last year that are in high demand among AI model providers, including Musk's rivals (and former friends) at OpenAI. Musk also noted that the cluster is operating on a single RDMA fabric, or Remote Direct Memory Access fabric, which Cisco helpfully notes is a way to provide more efficient, lower-latency data transfer between compute nodes without burdening the central processing unit (CPU).

xAI aims to offer the 'most powerful AI by every metric' as of Dec. 2024

Obviously, xAI aims to train its own LLMs on the supercluster. But more than that, Musk posted in a reply that the company is aiming to train "the world's most powerful AI by every metric" and to do so "by December this year." He also posted that the Memphis Supercluster would provide a "significant advantage" to this end.

Not holding my breath on the timing

For all of his many ambitions and successes, Musk is notorious for publicly putting forth and then missing deadlines on many projects such as full self-driving automobiles, robotaxis, and taking people to Mars, so I won't be holding my breath for the December 2024 reveal of the new Grok LLM. But it would be a surprise, and a big boost to xAI's efforts, if it did come out in that time frame. Especially with OpenAI, Anthropic, Google, Microsoft, and Meta all pursuing more powerful and affordable LLMs and SLMs, xAI will need a new and useful model if it aims to remain competitive in the ongoing gen AI race for customers, users, and attention. Indeed, OpenAI backer Microsoft is itself reportedly working with OpenAI CEO Sam Altman on a $100 billion AI training supercomputer codenamed Stargate, according to The Information. Depending on how that develops, xAI's Memphis Supercluster may not be the most powerful in the world for long.
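For context on why a single low-latency RDMA fabric matters at this scale, consider the gradient synchronization step of data-parallel training, which such fabrics exist to accelerate. The sketch below is illustrative only; the model size and per-GPU link speed are assumptions, not anything xAI has disclosed:

```python
# Rough communication cost of one ring all-reduce of gradients, the step an
# RDMA fabric accelerates by moving data NIC-to-NIC without CPU involvement.
# All figures below are illustrative assumptions, not xAI disclosures.
PARAMS = 300e9          # assumed model size: 300B parameters
BYTES_PER_GRAD = 2      # bf16 gradients
N_GPUS = 100_000
LINK_GBPS = 400         # assumed per-GPU network bandwidth (400 Gb/s NIC)

grad_bytes = PARAMS * BYTES_PER_GRAD
# A ring all-reduce moves ~2*(N-1)/N of the buffer through each link.
traffic_per_gpu = 2 * (N_GPUS - 1) / N_GPUS * grad_bytes
seconds = traffic_per_gpu / (LINK_GBPS / 8 * 1e9)
print(f"~{traffic_per_gpu / 1e9:.0f} GB per GPU per sync, "
      f"~{seconds:.0f} s at {LINK_GBPS} Gb/s")
```

Real systems overlap this communication with computation and use hierarchical reduction schemes, but the arithmetic shows why fabric bandwidth and latency, precisely what RDMA improves by bypassing the CPU, become the bottleneck at 100,000 GPUs.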
[4]
Elon Musk Says xAI Is Training 'the Most Powerful A.I. in the World' in Memphis
xAI's progress at its new Memphis data center is "the fastest that anyone has ever gotten a supercomputer to train," Musk said. Elon Musk's startup xAI has already begun training A.I. models at its new supercomputing center in Memphis, Tenn., according to the billionaire. Plans for the facility, which will use thousands of Nvidia (NVDA) graphics processing units (GPUs) to train new versions of xAI's chatbot Grok, a competitor to OpenAI's ChatGPT, were initially announced in June.

"Nice work by the xAI team, X team, Nvidia and supporting companies getting Memphis Supercluster training started at 4:20 a.m. local time," wrote Musk yesterday (July 22) in an X post. The Memphis supercomputing center is a key aspect of xAI's plans to gain on other generative A.I. companies, which are also pouring billions into developing data centers across the country to boost the development and training of their A.I. models. xAI was able to get training underway at its center 19 days after beginning installation, according to Musk, who claimed this is "the fastest that anyone has ever gotten a supercomputer to train."

The municipal utility Memphis Light, Gas and Water (MLGW) told Observer that xAI's training in Memphis is currently taking place in an existing substation with utility connections. While the building can accommodate 8 megawatts of power for now, MLGW will upgrade the substation to expand its capacity to 50 megawatts by August 1. Musk's company also plans to construct another substation by 2025 with 150 megawatts of capacity, which local environmental groups say is enough energy to power 100,000 homes. xAI's expansion plans will require approval from the Tennessee Valley Authority (TVA). "As of now, we don't have an agreement in place with xAI," TVA told Observer in a statement, adding that it is "working with our partners at MLGW on details of the proposal and electricity needs."

xAI launched in 2023 and raised $6 billion in a venture funding round earlier this year, which valued the company at $24 billion. However, compared to rivals like OpenAI and Microsoft (MSFT), the startup is still several years behind in the A.I. race. "We have a lot of catching up to do relative to companies that have been around for five or 10 or 20 years," said Musk yesterday during an interview with the Canadian author Jordan Peterson.

Grok is available to subscribers of X Premium and X Premium+. Its successor, Grok 2, has already been trained on 15,000 Nvidia H100 GPUs and will hopefully be released in August as a model that is "on par or close to" OpenAI's newest release, GPT-4, Musk told Peterson. Grok 3, meanwhile, is being trained on 100,000 Nvidia H100 GPUs at xAI's Memphis supercomputing center over the next three to four months, Musk said, with plans to release the model in December. At that time, "Grok 3 should be the most powerful A.I. in the world," he claimed.

xAI's team is growing rapidly

As xAI gears up for a rapid timeline of A.I. model releases, it is also aggressively expanding.
The company is "definitely looking to increase our human talent advantage," wrote Musk, who hired much of his founding team from OpenAI, Google (GOOGL) and DeepMind, in a post on X. The company is hiring for 29 open roles as of today (July 23). Nine of these are based out of Memphis and primarily relate to security, management and troubleshooting positions. The two Memphis-based roles displaying pay ranges consist of security lead jobs and offer salaries of $110,000 to $140,000 and $100,000 to $150,000. xAI, which currently has 97 employees, is also hiring across the Bay Area and London for positions that include A.I. engineers and researchers, product designers and A.I. coding tutors.
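The substation figures reported above line up with what 100,000 H100s would plausibly draw. A rough sketch, where the per-GPU wattage comes from Nvidia's published H100 SXM specifications and the overhead factors are typical data center assumptions rather than xAI disclosures:

```python
# Rough estimate of the cluster's electrical load versus the substation
# capacities reported above (8 MW now, 50 MW by August, 150 MW planned).
# Assumptions, not xAI figures: ~700 W per H100 SXM at full load, plus
# server/network overhead and cooling.
N_GPUS = 100_000
GPU_WATTS = 700          # Nvidia's listed TDP for the H100 SXM
OVERHEAD = 1.3           # CPUs, NICs, storage per server (assumed)
PUE = 1.4                # cooling and facility overhead (assumed)

it_load_mw = N_GPUS * GPU_WATTS * OVERHEAD / 1e6
total_mw = it_load_mw * PUE
print(f"IT load ≈ {it_load_mw:.0f} MW, facility total ≈ {total_mw:.0f} MW")
# IT load ≈ 91 MW, facility total ≈ 127 MW
```

On those assumptions, the full cluster could not run flat-out on the interim 8 MW or even 50 MW connections, which is consistent with the planned 150 MW substation.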
[5]
Elon Musk Begins Training xAI With 100,000 Liquid-Cooled NVIDIA H100 GPUs, The Most Powerful AI Training Cluster On The Planet
X Chairman Elon Musk has announced the commencement of Grok 3 training at Memphis using current-gen NVIDIA H100 GPUs. His AI venture xAI has officially begun training on NVIDIA's most powerful data center GPUs, the H100. Musk proudly announced this on X, calling the system "the most powerful AI training cluster in the world!" In the post, he said that the supercluster comprises 100,000 liquid-cooled H100 GPUs on a single RDMA fabric, and he congratulated xAI, X, and Nvidia for starting the training at Memphis. Training started at 4:20 am Memphis local time, and in a follow-up post, Musk claimed the world's most powerful AI will be ready by December this year. As per reports, Grok 2 will be ready for release next month and Grok 3 by December.

This came around two weeks after xAI and Oracle canceled their $10 billion server deal. xAI had been renting Nvidia's AI chips from Oracle but decided to build its own server, ending the existing arrangement, which was supposed to continue for several years. The project is now aimed at building a supercomputer superior to what Oracle could offer, achieved by using one hundred thousand high-performance H100 GPUs. Each H100 GPU costs roughly $30,000; while Grok 2 used 20,000 of them, Grok 3 is being trained on five times as many.

This decision comes as a surprise since Nvidia is about to ship its newer H200 GPUs in Q3. The H200 entered mass production in Q2 and uses the same Hopper architecture with an upgraded memory configuration, which Nvidia says yields up to 45% faster response times for generative AI outputs. Following the H200, Nvidia is expected to launch its Blackwell-based B100 and B200 GPUs at the end of 2024.

It was expected that the xAI Gigafactory of Compute would be ready before the fall of 2025, but operations have evidently commenced well ahead of the original plan. According to Musk, this advanced large language model will be completely trained by the end of 2024, positioning it, he claims, as the most powerful AI the world has yet seen.
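The scale-up implied by those figures is straightforward to reproduce; the per-unit price is the article's rough estimate, and actual negotiated pricing is unknown:

```python
# Scale-up from Grok 2 to Grok 3 training, using the article's figures.
GROK2_GPUS = 20_000
GROK3_GPUS = 100_000
UNIT_PRICE = 30_000  # article's rough per-H100 price in USD

print(f"Scale-up: {GROK3_GPUS // GROK2_GPUS}x the GPUs")                # 5x
print(f"Implied hardware cost: ${GROK3_GPUS * UNIT_PRICE / 1e9:.0f}B")  # $3B
```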
[6]
Elon Musk starts training 'world's most powerful AI' - Times of India
Elon Musk's AI venture, xAI, has commenced training its large language model (LLM), Grok, on what he claims is the world's most powerful AI training cluster. Located in Memphis, Tennessee, the system is equipped with 100,000 Nvidia H100 AI chips. Musk made the announcement on X (formerly Twitter), his other company.

"Nice work by @xAI team, @X team, @Nvidia & supporting companies getting Memphis Supercluster training started at ~4:20am local time," he said in a post. "With 100k liquid-cooled H100s on a single RDMA fabric, it's the most powerful AI training cluster in the world!" Musk added. In a thread, Musk noted that the cluster will give his company's AI model "a significant advantage". "This is a significant advantage in training the world's most powerful AI by every metric by December this year," he said. Earlier in the day, he said that "Grok is being trained in Memphis".

Why this is important

The development comes about two weeks after a report said that xAI and Oracle ended talks on a potential $10 billion server deal. The report added that Musk is building his own data centre and buying AI chips for it, something Musk later confirmed via a post on X. "xAI is building the 100k (100,000) H100 system itself for [the] fastest time to completion. Aiming to begin training later this month. It will be the most powerful training cluster in the world by a large margin," he added. "The reason we decided to do the 100k H100 and next major system internally was that our fundamental competitiveness depends on being faster than any other AI company. This is the only way to catch up," Musk highlighted.

According to Musk, xAI will release Grok 2 in August, and Grok 3 will be available in December. "Grok 2 is going through finetuning and bug fixes. Probably ready to release next month," Musk added.
[7]
Elon Musk fires up 'the most powerful AI training cluster in the world' -- uses 100,000 Nvidia H100 GPUs on a single fabric
Tech baron Elon Musk has taken to Twitter/X to boast of starting up "the most powerful AI training cluster in the world," which he will use to create the self-professed "world's most powerful AI by every metric." Today, xAI's Memphis Supercluster began AI training using 100,000 liquid-cooled Nvidia H100 GPUs connected with a single RDMA (remote direct memory access) fabric. Whether Musk personally flicked the switch to start up the supercluster seems unlikely, as it commenced its gargantuan task at 4:20 a.m. CDT, but he did help out the fiber tech guy.

In May, we reported on Musk's ambition to open the Gigafactory of Compute by fall 2025. At the time, Musk hurried to begin work on the supercluster, necessitating the purchase of current-gen 'Hopper' H100 GPUs. It appeared to signal that the tech tycoon didn't have the patience to wait for H200 chips to roll out, not to mention the upcoming Blackwell-based B100 and B200 GPUs, despite the expectation that the newer Nvidia Blackwell data center GPUs would ship before the end of 2024.

So, if the Gigafactory of Compute was touted for opening by fall 2025, does today's news mean the project has come to fruition a year early? It could indeed be early, but it seems more likely that the sources talking to Reuters and The Information earlier this year misspoke or were misquoted regarding the timing of the project. Also, with the xAI Memphis Supercluster already up and running, the questions about why xAI did not wait for more powerful or next-gen GPUs are answered.

In a follow-up tweet, Musk explained that the new supercluster will be "training the world's most powerful AI by every metric." We assume from previous statements of intent that the power of xAI's 100,000 H100 GPU installation will now be targeted at Grok 3 training. According to Musk, the refined LLM should be finished with the training stage "by December this year."

To put the Memphis Supercluster's compute resources in context, going by scale alone, the new xAI machine easily outclasses anything in the most recent Top500 list in terms of GPU horsepower. The world's most powerful supercomputers, such as Frontier (37,888 AMD GPUs), Aurora (60,000 Intel GPUs), and Microsoft Eagle (14,400 Nvidia H100 GPUs), are significantly outgunned by the xAI machine.

Perhaps xAI should be congratulated on the speed at which it has got the Memphis Supercluster up and running on a meaningful task. However, some would argue AI isn't that meaningful or useful just yet. Indeed, industry leaders have warned that we must start seeing revenue generated from the billions invested in the great AI race, and AI has certainly been underwhelming on the PC side of things, between Microsoft's Recall misfire and web services that remain prone to mistakes and dangerous hallucinations.
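To put a number on that "GPU horsepower," here is a back-of-envelope peak-compute tally using Nvidia's published dense BF16 tensor throughput for the H100 SXM; sustained training throughput is invariably a fraction of this peak:

```python
# Back-of-envelope peak compute of the Memphis cluster, using Nvidia's
# published dense BF16 tensor throughput for the H100 SXM (~989 TFLOPS).
# Sustained training throughput is typically well below peak.
N_GPUS = 100_000
H100_BF16_TFLOPS = 989  # dense, per Nvidia's spec sheet

peak_eflops = N_GPUS * H100_BF16_TFLOPS / 1e6
print(f"Peak ≈ {peak_eflops:.0f} EFLOPS BF16")  # ≈ 99 EFLOPS
```

Note that Top500 rankings measure sustained FP64 Linpack rather than low-precision peak, so comparing this figure against Frontier's or Aurora's Linpack scores would be apples to oranges; counting accelerators, as the article does, is the fairer framing.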
[8]
Elon Musk's xAI Flips The Switch On 100K Nvidia H100 GPUs Worth Up To $4B In Memphis Supercluster: 'Most Powerful AI Training Cluster In The World' - NVIDIA (NASDAQ:NVDA)
xAI founder Elon Musk lauded the teams at his AI startup and X, as well as Nvidia Corp. (NASDAQ: NVDA), for successfully initiating the training of the Memphis Supercluster with 100,000 Nvidia H100 GPUs.

What Happened: On Monday, Musk took to Twitter/X to express his admiration for the teams involved in the launch of the Memphis Supercluster. The training started at approximately 4:20 a.m. local time, according to Musk. "With 100k liquid-cooled H100s on a single RDMA fabric, it's the most powerful AI training cluster in the world!" Musk said. With each H100 GPU estimated to cost between $30,000 and $40,000, this represents an investment of $3 billion to $4 billion. xAI raised $6 billion in May this year at a valuation of $24 billion, so Musk's AI startup has effectively put between 50% and 67% of that fundraising toward Nvidia H100 GPUs.

Why It Matters: This development comes on the heels of xAI's recent search for networking engineers and technicians, particularly in Memphis, a city known for its thriving tech industry and numerous fiber optic networks. Earlier this month, Musk revealed that the upcoming version of his AI chatbot, Grok 3, would be trained on a massive 100,000 Nvidia H100 chips. These Hopper-architecture chips are essential for data processing in large language models (LLMs) and are highly sought after in Silicon Valley.

Furthermore, xAI has reportedly seen an influx of former Tesla employees, particularly after a series of layoffs at the electric vehicle giant. This move has sparked controversy, with some accusing Musk of using Tesla as a talent pool for his other ventures.
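The investment math in that estimate is easy to reproduce:

```python
# Reproducing the article's estimate of the H100 outlay and its share of
# xAI's $6B raise. Per-unit prices are the article's estimates.
N_GPUS = 100_000
PRICE_RANGE = (30_000, 40_000)   # USD per H100, per the article
RAISED = 6e9                     # xAI's May 2024 funding round

for price in PRICE_RANGE:
    cost = N_GPUS * price
    print(f"At ${price:,}/GPU: ${cost / 1e9:.0f}B "
          f"({cost / RAISED:.0%} of the $6B raise)")
# At $30,000/GPU: $3B (50% of the $6B raise)
# At $40,000/GPU: $4B (67% of the $6B raise)
```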
[9]
Elon Musk Says 'Grok 3 Should Be The Most Powerful AI' Upon Its Release, Gives Timeline For xAI Models Aiming To Beat Gemini, ChatGPT - Tesla (NASDAQ:TSLA)
Tesla and SpaceX CEO Elon Musk has announced the timeline for the chatbot models his AI company xAI plans to release later this year.

What Happened: On Monday, while speaking with Dr. Jordan B. Peterson at the Gigafactory in Texas, Musk said that Grok 2 has "finished training" and is now in the bug-fixing stage, with an official release planned for next month. "It should be on par, or close to GPT-4," he said, comparing Grok 2's capabilities with OpenAI's model. The tech billionaire co-founded OpenAI in 2015 and left the startup in 2018 over differences.

Musk went on to say that Grok 3 is currently being trained at the Memphis data center and is expected to be released by December. "What we're doing in the Memphis Data Center is we're actually training Grok 3. So that'll probably finish training in about three or four months and then there'll be some fine-tuning and bug fixing and whatnot. And we're hoping to release Grok 3 by December," he stated, adding, "And Grok 3 should be the most powerful AI in the world at that point."

Why It Matters: xAI has made significant strides in enhancing Grok. In May, it was reported that xAI plans to integrate multimodal inputs into Grok, allowing users to upload images and receive text-based responses. Earlier this month, Musk disclosed that Grok 3 would be trained on a massive 100,000 Nvidia H100 chips, which are crucial for handling data processing in large language models (LLMs) and highly sought after in Silicon Valley.

Musk has also been using Grok to poke fun at his adversaries and competitors; he previously used the platform to critique Google's AI-powered search feature when it was receiving backlash from users over its poor performance.
[10]
Nvidia Stock Soars As Elon Musk Spotlights Latest AI Milestone
Nvidia stock gained over 2% in the pre-market session today after the announcement. On Monday, July 22, Nvidia's stock saw a significant boost in pre-market trading, a rebound following a sharp decline last week in which the stock fell over 10%. The turnaround was driven by a notable endorsement from Elon Musk, who highlighted a major achievement involving Nvidia's technology.

In a post on X, Musk celebrated the initiation of training for xAI's Memphis Supercluster. "Nice work by xAI team, X team, Nvidia & supporting companies getting Memphis Supercluster training started at ~4:20am local time," Musk wrote. He further noted, "With 100k liquid-cooled H100s on a single RDMA fabric, it's the most powerful AI training cluster in the world!"

The Memphis Supercluster, a massive computing facility equipped with 100,000 of Nvidia's H100 GPUs, is set to train the next version of xAI's chatbot, Grok. These GPUs are designed for training artificial intelligence models, a process that demands immense energy and computing power.

In June, the Greater Memphis Chamber confirmed the development of xAI's computing facility, dubbed the "Gigafactory of Compute." The organization revealed that the facility would repurpose a former manufacturing site and emphasized the economic benefits for the area. xAI, backed by $6 billion in funding, has already begun hiring for the site, listing positions such as fiber foreman, network engineer, and project manager.

While the economic prospects are promising, the project has faced scrutiny from local environmental groups. The Memphis Community Against Pollution group, along with others, has raised concerns about the facility's substantial energy and water consumption. "xAI is also expected to need at least one million gallons of water per day for its cooling towers," the groups noted in a letter, urging xAI to invest in a wastewater reuse system to mitigate the impact on Memphis's water supply.

Nvidia stock rose 2.26% to $120.58 in today's pre-market session, a surge that aligns with a broader positive trend in the market. FactSet data shows that from August 2020 to July 2024, NVDA stock has grown by 956%, though it lags behind MicroStrategy's 1,339% surge over the same period. Other major tech stocks, including Elon Musk's Tesla (153%), Google (140%), and Microsoft (110%), have also seen substantial gains, while Amazon stock has seen a more modest increase of 16%.

MicroStrategy's stock has particularly stood out in recent weeks, rising 15% on Monday to close at $1,611, driven by a significant rally in Bitcoin's price to $65,000. Nvidia stock, by contrast, had dropped 9.67% over the past week to $117.93 before today's rebound.
Elon Musk's AI company, xAI, has fired up its Memphis Supercluster, a powerful new supercomputer built to train its next-generation AI model, Grok 3. The system boasts an impressive array of 100,000 Nvidia H100 GPUs, positioning it as one of the most potent AI training clusters globally.
Elon Musk's artificial intelligence company, xAI, has made a significant stride in the AI race with its new supercomputer, dubbed the Memphis Supercluster. The powerful system is designed to train the company's next-generation AI model, Grok 3, marking a notable advancement in AI computing capabilities [1].
At the heart of the Memphis Supercluster lies an impressive array of 100,000 liquid-cooled Nvidia H100 GPUs on a single RDMA fabric, making it one of the most potent AI training clusters in existence. This massive computational power is expected to significantly accelerate the development and training of Grok 3, xAI's next large language model [2].
The introduction of the Memphis Supercluster positions xAI as a formidable competitor in the AI landscape, challenging tech giants like Google, Microsoft, and OpenAI. Musk claims it is now the most powerful AI training cluster in the world, surpassing systems used by other major players in the field [3].
Grok, xAI's conversational AI, has been evolving rapidly. The upcoming Grok 2 was reportedly trained on some 15,000 to 20,000 Nvidia H100 GPUs and is slated for an August release. With the Supercluster's far larger capacity, Grok 3 is poised to be a significant leap forward in AI performance and capabilities [4].
The sheer scale of the Memphis Supercluster highlights the increasing computational demands of advanced AI models. It also underscores the fierce competition in the AI industry, where computational power can be a key differentiator. The cluster's capacity could potentially lead to breakthroughs in AI capabilities, including more sophisticated language understanding and generation [5].
While the Memphis Supercluster is undoubtedly impressive, it also raises questions about energy consumption and environmental impact. Operating such a massive GPU cluster requires substantial power and cooling; local environmental groups have already flagged the site's projected draw of up to 150 megawatts of electricity and at least one million gallons of cooling water per day.
As xAI continues to push the boundaries of AI computing with the Memphis Supercluster, the tech world eagerly anticipates the capabilities of Grok 3. The advancements made possible by this supercomputer could potentially reshape the landscape of AI applications across various industries, from natural language processing to complex problem-solving tasks.