Curated by THEOUTPOST
On Fri, 1 Nov, 12:10 AM UTC
2 Sources
[1]
Elon Musk reportedly will blow $10 billion on AI this year | Digital Trends
Between Tesla and xAI, Elon Musk's artificial intelligence aspirations have cost some $10 billion in bringing training and inference compute capabilities online this year, according to an October 29 post on X (formerly Twitter) by Tesla investor Sawyer Merritt. "By the end of this year, Elon Musk's companies (Tesla & xAI) will have brought online roughly $10 billion worth of training compute capacity in 2024 alone," Merritt wrote. "Tesla already deployed and is training ahead of schedule on a 29,000 unit Nvidia H100 cluster at Giga Texas - and will have 50,000 H100 capacity by the end of October, and ~85,000 H100 equivalent capacity by December."

Tesla also revealed its Cortex AI cluster in August, which will be leveraged to train the company's Full Self-Driving system and uses 50,000 Nvidia H100 GPUs along with another 20,000 Dojo AI chips developed by Tesla itself.

xAI, for its part, began assembling its Memphis supercomputer in July at its Gigafactory of Compute, located in an old Electrolux production facility in Memphis, Tennessee. Musk claims that the Memphis is "the most powerful AI training cluster in the world," as it runs on 100,000 of Nvidia's H100 GPUs, though Musk has promised to double that capacity in short order. It came online in September and has since been tasked with building the "world's most powerful AI by every metric by December of this year" -- likely, Grok 3. The Colossus supercomputer, which xAI unveiled in September, uses just as many H100 GPUs as the Memphis and is slated to expand by another 50,000 H100 and 50,000 H200 GPUs in the coming months. xAI has not disclosed how much the Memphis cost to build, though Tom's Hardware estimates that the company has spent at least $2 billion on GPUs alone.
Split between two companies, the $10 billion figure is actually about half of what Musk claimed in April that Tesla alone would spend this year on AI compute capacity. "Tesla will spend around $10 billion this year on combined training and inference AI, the latter being primarily in car," he posted at the time. "Any company not spending at this level, and doing so efficiently, cannot compete." By that measure, Musk's AI efforts are already falling behind deep-pocketed rivals like Microsoft, OpenAI, and Google.

In July, for example, analysts estimated that OpenAI would spend around $7 billion on AI compute this year, while losing around $5 billion on other operating costs. The company then announced in early October that its latest round of investment funding totaled $6.6 billion at a $157 billion post-money valuation. "The new funding will allow us to double down on our leadership in frontier AI research, increase compute capacity, and continue building tools that help people solve hard problems," the company wrote in its announcement post.

Per a report from Reuters on Thursday, both Microsoft and Meta are spending freely to build out their respective AI compute capabilities. Microsoft is reportedly spending as much capital each quarter as it used to spend annually before 2020. The company also reported that its capital spending increased by more than 5% in the first quarter of its 2025 fiscal year, to $20 billion, and it expects to spend even more in the following quarter. Meta, for its part, has spent as much capital in every quarter of 2024 as it did annually until 2017.

As for Google, it reportedly spent $13 billion on capital expenditures in Q3 2024, a 63% increase over the same period last year. What's more, the company has poured some $38 billion into compute infrastructure since the start of the year, an 80% jump from the first three quarters of 2023. Suddenly, $10 billion between a pair of companies and a handful of projects seems almost quaint.
[2]
Elon Musk spent roughly $10 billion on AI training hardware in 2024
Tesla and xAI, Elon Musk's companies, will bring online $10 billion worth of training compute capacity by the end of this year, as observed by Sawyer Merritt, a co-founder of TwinBirch and a Tesla investor. And yet, both companies will probably still be somewhat behind the schedule Elon Musk set. Musk and his companies have recently been making a string of announcements about AI supercomputers, so the investments involved are indeed huge.

In July, xAI began AI training using the Memphis Supercluster, which is set to integrate 100,000 liquid-cooled H100 GPUs. This system requires a gargantuan amount of power, drawing at least 150 MW, as the 100,000 H100 GPUs alone account for around 70 MW. The system's total cost is unknown, though the GPUs alone would cost around $2 billion (if bought at $20,000 per unit), and AI GPUs typically account for half of the cost of the whole system.

In late August, Tesla unveiled its Cortex AI cluster, equipped with an impressive 50,000 Nvidia H100 GPUs and 20,000 of Tesla's own wafer-sized Dojo AI chips. The cluster will train Tesla's Full Self-Driving (FSD) capability, so this machine is strategically vital for the company. As for costs, we are talking about $2 billion for the H100-based machine and at least another $1 billion for the Dojo supercomputer. That billion could be an underestimate, as Dojo machines are entirely custom-designed. For example, each Dojo D1 cabinet consumes more than 200 kW (for context, each Nvidia GB200 NVL72 rack is expected to consume 120 kW) and therefore requires a fully custom cooling distribution unit (CDU) and power supply, which dramatically increases its cost.

Finally, in early September, xAI began operating its Colossus supercomputer, which already integrates 100,000 H100 GPUs and is expected to add 50,000 H100 and 50,000 H200 GPUs in the coming months. This giant AI supercomputer also costs billions.
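The per-unit figures above lend themselves to a quick back-of-envelope check. A minimal sketch in Python, using the article's own working assumptions: $20,000 per H100, GPUs accounting for about half of total system cost, and roughly 700 W per GPU (implied by 100,000 GPUs drawing ~70 MW):

```python
# Back-of-envelope estimates for the GPU clusters described above.
# All constants are the article's working assumptions, not vendor pricing.

H100_UNIT_PRICE = 20_000        # USD per H100, assumed street price
GPU_SHARE_OF_SYSTEM_COST = 0.5  # GPUs assumed to be half the total bill
H100_POWER_W = 700              # per-GPU draw implied by 70 MW / 100,000 GPUs

def cluster_cost_estimate(num_gpus: int) -> float:
    """Estimated total system cost in USD, assuming GPUs are half the bill."""
    return num_gpus * H100_UNIT_PRICE / GPU_SHARE_OF_SYSTEM_COST

def gpu_power_mw(num_gpus: int) -> float:
    """Aggregate GPU power draw in megawatts (GPUs only, excluding cooling)."""
    return num_gpus * H100_POWER_W / 1e6

# Memphis Supercluster: 100,000 H100s
memphis_gpus = 100_000
print(memphis_gpus * H100_UNIT_PRICE)       # GPUs alone: ~$2 billion
print(cluster_cost_estimate(memphis_gpus))  # whole system: ~$4 billion
print(gpu_power_mw(memphis_gpus))           # GPU draw: ~70 MW
```

These numbers line up with the article's estimates: about $2 billion in GPUs, roughly double that for the complete system, and around 70 MW of GPU power before cooling and host hardware are counted.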
xAI and Tesla have probably announced well over $10 billion of AI hardware spending this year. Of course, it will take some time before all those AI servers are installed and come online, so we can only guess at the total cost of the functioning AI hardware the two companies installed in 2024. The most ironic thing about this enormous spending is that it still seems to fall somewhat behind the ambitious plan that Elon Musk outlined this April, when he said that Tesla alone would spend $10 billion on AI hardware this year. "Tesla will spend around $10 billion this year on combined training and inference AI, the latter being primarily in car," Musk wrote in an X post. "Any company not spending at this level, and doing so efficiently, cannot compete."

While Tesla's Cortex AI cluster is probably a costly endeavor that will likely get more expensive over time should the company decide to install more Dojo or more Nvidia-based machines, we doubt that it costs significantly more than, say, $5 billion. As for AI inference hardware in cars, we cannot imagine that the AI compute hardware in the vehicles Tesla is set to produce this year costs $5 billion.
Elon Musk's Tesla and xAI are set to bring online $10 billion worth of AI training compute capacity by the end of 2024, showcasing massive investments in supercomputers and GPU clusters to advance AI development.
Elon Musk's companies, Tesla and xAI, are set to bring online approximately $10 billion worth of AI training compute capacity by the end of 2024, according to Tesla investor Sawyer Merritt [1]. This substantial investment underscores the growing arms race in artificial intelligence development among tech giants.
Tesla has made significant strides in expanding its AI capabilities:
- A 29,000-unit Nvidia H100 cluster at Giga Texas, already training ahead of schedule, with 50,000 H100 capacity expected by the end of October and ~85,000 H100-equivalent capacity by December
- The Cortex AI cluster, unveiled in August, pairing 50,000 Nvidia H100 GPUs with 20,000 of Tesla's own Dojo AI chips to train Full Self-Driving
xAI, Musk's dedicated AI company, has also made substantial investments:
- The Memphis supercomputer, assembled starting in July at the "Gigafactory of Compute" in Memphis, Tennessee, running 100,000 Nvidia H100 GPUs
- The Colossus supercomputer, operating since early September with 100,000 H100 GPUs and slated to add another 50,000 H100 and 50,000 H200 GPUs
While the exact cost breakdown is not public, estimates suggest:
- Roughly $2 billion for the Memphis GPUs alone (at about $20,000 per H100), with GPUs typically accounting for about half of total system cost
- Around $2 billion for Cortex's H100-based machine, plus at least $1 billion for its custom-designed Dojo hardware
Musk's $10 billion investment, while substantial, faces stiff competition from other tech giants:
- Microsoft now spends as much capital each quarter as it used to spend annually before 2020
- Google spent $13 billion on capital expenditures in Q3 2024 alone, and some $38 billion on compute infrastructure since the start of the year
- OpenAI was estimated in July to be spending around $7 billion on AI compute this year
The massive investments in AI computing infrastructure by Musk's companies and their competitors highlight the intensifying race for AI dominance. As these powerful systems come online, we can expect accelerated developments in various AI applications, from autonomous driving to large language models and beyond.