4 Sources
[1]
An AI Model Has Been Trained in Space Using an Orbiting Nvidia GPU
A startup says it has successfully trained an AI model in space using an Nvidia GPU that was launched into Earth's orbit last month. Starcloud flew up the Nvidia H100 enterprise GPU on a test satellite on Nov. 2. The company now reports using the Nvidia chip to train a lightweight, open-source AI model called NanoGPT from OpenAI founding member Andrej Karpathy. In addition, Starcloud has been "running inference" on the AI model, meaning it's been used to generate answers or output.

The NanoGPT implementation was trained on the complete works of Shakespeare, according to Starcloud Chief Engineer Adi Oltean. The startup has also been running a preloaded open-source AI model from Google called Gemma on the Nvidia GPU, effectively creating a chatbot in space. CNBC reports the test satellite, Starcloud-1, sent back a message reading: "Greetings, Earthlings! Or, as I prefer to think of you -- a fascinating collection of blue and green."

The achievement is an early step to place data centers in Earth's orbit, which might kick off a new space race. Major players including SpaceX, Google, and Amazon founder Jeff Bezos have highlighted the potential benefits, such as near-limitless solar energy. On Earth, AI data centers are sparking concerns about the environmental toll and strain on the electric grid. "This is a significant first step toward moving almost all computing off Earth to reduce the burden on our energy supplies and take advantage of abundant solar energy in space," says Starcloud's Oltean.

That said, Starcloud-1 is just one satellite. It's about the size of a small refrigerator and carries a single H100 GPU. In contrast, the AI data centers on Earth are being built to house tens of thousands and even millions of GPUs. The resulting high costs and technical hurdles are why the concept is facing skepticism. One major challenge lies in cooling the GPUs, since the vacuum of space offers no air to dissipate heat.
However, Starcloud is eyeing an air-based or liquid-based cooling solution for its satellites. The startup also envisions "the largest radiators deployed in space" to further handle the heat. The company is preparing a second satellite, Starcloud-2, that'll feature even more GPUs. The goal is to launch it sometime next year, and even offer access to customers.
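The training run described above is character-level language modeling: Karpathy's nanoGPT is a small transformer trained on the raw text of Shakespeare. As a rough, dependency-free illustration of the idea — not Starcloud's code and not nanoGPT itself, which learns far richer statistics — even a bigram character model can be "trained" by counting transitions and then "run for inference" by sampling:

```python
import random
from collections import defaultdict

# Tiny stand-in corpus; Starcloud's run used the complete works of
# Shakespeare and a small transformer -- this is only a bigram sketch.
corpus = (
    "To be, or not to be, that is the question: "
    "Whether 'tis nobler in the mind to suffer "
    "The slings and arrows of outrageous fortune."
)

# "Training": count character-to-character transitions.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def generate(prompt, n, seed=0):
    """'Inference': extend the prompt one character at a time by sampling
    the next character in proportion to its observed transition counts."""
    rng = random.Random(seed)
    out = list(prompt)
    for _ in range(n):
        nxt = counts.get(out[-1])
        if not nxt:                      # unseen context: restart anywhere
            out.append(rng.choice(corpus))
            continue
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights, k=1)[0])
    return "".join(out)

print(generate("T", 60))  # gibberish, but with Shakespeare-like statistics
```

The gap between this counting trick and nanoGPT is exactly what the H100 is for: the transformer's gradient-descent training is the compute-heavy step that had to survive in orbit.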
[2]
Nvidia Chip on Satellite in Orbit Trains First AI Model in Space
"Anything you can do in a terrestrial data center, I'm expecting to be able to be done in space."

Artificial intelligence companies are planning on investing unbelievable sums -- more than a trillion dollars per year by OpenAI alone -- building out enormous data centers that consume copious amounts of electricity, generate pollution, and take up considerable amounts of room. As critics have pointed out, the logistical obstacles are comically immense, from concerns over economic viability to bandwidth limitations. But, credit where credit is due, there's now a proof of concept.

With backing from AI chipmaker Nvidia, a startup called Starcloud launched a high-powered Nvidia GPU into outer space aboard a SpaceX rocket last month. Since then, the company has fired up the chip and is running Google's open-source large language model Gemma, as CNBC reports, marking the first time an AI has been run on a cutting-edge chip in space. The company also says it's managed to train a small-scale LLM on the complete works of Shakespeare, resulting in an AI that can speak in Shakespearean English. "Greetings, Earthlings! Or, as I prefer to think of you -- a fascinating collection of blue and green," the AI wrote in a message. "Let's see what wonders this view of your world holds. I'm Gemma, and I'm here to observe, analyze, and perhaps, occasionally offer a slightly unsettlingly insightful commentary."

Starcloud CEO Philip Johnston told CNBC that the concept is sound, and could considerably cut energy costs for AI companies. "Anything you can do in a terrestrial data center, I'm expecting to be able to be done in space," he said. "And the reason we would do it is purely because of the constraints we're facing on energy terrestrially." "Running advanced AI from space solves the critical bottlenecks facing data centers on Earth," he added, while also making strides on "environmental responsibility."
As detailed in a white paper, Starcloud has some extremely ambitious plans, especially when it comes to keeping operations in space cool. While data centers on Earth can be cooled using water and air, things get more complicated when it comes to cooling AI chips in outer space. As such, the company wants to build out a five-gigawatt orbital data center that is cooled with enormous cooling panels more than six square miles in area -- all the while being powered 24/7 by solar power. "Orbital data centers can leverage lower cooling costs using passive radiative cooling in space to directly achieve low coolant temperatures," the white paper reads. "Perhaps most importantly, they can be scaled almost indefinitely without the physical or permitting constraints faced on Earth, using modularity to deploy them rapidly." Thanks to the unconstrained source of solar power, the resulting data center's solar panels would be dramatically smaller than an equivalent solar farm in the US, the company claims.

Besides cooling, orbital data centers have plenty of other challenges to overcome, from extreme levels of radiation potentially wreaking havoc on the electronics to maintaining enough fuel to stay in orbit, not to mention avoiding collisions with space junk and questions regarding data regulation in space. Nonetheless, a growing number of firms believe running data centers in orbit is the answer. Starcloud is far from the only entity exploring the idea. Google also recently revealed "Project Suncatcher," an initiative that's aiming to launch the company's in-house tensor processing units into orbit. While Starcloud has partnered with SpaceX to launch its chips, OpenAI CEO Sam Altman is raising funds to either acquire or partner with a competing private space company, as the Wall Street Journal reported earlier this month. "When Starcloud-1 looked down, it saw a world of blue and green," Johnston told CNBC.
"Our responsibility is to keep it that way."
[3]
Starcloud Becomes First to Train LLMs in Space Using NVIDIA H100 | AIM
NVIDIA-backed startup Starcloud has successfully trained and run LLMs from space for the first time, a step toward orbital data centres as demand for computing power and energy grows on Earth. The Washington-based company's Starcloud-1 satellite, launched last month with an NVIDIA H100 GPU, has completed training of Andrej Karpathy's nano-GPT on the complete works of Shakespeare and run inference on Google DeepMind's open Gemma model.

"We just trained the first LLM in space using an NVIDIA H100 on Starcloud-1! We are also the first to run a version of Google's Gemini in space!" wrote Philip Johnston, founder and CEO of Starcloud, in a post on LinkedIn. "This is a significant step on the road to moving almost all compute to space, to stop draining the energy resources of Earth and to start utilising the near limitless energy of our Sun!" he added. In a post on X, Starcloud CTO Adi Oltean said that getting the H100 operational in space required "a lot of innovation and hard work" from the company's engineering team. He added that the team executed inference on a preloaded Gemma model and aims to test more models in the future.

Founded in 2024, Starcloud argues that orbital compute could ease mounting environmental pressures linked to traditional data centres, whose electricity consumption is expected to more than double by 2030, according to the International Energy Agency. Facilities on Earth also face water scarcity and rising emissions, while orbital platforms can harness uninterrupted solar energy and avoid cooling challenges. The startup, part of NVIDIA's Inception program and an alumnus of Y Combinator and the Google for Startups Cloud AI Accelerator, plans to build a 5-gigawatt space-based data centre powered entirely by solar panels spanning four kilometres in width and height. Such a system would outperform the largest US power plant while being cheaper and more compact than an equivalent terrestrial solar farm, according to the company's white paper.
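The white paper's headline numbers can be sanity-checked against the solar constant. As a back-of-envelope calculation -- assuming the "four kilometres in width and height" means a roughly 4 km x 4 km array, which the coverage does not actually specify -- the cell efficiency implied by 5 GW of output is plausible:

```python
# Back-of-envelope check: ~5 GW from panels spanning "four kilometres
# in width and height". Assumes a flat 4 km x 4 km array always facing
# the Sun; geometry and cell efficiency are not given in the coverage,
# so the derived efficiency is purely illustrative.

SOLAR_CONSTANT = 1361.0      # W/m^2 of sunlight above the atmosphere

area_m2 = 4_000.0 * 4_000.0  # 16 km^2 of panels
target_w = 5e9               # 5 gigawatts
required_efficiency = target_w / (SOLAR_CONSTANT * area_m2)
print(f"{required_efficiency:.1%}")  # ~23%, within range of modern cells
```

That the implied efficiency lands in the range of current photovoltaic hardware, with no night cycles or weather to derate it, is the core of the company's "cheaper and more compact than a terrestrial solar farm" argument.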
Besides Starcloud, Google, SpaceX and Jeff Bezos' Blue Origin are also pursuing space-based data centres. Google recently announced Project Suncatcher, which explores placing AI data centres in orbit. The initiative involves satellites equipped with custom tensor processing units and linked through high-throughput free-space optical connections to form a distributed compute cluster above Earth. Google CEO Sundar Pichai described space-based data centres as a "moonshot" in a recent interview. He said the company aims to harness uninterrupted solar energy near the sun, with early tests using small machine racks on satellites planned for 2027 and potential mainstream adoption within a decade.

Elon Musk, meanwhile, announced in November 2025 that SpaceX would build orbital data centres using next-generation Starlink satellites, calling them the lowest-cost AI compute option within five years. He said Starlink V3 satellites could scale to become the backbone of orbital compute infrastructure. According to a recent report, SpaceX is preparing for an initial public offering in 2026 to raise more than $25 billion at a valuation exceeding $1 trillion. According to Bloomberg News, SpaceX plans to use the IPO proceeds to build space-based data centres and purchase the chips needed to run them.

Musk discussed the idea during a recent event with Baron Capital. "Starship should be able to deliver around 300 GW per year of solar-powered AI satellites to orbit, maybe 500 GW. The 'per year' part is what makes this such a big deal," he said in a post on X on November 20. "Average US electricity consumption is around 500 GW, so at 300 GW/year, AI in space would exceed the entire US economy just in intelligence processing every 2 years."
[4]
World's 1st LLM trained in space: From earth to orbit on NVIDIA H100 GPUs
Future of compute is moving energy-hungry AI training to space

The concept of "cloud computing" has just become literal in the most extreme way possible. In a historic first that signals a new era for artificial intelligence infrastructure, Washington-based startup Starcloud has successfully trained a Large Language Model (LLM) aboard a satellite orbiting 325 kilometers above Earth. This achievement proves that the high-performance computing required for modern AI can survive and function in the vacuum of space.

The mission began with the launch of the Starcloud-1 satellite aboard a SpaceX Falcon 9. Inside the refrigerator-sized spacecraft sat a piece of hardware never before tested in such an environment: a data center-grade NVIDIA H100 GPU. The Starcloud team used this hardware to train Andrej Karpathy's NanoGPT model on the complete works of Shakespeare. The result was an AI capable of generating text in the Bard's distinct style while traveling at 17,000 miles per hour.

Following the training run, the system executed inference on a version of Google's open-source Gemma model. The AI sent a message back to mission control that acknowledged its unique position. It greeted the team with "Hello, Earthlings" and described the planet as a "charming existence composed of blue and green." This successful communication confirmed that delicate tensor operations could be performed accurately despite the harsh conditions of low Earth orbit.

Putting a 700-watt GPU into orbit presented a massive thermal challenge. On Earth, these chips are cooled by complex water and air systems to prevent overheating. In space, there is no air to carry heat away through convection. Starcloud CTO Adi Oltean and his engineering team had to design a system that relies entirely on radiative cooling.
This involves using large specialized panels to radiate the intense heat generated by the GPU directly into the freezing void of deep space. Beyond heat, the hardware had to be shielded from cosmic radiation. High-energy particles in space can flip bits in memory and corrupt the training process. The team implemented robust shielding and error-correction protocols to ensure the H100 could operate without the data corruption that typically plagues space-based electronics.

This project is more than just a technical stunt. It addresses the growing energy crisis facing the AI industry. Terrestrial data centers currently consume massive amounts of electricity and water. Starcloud CEO Philip Johnston argues that moving compute to orbit allows companies to tap into the sun's limitless energy. In orbit, solar arrays can generate power 24/7 without night cycles or weather interruptions. Furthermore, the natural cold of space eliminates the need for the millions of gallons of water used to cool servers on the ground. The company plans to scale this technology into a 5-gigawatt orbital data center that would rival the largest power plants on Earth.

The success of Starcloud-1 has kicked off a race for orbital dominance in the computing sector. Tech giants are already mobilizing. Reports indicate that Google is developing "Project Suncatcher" to deploy similar capabilities using its TPU chips. As AI models grow larger, the sky is no longer the limit for the infrastructure needed to power them. It is simply the next layer of the stack.
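The radiator sizing described above is governed by the Stefan-Boltzmann law: a panel at absolute temperature T sheds roughly εσT⁴ watts per square metre. A rough single-GPU estimate -- illustrative only, since it ignores absorbed sunlight, Earth-shine, view factors, and conduction paths that a real satellite design must handle -- looks like this:

```python
# Rough radiator sizing for one H100 via the Stefan-Boltzmann law.
# Illustrative only: ignores absorbed sunlight, Earth-shine, view
# factors and conduction paths, all of which matter on a real satellite.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w, temp_k, emissivity=0.9):
    """Panel area (m^2) needed to radiate `power_w` watts at surface
    temperature `temp_k` with the given emissivity."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# One H100 dissipates roughly 700 W; assume a panel held near 300 K.
print(f"{radiator_area(700, 300):.2f} m^2")  # ~1.7 m^2 for a single GPU
```

A couple of square metres per GPU is manageable for one satellite, but the T⁴ dependence also shows why gigawatt-scale orbital compute leads straight to the miles-wide radiator arrays in Starcloud's white paper.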
Washington-based startup Starcloud has successfully trained the first AI model in space using an Nvidia H100 GPU aboard its Starcloud-1 satellite. The company trained NanoGPT on Shakespeare's complete works and ran Google's Gemma model, proving that high-performance AI computing can function in orbit. This achievement addresses the AI industry's energy crisis by tapping into limitless solar power.
Washington-based startup Starcloud has successfully trained the first AI model in space, marking a significant breakthrough in the push toward orbital data centers [1]. The company's Starcloud-1 satellite, launched aboard a SpaceX Falcon 9 rocket on November 2, carries an Nvidia H100 GPU that has been used to train NanoGPT, a lightweight open-source model created by OpenAI founding member Andrej Karpathy [1]. The model was trained on the complete works of Shakespeare, demonstrating that high-performance AI computing can function in the harsh conditions of low Earth orbit, approximately 325 kilometers above Earth [4].
Starcloud has also been running inference on Google's open-source Gemma model, effectively creating a chatbot in space [1]. The satellite sent back a message reading: "Greetings, Earthlings! Or, as I prefer to think of you -- a fascinating collection of blue and green" [1]. According to Starcloud CTO Adi Oltean, getting the H100 operational in space required "a lot of innovation and hard work" from the engineering team [3]. This proof of concept demonstrates that large language models in orbit can perform the same complex tensor operations required for modern artificial intelligence.

The achievement directly tackles the mounting energy crisis facing terrestrial AI data centers. Starcloud CEO Philip Johnston told CNBC that "anything you can do in a terrestrial data center, I'm expecting to be able to be done in space," citing energy constraints on Earth as the primary motivation [2]. The International Energy Agency projects that electricity consumption from traditional data centers will more than double by 2030 [3]. Facilities on Earth also face water scarcity concerns and rising emissions, while AI compute infrastructure in orbit can harness uninterrupted solar energy without these limitations [3].

Starcloud plans to build a 5-gigawatt space-based data center powered entirely by solar panels spanning four kilometers in width and height [3]. According to the company's white paper, such a system would outperform the largest US power plant while being cheaper and more compact than an equivalent terrestrial solar farm [3]. "This is a significant first step toward moving almost all computing off Earth to reduce the burden on our energy supplies and take advantage of abundant solar energy in space," says Oltean [1].
Putting a 700-watt Nvidia H100 GPU into orbit presented massive thermal challenges that required innovative engineering [4]. One major challenge lies in developing a cooling solution, since the vacuum of space offers no air to dissipate heat through convection [1]. Starcloud is eyeing an air-based or liquid-based cooling solution for its satellites, along with "the largest radiators deployed in space" to handle the heat [1]. The company's white paper details plans for enormous cooling panels more than six square miles in area, using passive radiative cooling to achieve low coolant temperatures [2].

Beyond thermal management, the hardware had to be shielded from cosmic radiation that can flip bits in memory and corrupt the training process [4]. The team implemented robust shielding and error-correction protocols to ensure the H100 could operate without data corruption [4]. Other challenges include maintaining enough fuel to stay in orbit, avoiding collisions with space junk, and navigating questions regarding data regulation in space [2].
Starcloud is far from the only entity exploring orbital data centers. Google recently announced Project Suncatcher, an initiative aiming to launch the company's in-house tensor processing units into orbit [2]. Google CEO Sundar Pichai described space-based data centers as a "moonshot," with early tests using small machine racks on satellites planned for 2027 and potential mainstream adoption within a decade [3]. Elon Musk announced in November 2025 that SpaceX would build orbital data centers using next-generation Starlink satellites, calling them the lowest-cost AI compute option within five years [3].

Musk stated that "Starship should be able to deliver around 300 GW per year of solar-powered AI satellites to orbit, maybe 500 GW," noting that at 300 GW per year, AI in space would exceed the entire US economy's electricity consumption of around 500 GW just in intelligence processing every two years [3]. According to Bloomberg News, SpaceX is preparing for an initial public offering in 2026 to raise more than $25 billion at a valuation exceeding $1 trillion, with plans to use the proceeds to build space-based data centers [3]. Amazon founder Jeff Bezos' Blue Origin is also pursuing similar concepts [3].

The refrigerator-sized Starcloud-1 satellite carries just a single H100 GPU, while terrestrial data centers are being built to house tens of thousands and even millions of GPUs [1]. The resulting high costs and technical hurdles explain why the concept faces skepticism from some quarters [1]. Critics have pointed out that logistical obstacles are immense, from concerns over economic viability to bandwidth limitations [2].
However, Starcloud is preparing a second satellite, Starcloud-2, that will feature even more GPUs, with plans to launch sometime next year and even offer access to customers [1]. The startup, part of Nvidia's Inception program and an alumnus of Y Combinator and the Google for Startups Cloud AI Accelerator, argues that orbital platforms can be scaled almost indefinitely without the physical or permitting constraints faced on Earth [3]. Philip Johnston emphasized the environmental stakes: "When Starcloud-1 looked down, it saw a world of blue and green. Our responsibility is to keep it that way" [2].

Summarized by Navi