3 Sources
[1]
Data centres are controversial: will launching them into space help?
As the huge data centres powering the artificial-intelligence boom grow ever more unpopular on Earth, companies are planning to launch them into space. In the past few months, firms including SpaceX, Google and Blue Origin have all shared plans to launch large fleets, or constellations, of satellites into low Earth orbit. The networks would act in a similar way to the interconnected computers inside data centres on Earth, which process, store and transmit data on a massive scale.

Putting these 'orbital data centres' into space could, in theory, address concerns about their energy and water consumption, and their occupation of wide swathes of land. The idea is that constellations would use sunlight for energy rather than driving up electricity costs on Earth, and they would be cooled by space's naturally cold environment rather than by water sources on our planet.

For some, such a solution can't come soon enough. Data centres on Earth have become so environmentally taxing that communities and politicians are taking action against them. For example, the board of trustees for a township in the US state of Michigan voted last week to institute a one-year moratorium on the delivery of water to hyperscale data centres so that the township can study the effects of a planned facility.

As companies plug away at satellite designs and lobby for launch approvals, they are pushing for the space-based data centres to become a reality in the next few years. Researchers who spoke to Nature, however, see it taking longer to wrangle the sci-fi technology into being.

The chatter surrounding orbital data centres to power AI isn't new. In September 2024, engineers at the space-technology company Starcloud in Redmond, Washington, published a white paper arguing that orbital data centres are "feasible, economically viable, and necessary to realize the potential of AI".
And in November 2025, researchers at the technology giant Google announced the Suncatcher project, with a plan to "one day scale machine learning compute in space". But it was in January this year that "everything blew up in this area", says Kathleen Curlee, who studies the space economy at Georgetown University in Washington DC. That's when billionaire Elon Musk's aerospace and tech company SpaceX, headquartered in Starbase, Texas, shared plans to launch one million satellites to form an orbital data centre -- a staggering number compared with the roughly 15,000 satellites now in low Earth orbit. Not to be outdone, the China Aerospace Science and Technology Corporation, based in Beijing, joined the race at about the same time, and billionaire Jeff Bezos' space-tech firm Blue Origin, based in Kent, Washington, later filed for its own constellation.

Adding more pressure to get data centres into orbit is a March plan released by US President Donald Trump's administration called the Ratepayer Protection Pledge. AI firms such as Google, OpenAI and Musk's xAI signed the pledge, agreeing to build infrastructure for, or to buy, any power their data centres need, to prevent US residents from footing the bill. It's a non-binding agreement, but by implementing it ahead of the US mid-term elections in November, Trump has made clear that data centres are a political issue that could sway voters.

For these projects to succeed, several engineering obstacles need to be overcome. One is ensuring that the satellites' electronics cool properly. Although space is much colder than Earth, it is also a vacuum, meaning that the extreme heat generated primarily by AI chips will probably not easily dissipate on its own. Technologies already exist to cool gadgets in space, such as the heat radiators on the International Space Station.
But these are probably too heavy -- and, consequently, too expensive to launch -- to be practical for orbital data centres, says Igor Bargatin, a mechanical engineer at the University of Pennsylvania in Philadelphia.

Another obstacle is the effect of harsh space radiation on AI chips. As protons and the other high-energy particles that make up space radiation strike the chips, they could flip a binary bit from a 0 to a 1, or vice versa, effectively corrupting stored data, says Ken Mai, a principal systems scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania. In a white paper released last year, however, a team at Google reported that its existing Trillium chips remained stable under the radiation of a proton beam. But it's "still an open question of how much radiation can be tolerated", Bargatin says.

If the number of satellites in low Earth orbit increases by two orders of magnitude, "it certainly seems like a big challenge from the space-traffic management perspective," Bargatin adds. He points specifically to a phenomenon called the Kessler effect, which predicts that as low Earth orbit becomes overcrowded, collisions will increase exponentially as more debris is produced -- potentially making certain orbital regions unusable. Overcrowding is also a point of concern for astronomers, whose space images are already being marred by satellites. For instance, if SpaceX's plans were to go through, each image captured by the Very Large Telescope in Chile would lose 10% of its data, according to the Royal Astronomical Society in London.

Some companies are already bringing AI models to space, with Starcloud running a version of Google's AI assistant Gemini on one of its satellites and the Chinese aerospace company Adaspace, based in Chengdu, deploying ten AI models in orbit. Regulatory agencies are also acting fast.
The US Federal Communications Commission, which oversees satellites, accepted SpaceX's satellite proposal and opened it up to public comment within days of receipt -- although the commission has not yet given approval. At the end of March, Musk unveiled some technical details about SpaceX's orbital data centre, including an illustration of an AI Sat Mini to be used in the constellation, complete with a large radiator to remove heat, the industry website Space News reported.

Musk has been publicly bullish about space-based data centres, saying at the World Economic Forum, held earlier this year in Davos, Switzerland, that "the lowest-cost place to put AI will be space. And that'll be true within two years ... three at the latest". But just last week, the news agency Reuters revealed that SpaceX itself is much less confident. In a filing ahead of its highly anticipated initial public offering, the firm said that its "orbital AI compute" initiative will rely on "unproven technologies" and "may not achieve commercial viability".

Some researchers think that if orbital data centres are ever to come to fruition, it will be a long journey. "Five years is probably the best-case scenario, as far as I'm concerned," Bargatin says. Curlee sees an even lengthier timeline for data centres to proliferate in space as they have on Earth. "I don't really see that happening for at least 10 years -- probably 15 or 20," she says.
[2]
One giant leap for AI: Companies rethink how and where data is processed
Big tech and startups are developing orbital data centers to process AI-driven data in space, reducing latency and energy use. This edge computing approach allows satellites to prioritize and transmit high-value data, enhancing mission autonomy and enabling quicker insights for sectors like defense and climate monitoring.

Processing data on satellites is gaining traction as a solution to reducing latency and addressing energy limitations in the artificial intelligence (AI)-driven data economy. As big tech companies explore this strategy, Elon Musk is among those considering orbital data centres to establish this infrastructure. This approach is becoming increasingly important as modern satellites, especially those used for earth observation, produce vast amounts of data that can overwhelm bandwidth and slow down real-time decision-making. Experts believe that by processing data at the "edge" in space, we can gain quicker insights, selectively transmit data, and enhance mission autonomy.

A growing ecosystem of players, including global firms like Hewlett Packard Enterprise and Indian startups such as Pixxel, Skyroot Aerospace, Dhruva Space, SatSure and Digantara, is developing capabilities that combine onboard computing with ground-based systems. Early applications span sectors such as defence and intelligence (ISR), agriculture monitoring, climate modelling, disaster response and border surveillance, where timely data is crucial.

The rise of AI is central to this shift. Machine learning models enable satellites to prioritise, compress and interpret high-value data in orbit despite tight power and compute limits, making space-based computing a complementary extension of terrestrial cloud and edge infrastructure in an increasingly distributed, AI-powered world.

Bengaluru-based space technology company Digantara aims to deploy a constellation of 15 satellites for space domain awareness by 2027 to track and monitor adversary satellite movements and space debris.
Anirudh Sharma, CEO of Digantara, said edge computing helps to reduce downlink and information latency by processing data closer to the source. "The second is enabling inference and autonomy onboard, for instance, when two satellites within the same constellation exchange data between them and support through inter-satellite links to allow for constellation maintenance and collision avoidance."

"From a defence standpoint, this becomes significant at higher orbits, GEO and beyond, where ground-in-the-loop decision cycles are extremely difficult. Onboard autonomy is the infrastructure that holds significant value for decision making. The critical constraint is ensuring reliability of such analysis and ensuring that there are no false positives," he said.

Ryan D'Souza, country manager for AI and high-performance computing at Hewlett Packard Enterprise, an enterprise technology company, said the company's Spaceborne Computer programme demonstrates how data-centre-class computing can be extended into space, allowing missions to process data closer to where it is generated rather than relying entirely on earth-based systems. "For deep-space or lunar missions led by Indian Space Research Organisation, near real-time data analysis at the edge can significantly improve responsiveness and operational efficiency," he said. HPE's Spaceborne Computer-2, deployed aboard the International Space Station, integrates high-performance computing (HPC), AI and machine learning using commercial off-the-shelf hardware, tested for resilience in extreme space conditions.

Industry leaders said such capabilities will be essential as space assets become more autonomous. Pawan Kumar Chandana, chief executive of Skyroot Aerospace, a Hyderabad-based private space launch company, said processing large volumes of data directly in orbit is critical. "To enhance real-time autonomy of space assets, ability to process large data originating from them is a must... space compute qualifies as critical infrastructure and demands sovereignty," Chandana said.

Awais Ahmed, founder and chief executive of Pixxel, a Bengaluru-based private space technology company building a hyperspectral imaging satellite constellation, said the growing volume of data generated in orbit is already straining downlink capacity, making selective, in-orbit decision-making increasingly important. "As earth observation systems scale, space-based processing will become increasingly important... The real value is not in processing everything onboard, but in making smarter decisions in orbit, whether through filtering, intelligent compression, or prioritising what to transmit first," Ahmed said, highlighting the unique challenges of hyperspectral imaging, where each scene carries dense spectral data.

He said that for companies like Pixxel, there is a clear opportunity to move certain forms of intelligence closer to the source to improve responsiveness and efficiency, while still relying on ground infrastructure for deeper analysis and large-scale model execution. Ahmed also pointed to the constraints of operating AI systems in orbit. "Real-time AI inference in orbit must operate within very tight power, compute and thermal constraints... the strongest use cases are likely to be around triage, prioritisation and selective processing of the highest-value data," he said, adding that Pixxel already uses such techniques for compression and cloud detection to optimise data transmission.

Krishna Teja Penamakuru, chief operations officer and cofounder of Dhruva Space, a Hyderabad-based, full-stack space engineering company providing end-to-end satellite solutions, including design, manufacturing, launch and operations, said space-based computing must be designed as part of the broader mission architecture rather than in isolation. "For enterprise and government missions, we see space-based computing as an enabler. The real value lies in making first-order decisions in orbit: filtering, prioritising and compressing data, while leveraging ground systems for deeper analytics," he said.

Penamakuru said that in full-stack missions, compute architecture is a strategic decision tied to the entire data pipeline, from onboard systems to ground infrastructure, playing a key role in ensuring performance, reliability and data sovereignty. "Processing selectively in orbit improves latency and reduces bandwidth dependency, but control over the end-to-end data pipeline is what ultimately enables true data sovereignty," he said, adding that in small satellite platforms, every compute decision involves trade-offs against power, thermal and reliability constraints.

While in-orbit computing can reduce dependence on ground stations in low-latency scenarios, D'Souza said it is unlikely to replace earth-based infrastructure entirely. Applications such as large-scale remote sensing and disaster management will still require data aggregation and coordinated response on the ground.
[3]
Edge-First AI: Building Robust Intelligence for Deep Space Missions
I started my career working on the space shuttle program at IBM and thought my life endeavors would center on space. Instead, my interest turned to compute devices and the technology that can bring computation to the masses. Those interests are now aligning with the realities of AI in space, both for edge computation in satellites and spacecraft today and for the future plans for massive data centers in space.

For years, AMD has built for "edge reality" - where power is constrained, connectivity isn't guaranteed, and success is measured in real-time decisions, not theoretical peak performance. We've helped bring AI into PCs, industrial systems and embedded deployments by combining heterogeneous compute (CPUs, GPUs and adaptive compute) with a strong software foundation. This "edge playbook" centers on a relentless focus on performance-per-watt and mission-critical reliability, allowing our partners to right-size performance for their specific needs.

We see space as the next and most demanding frontier for edge computing. The same fundamentals apply; they're just amplified: strict power and thermal budgets, intermittent communications, expected long service lives, and a premium on reliability and autonomy. We are taking what we've learned enabling AI at the edge and extending it to space workloads with holistic co-design across hardware, software and systems so that on-board intelligence can be deployed efficiently, updated responsibly, and scaled across missions and form factors. Orbiting data centers are emerging. As they do, AMD's focus on adaptive, scalable platforms and an open ecosystem will help partners build robust, efficient end-to-end systems.

Space is the Ultimate Edge Environment

The immediate opportunity is on-board intelligence that senses, decides and acts as the mission happens.
Space makes edge processing not just beneficial but often necessary, with local AI becoming the backbone of operations in which every downlink is constrained, every millisecond of latency matters and connectivity can't be assumed.

Intelligence at the Point of Action

By moving AI from the terrestrial data center to the on-board system, the spacecraft shifts from a passive sensor to an autonomous decision-maker that can act even when the downlink is dark. Downlink is limited by bandwidth, power and communication windows, so sending everything to a terrestrial data center is inefficient and slow. On-board AI can discard low-value data (like cloudy frames in Earth observation), surface urgent events (like early wildfire signatures) and enable resilient autonomy when connectivity is intermittent.

Edge processing helps spacecraft and satellites interpret data locally and act on it. Instead of treating the platform as a sensor that just collects raw data for Earth, AI in space turns it into a system that prioritizes, compresses and decides at the point of capture with agentic AI workflows. And this AI can be adjusted across use cases and workflows, whether for a planetary rover navigating hazards or a spacecraft flagging telemetry anomalies before they cascade into failure.

The Intrigue of Data Centers in Space

Looking further out, success will be about making orbital compute a reality. Given the insatiable demand for more AI computing in data centers, there are several efforts to deliver mass-scale computation in space to tap into solar power and leverage cooler temperatures. Large-scale orbital compute will ultimately be limited by power, thermal dissipation, radiation resilience and communications. Many concepts assume sun-synchronous "dawn-dusk" orbits to maximize solar availability and reduce thermal cycling, with low Earth orbit helping limit latency and radiation exposure.
One of the most difficult problems to solve is how to remove heat from large-scale compute deployments. Space is a vacuum, so excess heat must be conducted to radiators.

The Vacuum Catalyst

In space, there's no air to carry heat away, so thermal management becomes a first-principles problem. The only way to shed the heat generated by electronics is to conduct it to radiators for dissipation. This unique constraint transforms performance-per-watt from a metric into a mandate, driving the architectural innovations that can make massive-scale AI in orbit a reality.

At meaningful scale, that reality pushes architectural thinking toward modular, serviceable systems rather than a monolithic "data center in a box". It will be many elements operating together, each managing its own power generation and thermal dissipation while communicating through high-throughput links. At large scale, that likely implies:

* Modular deployments that can reach multimegawatt-class capabilities over time.
* High-speed, low-latency interconnect between elements (including optical links at substantially higher data rates and lower energy consumption than what's commonly deployed today).
* Reliability and replacement models that assume modules may have limited lifetimes and can be de-orbited and replaced - more like fleet operations than traditional one-off spacecraft.

AMD Offers the Building Blocks for What's Next

AMD adaptive computing has supported space exploration for decades, including image-processing and navigation acceleration for NASA's Mars rovers and the Artemis II mission. (Learn more about AMD's proven expertise in space in "AMD in Space: Proven Expertise, Products Support Missions".) AMD's approach is to make space AI buildable - not as a one-off engineering project, but as a repeatable platform journey.
That starts with adaptive, scalable compute building blocks that can be right-sized to the mission: CPUs, GPUs, FPGAs and accelerator options where they make sense, paired with modular design philosophies. This approach extends our established edge playbook to the stars. By providing the same platform consistency we've delivered for terrestrial deployments, we enable a repeatable journey where partners can evolve capabilities over time without re-architecting from scratch. Just as important is openness. Space missions are assembled from many specialized suppliers, and no single vendor can (or should) dictate the full solution.

Mission Resilience Through Openness

Space missions are complex, multi-vendor ecosystems. The open AMD ROCm™ software stack lets developers tune and validate systems across diverse hardware, preventing proprietary lock-in and fostering a more resilient, collaborative frontier. AMD is investing in open software and open standards so partners can integrate, tune and validate end-to-end systems with more choice and less friction. On the software side, AMD ROCm™ software is part of the open software stack for AI and HPC, designed to help developers move from kernels to applications on AMD accelerators. On the systems side, AMD is helping drive standards for open security, interconnect and infrastructure to ensure high-performance AI systems can scale without lock-in.

New Frontier: Scaling AI from Earth to Orbit

The most exciting part of this conversation is that AI is expanding where compute can create impact, including environments that are remote, constrained and mission-critical. By putting intelligence closer to where data is generated, we reduce latency, save bandwidth and improve mission outcomes. That's true in factories, hospitals and vehicles - and it's true in space.
At AMD, we'll keep doing what we do best: Engineer for reality, co-optimize the full system and build technologies that scale efficiently - from Earth to orbit and beyond.
Major tech companies are racing to launch data centers in space as AI's energy demands strain Earth's resources. SpaceX plans to deploy one million satellites for orbital computing, while Google, Blue Origin, and others pursue similar initiatives. The shift addresses concerns about power consumption and water use, but faces engineering challenges including cooling electronics in a vacuum and space radiation effects on AI chips.
As AI continues to expand, the environmental impact of AI infrastructure has pushed major technology companies to explore an unconventional solution: data centers in space. SpaceX, Google, and Blue Origin have all announced plans to launch large satellite constellations into low Earth orbit (LEO) that would function as interconnected orbital computing networks [1]. The most ambitious proposal comes from Elon Musk's SpaceX, which revealed plans in January to deploy one million satellites for an orbital data center, a staggering figure compared with the roughly 15,000 satellites currently in low Earth orbit [1].
Source: CXOToday
The concept addresses mounting concerns about terrestrial data centers, which have become so environmentally taxing that communities are taking action. A township in Michigan recently voted to institute a one-year moratorium on water delivery to hyperscale data centers to study their effects [1]. By utilizing solar power for energy and space's naturally cold environment for cooling, orbital data centers could theoretically eliminate the massive electricity and water consumption plaguing Earth-based facilities.

Beyond addressing environmental concerns, edge computing for space missions offers practical advantages for AI in space exploration. Modern satellites, particularly those used for earth observation, generate vast amounts of data that overwhelm bandwidth limitations and slow real-time decision-making [2]. Processing data in orbit allows satellites to shift from passive sensors to active decision-makers that can prioritize, compress, and interpret high-value data before transmission.

Anirudh Sharma, CEO of Bengaluru-based Digantara, which aims to deploy 15 satellites for space domain awareness by 2027, explains that edge computing serves to reduce data latency by processing closer to the source. "The second is enabling inference and mission autonomy onboard, for instance, when two satellites within the same constellation exchange data between them and support through inter-satellite links to allow for constellation maintenance and collision avoidance," Sharma noted [2].

Machine learning models enable satellites to discard low-value data, such as cloudy frames in imaging, while surfacing urgent events like early wildfire signatures [3]. This capability transforms spacecraft into autonomous systems that can act even when downlink capacity is unavailable, which is critical for deep-space missions, where ground-in-the-loop decision cycles become extremely difficult.
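The kind of onboard triage described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual pipeline; the `Frame` fields, thresholds, and scoring rule are all hypothetical:

```python
# Hypothetical onboard triage: score captured frames, drop low-value ones,
# and order the rest for downlink. Fields and thresholds are illustrative
# assumptions, not any real mission's pipeline.
from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    cloud_fraction: float   # 0.0 (clear) .. 1.0 (fully clouded)
    anomaly_score: float    # e.g. wildfire-signature likelihood, 0 .. 1

def triage(frames, cloud_limit=0.8):
    """Discard mostly-cloudy frames; urgent events jump the downlink queue."""
    keep = [f for f in frames if f.cloud_fraction < cloud_limit]
    return sorted(keep, key=lambda f: (-f.anomaly_score, f.cloud_fraction))

frames = [
    Frame(1, cloud_fraction=0.95, anomaly_score=0.1),  # dropped: clouded
    Frame(2, cloud_fraction=0.10, anomaly_score=0.9),  # urgent: sent first
    Frame(3, cloud_fraction=0.30, anomaly_score=0.2),
]
print([f.frame_id for f in triage(frames)])  # prints [2, 3]
```

The point of the sketch is the shape of the decision, not the specifics: filter first to save bandwidth, then prioritize so the highest-value data uses the next communication window.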
Source: Nature
Despite the promise of orbital computing, several engineering obstacles must be overcome. The most pressing challenge is thermal management. Although space is much colder than Earth, it is also a vacuum, meaning the extreme heat generated primarily by AI chips will not easily dissipate on its own [1]. Technologies such as the heat radiators on the International Space Station exist, but they are probably too heavy, and therefore too expensive to launch, to be practical for orbital data centers, according to Igor Bargatin, a mechanical engineer at the University of Pennsylvania [1].

AMD, which has been developing edge devices for constrained environments, notes that in space there is no air to carry heat away, making thermal dissipation a first-principles problem. The only way to eliminate heat is to conduct it to radiators, transforming performance-per-watt from a metric into a mandate [3]. At meaningful scale, this reality drives architectural thinking toward modular, serviceable systems rather than monolithic deployments, with each element managing its own power generation and thermal dissipation.
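A back-of-envelope calculation shows why radiators dominate these designs. In vacuum the only way to reject heat is thermal radiation, governed by the Stefan-Boltzmann law, P = ε·σ·A·T⁴. The numbers below (a 1 MW module, 300 K panels, emissivity 0.9, one-sided emission, absorbed sunlight ignored) are illustrative assumptions, not figures from any proposal:

```python
# Radiator sizing via the Stefan-Boltzmann law: radiated power is
# P = emissivity * sigma * area * T**4, so the area needed to reject
# P watts is A = P / (emissivity * sigma * T**4). All inputs below are
# illustrative assumptions (one-sided emission, sunlight ignored).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiator_area_m2(power_w, temp_k, emissivity=0.9):
    return power_w / (emissivity * SIGMA * temp_k ** 4)

area = radiator_area_m2(1_000_000, 300)   # 1 MW of waste heat at 300 K
print(f"{area:,.0f} m^2 per megawatt")    # roughly 2,400 m^2
```

Thousands of square metres of radiator per megawatt, under even generous assumptions, is what makes launch mass the binding constraint and performance-per-watt the governing metric.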
Another obstacle involves space radiation effects on AI chips. As protons and other high-energy particles strike the chips, they could flip binary bits from 0 to 1 or vice versa, effectively corrupting stored data, says Ken Mai, a principal systems scientist at Carnegie Mellon University [1]. While Google reported last year that its Trillium chips remained stable under proton-beam radiation, it is "still an open question of how much radiation can be tolerated," Bargatin notes [1].
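One classic mitigation for such single-event upsets is redundancy, for example triple modular redundancy (TMR): store three copies of a value and take a bitwise majority vote on read, so any single flipped bit is outvoted. The sketch below illustrates the idea only; it is not the error-correction scheme used by Trillium or any flight system (real hardware typically uses ECC codes or hardened cells):

```python
# Illustrative only: triple modular redundancy masking a single-event
# upset. Three copies of a word are kept; a bitwise majority vote
# recovers the original even if radiation flips one bit in one copy.
import random

def flip_random_bit(word, width=32):
    """Simulate a single-event upset: flip one random bit."""
    return word ^ (1 << random.randrange(width))

def majority_vote(a, b, c):
    # A result bit is 1 iff that bit is 1 in at least two of the copies.
    return (a & b) | (a & c) | (b & c)

value = 0xBEEF_CAFE
copies = [value, value, value]
copies[1] = flip_random_bit(copies[1])  # radiation corrupts one copy
assert majority_vote(*copies) == value  # the single upset is masked
```

The trade-off is the same one the article describes at system level: tripling storage and voting on every read costs power and chip area, which is exactly why radiation tolerance remains an open sizing question rather than a solved one.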
The dramatic increase in satellites also raises concerns about space debris and traffic management. If the number of satellites in low Earth orbit grows by two orders of magnitude, it presents a major challenge from a space-traffic-management perspective, particularly regarding the Kessler syndrome, a runaway process in which collisions in overcrowded orbits produce debris that triggers further collisions, potentially rendering certain orbital regions unusable [1].

Political factors are accelerating the push for orbital solutions. In March, US President Donald Trump's administration released the Ratepayer Protection Pledge, which AI firms including Google, OpenAI, and Musk's xAI signed, agreeing to build infrastructure for, or to buy, any power their data centers need, to prevent US residents from footing the bill [1]. Though non-binding, the agreement signals that data centers have become a political issue that could sway voters in the November mid-term elections.

A growing ecosystem of players is developing capabilities that combine onboard high-performance computing in space with ground-based systems. Companies like Hewlett Packard Enterprise, along with Indian startups including Pixxel, Skyroot Aerospace, Dhruva Space, SatSure, and Digantara, are building solutions that span sectors such as defense, agriculture monitoring, climate modeling, disaster response, and border surveillance [2].

Awais Ahmed, founder of Pixxel, emphasizes that the growing volume of data generated in orbit is already straining downlink capacity. "The real value is not in processing everything onboard, but in making smarter decisions in orbit, whether through filtering, intelligent compression, or prioritising what to transmit first," Ahmed said [2]. While companies push for space-based data centers to become reality in the next few years, researchers suggest it will take longer to wrangle the technology into being, making this a development worth monitoring as the intersection of AI and space infrastructure evolves.