2 Sources
[1]
OpenAI's newly launched Sora 2 makes AI's environmental impact impossible to ignore
OpenAI's recent rollout of its new video generator Sora 2 marks a watershed moment in AI. Its ability to generate minutes of hyper-realistic footage from a few lines of text is astonishing, and has raised immediate concerns about truth in politics and journalism. But Sora 2 is rolling out slowly because of its enormous computational demands, which point to an equally pressing question about generative AI itself: What are its true environmental costs? Will video generation make them much worse?

The recent launch of the Stargate Project -- a US$500 billion joint venture between OpenAI, Oracle, SoftBank and MGX -- to build massive AI data centres in the United States underscores what's at stake. As companies race to expand computing capacity on this scale, AI's energy use is set to soar.

The debate over AI's environmental impact remains one of the most fraught in tech policy. Depending on what we read, AI is either an ecological crisis in the making or a rounding error in global energy use. As AI moves rapidly into video, clarity on its footprint is more urgent than ever.

Two competing narratives

From one perspective, AI is rapidly becoming a major strain on the world's energy and water systems. Alex de Vries-Gao, a researcher who has long tracked the electricity use of bitcoin mining, noted in mid-2025 that AI was on track to surpass it. He estimated that AI already accounted for about 20 per cent of global data-centre power consumption, a share likely to double by year's end.

According to the International Energy Agency, data centres accounted for up to 1.5 per cent of global electricity consumption last year, with their consumption growing four times faster than total global demand. The IEA predicts that data centres will more than double their use by 2030, with AI processing the leading driver of growth. Research cited by MIT's Technology Review concurs, estimating that by 2028, AI's power draw could exceed "all electricity currently used by US data centers" -- enough to power 22 per cent of U.S. households each year.

'Huge' quantities

AI's water use is also striking. Data centres rely on ultra-pure water to keep servers cool and free of impurities. Researchers estimated that training GPT-3 may have consumed about 700,000 litres of freshwater at Microsoft's American facilities, and they predict that global AI demand could reach four to six billion cubic metres annually by 2027.

Hardware turnover adds further strain. A 2023 study found that chip fabrication requires "huge quantities" of ultra-pure water, energy-intensive chemical processes and rare minerals such as cobalt and tantalum. Manufacturing the high-end graphics processing units -- the engines that drive the AI boom -- has a much larger carbon footprint than most consumer electronics.

Generating an image uses about as much electricity as a microwave running for five seconds, while generating a five-second video clip uses as much as a microwave running for over an hour. The next leap from text and image to high-definition video could dramatically increase AI's impact. Early testing bears this out, finding that energy use for text-to-video models quadruples when video length doubles.

The case for perspective

Others see the alarm as overstated. Analysts at the Center for Data Innovation, a technology and policy think tank, argue that many estimates of AI energy use rely on faulty extrapolations.
GPU hardware is becoming more efficient each year, and much of the electricity in new data centres will come from renewables.

Recent benchmarking puts AI's footprint in context. Producing a typical chatbot Q&A consumes about 2.9 watt-hours (Wh) -- roughly 10 times a Google search. Google recently claimed that a typical Gemini prompt uses only 0.24 Wh and 0.25 mL of water, though independent experts note those numbers omit indirect energy and water used in power generation.

Context is key. An hour of high-definition video streaming on Netflix uses roughly 100 times more energy than generating a text response. An AI query's footprint is tiny, yet data centres now process billions of queries daily, and more demanding video queries are on the horizon.

Jevons paradox

It helps to distinguish between training a model and using it. Training frontier models such as GPT-4 or Claude Opus 3 required thousands of graphics chips running for months, consuming gigawatt-hours of power. Using a model consumes only a tiny amount of energy per query, but that happens billions of times a day. Eventually, the energy used to run AI will likely surpass the energy used to train it.

The least visible cost may come from hardware production. Each new generation of chips demands new fabrication lines, heavy mineral inputs and advanced cooling. Italian economist Marcello Ruberti observes that "each upgrade cycle effectively resets the carbon clock" as fabs rebuild highly purified equipment from scratch.

And even if AI models become more efficient, total energy use keeps climbing. In economics, this is known as the Jevons paradox: in 19th-century Britain, coal consumption rose even as steam engines grew more efficient at using it. As AI researchers have noted, as per-query costs fall, developers are incentivized to find new ways to embed AI into every product. The result is more data centres, more chips and more total resource use.

A problem of scale

Is AI an ecological menace or a manageable risk? The truth lies somewhere in between. A single prompt uses negligible energy, but the systems enabling it -- vast data centres, constant chip manufacturing, round-the-clock cooling -- are reshaping global energy and water patterns. The International Energy Agency's latest outlook projects that data-centre power demand could reach 1,400 terawatt-hours by 2030, the equivalent of adding several mid-sized countries to the world's grid. AI will account for a quarter of that growth.

Transparency is vital

Many of the figures circulating about AI energy use are unreliable because AI firms disclose so little. The limited data they release often employ inconsistent metrics or offset accounting that obscures real impacts. One obvious fix would be to mandate disclosure rules: standardized, location-based reporting of the energy and water used to train and operate models. Europe's Artificial Intelligence Act requires developers of "high-impact" systems to document computation and energy use. Similar measures elsewhere could guide where new data centres are built, favouring regions with abundant renewables and water, and could encourage longer hardware lifecycles instead of annual chip refreshes.

Balancing creativity and cost

Generative AI can help unlock extraordinary creativity and provide real utility. But each "free" image, paragraph or video has hidden material and energy costs. Acknowledging those costs doesn't mean we need to halt innovation. It means we should demand transparency about how great those costs are, and who pays them, so that AI's environmental impacts can actually be addressed.
As Sora 2 begins to fill social feeds with highly realistic visuals, the question won't be whether AI uses more energy than Netflix, but whether we can expand our digital infrastructure responsibly enough to make room for both.
[2]
OpenAI's newly launched Sora 2 makes AI's environmental impact impossible to ignore
OpenAI's recent rollout of its new video generator Sora 2 marks a watershed moment in AI. Its ability to generate minutes of hyper-realistic footage from a few lines of text is astonishing, and has raised immediate concerns about truth in politics and journalism. But Sora 2 is rolling out slowly because of its enormous computational demands, which point to an equally pressing question about generative AI itself: What are its true environmental costs? Will video generation make them much worse?

The recent launch of the Stargate Project -- a US$500 billion joint venture between OpenAI, Oracle, SoftBank and MGX -- to build massive AI data centers in the United States underscores what's at stake. As companies race to expand computing capacity on this scale, AI's energy use is set to soar.

The debate over AI's environmental impact remains one of the most fraught in tech policy. Depending on what we read, AI is either an ecological crisis in the making or a rounding error in global energy use. As AI moves rapidly into video, clarity on its footprint is more urgent than ever.

Two competing narratives

From one perspective, AI is rapidly becoming a major strain on the world's energy and water systems. Alex de Vries-Gao, a researcher who has long tracked the electricity use of bitcoin mining, noted in mid-2025 that AI was on track to surpass it. He estimated that AI already accounted for about 20% of global data-center power consumption, a share likely to double by year's end.

According to the International Energy Agency, data centers accounted for up to 1.5% of global electricity consumption last year, with their consumption growing four times faster than total global demand. The IEA predicts that data centers will more than double their use by 2030, with AI processing the leading driver of growth. Research cited by MIT's Technology Review concurs, estimating that by 2028, AI's power draw could exceed "all electricity currently used by US data centers" -- enough to power 22% of U.S. households each year.

'Huge' quantities

AI's water use is also striking. Data centers rely on ultra-pure water to keep servers cool and free of impurities. Researchers estimated that training GPT-3 may have consumed about 700,000 liters of freshwater at Microsoft's American facilities, and they predict that global AI demand could reach four to six billion cubic meters annually by 2027.

Hardware turnover adds further strain. A 2023 study found that chip fabrication requires "huge quantities" of ultra-pure water, energy-intensive chemical processes and rare minerals such as cobalt and tantalum. Manufacturing the high-end graphics processing units -- the engines that drive the AI boom -- has a much larger carbon footprint than most consumer electronics.

Generating an image uses about as much electricity as a microwave running for five seconds, while generating a five-second video clip uses as much as a microwave running for over an hour. The next leap from text and image to high-definition video could dramatically increase AI's impact. Early testing bears this out, finding that energy use for text-to-video models quadruples when video length doubles.

The case for perspective

Others see the alarm as overstated. Analysts at the Center for Data Innovation, a technology and policy think tank, argue that many estimates of AI energy use rely on faulty extrapolations. GPU hardware is becoming more efficient each year, and much of the electricity in new data centers will come from renewables. Recent benchmarking puts AI's footprint in context.
Producing a typical chatbot Q&A consumes about 2.9 watt-hours (Wh) -- roughly 10 times a Google search. Google recently claimed that a typical Gemini prompt uses only 0.24 Wh and 0.25 mL of water, though independent experts note those numbers omit indirect energy and water used in power generation.

Context is key. An hour of high-definition video streaming on Netflix uses roughly 100 times more energy than generating a text response. An AI query's footprint is tiny, yet data centers now process billions of queries daily, and more demanding video queries are on the horizon.

Jevons paradox

It helps to distinguish between training a model and using it. Training frontier models such as GPT-4 or Claude Opus 3 required thousands of graphics chips running for months, consuming gigawatt-hours of power. Using a model consumes only a tiny amount of energy per query, but that happens billions of times a day. Eventually, the energy used to run AI will likely surpass the energy used to train it.

The least visible cost may come from hardware production. Each new generation of chips demands new fabrication lines, heavy mineral inputs and advanced cooling. Italian economist Marcello Ruberti observes that "each upgrade cycle effectively resets the carbon clock" as fabs rebuild highly purified equipment from scratch.

And even if AI models become more efficient, total energy use keeps climbing. In economics, this is known as the Jevons paradox: in 19th-century Britain, coal consumption rose even as steam engines grew more efficient at using it. As AI researchers have noted, as costs per query fall, developers are incentivized to find new ways to embed AI into every product. The result is more data centers, more chips and more total resource use.

A problem of scale

Is AI an ecological menace or a manageable risk? The truth lies somewhere in between. A single prompt uses negligible energy, but the systems enabling it -- vast data centers, constant chip manufacturing, round-the-clock cooling -- are reshaping global energy and water patterns. The International Energy Agency's latest outlook projects that data-center power demand could reach 1,400 terawatt-hours by 2030, the equivalent of adding several mid-sized countries to the world's grid. AI will account for a quarter of that growth.

Transparency is vital

Many of the figures circulating about AI energy use are unreliable because AI firms disclose so little. The limited data they release often employ inconsistent metrics or offset accounting that obscures real impacts. One obvious fix would be to mandate disclosure rules: standardized, location-based reporting of the energy and water used to train and operate models. Europe's Artificial Intelligence Act requires developers of "high-impact" systems to document computation and energy use. Similar measures elsewhere could guide where new data centers are built, favoring regions with abundant renewables and water, and could encourage longer hardware lifecycles instead of annual chip refreshes.

Balancing creativity and cost

Generative AI can help unlock extraordinary creativity and provide real utility. But each "free" image, paragraph or video has hidden material and energy costs. Acknowledging those costs doesn't mean we need to halt innovation. It means we should demand transparency about how great those costs are, and who pays them, so that AI's environmental impacts can actually be addressed.
As Sora 2 begins to fill social feeds with highly realistic visuals, the question won't be whether AI uses more energy than Netflix, but whether we can expand our digital infrastructure responsibly enough to make room for both.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
OpenAI's launch of Sora 2, a powerful AI video generator, sparks debate over the environmental impact of AI technologies. The new tool's impressive capabilities come with significant computational demands, highlighting the growing energy and resource consumption of AI systems.
OpenAI has recently unveiled Sora 2, a groundbreaking AI video generator that can produce minutes of hyper-realistic footage from simple text prompts. This technological leap has not only raised concerns about its potential misuse in politics and journalism but has also brought the environmental impact of AI into sharp focus [1][2].
The launch of Sora 2 coincides with the announcement of the Stargate Project, a massive $500 billion joint venture between OpenAI, Oracle, SoftBank, and MGX. This initiative aims to construct enormous AI data centers across the United States, underscoring the immense computational resources required for advanced AI systems [1].
The environmental impact of AI has become a contentious issue in tech policy. Contrasting viewpoints paint AI as either an impending ecological crisis or a negligible factor in global energy consumption. As AI rapidly advances into video generation, the urgency to clarify its true environmental footprint intensifies [1].
Researcher Alex de Vries-Gao estimates that AI already accounts for about 20% of global data-center power consumption, with projections suggesting this figure could double by the end of the year. The International Energy Agency reports that data centers consumed up to 1.5% of global electricity last year, with AI processing driving rapid growth [1][2].
Water usage is another significant concern. Data centers require vast amounts of ultra-pure water for cooling. Estimates suggest that training GPT-3 alone may have used 700,000 liters of freshwater at Microsoft's facilities. Predictions indicate that global AI demand could reach 4-6 billion cubic meters of water annually by 2027 [1].
The shift from text and image generation to high-definition video is expected to dramatically increase AI's environmental impact. Early tests indicate that energy use for text-to-video models quadruples when video length doubles. For context, generating a five-second video clip consumes as much energy as a microwave running for over an hour [1].
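As a rough illustration of what this scaling implies, the sketch below extrapolates clip energy from the two figures above. It assumes energy grows with the square of clip length (the pattern implied by quadrupling when length doubles) and treats the "microwave running for over an hour" comparison as roughly 1 kWh for a five-second clip; both are approximations for illustration, not measured values.

```python
# Rough back-of-envelope only: extrapolates text-to-video energy use from the
# article's two data points. Assumes energy scales with the square of clip
# length (doubling length quadruples energy) and that a 5-second clip costs
# roughly 1 kWh (the "microwave for over an hour" comparison). Both numbers
# are illustrative assumptions, not measurements.

BASE_SECONDS = 5     # reference clip length cited in the article
BASE_KWH = 1.0       # assumed energy for that reference clip

def clip_energy_kwh(seconds: float) -> float:
    """Estimated energy for one clip under the assumed quadratic scaling."""
    return BASE_KWH * (seconds / BASE_SECONDS) ** 2

for length in (5, 10, 30, 60):
    print(f"{length:>2}-second clip ~ {clip_energy_kwh(length):6.1f} kWh")
```

Under those assumptions, a one-minute clip would already use on the order of a hundred kilowatt-hours, which is why even modest increases in clip length matter far more than per-image or per-text costs.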
Some analysts, including those at the Center for Data Innovation, argue that concerns about AI's energy use may be overstated. They point to improving GPU efficiency and the increasing use of renewable energy in new data centers. Recent benchmarks provide context: a typical chatbot Q&A consumes about 2.9 watt-hours, while Google claims a Gemini prompt uses only 0.24 Wh and 0.25 mL of water [1][2].
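Even small per-query figures add up at the volumes described above. The sketch below multiplies the two per-query figures cited in this summary by an assumed one billion queries per day; that query volume is a placeholder for illustration (the sources say only "billions daily"), and the result ignores training, cooling overhead and hardware manufacturing.

```python
# Illustrative scale check, not a measurement: aggregate energy if the cited
# per-query figures are applied to an assumed 1 billion queries per day.
# The query volume is a placeholder; training, cooling overhead and hardware
# manufacturing are not included.

QUERIES_PER_DAY = 1_000_000_000        # assumption for illustration
WH_PER_QUERY = (0.24, 2.9)             # Gemini claim vs. typical chatbot figure

for wh in WH_PER_QUERY:
    daily_mwh = QUERIES_PER_DAY * wh / 1_000_000    # Wh -> MWh per day
    yearly_twh = daily_mwh * 365 / 1_000_000        # MWh/day -> TWh per year
    print(f"{wh:>4} Wh/query -> {daily_mwh:>7,.0f} MWh/day  (~{yearly_twh:.2f} TWh/yr)")
```

Under these assumptions, text queries alone come to roughly a terawatt-hour a year at the higher figure -- small next to the 1,400 terawatt-hours of data-center demand the IEA projects for 2030, which is why video generation and the surrounding infrastructure, not individual text prompts, dominate the totals.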
As AI models become more efficient, total energy consumption continues to rise due to increased usage – a phenomenon known as the Jevons paradox. The environmental cost of hardware production, including the fabrication of advanced chips and cooling systems, adds another layer of complexity to the issue [1].
As AI technology rapidly advances, particularly in the realm of video generation, the debate over its environmental impact remains crucial. The launch of Sora 2 and projects like Stargate highlight the need for ongoing research and transparent discussions about the ecological footprint of AI as it becomes an increasingly integral part of our digital landscape.
Summarized by Navi