2 Sources
[1]
Researchers Just Found Something Extremely Alarming About AI's Power Usage
Researchers have found that the carbon footprint of generative AI tools that turn text prompts into images and videos is far worse than previously thought. As detailed in a new paper, researchers from the open-source AI platform Hugging Face found that the energy demands of text-to-video generators quadruple when the length of a generated video doubles -- indicating that the power required for increasingly sophisticated generations doesn't scale linearly. For instance, a six-second AI video clip consumes four times as much energy as a three-second clip. "These findings highlight both the structural inefficiency of current video diffusion pipelines and the urgent need for efficiency-oriented design," the researchers concluded in their paper.

Experts are warning that we're rolling out generative AI tools without a full grasp of their true environmental impacts. "Ultimately, we found that the common understanding of AI's energy consumption is full of holes," MIT Technology Review wrote in a recent analysis. While image generators used the equivalent of five seconds of microwave warming to generate a single 1,024 x 1,024 pixel image, video generators proved far more energy-intensive: producing a five-second clip, the researchers found, takes the equivalent of running a microwave for over an hour. And because those demands rise even faster as clips get longer, the math only gets worse, implying "rapidly increasing hardware and environmental costs," according to the Hugging Face researchers' paper.

Fortunately, there are ways to slim down those demands, including intelligent caching, reusing existing AI generations, and "pruning," meaning sifting inefficient examples out of training datasets. But whether those efforts will be enough to make a dent in the enormous electricity consumption of current AI tools remains to be seen.

The scale of AI's impact is already substantial, with AI-related energy usage representing 20 percent of global datacenter power demand, according to a recent study. Meanwhile, tech giants are investing tens of billions of dollars in infrastructure buildouts, sometimes abandoning climate goals in the process. In its 2024 environmental impact report, Google admitted that it was woefully behind its ambitious plan to reach net-zero carbon emissions by 2030, reporting a staggering 13 percent year-over-year increase in carbon emissions, in large part due to its embrace of generative AI. Earlier this year, the company released its Veo 3 AI video generator, later boasting that users had created over 40 million videos in just seven weeks. The tool's environmental impact remains unknown -- Google isn't exactly incentivized to investigate its sizable contribution to carbon emissions -- but chances are it's far worse than we think.
[2]
Hugging Face: AI video energy use scales non-linearly
Researchers with the open-source AI platform Hugging Face have discovered that the carbon footprint of generative AI tools is substantially worse than previously estimated, particularly for those converting text prompts into video, due to non-linear energy scaling. In a newly published paper, the researchers detailed how the energy demands of text-to-video generators increase non-linearly rather than in direct proportion to the content's length. The study established that when the duration of a generated video is doubled, its associated energy consumption quadruples. To illustrate this principle, the paper provides a specific example: producing a six-second video clip with AI requires four times as much energy as generating a three-second clip. "These findings highlight both the structural inefficiency of current video diffusion pipelines and the urgent need for efficiency-oriented design," the researchers concluded in their paper.

This research emerges amid warnings from experts that generative AI technologies are being deployed without a complete understanding of their environmental consequences. A recent analysis by MIT Technology Review supports this concern, stating that "the common understanding of AI's energy consumption is full of holes." The gap in understanding is significant when comparing different types of generative tools. While creating a single 1,024 by 1,024 pixel image with an AI generator consumes energy equivalent to warming something in a microwave for five seconds, the requirements for video are orders of magnitude greater. The Hugging Face study found that producing just a five-second video clip demands an amount of energy comparable to running a standard microwave for over an hour.

This disparity underscores the intensive nature of video generation. The non-linear scaling means that as video clips become longer, power consumption escalates at an even faster rate. According to the paper, this trajectory implies "rapidly increasing hardware and environmental costs" for users and developers of these technologies.

There are potential methods to mitigate these high energy demands. The researchers suggest several strategies, including the implementation of intelligent caching systems and the practice of reusing existing AI-generated content to avoid redundant processing. Another proposed technique is "pruning," which involves methodically identifying and removing inefficient examples from the large datasets used to train AI models. This process could help streamline the models and reduce their operational energy footprint during generation tasks. However, it remains uncertain whether these efficiency measures will be sufficient to make a meaningful impact on the overall electricity consumption of current AI systems.

The scale of the issue is already substantial. According to one recent study, AI-related activities now represent 20 percent of the total power demand from all global datacenters. In response to growing AI demand, major technology companies are investing tens of billions of dollars into new infrastructure buildouts, a process that has led some to abandon previously stated climate objectives. Google's 2024 environmental impact report revealed the company is significantly behind its plan to achieve net-zero carbon emissions by 2030. The report disclosed a 13 percent increase in carbon emissions year over year, which it attributed in large part to its expansion of generative AI services.
Earlier this year, Google released its Veo 3 AI video generator. The company later announced that users had created over 40 million videos with the tool within its first seven weeks of availability. The specific environmental toll of Veo 3 has not been disclosed.
Researchers from Hugging Face have discovered that AI-powered text-to-video generators consume energy at an alarming, non-linear rate. The study highlights urgent concerns about the environmental impact of generative AI technologies.
Recent research by the open-source AI platform Hugging Face has unveiled a concerning trend in the energy consumption of generative AI tools, particularly those converting text prompts into video. The study reveals that the carbon footprint of these technologies is substantially worse than previously estimated, with energy demands scaling non-linearly as video length increases [1][2].

The researchers found that doubling the duration of a generated video quadruples its associated energy consumption. For instance, producing a six-second AI video clip requires four times as much energy as generating a three-second clip [1]. This non-linear increase in energy demand highlights the structural inefficiency of current video diffusion pipelines and underscores the urgent need for efficiency-oriented design in AI systems [2].
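As a rough illustration of the relationship described above, the sketch below treats energy as growing with the square of clip duration, which is what a doubling-to-quadrupling pattern implies. The function and the 3-second baseline are illustrative assumptions, not figures from the paper.

# A minimal sketch, assuming quadratic scaling: doubling a clip's length
# quadruples its relative energy cost. Purely illustrative; not from the study.

def relative_energy(duration_s: float, baseline_s: float = 3.0) -> float:
    """Energy relative to a baseline-length clip, assuming quadratic scaling."""
    return (duration_s / baseline_s) ** 2

for seconds in (3, 6, 12, 24):
    print(f"{seconds:>2}s clip -> {relative_energy(seconds):.0f}x the energy of a 3s clip")
# Prints 1x, 4x, 16x, 64x: each doubling multiplies the estimated energy by four.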
To put this energy consumption into perspective, the study revealed that while generating a single 1,024 x 1,024 pixel image consumes energy equivalent to five seconds of microwave use, producing a five-second video clip demands energy comparable to running a microwave for over an hour [1]. This stark contrast emphasizes the intensive nature of video generation and its potential environmental impact.
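To make the microwave comparison concrete, the back-of-envelope arithmetic below converts the reported microwave-time equivalents into watt-hours. The 1,100-watt microwave rating is an assumed typical value, and "over an hour" is treated as exactly one hour, so the video figure is a lower bound under these assumptions.

# Back-of-envelope conversion of the microwave-time comparisons, assuming a
# typical ~1,100 W microwave and treating "over an hour" as one hour (a lower bound).

MICROWAVE_WATTS = 1_100

image_wh = MICROWAVE_WATTS * (5 / 3600)      # 5 seconds of microwave use
video_wh = MICROWAVE_WATTS * (3600 / 3600)   # at least one hour of microwave use

print(f"1,024 x 1,024 image: ~{image_wh:.1f} Wh")
print(f"5-second video clip: ~{video_wh:,.0f} Wh or more")
print(f"Ratio: the video uses roughly {video_wh / image_wh:,.0f}x the image's energy")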
The findings come amid growing concerns about the deployment of generative AI technologies without a full understanding of their environmental consequences. AI-related activities now represent 20 percent of the total power demand from global datacenters [2]. Tech giants are investing heavily in infrastructure buildouts to meet growing AI demand, sometimes at the expense of climate goals. For example, Google's 2024 environmental impact report revealed a 13 percent increase in carbon emissions year over year, largely attributed to its embrace of generative AI [1].
Researchers suggest several strategies to mitigate the high energy demands of AI video generation [1][2]:

- Intelligent caching, so that identical requests reuse earlier results rather than triggering redundant generation (a minimal sketch of this idea follows the list)
- Reusing existing AI-generated content instead of producing new clips from scratch
- "Pruning," the removal of inefficient examples from the datasets used to train AI models
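As one hedged illustration of the caching idea above, the sketch below memoizes generation requests by prompt and duration so that repeated requests reuse a stored clip instead of re-running the model. The generate_fn callable, cache layout, and file naming are hypothetical; they do not correspond to any Hugging Face or Google API.

# Hypothetical sketch of intelligent caching for a text-to-video service:
# identical (prompt, duration) requests return a stored clip instead of
# re-running the costly generation step. All names here are illustrative.

import hashlib
from pathlib import Path

CACHE_DIR = Path("video_cache")
CACHE_DIR.mkdir(exist_ok=True)

def _cache_key(prompt: str, duration_s: int) -> str:
    return hashlib.sha256(f"{prompt}|{duration_s}".encode()).hexdigest()

def get_or_generate(prompt: str, duration_s: int, generate_fn) -> bytes:
    """Return a cached clip if one exists; otherwise generate and store it."""
    path = CACHE_DIR / f"{_cache_key(prompt, duration_s)}.mp4"
    if path.exists():                                # cache hit: no model run
        return path.read_bytes()
    video_bytes = generate_fn(prompt, duration_s)    # expensive generation step
    path.write_bytes(video_bytes)
    return video_bytes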
However, it remains uncertain whether these efficiency measures will be sufficient to significantly reduce the overall electricity consumption of current AI systems [2]. As the AI industry continues to expand and evolve, addressing these energy consumption concerns will be crucial for ensuring the sustainable development and deployment of generative AI technologies.
Summarized by Navi