
The New AI Frontier: GPT-5’s Power Needs Raise Sustainability Concerns

by admin477351

The release of OpenAI’s GPT-5 is a landmark event, but it is also sparking a heated debate about the environmental cost of artificial intelligence. While the company has remained quiet about the model’s resource usage, experts are voicing serious concerns. They argue that GPT-5’s enhanced capabilities, such as its ability to create websites and answer PhD-level questions, come with a steep and unprecedented environmental cost. The lack of transparency from a leading AI developer raises hard questions about the industry’s commitment to sustainability.
A key study from researchers at the University of Rhode Island’s AI lab provides a stark illustration of this issue. They found that producing a single medium-length response of around 1,000 tokens with GPT-5 consumes an average of 18 watt-hours, a dramatic increase over previous models. For context, 18 watt-hours is roughly what a 60-watt incandescent light bulb uses in 18 minutes. With a platform like ChatGPT handling billions of requests every day, the total energy consumed could be enormous, potentially matching the daily electricity demand of millions of homes.
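
The arithmetic behind that comparison can be sketched in a few lines. Only the 18 watt-hour figure comes from the study above; the daily request volume and the per-household consumption are illustrative assumptions, not reported values.

```python
# Back-of-the-envelope estimate of aggregate daily energy use.
# Only the 18 Wh per-response figure comes from the URI study; the request
# volume and household consumption are illustrative assumptions.

WH_PER_RESPONSE = 18           # average energy per ~1,000-token response (URI study)
REQUESTS_PER_DAY = 2.5e9       # assumed daily request volume ("billions of requests")
HOUSEHOLD_KWH_PER_DAY = 30     # rough daily electricity use of a typical home

total_kwh = WH_PER_RESPONSE * REQUESTS_PER_DAY / 1_000  # Wh -> kWh
equivalent_homes = total_kwh / HOUSEHOLD_KWH_PER_DAY

print(f"Total: {total_kwh / 1e6:.1f} GWh per day")
print(f"Comparable to the daily demand of {equivalent_homes / 1e6:.1f} million homes")
```

Under these assumptions the total works out to roughly 45 GWh per day, on the order of the daily demand of about 1.5 million homes, which is the scale the "millions of homes" comparison points to.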
The spike in energy usage is directly linked to the model’s increased size and complexity. Experts suggest GPT-5 is significantly larger than its predecessors, with a far greater number of parameters. That theory is supported by research from French AI company Mistral, which identified a strong correlation between a model’s size and its energy consumption: a model ten times bigger has an impact roughly an order of magnitude larger. GPT-5 appears to fit the pattern, with some experts theorizing that its resource use could be “orders of magnitude higher” than even GPT-3’s.
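
A minimal sketch of that scaling logic, assuming per-response energy grows roughly in proportion to parameter count as the Mistral finding suggests; the baseline energy figure and the larger model’s size below are placeholders, since OpenAI has not disclosed GPT-5’s parameters.

```python
# Illustrative first-order scaling: per-response energy assumed proportional
# to parameter count (a 10x larger model -> roughly 10x the footprint).
# The 2 Wh baseline and the 10x-larger size are placeholder assumptions.

def scaled_energy_wh(base_energy_wh: float, base_params: float, new_params: float) -> float:
    """Scale per-response energy linearly with model size."""
    return base_energy_wh * (new_params / base_params)

# GPT-3's published size is 175 billion parameters; everything else here
# is an assumption for illustration only.
print(scaled_energy_wh(2.0, 175e9, 1.75e12))  # -> 20.0 Wh, an order of magnitude more
```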
This problem is further exacerbated by the model’s new architecture. Although it uses a “mixture-of-experts” system to improve efficiency, its ability to handle video, images, and complex reasoning likely negates these gains. The “reasoning mode,” which requires the model to compute for a longer time before delivering an answer, could make its power needs several times greater than for simple text tasks. This combination of increased size, complexity, and advanced features paints a clear picture of an AI system with a massive appetite for power, leading to urgent calls for greater transparency from OpenAI and the wider AI community.
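
The same kind of rough arithmetic shows how a reasoning pass inflates per-request cost: if energy scales with the number of tokens the model generates, and reasoning mode produces several times more (mostly hidden) tokens before answering, the energy rises by about that factor. The multiplier below is an assumption for illustration, not a measured value.

```python
# Rough estimate of the energy overhead of a reasoning pass, assuming energy
# scales with tokens generated. The 4x multiplier is a placeholder, not a
# measured figure for GPT-5's reasoning mode.

BASE_WH_PER_1K_TOKENS = 18      # URI study's average for a ~1,000-token response
REASONING_TOKEN_MULTIPLIER = 4  # assumed ratio of reasoning tokens to a plain reply

reasoning_wh = BASE_WH_PER_1K_TOKENS * REASONING_TOKEN_MULTIPLIER
print(f"Plain response: ~{BASE_WH_PER_1K_TOKENS} Wh; reasoning mode: ~{reasoning_wh} Wh")
```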
