Computers running AI applications could consume up to 134 TWh per year
OpenAI’s ChatGPT debuted nearly a year ago and within two months amassed 100 million users, sparking an AI “explosion.”
However, the technology relies on thousands of specialized computer chips and in the coming years could consume huge amounts of electricity. According to a study published on Tuesday, by 2027 computers running artificial intelligence applications could consume between 85 and 134 TWh per year. That is comparable to the annual electricity consumption of countries such as Argentina, the Netherlands or Sweden, and amounts to around 0.5% of today’s global consumption. In addition, the electricity required to run AI could increase carbon dioxide emissions unless it comes exclusively from renewable sources.
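A rough sanity check shows how those figures relate: here is a minimal sketch, assuming global electricity consumption of roughly 26,000 TWh per year (an approximate, commonly cited value, not a figure from the study).

```python
# Back-of-envelope check: projected AI electricity use as a share of global consumption.
# The 85-134 TWh range is the study's estimate; the ~26,000 TWh global figure is an
# assumed approximation for illustration only.
ai_low_twh, ai_high_twh = 85, 134
global_consumption_twh = 26_000  # approximate annual global electricity consumption

print(f"Low estimate:  {ai_low_twh / global_consumption_twh:.2%} of global consumption")
print(f"High estimate: {ai_high_twh / global_consumption_twh:.2%} of global consumption")
# The high estimate works out to roughly 0.5%, in line with the figure quoted above.
```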
In 2022 the data centers powering all the computing, including Amazon’s cloud and Google’s search engine, used about 1% to 1.3% of the world’s electricity. This does not include cryptocurrency mining, which used another 0.4%. Alex de Vries, a PhD student at Vrije Universiteit Amsterdam, who conducted the study, founded the company Digiconomist, which publishes the Bitcoin Energy Consumption Index.
It is impossible to quantify the energy use of artificial intelligence precisely, because companies like OpenAI disclose very little, including how many specialized chips are needed to run their software. So de Vries found a way to estimate electricity consumption indirectly, using projected sales of Nvidia’s AI servers, hardware estimated to serve about 95% of the AI market. “Every single one of these Nvidia servers, they’re power-hungry beasts,” he told the New York Times.
The researcher started with a recent prediction that Nvidia could ship 1.5 million of these servers by 2027 and multiplied that number by each model’s power draw: 6.5 kW for Nvidia’s DGX A100 servers, for example, and 10.2 kW for the DGX H100 servers.
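The back-of-envelope arithmetic behind the 85–134 TWh range can be reproduced in a few lines. This is a sketch, assuming the servers run continuously at their rated power for a full year, which is how such upper-bound estimates are typically framed.

```python
# Rough reproduction of the estimate: projected server shipments x rated power x hours per year.
HOURS_PER_YEAR = 24 * 365          # 8,760 hours, assuming continuous operation

projected_servers = 1_500_000      # Nvidia servers projected to ship by 2027
dgx_a100_kw = 6.5                  # rated power draw of a DGX A100 server
dgx_h100_kw = 10.2                 # rated power draw of a DGX H100 server

low_twh  = projected_servers * dgx_a100_kw * HOURS_PER_YEAR / 1e9   # kWh -> TWh
high_twh = projected_servers * dgx_h100_kw * HOURS_PER_YEAR / 1e9

print(f"Low estimate:  {low_twh:.0f} TWh/year")   # ~85 TWh
print(f"High estimate: {high_twh:.0f} TWh/year")  # ~134 TWh
```

The two endpoints correspond to a fleet made up entirely of the lower-power A100 machines versus entirely of the higher-power H100 machines.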
Nvidia’s dominance in artificial intelligence
As the NYT reports, Nvidia has built a commanding lead in artificial intelligence hardware that is likely to hold for several years, even as rivals try to make up lost ground. The limited supply of Nvidia’s chips is a drag on the development of artificial intelligence, prompting companies large and small to look for the chips elsewhere.
“There are a lot of dramatic statements about the rapid development of artificial intelligence and so on, but it’s really about how quickly you can get these chips out there,” explained Benjamin Lee, a professor of electrical engineering and computer science at the University of Pennsylvania.
Nvidia told the NYT that its specialized chips are better than other options, since it would take a lot more conventional chips to do the same tasks. “NVIDIA-powered accelerated compute is the most energy-efficient compute model for artificial intelligence and other data center workloads,” the company said.
Some experts are urging companies to consider electricity consumption as they design the next generation of artificial intelligence hardware and software. “Maybe we should ideally slow down a bit to start implementing the solutions we have,” said Roberto Verdecchia, an assistant professor at the University of Florence’s Software Technologies Laboratory.
“Let’s not make a new model just to improve its accuracy and speed. Let’s take a deep breath and see how much we’re burning in terms of environmental resources,” concluded the scientist.