![](https://assets.isu.pub/document-structure/230724220053-e85f34dc1282f1b8cd369a974e863d7a/v1/a0623693e88e5178b1de0e8a19fe74ba.jpeg?width=720&quality=85%2C50)
The lighter side of water
ChatGPT a water guzzler?
Popular large language models such as OpenAI’s ChatGPT and Google’s Bard are energy intensive. They require massive server farms to supply the computing power needed to train the powerful programs. Cooling those same data centres also makes the AI chatbots incredibly thirsty.
New research suggests that training GPT-3 alone consumed 185,000 gallons (700,000 litres) of water.
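As a quick sanity check on that headline figure, the gallons-to-litres conversion can be worked through directly. The snippet below is only an illustrative calculation using the number quoted above; it is not taken from the study itself.

```python
# Rough unit check on the reported GPT-3 training water figure.
US_GALLON_IN_LITRES = 3.785  # 1 US gallon is roughly 3.785 litres

training_gallons = 185_000
training_litres = training_gallons * US_GALLON_IN_LITRES

print(f"{training_litres:,.0f} litres")  # ~700,225 litres, i.e. about 700,000
```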
According to a recent study, an average user’s conversational exchange with ChatGPT amounts to dumping a large bottle of fresh water on the ground. Given the chatbot’s unprecedented popularity, researchers fear all those spilled bottles could take a troubling toll on water supplies, especially amid historic droughts and looming environmental uncertainty in the US.
Researchers from the University of California Riverside and the University of Texas Arlington published the AI water consumption estimates in a pre-print paper titled Making AI Less ‘Thirsty.’ The authors found that the amount of clean freshwater required to train GPT-3 is equivalent to that needed to fill a nuclear reactor’s cooling tower.
That volume of water could also produce battery cells for 320 Teslas. By a similar measure, ChatGPT would need to “drink” a 500-millilitre bottle of water to complete a basic exchange with a user consisting of roughly 25-50 questions.
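The per-question figure implied by that estimate is easy to work out. The sketch below is a back-of-the-envelope calculation based only on the 500 ml-per-exchange figure quoted above, not on numbers reported in the paper itself.

```python
# Back-of-the-envelope estimate of water per question, assuming a 500 ml
# bottle covers an exchange of roughly 25-50 questions (figures from the article).
bottle_ml = 500
questions_low, questions_high = 25, 50

per_question_max = bottle_ml / questions_low    # 20 ml per question
per_question_min = bottle_ml / questions_high   # 10 ml per question

print(f"Roughly {per_question_min:.0f}-{per_question_max:.0f} ml of water per question")
```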
The vast number of gallons needed to train the AI model also assumes the training happened in Microsoft’s state-of-the-art US data centre, built especially for OpenAI. The report notes that water consumption could be three times higher if the model were trained in the company’s less energy-efficient Asian data centres.
The researchers expect these water requirements to increase further with newer models, which rely on more data parameters than their predecessors.
“AI models’ water footprint can no longer stay under the radar,” the researchers said. “The water footprint must be addressed as a priority as part of the collective efforts to combat global water challenges.”
When calculating AI’s water consumption, the researchers distinguish between water “withdrawal” and “consumption.” Withdrawal physically removes water from a river, lake, or another source. Consumption refers to the loss of water by evaporation when it is used in data centres. The research focused primarily on the consumption part of that equation, where the water cannot be recycled.

Water consumption issues are not limited to OpenAI or AI models. In 2019, Google requested more than 2.3 billion gallons of water for data centres in just three states. The company currently has 14 data centres spread across North America. According to the paper, Google’s LaMDA could require millions of litres of water to train to a usable level, more than GPT-3, partly because several of Google’s thirstiest data centres are located in hot states such as Texas. However, the researchers issued a caveat with this estimate, calling it an “approximate reference point.”