DCD Ebook: Cooling


> eBook

Cooling fit for the future

Examining the challenges that come with future-proofing our facilities


It affects your bottom line—and the planet we call home. It’s why we’re passionate about crafting cooling systems that are both reliable and incredibly energy efficient.

Our expertise can help you spend more wisely, meet your sustainability goals and preserve a better future for all. Visit our Data Center Resource Center to hear from Trane subject matter experts on in-demand topics for maximizing your performance, cost-efficiency and sustainability. Check it out at trane.com/datacenters

With big data comes big energy demands.

With the rise of technologies such as AI and machine learning, rack densities are – unsurprisingly - continuing to soar. Not only must operators ensure they can keep up with the demands of these new workloads, but keeping the facilities that house them cool is becoming ever more challenging.

A renewed focus on sustainability only serves to compound this complexity, taking cooling considerations to a whole new level. With a typical large data center drawing between 20 and 50MW, and a whole campus drawing enough power to supply a mid-sized city, operators must be particularly mindful that they are implementing sustainable solutions that don’t impact performance, or the environment.

In this eBook we explore current cooling trends and the challenges posed by our increasing demand for data – not to mention rising global temperatures – before rounding off with some speculation as to how owners and operators can keep their servers cool without warming the planet.

Contents

Introduction
Chapter one: Current cooling
Liquid cooling: A new phase
Exploring current immersion cooling deployments
Which mechanical cooling design is ideal for your data center?
Chapter two: Cooling challenges
Overcooling is past its sell-by date
Designing data centers for heatwave resiliency
Q&A with Trane
Chapter three: Going greener
Decarbonization: The secret to success and sustainability?
Curbing thirsty data centers
Six ways to make your HVAC system greener

Chapter one: Current cooling

From Edge computing to hyperscale giants, it’s safe to say that when the cooling needs of today’s applications differ as much as they do, one size no longer fits all.

Although air cooling still very much has a place in the data center, as compute capacities continue to rise, new cooling methods are coming to the fore.

In this chapter we examine some of the latest cooling trends, to help you decide what ‘new’ is right for you.


Liquid cooling: A new phase

Two-phase liquid cooling has finally arrived. Vendors are making purpose-built liquid cooled servers

Peter Judge, DCD

Data center operators have long avoided liquid cooling, keeping it as a potential option for the future but never a mainstream operational approach.

Liquid cooling proponents warned that rack power densities could only get so high before liquid cooling was necessary, but their forecasts were always forestalled. The Green Grid suggested that air cooling can only work up to around 25kW per rack, but AI applications threaten to go above that.

In the past, when rack power densities approached levels where air cooling could not perform, silicon makers would improve their chips’ efficiency, or cooling systems would get better.

Liquid cooling: An exotic option?

Liquid cooling was considered a last resort, an exotic option for systems with very high energy use. It required tweaks to the hardware, and mainstream vendors did not make servers designed to be cooled by liquid.

But all the world’s fastest supercomputers are cooled by liquid to support their high power density, and a lot of Bitcoin mining rigs have direct-to-chip or immersion cooling, so their chips can be run at high clock rates.

Most data center operators are too conservative for that sort of thing, so they’ve backed off.

Now, that could be changing. Major announcements at the OCP Summit - the get-together for standardized data center equipment - centered on liquid cooling. And within those announcements, it’s now clear that hardware makers are making servers that are specifically designed for liquid cooling.

The reason is clear: hardware power densities are now reaching the tipping point: “Higher power chipsets are very commonplace now, we’re seeing 500W or 600W GPUs, and CPUs reaching 800W to 1,000W,” says Joe Capes, CEO of immersion cooling specialist LiquidStack. “Fundamentally, it just becomes extremely challenging to air cool above 270W per chip.”

Purpose-built servers are a big step because, until now, all motherboards and servers have been designed to be cooled by air, with wide open spaces and fans. Liquid cooling these servers is a process of “just removing the fans and heat sinks, and tricking the BIOS - saying ‘You're not air cooled anymore.’”

That brings benefits, but the servers are bigger than they need to be, says Capes: “You have a 4U air cooled server, which should really be a 1U or a half-U size.”

LiquidStack is showing a 4U DataTank, which holds four rack units of equipment and absorbs 3kW of heat per rack unit - equivalent to a density of 126kW per rack. The company also makes a 48U DataTank, holding the equivalent of a full rack.

Standards

The servers in the tank are made by Wiwynn, to the OCP’s open accelerator interface (OAI) specification, using standardized definitions for liquid cooling. This has several benefits across all types of liquid cooling.

Alongside Wiwynn’s servers, for the third leg, LiquidStack has a partnership with 3M to use Novec 7000, a non-fluorocarbon dielectric fluid, which boils at 34°C (93°F) and recondenses in the company’s DataTank system, removing heat efficiently in the process.

For one thing, the standard means that other vendors can get on board, and know their servers will fit into tanks from LiquidStack or other vendors, and users should be able to mix and match equipment in the long term.

“The power delivery scheme is another important area of standardization,” says Capes, “whether it be through AC bus bar, or DC bus bar, at 48V, 24V or 12V.”

For another thing, the simple existence of a standard should help convince conservative data center operators it’s safe to adopt - if only because the systems are checked with all the possible components that might be used, so customers know they should be able to get replacements and refills for a long time.

Take coolants: “Right now the marketplace is adopting 3M Novec 649, a dielectric with a low GWP (global warming potential),” says Capes. “This is replacing refrigerants like R410A and R407C that have very high global warming potential and are also hazardous.

“It's very important when you start looking at standards, particularly in the design of hardware, that you're not using materials that could be incompatible with these various dielectric fluids, whether they be Novec or fluorocarbon, or a mineral oil or synthetic oil. That's where OCP is really contributing a lot right now.”

An organization like OCP will kick all the tires, including things like the safety and compatibility of connectors, and the overall physical specifications.

“I've been talking recently with some colocation providers around floor load weighting,” says Capes. “It's a different design approach to deploy data tanks instead of conventional racks, you know, 600mm by 1200mm racks.” A specification tells those colo providers where it’s safe to put tanks, he says: “By standardizing and disseminating this information, it helps more rapidly enable the market to use different liquid cooling approaches.”

In the specific case of LiquidStack, the OCP standard did away with a lot of excess material, cutting the embodied footprint of the servers, says Capes: “There's no metal chassis around the kit. It's essentially just a motherboard. The sheer reduction in space and carbon footprint by eliminating all of this steel and aluminum and whatnot is a major benefit.”

Pushing the technology

Single-phase liquid cooling vendors emphasize the simplicity of their solutions. The immersion tanks may need some propellers to move the fluid around, but largely use convection. There’s no vibration caused by bubbling, so vendors like GRC and Asperitas say equipment will last longer.

“People talk about immersion with a single stroke, and don’t differentiate between single-phase and two-phase," GRC CEO Peter Poulin said in a DCD interview, arguing that single-phase is the immersion cooling technique that’s ready now.

But two-phase allows for higher density, and the technology can potentially go further than the existing units.

Although hardware makers are starting to tailor their servers to use liquid cooling, they’ve only taken the first steps of removing excess baggage and putting things slightly closer together. Beyond this, equipment could be made which simply would not work outside of a liquid environment.

“The hardware design has not caught up to two-phase immersion cooling,” says Capes. “This OAI server is very exciting, at 3kW per RU. But we’ve already demonstrated the ability to cool up to 5.25 kilowatts in this tank.”

Beyond measurement

The industry’s efficiency measurements are not well-prepared for the arrival of liquid cooling in quantity, according to Uptime research analyst Jacqueline Davis.

Data center efficiency has been measured by power usage effectiveness (PUE), the ratio of total facility power to IT power. But liquid cooling undermines how that measurement is made, because of the way it simplifies the hardware.

“Direct liquid cooling implementations achieve a partial PUE of 1.02 to 1.03, outperforming the most efficient air-cooling systems by low single-digit percentages,” says Davis. “But PUE does not capture most of DLC’s energy gains.”

Conventional servers include fans, which are powered from the rack, and therefore their power is included in the “IT power” part of PUE. They are considered part of the payload the data center is supporting.

When liquid cooling does away with those fans, this reduces energy, and increases efficiency - but harms PUE.

“Because server fans are powered by the server power supply, their consumption counts as IT power,” points out Davis. “Suppliers have modeled fan power consumption extensively, and it is a non-trivial amount. Estimates typically range between five percent and 10 percent of total IT power.”
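To make the distortion concrete, here is a minimal back-of-the-envelope sketch, using illustrative figures consistent with the five to 10 percent fan-power range Davis cites (the specific kilowatt values are assumptions, not measurements from any real facility):

```python
# Illustrative only: why removing server fans can cut total energy
# while leaving the reported PUE slightly worse. All figures are
# assumed for the sake of the arithmetic.

def pue(it_kw: float, overhead_kw: float) -> float:
    """PUE = total facility power / IT power."""
    return (it_kw + overhead_kw) / it_kw

# A very efficient hall: 1,000kW of "IT" load (of which 80kW, or 8%,
# is server fans) plus 30kW of cooling and other facility overhead.
before_it, before_overhead = 1000.0, 30.0

# Direct liquid cooling removes the fans from the IT load; assume the
# facility overhead stays the same, to isolate the fan effect.
after_it, after_overhead = 1000.0 - 80.0, 30.0

print(f"Before: total {before_it + before_overhead:.0f}kW, "
      f"PUE {pue(before_it, before_overhead):.3f}")   # 1030kW, 1.030
print(f"After : total {after_it + after_overhead:.0f}kW, "
      f"PUE {pue(after_it, after_overhead):.3f}")      # 950kW,  1.033

# Total energy falls by 80kW, yet PUE ticks up slightly - the saving
# sits in the denominator, so the metric cannot see it.
```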

There’s another factor, though. Silicon chips heat up and waste energy due to leakage currents, even when they are idling. This is one reason why data center servers use almost as much power when they are doing nothing as when they are busy - a shocking level of waste, which is not being addressed because the PUE calculation ignores it.

Liquid cooling can provide a more controlled environment, where leakage currents are lower, which is good. Potentially, with really reliable cooling tanks, the electronics could be designed differently to take advantage of this, allowing chips to resume their increases in power efficiency.

That’s a good thing - but it raises the question of how these improvements will be measured, says Davis: “If the promise of widespread adoption of DLC materializes, PUE, in its current form, may be heading toward the end of its usefulness.”

Reducing water

“The big reason why people are going with two-phase immersion cooling is because of the low PUE. It has roughly double the amount of heat rejection capacity of cold plates or single-phase,” says Capes. But a stronger draw may turn out to be the fact that liquid cooling does not use water.

Data centers with conventional cooling systems often turn on some evaporative cooling when conditions require it, for instance if the outside air temperature is too high. This means running the data center chilled water through a wet heat exchanger, which is cooled by evaporation.

“Two-phase cooling can reject heat without using water,” says Capes. And this may be a factor for LiquidStack’s most high-profile customer: Microsoft.

There’s a LiquidStack cooling system installed at Microsoft’s Quincy data center, alongside an earlier one made by its partner Wiwynn. “We are the first cloud provider that is running two-phase immersion cooling in a production environment,” Husam Alissa, a principal hardware engineer on Microsoft’s team for data center advanced development, said of the installation.

Microsoft has taken a broader approach to its environmental footprint than some, with a promise to reduce its water use by 95 percent before 2024, and to become “water positive” by 2030, producing more clean water than it consumes.

One way to do this is to run data centers hotter and use less water for evaporative cooling, but switching workloads to cooling by liquids with no water involved could also help. “The only way to get there is by using technologies that have high working fluid temperatures,” says Capes.

Industry interest

The first sign of the need for high-performance liquid cooling has been the boom in hot chips: “The semiconductor activity really began about eight to nine months ago. And that's been quickly followed by a very dynamic level of interest and engagement with the primary hardware OEMs as well.”

Bitcoin mining continues to soak up a lot of that demand, and recent moves to damp down the Bitcoin frenzy in China have pushed some crypto facilities to places like Texas, which are simply too hot to allow air cooling of mining rigs.

But there are definite signs that customers beyond the expected markets of HPC and crypto-mining are taking this seriously.

“One thing that's surprising is the pickup in colocation,” says Capes. “We thought colocation was going to be a laggard market for immersion cooling, as traditional colos are not really driving the hardware specifications. But we've actually now seen a number of projects where colos are aiming to use immersion cooling technology for HPC applications.”

He adds: “We've been surprised to learn that some are deploying two-phase immersion cooling in self-built data centers and colocation sites - which tells me that hyperscalers are looking to move into the market, maybe even faster than we anticipated.”

Edge cases

Another big potential boom is at the Edge, where micro-facilities are expected to serve data close to applications.

Liquid cooling scores here, because it allows compact systems which don’t need an air-conditioned space.

“By 2025, a lot of the data will be created at the Edge. And with a proliferation of micro data centers and Edge data centers, compaction becomes important,” says Capes. Single-phase cooling should play well here, but he obviously prefers two-phase.

“With single phase, you need to have a relatively bulky tank, because you're pumping the dielectric fluid around, whereas in a two-phase immersion system you can actually place the server boards to within two and a half millimeters of one another," he said.

How far will this go?

It’s clear that we’ll see more liquid cooling, but how far will it take over the world? “The short answer is the technology and the chipsets will determine how fast the market moves away from air cooling to liquid cooling,” says Capes.

Another factor is whether the technology is going into new buildings or being retrofitted to existing data centers - because, whether it’s single-phase or two-phase, a liquid cooled system will be heavier than its air cooled brethren.

Older data centers simply may not be designed to support large numbers of immersion tanks.

“If you have a three-floor data center, and you designed your second and third floors for 250 pounds per square foot of floor loading, it might be a challenge to deploy immersion cooling on all those floors,” says Capes.

“But the interesting dynamic is that because you can radically ramp up the amount of power per tank, you may not need those second and third floors. You may be able to accomplish on your ground floor slab, what you would have been doing on three or four floors with air cooling.”

Some data centers may evolve to have liquid cooling on the ground floor’s concrete slab base, and any continuing air-cooled systems will be in the upper floors.

But new buildings may be constructed with liquid cooling in mind, says Capes: “I was talking to one prominent colocation company this week, and they said that they're going to design all of their buildings to 500 pounds per square foot to accommodate immersion cooling.”

Increased awareness of the water consumption of data centers may push the adoption faster: “If other hyperscalers come out with aggressive targets for water reduction like Microsoft has, then that will accelerate the adoption of liquid cooling even faster.”

If water cooling hits a significant proportion of the market, say 20 percent, that will kick off “a transition, the likes of which we’ve never seen,” says Capes. “It's hard to say whether that horizon is on us in five years or 10 years, but certainly if water scarcity and higher chip power continue to evolve as trends, I think we'll see more than half of the data centers liquid cooled.”


Exploring current immersion cooling deployments

Looking at the cutting-edge of cooling solutions with Microsoft and DUG McCloud

Vlad-Gabriel Anghel, DCD

As rack densities rise and chips get hotter, some are turning to immersion-based, open-tub liquid cooling to beat the heat.

In an open-tub scenario, there are two distinct types of coolant - single phase and two-phase - the phase meaning the state the coolant is in at any given moment during the cooling loop.

Single-phase coolants will remain in a liquid state while two-phase coolants will change from a liquid state to a gaseous one as the heat transfer occurs. We will explore both examples through two real-world deployments.

Single-phase - DUG McCloud data center, Houston, TX

Oil and gas computing specialist DownUnder GeoSolutions (DUG) opened its 15MW 'Bubba' supercomputer in a 22,000 square foot (2,044 sq m) data hall built in partnership with Skybox Data Centers in Houston, Texas. It was deployed in 2019.

At 250 petaflops (single precision) once fully deployed, DUG's 15MW high-performance computing requires unique power and heat rejection systems to operate.

Designed in-house, the DUG HPC’s compute elements are entirely cooled by complete immersion in a dielectric fluid, specifically selected to operate at raised temperature conditions. Slotted vertically in an open tub – essentially a rack on its back – the servers have their heatsinks removed and the chips are in direct contact with the fluid.

The fluid is non-toxic, non-flammable, biodegradable, non-polar, has low viscosity and, crucially, will not conduct electricity.

The heat exchangers are submerged in the tank with the computer equipment, meaning that no dielectric fluid ever leaves the tank, and it has a centralized power supply.

The deployment comes with a swathe of benefits, from considerably reducing total power consumption of the facility, to massively reducing the cooling system’s complexity. Mark Lommers, chief engineer at DUG and the designer of this solution, told DCD that “for every 1MW of real-time compute you want to use, you end up using 1.55MW of power or thereabouts for traditional chilled water-cooling systems.”

In an immersion cooling system, lots of power-hungry equipment is removed. A prime example of this is the server fans. Lommers added that “there are no chilled water pumps and there are no chillers that get involved because there's no below room temperature water involved,” concluding that “the actual total power that we get from that is only 1.014MW, which is a big change over the 1.55MW that we had before.”
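As a rough sketch of what those figures mean, using only the numbers Lommers quotes (the annualized saving assumes the load runs flat-out all year, which is an assumption on our part):

```python
# Rough comparison based on the figures quoted above: 1.55MW of total
# draw per 1MW of compute for traditional chilled-water cooling, versus
# 1.014MW for DUG's immersion system.

compute_mw = 1.0
totals = {"Chilled water": 1.55, "Immersion": 1.014}

for name, total_mw in totals.items():
    overhead_kw = (total_mw - compute_mw) * 1000
    print(f"{name:13s}: {overhead_kw:4.0f}kW of overhead per MW of compute "
          f"(ratio {total_mw / compute_mw:.3f})")

# Energy saved per MW of compute over a year of continuous operation.
hours_per_year = 8760
saving_mwh = (totals["Chilled water"] - totals["Immersion"]) * hours_per_year
print(f"Saving: roughly {saving_mwh:,.0f}MWh per MW of compute per year")
```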

The cooling loop is massively simplified and thus more reliable. Because it has to deal with fewer changes in temperature, the overall system is also more robust, as fewer controllers need to work in tandem for efficient operation.

As the components sit below the fluid level, the company claims that there is no component oxidation or fouling. “We see a very, very high benefit in reduced maintenance costs and reduced equipment failure rate as well,” Lommers said.

Two-phase immersion cooling – Microsoft public cloud

In 2021 Microsoft deployed a two-phase immersion cooling solution for their public cloud workloads, developed in partnership with Taiwanese server manufacturer Wiwynn.

At the time, the company said that “emails and other communications sent between Microsoft employees are literally making liquid boil inside a steel holding tank packed with computer servers at this data center on the eastern bank of the Columbia River.”

Inside Microsoft’s steel holding tank the heat generated by the bare chips makes the fluid boil. As vapors rise, they meet a condenser coil found in the lid of the tank. Vapors hit the coil and condense, turning back into a liquid state and falling back into the tub, effectively creating a closed-loop cooling system.

As with the previous example, the cooling infrastructure is greatly reduced, as no air handlers or chillers are needed - a dry cooler (basically a large radiator) circulates warm coolant into an open-bath immersion tank and provides waterless cooling, with no need to lose or evaporate water.

Special server board designs are used that are smaller in size and blind-mate to their power connectors in the bottom of the tank; as in the other example, they are effectively slotted in and stacked horizontally.

Another less-known aspect of this cooling approach is its ability to concentrate heat, due to its use of radiators. This in turn enables real heat reuse scenarios such as district heating, where air-cooled waste heat is often too low-grade to be of any real use.

Furthermore, Microsoft is also recognizing the overclocking opportunity such a solution brings. Husam Alissa, director of advanced cooling & performance at Microsoft, explained: “We could go as high and as low with densities as we want to, we'd be able to support hundreds of kilowatts in a single tank as we densify hardware.”

Microsoft has shared the designs and the learning behind this project with the wider industry through the Open Compute Project, as it looks to grow the ecosystem of this technology.

But there’s a reason why Microsoft hasn’t deployed this system in all of its data centers - the ecosystem is not quite there yet, staff aren’t trained for the new approaches, and critical questions around cooling solution supplies, security, and safety have yet to be answered.

More crucially, it is not yet clear how large the market for the ultra-dense systems will be, with many racks still happily humming away at below 10kW.

Should that density rise, operators currently have a number of different approaches to choose from, and within that a variety of form factors and pathways they could take. There are no agreed standards, and no settled consensus, as the technology and its implementation remain in the early stages.

It will take projects like these, and their long-term success, for others to feel comfortable enough to take the plunge.

Report: Which mechanical cooling engineering design is ideal for your data center?

Chapter two: Cooling challenges

With rising global temperatures and a renewed focus on ESG credentials threatening to compromise both data center operations and sustainability metrics such as PUE (power usage effectiveness), it’s understandable operators might be unsure what action to take or which way to turn.

In this chapter we look at some of the biggest challenges – and pitfalls – when it comes to data center cooling. We also speak with Danielle Rossi, global director of Mission Critical Cooling at Trane, to gain a manufacturer’s perspective on the new challenges posed by our ever-changing digital – and physical – landscape.


Over-cooling is past its sell-by date

Fourteen years after apparent proof that warmer is better, colocation companies are still struggling to turn their cooling systems down

Peter Judge, DCD

There is definitive guidance that stipulates it is perfectly safe to run data centers at temperatures up to 27°C (80°F). But large parts of the industry persist in over-cooling their servers, wasting vast amounts of energy and causing unnecessary emissions. There are signs that this may be changing, but progress has been incredibly slow - and future developments don’t look likely to speed things up very much.

Don’t be so cool

When data centers first emerged, operators kept them cool to avoid any chance of overheating. Temperatures were pegged at 22°C (71.6°F), which meant that chillers were working overtime to maintain an unnecessarily cool atmosphere in the server rooms.

In the early 2000s, more energy was spent in the cooling systems than in the IT rack itself, a trend which seemed obviously wrong. The industry began an effort to reduce that imbalance, and created a metric, PUE (Power Usage Effectiveness) to measure progress.

PUE is the total power used in the data center, divided by the power used in the racks - so an “ideal” PUE of 1.0 would mean all power is going to the racks. Finding ways to switch off the air conditioning, and letting temperatures rise, was a major strategy in approaching this goal.

In 2004, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) recommended an operating temperature range from 20°C to 25°C. In 2008, the society went further, suggesting that temperatures could be raised to 27°C.

Following that, the society issued Revision A1, which raised the limit to 32°C (89.6°F) depending on conditions.

This was not an idle whim.

ASHRAE engineers said that higher temperatures would have little effect on the lifetime of components, but would offer significant energy savings.

Figures from the US General Services Administration suggested that data centers could save four percent of their total energy, for every degree they allowed the temperature to climb.
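A quick sketch of what that rule of thumb implies for a setpoint raised from the traditional 22°C to ASHRAE’s 27°C. Whether the four percent figure should be applied additively or compounded is a modelling choice, so both are shown; either way this is an estimate, not a guarantee for any particular facility:

```python
# Back-of-the-envelope use of the GSA rule of thumb quoted above:
# roughly four percent of total energy saved per degree of allowed
# temperature rise. Raising a hall from 22C to 27C is five degrees.

saving_per_degree = 0.04
degrees_raised = 27 - 22

additive = saving_per_degree * degrees_raised
compounded = 1 - (1 - saving_per_degree) ** degrees_raised

print(f"Additive estimate  : {additive:.1%} of total energy")    # 20.0%
print(f"Compounded estimate: {compounded:.1%} of total energy")  # ~18.5%
```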

Hyperscale companies are often best placed to pick up advanced technology ideas. They own the building, the cooling systems, and the IT. So if they allow temperatures to climb, then it’s their own equipment that feels the heat.

So it’s no surprise that cloud giants were the first to get on board with raising data center temperatures. Facebook quickly found it could go beyond the ASHRAE guidelines. At its Prineville and Forest City data centers, it raised the server temperatures to 29.4°C, and found no ill effects.

“This will further reduce our environmental impact and allow us to have 45 percent less air-handling hardware than we have in Prineville,” Yael Maguire, then Facebook’s director of engineering, said.

Google went up to 26.6°C, and Joe Kava, then vice president of data centers, said the move was working: “Google runs data centers warmer than most because it helps efficiency.”

Intel went furthest. For ten months in 2008, the chip giant took 900 servers, and ran half of them in a traditionally cooled data center, while the other 450 were given no external cooling. The server temperatures went up to 33.3°C (92°F) at times.

At the end of the ten months, the chip giant compared the uncooled servers with the 450 that had been run in the traditional air-conditioned environment. The 450 hot servers had saved some 67 percent of the power budget.

In this higher-temperature test, Intel actually found a measurable increase in failure. Amongst the hot servers, two percent more failed. But that failure rate may have had nothing to do with the temperature - the 450 servers under test also had no air filtration or humidity control, so the small increase in failure rate may have been due to dust and condensation.

Some like it hot

Academics backed up the idea, with support coming from a 2012 paper from the University of Toronto titled Temperature Management in Data Centers: Why Some (Might) Like It Hot.

“Our results indicate that, all things considered, the effect of temperature on hardware reliability is weaker than commonly thought,” the Canadian academics conclude. “Increasing data center temperatures creates the potential for large energy savings and reductions in carbon emissions.”

At the same time, server makers responded to ASHRAE’s guidelines, and confirmed that these new higher temperatures were acceptable without breaking equipment warranties.

Given that weight of support, you might have expected data center temperatures to rise dramatically across the industry - and you can still find commentary from 2011, which predicts a rapid increase in cold aisle temperatures.

However, look around for recommended data center temperatures today, and figures of 22°C and 25°C are still widely quoted.


This reluctance to change is widely put down to the industry’s reputation for conservatism, although there are some influential voices raised against the consensus that higher temperatures are automatically better (see Box).

Equinix makes a cautious move

All of which makes a recent announcement from Equinix very interesting. On some measures, Equinix is the world’s largest colocation player, housing a huge chunk of the servers that are not either in on-premises data centers or in the cloud.

In December, Equinix announced that it would “adjust the thermostat of its colocation data centers, letting them run warmer, to reduce the amount of energy spent cooling them down unnecessarily.”

“With this new initiative, we can intelligently adjust the thermostat in our data centers in the same way that consumers do in their homes,” said Raouf Abdel, EVP of global operations for Equinix.

Equinix’s announcement features congratulatory quotes from analysts and vendors.

Rob Brothers, program vice president, data center services, at analyst firm IDC explains that “most data centers … are unnecessarily cooler than required."

Brothers goes on to say that the announcement will see Equinix “play a key role in driving change in the industry and help shape the overall sustainability story we all need to participate in."

The announcement will "change the way we think about operating temperatures within data center environments,” he says.

Which really does oversell the announcement somewhat. All Equinix has promised to do is to make an attempt to push temperatures up towards 27°C - the target which ASHRAE set 14 years ago, and which it already recommends can be exceeded.

No Equinix data centers will get warmer straight away, either. The announcement will have no immediate impact on any existing customers in any Equinix data centers. Instead, customers will be notified at some unspecified time in the future, when Equinix is planning to adjust the thermostat at the site where their equipment is hosted.

"Starting immediately, Equinix will begin to define a multi-year global roadmap for thermal operations within its data centers aimed at achieving significantly more efficient cooling and decreased carbon impacts," says the press release.

And in response to a question from DCD, Equinix supplied the following statement: "There is no immediate impact on our general client base, as we expect this change to take place over several years. Equinix will work to ensure all clients receive ample notification of the planned change to their specific deployment site."

Customers like it cool

Reading between the lines, it is obvious that Equinix is facing pushback from its customers, who are ignoring the vast weight of evidence that higher temperatures are safe, and are unwilling to budge from the traditional 22°C temperature which has been the norm.

Equinix pushes the idea of increased temperatures as a way for its customers to meet the goal of reducing Scope 3 emissions, the CO2 equivalent emitted from activity in their supply chain.

For colocation customers, the energy used in their colo provider’s facility is part of their Scope 3 emissions, and there are moves to encourage all companies to cut their Scope 3 emissions to reach net-zero goals.

Revealingly, Equinix does not provide any supporting quotes at all from customers eager to have their servers hosted at a higher temperature.

For Equinix, the emissions for electricity used in its cooling systems are part of its Scope 2 emissions, which it has promised to reduce. Increasing the temperature will be a major step towards achieving that goal.

"Our cooling systems account for approximately 25 percent of our total energy usage globally," said Abdel. "Once rolled out across our current global data center footprint, we anticipate energy efficiency improvements of as much as 10 percent in various locations."

Equinix is in a difficult position. It can’t increase the temperature without risking the displeasure of its customers, who might refuse to allow the increase or go elsewhere.

It’s a move that needs to be made, and Equinix deserves support for setting the goal. But the cautious nature of the announcement makes it clear that this could be an uphill battle.

However, Equinix clearly believes that future net-zero regulations will push customers in the direction it wants to be allowed to go.

"Equinix is committed to understanding how these changes will affect our customers and we will work together to find a mutually beneficial path toward a more sustainable future,” says the statement from the company.

“As global sustainability requirements for data center operations become more stringent, our customers and partners will depend on Equinix to continue leading efforts that help them achieve their sustainability goals." 


Designing data centers for heat wave resiliency

Why data centers need to consider climate as part of their design

Danielle Rossi, Trane

Protecting our important data center infrastructure and making it resilient through extreme weather events is critical to our modern operating world. We must approach designing new buildings and retrofitting existing data center infrastructure to be able to withstand extreme temperatures with a sense of urgency.

Data centers have become one of the most important aspects of our infrastructure. Nearly all businesses rely on data centers to keep operations running. And while the importance of a well-designed data center can be easy to forget when it works seamlessly in the background, if a data center fails, it can be a major incident.

Building resiliency into data centers to ensure constant uptime is among the highest priorities for data center operations. But this resiliency is increasingly challenged as our climate changes and temperatures rise. In the summer of 2022, London learned first-hand how detrimental overheating can be when critical data centers failed during an unprecedented heatwave.

Considering data centers are being built in places where extreme heat and lack of water are the norm, these types of unprecedented events are likely to become increasingly common.

The good news is that consulting with trusted thermal management experts during the design phase can help ensure your data center will withstand drastic temperature shifts while using minimal water and maintaining efficiency and sustainability. We need to design our data center infrastructure for any extreme weather event – our businesses, communities, and society depend on it.

With temperatures rising across the globe and heatwaves becoming increasingly common, designing data centers to be resilient through extreme weather events has never been more important. But how exactly do you approach designing and building a data center today that can withstand future climate shifts?

Understand location climate and plan for the worst

The first step of designing a data center with climate resiliency at its core is to understand the environment where a data center is located. You need to consider every aspect of the climate throughout the entire calendar year. Things like altitude, humidity, water availability, and more all impact what heating and cooling equipment are best for your operations.

Spend time anticipating how the region’s climate might change over time and plan for the worst-case scenario. We are seeing temperatures rise much faster than ever anticipated, so we are already meeting – and in some cases surpassing – the worst-case scenarios from five years ago.

For large installations with many chillers, the immense amount of heat rejected can create a microclimate where the temperatures reach even higher.

Determine the best equipment for your operations

Technology for maintaining optimal temperatures within buildings has advanced rapidly and there are numerous options you can configure to create a solution that works best for your operations. A dedicated engineer can work with you to thoroughly assess your operational goals in order to develop a customized thermal management solution for your entire facility.

Located in a drought-ridden climate where there isn’t a lot of available water? Air-cooled chillers don’t use water for cooling, making them some of the most sustainable options on the market. Alternatively, a water-cooled, closed-loop system recycles the same water again and again, so it doesn’t require a lot of water to operate.

Wanting to create more efficiency? Install dry coolers for day-to-day operations and then supplement with trim chillers that can be used on the hottest days to help reduce overall costs while still preparing for the occasional heatwave. Optimize your free cooling for use in higher temperatures, increasing the amount of time the system runs without mechanical cooling.

Limited space for HVAC equipment? Single-phase immersion dry coolers are built horizontally, and thus require up to a 60 percent smaller footprint.

Add controls for better management and added efficiency

Getting the right equipment for your specific climate and operations is a great start, but to create a truly resilient data center, you need advanced controls to ensure equipment is set up to run optimally at all times while maximizing efficiency and helping to reduce operating costs.

Configuring the system to run exactly when and where you need it means that you’re only using the energy required to maintain the infrastructure. Being able to make changes remotely means you won’t be caught unprepared if an extreme weather event unexpectedly arises.

You can also build contingency planning into your control settings so that the system automatically adjusts to predetermined thresholds to maintain optimal operating conditions. Having this added layer of security for worst-case scenarios provides peace of mind that your operations are built to withstand climate fluctuations.
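As a purely hypothetical sketch of what such predetermined thresholds can look like in a controls layer (the temperatures and actions below are invented for illustration, not taken from any particular BMS or Trane product):

```python
# Hypothetical sketch of threshold-based contingency rules. Real building
# management systems expose this through their own configuration tools;
# every threshold and action here is an invented example.

from dataclasses import dataclass

@dataclass
class ContingencyRule:
    trip_ambient_c: float   # outdoor temperature at which the rule trips
    action: str             # predetermined response

RULES = [
    ContingencyRule(30.0, "switch from free cooling to trim chillers"),
    ContingencyRule(38.0, "stage all chillers and raise chilled-water flow"),
    ContingencyRule(43.0, "notify operations and prepare rental capacity"),
]

def triggered_actions(ambient_c: float) -> list[str]:
    """Return every predetermined action whose threshold has been crossed."""
    return [rule.action for rule in RULES if ambient_c >= rule.trip_ambient_c]

# During a 41C heatwave, the first two contingency levels are active.
print(triggered_actions(41.0))
```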

Once your equipment is set up, there are two important items to ensure the system is configured for resilience:

• Ongoing maintenance keeps everything running. Mechanical systems require cleaning, upkeep, and service. Equipment maintenance ensures an installation can function at the level it should every day and can prevent unforeseen component failures.

• Rental services are the parachute owners always forget. Having rental unit availability for emergency scenarios in high ambient conditions adds another level of redundancy.

Trane’s world-class team of engineers, technicians, and experienced energy specialists has helped data centers around the world design their operations for sustainability, efficiency, and resiliency, and it takes seriously the job of protecting the world’s vital infrastructure while increasing the sustainability of the planet. Trane understands the nuances that data centers face and how to build solutions for the world’s largest hyperscale and colocation companies in extreme conditions, particularly high ambient climates.

To find out more, visit www.trane.com. 

Q&A: Danielle Rossi, Trane

A look at data center cooling challenges from the mind of a manufacturer

As our demand for data continues to snowball resulting in increased rack densities, the solutions needed to keep our data center facilities cool have to keep pace. But this ever-evolving digital landscape throws up challenges for cooling manufacturers, who must ensure their products perform to the max, with a minimal impact on our environment.

We spoke to Danielle Rossi, global director, Mission Critical Cooling at Trane Commercial US, to find out what these challenges are and, more importantly, how they're being overcome.

What kind of limits do the laws of physics force on makers of cooling hardware?

Chips are getting smaller and denser. We have reached a point in the technology that requires a different form of heat rejection to keep them cool. Some of the newer HPC chips are two to three times denser than types from the last 10 years and cannot be cooled by air, requiring a liquid cooling design. A March 2023 article by Jacqueline Davis with Uptime Institute, ‘Too hot to handle? Operators to struggle with new chips’, is a great reference discussing the new chip densities.

In what ways does data center design need to change to maximize sustainability?

The largest use of power in the data center, outside of the IT load itself, is the cooling system. The efficiency of UPS units and other power systems has been optimized in the last 20 years with 1pf and transformer-less systems, so the power infrastructure overhead of the data center is now down to five to eight percent. In most data centers, the power used by the cooling system is between 30 and 40 percent of the overall power. Many large hyperscale and colocation customers are revisiting their cooling system designs to improve efficiency. Another way to improve efficiency is to optimize controls and services. Ensure controls and BMS systems are utilized to their full extent and regularly maintain all equipment for the best performance and monitoring. If a site needs efficiency improvement, I always recommend energy services to determine where best to begin.

A lesser-mentioned sustainability topic is materials reuse. There has been more conversation regarding refrigerants and chemicals in recent years, but solid material reuse is an area of sustainability that is not often discussed. Materials sustainability can sometimes be more costly and does not have a monetary return on investment. Therefore, it will likely require vendor initiatives, end-user operator requirements or governmental laws to move materials reuse forward as a sustainability goal.

What can IT hardware and server rack makers do to help improve cooling efficiency?

IT hardware and server manufacturers are starting to collaborate with cooling companies to optimize design and increase efficiency as a full system design. This is particularly true in liquid cooling, where the server is in direct contact with the heat rejection. With the use of these high-density chips becoming more common, collaborative design is incredibly important to ensure systems are optimized to the best efficiency possible.

What role can machine learning and AI tools play in maximizing the efficiency of cooling systems?


AI is, in part, both the reason for higher-density applications and a design aid for them. AI applications require the high-performance computing that will demand more, and different, cooling systems. AI systems also help map future computational fluid dynamics (CFD), project future weather in regions, and provide real-time operational adjustments.

How is air cooling evolving to meet the challenges posed by the need for ever-greater efficiency and sustainability?

Very little has changed with air cooling in the past 30 years. Perimeter units have gone from belt-and-pulley systems to direct drive fans. Row containment and close-coupled cooling for in-row applications were introduced in the late 2000s, allowing higher rack density.

Many large facilities have moved to chilled-water systems with outdoor heat rejection serving chilled-water indoor air handlers, such as perimeter CRAHs, instead of individual refrigerant/DX-based cooling systems, such as perimeter CRACs. That decreases the number of high-power-draw compressors being used and decreases the amount of required copper in the facility.

The largest power draws, aside from compressors, on air-cooled systems are the fans. Some air-cooling vendors are redesigning systems to increase airflow or decrease power loss and some fan manufacturers are working to optimize that component individually, but the efficiency of air-cooled systems will always be limited to the heat transfer capabilities of air itself.

Ultimately, will data center operators need to migrate to liquid cooling to achieve their long-term sustainability ambitions?

Liquid cooling is the most efficient and sustainable cooling method to date. Liquid cooling has a better efficiency than air cooling and utilizes very little to no water. Operators can also improve footprint and provide heat reuse, making the design very good for sustainability goals.

However, it is important to note that a required migration to liquid cooling would be dependent on the load and chips being utilized. There are multiple types of liquid cooling: direct-to-chip, single-phase immersion (1-PIC) and two-phase immersion (2-PIC). The types of liquid cooling are used in different ways and for different applications.

Some data center operators may transition to one or multiple methods alongside their air-cooled systems as their densities grow; some may choose to utilize multiple methods of liquid cooling within a space; and other operators may create a new facility for one type of liquid cooling for a fully high-density space. If a design fits the density parameters for liquid cooling, it would be the best cooling option to meet sustainability targets.

Trane is a leader in ‘high-efficiency heat rejection’ – why is this important?

As mentioned previously, the cooling system is between 30 and 40 percent of the overall power. Our chillers are extremely efficient, utilizing our controls systems to optimize free cooling and other efficiency technologies. Increasing the efficiency of the heat rejection and the building cooling system has a larger impact on overall site efficiency than any other portion of the design.

Trane has recently made a Series B investment in LiquidStack, a company that specializes in liquid immersion cooling. How will this benefit Trane’s sustainability journey, as well as the data center industry at large?

There have been discussions about liquid cooling in data centers for nearly 20 years. The new high-density chips are making the transition a reality. That transition opens the door to better utilize existing sustainability practices such as heat reuse. Trane chillers can provide both the chilled water to cool the systems and heat recovery for reuse. Trane Technologies has embarked on its Gigaton Challenge, setting the goal of reducing our customers’ carbon footprints by one billion metric tons of greenhouse gas emissions (CO2e) by 2030. Trane products work well combined with LiquidStack products to help us achieve that goal.

Looking at the wider data center life cycle, how does waste-heat get reused?

Many European countries are putting language into recent bills stating heat reuse must be used on data center applications over a certain size. There are many benefits to heat reuse and places it can be utilized. Waste heat, particularly from liquid cooling, can be used as local heating to the building’s office space, district heating to a nearby community, indoor agriculture and fish farming facilities, community pools and parking lot or sidewalk snow mitigation.

Could you talk us through the sustainability (and wider) benefits of liquid immersion cooling? Is this the future?

Aside from the high-density use cases and heat reuse already mentioned, the biggest sustainability benefit of liquid cooling is the high operating temperatures. In most cases, mechanical cooling would only be utilized on the hottest days, with only fans used for outdoor heat rejection the rest of the time, limiting compressor run times and saving power, money, and noise. In very cold climates, no mechanical cooling is required, and only outdoor fans are used to cool the system. That system’s high temperature is the reason heat reuse is more beneficial with liquid cooling than air cooling.


Chapter three: Going greener

A few years ago, operating a sustainable data center meant little more than making improvements to PUE. But, with cooling making up a huge portion of a data center’s operational expenditure, ‘green’ has taken on a whole new meaning, bringing with it a myriad of new sustainability metrics and considerations.

Today, efficient equipment is no longer enough, as data center operators are tasked with tracking their water and carbon usage too. In this chapter we home in on data center decarbonization, how we go about curbing thirsty facilities, as well as practical solutions for greening your HVAC system.


Decarbonization: The secret to success and sustainability?

Decarbonizing the data center: the what, why and (most importantly) how

Claire Fletcher, DCD

Although sustainability within the data center industry has been steadily climbing the priority ladder over the last few years, decarbonization – one of the key ways to actually achieve it – is a fairly nascent topic.

Perhaps this is testament to the greenwashing currently rife across the sector, or maybe it’s simply down to a lack of education. But what we do know is that an all-talk, no-(real)-action approach won’t get us anywhere.

Decarbonization quite literally means the reduction of carbon emissions, usually in relation to a business - or in this case a data center, which is a business unto itself.

Unfortunately for data center operators, many of whom are resting on their laurels, decarbonization won’t just happen. It will require a multifaceted, concerted effort if they want to reap the rewards it will ultimately bring to the bottom line.

“Decarbonization requires businesses to effectively account for the emissions it takes to run their operations,” says Trevor Joelson, key account decarbonization program lead at Trane. “It’s then about coming up with a plan to actually reduce those emissions.”

Why now?

In corporate America in particular, over the last 12 to 18 months, the buzz surrounding decarbonization has coincided with the institutional investment community deeming it a priority alongside the broader topic of ESG (environmental, social and governance). In other words, the money has finally found it, and big money equals big interest.

“Within ESG you have environmental sustainability, and these large institutional investors are looking at companies and they’re looking at data center operators,” says Joelson.

Investors want to know what a data center’s plan is for environmental sustainability. This will not only help them gauge the health of the business today, but will act as an indication of what they can expect from it in the future, so they can make prudent investments.

Of course, due to an abundance of resources, hyperscale companies are currently leading the way when it comes to decarbonization. But this isn’t because they’re more environmentally conscious than the rest of us, it’s their customers in the driving seat.

Customer priorities shape the way an organization does business, so if hyperscale customers care about carbon accounting – a key component in the decarbonization of a data center – this gives operators a functional reason to care too, and to not only care, but take action.

“If we look at hyperscale operations in terms of carbon accounting, you have your scope one and scope two emissions which are those you are directly accountable for,” explains Joelson. “But then scope three emissions are the value stream both to and from your suppliers and to your customers.”

This has led customers (particularly those who are large users of a data center’s core services) to approach operators looking to understand the emissions that they are responsible for.

This is because, ultimately, they too will be accountable for said emissions within one of their own emissions scopes. Essentially, hyperscale customers want to be able to account for the emissions used as a result of contracting with these companies.

Following the leader

Although scope one and two emissions may be easier to ascertain and disclose, scope three emissions are by far the largest piece of the puzzle, accounting for up to 80 percent of a data center’s total emissions. With that being the case, it’s hard to believe that up until recently, scope three emissions remained largely undefined.

“Scope three has been this black box for a while that society just accepted as being undefined, so companies never really had to act. What’s changed recently is that industry leaders – some being the hyperscalers we referenced earlier – have become much more transparent about their scope three emissions. Enterprise data centers hosted in colocation (colo) and cloud need to take notice; even though it’s off site, you have to look at those emissions in scope three.”

This has resulted in other companies following suit, creating a kind of transparency snowball which is continuing to pick up speed.

A unique challenge

By way of example, if you took all the roof space of a data center and covered it in solar panels, this may seem like a substantial sustainability investment. However, because of the high energy environment that is a data center, this would be less than a drop in the ocean in terms of that facility’s overall energy consumption.

“A lot of the decarbonization of a data center does have to happen off-site with investment in large renewable energy developments,” says Joelson. “We’ve seen a lot of major operators go out and invest in things like virtual power purchase agreements (VPPAs).”

Joelson goes on to explain that because organizations shouldn’t lose sight of what’s behind the meter – their actual core energy consumption – VPPAs are not necessarily the way to go. Rather, investments should be made in energy efficiency and behind-the-meter solutions.

That said, this doesn’t mean traditional off-site investments should be avoided entirely. While we won’t reach net zero with an exclusively on-site solution, operators shouldn’t rely too heavily on an exclusively off-site strategy either; it’s about striking a workable balance between the two.

Out with the old

The exact formula for data center decarbonization largely depends on the facility in question. If a data center has yet to be constructed, this provides operators the opportunity to bake sustainable solutions into the build from the design stage, selecting materials with less embedded carbon than those typically used in existing facilities - for example, concrete.

“When you build greenfield, you can design to the newest and most sustainable standards. You can also plan ahead, when possible, to make easy retrofits in future,” says Danielle Rossi, global director, Mission Critical Cooling at Trane US Commercial.

When building a new data center, the aim of the game is to build the most efficient facility possible. More often than not, this means designers are working with the latest – generally more environmentally friendly – technologies and materials to achieve the desired result.

Historically, existing facilities have relied on diesel generators as a source of backup power. But with the spotlight remaining firmly on sustainability, there is now a strong desire across the industry to move away from diesel towards batteries as the backup energy source.

Recent technologies have made this possible, with hyperscale behemoth Google having applied battery energy storage as a complete replacement for diesel generators.

"In addition to battery storage options, we have seen a rise in thermal and ice storage providing off peak energy and cooling options," says Rossi.

Unfortunately, even with illustrious trailblazers like Google, human beings, particularly those running existing mission critical facilities, are inherently resistant to change.

“Within the built environment there’s a much more embedded human element to it. It’s a more emotional decision that’s being made, and there’s a ‘we’ve always done it this way, that’s just the way it is’ attitude at play,” says Joelson.

Facility operators need to be made comfortable with the fact that different operating conditions can be implemented that will result in greater efficiency and a lower carbon footprint, without affecting their core business or resilience.

"Operators know that any adjustments to a live data center could result in downtime. The key is to find changes that can be made for minimal downtime and optimal monetary and sustainability ROI," explains Rossi.

Companies that have been in operation for a while tend to focus on projects that will provide a high return on investment, with very short simple paybacks. However, this approach causes them to leave behind a plethora of other attractive projects that may simply take longer to pay back.

"Finding an energy partner like Trane, helps to navigate all the options available, including energy services and capital investment services."

But how?

From an energy perspective, the HVAC side of a data center accounts for anywhere between 30 and 40 percent of a facility’s total operating costs, so this is a sensible place to start.

"Aside from the IT load, the cooling system uses the most energy in a data center. Optimizing efficiency and ensuring refrigerants are the lowest GWP will help the overall sustainability of the project more than any other aspect of the design."

Customers focused on decarbonization need to account for these refrigerants based on their global warming potential (GWP), alongside any refrigerant leaks, which count as ‘fugitive emissions’. For customers, these fall into their scope one category.
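For a sense of the arithmetic, the sketch below converts an annual refrigerant leak into scope one CO2-equivalent using GWP. The charge size, leak rate, and GWP figures are indicative assumptions rather than values for any specific product or refrigerant standard.

```python
# Converting an annual refrigerant leak into scope one CO2e.
# Charge size, leak rate, and GWP values are illustrative assumptions.

def fugitive_emissions_tco2e(charge_kg: float, annual_leak_rate: float, gwp: float) -> float:
    """Tonnes of CO2-equivalent from refrigerant lost over a year."""
    leaked_kg = charge_kg * annual_leak_rate
    return leaked_kg * gwp / 1000.0  # kg CO2e -> tonnes CO2e

# Example: a 300 kg charge leaking 5 percent per year.
print(fugitive_emissions_tco2e(300, 0.05, gwp=1430))  # higher-GWP refrigerant: ~21.5 tCO2e
print(fugitive_emissions_tco2e(300, 0.05, gwp=630))   # lower-GWP alternative: ~9.5 tCO2e
```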

So, what can operators do?

"The Montreal Protocol requires the next refrigerant GWP drop by the end of this year and all of the industry will be upgrading by then. However, we began transitioning sooner than many competitors, and customers have come to us in recent years for equipment selections because of that," says Rossi.

And, of course, the more you can reduce the number and volume of leaks, the lower your refrigerant-based scope one emissions will be. Fortunately, leak reduction is an area that doesn’t necessarily require capital upgrades to equipment.

This is because, a lot of the time, HVAC within a data center is controlled by a poorly programmed system, or a system that has been taken out of action altogether. Controls, particularly within the built environment, are therefore crucial to the entire decarbonization equation.

Fortunately, the industry is now in a position where operators are able to add advanced artificial intelligence and machine learning to their control systems, creating better outcomes for their facilities.

Some of these add-ons focus on energy efficiency and optimization, but they can also be refrigerant-specific, to the point where variances within the system alert an operator to a refrigerant leak – without the need for dedicated leak monitoring.

“Traditionally, being able to monitor all assets separately might’ve presented a cost hurdle for customers. Now it can be embedded within a core machine learning/artificial intelligence type optimization solution, helping significantly reduce the scope one emissions that can be a huge weight on a customer’s carbon accounting,” says Joelson.
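As a rough illustration of the variance-based approach described above, the sketch below flags a possible leak when a routine telemetry metric drifts well away from its baseline. The metric (liquid-line subcooling), the baseline window, and the three-sigma threshold are assumptions for illustration, not Trane’s actual detection logic.

```python
# Flagging a possible refrigerant leak from drift in routine telemetry,
# rather than a dedicated leak sensor. The metric, window, and threshold
# are illustrative assumptions.
from statistics import mean, stdev

def leak_suspected(subcooling_history: list[float], latest: float, sigmas: float = 3.0) -> bool:
    """Flag when the latest subcooling reading drifts well below its baseline."""
    baseline, spread = mean(subcooling_history), stdev(subcooling_history)
    return latest < baseline - sigmas * spread  # low subcooling often tracks low charge

history = [8.2, 8.0, 8.3, 7.9, 8.1, 8.2, 8.0]  # degrees of subcooling, hypothetical
print(leak_suspected(history, latest=6.4))      # True -> raise an alert for inspection
```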

Outside of the facility itself, another important piece of the decarbonization puzzle is having an in-depth, real-time understanding of the electricity grid the data center is connected to. Ultimately, it’s this generation mix that will drive your emissions.

“Around machine learning and artificial intelligence, we’re seeing customers asking, ‘can we shift the operations of our individual facility or portfolio of facilities to respond to the generation mix on the electric grid?’

“For example, if a coal plant has to come on due to a particular variable, can you then shift to battery storage, or perhaps use your DCIM to shift the load somewhere else? It’s about that integration with the electrical grid that machine learning and artificial intelligence allows for,” says Joelson.
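To make that concrete, here is a minimal sketch of the kind of grid-aware decision-making Joelson describes. The site names, grid carbon intensities, and battery threshold are hypothetical illustrations, not figures from Trane or any operator; a real system would pull live grid data and act through the DCIM or BMS.

```python
# Minimal sketch of carbon-aware load placement across a portfolio of sites.
# Site names, grid carbon intensities (gCO2/kWh), and the battery-discharge
# threshold are hypothetical assumptions.

GRID_INTENSITY = {"site_a": 520, "site_b": 210, "site_c": 340}  # gCO2/kWh, assumed
BATTERY_THRESHOLD = 450  # above this, prefer stored energy when it is available

def place_flexible_load(sites: dict[str, int]) -> str:
    """Send deferrable work to the site with the cleanest grid right now."""
    return min(sites, key=sites.get)

def should_discharge_battery(intensity: int, battery_soc: float) -> bool:
    """Ride out a dirty-grid period on batteries when state of charge allows."""
    return intensity > BATTERY_THRESHOLD and battery_soc > 0.5

print(place_flexible_load(GRID_INTENSITY))                                   # -> "site_b"
print(should_discharge_battery(GRID_INTENSITY["site_a"], battery_soc=0.8))   # -> True
```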

Where do I start?

The first logical step is to identify which solutions are available to you. This can be done with the help of industry partners who are not only established in decarbonization and sustainability, but also within mission critical environments (preferably data centers).

"Data centers and decarbonization are both very specialized topics. Finding a design partner that is knowledgeable in both is important," says Rossi.

And because there are so many different variables that go into effectively achieving decarbonization, rather than keeping partners siloed as might have been the case in the past, Trane advises ‘partnering your partners’, as a collaborative effort will ultimately help drive the greatest outcomes for your business.

Trane can put you in touch with a myriad of third-party investors that see the value in decarbonization and are willing to invest capital in your business and buildings to help you achieve it.

“This is a turn-key, silver-bullet opportunity for data center companies to bring in investors to drive decarbonization, while operators can remain focused on their core business,” says Joelson.

In other words, stick to what you know. If you are a data center operator, your focus should be exactly that. So, the secret to sustainable success? Seek professional advice, and delegate your decarbonization. 

The industry is moving towards better, less environmentally impactful refrigerants. We have customers that will come to us and base their equipment selection on the fact we’ve adopted a low GWP refrigerant before one of our competitors
> Miles Auvil, Trane

Curbing thirsty data centers

As freshwater supplies dwindle, data centers need to look elsewhere

Seb Moss, DCD

Freshwater is increasingly in short supply

The march of industrialization, extensive use of limited groundwater reserves, growing populations, and the impact of climate change mean that access to water is going to become one of the defining struggles of the 21st century.

Data centers' drink problem

"There's gonna be a 40 percent gap in freshwater supply and demand by 2030 according to the UN," Nalco Water VP and GM Heather DuBois told DCD. "It's just staggering, but yet we need to continue to grow."

Data center demand shows no sign of letting up, with gigawatts of capacity expected in the years ahead - much of which will be cooled with water.

That means a lot of water, at a time when the world needs it the most. That’s simply unsustainable.

Take Digital Realty: back in 2018, its data centers used around 1.4 billion gallons of water in a year – an astronomical amount, and that’s even with the majority of its facilities not using water for cooling. If all of that were potable water, it would be a wasteful travesty.

"Water is a key element of our environmental, social, and corporate governance journey," Digital's director of sustainability programs Aaron Binkley said. “43 percent of our water use came from reclaimed sources last year - that's more than 660 million gallons of water across the portfolio.”

To get to that point took a significant amount of work, the company’s water sustainability lead Walter Leclerc told DCD. Getting to 100 percent water reclamation will take significantly more effort.

First, the company partnered with Nalco and parent company Ecolab to assess the water use and water risk at all of its data centers. “We used the Water Risk Monetizer, which is a tool that Nalco, Microsoft and Trucost put together, to place each data center on a water maturity curve,” Leclerc said.

“We've done that at the local level, and now we've prioritized our hotspots. We know as of today which of our sites are high-risk water sites, medium-risk water sites, and low-risk water sites.”

This has meant collecting “a tremendous amount of data,” Leclerc said. “It can take three to four months per site to do assessments - Nalco went to every one of our water sites across the globe, they did a design assessment, they took samples.

“They first take water quality assessments, and then they do assessments of the watersheds, where those data centers were taking it off of. So it took a long time.”

Quality counts, too

State and local utility providers offer little in the way of data on water supplies, beyond limited regulatory disclosures and a mandatory requirement to test for Legionella. “But that didn’t help us from a water stewardship perspective,” Leclerc said. “We need to understand the quality of the water, we’re talking about the pH, we’re talking about the solids in it, and so on, because we need that information to develop the pipeline of projects at a site.”

Armed with up-to-date information on water use, availability, and quality, the company was able to take steps at each site to help reduce freshwater usage. “It can include everything from sulfuric acid dosing to reclaim water, to implementing pretreatment on systems, to looking at collecting rainwater.”

Harvesting precipitation “takes a tremendous amount of effort,” Leclerc said. “We actually have two major projects that are about to come to a conclusion now, and they took two years to do.”

While the data center company likes to standardize designs where possible, the reality is that different locations have different needs and capacities. “Down in our Phoenix, Arizona, site we initially reclaimed water into that system, but then we outstripped the capacity of the city,” Leclerc said.

“So they actually told us to go back onto potable water. Now the city has caught up with us, and they said they can handle our reclaimed water volume, so we're trying to do another project next year to get it back.”

In places like Ashburn, where Digital has multiple huge campus developments, “those are opportunities to have a strategic dialogue with the water authority and say, ‘is reclaimed water available next to our campus?’” Digital’s Binkley said. “Or if it's not, we can give them some general guidance on what our water consumption would be and what our demand profile would look like.”

Due to the proximity of other companies’ data centers in some areas, it can make sense to collaborate with rivals - as well as non-data center companies - to make a case for reclaimed water availability. “We know Amazon is doing a huge building in Ashburn, so that's going to affect the reclaimed water supply,” Leclerc said.

Local opportunities

Sometimes the particular location of a data center can open up unique cooling solutions, as Google found with its Hamina data center, which draws on seawater using a cooling system inherited from the paper plant that preceded it.

With Digital’s Marseille facility (under the Interxion brand), the team realized that they were relatively near a decommissioned coal mine “that has waste cold water that's coming out of the ground that was traditionally pumped into the river and out to sea,” Binkley said.

“That’s being piped about 14 kilometers to our Marseille data center campus, and it’s being used as a chilled water source to limit cooling energy needs. It’s helping to make the facility more efficient by getting free cooling out of what is a wastewater stream from an old coal mine.”

While the environmental benefits of such water-saving moves are clear, what can make it easier for companies to sign off is that they can also deliver cost savings. “We want to do the right thing when it comes to water stewardship,” Leclerc said, “but also the return on investment on some of these projects is like seven months,” with reclaimed water usually less expensive than freshwater.

“In the old days, the ROI was 10 years,” he added. “We're just finishing up a two year project at our Santa Clara location, where we're going to be taking 16 million gallons of potable water off that watershed and going to reclaimed water. This is great, but I'm also going to be collecting hundreds of thousands of dollars a year from the cost differential for the operations team. So it is a driver, absolutely.”
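The simple-payback arithmetic behind figures like these is straightforward. In the sketch below, the project cost and per-gallon cost differential are hypothetical placeholders, not Digital Realty’s numbers; only the 16 million gallon volume comes from the Santa Clara example above.

```python
# Simple payback for a potable-to-reclaimed water switch. Project cost and
# per-gallon cost differential are hypothetical placeholders.

def simple_payback_months(project_cost: float, annual_savings: float) -> float:
    return 12.0 * project_cost / annual_savings

annual_savings = 16_000_000 * 0.02      # gallons shifted x assumed $/gallon saved = $320,000/yr
print(f"{simple_payback_months(200_000, annual_savings):.1f} months")  # ~7.5
```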

Still, the financial benefits aren’t as profound as one would expect, given water’s precious nature. “Water is cheap – even in warm places where it is a scarce, precious resource, it’s still relatively inexpensive,” Binkley added. Further savings can be made with evaporation credits, or in spending more on longer pipes that reach areas with lower water rates.

A number of customers are already demanding at least some action on freshwater usage. “Customers are becoming aware of water concerns and many of the big hyperscalers have made commitments,” Binkley said, although the level of demand for action is currently much less than that for renewable energy.

Here, Digital is trying to work out how to involve the customer in sustainable decision making. For renewable energy, the company has partnerships with solar and wind companies that mean its customers can decide whether to use renewable energy via power purchase agreements, or just run off of the grid. For water, it’s not immediately clear how much control the customer can have.

“We just started those conversations in the last month or so,” Leclerc said. “How can we bring Ecolab into our RFP process, and how can we leverage the partnership to do that with our customers? We’re just not there yet.

“We're kind of hung up on it. It's easy when it's the customers paying for the source, but if they're just a customer in a site and they're not part of that water matrix perspective, it’s a lot harder.”

However that is achieved, it is expected that water sustainability will increasingly feature among customers’ priorities, just as demand for renewable energy for data center workloads has expanded rapidly over the past few years.

Water scarcity is only going to get worse, and companies needlessly using limited freshwater resources will rightly face the ire of a desperate populace. “The public is going to hold us to these as we move forward,” Nalco’s DuBois said.

“I don't think anybody's going to have a choice as we look to the constraints that the environment is going to have in the coming years.”

Digital Realty's Marseille facility accesses cold water from a mine

Six ways to make your HVAC system greener

Six ways that HVAC systems can make bigger strides to decarbonize right now

Technology and material advancements have been making buildings consistently more sustainable for decades. Some of what’s possible is truly remarkable. There are solutions readily available today that can reduce emissions while paying back building owners with financial benefits.


1. Squeeze out every last kilowatt of inefficiency

The electric grid is still partially powered by carbon-emitting power sources. The first step to decarbonization begins by following some basic principles of applied system design. Decisions for compression-based heating systems should enable a high enough annualized COP to bring emissions below those of site-based fossil fuel heating systems.

High efficiency systems further reduce wasted energy demand when the right control strategies are applied. As an example, Trane’s Symbio® 800 controller with Adaptive Controls modulates the compressor and fans to deliver peak efficiency at all operating conditions. When multiple units are in place, Tracer® SC+, with its chiller plant control application, can efficiently sequence units and dynamically adjust system control setpoints to minimize system energy use at all load conditions.
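A worked sketch of that comparison: electric, compression-based heating beats on-site gas only when its annualized COP outweighs the carbon intensity of the grid supplying it. The intensity and efficiency figures below are indicative assumptions, so substitute your own grid mix and equipment data.

```python
# Comparing a heat pump (electric, via COP) against an on-site gas boiler.
# Grid and gas carbon intensities and boiler efficiency are assumed values.

def heating_emissions_kg(heat_kwh: float, cop: float,
                         grid_g_per_kwh: float = 400,
                         boiler_eff: float = 0.9,
                         gas_g_per_kwh: float = 200) -> dict:
    """Annual heating emissions, in kg CO2e, for both approaches."""
    heat_pump = heat_kwh / cop * grid_g_per_kwh / 1000
    gas_boiler = heat_kwh / boiler_eff * gas_g_per_kwh / 1000
    return {"heat_pump_kg": round(heat_pump), "gas_boiler_kg": round(gas_boiler)}

print(heating_emissions_kg(100_000, cop=3.5))
# -> {'heat_pump_kg': 11429, 'gas_boiler_kg': 22222} on these assumptions
```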

2. Use cool outdoor air instead of the compressor

Nature often provides what we need, and contemporary solutions are getting quite clever (and sensible) about using the local climate or geo-based resources. One of the market’s most practical and readily available solutions is Free Cooling. It uses outdoor air to chill water used within applied systems, without the use of compressors, when outside air temperatures are advantageous – during winter, spring and fall. Integrated free cooling, a technology that’s available in many Trane chillers, can make building cooling easier, more efficient, and lower cost than field-provided solutions.
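The changeover logic behind integrated free cooling can be summarized in a few lines. The setpoints and approach temperature below are illustrative assumptions, not Trane product defaults.

```python
# Economizer decision behind integrated free cooling.
# Changeover temperatures are illustrative, not product setpoints.

def cooling_mode(outdoor_temp_c: float, chilled_water_setpoint_c: float = 18.0,
                 approach_c: float = 4.0) -> str:
    """Pick full free cooling, partial free cooling, or compressors only."""
    if outdoor_temp_c <= chilled_water_setpoint_c - approach_c:
        return "free cooling only"          # outdoor air does all the work
    if outdoor_temp_c < chilled_water_setpoint_c:
        return "partial free cooling"       # pre-cool, compressors trim the rest
    return "mechanical cooling"             # too warm outside to help

for t in (8, 16, 25):
    print(t, "C ->", cooling_mode(t))
```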

3. Recover and reuse heat energy

Many buildings require heating and cooling at the same time. We’ve all experienced buildings where some areas are too hot, while others feel too cold. Moving heat, instead of generating it with gas-fired heaters, reduces a building’s direct greenhouse gas emissions while improving comfort. Electronics, lighting and human bodies all generate heat, which builds up in certain interior spaces. Heat recovery systems reclaim this excess heat energy, which is typically expelled from the building, and transfer it to areas where heat is needed.

4. Plan a transition to low GWP refrigerants

The U.S. EPA has proposed a new rule to limit hydrofluorocarbon (HFC) production based on global warming potential (GWP) in an effort to align with the Kigali Amendment to the Montreal Protocol and the AIM Act enacted last year. As part of the Trane Technologies Gigaton Challenge, Trane plans to fully transition out of high GWP refrigerants by 2030, ahead of regulation. What’s your next move? Consult with HVAC system experts now to begin strategizing your building’s transition to less harmful, next-generation refrigerants including R-513A, R-514A and R-1233zd.

5. Store energy to help balance supply and demand

We’re all in this together. Buildings are part of the problem, and they have a responsibility to be part of the solution—a big part. Despite significant progress, buildings still account for over 70% of U.S. electricity consumption and power sector CO2 emissions.

Along with continued efficiency strides, buildings also need to become more flexible in timing when they consume the most energy. Energy storage enables the shift, and in chiller plants that means ice storage: dedicating chillers to create cold energy at night, when electricity demand and prices are lower, for use in the peak-use daytime hours. Shifting building load helps to balance the grid’s electricity supply with customer demand—one of the key prerequisites to speeding up the grid’s full transition to renewable energy sources.
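A simple sketch of the load-shifting arithmetic: make ice off-peak, melt it on-peak, and bank the tariff difference. The cooling load and tariff figures are hypothetical, and ice-making losses are ignored for brevity.

```python
# Load-shifting arithmetic for ice storage: charge at night, discharge by day.
# The cooling load and tariff figures are hypothetical assumptions.

peak_cooling_kwh = 10_000        # daytime cooling energy that could be shifted
shift_fraction = 0.4             # portion served from ice instead of daytime chiller operation
peak_rate, offpeak_rate = 0.18, 0.07   # $/kWh, assumed tariff

shifted_kwh = peak_cooling_kwh * shift_fraction
daily_saving = shifted_kwh * (peak_rate - offpeak_rate)
print(f"Shifted {shifted_kwh:.0f} kWh off-peak, saving about ${daily_saving:.0f}/day")
```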

6. Transition to renewables, onsite and offsite

When used in combination with HVAC system solutions that reduce or shift energy demand, renewable energy sources, such as solar and wind, can serve a significant part of a building’s energy demand. As utilities get greener, advanced chiller controls can integrate with services that allow two-way communication with the grid. Buildings that can reduce, shift, or modulate energy use and establish demand flexibility will expedite the reality of a fossil-fuel-free, renewable-energy grid.

Sustainability is complicated when you don’t have a plan, but the right partner can help you strategize with solutions that are proven, practical and affordable. When you need a partner to guide you into the next generation of decarbonization and building electrification, reach out to Trane 

>Broadcast: Charting the journey to reach carbon negative by 2030
