The Cooling Supplement



INSIDE

New technologies and new benchmarks

Why PUE has to change

> The long-lived efficiency measure is no longer enough, thanks to new cooling methods

A new phase for cooling

> Two-phase cooling has been bubbling under for some time. Now it’s properly arriving

Cool vibrations

> A Glasgow firm converts thermal energy into motion, to speed heat away




Contents

4. A new phase for liquid cooling
> Two-phase cooling has been bubbling under for some time. Now it's properly arriving

8. Why PUE has to change
> The long-lived efficiency measure is no longer enough, thanks to the arrival of new cooling methods

11. Cooling with water
> Harmful HFC gases are on the way out. What if we could replace them with pure water?

12. Cool vibrations
> A Glasgow firm converts thermal energy into motion, to speed heat away

Cooling evolves

While air conditioning systems still dominate the data center world, new alternatives have been emerging.

Liquid cooling has been in the running for some time. Traditionalists have argued the time isn't right, even as baths of dielectric appear in HPC facilities. At the same time, there's a bunch of alternatives coming along - and all this is producing some confusion about how to evaluate the contenders. This supplement takes a look across the landscape of data center cooling.

In search of a new metric

This kind of technology sea-change could drive new ways to measure efficiency, because it creates a world where the venerable PUE metric no longer works. Power usage effectiveness (PUE) divides data center energy into rack power (good) and facility power (bad). But direct liquid cooling changes those boundaries. Liquid cooling does away with fans in servers; that is part of its efficiency promise, but under traditional PUE it actually counts against the facility, because it reduces power use in the rack. For these and other reasons, PUE is going to have to change.
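To see why, it helps to put numbers on it. The sketch below is a minimal illustration; the kilowatt figures are assumptions, not measurements from this supplement.

```python
# Minimal sketch of the PUE boundary problem. All figures are illustrative.

def pue(it_kw: float, facility_kw: float) -> float:
    """Power usage effectiveness: total site power divided by IT power."""
    return (it_kw + facility_kw) / it_kw

# Air-cooled baseline: 1,000kW metered as "IT", of which roughly 80kW is
# actually server fans, plus 400kW of cooling and other facility overhead.
print(f"Air cooled:   PUE {pue(1000, 400):.3f}, total 1,400kW")   # 1.400

# Remove the fans and nothing else: the site draws 80kW less power,
# but PUE gets worse, because the saving came out of the denominator.
print(f"Fans removed: PUE {pue(920, 400):.3f}, total 1,320kW")    # 1.435

# In practice liquid cooling also shrinks the facility overhead, so the
# reported PUE still improves - but the fan saving never shows up in it.
print(f"Full DLC:     PUE {pue(920, 100):.3f}, total 1,020kW")    # 1.109
```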

Two-phase comes of age

We start with a surprise. While data center operators have been grudgingly starting to accept that increasing power densities will demand liquid cooling, they've expected circulating or tank systems that absorb heat and remove it through conduction and convection. In two-phase systems, the coolant boils and condenses. This removes more heat but, until recently, only over-clocking gamers have been willing to risk vibrational damage or the other consequences of pushing the boundaries.

Now there really is a rapid increase in the power density of HPC and AI racks, and it is coinciding with moves to make two-phase practical. We talk to the people working on standards which could allow multi-vendor systems, and maybe make two-phase the norm in certain sectors of the market.

Cooling with water

With F-gas regulations phasing out the HFCs in traditional chillers, we need to use something else. One alternative is water. Adiabatic chillers are a low-energy alternative to traditional units, but the industry is realizing it can't go on using ever-increasing amounts of water. So it's interesting to see a company with chillers that use water in a traditional compression cycle. Will it catch on?

Shaking the heat away

A truly bizarre-looking chiller debuted at the COP26 event in Glasgow. With metal leaves fluttering round an upright trunk, the thermal vibration bell (TVB) looks like a steampunk tree. But it's not an art installation. It's a serious contender, in which thermal energy is converted into mechanical motion that drives its own removal.



Liquid cooling: A new phase

> Two-phase liquid cooling has finally arrived. Vendors are making purpose-built liquid cooled servers

Peter Judge, Global Editor

Data center operators have been avoiding liquid cooling, keeping it as a potential option for the future but never a mainstream operational approach. Liquid cooling proponents warned that rack power densities could only get so high before liquid cooling became necessary, but their forecasts were always forestalled. The Green Grid suggested that air cooling can only work up to around 25kW per rack, but AI applications threaten to go above that.

In the past, when rack power densities approached levels where air cooling could not perform, silicon makers would improve their chips’ efficiency, or cooling systems would get better. Liquid cooling was considered a last resort, an exotic option for systems with very high energy use. It required tweaks to the hardware, and mainstream vendors did not make servers designed to be cooled by liquid.

But all the world’s fastest supercomputers are cooled by liquid to support their high power density, and a lot of Bitcoin mining rigs have direct-to-chip cooling or immersion cooling, so their chips can be run at high clock rates. Most data center operators are too conservative for that sort of thing, so they’ve backed off.

This year, that could be changing. Major announcements at the OCP Summit - the get-together for standardized data center equipment - centered on liquid cooling. And within those announcements, it’s now clear that hardware makers are building servers that are specifically designed for liquid cooling.

The reason is clear: hardware power densities are now reaching the tipping point. “Higher power chipsets are very commonplace now, we’re seeing 500W or 600W GPUs, and CPUs reaching 800W to 1,000W,” says Joe Capes, CEO of immersion cooling specialist LiquidStack. “Fundamentally, it just becomes extremely challenging to air cool above 270W per chip.”


When air-cooling hits the performance wall, it’s not a linear change either, it’s geometric. Because fans are resistances, their power draw is the I-squared-R loss set down by Ohm’s law, going up with the square of the current. “So as you go to higher power chipsets, the fans have to be larger, or they have to run at higher speeds. And the higher the speed, the more wasteful they become,” says Capes.

“If you have a traditional chilled water computer room air handler (CRAH), you want those fans running at 25 to 28 percent of total capacity to reduce the I-squared-R loss. If you have to ramp those fans to 70 or 80 percent of their rated speed, you're consuming a massive additional amount of energy.”

Out of the crypto ghetto

LiquidStack itself charts the progress of liquid cooling. Founded in 2013 by Hong Kong entrepreneur Kar-Wing Lau, it used two-phase cooling to pack 500kW of high-performance computing into a shipping container, with a PUE of 1.02. The company was bought by cryptocurrency specialist BitFury in 2015, and put to work trying to eke more performance out of Bitcoin rigs.

Then in 2021, eyeing the potential of liquid cooling in the mainstream, BitFury appointed former Schneider executive Capes as CEO and spun LiquidStack out as a standalone company. Wiwynn, a Taiwanese server-maker with a big share of the white-label market for data center servers, contributed to a $10 million Series A funding drive - and Capes gives this partnership much of the credit for the market-ready liquid cooled servers he showed at two back-to-back high-performance computing events: the OCP Summit in San Jose, and SC21 in St Louis.

“In the immersion cooling space, you have a three-legged stool: the tanks, the fluid, and the hardware,” says Capes. All three elements put together make for a more efficient system. Alongside Wiwynn’s servers, for the third leg, LiquidStack has a partnership with 3M to use Novec 7000, a non-fluorocarbon dielectric fluid, which boils at 34°C (93°F) and recondenses in the company’s DataTank system, removing heat efficiently in the process.
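For a sense of the scale involved, here is a rough back-of-envelope sketch. It assumes a latent heat of vaporization of about 142 kJ/kg for Novec 7000 - a commonly quoted figure, used here purely for illustration - applied to the per-rack-unit loads quoted later in this article.

```python
# Back-of-envelope: how much coolant must boil off to carry away the heat?
# The latent heat value is an assumed, commonly quoted figure for Novec 7000.

LATENT_HEAT_KJ_PER_KG = 142.0

def boil_off_rate_kg_s(heat_kw: float) -> float:
    """Coolant vaporized per second (kg/s) to absorb a given heat load (kW)."""
    return heat_kw / LATENT_HEAT_KJ_PER_KG

print(f"3kW per rack unit -> {boil_off_rate_kg_s(3.0) * 1000:.0f} g/s of vapor")   # ~21 g/s
print(f"126kW full rack   -> {boil_off_rate_kg_s(126.0):.2f} kg/s of vapor")       # ~0.89 kg/s
# The vapor recondenses in the tank and rains back into the bath, so the
# fluid circulates without pumps - that is the two-phase cycle.
```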

Purpose-built servers are a big step, because until now all motherboards and servers have been designed to be cooled by air, with wide open spaces and fans. Liquid cooling these servers is a process of “just removing the fans and heat sinks, and tricking the BIOS - saying ‘You're not air cooled anymore.’”

That brings benefits, but the servers are bigger than they need to be, says Capes: “You have a 4U air cooled server, which should really be a 1U or a half-U size.”

LiquidStack is showing a 4U DataTank, which holds four rack units of equipment and absorbs 3kW of heat per rack unit - equivalent to a density of 126kW per rack. The company also makes a 48U DataTank, holding the equivalent of a full rack.

Standards

The servers in the tank are made by Wiwynn, to the OCP’s open accelerator interface (OAI) specification, using standardized definitions for liquid cooling. This has several benefits across all types of liquid cooling. For one thing, it means that other vendors can get on board and know their servers will fit into tanks from LiquidStack or other vendors, and users should be able to mix and match equipment in the long term. “The power delivery scheme is another important area of standardization,” says Capes, “whether it be through AC bus bar, or DC bus bar, at 48V, 24V or 12V.”

For another thing, the simple existence of a standard should help convince conservative data center operators it’s safe to adopt - if only because the systems are checked with all the possible components that might be used, so customers know they should be able to get replacements and refills for a long time.

Take coolants: “Right now the marketplace is adopting 3M Novec 649, a dielectric with a low GWP (global warming potential),” says Capes. “This is replacing refrigerants like R410A and R407C that have very high global warming potential and are also hazardous.



“It's very important when you start looking at standards, particularly in the design of hardware, that you're not using materials that could be incompatible with these various dielectric fluids, whether they be Novec or fluorocarbon, or a mineral oil or synthetic oil. That's where OCP is really contributing a lot right now.” An organization like OCP will kick all the tires, including things like the safety and compatibility of connectors, and the overall physical specifications.

“I've been talking recently with some colocation providers around floor load weighting,” says Capes. “It's a different design approach to deploy data tanks instead of conventional racks, you know, 600mm by 1200mm racks.” A specification tells those colo providers where it’s safe to put tanks, he says: “by standardizing and disseminating this information, it helps more rapidly enable the market to use different liquid cooling approaches.”

In the specific case of LiquidStack, the OCP standard did away with a lot of excess material, cutting the embodied footprint of the servers, says Capes: “There's no metal chassis around the kit. It's essentially just a motherboard. The sheer reduction in space and carbon footprint by eliminating all of this steel and aluminum and whatnot is a major benefit.”

Pushing the technology

Single-phase liquid cooling vendors emphasize the simplicity of their solutions. The immersion tanks may need some propellers to move the fluid around, but largely use convection. There’s no vibration caused by bubbling, so vendors like GRC and Asperitas say equipment will last longer. “People talk about immersion with a single stroke, and don’t differentiate between single-phase and two-phase,” GRC CEO Peter Poulin said in a DCD interview, arguing that single-phase is the immersion cooling technique that’s ready now.

But two-phase allows for higher density, and that can potentially go further than the existing units. Although hardware makers are starting to tailor their servers to use liquid cooling, they’ve only taken the first steps of removing excess baggage and putting things slightly closer together. Beyond this, equipment could be made which simply would not work outside of a liquid environment. “The hardware design has not caught up to two-phase immersion cooling,” says Capes. “This OAI server is very exciting, at 3kW per RU. But we’ve already demonstrated the ability to cool up to 5.25 kilowatts in this tank.”


Beyond measurement

The industry’s efficiency measurements are not well-prepared for the arrival of liquid cooling in quantity, according to Uptime research analyst Jacqueline Davis. Data center efficiency has been measured by power usage effectiveness (PUE), a ratio of total facility power to IT power. But liquid cooling undermines how that measurement is made, because of the way it simplifies the hardware.

“Direct liquid cooling implementations achieve a partial PUE of 1.02 to 1.03, outperforming the most efficient air-cooling systems by low single-digit percentages,” says Davis. “But PUE does not capture most of DLC’s energy gains.”

Conventional servers include fans, which are powered from the rack, and therefore their power is included in the “IT power” part of PUE. They are considered part of the payload the data center is supporting. When liquid cooling does away with those fans, it reduces energy and increases efficiency - but harms PUE.

“Because server fans are powered by the server power supply, their consumption counts as IT power,” points out Davis.


“Suppliers have modeled fan power consumption extensively, and it is a non-trivial amount. Estimates typically range between five percent and 10 percent of total IT power.”

There’s another factor, though. Silicon chips heat up and waste energy due to leakage currents, even when they are idling. This is one reason why data center servers use almost the same power when they are doing nothing - a shocking level of waste, which is not being addressed because the PUE calculation ignores it.

Liquid cooling can provide a more controlled environment, where leakage currents are lower, which is good. Potentially, with really reliable cooling tanks, the electronics could be designed differently to take advantage of this, allowing chips to resume their increases in power-efficiency.

That’s a good thing - but it raises the question of how these improvements will be measured, says Davis: “If the promise of widespread adoption of DLC materializes, PUE, in its current form, may be heading toward the end of its usefulness.”

Reducing water

“The big reason why people are going with two-phase immersion cooling is because of the low PUE. It has roughly double the amount of heat rejection capacity of cold plates or single-phase,” says Capes. But a stronger draw may turn out to be the fact that liquid cooling does not use water.

Data centers with conventional cooling systems often turn on some evaporative cooling when conditions require it, for instance if the outside air temperature is too high. This means running the data center chilled water through a wet heat exchanger, which is cooled by evaporation. “Two-phase cooling can reject heat without using water,” says Capes.

And this may be a factor for LiquidStack’s most high-profile customer: Microsoft. There’s a LiquidStack cooling system installed at Microsoft’s Quincy data center, alongside an earlier one made by its partner Wiwynn. “We are the first cloud provider that is running two-phase immersion cooling in a production environment,” Husam Alissa, a principal hardware engineer on Microsoft’s team for data center advanced development, said of the installation.

Microsoft has taken a broader approach to its environmental footprint than some, with a promise to reduce its water use by 95 percent before 2024, and to become “water-positive” by 2030, producing more clean water than it consumes. One way to do this is to run data centers hotter and use less water for evaporative cooling, but switching workloads to cooling by liquids with no water involved could also help. “The only way to get there is by using technologies that have high working fluid temperatures,” says Capes.

Industry interest

The first sign of the need for high-performance liquid cooling has been the boom in hot chips: “The semiconductor activity really began about eight to nine months ago. And that's been quickly followed by a very dynamic level of interest and engagement with the primary hardware OEMs as well.”

Bitcoin mining continues to soak up a lot of it, and recent moves to damp down the Bitcoin frenzy in China have pushed some crypto facilities to places like Texas, which are simply too hot to allow air cooling of mining rigs. But there are definite signs that customers beyond the expected markets of HPC and crypto-mining are taking this seriously.

“One thing that's surprising is the pickup in colocation,” says Capes. “We thought colocation was going to be a laggard market for immersion cooling, as traditional colos are not really driving the hardware specifications. But we've actually now seen a number of projects where colos are aiming to use immersion cooling technology for HPC applications.”

He adds: “We've been surprised to learn that some are deploying two-phase immersion cooling in self-built data centers and colocation sites - which tells me that hyperscalers are looking to move to the market, maybe even faster than what we anticipated.”

Edge cases

Another big potential boom is in the Edge, where micro-facilities are expected to serve data close to applications. Liquid cooling scores here, because it allows compact systems which don’t need an air-conditioned space. “By 2025, a lot of the data will be created at the Edge. And with a proliferation of micro data centers and Edge data centers, compaction becomes important,” says Capes. Single-phase cooling should play well here, but he obviously prefers two-phase. “With single phase, you need to have a relatively bulky tank, because you're pumping the dielectric fluid around, whereas in a two-phase immersion system you can actually place the server boards to within two and a half millimeters of one another,” he said.

How far will this go?

It’s clear that we’ll see more liquid cooling, but how far will it take over the world? “The short answer is the technology and the chipsets will determine how fast the market moves away from air cooling to liquid cooling,” says Capes.

Another factor is whether the technology is going into new buildings or being retrofitted to existing data centers - because whether it’s single-phase or two-phase, a liquid cooled system will be heavier than its air cooled brethren. Older data centers simply may not be designed to support large numbers of immersion tanks.

“If you have a three-floor data center, and you designed your second and third floors for 250 pounds per square foot of floor loading, it might be a challenge to deploy immersion cooling on all those floors,” says Capes. “But the interesting dynamic is that because you can radically ramp up the amount of power per tank, you may not need those second and third floors. You may be able to accomplish on your ground floor slab what you would have been doing on three or four floors with air cooling.”

Some data centers may evolve to have liquid cooling on the ground floor’s concrete slab base, while any continuing air cooled systems sit on the upper floors. But new buildings may be constructed with liquid cooling in mind, says Capes: “I was talking to one prominent colocation company this week, and they said that they're going to design all of their buildings to 500 pounds per square foot to accommodate immersion cooling.”

Increased awareness of the water consumption of data centers may push the adoption faster: “If other hyperscalers come out with aggressive targets for water reduction like Microsoft has, then that will accelerate the adoption of liquid cooling even faster.”

If liquid cooling hits a significant proportion of the market, say 20 percent, that will kick off “a transition, the likes of which we’ve never seen,” says Capes. “It's hard to say whether that horizon is on us in five years or 10 years, but certainly if water scarcity and higher chip power continue to evolve as trends, I think we'll see more than



Is PUE too long in the tooth?

> What’s next for energy efficiency metrics in the data center industry?

Dan Swinhoe, News Editor

Since it was first proposed by Christian Belady and Chris Malone and promoted by the Green Grid in 2006, power usage effectiveness (PUE) has become the de facto metric by which the energy efficiency of data centers is measured. It is easy to calculate - a ratio between total facility power and power consumed by the IT load - and it provides a simple single metric that can be widely understood by non-technical people.

But as leading facilities move to PUEs below 1.1, and more companies look to achieve carbon-neutral status, it’s time to ask: should we move on from PUE? And if so, to what?

PUE: a victim of its own success

In a perfect world, PUE would be 1; every kilowatt of energy coming into the data center would be used only to power the IT hardware. And while a perfect 1 is likely impossible, being able to quickly and easily measure energy use has driven improvements. The Uptime Institute says the industry average for PUE has dropped from around 2.5 in 2007 to around 1.5 today. But even Uptime has said the metric is looking ‘rusty’ after so long.

“PUE has been one of the most popular, easily understood and, therefore, widely used metrics since the Green Grid standard was published in 2016 under ISO/IEC 30134-2:2016,” says Gérard Thibault, CTO of Kao Data. “I believe that its simplicity has been key, and in many respects, customers use it as a metric to evaluate whether they are being charged cost-effectively for the energy their IT consumes, and how sustainable they are.

“In an industry that is somewhat complex, PUE has become accepted by customers as a clear indicator of energy efficiency, becoming inextricably linked to sustainability credentials.”

However, after more than a decade, PUE could be starting to become a victim of its own success, argues Malcolm Howe, partner at engineering firm Cundall, whose data center clients include Facebook.

“PUE has been an immensely beneficial tool for the industry. And in the guise of the role that it was originally intended for, it has driven very significant improvements in energy efficiency in data centers,” he says. “People can quite readily get their heads around it on a very superficial level, but it's got loads of things wrong with it when you get below the surface.”

The limitations of PUE

As we near the limits of what conventional cooling can achieve, and data center owners and operators look towards carbon neutrality or even negativity, PUE starts to lose its value as a metric. Some of the most efficient data centers are starting to achieve PUEs of 1.1 or lower; the EU-funded Boden Type data center in Sweden has recorded a 1.018 PUE, while Huawei claims its modular data center product has an annual PUE of 1.111. Google says its large facilities average 1.1 globally, but can be as low as 1.07.

As facilities become more efficient, measuring improvements with PUE becomes harder and gains become increasingly incremental. “We're now down to PUEs of 1.0-whatever; we need more precise methods of measurement,” says Howe. “We're focused on trying to achieve these sustainability targets and net-zero, and we're doing it with a tool that is a blunt instrument; we're using a metric that doesn't really capture the impact of what's going on.”

An example of its bluntness is that PUE doesn’t capture what is happening at rack level. IT power to the rack can feed rack-level UPS or on-board fans; energy that could, and probably should, be added to the debit sheet, yet isn’t. Howe notes that in a conventional air-cooled rack, as much as 10 percent of the power delivered to the IT equipment is consumed by the server and PSU cooling fans. Some companies can make their PUE look better this way, in what he describes as “creative accountancy.”

“Not all of that power is actually being used for IT. And I think a lot of people lose sight of that,” he says. “A lot of modern operators are putting UPS at rack level; they can immediately make their PUE look better by shifting the UPS load out of the infrastructure power and putting it into the IT power. You're just moving it from one side of the line to the other; you haven't actually changed the performance.”
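Howe's point about rack-level UPS is easy to see with numbers. In this hypothetical sketch (the loads and loss figures are assumptions), shifting UPS losses across the IT/facility boundary improves the reported PUE without saving a single watt:

```python
# "Creative accountancy": moving UPS losses from the facility side to the
# rack side flatters PUE. All figures are assumed for illustration.

def pue(it_kw: float, facility_kw: float) -> float:
    return (it_kw + facility_kw) / it_kw

it_load, ups_loss, cooling = 1000.0, 50.0, 300.0   # kW

central = pue(it_load, cooling + ups_loss)   # UPS losses counted as facility power
in_rack = pue(it_load + ups_loss, cooling)   # same losses now metered as "IT power"

print(f"Centralized UPS: PUE {central:.2f}")                          # 1.35
print(f"Rack-level UPS:  PUE {in_rack:.2f}")                          # 1.29
print(f"Total power either way: {it_load + ups_loss + cooling:.0f}kW")  # unchanged
```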

A marketing metric

Howe also notes that PUE’s simplicity and widespread use across the industry has led to it being used as a competitive weapon as much as an internal improvement metric. “Over time it has been seized upon as a marketing tool,” he says. “Operators are using it to bring in customers, and customers are going along with that and giving minimum PUE standards in the RFPs.”

Many operators will happily tout the PUE of their latest facilities, and use it as a lure for sustainability-conscious customers. “PUE was never really intended for that, it was always an improvement metric,” he says. Operators should “measure the PUE, implement changes, and then measure it again to assess the effectiveness.”

“It’s taken on an importance and a profile within the industry that was never intended, for which it's not really suited.”

Liquid cooling and PUE

As we reach the limit of air cooling, liquid cooling is becoming an increasingly popular and feasible alternative. But as adoption increases, the utility of PUE as a yardstick lowers. “Even if you had a perfectly efficient fan motor, it is going to consume power,” says Howe. “We've got to the limit of what is achievable within the physics of what we've been doing with air cooling.”

In an opinion piece for DCD, Uptime Institute research analyst Jacqueline Davis recently warned that techniques such as direct liquid cooling (DLC) profoundly change the profile of data center energy consumption and “seriously undermine” PUE as a benchmarking tool - and could even “eventually spell its obsolescence” as an efficiency metric.

“While DLC technology has been an established yet niche technology for decades, some in the data center sector think it’s on the verge of being more widely used,” she said. “DLC reshapes the composition of energy consumption of the facility and IT infrastructure beyond simply lowering the calculated PUE to near the absolute limit.”

Davis noted that most DLC implementations achieve a partial PUE of 1.02 to 1.03, by lowering the facility power. But they also reduce energy demands inside the rack by doing away with fans - a move that reduces energy waste, but also actually makes PUE worse.

“PUE, in its current form, may be heading toward the end of its usefulness,” she added. “The potential absence of a useful PUE metric would represent a discontinuity of historical trending. Moreover, it would hollow out competitive benchmarking: all DLC data centers will be very efficient, with immaterial energy differences.

“Tracking of IT utilization, and an overall more granular approach to monitoring the power consumption of workloads, could quantify efficiency gains much better than any future versions of PUE.”

TUE: One metric among many, or the new PUE?

If PUE really is too long in the tooth, is there a ready-made replacement that has the simplicity of PUE but can provide a better picture? Howe says Total-Power Usage Effectiveness (TUE) can be a more effective metric for calculating a data center’s overall energy performance.

TUE is obtained by multiplying IT Power Usage Effectiveness (ITUE), a server-specific value, by PUE, a data center infrastructure value. ITUE accounts for the impact of rack-level ancillary components such as server cooling fans, power supply units, and voltage regulators.

“ITUE is like a PUE at rack level, addressing what is going on in a way that PUE on its own does not. It's saying this is how much energy is going to the rack, and this is how much of that energy is actually going to the electronic components,” he says. “It's giving you a much more precise understanding of what's going on at that level; which is you've either got some server fans spinning around, or you've got some dielectric pumps, or you've got something else which may be completely passive.”

TUE and ITUE aren’t new; Dr. Michael Patterson (Intel Corp.), the Energy Efficiency HPC Working Group (EEHPC WG), and others proposed the two alternative metrics around a decade ago.
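A worked example helps show what TUE adds. The figures below are invented for illustration: an air-cooled server losing 10 percent of its input to fans and power conversion in a PUE 1.2 facility, against a liquid-cooled server losing three percent in a facility with a partial PUE of 1.05.

```python
# Worked example of TUE = ITUE x PUE, with assumed numbers.

def itue(rack_kw: float, electronics_kw: float) -> float:
    """Power into the rack divided by power reaching the electronic components."""
    return rack_kw / electronics_kw

def tue(itue_value: float, pue_value: float) -> float:
    """Total-power usage effectiveness: ITUE multiplied by PUE."""
    return itue_value * pue_value

air_cooled = tue(itue(10.0, 9.0), 1.20)   # 10% of rack power lost to fans, PSU, VRMs
liquid     = tue(itue(10.0, 9.7), 1.05)   # fans gone, ~3% power-conversion loss

print(f"Air cooled server, PUE 1.20:     TUE {air_cooled:.2f}")   # ~1.33
print(f"Liquid cooled, partial PUE 1.05: TUE {liquid:.2f}")       # ~1.08
```

Both sites could advertise a healthy PUE; TUE exposes the energy spent inside the rack before it reaches the silicon.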



Compared to PUE, TUE’s adoption has been slow. The equations aren’t much harder to calculate, but TUE requires a greater understanding of the IT hardware in place - and that is something many colo operators won’t have a lot of visibility into.

Howe says Cundall is starting to have conversations about TUE, and moving on from PUE, with its customers - partly as it looks to ensure all its projects are heading towards net-zero carbon, but also as customers look to achieve their own sustainability goals.

As more companies look to deploy liquid cooling - which takes efficiency beyond PUE - more may opt for TUE as a way to better illustrate their sustainability credentials, amid a landscape where most facilities operate at PUEs in the low 1.1s. “Operators are going to want to try and position themselves as being ahead of the game and doing more and achieving more. Previously, people have been comparing each other using PUE, but you might now get companies saying ‘my TUE is X.’”

PUE & TUE: just parts of a wider sustainability picture

While TUE can provide a more granular view of energy efficiency, even then it is still just one part of the total sustainability package. There is no one metric to capture the entire picture of a data center’s sustainability impact.

Many of the large operators now offset their energy use with the likes of energy credits and power purchase agreements, to ensure their operations are powered directly by renewable energy, or at least matched with equivalent energy contributing to local grids. Taking this further, Google and others such as Aligned and T5 are beginning to release tools that can show a more granular breakdown of energy use by facility, showing a more accurate picture of renewable energy use hour by hour.

Companies are increasingly touting their Water Usage Effectiveness (WUE) - a ratio which divides the annual site water usage in liters by the IT equipment energy usage in kilowatt-hours (kWh) - to illustrate how little water their facilities use. Carbon Usage Effectiveness (CUE) aims to measure the CO2 emissions produced by the data center against the energy consumption of its IT equipment.
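For reference, both ratios are simple divisions. The annual figures below are invented purely to show the arithmetic:

```python
# WUE and CUE calculations, with invented annual figures for illustration.

def wue(water_liters: float, it_energy_kwh: float) -> float:
    """Water usage effectiveness: liters of water per kWh of IT energy."""
    return water_liters / it_energy_kwh

def cue(co2_kg: float, it_energy_kwh: float) -> float:
    """Carbon usage effectiveness: kg of CO2 per kWh of IT energy."""
    return co2_kg / it_energy_kwh

it_energy = 50_000_000   # kWh of IT energy over a year (assumed)
water     = 90_000_000   # liters of water consumed on site (assumed)
co2       = 12_000_000   # kg of CO2 attributed to the site's energy (assumed)

print(f"WUE: {wue(water, it_energy):.2f} L/kWh")      # 1.80
print(f"CUE: {cue(co2, it_energy):.2f} kgCO2/kWh")    # 0.24
```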

Schneider Electric recently published a framework document, designed to help data center companies report on their environmental impact and assess their progress towards sustainability. The paper goes beyond the industry's normal focus on PUE, and sets out five areas to work on across 23 metrics.

“Environmental sustainability reporting is a growing focus for many data center operators,” Pankaj Sharma, EVP of Schneider's secure power division, told DCD at the time. “Yet, the industry lacks a standardized approach for implementing, measuring, and reporting on environmental impact.”

Amongst those 23 metrics, Schneider includes greenhouse gas (GHG) emissions (across Scopes 1, 2, and 3); water use, both on-site and in the supply chain; waste material from data center sites generated, landfilled, and diverted; and even species abundance, to measure biodiversity in the surrounding land.

In 2020 the Swiss Datacenter Efficiency Association launched the SDEA Label, which aims to rate a facility’s efficiency and climate impact in an ‘end-to-end’ way, taking into account not only PUE but also infrastructure utilization and the site’s overall energy recycling capabilities.

Despite the competition from young upstarts, PUE is still seemingly the top dog of sustainability metrics for now. “In Harlow, we operate at industry-leading levels of efficiency, and we use PUE to help us monitor, measure, and optimize our customers’ energy footprints,” says Kao’s Thibault. “We are aware of ITUE and TUE, but believe that PUE still offers our industry a widely adopted metric that can be used by new and legacy data centers to understand their energy requirements, to optimize their ITE environments and reduce waste.

“One might arguably say that our industry has been too focussed on PUE, [but] alternative metrics will require greater data, parameters, and discussion between operators and customers in order to define new standards and drive adoption. Many legacy operators also may not have the means to achieve anything further than improving PUE levels, which is another consideration for our industry. Until that level of granularity is available, PUE remains the best option.”

Is the software green?

Metrics are numerical indicators, and don’t easily measure subjective influences. What the workloads are doing is beyond the remit of data center designers, engineers, and even operators, but that energy - the 1 of PUE’s 1.x - should not escape scrutiny when looking at the wider sustainability picture.

“PUE provides a good measure of how much power they consume, but not how they use it!” says Thibault. “As the requirement for sustainability and efficiency increases, software and machine code could potentially enable greater efficiencies. Legacy software, for example, consumes far more energy than newer applications, and this is an area that could change the energy efficiency landscape.”

Thibault has previously told DCD that colo operators can’t directly affect the 1 of the PUE, because it’s their customers’ business. But he notes there should be more focus on improving the efficiency of the code, to really make that initial ‘1’ of PUE the “most efficient that it can be.”

Organizations such as The Green Software Foundation are looking to promote greater sustainability in software through more efficient coding.


But despite the backing of Accenture, ThoughtWorks, the Linux Foundation, Microsoft, and its subsidiary GitHub, the ‘green coding’ concept is still relatively niche amid the realities of business operations.

“The measure of the performance of a data center is not how much power the IT equipment is consuming, but what data processing work it is actually doing,” notes Howe. “[TUE] still doesn't talk about what the server itself is doing; all of these calculations assume that the energy that is delivered to the electronic components is doing useful work, rather than having a server that's just idling.”



Cooling with water

> Is it possible to run a conventional cooling system - but use water instead of harmful refrigerant gases? We found a firm that thinks so

Peter Judge, Global Editor

Conventional cooling systems have some complex environmental issues. Traditional air conditioning units consume large amounts of energy, and use HFCs (hydrofluorocarbons), which are powerful greenhouse gases. These gases are being phased out by industry agreements such as the European F-gas regulations. In temperate climates, “adiabatic” cooling systems remove heat by evaporating water, but data centers’ excessive thirst for water is also coming in for criticism.

But what if it were possible to have a refrigeration system that uses water as the working fluid - and uses it in a closed circuit, so it doesn’t get consumed?

Water refrigeration

That’s what German tech firm Efficient Energy claims to have with its eChiller: “The eChiller is the only refrigeration machine that works with pure water as a refrigerant and has an energetic performance that exceeds the state of the art by a factor of four to five,” claimed Juergen Suss in a video on the company’s site. Suss served as CEO and CTO in the company’s early days from 2013, before leaving to join Danfoss.

The current management is finding a boost from changes in the industry: “The refrigerant market is currently undergoing a huge transformation,” says Thomas Bartmann, sales director at Efficient Energy. “With the goal of cutting emissions, the EU enacted the F-Gas Regulation, which will drastically restrict the use of traditional, HFC-based refrigerants, forcing operators to fundamentally rethink their cooling strategies.”

“We’re feeling the rocketing demand for natural refrigerants and environmentally friendly refrigeration technology firsthand,” he adds.

Perhaps to increase familiarity for those used to conventional refrigerants, Efficient Energy likes to refer to water as R718, its name in the ASHRAE/ANSI standard listing of chemical refrigerants. And the device is somewhat familiar: the eChiller has the same components as a conventional chiller - an evaporator, a compressor, a condenser, and an expansion device. It uses the direct evaporation of water in a near-vacuum in a closed circuit, cooling the primary circuit through heat exchangers.

Bartmann says Efficient has had products in operation since the end of 2014. A new chiller, the eChiller 120, was introduced in 2020, with 120kW of cooling power. The system is in use in a few data centers already, including BT’s Hamburg facility, which installed three of Efficient’s earlier 40kW units in 2017.

The BT data center uses cold aisle cooling, with racks arranged in 100kW “cubes” connected to power and cooling systems. There’s a chilled water network, and the eChillers were installed to cool water in that loop. The data center has an energy management system, linked to a building management system, which determines how many of the eChillers are actually needed at a given time. The system keeps logs, and Efficient handles system maintenance. A trial showed the system could reliably deliver chilled water at 16°C.

At the Sparkasse savings bank in Baden-Württemberg, Efficient provided cooling to a warm-aisle containment system. In this case, the bank was gradually adding equipment in a building with a maximum capacity of 70kW. Efficient started with 8-10kW and expanded that to 35kW.

With those and other projects under its belt, the company has established itself first in the DACH region (Germany, Austria, and Switzerland), and in 2021 began setting up a network of distributors across Europe, with partners signed in the UK, France, Sweden, and Norway. Bartmann says Efficient is being approached by prospective partners: “As the only supplier of series-produced chillers that use water as a climate-neutral refrigerant, we see ourselves as a pioneer of sustainable refrigeration solutions.”
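As a rough illustration of what one of those 40kW units is doing in the BT loop, the sketch below estimates the chilled-water flow required, assuming a 6K temperature rise across the IT heat exchangers (the delta-T is an assumption; only the 40kW rating and the 16°C supply temperature come from the installation described above):

```python
# Rough sizing of a chilled-water loop served by one 40kW unit.
# The 6K temperature rise is an assumed figure for illustration.

WATER_SPECIFIC_HEAT = 4.19  # kJ/(kg·K)

def flow_kg_s(heat_kw: float, delta_t_k: float) -> float:
    """Water flow needed to absorb heat_kw with a delta_t_k temperature rise."""
    return heat_kw / (WATER_SPECIFIC_HEAT * delta_t_k)

flow = flow_kg_s(heat_kw=40.0, delta_t_k=6.0)
print(f"One 40kW unit: ~{flow:.1f} kg/s, about {flow * 3.6:.1f} m3/h of 16°C water")
```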



Cool vibrations

> A new cooling technology debuted at COP26 - one which harnesses thermal energy to create vibrations

Peter Judge, Global Editor

Outside a data center in Glasgow stands a bizarre-looking device. A metal cylinder, some four meters high, is studded with metal paddles. The paddles are all waving up and down, like the leaves of some sort of mechanical tree.

It is, hands down, the most bizarre and alien-looking piece of data center kit we’ve ever seen, but the company behind it, Glasgow-based Katrick Technologies, believes it is set to take the data center world by storm. “Our genuinely disruptive passive cooling system is set to revolutionize the data center market,” promises Katrick’s video about the technology, “transforming data centers from energy hungry centers, into eco-friendly data providers.”

The thermal vibration bell (TVB) came seemingly out of nowhere, popping up at a Glasgow data center during the COP26 climate change conference in November, accompanied by stirring claims from Katrick. With a launch event during COP26, the bell has drawn members of the British and Scottish parliaments, including Ivan Paul McKee, Minister for Business, Trade, Tourism, and Enterprise in the Scottish government.

The bell is “a new refrigeration cycle,” according to Karthik Velayutham, founder and co-CEO of Katrick, which has a patent application for the system, whose internals fully match the novelty of its exterior. The system uses two different cooling fluids, Velayutham explained to DCD, and it works because of their different properties.

In a large drum at the base, hot water from the data center’s primary heat removal system passes through a heat exchanger, giving up its heat to a coolant which surrounds it. This coolant has a high density, so it remains in the lower part of the bell, but has a low boiling point, so it starts to bubble.

"Initial results have been very pleasing. We think we can save up to 70 percent of our cooling costs, and 25 percent of our overall energy usage"



These bubbles rise through a grille into the top half of the bell, where they pass through a second coolant, with a low density and a high boiling point. As well as taking the heat energy from the bubbles, the top part of the bell harnesses their mechanical energy. The bubbles agitate paddles in the coolant, each of which connects to one of the waving leaves outside the bell.

Each paddle has two ends: one blade inside the bell absorbs heat and is moved by the coolant bubbles. The second blade, outside the bell, gives up its heat - a process accelerated by the agitation of the paddle. The outside paddles radiate heat, and the motion created by the vibrations accelerates that heat loss, explained Velayutham.

As a passive system, the bell simply uses the heat input to drive the cooling mechanism. This is in contrast to a conventional chiller which, like any air conditioning system, needs electricity to drive the mechanical energy necessary to deliver cooling.

It’s part of a portfolio of thermo-mechanical systems which Katrick is developing, along with an energy harvesting system - effectively a novel wind turbine, whose unconventional fan blades can gather energy even at low wind speeds. As well as placing its thermal bells at data centers, Katrick wants to build walls of these turbines alongside roads and airports, to passively gather green electricity.

The unidirectional string mirror

Underlying many of Katrick’s inventions is the uni-directional string mirror (UDSM), a multi-point conversion system which captures vibrations from a given surface area, converges them to a focal point, and converts them to energy. In its wind patents, the company uses the UDSM to harness small amounts of vibrational energy, which can create usable power.

As the company explains: “vibration to power is a known process which has been in use for over a century. When you talk over your phone, the sound (which is a form of vibration) is captured by the microphone and converted to electricity.”

The company believes that UDSM is efficient enough to create usable energy from a variety of sources, including wind, waves, and heat. “Heat is a low-quality form of energy with atoms moving randomly,” says Katrick’s site. “By capturing and converting them into mechanical vibrations they are transformed into a highly organized form of energy. We can extract this energy efficiently through smaller pockets to capture and provide higher quality power.”

"We are doing testing to apply this technology to any type of data centers, whether it's air-cooled, or water-cooled or liquid-cooled" In the case of the thermal vibration bell, the energy may be organized, but it is not harnessed further. Although the paddles move, their motion is not converted into electrical energy or any other form of useful power. Instead, the energy is used directly, to help radiate heat, replacing energy that would otherwise be required to drive conventional chillers. Out of research Velayutham came up with the UDSM concept while studying waves and looking for a way to harness their energy. It was tested at the University of Strathclyde’s Naval Architecture, Oceans, and Marine Energy (NOAME) department, which established that a UDSM device can capture and converge vibrations. A further NOAME project, under the Energy Technology Partnership (ETP) program, developed the working concept of UDSM panels, which capture wind energy. A three-month project at NOAME confirmed mechanical vibration can be captured by a panel, converging the vibration to a focal point with an energy increase of over 250 times, a lens effect later increased to 400 times. The ETP program also brought in Glasgow Caledonian University, and the bi-fluid thermal vibration bell heat engine concept was developed. Internationally, also worked on the concept with thermal engineers at Indian consultancy Energia India to prove heat energy can be converted into fluid vibrations in the bi-fluid bell. That project found the heat engine can convert up to 30 percent of thermal energy to mechanical vibrations. Test drive partner No matter how good, ideas like this can languish on the shelf, unless there’s an industrial partner willing to take them on. Katrick signed with local data center provider Iomart, to test the TVB at its Glasgow data center during October 2021. Iomart, like many other providers, has been keen to adopt renewable energy, and also reduce the amount of energy it uses to cool its data centers. The bell is several steps more radical than simply moving to renewable power

To create the test system, Katrick decided to build a module with a capacity of 120kW - a substantial but not excessive capacity, and one which would make a measurable difference to most data centers’ thermal performance. Katrick visited multiple facilities to come up with a useful design, says Velayutham: “We measured their layouts, and we believe 120kW is a very good number.” The actual prototype was built by a boutique manufacturing company in Glasgow, but could easily be scaled up.

Retrofitting to existing sites

Katrick believes the system can replace conventional chillers, and can be retrofitted where those chillers are connected to a chilled water cooling circuit. “Our passive cooling solution is completely retrofittable, providing an end-to-end solution to passively cool refrigerants from existing systems,” says a company video.

Velayutham says it could be applicable in many climates, but concedes it will be most applicable in temperate regions, like the Northern hemisphere countries where most of the world’s data centers are located. “Technologies which are easily adaptable should be accessible to everyone around the world,” says Velayutham. “We are doing testing to apply this technology to any type of data centers, whether it's air-cooled, or water-cooled or liquid-cooled, even systems that would come in five years' time.”

Made in Scotland

But any business building the system is likely to remain in Scotland, says Velayutham. A graduate of Strathclyde University, he has worked in the Glasgow area, in marine engineering as well as energy systems, since then. He’s a firm believer in keeping the technology where it was born, he told DCD: “We have done extensive research on the costing. We want to make everything in Glasgow. I started this in Glasgow, so I want to stick to Glasgow.”



