EIBI June 2021

Data Centre Management

David Craig is chief executive officer at Iceotope

When air just won’t cool it

David Craig believes that liquid cooling of servers is the most energy-efficient way to drive the data centre industry forward as energy use in the sector continues to surge

The data centre market uses massive amounts of energy and water in its role as a key economic and research platform for this data-led era. The last year of corporations shifting to working from home, home schooling and binge-watching content has seen immense growth in data generation and consumption. The market will continue to experience huge growth, partly because the installed base of online devices is rising from 18.4bn in 2018 to an expected 29.3bn by 2030. To support this expansion the network of data centres continues to grow, and with it the power, cooling and connectivity that maintains it.

Gartner predicts that public cloud services will grow to a $368bn market by 2022, with all major countries experiencing between 15 per cent and 33 per cent growth. Furthermore, the growth of small edge data centres, typically drawing up to 10kW per site, will require a roll-out of tens of thousands of units across individual countries to monitor, analyse and act on specific applications such as intelligent roads and vehicles, hospitals and many more.

The Uptime Institute recently stated that average data centre PUE in 2020 was 1.58, a figure that has not significantly improved in the last seven years. Many data centre developers are still wedded to a chilled-air approach to technology spaces, rolling out the older style of fan-assisted servers. This legacy approach consumes large amounts of water and up to 30 per cent of the data centre’s energy in cooling, reducing the total IT load available. A more enlightened approach, whether in data centres or at the edge, is chassis-level precision immersion liquid cooling technology, which has a significantly lower PUE of 1.03.
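PUE (power usage effectiveness) is the ratio of total facility energy to the energy that reaches the IT equipment, so the two figures above translate directly into overhead. A minimal sketch of that arithmetic in Python, using the quoted PUE values and an illustrative 1MW IT load (the load figure is an assumption, not from the article):

# Sketch: what the quoted PUE figures imply for facility overhead.
# PUE = total facility energy / IT equipment energy.
# The 1MW IT load is illustrative only.

def total_facility_kw(it_load_kw: float, pue: float) -> float:
    """Total facility power implied by an IT load and a PUE value."""
    return it_load_kw * pue

it_load_kw = 1_000.0  # assumed 1MW of IT equipment

for label, pue in [("air-cooled, 2020 average", 1.58),
                   ("precision immersion", 1.03)]:
    total = total_facility_kw(it_load_kw, pue)
    overhead = total - it_load_kw  # cooling, power distribution losses, etc.
    print(f"{label}: PUE {pue} -> {total:.0f}kW total, "
          f"{overhead:.0f}kW overhead ({overhead / total:.0%} of the facility)")

# air-cooled, 2020 average: PUE 1.58 -> 1580kW total, 580kW overhead (37% of the facility)
# precision immersion: PUE 1.03 -> 1030kW total, 30kW overhead (3% of the facility)

On these numbers the legacy approach spends more than a third of facility power outside the IT load, consistent with the ‘up to 30 per cent in cooling’ figure above once power-distribution losses are also counted.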

Liquid cooling is 1,000 times more efficient than air cooling and eliminates the requirement for refrigerants. It also removes the need for server fans while dramatically increasing the compute density that can be effectively managed in each server rack. Within the technology space, liquid cooling in HPC environments allows operational temperatures to be guaranteed within tightly configured server clusters.

The classic design of a data centre uses cool airflow across the hot components within server racks to maintain the operational temperature within the technology suite. It is often the case that more energy is used to power the mechanical systems cooling the technology hall than to run the servers, IT equipment and network equipment. At a time when AI (artificial intelligence)-based applications require high-performance computing (HPC), air cooling, with the excessive water it uses, is no longer the most effective strategy for cooling the technology suite. Pressure from investors, customers, legislators and the public is causing the industry to rethink data centre cooling strategy.

Template to optimise energy

Liquid cooling of servers is the most energy-efficient way to drive the data centre industry forward. It provides a template for optimising energy use in the technology suite, so that more power drives the applications on the servers rather than the cooling systems. As local ‘edge’ data centre designs are implemented, these too benefit from liquid cooling over air-cooling techniques in what are much more constricted spaces. Today’s view on sustainability and on reducing CO2 emissions from data centres is driven partly by cost considerations, as energy and water become more expensive and less available, and partly by the threat of legislation to reduce emissions.

The massive increase in AI and ML (machine learning) applications, which require HPC and graphical processing unit (GPU)-rich servers to process compute-dense workloads, has also increased average server power usage from 5kW/rack to upwards of 15kW/rack, and in instances like Iceotope’s Ku:l 2 up to 42kW/rack. These HPC configurations no longer work effectively with air-cooled processes.

The HPC market was valued at $39.1bn in 2019 and is expected by research companies to grow at a compound annual growth rate (CAGR) of 6.5 per cent from 2020 to 2027. This market is also referred to as supercomputing and involves the use of increasingly large parallel-processing clusters to push data through at high speed and accuracy, reducing the time to results. This capability is making HPC a must-have for many corporations, governments and research facilities, which is contributing to the market’s growth.

As new-build and legacy data centres consider the requirements of accommodating HPC environments, with their concentrations of servers, the heat created must be effectively removed and dealt with. Precision immersion technology can now capture and efficiently reject over 97 per cent of the heat generated, and liquid cooling techniques have become more flexible in the devices that can be accommodated into the systems.

To greatly increase energy efficiency in legacy data centres, it is possible and cost-effective to retrofit chassis-level liquid-cooled servers into similar racks in the technology suites. In fact, a shared space can greatly increase the efficiency of the suite: it is not unusual for technology suite layouts to have void spaces where air cooling is not effective, and these areas can be used by liquid-cooled technology, increasing total space utilisation and creating a more efficient suite.
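The growth and heat-capture figures above can be sanity-checked with some simple arithmetic. A minimal sketch, assuming the 6.5 per cent CAGR compounds annually on the $39.1bn 2019 base through 2027, and that the 97 per cent capture figure applies to a fully loaded 42kW rack:

# Sketch: worked numbers for the growth and heat-capture figures above.
# Assumption: the 6.5% CAGR compounds annually on the $39.1bn 2019 base,
# giving eight years of growth to 2027.

base_bn, cagr, years = 39.1, 0.065, 2027 - 2019
projected_bn = base_bn * (1 + cagr) ** years
print(f"Implied HPC market in 2027: ~${projected_bn:.1f}bn")  # ~$64.7bn

# Assumption: the 97% capture figure applies to a fully loaded 42kW rack.
rack_kw, capture_rate = 42.0, 0.97
captured_kw = rack_kw * capture_rate
print(f"Heat captured per rack: ~{captured_kw:.1f}kW of {rack_kw:.0f}kW")  # ~40.7kW

On those assumptions roughly 1.3kW per 42kW rack escapes into the room air, which is what makes retrofitting liquid-cooled racks into otherwise hard-to-cool void spaces viable.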
