DCD Cooling Supplement



INSIDE

The hottest and coolest
> A few key locations are defining the cooling market, and how it evolves

The great refrigerant shortage
> New EU regulations are putting the squeeze on data center refrigerants. Take note

Getting into hot water
> An energy efficient supercomputer could hold the key to the future

At the edge, liquid cooling returns
> As the edge is built out, we may have to revisit old ideas, creating something new


Best server chilled. CyberAir 3PRO from STULZ stands for maximum cooling capacity with minimum footprint. Besides ultimate reliability and large savings potential, CyberAir 3PRO offers the highest level of adaptability due to a wide range of systems, variants and options. www.stulz.de/en/cyberair-3-dx




Keeping cool without costing the Earth

Sebastian Moss Senior Reporter

No matter what form they take, or where they are located, data centers will need to be cooled. The trick, says Sebastian Moss, lies in how you do it

Fish swimming off the coast of the Orkney Islands in Scotland are due for a surprise. Should they head into the depths, they may come across a strange object lying on the sea floor: a giant cylinder, vibrating ever so slightly, and warm to the touch. This bizarre sea creature is not the kraken of old, nor is it a sunken ship or Atlantean artifact. No, it is - perhaps - a glimpse of the future.

The cylinder represents the latest efforts by Microsoft to operate data centers under the sea, building a digital kingdom among the crabs. 'Project Natick' began as a whitepaper in 2013, starting in earnest the next year. By 2016, Microsoft was ready for a test in the wild, running a three-month trial off the Pacific coast of the US - a single server rack in an eight-foot (2.4m) diameter submarine vessel, filled with inert nitrogen gas.

Now it looks like the company is ready to shift the project into high gear, submerging a 12-rack cylinder featuring 864 servers and 27.6 petabytes of storage into the North Sea. The icy water is expected to provide more than enough cooling for the data center, which is powered by a cable connected to the shore. This power will come from the European Marine Energy Centre’s tidal turbines and wave energy converters, which generate electricity from the movement of the sea.

After this test, Microsoft envisions larger roll-outs, dropping clusters of five cylinders at a time. Indicatively, last year saw the company patent the concept of artificial reefs made out of data centers. Things, it seems, are going to remain confusing for the fish.

On land, removing heat is a very different challenge, and one that may be about to get a whole lot more complicated - new EU regulations, designed to cut greenhouse gas emissions, have had the unintended side-effect of causing a data center refrigerant shortage (p9). New refrigerant gases may be the way forward, but they too come with knock-on effects.

Another approach could see the gases removed entirely, relying on hot water sent straight to the chip - it seems counterintuitive, but for the right power densities, this is surprisingly effective. Plus, it could one day lead to super-dense computers, the likes of which we have never seen before (p11).

Another company is aiming even higher - in fact, its ambitions are literally out of this world. By harnessing the scientific phenomenon of sky radiative cooling, it hopes to beam excess heat into space, taking advantage of the interstellar heat sink that surrounds us (p15). Closer to home, there are those wondering whether liquid cooling could find a home at the edge, with disused cupboards one day set to house micro data centers, cooled by water or oil (p6).

Whether you pursue any of these approaches, or try something else entirely, may depend less on your budget than on your location. This sector is led by a few key markets, locations with specific ambient temperatures and humidity levels, sometimes blessed with natural resources, and sometimes cursed with perennial challenges like land scarcity. Understanding these markets, and their requirements, is key to understanding how to keep your cool (p3).

Facebook embraces Nortek's membrane

Facebook is rolling out a new indirect cooling system, developed in collaboration with Nortek Air Solutions. According to Facebook, StatePoint Liquid Cooling (SPLC) can reduce data center water usage by more than 20 percent in hot and humid climates, and by almost 90 percent in cooler climates, when compared to alternative indirect cooling solutions.

In development since 2015, the technology - which has been patented by Nortek - uses a liquid-to-air heat exchanger, which cools the water as it evaporates through a membrane separation layer. This cold water is then used to cool the air inside the facility, with the membrane layer preventing cross-contamination between the water and air streams.

“The system operates in one of three modes to optimize water and power consumption, depending on outside temperature and humidity levels,” Facebook's thermal engineer Veerendra Mulay said. Facebook uses direct evaporative cooling systems as long as the climate conditions permit this. “But the SPLC system will allow us to consider building data centers in locations we could not have considered before,” he added.
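Facebook has not published SPLC's actual switchover points, but the three-mode logic Mulay describes can be sketched roughly as follows; the mode names, temperature and humidity thresholds here are invented for illustration only.

```python
# Rough sketch of a three-mode controller in the spirit of StatePoint Liquid Cooling.
# Mode names and thresholds are invented; Facebook/Nortek have not published theirs.

def splc_mode(outdoor_temp_c: float, relative_humidity: float) -> str:
    """Pick an operating mode from outside conditions (illustrative only)."""
    if outdoor_temp_c < 18:
        # Cool weather: the membrane exchanger can reject heat with little or no
        # evaporation, so water use is minimal.
        return "dry (economizer) mode"
    if outdoor_temp_c < 28 and relative_humidity < 0.60:
        # Warm but dry: evaporate water through the membrane to chill the loop.
        return "adiabatic (evaporative) mode"
    # Hot and humid: evaporation alone cannot reach the target water temperature,
    # so the unit works harder and trim cooling tops up the loop.
    return "super-evaporative / trim mode"

if __name__ == "__main__":
    for t, rh in [(10, 0.7), (24, 0.4), (33, 0.8)]:
        print(f"{t} C, RH {rh:.0%} -> {splc_mode(t, rh)}")
```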



The hottest and coolest data center locations

Peter Judge Global Editor

Climate may affect data centers, but the overriding factor will be where the demand is, says Peter Judge

Data centers are affected by many things. Climate can influence the choice of location, but there are usually many additional factors such as the state of the local economy, proximity to consumers, availability of power and networking connections and, very importantly, politics.

In this article we look at some key data center locations, and draw out the patterns behind the most exciting (hottest) and most fascinating (coolest) locations on the planet. Wherever data center demand is strong, those building the facilities have no choice.

They must meet the environmental needs through a series of technology choices and trade-offs, designed to ensure the facility delivers a reliable digital service to its ultimate consumers.

Energy can make up more than half the overall cost of the data center during its lifetime, and operators will do everything in their power to reduce their expenses. This means picking technology which will run the facility more efficiently - but also making geographical choices, such as going where the energy costs are cheap (or where there is a supply of renewable energy that will reduce environmental impact).

There are also political decisions to be made. Facebook and the other large hyperscale operators famously play off different American states or European countries against each other, locating their facilities where they get the most generous tax breaks. In Scandinavia, Sweden, Denmark and Finland have each offered competing levels of tax exemption for data centers. And in the US, Utah and New Mexico were placed in open competition to give Facebook the best terms for a data center in 2016: New Mexico eventually won.


Builders know in advance what cooling technology they will require, and what will be practical in a given location. Servers use electricity, and all the power used in a data center will ultimately be emitted as heat, which must be removed to keep the equipment within its working temperatures.

It’s easiest to remove that heat in a cool climate, where the outside air can do most of the work - subject to being safely filtered and run through heat exchangers. Thermal guidelines from ASHRAE show which parts of the world can use free-cooling and for how many hours. In most of the populated regions of the Northern hemisphere, free-cooling can be used for at least a part of the year. In Northern countries like Iceland and Sweden, it can be used all year round. Near the equator, in places like Singapore, mechanical cooling is required all the time.

At the same time, Iceland and Sweden have plenty of cheap renewable electricity, while Singapore does not. Despite all this, Singapore is thriving, while Iceland remains a relatively exotic data center destination. The reason? Location still carries more weight than anything else, except for providers with applications which can live with a long response time.
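ASHRAE publishes the detailed climate data, but the underlying calculation is easy to approximate: count the hours in a year when outside air, allowing for a heat exchanger approach temperature, is cold enough to meet the supply set point. A minimal sketch, with the set point, approach temperature and the synthetic weather data all illustrative assumptions:

```python
# Minimal sketch: estimate annual free-cooling hours from hourly dry-bulb temperatures.
# The set point, approach temperature and generated "weather" are illustrative only.

def free_cooling_hours(hourly_temps_c, supply_setpoint_c=24.0, approach_c=4.0):
    """Count hours where outside air can meet the supply set point via a
    heat exchanger with the given approach temperature."""
    limit = supply_setpoint_c - approach_c
    return sum(1 for t in hourly_temps_c if t <= limit)

if __name__ == "__main__":
    import random
    random.seed(1)
    # Fake a year of hourly temperatures for a temperate site (illustration only).
    year = [10 + 12 * random.random() for _ in range(8760)]
    hours = free_cooling_hours(year)
    print(f"{hours} of 8760 hours ({hours / 8760:.0%}) available for free cooling")
```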


All of this could be changing. The Internet of Things and the demand for digital content have led to the growth of so-called “edge” resources which are located where the data is needed most. This means that all locations where there are people will need digital infrastructure.

But it will also boost requirements for back-end resources that can be accessed with a greater latency, such as analytics and reporting. All the important parts of the data collected at the edge will need to be backed up and analyzed. And all the customer data which doesn’t need regular access (think old Facebook posts, or bank statements) can be safely put elsewhere. That’s where the specialized hyper-efficient data centers will come into their own.

In a sense then, almost every location on earth could find a role in the digital landscape which we are building. The role of the technology is to deliver the digital resources to where they have to be.

1. Sweden
Capital of Energy Smarts
Summer max: 71.6°F (22°C)
Winter min: 33.8°F (1°C)
Typical cooling tech: Outside air free-cooling, heat reuse

Sweden is a small data center market when compared with giants like the UK (London) and Germany (Frankfurt), but it is positioning itself as an energy-efficient hub for data centers, and has had some significant wins - the most well-known being Facebook’s expanding campus in Luleå, which is growing to three data centers and will use hundreds of megawatts of power. The Luleå area is also home to a data center location called The Node Pole.

Sweden gets most of its energy from renewable sources, and the capital, Stockholm, plans to be carbon-neutral by 2040. The city has a district heating system, run by Stockholm Exergi, a joint venture between the utility, Fortum, and the city of Stockholm, which will help make the economics of urban data centers more positive. It provides hot water to homes and offices - and pays industrial facilities for their waste heat. Stockholm Data Parks, run by Exergi, offers tenants up to $200,000 per year per MW of heat.

Sweden’s government is backing data centers, having slashed the country’s energy tax for the sector, a move designed to persuade wavering providers, who might be considering other Nordic countries, to join Facebook.

97% - reduction in electricity tax for data centers in Sweden, approved in 2017

To learn more about Sweden, sign up for DCD’s Energy Smart event in Stockholm next April: bit.ly/DCDEnergySmart2019

2. London
Blooming despite Brexit
Summer max: 73.4°F (23°C)
Winter min: 48.2°F (9°C)
Typical cooling tech: Chillers with free/evaporative cooling

The capital of the UK remains the largest and most vibrant data center hub in Europe, despite a number of apparent obstacles. London has some 495MW of data center power capacity, according to CBRE, comfortably ahead of other European cities including Paris, Frankfurt and Amsterdam. Colocation and cloud providers have flocked here to service the city’s financial hub, and use the abundant fiber networks. London also serves as an English-speaking European base for foreign multinationals.

All this might have been called into question by a number of factors. Real estate in London has eye-watering prices, the country’s energy costs are high, and the electrical grid suffers from poor forward planning and a high dependence on fossil fuels. These conditions mean that historically, the UK has none of the flagship data centers designed and built by hyperscale operators such as Facebook, Microsoft and Amazon. The giants are leasing space locally in wholesale colocation sites, but place their big data centers in countries like Sweden, Denmark and Ireland, where the taxes, land prices and energy costs are much more favorable.

On top of this, Britain’s narrow 2016 vote to leave the European Union (the so called “Brexit” decision) caused a fall in the value of the pound and might be expected to impact London’s future as the financial hub, and its value as a European beachhead for foreign organizations. So far, there has been no sign of any impact, and investment has continued unabated. This is partly because most political decisions are yet to be made. Moves towards Brexit have been so slow and confused that it is still possible to hope for an outcome which changes little. In the absence of real data, Brexiteers are still able to promise a bright future outside of the EU, while moderates say the UK must continue doing business with the EU after Brexit, so surely the country will maintain the alignment with European laws and regulations which benefit the digital sector.

The UK government has seen the importance of data centers and backed the industry with a climate change agreement which exempts them from energy taxes as long as they collectively improve their efficiency.

Alongside these factors, the weather has less of an impact. The country’s cool, temperate climate enables outside air cooling for most of - if not all of - the year, but data centers still require mechanical chillers for reliability reasons.

495MW - total colocation power capacity in London (CBRE)

London will be home to DCD’s flagship annual event, DCD>Zettastructure, this November 5-6: bit.ly/DCDZettastructure2018



3. Ashburn, Virginia
Boom town
Summer max: 87.8°F (31°C)
Winter min: 41°F (5°C)
Typical cooling tech: Chillers with some evaporative cooling

Northern Virginia is not just the largest data center hub in the world, it continues to be among the fastest growing. With a total of more than 600MW installed, the region is adding more than 100MW every year. It accounts for 20 percent of the US data center market, and more than ten million square feet of data center space.

Northern Virginia is a place close to the Beltway of Washington, with a lot of consumers and businesses eager for capacity - but the fundamental reason for its strength as a hub has little to do with this and more with a historical accident. In the nineties, fiber networks were built out fast, and early investors settled in Ashburn. AOL built its headquarters in Loudoun County, and Internet providers got together to interconnect their infrastructure, building an exchange point that became known as MAE-East, which was quickly designated by the National Science Foundation as one of four US Network Access Points. Colocation giant Equinix arrived in the area, and the rest is down to network effects.

The local economy has become skewed towards data centers - staff and land are available, and local regulations and taxation simplify the building of new space there. Power is available, and local rulings make backup power easy to implement.

Once again, the climate has little effect on this. Virginia’s summer heat precludes data centers relying on free cooling, but the winter is cool enough to turn off the chillers for extended periods.

70% - of the world’s Internet traffic flows through Northern Virginia

4. Singapore
Asia Pacific’s Data Center Capital
Summer max: 89.6°F (32°C)
Winter min: 86°F (30°C)
Typical cooling tech: Chillers all year round

Like Virginia, Singapore is something of an accidental data center hub, but in Singapore’s case, it is definitely working against the climate. It has some 290MW of capacity according to CBRE, and is growing rapidly. Data center operators have no choice but to locate in Singapore, as it is a crucial financial and business center for the Asia Pacific region.

However, it is a punishing place to build. The temperature is hot all year round, and the very high humidity makes any reliance on evaporative cooling a laughable suggestion. And the cost of land in the tiny island state is very high indeed. Singapore has a heavily fossil-fuel based electricity grid, so data center providers locating there will take a hit on their corporate environmental footprint. Even though the climate might allow for solar power, it’s very hard to exploit that in Singapore, because the city-state has very high property values, and little space for solar farms.

The Singapore government is taking a proactive approach to data centers. It has backed projects to explore ways around these problems. A government-sponsored project, spurred by a possible shortage of land for data centers, is considering how to build multi-story facilities, and keep them energy-efficient despite the local climate. The Info-communications Media Development Authority of Singapore (IMDA), along with Huawei and Keppel Data Centres, is currently testing a high-rise green data center building. Another project aims to create solar farms despite the land shortage, by floating them on the island’s reservoirs. A further solar development plans to distribute capacity by renting space on city floors. The heavy planning of the Singapore economy, combined with the continued need, seem set to keep the country at the forefront.

370MW - total colocation power capacity in Singapore (Cushman & Wakefield)

In September, we’re heading to Singapore for the region’s leading data center and cloud event. Be sure to join us: bit.ly/DCDSingapore2018

5. Beijing
Planning at Scale
Summer max: 87.8°F (31°C)
Winter min: 35.6°F (2°C)
Typical cooling tech: Chillers needed for large parts of the year

China is experiencing an impressive level of growth in its data centers, as the country undergoes rapid development. With its huge population rapidly taking up mobile services and other digital activity, it has increased investment in infrastructure, with cloud players like Alibaba and Tencent becoming global giants.

China’s capital and its third largest city, Beijing was one of the first Chinese data center hubs, along with Shanghai, Guangzhou, and Shenzhen. However, by 2016, the city was less willing to allow data centers, owing to a shortage of land, along with the high power demands - and substantial carbon footprint - of data centers, which didn’t endear them in one of the world’s most polluted cities. In 2016, Beijing issued a ban on data centers with a PUE rating of more than 1.5. Smaller cities are developing data center sectors of their own, but Beijing remains a crucial location.

China’s biggest hyperscale companies come together this December: bit.ly/DCDBeijing2018




Will liquid cooling rule the edge?

Tanwen Dawn-Hiscox Reporter

The advent of edge computing could increase the popularity of liquid cooling systems, says Tanwen Dawn-Hiscox

Some may remember when the wider data center industry caught wind of liquid cooling technologies. Though the physics of using liquid - rather than air - to remove heat from server racks made sense, the concept seemed too risky to most, so investment in legacy HVAC systems continued unabated.

Nonetheless liquid cooling found its applications, chiefly in High Performance Computing (HPC). This is a perfectly suitable use case for the technology: whether using chilled water or dielectric fluid, liquid cooling systems are an efficient match for high density, high power server nodes, and are much less prone to failures than their air-based counterparts.

Iceotope’s Edgestation, an enclosure the size of an electric radiator, is liquid cooled on the inside and passively air cooled on the outside, supporting about 1.5kW of IT. A variant of this can be placed on a roof or mounted on a wall, and the company offers to engineer bespoke products on demand. The company’s ‘ku:l’ product range, which comes in vertically or horizontally mounted form-factors, is designed for significantly higher densities than the Edgestation, between 50 and 100kW, with output temperatures in the 50°C (122°F) range, which can, for instance, be used to heat a building.

Iceotope's direct immersion system is in use at the University of Leeds and at the Poznan Supercomputing and Networking Center - in other words, used to sustain otherwise difficult to cool chips. But it can, in theory, be placed in, say, a disused cupboard, or any poorly utilized space in an office block or a factory.

While liquid cooling may not be the go-to approach of the early edge computing adopters, it can complement other technologies for added efficiency. In Project Volutus, Vapor IO chose to partner with BasX, whose chief engineer founded Huntair (now owned by Nortek) and invented the idea of fan-wall cooling. The technology used for Vapor’s modules, which are being deployed at the base of Crown Castle cell towers across the US, is essentially an adaptation of airside free cooling. Instead of an evaporative system, air circulates in a closed loop with a chilled water cooling coil which runs to an outside projection coil, which, if the temperature difference ranges between 12 to 15°F (7-8°C), can reject all of the heat from the small data center. The size of the outside coil is adapted according to its geographic location to ensure maximum efficiency, but for hotter days, the system also contains a liquid cold plate refrigeration circuit.

With Project Volutus, it is likely that several tenants will be using different technologies in each module, making it less likely they are suited to the use of cold plate technologies. But the prime reason liquid cooling isn’t used to cool average densities is the cost. And, for the time being, primarily due to the complex engineering the manufacture of such systems requires, this still stands. What’s more, widespread adoption of novel technologies often awaits the endorsement of enough competitors to take off.

But edge computing may well be the springboard that propels liquid cooling into mainstream use. The dynamics of data distribution are evolving. It used to be that data was transmitted following a ‘core to customer’ model, but increasingly, it moves peer to peer before traveling to the core, and back again. Consequentially, the network infrastructure will likely be forced to adapt, bringing compute much closer to the user. And without the barrier of having to replace legacy cooling systems, this could bring about liquid cooling’s heyday.
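The closed-loop arrangement described above can be sized with little more than Q = UA × ΔT: for a given module load and the 12-15°F (7-8°C) difference Vapor IO cites, the outside projection coil needs a thermal conductance of roughly the load divided by that difference. A rough sketch, with the module load an invented figure rather than a Vapor IO specification:

```python
# Back-of-envelope sizing for a closed-loop rejection coil: Q = UA * dT.
# The module load is an assumption, not a figure from Vapor IO.

def required_ua_kw_per_k(load_kw: float, delta_t_k: float) -> float:
    """Conductance (kW per kelvin) the outside coil needs to reject the full load."""
    return load_kw / delta_t_k

if __name__ == "__main__":
    load_kw = 150.0              # assumed micro data center load
    for delta_t in (7.0, 8.0):   # the 12-15 F (7-8 C) difference cited above
        ua = required_ua_kw_per_k(load_kw, delta_t)
        print(f"dT = {delta_t} K -> coil UA of about {ua:.0f} kW/K")
```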



Advertorial: STULZ

Give edge data centres some liquid refreshment

STULZ is pioneering the use of direct contact liquid cooling (DCLC™) as a way to extract heat from processing components, servers and equipment in edge and micro data centres.

The exponential adoption of the cloud, the Internet of Things (IoT), Industry 4.0, and Web 2.0 applications, along with our desire to view increasing amounts of streamed content via services like Amazon and Netflix, has resulted in a growing number of data centres being built closer to where users are. Latency, speed and bandwidth are key challenges.

Edge and micro data centres allow for the reliable distribution of compute assets and carrier links to process workloads in a multitude of locations, while still keeping core functions in a central location. They are therefore meeting the demand for uninterrupted availability of data, audio and visual content, and eliminating the challenges around latency, connectivity and cloud outages.

Furthermore, in the near future autonomous vehicles are projected to consume terabytes of data with continuous sensing, data interchange, analysis and management. These applications need continuous high-speed connectivity and the availability of a large volume of processed data. This leads to architectural changes in data management, processing, analysing, relaying and storing, and is already changing the data centre form factor. The design of data centres is evolving from data centres at the edge to facilities at the mobile edge.

Energy efficiency is a major concern when considering the future for production processes and IT infrastructures, and the trend is to decrease floor space and invest in more compact and powerful computer systems that are able to process at faster speeds. Data centres of all kinds consume vast amounts of energy for powering their servers and the pressure is on to reduce the level currently used. That is why Power Usage Effectiveness (PUE) has become such a prevalent industry metric – the closer it is to 1.0, the better the facility is doing in managing its use of energy – and DCLC can help lower this figure.

Maintaining optimum climate conditions is just as important within edge and micro data centres as it is for enterprise, colocation and hyperscale facilities. To combat higher cooling costs, STULZ has partnered with CoolIT to develop solutions that serve multiple applications and diverse customer needs across many verticals in this data driven world. Their innovations have lowered operating costs and, because liquids have a higher heat transfer capability than air as the medium of exchange, the inherent benefits of DCLC are gaining in popularity.

DCLC is a disruptive cooling method that can be applied for heat extraction from IT equipment. This patented technology uses cold plate heat exchangers that are directly mounted on the heat generating surfaces.



These plates transmit extracted heat into the atmosphere, enabling equipment to operate at optimal temperature for higher processing speeds and enhanced reliability. Due to compact servers with higher capacities, the kW per rack ratio is significantly increased, with economic benefits that help to maximise the white space usage in data centres. Operational efficiencies can therefore be improved to positively impact bottom lines.

DCLC uses the exceptional thermal conductivity of liquid to provide dense, concentrated, inexpensive cooling. It drastically reduces dependency on fans and air handlers – therefore, extremely high rack densities are possible and the power consumed by the cooling system drops significantly. This results in more power availability for computing, as each server in each rack can be liquid cooled – significantly lowering operating costs.

STULZ and CoolIT’s technical leadership and record of reliability and innovation is meeting the exploding need to rapidly cool the huge increase in data traffic demand. Their joint technology leadership is resulting in higher rates of data centre availability, reliability, resiliency and, therefore, a lower cost of operation. They have provided DCLC solutions to major server and processor manufacturers like HPE, AMC, Apple, Intel and Dell, while Bitcoin mining firms have become DCLC users because it enables high densities and lower cooling costs when compared with traditional air cooling.

As the density of installed equipment in the data centre has risen, so too has the amount of heat generated. While being able to fit more kit into a smaller space is generally considered a good thing, the need to control temperature has led to the growing use of liquid cooling. While the initial capital expenditure (CapEx) and the estimated operating expenditure (OpEx) will vary for every edge and micro data centre, what will not alter are the significant savings that owners and managers will achieve across the value chain by applying DCLC.

STULZ Micro DC - High Performance Version
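Two of the claims above are easy to put rough numbers on: that liquid carries far more heat than air per unit volume, and that cutting cooling power pulls PUE toward 1.0. A minimal sketch, using textbook material properties and invented facility power figures:

```python
# Part 1: volumetric heat capacity of water vs air (approximate textbook values).
WATER_DENSITY, WATER_CP = 998.0, 4180.0   # kg/m^3, J/(kg*K) at ~20 C
AIR_DENSITY, AIR_CP = 1.2, 1005.0         # kg/m^3, J/(kg*K) at ~20 C, sea level

water_vol_cp = WATER_DENSITY * WATER_CP   # J/(m^3*K)
air_vol_cp = AIR_DENSITY * AIR_CP         # J/(m^3*K)
print(f"Water stores ~{water_vol_cp / air_vol_cp:,.0f}x more heat than air "
      "per unit volume per kelvin")

# Part 2: PUE = total facility power / IT power. The kW figures are invented.
def pue(it_kw: float, cooling_kw: float, other_kw: float = 10.0) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

print(f"Air-cooled example:    PUE = {pue(100.0, cooling_kw=45.0):.2f}")
print(f"Liquid-cooled example: PUE = {pue(100.0, cooling_kw=15.0):.2f}")
```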

STULZ and CoolIT

A summary of the benefits of STULZ and CoolIT’s capabilities can be seen in Table 1 and Table 2.

Contact Details
Norbert Wenk
Product Manager
STULZ GmbH
Hamburg, GERMANY
T: +49 40 55 85 0
E: wenk@stulz.de

Table 1: The benefits of DCLC

Performance: Facilitates peak performance for higher powered or overclocked processors
Density: Enables 100 per cent use of rack and data centre spaces
Quiet: Relieves employees from the disruptive screaming of server fans
Efficiency: Benefits from a significant reduction in total data centre energy consumed
Scalability: Meets fluctuating demands through the ability to modify data centre capacity
Savings: Generates immediate and measurable operating expense benefits, reducing overall total cost of ownership and thus increasing return on investment

Table 2: The features, benefits and impacts of the STULZ Micro DC with DCLC

Video surveillance system: CCTV recordings for monitoring the unit and the area around the unit
Fire alarm system: Fire monitoring and release of extinguishing agent
Cable management: Universal cable tray with horizontal cable management
Power distribution: Smart PDUs with environmental probe and temperature/humidity sensor
Monitoring and security: Remote infrastructure management
Electronic cabinet access: Security access with integrated card reader
Rack construction: Heavy duty steel construction with powder coat finish
Drop-in solution: Rapid installation
Modular design: Easy to expand as need increases
Integrated cooling solution: DCLC integrated into the unit
Data centre in a box: Suitable for data centres and non-data centres



The great refrigerant shortage

European regulations are phasing out certain refrigerants, with major effects on data center cooling. Peter Judge reports

Peter Judge Global Editor

Efforts to reduce the impact of climate change by limiting greenhouse gas emissions could have a big impact on data centers, causing changes to one of their main components - the chillers.

While many data centers aspire to free cooling (just using the outside air temperature), that’s not possible in all locations all year round, so data centers will usually have some form of air conditioning unit to cool the IT equipment. Air conditioning systems have come under fire for their environmental impact, and a major component of this is the global warming potential (GWP) of the refrigerants they use.

Rules are coming into force that will reduce the use of current refrigerants, and replace them with more environmentally friendly ones - while having a profound effect on equipment used in data centers. The HFC refrigerants used in chillers are being phased out because of their high GWP. The effect is to increase the price of HFCs and push vendors towards other chemicals. So it seems equipment makers will have to put up prices or use new refrigerants.

Sticking with the current F gas rules, each year the price of HFCs will go up, and the pressure to change will increase. The trouble is, the replacements have drawbacks.

They are generally more expensive. More surprisingly, the replacements are flammable. Why would international environmental rules demand we use flammable liquids in AC units? Natascha Meyer, product manager at Stulz, explains it is actually inevitable: “A low GWP means that the refrigerant degrades rapidly as it enters the atmosphere. The only way to ensure this is to make it chemically reactive. However, high reactivity also generally means high flammability, entailing safety risks for people and machines.”

There are some products which have a low GWP and relatively low flammability, but they are possibly more unacceptable, says Roberto Felisi of Vertiv, because they are toxic: “There is a lobby in northern countries pushing for the use of ammonia. Ammonia is natural, and not flammable, but is it safe? Would you allow ammonia in your house?”

As well as being toxic, these fluids can be expensive. Meyer says one of the possibilities, R1234yf, is out of the question: “It reacts with water to form hydrofluoric acid [...and] its sparsity on the market makes it too expensive at present.” The best possibility is R1234ze. It is possible to modify chillers to work with this fluid, but there are still issues, says Meyer: “We have specially modified the CyberCool 2 to work with this refrigerant.


F gas rules attack HFCs

Chillers currently use HFCs - hydrofluorocarbons - which have a GWP thousands of times larger than carbon dioxide. The two main culprits are R410a, used in systems up to a few hundred kW, and R134a, used in larger systems.

If this gives you a sense of déjà vu, that’s because refrigerants have been changed regularly on environmental grounds. HFCs themselves only came in as a replacement for CFCs (chlorofluorocarbons), which were banned for a different environmental impact: they depleted the ozone layer. “A few years ago, we passed from R22, then we made a change to R407c, then the industry changed to R410a,” says Roberto Felisi, product marketing director at Vertiv. “So it is the third time we changed refrigerant in 15 years.”

This time, the refrigerants are being phased out gradually, using the 2015 “F gas” regulations in Europe which set a cap on the amount that can be produced and sold (or imported) by the large chemical companies that supply the products. That’s just a European rule, and one response is for manufacturers to ship units empty if they are going outside the EU, to be filled on arrival. However, a global agreement to cap and reduce F gases was passed in Rwanda in 2016, and should start to come into force from 2019.

It’s worth remembering that data centers are only a small part of the air conditioning market, which is dominated by “comfort” air conditioning. The whole market is so large, and the global warming potential of HFCs is so extreme, that the Rwanda deal was billed as the greatest single step in heading off global warming. It is possible that the Trump administration in the US might become aware of the Rwanda deal and back out of it, as it did with the Paris agreement on climate change. However, at present, it remains in place.
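The F gas cap is expressed in CO2-equivalent tonnes, which is why high-GWP products like R410a feel the squeeze first: the same physical charge counts for far more of the quota. A rough illustration of the arithmetic; the GWP values are commonly cited figures and should be treated as approximate (published numbers vary between assessment reports), and the charge size is an assumption.

```python
# CO2-equivalent of a refrigerant charge: mass (tonnes) x GWP.
# GWP values are commonly cited approximations; exact figures vary by source.

GWP = {
    "R410a": 2088,    # high-GWP HFC blend used in smaller chillers
    "R134a": 1430,    # high-GWP HFC used in larger chillers
    "R1234ze": 7,     # low-GWP replacement discussed in the article
}

def co2_equivalent_tonnes(charge_kg: float, refrigerant: str) -> float:
    """CO2-equivalent tonnes counted against an F gas quota for a given charge."""
    return charge_kg / 1000.0 * GWP[refrigerant]

if __name__ == "__main__":
    charge = 50.0  # kg, an assumed chiller charge
    for r in GWP:
        print(f"{charge:.0f} kg of {r}: ~{co2_equivalent_tonnes(charge, r):,.0f} t CO2e")
```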



However, R1234ze has a low volumetric cooling capacity. Consequently, a chiller that originally delivered a cooling capacity of 1,000kW over a defined area now achieves just 750kW over the same area.” The modified chillers are less energy efficient, and customers need larger units that take up more space - which may be a serious consideration in a built up area.

So companies will continue to buy, and maintain, chillers based on R134a and R410a, and face the impact of the F gas regulations. They will have to pay more for refrigerants, and these price changes will be unpredictable. Meyer warned that users might have stocked up in 2016, and sure enough the big price increase was delayed. As Felisi says in 2018: “The price has gone up much more than we forecasted. The price of 410a went from €7 to €40 (US$8-46) per kg - something like a five times increase.”

It’s possible to overstate the current impact, of course. As Felisi points out: “The cost of refrigerant is only a few percent of the price of running a chiller.” However, that cost will keep increasing. In the long term, it may mean existing chillers will have higher maintenance costs, and may be replaced sooner.
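Two of the figures above are worth running the arithmetic on: the capacity derating when a chiller moves to R1234ze, and the R410a price rise Felisi quotes. A quick sketch; the refrigerant charge size is an assumption.

```python
# Rough arithmetic on the two effects described above. Charge size is an assumption.

# 1. Capacity derating: a chiller that delivered 1,000 kW now manages ~750 kW on
#    R1234ze, so a site needs roughly a third more chiller capacity and floor space.
old_capacity_kw, new_capacity_kw = 1000.0, 750.0
extra_footprint = old_capacity_kw / new_capacity_kw - 1.0
print(f"Extra chiller capacity/footprint needed: ~{extra_footprint:.0%}")

# 2. Refrigerant price rise: R410a went from ~7 to ~40 euros per kg.
price_before, price_after = 7.0, 40.0
charge_kg = 50.0  # assumed charge for a mid-sized unit
print(f"Price multiple: ~{price_after / price_before:.1f}x")
print(f"Cost to refill a {charge_kg:.0f} kg charge: "
      f"{charge_kg * price_before:.0f} -> {charge_kg * price_after:.0f} euros")
```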

These cost changes may be harder to bear for smaller manufacturers, while larger manufacturers may be able to use their purchasing muscle to get hold of F gas more cheaply, and compete to maintain and replace those older systems.

Taking the longer term approach of changing the refrigerant, it is possible to blend coolants, and bring down the GWP from say 1,500 to 600, says Felisi, with a coolant that is “mildly flammable.” This will have an impact on data center design - making split systems less popular and boosting the prospects of systems which circulate chilled water.

Split systems, which circulate refrigerant to provide localized cooling - even putting the actual cooling into the racks - have seemed a good idea. However, they have long pipes, which need a lot more refrigerant, so they will become too expensive (or dangerous, if more flammable coolants are being circulated). “In a split system, you might have 100m of piping,” says Felisi, estimating that refrigerants could be as much as ten percent of the running cost of a split system. “Split systems have become much less viable.”
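The blending Felisi describes works because a mixture's GWP is, to a good approximation, the mass-weighted average of its components' GWPs. A minimal sketch; the components and fractions below are invented purely to show how a figure around 1,500 can be pulled down toward 600.

```python
# GWP of a refrigerant blend, approximated as the mass-weighted average of its
# components' GWPs. The blend below is hypothetical, for illustration only.

def blend_gwp(components):
    """components: list of (mass_fraction, gwp) tuples; fractions should sum to 1."""
    return sum(frac * gwp for frac, gwp in components)

# Hypothetical mix: 40% of a high-GWP HFC (GWP ~1430) with 60% of a low-GWP HFO (GWP ~7).
print(f"Blend GWP: ~{blend_gwp([(0.4, 1430), (0.6, 7)]):.0f}")
```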

The most likely result of the regulations will be to reinforce an existing trend towards placing air conditioning units outside the facility and circulating chilled water within the site. This keeps the volume of refrigerant down, and avoids circulating flammable material in the white space areas. “We see an increased use of chilled water solutions,” says Felisi. “In a chilled water system, the refrigerant is installed in a packaged unit, outside the building.” This move also potentially improves efficiency, as it is easier to combine external chillers with adiabatic evaporative cooling systems.

Of course, the use of water has drawbacks, as it may be expensive or in short supply in a given location. As with all environmental decisions there are trade-offs. Increasing water use in order to reduce the impact of the refrigerant might have a negative environmental impact in some locations. Meanwhile, a move which pushes data center builders to use less efficient chillers will mean that data centers consume more energy (and possibly have a higher carbon footprint) in order to reduce the impact of their refrigerants.



Getting into hot water

Fears over climate change and rising power densities have led to the creation of a new wave of liquid cooled systems. Sebastian Moss traces the history of hot water cooling, and peers into a future where supercomputers could become vastly more efficient, and more powerful

Sebastian Moss Senior Reporter

Ideas can strike at any time. In 2006, Dr Bruno Michel was at a conference in London, watching former head of IBM UK Sir Anthony Cleaver give a speech about data centers. At the end, attendees were told that there would be no time for questions, because Cleaver had to rush off to see the British prime minister.

“He had to explain to Tony Blair a report by Nicholas Stern,” said Michel, head of IBM Zürich Research Laboratory’s Advanced Thermal Packaging Group. The Stern Review, one of the largest and most influential reports on the effects of climate change on the world economy, painted a bleak picture of a difficult future if governments and businesses did not radically reduce greenhouse gas emissions.

“We didn’t start the day thinking about this, of course,” Michel said in an interview with DCD. “What Stern triggered in us is that energy production is the biggest problem for the climate, and the IT industry has a share in that. The other paradigm shift that happened on the same day is that analysts at this conference, for the first time, said it’s more expensive to run a data center than to buy one.

“And this led to hot water cooling.”

IBM’s history with water cooling dates all the way back to 1964, and the System/360 Model 91. Over the following decades, the company and the industry as a whole experimented with hybrid air-to-water and indirect water cooling systems, but in mainstream data centers, energy-hungry conventional air conditioning systems persisted.


LRZ’s SuperMUC. Source: IBM

“We wanted to change that,” Michel told us. His team found that hot water cooling, also called warm water cooling, was able to keep transistors below the crucial 85°C (185°F) mark. Using microchannel-based heatsinks and a closed loop, water is supplied at 60°C (140°F) and “comes out of the computer at 65°C (149°F). In the data center, half the energy in a hotter climate is consumed by the heat pump and the air movers, so we can save half the energy.”

Unlike most water cooling methods, the water is brought directly to the chip, and does not need to be chilled. This saves energy costs but requires more expensive piping, and can limit one’s flexibility in server design.

By 2010, IBM created a prototype product called Aquasar, installed at the Swiss Federal Institute of Technology in Zürich and designed in collaboration with the university and Professor Dimos Poulikakos. “This [idea] was so convincing that it was then rebuilt as a large data center in Munich - the SuperMUC, in 2012,” Michel said. “So five and a half years after Stern - exactly on the day - we had the biggest data center in Europe running with hot water cooling.”

SuperMUC at the Leibniz Supercomputing Centre (LRZ) was built with iDataPlex Direct Water Cooled dx360 M4 servers, comprising more than 150,000 cores to provide a peak performance of up to three petaflops, making it Europe’s fastest supercomputer at the time.

“It really is an impressive setting,” Michel said. “When we first came up with hot water cooling they said it will never work. They said you’re going to flood the data center. Your transistors will be less efficient, your failure rate will be at least twice as high… We never flooded the data center. We had no single board leaking out of the 20,000, because we tested it with compressed nitrogen gas.

“And it was double the efficiency overall. Plus, the number of boards that failed was half of the number in an air-cooled data center, because failure is temperature change driven: half the failures in a data center are due to temperature change, and since we cool it at 60°C, we don’t have temperature change.”

The system was the first of its kind, made possible because the German government had mandated a long-term total cost of ownership bid, which meant that energy and water costs were taken into account. As a closed system running with the same water for five years, the water cost was almost zero after the initial installation.

The concept is yet to find mass market appeal, but “all the systems in the top ten of the Top500 list of the world’s fastest supercomputers are using some form of hot water cooling,” Michel said.

There are signs that the technology may be finally ready to spread further: “We did see a big change in interest in the last 18 months,” said Martin Heigl, who was IBM’s HPC manager in central Europe at the time of the first SuperMUC. Heigl, along with the SuperMUC contract and most of the related technology, moved to Lenovo in 2015, after the company acquired IBM’s System x division for $2.3 billion.

“There are more and more industrial clients that want to talk about this,” Heigl, now business unit director for HPC and AI at Lenovo, told DCD. “When we started it in 2010, it was all about green IT and energy savings. Now, over time, what we found is that things like overclocking or giving the processor more power to use can help to balance the application workload as well.”
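Going back to the plumbing for a moment: the supply and return temperatures Michel quotes (60°C in, 65°C out) fix how much water has to flow per kilowatt, via Q = m_dot × cp × dT. A minimal sketch of that relationship; the per-node and per-rack loads are illustrative assumptions.

```python
# Water flow needed to carry away heat at a given temperature rise: Q = m_dot * cp * dT.

WATER_CP = 4180.0  # J/(kg*K)

def flow_litres_per_min(load_kw: float, delta_t_k: float) -> float:
    """Approximate water flow (litres per minute) to remove load_kw at a delta_t_k rise."""
    kg_per_s = load_kw * 1000.0 / (WATER_CP * delta_t_k)
    return kg_per_s * 60.0  # roughly 1 kg of water per litre

if __name__ == "__main__":
    # SuperMUC-style loop: water in at 60 C, out at 65 C, i.e. a 5 K rise.
    for load in (1.0, 10.0, 100.0):  # kW per node, per chassis, per rack (assumed)
        print(f"{load:>5.0f} kW at dT=5 K -> ~{flow_litres_per_min(load, 5.0):.1f} L/min")
```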

With hot water cooling, Lenovo has been able to push the envelope on modern CPUs’ thermal design point (TDP) - the maximum amount of heat generated by a chip that the cooling system can dissipate. “At LRZ, the CPU will support 240W - it will be the only Intel CPU on the market today running at 240W,” Heigl said. “In our lab we showed that we can run up to 300W today, and for our next generation we’re looking at 400-450W.”

He added: “Going forward into 2020-2022, we think that to get the best performance in a data center it will be necessary to either go wider or go higher, so you lose density but can push air through. Or you go to a liquid cooling solution so that you can use the best performing processors.”

Hot water cooling has also found more success in Japan, “as there’s a big push for those technologies because of all the power limitations they have there,” as well as places like Saudi Arabia, where the high ambient temperatures have made hot water a more attractive proposition.

Meanwhile, under Lenovo, the SuperMUC supercomputer is undergoing a massive upgrade - the next generation SuperMUC-NG, set to launch later this year, will deliver 26.7 petaflops of compute capacity, powered by nearly 6,500 ThinkSystem SD650 nodes.

A data center in a shoebox

In 2012, the Dutch government launched the DOME project - a joint partnership between IBM and ASTRON, the Netherlands Institute for Radio Astronomy - to design computing technology for the Square Kilometre Array (SKA), the world’s largest planned radio telescope. Building upon the hot water cooling technology in SuperMUC, DOME led to the creation of IBM Zurich’s MicroDataCenter, a computationally dense and energy efficient 64-bit computer design based on commodity components. But with IBM mostly out of the low-cost server market, it is currently licensing out the technology, with the first company utilizing the product coming from the Netherlands. ILA microservers, a startup, offers variations on the hot water cooled server.




SuperMUC from above. Source: IBM

“The first SuperMUC is based on a completely different node server design,” Heigl said. “We decided against using something that’s completely unique and niche; our server also supports air cooling, and we’re designing it from the start so that it can support a water loop - we are now designing systems for 2020, and they are planned to be optimal for both air and water.”

“We do think that this will be used more often in the future, though. In our opinion, LRZ is an early adopter, finding new technologies, or even just inventing them. The water cooling we did back in 2010 - no one else did that. Now, after a few years, others - be it SGI, HPE, Dell - they have adopted different kinds of water cooling technologies.”

As power densities continued to rise, Lenovo encountered another cooling challenge - memory. “DIMMs didn’t generate that much heat back in 2012, so it was sufficient to have passive heat pipes going back and forth to the actual water loop,” Heigl said. “With the current generation 128 Gigabyte DIMMs, you have way more power and heat coming off the memory, so we now have water running between them, allowing us to have a 90 percent efficiency in taking heat away.”

The company has also explored other ways of maximizing cooling efficiency: “We take hot water coming out of it that’s 55-56°C (131-133°F), and we put it into an adsorption chiller,” which uses a zeolite material to “generate coolness out of that heat, with which we cool the whole storage, the power supplies, networking and all the components that we can’t use direct water cooling on,” Heigl said. The adsorption chiller Lenovo uses is supplied by Fahrenheit, a German startup previously known as SolTech. “We’re the only people in the data center space working with them so far,” Heigl added.

20m - the number of servers Lenovo will have shipped when it upgrades the SuperMUC

Michel, meanwhile, remains convinced that the core combination of microchannels and direct-to-chip fluid can lead to huge advances. “We did another project for DARPA, where we etch channels into the backside of the processor and then have fluid flowing through these microchannels, reducing the thermal resistance by another factor of four to what we had in the SuperMUC. That means the gradient can then become just a few degrees.”

The Defense Advanced Research Projects Agency’s Intrachip/Interchip Enhanced Cooling (ICECool) project was awarded to IBM in 2013. The company, and the Georgia Institute of Technology, are hoping to develop a way of cooling high-density 3D chip stacks, with actual products expected to appear in commercial and military applications as soon as this year.

“It is not single phase, it’s two phase cooling, using a benign refrigerant that boils at 30-50°C (86-122°F), and the advantage is you can use the latent heat,” Michel said. “You have to handle large volumes of steam, and that’s a challenge. But with this, the maximum power we can remove is about one kilowatt per square centimeter.

“It’s really impressive: the power densities we can achieve when we do interlayer cooled chip stacks - we can remove about 1-3 kilowatts per cubic centimeter. So, for example, that’s a nuclear power plant in one cubic meter.”

In a separate project, Michel hopes to be able to radically shrink the size of supercomputers: “We can increase the efficiency of a computer about 5,000 times using the same seamless transistors that we build now, because the vast majority of energy in a computer is not used for computation but for moving data around in a computer.

“Any HPC data center, including SuperMUC, is a pile of PCs - currently everybody uses the PC design,” Michel said. “The PC design when it was first done was a well-balanced system, it consumed about half the energy for computation and half for moving data, because it had single clock access, mainboards were very small, and things like that.

“Now, since then, processors have become 10,000 times better. But moving data from the main memory to the processor and to other components on the mainboard has not changed as much. It just became about 100 times better.”
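Michel's density figures scale up in a striking way, and the arithmetic behind the "nuclear power plant in one cubic meter" comparison is easy to check:

```python
# Scale the quoted power densities up from cubic centimeters to a cubic meter.

cm3_per_m3 = 100 ** 3  # 1,000,000 cubic centimeters in a cubic meter

for kw_per_cm3 in (1.0, 3.0):
    gw_per_m3 = kw_per_cm3 * cm3_per_m3 / 1e6  # kW -> GW
    print(f"{kw_per_cm3} kW/cm^3 -> {gw_per_m3:.0f} GW per cubic meter")

# Surface density: 1 kW/cm^2 over a (hypothetical) 2 cm x 2 cm die is 4 kW from one chip.
print(f"1 kW/cm^2 over a 4 cm^2 die: {1.0 * 4:.0f} kW")
```

One kilowatt per cubic centimeter is a gigawatt per cubic meter, which is indeed the output of a large power station.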



Cooling towers on the rooftop of LRZ Source: IBM

This has meant that “you have to use cache because your main memory is too far away,” Michel said. “You use command coding pipelines, you use speculative execution, and all of that requires this 99 percent of transistors.

“So we’re using the majority of transistors in a current system to compensate for distance. And if you’re miniaturizing a system, we don’t have to do that. We can go back to the original design, and then we need to run fewer transistors and we can get to the factor of about 10,000 in efficiency.”

In a research paper on the concept of liquid cooled, ultra-dense supercomputers, ‘Towards five-dimensional scaling: How density improves efficiency in future computers,’ Michel et al. note that, historically, the energy efficiency of computation has doubled every 18-24 months, while performance has increased 1,000-fold every 11 years, leading to a net 10-fold increase in energy consumption, “which is clearly not a sustainable future.” The team added that, for their dense system, “three-dimensional stacking processes need to be well industrialized.”
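The paper's headline numbers follow from two compound growth rates - performance multiplying 1,000-fold every 11 years while efficiency merely doubles every 18-24 months - and can be checked in a few lines (the exact multiple depends on which doubling period you assume):

```python
# Reproduce the rough energy-growth arithmetic cited from the paper.

years = 11.0
performance_gain = 1000.0  # 1,000x more performance every 11 years

for doubling_months in (18.0, 24.0):
    doublings = years * 12.0 / doubling_months
    efficiency_gain = 2.0 ** doublings        # improvement in operations per joule
    energy_growth = performance_gain / efficiency_gain
    print(f"Efficiency doubling every {doubling_months:.0f} months: "
          f"{efficiency_gain:.0f}x efficiency, ~{energy_growth:.0f}x more energy")
```

Depending on the doubling period, energy consumption grows by roughly 6x to 22x over the 11 years, bracketing the net 10-fold increase the paper cites.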

An iDataPlex Direct Water Cooled dx360 M4 server Source: IBM

However, Michel admitted to DCD that the road ahead will be difficult, because few are willing to take on the risks and long-term costs associated with building and deploying radically new technologies that could upend existing norms.

“All engineers that build our current computers have been educated during Moore’s Law,” Michel said. “They have successfully designed 20 revisions or improvements of data centers using their design rules. Why should number 21 be different? It is like trying to stop a steamroller by hand.”

The other problem is that, in the short term, iteration on existing designs will lead to better results: “You have to go down. You have to build inferior systems [with different approaches] in order to move forward.” This is vital, he said: “The best thing is to rewind the former development and redo it under the right new paradigm.”

Alas, Michel does not see this happening at a large scale anytime soon. While his research continues, he admits that “companies like ours will not drive this change because there is no urgent need to improve data centers.” The Stern Review led to little change, its calls for a new approach ignored. “Then we had the Paris Agreement, but again nothing happened,” he said. “So I don’t know what needs to happen until people are really reminded that we need to take action with other technologies that are already available.”



Fast and flexible. From network edge to factory floor. The STULZ Micro Data Center: A complete solution in a single unit. Includes rack, cable management, cooling, UPS, power distribution, ambient monitoring and firefighting. It can also be installed anywhere, is rapidly ready for use, and can be technically expanded in many ways. Thanks to direct chip cooling, for example, heat loads of over 80 kW are no problem. www.stulz.de/en/micro-dc

The picture shows the STULZ MDC high performance version; the standard version differs in terms of equipment.

