Issue 36 • March 2020 datacenterdynamics.com
Aligned's CEO Andrew Schaap on data center tech
Preparing for disaster: Lessons from CyrusOne's ransomware attack
The future forecast: 14 industry luminaries share their predictions
Supplement: Smart Energy
Super smarts: NREL turns to AI Ops for the ultimate system
Server success: Finding energy savings at the rack level
Demand response: Alternate strategies for battery smarts
ISSN 2058-4946
Contents March 2020
6 News Data centers use just one percent of global energy. Plus, how data centers should respond to Covid-19
14 The 2020 DCD Calendar We’re embracing digital conferencing
16 Preparing for climate change Are you ready for what’s to come?
Industry interview
22 "The proliferation of technologies such as AI, IoT, VR, and blockchain will call for considerably more compute, thus generating more heat, and prompting customers to look at higher-density cooling solutions," Aligned Energy's CEO Andrew Schaap tells DCD
24 Ending hot work Working on live electrical circuits is not worth the risk
25 The smart energy supplement Analyzing AI Ops for supercomputers, efficient servers, demand response, 5G power requirements and more in a special supplement
41 So you want to build a smart city? Connecting existing conurbations might be harder than we thought
44 Ready to dive into water cooling? When should you consider turning to liquid over air?
50 Your money or your data Lessons learned from the CyrusOne ransomware attack
53 The future forecast 14 industry luminaries predict the decade ahead
61 Hollow giant We head to Dagenham, UK, to view an upcoming NTT data center
64 Recycling buildings Turning churches, printing factories, and pyramids into data centers
66 We must work together to confront Covid-19 Stay safe in these trying times. We're in this together
Preparing for the worst case
As we write this, the world is dealing with an epidemic. Covid-19 is not the most deadly virus ever, and it will only strike a small proportion of the world, but its spread has caught the imagination. When the virus subsides, we will still have to deal with its effects on our lives, from stock markets, airlines, and hotels, down to schools and families. We will also have to continue to deal with the threats we already have on the table - and the biggest of those is climate change.
Data center operators have made very public efforts to limit their contribution to climate change, by reducing their dependence on fossil-powered electricity. But the human race is still emitting too much, and global temperatures will still rise. Reports have revealed that a lot of digital infrastructure could be at risk from the effects of man-made climate change. Consider one basic fact: submarine cable landing stations, by definition, must be vulnerable to coastal flooding as sea levels rise. This issue, our cover feature (p16) asks just how well prepared our digital infrastructure is for the inevitable impact of climate change - for floods, droughts, and extreme storms.
It's smart to save energy, whatever the climate. Our Energy Smart event is due in Stockholm later this year, and we have a special supplement on ways to save energy more intelligently (p25). One piece of common sense around power distribution is to ban work on live circuits. If your organization still allows this, read what Uptime Institute's Kevin Heslin has to say (p24).
187m: number of people displaced if sea level rises by 2m before 2100 (Bamber et al., Proc Natl Acad Sci USA 2019)
From the Editor
Recycling a building can be a good way to reduce emissions. Concrete and new building materials are among the world's big hidden polluters, so it's worth considering the alternatives (p64). But if you have to construct a new building, there's lots to consider. Alex Alley visited NTT's forthcoming London facility to see how to prepare a new data center (p61).
Meet the team
Global Editor Peter Judge @Judgecorp
Deputy Editor Sebastian Moss @SebMoss
Reporter Alex Alley
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Head of Design Harriet Oakley
Designer Mandy Ling
Head of Sales Erica Baeta
Conference Director, Global Rebecca Davison
Conference Director, NAM Kisandka Moses
Conference Producer, APAC Chris Davison
Chief Marketing Officer Dan Loosemore
Head Office: DatacenterDynamics, 102-108 Clifton Street, London EC2A 4HW, +44 (0) 207 377 1907
Hyperscale customers and smaller users have very different needs. So how does Aligned Energy reckon it can satisfy both? CEO Andrew Schaap says the answer requires a different technology approach (p22).
Since our last issue, we've produced a digital supplement on the reality of the Edge. After a lot of generic coverage of the emerging sector, we thought it was time to look at specifics. So read about smart cities here (p41), and find the supplement online to learn of cars, shops... and mosquitos.
The next decade might seem remote given the crises we now face, but we took time to ask our top contacts what they see coming up in the next ten years (p53). Do you agree with their answers? Let us know!
Peter Judge DCD Global Editor
Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color: Intelligence, Dive deeper, Events, Debates, Training, Awards, CEEDA.
© 2020 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@ datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.
Whitespace: The biggest data center news stories of the last two months
News in brief
Schneider Electric warns coronavirus will cost it $324m The company had to shut down factories in China amid the ongoing outbreak. Equipment maker Vertiv also reduced its forecasted revenue for the first quarter by upwards of $70m due to covid-19.
Digital Realty to buy 98% stake in Westin Building Exchange Details were not disclosed about the planned buy but the cost is likely higher than the $30m Digital paid for its initial 49 percent stake in 2006. The deal is expected to be closed in the first half of 2020.
For more on the impact of climate change, see p16
Huge data center efficiency gains stave off energy surge - for now
Energy usage increased only marginally despite rising demand
Data center efficiency has increased so fast that energy demands have grown much slower than had been feared, according to a report funded by the US government. The study - by Northwestern University, Lawrence Berkeley National Laboratory, and Koomey Analytics - was published in the journal Science on February 28. The researchers found that advances in data center efficiency outpaced other sectors, so US data centers' power demands rose just six percent in the time it took for compute to jump 550 percent. The study criticized "several oft-cited yet simplistic analyses" that claim the energy used by the world's data centers has doubled over the past decade and will triple or even quadruple within the next decade. "Such extrapolations are based on recent service demand growth indicators that overlook the strong countervailing energy efficiency trends that have been occurring in parallel," the study notes. The paper estimates that the worldwide energy use of data centers was 153 terawatt-hours (TWh) in 2005, 194 TWh by 2010, and 203 TWh in 2018. But between 2010 and 2018, global data center workloads and
compute instances have increased sixfold, data center Internet protocol (IP) traffic has increased by more than 10-fold, and data center storage capacity has increased by an estimated factor of 25. The researchers noted the difficulty of predicting the future, but said that improvements should ensure that “there is a sufficient energy efficiency resource to absorb the next doubling of data center compute instances that would occur in parallel with a negligible increase in global data center energy use.” Despite the positive news, the study’s lead, Eric Masanet, said: “While the historical efficiency progress made by data centers is remarkable, our findings do not mean that the IT industry and policymakers can rest on their laurels. “We think there is enough remaining efficiency potential to last several more years. But ever-growing demand for data means that everyone - including operators, equipment manufacturers, and data consumers - must intensify efforts to avoid a possible sharp rise in energy use.” bit.ly/CanYouKeepItUp
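As a quick sanity check on those figures, the quoted global estimates imply a steep fall in energy per unit of compute. The snippet below is a back-of-envelope reading of the numbers in the article only, not the study's own methodology:

```python
# Back-of-envelope check on the figures quoted above (a sketch, not the study's method).
GLOBAL_ENERGY_TWH = {2010: 194, 2018: 203}   # the article's estimates
WORKLOAD_GROWTH = 6.0                        # global compute instances grew sixfold, 2010 -> 2018

energy_growth = GLOBAL_ENERGY_TWH[2018] / GLOBAL_ENERGY_TWH[2010]
energy_per_instance = energy_growth / WORKLOAD_GROWTH

print(f"Energy grew {(energy_growth - 1) * 100:.1f}% while workloads grew {WORKLOAD_GROWTH:.0f}x")
print(f"Implied energy per compute instance in 2018: {energy_per_instance:.2f}x the 2010 level "
      f"(a ~{(1 - energy_per_instance) * 100:.0f}% reduction)")
```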
El Capitan supercomputer to feature AMD chips, break 2 exaflops barrier Coming in 2023, the system will be used to simulate nuclear weapons. Were it operational today, it would be more powerful than the next top 200 supercomputers combined.
Amazon's Irish data center plans approved by authorities Planning permission for a $380m data center has been granted. The facility will be built on the IDA Business and Technology Park on Donore Road, Drogheda. Construction should begin in Q2 2020, with full operation expected in 2023.
US DOJ charges Huawei with racketeering, trade secrets theft, and helping rival states The US government alleges Huawei deliberately stole trade secrets from several companies, including six US technology firms. It also alleges the Chinese business broke trade sanctions to provide surveillance equipment and services to the Islamic Republic of Iran and North Korea.
Equinix opens $34m data center in Warsaw, Poland The $34m WA3 International Business Exchange (IBX) opened on March 5. The data center offers 475 cabinets and more than 1,400 sq m (15,000 sq ft) of space. Once the data center is fully built out, the facility will include more than 3,500 sq m (38,000 sq ft) of space, with capacity for approximately 1,200 cabinets. It will also be entirely powered by renewable energy.
Microsoft to be carbon negative by 2030
By 2050 it aims to remove all its emissions since its founding in 1975
Microsoft plans to become carbon negative: not just reducing its carbon emissions and shifting to renewable resources, but also offsetting its carbon footprint. There are three commonly classified categories of emission: Scope 1, those directly from a person or company's activities; Scope 2, those indirectly created by the production of the electricity or heat used; and Scope 3, those indirectly created by all other activities (such as food production, or manufacturing of goods used). This year, Microsoft expects to emit 100,000 metric tons of Scope 1 carbon, four million tons of Scope 2 carbon, and 12m tons of Scope 3.
"Historically we've focused on Microsoft's scope 1 and 2 emissions, but other than employee travel, we haven't calculated as thoroughly our scope 3 emissions," company president Brad Smith said in a blog post. "That's why we're committed to becoming carbon negative for 2030 for all three scopes." By the middle of the decade, Microsoft expects to bring Scope 1 and 2 emissions to "near-zero" by shifting to a 100 percent renewable energy supply, via PPAs, for the electricity consumed by its data centers, buildings, and campuses. By 2030, Microsoft will be using negative emission technology, and plans to invest around $1bn over four years to help keep its commitments. bit.ly/Microcarbon
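A quick bit of arithmetic on the quoted figures shows why the pledge has to cover all three scopes - Scope 3 dwarfs the rest. This is an illustration using the rounded numbers in the article, not Microsoft's own accounting:

```python
# Microsoft's expected 2020 emissions, in metric tons of CO2e, as quoted above.
scope_1 = 100_000        # direct emissions
scope_2 = 4_000_000      # purchased electricity and heat
scope_3 = 12_000_000     # everything else in the value chain

total = scope_1 + scope_2 + scope_3
print(f"Total: {total / 1e6:.1f}m tons")
print(f"Scope 3 share: {scope_3 / total:.0%}")   # roughly three quarters of the footprint
```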
EU wants carbon neutral data centers by 2030
The EU has announced it wants European data centers to be greener, after outlining its strategy. The EU Commission said that data centers and telecoms are responsible for a sizeable environmental footprint, and should be climate neutral by 2030. How the Commission plans to ensure this shift is unclear, but it did say that it will undertake "initiatives to achieve climate-neutrality no later than 2030." Some initiatives are already underway, such as the EU-funded Boden Type DC One, a Swedish prototype data center focused on creating an efficient facility. bit.ly/CarbonnEUtral
Aligned to match 100% of its IT load by buying renewable energy credits
The company will match its power drain with renewable energy credits
Texas-based Aligned Energy will work with local utility providers and purchase bundled Renewable Energy Credits (RECs). The move follows other data center companies wishing to move completely to renewable energy (or at least RECs): Switch shifted in 2016, followed by Iron Mountain in 2017, and Data4 the year after. "Aligned is committed to powering our data center portfolio with renewable, clean energy and working with clients to achieve our shared carbon-reduction goals," Andrew Schaap, CEO of Aligned, said. "Our focus is to provide ultra-efficient, rapidly deployable and sustainable data center solutions that enable customers to scale easily and efficiently as their business grows while supporting clean energy." As for the larger tech companies and cloud hyperscale giants, Apple and Google are already 100 percent renewable energy-powered, Facebook expects to hit the target by the end of the year, and Amazon Web Services has given itself until 2030. bit.ly/Renewablecreditscore
Microsoft invests $1.1bn into Mexico as it plans new cloud region
Microsoft will invest around $1.1bn over five years into Mexico, as it unveiled plans to establish a cloud region in the country. The move will expand the company's global cloud infrastructure to 57 cloud regions in 22 countries. Its regions usually consist of two or more geographically distinct data centers, but no details as to the specifics were revealed. The company said in a blog post that the region will provide locals better access to Microsoft Azure, Office 365, Dynamics 365, and the Power Platform services. Microsoft CEO Satya Nadella pledged the money in a promotional video released by the Mexican government. According to the company, Microsoft will also be investing in training labs and skills programs. Nadella says the money will focus on "access to digital technology for people across the country." In January, Microsoft revealed strong growth for Azure in its quarterly earnings report. bit.ly/BuyingintoMexico
Google plans $10bn investment in US data centers and offices
The company provides fiscal details for previously known projects
Google will invest $10bn in US data centers and offices across 11 US states this year. The projects will be developed in Colorado, Georgia, Massachusetts, Nebraska, New York, Oklahoma, Ohio, Pennsylvania, Texas, Washington, and California. The majority of these projects were already known, but the total spend was not detailed. In the South, Google will increase its offices in Atlanta, Georgia, and expand its offices and data centers in Texas, Alabama, South Carolina, Virginia, and Tennessee. Over in the Midwest, it will open a new data center in Ohio, and complete the expansion of its data center in Iowa. It also plans an office expansion in Detroit. In the Central states, it will grow its data
centers in Nebraska and Oklahoma. The company also said that it has "the capacity" to double its workforce in Colorado. As for the East, it will open a large New York City building in Hudson Square. It's also expanding its offices in Pittsburgh, and has started work at a bigger office in Cambridge, Massachusetts. In early 2018, senior Google executives reportedly debated the direction of the company's cloud division, setting the goal of becoming a top-two player by 2023. At the time, the company considered reducing its cloud spend should it fail. The division has shown no signs of slowing down, but remains in third place. bit.ly/10billiondownsouth
Peter's financial factoid: Microsoft beat estimates in 2019 with nearly $37bn in revenue, while Azure grew by 62 percent. However, the Covid-19 outbreak is expected to impact PC revenue.
Amazon invests $1.6bn for two data centers in India
Amazon will invest $1.6bn into the construction of two Indian facilities. The data centers are expected to be located on the outskirts of Hyderabad. One of the facilities, a 66,000 sq m (710,000 sq ft) data center, will be located at the village of Chandanvelly, while the 82,000 sq m (882,000 sq ft) second facility will be situated at the village of Meerkhanpet. CEO Jeff Bezos announced the investment of $1.6bn to digitize small and medium businesses across India
to help the country achieve its goal of $10bn in SME exports by 2025. “I predict that the 21st century is going to be the Indian century,” said Bezos, at an event in New Delhi’s Jawaharlal Nehru Stadium. “The most important alliance is going to be the alliance between India and the United States, the world’s oldest democracy and the world’s largest democracy.” bit.ly/Corporatestatevisit
Oracle to 'sack' 1,300 in Europe
Staff who work at Oracle's divisions in Ireland, The Netherlands, and Spain could be let go, according to reports from The Irish Times. As many as 1,300 people are expected to be laid off. This comes after Oracle's poor quarterly revenue estimates: documents released in November showed a poor Q2, with a fall of seven percent in revenue from cloud and on-premise licenses. The victims of the layoffs are believed to work across sales, business development, and solutions engineering. Employees were told that, despite the layoffs, they could still apply for any new openings. Last year, Oracle sacked hundreds in a series of dramatic firings. In March, hundreds of staff were let go after the company decided to make cuts across its divisions and focus on its cloud platform. Employees were told it was their last day and given half an hour to leave the building with their things. Oracle declined to comment. bit.ly/Officialnotice
Outgoing exec called back as interim CEO amidst mass layoffs at CyrusOne
What is going on at CyrusOne?
Dallas-based CyrusOne is cutting 55 jobs in response to oversupply and slower demand in the US - potentially saving around $11m as a result. It amounts to 12 percent of its staff. The layoffs were announced in January, with the company's European president, Tesh Durvasula, among them. He was set to leave in March by "mutual agreement." However, just a month later, CyrusOne president and CEO Gary Wojtaszek stepped down suddenly. Durvasula was rehired as interim CEO, while Wojtaszek will stay on as an advisor for a transition period. The reason for the abrupt change was not disclosed. "I know that I speak for everyone at CyrusOne in thanking Gary for his strong leadership and vision," Durvasula said. "Over the years, Gary has helped create a strong company culture at CyrusOne focused on customer service and delivering shareholder value, which will remain unchanged." The company has struggled financially to support its investment levels, with takeover rumors dogging the company throughout 2019. Ultimately, CyrusOne denied a Bloomberg report that it considered selling itself off.
Wojtaszek blamed market conditions, including "the continued moderation in demand from hyperscale customers." The company's New York data center also suffered a ransomware attack at the end of 2019, which we delve into on page 50. The affected employees will receive severance and transition assistance, the business claimed. The company expects to spend around $5.9m in the first quarter after the cuts. Ahead of the decision for Wojtaszek to step down, the CEO sold some $7.9m worth of shares in CyrusOne across December and January, public disclosure data reveals. Shares in the company rose three percent after news of the departure was made public. The CyrusOne board said it will launch a search for a new CEO, which will include consideration of Durvasula as well as external candidates. "It has been a tremendous journey and privilege to serve as the CEO and a Board member of CyrusOne since its IPO and spin-off from Cincinnati Bell," Wojtaszek said. bit.ly/Theypulledmebackin
Nokia CEO stands down after poor fiscal results
Rajeev Suri has resigned as CEO of Nokia, with the head of Finnish company Fortum set to take over. Pekka Lundmark will assume the role in August this year. According to a statement, the change of leadership has been planned for some time. Suri will stay on as an advisor to Nokia's Board until January 1, 2021. The resignation comes after a turbulent few years for Nokia and poor results for the company. Last year, around $7bn was wiped
off Nokia's market value after the company released its financial report on October 24, 2019. This came after the company's board decided to halt dividends to bolster its flagging operations in 5G. Suri's Nokia became a leading player in telecommunications after its $19bn buy-out of Alcatel-Lucent. But since then, Nokia's share price has tanked by more than half. bit.ly/TougholdNokia
Advertorial: Submer
Submer Technologies: Smart and Sustainable Solutions for Next Generation Datacenters
Submer is changing how datacenters are being built, to be more efficient than ever and to have as little an impact on the environment as possible
The digital trends that are transforming the social, urban and economic landscape all have something in common: they use an ever-increasing amount of data. Processing that data in a smart way has become a competitive and economic imperative. Datacenters and HPC generate greater heat loads than traditional, server-based applications. This increases the costs of cooling the equipment and the space it needs, lowering the return on invested capital (ROIC). Datacenters will play a fundamental role in the future of our societies, and we need sustainable and smart ways to address their ever-growing needs.
Liquid Immersion Cooling is the New Black
Air-and-fan-based computer cooling has changed very little since the dawn of the computer age more than fifty years ago. Datacenters and HPC need to advance beyond conventional air-cooling technologies to realize further gains at scale. Most liquid cooling approaches use fluid-filled heat sinks or large radiators, giving marginal improvements over conventional, ambient air cooling, but they also decrease the hardware densities. Submer's SmartPodX - the first commercially available Immersion Cooling system that conforms to both standard server formats and Open Compute Project specifications - dramatically increases datacenter and HPC efficiency by lowering cooling and space costs with unrivaled hardware densities, thanks to:
• A modular and compact design
• The SmartCoolant's physical and chemical properties, which give higher heat transfer performance than air.
Highly Efficient and Eco-Friendly
Instead of spacing servers out in vertical racks, the SmartPodX places them in special, horizontal tanks, where the SmartCoolant thermodynamically whisks heat away from
hot spots quickly and efficiently. Immersion cooled servers do not need cooling fans, and avoid wasted air-flow space within the servers, increasing the hardware density and computational capacity. Finally, the solution does not require any CRAC/HVAC units. In a conventional datacenter, about 40% of the electricity is used for the cooling system. SmartPodX saves:
• Up to 95% of cooling costs
• Up to 85% of physical space (with the added benefit of a completely silent technology)
• Up to 50% OPEX saving
• 25-40% CAPEX saving (thanks to the higher hardware density and easy retrofitting)
• 60% lower hardware failure rate (the SmartCoolant protects the servers and their components from dust, particles, abrupt changes of temperature and moisture)
• 30% longer hardware life-span.
This has a smaller (or even positive) impact on the environment thanks to:
• Savings in energy and water
• A non-hazardous, biodegradable proprietary coolant
• The ability to re-use the waste heat to heat the datacenter's host building or surrounding urban and industrial areas.
Submer guarantees a certified Power Usage Effectiveness (PUE) of 1.02 (in 2019, the datacenter average PUE was 1.67, according to the Uptime Institute). This gives:
• A reduction in power consumption of nearly 50%
• An increase in hardware density (up to 100kW in a 45U SmartPodXL configuration) and computational capacity
• A reduction in physical space occupied.
Matteo Mezzanotte
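PUE is the ratio of total facility power to the power delivered to IT equipment, so the quoted figures can be turned into a rough facility-level comparison. The sketch below uses a hypothetical 1MW IT load; the vendor's larger savings claims also fold in density, space, and hardware-life effects that a PUE-only comparison does not capture:

```python
# Power Usage Effectiveness = total facility power / IT power.
IT_LOAD_KW = 1_000                # hypothetical 1MW IT load, for illustration only
PUE_IMMERSION = 1.02              # figure quoted by Submer above
PUE_INDUSTRY_AVG = 1.67           # 2019 average cited above (Uptime Institute)

immersion_total = IT_LOAD_KW * PUE_IMMERSION
average_total = IT_LOAD_KW * PUE_INDUSTRY_AVG

overhead_cut = 1 - (PUE_IMMERSION - 1) / (PUE_INDUSTRY_AVG - 1)
facility_cut = 1 - immersion_total / average_total

print(f"Non-IT overhead per MW of IT: {immersion_total - IT_LOAD_KW:.0f} kW vs "
      f"{average_total - IT_LOAD_KW:.0f} kW ({overhead_cut:.0%} less)")
print(f"Total facility power for the same IT load: {facility_cut:.0%} lower")
```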
About Submer
Submer Technologies was founded in 2015 by a team of industry forward-thinkers and innovation visionaries to tackle the datacenter business from a new angle, creating highly efficient solutions for next generation datacenters. Submer is changing how datacenters and supercomputers are being built from the ground up, to be as efficient as possible and to have little or positive impact on the environment around them. We believe that "innovation" doesn't have to contradict "sustainability". We believe there is a better way, a smarter way to stay cool.
marketing@submer.com
https://submer.com
Barcelona: +34 932 202 855
Ashburn, Virginia: +1-571-758-4171
Palo Alto: +1-650-304-0654
Italy's coronavirus lockdown: The view from SuperNAP
Is it really that bad? Yes, say the people who are keeping one of Italy's largest data centers online
Italy is one of the countries suffering most from Covid-19. And that is affecting how digital infrastructure is operating, according to one of the country's largest data center operators, SuperNAP Italia in Milan. The €300 million ($314m) data center, which opened in 2016, has around 42,000 sq m (452,000 sq ft) of white space in four data halls, with 40MW of power available. It's run by SuperNAP International, a joint venture set up by the US operator Switch. Italy has moved to a policy of widespread quarantines and lockdowns for much of society, and Switch's facility is in the North, where the effects of the coronavirus have been extreme. We spoke to SuperNAP Italia about how the facility is coping with the crisis. "All public utilities can keep running and should keep running, and we count as a public utility," Alison Gutman, communication manager at SuperNAP Italia, told DCD over the phone. Isolated at home like most Italian residents, Gutman explained how the company is dealing with the pandemic sweeping the world. SuperNAP Italia has adopted remote working: since the end of February, everyone that doesn't need to be at the data center has worked from home, said Gutman, "with our essential operations staff still being on-site to continue guaranteeing 100 percent uptime." For the few staff that have to travel to work, several steps have been taken, including ample access to antibacterial gel, and a daily deep clean of surfaces and equipment. "The staff are maintaining the safety distance between them while they're working," Gutman said. "They're only going from home to work. So when they're not at work, they have to respect the same laws as everyone else, which is to stay home." During their work commute, staff members carry documents that allow them to circumvent the general travel ban. "In addition, our company has provided a supporting letter that explains that they
have a necessity to come to work because we are providing a public utility." Those employees work in shifts, Gutman explained, "so we have enough staff that if someone was infected on one shift, we have other staff that can work in their place. So one group of three works one week and another group works another week, so that they avoid any cross-contamination." Currently, no one at the company has exhibited signs of the virus, but employees (the company does not use contractors) are allowed to have indefinite sick leave, if need be. The hope is that the current travel restrictions will slow further spread in a country currently dealing with more than 12,000 cases and 1,000 deaths. Should the matter deteriorate further, though, SuperNAP is preparing for the possibility of a total lockdown: "We will always have someone [at the data center], so if an immediate quarantine was called where no one can leave, there will be someone present." The company has had more food delivered - "enough to support at least five people in case of the total lockdown" - and Gutman said that the staff are prepared to stay at the data center for a long period of time if need be. "We have everything ready to go. We don't think that's going to happen. We hope not. But if it does, we already have everything ready to go for them in terms of sleeping, eating, hygiene, everything that they would need." When asked whether SuperNAP is prepared for those employees also leaving, Gutman said: "There never is and never will be a moment when it goes completely unoccupied. It is technically capable of running on its own. But part of our protocols is that there's someone always there." It appears unlikely that Italy's electric grid will go down, but the company says it has a power system composed of three independent power sources "to guarantee extra high resiliency."
"In case of a total loss of power, the facility has emergency DCCP generators that can ensure up to 80 hours of continuous operation of the entire facility (60 percent of load at 270l/h fuel consumption)," the company said in a statement. "We have contracts with diesel suppliers to arrive within six hours to top up fuel and continue to do so as long as necessary." At the same time, the data center has experienced more usage as people shift online. "Over the last few weeks, we have experienced a 15 percent increase in power consumption and Internet traffic," Gutman said. "Our data center has a capacity of several MW, so we react promptly to requests." Sherif Rizkalla, the company's CEO, added: "Work in Italy has not stopped, but has taken on another form. In the data center world, we are seeing this via small increases in consumption and bandwidth. "SuperNAP is ready to face the crisis with business continuity and 100 percent uptime; we are structurally and technologically the most suited to do so, and to support remote work and maintain production to protect the country as much as possible." It is not clear how the pandemic will continue to impact Italy, or the wider world. "I hope that people are learning," Gutman said. "There's a lot of criticism [of what is happening in Italy], but it's not so easy. Just a couple of weeks ago, some people were confused, and saying 'is it really that bad?' And it really is. Just take it seriously." Gutman did caution against panic, however: "I think that the message here in Italy that we want to give to the world is that everyone is doing what they can. The work hasn't stopped. It's not an apocalypse. We are having this crazy experiment with work from home and smart working and we're moving ahead as much as we possibly can and working together to overcome." bit.ly/StaySafeAndWashHands
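A back-of-envelope check on the generator figures in that statement - assuming the quoted 270l/h burn rate holds for the full 80 hours at 60 percent load - gives a sense of the fuel the site must keep on hand:

```python
# Generator runtime figures quoted by SuperNAP Italia, as stated above.
RUNTIME_HOURS = 80          # continuous operation at 60 percent of load
FUEL_RATE_L_PER_H = 270     # quoted fuel consumption at that load

fuel_on_site_l = RUNTIME_HOURS * FUEL_RATE_L_PER_H
print(f"Implied on-site diesel stock: {fuel_on_site_l:,} liters (~{fuel_on_site_l / 1000:.1f} m3)")

# With refuelling contracted to arrive within six hours, the stored fuel covers
# that window many times over.
print(f"A six-hour top-up window burns about {6 * FUEL_RATE_L_PER_H:,} liters")
```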
How to keep online and beat Covid-19
Data centers are well placed to keep running during the global pandemic - but there are important rules to follow to get through the crisis
Covid-19 is a real and unprecedented crisis, but data centers will come through it. They provide a vital service - more important than ever as people reduce personal contact and rely on electronic communications. There's been a rapid increase in digital traffic for online communications services, and this will be sustained as the public is encouraged to keep their distance from each other, while continuing work as far as possible. Data centers have plenty of resources, and they're designed to deal with disasters like fires and blackouts. But they haven't had to face a global pandemic before. What should they do? First, don't panic. "Preparedness is in the industry's DNA," said Fred Dickerman, SVP at the Uptime Institute and author of a report on minimizing the risk from Covid-19. "Very few organizations have planned for the type of pandemic we are having now. If you don't have a plan for this in your reference library, you can look for one that can be adapted." Uptime's work on resilience, including the well-known Tier certification scheme, is usually associated with the impact of physical and technical disasters on equipment - but this time the staff are in the firing line. "From our research, in many conversations we've found that the first concern is staff health and safety - as of course it should be," said Uptime's research director, Andy Lawrence. It's not a platitude. The most direct impact that Covid-19 might have on data centers is staff getting ill and not being able to run the service. So the response has to start with ways to prevent staff getting ill. And, as everyone knows from government advice, that starts with hygiene and 'social distancing.' "Data centers tend to be fairly clean but we are going to have to up our game," Dickerman told an Uptime webinar on Covid-19 risk. "The best information is
that the virus lives for two to three days on metal and plastic, and can live for 24 hours on cardboard. It's not thought to be airborne but it's still early days, and we may find it lives in the air." If part of the facility is potentially contaminated, and it's possible to seal it off for three days, that may be enough. But, as a basic starting point, Uptime recommends upgrading the cleaning process at facilities - adding disinfection to basic cleaning and making sure the cleaning company they have is able to step up further if a deep clean is needed quickly: "Your data center cleaning crews are going to become an important part of your operation. Consider using a specialist firm that can do it at short notice." Staff have to change their habits and movements, too. Most firms in the sector already make use of teleconferencing and allow home working, so it's relatively easy to mandate those things. But ingraining better behavior is hard. "You need to change the way your people are thinking during the day to get them into the mindset of avoiding contamination," said Dickerman. Staff should be separated into shifts that don't have contact with each other if possible. "When teams come on and off site, they should do handovers from a distance or by phone," he said. "Sterilize your physical logbooks - or discontinue them." In colocation facilities, customers must visit the site less, and follow these same rules. Key posts must be kept filled: "Uptime has recommended for years that organizations should designate key personnel and alternates - and keep them apart," said Dickerman. If you have two essential network engineers, don't have them in the same room. Staff must also understand the need to self-report and not come in when they are possibly ill. Supply chains may be disrupted, so spares aren't available, along
with the essentials of life in a data center. The plan should have phases to bring in if matters become more serious, said Dickerman: "Don't go to full isolation in one step. Escalate from basic preventive measures to the worst case scenario." Each stage is triggered by two major factors: how many staff are absent through illness or self-isolation, and what the current government orders are. If there are travel restrictions or a city goes into lockdown, then sites will need travel permits. In France, these come from the Interior Ministry, and that means applying online, and submitting hygiene procedures, along with a clear statement of the role of the data center. At this point, sharing a written plan is a big help, said Dickerman. If the site goes into lockdown, it will have to operate lights-out as far as possible, but it may be necessary to have staff camping out in the building or in a nearby hotel. This part of the plan means that the site should have some essential supplies in stock, and a kitchen and shower room good enough that staff can spend some time there. And if the site does get contaminated, then some applications may need to be moved or paused while the site is deep cleaned and rebooted. Through all this, communication is vital, both externally and internally, to make sure staff stay alert: "People tend to relax after they've been in a crisis for ten or twenty days," warns Dickerman. It's important they get good information - not rumors from social media - and the company has a role to play in making sure people keep their mental health through good interactions. Finally, Uptime warns there may be more pandemics, so keep the plan ready for next time. Uptime's advice is free online: bit.ly/UptimeCovid19
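Uptime's advice effectively describes a staged escalation driven by two triggers: staff absence and government orders. Below is a minimal sketch of how an operator might encode that logic; the phase names and thresholds are illustrative assumptions, not Uptime's published guidance:

```python
# Illustrative escalation logic based on the two triggers described above:
# staff absence and the current level of government restriction.
def escalation_phase(absence_rate: float, restriction: str) -> str:
    """Return an example response phase; thresholds are assumptions, not Uptime guidance."""
    if restriction == "lockdown" or absence_rate >= 0.4:
        return "lights-out operation, on-site skeleton crew with supplies"
    if restriction == "travel-restricted" or absence_rate >= 0.2:
        return "split shifts, remote handovers, travel permits arranged"
    if absence_rate >= 0.05:
        return "enhanced cleaning, social distancing, restricted site visits"
    return "basic preventive measures"

print(escalation_phase(0.10, "none"))
print(escalation_phase(0.25, "travel-restricted"))
```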
New York
The industry's biggest virtual conference
We've moved our live DCD>New York event to September 1-2, 2020. However, we don't want you to miss out on our world-leading conference program, so we'll be hosting 24 free-to-view webinars between March 31 and April 2 to enable you to access our expert thought leadership and build your industry knowledge wherever you are in the world.
Day 1 | Tuesday March 31 10:00am EDT
Lightning plenary keynote: Green zettabytes - how we power the ‘world’s computer’ in a low-carbon economy
Joe Kava Vice President - Global Data Centers Google
11:00am EDT
Technologically Challenged Panel: Is there such thing as ‘too big for the cloud’? Alan Howard, OMDIA (moderator) Eric White, Clinton Foundation Shane Brauner, Schrödinger Jason Carolan, Flexential
12:00pm EDT
Championing the connected facility: Examining the role of software in managing remote locations at the edge Russell Senesac, Schneider Electric
1:00pm EDT
Cooling 2.0 - what role can high quality thermal design play in lowering energy OPEX? David C. Meadows, STULZ Joerg Desler, STULZ
Hosted on our fully interactive platform you’ll still be able to participate in live Q&A, audience polls and gain insights from our industry experts.
2:00pm EDT
Major Panel: What are the barriers to modernizing infrastructure whilst live and how can uptime be maintained throughout? George Rockett, DCD (moderator) Frank McCann, Verizon Eric Fussenegger, Wells Fargo Mark Hurley, Schneider Electric Arthur Valhuerdi, DataGryd
3:00pm EDT
Placing ‘hot air’ at the center of airflow optimization: Improve power availability and resiliency while reducing energy demand Paul Hagen, Kelvion Inc.
4:00pm EDT
An exploration into the next generation of immersion cooled data centers Scott Noteboom, Submer Immersion Cooling
5:00pm EDT
Vertically Challenged: Cloud conundrums - how can CIOs help bolster organizational resilience in the age of hybrid cloud? Matt Stansberry, Uptime Institute (moderator) Kevin W. Sanders, EYP Mission Critical Facilities Chris Brown, Uptime Institute Nabeel Mahmood, United Security Bank
To secure your place at the industry's largest virtual data center conference use our session registration form and we'll provide you with links and calendar reminders for all the keynotes and panels so you don't miss a thing.
Click below to join any of the free-to-view sessions Register for sessions
Day 2 | Wednesday April 01
10:00am EDT
Lightning plenary keynote: 5G disrupts - will a 'data explosion' at the edge spell death for the centralized data center? Vinay Kanitkar, Chief Technology Officer, Global Carrier Strategy, Akamai Technologies, Inc
11:00am EDT
Ethically Challenged Panel: Decarbonizing the data center - can a bold CSR policy save the world? Susanna Kass, UNEP (moderator); Chelsea Mozen, Etsy; Nancy Gillis, Green Electronics Council; Elizabeth Jardim, Greenpeace USA
12:00pm EDT
How to power 10MWs at the edge: What have we learnt so far? Peter Panfil, Vertiv
1:00pm EDT
Data volumes on the rise? Here's five tell-tale signs that you've hit your fiber threshold and how to overcome the connectivity bottleneck Ray Nering, Cisco; Marlana Bosley, Corning
2:00pm EDT
How will the surge in compute demand - in the core, and at the edge - change how we operate digital infrastructure? George Rockett, DCD (moderator); Zahl Limbuwala, CBRE; Bill McHenry, Next Generation Data; Barry Novick, BlackRock
3:00pm EDT
Investing in uptime and placing environmental sensors at the epicenter of thermal optimization Trey Evans, RF CODE
4:00pm EDT
Journeying toward a PUE of 1.2 or under? Adopt a low energy approach to the non-mechanical removal of heat Jon Pettitt, Excool Ltd
5:00pm EDT
How do you train-up the next generation of data center engineers? George Rockett, DCD (moderator); Thomas Ciccone, Stack Infrastructure; Dennis Cronin, Verizon; Peter Curtis, PMC Group One

Day 3 | Thursday April 02
10:00am EDT
Intelligent data centers: How artificial intelligence will dictate operational decision-making Rhonda Ascierto, Vice President of Research, Uptime Institute
11:00am EDT
Technologically Challenged Panel: How is the virtualization of the "meet-me-room" changing the landscape of network infrastructure? Sagi Brody, Webair (moderator); Misha Cetrone, Megaport; Okey Keke, Digital Realty; Sanjeevan Srikrishnan, Equinix; Jezzibell Gilmore, PacketFabric
12:00pm EDT
Why sustainable site selection is pivotal to operational reliability Brendan Gallagher, Jacobs
1:00pm EDT
Cloud? Edge? Hybrid? Harmonizing your critical infrastructure to align with the unique operational demands Tony Despirito, BGIS
2:00pm EDT
Major Panel: Big power, shrinking footprint - are operators striking the right balance between the high availability of power and the sustainable supply of energy? Stephen Worn, DCD (moderator); Braco Pobric, NEX Group, plc; Kanad Ghose, SUNY Binghamton; David Quirk, PE, LEED AP, CEM, DLB Associates
3:00pm EDT
Is lithium-ion the pathway to sustainable uptime and the protection of power continuity? Jerry Hoffman, LiiON
4:00pm EDT
Assessing the cloud - pulling 'high tech' properties into a 'low tech' tech environment Nicholas W. Carter, Altus Group
5:00pm EDT
Major Panel: Edge, hyperscale or other: where will the biggest market opportunities for colocation suppliers stem from in the next ten years? David Liggitt, datacenterHawk (moderator); Christopher Street, Princeton Digital Group; Tony Rossabi, TierPoint; Peter von der Linde, EdgeConneX
Register for free for any of the sessions at the DCD>New York Virtual Conference:
Register for sessions
Cover feature
Are you prepared for climate change?
Humanity failed to take climate change seriously. Let's make sure that data centers don't do the same, Sebastian Moss cautions

Sebastian Moss, Deputy Editor

After decades of warnings, climate change is here, exacting a costly toll on people and infrastructure. Once a threat that we were told would impact our children and grandchildren, years of inaction have ensured that this is a problem of our lifetime.
Already the horrors that were foretold are beginning to unfold across the planet, be it through droughts in New Zealand, floods in the UK, hurricanes in the US, or the fires that decimated Australia.
"Climate change is the most significant crisis facing mankind right now," Professor Paul Barford, of the University of Wisconsin, told DCD. And, as part of that, the fabric of the Internet also faces the consequences of a warming planet. With this in mind, a few years ago Barford and a team of researchers set out to answer a relatively straightforward question: "What is the risk to telecommunications infrastructure, of the sea level rise that's projected over the next hundred years?"
In a 2018 paper, Lights Out: Climate Change Risk to Internet Infrastructure, Barford et al. took US sea level models from the National Oceanic and Atmospheric Administration and overlaid their own curated map of data center and PoP locations, known as The Internet Atlas. The study found that by 2030 alone, "about 771 PoPs, 235 data centers, 53 landing stations, 42 IXPs will be affected by a one-foot rise in sea level."
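The paper's underlying approach is a geospatial overlay: project sea level rise onto the asset locations catalogued in The Internet Atlas and flag whatever falls below the waterline. The sketch below is a toy version of that kind of check - the asset names, elevations, and flat one-foot threshold are invented for illustration, standing in for NOAA's far more detailed inundation models:

```python
# Toy version of the overlay approach described above: flag assets whose ground
# elevation sits at or below a projected sea level rise. Real studies use NOAA
# inundation models and surveyed asset data; these values are invented.
SEA_LEVEL_RISE_FT = 1.0

assets = [
    {"name": "coastal landing station", "type": "landing station", "elevation_ft": 0.5},
    {"name": "downtown PoP",            "type": "PoP",             "elevation_ft": 12.0},
    {"name": "harborside data center",  "type": "data center",     "elevation_ft": 0.8},
]

at_risk = [a for a in assets if a["elevation_ft"] <= SEA_LEVEL_RISE_FT]
for a in at_risk:
    print(f"{a['name']} ({a['type']}) lies at or below the projected rise")
```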
Over the past three months, DCD contacted more than a dozen of the companies shown to be impacted by the end of the decade. The vast majority declined to comment, pulled out of interviews at the last minute, or assured us that they had plans in place which they would not detail.
One major multinational telecommunications company said that none of its infrastructure was at risk of sea level rise. When shown data to the contrary, representatives stopped communicating with DCD.
"I've gotten a broad range of responses from people in the industry," Barford said. "Ranging from 'wow, this is something that we really need to take seriously,' to flat out denial: 'we don't believe it.'" Even technical people, with the ability to understand the issues, often refuse to accept there's a problem, he said. "My role as a scientist is not to try to convince people of these things, but to report the information and then assume that people will take the logical next steps after that. Which is not always what happens."
Some companies are beginning to take the issue seriously, however, as the changing climate exacerbates events that cause costly damage to their infrastructure. "It's definitely been a journey for us," said Shannon Carroll, director of global environmental sustainability at AT&T. "Talking about climate change and looking at it in a formal way, is something we started doing back in 2015."
For years, the telco used its own weather operations center to study historical data when making resiliency and planning decisions. "We know now that past weather is no longer the predictor of future weather," Carroll said.
With a vast, continent-spanning infrastructure, the company is vulnerable to the whims of the climate - from 2016 to 2018, natural disasters cost the company more than $874m. "These severe weather events are obviously connected to climate change, so it's definitely in our best interest, as well as our customers' best interest, for us to be prepared for the future impact of climate change."
To understand that impact, AT&T couldn't just rely on sea level data. "That's not going to get you to where you need to be when you're looking at coastal and inland flooding. It's just one of the components."
So the company turned to Argonne National Laboratory to create a detailed picture of extreme weather in the future, developing the most localized climate model currently available for the continental United States.

Modeling risk
"Generally speaking, a climate model operates by taking the entire globe and dividing it up into a grid," Thomas Wall, a senior infrastructure and preparedness analyst at Argonne, explained.
Within each grid cell, there's a series of equations that represent all of the physical processes in the atmosphere, and in the interaction between the land and the ocean, and so on. "For every step forward in time through the model we calculate all of the variables and all the model outputs."
The problem is, "if you have a global climate model, you're trying to do everything for the entire world, you can't have grid cells that are extremely small because you run out of computing power - there's a lot of grids. Each grid cell is maybe 100 kilometers on each side, which is great if you're trying to look at global trends, but it's difficult to say, 'here's where the impacts will occur to my piece of infrastructure,' because a whole city is now just part of a grid cell.
"So what we've done at Argonne is to take a regional climate model where, because we're looking at just North America, our grid cells are 12 kilometers," tracking the extremes of flood levels and wind speeds.
Then, the researchers did additional hydrological modeling in the Southeast to create maps at a 200 meter scale. "It's the kind of thing where companies could say 'let's look at acquiring real estate in another location, so we can relocate that facility there.'"
Even with the models focusing on just part of the planet, they were still limited by the abilities of their 11.69 petaflops Theta supercomputer. "We can't run the entire century ahead of us, because this would require a massive amount of computing," Wall said. "What we did was take time slices. One of them is around mid-century, and one of them is around the end of century. We tried to capture the nearer term trends that I think people who are building infrastructure today would be concerned about, but also provide some insight to understand where we are headed along different trajectories."
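Wall's point about resolution and computing power can be made concrete with some simple arithmetic: shrinking the grid spacing multiplies the cell count quadratically, before accounting for the extra vertical levels and shorter time steps that finer models need. A rough sketch using the spacings quoted above (the land area is an approximation, and real models run many layers):

```python
# Rough cell counts for a single model layer over the contiguous US (~8 million km^2).
# Illustrative only: real climate models add many vertical levels and shorter time
# steps as the grid gets finer, so compute cost grows even faster than cell count.
AREA_KM2 = 8_000_000

# global-style grid, Argonne's regional grid, and the 200m hydrological maps
for spacing_km in (100, 12, 0.2):
    cells = AREA_KM2 / (spacing_km ** 2)
    print(f"{spacing_km:>5} km grid -> roughly {cells:,.0f} cells per layer")
```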
“We know now that past weather is no longer the predictor of future weather”
continue to fail to meet targets. We don’t know
risk assessments, such as: ‘What would be the impact of a much, much higher magnitude hurricane coming into the Washington, D.C. area?’ Normally we get the fringes of hurricanes, but based on modeling that DHS will get from [a national lab] they’ll
who the next US president will be, whether the
see that, while the possibility of that is small,
Amazon rainforest will be further plundered,
“What we are able to do is look at the extreme
or if China will curb its coal consumption.
outcomes of the severe weather events in
“There are four scenarios that are currently
certain regional
the consequences might be high.” CenturyLink, along with utilities and other
those areas.” For example, by mid-century, a
relevant bodies, engages in exercises with
outlined by the Intergovernmental Panel
50-year flood event will produce floodwaters
the DHS to plan out how it would deal with
on Climate Change, we run two of them,”
up to 10 feet deep across inland and coastal
such scenarios. “Based on that, we then look
Wall said. One is the 4.5 ‘Representative
areas of southeastern Georgia.
at where our infrastructure is. Do we think it’s
Concentration Pathway’ (a measure of
“We can focus on our physical assets, and
greenhouse gas concentration), the closest
how long we anticipate those physical assets
approximation to the Paris Agreement.
being operational.”
The other is 8.5 RCP, jovially known as the
It’s also about the future siting of assets.
protected? Do we think we have enough in place for that?” Condello said that over the last decade she has been involved with countless
‘business as usual’ case, based on if we make
“Where are you going to put the next cell
regional risk assessment programs for the
no efforts to curb emissions at all - that’s
tower, where are you going to put the next
DHS. “All of them in one form or fashion,
something we really, really want to avoid.
central office? We’re learning how to better
even if it was more cyber-related, have had a
use the tool every day, folks are starting to use
component that was associated with climate
it more and more.”
change,” she added. “It’s us trying to deal with
Argonne also had to try to capture the differing views of the scientific community - with various models suggesting different
AT&T said that it would make the tool
contingencies that maybe we didn’t think
Issue 36 • March 2020 17
The US government, like all governments, is endlessly concerned, continually searching for signs of weakness in its nation's structure. "After 9/11, we stood up the National Critical Infrastructure Prioritization Program," Bob Kolasky, head of the Cybersecurity and Infrastructure Security Agency's (CISA) National Risk Management Center (NRMC) at the DHS, told DCD.

NCIPP is a statutorily mandated list of "key physical assets that should be protected in the face of a potential attack," Kolasky said. "And we have thousands of infrastructure [assets] on that, where, if they get flooded or a cyber-attack causes the system not to work, they end up being really important, whether it's a terrorist attack or not."

Searching for weak points

This data, along with information from state and local governments, is used to build an understanding of weak points across the country. "I think that the most immediate use case and the easiest to imagine is a hurricane headed to the Florida Panhandle," Kolasky said.

"You have some range of uncertainty of what the storm surge could look like, but we can pretty quickly knit together where the things that we're going to end up caring about are, whether it's the hospital, the wastewater treatment plant, or the substation."

The work on ensuring resilience continues, but faces another roadblock: "Almost everything we do is voluntary," Kolasky said. "There are a lot of different requirements and regulations that infrastructure owners have to follow, but our relationship with them is voluntary.

"One of the things we try to do as an agency is use existing requirements as a way to incentivize security best practices within that, but you're not going to hear me doing a lot of calling out companies for not doing the right thing. We have levers of influence, but they tend to be behind closed doors."

"We have levers of influence, but they tend to be behind closed doors"

A company like CenturyLink, Kolasky said, "would let us know if they were building significant infrastructure. They'd certainly ask for a consultation and partnership at the community level, and we have security advisors who can help do it."

Equally, "the big builders of data centers [want to ensure that] state and local governments know that their infrastructure is really vital to the functioning of something that's important to the government."

Securing the digital heart

While resources like hospitals and power plants are among those at the top of the list of important assets to protect during a disaster, data centers have increasingly become integral to the functioning of society.

With the expected rise of smart cities, smart grids, AI-based healthcare, and the like, the need for data centers to maintain operations during a disaster is something that is likely to only grow in importance. When they go down, it's hard to imagine what they might bring down with them. "That's a hypothesis that would cause us to be looking more closely at those things," Kolasky said. "We don't fully know what's going to be the answer, right?"

While data centers are individually designed to last a few decades at best, the benefits of close proximity to other facilities, interconnection points, and favorable tax environs will ensure that data center hubs last much longer. "If there's too much concentration of risk of a geographic site, obviously that can be problematic," Kolasky said. "And certainly our guidance encourages diversity of location so that one single incident can't bring down a big portion of it."

Amid this work, Kolasky's team has to deal with another reality: that of the government of the day. A public denier of climate change, President Trump has defunded programs to combat the problem, and sidelined various agency efforts to prepare for the worst.

In the DHS, the federal government's second-largest department, FEMA (the Federal Emergency Management Agency) removed all mention of climate change from its 2018 strategic plan, while Homeland Security Secretary Kirstjen Nielsen (who left last year) questioned whether humans were responsible for climate change. Kolasky would not be drawn on the matter of whether the politicized nature of climate change impacted his work. "You characterize it as politicized. I characterize it as: We plan for risks. We don't endorse the risks we're planning for."

In 2015, well before Trump's election, a congressional subcommittee was formed to "Examine DHS's Misplaced Focus on Climate Change." In an opening statement, Subcommittee Chairman Scott Perry (R-PA) listed various threats, from ISIS, to hackers, and added: "I am outraged that the DHS continues to make climate change a top priority."

Kolasky was called up to testify, and defended the decision to invest in climate change resilience: "Climate change threatens our nation's security... The analysis of infrastructure exposure to extreme weather events we have conducted shows that rising sea levels, more severe storms, extreme and prolonged drought conditions, and severe flooding combine to threaten the infrastructure that provides essential services to the American public." He told DCD his views had not changed.

His team has to deal with long time scales, where it can be hard to predict what will happen. "In general, we're trying to anticipate out 10, 15, 20 years. We just launched a program called the Secure Tomorrow series that looks out about 20 years, which is appropriate for some infrastructure build-up."

The Internet was originally designed to be incredibly robust, with its predecessor, the ARPANET, built to withstand a nuclear war. "It is an amazingly resilient infrastructure," Professor Barford, who is the founder and director of the Wisconsin Advanced Internet Laboratory, said. "However, there are certain locations and certain aspects of the Internet that are much more strategically important than others."

Barford hopes to analyze where the Internet is most vulnerable, from interconnection points to submarine cables. "But I would say that in the research community, there isn't a clear understanding of exactly where the most important aspects of the Internet actually are. I don't think that
there's been any systematic assessment of that. And until we actually do that work, then it's very hard to say: 'Well, here's where we need to focus the most on because if this goes down, all hell is gonna break loose.'"

There are two facts that one should bear in mind. First, that some 70 percent of the Internet's traffic flows through Northern Virginia. Second, that Virginia is sinking.

"It's complicated, but it has to do with the fact that the glaciers used to come down almost to Virginia, and they actually pushed up the land ahead of them and caused it to bulge up," Virginia's state climatologist Professor Jerry Stenger said. "And now it's sinking back down."

At the same time, "you're going to have more melting of land ice that's going to run off and raise sea levels because there's significant increases in the melting rates of the Arctic ice shelves and at the margins of the Greenland ice sheet.

"So the question would be, what about data centers that are located right near the coast?"

Virginia Beach, which features both data centers and cable landing stations, is home to the East Coast's fastest-rising sea levels. Stenger told DCD that he did not want to provide real estate advice, but added: "I don't know that I'm going to rush to buy a beachfront property at Virginia Beach. There's going to be a lot of areas along the coast that are going to have more and more problems."

"I don't know that I'm going to rush to buy a beachfront property at Virginia Beach"

The impacts will go beyond just the immediate sea level rise. "You raise the water level a little bit and bring a big storm in," Stenger said, "and now you're inundating more land every time there's a storm surge of the same magnitude. You don't necessarily need to wait until the water is lapping at your door, because the same type of storm comes through and now it's pushing the water even further inland."

Beyond the storm

Droughts will also likely become more prevalent, putting pressure on data centers that use water cooling. Dr. Arman Shehabi, the Lawrence Berkeley National Laboratory researcher best known for tracking data center energy use, is in the early stages of trying to understand how the industry could be impacted by more droughts.

"We've been looking at how water scarcity will change, mainly in the US, but it is a global issue of how that could affect the water demand that data centers need," Shehabi told DCD. "If you have a data center that's using water cooled chillers, and they've been sited based on what the demand is for today, that could be changing as that utility comes under more stress in the future."

If the utility has to make tough decisions to ration water, "who would be the first they would drop? Data centers would be pretty high up there, as they're using utility water for cooling for the most part." While a lot of the huge water-consuming industries use on-site water, most data centers use "utility water that's been treated and is needed for people, for houses. I would think that cooling data centers would fall at a lower priority."

Data center goliath Digital Realty is aware of the growing risk. "We recently did a global review of water scarcity and stress risk across the portfolio and did some rankings of our portfolio that way," Aaron Binkley, the company's senior director of sustainability programs, said.

"A number of our facilities that use water for cooling are using non-potable reclaimed water, so we're not heavily dependent on potable water supplies that a community would otherwise need for household purposes." Some facilities do not use any water cooling, and the company is also looking into ways to reuse more water for longer. "That's a significant long term effort that we've put in place."

To deal with the risk of disasters to its more than 210 data centers around the world, the company recently opened a global operations command center in New Jersey. "Around times of a hurricane or a blizzard, they're providing real-time updates and dialog out to the site teams with weather updates and other notifications so that they're prepared and can respond accordingly."

But the most vital way to prepare, Binkley said, is to hold activities and drills. "We do pull-the-plug tests - if somebody just walked in and hits the power off button, what happens? Is everyone prepared to respond quickly to that, and does the control system, and every piece of equipment, work the way it's supposed to? Those are real-life ways to not only test the facility itself but also to test the operators and make sure that our teams know what to do."

The tests and procedures have to extend beyond the facility itself, to the upstream risks - like what to do if the power goes out. "We have these very robust fuel resupply agreements with suppliers so that in the event that there is an extended outage, we don't run out of diesel fuel for the backup generators.

"We've planned road routes and refueling locations on-site so that they can get in with a truck in an area that's not going to be flooded or not going to be obstructed, that they can park the appropriate distance from the fuel tank that they need to refill and do that in a time-efficient manner so that they don't miss the window to get to the next property of ours and get that one refueled."

Even then, there are limits - if all the roads shut and trucks can't get through, what can one do? "This goes beyond the scope of just Digital Realty but, if our building floods that's our problem, if lower Manhattan floods, that's the City of New York's problem, so to speak. We can't build a sea wall for the City of New York."

Building a wall

The US Army Corps of Engineers is currently studying five potential sea wall proposals. The largest would cost $119bn and take 25 years (if everything went smoothly), and is not designed to account for some predicted sea level increases. The city has several smaller projects underway.

"None of those have been fully constructed or built yet," Ke Wei, assistant deputy director for the New York City Mayor's Office of Sustainability and the Office of Resiliency, told DCD.

"We essentially tell telecoms and data center [operators] that because your infrastructure is so critical to the provision of basic services, public safety and health, that you don't necessarily want to count on
broader sea walls to protect your facility. We would expect them to continue to think about how they can harden their specific facilities, and that they shouldn't solely just depend on the larger cultural resiliency projects that are being built."

Wei collaborates with infrastructure operators across energy, wastewater, transportation, and telecoms to build resiliency plans. "To be completely honest with you, I think telecoms has been a challenging sector for us to work with on a voluntary basis. First, because newer cellular and Internet technologies have transitioned it to a more competitive market landscape versus the energy sector, which is more regulated as a monopoly.

"Second is just the different levers of control. With respect to the energy sector, there's more local and state authority relative to what we're seeing on the telecoms side."

The city shares regulatory authority over telecoms with federal and state bodies, Priya Shrinivasan, special counsel and director of policy standards for the New York City Mayor's Office of the CTO, said. "So the telcos are predominantly regulated at the federal level, and some at the state, and some at the city level. So it gets very complicated."

While most of the work they do with telecom companies is voluntary, "we have climate resiliency design guidelines, and we put those into new city capital projects," Shrinivasan said.

"We also share with major telecom stakeholders climate projections that are projected to impact New York City," Wei added. "We have specific climate projections that are developed for the 2020s, 2050s, and 2080s."

New York is no stranger to the dangers of an angry climate. In 2012, Hurricane Sandy tore through the city, flooding streets, subway tunnels, and offices. There were widespread power outages, billions in damages, and at least 53 deaths.

Data centers were mostly fortunate throughout the disaster, with only one major outage. Datagram went down when key equipment in its basement was flooded. A Zayo site risked danger when it had to power down its cooling systems amid generator issues, while Peer 1 Hosting employees were able to keep their facility fueled by forming a bucket chain to relay five-gallon buckets of diesel fuel up 17 flights of stairs.

"Because of what happened with Hurricane Sandy there were a lot of investments made to harden the facilities from flooding across the city and across the infrastructure space," Wei said.

"And so I think that at least people are aware of the risks because of that experience," Wei said, but warned that eight years' work on resiliency has not been tested in action: "We obviously haven't experienced a comparable flooding event since [Hurricane Sandy]."

The crisis is here

"You need a big crisis - otherwise, people don't move," Paul Budde told DCD.

Budde should know: For years the telecoms analyst has been pushing for the Australian government to undertake a national resiliency plan for his sector. His efforts were rebuffed, and ignored. Then, Australia caught on fire.

"We've got these problems. Come up with plans. Sticking your head in the sand is not the solution."

"We've now got a meeting with a government minister, and a lot of the things that I mentioned in my discussion paper have been addressed, and are going to be looked at so that's a positive."

Budde has experienced first hand what happens when little thought was given to cell tower battery power, fuel lines, or who is supposed to fill the generators, as communities went dark amid widespread wildfires. It's an area his proposal seeks to address: "Access to electricity, and everything around it, that is seen as an easy win, because it's not going to cost lots of money.

"Then the next thing is, what's going to happen over the next 5, 10, 20 years? I don't want to think about it, to be honest, but at the same time, it's reality. There will be more fires, and that means you have to start looking at where you are placing your mobile towers.

"What's going to happen over the next 5, 10, 20 years? I don't want to think about it"

"They are typically on top of hills, which are the most vulnerable parts because the fire creeps up the hill and is at the hottest at the top of the hill. If you've got your mobile tower there, there's no hope in the world."

This will require an honest discussion about redundancy and resilience, Budde believes. "It's not just about communication for people living in the bush and schools and hospitals and things like that. It's how are we as a country going to cope?"

Unfortunately, Budde has little hope of the current government finding a solution. "The politicians in Australia are ultra-conservative and are not really interested in the climate change issue because they believe that it's far more important to keep coal going for jobs and income. We still have a long way to go on the political side to really get a visionary plan and a long term strategy. These sorts of disasters will increasingly happen, and to be honest, if it's man-made or not, who bloody cares?"

Wildfires are not a problem localized to Australia, as any Californian resident can attest. The flames pose a huge risk to human life, and infrastructure. But for data centers, "the biggest issue is the smoke," Pete Marin, CEO of wholesale and enterprise data center company T5, said. "If you use some type of indirect or direct evaporative and you're taking smoke in through your cooling system, that's very problematic.

"So on the West Coast of [the US], where there have been fires, you just have to monitor that. And that comes down to proper protocols and the operations of the data center. And that's how you manage that, you don't manage it through the cooling system. You manage it through the way you operate, and the process and procedures that you train and train and train for."

But, again, no matter how much operators train, they are helpless to stop the impact of the disaster affecting upstream elements. "For the majority of our data centers we have a municipal utility that provides this very reliable low cost, clean power, but some of the transmission lines that feed power to that municipality are PG&E lines," Digital Realty's Binkley said.
"And so even though they're not PG&E, and our bill doesn't come from PG&E, we have an indirect exposure to the reliability of PG&E's system," he said, referring to the utility's decision to turn off power amid wildfires late last year (see issue 35). "And there's no way for anyone in that market to avoid that."

PG&E's self-imposed outage was a mixture of climate change exacerbated events and poor planning by the utility company. But the grid is certainly going to struggle with the changing climate.

Do you trust the grid?

High temperatures and heatwaves limit the transfer capability of transmission lines, increase transmission loss and line sagging. High winds and storms will knock out transmission lines. Cold waves, snow, and ice can bridge insulators and cause flashover faults. Lightning strikes can cause short-circuit faults, or the voltage surge can damage equipment. Floods could take out substations.

Utilities will have to prepare for all this, while simultaneously trying to rapidly shift to intermittent renewable power sources, and handle new use cases such as electric cars. Rising temperatures and heat waves are also expected to lead to a huge increase in air conditioning load, risking blackouts and brownouts when people need power the most.

"From a city perspective, more people die every year due to heat-related illnesses than the number of people that were killed during Sandy," Ke Wei said. "So that's something that we think about a lot."

Cities are designed to last countless generations, and require thinking that is equally long-term. Is it fair to expect the same from the data center industry? "We're thinking out maybe 10 to 20 years and I think for that timing we don't have any concerns about the usability of our facilities. But 40 years? A lot can happen between now and then," Ed Henigin, CTO of Texan colo Data Foundry, said.

"I mean, geez, what if everybody moves to the cloud and colocation data centers are pointless and it all doesn't matter? What if Amazon just ends up buying up all the available real estate because they're just this massive?"

Henigin is confident his facilities are ready to handle the increased storms and climactic events expected to batter the southern states - ironically because the area has always been at risk. "The Dallas area in northern Texas has a far higher risk of tornadoes. So, as these things get magnified by climate change, we're already in a position where we have basically over-designed in order to mitigate those risks, whether or not we had the foresight to know that this was really going to be a long-term climate change issue.

"If you take the map of Houston, it's very flat, very close to sea level. Nevertheless, the city of Houston is one of the largest economies in the world. And we are a portion of that, and if that larger economy takes a hit, then we will take the hit along with it. There's just a massive shared fate element to these systemic things that can happen."

When the wind blows, it can blow hard. This means that data centers are designed to ride out a 185mph wind: "We all say if the sky turns green, go into the data center."

But building for hurricanes is something that operators need to understand will cost more, and add to the construction time. "The key force to deal with when it comes to high winds is uplift. In order to build a highly wind-resistant structure, it starts with the foundations."

Giant tubes of concrete are dug up to 25 feet deep, and are welded to the roof structure. "So when there's a high uplift pressure on the facility, it would have to pull 25 feet of ground beneath the building."

Henigin believes that there are some companies that don't care about such levels of windproofing. "We're really talking about the hyperscalers and people who share their mindset, who have such large capital budgets and such rapid lifecycle turnover of equipment, and distributed footprints, that specific individual facility resiliency is less important to them than removing those features to save the money that they can then spend on a wider footprint or software stacks that can dynamically respond to outages.

"That part of the market will certainly achieve significantly lower construction costs by taking everything out, but that's a holistic thing."

Still, even with the extra layers of defense, Henigin admits that no structure can ever claim total resilience. "There's force majeure events that are really beyond the scope. We do not promise and no business promises to be [indestructible]. There are inherent risks to physicality."

We're in this together

It's this shared fate, and disagreement over where the responsibility for action lies, that scares climate scientists. "Whenever I'm talking to people, I always bring this up," Argonne's Kotamarthi said. "Let's say, there is a certain amount of change in a place which has very little capacity to adapt. The consequences will be much more disastrous than a place where it has much higher resources to adapt."

The US, should it collectively agree to believe the preponderance of scientific evidence, is far better placed to adapt than much of the planet. "Large portions of this world are not ready to handle that - they're not even thinking about it," Kotamarthi said. "Researchers have put temperature and humidity data together, to come up with an index of livability, and it shows that for a lot of the summer people cannot even go outside and work in large parts of the Middle East, North Africa and India.

"It is shocking when you realize how few people are actually worried about it. It could affect millions of people, maybe hundreds of millions. And it seems like nobody is really worried about that."

Even in the US, "people in Florida are not really thinking hard about what it all means. I can immediately see a map of the US from a later part of the century, and I know which year it is from just by looking at how much of Florida is underwater. I don't think people are really thinking about it. That's the really shocking thing for me."

"People in Florida are not really thinking hard about what it all means"

Kotamarthi has been studying the climate since the '90s and said that the decades-old models of his youth have held up surprisingly well. "The models now are fairly robust. Actually, they may be underestimating some of these changes.

"I'm worried about what is going on. But I'm also hoping people will be proactive as they start getting impacted more and more."
Cori and the wildfires
Peter Judge Global Editor
Aligned with customers
Aligned Energy sounds like an electric utility, but CEO Andrew Schaap says it can serve any size of colo customer thanks to innovative cooling and strong backing
The first question you have to ask about Aligned Energy is: why is it called Aligned Energy? It's a data center company, and that name sounds like a utility. Sure, there are plenty of other data center providers with stranger brands, but Aligned's moniker actually hints at where it's come from, and what might be different about it. A couple of years back, Aligned Energy was a technology-focused data center operator with multiple businesses. Under its Aligned Data Centers brand, it delivered colocation space with a guaranteed PUE (power usage effectiveness) of less than 1.15. Some of the technology to enable that came from another subsidiary, Inertech, which had a patented heat rejection system, broken into its component parts and distributed, to focus cooling onto the racks with the greatest need. The company also had its own DCIM and design subsidiaries, and the company's founder and then-CEO Jakob Carnemark sourced other energy-efficiency technology from the frontiers, like heat reclamation devices designed for maritime use made by Climeon.
That structure changed in 2017. Andrew Schaap was brought in as CEO, after an 11-year stint at Digital Realty, where he rose to senior vice president of global operations. Aligned was briskly restructured: the DCIM and design businesses are gone, and the Inertech brand disappeared. "When I came on board I made a hard left turn, towards making the technology much more designed for hyperscalers," Schaap told DCD. "We are aligning our business strategy, technologies and infrastructure closely with [those customers'] needs and requirements." Inertech became a research and development department for Aligned, led by Michael Coleman, a previous data center operations leader at Google. The patented cooling products - the Delta Cube cooling array and the Cactus heat rejection system - got re-designed for hyperscale use and are not now available to competitors, Schaap said. "I've simplified the design. I've taken it and super-sized it into this really large form factor, which is optimized for large wholesale facilities," he said. "It's simplified and enlarged. The fans are bigger, the coils are bigger, which enables me to get a better cost per kW. But it also gives me the ability to scale to a much larger footprint." Upgrading customers on site is important to him: "No one starts out with 800W per square foot," he said. "But I can start a customer at a lower density, say 100W per square foot, and in two years, they can densify in the same footprint - 400W per square foot all the way to 1,000, without any disruptions to their business."
The Delta Cube units are fitted in arrays, to create a fan-wall, which moves large volumes of air, and doesn't need to move it with high energy or high speed, says Schaap. There are a few equipment vendors selling similar low-velocity, high-volume airflow systems, which can be more efficient. "Our customers don't need to worry about exotic cooling solutions or CRAH / CRAC systems with a large number of moving parts to maintain. Aligned's Delta Cube has one moving part." Aligned populates fan-walls incrementally, with blanking panels covering empty columns. "We may build 200W per square foot. If the customer wants to expand in three years, we can come back, take out the blanking panels, and add those Delta Cubes." The upgrade is fast: "We're able to provision initial capacity deployments of 2-20MW, and scale beyond that in as little as 12 weeks. For smaller scale projects, such as small to medium-sized enterprises looking to expand in our existing sites, we've been able to deploy in as little as 30 days."
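For a rough sense of what those floor-loading figures mean at the rack level, the sketch below converts watts per square foot into kilowatts per rack. The 30 sq ft allowed per rack (cabinet plus a share of aisle space) is our own illustrative assumption, not an Aligned number, so treat the output as indicative only.

```python
# Rough sketch: relate floor loading (W/sq ft) to per-rack density (kW/rack).
# The 30 sq ft per rack (cabinet plus its share of aisles) is an assumed
# illustrative figure, not a number from Aligned or DCD.

SQ_FT_PER_RACK = 30  # assumption: cabinet footprint plus aisle allocation

def kw_per_rack(watts_per_sq_ft: float, sq_ft_per_rack: float = SQ_FT_PER_RACK) -> float:
    """Convert a design floor loading into an approximate per-rack power draw."""
    return watts_per_sq_ft * sq_ft_per_rack / 1000.0

if __name__ == "__main__":
    for density in (100, 200, 400, 1000):  # W/sq ft figures quoted in the article
        print(f"{density:>5} W/sq ft  ~ {kw_per_rack(density):5.1f} kW per rack")
```

On that assumption, 1,000W per square foot works out to roughly 30kW per rack - the same order as the higher-density deployments Schaap describes below.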
"Our sites are like the ones hyperscalers like Facebook build, not the ones they buy" This is important, as power densities are increasing: “While the market is seeing averages of 11kW per rack, we’re seeing 20, 30, 40kW per rack in some of our deployments. The proliferation of technologies such as AI, IoT, VR, and blockchain will call for considerably more compute, thus generating more heat, and prompting customers to look at higherdensity cooling solutions.” We’re surprised to hear such a technology-based pitch from a wholesale colo provider, when wholesale the giants like Digital Realty and CyrusOne tell us that the hyperscale market has made such specific demands on wholesale colocation that it seems the market is just a competition based on nothing but cost and delivery time. Schaap reckons his technology pitch works because Aligned is... well, more ‘aligned’ with the hyperscalers’ own work than the other wholesale colocation builders. In recent years, it seems it has won big deals with hyperscalers, that had
previously been slated for build-to-suit or build-it-yourself. “We're doing slab on grade, with no raised floors, and an open air ceiling, so our sites look very much like the sites those hyperscalers own and operate, not the facilities they buy wholesale.” The cooling arrays (Schaap won’t use the word “fan-wall” as it’s someone else’s trademark) are also familiar to hyperscalers: “They’ve been around for 50 years. Amazon, Microsoft, Google, everybody's using them in some form or fashion. Even Facebook, use them up in Prineville.” There’s a difference, though, which may give Aligned’s ideas a wider appeal: “Ours is a closed loop system versus an open system. It doesn't use any outside air. We can still win big financial services or Fortune 500 firms, who might feel that what Facebook does is too risky for them. “Ours is in that great middle ground. It’s more advanced than any colocation operators, but it's not so specific to one hyperscaler that it would hurt our ability to win deals with the enterprise or other technology companies just below the hyperscalers.” Returning to speed, Schaap tells us how in Salt Lake City, Aligned turned a former Fairchild Semiconductor facility into a data center, and had a customer in place in six months. In Ashburn, the company built two data halls with 12MW of data center space, expandable to 60MW, in less than six months, which Schaap says is “among the fastest building permit-to-commissioning construction projects in the history of Ashburn's critical infrastructure.” This speed is made possible by advanced site selection and good supply chain methodologies, he said. “We don’t have any loss on our mechanical and electrical plant (MEP) or stranded capacity in our buildings,” and Aligned uses modular power and cooling, with electrical and mechanical inventory prefabricated and tested in the factory before delivery.
All this requires finance, and Schaap describes his backing as “sophisticated” capital, with investors that “understand infrastructure, and have committed to supporting the company’s growth.” They are also committed to efficiency, he said. The biggest investor is Macquarie Infrastructure Partners, which Schaap described as “the largest infrastructure provider on the planet. They're also the largest green energy producer. In 2017, they bought the Green Investment Bank in London, and the Green Bank is the largest financier of green energy projects globally.” Which brings us back to the company name. Aligned is not an energy company, but it’s connected to that world.
Issue 36 ∞ March 2020 23
Don't risk it
Let's end hot work
Working on live circuits is dangerous and not even useful. It's time to phase it out
Kevin Heslin, Uptime Institute
Despite years of discussion, warnings and strict regulations in some countries, hot work - the practice of working on live electrical circuits - remains a contentious issue in the data center industry. Hot work is done, in spite of the risks, to reduce the possibility of a downtime incident during maintenance, but Uptime Institute advises against it in almost all instances. The safety concerns are just too great, and data suggests work on energized circuits may - at best - only reduce the number of manageable incidents, while increasing the risk of arc flash and other events that damage expensive equipment and may lead to an outage or injury. In addition, concurrently maintainable or fault-tolerant designs as described in Uptime Institute's Tier Standard make hot work unnecessary. In the US, electrical contractors have begun to decline working on energized circuits, even if an energized work permit has been created and signed by appropriate management. This permit is required by the National Fire Protection Association (NFPA) 70E, an electrical safety standard which sharply limits the situations in which hot work is allowed. The US Department of Labor's Occupational Safety and Health Administration (OSHA) has repeatedly rejected business continuity as an exception to hot work restrictions, making it harder for management to justify hot work and to find executives willing to sign the energized work permit.
OSHA statistics make clear that work on energized systems is dangerous, especially for construction trades workers. And, while personal protective equipment (PPE) can protect workers, an arc flash can destroy many thousands of dollars of IT gear. Ignoring local and national standards can be costly. OSHA reported 2,923 lockout/tagout and 1,528 PPE violations in 2017 - and the minimum penalty for a single violation exceeds $13,000, with fines for numerous, willful and repeated violations running into millions of dollars. Wrongful death and injury suits add to the cost, and violations can put up insurance premiums. A recent Uptime Institute roundtable agreed that firms still demanding hot work should prepare to end the practice. Senior management is often the biggest hurdle, despite the well-documented risks, because they are concerned about power supplies or have failed to maintain independent A/B feeds. In some cases, service level agreements forbid powering down equipment.
A clear trend
By 2015, more than two-thirds of facilities operators had already eliminated hot work, according to Uptime Institute data. Tighter regulations, safety concerns, increased financial risk and improved equipment should all but eliminate it in the near future. But there are still holdouts, and the practice is far more acceptable in countries such as China. Fundamentally, hot work does not even eliminate IT failure risk. Uptime Institute's abnormal incident report (AIR) data shows
"There are still holdouts, and the practice is far more acceptable in countries such as China" at least 71 failures occurred during hot work - but a more careful analysis found that better procedures or maintenance would have made it possible to perform the work safely on de-energized systems. The AIRs database includes only four injury reports, all of which occurred during hot work. There are 16 reports of arc flash, one occurring during normal preventive maintenance and another during an infrared scan. Eliminating hot work can be a difficult process. One large retailer has said it expects the transition to take several years. And not all organizations succeed: Uptime Institute is aware of at least one organization where plans to ban hot work were pulled after incidents involving failed power supplies. Several Uptime Institute Network members told us that building a culture of safety is the most time-consuming part of the transition, as data centers are goaloriented organizations, well-practiced at developing and following programs to identify and eliminate risk.
The future
It is not wise to eliminate all hot work at once. The IT team can slowly retire the practice by eliminating the most dangerous hot work first, building experience on less critical loads, or reducing the number of circuits affected at any one time. The Operations team can increase scrutiny on power supplies and ensure that dual-corded servers are properly fed. In early data centers, the practice of hot work was understandable - necessary, even. Modern equipment and higher resiliency architectures based on dual-corded servers make it possible to switch power feeds in the case of an electrical equipment failure, improving availability, and allowing equipment to be isolated for maintenance.
Sponsored by
INSIDE
> Smart Energy | Supplement Super smarts
Server success
Demand response
> NREL turns to AI Ops for the ultimate system
> Finding energy savings at the rack level
> Alternate strategies for smart batteries
Contents
28 Super smarts: How NREL uses AI Ops to save energy
30 Energy efficient servers: Intelligence cuts energy use
32 E+I Advertorial: Busways to prevent arc flash
34 Demand response: Data centers can save the utility grid - but why should they?
37 Keeping up with 5G's power demands: 5G is set to usher in higher data transfer speeds, enabling a new wave of computing technologies
Smart Energy: thinking how to cut energy use
Work smarter, not harder. It's a good motto for life, but also for data center hardware and infrastructure. This special supplement is about how AI and human intelligence can be applied right now to digital infrastructure, to save Watts of power demand. We've mostly looked inside the facility, with a foray into the wider world.
Human intelligence has plenty to contribute, in redesigning racks, servers, and cooling systems, to reduce energy waste. We may be approaching the limits of efficiency achievable through the approach of PUE, which reduces the energy used in cooling the facility. The next step after that is to concentrate on the energy used in the IT equipment. Alex Alley tracks the trends in today's data centers, which may be pointing to a need for new ideas in future (p30).
Common sense can save lives
Lives, as well as equipment, are lost to arc flash, according to E+I (p32). One way to reduce that risk is to use an intelligent busway design that includes protective housing and safety features.

Supply and demand are well understood in economics, but in data centers, they are still getting to know each other (p34). Data centers want energy, they want reliability, and they also want to use renewable power sources. It's increasingly likely that they won't be able to get all three, unless they start to work with the utilities. Renewables are intermittent. At a certain point, utilities can't switch on any more solar or wind power, until sites with stored power start to share it. That's the hurdle demand response is going to address - and it's a challenge for data centers to deliver on their promises to enable green energy.
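To make the demand-response idea more concrete, here is a minimal sketch of the decision a site controller might make each hour: run on stored power when the utility signals a peak, and recharge when power is cheap and plentiful. The battery size, price feed, and thresholds are invented for illustration - they are not taken from any article in this supplement.

```python
# Toy demand-response dispatcher: discharge on-site batteries during grid peaks,
# recharge off-peak. All numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Battery:
    capacity_kwh: float
    charge_kwh: float

    def discharge(self, kwh: float) -> float:
        """Return the energy actually supplied, limited by remaining charge."""
        supplied = min(kwh, self.charge_kwh)
        self.charge_kwh -= supplied
        return supplied

    def recharge(self, kwh: float) -> None:
        self.charge_kwh = min(self.capacity_kwh, self.charge_kwh + kwh)

def dispatch(load_kwh: float, grid_price: float, battery: Battery,
             peak_price: float = 0.20) -> dict:
    """Decide how much of one interval's load comes from the battery vs the grid."""
    if grid_price >= peak_price:          # utility under stress: shed grid demand
        from_battery = battery.discharge(load_kwh)
    else:                                 # cheap power: run on grid and refill
        from_battery = 0.0
        battery.recharge(load_kwh * 0.1)  # assumed spare charging headroom
    return {"from_battery": from_battery, "from_grid": load_kwh - from_battery}

if __name__ == "__main__":
    battery = Battery(capacity_kwh=500, charge_kwh=500)
    for hour, price in enumerate([0.08, 0.09, 0.25, 0.30, 0.12]):
        print(hour, dispatch(load_kwh=200, grid_price=price, battery=battery))
```

Real programs hinge on utility-specific signals and contracts, but the trade-off is the same: the site gives the grid headroom at peak, in exchange for running on its own stored (ideally green) energy.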
A high-performance computing site at an energy efficiency laboratory was a natural place to develop the use of AI to make computing itself more efficient. The US NREL institution found that it takes a huge number of data points and a lot of effort training up algorithms to start making its supercomputers run more efficiently, and to intervene when they work less well (p28). Another finding is that, even with this level of applied intellect, it's too early to completely hand over optimization to AI. We still need to check the working, or else the AI's recommendations run the risk of being impractical or just wrong.
5G could change everything - but only if we can actually provide the electric power it needs (p37). The history of cellular comms is a story of continuing change and improvement. Vlad-Gabriel Anghel shows how we can power the next revolution.
AI Ops at the scale of exascale
Super smarts
NREL's highly efficient data center is turning to AI to prepare us for exascale supercomputers, Sebastian Moss reports
The world's most efficient data center wants to get even better, using artificial intelligence to eke out more compute power with the same electrical energy. Building upon a wealth of data, the Energy Systems Integration Facility (ESIF) HPC data center hopes that AI can make its supercomputers smarter, and prepare us for an exascale future.
Nestled amongst the research labs of the National Renewable Energy Laboratory campus in Colorado, the ESIF had an average power usage effectiveness (PUE) of just 1.032 in 2017, and currently captures 97 percent of the waste heat from its supercomputers to warm nearby office and lab space. For the last five to ten years, researchers at NREL have used sensors to try to track everything happening in the facility, and within its two systems - the HPE machines Peregrine and Eagle. This hoard of data has grown and grown to more than 16 terabytes, just waiting for someone to use it. A little under three years ago, Mike Vildibill - then VP of HPE’s Advanced Technologies Group - had a problem. He was in charge of running his company’s exascale computing efforts, funded by the Department of Energy. “We formed a team to do a very deep analysis and design of what is needed
to build an exascale system that is really usable and operational in a real world environment,” Vildibill, now HPE’s VP & GM of high performance networking, told DCD. “And it was kind of a humbling experience. How do we manage, monitor and control one of these massive behemoth systems?” Vildibill’s team started with a brute force approach, he recalled: “We need to manage and monitor this thing, we have to collect this much data from each server, every storage device, every memory device, and everything else in the data center. We've got to put it in a database. We've got to analyze it, and then we’ve got to use that to manage, monitor and control the system.” With this approach in mind, the group did a rough calculation for an exascale system. “They came back and told me that they can do it, but that the management system that has to go next to the exascale system would have to be the size of the largest computer in the world [the 200 petaflops Summit system],” he said: “Okay, so we’ve stumbled across a real problem.” At the time, Vildibill was also looking into AI Ops, the industry buzzword for the application of artificial intelligence to IT operations. “We realized we needed AI Ops on steroids to really manage and control in an automated manner - a big exascale system,” he said. To train that AI, his team needed data lots and lots of data. Enter NREL. “They have
Sebastian Moss Deputy Editor
all this data, not just for the IT equipment, but for what we call the OT equipment, the operational technologies, the control systems that run cooling systems, fans, and towers, as well as the environmental data. “We realized that that's what we want to use to train our AI.” Armed with a data set with a whopping 150 billion sample points, Vildibill’s team last year announced a three year initiative with NREL to train and run an AI Ops system at ESIF. “Our research collaboration will span the areas of data management, data analytics, and AI/ML optimization for both manual and autonomous intervention in data center operations,” Kristin Munch, manager for the data, analysis and visualization group at NREL, said. “We’re excited to join HPE in this multi-year, multi-staged effort - and we hope to eventually build capabilities for an advanced smart facility after demonstrating these techniques in our existing data center.” Vildibill told DCD that the project is already well underway. “We spent several months ingesting that data, training our models, refining our models, and using their [8 petaflops] Eagle supercomputer to do that, although in small fractions - we didn't take the whole supercomputer for a month, but rather, we would use it for 10 minutes, 20 minutes here and there. “So we now have a trained AI.”
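A little arithmetic shows why the brute-force approach worried Vildibill's team. In the sketch below, only the 16 terabytes and 150 billion samples are NREL figures quoted in this article; the node count, sensor count, and sampling rate for a hypothetical exascale machine are our own assumptions.

```python
# Back-of-the-envelope telemetry sizing, in the spirit of the "brute force"
# estimate described above. Node/sensor/rate figures are assumptions; only the
# NREL numbers quoted in the article are real.

NREL_SAMPLES = 150e9          # historical sample points cited for ESIF
NREL_BYTES   = 16e12          # roughly 16 TB of accumulated data
print(f"Implied size per sample: {NREL_BYTES / NREL_SAMPLES:.0f} bytes")

# Hypothetical exascale system (all assumed):
nodes            = 10_000     # assumption
sensors_per_node = 1_000      # assumption: IT plus OT telemetry channels
samples_per_sec  = 1          # assumption: 1 Hz per sensor
bytes_per_sample = 100        # assumption, in line with the implied NREL figure

per_day = nodes * sensors_per_node * samples_per_sec * bytes_per_sample * 86_400
print(f"Raw telemetry per day: {per_day / 1e12:.1f} TB")
print(f"Per year: {per_day * 365 / 1e15:.2f} PB")
```

Even at these conservative rates, the raw stream dwarfs the archive NREL built up over years - which is the scaling problem an automated AI Ops pipeline is meant to absorb.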
The system has now progressed to a stage, Vildibill revealed, that it can “do real time capturing of the data, put it into a framework for analytics and storage, and do the prediction in real time because now we have it all together. “We did 150 billion historical data points. Now we're in a real time model. That’s the Nirvana of all of this: Real time monitoring, management and control.” But, for all its value, data from Eagle and the outgoing 2.24 petaflops Peregrine can only get you so far. Exascale systems, capable of at least 1,000 petaflops, will produce a magnitude more data. “The next steps we're doing within NREL is just to bloat or expand the data that they're producing,” Vildibill said. “Like for example, if one sensor gives one data point
every second, we want to go in and tweak it and have it do a 100 per second. Not that we need 100 per second, but we're trying to test the scalability of all the infrastructure in planning for a future exascale system.” All this data is ingested and put into dashboards that humans can (hopefully) understand. "I could literally tell you 100 things we want on that dashboard, but one of them is PUE and the efficiency of the data center as a result."
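As an illustration of the kind of dashboard metrics being described, the snippet below derives PUE (total facility power divided by IT power) and an average water-evaporation figure from a few telemetry samples. The readings and field layout are invented; this is not NREL data, nor part of HPE's AI Ops software.

```python
# Minimal sketch of two dashboard metrics mentioned above: PUE and a
# water-usage figure. Sample readings are invented for illustration only.

samples = [
    # (total facility kW, IT load kW, cooling water evaporated, litres/min)
    (1032.0, 1000.0, 6.2),
    (1045.0, 1010.0, 6.5),
    (1020.0,  995.0, 6.0),
]

def pue(total_kw: float, it_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    return total_kw / it_kw

avg_pue = sum(pue(total, it) for total, it, _ in samples) / len(samples)
avg_water = sum(water for _, _, water in samples) / len(samples)

print(f"Average PUE over window: {avg_pue:.3f}")
print(f"Average water evaporated: {avg_water:.1f} litres/min")
```

By this definition, ESIF's reported PUE of 1.032 means only around three percent of its power is overhead on top of the IT load.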
As an efficiency metric for data centers, PUE has some detractors, but it’s good enough for NREL. “That's what NREL cares about, but we're building this infrastructure for customers who have requirements that we don't even yet know,” Vildibill said. He noted that the system "might do prediction analysis or anomaly detection," and we “can have dashboards that are about trying to save water. Some geographies like Australia worry as much about how much water is consumed by cooling a data center as they do about how much electricity is consumed. That customer would want a dashboard that says, how efficiently they are using their data center by the metric of gallons per minute that are being evaporated into the air. “Some customers, in metropolitan areas like New York, are really sensitive to how much electricity they used during peak time versus off hours because they've got to shape their workload to try to minimize electrical usage during peak times. Every customer has a different dashboard. That was the exciting thing about this program.” It’s still early days though, Vildibill cautioned, when asked whether the AI Ops program would be available for data centers that did not include HPE or (HPE-owned) Cray equipment. “That's a very fair question,” he said. “We're really excited about what we're doing. We're onto something big, but it's not a beta of a
product. It is still advanced development. So the question you ask is exactly the very first question that a product team would and will ask and that is: 'Okay, Vildibill, you guys are on something big. We want to productize it. First question, is it for HPE or is it going to be a product for everybody?' And I don't think that that decision even gets asked until later in the development process." Alphabet's DeepMind grabbed headlines in 2016 with the announcement that it had cut the PUE of Google's data centers by 15 percent, and expected to gain further savings. It also said that it would share how it did it with the wider community, but DCD understands the company quietly shelved those plans as the AI program required customized implementations unique to Google's data centers. "I can tell you this - and I'm putting pressure on the future product team that's going to have to make these decisions - but everything I'm describing is entirely transferable," Vildibill said. "In fact, we envision this being something that could even be picked up by the hyperscalers. It would be very ready for use to manage cloud infrastructure, in addition to being used by our typical customers, both HPC and enterprise, that are running on-premises. "What I'm driving with this design is entirely transferable. I think, if it's not, then you depreciate its value entirely."

The six layers of AI Ops
For the AI Ops program, Mike Vildibill breaks the system down into six layers through which data travels. First is the data source: "we've got to be able to ingest historical data, capture telemetry from servers in real-time, and so on.” Then comes the aggregation and processing of that data: “We pre-process it to make it usable. Instead of 20 different formats or log files, we have to unify it into a format that everyone can understand." Third is data analytics, followed by AI and machine learning techniques in the fourth layer. In the layer after that, AI begins to predict and advise the user. For instance, if the system sees a large number of corrected errors on a DIMM in a specific rack, it could recommend that node should be replaced. Finally, layer number six will provide automation and control: "This is the big objective. Instead of simply advising what a human should do, the system goes in. For example, it turns off a node, if it has predicted that it's going to fail.” That final stage is still a way off, however: “This program is really touching upon the first five of those six, and we want to get a couple of years of a strong prediction and advice capability under our belt first.” The current focus is to make the process behind the first five layers more transparent to humans. "I want to be able to make sure that, by the time a piece of advice or prediction comes out, we have a very clear understanding of what data led to that prediction, so that we can go back and audit the decisions that are being made," said Vildibill. "I don't think we can get all the way to automatically controlled systems unless humans can understand the factors that led to a decision," he concluded. "We can't just hand it over to the AI and say 'I don't know what you're doing. I hope it works out.' I think there's going to be a lot more research involved before we really turn the systems over to complete automated control."
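To make the fifth, 'predict and advise' layer more concrete, here is a minimal sketch of the sort of rule it could encode, using Vildibill's own example of corrected DIMM errors on a node. The thresholds, record format, and function are invented for illustration; they are not HPE's or NREL's implementation.

```python
# Minimal sketch of an "advise" rule in the spirit of layer five described
# above: recommend replacing a node when corrected DIMM errors keep climbing.
# Thresholds and record format are invented assumptions.

from collections import defaultdict

ERROR_THRESHOLD = 500      # corrected ECC errors per day (assumed)
TREND_DAYS = 3             # consecutive days the count must rise (assumed)

def advise(telemetry):
    """telemetry: iterable of (day, node_id, corrected_dimm_errors)."""
    history = defaultdict(list)
    for day, node, errors in sorted(telemetry):
        history[node].append(errors)

    advice = []
    for node, counts in history.items():
        recent = counts[-TREND_DAYS:]
        rising = all(b > a for a, b in zip(recent, recent[1:]))
        if len(recent) == TREND_DAYS and rising and recent[-1] > ERROR_THRESHOLD:
            advice.append(f"node {node}: corrected DIMM errors rising "
                          f"({recent}); schedule memory replacement")
    return advice

if __name__ == "__main__":
    readings = [(1, "r12-n07", 220), (2, "r12-n07", 480), (3, "r12-n07", 760),
                (1, "r03-n01", 15),  (2, "r03-n01", 12),  (3, "r03-n01", 18)]
    for line in advise(readings):
        print(line)
```

Layer six, in Vildibill's scheme, is what happens when a rule like this is trusted enough to act on its own output - draining and powering off the node - rather than merely advising a human.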
Efficiency efforts move into the IT stack
Data center power consumption is costly - but servers are getting more efficient, Alex Alley reports
Data centers bring together a large number of servers in one place, and run applications on them. Whether they are enterprise, colocation or cloud data centers, they have to operate 24x7 to support those mission-critical applications so, as data centers emerged, the first priority was to build in reliability. Once the reliability task was done, costs and efficiency came to the fore. Those early data centers were over-engineered and overcooled to ensure reliability, but it quickly became apparent that more than half the energy they consumed went into keeping the hardware cool, and less than half was actually used in computation. Ten years of working on the efficiency of cooling systems has given us a current generation of facilities with a power usage effectiveness (PUE) of 1.2 or less, meaning more than 80 percent of the power they use is burnt in the servers themselves. So now, it's time to start looking in more detail at the power used by servers, as a major component of the energy used
by data centers. In February, the Lawrence Berkeley National Laboratory co-wrote a report commissioned by the US Department of Energy, which revealed some interesting statistics. Firstly, the study confirmed an oft-quoted rule of thumb, that data centers now consume a small but significant part of global energy. However, while the word on the street has been cranking up to around two percent, the DOE report reckons it was closer to one percent in 2018. That sounds like a manageable figure, but it masks areas where data centers have become a burden. For instance, Ireland is facing a boom in data center building, and has a limited ability to grow its grid. The Irish Academy of Engineering has predicted that in 2027, 31 percent of all power on the grid will go to data centers. Secondly, and more interestingly, the report shows that this overall figure is not growing as fast as some had feared. Over the past decade, things have dramatically changed. In 2018, data center workloads and compute instances increased more than six-fold compared to 2010, yet
Alex Alley Reporter
power usage only went up by six percent. "Increasing data center energy efficiency is not only an environmentally friendly strategy but also a crucial way of managing costs, making it an area that the industry should be prioritizing," Jim Hearnden, part of Dell Technologies' EMEA data center power division, told DCD. "Most IT managers are keen to increase their energy efficiency in relation to their data center, particularly when doing so also helps improve performance and reduce cost." It's clear that data centers have seen huge efficiency gains - and as one would expect from the PUE figures, the majority of these have been in the cooling side of the facility. But during that same eight-year period, server energy consumption went up by 25 percent. That's a substantial increase, although it's a much smaller uptick than the six-fold increase in workloads the study noted. It's clear that servers are also getting more efficient, gaining the ability to handle higher workloads with less power. Much of this is down to more powerful processors. We are still in the era of Moore's Law, where the number of transistors on a
chip has been doubling every two years, as predicted by Gordon Moore, the one-time CEO of Intel. More transistors on a chip means more processing power for a given amount of electrical energy, because more of that computation can be done within the chip, using the small power budget of on-chip systems, without having to amplify the signals to transmit to neighboring silicon. Moore’s Law implies that the computational power of processors should double every 18 months, without any increase in electrical energy consumed, according to an observation by Moore’s colleague David House in 1975. As well as in the processors, there’s been waste energy to be eliminated in all the components that make up the actual servers in the data centers. Supermicro makes “white-label” processor
“Despite people doing more, they are doing it with less electric power"
boards used by many large data center builders, and it has been hard at work to shave inefficiencies off its servers, according to Doug Herz, senior director of technical marketing at the company. "The data center's electric power consumption in the US has started to flatten off," he told DCD in an interview. "It's not going up that fast due to a number of energy-saving technologies. Despite people doing
more, they are doing it with less electric power.” Supermicro has spotted the part of the puzzle where it can help: “Manufacturers have not focused on idle servers and their cost,” Herz said. “And newer management software can aid in keeping that consumption down.” A five-year old server can use 175W when it is idle, which is not that much less than when it is in use. Idle server power consumption has improved over recent years, but still Herz estimates that data centers with idle servers can be wasting a third or even a half of the power they receive. Newer management software can balance workloads, distributing tasks so servers spend less time idling. “This software is used not only to monitor the servers in your data center but also to load balance the servers in your data center and optimize the electric power,” Herz said. “If you have a set amount of workloads that you have to distribute over a certain number of servers in your data center, maybe there are more efficient ways to go about it. Try optimizing the servers in your data center so that you're running some of them at full capacity. And, that way you're able to get economies of scale.” Further up the stack, it’s possible to optimize at a higher level, where the server power use shades over into the cooling. For instance, French webscale provider OVH takes Supermicro boards and customizes its servers, with specially-adapted racks and proprietary water cooling systems. Small watertight pockets are placed on hot components to conduct heat and transport it away. “It makes good business sense,” OVH’s chief industrial officer, Francois Sterin, told DCD. “The goal is that our server needs to be
very energy and cost-efficient." OVH has around 400,000 servers in operation, and its process is just as software-driven as Supermicro's, Sterin told us: "We submit a server to a lot of different tests and environmental tests. This allows us to measure how much energy the rack is consuming. The goal is that our server needs to be energy and cost-efficient." It's clear that energy efficiency is now top of mind at all levels of the data center stack. More efficient server chips are being managed more effectively, and used more continuously, so they crank out more operations per Watt of supplied power. At the same time, those servers are being cooled more intelligently. Liquid cooling is ready to reduce the energy demand on cooling systems, while conventional systems are being operated at higher temperatures so less energy is wasted. We know that Moore's Law is reaching the end of its reign, however. Chip densities can't go on increasing indefinitely, delivering the same rate of increase in performance per Watt. If we've made cooling as efficient as possible, and chip efficiency begins to level out, where will the next efficiency gains be found? One possibility is in the software running on those processors: how many cycles are wasted due to inefficient code? Another possibility is in the transmission of power. Between eight and 15 percent of all the power put into the grid is lost in the long-distance high-voltage cables that deliver it. To reduce that would require a shift to a more localized power source, such as a micro-grid at the data center. The data center sector has great needs, and plenty of ingenuity. The next stage of the efficiency struggle could be even more interesting.
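One of the levers described above - using management software to consolidate work so fewer machines sit idling - can be illustrated with some simple arithmetic. The fleet size and power figures below are assumptions (the 175W idle draw echoes Herz's five-year-old server example); the resulting percentages are illustrative, not taken from the DOE report.

```python
# Rough sketch of why consolidation saves energy: compare a fleet where work is
# spread thinly (many idling servers) against one where idle machines are
# parked. Power figures are assumptions; 175W idle echoes the example above.

IDLE_W, BUSY_W = 175, 400          # assumed per-server draw
SERVERS, UTILISATION = 1000, 0.30  # assumed fleet size and average load

def fleet_power(servers_on: int) -> float:
    """Total draw in kW when `servers_on` machines host all the work."""
    busy = SERVERS * UTILISATION   # server-equivalents of useful work
    idle = servers_on - busy
    return (busy * BUSY_W + idle * IDLE_W) / 1000.0

spread_out   = fleet_power(SERVERS)            # every server powered on
consolidated = fleet_power(int(SERVERS * 0.4)) # park 60 percent of machines

print(f"All servers on:     {spread_out:.0f} kW")
print(f"Consolidated fleet: {consolidated:.0f} kW")
print(f"Saving:             {(1 - consolidated / spread_out):.0%}")
```

Under these assumed numbers the consolidated fleet draws around 40 percent less power for the same work - a sketch of the "third to a half" of power Herz suggests can be wasted on idle machines.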
Advertorial | E+I

Reducing Data Centre Arc Flash Risk Through Innovative Busway Design

It is estimated that more than 30,000 arc flash incidents occur each year, resulting in an average of 7,000 burn injuries, 2,000 hospitalisations and 400 fatalities.
The NFPA 70E standard describes arc flash as “a dangerous condition associated with the possible release of energy caused by an electric arc.” An arc flash occurs when a surge of electrical current, caused by a short circuit, flows through the air from one energised conductor to another. This results in a release of ‘incident energy,’ expressed as an explosion of heat and pressure into the external environment. Despite the transient nature of arc flash incidents, they have the potential to reach temperatures of up to 35,000°F.

Arc Flash Risk in the Data Centre

As the demand for data increases, modern data centres are now seeking higher power capacities, higher rack densities and higher efficiency designs, all of which have an impact on arc flash risk. As power capacity and rack densities increase, all things being equal, so too does the available fault current. Ironically, the quest for higher
efficiency designs can also increase the risk of arc flash in the data centre. Transformers represent one of the highest losses in electrical power distribution; however, they also provide inductive and resistive impedance that limits fault current. To reduce losses, modern data centres are being designed with fewer, larger transformers than traditional data centres. But as this also reduces electrical impedance within the power system, the trend towards higher efficiency designs tends to increase the available fault current and the risk of arc flash. The consequences can include lost work time, downtime, fines, medical costs, lost business and equipment damage - and, most importantly, arc flash presents a severe safety risk to human life.

Intelligent Medium Powerbar – Safety by Design

E+I Engineering’s iMPB open-channel busway solution has been manufactured with the safety of the installer and user as
the number one priority. iMPB is the safest and most flexible open-channel busway system on the market, due to a range of in-built features that greatly reduce the likelihood of arc flash in the data centre. These features are designed to ensure continuity of power in a world that demands 24/7 uptime, whilst also making operator safety a priority. Arc flash testing according to IEC/TR 61641:2014 has been completed for E+I Engineering’s full iMPB range.

Protective Housing

iMPB lengths are designed as an open track system where tap off units can be plugged in anywhere along the bar. The assembly is designed to exceed the minimum ‘finger safe’ requirements of both the UL and IEC standards, which greatly reduces the risk of accidental contact and hence the risk of arc flash. The copper conductors are fully isolated from the housing using a certified thermoplastic material; the insulation has excellent dielectric strength
and is impact resistant. The lengths are connected using custom-designed, thermally and electrically secure joint packs that can easily be disassembled and reassembled. iMPB adds an enhanced layer of safety with a closure strip that can be fitted over any area of the busbar not connected to a tap off unit. This increases the protection to IP3X, further reducing the risk of an arc flash.

Mechanical MCB Interlock

An MCB Safety Interlock can be integrated into E+I Engineering’s iMPB product to prevent the tap off unit being fitted to the bar while the MCB is in the ‘On’ position. Similarly, the tap off unit can only be removed from the busbar when the MCB is in the ‘Off’ position. The MCB can only be switched on when the contacts are fully engaged with the busbar. This provides users with an extra layer of safety when fitting or removing tap off boxes from the busbar.
The mechanical interlock secures the tap off box to the busbar using high tensile strength lockable hardware which cannot be fitted incorrectly. Once fitted to the bar, the engager handle can be turned. This lifts the contacts into the busbar and has a positive lock once fully rotated. Making this mechanical connection between the tap off unit and the busway prior to any electrical connection ensures that there is no risk of an arc flash incident when installing iMPB tap off boxes to the busbar.

“Ground First, Break Last” Technology

iMPB tap off units are fitted to the busbar using E+I Engineering’s unique ‘earth first, break last’ safety feature. Each tap off unit interlocks onto the distribution length with a ground strip. This ensures that the ground is the first point of contact with the busbar system during installation, achieving a lower fault current and lower fault clearance time, as excess current will always exit the busway system through the grounding strip.
Hook Operated Tap-Off Units

iMPB can be installed vertically or horizontally depending on project requirements. However, it is typically ceiling mounted above the rack. For safer operation, E+I Engineering have introduced a hook operated tap-off unit which can be switched on and off from the floor using a simple hook mechanism. This adds a further level of protection, as users have no direct contact with the tap-off box while it is energised.

In mission critical environments, which constantly demand higher levels of power, the dangers of arc flash can never be completely eliminated, but they can be controlled. This is done by understanding where the potential hazards are and taking steps to mitigate them. In the open channel iMPB busbar product, E+I Engineering have implemented a range of safety features to minimise the risk of an arc flash incident, preserving both operator safety and system efficiency.
Responding to demand response demand
Demand response - is there a demand?

Data centers can play a role in cutting emissions and reducing the strain on the grid. But why should they?

Peter Judge Global Editor
In a bid to reduce emissions, renewable sources are being used where possible - but this creates new problems for the grid, making it harder to match generation and consumption. Data centers could help to create a balance, through techniques referred to as “demand response” - but so far it’s proven difficult to enlist their help.

All the world’s economies are attempting to reduce carbon emissions by increasing
the share of renewable sources in their electricity generation, and reducing that provided by fossil fuels. However, there are two problems with this.
Firstly, apart from hydroelectric power, renewables are mostly intermittent. Solar panels and wind turbines only deliver energy when the sun shines or the wind blows, and can’t be switched on as required. And secondly, the fossil fuel-powered capacity
that is being retired is exactly the steady, readily available capacity that the grid needs, providing a continuous baseload, and also extra flexible capacity as needed. The electricity grid has to satisfy a fluctuating demand - and there are two big factors where long term policies designed to reduce emissions could actually add to the burden on the grid. Electricity is being proposed as a replacement for fossil fuels in cars and heating. But this will increase
the demand for electricity - and it only reduces emissions if green electricity can be increased to match that demand.

To respond to changing supply and demand, the grid has to become flexible. According to Mohan Gandhi, a civil engineer and analyst at the New Bridge Founders think tank: “As intermittent renewables penetrate further into the generation mix, flexibility becomes an increasingly important feature of the electricity system.”

Some of this flexibility is based on moving electricity to where it is needed, but transmitting power is costly and involves losses - and the cables may not even be there: “Renewables are actually being built faster than cables can be laid,” says Gandhi. “In Germany, wind generation in the north has grown enormously, but the interconnection cables between the north and south are yet to be built.”

Instead of moving electricity around, another approach is to shift demand towards times when electricity is cheaper or more available - an approach dubbed “demand response.” This can be as simple as offering consumers a cheaper tariff for night-time electricity (in the UK often referred to as Economy 7). In industry, energy use is more concentrated, and there is potential for more advanced methods including on-site generation and stored energy, so industrial sites can temporarily shift their load completely off the grid, or even become an energy source, feeding power into the grid.

“Demand response is often the most economical form of flexibility because it requires few new transmission or distribution investments,” says Gandhi. It also has a lot of potential: the European Commission estimates that Europe as a whole could deliver 100GW of demand response power (and this is expected to rise to 160GW by 2030). However, the European grid is currently only accessing around 20GW of available demand response capacity. Globally, Gandhi estimates that 20 percent of the world's electricity consumption will be eligible for demand response by 2040.

As a sophisticated and significant electricity user, believed to be using around one percent of the US grid’s output, digital infrastructure can play a big role here. “Data centers, with their real-time management and workload flexibility, are good candidates for demand response schemes,” says Gandhi. “They can ‘shift’ load outside peak hours, or deliver surplus energy stored in their batteries and on-site generators to the grid at times of undersupply.”
It’s been suggested that a group of data centers could help shift demand by migrating their loads amongst themselves to make use of the cheapest and greenest electricity at a given moment. “Many typical data center workloads are delay-tolerant, and could be rescheduled to off-peak hours,” says Gandhi.

There are drawbacks to this. Firstly, if a data center is running profitable workloads, then it costs money to move them elsewhere, and the most cost-effective use of that resource is to run it at capacity as long as possible. And secondly, the customer who owns the data may need to ensure that it is processed in a given location to comply with local regulations.
It’s actually possible for data centers to reduce their power demands without affecting IT workloads. Research by the Lawrence Berkeley National Laboratory (LBNL) found that energy consumption could be reduced by five percent in five minutes, and 10 percent in 15 minutes, by making changes such as setting a temporarily higher air temperature.

Beyond this, demand response approaches tend to use the facility’s uninterruptible power supply (UPS). This is designed to support the data center when the grid fails: there is an alternative source of power (usually diesel gensets), and some energy storage (typically batteries) that will support the data center while the local power starts up. Why not use the batteries, or even switch to diesel for a few hours, when energy is expensive? “When power is expensive, you can use energy from batteries, not the grid,” says Janne Paananen, technology manager of energy equipment company Eaton. “This gives savings in cost of energy. You can do it yourself.”

Beyond the DIY approach, there are systems managed by the utilities, which work in a surprisingly simple way. The grid frequency in the UK is 50Hz (plus or minus one percent), but it varies at heavy loads. The utilities use this to regulate the power - the grid detects the change in frequency and uses that to switch on extra capacity. Because there are industries with their own generating capacity for backup, the electricity industry has come up with a scheme called Firm Frequency Response (FFR), in which those third-party resources are turned on in response to the same change in frequency. Data center UPS systems are designed to switch on immediately, and can be hooked into this sort of scheme. FFR is in operation in Ireland and likely to come onstream in the UK shortly.

Eaton is working with the FFR scheme in Ireland, says Paananen. ”With fast frequency
response, you are providing services for the grid, and getting revenue by providing those services. Instead of responding to the cost of energy, you respond to a real-time signal.”

In Ireland, facilities on the FFR program get a signal roughly once a month, says Paananen. “Normally the frequency deviation lasts for only a few seconds.” This is a level of usage that traditional lead-acid batteries can readily support - and if FFR takes the place of a scheduled battery test, it can actually create less stress on the system.

These systems are proven, says Paananen. On 9 August 2019 in the UK, two power plants went down, the frequency of the grid changed, and that signaled various responses, so numerous factories and facilities went off-grid.

All this has been possible for years, but - as with any new idea - the big hurdles are making it pay, and gaining users’ trust. Utilities are prepared to offer cheaper electricity at different times, and even pay consumers to take themselves off-grid. But will data centers take them up on this?

Back in 2013, Ciaran Flanagan of ABB told DCD: “Demand response programs (DRPs) have not only become a tool for grid operators to manage demand, but also a source of revenue for DRP participants. DRPs are in operation today in many commercial and industrial sectors but, ironically, data centers are largely non-participants, even though they are the fastest-growing part of the grid’s load.”
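
Conceptually, the trigger logic behind an FFR-style scheme is simple enough to sketch in a few lines. The loop below is illustrative only: the threshold, the support window, and the read_grid_hz and ups objects are invented for the example, and do not describe Eaton’s products or any real FFR program.

```python
import time

NOMINAL_HZ = 50.0
TRIGGER_HZ = 49.5     # the UK grid aims for 50Hz plus or minus one percent; real schemes set their own trigger
MAX_SUPPORT_S = 30    # how long this facility is willing to carry its load on batteries (illustrative)

def frequency_response_loop(read_grid_hz, ups):
    """Toy control loop: shift load onto the UPS batteries while grid frequency is depressed."""
    on_battery_since = None
    while True:
        hz = read_grid_hz()
        if on_battery_since is None:
            # Only respond if the UPS is healthy -- it must refuse when it is actually needed.
            if hz < TRIGGER_HZ and ups.healthy():
                ups.supply_load_from_battery()   # the grid sees this demand disappear
                on_battery_since = time.time()
        else:
            recovered = hz >= NOMINAL_HZ - 0.1
            timed_out = time.time() - on_battery_since > MAX_SUPPORT_S
            if recovered or timed_out:
                ups.return_load_to_grid()
                on_battery_since = None
        time.sleep(0.1)   # deviations typically last only seconds, so poll frequently
```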
A big reason for data centers’ reluctance is that they make profits from continuous availability, and sharing support systems might increase the risk of failure. “Participating in demand response programs may reduce the availability or lead to a higher risk of downtime. This risk is exacerbated by the potential surrendering of control to aggregators,” says Gandhi. “Data centers are typically in the business of avoiding downtime, minimizing risk and maximizing availability.” One operator put it more simply at a DCD event: “I put in that UPS to support my load when the data center is browning out. Why would I share that just when I most need it?” “The challenge is educating the market so they understand it is safe,” says Paananen, “there are safety features built in. The UPS will refuse to participate when it is needed.” Beyond that, the trouble is that any revenue from demand-response is small: “I’m not sure how much the extra revenue is meaningful for data centers.” He suggests that the return could be in automated maintenance with no service charge - at least that’s something that customers need.
In a colocation facility, an operator’s hands may be tied. The UPS may be shared by users, some of whom object to handing over control to a demand response scheme. In theory, hyperscale data centers, with monolithic applications under one organization’s control, have more freedom.

Paananen hopes that a different approach in use in Nordic countries may be a better fit for data centers. In 2017, Eaton launched a “UPS-as-a-reserve” (UPSaaR) service with Swedish energy supplier Fortum, a more flexible approach in which UPS batteries effectively act as part of a “virtual power plant” - and get paid around €50,000 per MW of power allocated to grid support. Similar schemes are now in operation in Finland and Norway.

It’s still early days, but Paananen has faith: “Things are progressing, but not nearly as fast as people hoped. The challenge is if you really want to get commercial benefits from it, you need big batteries.” Large deals with hyperscalers will take years to complete, he cautions.

Paananen thinks the Nordic scheme may be the most promising. Ireland uses FFR for “containment,” kicking in when the frequency has dropped significantly. But the Nordic UPSaaR schemes are more about fine-tuning or “regulating” the system when it is slightly out of line. In Nordic markets, the UPS gets used more often, for shorter periods, and with a faster response required: “It can be only 30
seconds, and consumes very little energy. That is very nice for a data center. The UPS can run at less than its full design load.” The trouble is that this kind of use demands more modern batteries: “There’s a more constant discharge in that scheme. You would need lithium-ion batteries.” It also requires critical functions in the UPS itself: “The UPS needs to understand, and follow external signals - and make its own decisions.”
With all that effort, an ironic problem can be that the utilities aren’t always keen, says Gandhi, due to the basic laws of supply and demand. Demand response programs are often implemented by third-party aggregators who have agreements with utilities and consumers, and pool the demands of a group of customers. Aggregators can hook into proprietary control systems like those of Eaton, or add intelligence to the operation of other vendors’ equipment.

Demand response systems led by aggregators help the consumers reduce consumption, and cut their costs. But why would utilities relinquish control and potentially lose revenue? More subtly, as a third party, the aggregator masks the real demand from the utility. “There is no incentive for electricity suppliers to include aggregators in their contracts with customers because this undermines many areas of their business
model,” Gandhi points out. Some governments are stepping in to demand the adoption of demand response, and regulators are getting involved. “The markets are not yet open in every country,” says Paananen.

One aggregator that is optimistic is Upside Energy, a UK operation that was adopted as a partner for data center demand response by equipment maker Vertiv. The pair have made no big wins yet, but Upside CEO Devrim Celal says “we are super excited about data centers, and that business will be increasing significantly in the next few years.” In the meantime, though, there’s plenty of interest from other sectors. “There’s good activity with cooling and refrigeration, and behind-the-meter cogeneration,” says Celal.

But the demand response proponents want data centers on board. Right now, large players like Google are paying to have more renewable energy connected to the grid with power purchase agreements (PPAs), but there are limits to this, says Paananen: “At some stage, you may not be able to use all that renewable energy. Demand response helps the grid to get more renewable energy in - so the only way data centers can get green energy is by helping the grid to get it.”

It could change perceptions, he goes on: “Instead of DCs being a problem, they are part of the solution, and help the grid to adapt.”
Power efficiency needs to keep up with 5G's demands

5G is set to usher in higher data transfer speeds, enabling a new wave of computing technologies. But this will put new pressure on energy efficiency, Vlad-Gabriel Anghel finds

Vlad-Gabriel Anghel Contributor
In 1979, Nippon Telegraph and Telephone (NTT) launched the first mobile cellular network in Tokyo, Japan. An analog system, it is now referred to as a “first-generation” or 1G network. By 1983, NTT had rolled out coverage throughout the whole of Japan, while other 1G networks were springing up in Europe and elsewhere. Motorola began service in 1983 in the US - where Bell Labs had proposed such a network as early as 1947, but dismissed it as impractical.

1G had a lot of drawbacks, including poor audio quality and limited coverage. There was no roaming support between networks, because there were no standards. However, it was revolutionary and paved the way for further development within the sector.

The next iteration, 2G, was a big improvement, using digital radio signals. It appeared in 1991 in Finland, and was launched under a standard - GSM - which promised the possibility of international roaming. Providing SMS and MMS, the new technology was adopted widely. Despite slow speeds and relatively small bandwidth, 2G revolutionized business for companies and customers on a scale never seen before.

3G evolved in the years leading up to 2000, with the aim of standardizing the network protocols used by vendors, thus providing truly international roaming. NTT Docomo launched the first 3G network in 2001. 3G offered speeds about four times
higher than the peak possible with 2G, and supported new protocols and solutions like VoIP, video conferencing and web access. This opened the packet switching era in mobile phone communications. While functions like Internet access struggled at first, the launch of the iPhone in 2007 stretched 3G’s capabilities to the limit. It was clear that 4G would be needed - and international standards bodies including the ITU have been working on it since 2002. In 2009, the 4G Long Term Evolution (4G LTE) standard got its first run in Sweden and Norway, and was rolled out over the next few years. 4G’s speed has enabled multiplayer gaming on the go, high quality video streaming and much more. However, the protocol is plagued by network patchiness in a lot of regions around the globe with some suffering extremely low 4G penetration.
Enter 5G

This all paved the way for 5G, which was being developed from the moment 4G was delivered and is now already deployed in certain areas. It promises massive improvements such as allowing up to a million devices per square kilometer - which could revolutionize a lot of sectors. The latest estimates from IHS Markit see 5G having an economic impact of at least $12 trillion as the focus shifts from connecting people to information towards connecting everything to everyone. Plus it promises an energy efficiency revolution in the field of
mobile communication networks. The Internet of Things era is now in full swing, with more devices connected to the Internet than there are people on Earth. All these devices send and receive data on an almost constant basis. While home IoT applications benefit from a local wireless network, business and industrial IoT will require untethered connectivity over long distances as well as enough bandwidth to accommodate all these applications at the same time. Currently there are so many devices connecting to both 3G and 4G networks that these networks are close to breaking point, and cannot add the growing number of fresh devices that require perpetual connectivity. This is where 5G comes in. It is faster, smarter and more efficient than its predecessors with a much higher bandwidth.
Antennas, spectrum, and base stations

Communications networks are increasingly finding that space, power consumption, and emissions are major economic and operational issues. Given the vast increase in connected devices, it is clear that energy efficiency will be a top concern for operators wishing to reduce overheads from both capital and operational expenditure. So how does 5G fare in terms of power consumption and efficiency compared to its predecessors?

5G’s design requirements specify a 90 percent reduction in power consumption compared to current 4G networks - a figure which is based on the whole ecosystem, from the base stations all the way to the energy used by the client device. This massive reduction happens through a combination of updated design practices, hardware optimizations, newer protocols, and smart software management of the underlying infrastructure.

Currently, mobile networks spend, on average, anywhere between 15 and 20 percent
of their overall power consumption on actual data traffic, with the rest being wasted mostly by keeping the components in a ready-to-operate state. 5G base stations will be able to go to sleep when network activity pauses. This is immensely important, as base stations account for around 80 percent of the power used by a mobile network. In current networks, the majority of base stations may be idle at any moment. Despite the increased number of devices and higher data rates, 5G may have more opportunities to power down base stations. Since 5G has a higher data transfer rate, data transits the network more quickly, increasing the time when the base station is idle. 5G data packets are more compressed, further reducing traffic volume.
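
Those two figures - roughly 80 percent of network power in base stations, and only 15 to 20 percent spent on actual traffic - can be turned into a very crude savings estimate. The idle-time fraction and sleep-state draw below are assumptions made purely for illustration, not measured 5G values.

```python
# Very crude model of the sleep-mode opportunity. The 80% (base stations) and
# ~20% (actual traffic) splits come from the article; everything else is assumed.
TOTAL = 1.0                                    # normalize total network power to 1
TRAFFIC = 0.20                                 # share actually spent moving data
READY_OVERHEAD = TOTAL - TRAFFIC               # share spent keeping gear ready to operate

BASESTATION_OVERHEAD = 0.80 * READY_OVERHEAD   # assume the overhead sits mostly in base stations
IDLE_TIME = 0.60                               # assumed fraction of time a small cell carries no traffic
SLEEP_RESIDUAL = 0.10                          # assumed draw while asleep, relative to staying ready

saving = BASESTATION_OVERHEAD * IDLE_TIME * (1 - SLEEP_RESIDUAL)
print(f"potential network-level saving from sleep alone: {saving / TOTAL:.0%}")
# ~35% with these assumptions -- useful, but well short of the 90 percent design
# target, which also counts newer silicon, beamforming, smaller cells and leaner protocols.
```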
5G specifies a power reduction of 90 percent over 4G - from the base station all the way to the client device

The new network also adds a multipath transport control protocol (MPTCP), which reduces the need for packet replication and retransmission and increases reliability as more network paths are created. This is possible because 5G uses cheap and efficient MIMO (multiple input multiple output) antennas. A MIMO system of antennae can handle more clients and bigger volumes of network activity, and increase reliability by providing more routes for the data to take. Should one route fail, another one can take its place.

MIMO antennas communicate with multiple clients using focused beams of radio waves (“beamforming”). This increases the channel efficiency along with data
transfer rates and reduces the possibility of interference. It also focuses radio energy directly towards the connected device, and can identify the exact amount of power and energy required, further reducing energy consumption for both the base station and the device itself.

Furthermore, 5G makes use of smaller network cells, covering a given area with a larger number of smaller antennas. In mobile networks, a cell refers to the base station, its antennas and the physical area they serve. 5G’s small cells are designed to be deployed inside large buildings or outside in highly populated areas. Power consumption increases with the distance between the base station and the client device (the antenna has to “shout”), so these smaller network cells should be deployed to keep the communication distance as small as possible.

Finally, through a combination of new scheduling algorithms, the spectral efficiency of 5G New Radio is massively improved over current networks. In 4G networks, the signal scheduling included a large number of control and verification codes at regular intervals, which could consume up to 20 percent of the network’s energy overhead during higher frequency transmissions. In 5G streams, the control and verification codes are greatly reduced. This is because the use of smaller mobile network cells cuts the communication distance, reducing the chance of interference or failure.

The scalability and flexibility of 5G networks are increased with software defined networking (SDN) and its related virtualization technologies. SDN works by decoupling the control layer of the wider network from its data plane and merging it into a centralized control plane, which has access and overview over the whole network. This means that the hardware
resources can be dynamically assigned by software to optimize traffic flow.
What does 5G actually deliver?

The 5G design specifies an almost utopian technology which promises an energy efficiency revolution across the whole mobile communication ecosystem, but these promises should be taken with a pinch of salt when analyzing how the technology will perform once deployed. The rollout of 5G to the public will be slow, mostly because of the relatively high initial investment required. So 5G coverage to the level of current 4G deployment is within sight, but will take a few years. Full coverage is well beyond that. For a long time, “legacy” mobile networks like 3G and 4G will account for the majority of a base station’s total energy consumption. The latest figures from Cisco’s Annual Internet Report 2018-2023 further emphasize this: “the top three 5G countries in terms of percent of devices and connections share on 5G will be China (20.7 percent), Japan (20.6 percent), and United Kingdom (19.5 percent), by 2023.”

Some analysts have argued that there may in fact be no energy savings when the technology is rolled out - depending on one’s definition of energy efficiency across 5G. For example, the computational power required to process 5G signals is around four times that of 4G. On comparable hardware, one would expect 5G’s data processing component to have four times the energy consumption. However, this is only true if 5G is rolled out on the same infrastructure that 4G currently runs on, without adopting the increased efficiency delivered by hardware manufacturers.

Mobile operators have been given significant spectrum for 5G networks, partly due to the imminent saturation of previous networks and the rise of industrial and commercial IoT. However efficient 5G is by design, it will add up to a further
increase in energy used by the total mobile communication infrastructure. This will be the case until (and unless) previous network generations are phased out.

This can have a domino effect on the wider digital infrastructure. The growth of 5G networks comes hand in hand with the rise and expansion of Edge data centers, so it seems reasonable to include the energy use of these facilities when discussing 5G’s efficiency. Putting critical resources closer to the network edge can reduce latency, so 5G plus Edge facilities may deliver individual client interactions more effectively. However, if we factor in the vast increase in data storage and processing needs of these applications over the coming decade, it seems that 5G will in fact simply enable an even more power-hungry digital infrastructure industry than today’s. This could potentially be offset by close collaboration between all the interconnecting industries, ensuring a powerful and reliable smart grid backed up by renewables and proper energy storage technologies.
Searching for standards

Standards are slow to emerge, as a consensus is needed between various parts of the communications and energy ecosystem. The number of bits transmitted per Joule of energy expended is now one of the top metrics used to analyze the efficiency of a network. Currently, a cellular site will deliver an energy efficiency of around 20kbit/Joule, and some research papers in the field forecast that 5G could boost this by more than two orders of magnitude, to 10Mbit/Joule.

The future looks even more exciting as technologies are already starting to appear that introduce novel ways to harness 5G. In “radio frequency harvesting,” the energy transmitted over radio waves can be captured and used on the client device or
parts of the underlying infrastructure. Since radio frequency signals carry both information and energy, it is theoretically possible to harvest some energy and receive some information from the same input. This system is known as SWIPT - simultaneous wireless information and power transfer. The hardware required for this is still in development, and there’s a rate-energy tradeoff between the amount of data and energy derived from a signal. So SWIPT will never charge your smartphone wirelessly, but it is a novel approach which could offset the power consumption required for data transmission on the client device.

The rollout of 5G is likely to be closely linked to the rise of the modular and containerized “Edge” data center market, which in turn is driven by the need to make communication as efficient as it can be in energy and latency terms, by placing time-sensitive functionality at the network edge.

It’s clear that energy efficiency will be key to all future developments in the digital infrastructure industry. In the next decade, we will see how 5G’s energy efficiency - and its impact on the world - plays out. Right now, the technology is in its infancy and there is not enough data to know what impact its energy consumption will have in real life. As every moving part of the industry becomes more knowledgeable and less risk averse, 5G could bring in an era of near-instantaneous communication, with a plethora of new industries and markets making every effort to minimize their impact on the world through collaboration and smart infrastructure deployment and management. The roll out of this Edge-based technology will bring huge changes to data centers and other infrastructure. But for that to happen, companies have to ensure that efficiency is paramount, as they rush to compete for the sector.
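
To put the efficiency figures quoted above into more familiar units, the quick conversion below turns them into energy per gigabyte transferred. The 20kbit/Joule and 10Mbit/Joule numbers are the estimates cited in the “Searching for standards” section; the arithmetic is the only thing added.

```python
GB_BITS = 8e9   # bits in one gigabyte
for label, bits_per_joule in [("current cellular site (~20 kbit/J)", 20e3),
                              ("forecast 5G (~10 Mbit/J)", 10e6)]:
    joules_per_gb = GB_BITS / bits_per_joule
    print(f"{label}: {joules_per_gb:,.0f} J/GB ({joules_per_gb / 3.6e6:.4f} kWh/GB)")
# current:  400,000 J/GB (~0.11 kWh/GB)
# forecast:     800 J/GB (~0.0002 kWh/GB) -- a factor of 500, the "more than
# two orders of magnitude" improvement the research papers forecast
```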
INTELLIGENT MEDIUM POWERBAR Delivering power safely and efficiently in mission critical environments. E+I Engineering's innovative iMPB product is an open channel busway system designed for use in data centres and other mission critical environments. E+I Engineering have completed iMPB installations in data centres across the globe where security and flexibility of electrical distribution is paramount. iMPB has been engineered with the safety of the installer and user in mind.
For more information about our full range of products please contact us at info@e-i-eng.com Donegal, Ireland | South Carolina, USA | Ras al-Khaimah, UAE
WWW.E-I-ENG.COM
So you want to build a smart city?
Arvin Teymouri Correspondent
Smart cities promise massive improvements in efficiency. To deliver, they will have to intelligently combine Edge resources with the cloud. Arvin Teymouri reports
In concept, smart cities use a network of sensors and devices to make sustainability enhancements across a city or town. The data collected by the IoT devices can be used to produce useful metrics and control systems for transportation, buildings, energy, utilities, environment, and infrastructure, making all these systems more efficient and intelligent. A smart city implementation will need to connect all the IoT devices to a control center, which acts as the “brain” for the networks and, by extension, the city. The
most crucial activities within a smart city are managed by these control centers. Real-time analysis of the data fed through the system, along with statistical reports compiled over time will allow better inner city planning, potentially integrating different departments and applications within those departments. In January, Dan Doctoroff, CEO of Google-sister company Sidewalk Labs and former deputy mayor of New York City, said: “One idea we’ve put forward is adaptive traffic signals that can recognize pedestrians, cyclists, and transit vehicles (in addition to
cars) at intersections, helping to improve intersection safety for all users.”

In Denmark, the city of Copenhagen has implemented a similar solution. Traffic lights are backed up by GPS, and traffic flow is regulated according to how many cars are on the road. This ‘GPS-powered’ traffic light also favors cyclists, and the Danes say it has decreased overall travel time for motorists by 17 percent.

What Doctoroff did not mention, however, is the new wave of hardware deployments required by something as simple as adaptive traffic signals.
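
For a sense of what “adaptive” means in practice, here is a toy version of the weighting logic such a signal might run. The weights and timings are invented purely for illustration - this is not Copenhagen’s or Sidewalk Labs’ actual algorithm.

```python
# Toy adaptive-signal logic: invented weights and timings, for illustration only.
BASE_GREEN_S, MIN_GREEN_S, MAX_GREEN_S = 20, 10, 60
WEIGHTS = {"car": 1.0, "pedestrian": 1.5, "cyclist": 2.0, "bus": 5.0}  # favor cyclists and transit

def green_time(detected: dict) -> float:
    """Stretch or shrink the green phase according to weighted demand from sensors and GPS feeds."""
    demand = sum(WEIGHTS.get(kind, 1.0) * count for kind, count in detected.items())
    return max(MIN_GREEN_S, min(MAX_GREEN_S, BASE_GREEN_S + demand))

print(green_time({"car": 8, "cyclist": 6, "bus": 1}))   # 45.0 -- cyclists and the bus stretch the phase
```

Multiply that by every intersection in a city, and the scale of the required sensing, local compute, and connectivity becomes clear.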
Enter the Edge

Edge computing is a distributed and open IT architecture that features decentralized compute power. This enables mobile computation, IoT, and applications such as smart cities. In Edge computing, data is processed near its source by a local server rather than being transmitted to a traditional data center.

Edge computing should be seen as an extension of the data center industry, delivering resources through more dynamic, miniature counterparts of the conventional data center. It requires new methods of data storage, secure networking, faster networking, and cooling. This means more investment into the industry as a whole. Edge computing will require a collaborative effort from specialized sectors such as telecoms to provide the blistering speeds required. Network carriers will provide infrastructure, and data centers will deliver new local storage space for Edge computation and also upload non-real time data to traditional cloud services.

So far no one knows the sheer scale of the costs these developments will reach, but we do have some clues. A report by the White House-backed SmartAmerica group estimates that US city
governments will invest approximately $41 trillion over the next 20 years to upgrade their infrastructure to benefit from the IoT.

The main challenge with smart cities and their reliance on the ‘Edge’ will be who is going to operate them. The opportunities for different scales of company will depend crucially on the ecosystem which underpins the new infrastructure. It’s possible that ecosystems will be deliberately constructed so that small companies can flourish by developing specialisms in processing vast amounts of people’s data. Alternatively, we could hand the figurative ‘keys’ of the city to the likes of Google or Amazon, who already have the capacity to roll out infrastructure like this on a giant scale. The term ‘hyperscale’ has never before been quite so literal.
Building a city

Cities such as New York and Copenhagen were never designed to be “smart cities.” They were laid out long before the widespread use of automobiles, phones, and electricity - let alone any digital technology. Despite what many may think, sometimes it’s easier to just build a whole new city, an approach which is famously being implemented in China.
In the US, the National Institute of Standards and Technology (NIST) has created a blueprint of the different approaches. One method would be for the city authorities to build the infrastructure and own it. This is a traditional approach where a government agency assembles a set of specific requirements - in this case, for an IoT network - and then advertises the competitive contract. As part of the deal, the government offers to pay for the entire cost of construction and installation - provided the government ends up with full ownership of the completed system. This approach gives lower operating costs, and avoids the application of government network constraints. It should also make it easier to deploy new services. The trade-off is the high level of public sector investment, including large capital costs, ongoing operational costs, and the need for a highly skilled engineering team to operate and maintain the network.

An alternative method is the Corporate Model. For a clear example, we can head to Japan, where car manufacturer Toyota is building the Woven City - a small smart city where the corporation promises to make everyday life automated and intelligent. At the Consumer Electronics Show 2020 (CES 2020), Toyota showed off its Woven City plans, describing it as a ‘living laboratory’ for the ‘subjects’ who will reside there.

"Building a complete city from the ground up, even on a small scale like this, is a unique opportunity to develop future technologies, including a digital operating system for the city's infrastructure," said Toyota president Akio Toyoda. "With people, buildings and vehicles all connected and communicating with each other through data and sensors, we will be able to test connected AI technology... in both the virtual and the physical realms... maximizing its potential."

Toyota claims the 175-acre city will be built in 2021. The company has also been acquiring technology for autonomous transportation such as the $394 million investment into Joby Aviation, a company designing small planes as air taxis.
Utopian Networking

A smart city is one thing; a 5G smart city opens the door to full automation capabilities, potentially making the promises of projects like Woven City considerably more plausible. The next generation mobile data technology, 5G, could enable effective smart cities. Synergies between municipal and consumer applications of the technology could ease the rollout and maximize the potential of smart city functions.

Every network carrier is racing to deploy the fastest network for its customers, and there are no signs of that slowing down. Data centers can capitalize on the data explosion this will bring, and smart cities will come to rely on the capabilities that 5G promises. Networks with 5G functions can deliver lower latency (faster local data) and will have lower energy consumption compared to today’s infrastructure. This correlates well with the goals of the data center industry, making a win-win relationship for both industries.

“All those 5G antennas that will soon provide super-fast connections will also have to be connected to fiber optic cables below the ground,” said Petra Claessen, director of BTG, a group of large telecoms customers based in the Netherlands.
Speaking at the Smart City Congress in Barcelona in late 2019, she went on: “In order to avoid having to break open the sidewalk three times in the future, a law must be quickly put in place to ensure that the mobile network operators will share their infrastructure.”
Combining the Edge with the cloud

Edge devices have tight power constraints, as they need a very long battery life, because their sheer number requires them to work unattended in order to make economic sense. On the other hand, the cloud has access to virtually unlimited electricity in its central locations, and massive power efficiency granted by economies of scale. The objective of Edge computing is to combine these two in real time and provide a great user experience, while supporting new mega-applications such as smart cities.

For this reason, Edge computing data centers and hyperscale data centers will coexist rather than replace one another. Depending on the use case, the two data center archetypes will be blended to meet the necessary requirements. Smart cities will have to use a variety of technologies to build the next generation metropolitan utopia.
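
A minimal sketch of the placement decision that blending implies: latency-critical work stays at the Edge, bulky but non-urgent data gets pre-processed locally, and everything else goes to the cloud. The thresholds below are invented for illustration, not taken from any product or deployment.

```python
# Illustrative Edge-vs-cloud placement rule -- thresholds invented for the example.
EDGE_LATENCY_BUDGET_MS = 20     # responses needed faster than this stay local
BULK_THRESHOLD_GB = 50          # bigger payloads get trimmed at the Edge first

def place(latency_needed_ms: float, payload_gb: float, realtime: bool) -> str:
    if realtime or latency_needed_ms < EDGE_LATENCY_BUDGET_MS:
        return "edge"                          # e.g. traffic signals, sensor control loops
    if payload_gb > BULK_THRESHOLD_GB:
        return "edge pre-process, then cloud"  # shrink the data before shipping it upstream
    return "cloud"                             # batch analytics, long-term storage

print(place(latency_needed_ms=5, payload_gb=0.1, realtime=True))     # edge
print(place(latency_needed_ms=500, payload_gb=200, realtime=False))  # edge pre-process, then cloud
```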
DCD>Magazine The Edge Supplement

Out now

This article featured in a free digital supplement on Edge computing. Read today to learn about rack density, retail of the future, vehicle-to-vehicle communication, smart factories, and much more. bit.ly/DCDEdgeSupplement
Ready to dive into water cooling?
Andy Patrizio Contributor
There comes a point when water should be prioritized over air, but the question is when? Andy Patrizio investigates
Liquid cooling, once the tool of extreme high performance computing and obsessed overclockers, continues to go mainstream, with virtually every hardware vendor supporting and encouraging it to some degree. The question is at what point do you need it, and how should you deploy it? Is it only for new data centers, or are retrofits possible?

At the rack and server level, virtually all conventional data centers rely on airflow for cooling. The traditional method of air cooling is a computer room air conditioner (CRAC) to cool the air as it enters the computer room, but in the last few years, the IT industry has gotten hip to free cooling, where just outside air is used and the chiller is eschewed. This makes the data center slightly warmer but still well within the tolerance of the hardware.

But even the chilliest of air conditioners will run up against the limits of physics. Eventually, there is simply too much heat for the fans to dissipate. That’s where water, or some other liquid, comes in. Air conditioners can move around 570 liters (20 cubic feet) of air per minute. A decent liquid cooling system can move up to 300 liters of water in that time but, since water has 3,000 times more heat capacity than air, the liquid-cooled system is shifting about 1,500 times as much heat as its air-cooled rival.
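
The arithmetic behind that comparison is worth spelling out. The flow rates are the ones quoted above; the volumetric heat capacities are standard textbook values.

```python
AIR_FLOW_L_MIN = 570      # what an air conditioner can move, per the figures above
WATER_FLOW_L_MIN = 300    # a decent liquid cooling loop, per the figures above

# Volumetric heat capacity: roughly 4.2 kJ per litre-kelvin for water, and about
# 0.0012 kJ per litre-kelvin for air -- water holds ~3,000x more heat per litre.
WATER_KJ_PER_L_K = 4.2
AIR_KJ_PER_L_K = 0.0012

water_kj_per_min_k = WATER_FLOW_L_MIN * WATER_KJ_PER_L_K
air_kj_per_min_k = AIR_FLOW_L_MIN * AIR_KJ_PER_L_K
print(f"water loop moves ~{water_kj_per_min_k / air_kj_per_min_k:.0f}x the heat per degree of temperature rise")
# ~1,800x with textbook values; "about 1,500 times" follows the same reasoning
# with slightly rounder numbers.
```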
Despite the hype, liquid cooling is still somewhat nascent and data center providers rarely pitch it. “We don’t actively recommend it,” says John Dumler, director of energy management for data center provider Digital Realty. “At this point it’s really a retrofit market. A customer can ask for a data center retrofit, while others are saying they would like to bring in GPUs, the data center has got the pipes and we want you to connect them.”

Brett Korn, project executive for DPR Construction, which builds data centers, isn’t seeing much activity either: “I see one project every year. It’s not done frequently,” he says.

So what’s the dividing line? It comes down to heat and economics.
Consideration #1: Old vs new

Don’t waste your money retrofitting old hardware. It’s not going to make much difference in terms of heat and the investment won’t pay itself back, says Jason Zeller, product marketing manager with CoolIT, maker of enterprise cooling solutions. “It’s pretty rare to see an existing data center retrofit any machinery,” he says. “The cost is just too prohibitive to take machines down and retrofit them to be worth the time and money. Early in CoolIT’s history we were involved with retrofits but they proved to not be beneficial for anybody involved.”
DPR’s Korn said his firm never takes the initiative in pushing liquid cooled upgrades. “The economics aren’t there to put a water cooling system if it was not client-driven, and I don’t see the economics that would allow you to do it in a pre-existing data center,” he says. If you have a 250 square meter site, at what point do you say let me scrap the site and start over? It depends on the age of the site, said Korn. “If a lot of equipment is old you would have to ask why put good money into an old site. If a site is five or six years old and has a lot of life, then you might do an upgrade,” he says.
Consideration #2: Know your density

For Jason Clark, director of R&D for Digital Realty Trust, the most basic question is around what is the ultimate density and how many cabinets will you place in close proximity. “If a customer has one 25kW cabinet, that’s different from six cabinets as low as 18kW,” he says. “You tend to run out of runway [for air cooling] at 18kW racks. It’s very situational but where things get challenging is above 18kW.” The density consideration applies not just to compute gear but to networking equipment as well. If it’s packed in tight with the servers, that might also be a good reason for using liquid cooling, says Clark.
“With machine learning or AI boxes, they need to be very close together for high speed networking. That increases heat density,” he says. Zeller said anything over 30kW is where CoolIT draws the line. “Any time we see a customer planning a new data center with a density over 30kW per rack is a strong indicator for liquid cooling to optimize performance. Anything above that starts to overload regular air systems,” he says.
Consideration #3: Operating and capital expenses

Water vs air cooling is an opex vs capex argument. With water cooling there is greater upfront expense than for air cooling, thanks to its novel hardware. Since air cooling has been around longer, there is a greater choice and the products are mature. But with water cooling, operating expenses over time are much lower, says Zeller. “Pumping water requires a lot less electricity than running air. With liquid cooling, you can significantly reduce
the number of fans so cost over time is substantially lower,” he says.

In a 42U cabinet with 2U rack-mounted servers, there would be four compute nodes in each server, and each node would have three fans. So you have 12 fans per 2U server, which comes out to 252 fans per cabinet. Each fan only draws up to a couple of watts, but that could add up to 500W for the fans in the cabinet. With liquid cooling, most of the fans can be removed outright, with just a small number left to do ambient cooling of the memory and other motherboard components, which makes for a substantial saving.

It’s much quieter, too, since the servers can get by with few or no fans. With hundreds of fans screaming in each cabinet, data centers can hit 80 decibels. That’s not quite the level of a Motörhead concert, but enough that some data center workers are advised to wear ear plugs.
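
Spelled out, that fan arithmetic looks like this, using the figures given above: a 42U cabinet of 2U, four-node servers with three fans per node, at a couple of watts per fan.

```python
CABINET_U, SERVER_U = 42, 2
NODES_PER_SERVER, FANS_PER_NODE = 4, 3
WATTS_PER_FAN = 2                      # "up to a couple of watts" each

servers = CABINET_U // SERVER_U                        # 21 servers per cabinet
fans = servers * NODES_PER_SERVER * FANS_PER_NODE      # 21 x 12 = 252 fans
print(f"{fans} fans, roughly {fans * WATTS_PER_FAN} W per cabinet on fans alone")
# 252 fans, roughly 504 W -- the ~500W figure quoted above
```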
Consideration #4: Use cases

For basic enterprise apps, like database apps, ERP, CRM, and line of business (assuming
you haven’t moved to Salesforce or a competitor yet), you really don’t need water cooling because the overhead is not all that severe. Even a high utilization application like data warehousing has gotten by just fine on air cooling.

But artificial intelligence and all of its branches - machine learning, deep learning, neural networks - do require liquid cooling, because of the extreme density of the servers and because many AI processors, particularly GPUs, run extremely hot. However, how much of your data center is running AI applications? Probably not a lot, just a handful of racks or cabinets at best. So you can get by with a small amount of liquid cooling.

Korn spoke of one client, a pharmaceutical firm, that was doing AI simulations in genetic analysis. The workload was 50kW, so it used a chilled water rack blowing cold air. Korn said the company may use a bigger rack as AI picks up steam but right now, “everyone talks AI but only a handful of apps have real world application.”
Introducing the
DEKA SHIELD from
Our exclusive Deka Shield program gives you peace of mind and exclusive additional warranty protection for your Deka batteries. This benefit is just one offering of Deka Services, the North American service network operated by East Penn.

Exclusive DEKA SHIELD
Deka Shield is an innovative and exclusive program to provide optimum battery performance and life to your Deka Batteries no matter the application: Telecom, UPS, or Switchgear. By allowing Deka Services to install and maintain your batteries, your site will receive extended warranty benefits.
How do I sign up for the Deka Shield program?
Installation must be completed by Deka Services or a Deka Services approved certified installer
The application and installation area must be approved by Deka Services prior to installation
Access for Deka Services to perform annual maintenance during the full warranty period
What coverage does the Deka Shield program provide?*
Full coverage labor to replace any defective product
Full labor to prepare any defective product for shipment
Freight allowance for new product to installation site
Full return freight for defective product
Extended warranty

* Terms and conditions apply – please contact us for additional information.

Extensive DEKA SERVICES
Deka Services provides full service turnkey EF&I solutions across North America. Their scope of services includes, but is not limited to:
• Turnkey EF&I Solutions • Battery Monitoring • Battery Maintenance
• Battery Capacity Testing • Removal and Recycling • Engineering
• Project and Site Management • Logistical Support • Installation
All products and services are backed by East Penn, the largest single site lead battery manufacturer in the world. With over 70 years of manufacturing, battery, and service expertise, let Deka Services be your full-scale power solution provider.
Preparing for ransomware
Learning from a ransomware attack

In 2019, criminals hit a group of US service companies through their data center provider, CyrusOne. Dan Swing analyzes what happened
Ransomware attacks continue to plague companies of all stripes. Organizations of any size and sector can be vulnerable to attacks that encrypt files and render entire IT estates inoperable. Threat actors then demand ransom payments in return for decrypting the files and making the resources available again. Rather than go through a long recovery
process or attempt to use decryption tools, many organizations will simply pay the fee, despite official advice from the FBI and many cybersecurity companies warning that this only encourages attackers and propagates the problem. In early December 2019, the managed service division of Dallas-based data center REIT CyrusOne announced that it had suffered a ransomware attack that had encrypted some customers’ devices.
Dan Swing Contributor
It is rare for cybercriminals to go after a data center, but the attack hit six managed service customers at once, mostly customers at CyrusOne’s New York data center. The company's colocation services, including IX and IP Network Services, were seemingly unaffected.

FIA Tech, a financial and brokerage firm, was one of the customers affected by the attack on CyrusOne and saw an interruption of some of its cloud services as a result. “The attack was focused on disrupting operations in an attempt to obtain a ransom from our data center provider,” the company said in a statement posted online.

Ransomware attacks explained

First reported by ZDNet, CyrusOne was reportedly hit by the REvil strain, also known as Sodinokibi, a relatively new strain of ransomware first discovered in April 2019. As well as against other MSPs, this strain has been used against local governments in Texas, and hundreds of dental offices in the US. As of October 2019, McAfee estimated Sodinokibi had already made over $4.5 million in ransomware payments.

“Ransomware has become more sophisticated, with more appetite for making money, maximizing their return on investment,” says Liviu Arsene, senior e-threat analyst at cybersecurity firm BitDefender.

Sodinokibi has quickly become popular with cybercriminals. Dubbed ‘The Crown Prince Of Ransomware’ by cybersecurity firm
Cybereason, Sodinokibi is thought to be linked to the same attackers that created the prolific GandCrab ransomware.

While the infection vector into CyrusOne isn’t known - CyrusOne declined to comment for this piece - Sodinokibi has been observed being distributed via spear phishing and poisoned downloads, and exploiting vulnerabilities in unpatched Oracle WebLogic servers, wherein it encrypts data in the user's directory and deletes shadow copy backups to prevent quick recovery. Unlike many forms of ransomware, it can also be executed remotely. It doesn’t, however, currently have any capabilities to self-propagate.

It can also steal data before it encrypts, meaning that as well as disrupting operations, the attackers can leak whatever information was on the machine prior to encryption. Cybercriminals claiming to be behind Sodinokibi have threatened to release data collected from victims prior to encryption.

“This seems to be a new evolution in ransomware meant to pressure victims
into paying,” says Bitdefender’s Arsene, “by scaring companies with potential fines applied by legislators if attackers were to publicly expose customer or sensitive data. In essence, the bigger the potential fine caused by a data breach on a company, the bigger the stake for the ransomware operator.”

The fact that it was the Managed Service Division of the company that was affected was likely no accident. MSPs make an appealing target for cybercriminals of all stripes; as well as often presenting a large attack surface, they often have routes into the IT estates of customers, which means attackers can not only extort the MSP but many of its customers too. Sodinokibi has been used against multiple MSPs since it was first spotted in the wild, including California-based Synoptek in December.

“This situation highlights that data center and Infrastructure-as-a-Service (IaaS) providers are just as vulnerable to attacks as other companies,” Thomas Hatch, CTO and co-founder at SaltStack, told press at the time of the attack. “While IaaS providers generally
create very secure infrastructures, there is still the liability that they can be attacked in this manner.” CyrusOne wasn’t oblivious to the threat; in a regulatory filing from 2018, the company listed ransomware and other cybersecurity issues against both itself and its customers as a risk factor for the company that would only increase over time. Where there is one, there are often more, and data center operators should take note of lessons they can learn in order to better protect themselves from this and other strains of ransomware in the future.
Lesson One: Know your enemy and understand the threat

Data center owners and operators of all stripes should be well aware of the threats they face; not only the who, but the how and why. Which groups are active in the locations you have operations in? Which groups are known to target companies not only in your sector but in those of your customers? What are their tools, techniques, and procedures; how do they get in, what do they do once they’re in, and what are their final goals? Some attackers might use phishing attacks to deliver ransomware for money, others might hijack credentials to later steal information. Understanding the threats you face enables you to see where your defenses might be vulnerable, giving you an opportunity to put extra defenses, controls, and monitoring in place.
Lesson Two: Educate your staff to cut off an easy route of entry

The point of entry for many strains of ransomware is email. While it won’t prevent every attack, properly educating staff about phishing emails can help close off the main route in for ransomware. Many companies simply run phishing simulation tests and admonish those who click through. Good programs go further: they educate staff on what to look out for, share real-world examples of phishing attempts, and do not blame staff who fall for such attacks.
“It’s important that employees, new or seasoned, are well trained in terms of data protection practices within the company,” says Bitdefender’s Arsene. “All employees need constant training and assessment, with a particular focus on new employees that may become more susceptible to various types of cyberattacks or infections.” Admittedly, some phishing emails will simply be too good; emails from highly-skilled actors may well be indistinguishable from legitimate ones. However, most will have telltale signs that give the game away if you know what to look for, and well-trained staff can stop attacks before they begin by not clicking malicious links or opening malware-infected files. On the technical side, labeling external emails, deploying the DMARC email authentication protocol, and having a dedicated internal address to which employees can forward suspected phishing emails can all help.
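As a rough illustration of the DMARC point, the policy itself is just a DNS TXT record published on the sending domain. The snippet below sanity-checks a record before publication; the domain and report address are placeholders, not real infrastructure.

# A DMARC policy is published at _dmarc.<your-domain> as a TXT record.
# "p" is the policy for mail that fails authentication; "rua" is where
# aggregate reports are sent.
record = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"

tags = dict(
    part.strip().split("=", 1)
    for part in record.split(";")
    if "=" in part
)
assert tags.get("v") == "DMARC1", "record must declare v=DMARC1"
assert tags.get("p") in {"none", "quarantine", "reject"}, "unknown policy"
print("Policy for failing mail:", tags["p"], "| aggregate reports:", tags.get("rua", "not set"))

A common rollout path is to start with p=none, watch the aggregate reports for legitimate senders that would fail, and only then tighten to quarantine or reject.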
Lesson Three: Patch what you can, isolate what you can’t

Patching is often simpler in theory than in execution, but understanding what vulnerabilities exist across an IT estate, where they are, and the risk each asset would pose to the business if compromised can go a long way toward deciding where your patching priorities should lie. And if an asset can’t be patched for whatever reason, isolate it as much as possible and put extra monitoring in place.
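A minimal sketch of that prioritization logic, assuming you already hold an asset inventory with vulnerability scores and a one-to-five business criticality rating (the asset names and figures below are invented for illustration):

# Rank remediation work by (severity x business criticality); assets that
# cannot be patched are flagged for isolation and extra monitoring instead.
assets = [
    {"name": "weblogic-app-01", "cvss": 9.8, "criticality": 5, "patchable": True},
    {"name": "legacy-bms-ctrl", "cvss": 7.5, "criticality": 4, "patchable": False},
    {"name": "hr-fileshare",    "cvss": 5.3, "criticality": 2, "patchable": True},
]

for asset in sorted(assets, key=lambda a: a["cvss"] * a["criticality"], reverse=True):
    risk = asset["cvss"] * asset["criticality"]
    action = "patch next window" if asset["patchable"] else "isolate + monitor"
    print(f"{asset['name']:<16} risk={risk:5.1f}  -> {action}")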
Lesson Four: Have playbooks ready, and not just for the technical teams

Ransomware attacks are a worryingly common occurrence, but many companies still think ‘it will never happen to me.’ Instead of ignoring the threat, prepare for it. Have your IT and security teams write playbooks for various ransomware scenarios; not just what happens if one device gets encrypted, but all of them. What would happen if the phones went down? What would you do if your backups had been compromised? How quickly would you be able to respond? What would non-technical staff be doing during the incident? You can’t be prepared for every eventuality, but a broad understanding of the processes teams should follow can save time and smooth recovery operations.
Lesson Five: Don’t pay; have backups and insurance

Ideally, even if an organization is hit by ransomware, backups will be available to draw upon. Backups should be made and tested regularly to ensure companies are as resilient as possible.
“It’s recommended that organizations perform regular backups of their critical data, deploy encryption across their infrastructure, and use layered security solutions that can both detect and block potential ransomware infections,” explains Bitdefender’s Arsene. Sometimes, however, recovering from backup would simply be too time-consuming (and therefore costly), leading some organizations to conclude that paying the ransom is the quicker and cheaper course of action. But as well as enabling attackers to continue operating and profiting, paying may well invite further attacks - and there is a chance the criminals won’t provide any decryption keys, or that the keys won’t work. “Law enforcement and security organizations recommend that victims don’t give in to these ransom notes, as paying will only encourage ransomware operators to continue investing in ransomware development,” says Arsene. Instead, companies should ensure they have comprehensive cyber-insurance that will cover the organization’s costs and loss of business during such incidents. Having an incident response firm on retainer, or at least knowing which firm you would call in an incident, is also prudent.
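Testing backups can be as simple as routinely restoring a sample and comparing it against the live copies. A minimal sketch, with placeholder paths and the assumption that the sampled files haven’t changed since the last backup run:

import hashlib
import random
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

live_root = Path("/data/live")         # placeholder: production file share
restore_root = Path("/restore-test")   # placeholder: where the backup was restored

files = sorted(p for p in live_root.rglob("*") if p.is_file())
sample = random.sample(files, k=min(20, len(files)))

# Any mismatch means the restored copy differs from production and needs review.
mismatches = [
    p for p in sample
    if sha256(p) != sha256(restore_root / p.relative_to(live_root))
]
print(f"checked {len(sample)} files, {len(mismatches)} mismatch(es)")

A real test would restore to an isolated host and also time the restore, since recovery time is as important as data integrity.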
Lesson Six: Prepare your public response ahead of time

Suffering a cyberattack isn’t the taboo it once was. Incidents occur so frequently that all but the most serious will quickly be forgiven by a public increasingly aware of the danger. What matters more is how you react: a swift and well-planned response can actually improve a company’s standing. To take two examples, both Maersk and Norsk Hydro suffered major incidents that brought them to their knees operationally, but both were praised for their rapid and open responses. Both companies provided continual operational updates, not only internally to staff but publicly, and both saw their share prices rise after initial drops. When CyrusOne announced the initial incident, it provided little detail. It gave a brief statement to press, but there were no follow-up posts explaining the issue or the status of the recovery effort. Those interested had to turn to its customer, FIA Tech, which provided regular updates during the incident. Just as companies expect their security teams to be well-drilled in such incidents, their communications and leadership teams should also be well-drilled on how to respond: identify key stakeholders, have messages prepared, and be ready and willing to provide regular and detailed updates.
Future forecast: 14 industry leaders on the decade ahead

Zahl Limbuwala on the need to question data centers’ worth

For at least the last ten years I’ve referred to data centers as the ‘factories of the information technology era.’ With pretty much every aspect of our lives today being technology enabled, the growth of this sector is assured for the foreseeable future. However, with the climate crisis upon us and sustainability top of so many agendas, our industry faces the resurgence of those that will cast us into the corner of ‘problem’ rather than ‘solution’ to these very publicly visible global challenges.
It was thirteen years ago when data centers first started to become visible to the general public and first started to gain the attention of countries that were the early adopters of CO2 emissions legislation. This industry did a great job for around 7-8 years of pacifying most of those that took a direct interest in what the sector was doing so far as energy efficiency, sustainability and CO2 emissions were concerned. That peak of activity died down towards the end of 2016, thanks to us having done a good job of demonstrating our ability to manage our own ‘footprint,’ and to very public ‘green’ initiatives from all the big publicly visible brands in the market.
While all the good work continues, a bigger rock lies ahead of us - one that will likely play out as much as an ethical issue as a question of awareness of the strides we’ve taken as an industry. When it becomes broader public knowledge what proportion of the data center and IT industry’s climate and sustainability footprint is ‘spent’ on content of extremely low value to society, such as 60-second user-created videos, we can expect a much more serious level of critical questioning, and a need to demonstrate when, how and to whom this cost to the planet actually adds value. Those without a handle on their facts and figures, as well as a clearly laid out proposition on the ‘greater good,’ may find life and their business in much tougher climates.

"It was 13 years ago when data centers first started to gain general public awareness and first started to gain the attention of countries who were the early adopters of CO2 legislation"

Zahl Limbuwala is executive director at CBRE. He was previously the CEO and co-founder of Romonet, before its acquisition by CBRE.
Rhonda Ascierto on the data center skills shortage

The digital infrastructure workforce is in a precarious spot. Graying and overwhelmingly male, most data center teams are struggling to fill open roles. Our research shows that many believe the worst is yet to come: the skills shortage will have greater impacts in the near future.
More people will be needed. From colossal cloud sites at the core to lights-out micro data centers at the edge, demand for data center capacity will (continue to) grow at a rapid pace. More data centers, and the networks that connect them together, will be designed, built and operated.
There will also be new challenges. Data centers, operating at unprecedented scale and complexity, will be disrupted at almost every level. New technology, new IT and business models, growing environmental impacts, more laws and the increasing effects of climate change are just some of the forces that will drive change.
These challenges will be met by innovation, by discipline and by high levels of performance. But to fill all these jobs the industry will need to hire from a wider range of people. And, as research has consistently shown, diversity - of thought, skills and approaches - is good business.
Technologies such as AI and automation will help. However, the problem is deep and widespread, and our studies suggest that the impact of technology on the overall demand for skills will be limited. To address the worsening staff shortages there will be significantly greater investment from industry and educators, with more data center-centric curricula and major awareness campaigns. The goal will be to build a larger, more diverse pipeline of talent. The future business success of digital infrastructure will depend on it.

“To fill all these jobs the industry will need to hire from a wider range of people”

Rhonda Ascierto is VP Research, Uptime Institute, and has spent her career at the crossroad of IT and business, as an adviser, researcher and speaker focused on the technology and competitive forces that shape the global IT industry

Craig Pennington on clean and secure energy

In recent years in California, clean energy isn’t the only consideration; energy security is now a concern. When the utility provider’s response to potentially causing wildfires is to turn off supply, then there’s a new set of considerations to evaluate. Generators should not be the answer: they are expensive, noisy, polluting... and your final line of defense. Onsite generation at a sufficiently high energy density is then front of mind, and fuel cell technologies are one of the few that can do this cleanly and effectively. Not completely clean, but with the potential in the future to run on biogas or hydrogen, they allow for new ways to power data centers at scale and will be part of a well thought through energy portfolio in the future.
Additionally, we expect to see software defined power solutions start to make in-roads, allowing for sensible use of peaks and troughs in one part of a facility to allow overconsumption by other parts without risk. I foresee a future where the negotiation of supply to meet need can be automated; BMS and customer solutions working together to deliver a power marketplace within the facility.

“We expect to see software defined power solutions start to make in-roads”

Craig Pennington is VP of Design Engineering at Equinix where he leads the company’s Future Data Center research initiatives
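Pennington’s “power marketplace” can be sketched in a few lines: suites running under their contracted allocation lend headroom to suites that want to burst, while the overall facility budget is never exceeded. The suite names and figures below are invented for illustration.

# Toy model of lending unused power allocation between suites in one facility.
FACILITY_BUDGET_KW = 2000
allocation = {"suite-a": 800, "suite-b": 700, "suite-c": 500}   # contracted kW
demand     = {"suite-a": 600, "suite-b": 900, "suite-c": 450}   # requested kW right now

spare = sum(max(allocation[s] - demand[s], 0) for s in allocation)

granted = {}
for suite, want in demand.items():
    overdraw = max(want - allocation[suite], 0)
    extra = min(overdraw, spare)          # naive first-come sharing of spare headroom
    spare -= extra
    granted[suite] = min(want, allocation[suite]) + extra

assert sum(granted.values()) <= FACILITY_BUDGET_KW
print(granted)   # suite-b's 200kW burst is covered by a's and c's unused headroom

A real implementation would sit in the BMS, respect breaker and busway limits in each suite, and settle the "trades" against each customer’s contract.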
Emma Fryer on government policy towards data centers

In the next decade, there will be a sea change in the way that the UK Government perceives data centers: policy makers will realize that we need state-of-the-art digital infrastructure if we want a flourishing digital economy. Politicians will recognize that the delivery of their digitally dependent policy agendas relies on data centers. Ministers will formally acknowledge the UK data center sector as globally important: a genuine UK business success story. This should be happening already, but Government is struggling to recognize data centers as digital infrastructure and to connect them to growth, employment and productivity. Back in 2011 I was told “data centers are big sheds that use lots of energy and don’t employ anybody: Why would we want them here?” In 2015 the National Infrastructure Commission was established without a digital remit (telecoms - but not data centers - were later bolted on). Even this year, policy makers are surprised to find a data center market in London rather than Iceland.
So this would be a major step forward. However, with recognition comes responsibility. While I anticipate that the government will eventually make the connection between data centers and a data-driven economy, the very criticality of the sector will bring increasing scrutiny. So we can expect much more attention from regulators, both domestic and international. For now, the European Commission’s ambitious legislative agenda for digital services is a pretty good indication of where we are heading, and it won’t all be plain sailing.

“Government will eventually make the connection between data centers and a data-driven economy”

Emma Fryer is Associate Director, Climate Change Programmes, at industry lobbyist techUK. She represents the UK data center sector in dialog with government policy makers and other external stakeholders

Patrik Öhlund on how demand for transparency will drive sustainability

Over the next decade, energy needs for the data center industry will continue to grow. In parallel, people are becoming increasingly concerned about climate change. Customers (both businesses and consumers) will require more transparency on the real carbon footprint of the services that they buy, and they will become more and more willing to opt for services with low carbon emissions. Both politicians and businesses are acting: the European Green Deal is already asking for increased transparency on the environmental impact of the ICT sector, Microsoft has recently launched a carbon calculator for the Azure platform, and both Greenpeace and some of the hyperscalers are pushing towards 24x7 renewables.
Altogether, this will require the entire DC industry to switch over to renewable sources of power on a 24x7 basis, and to be able to prove it in a transparent way to their customers. This will require the REC/GO system (Renewable Energy Certificates and Guarantees of Origin) to expand in all markets, and we will likely see a long-term shift from today’s setup with virtual or corporate PPAs over to market-based, 24x7 PPAs paired with strict requirements on additionality to cover an increase in power demand.
The companies that are most successful in implementing these new power supply schemes and finding ways to be transparent about it to their customers are likely to increase their market shares during the next decade. At the same time, these companies will also help reduce carbon emissions for the entire industry sector.

"This will require the entire DC industry to switch over to renewable sources of power on a 24x7 basis, and to be able to prove it in a transparent way to their customers"

Patrik Öhlund is CEO of Node Pole and Chairman of the iMasons Sustainability Committee
Peter Gross on the need for new storage solutions

There is no question that data center operators will continue their quest for a zero carbon or even a negative carbon goal. The significant transition we’ll start seeing in the near future is the shift from acquiring renewable energy credits through PPA contracts, to the use of microgrids where renewable sources are directly connected and physically located near the data centers. The question is, how long before today’s model, where the renewable source is only supplementing the grid power, is replaced by an integrated, self-contained, off-grid solution? The answer lies in the speed of developing reliable, economical, practical energy storage solutions.
It is questionable whether the lithium-ion batteries widely used today will be able to achieve these objectives, especially at GWh scale. More promising are some of the many competing technologies at various stages of development, such as Liquid Metal Batteries, Glass Batteries, High Temperature Energy Storage (HTES) or Isothermal Compressed Air Energy Storage (ICAES). A lot of excitement is generated by graphene technology, which has the potential of converging supercapacitor and battery features, thus not only addressing the cost, energy density and cycling requirements, but also power density and safety concerns.
Another concept that will evolve into a practical solution is the use of renewables to produce hydrogen, which in turn is delivered and stored at the data center for use with fuel cells. The use of fuel cells powered by biogas, or even by methane if carbon dioxide capture or sequestration becomes practical, will provide viable alternatives.

"How long before the model where the renewable source is only supplementing the grid power, is replaced by an integrated, self-contained, off-grid solution?”

Peter Gross is a board member, investor, consultant, and corporate advisor. Previously at Bloom Energy, he is now on the board of Virtual Power Systems

Sarah Davey on getting proactive when it comes to climate change

The demands of the data center industry are growing exponentially. As the world needs more reliable connectivity, regulatory bodies are quickly gaining awareness of the required energy increases and therefore the environmental impacts. While awareness of the climate emergency is growing at both the general population and government level, it is essential and inevitable that high-impact industries such as data center design shoulder their share of responsibility. Policy-makers who have historically provided building incentives such as carbon tax exemptions and reduced electricity prices may soon feel increased accountability for the current climate crisis.
The data center industry is aware of the coming challenge. It is strategically building in low ambient locations, exploring the use of natural gas or fuel cells, and deploying hyperscale models, which maximize large scale and high efficiencies that reduce environmental impact.
Challenges remain, however, with intensive energy requirements for cooling systems, privacy laws requiring domestically-hosted data in hot climates, high demands on current renewables from competing technologies, and even potential reductions in the industry footprint to ensure improved efficiency.
I would like to see the industry begin to increase the amount of accountability and legislative enforcement, whether in the form of proactively complying with environmental directives or a reduction in climate change agreements. I believe the industry will see a challenge with the use of water for cooling, which presents an interesting predicament for more sustainable options, and it will see an obligation to shape and influence national infrastructure for a more justifiable solution.

"Policy-makers who have historically provided building incentives such as carbon tax exemptions and reduced electricity prices may soon feel increased accountability for the current climate crisis"

Sarah Davey is a mechanical engineer at global engineering powerhouse Arup. She has been working on data center projects for four years and recently won Mission Critical Engineer of the Year at the DCD Awards 2019
Dr Jon Summers on why we need to go into reverse to go forward

Donald Trump mentioned “quantum computing” in his opening speech at the 2020 World Economic Forum in Davos. The term was first introduced by the late Professor and Physics Nobel Laureate, Richard Feynman. Such a computing process essentially functions in an environment where information cannot be destroyed and plays on the reversibility of quantum states. The central processing units (CPUs) found in data centers of today operate by losing some ordered information, which is deliberately overwritten and lost virtually. The information has not been physically destroyed; rather it has been pushed out into the data center’s thermal environment, where it becomes randomized information manifested as heat. Year on year miniaturization of CPU parts results in reducing this heat, but no level of financial investment can change the fundamental laws of physics which hinder future progression. We are now in the defining decade where the only way forward is to operate digital processing without losing information - so that the process can be reversed to recover its original state. At the center of thermodynamics and information theory is the idea of reversible computing, where the operations are adiabatic, and energy is constrained from leaving the system as heat. No heat implies no cooling and reduced energy consumption. Our digital infrastructure has arrived at a historic crossroads; going forward with the present trajectory would lead to stagnation and spiralling cost due to physics. This decade will be defined by the technical challenges of reversible computing, where the physics has no upper limit.

"The traditional architecture that’s been carrying the industry for the last 50 years is totally inadequate for the level of data generation and data processing needed today"

Dr. Jon Summers is Scientific Leader, ICE Datacenter, RISE Research Institutes of Sweden AB, Adjunct Professor at Lulea Technical University and Senior Lecturer at the University of Leeds.

Dr Jonathan Koomey on the quest for further efficiency gains

The data center industry greatly improved efficiency over the past decade, so much so that total global electricity use for data centers grew little over that period. These efficiency gains came about in large part by shifting traditional data center loads to bigger and much more efficient hyperscale facilities. To reduce emissions further, hyperscale providers have aggressively signed contracts for wind, solar, and renewable gas generation to power their facilities, matching annual renewables production to the electricity used by the data center over the year. As a first step, this annual accounting is fine. It encourages the development of new wind and solar installations that would not otherwise have been built, and it allows the renewable industry to scale up and reduce costs for everyone.
The next big industry challenge, as Google argued in an October 2018 white paper, is to ensure that zero emissions electricity powers each data center every hour of every year. Achieving this goal will require clever use of thermal storage, batteries, and load management, as well as more geographical and technological diversification of renewable generation. The movement to power data centers with zero emissions electricity 24 hours per day, seven days per week is just getting started. For engineers who want to be on the cutting edge of information technology sustainability, this is the challenge for you.

"The movement to power data centers with zero emissions electricity 24 hours per day, seven days per week is just getting started"

Jonathan Koomey, Ph.D. is President of Koomey Analytics. Among other work, he has spent decades tracking global data center energy use, in collaboration with Lawrence Berkeley National Laboratory
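Koomey’s distinction between annual matching and hourly, 24x7 matching is easy to make concrete. In the toy numbers below (invented for illustration), contracted renewables cover 100 percent of consumption on paper, but only 62.5 percent when matched hour by hour:

# Annual vs hourly ("24x7") renewable matching over four sample hours.
load       = [10, 10, 10, 10]   # MWh consumed each hour
renewables = [ 0, 25,  5, 10]   # MWh of contracted renewable generation each hour

annual_match = min(sum(renewables) / sum(load), 1.0)
hourly_match = sum(min(l, r) for l, r in zip(load, renewables)) / sum(load)

print(f"annual matching:        {annual_match:.1%}")   # 100.0% - surplus hours offset deficit hours
print(f"hourly (24x7) matching: {hourly_match:.1%}")   # 62.5%  - surplus can't cover the windless hour

Closing that gap is exactly where the storage, load management, and generation diversity Koomey mentions come in.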
Don Beaty on software eating the data center

As once predicted by Marc Andreessen, “software is eating the world” and certainly the data center industry with it. But why hasn’t software taken on a more essential role in the lifecycle of Day 1 data center planning, design, operations, maintenance, through change controls and the End State?
Wait, what End State? Enter the Digital Twin, software intended to model the real world in the virtual world, object-for-object and bit-for-bit. The data center “End State” is just a gating process for the next change. Digital Twin software efficiently manages that change control process. Data center optimization is extremely complicated (many Day 1 design phase unknowns). This is followed by years of unpredictable / non-uniform changes in IT hardware deployments driven by software changes. Important areas negatively impacted by unmodeled data centers include:
1. Space/power/cooling utilizations
2. Energy use
3. Workflow efficiencies (speed to market influences)
4. Constant information requests internally (wasted productivity)
5. Lack of insights on how to deploy software in a capital efficient manner on the hardware/sites/resiliency
6. Business interruption costs (inability to model what-if scenarios on resiliency/availability)
The next decade of our rapidly moving industry will require “what-if” tools to analyze continuous mismatched changes at the software, IT hardware, and building infrastructure levels. Digital Twins, combined with tools like wireless sensor networks, BAS, and DCIM software, are the next software evolution that will eat the next generation of data centers.

“[We] will require “what-if” tools to analyze continuous mismatched changes at the software, IT hardware, and building infrastructure levels”

Don Beaty is President & Founder, DLB Associates, a New York based consulting engineering firm. He led ASHRAE’s TC9.9 for many years and literally “wrote the book” on modern data center design

Kevin Brown on building a sustainable edge

From my point of view, much of the debate about the future of data centers is over. We know that most customers are putting the majority of their applications in the cloud and they will have enterprise data centers for some applications, albeit smaller enterprise data centers. In the last two years, we’ve seen the importance of edge data centers in this mix. It’s clear we have a hybrid environment: cloud data centers, smaller enterprise data centers, and micro data centers.
In the past few years, I’ve spoken at many conferences about our view on the need for more resiliency at the edge. What’s been fantastic to see is how customers have responded. They are beginning to discuss and take appropriate steps to ensure they have a secure and reliable edge to maximize availability of the application.
So, what’s next? We’re starting to focus on what I believe will be the next big challenge for the industry: energy efficiency and sustainability of the edge. In the last 10 to 15 years, the industry has made great progress on data center efficiency. Our analysis estimates that by 2030, 66 percent of total data center energy will be consumed by the local edge. In other words, the edge will dwarf the energy consumption of the big data centers - especially if you consider the pending 5G rollout. For me, the industry is ill-prepared to deal with this issue. We need better metrics, better software, and better processes to meet the challenge of a sustainable edge.

“We’re starting to focus on what I believe will be the next big challenge for the industry: energy efficiency and sustainability of the edge”

Kevin Brown is the senior vice president, EcoStruxure solutions and chief marketing officer of secure power division at Schneider Electric
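Beaty’s “what-if” tooling can be illustrated in miniature: before a deployment is approved, check whether it still fits within the hall’s power, cooling, and space envelope. All figures below are invented, and a real digital twin would model far more than three numbers.

# Toy "what-if" check for a proposed IT deployment against a hall's envelope.
hall     = {"power_kw": 1200, "cooling_kw": 1300, "racks": 200}
deployed = {"power_kw": 850, "racks": 150}

def what_if(extra_racks: int, kw_per_rack: float) -> str:
    power = deployed["power_kw"] + extra_racks * kw_per_rack
    racks = deployed["racks"] + extra_racks
    if power > hall["power_kw"] or power > hall["cooling_kw"] or racks > hall["racks"]:
        return "does not fit - revisit density or phase the rollout"
    return f"fits, with {hall['power_kw'] - power:.0f}kW of headroom left"

print(what_if(extra_racks=30, kw_per_rack=8))    # 1,090kW of 1,200kW -> fits
print(what_if(extra_racks=30, kw_per_rack=15))   # 1,300kW -> exceeds the power budget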
Joe Kava on the industry’s shared responsibility

A new decade is upon the data center industry. Our technology, people, and innovation are moving at an unprecedented pace, but there are important elements that remain constant. The importance of the industry’s shared responsibility when it comes to sustainability, and a foundation of trust, remain at the heart of what we build.
As organizations look to scale around the globe, sustainable operations are strategic to their business and they select partners who hold the same values. As we’ve scaled our global data center portfolio, Google has continued to invest in clean energy initiatives that in turn benefit other organizations by helping change the way they can purchase renewable resources like solar and wind. We want to continue to make it simple for any organization to purchase affordable renewable energy in the markets where they operate.
We are committed to the mission of trust through transparency that spans the lifecycle of a data center. Whether it’s the engagement with the communities where we build our data centers, the products we design and continue to innovate for our customers and users, or how we tirelessly protect end-user data through software and physical means, a foundation of trust is a hallmark of our data center program.
Finally, our ability to innovate depends upon a workforce that reflects our global community. We must develop educational programs for young people who are making career decisions. This is critical for delivering world class data centers, and allows us to develop the best products for our users and customers.
As we head into this new decade, we’ll double down on these values. We are proud of our work, and we feel strongly that we have a collective responsibility to remain focused on what matters most - building a cleaner data center industry grounded in stakeholder trust.

"As organizations look to scale around the globe, sustainable operations are strategic to their business and they select partners who hold the same values”

Joe Kava is vice president of data centers for Google and is responsible for design, engineering, construction, operations and sustainability for the company’s mission critical facilities around the world

Nancy Novak on building data centers differently

If asked to describe a construction site, your description probably wouldn’t vary much from the vision that most people have of a mass of steel and concrete rising from the ground with a lot of people in hard hats scurrying around. The difference between your vision and future data center construction sites will be considerable.
In the ‘20s, off-site production and technology will become even more integral to the data center construction process. New builds will make extensive use of off-site manufacturing, with repetitive tasks performed robotically, to produce components such as electrical rooms and power centers. Combined with the pre-fabrication of sub-components like walls, the dynamic will evolve from one of traditional construction to on-site assembly.
Ironically, the speed and volume of future data center construction will see significant improvements in job site safety and personnel. Exoskeletons will enable tradespeople to work longer and safely lift heavy tools and materials. By eliminating the reliance on “brute strength,” exoskeletons will also usher more women into the building trades. Drones and laser scanning will enhance precision while reducing the risks associated with many on-site tasks.
The diversification of the workforce, along with the incorporation of new technology and methodologies, will dramatically enhance the certainty of cost and budget schedules. The result of all of these changes and enhancements may not substantially affect how data centers look in the coming years, but it will change the way we build them.

"Ironically, the speed and volume of future data center construction will see significant improvements in job site safety and personnel”

Nancy Novak is Compass Datacenters’ senior vice president of construction. With over 25 years of construction experience, she has overseen the delivery of more than $3.5 billion in projects
Ciaran Flanagan on leading the climate fight

As an industry, our increasing relevance and criticality to society will present new opportunities and challenges across many dimensions. One to consider is the public’s general awareness of the carbon footprint associated with their digital life, and how this interweaves with climate responsibility. I think our response to the climate challenge, and how we demonstrate our progress, will dominate our thinking well into this new decade; this will be a defining issue for the future of data centers.
As technology suppliers, builders and operators, we must pay attention to the public at large who use the services at the end of our value chain. The public expects us to be responsible in how we do business and we will need to demonstrate our stewardship. This will become a key facet of how the industry communicates with the market.
The quest for energy responsibility and efficiency will move from a short return on investment paradigm to a more incremental and sustainable approach; every watt, every kg of embedded carbon will count, and this will be very much part of how we measure our performance. We have the capability to measure and manage our energy consumption, and we will see an increased focus on driving out waste, no matter how small. Energy storage at scale, renewable integration and grid participation will all offer new efficiencies, business models and, ultimately, an improving public perception of our value to society.
Our industry is coming of age and we can contribute in so many positive ways to our societal challenges. We are a safe, smart and sustainable industry.

"The public expects us to be responsible in how we do business and we will need to demonstrate our stewardship"

Ciaran Flanagan is group vice president and head of ABB’s global data center business

Have your say: Share your predictions community@datacenterdynamics.com
Hollow giant The shell of NTT Global Data Centers’ new London data center is, frankly, hard to miss. Alex Alley heads to Dagenham
A construction project that may be the largest data center built on British soil doesn’t sneak in quietly. NTT’s London 1 is emerging from a muddy, torn-up stretch of land which, when we visited in November, looked more like the battlefields of WWI than an industrial park. Traversing the swamp in a pair of rubber boots borrowed from an onsite portacabin, we found a building layered in a sequence of green shades, already far more attractive than the ugly, gray warehouses which surround it.
The building was painted in green at the behest of planning authorities. The site backs onto a marsh which is a frequent haunt for walkers, and local officials wanted the structure to blend in as much as possible.
Before our tour of the ‘Dagenham Somme,’ we met John Eland in a corner office tucked away in a cabin. He is head of global strategy at NTT Ltd’s Global Data Centers business. In July 2019, Nippon Telegraph and Telephone Corporation (NTT) spun up a service provider business, NTT Ltd, from its subsidiaries. In January 2020, NTT Ltd organized its data center properties, including RagingWire, NetMagic and e-shelter, into NTT Global Data Centers, headquartered in London.
"As a Japanese company, we don’t tend to talk about the costs,” Eland said. “However, you can probably draw a number from the wattage of the data center." The building will have more than 60MW of IT load and, according to local authorities, projected costs could accrue to about £1.5bn ($2bn) over its lifetime. Pitching the project as part of the economic regeneration of Dagenham, Eland told local reporters the facility could be compared to the economic impact of a railway. However, data centers don’t tend to employ many people at all. While there are around 200 people building the site, it will only have around 20 to 30 on-site staff when it is finished in May.
Made in Dagenham
So why Dagenham? Well, the area has certain features that were deemed too good to miss. Essentially, it is close to interconnection hubs in the center of London such as LINX (London Internet Exchange), but out beyond the central zone, where there is little space left to build data centers. “London has become saturated by colos,” Eland said. “By moving out to the peripheral area of Dagenham we can take advantage of the regeneration projects in East London. Should the area grow richer and businesses relocate there, we could see more clients move in."
As well as taking part in East London’s regeneration, NTT Ltd is aiming to stake out a canny location for the future. “Our centers in Slough are 35km (22 miles) from LINX but this area is just 16km (10 miles), so this made a lot of sense,” Eland told us. “Here there is power and land capacity. The new data center will be a cornerstone of our growing global data center platform.”
When we visited, the basic structure had been finished, and the site was deserted except for cherry pickers and building equipment. Works on basic things like rooms, toilets and office spaces were still to be completed, but some structures and rooms were identifiable. “This is just the end of the first construction phase, and we have an exciting few months ahead as we fit out the core of the data center and our clients bring their workloads online," Eland said.
A space due to become a “cooling corridor” runs along the perimeter of rooms scheduled to become server halls. On our visit, it was marked by a small wall, easy enough to hop over. Air coolers and racks will gradually be moved in when systems go online. Already, the UPS and generators had been installed, in a structure adjacent to the main building. But at the time of our visit, simpler jobs seemed to be taking precedence. “The data center is designed to run about 60MW of IT load,” an onsite technician told us. “We can go slightly above that if we need to. So, if we have high-density land requirements coming in, then we can do them.”
In May, London 1 is scheduled to open, joining a family of five other NTT Ltd data centers located in Slough and Hemel Hempstead. Altogether, the capacity
surpasses 110MW of IT load. When completed, London 1 will cover 54,000 sq m (580,000 sq ft). Once it’s fully built out, the new campus will offer 25,600 sq m (275,000 sq ft) of IT space. Developed in phases, London 1 will ultimately have four halls which will be filled in over time. As for equipment and systems, those will be installed as demand grows. One feature at the Dagenham site that distinguishes it from most data centers is the ‘Innovation Lab,’ a facility where hardware and software can be designed, tested and deployed according to NTT's clients' requests (see box). The innovation lab was a hallmark of e-shelter, the European data center provider that NTT rolled into its Global Data Centers
division. It presents software improvements that can help streamline services and cloud performance.
All data centers promise reliability. Like many facilities, London 1 will measure itself against the de facto industry reliability standard, the Uptime Tier system, but with no formal plans at this stage to get certified. “We normally build to Tier III specifications as standard,” explained Eland. “Tiers are usually very difficult to get but it depends on how it’s designed. If there's a client requirement to get certification we do it; if not, we just build to that standard anyway. [As this building] becomes more prevalent clients may want a ‘comfort blanket’ so we may get it certified.”

“The challenge of being a multi-tenant facility is that you have to be all things to everyone"

Like all multi-tenant data centers (MTDCs), London 1 will have to meet a number of separate customer demands, but Eland believes that a new building, constructed in stages, will be able to do this more readily. “The challenge of being a multi-tenant facility is that you have to be all things to everyone,” he told us. “This is the benefit of building in phases, we can fit it out for different customer requirements. For example, if we need to create 30kW within a specific suite, then that's a very different requirement for, say, a financial services company that may only need a 4-6kW rack.
“So what we do is build flexibility into the design wherever possible.”
Photography: Alex Alley
Try before you buy

NTT’s Innovation Lab allows customers to “try before they buy,” according to CSO John Eland. “All of our clients and partners can come in and test any of their ICT deployments. This service is ideal because the obstacles of a data center are typically cost, and technical feasibility - essentially ‘how do you know if something works.’” The London 1 lab will be joined by similar NTT Ltd facilities in Mumbai, Johannesburg and across the US, along with existing facilities in Japan. “It will operate as a global network of lab facilities. So if you have a global enterprise and are looking for cross-continental solutions, we can give them this test environment. So, Dagenham will be part of that as well,” Eland added.
Recycled buildings Abandoned properties are being turned into data centers, Andy Patrizio finds out why
The trend in data center construction has been to erect an entirely new building, designed to meet all kinds of demands, like maximum power and air conditioning efficiency. But alongside these greenfield developments, there have been some creative exceptions to the rule: old, abandoned structures which are given a new, modern life.
Over the last few years, there has been an effort to convert abandoned buildings in densely populated urban areas into data centers. This flies in the face of the strategy of hyperscalers like Google, Microsoft, and Facebook, who place their data centers far from anything, but close to a renewable energy source, usually hydroelectric power. In recent years, we have seen abandoned malls, prisons, coal-fired electricity plants, and other power plants gutted and reborn as modern data centers.
In Barcelona, Spain, the abandoned Torre Girona church now houses three supercomputers, including the MareNostrum machine, a joint venture between IBM and the Spanish government. The old brick walls and solid floors and roof are about the only thing left. Another iconic site, the Steelcase Pyramid in Michigan, was built in 1989 as a prestige research center for the eponymous office equipment company. It was sold off in 2010, before being bought by Switch and turned into a data center. A high-concept design, the Pionen facility in Stockholm, was originally a nuclear bunker, before colocation provider Bahnhof rebuilt it as a James Bond villain’s lair.
Sometimes conversions are done by REITs, or real estate investment trusts: companies that own or finance income-
producing real estate across a range of property sectors. Most data center REITs like Equinix and Digital Realty Trust prefer to build their own facilities, but some prefer to acquire old properties and convert them. One of the most famous conversions is the former Sun-Times printing facility in downtown Chicago, which was abandoned in 2011. In 2014, QTS Realty Trust took on the task of converting the old building into a modern data center. QTS made out well: it paid $18 million for the 30 acre site, which included a huge power substation. “That electrical infrastructure is very expensive. The substation is worth more than we paid for the property and business, so it’s a good investment from a business standpoint,” said Travis Wright, vice president for energy and sustainability at QTS. It also gave QTS a chance to be good citizens, recycling one million pounds of plastic and metal rather than sending it to a landfill, and earning goodwill with the city. “Local government loves you because you take an icon in the city, like the Chicago Sun-Times building, and you’re able to save this asset that’s been looked upon fondly for the last forty years,” said Wright.
Getting into tight spaces

There are multiple advantages to buying an old building. Land in places like downtown New York, Chicago, Atlanta or Dallas is usually at a premium. There are other reasons as well. “It is the absolute truth our customers want us to be close to where their office is. All too often they are in an urban area and they want a data center nearby. And it puts you in an area where there is access to power and fiber,” said Wright. Fiber can actually be a bigger challenge than power. “You would want multiple
providers near you. You can get land cheaper in the country but you may not be able to convince telcos to run fiber out there, so to get the maximum number of networks, they try to be closer to a city center,” said Kelly Morgan, research vice president for services at Forrester Research. Oftentimes companies will sell their data center and lease back a component or piece of that data center, which helps justify reentry into that building, said Doug Hollidge, a partner with data center provider Five 9s. “As enterprises are transitioning workloads to the cloud, they are consolidating their fleet of data centers or selling them. The site selection process is similar, with power and fiber connectivity being the most important consideration.” One example he cited was the Chicago Mercantile Exchange, which sold its 400,000 square foot data center to CyrusOne, a data center REIT, then leased back a portion of it. This allowed CME to get out of owning the building and allowed CyrusOne to enter the Chicago market quickly.
Barcelona Supercomputing Center

Caveat emptor

Some buildings are better suited for that reuse than others, notes John Sasser, senior vice president of data center operations with data center builder Sabey. “A lot of multistory facilities don’t have the structure for a good data center. You often have low clear heights on an office or school. Data centers have to have walls at least 20 feet in height. It’s for cabling that goes around the cabinets, but mostly it’s for air flow. The more space you have for air to move the more efficient it will move so that's better for air energy,” he said. Like QTS, Sabey often guts an old building
down to the walls. “They might be able to leverage some offices but for the data halls and data rooms it’s most likely a gut and refinish. I can’t think of any conversion where you wouldn’t be ripping out the old infrastructure and putting it in new for a data center build,” he said. Morgan thinks the real challenge is the size of the walls and doors. “I’ve heard all kinds of stories about the hassles companies went through to get the equipment inside,” she said. “The main thing about the structure is the computer equipment is real heavy and some are real big, so you need a really strong building, especially if you are going to have multiple stories.” For QTS, Wright said the criteria for selecting a building are high clear spans with wide openings, so you don’t want columns breaking up floor space; floor capacity to support loads of 300 to 400 pounds per square foot; and of course, access to power and fiber. In some regions there are extra concerns, like in Texas, where a building has to be able to withstand a Category 3 tornado.
Steelcase Pyramid, Michigan
Are more conversions coming?

Will the trend continue? Opinions are mixed on the subject. “I think you'll see more of it,” said Morgan. “As you have this trend of Edge data centers, storing data from IoT devices, you will have more of a need for storing data where lots of people are, and processing data close to where people are generating it. So you are going to need more storage in urban locations.” “We don’t see as much of that as we used to,” said Hollidge. “Many big data center REITs are building ground up data centers. A lot of these new data centers are very state of the art and advanced. It’s harder to transition an older facility to today’s technically advanced facilities.” Wright notes QTS has only done four of what it calls “brownfield conversions,” brown as opposed to green because the land is already developed. “We’re going to be where customers want to be. My guess is it decelerates because the number of available properties that fit the scope of data centers - bigger, more powerful and more dense - it’s rare to find something that works,” he said.
Bahnhof Pionen, Sweden
Preparing for remote working
Combating coronavirus together
When we started working on this issue of the magazine, Covid-19 was just a small virus with a dozen reported cases in a single remote city. Now, as we go to print, nations around the world are on lockdown. I dread to think what it will be like by the time you read this. One can hope that self-isolation measures will slow the spread, that healthcare systems will not be overwhelmed, and that the majority of those at risk will survive this. But it is also likely that, in many parts of the world, this will prove a devastating illness.
In these trying times, it can be easy to give in to despair. Yes, most of us will have to huddle at home, but that does not mean we have to cower while we do it. We will persevere, civilization will go on, communities will slowly rebuild. Throughout this ordeal, the data center industry will have to do its part - online communication will keep businesses afloat, and also help stave off loneliness and boredom. Without connectivity, this whole thing gets a lot worse.
Already, networks are being strained, and demand for digital services is skyrocketing. This comes at a time when you and your colleagues may have to stay at home, and when equipment and supplies may be impacted. This is not going to be easy - expect long, stressful hours, and the fact that some in this industry may be forced to stay inside data centers for extended periods. But remember: You are doing this to help people.
You should also help those immediately around you. Does your company pay sick leave? What about contractors? Does everyone feel comfortable taking time off?
Then, while some of us will struggle, remember this: Of the many industries at risk of being wrecked by this virus, data centers are not one of them. Many companies in the digital sector will do better because of this, especially if teleworking becomes popular for the long term. Some of your customers may not be so lucky. Be ready, and be understanding, about late payments and defaults. This is going to be more difficult for many of them, and they will need help getting through this. This is a crisis, and we need to work together.
“This is not going to be easy - expect long, stressful hours. But remember: You are doing this to help people”
No matter the environment, Starline’s at the center.
Hyperscale, Colocation, Enterprise. Time-tested power distribution for every environment. Starline Track Busway has been the leading overhead power distribution provider—and a critical infrastructure component—for all types of data centers over the past 30 years. The system requires little to no maintenance, and has earned a reputation of reliability due to its innovative busbar design.
StarlineDataCenter.com/DCD
an IMS Engineered Products Brand
MILLIONS OF STANDARD CONFIGURATIONS AVAILABLE IN TWO WEEKS OR LESS
MADE IN THE USA
WWW.AMCOENCLOSURES.COM/DATA
847-391-8100
DCD NEW YORK BOOTH #49