DCD>Magazine Issue 32 - Chilled Efficiency


Issue 32 • April 2019 datacenterdynamics.com

RagingWire CEO: When the REIT answer is wrong

CHILLED EFFICIENCY
The EU-funded Boden Type project aims to build the world’s most efficient data center

AI meets HPC: Science is about to change

> New York Show: preview & highlights

Supplement: Building the Colo Way

Best of Both Worlds

Meeting the demands of enterprise and hyperscale

Construction at Speed

Every second counts when time means money

Webscale Swallows Colo: A lot has changed in just a few years



High Rate & NEW Extreme Life High Rate Batteries for Critical Power Data Center Applications

RELIABLE Narada HRL and NEW HRXL Series batteries deliver! Engineered for exceptional long service life and high rate discharges, Narada batteries are the one to choose. Narada provides solutions to meet your Critical Power needs.

ISO9001/14001/TL9000 Certified Quality Backed by Industry Leading Warranties

Narada...Reliable Battery Solutions www.mpinarada.com - email : ups@mpinarada.com - MPI Narada - Newton, MA Tel: 800-982-4339


ISSN 2058-4946

Contents April 2019

6 News: Nvidia to buy Mellanox; US details exascale supercomputer
12 Chilled efficiency: Traveling to an experimental EU-funded data center in search of energy savings
18 Everybody is watching: The ethics of tax breaks

Industry interview
16 Doug Adams, RagingWire: “This is a highly deceptive market when people frame up their actual build costs. People say anywhere from $5m to $10m, but if they are building at scale, the price will be $7m to $8m.”

19 The colocation supplement: Building the colo way
22 How webscale swallowed colo: It used to be about renting space in racks. Now it’s about whole halls
28 Building at speed: Time is money
30 New arrivals: The biggest colocation stories of the past six months
32 The best of both worlds: Meeting the demands of enterprise and hyperscale
35 How machine learning is changing science: AI and HPC are converging. The results could be profound
42 Smart energy: Uptime’s guide to energy efficient technology
43 The New York show preview: What to watch, and where to go
51 Comparing outages: One man’s downtime is another’s blip
52 The secret life of water mitigation: The world is drowning: Are your data centers prepared for the flood?
54 Big data has its limits: Making hay out of hay leaves you with hay. We are needled




From the Editor: Staying chilled and efficient

We try to avoid clichés. We avoid them like the plague. But in matters connected with Sweden, we finally faced our Waterloo. Max Smolaks, in his last DCD feature (p12), visited a data center in Boden, Sweden. And that data center lived up to every expectation we might have had of the Swedes. It’s efficient, it has a simple design and, just outside the Arctic Circle, it’s very cold.

Scientists make models of reality, but machine learning bypasses the model

Artificial intelligence (AI) has become a cliché, but when you ask the right questions, it becomes clear that the reality of the field is far more interesting than any easy soundbite. Sebastian Moss asked machine learning experts (p35) and found that the techniques are genuinely changing the ways in which science is done, and in turn this is altering computer design. It turns out that while scientists make models of reality, machine learning bypasses the model of nature and directly queries the data, potentially providing an independent view - and it does this whether the data comes from sensors or simulation. Let that sink in. The changes AI will bring are not the simplistic headlines you’ve heard. They are more profound than that.

1.1: Target PUE (power usage effectiveness) of the EU’s Boden Type Data Center One. In the Arctic climate, the actual PUE achieved will be much lower

Colocation is changing, and our supplement examines how that change is driven by hyperscaler customers - but still encompasses retail colocation. We’ve visited some of the most exciting colocation data centers in the world, and feature a CEO briefing with RagingWire’s Doug Adams this issue (p16).


Senior Designer Dot McHugh Designer Shabanam Bhattarai Head of Sales Martin Docherty

Head Office DatacenterDynamics 102–108 Clifton Street London EC2A 4HW +44 (0) 207 377 1907

PEFC Certified This product is from sustainably managed forests and controlled sources PEFC/16-33-254

Peter Judge DCD Global Editor

bit.ly/DCDMagazine


Brazil Correspondent Tatiane Aquim @DCDFocuspt




SEA Correspondent Paul Mah @PaulMah

Chief Marketing Officer Dan Loosemore

Dive deeper: Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color: Training, Debates, Intelligence, Events, Awards, CEEDA.


News Editor Max Smolaks @MaxSmolax

Conference Producer, EMEA Zach Lipman



Deputy Editor Sebastian Moss @SebMoss

Conference Director, NAM Kisandka Moses


Finally, back in Sweden, we begin April with our Energy Smart event in Stockholm. By the time you read this, it will be over, but there will be plenty of material online - and in this magazine, Uptime Institute's experts Andy Lawrence and Rhonda Ascierto assess the status of all the possible technologies within the smart energy sector (p42). We also have a timely feature toward the back of this magazine: how to handle the floods that global warming will bring (p52). Read that, and it could be all the incentive you need to emulate our Swedish friends, and get efficient.

Global Editor Peter Judge @Judgecorp

Reporter Will Calvert @WilliamLCalvert

At our New York event in April (p43), we are expecting to learn more about AI, from the philosophical implications, down to the practical applications inside the data center. We'll also find out how the industry is responding to the Northern Virginia effect, where a giant hub is expanding exponentially. We have an expanded event guide section for those of you visiting our event in New York - and to help those of you who aren't there to gain the benefit of a global event.

Meet the team



www.pefc.org

© 2019 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@ datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.


POWER DISTRIBUTION UNITS

NEW INTELLIGENT POWER MANAGEMENT

Can be specified with any of Olson’s data centre products or as part of a bespoke design or an in-line version for retrofits. • Remote monitoring • Remote switching • Local coloured display • Programmable sequential start • External temperature & humidity measurement • USB port for storing & recalling setup

Designed and Manufactured in the UK

+44 (0)20 8905 7273

sales@olson.co.uk

MORE FROM OLSON • Sequential start • Combination units • International sockets • Automatic transfer switches • 19” Rack & vertical (zero U) units • Custom & bespoke design service

www.olson.co.uk


News: Whitespace

A world connected: The biggest data center news stories of the last two months

NEWS IN BRIEF

Intel, Google and others join forces for CXL interconnect The Compute Express Link is aimed at removing bottlenecks by creating a high speed interconnect between CPUs and other components like GPUs, memory and FPGAs. The group consists of Intel, Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei and Microsoft.

Munters to shut down data center operations in EMEA Swedish air conditioning and climate control specialist Munters will wind down its data center business in EMEA, and close a factory in Dison. Products remain available in North America.

Nvidia to acquire Mellanox for $6.9 billion

The deal is the largest acquisition in Nvidia’s history

Nvidia has announced plans to acquire Mellanox for $6.9 billion. The Santa Clara-based chipmaker outbid Intel’s previous offer of $6bn to acquire the networking hardware provider. This acquisition ends months of reported takeover attempts by some of the world’s largest tech firms, including Intel, Microsoft and Xilinx. Once the deal goes through, subject to regulatory approvals, it will be the largest acquisition in Nvidia’s history. Acquiring Mellanox would boost Nvidia’s data center and high performance computing business, and help the company diversify its product line to become less reliant on the gaming and GPU market. “The emergence of AI and data science, as well as billions of simultaneous computer users, is fueling skyrocketing demand on the world’s data centers,” said Nvidia founder and CEO Jensen Huang.


“Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant data center-scale compute engine.” Mellanox Technologies is a leading producer of high-performance interconnect technology, and one of the first to ship InfiniBand products. Founded in 1999 by former executives from Intel and Galileo Technology in the northern Israeli town of Yoqneam, the company is now based in Sunnyvale, California. “We share the same vision for accelerated computing as Nvidia,” Mellanox founder and CEO Eyal Waldman said. “[This is] a natural extension of our longstanding partnership.” It is unclear what the acquisition will mean for competitors that use Mellanox products, but Nvidia said that “customer sales and support will not change as a result of this transaction.” bit.ly/Nellanox


Google seeks $15 million tax break in Minnesota

The $600 million data center, effectively replacing a coal power station in Becker, Sherburne County, is yet to be confirmed, but Google has asked local government to waive 20 years’ worth of taxes on the proposed building.

Eco-friendly data center planned for central France

The €700 million ($790m) Green Challenge 36 project, in the Ozans business park, is due by 2023, and will feature a 12,000 square meter (130,000 sq ft) Tier IV data center. It is backed by Siemens, Atos and Bouygues.

Microsoft partners with Inmarsat for satellite IoT Azure customers will be able to use Inmarsat’s global satellite communications network, and Inmarsat customers can use the Microsoft Azure IoT Central platform. The announcement follows a deal between AWS and Iridium. Inmarsat operates 13 satellites in geostationary orbit 35,786km (22,236 miles) above the Earth - and claims 99.9 percent availability.


READ MORE What exascale means for machine learning, p35

Aurora, the first US exascale supercomputer, is coming in 2021 From Intel and Cray, at a cost of $500m Aurora will be delivered by Intel and subcontractor Cray to the Department of Energy’s Argonne National Laboratory, for a cost of more than $500m. Whether it will be the first supercomputer capable of one exaflops, or a quintillion (that’s a billion billion) calculations per second, depends on when in 2021 it arrives. Other nations have ambitious deadlines, with China targeting 2020-21, Japan hoping for 2021 and the EU aiming for 2022-23, but each approach is susceptible to delay (see DCD Magazine 31, p16). Delays are in fact why this system will exist. A previous version of Aurora, capable of ‘just’

180 petaflops, was due in 2018, sporting Intel Xeon Phi chips. After the cancellation of Xeon Phi, Aurora was pushed back to 2021, with an expanded performance specification designed to make it the first exascale system. Instead of Xeon Phi, the system will feature an upcoming version of the Xeon Scalable Processor CPU, and ‘Intel Xe,’ which is thought to be the brand name for an upcoming GPU line. It will also feature Intel’s Optane DC Persistent Memory and Intel’s oneAPI, connected by the Cray Slingshot interconnect.

Facebook promised Canadian data center in exchange for relaxed data laws Facebook offered to open a data center in Canada if the Canadian government was willing to relax regulation over the company’s non-Canadian data. Leaked documents seen by journalists from The Observer and Computer Weekly show that Facebook threatened to withhold investment and employment opportunities in Canada unless the government adopted regulations to suit Facebook. Duncan Campbell, a UK-based freelance investigative journalist who helped uncover the story, said: “They were trying to get Canada to give them what they called a letter of comfort which would take a Canadian data center out of Canadian regulation.” bit.ly/NotHowYouMakeFriends

bit.ly/WhoWillBeFirst

GENERATE

MORE ... MORE POWER, MORE SAVINGS. Build your business with room to grow. Generate up to 44% more power in a smaller footprint. Reduce long- and short-term costs and maximize ROI with efficient compact gensets and save up to 27% on installed space. That’s power density. www.cat.com/powerdensity © 2019 Caterpillar. All Rights Reserved. CAT, CATERPILLAR, LET’S DO THE WORK, their respective logos, “Caterpillar Yellow”, the “Power Edge” and Cat “Modern Hex” trade dress as well as corporate and product identity used herein, are trademarks of Caterpillar and may not be used without permission.


NEWS IN BRIEF

IBM to open New York AI hardware research center

The IBM Research AI Hardware Center aims to achieve a 1,000 times performance efficiency improvement for AI workloads over a decade. It will invest $2 billion in SUNY Poly and its other facilities in New York State, in return for a $300m state grant.

Cloud&Heat launches The Beast: a 0.5MW containerized supercomputer

Claimed to be “the world’s most energy efficient supercomputer,” it can hold up to 17,280 CPU cores or 1,056 GPU nodes in a standard 20ft shipping container.

Arm unveils Neoverse N1 & E1 designs for server, edge chips

The Neoverse expands - all the way to 128 cores per CPU

After launching the Neoverse “cloud-to-edge” brand back in October, semiconductor designer Arm has unveiled the first products based on a new microarchitecture, ‘N1’ and ‘E1.’ N1 aims to address the need for diversity of compute types. “Going beyond raw compute performance, the Neoverse N1 platform was built from the ground up with infrastructure-class features including server virtualization, state-of-the-art RAS support, power and performance management, and system level profiling,” Drew Henry, SVP of Arm’s infrastructure business unit, said. “The platform also includes a coherent mesh interconnect, industry-leading

power efficiency, and a compact design approach for tighter integration, enabling scaling from 4- to 128-cores.” While N1 focuses on core compute, at the edge or in a data center, Arm’s E1 has a different focus - throughput. “The Neoverse E1 platform was uniquely designed to enable the transition from 4G to a more scalable 5G infrastructure with more diverse compute requirements,” Henry said. Arm has shared both designs with selected partners, the company said. Arm’s previous forays into the server world have been beset by difficulties. bit.ly/MoreArmThanItsWorth

Google’s DeepMind uses AI to predict wind farm output

Machine learning to make data centers greener

Google’s AI subsidiary DeepMind has developed a machine learning algorithm to predict the productivity of wind farms up to 36 hours in advance. The system is currently applied across 700MW of wind power capacity, which is used in Google data centers and offices in the US. Google says this boosts the value of the energy by 20 percent. “Using a neural network trained on widely available weather forecasts and historical turbine data, we configured the DeepMind system to predict wind power output 36 hours ahead of actual generation,” DeepMind’s Sims Witherspoon and Google’s Will Fadrhonc wrote in a blog post. To date, the company claims, it has “boosted the value of our wind energy by roughly 20 percent,” but said it would continue to refine the algorithm. bit.ly/AIblowhards
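Stripped to its essentials, the approach described here is ordinary supervised learning: train a regression model on past weather forecasts and turbine output, then feed it the forecast for 36 hours ahead. The minimal Python sketch below is hypothetical - synthetic data, assumed features and a generic regressor - and reflects nothing of DeepMind's actual model, only the general pattern.

# Illustrative sketch of the pattern, not DeepMind's model. Features,
# synthetic data and regressor choice are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 25, n),      # forecast wind speed (m/s)
    rng.uniform(0, 360, n),     # forecast wind direction (degrees)
    rng.uniform(980, 1040, n),  # forecast air pressure (hPa)
])
# Toy ground truth: output rises roughly with the cube of wind speed, capped
y = np.clip(X[:, 0] ** 3 / 25.0, 0, 500) + rng.normal(0, 10, n)

model = GradientBoostingRegressor().fit(X[:1500], y[:1500])
print("held-out R^2:", round(model.score(X[1500:], y[1500:]), 3))

# "36 hours ahead" simply means the inputs are tomorrow's forecast values
tomorrow = np.array([[12.0, 200.0, 1012.0]])
print("predicted output:", round(float(model.predict(tomorrow)[0]), 1))

The commercial value comes from the prediction horizon: a grid operator will pay more for energy it can schedule a day in advance than for energy that simply shows up.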



Dell wins $231 million US Navy IT Contract Under the agreement Dell will provide VMware software licenses, software maintenance and other services including data center and cloud infrastructure to the US Navy, over the next four years.

Microsoft buys 125MW of wind energy in Ohio

The agreement includes two 15-year-long power purchase agreements as well as the construction of a new wind farm in the state. The 125MW farm in Paulding County, Ohio, is expected to be operational in 2019.


We’ve got you connected! Increased data volumes are driving the construction of data centre campuses and hyperscale data centres. Can your network support emerging 5G technology where high-fibre availability is critical? We can help with our new data centre interconnect (DCI) solution. Our Corning® RocketRibbon™ extreme-density cables, combined with our innovative splice closures and hardware, can help meet the increasing bandwidth demands in your most challenging environment.

© 2019 Corning Optical Communications. LAN-2484-BEN / March 2019

Are you interconnected? Visit corning.com/emea/en/dci to discover how Corning can help with your data centre interconnect solution.



Power outage at Tier III facility postpones surgery at Oregon hospital

A 45-minute fault at OneNeck’s data center in Bend, Oregon on February 7 caused local provider St Charles Health System to postpone elective surgeries. The Vault failed because of “a power failure of both our main and backup power systems,” a company spokesperson told the Bend Bulletin. OneNeck will be looking into why the redundant system failed. The spokesperson said that services were restored for the majority of customers in 45 minutes to one hour. Among those affected was local ISP BendBroadband, which was back up in 20 minutes. The hospital system wasn’t so lucky. When it saw a network outage at 7:30am, it went to “downtime procedures” and postponed elective surgeries, which were resumed when the system came back up at 2pm. bit.ly/ChokedUp

Widespread Wells Fargo issues blamed on data center outage

Wells F****

Customers of US bank Wells Fargo were left unable to access ATMs, or their online and mobile banking accounts due to an outage at one of the company’s data centers in Shoreview, Minnesota this February. Wells Fargo said: “We’re experiencing system issues due to a power shutdown at one of our facilities, initiated after smoke was detected following routine maintenance. We’re working to restore services as soon as possible. We apologize for the inconvenience.” Other reports dispute this. The local Lake Johanna fire department tweeted: “There was NOT a fire at the Wells Fargo facility in Shoreview, MN. They did have an activation of a fire suppression system that was triggered due to dust from construction.” Unconfirmed comments on Reddit, made prior to the fire department’s statement, supported the fire suppression allegation: “Fire suppression went off in one of their main data centers from some utility work this morning. No power to any of the network or compute equipment and some failovers did not work as expected... everything minus core network gear was manually being unplugged from any PDUs to help control the initial power-on.” Fire suppression system issues have taken data centers out before - including an ING Bank facility in Romania, where the noise of escaping gas was loud enough to damage spinning hard drives. bit.ly/DustInTheShell

Facebook, Instagram, WhatsApp suffer outage Facebook suffered one of the most sustained outages in its history this March, with problems lasting 14 hours across Facebook, Instagram, WhatsApp and Messenger. “Yesterday, as a result of a server configuration change, many people had trouble accessing our apps and services,” Facebook said on Thursday March 14, more than 24 hours after issues were first reported, using rival service Twitter. “We’ve now resolved the issues and our systems are recovering. We’re very sorry for the inconvenience and appreciate everyone’s patience.” Facebook went down for longer in 2008, but only had 150 million users at the time, instead of the 2.3 billion monthly users it has today. bit.ly/DemocracyWillHaveToBreakItself




Los Alamos upgrades its D-Wave quantum computer The Department of Energy’s Los Alamos National Laboratory has upgraded its D-Wave quantum computer to the latest version, the D-Wave 2000Q. After acquiring a 1,000+ qubit D-Wave 2X quantum computer in 2015, Los Alamos and its research collaborators have built more than 60 early quantum applications and conducted research into domains such as quantum mechanics, linear algebra, computer science, machine learning, earth science, biochemistry and sociology. “D-Wave has been a valued strategic partner in Los Alamos’ pursuit of a new technology that is part of the expanding heterogeneous landscape of computing,” Irene Qualters, associate laboratory director for Simulation and Computation at Los Alamos National Laboratory, said. bit.ly/QuantumWave


Google announces ‘Stadia,’ a video game cloud streaming service The data center as a console “For several years we have been working on a game streaming platform,” Google CEO Sundar Pichai said at the annual Games Developers Conference this March. “It was probably the worst kept secret in the industry.” After rumors, live trials, and a lot of teasing, Google has announced a new video game streaming service: Stadia. “With Google your games will be immediately discoverable on Chrome, Chromebooks, Chromecasts - we hope to bring it to other devices and browsers,” Pichai said. The platform is aimed at “everyone,” the company said - although a fast and stable Internet connection will be required. In an effort to lower latency, the Stadia

controller connects directly via WiFi to the data center, rather than through the device. Stadia relies heavily on the YouTube gaming community - various YouTube stars were shown in promotional videos, and users will be able to watch a game trailer on YouTube, press a button and start playing the title on Stadia. “We will be handing that extraordinary power of the data center over to you,” Google VP and GM Phil Harrison said. “With Stadia the data center is your platform, there is no console that limits your ideas.” Majd Bakar, head of Stadia, said that the system “is built on infrastructure no one else has.” bit.ly/NetflixofGames

Microsoft opens Azure Cloud data centers in South Africa

Microsoft has opened two new data centers in Cape Town and Johannesburg which will give South Africa faster access to Azure Cloud. The twin data centers will be Microsoft’s first in Africa. The company follows Huawei which made its cloud service available in Johannesburg in the same week. The company previously promised that the data centers would be online by the end of 2018, but Microsoft failed to live up to this goal and has not given a reason for the delay. Azure Cloud is available now, while Office 365, Microsoft’s cloud-based productivity solution, is anticipated to be available by the third quarter of 2019, and Dynamics 365, the company’s cloud business application, is anticipated for a fourth quarter release. The company joins IBM in South Africa, while AWS is expected to open in 2020. bit.ly/CloudsDowninAfrica



CHILLED EFFICIENCY

Just outside the Arctic Circle, the EU is funding a data center that could break efficiency records. Max Smolaks visited Boden to find out more

As we touched down at Luleå Airport, the pilot cheerfully informed us that the temperature outside was -26°C (-14.8°F). That’s cold, even by Swedish standards. I’ve braved this weather to see Boden Type Data Center One – an EU-funded effort to build the world’s most efficient data center. The project is part of Horizon 2020, a €77 billion ($87bn) research and innovation program aimed at securing Europe's competitiveness on the global stage. This is the eighth such program to take place since 1984. A large proportion of its initiatives are focused on cleantech and sustainability – and since data centers are responsible for anywhere between two and five percent of global electricity consumption, the industry is an obvious target for European policymakers. The prototype 500kW facility in the small town of Boden uses every trick in the book to lower its environmental impact: it runs on renewable energy and doesn’t have batteries or gensets. The data center relies on a combination of free and evaporative cooling with no need for refrigerants, and uses low-carbon, locally-sourced building materials. Despite its small size, the project borrows design elements from hyperscale data

centers, like slab concrete floors (rather than raised tiles), and a lack of plenum - instead, there's a version of the ‘chicken coop’ ventilation design, originally made famous by Yahoo. The first batch of servers housed in the facility was previously used by Facebook, and arrived prepackaged in a ‘rack and roll’ configuration. There are sensor arrays deployed throughout the building - collecting extensive data for analysis is one of the objectives of the project. But the main goal is to see if it's possible to build a data center that enjoys the cost benefits of hyperscale facilities, but comes in any size. "What we are doing with this project is we are creating a very efficient, and therefore low cost, operating system, we are creating a very low cost building system, which is going to enable the little guys," said Alan Beresford, managing director of British evaporative cooling specialist EcoCooling, at the Boden Type inauguration event. “By little, I mean truly small operators, compared to the world



of multi-gigawatt operators: less than 100kW.” In line with Horizon 2020 requirements for cross-border cooperation, the project brings together organizations from four European countries; it's a collaboration between Boden Business Agency (Sweden), engineering firm H1 Systems (Hungary), EcoCooling (UK), and two research organizations - Fraunhofer IOSB (Germany) and RISE SICS North (Sweden). Work on the project kicked off in October 2017. BBA was responsible for things like planning permissions and negotiations with local politicians; H1 Systems managed the construction work; EcoCooling contributed its proprietary free cooling tech; Fraunhofer designed synthetic workloads to put the facility through its paces, while RISE was tasked with monitoring, data collection and analysis. Like the buildings around it, the exterior of the data center is painted deep red: the color is known as Falu red, after the copper mine in the nearby town of Falun. Inside, there are three data halls – the first has been

"We are trying to push forward the current benchmarking and metrics of energy efficiency"


outfitted with two rows of Open Compute Project racks, stacked with 180 of Facebook's simplified servers. This is where Fraunhofer runs its test workloads - currently modeled on the data output of a smart city, namely, Hamburg. The second hall – still empty – was designed for high-density IT equipment, and will be used for demanding academic research and 3D rendering. The third hall hosts empty shelves. These were specifically created for cryptocurrency mining equipment, but with the bitcoin market experiencing what’s been termed a ‘crypto winter,’ it’s not clear whether it will ever get to serve its original purpose. The ultra-high density specifications could be even more useful for new types of hardware - like FPGAs or ASICs used in machine learning. The building was ready in less than five months, on par with some of the fastest prefabricated data center operations out there. Beresford said that this approach to construction could create a new type of data center campus, essentially working as a trailer park: just come in, stand up a facility in a few months, plug in power and water, and it's ready. Everything inside Boden Type is instrumented for fine-grained control and data acquisition. According to Dr Jon Summers, a specialist in fluid mechanics at RISE, the facility picks up around 500 data points every second. "What we would like to do is experiment with the fan controllers," Summers explained. "The thing is, not all servers are doing the same work. When you have a mixture [of workloads], you are getting different Delta-Ts, so if we could change the fan speeds to get the same Delta-T irrespective of the workload, that would be quite an innovation. "And if we can get the control systems between the IT and the cooling system to talk about this, then we might have a chance of showing how we can do holistic control across the entire data center." The prototype facility is not perfect for every task: its remote location and the lack of traditional redundancy features means it's mostly suitable for non-mission critical, yet power-intensive workloads. At the moment, whatever redundancy there is, is achieved through software - but the creators of the

design say it could be easily adapted to include UPS and gensets, and possibly even shape up for a Tier certificate. But why should anyone outside of Sweden care about something that happens in Boden? This is where the second part of the project comes in: the idea is to codify and export the design, making it deployable not just in the Nordics, but anywhere in Europe, in a variety of configurations. Technically, the customer here is the European Commission; the project sets a requirement of an average annual PUE of 1.1, but power usage effectiveness depends strongly on location. In Boden’s climate, the actual figures are apparently much, much lower, and the real-world performance will be confirmed after running extensive benchmarks over the next few months. "If people embrace some of these techniques we are demonstrating here, then there is the potential to reduce the western world's electricity use by one or two percent. That's a truly massive potential impact. This is the starting point," Beresford said. You might think that a government-sponsored data center design would be

no match for experiments funded by the private sector. But the amount of investment required is minuscule, especially on a European scale. For just €3 million ($3.41m), the EU is getting a fully functioning 500kW research facility with impressive efficiency credentials, and that's a good deal. "With projects such as Boden Type DC we are trying to push forward the current benchmarking and metrics of energy efficiency," said Pau-Rey García, project advisor on Horizon 2020 energy initiatives. "Hopefully soon, Boden Type project will reach new levels of efficiency, and will be a perfect lighthouse project, delivering concrete and useful results for the community in a highly competitive and fast-growing sector."
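Summers’ fan idea and the project’s headline metric both reduce to simple arithmetic. The Python below is purely illustrative - it is not code from Boden Type or RISE, and the sensor readings, target Delta-T and controller gain are assumptions - but it shows the shape of a loop that computes PUE (total facility power divided by IT power) and nudges a fan speed toward a constant Delta-T.

# Illustrative sketch only - not Boden Type or RISE code. Sensor values,
# the target Delta-T and the controller gain below are assumptions.

def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_kw

def adjust_fan(fan_speed_pct: float, supply_c: float, return_c: float,
               target_delta_t: float = 10.0, gain: float = 2.0) -> float:
    """Proportional step: raise fan speed when the measured Delta-T overshoots
    the target (exhaust air too hot), lower it when the Delta-T undershoots."""
    delta_t = return_c - supply_c
    error = delta_t - target_delta_t
    new_speed = fan_speed_pct + gain * error
    return max(20.0, min(100.0, new_speed))  # clamp to a safe operating band

# One control tick with made-up readings
speed = adjust_fan(fan_speed_pct=55.0, supply_c=18.0, return_c=31.0)
print(f"new fan speed: {speed:.1f}%")                            # 61.0%
print(f"PUE: {pue(total_facility_kw=520.0, it_kw=500.0):.2f}")   # 1.04

The hard part, as Summers notes, is not the arithmetic but getting the IT and cooling control systems to exchange this information quickly enough to act on it across a whole facility.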

DCD>Awards The Data Center Eco-Sustainability Award

Enter the Awards

At the DCD>Awards we highlight the most innovative sustainable data center initiatives with the Eco-Sustainability Award. Last year's winner was NREL, whose “chips-to-bricks” holistic approach to data center efficiency led to a PUE of 1.036 bit.ly/DCDAwards2019



A short guide to creating a data center destination

Boden is an industrial town in the far north of Sweden, close to the Arctic Circle. Throughout its recent history, the municipality served as an important mining hub; it also hosted Sweden’s largest military base, intended to protect the country against possible Russian aggression. But local mines and ore fields replaced people with machinery, and the size of the army was reduced, leaving Boden in a state of decline, like so many other provincial towns – and then data centers came along. "We are turning former military airfields and hangars into a home for digital services," Claes Nordmark, Mayor of Boden, said at the launch of Boden DC Type One. It would be fair to say that the rebirth of Boden started with Facebook, and its

decision, back in 2011, to open the company’s first European data center in Luleå. What attracted the American company to the region was the abundance of renewable power. There are 13 hydroelectric dams crossing the nearby Luleå River, producing around four gigawatts of power - and no matter how hard the local communities try to use it all, they are left with a surplus of more than two gigawatts. I was told that the river essentially serves as a giant battery for the entire region – not in the sense of storing power, but as a demand response and frequency regulation mechanism, keeping the local grid at rock-solid 50Hz. The last grid-level outage here happened in the 1970s. The second reason for Facebook’s

decision was staff: just around 80,000 people live in the region, but 10,000 of them are students at the Luleå University of Technology, offering a steady supply of skilled workers. Other benefits of the region include a subarctic climate, a stable political situation and some of Europe's lowest taxes on electricity consumed by data centers. Recognizing latent potential, Boden Business Agency (BBA) made a speculative investment in a 200MW substation and started promoting the town as a data center hub. Facebook was followed by a number of smaller data center operators, like colocation providers Hydro66 and Fortlax, and several cryptocurrency mining ventures. BBA also got involved in Node Pole - an initiative launched in 2011 to offer plots of industrial land near Boden, pre-approved for redevelopment into data centers, complete with power and network connectivity. A few years later, two local power companies realized that, rather than selling their excess electricity abroad and incurring transmission losses, they should attract large electricity consumers to the North of Sweden: in 2016, Node Pole was acquired by Vattenfall and Skellefteå Kraft, and they have since expanded the initiative to include not just sites around Boden and Luleå, but across the country. Another catalyst for Boden's data center fortunes was the establishment of a local chapter of the RISE Research Institutes of Sweden in Luleå. RISE SICS North, led by Professor Tor Björn Minde - who also serves as head of research strategy at Ericsson Research - lives and breathes data centers. Among the things they have here is a unique wind tunnel


built to analyze server ventilation, and a massive plexiglass conservatory for airflow experiments. You can see an assortment of metal boxes filled with a viscous liquid in the corner - students have been building liquid cooling enclosures. The sensor arrays used in Boden DC Type One to track temperature and other environmental factors were created by the staff of RISE SICS North. "To get research going you need money. And how do you get money? You need companies. And how do you attract companies? OK, let's build a unique test facility that you can't find anywhere else. We started in 2015, and in 2016 we started the lab," Minde told DCD. Today, RISE SICS North employs 20 people. "Everything we do here is applied research," said Dr Jon Summers, scientific lead at RISE, who started working in Boden after spending 27 years at the University of Leeds in the UK. "It's the whole stack. From the infrastructure side, we're looking from the ground to the cloud, or from the chip to the chiller. "The key thing we have here is we collect a lot of data, and you can use analytics to demonstrate efficiency. We use the same kind of monitoring technology that we put together in here - which is built on open source software - we use that in Boden [project]." Participation in Boden Type DC One was an especially valuable opportunity because of all the data that could inform future research

"Let's build a unique test facility that you can't find anywhere else"

at RISE: "It's very difficult to get a hold of data from live data centers - nobody will give you access to their data," Summers said. While the Boden Type project is waiting for results, BBA continues to position the town as a prime data center location. The latest designated data center zone is Svartbyn, and a substation is already under construction. The agency says it has 200-300MW of potential power capacity up for grabs – and more in the pipeline. Right now, the land is just pine forest covered by snow, but soon, it could house an assortment of cloud, colocation and hosting facilities. "We think that the [Boden Type DC One]

project has found the right home here in Boden, being a data center growth area at the moment, and - with the Svartbyn investment - even more so in the future," the Mayor said. The municipality has ambitious plans: it wants to attract not just data centers, but also data-intensive industries. Earlier this year, Boden got its own small film studio, called Studio Nord, and there’s hope for 3D and animation rendering workloads; there’s also a growing game developer community, with professional education opportunities, and even a local e-sports team. The small team of public servants hopes to create an ecosystem – and is attempting to do this without handing out generous tax incentives, the way local governments sometimes do in other parts of the world.




RagingWire CEO Doug Adams tells Peter Judge about being part of a Japanese giant, and combining retail and wholesale colocation

How does the data center industry satisfy cloud providers at the same time as small colocation customers? It’s a crucial question for colocation providers. But is a $100 billion Japanese telecoms giant really the organization most likely to come up with the answer? Those were our main questions for Doug Adams, CEO of RagingWire, when we met him at the company’s 28.4MW campus in Ashburn, Northern Virginia in 2018. He’s in a position to answer both. He helped found RagingWire in 2000, and has spent nearly 20 years building and running colocation data centers for enterprises. For the past five years of this, RagingWire has been under the ownership of Japanese telco NTT, moving the company up to a new league. For some, it would be a surprise to see a telecoms company buying into data centers. Most operators, from AT&T to Verizon, have cut back or got out of data center ownership altogether since 2015, leaving the business to specialists. Why does NTT think it can do better, and why buy RagingWire?

Adams tells me the company started out without venture funding, providing space for business customers in California, and building a reputation for reliability. When telcos started realizing they couldn’t compete with the specialists, NTT took a longer view, and bought 80 percent of RagingWire in 2014, completing the purchase in 2017. “[The US telcos] were very short-sighted, very quarterly focused,” he said. “They were getting their tushes handed to them by the Equinixes, Digitals and RagingWires of the world, and they backed out. I think NTT was extraordinarily intelligent for doubling down on this business.” The company has data centers in 21 countries, mostly through acquiring providers such as RagingWire, along with NetMagic (2018) in India, Gyron (2012) in the

16 DCD Magazine • datacenterdynamics.com

UK, e-Shelter (2015) in Germany. Most have kept their own branding locally, but centrally, they are all marketed under the NexCenter brand - which also includes a measure of standardization of the product. Adams meets regularly with the heads of e-Shelter and NetMagic, and: “We don’t use the same gensets, but we offer the same SLAs. We have a common set of APIs.” It’s an arrangement that has a lot of advantages, said Adams, although it lacks the tax benefits of being a data center real estate investment trust (REIT) like Digital Realty or Equinix. “REITs are not the perfect answer,” he said, suggesting that REITs are limited in the services they can offer, and have to be structured as a set of national LLC companies, with regional variations. “NTT is a large non-REIT option, not constrained to the issues REITs have to deal with.”

"REITs are not the perfect answer. We are a large option, not as constrained as a REIT"


“There’s nothing stopping us,” said Adams. “We are doing pre-construction for VA5 at the same time as construction of VA4, because our funnel is strong enough we’ve already pre-sold part of VA4. When we’ve built VA4 by August 2019, we would expect much of it to be sold.” The focus has shifted while the campus is being built. VA1 and VA2 were essentially stick-built, but the later buildings have become more modular: “VA5 will be the RagingWire 2.0 showcase.” RagingWire now uses a prefabricated shell, and the space is sold while the concrete dries. The company prepares the site, including fiber pipes and power, and orders the shell. “Then we have it sitting on the lot of the pre-fabricator, and we go out and start pre-selling it,” said Adams. “The building is fit up and dried out in less than two months.” This cycle gives the company a head start on selling, and it also allows for faster and cheaper construction, by shifting the building process away from winter: “We construct in a time when it is less expensive. We avoid the middle of winter, when there is snow, mud and rain.” It’s also more focused on the needs of hyperscale customers for large capacity delivered fast and cheaply. VA1 and VA2 were made with 2MW vaults available, while VA4 and VA5 offer 8MW halls. On cost, Adams reckoned he can build for $7.5 million per megawatt. If others claim to manage $5 million per MW, they are stretching the truth, he says, by leaving out things like land costs. “This is a highly deceptive market when people frame up their actual build costs,” he told DCD. “People say anywhere from $5m to $10m, but if they are honest and building at scale, the price will be $7m to $8 million.” The company still caters for smaller customers, however: “I can swing into the supergranular. We sell an 8MW hall, but we also do a 250kW break,” said Adams. “About 70 to 80 percent of our customers are wholesale, about 20 to 30 are more traditional retail.” The combination actually has synergies: the building benefits from the economies of scale demanded by webscale customers, while the retail customers have the benefits of being close to those cloud providers: “It makes for a unique mix of customers. We’re not just retail, we’re not just hyperscale. We have the best of both. Retail equates to shorter terms and higher margins, while

wholesale equates to longer terms and lower margins.” It’s striking that the first Virginia facility, where we are, must have cost substantially more than the later price-competitive modular builds. RagingWire was a runner-up in DCD’s Most Beautiful Data Center award in 2017 with its TX1 facility; the VA1 data center has the same attention to detail. It won a design award from Loudoun County’s Design Cabinet, praised for its “vibrant blue, yellow and purple exterior colors” and its “majestic, two-story atrium” with natural light. “I think we consciously put a lot of amenities in the first building on the campus,” said Adams. “We try to make sure one building on each campus is a nice building, with all the bells and whistles. Thereafter, they are much more utilitarian.” VA1 has office space, a games room, and carefully constructed observation windows, so visitors can see into the critical facilities space, without having to enter a secure area - a feature RagingWire has used in other sites including Sacramento’s SA3 facility. I asked about a stretch of water by the facility. Other Ashburn operators refer to it as “The Moat,” implying it’s a sign of overspending. Adams grinned at the suggestion: “Nah, it’s a retention pond.” Compared to subsequent buildings, it is as if VA1 was “dipped in gold,” Adams laughed. Future buildings will be more of a “market standard,” but they will never be the same as the rest: “In my opinion we are always going to build a better, higher quality data center. We don’t shoot for the de minimis level. I think we build within a stone’s throw of the super low cost providers, but we also add more value.”

“This market is deceptive. People claim construction costs from $5m to $10m per megawatt"

Peter Judge Global Editor

There’s no doubt the US is important to NTT. It makes up 43 percent of global sales for data centers, and the top four markets represent 24 percent of global sales: “All of Europe is 19 percent, but Chicago, Texas, Silicon Valley and Northern Virginia together make up 24 percent of the globe.” Adams is visibly excited to have such solid backing. He’s going after business from the cloud giants now, and he showed DCD the fruits of this in Ashburn, the largest data center hub in the US.

We met him in VA1, RagingWire’s first facility in Ashburn’s so-called Data Center Alley. It sits on a 78 acre plot where subsequent builds show the rapid pace of data center construction, and a change in RagingWire’s approach. There are two other completed buildings, another two in construction, and space for eight in total. The three existing data centers are single-story: VA1 has a capacity of 14.4MW, VA2 has 14MW, and VA3 is a 16MW building. VA4 and VA5 are being built and pre-sold now, and both will be two-story 32MW facilities.

There’s a distinctive atmosphere in Ashburn. All the operators are competitors, but they’re also friends; any critiques of each others’ products are good-natured, amounting to teasing. Adams has an answer to that: “The reason we all talk to each other is, we are still in the second innings.” He’s referring to baseball. Early in the match, people don’t take it so seriously: “In the third innings, we are all going to get a little upset with each other, because we are fighting over the same deals. "In the fourth innings, we are all going to hate each other - but we are nowhere near the fourth innings.”



Opinion | Cutting tax cuts

Everybody is watching

Sebastian Moss Deputy Editor

Why do tech giants get big tax breaks from governments? Sebastian Moss investigates

Regular DCD readers know all about the tax breaks offered by multiple US states to influence the location of data centers built by large players like Facebook and Amazon. The story spread to the wider public in February, when the Washington Post reported that “Google reaped millions in tax breaks as it secretly expanded its real estate footprint across the US.” Under the alias Sharka, LLC, Google secured tax breaks in Midlothian, Texas, WP reported. DCD has written extensively about Sharka, along with other Google aliases like Jasmine Development and Fireball Group, and Facebook's aliases like Raven Northbrook and Stadion. Sometimes, these companies buy several lots of land in neighboring states and wait for the best offer before choosing a site. Last year, Amazon started raising this issue in the public mind, by making its search for HQ2 a public contest, sparking a backlash that went beyond the Queens community, with the public questioning why the company run by the world's richest man needed $3bn in subsidies. On a smaller scale, tax breaks are usually a win-win: the company saves some money, and the small town - usually yet to recover from the Great Recession - gets (a few) jobs, some infrastructure and a marquee company name to try to win further investment. But at the larger scale, the situation is very different: Google may not have come to

Midlothian without those tax incentives, but it had to build somewhere. Hyperscale capex is growing, Google is growing, the needs of the Internet are growing. If company-specific tax breaks were somehow not allowed, Google, as well as all the other tech giants, would still have to pick somewhere to build their data centers. They would then provide far more money for schools, public transport, and all the other necessities that local taxes pay for. This is the trick large businesses have managed to keep going: convincing communities that they have to expect less than their dues to prosper, when the reality is the overall net gain for local residents is reduced. It's hard to know how beneficial a data center is to a community - the size of investment can vary, the power requirements can vary, the water usage can vary, and the tax breaks can vary. Muddying the waters further is the fact that most reports on the benefits of data centers are sponsored by the companies that built them. Google, for instance,


funded a report showing benefits to local economies - and to be fair, its corporate social responsibility team provides initiatives like teaching the local schoolchildren how to code on Chromebooks, and giving out free WiFi. But the positive impact of data centers must be weighed against potential issues such as noise and air pollution, increases in electricity prices, and the use of aquifer water. And that positive impact would simply be higher if the tech companies paid the same taxes as other businesses. Some will say it is unfair to pick on these organizations; they are, after all, simply corporations that are beholden to their shareholders, and will take whatever steps are necessary to improve their bottom line. “When we enter new communities we use common industry practices,” Google spokesperson Katherine Williams told the Washington Post. This is mostly true - but sometimes, large corporations also influence the political discourse, the way Facebook did when it was caught secretly helping push through a data center tax break law in Utah. And even if this is just standard industry practice, does that make it okay? Google's now-retired 'don't be evil' slogan was an in-joke, but the company has always styled itself as a corporation that tries to do things differently - even when it doesn't help the bottom line. Google has led the way on renewable energy investment, and is the largest private renewable energy purchaser in the US. It has helped pass regulations so that non-utility companies could buy renewable energy directly in countries like Taiwan, and has set the bar for how large businesses should approach power purchases. Imagine if it had the same approach to taxes? Imagine the public infrastructure, education and healthcare improvements that would be possible if companies like Google used their vast lobbying efforts to make race-to-the-bottom fighting between small communities a thing of the past? It's a naive thought. But the HQ2 debacle must surely raise awareness over secretive tax arrangements, and the long-term damage they can cause. DCD will keep reporting on these stories, with a lot of help from the tragically underfunded local news publications that serve small communities. We can only hope that the rest of the media remains interested in data centers.

"Most reports on the benefits of data centers are sponsored by the companies that built them"


> Building the Colo Way | Supplement

Sponsored by

INSIDE

Webscale swallows colo

Building at speed

The best of both worlds

> Colocation used to be about renting space in racks. Now it’s about selling a whole hall at a time

> Talking to CyrusOne about how to shave crucial days off of data center construction

> David Liggitt talks about balancing traditional customers with hyperscale demand



A Special Supplement to DCD, April 2019

Contents

Colocation: is it still about shared space?

Features
22 - 25 How webscale swallowed colo
28 - 29 Building at speed
30 - 31 News roundup: Changing the landscape
32 The best of both worlds


It used to be simple. Some people had their own data centers, but others put their equipment into the colocation space. The landlord provided power and cooling, and they rented space. One rack at a time. It's more complicated than that now. People with their own data center space can have other people run it, or even fill it with equipment rented from cloud providers (Azure Stack or Amazon Outposts, say). In the colocation spaces, there is equipment owned and operated by end users, by their IT partners, and by cloud providers. Among these options, there is one that is getting more visibility than the others. But all of them have a solid future.

CyrusOne gave us details of how it does this, along with a tour of its Ashburn campus (p28). There are similarities and interesting differences to the methods of Compass, QTS and others (p22).

Hyperscale customers can buy 8MW or 10MW at a time (p22). In a mega-facility at a hub like Ashburn, that's just one hall, but to much of the rest of the world, that's an entire data center in one single bite. That model, along with the speed that hyperscalers expect, is changing the dynamics of that part of the market, as providers develop new techniques to meet the demands of hyperscale customers.

Efficiency will rule in all sectors of the colocation market, and this issue's cover feature (p12) shows how this evolution touches all parts of the market. In Boden, Sweden, an EU-sponsored data center is pushing the boundaries of efficiency. Facebook has donated hyperscale kit, but the project aims to deliver the kind of PUE that the giants achieve. One rack at a time.

But there's more to it. David Liggitt of information provider datacenterHawk aptly reminds us (p32) that these mega-facilities only really exist in giant hubs like Chicago and Northern Virginia. In more rural areas, colocation continues in smaller slices. Colo vendors end up dealing with both hyperscalers and retail customers. How do they do that? Doug Adams, CEO of RagingWire, gives a good description elsewhere in this issue (p16).

DCD>Debates How can you reduce the cost per megawatt for new data centers?


May 14 11am CDT

For big players, the colocation business is all about margins. Given the level of competition, it's hard to justify increased prices, so everyone has to look at their costs very carefully. Everyone needs to build quickly, cheaply and efficiently. We give some top tips on how to achieve this. bit.ly/MegaWattDebate



Building the Colo Way | Webscale swallows colo

How webscale is swallowing colo

Colocation used to be about renting space in racks. Now providers want to sell a whole hall at a time. Peter Judge finds out more

Colocation buildings started out as neutral territory where enterprises and other organizations could rent space, bandwidth or equipment. Most customers would need a rack or two, or maybe an aisle. They’d get a pass to get into the facility, and a key to the cage and their rack(s). Sometimes called “carrier hotels,” these facilities were an opportunity for organizations to close their own data centers, or to expand beyond IT suites that were bursting at the seams, using space, security, power and cooling that was paid for by a central provider. This “retail colocation” approach is still very significant, but in many locations, it is being superseded by wholesale colocation in which a giant customer, usually a large enterprise or a web services provider, buys up a whole suite, a hall or even a whole data center in one go. “Wholesale is massively outgrowing retail,” said Adil Atlassy, recently appointed

22 DCD Magazine • datacenterdynamics.com

Peter Judge Global Editor


as CTO at wholesale provider Compass Datacenters. Previously, he managed site selection for Microsoft, handling the giant’s acquisition of data center space, so he’s seen the change from both sides of the transaction.

In volume terms, retail may be growing, but the customers are changing, Atlassy told DCD. Customers are migrating much of their IT to the cloud, but their "crown jewels" remain in their own data centers, he said. This creates a hybrid cloud. "A tremendous amount needs to be optimized in the cloud to move the crown jewels," he points out. But the massive growth in the public cloud equates to a boom in the wholesale space the hyperscale cloud providers buy.

Wholesale colo providers have one big competitor when selling to hyperscale customers: the hyperscalers themselves. Hyperscalers build their own giant, cheap facilities for planned capacity. They shop around for wholesale colocation when they have to top this up.

Compass builds in multiple sizes, with 1.2MW and 2.4MW buildings suiting retail colo providers, and enterprises looking at 6MW facilities. For hyperscalers, Compass makes 32MW data centers.

In Northern Virginia last year, most of the colocation providers DCD visited were entirely focused on chasing the wholesale market, with very positive results. Big customers, up to the size of Google or Facebook, were buying up capacity faster than they could build it.

"We believe our business works best when we have an intentional focus on the hyperscale community," Tag Greason, chief hyperscale officer at wholesale colo provider QTS, told us at the company's new facility in Ashburn. It's obvious where that's taken QTS: the current facility is three stories tall, and will have 32MW of available power when complete. It is mostly being leased a room at a time, but those rooms are big.

The size is perhaps more obvious at Digital Realty. At 36MW, Building L in Ashburn will be Digital's biggest - and that's just the first phase of a facility which will ultimately offer 84MW. Building L will have 6MW data halls, each of which is 36,000 sq ft, just short of an acre (43,600 sq ft). At that size, the halls have diagonal pillars to brace the ceiling against earthquakes - even though Ashburn has no seismic activity.

"Only five years ago you felt a 250,000 sq ft building with 10MW was a lot," said Jon Litvany, senior sales director at Digital. "Now we may do a lease for 36MW for one client, and they'll take it in six rooms."

Most players keep a foot in the retail colocation space: "Few people out there are pure play. Everybody's got a mix of public cloud and infrastructure and bare metal, and that mixture falls into a bucket called hybrid colocation," said Greason. So wholesale colo providers often have a hall or two of retail colocation which, these days, is sold under the hybrid brand because it is capacity which has come out of an in-house data center, and may be moved on to a cloud provider, but is run by a customer using all three.

"They build this 200MW campus, and then they look to me for one of three things," said Greason. "Bringing capacity on a little quicker, providing overflow for production capacity that they didn't plan for, and moving non-production workloads out of their environments."

Atlassy thinks wholesale colo can get more of the hyperscale business: "The volume at which [the cloud giants] are all growing dwarfs their ability to provide in-house. To make that growth, they will have to outsource," he said. His belief is that ultimately, Facebook, Google and Amazon Web Services (AWS) will decide that construction is not their core business.

Those chasing hyperscalers need to come close to matching the costs the hyperscaler would pay to build for themselves: they call this "owner economics." Said Greason: "They're coming to me because I'm close to owner economics."

Atlassy agreed: "You want faster, cheaper and better," he said, and traditionally, the customer has to choose any two. He reckons the industry can get beyond that and for him, the secret is scale: "With scale and volume comes more efficiency."

"Five years ago, a 10MW 250,000 sq ft building was a lot. Now it's 36MW in six rooms."

Pods on skids

Digital Realty and others have adopted a "pod" architecture. Instead of the "stick built" approach, which builds everything on site, it builds its electrical rooms and cooling systems off site on skids. Said Litvany: "We're assembling them in a factory then, rather than thousands of connections on site, it ends up being dozens."

Equipment is commissioned up to a point in the lab, and the final commissioning happens on site, where it is slid into place in a building that has been built on site using "tilt-wall" construction (see box). "All our electrical rooms are prefabricated in modules off site," said Compass's Atlassy. "They are fully tested at the factory, shipped, pushed into the building and connected up."

One major impact on the cost base is that the wholesale colocation provider does not need to send so many highly-skilled electricians to the site.




Innovation for hyperscalers is about getting efficiency while taking out as much cost as possible, without jeopardizing uptime. Providers aim for between $5 million and $10 million per megawatt - that's a wide variation, but then there is no clear standard for what is included in that price, and how long the contract is for.

It starts with a focus on the building: "Hyperscalers care about location, economics, scale and speed. If you can't answer the mail on those four, you are literally not at the adult table talking with them." So if Facebook asks for 10MW in Ashburn, it doesn't matter how good your Atlanta data center is: you don't get the deal. To do this, they also have to be physically close to the giants' campuses.

Moving non-production workloads to wholesale colo leaves the giants with a homogeneous IT load of customer-facing applications in their own monolithic facilities, said Greason: "Hyperscalers can be very, very efficient in their operations, if they don't have to worry about test-dev, IT, or non-production assets within their production environment."

And those non-production jobs are very significant. Greason reckons around ten percent of a hyperscaler's workloads are non-production. For a smaller company, that wouldn't amount to much, but for a Google or an AWS, it could be 10MW.

"They're dealing in hundreds of megawatts in capacity planning - and 6MW is a rounding error," said Greason. "'Can you take care of this little nuance?' For me and my competitors, that's not a rounding error. A regional financial bank wouldn't say 6MW is a rounding error. It's their entire business. It's probably double their entire business."

For wholesale colos, dealing with the hyperscalers' overflow isn't lesser work for Greason: "Five to 10MW of non-production environment is very valuable to me - and it's very valuable to them to move it."

Wholesale colo must also have scale: "You must build big, but can you manage it?" asked Greason. "And do you have experience doing it?" The big wholesale colo players all point to their track record.

The final requirement is speed, but this is not a simple measure, said Greason: "Most people just immediately think, how quickly can you deliver? But how fast can you do the lease? How fast can you answer the RFP? How quickly could you purchase that land in a new location? There are so many aspects: it's not just about bringing a megawatt online."

The last part is really speed of construction. Wholesale colo providers build the shell first, and then fill out the floors. The order of construction is important. QTS is building 32MW on three stories, so it fits out one 8MW quadrant at a time. Part of the reason for this is that the cooling and power systems are shared across all three floors. One quadrant's worth of mechanical and electrical infrastructure is installed, and then the three floors are filled up. Digital Realty builds two-story facilities and fills the top floor first. Moving heavy equipment in on the ground floor is less likely to have an impact on an upstairs floor than doing it the other way.

All modern colocation vendors put mechanical and electrical equipment outside the IT halls. This is a major plus, as it allows a separation between the shared infrastructure managed by the provider and the equipment installed by the customer. It's a separation of "church and state," Jon Litvany told us, standing at the door to an IT hall in a Digital facility: "We own everything out here, the client owns everything in there." That division also allows the provider to keep its own infrastructure at a different temperature to the customers' - important when "wet-cell" VRLA batteries are in use.

One common piece of the answer is standardization and commoditization. Big colo providers have standardized their buildings, making the same shells in different locations. They also have pre-configured quantities of mechanical and electrical hardware they buy from vendors. In QTS' case, the modular design is called QMOD: "We standardize on the UPS and the generators, and we have the ability to just buy equipment and move it around to different sites."

"We look at components that are widely available and commoditized," agreed Atlassy. But they are carefully selected to match, so there is no stranded capacity. Theoretically, these standard purchases could be with multiple vendors, but in practice, each colo vendor has preferred suppliers: "We design it once, we have one set of suppliers, and we build that relationship with them to make sure that we have the right lead times, and the right delivery windows."

As well as standard components, providers use pre-fabricated construction where possible, in particular assembling mechanical and electrical components as "pods" and creating the building with "tilt-wall" construction (see boxes). The benefits of pre-fab are obvious: "you shrink your timeline, and you increase your
reliability,” said Litvany. Buildings can be completely built, and filled with customers in less than one year. But providers admit an apparent contradiction with this model. Doing business is about giving the customers what they want; so how do you square that with a limited offering? It turns out, that is what the customer actually does want, they all told DCD. “Five years ago, when everyone envisioned going to hyperscale, they said, ‘Let's just go with a blank slate, let's give them whatever they want!’” said Greason. “Over time, the providers found that hyperscale customers came to them with specific standard designs, and then started to consult with the providers on the details. “I think the hyperscale conversation has ended in two concepts,” said Greason. Originally, some hyperscalers would specify the design and the equipment; more often these days, he said “Vendors say, ‘We don't care what you use - within reason.’ They don't want to disrupt our timing and our schedule, because all they want is a quality product delivered on time, at scale, and with near owner economics. If they start monkeying with all of our components, the
cost goes up, and the delivery time gets later.” Some choices are easier to introduce at a large scale. For instance, many hyperscale customers are going with slab floors for their own facilities. Aisle containment systems can be built on top of a solid concrete base, and the cost of strengthened raised floors is eliminated. When you sell a building at a time, it’s easier to offer both. The next building in the long line at the Digital campus will have slab floor at the customer’s request. You get an idea how much the customers like limited choice, when you hear how hard the operators have to work to introduce changes. For instance, when QTS built to three stories, the hyperscalers had to know that they could deliver their equipment in heavy racks to the site and get it all the way to the IT floor - that lifts and doors were all sized for the hardware. In wholesale colocation, an inventive set of suppliers is dealing with a demanding and exacting group of customers. It’s a small world, in terms of the number of companies involved, but those companies are finding new ways to put together a giant and rapidly expanding network of massively powerful infrastructure.

DCD>San Francisco The Bay Area Data Center and Cloud Event

July 11-12

In the world's technology capital we will discuss the impact of hyperscale giants on the data center industry, as well as feature talks on AI, modernization, Edge and much, much more. bit.ly/DCDSanFrancisco

Tilt-wall construction

Digital Realty, like many other wholesalers, makes its buildings with sections of reinforced concrete, leaving gaps for the mechanical and electrical plant. At its Ashburn campus, Jon Litvany showed DCD around a completed data center, then took us outside. Buildings in various states of construction stretched for a quarter of a mile. The completed data center was behind us, and far away, a fresh site was being prepared. Closer up was a plot where giant external walls were propped up and being fixed in place.

Digital uses "tilt-wall" construction, where building sections are formed horizontally, then "tilted" upright with a crane and braced into position while the other elements like roofs and floors are added. It's like a huge high-tech barn raising. First the slab floor is made, then frame molds are built on top of that, and filled with concrete. Treatments are added, and the concrete cures for two weeks to a month. When it's ready, a frame is put in place, and the walls are raised, braced with temporary props, and joined together.

Next to us was a building with floors and a roof, but square gaping holes still pierced its walls: "Remember those mechanical and electrical pods we assemble in a factory? They go in through those holes," said Litvany.




11 July 2019 // San Francisco Marriott Marquis

Join the discussion #ICC2019

Innovation Day: International Colocation Club

The International Colocation Club brings together colocation providers from around the globe to hear from industry experts and thought leaders addressing key market trends and impactful technology innovations. Attendees from this event gain new insights and collaborate with a talented colocation community - enriching their network and influencing their strategy for future growth. In partnership with DCD, this year's invite-only workshop will cover topics including: customer acquisition strategies, data center management optimization, next-generation energy efficiency, emerging trends in data center architecture and much more.

go.datacenterdynamics.com/SC_ICC.html

Attending the International Colocation Club hosted by Schneider was a great experience. Throughout the meeting I had access to thought leaders including analysts and executives and I made great connections with colocation providers from Europe and Asia. The information and insights provided me new perspectives to enhance and grow our business. Nathan Hazelwood, Director, Strategic Procurement, QTS Data Centers


Book Now!

> Event Highlights Networking with global colocation providers and industry visionaries

451 Research Keynote on the colocation and wholesale data center business

Insights to fuel innovation roadmap from thought leaders including leading providers

> Key Speakers

Kelly Morgan Vice President Research, Services 451 Research

Kevin Brown Senior Vice President Innovation & CTO Schneider Electric

Andy Haun Chief Technology Officer, Microgrids Schneider Electric

This is a powerful event with topics tailored to address the biggest challenges facing providers and designed to fuel their business growth. Mark Bidinger, President of Cloud & Service Provider Segment, Schneider Electric


Building at speed

How do you launch a data center quickly? Sebastian Moss asked CyrusOne

Sebastian Moss Deputy Editor

In the classic 1994 film Speed, Keanu Reeves must keep a bus above 50 mph (80 km/h) or it will explode. With data demands rapidly growing, and hyperscalers' appetites increasingly insatiable, data center construction can feel just as terrifying. You need to move quickly, while trying not to crash.

"If you look at how fast these web-based revenue generating companies are going, they absolutely want the product faster," Tesh Durvasula, European president of data center real estate investment trust CyrusOne, told DCD. "Every day we can save, every hour we can take off a project, matters - if we can get the product into the customer's hands sooner, it means more money for everybody."

It's an obvious point: the sooner companies can use their data centers, the sooner they can benefit from them. But it's not an easy thing to manage - we'd all like to be faster at what we do, but some things just take time.

For CyrusOne, the trick has been to try to move the slower things away from the construction site. Using what it calls a 'Massively Modular design' "enables CyrusOne to commission large data center facilities in approximately 12-16 weeks, which is virtually an industry record," Laramie Dorris, VP of design and construction at the US-based company, told DCD.

Take its Sterling II data center in Northern Virginia, which was built in 180 days. "A normal data center building has tilt-up concrete walls, which are cast on-site at the construction site," Dorris said. "But for Sterling II, we set up a separate off-site facility where we could cast pre-fabricated concrete wall panels. We then brought those panels to the construction site on trucks and used them to set up the data center building. It saved time because we didn't have to stop work at the building site while the concrete walls were being cast."

Elsewhere, the company "set up another off-site facility where we could assemble modular power units. Each unit included an uninterruptible power supply, a backup generator and a utility transformer, all housed in weatherproof containers." CyrusOne brought the modular units to the site "and set them up in 'lineups' outside the facility. Using modular power units speeds up construction, saves money and reduces the building's footprint because we
don’t have to build additional rooms inside the data center to house power equipment.” To shave further days off the schedule, Durvasula said, the company keeps "inventory available and in many cases pre-ships inventory to destination" ahead of starting work. "When we anticipate something happening we'll get stuff to the site beforehand - even if we're in the midst of a negotiation." We visited the Sterling campus last year (see p22 for more from the junket) and were given a tour by Stuart Dyer, the REIT's business development manager. "This is a two story building, with two 60,000 square foot 'pods' on the first floor, and another two on the second floor. That’s 240,000 square feet, 36 megawatts. On day one, we lit up one pod, but I had capacity to light up the other three on 16 week intervals.”



Sterling II under construction (construction photos taken at Day 21, Day 61, Day 114 and Day 177)

In this case CyrusOne used a 60,000 sq ft (5,500 sq m) pod, but it also has a smaller design, half the size. "Typically, what we do is we build one large structure, and then we build out pods within that structure," Dyer said. "Then we use the same generators, the same UPS systems, the same air handlers, the same PDUs across our portfolio. Having that rinse and repeat process, it's elegant."

It is here the company has to be careful. Standardization allows for speed, but it can risk slowing innovation and preventing customization. "So somewhere between 70 and 75 percent of [the design] we're going to keep standard," Durvasula said, with the rest used for innovations learned from previous constructions, acquisitions, or customer requirements.

For example, people's expectation of physical security has changed, Durvasula said: "People are expecting concentric security, border perimeter security, both audio visual biometrics and then component-level security. And then, very strict policies around that - so you've had to adjust your systems, your monitoring, your policies and procedures to accommodate that. Even the size of your lobby, you can't have people milling around there anymore so you want to be able to get them in and out."

For Durvasula, it is about finding the balance between changes and standardization. "Customization is the enemy of scale, you've got to give some amount of customization, but [most of] what you're going to do is going to have to be standard."

Standardization also makes maintenance easier, he noted: "if a technician knows that every time he/she goes into our data centers, they're always going to have a nine foot clearing - not six feet in one market and ten feet in another - that makes it a lot easier to say 'yes I can schedule seven chillers per hour, per day and I can be done with that whole site in two days.'"

Previously the company's chief commercial officer, Durvasula is now heading CyrusOne's push into Europe, building upon its acquisition of Zenium, and the creation of greenfield sites. There, the company is facing the usual local and national regulatory hurdles that define different nations: "Just generally speaking, I would say as you move further south in Europe it gets a little more complex. It takes a little bit longer in Spain than it would in Paris than it does in Germany than it does in London.

"Language barriers and cultural barriers aside, the rules of each country are very different and we're working with all of our advisors and partners to make sure that we understand them as best we can. The rules are different here, but you have to play by them."

One thing that is not different is the demand for ever faster speeds. Mainly targeting Fortune 1,000 companies and the hyperscale giants, CyrusOne expects its European customers to mostly be the same as its US ones. "The customers definitely won't give you years to finish their project because they're anticipating the capacity," Durvasula said. "They typically will give you somewhere between 60 and 120 days. And after 120 days, just based on the sensitivity of that business and how intense they are about negotiating that point…" he trailed off.

After 120 days, perhaps it is time to get off the bus.

Peter Judge contributed to this report.




Changing the landscape

The colocation marketplace is continually changing. Peter Judge rounds up some of the more significant announcements of the last six months

Cogeco sells Peer 1 to Digital Colony

Peter Judge Global Editor

North American cable company Cogeco Communications has sold its data center business Peer 1 to investment firm Digital Colony for $546 million (C$720 million). Peer 1 has several data centers in North America and Europe, as well as a holding of dense metro fiber.

CoreSite opens more in Washington CoreSite Realty Corporation has opened DC2, its second data center in Washington, DC, with more than 24,000 square feet (2,230 sq m) of data center space. The facility is in Franklin Court, 1099 14th Street, close to CoreSite's 22,000 square foot (2,000 sq m) DC1, and only 30km (18 miles) away from the company's campus in Reston, Virginia.

Global Switch to IPO Data center operator Global Switch, which was taken over by Chinese consortium Elegant Jubilee in 2018, is reportedly preparing for an Initial Public Offering (IPO) on the Hong Kong stock exchange, expecting to raise up to $1 billion - according to reports by Bloomberg. The firm was established in the UK in 1998 as a colocation provider focused on the world’s largest cities, and originally owned by British billionaires, the Reuben brothers. It owns 11 data centers across Europe and Asia, totaling 3.7 million square feet, and is building a six-story data center in Singapore.

50MW for Goodyear, Arizona

Sebastian Moss

Stream is planning a 418,000 square foot hyperscale campus, with up to 50MW of power, in Goodyear, Arizona. The 157-acre plot of land is just the first phase of the project, due to open in 2020.

Equinix keeps busy Giant colocation provider Equinix is expanding on every front around the world. In the US it is planning a $138 million data center next to the iconic Infomart building in Dallas, which it bought in 2018 for $800 million.

Element Critical buys two in Chicago

In APAC, it is opening its first South Korean data center, the 18,000 square foot (1,680 sq m) SL1 center in Seoul, and has announced SG4, its fourth site in Singapore. In Australia, it is building an $84 million data center (ME2) in Melbourne, while in Sydney, SY5 will be its eighth and largest facility in the country.

American colocation provider Element Critical (formerly known as Central Colo) has bought two data centers in the Chicago suburb of Wood Dale, which formerly belonged to disaster recovery specialist Sungard Availability Services. They add up to 195,000 square feet.

In Europe, it is promising its ninth facility in London (Slough), a $115 million (£90m) site starting with 1,750 cabinets. Meanwhile, in Helsinki, the $20m (€17m) HE7 will start with 250 cabinets and eventually expand to 1,475, and Sofia, Bulgaria is set to get its second Equinix site.


Virtus opens London campus British data center operator Virtus, a subsidiary of Singapore’s ST Telemedia, has opened a campus near London, with 70,000 square meters of space and 40MW of power capacity available. Stockley Park by Heathrow is closer to the City than Slough, and now has two new facilities, codenamed London5 and London6. The company plans to spend $645 million (£500m) on a network of five data centers in London.


Interxion builds in Frankfurt and Marseille European provider Interxion is building in Frankfurt, Germany and Marseille, France. FRA15 in Frankfurt will be a $200m (€175m) facility, with 19MW in the first of four phases due to open in 2020. In Marseille the $160 million (€140m) MRS3 will arrive in three phases from the end of 2019, eventually offering 17MW.

The global colocation market is expected to grow to $50.18 billion by the end of 2020, says CBRE

Airtel to build ten facilities Nxtra Data, the data center subsidiary of Indian mobile operator Bharti Airtel, is planning ten data centers across India, from which it will offer colocation, managed hosting, and public and private cloud services. It has already selected four locations in which to build facilities: Pune, Chennai, Mumbai and Kolkata. The first data center, in the Maharashtra city of Pune, will go live early in 2019.

Stack merges T5 and Infomart facilities Stack Infrastructure, a new wholesale colocation provider backed by Iconiq Capital, has been put together from Infomart’s facilities and some data centers belonging to T5. Through IPI Data Center Partners, Iconiq has been backing both T5 and Infomart. Stack now has eight facilities in US markets, and is continuing T5’s plan for a data center campus at AllianceTexas in Fort Worth, that could eventually reach more than 400MW of power capacity. Sebastian Moss

DataBank buys LightBound America’s DataBank has bought Indianapolisbased LightBound, giving it two data centers in the Indy Telecom Center building totaling 73,000 square feet of space with 9.5MW of power.

CtrlS to build three hyperscale facilities in India Indian colocation provider CtrlS is planning three hyperscale data center locations: a 150MW campus in Hyderabad, which could span two million square feet, and two smaller projects in Mumbai and Chennai, each covering more than one million square feet, with 100MW and 70MW of potential power capacity respectively. CtrlS runs six data centers in Bengaluru, Chennai, Hyderabad, Mumbai and Noida.

AT&T sells out US telco AT&T has sold its colo division for $1.1 billion to Brookfield Infrastructure. Brookfield will merge them into Evoque Data Center Solutions, its subsidiary formerly known as Dawn Acquisitions, that now has 31 data centers, 18 in the US. AT&T will continue to offer colocation services, selling Evoque’s offering to its customers.

Cologix buys Colo-D

Berkshire buys into Teraco

North American provider Cologix has bought Colo-D’s two wholesale data centers in Montreal, totaling 50MW, adding them to the seven facilities it already has there. Cologix aims to continue with Colo-D’s plans for another 150MW facility in the popular Montreal metro. Cologix is already the leader in wholesale colocation in Montreal, and will invest $500 million in the region in 2019. Also in Montreal, Vantage Data Centers bought 4Degrees Colocation.

Investment firm Berkshire Partners wants to buy a majority stake in Africa's largest provider, Teraco Data Environments, based in Johannesburg. Teraco has more than 30MW of critical power load in five locations. Its data centers are home to 13,500 interconnects, and the company hosts the continent's largest Internet Exchange, NAPAfrica. European equity fund Permira, which bought Teraco in 2014, remains "a significant investor."



The best of both worlds

Peter Judge Global Editor

Hyperscale is exciting, but colocation providers still have to satisfy their traditional customers. David Liggitt talks to Peter Judge

Meeting colocation providers, it's easy to assume they are all focused entirely on their hyperscale customers. In fact, traditional colocation is still a major part of the market, according to David Liggitt.

"There are a couple of things you have to remember," Liggitt, founder of the datacenterHawk information service, told DCD. "In 2017, an Uptime Institute survey of 1,000 companies found that 65 percent still own and operate their own data centers."

So, while cloud services are growing unstoppably, there's still a very significant amount of enterprise demand, from organizations which either need their own space, or suites in colo facilities. "That's going to be moving in the next three to five years," he said. "But we won't see colocation providers abandon the enterprise opportunity."

Facilities that fully focus on the bigger transactions in the hyperscale market have to be very scalable, and offer single halls up to 10MW, which is very different from servicing 250kW data center customers, he told DCD. "There are providers who are giving up the opportunity to attract a small enterprise customer, because they are going after the bigger one," he said.

These companies have a difficult task of competing against the in-house building abilities of the web giants, but it's not impossible, he said. They don't have to match the economics of monolithic hyperscale buildings, if they play to their own strengths.


These are mature players, and building colo facilities is their core specialization, so Liggitt believes they can offer a better product than the in-house abilities of the web giants: “They have been able to achieve the economies of scale; they can look at their pipeline and make decisions. That is harder for cloud providers to do.” The wholesale market can mean chasing quick turnaround and short building timescales, but once delivered, the projects are stable, even though wholesale customers can appear to be fickle: “We haven’t seen them order 10MW and then pull it out two years later.” And below the giants, there’s a more stable set of customers that are satisfied with contracts for 8MW at a time. But hyperscale building is very geographically defined: “Most of the hyperscale growth is in the big hubs. It’s not in rural Iowa; it’s in Chicago or Northern Virginia. In an effort to keep up with demand, providers in Northern Virginia are designing their data centers much larger than they build in other markets.” In rural hubs, vendors can offer between 1MW and 4MW. It’s a market where one expects to see players like Equinix, with generic capacity, good connections and very good coverage of most markets. “Smaller, quieter data center markets can attract both kinds of users,” said Liggitt. Local enterprises, along with national players will take space in a range of sizes. If you take a broad view, it’s clear that the colocation market still has a range of customers to serve. David Liggitt is founder and CEO of datacenterHawk, a technology start up providing subscription based services to the data center community.



PRO

www.dcpro.training

Take the Mission Critical Awareness certificate online We’ve upgraded the industry’s most flexible series of online training. Join 1000s of your peers and enroll today! info@dc-professional.com

1. Mission Critical Engineering 2. Reliability & Resiliency 3. Electrical Systems Maintenance 4. Fundamentals of Power Quality

Can you put a price on safety? We couldn’t. Take our 1 hour Health and Safety course online for free.

Take Free Course

www.dcpro.training/dc-health-safety

info@dc-professional.com

www.dcpro.training


AI + Automation

How machine learning could change science

Artificial intelligence tools are revolutionizing scientific research and changing the needs of high performance computing, Sebastian Moss reports

Scientific progress is inherently unpredictable, tied to sudden bursts of inspiration, unlikely collaborations, and random accidents. But as our tools have improved, the ability to create, invent and innovate has improved with them.

The birth of the computing age gave scientists access to the greatest tool yet, with their most powerful variant, supercomputers, helping unlock myriad mysteries and changing the face of the modern world. "High performance computers are paramount towards us driving the discovery of science," Paul Dabbar, Under Secretary for Science at the US Department of Energy, told DCD.

Key to recent discovery and research efforts has been the ability to run vast simulations, complex models of aspects of the real world, to test theories and experiment upon them. "For the last several decades, the work that we've been doing inside Lawrence Livermore National Lab has been exploiting the relationship between simulation and experiments to build what we call predictive codes," Frederick H. Streitz, LLNL's chief computational scientist and director of the high performance computing innovation center, said.

"We think we know how to do research in the physics space, that is, we write down the equations, solve them, and work closely with experimentalists to validate the data that goes into the equations, and eventually build a framework that allows us to run a simulation that allows us to believe the result. That's actually fundamental to us - to run a simulation that we believe."

Now, however, a new tool has reached maturity, one that may yet further broaden the horizons of scientific discovery. "So on top of experiments and simulation, we're adding a third component to the way we look at our life, and that is with machine learning and data analytics," Streitz told DCD. "Why is that important? It's because if you look at what we do with experiments, it is to query nature to ask it what it is doing.

Sebastian Moss Deputy Editor



With a simulation, we query our understanding, we query a model of nature, and ask what that's doing. And those don't often agree, so we have to go back and forth."

But with machine learning, Streitz explained, it "is actually a completely different way of looking at your reality. It's neither querying nature, nor is it querying your model, it's actually just querying the data which could have come from experiments or simulation - it's independent of the other two. It's really an independent view into reality."

That, he added, "is actually a profound impact on how you approach science - it approaches predictability in places where you didn't have exact predictability."

The desire for researchers to be able to use these tools, Streitz told DCD, is "driving changes in computing architecture," while equally changes to these architectures are "driving this work. I would say it's a little bit of both."

It's a view shared by many in the high performance computing community, including the CEO of GPU maker Nvidia. "The HPC industry is fundamentally changing," Jensen Huang said. "It started out in scientific computing, and its purpose in life was to simulate from first principle laws of physics - Maxwell's equations, Schrödinger's equations, Einstein's equations, Newton's equations - to derive knowledge, to predict outcomes.

"The future is going to continue to do that," he said. "But we have a new tool, machine learning. And machine learning has two different ways of approaching this, one of them requires domain experts to engineer the features, another one uses convolutional neural network layers at its lowest level, inferring learning on what the critical features are, by itself."

A new kind of trial and error

In traditional human-led experimentation, "you do an experiment, and then for the next thing you change exactly one thing," LLNL's Streitz told DCD, allowing scientists to work out what is different. But "it turns out when you're doing machine learning, it doesn't do that - it changes like a dozen things at a time, because it doesn't care. And now it unfolds all of that afterwards, in a way, that's very difficult for a human to unfold. "And so we can be much more efficient than a human can."

Already, the top supercomputers are
designed with this in mind - the current reigning US champions, Summit and Sierra, are packed with Nvidia Volta GPUs to handle intense machine learning workloads.

"The original Kepler GPU architecture [introduced in 2012] was designed for HPC and not AI - that was what was originally used to do the first AI work," Ian Buck, Nvidia's VP of accelerated computing and head of data centers, told DCD. "We have had to innovate on the underlying architecture of the hardware platforms and software to improve both HPC and AI," he said.

That has benefited the wider computing community, as have the other innovations in the pre-exascale supercomputers.

"The good news is, these instruments are not one-off, bespoke things, they're things that can be replicated or purchased or built at smaller scales and still be extremely productive to research science institutions, and the industry."

Already, scientists are taking advantage of the convergence of AI and HPC, with Streitz among them. His team, in collaboration with the National Institutes of Health, is trying to tackle one of the cruelest, most intractable problems faced by our species - cancer. There are several projects underway to cure, understand, or otherwise ameliorate the symptoms of different cancers - three of which in the DOE specifically use machine learning, as well as a broader machine learning cancer research program known as CANDLE (CANcer Distributed Learning Environment).

"In this case, the DOE and [NIH's] National Cancer Institute are looking at the behavior of Ras proteins on a lipid membrane - the Ras oncogenic gene is responsible for almost half of colorectal cancer, a third of lung cancers."

Found on your cell membranes, the Ras protein is what "begins a signalling cascade that eventually tells some cell in your body to divide," Streitz said. "So when you're going to grow a new skin cell, or hair is going to grow, this protein takes a signal and says,
'Okay, go ahead and grow another cell.'"

In normal life, that activity is triggered, and the signal is sent just once. But when there's a genetic mutation, the signal gets stuck. "And now it says, grow, grow, grow, grow, again, just keep growing. And these are the very, very fast growing cancers like pancreatic cancer, for which there's currently no cure, but it's fundamentally a failure in your growth mechanism."

This is something scientists have known for nearly 30 years. "However, despite an enormous amount of time and effort and money that has been spent to try to develop a therapeutic cure for that, there's no known way to stop this," Streitz said. The mutation is a subtle one, with all existing ways of stopping it also stopping other proteins from doing their necessary functions. "The good news is that you cure the cancer, the bad news, you actually kill the patient."

Lab experiments have yielded some insights, but the process is limited. Simulation has also proved useful, but - even with the vast power of Summit, Sierra and the systems to come - we simply do not have the computing power necessary to simulate everything at the molecular scale.

"Well that's what we're going to be using machine learning for: To train a reduced order model, and then jump to a finer scale simulation when required. But we want to do that automatically, because we want to do this thousands and thousands and thousands of times."

This was the first full scale workload run on Sierra when it was unveiled last year - running on the whole machine, across 8,000 IBM Power cores and more than 17,000 Volta GPUs.

The team simulates a large area at a lower scale, and then uses machine learning to hunt for anomalies or interesting developments, splitting up the area simulated into patches. “I can take every patch in the simulation, there could be a million of them. And I could literally put
them in rank order from most interesting to least interesting."

Then they take the top hundred or so most interesting patches, and generate a fine scale simulation. Then they do it again and again - on Sierra, they ran 14,000 simulations simultaneously, gathering statistics of what's happening at a finer scale. Already, this has led to discoveries that "would not have been obvious except for doing simulations at the scale that we were able to do," Streitz said, adding that he expects to learn much more.

Similar approaches are being used elsewhere, Intel's senior director of software ecosystem development, Joe Curley, said: "The largest computers in the world today can only run climate models to about a 400km view. But what you really want to know is what happens as you start getting in closer, what does the world look like as you start to zoom in on it?

"Today, we can't build a computer big enough to do that, at that level," he said. But again, researchers "can take the data that comes from the simulation and, in real time, we can then go back and try to do machine learning on that data and zoom in and get an actual view of what it looks like at 25km. So we have a hybrid model that combines numerical simulation methods with deep learning to get a little bit of greater insight out of the same type of machine."

This has helped guide the design of the supercomputers of tomorrow including Aurora, America's first exascale supercomputer, set for 2021. "The three things that we are very, very excited about is that Aurora will accelerate the convergence of traditional HPC, data analytics and AI," Rajeeb Hazra, corporate VP and GM of the Enterprise and Government Group at Intel, the lead contractor on the $500m system, said.

"We think of simulation data and machine learning as the targets for such a system," Rick Stevens, associate laboratory director for computing, environment and life sciences at Argonne National Laboratory, told DCD.
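To make the patch-ranking loop Streitz describes above more concrete, here is an illustrative sketch only - not LLNL or Sierra code, and every function name, the variance-based "interest" score and the data are invented stand-ins - of a coarse simulation whose patches are scored by a learned model and where only the top-ranked patches are promoted to expensive fine-scale runs:

```python
# Illustrative sketch only - not LLNL code. All names and scores are hypothetical stand-ins.
import numpy as np

def coarse_simulation(grid_size=256, patch=16, rng=None):
    """Stand-in for a cheap, reduced-order simulation over a large domain."""
    rng = rng or np.random.default_rng(0)
    field = rng.normal(size=(grid_size, grid_size))
    # Split the domain into non-overlapping patches.
    patches = field.reshape(grid_size // patch, patch, grid_size // patch, patch)
    return patches.swapaxes(1, 2).reshape(-1, patch, patch)

def interest_score(patches):
    """Stand-in for a trained ML model that ranks patches by 'interestingness'.
    Local variance is used here purely as a placeholder score."""
    return patches.var(axis=(1, 2))

def fine_simulation(patch):
    """Stand-in for an expensive, fine-scale (e.g. molecular) simulation."""
    return {"mean": float(patch.mean()), "max": float(patch.max())}

def run_iteration(top_k=100):
    patches = coarse_simulation()
    scores = interest_score(patches)
    # Rank every patch, then promote only the top_k to fine-scale runs.
    ranked = np.argsort(scores)[::-1]
    return [fine_simulation(patches[i]) for i in ranked[:top_k]]

if __name__ == "__main__":
    # In the workflow described in the article, this loop would run thousands
    # of times, with fine-scale results fed back to improve the scoring model.
    results = run_iteration()
    print(f"promoted {len(results)} patches to fine-scale simulation")
```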



"This platform is designed to tackle the largest AI training and inference problems that we know about. And as part of the Exascale Computing Project, there's a new effort around exascale machine learning and that activity is feeding into the requirements for Aurora."

That effort is ExaLearn, led by Francis J. Alexander, deputy director of the Computational Science Initiative at Brookhaven National Laboratory. "We're looking at both machine learning algorithms that themselves require exascale resources, and/or where the generation of the data needed to train the learning algorithm is exascale," Alexander told DCD. In addition to Brookhaven, the team brings together experts from Argonne, LLNL, Lawrence Berkeley, Los Alamos, Oak Ridge, Pacific Northwest and Sandia in a formidable co-design partnership.

LLNL's informatics group leader, and project lead for the Livermore Big Artificial Neural Network (LBANN) open-source deep learning toolkit, Brian Van Essen, added: "We're focusing on a class of machine learning problems that are relevant to the Department of Energy's needs… we have a number of particular types of machine learning methods that we're developing that I think are not being focused on in industry.

"Using machine learning, for example, for the development of surrogate models to simplify computation, using machine learning to develop controllers for experiments very relevant to the Department of Energy."

Those experiments include hugely ambitious research efforts into manufacturing, healthcare and energy research. Some of the most data intensive tests are held at the National Ignition Facility, a large laser-based inertial confinement fusion research device at LLNL, that uses lasers to heat and compress a small amount of hydrogen fuel with the goal of inducing nuclear fusion reactions for nuclear weapons research.

"So it's not like - and I'm not saying it's not a challenging problem - but it's not like recommending the next movie you should see, some of these things have very serious consequences," Alexander said. "So if you're wrong, that's an issue."

Van Essen concurred, adding that the machine learning demands of their systems also require far more computing power: "If you're a Google or an Amazon or Netflix you can train good models that you then use for inference, billions of times. Facebook doesn't have to develop a new model for every user to classify the images that they're uploading - they use a well-trained model and they deploy it."

Despite the enormous amount of time and money Silicon Valley giants pump into AI, and their reputation for embracing the technology, they mainly exist in an inference dominated environment - simply using models that have already been trained.

"We're continuously developing new models," Van Essen said. "We're primarily in a training dominated regime for machine learning… we are typically developing these models in a world where we have a massive amount of data, but a paucity of labels, and an inability to label the datasets at scale because it typically requires a domain expert to be able to interpret what you're looking at."

DCD>Debates Are we ready for the high-density AI ready future?

Watch On Demand

Dr Suvojit Ghosh from McMaster University's Computing Infrastructure Research Centre (CIRC) and Chris Orlando, CEO of ScaleMatrix, discuss the growth of high-density computing workloads, and how to power and cool your data center as racks get denser. bit.ly/AreYouAIReady

Working closely with experimenters and subject experts, ExaLearn is "looking at combinations of unsupervised and semi-supervised and self-supervised learning techniques - we're pushing really hard on generative models as well," Van Essen said.

Take inertial confinement fusion research: "We have a small handful of tens to maybe a hundred experiments. And you want to couple the learning of these models across this whole range of different fidelity
models using things like transfer learning. Those are techniques that we're developing in the labs and applying to new problems through ExaLearn. It's really the goal here."

From this feature, and the many other 'AI is the future' stories in the press, it may be tempting to believe that the technology will eventually solve everything and anything. "Without an understanding of what these algorithms actually do, it's very easy to believe that it's magic. It's easy to think you can get away with just letting the data do everything for you," Alexander said. "My caution has always been that for really complex systems, that's probably not the case."

"There's a lot of good engineering work and good scientific exploration that has to go into taking the output of a neural network training algorithm and actually digging through to see what it is telling you and what can you interpret from that," Van Essen agreed.

Indeed, interpretability and reproducibility remains a concern for machine learning in science, and an area of active research for ExaLearn. One of the approaches the group is studying is to intentionally not hard-code first principles into the system and have it "learn the physics without having to teach it explicitly," Van Essen said. "Creating generalized learning approaches that, when you test them after the fact, have obeyed the constraints that you already know, is an open problem that we're exploring."

This gets harder when you consider experiments that are at the cutting edge
of what is known, where the reference points one can test the system's findings against become ever fewer. "If you develop some sort of machine learning-informed surrogate model, how applicable can that be when you get to the edges of the space that you know about?" Los Alamos machine learning research scientist Jamal Mohd-Yusof asked. "Without interpretability that becomes very dangerous."

Even with the power of exascale systems, and the advantages of machine learning, we're also pushing up against the edges of what these systems are capable of. "We can't keep all the data we can generate during exascale simulation necessarily," Mohd-Yusof said. "So this also may require you to put the machine learning in the loop, live as it were, in the simulation - but you may not have enough data saved.

"So it requires you to design computational experiments in such a way that you can extract the data on the fly."

That also begs a deeper question, Van Essen said: "If you can't save all the data and you're training these models, that does actually imply that sometimes these models become the data analytic product output from your simulation." Instead of you being able to learn everything from the output of the model, your insights are found buried in the model itself.
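The multi-fidelity coupling Van Essen describes a few paragraphs earlier - pre-train on plentiful low-fidelity simulation output, then adapt the model with a handful of expensive experiments - is, in spirit, ordinary transfer learning. The following is a minimal, hypothetical sketch of that idea only, not LBANN or ExaLearn code; the network, data and layer sizes are all invented:

```python
# Illustrative transfer-learning sketch only - not LBANN/ExaLearn code.
import torch
from torch import nn, optim

torch.manual_seed(0)

def make_model():
    return nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 1))

def train(model, params, x, y, steps, lr=1e-3):
    opt = optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

# 1. Pre-train on abundant low-fidelity simulation data (synthetic here).
x_sim = torch.randn(10_000, 8)
y_sim = x_sim.sum(dim=1, keepdim=True)          # cheap stand-in "physics"
model = make_model()
train(model, model.parameters(), x_sim, y_sim, steps=500)

# 2. Fine-tune only the final layer on a handful of high-fidelity experiments.
x_exp = torch.randn(64, 8)
y_exp = x_exp.sum(dim=1, keepdim=True) + 0.1 * torch.randn(64, 1)
for p in model[:-1].parameters():                # freeze the shared features
    p.requires_grad_(False)
final_loss = train(model, model[-1].parameters(), x_exp, y_exp, steps=200)
print(f"fine-tuned loss on experimental data: {final_loss:.4f}")
```

The design choice mirrored here is the one the article raises: the expensive, labeled "experiments" are far too scarce to train a model from scratch, so the cheap simulations carry most of the learning and the rare high-fidelity data only adjusts it.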


If you have “two trained models from two different experimental campaigns or scientific simulation campaigns, how do you merge what they've learned if the model is your summary of it?” These questions, and so many more, remain unanswered. As with all discovery, it is hard to know when we will have answers to some of these problems - but, for Streitz, the future is already clear. “This notion of this workflow - using predictive simulation at multiple scales, and using machine learning to actually guide the decisions you're making with your simulations, and then going back and forth - this whole workflow, we believe that's the future of computing,” he said.



Advertorial: Node Pole

A new global standard for green data centers?

Node Pole is attempting to introduce a recognizable label for "fossil free" stacks. Peter Gothard investigates the vendor's research and policy underpinning this bold campaign

Despite a political backlash against climate change, most obviously in President Donald Trump's wholesale rejection of the Paris Agreement in 2017, there is absolutely undeniable evidence that the world is warming up.

Data centers, like other industries, have been labelled as a contributor to global warming. There they sit, using power generated by fossil fuels in their racks, burning even more energy in their cooling mechanisms, and releasing dissipated hot air into the atmosphere.

Meanwhile, there is a growing demand from the public – enterprise and consumers alike – to know more about the connection between data that sits in the cloud and where that data 'lives' in the physical world. It's a debate about ethics, politics and governance, but one largely driven by an environmental awareness that is growing in spite of climate change denial in certain political quarters.

"Climate change is an enormous challenge for the data industry and both companies and consumers are now starting
to realize the carbon footprint of digital use,” says Christoffer Svanberg, chief sustainability officer at Node Pole, a data center agency for actors looking towards Sweden. Node Pole is introducing a standardized “Fossil Free Data Label” for the industry, in order to quickly and easily allow enterprise users across the world to work out whether their data is truly “green” in terms of being curated in environments that minimize carbon emissions and use 100 percent renewable energy. Node Pole envisions the standard as a “tool” to directly contribute to climate action, and set a transparent standard for how to run a data center sustainably. “By using the fossil free data label, you stand out from competition and make it clear to customers and consumers that you handle data sustainably” says Svanberg. The plan is to offer various yardsticks in the area of renewable energy, energy efficiency and in measurement of lowcarbon emissions. Essentially, Node Pole wants to see qualifying applicants meet a mark of 100 percent renewable energy for a data center,
If expressed as a country, the Internet itself would be the sixth largest consumer of electricity on the planet power usage efficiency (PUE, a ratio which improves as it counts down to 1) of less than or equal to 1.2, and carbon usage effectiveness (CUE) of 0.19kg CO2 emissions per kWh. The figures are based on the data center performance index (DCPI) from iMasons. These are measured in a variety of ways, ranging from green energy certificates for the renewable energy KPI, to self-collected data for CO2 emissions, or location-based data taken from the International Energy Agency. Node Pole is serious in its approach, namechecking Google which is thought to serve more than 3.5 billion searches per day. A 2016 environmental report by Google reveals the company caused greenhouse gas emissions of 2.9 million metric tons of CO2 but the company offset all of it to achieve its zero emissions target. If expressed as a country, the Internet itself would be the sixth largest consumer of electricity on the planet, with only the US, India, China, Japan, and Russia ahead of it. The Paris Agreement’s goal to reduce the risk of dangerous climate change can, it’s believed, be achieved if the presumption is taken that greenhouse gas emissions peak by 2020, but halve by 2030, and keep halving every decade until 2050. That essentially leaves 30 years to make a positive difference. “The consequences of missing this goal are potentially catastrophic for humanity,” Johan Rockström, director of Potsdam Institute for Climate Impact Research, has warned.



“Yet all solutions exist to begin halving emissions immediately. Now is the moment to move from incremental to exponential action.” Node Pole’s recently published Green Data Survey doubles down on its claims, and legitimizes its concerns, collecting data from 4,016 people across Berlin, Stockholm, London, Los Angeles and Shanghai for a realistic global picture of sentiment. It reveals that only 11 percent of respondents are unconcerned about climate change, while 58 percent are extremely concerned. Meanwhile, 73 percent of people consider climate change a “man-made environmental disaster.” The public say politicians bear 44 percent of the responsibility to make a change, and consumers another 27 percent, but it’s businesses - i.e. those who control and use data centers - that account for 49 percent of the responsibility for improving the situation, the survey says. Just over half (55 percent) of those surveyed said they had begun to lower their own carbon footprint in the last 12 months, and 66 percent said they now see it as important that the brands they choose to associate with make a positive impact on society.

Any company could take this sentiment analysis, stick a slogan on it, and begin tempting customers in with marketing rhetoric alone. But Node Pole also collected data to back up the standards it wants to set in the Fossil Free Data Label. The report reveals that in 2015, 416TWh of energy was used by data centers globally - more than the UK’s entire electricity consumption that year. With fossil fuels still dominating that energy mix, data centers are projected to generate 1.9 gigatons of CO2 emissions in 2025. Given that data centers are already estimated to generate up to two percent of global emissions, it’s a worrying upward trend. Customers are even prepared to pay more to ensure data centers are managed in a more environmentally friendly way: a massive 44 percent of those surveyed said they would pay up to 45 percent more for data center services that were greener.

There have, of course, already been national standards for the carbon footprints of data centers - not least Singapore’s Green Data Center Standard, which was first published in 2011 and continues to see uptake. But Node Pole has wide leverage as a data center vendor with a stake in the international game. The Fossil Free Data Label represents a realistic set of standards that any data center supplier could endeavor to adhere to, and that the growing global base of customers with a genuine interest in the future of the planet should look out for. “The amount of data centers in the world will grow tremendously in the coming years, and by ensuring that they are fossil free, the tech industry can avoid making the same mistakes as other revolutionizing industries and instead build a new sustainable sector that can change the world for the better,” says Svanberg.

Node Pole
Magnus Wikman | Chief Commercial Officer
+46 (0)70 389 60 08
magnus.wikman@nodepole.com


Energy Efficiency | Report

Smart energy
A range of technologies promise to revolutionize data center efficiency, but they face a struggle to reach the mainstream. The Uptime Institute gives us a guide

Data centers could become more efficient if they made better use of emerging smart energy technologies, but complexity and cultural issues are holding them back, according to an Uptime Institute report. Smart power provides better management, distribution and control. It has great potential, but it needs significant architectural change, so it will be an evolutionary, rather than a revolutionary, change, according to the report’s authors, Andy Lawrence and Rhonda Ascierto. “The data center industry will strongly embrace smart energy, but it will take time,” says the report, Smart Energy for the Data Center.

The innovations will show up first at the extremes of the market: in small, distributed and highly automated data centers (the edge) and, at the other end, in large hyperscale facilities, where the rewards are greatest. In the consumer world, smart energy means automated management and smart meters, building up to a “smart grid.” In the data center, “smart energy” is not so well understood. The ultimate goals are to lower energy consumption and reduce over-provisioning, while maintaining redundancy and efficiency. These goals will be achieved through automation, through dynamic control of energy use and storage and - when economically feasible - selling power back to the utility. Smart energy may be part of the trend toward automated data centers. It could use artificial intelligence (AI), but may be simpler than that: it does not necessarily need large data sets, exotic algorithms, or “self-learning.”

Energy monitoring systems measure, report, and analyze; smart energy systems switch power and move loads. They go beyond software-controlled switches, using intelligence and multiple data sources. Today, data centers are built to allow for expansion, and over-provisioned to support the highest demand peaks. It would be more cost-effective to build to the expected average power density, and use tactical methods to deal with peaks in demand, but this has traditionally been viewed as limited and risky. A lot of capital is locked into under-utilized power infrastructure. This ensures redundancy and reliability, but also strands large amounts of power. Smart energy aims to run these facilities nearer their capacity from the outset. “In our 2018 Uptime Institute research project, Disruptive Technologies in the Data Center, two separate panels of experts and end user/operators gave both SDP and microgrids low scores for their likely adoption and disruptive potential,” say the authors.

The technologies promise a big return, but some of the benefits may be soft or difficult to quantify, and others may depend on the deployment of another technology, such as Li-ion batteries. Also, data center operators have always placed availability above every other concern. “Over time, the acceptability of consistently operating data centers at far below their peak capacity will change, and the case for smart energy technologies as tools to enable higher utilization and re-use of capacity will be more compelling,” say the authors.

Peter Judge edited this article. bit.ly/SmartEnergyReport
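To illustrate the kind of "tactical method" the report has in mind, the toy control loop below sketches a peak-shaving policy: a battery covers demand above a contracted ceiling and recharges when there is headroom. It is a simplified illustration, not anything taken from the Uptime report; real smart energy systems layer on forecasting, price signals and redundancy rules.

```python
# Toy peak-shaving controller: discharge a battery when site demand exceeds a
# contracted ceiling, recharge when there is spare headroom. Purely illustrative.

CONTRACTED_KW = 800.0   # power the utility feed is sized and contracted for
BATTERY_KWH = 500.0     # usable storage
CHARGE_RATE_KW = 200.0  # maximum recharge rate

def step(demand_kw: float, soc_kwh: float, hours: float = 0.25):
    """Return (grid draw in kW, new state of charge in kWh) for one interval."""
    if demand_kw > CONTRACTED_KW:
        # Peak: cover the excess from the battery, as far as the charge allows.
        discharge_kw = min(demand_kw - CONTRACTED_KW, soc_kwh / hours)
        return demand_kw - discharge_kw, soc_kwh - discharge_kw * hours
    # Off-peak: use spare headroom to recharge towards full.
    headroom_kw = CONTRACTED_KW - demand_kw
    charge_kw = min(CHARGE_RATE_KW, headroom_kw, (BATTERY_KWH - soc_kwh) / hours)
    return demand_kw + charge_kw, soc_kwh + charge_kw * hours

soc = BATTERY_KWH
for demand in [600, 750, 900, 950, 820, 700, 650]:  # kW, 15-minute readings
    grid, soc = step(demand, soc)
    print(f"demand {demand:>4} kW -> grid {grid:6.1f} kW, battery {soc:6.1f} kWh")
```

In this sketch the grid draw never exceeds the contracted 800kW, even as demand peaks at 950kW; the locked-in capacity the report describes can therefore be sized closer to the average load.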

Summary
• Smart energy is an umbrella term for a range of technologies
• The sector is still in its infancy, with an uncertain adoption rate
• Smart energy technologies could allow more flexible redundancy
• In the next decade, data centers will move to smart energy using transfer switches, intelligent PDUs, supercapacitors, in-row batteries and more
• This could help meet an industry-wide goal to reduce generator use

42 DCD Magazine • datacenterdynamics.com

Andy Lawrence Uptime Institute

Rhonda Ascierto Uptime Institute

Terminology

Energy management and DCIM: Data center infrastructure management (DCIM) systems collect and manage energy data from power monitoring tools, building automation systems and more.

Intelligent IT power management: Reduces power consumption by dynamically moving workloads, powering down equipment or putting it in lower power states, and capping or reducing voltages and frequencies.

Microgrids: A localized group of electricity sources and loads that can disconnect from the rest of the grid and function independently is a microgrid. All data centers that can operate without the grid are microgrids, but few can actively switch between power sources.

Software driven, software-defined: Software-defined power (SDP) uses the terminology of the software-defined data center (SDDC), and treats power, like processing, storage, and networking, as a virtualized and manageable resource.

Shared reserve/adaptable redundant power: Transfer switches and an intelligent control system could pool and share the power of a number of large UPSs, to build a software-controlled reserve.

Demand response and the grid: Data center operators can participate in demand response or load reduction/shedding programs, trading power with the grid to increase reliability, but the risk is seen as high.

Li-ion batteries: Lithium-ion batteries, with higher energy density and greater manageability, can underpin software-defined power and demand response, and enable many of the techniques above.


17th Annual

> New York

Global Content Partner

9-10 April 2019 // Times Square Marriott Marquis

Show Preview

See Inside
• How our major conference content themes have developed for 2019
• More than 100 industry experts share their knowledge
• What does registration data tell us about the delegates?
• Full New York conference program
• 2019 Event Calendar

1,500+ Delegates
60+ Sponsors & Exhibitors
28% working on AI, machine and deep learning projects
100+ Speakers
9.5hrs dedicated networking time
32hrs expert-led thought leadership
60% working on modernization projects

Headline Sponsor

Lead Sponsors


www.datacenterdynamics.com

Follow us: @dcdevents #dcdnyc Issue 32 • April 2019 43


New York

> NYC Ecosystem

> Insights from registration data

Registration data gathered from more than 1,000 pre-qualified buyers of data center products and services in advance of the DCD>New York conference points to 2019 being an important year for infrastructure modernization.

As a new entrant to the market myself, I’ve been bombarded with enough ephemeral edge definitions to fill this magazine. It’s been challenging to decipher which subjects deserve a mainstage debate, which require smaller group discussions, and which could be relegated to a footnote in a press release. But I have been overwhelmed with the support from the industry and the DCD content team, who have all helped me in my quest to build a comprehensive conference program which caters to all corners of our complex ecosystem. Together we have crafted a jam-packed agenda featuring over 100 top-level speakers who will be sharing their insight across keynote presentations, panel debates, roundtables, fireside chats and other engaging formats.
Kisandka Moses, DCD Conference Producer

Registration data highlights:
• Overall buyer mix: 51% Enterprise, 28.79% AECs/Advisory, 20.21% Service Providers
• Enterprise mix by vertical spans Financial Services & Insurance, Healthcare & Pharma, IT Services & Internet, Government, Education & Research (8.35%) and Other
• Half of all Financial Services organizations are working on AI, ML and deep learning projects
• 61% of Enterprise operators have upgrade projects in the pipeline

> Speakers
Sagi Brody, Webair; Sami Badri, Credit Suisse; Suvojit Ghosh, McMaster University; Frank McCann, Verizon; David Snead, Internet Infrastructure Coalition; Mike Bahr, DRW Holdings; Jan Wiersma, Avaya; Braco Pobric, NEX Group; Jim Fletcher, Momenta Partners; Bill McHenry, JP Morgan Chase

Minjie Chen, Princeton University; Matt Gleason, Coresite; Alan Howard, IHS Markit; Jeffrey Goetz, Fluor; Charles Hoop, Aon; Chris Moon, ING; Laura Fazio, Santander Bank N.A.; Shweta Gupta, State Street; Rahul Arya, KPMG; Christopher Perez, Puget Sound Energy; Eddie Schutter, Switch

44 DCD Magazine • datacenterdynamics.com

Jeff Ferry, Goldman Sachs; Jerry Hoffman, LiiON; Ross Litkenhous, Altus Group; Tria Case, City of New York; Phil Harris, Intel; Sreoshy Banerjea, New York Economic Development Corporation; Kevin Brown, Schneider Electric; Jim Surlow, Alterra Mountain Company; Eyal Itskovich, LinkedIn; Eric White, Clinton Foundation; John Clinger, ICF; Russell Carpenter, Verizon; Gary Fernandez, Verizon; Tom Brown, DataGryd; Brian Cox, Stack Infrastructure; Allen Tucker, JLL; Gary Bernstein, Leviton; Robert Meyers, CBRE; Al Ortega, Villanova University; Dave Meadows, Stulz USA; Svein Atle Hagaseth, Green Mountain

Atle Haga, Statkraft; Rebecca Scheel, Invest in Norway; Kanad Ghose, SUNY Binghamton; Scott Black, Leviton; Paul Thornton, 1&1 Ionos; Eric Fussenegger, Wells Fargo; Chris Ludeman, Uptime Institute; Mark Harris, Uptime Institute; Kevin Heslin, Uptime Institute; Bill Mazzetti, Rosendin Electric


> Top 10 most viewed speaker profiles online

With over 100 senior experts to hear from at DCD>New York, below is a sneak peek of our most popular and most searched-for speaker profiles online as we head into the 17th edition.

1. Buddy Rizer, Loudoun County
2. Rajiv Rao, New York State Office
3. Kevin Brown, Schneider Electric
4. Laura Fazio, Santander
5. Sagi Brody, Webair
6. Suvojit Ghosh, McMaster University
7. Brian Cox, Stack
8. Rhonda Ascierto, Uptime Institute
9. Mark Collins, ExCool
10. Matthew Simpson, ExCool

Rami Radi, Intel; David Hybels, Vertiv; Ameya Soparkar, Rittal; John Sasser, Sabey Data Centers; Peter Gross, Bloom Energy; Jim Richardson, Avant Energy; Susanna Kass, Stanford University; Dave Sterlace, ABB; Simon Allen, Infrastructure Masons

Theme | Planning for Hybrid IT

With data center growth continuing, business models are evolving and stakeholders are converging. Site selection remains a key strategic challenge determining revenues, total cost of ownership and lifecycle. Evaluation factors are complex, with environmental risk, taxes and regulations, transportation, construction and permitting costs, and speed to market all playing a role. With huge, 24/7 power demands, however, power availability, regulation and tariffs are also key.

As high-density workloads become more common, 90 percent of organizations will have hybrid infrastructure by 2020. Data center managers and IT directors converge, as workload optimization between legacy applications, managed services and the cloud must be aligned with budget, reliability and availability needs.

DCD>Boardroom on April 9 explores these themes with Loudoun County’s Buddy Rizer, uncovering the Northern Virginia boom, with 30 million sq ft of data center space and over 1,000MW expected in the near future. Senior TMT specialists such as Chris Moon (ING), Goldman Sachs’ Jeff Ferry, Laura Fazio (Santander) and Sami Badri (Credit Suisse) demystify the trends driving investment into the technology market and giving rise to the data center as an asset class.

Chris Street of STT Global Data Centers will be sharing best practice on selecting international partners, and Innovation Norway will be hosting a lunch for those interested in the Nordics. Steve Conner of Vantage Data Centers will highlight key considerations for an enterprise utilizing high-density applications when project-managing new colocation implementations. Eric White, CTO of the Clinton Foundation, will provide practical guidance on the realities of ‘lift-and-shift’ cloud migration, with Webair’s Sagi Brody leading a roundtable on the optimal route to hybridization.

Producer's Highlight:

The Wild West of Data Center Taxation

Ross Litkenhous Altus Group

Braco Pobric CME

Theme | The Business of the Data Center

The pace and variety of investment and financing deals, complex tenant-landlord leasing structures and competition between regions for site selection, has left taxing authorities struggling to check the right boxes in the tax code as it applies to data centers. Join Ross Litkenhous, Global Head of Business Development, Altus Group on April 9, to understand the evolving nature of tax regulation influencing data center valuation, hardware procurement and job creation.

Jeff Omelchuck, Infrastructure Masons; Zahl Limbuwala, Romonet; Julie Albright, University of Southern California; Stephen Worn, DatacenterDynamics; Bruce Taylor, DatacenterDynamics; George Rockett, DatacenterDynamics; Ryan Murphy, MTU Onsite; Oncu Er, Avant Energy

David Niles, Avant Energy; Herb Villa, Rittal; Tom Sandlin, Avison Young; Peder Nærbø, Bulk Infrastructure AS; Peter Judge, DCD; Steve Conner, Vantage Data Centers; Jake Ring, GIGA Data Centers; Martin Olsen, Vertiv; David Liggitt, datacenterHawk; Nora Rosenberg Grobæk, Innovation Norway

Producer's Highlight:

Billion-Dollar Infrastructure: Navigating The Path To The Intelligent Placement Of Workloads Avaya’s Chief Cloud Officer, Jan Wiersma, whose portfolio includes hundreds of internal proprietary data centers, will unpick the complexity involved in balancing these with multiple cloud providers in a cloud-rich ecosystem, during his morning keynote presentation on April 10

Arelis Soto, Corning; Anthony Pinkey, Mitsubishi Electric; Aaron Schott, Mitsubishi Electric; Joerg Desler, Stulz USA; Kevin W. Sanders, EYP Mission Critical Facilities; Jesse Sycuro, BGIS; Jeff Ivey, CPG; Russell Senesac, Schneider Electric; Scott Tucker, RA, Page; Freddy Padilla, PE, Page

Steve Geffin, Vertiv; David King, Future Facilities; Robert Maroney, Piller Power Systems; Erica Glander, LEED AP, Armstrong World Industries

For the latest information on all DCD>New York speakers, visit dcd.events/ conferences/new-york.

Issue 32 • April 2019 45


New York

Day 1 | Tuesday 9 April

Program tracks: Digital Transformation & the New Data Center Edge | Modernization & Lifecycle Management | Planning for Hybrid IT | Energy Smart Infrastructure | Examining The Business of the Data Center

7:30am

Registration open

9:20am

Opening Remarks


9:30am

Plenary Keynote Evolve or Die - How We Right-Sized IT Infrastructure to Fit The Age of Ubiquitous Computing Rajiv Rao, New York State Office of Information Technology Services


9:50am

Plenary Panel Altered States - How is New York Responding to the Northern Virginia Effect? Brian Cox, Stack Infrastructure, Tom Brown, DataGryd, Buddy Rizer, Loudoun County Economic Development, Robert Meyers, CBRE, Panel Moderator: David Liggitt, datacenterHawk

10:40am

Coffee Break, Expo, Innovation Stage & Speed Networking

Hall 1 | Hall 2 | Hall 3 | Hall 4

11:50am

From Buzzwords to Reality: Managing Edge Data Centers Kevin Brown, Schneider Electric

Finding the Optimal Cooling Mix in an HPC Environment Mark Collins, Excool

How to Better Manage Your Cloud Infrastructure Rami Radi, Intel

Oh, Virginia: Unveiling the County Behind the World’s Digital Economy Buddy Rizer, Loudoun County Economic Development

12:20pm

Preparing for the Black Swan: Designing Critical Facilities for Extreme Events Scott Tucker, Page

Empowering Lithium Ion: How Advanced UPS Selection Will Lower Your TCO Jerry Hoffman, LiiON

A Connected World – The Impact of Edge on Everything David Hybels and Martin Olsen, Vertiv

The Wild West of Data Center Taxation Ross Litkenhous, Altus Group

12:50pm

Discover the Edge - Energy Efficiency Through Smart Infrastructure Ameya Soparkar, Rittal

$5m per MW and PUE of 1.15: Building an Efficient Data Center Jake Ring, GIGA Data Centers

Trust but Verify: Why Assessments are a Critical Tool for Managing Data Center Operational Jesse Sycuro, BGIS

What’s the Verdict on 5G for the Interconnection Ecosystem? Sami Badri, Credit Suisse

1:20pm

Networking Lunch, Expo & Innovation Stage

3:00pm

Panel discussion: Man vs. Machine: Can Robots and Humans Collaborate to Manage Facilities? Kevin Brown, Schneider Electric Eric Fussenegger, Wells Fargo Matt Provo, Carbon Relay Zahl Limbuwala, CBRE Romonet Panel Moderator: Bruce Taylor, DCD

Panel discussion If the Two Million-Dollar Acre Is Near, Can Multi-Story Construction Drive Down CAPEX? John Sasser, Sabey Data Centers Sreoshy Banerjea, New York City Economic Development Authority, Peter Gross, Bloom Energy, Bill Mazzetti, Rosendin Electric, Tom Sandlin, Avison Young, Panel Moderator: Peter Judge, DCD

Panel discussion The Pulse Check: Where Will Edge Processing Live In 2025? Jim Fletcher, Momenta Partners Matt Gleason, Coresite Russell Senesac, Schneider Electric, Panel Moderator: Mark Harris, Uptime Institute

Panel discussion Data and Dollars: Why are Investors Betting on Digital Infrastructure? Chris Moon, ING, Laura Fazio, Santander Bank N.A., Jeff Ferry, Goldman Sachs, Sami Badri, Credit Suisse, Panel Moderator: Alan Howard, IHS Markit

4:00pm

The DCIRN Panel - Demystifying the Role of Data Center Incident Reporting Bill McHenry, JP Morgan Chase, Ed Ansett, i3 Solutions Group, George Rockett, DCD, Peter Gross, Bloom Energy, Simon Allen, Infrastructure Masons

Ready to Retrofit? Untangling the Complexity of Upgrades Whilst ‘Live’ Frank McCann, Verizon

How Verizon Won The War On Energy Inefficiency (Three Case Studies) Russell Carpenter and Gary Fernandez, Verizon

Success Factors for Finding a Global Infrastructure Partner Christopher Street, ST Telemedia Global Data Centres

4:30pm

Preventing the Blackout: What Is the State of Outages? Rhonda Ascierto, Uptime Institute

How to Foster Habits for Success in Data Center Management Braco Pobric, CME Group

Avoiding Legal Risk: Open Source Landmines David Snead, cPanel

Is Your Data Center Wasteful? Turn Smart Energy into Great Savings Jeffrey Goetz, Fluor

5:00pm

Drinks Reception and Networking on Expo Floor Sponsored by Sunbelt Rentals

46 DCD Magazine • datacenterdynamics.com



Day 2 | Wednesday 10 April

7:30am

Registration open

9:20am

Opening Remarks

9:30am

Plenary Keynote Billion-Dollar Infrastructure: Navigating the Path to the Intelligent Placement of Workloads Jan Wiersma, Avaya

9:50am

Plenary Keynote Pricing the Bleeding Edge: How Will Coolants, Machine Learning and GPUs Impact The Cost Of The 2030 Data Center? Dr Suvojit Ghosh, McMaster University

10:10am

Plenary Panel Prepping for Dense Data: What Breaks When Cognitive Workloads Dominate Compute? Eddie Schutter, Switch, Dr Suvojit Ghosh, McMaster University, Phil Harris, Intel, Eyal Itskovich, LinkedIn, Panel Moderator: Rhonda Ascierto, Uptime Institute

10:50am

Coffee Break, Expo, Innovation Stage & Speed Networking

Hall 1

Hall 2

Hall 3

Hall 4

11:50am

Preparing For The 100kW Workload - Can Liquid Cooling ‘Cross the Chasm’? Joerg Desler and David Meadows, Stulz USA

The Data Center Business Case for Digitalization Dave Sterlace, ABB

A New Approach to Electric Supply for Data Centers Oncu Er, David Niles and Jim Richardson, Avant Energy

From 40GbE to 100GbE Orchestrating a Plan To Install 5G-Ready Plumbing In Your Data Center Arelis Soto, Corning

12:20pm

Case Study: Microgrids: Greening the Data Center While Reducing OpEx Freddy Padilla, Page and Russell Carpenter, Verizon

Digital Twin For Today’s Data Center Steve Geffin, Vertiv and David King, Future Facilities

Fireside Chat: The Future of Power in the Connected Era Susanna Kass, Stanford University and Christopher Perez, Puget Sound Energy

Simplify, Consolidate, Modernize: How We Modernized Our Data Center Using Hyperconverged Infrastructure Jim Surlow, Alterra Mountain Company

12:50pm

From Core to Edge: How to Implement Reliable Power in the Data Center Robert Maroney, Piller Power Systems

Building Cloud Intelligence: Learn How to Evaluate the Movement of Varied Workloads Eric White, Clinton Foundation

Powering your Community with Clean Energy Tria Case, City University of New York

Ready for Climate Change? Kevin Heslin, Uptime Institute

1:20pm

Lunch, Networking and Innovation Stage, Infrastructure Masons, Local Chapter Meeting

3:00pm

Researcher Spotlight: Liquid Cooling is no longer in the Future - it is Now: Recent Developments in Liquid and Two-Phase Cooling Research Al Ortega, Villanova University

3:30pm

Researcher Spotlight: The Merged Multi-Power Converter Producing Efficiency and Cost Savings: But How? Minjie Chen, Princeton University

Panel discussion How to Win Friends and Influence People: Can You Make the Case for Hybrid Cloud Whilst Regulated? Charles Hoop, Aon Rahul Arya, KPMG Sagi Brody, Webair Kevin W. Sanders, EYP Mission Critical Facilities Panel Moderator: Alan Howard, IHS Markit


Panel discussion Fight the Outage: What Role will Energy Storage Play in Boosting Reliability? Tria Case, City University of New York John Clinger, ICF Paul Thornton, 1&1 Ionos Jeff Ivey, CPG Panel Moderator: Chris Ludeman, Uptime Institute

Infrastructure Masons Leadership Summit 2019
3:00pm: Fireside Chat, Macquarie
3:30pm: EdgeMasons - Think Tank
4:30pm: Julie Albright ("The Untethered Generation", TBC)
5:00pm: Book Signing and Fundraising

4:00pm

Closing Futurist Panel: Predictions on the Future of the North American Data Center Market Bill McHenry, JP Morgan Chase, Eddie Schutter, Switch, Allen Tucker, JLL, Kanad Ghose, SUNY Binghamton, Phil Harris, Intel, Panel Moderator: Stephen Worn, DCD

5:00pm

Drinks Reception and Networking on Expo Floor

Issue 32 • April 2019 47


New York

> Sponsors, Exhibitors & Partners > Knowledge Partners

Theme | AI-Ready Marketplace

From personalized e-commerce and healthcare, to preventative maintenance and automated ledgers, AI is changing the face of global enterprise. The advent of GPUs and high-density computing, however, presents a significant challenge to data centers. With workload density growing beyond 40-50kW, are the technology and infrastructure ready to support evolving demand?

[Expo floor plan: numbered exhibition stands, the Innovation Stage and breakout rooms]

48 DCD Magazine • datacenterdynamics.com

Rahi System, Raritan and Server Technology, Rittal, Rosenberger North America, Saft Batteries, Schneider Electric, Senko, Shanghai Liangxin Electrical Co., Ltd. (Nader), Shenzhen Digitalor Technology Co., Ltd., SPEC-CLEAN, Starline, Stulz

Dr Suvojit Ghosh, who leads research at McMaster University, opens Day 2 with a plenary keynote looking at the cost of compute in an AI-driven world. Rhonda Ascierto, Uptime Institute, who recently published a paper on AI in the data center, will lead a panel with Eddie Schutter (Switch), Phil Harris (Intel) and Eyal Itskovich (LinkedIn). If you want to catch a glimpse of the future, Kurtis Bowman, President, Gen-Z Consortium, will demo high-speed, low-latency, memory-semantic technology on the Innovation Stage.


Intel Corp, Jacobs Mission Critical, Janitza electronics GmbH, KOHLER Power Systems, LayerZero Power Systems, Leviton Network Solutions, Liquid Technology, LogicMonitor, Mission Critical Information Management (MCIM), Mitsubishi Electric Power Products Inc, Motivair Corporation, MTU Onsite Energy, Narada Power Source, Nlyte, Panduit, PCX Corporation, PermAlert, Piller Power Systems Inc, Power Distribution, Inc, Powersecure, Powersmiths International Corp


Aggreko, AMCO Enclosures, Anord Mardix (USA), Armstrong World Industries, Austin Hughes Solutions Inc, BGIS, Btech, C&D Technologies, CAI, Corning, CriticalCxE, E+I Engineering Ltd, East Penn, EkkoSense, EnerSys, Excool Ltd, EYP Mission Critical Facilities, Facility Grid, FiberNext, Fibrebond Corporation, Fuji Electric Co., Ltd, Future Facilities, Gateview Technologies, Generac, Gen-Z Consortium, GRC, Industrial Electric Mfg. (IEM)


> Exhibitors


Submer Immersion Cooling, SubZero Engineering, Sunbird Software, Syska Hennessey, T5 Facilities Management, Tate Inc., Thomson Power, Tileflow, Tindall, TrendPoint, Uptime Institute, Vertiv, Victaulic, Zonit Structured Solutions

Producer's Highlight:

Man Vs. Machine: Can Robots And Humans Collaborate To Manage Facilities? DCD’s Bruce Taylor will be joined by a star-studded cast of infrastructure experts consisting of Kevin Brown from Schneider Electric, Wells Fargo’s Eric Fussenegger, Matt Provo of Carbon Relay and CBRE Romonet’s Zahl Limbuwala to discuss the cross-section between human and machine-led collaboration in data center management.

Uncover new technologies from 84 exhibits on the show floor


Theme | Digital Transformation at Data Center Edge

With technology such as 5G and IoT maturing and workloads increasing, digital transformation at the data center edge continues. Exploiting cloud models and edge deployments will drive scalability, reliability and real-time intelligence. Learn from Schneider Electric’s Kevin Brown on successfully integrating edge hardware and deploying cloud models. David Hybels and resident edge expert Martin Olsen share Vertiv's four edge archetypes to ensure future-proof implementations, and Arelis Soto, Corning, uncovers key ways to get the data center 5G-ready. With forecasts that 75 percent of compute processing will happen outside the core data center by 2025, Jim Fletcher, Momenta Partners, Matt Gleason, Coresite and Russell Senesac, Schneider Electric will join a panel led by Mark Harris, Uptime Institute, to provide insight into this future vision.

Producer's Highlight:

Altered States - How is New York Responding to the Northern Virginia Effect? On April 9 at 9:50am, David Liggitt, datacenterHawk will be moderating a blockbuster panel of local data capacity experts to discuss how New York State is responding to the record data center net absorption in Northern Virginia. At what stage will Northern Virginia reach capacity? Can New York wrestle capacity back with latency-sensitive requirements from owner-operators headquartered in the state? Panellists: Brian Cox, CEO, Stack Infrastructure; Tom Brown, President and CEO, DataGryd; Buddy Rizer, Executive Director, Loudoun County Economic Development; Robert Meyers, SVP - Data Center Group, CBRE

32% of audience exploring modular, containerized and edge solutions

Theme | Energy Smart Infrastructure

An energy-efficient approach to data center design and operations can reduce costs, improve uptime and drive new revenues. From power and cooling innovations to reducing costs inside the data center, to new business models, energy storage and PPA/VPP relationships outside the data center, the sessions share best practices from the energy source to the data center itself.

David Niles and the team from Avant Energy will be sharing expertise on how to develop effective partnerships with utilities. Tria Case, Director of Sustainability, CUNY, will invite operators to join the City’s new Solarise initiative. Looking inside the data center at critical infrastructure, Joerg Desler, President, Stulz, will look at the 100kW per cabinet cooling challenge, and Jake Ring, CEO, GIGA Data Centers, will explore cost and efficiency and their $5m per MW facility. Rittal’s Ameya Soparkar will unpack the challenges of cooling at the edge.

Producer's Highlight:

Microgrids: Greening the Data Center While Reducing OpEx

Freddy Padilla of Page and Verizon’s Russell Carpenter will join forces on April 10 at 12:20pm for a case study that examines how Verizon’s commitment to microgrids is impacting data center reliability, and explores the economic drivers behind ‘green’ facilities and onsite power generation.

Theme | Modernization & Lifecycle Management

Modernization and lifecycle management may be less glamorous than new-build developments, but it remains key for our DCD community, with 61 percent of delegates prioritizing upgrade projects and looking to optimize asset lifetimes. With IT equipment typically needing replacement every 3-5 years and M&E infrastructure every 10-20 years, managing CapEx & OpEx budgets is a key challenge.

Verizon’s Frank McCann will unpack the complexity of retrofitting in a legacy data center. David King, Future Facilities, and Steve Geffin, Vertiv, will reveal the benefits of digital twins and capacity planning to reduce costs, and we have a star-studded panel on the high-rise data center, with Bill Mazzetti, Rosendin Electric, Peter Gross, Bloom Energy, and John Sasser, Sabey, discussing key tips to reduce capital expenditure.

Producer's Highlight:

Evolve Or Die - How We Right-sized IT Infrastructure To Fit The Age Of Ubiquitous Computing

Rajiv Rao, CTO of the New York State Office of Information Technology Services, will use his plenary keynote on April 9 to walk delegates through one of the largest consolidations of IT services of any US state government. The project consolidated 53 disparate data centers into just one “all-in” facility.

Check out our new website for the most up to date event details datacenterdynamics.com Issue 32 • April 2019 49


New York

>2019 Event Calendar

DCD> Madrid 29 May DCD> Jakarta 19 June

San Francisco

Dallas

DCD> Shanghai 25 June DCD> Bangalore 18 July DCD> San Francisco 11-12 July DCD> Sydney 15 August DCD> Santiago 10 September

11-12 July 2019

21-22 October 2019

San Francisco Marriott Marquis

Fairmont Dallas Dallas

With an economy that outpaces the rest of the US, Northern California’s Bay Area is the technology capital of the world. Home to half of the world’s Internet giants (the Hyperscalers) and the launchpad for next-gen industries such as healthcare, biotech and transportation - data centers and cloud infrastructure capacity are in huge demand.

The Dallas-Fort Worth area is an economic powerhouse supporting multiple tech enabled sectors such as defense, financial services, information technology and transportation. Dallas itself serves as one of the most important telecoms interconnection points in the country making it a magnet for data center activity.

DCD>San Francisco brings together the unique ecosystem that has developed to support the huge investments being made here into digital infrastructure that often impacts on a global scale.

This national event pulls together the most senior decision makers from the world of colocation, cloud and telco data centers to discuss how the next generation of Infrastructure-as-a-service will be designed, built and interconnected.

Attendees Include:

Attendees Include:

DCD> Singapore 17-18 September DCD> México 25 September DCD> Dallas 22 October DCD> London 5-6 November DCD> São Paulo 5-6 November DCD> Mumbai 20 November DCD> Beijing 5 December

Check out our new website for the most up to date event details

datacenterdynamics.com 50 DCD Magazine • datacenterdynamics.com


Opinion | Andy Lawrence

How can we compare the severity of outages? When a service goes down, the impact can vary from disastrous to so-what

Avoiding outages is a big concern for any operator or service provider, especially one providing a business-critical service. But when an outage does occur, the business impact can vary from “barely noticeable” to “huge and expensive.” Anticipating and modeling the impact of a service interruption should be a part of incident planning, and is key to determining the level of investment that should be made to reduce incidents and their impact.

In recent years, Uptime Institute has been collecting data about outages, including the costs, the consequences, and most notably, the most common causes. One of our findings is that organizations often don’t collect full financial data about the impact of outages, or if they do, it might take months for the figures to become apparent. Many of the costs are hidden, even if the outcry from managers and customers (even non-paying customers) is most certainly not. But cost is not a proxy for impact: even a relatively short and inexpensive outage at a big, consumer-facing service provider can attract negative national headlines.

Another clear trend, now that so many applications are distributed and interlinked, is that “outages” can often be partial, affecting users in different ways. This has, in some cases, enabled some major operators to claim very impressive availability figures in spite of poor customer experience. Their argument: just because a service is slow or can’t perform some functions doesn’t mean it is “down.”

To give managers a shorthand way to talk about the impact of a service outage, Uptime Institute developed the Outage Severity Rating (see table). The rating is not scientific, and might be compared to the internationally used Beaufort Scale, which describes how various windspeeds are experienced on land and sea, based on subjective experience.

Andy Lawrence Uptime Institute

Uptime Institute Outage Severity Rating
The Outage Severity Rating (OSR) describes the business/service impact of an IT service outage, regardless of the cause.

Category 1 (Negligible outage): Recordable outage, but little or no obvious impact on services. No financial/reputational impact.

Category 2 (Minimal service outage): Services disrupted. Mostly minimal effect on users/customers/reputation. Minimal or no financial/reputational impact.

Category 3 (Notable business/service outage): Notable customer/user service interruptions, although mostly of moderate scope and duration. Costs due to revenue damage likely. Possible compliance/legal impact.

Category 4 (Serious business/service outage): Serious disruption of service and/or operations. Ramifications may include revenue and customer losses, significant disruption costs, compensation claims, compliance breaches, reputation damage, and possible safety concerns.

Category 5 (Severe business/mission-critical outage): Major and damaging disruption of services and/or operations. Ramifications likely to include significant disruption costs, lost revenues and customers, compliance breaches, fines and compensation, company valuation losses, significant reputational damage, and possible safety issues.

By applying this scale to widely reported outages from 2016 to 2018, Uptime Institute tracked 11 “Severe” Category 5 outages and 46 “Serious” Category 4 outages. Of the 11 severe outages, no fewer than five occurred at airlines. In each case, multi-million dollar losses occurred, as flights were cancelled and travelers stranded. Compensation was paid, and negative headlines ensued. Analysis suggests both obvious and less obvious reasons why airlines were hit so hard: the obvious one is that airlines are not only highly dependent on IT for almost all elements of their operations, but also that the impact of disruption is immediate and expensive. Less obviously, many airlines have been disrupted by low-cost competition and forced to “do more with less” in the field of IT. This leads to errors and over-thrifty outsourcing, and makes incidents more likely.

If we consider Categories 4 and 5 together, the banking and financial services sector is the most over-weighted. For this sector, outage causes varied widely, and in some cases cost cutting was a factor. More commonly, the real challenge was simply managing complexity, and recovering from failures fast enough to reduce the impact.
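The OSR is applied by judgment rather than formula, but a rough sketch of how an operator might map incident attributes onto the five categories could look like the snippet below. The attribute names and escalation rules are assumptions chosen for illustration, not Uptime Institute definitions.

```python
# Illustrative mapping of incident attributes to an Outage Severity Rating.
# The OSR itself is a judgment scale; these rules are assumptions, chosen only
# to show how the categories escalate from "negligible" to "severe".

def osr_category(service_disrupted: bool,
                 customer_impact: bool,
                 revenue_loss: bool,
                 compliance_or_safety: bool,
                 mission_critical_down: bool) -> int:
    if mission_critical_down:
        return 5   # severe business/mission-critical outage
    if compliance_or_safety or (revenue_loss and customer_impact):
        return 4   # serious business/service outage
    if revenue_loss or customer_impact:
        return 3   # notable business/service outage
    if service_disrupted:
        return 2   # minimal service outage
    return 1       # negligible: recordable, but no obvious impact

# A degraded booking service that cost sales and annoyed customers, with no
# compliance breach, would land in Category 4 under these assumed rules:
print(osr_category(True, True, True, False, False))   # -> 4
```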

Issue 32 • April 2019 51


Security + Risk

Preparing for natural disasters
Global warming will raise water levels around the world. Data centers need to prepare for floods, warns Jason Hood

As climate change gathers pace, extreme weather events are on the rise, and so are sea levels. Critical infrastructure, including data centers and submarine cable landing stations, is expected to work reliably, yet it is at risk of water ingress and flooding. Flooding and water ingress can happen during major storms, but water can be a nuisance in regular weather conditions too, and leaks can reduce the reliability or performance of a facility. There are ways to limit water ingress, which can greatly reduce those risks.

Around half of data centers experience outages (Uptime Institute, 2014), and the Ponemon Institute reckons the average outage costs more than $700,000. Most failures are caused by problems with the uninterruptible power supply (UPS) or human error, but weather-related events and general water incursion figure high on Ponemon’s surveys, and their incidence is expected to increase. In 2012, Hurricane Sandy caused extensive damage in New York: several data centers in lower Manhattan went down, and had to pump out basements and generator rooms, and then replace damaged switchgear, before they could go live again. In 2016, the river Aire in Leeds, UK burst its banks and floods reached a Vodafone facility, taking it down for several days.

Location is the best defense against natural disasters. Site selection already considers factors such as environment, climate, power, fiber connectivity, labor costs and taxes. It should also assess the risk of natural disasters, and avoid areas prone to floods.

There are industry standards to help guide site selection. For example, TIA 942, an American National Standard (ANS) for data center reliability, includes guidelines for protection against physical hazards, including fire, flood and wind. TIA 942 suggests that Tier IV data centers should be located more than 300ft (91m) from the 100-year flood plain and more than half a mile (0.8km) from coastal or inland waterways. Areas prone to natural disasters can be identified from historical data for tornadoes, hurricanes, earthquakes, and flooding, available from agencies such as FEMA, USGS, NOAA, the European Commission and the European Environment Agency. Site-specific factors such as elevation, slope, and water table should also be considered, even outside the flood area. ANSI/BICSI 002-2014 provides recommendations here.

Sometimes decisions taken to reduce other risks can actually increase dangers from floodwater. For instance, electrical and network links can be placed underground to avoid damage to overhead lines. If the water table rises, these ducts and entries can be at risk. ANSI/BICSI 002 recommends that utility ducts should be above the water table, and that utility maintenance holes should be examined as a potential source of water ingress. Even moderate rainfall can get into fiber and power distribution ducts and fill underground vaults. This causes instant problems such as short circuits, but water also causes longer-term problems. Pumping it out is a nuisance, and high humidity levels can make switchgear fail, causing partial discharge and bushing failures.
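As an aside, the TIA 942 siting thresholds quoted above are simple enough to express as a checklist. The sketch below is purely illustrative and no substitute for the standard itself.

```python
# Illustrative check against the TIA 942 siting guidance quoted in the text:
# a Tier IV site should sit more than 300 ft (91 m) from the 100-year flood
# plain and more than half a mile (0.8 km) from coastal or inland waterways.

def meets_tia942_siting(dist_to_flood_plain_m: float,
                        dist_to_waterway_km: float) -> bool:
    return dist_to_flood_plain_m > 91 and dist_to_waterway_km > 0.8

# A candidate site 150 m from the flood plain but only 0.5 km from a river:
print(meets_tia942_siting(150, 0.5))   # -> False: too close to the waterway
```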

52 DCD Magazine • datacenterdynamics.com

Jason Hood Roxtec International

Moisture will degrade the insulation system and cause corrosion. Humidity in the presence of other contaminants can also increase partial discharge. Duct designs should be sloped to direct water away from buildings and equipment, and vaults should be located above the water table, but moisture can still get into vaults or ducts running past generators, switchgear, load banks and transformers. In the long term, moisture can cause “water treeing,” where insulation degradation propagates in the form of micro-cracks, and can eventually lead to cable failure. Water treeing starts at stress points on the cables, caused during manufacturing, transportation, pulling or service. The risk can be reduced with cables that are optimized for these environments (e.g. TR-XLPE or LC), but not all cable failures are due to a breakdown in cable insulation. Splices, terminations and joints are also a potential weak link, where poor workmanship can lead to water ingress.

As well as power cables, many of the same considerations apply to underground fiber optic cables. Water molecules embedded in micro-cracks can cause signal attenuation, connectors can corrode, and freezing can cause mechanical damage. Outdoor cables use gel-filled tubes or water-swellable materials to minimize water penetration and survive harsh environments but, as with power cables, the connections can be the weakest link. Preventing water ingress and minimizing moisture are the best ways to protect critical fiber optic infrastructure.

What about the future? Climate change could potentially be the biggest threat to infrastructure. In the UK, increased flooding is expected to damage key assets including masts, pylons, data centers, telephone exchanges, base stations and switching centers, according to a report by AEA for DEFRA, the UK’s Department for Environment, Food and Rural Affairs. In the US, the Third National Climate Assessment, from the US Global Change Research Program, reports similar risks. Over the past century, global average sea level has risen by eight inches, and it is expected to rise by one to four feet in the rest of this century, raising the risk of erosion, storm surge damage, and flooding for coastal communities. In the US, nearly 490 communities will face chronic inundation by the end of the century, including large cities such as Boston, New York, Miami, San Mateo and Newark. Data centers in these cities will be at risk of flooding.

A nearer-term paper, Lights Out: Climate Change Risk to Internet Infrastructure, predicts that by 2030, with a 1ft rise in sea level, 235 data centers will be affected, as well as 771 POPs, 53 landing stations, and 42 Internet Exchange Points (IXPs). Under a modest projection, 4,067 miles of fiber conduit will be under water by the end of the century. This is infrastructure which is designed to be weather and water resistant, but it is not designed to be submerged, and some of it is already 20 years old, with seals and cladding at risk of deteriorating. For the data center, other danger areas include cable and pipe penetrations in the rooftop, power systems located outside the exterior wall, or connections to an exterior network room. All penetrations through the building envelope should be treated as a potential leak path and sealed appropriately (see box).

New data centers can be built outside of coastal areas but, even here, large storms have increased in certain areas by as much as 71 percent. For every one degree Celsius of warming, the air can hold around seven percent more moisture. Based on the predicted global temperature rise of three to five degrees, heavy downpours could become even more frequent in the future, causing localized flooding and changing flood boundaries. Despite the warnings, in a recent survey of 867 data center operators and IT practitioners, only 14 percent reported they were taking climate change into consideration and “re-evaluating site selection based on higher temperatures, increased flooding, or water scarcity.” Only 11 percent are taking steps to mitigate increased flood risk. These are low numbers, but they do show that the threat of climate change and flooding is starting to be recognized. Future data centers must protect against the threat of water, and it seems that some operators are already doing it proactively.

Jason Hood is global segment manager infrastructure at Roxtec International

Seal your ducts

The TIA 942 standard recommends a floor drain be placed in areas where risk of water ingress exists, and data center and support equipment “should be located above the highest expected floodwater levels.” No critical electronic, mechanical or electrical equipment should be in the basement. In practical terms, even if equipment is at ground level, feeders often enter substations below ground and can become a pathway for water and humidity. Pumps and dehumidifiers can remove water and humidity, but sometimes utility ducts and distribution vaults are overlooked. Specifications recommend some type of sealant for ducts, but many are left open, and quickly fill with water and debris. This is very common when construction specifications do not include details for sealing ducts and building entries. When a problem with water ingress is discovered, a maintenance procedure is established to solve the problem. Common on-site remedies include foam or silicone, but these are not effective long-term remedies, as water pressure builds up behind the seal. Mechanical sealing solutions use rubber modules with tight tolerances that, when compressed, provide a water-tight seal able to contain a high level of water pressure. These solutions can be designed to last the life of the building.

Issue 32 • April 2019 53


Servers + Storage

Big data has its limits

In this job, you hear a lot about how great ‘Big Data’ is. Gather lots and lots of bits of information, add a healthy splash of vague artificial intelligence magic, and suddenly you will be able to do anything. Predict the future! Solve world hunger! Make people happy! And yet… despite these promises, despite a huge amount of money, time and data storage, many of these gains remain unrealized. Big data isn’t all it is cracked up to be. Just ask the NSA.

After the tragedy of 9/11, the National Security Agency began an intensive mass surveillance effort that was ultimately revealed by whistleblower Edward Snowden. The program - which required an enormous data center in the deserts of Utah, and supercomputers elsewhere - skirted laws, norms and conventions, but was done to keep us safe, they said. It was so important, they later argued, that then-Director of National Intelligence James Clapper had to lie to the Senate Intelligence Committee to keep it secret and safe.

And yet… In March 2019, a senior Republican congressional aide admitted that the system they used to analyze Americans’ domestic calls and texts hadn’t been used in months, and that it would be quietly shut down. In all the years of the program, leaks and reporting by The Intercept suggest that it never thwarted a single attack, while a 2013 White House investigation concluded that the program was “not essential in preventing attacks.”

“The last thing you need when looking for a needle in a haystack is more hay,” author and activist Cory Doctorow wrote in 2015. Big data pushes companies and countries to stock up on more and more hay, whether it is useful or not. It promotes a data hoarder mentality, when often the real gems are being obscured by the sheer quantity of data.

Change is afoot - as we discuss on p35, machine learning is being used to make our approach to large problems, such as curing cancer, more intelligent. Targeted AI, which respects the real limitations of what remains a fledgling field, can be used to elicit actionable data. It will be years before this trickles down into the wider industry, and even then it will have severe restrictions. In the meantime, when someone promises you the world because of big data wizardry, in reality they might just be offering you a pile of hay.

Sebastian Moss Deputy Editor

54 DCD Magazine • datacenterdynamics.com


WHEN REPUTATION RELIES ON UPTIME. Solving your data center power challenges are our priority. Our engineering teams can create a vertically integrated solution for your facility. No matter the fuel type, our trusted reputation and unrivaled product support allow you to maintain uptime with backup power that never stops. DEMAND CAT® ELECTRIC POWER.

© 2019 Caterpillar. All Rights Reserved. CAT, CATERPILLAR, LET’S DO THE WORK, their respective logos, “Caterpillar Yellow”, the “Power Edge” and Cat “Modern Hex” trade dress as well as corporate and product identity used herein, are trademarks of Caterpillar and may not be used without permission.


No matter the environment, Starline’s at the center.

Hyperscale, Colocation, Enterprise. Time-tested power distribution for every environment. Starline Track Busway has been the leading overhead power distribution provider—and a critical infrastructure component—for all types of data centers over the past 30 years. The system requires little to no maintenance, and has earned a reputation of reliability due to its innovative busbar design. For more information visit StarlineDataCenter.com/DCD.

