DCD>Quarterly: Winter 2019


Issue 35 • Dec/Jan 2020 datacenterdynamics.com

HPC vs Wildfires How climate change took down Cori

Award Winners The best of 2019

Digital Realty’s CEO Bill Stein sets sights on retail colo

Goonhilly’s big plans Satellite smarts unlock new data center worlds


DATA DOESN’T DO DOWNTIME DEMAND CAT® ELECTRIC POWER

Cat® generating sets and power solutions provide flexible, reliable, quality power in the event of a power outage; maintaining your operations, the integrity of your equipment and your reputation.

© 2019 Caterpillar. All Rights Reserved. CAT, CATERPILLAR, LET’S DO THE WORK, their respective logos, “Caterpillar Yellow”, the “Power Edge” and Cat “Modern Hex” trade dress as well as corporate and product identity used herein, are trademarks of Caterpillar and may not be used without permission.

Learn more at http://www.cat.com/datacentre


ISSN 2058-4946

Contents

December/January 2020

6 News Microsoft's high pressure data center patent; ransomware hits CyrusOne

12 Star gazing We travel to Goonhilly Earth Station to learn about a different kind of space race

18 Amazon's grounded ambitions The cloud heads for space

20 Industry interview Real estate skills like land-banking and master planning have made Digital Realty the world's largest wholesale data center company, says CEO Bill Stein. Now he's moving in on the enterprise market, with a program of mergers and new platforms

22 Power infrastructure survey Efficiency over reliability?

24 California burnin' Quickly shutting down a 30 petaflops supercomputer

32 The new chip bestiary From CPUs and GPUs, to FPGAs and other exotic hardware

34 Time for AI to get specialized Intel hopes that its Nervana processors will power our AI future

36 DCD>Awards highlights The best and brightest from the industry's most prestigious awards

40 Awards history After more than a decade, our award winners serve as a chronicle of industry trends and tribulations

42 Sharing heat the Nordic way To most data centers, heat is the enemy. In Sweden, it's a resource to be shared

46 Key trends for 2020 and beyond What does 2019 tell us about emerging data center trends?

48 Securing the next decade Recent world events remind us that we should prioritize security over laziness or fear

Issue 35 • Dec/Jan 2020 3


From the Editor

The launch of a new space race

Don't discount the potential of technology to change. Submarine fiber took the lion's share of international communications away from satellites - but now swarms of systems in low-earth orbit are streaming down data, and there's a race on to provide the access, the AI, and the analytics this demands. Goonhilly Earth Station (p12) is a new player, grown from one of the oldest satellite hubs. But it's up against a move by Amazon Web Services (p18). Let the new space race begin!

Wholesale giant Digital Realty invented the data center REIT (real estate investment trust) back in 2004. Now it wants to conquer new worlds, according to CEO Bill Stein (p20). A pattern of acquisitions and product launches makes it clear that Digital is gunning for the enterprise colocation market, and its incumbent leader Equinix.

Climate change vs HPC. Record wildfires in California this year were reckoned to have been caused by climate change. They also hindered our efforts to understand manmade global warming. As the forests burned, the utility PG&E told the US energy research center NERSC to shut down its 30 petaflops Cori supercomputer, interrupting climate simulations (p24).

You might think that AI is already specialized enough, but there's a growing realization that it will need multiple kinds of hardware. Intel's Nervana acquisition is providing specialist ASIC chips for the twin AI applications of training and inference, Alex Alley found (p34). Meanwhile, FPGAs are another species in the AI zoo, says Max Smolaks (p32). Both are taking on GPUs - and will change data centers.

Greenpeace's Gary Cook was recognized for his Outstanding Contribution to the Data Center Industry at the London gala dinner for the 2019 DCD>Awards. We have an extended feature on the 2019 DCD Global Awards (p36), which includes recognition for efficiency, social responsibility, and for the young talent the industry needs. We also have a retrospective on the shifts we've seen in the last 12 years. One of the biggest changes may be in attitudes to Greenpeace. Since he first focused on data centers in 2010, Cook has gone from a hate figure to a respected agent of change. And he saved some air miles by accepting the award on video.

You can cut waste by sharing. A Swedish campus showed us new ways to "heat your neighbor" (p42). Whether you read this special year-end issue online in 2019, or in hard copy in 2020, we wish you all the best for the festive season.

Peter Judge
DCD Global Editor

Wood construction is more fireproof and has lower emissions than concrete (p42)

3,500 - Total number of entries to DCD's Awards since they began in 2007

bit.ly/DCDMagazine

Meet the team
Global Editor Peter Judge @Judgecorp
Deputy Editor Sebastian Moss @SebMoss
Reporter Alex Alley
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Head of Design Dot McHugh
Designer Mandy Ling
Head of Sales Erica Baeta
Conference Director, Global Rebecca Davison
Conference Director, NAM Kisandka Moses
Conference Producer, EMEA Warka Ghirmai
Conference Producer, APAC Chris Davison
Chief Marketing Officer Dan Loosemore

Dive deeper
Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color, shown below.
Events | Debates | Training | Intelligence | Awards | CEEDA

Head Office
DatacenterDynamics
102-108 Clifton Street
London EC2A 4HW
+44 (0) 207 377 1907

PEFC Certified: This product is from sustainably managed forests and controlled sources. PEFC/16-33-254 www.pefc.org

4 DCD Magazine • datacenterdynamics.com

© 2020 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@ datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.


Book Now!

New for 2020

Limited free passes

>Virginia 5-6 October 2020 Sheraton Tysons Hotel

Meeting capacity demands for 70% of the world’s Internet traffic Connecting the nation’s largest colo operators, cloud providers & the supporting advisory and technology ecosystems

Join the discussion #DCDVirginia

For more information visit: bit.ly/DCDVirginia

Principal Sponsor

Economic Development Partner

Energy Partner

Global Content Partner


Whitespace

News

NEWS IN BRIEF

Whitespace: The biggest data center news stories of the last two months

ICTroom declares bankruptcy along with sister companies The Dutch modular data center specialist filed for bankruptcy along with all its subsidiary companies in November. ICTroom operated offices in the Netherlands, Belgium, France, and Germany.

Celebs win data center tax case Comic Jimmy Carr and footballer Wayne Rooney have won a dispute with the UK tax authority HMRC. The pair, along with 675 others, invested in a number of data center projects to benefit from tax breaks despite the buildings standing empty.

Facebook plans solar farm to power data center campus The solar plant will be built in Georgia, US, and will come online in 2021. The 107MW facility in Denton, Jeff Davis County, will power Facebook's 970,000 sq ft (90,000 sq m) Newton Data Center in Stanton Springs, Newton County.

Microsoft patents pressurized data center for efficient cooling

Air-tight can of pressurized greenhouse gas - perfectly harmless

Microsoft has been awarded a patent for a high pressure data center that would allow for more efficient heat transfer rates than traditional approaches.

"Some data centers employ heat sinks and electric fans (which can consume a substantial amount of energy)," the filing (Patent US10426062) states. "What is needed is a system to efficiently improve data center cooling without needing expensive additional hardware."

That system, the patent reads, is a hermetically sealed data center full of high pressure gas. Higher pressure makes air denser, and increases its heat capacity - and therefore the amount of heat it can remove from IT systems.

Gases mentioned include normal air that contains nitrogen (N2), oxygen (O2), argon and carbon dioxide (CO2), or inert gases such as pure nitrogen, carbon dioxide, sulfur hexafluoride (SF6), and combinations of the above. SF6, an inert gas with a high molecular mass, is already used as a dielectric medium for electrical equipment like circuit breakers and switchgear.

6

Microsoft's patent says that, in a hermetically sealed data center filled with SF6, fans can be powered at 25 percent of standard levels, while heat transfer is nearly seven times as effective.

However, SF6 is the most potent greenhouse gas ever evaluated by the Intergovernmental Panel on Climate Change, which found the gas had a global warming potential of 23,900 times that of CO2 over 100 years.

Leaks can occur: one of the largest users of the gas, the US Department of Energy, discovered it was leaking huge quantities of SF6. According to electrical company Eaton, leaks could be as high as 15 percent for switchgear over its lifetime. Even an air-tight facility would leak a small amount of the gas, and SF6 has an atmospheric lifetime well over 1,000 years.

Microsoft's patent does not delve into the risk of leaks, but details a range of environmental controls such as gas composition sensors, pressure controls, and human access safety doors.

bit.ly/PressureCookingUpSomething
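To give that warming potential a sense of scale, here is a minimal back-of-the-envelope sketch in Python; the 100kg SF6 charge is a purely hypothetical figure, while the GWP and lifetime leak rate are the ones cited above.

```python
# Hypothetical illustration: CO2-equivalent of an SF6 leak, using the
# 100-year global warming potential and leak rate cited in the article.
GWP_SF6 = 23_900           # IPCC 100-year GWP for SF6
charge_kg = 100.0          # hypothetical SF6 charge in a sealed facility (assumed)
lifetime_leak_rate = 0.15  # 15% lifetime leakage, the switchgear figure from Eaton

leaked_kg = charge_kg * lifetime_leak_rate
co2e_tonnes = leaked_kg * GWP_SF6 / 1000  # kg SF6 -> tonnes CO2-equivalent
print(f"{leaked_kg:.0f} kg of leaked SF6 ≈ {co2e_tonnes:,.1f} tonnes CO2e")
# -> 15 kg of leaked SF6 ≈ 358.5 tonnes CO2e
```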

DCD Magazine • datacenterdynamics.com

Google begins work on Curie cable’s Panama branch The Curie submarine cable connecting California to Chile will also feature a branching station to link up Panama to Google’s network. SubCom, the submarine cable company responsible for Curie, will now install the branching unit that will land in Balboa, a district in Panama City.

Norway-UK submarine cable construction underway The 700km (435 mile) Norway-England cable will be deployed in 2021, running from its cable landing station (CLS) at Green Mountain's Stavanger data center in Norway to Newcastle, England. From there, the cable will be just another connection for Euroconnect-1, a larger fiber network operated by Altibox, a subsidiary of Lyse.

Google opens $600m data center in Tennessee, USA The facility will be located in Montgomery County, Tennessee, and will source its power from several solar farms. The multimillion-dollar Clarksville site was opened by Governor Bill Lee at a ribbon-cutting ceremony on November 6. Jobs are now being advertised for the Clarksville site, with a data center facilities technician position posted on job site Indeed.


Ex-Apple staff raise $53m to launch start-up
Nuvia founder sued by Apple

CyrusOne hit by ransomware attack The virus affected its New York data center and multiple customers On December 4, attackers gained access to network resources at the CyrusOne facility, and encrypted files belonging to customers, sending a ransom demand to the customers and CyrusOne. A total of six customers were affected, including a financial services provider. CyrusOne has since refused to pay the ransom, and is working to restore services to its clients, according to reports and a company statement. The global data center firm is rumored to be a takeover target, and says that the attack only affects managed services customers at a data center in New York, with colocation customers, and businesses located elsewhere unaffected.

The following day, one of the victims, financial firm FIA Tech, said it had suffered a cloud outage that was “focused on disrupting operations in an attempt to obtain a ransom from our data center provider.” CyrusOne then confirmed on its investor portal that it is “addressing” an incident, saying its managed service division is “working to restore availability issues to six managed service customers due to a ransomware program encrypting certain devices.” CyrusOne has some 45 data centers in the US, Asia, and Europe. The investigation is currently ongoing.

Nuvia is planning a new range of data center processors and has raised $53m from investors for its public debut. The company was founded by ex-Apple engineers Gerard Williams III, John Bruno and Manu Gulati. The three men led initiatives including Apple's A-series of chips that power the iPhone and iPad. Soon after the launch, Apple announced it would sue Williams, its former chief architect and Nuvia's CEO. The tech giant says he broke his employment agreement while setting up Nuvia, a claim he contests. bit.ly/SourAppleSues

bit.ly/HeldToRansom

Swedish wind energy project lifts off, will power Finnish Google data center Called the Stavro project, the planned 62 turbines will power Google's Hamina data center in Finland via a power purchase agreement. Located in northern Sweden, the wind farm will be divided into two sub-sites, Blodrotberget (40 turbines) and Blackfjället (22 turbines), for a total capacity of 254MW. The wind farm is expected to be finished before the end of 2021, and commercially operational that same year. Hamina is located on the coast, some 145 kilometers (90 miles) east of Helsinki and has a population of around 20,000 people. The facility was last upgraded in 2018, when Google opened a cloud region in Finland. Earlier this year, Google announced 18 new renewable energy agreements, totaling 1,600MW. The deal was the largest corporate purchase of renewable energy in history, bringing Google's overall portfolio of wind and solar agreements to 5,500MW. bit.ly/PfftWhatABreeze

Issue 35 • Dec/Jan 2020 7


Whitespace

AWS readies new Arm CPU and AI chip Amazon has announced the Graviton2 and the Inferentia

For more on upcoming chip architectures, see p32

AWS says that its new Graviton2 processors will deliver up to 40 percent improved price/performance over comparable x86-based processors. The Arm-based CPU is available on Amazon's Elastic Compute Cloud (EC2).

Coming a little over a year after the first Graviton processors, Amazon claims the chips now offer higher throughput, lower latency, and sustained performance for real-time and batch inference applications.

The chips are based on Arm's Neoverse designs, first revealed in February 2019. Unlike rivals Intel and AMD, Arm does not sell its own chips - it licenses out core designs, which other companies build upon, adding elements like memory, storage and PCIe controllers.

Graviton2 provides up to 64 vCPUs, 25 Gbps of enhanced networking, and 18 Gbps of EBS bandwidth. Customers can also choose NVMe SSD local instance storage variants (C6gd, M6gd, and R6gd).

AWS CEO Andy Jassy said: "We decided that we were going to design chips to give you more capabilities. While lots of companies have been working with x86 for a long time, we wanted to push the price to performance ratio for you." He added that Intel and AMD remain key partners for AWS.

Amazon also announced a new inference chip named Inferentia, which will be available on EC2 Inf1, a dedicated instance for machine learning. The Inferentia processor provides 128 TOPS per chip, and up to two thousand TOPS per EC2 Inf1 instance, with support for multiple frameworks including TensorFlow, PyTorch, and Apache MXNet.

bit.ly/TransformersRipOff

We’re putting the power in data. Temporary power and battery solutions for data centres. Our temporary power and battery solutions will provide your data centre with the power you need, when you need it, for as long as you need it. Bridging any gap in demand while you’re building a permanent off-grid solution. Aggreko put the power into data.

8 DCD Magazine • datacenterdynamics.com Tell us what you need 0333 016 3475

aggreko.com


UK deploys Shasta supercomputer to help with nukes

European HPC hosting deal agreed Some hosts will be housing the first pre-exascale HPCs on the continent The European High Performance Computing Joint Undertaking (EuroHPC JU) has signed hosting agreements with eight IT companies and laboratories to house supercomputers. The systems will be made accessible to European researchers to develop new uses for artificial intelligence, medicines, models on climate change, and more. Mariya Gabriel, European Commissioner for Digital Economy and Society, said: “These signatures mark a milestone in the Joint Undertaking’s activities, bringing us a step closer to our ambition of making Europe a global leader in high-performance computing. “By the end of next year, eight world-class supercomputers will help European researchers and industry, wherever they are in the EU, run applications that require large amounts of

computing power to make significant advances in fighting climate change, designing new drugs, developing new materials, and many other areas." The European Union set up the EuroHPC JU to develop its supercomputer infrastructure. A hosting agreement is a contract defining the roles, rights, and obligations of a hosting entity. Now that these are signed, the procurement process for the new computers can begin. Three of the new machines will be pre-exascale supercomputers and are expected to be operational by the end of 2020, while the five other sites will host petascale computers by mid-2020. The pre-exascale (more than 150 petaflops) HPCs will be located in Barcelona, Spain; Kajaani, Finland; and at the CINECA site in Italy.

AMD has bagged another contract to help Britain's Atomic Weapons Establishment (AWE) deploy a seven petaflops Cray Shasta supercomputer. The 'Vulcan' system will feature AMD's Epyc 7542 processors, Cray ClusterStor Lustre storage, and Slingshot interconnect. "High-performance computing is a critical aspect of AWE," the head of the establishment's HPC, Andy Herdman, said. "It underpins the vast majority of our science-based programs, and we're continually looking for ways to enhance and support this important work. This is why we chose Shasta, for its unique and powerful features." In 2017, the AWE deployed an IBM supercomputer to maintain the Trident nuclear arsenal. Ever since the 1996 Comprehensive Nuclear Test Ban Treaty (CTBT), supercomputers have been used to study how aging nuclear stockpiles degrade. bit.ly/ItsTheBomb

bit.ly/WillitBeAFlop

Vertiv listed on NYSE after reverse merger with GS Acquisition Holdings

Platinum Equity, which acquired Vertiv for $4 billion in cash in 2016, will own approximately 38 percent of the company once the deal goes through. Vertiv is expected to be valued at $5.3 billion after the merger and will continue to be run by CEO Rob Johnson. The transaction is expected to close in the first quarter of 2020, with stock trading under the ticker symbol NYSE: VRT.

The deal's sponsors will own approximately five percent of Vertiv Holdings, and the transaction will be funded in part by the approximately $705 million of cash held by GSAH. Additional investors have committed to participate in the transaction through a $1.239 billion private placement.

The reverse merger will mean that Vertiv will be able to list on the stock exchange without going through an IPO. bit.ly/Vitrev

Issue 35 • Dec/Jan 2020 9


Whitespace

Data center part of USAID's plan for Afghan power plant

The US Agency for International Development is planning to fund a small disaster recovery site in Kabul, Afghanistan. The new facility will be located at the diesel-fueled Tarakhil Power Plant operated by state-owned utility Da Afghanistan Breshna Sherkat (DABS).

In a posting on the federal procurement website, USAID states: "Currently the data gathered by DABS is not backed up. In case of any interruptions, the business operations of DABS will be adversely affected. To mitigate disruption of business operations, DABS plans to establish a Disaster Recovery Data Center at the Tarakhil Power Plant."

USAID also wants to see the development of five utility load centers in Kabul, Kandahar, Balkh, Herat and Nangarhar provinces. The agency spent $335m on the 105MW power plant before it was given over to DABS in 2010. The agency has not given any indication of the size of the new facility.

DARPA contract for “shallow” neural network at the Edge It wants to fund research into running accurate AI on low power systems The Defense Advanced Research Projects Agency (DARPA) is looking to fund research into shallow neural network architectures that could run accurately on low-powered Edge systems. These networks, Hyper-Dimensional Data Enabled Neural Networks (HyDDENN), would provide similar results compared to existing Deep Neural Networks (DNN) but without the latency and large computational requirements. The DARPA presolicitation document says, “The basic computational primitive to execute training and inference functions in DNN is the multiply and accumulate (MAC) operation.” As DNN parameter count increases, networks require tens of billions of MAC operations to carry out one inference.

The document added: “This means that the accuracy of DNN is fundamentally limited by available MAC resources. “This compute paradigm will not satisfy many DoD applications which demand extremely low latency, high accuracy artificial intelligence (AI) under severe size, weight, and power constraints.” The Phase 1 Feasibility Study should have costs no more than $300,000, while the Phase 2 Proof of Concept should not exceed $700,000. Although DARPA’s focus is on military applications such as tactical Edge deployments, the agency believes the technology could find use elsewhere in the civilian sector. bit.ly/WarAtTheEdge
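To make that scaling concrete, here is a minimal Python sketch of how MAC counts grow with fully connected layer sizes; the layer dimensions are arbitrary illustrative values, not figures taken from the DARPA solicitation.

```python
# Illustrative MAC count for a small stack of fully connected layers,
# showing why DNN inference cost scales with parameter count.
# Layer sizes below are arbitrary examples, not from the DARPA document.
def dense_macs(inputs: int, outputs: int) -> int:
    # One multiply-accumulate per weight: inputs * outputs
    return inputs * outputs

layers = [(224 * 224 * 3, 4096), (4096, 4096), (4096, 1000)]
total = sum(dense_macs(i, o) for i, o in layers)
print(f"{total:,} MACs per inference")  # about 0.64 billion for this toy network
```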

Peter’s government factoid AWS is challenging the DoD’s decision to award Microsoft a $10bn cloud contract. The company claims President Trump blocked Amazon due to his disdain for Jeff Bezos.

bit.ly/AfghanPower

Grey Wolf data center launched by FBI in Idaho The FBI has opened its new data center in Pocatello after it was first announced two years ago. Known as ‘Grey Wolf,’ the 100,000 square foot (9,300 sq m) facility is part of a $100m expansion that includes a 40,000 sq ft (3,700 sq m) office addition, both built by JE Dunn Construction. In the procurement documents, the FBI states that the Tier III facility has 25,000 sq ft (2,300 sq m) of data floor space with an optional 8,000 sq

10 DCD Magazine • datacenterdynamics.com

ft (750 sq m) expansion. The data center will be used by the US Marshal Service, the District Attorney, the Bureau of Prisons, and the Treasury Department. “This building, while in Pocatello, Idaho, will be the center and epicenter of what we are doing in Washington, DC,” Jeffrey Sallet, FBI assistant director of finance and facilities, said at the groundbreaking. bit.ly/CanYouImagineTheFleas


BP will shut down European servers as it moves workloads over to AWS cloud A date for when the move will be completed has yet to be set BP is shutting down servers in two European data centers as it migrates over to the cloud. The oil and gas company announced during Amazon’s re:Invent in Las Vegas that it will move to the cloud so it can better utilize AWS’s services. BP currently owns servers inside two European data centers believed to be operated by HPE. These will now be decommissioned as it prepares to move around 900 applications and vast amounts of data over to AWS. Bill Vass, VP of technology, storage, automation, and management for AWS, said: “We are pleased to expand our relationship with BP as the company moves [from] its largest mega data centers, which host mission-critical data applications, to AWS. “AWS is the world’s leading cloud, with an unmatched portfolio of cloud services, proven performance, and operational expertise, which is why global companies like BP trust AWS to support their digital transformations.” BP will create a data lake on Amazon Simple Storage Service and will use Amazon Kinesis, to collect, process, and analyze real-time, streaming data for

emissions monitoring and gas station pump operations. “We’ve been working with AWS for many years, and today’s announcements further strengthen that relationship. AWS is helping BP to transform our operations, and together we are using the cloud and renewable energy resources to drive energy efficiencies,” BP CIO Steve Fortune said. “Exiting our European data centers and migrating to AWS supports our digital transformation agenda, and we’re excited about the possibilities for increased flexibility, operational efficiencies, and opportunities to innovate while helping to advance the energy transition.” In April 2019, Amazon employees called on the company to stop courting oil and gas cloud contracts, with an open letter noting the company has a division “devoted to helping fossil fuel companies accelerate and expand oil and gas extraction.” Since 2014, AWS pledged to source 80 percent of its energy from renewable sources by 2024. bit.ly/PetrolClouds

Mission Critical Training

Colocation/MTDC Managing training within organizations that provide “Infrastructure as-a-Service” is complicated given the variety of learning requirements needed, but it is essential to maintain the competitive edge. DCPRO has a flexible approach to workforce development and we even work with you to ensure our materials fit your bespoke requirements exactly.

Quote MAG20 when booking a course for 10% off

"DCPRO's training courses are always informative and interactive. The trainers are very experienced and knowledgeable. I recommend these courses not only to the operations team, but to anyone who works at a data center to understand the criticality of running a data center," Charlene Gomez | Digital Realty


Cover feature

Photographer: Graham Gaunt

12 DCD Magazine • datacenterdynamics.com


Peter Judge Global Editor

If we see a sudden growth in space technology, the UK’s Goonhilly Earth Station could be a major winner

Hugging rugged moorland on the Lizard peninsula in the far West of Cornwall, the Goonhilly Satellite Earth Station has its eyes on the future, its feet on the ground, and roots which dig through the past into myth and history.

The history hits us first. Giant dishes dominate the horizon as we drive five miles through rain from the village of Mullion. Inside the security gates, we park and run for the shelter of the low buildings which huddle in the shadow of colossal antennas which brought Britain into the age of space communications, nearly 60 years ago. Goonhilly started in 1962, with a 26m (85ft) dish built to communicate with the new Telstar satellite. The entire 1,100-tonne structure can rotate through 360 degrees in three minutes and be positioned to within 1/100 of a degree.

Marine engineers from nearby Falmouth docks helped to build the antenna, and Goonhilly staff call this structure Goonhilly 1 (GHY-1). The rest of the world prefers to call it Arthur, a mythical reference which continued with the larger (32m) Merlin dish a short distance away, known to Goonhilly staff as Goonhilly 3 (GHY-3).

This was once the largest satellite earth station in the world, and a jewel in the crown of international operator BT. From the 1960s to the 1980s, Arthur and Merlin were joined by another 60 antennas, as Goonhilly brought news from the world, of Moon landings and Olympics.

But ten years ago it was dismissed as a historical relic, made obsolete by submarine fiber cables and improvements in satellites. Geostationary satellites and low earth orbit satellites don't need giant equipment anymore.

In 2008 the earth station closed. A trickle of tourists came to gaze in awe at Arthur and Merlin. Then in 2010 the visitor center closed - and that should have been that. But in 2011, Goonhilly began to stir. A new business, Goonhilly Earth Station Ltd, took over the site, and began installing new technology which builds on the old infrastructure and stakes a claim on an emerging market for space communications technology. In 2018, it got a £24 million ($31m) investment from financial services tycoon Peter Hargreaves.

The new Goonhilly is based on a resurgence of interest in space, and on the vision of its CEO, satellite engineer and entrepreneur Ian Jones, who leads the new body. "I was running a small business, and I was fed up with feast and famine - building

Issue 35 • Dec/Jan 2020 13


Cover feature

customers, and then having to find the next customer," says Jones. "I wanted something with a broader scope, that had a range of revenue, in terms of breadth and value." Goonhilly is certainly addressing a diverse set of opportunities - alongside its space projects, it includes a surprising spin-off: a pitch to apply astronomical thinking to everyday AI processing problems, in a specialized data center on the site.

Around 20 percent of Goonhilly's business is in its home territory, communicating with "near space" objects orbiting the earth. The most high-profile of these is the International Space Station (ISS) which has its own dedicated dish at Goonhilly. The most lucrative will be low earth orbit satellites for observation.

"We've got monthly revenue coming in from some of our antennas," says Jones. "Some people rent an entire antenna, and the associated processing for a long term contract."

In this part of the business, it's got a surprising competitor, as Amazon Web Services (AWS) has launched Ground Station-as-a-Service, a plan to add satellite services to its cloud data centers (see p18). Jones has no illusions about the size of its rival, but believes that Goonhilly has advantages in its home turf.

A similar chunk of Goonhilly's business is consultancy: handling the kind of projects which Jones dealt with in his previous company, Orbit Research, developing systems for satellite communications. "We can't recruit people fast enough," he says. "There's a huge amount of work in how you apply signal processing."

Related to this, Goonhilly also plans to manufacture signal processing equipment for satellite ground stations, using more of the site's legacy, and a skills network which could turn this into a solid revenue stream.

Our tour of the site takes in fabulous laboratories where microwave equipment was designed and built by hand. We also see a product under development: a mobile satellite ground station, complete with dish, power supply and signal processing, which packs inside a rugged shipping container to be deployed anywhere in the world.

Another 20 percent of the site's business is a completely new venture, already off to a good start: "deep space" communications. There's an increasing number of projects sending probes outside earth orbit, to the moon and beyond. Goonhilly plans to be the first private enterprise offering technically advanced communications projects for these efforts.

"There aren't any commercial providers of deep space services at the moment," Jones tells us. "This is something that's only been tackled by space agencies. Projects from private enterprise, or from countries other than the US, Europe and China, have to buy communications capability from NASA or the ESA."

It has started with upgrades to two dishes. An £8.4m ($10.8m) project is turning the 32m GHY-6 into a deep space communicator. Meanwhile, GHY-3 (aka Merlin) has a new super-cooled receiver for radio-astronomy, sensitive enough to pick up signals from remote stars. And new research has found a way to adapt this detector for two-way

14 DCD Magazine • datacenterdynamics.com


"I find we are got into an odd part of the Kingdom" - John Bradley

Astronomy and communications are deeply enmeshed with the Lizard peninsula. The land near Goonhilly is dotted with megaliths, including the Dry Tree standing stone right next to the Earth Station. "Many of the sites within the area are linked to the solar and lunar cycles," says archaeoastronomer and local resident Carolyn Kennett. "Some have links to the stellar cycle."

In 1769, the Astronomer Royal, Nevil Maskelyne, sent a rising scientist, John Bradley, to the Lizard to observe a rare Transit of Venus and other astronomical events. The Lizard is the southernmost part of the British mainland, with a treacherous coastline that was a major hazard for shipping.

Bradley's mission was to gather accurate maps of the Lizard, and astronomical data which would precisely establish its longitude and latitude coordinates. "Accurate coordinates for the Lizard Point would be of great value for mariners, for at that time all they had to rely on were rough estimates and bad charts," says Kennett in a paper on Bradley's expedition (published in Antiquarian Astronomer, 2015).

From a temporary base at the Lizard lighthouse, Bradley set up an observatory, working with poor weather, limited visibility, and workers who spoke Cornish, not English.

"We have not had one day since we have been at the Lizard without some rain," Bradley wrote to Maskelyne on 4 June, after miraculous breaks in the clouds enabled him to observe the Transit the day before, and a partial solar eclipse that morning. "I do assure you I am so lame with my old rheumatic complaint that I can scarce crawl about."

130 years later, Guglielmo Marconi launched radio communications in experimental stations on the Lizard. In January 1901, he sent wireless signals over the horizon, receiving a signal from the Isle of Wight, 180 miles away. Later that year he sent the first transatlantic signal, from Poldhu, on the West of the Lizard to Newfoundland, a distance of 2,100 miles. This paved the way for Cornwall's future role in radio, satellite and submarine fiber communications.

Photographer: Graham Gaunt

Issue 35 • Dec/Jan 2020 15


Cover feature

communications, so one of the site's older dishes can join the newest space projects.

The final section of the Goonhilly plan is a departure, into digital infrastructure. Jones expects data centers and digital infrastructure to deliver another 20 percent of the organization's business. In the summer of 2019, Goonhilly opened two data halls which have 2MW of power available, which can be extended to 5MW.

There's currently an Nvidia DGX-1 supercomputer on site, and Jones plans to fill those halls with AI and machine learning loads, with the help of his head of data centers and cloud, Chris Roberts. There's an enclosed modular aisle of racks in one of the halls, fitted by HPE.

One might think Goonhilly's location is too remote for this job, but it has excellent network connections, some inherited from its BT days, and others installed under the new regime. Goonhilly has a direct termination on the SEA ME WE 3 cable, which connects from South East Asia and the Middle East, and lands in Cornwall. "It's only 180ms to India," says Jones. Goonhilly is also on Janet, the UK's Joint Academic Network.

The site infrastructure is solid. The data halls are in sturdy structures originally built for satellite support services, and a network operations center (NOC) has been built with a full view of the site. There's a comms room that used to be the main egress and ingress point for the UK, which had 200 cabinets labeled by country. Now it has been stripped back, and is becoming a meet-me room.

Goonhilly Earth Station has inherited an 11kV ring round the site, which was designed to support the heavy motors and signal systems, but is just as good for digital equipment. There are four 1MW diesels along with batteries for backup: BT would not have allowed the signal to be lost during a major event carried by satellite.

The site is right next to major wind power resources, along with a planned solar farm. Roberts also has an immersion cooled SmartPod cabinet from Submer. It's currently a demonstration system, with GPUs and AMD CPUs provided through French HPC specialist 2CRSI, but Roberts hopes to open a "fully green" supercomputer, powered directly by energy from a modular solar farm, where each solar panel is backed by batteries.

Meanwhile, the first real customers for the Goonhilly data centers are in the space field. A team from Hertfordshire University is taking algorithms developed by radioastronomers to detect distant galaxies obscured by background noise, and using Goonhilly's Nvidia supercomputer to repurpose them. Now they can take images from earth observation satellites, and spot features which were previously obscured by clouds.

Jones foresees a market for satellite operators needing AI services to extract useful data from the vast quantities which will come down from the sky. Different operators will be looking for different things within that data, perhaps following shoals of fish for conservation, or tracking weather to optimize agriculture. They will use AI systems to find what they are looking for, and Goonhilly will offer them a double benefit.

Firstly, they can process the data right there, and avoid network charges incurred by sending huge quantities of data to a colocation space in London or another hub. "Projects are starting to create data lakes that exceed those of Google and Facebook," says Jones. "The Square Kilometre Array is looking at an exabyte of data per year. LEO constellations can produce tens of megabits per second."

And secondly, Goonhilly has access to expertise in finding the needle in those data haystacks. "We understand the maths used by satellite communications engineers and radio astronomers - and that maths happens to be the same as that for machine learning," says Jones. "There's a completely different set of jargon used to describe it, but once you've done the translation, you can see they're the same thing, and that is actually very powerful."

Beyond these space applications, Goonhilly sees wider applications: "There's a convergence between the satellite world and the data center world," says Roberts.

Vast numbers of AI problems are about finding data in a noisy signal, using matched filtering, says Jones: "You can apply techniques from communications engineering and from computer science to get that signal back. And that's the sort of thing radio astronomers can do. And because of the unique connectivity we have to the astronomy world, we have the ability to pull that data in and build a data service around it."

The training and model development stages of AI involve running a model for months at a time, to derive rules which can be applied in the field. Jones and Roberts believe that even applications like autonomous vehicles, with no link to space, will eventually come to Goonhilly: "We believe that, regardless of whether it's satellite or not, we can run that training model at Goonhilly, and it will be more cost-effective than running it on Amazon and GCP," says Roberts.

Not all the expertise is at Goonhilly itself: there's a second office at Surrey Satellite Technology, a Surrey University spin-off at Farnborough.

But Jones says there's no difficulty getting skilled people to work at Goonhilly: there's a move to create a space port at nearby Newquay airport (which has regular flights to London). It's hosted a conference on deep space communications, and earlier in 2019, 3,500 people came to Goonhilly for a spectacular science and arts festival on the anniversary of the Apollo landing.

"The fact that Goonhilly is in the depths of Cornwall is part of what's allowed it to survive," says Jones. "If Goonhilly was in the Southeast, it would have just been sold off and subsumed into the city."

Photographer: Graham Gaunt

16 DCD Magazine • datacenterdynamics.com


The Goldilocks spot

Fortuitously, Goonhilly's location, at 50 degrees North and five degrees West, is good for satellite technologies developed after it was established. "We can see most of the population of the earth from a single geostationary satellite from here," says Jones. "We can see all of the Americas, all of Europe, all of Africa and the Middle East, and most of Asia, and even Australia, from a single satellite hop."

50 degrees is the inclination angle of the International Space Station, so Goonhilly is the perfect location for communicating with and tracking the ISS. And low earth orbit (LEO) satellites are generally in polar orbits which converge at the North and South Poles, and spread out at the equator, covering the whole earth as it rotates beneath them. At Goonhilly's latitude, the orbits are bunched together closely enough so the station can reach each one every few times it orbits.

It's also good for satellites serving specific areas: "Developing nations are in middle latitudes and developed nations are [mostly] in the North," says Jones. "We're just about South enough to be the link between the developed and developing nations: we're in the Goldilocks spot."

Issue 35 • Dec/Jan 2020 17


AWS Ground Station

Amazon’s grounded ambitions

Sebastian Moss Deputy Editor

Amazon Web Services hopes to dominate the ground station market by following the cloud model that has served it so well before, Sebastian Moss reports

The battle for space is being waged on the ground. As more and more countries and corporations find reasons to send satellites into our skies, a fight is underway to win the business of connecting the systems back down to earth.

Traditionally, ground stations have been run by the major satellite companies like Inmarsat and Iridium, or by countries like the US. Satellite operators have to either build their own stations, or lease antennas at the site. That could change, with Amazon Web Services hoping to mirror the success it has had with the shift from enterprise data centers to the cloud by renting antennas out to users by the minute. This Ground Station-as-a-service, the company claims, could save users up to 80 percent of the costs.

"It's actually striking to me how similar this is to the cloud adoption of years ago," Shayn Hawthorne, AWS Ground Station general manager, told DCD. "In those days, you had start-ups where maybe they had no [data center] capability, and thus it made it very easy for them to jump into the cloud. But you also have some established, really capable customers who built their own on-premises capabilities, but used cloud for extra capacity."

He expects the same to happen with Ground Station, where satellite start-ups like Myriota and Capella Space will use AWS for most, if not all, of their connectivity needs. Larger, established - but unnamed - satellite companies are equally "looking at using AWS Ground Station as dynamically scalable, extra capacity to support new needs that they didn't plan when they originally built their architecture," Hawthorne claimed.

In May 2019, AWS launched ground stations in Ohio and Oregon, and followed up with a site in Bahrain in November. It expects to operate 10 such sites by the end of the year. "And that's going to give us the ability to have downlink capabilities in the European area, in

the Middle East, in the African area, Southeast Asia, Asia proper, Australia, and then South America, and then back to the United States,” Hawthorne said. Once data is received by the ground station, it is sent to nearby AWS data centers that are at most 9.5 milliseconds away, he said. Theoretically, that’s up to 2,000km (or 1,200 miles) away given light travels 200,000km per second in glass fiber - but fiber is generally not deployed in uninterrupted straight lines. “So we're close enough to our data centers that we have sub-WAN latency getting into each of the regions that our ground stations are connected with. We do a little bit of Edge processing, and then we get that data into the cloud.” That cloud is, of course, Amazon’s own - its push to dominate the ground station market is directly tied to its cloud service. “You can log into AWS and you can actually get to our ground station console,” Hawthorne said. “And then you can start to do an onboard depending on if you have a satellite or a space data processing capability.” The GS service directly transfers data into an AWS S3 bucket, Hawthorne said. “A customer can always then move it out of AWS to anywhere they want to go. But it's going to start by going through an antenna into AWS, like any other service.” While Hawthorne would not discuss it, AWS GS is almost certainly tied to another vastly ambitious effort by Amazon to control the global network: Project Kuiper. Led by Rajeev Badyal, previously VP of satellites at SpaceX, Kuiper aims to operate some 3,236 satellites that offer high-speed broadband connectivity to the earthlings below. The Kuiper System will “leverage Amazon’s terrestrial networking infrastructure to deliver secure, high speed, low latency broadband services for customers,” an FCC filing states. But to reach the terrestrial infrastructure of AWS data centers and fiber investments, Kuiper will use “Gateway earth station sites distributed throughout the Kuiper System’s service area.” Technically, the filing could be talking about

18 DCD Magazine • datacenterdynamics.com

a different set of sites, but that seems unlikely. The only snag, currently, is that AWS GS only supports X-Band, S-Band, and UHF band frequency - while Kuiper will use Ka-band. “We work in three specific frequency bands that the low earth orbit small sat CubeSat customers focus on right now,” Hawthorne said. “These are the common bands for a lot of the new start-ups and innovative systems that are being built by a bunch of companies that are funded by venture capital, both in the US as well as in Europe and in Asia.” Hawthorne declined to comment on when GS will support Ka, only noting that “we’re willing to look into integrating feature frequencies for customers based on their needs.” With the larger satellite companies like Iridium and Inmarsat using L-Band, “we would try to figure out how to put those types of antennas in to meet their needs as well, if we had customers who wanted it. As customers come in with other bands, we will move into that. I can even see in the future us someday supporting other types of communication mechanisms like Optical RF.” Antennas form another battleground for Amazon, its competitors and its customers. The company is currently building out the ground stations in partnership with defense contractor Lockheed Martin, but Hawthorne was keen to note the partnership was far from exclusive, and likely to be changed should superior antennas be found. “We initially began collaborating with Lockheed Martin because we're very interested in many different types of antenna technologies,” Hawthorne said. “We're very interested in meeting with a number of companies in order to come up with the most cost-effective, but maximum capability, antennas we can get, because right now we will have hundreds of antennas after installing them at our ground stations for the next ten years.” Parabolic antennas at the moment require individual antennas to communicate with individual satellites. “If we can instead move into some antennas that allow you to use one antenna to communicate with multiple satellites at the same time, then it will really help us.” Hawthorne added: “And so we don't have any limits - except for the physics of communicating with each satellite, one at a time from each antenna.”
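As a rough back-of-the-envelope check on that latency figure, here is a minimal Python sketch; the speed of light in fiber is the value quoted above, and the straight-line assumption is the same simplification the article flags.

```python
# Rough check of the latency arithmetic above: how far could a ground
# station be from an AWS region at 9.5 ms one-way over ideal fiber?
SPEED_IN_FIBER_KM_PER_S = 200_000  # light in glass fiber, as quoted in the article
latency_s = 9.5e-3                  # 9.5 milliseconds, one way

upper_bound_km = SPEED_IN_FIBER_KM_PER_S * latency_s
print(f"Straight-line upper bound: {upper_bound_km:,.0f} km")  # 1,900 km, close to the 2,000 km cited
# Real fiber routes are not straight lines, so the practical radius is smaller.
```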


INTELLIGENT MEDIUM POWERBAR Delivering power safely and efficiently in mission critical environments. E+I Engineering's innovative iMPB product is an open channel busway system designed for use in data centres and other mission critical environments. E+I engineering have completed iMPB installation in data centres across the globe where security and flexibility of electrical distribution is paramount. iMPB has been engineered with the safety of the installer and user in mind.

For more information about our full range of products please contact us at info@e-i-eng.com Donegal, Ireland | South Carolina, USA | Ras al-Khaimah, UAE

WWW.E-I-ENG.COM


CEO focus

Realty check The biggest wholesale data center company in the world, Digital Realty, has ambitions to dominate other sectors. CEO Bill Stein spoke to Peter Judge

Peter Judge Global Editor

Let's start by saying this: Digital Realty is the biggest data center company in the world. It's also the originator of the business model which other leading players have adopted: the data center REIT (real estate investment trust). So DCD welcomed a chance to quiz its leader William A (Bill) Stein.

In wholesale colocation, a 2017 study by Structure Research gave Digital 20.5 percent of the market, a bigger share than the next three players combined. In the two years since, its acquisitions have included the number three player, Dupont Fabros (which had six percent), bought for $7.8 billion in 2017.

Spread the net wider, and comparisons get tricky, as data center loads shift between the cloud and enterprise spaces. Technavio says the top monolithic hyperscale cloud providers, Google and Amazon Web Services (AWS), have the most data center space. And surveys that focus on retail colocation providers serving the enterprise market always put Equinix top. Digital has roughly $3 billion in revenue and 1,500 employees, while Equinix turns over $4bn and employs 7,000.

But consider: Digital is essentially a real estate company, and its facilities underpin both these markets. It owns more than

20 DCD Magazine • datacenterdynamics.com

260 facilities round the world with more than 35 million sq ft of space, and a market capitalization of around $35bn. It offers "build-to-suit" facilities and powered shells (or "powered base buildings") to hyperscalers like Google, and to retailers like Equinix, which is one of Digital's top ten customers.

Back in 2004, private equity firm GI Partners created the concept of a data center REIT. It took 21 facilities it had bought in bankruptcy auctions, launched Digital Realty to manage them and floated it on the stock market. Bill Stein joined Digital from GI, and rose through the roles of chief financial officer and chief investment officer, to become CEO in 2014.

Data centers are still considered a niche (an "alternative asset category") within REITs, alongside the traditional major "food groups" of office, retail and industrial, says Stein. But the company is in the top ten of all REITs, and Stein is the first executive from an alternative asset category to chair the trade body Nareit. Meanwhile, other big players including Equinix have restructured as REITs, and Digital has pressed on with a combination of building and acquisition. "We're a hybrid of commercial real estate and technology," says Stein.

In the job of building data centers, Stein says Digital's strengths come from its real estate roots: "We have the most efficient supply chain. We have the masterplanning, and the modularity, to build out these facilities."

"We have the lowest cost of capital, which gives us a huge advantage," says Stein, proudly comparing Digital's credit rating (BBB from Fitch) with CyrusOne and Equinix (both BBB-). This enables the company to stockpile land in key areas, he says: "We bought 400 acres in Northern Virginia at half a million dollars per acre, while others are paying upwards of $2m per acre. We can't buy generators at 25 percent, but we can take that much land and warehouse it."

As a result, Digital has a stunning campus in Ashburn, where the 100MW Building L sits alongside a production line of buildings in various states from prepared ground to shell, aiming to deliver capacity to hyperscale customers just as fast as is humanly possible.

This is where planning comes in. Digital has enough projects in the pipeline to keep construction companies and infrastructure providers busy: "When they finish one project, right away we can move them to another. That's important: if they have to stop for a while, that will cost." Contractors on the Ashburn campus may have been working there for years, on a succession of buildings, but Stein is looking beyond this.
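As a quick, purely illustrative check on the land-banking arithmetic Stein quotes, the only inputs below are the per-acre figures he gives above.

```python
# Illustrative arithmetic on the Ashburn land bank figures quoted by Stein.
acres = 400
digital_price_per_acre = 0.5e6   # "half a million dollars per acre"
market_price_per_acre = 2.0e6    # "upwards of $2m per acre" paid by others

saving = acres * (market_price_per_acre - digital_price_per_acre)
print(f"Implied saving: ${saving / 1e6:.0f}m")  # $600m, before carrying costs
```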


“They are running out of land in Loudoun County,” observes Stein. Some providers are promoting a new hotspot 100 miles south in Richmond, fed by the the new Virginia Beach cable landing, but he thinks Digital’s ability to keep buying close to the Washington hub gives it a lead: “Richmond might be too far for availability. Manassas [30 miles from Washington and Loudoun] will be the next hot market in Northern Virginia,” he predicts. Sometimes Digital will use partners in new territories: for instance it has a joint venture with Mitsubishi in Japan, and one with Brookfield to handle the Ascenty facilities in Brazil, which it bought for $1.8bn in 2018. In 2019, it did a long-term joint venture with Singapore’s Mapletree, selling ten of its US powered-shell facilities, and sharing three others into joint ownership. The Mapletree deal was a clear effort to rationalize Digital’s assets, realizing assets which had matured, and gathering resources to enable forward looking moves like the Interxion bid. “From a real estate viewpoint, it’s a good thing to take current stabilized assets, sell them and recycle that,” explains Stein. The powered-shell buildings were full and generating a six percent capitalization rate: “The buildings are basically full, and there is very little opportunity to create additional value in the asset,” he says. “Mapletree appreciates this rich return, but our core investors are looking for a higher return. Our guidance is a nine to eleven percent cap rate. That’s basic real estate.” As well as looking for returns, Digital has been strategic in looking for well-connected high profile sites, says Stein: “50 East Cermak in Chicago is an incredibly well connected, network-dense asset. In New York, we lease data centers in 60 Hudson and 32 Avenue of the Americas. These are beautiful buildings with very classic architecture.” These attributes are important in the new direction in which Stein is leading Digital: into retail colocation, dealing directly with enterprise customers alongside its leasing activities. Before 2015, it dabbled in retail, but its Telx acquisition that year for $1.9bn brought with it a strong enterprise customer base. This doubled Digital’s retail colo holdings, and was followed by other purchases including eight facilities bought from Equinix in 2016 for $874 million. In 2019, Digital made its biggest retail play yet. In October it announced the acquisition of Interxion for $8.4bn (pending regulatory approval), and in November, at its MarketplaceLIVE event in New York, it launched PlatformDIGITAL, a strategy to support enterprises’ cloud and data center estates. Both the Telx and Interxion acquisitions

included buildings owned by Digital but leased to its partner, Multi-tenant data centers (MTDCs) are now a major part of its portfolio. Digital is definitely aiming to compete with Equinix. The Interxion merger brings it a step closer to parity in enterprise colo floor space, while PlatformDIGITAL is a bid to eclipse Equinix’s bid to encourage an interconnected ecosystem in its facilities, which goes by the name of Platform Equinix. In response, Equinix is making a foray into Digital’s territory, with a joint venture to build wholesale colocation in Europe. “I would say we have approached the business from opposite sides,” says Stein. “They started out as a colocation and interconnection provider, and are only recently getting into the large footprint space. They are still far and away a colo company, they have high margins and their portfolio is predominantly leased.” By contrast, Digital is still growing in

"We bought 400 acres in Northern Virginia at half a million dollars per acre, while others are paying upwards of $2m per acre" colocation. “Our colo revenues are about 30 percent, which is nowhere near what Equinix has, but we are growing that part of the business.” This is the preferred way to approach it, he says: “We own the building all the way up to the cage. It is easier to go up the stack than down. If your core business was to lease a floor it is harder to go to owning the building.” Entering at this stage, with a diverse portfolio, Stein reckons Digital can match changing enterprise needs better, by offering options for cloud services or larger spaces within Platform DIGITAL: “Customers are outgrowing colo only environments. We provide a nice way to expand to larger footprints. The lines are blurring. We remove the complexity so the customer can easily achieve a hybrid multi-cloud architecture.” He’s also pretty clear that Digital will be undercutting Equinix in many areas, including the prices charged for crossconnects within the ecosystem. “We are working with low margins and an efficient supply chain. One of the advantages we have because of our scale is the absolute best pricing you can get - all the way down the supply chain.”

The wider picture

Stein is concerned with the same wider factors as the rest of the industry: “When I took over as CEO in 2014, I emphasized sustainability initiatives, particularly in sourcing. As chair of Nareit, I set the policy agenda, with government, and sustainability is a major issue. It’s not just power, it’s also water, and the buildings themselves. We try to use recycled material where possible. We’ve also been trying to be sensitive in other areas, such as governance and social responsibility.”

On diversity, Digital Realty has one woman on its board and is looking to increase that. “Within the company, this is hard. Real estate is male dominated, and data centers even more so. We are making an effort to recruit as many women as we can in entry level positions.” It’s still hard to find women in mid-level positions, he says, so it’s all about recruitment, training and development.

Stein is surprisingly relaxed about the idea of expansion into new territories, perhaps because things are working well in his existing markets. “We are not building or investing in China. We’ve looked at it off and on, and we do business with Chinese customers, but we just haven’t found the right structure.” Similarly, the company has only a small sub-Saharan investment (in Kenya) and is at an exploratory stage with India, working on a feasibility study with the Adani Group.



Industry survey

Efficiency begins to tell

Data center professionals are giving efficiency an increasingly high priority, a DCD survey of 100 industry figures found

Peter Judge Global Editor

[Chart: Challenges of operating respondents' data centers. Source: DCD Research 2020]

Efficiency is a great rallying cry. By saving energy, we save money and we reduce our emissions at the same time. It’s good for the planet and for our margins. And yet, however much the data center sector pays lip service to it, efficiency has never been a top priority.

Consider the critical nature of the data held in these facilities, and the consequences of a breach. Imagine the revenue generated by the applications running in the racks: any kind of failure would incur losses that far outweigh any savings from improved energy efficiency. It’s not surprising, then, that reliability and security have always been closest to a data center operator’s heart, with efficiency seen as an unnecessary risk.

Given this, it’s actually a big surprise to

see “efficiency” (41 percent) polling higher than “managing risk” (36 percent) in our power infrastructure opinion poll. DCD asked 100 decision-makers about their technology plans and concerns, and the results showed a definite shift within a traditionally conservative sector. The responses to our email questionnaire came predominantly from data center operators and service providers (colocation and cloud services), representing 70 percent of our sample, so we have a solid view of the key challenges facing this group.

Historically, the sector has been siloed, with mechanical and electrical infrastructure run by teams separate from the networking and IT functions. This separation has been a major block to efficiency, as individual domains in a siloed business will not see the full costs of their operation. If the IT people


see power and cooling as a free gift from their organization, they will not work to conserve those resources. So it is good news to see signs that the industry is becoming multi-disciplinary, and the old silos are being eroded. Our survey had high responses from people with responsibility for the facilities side (power 56 percent, cooling 55 percent, power protection 49 percent and building management 48 percent) and also from those running the IT side (servers 55 percent, and networking 41 percent). We allowed multiple responses and, in an encouraging number of cases, found a lot of people with a foot in both worlds. We also allowed multiple answers to the crucial question: What do you see as the most important challenges in operating your data centers? As noted above, efficiency (41



[Chart: Technologies deployed in their data centers, 2019 vs 2020 (planned) - IoT systems, AI/ML management systems, big data/analytics, liquid cooling, open sourced equipment, DCIM, software-defined infrastructure, and management as a service (data center, power etc.). Source: DCD Research 2020]

percent) now gets priority more often than managing risks (36 percent). However, both still trail behind the classic business priorities that would make or break a professional’s career in almost any sector: operational costs (66 percent) and capacity planning (48 percent).

There are plenty of other challenges, of course: cyber threats loomed large for 26 percent of people, and upgrade costs for 25 percent. Compliance, staff recruitment, rack densities, and a few other issues featured around the 20 percent mark. Near the bottom of the “challenges” list, we note with approval that only 13 percent are worried about that old division between IT and facilities. It is also interesting that the supposed hot topic of 2019, Edge computing, is almost completely off the radar for these respondents: only five percent rated it as one of their top challenges - at least, for now.

The level of technology deployment more or less followed what an informed industry insider would expect. Data center infrastructure management (DCIM), a mature and productive tool in the data center toolbox, has been deployed by 44 percent of our respondents. Likewise, cloud-based infrastructure technologies (management-as-a-service) are in use by a solid 37 percent. Software-defined infrastructure has been promoted heavily, and is emerging: 26 percent of our sample are using it. Two “hot” technologies, IoT and big data, are languishing at around 20 percent, but this lack of enthusiasm may reflect a perception that, while these may be revolutionary applications, data centers are not the first place to implement them.

A surprise for us was the lack of enthusiasm for open sourced equipment, in use by only 17 percent of our sample. However, at the moment open source hardware like that from the Open Compute Project (OCP) mostly applies to webscale companies with monolithic facilities - and remember that half our respondents are in the enterprise space, with multiple applications and therefore more traditional IT hardware.

Given the high take-up of DCIM within our audience, we probed further to find exactly what people are using these products for. Energy management is top at 46 percent, with security close behind on 41 percent. Perhaps the mechanical part of the system is harder for DCIM to deal with, or is a lower priority for users: cooling is only managed by DCIM at 36 percent of our sample. Maintenance is under DCIM control at 29 percent, and capacity planning at 27 percent.

For DCIM vendors, here is a useful statistic: the top feature customers want is ease of use. At 57 percent of respondents, this gets a higher response than any actual function that the software might deliver. A sizeable 55 percent want software that can be customized to their needs, and 46 percent want “actionable intelligence” (i.e. useful data). With demands increasing, and customers wanting to focus on core competencies, they don’t have time to waste; they want products that give actual results, are straightforward to use, and are not so inflexible that they can’t plug their existing kit into them. Given the high level of DCIM adoption, these products appear to be creating value, and the next generation of DCIM is on the horizon.

[Chart: Why they use DCIM and building management software - energy management, security, capacity planning, cooling & heat removal, operational & maintenance scheduling, equipment & system integration, capacity management, equipment specification & procurement, and “we don’t use.” Source: DCD Research 2020]

This survey was sponsored by Schneider Electric, but the questions, audience gathering, and analysis were produced independently by DCD.



Cori and the wildfires

California burnin' Climate change is here. Sebastian Moss talks to NERSC about shutting down Cori amid power blackouts

As California’s forests glowed, lights at NERSC began to switch off. For the second time in as many weeks, one of the world’s most powerful supercomputers was carefully being shut down. Climate change, once an abstraction being dispassionately simulated on the 30 petaflops Cori system, had manifested into reality. The machine, a pinnacle creation of mankind - a species that prioritized rapid progress over sustainable development - had been waylaid by the impact of that progress on the world.

Like the families that prepared to flee California’s suburbs, Cori could do nothing to stop the effects of anthropogenic climate change. It was truly powerless. In October, after an unusually long dry spell, remarkably high winds, and arid conditions, utility PG&E grew increasingly concerned a massive wildfire could be caused by a broken power line. Partly responsible for the deadliest and most destructive wildfire in Californian history, 2018’s Camp Fire - which cost $16.5 billion and led to at least 85 civilian deaths - PG&E currently faces bankruptcy and a potential state takeover. While climate change has exacerbated the causes of wildfires, and will increasingly


Sebastian Moss Deputy Editor

make things worse, PG&E has been criticized for its lack of preparation, not clearing trees and brush from around power lines, and not having enough emergency staff. This year, PG&E decided the best way to mitigate the risk of its grid sparking another fire was to switch off parts of that grid, preemptively shutting down power to nearly three million people in central and Northern California during this fire season. Still, despite the precautions, some fires raged, including the Kincade Fire that burned 77,758 acres in Sonoma County. Among those caught up in two separate multi-day power cuts was the Lawrence Berkeley National Laboratory (LBNL), home


to the National Energy Research Scientific Computing Center (NERSC) and an unlucky Cori, the thirteenth most powerful supercomputer in the world. “There was some warning,” Professor Katherine Yelick, associate laboratory director for computing sciences at LBNL, told DCD. “It was about five hours.” It takes around two to three hours to shut down Cori, NERSC director Dr. Sudip Dosanjh said. “If there's a sudden power outage, you can have issues with the system, maybe some parts fail, so just to be on the safe side, we decided both times to go ahead and bring the big system down.” NERSC has uninterruptible power supplies and on-site generators, “but that's not enough to power Cori,” which consumes 3.9MW. “It's enough to power the network and some file systems,” Dosanjh said.

"There's a big difference between having a plan in place and then having to execute it" “So we kept the auxiliary services up during the entire outage, including the network, and something we call Spin,” which can be used to deploy websites and science gateways, workflow managers, databases and key-value stores. Those systems could have stayed up indefinitely, as long as there is enough available fuel to refill the generators’ day tanks every six to eight hours. The first shutdown, beginning on October 10, “was the first time that something like this had actually happened,” Yelick said. The lab had emergency procedures in place for similar events, but “there's a big difference between having a plan in place and then having to execute it,” she admitted. “We did have a plan, but it wasn't as though this was really expected.” NERSC “certainly learned a lot during the first shutdown that helped with the second,” which began on October 26, she said. “We learned certain things about communications and the generators and how each one works - those kinds of things.” Another crucial lesson was how many people Emergency Operations Center (EOC) fielded to deal with LBNL’s power cut, which also took down other science resources, including the DNA sequencing lab, the Molecular Foundry, and the Advanced Light Source.

“[The first time] we didn't have a large number of people that were cycling through the EOC,” Yelick said. “And so I think they got pretty tired. We added some additional people the second time. [In future] we would want to make sure that there's enough people that are able to bring the systems up and are confident of doing it on their own, so that we don't overly fatigue a small group of people.” The second time, NERSC was even able to do some maintenance, doing tests on the upcoming community file system ‘Storage 2020.’ Roughly 100 personnel were involved with the emergency operations at the lab, of which around 20 were actually on site. “We're trying to collect that list of exactly how many people were involved right now,” Yelick said. Cori itself only had a few staffers working on it during the shutdown and return, including employees from the supercomputer’s manufacturer, Cray. The process of returning everything online after power came back took six to eight hours both times. With various teams interacting, many of them working remotely, communications infrastructure was a key concern. Luckily, cell towers and Internet connectivity mainly stayed online during both outages. “We were using cell phones,” Yelick said. “That's one of the things that we added in the second outage. And most people work pretty hard to find some way of communicating if they can't, even if it means driving someplace.” “Long before the power went off, the emergency response teams were using email, text alert, their Slack channel, Twitter,” Computing Sciences Area communications manager Carol Pott said.

Keeping the fires at bay During both outages, LBNL and NERSC were far from the flames themselves, with the closest fires petering out some 10 miles away. "There were never any fires that were immediately putting the lab property at risk," lab director Yelick said. But, as the property backs onto a forested area, the fire department carries out annual checks to cut down on risks. “There's actually a lot of work that's done to keep the vegetation down to keep the fire hazards as low as possible,” Dosanjh said. To make things safer still, the lab “rents a herd of goats that come in and eat a lot of the underbrush," Yelick added.

“They set up a website and other communications options for people to get the latest alerts. They were trying to cover as many bases as possible to communicate with people who might not have access to the Internet or had other limitations.” Dosanjh added: “Now, if there were a broader outage - one that affected the entire East Bay, for example - that would be more problematic for all the staff just in terms of being able to get access to things.” Communication travels both ways, and NERSC’s efforts to keep services online prompted an outpouring of encouragement from many of the 7,000 researchers that use its systems. “I was really pleasantly surprised at all the emails and support that we got from the community,” Dosanjh said. “The staff worked very hard, they're very, very dedicated to the lab's mission, which is

Photography: Lawrence Berkeley National Laboratory staff



Cori and the wildfires

to further human knowledge of science.” One of the many cruel ironies of the shutdown of Cori was that it is one of the tools necessary to fight the ravages of a planet off-kilter. One workload on Cori may be simulating energy storage solutions that help us break free from our addiction to fossil fuels. Another may be studying the impact of our seemingly inevitable inability to escape our addictive nature. Cori has been calculating how high the seas will rise, and how large the tornadoes could grow. Just one week before Cori’s first shutdown, it was actually simulating how the forests would burn. "Results from the high-resolution model show counterintuitive feedbacks that occur following a wildfire and allow us to identify the regions most sensitive to wildfire conditions, as well as the hydrologic processes that are most affected," an October paper studying Camp Fire by LBNL researchers Erica R. Siirila-Woodburn and Fadji Zaouna Maina states.

Photography: Lawrence Berkeley National Laboratory staff

"When somebody calls you up and tells you 'Oh, by the way the computers are going to be all down.' You're like, 'Oh crap, what can we do?'”" The Department of Energy “does a lot of simulations of Earth systems,” Yelick said. “So, simulating climate change, as well as looking at alternative materials for solar panels, materials for batteries, and a lot of different aspects of energy solutions.” Some of this work was delayed by the two outages, pushing back valuable research efforts. "At the end of the year, yes there was some lost time for sure," Yelick said, but she stressed that no data was lost, and that due to the normal backlog of jobs to run on NERSC systems, it "for the most part just changes the delay that people were expecting." But NERSC does support some areas of scientific research where time is everything. “There's several where it's a major deal,” said Peter Nugent, LBNL scientist and an astronomy professor at Berkeley. “The ones that the Department of Energy is involved with a lot are at the Light Sources - these are very, very expensive machines that they run and scientists get a slot of time and it can be anywhere from a half a day to a few days. And that's it.

“If they don't have these capabilities there for them, they lose their run. That's a huge expense and a huge loss. But because of the nature of the detectors that they're running there, gathering more and more data, it's not possible for them to process it locally and do everything they want. They need to stream it to one of these HPC centers and get things done.” Nugent’s work is also incredibly time sensitive. “The research that I'm involved with right now uses supercomputers to search for the counterparts to the gravitational wave detections that the LIGO/ Virgo collaboration is making,” he said.


Nugent - senior scientist, division deputy for science engagement, and department head for the computational science department in the computational research division at LBNL - crunches data from the Virgo interferometer in Italy when it spots gravitational wave events, and then tries to capture details on the four meter Victor Blanco telescope in Chile. There’s a problem, however: The gravitational wave discovery “usually comes with a large uncertainty in the sky as to where it would be,” so Nugent has to “start taking a bunch of images to follow these events up, then stream this data up to the


NERSC supercomputers to process it,” and then take more images, as he hunts for signs of the event. “Time is of the essence, these are transient events - they fade very rapidly in the course of 24 hours, so we have to get on them immediately. We have to do this search right away. It's a tremendous amount of data.” When successful, the information gathered can yield important scientific insights. “These are very interesting new discoveries,” Nugent said. “This is the merger of black holes and neutron stars, the latter of which has led to the discovery of where all the elements that are very high on the periodic table - gold, platinum, silver come from. “So when somebody calls you up and tells you 'Oh, by the way, the computers are going to be all down.' You're like, 'Oh crap, what can we do?'” Thankfully, just a few months before, Nugent’s team had already begun to prepare for Cori going down - although at the time, they were thinking of scheduled maintenance.

"Experimentalists come to rely on these HPC centers more and more for doing their data processing" “We were like ‘what happens if an event goes off during those two days that they're down, what can we do?’ Nugent said. “And so we've looked at porting our entire pipeline to a cluster of computers that are run by the IT department at LBNL, known as Lawrencium.” To pull this off, Nugent’s team had already put its code in Dockerized containers, making porting to different systems easier. “We did that earlier in the summer when NERSC was down for maintenance, and it worked out really well. "But then this next thing came up, and we couldn’t use Lawrencium because it [would also go down] when PG&E shut off the power.” The researchers turned to Amazon. “We applied for and received a special educational grant that gave us compute time over there,” Nugent said. “And we were able to - with enough advance notice of when this is going to happen - push all of our data, our reference data and our new data to AWS.” The process worked, but “was sort of last minute,” Nugent said. “It's a real pain, but we managed to get it done and keep it going.”

With more time now, Nugent’s team are looking at other cloud and cloud-like services. “We would love to run it at NERSC all the time, but now we have a backup plan for when this occurs and we’re looking at making it so that it naturally just turns over and goes from one service to another, depending upon the status.” Commercial providers could form a part of the solution, but Nugent hopes to use government systems where possible. “The Department of Energy runs some smaller clusters, so we're going to talk with them about how we could set something like this up in the future,” he said. “This is something that the DOE is certainly very invested in making happen, because sometimes there are bugs and they have to take systems down. “Experimentalists come to rely on these HPC centers more and more for doing their data processing, so they will need to have the capability to shift from one place to another.” He, like many in the HPC community, envisions the ‘super facility,’ a virtual supercomputer that takes advantage of the best aspects of different HPC deployments and services and combines them. “It's this idea that you can use the networking and the streaming of the data to a different resource and process it where you want.” That may take time, by which point NERSC could be home to another huge supercomputer, the 100 peak petaflops Perlmutter system, expected to draw more than 5MW when it launches in late 2020. The system is named for Saul Perlmutter, who won the 2011 Nobel Prize in Physics for observations of supernovae which proved that the expansion of the Universe has been accelerating. “Saul Perlmutter was the guy who hired me out at the lab back in '96,” Nugent said. Perlmutter - the person - is currently looking for more distant supernovae: “Now we have a computer named after somebody who is hunting for the same type of explosions in space that we started with some 20 or so years ago,” said Nugent. “It’s come full circle.” By the time it comes online, or when the ~20MW exascale NERSC-10 system launches in 2024, it is not clear how regular grid power cuts will become. "It is during a very limited period of time where this is an issue," NERSC's Dosanjh said. "Almost all of our rain occurs between November and April, so it's really primarily an issue in October and November. "It's not something we worry about every day, but there are certainly - as we've learned - occasions where you can be dry,

The stars don’t like dust The world's first permanently occupied mountain-top observatory perches on the summit of Mount Hamilton, just east of San Jose. Since 1887, the Lick Observatory has scoured the skies in search of new discoveries. “It’s one of those observatories where pretty much from May to October you can guarantee you're going to get a good night,” Nugent said. “But the number of nights that the observatory has been closed to us in just the past couple years is something that I never encountered before. Due to the smoke particulate matter being too high, we had to close the dome to protect the instruments and the mirror - that's a really, really odd thing.” It’s a new reality the observatory now has to deal with "where, just at the perfect time of year, you have to close because there are too many particulates in the sky,” he said. “Even though it nominally looks like a clear sky, there's just too much crap out there.”

six months after the rain, and there's high wind and high temperatures.” PG&E has warned that it might use preemptive blackouts on its millions of customers for up to a decade, as it catches up on maintenance it should have undertaken years ago. Communities will have to prepare for sudden outages and fear potential fires. But the danger with what happened in California is perhaps not just that of loss of life or property. It is a loss of perspective. It is the danger that this becomes the new normal. “We don't want that,” Nugent said. “We really, really don't want that.” His hope is that “necessity is the mother of invention,” and that the impact of climate change “will get us to do some interesting things because of this.” Within Cori, and its successors, small slivers of replica worlds full of interesting things wink into existence. A weather system here, a turbine engine there. Perhaps in one there is a world where this works out, where a pathway out of destruction is found. But Cori can’t get us there, it can’t change consumption habits, or craft policy proposals. No matter the state of the grid, it can’t change the world beyond its racks. That is our world. We have the power.



The Smartest Battery Choice for Resilient, Profitable Data Centers Reduce Your Risk, Cost, Hassle — with Proven Technology. We get it. Data center leaders and CIOs face endless demands — greater efficiency, agility and operational sustainability — while mitigating risk and lowering costs. Deka Fahrenheit is your answer. It’s an advanced battery technology for conquering your biggest data center battery challenges.


The Deka Fahrenheit Difference Deka Fahrenheit is a long-life, high-tech battery system designed exclusively for fast-paced data centers like yours. Our system provides the most reliable and flexible power protection you need at the most competitive Total Cost of Ownership (TCO) available.

Your Biggest Benefits:

Best TCO for Data Centers: Slash lifetime TCO with lower upfront cost, no battery management system required, longer life and less maintenance.

Proven Longer Life: Field testing and customer experience show an extended battery life that reduces the number of battery replacements over the life of the system.

Environmentally Sustainable: Virtually 100% recyclable. End-of-life value and recycling helps lower cost of new batteries and ensures self-sustaining supply chain.

Safe, Dependable: A technology known for its long history as a safe, reliable, high-performance solution — for added peace of mind.

Flexible, Scalable: Expand and adapt as needed, without making a long-term commitment to an unproven battery technology chosen by a cabinet supplier.

Trusted Battery Experts: Located on over 520 acres in Lyon Station, PA, East Penn is one of the world’s largest and most trusted battery manufacturers. We’ll be there, for the long-term.


Let Facts Drive Your Decision. Balancing your data center needs isn’t easy. Deka Fahrenheit simplifies your battery decision by comparing the TCO of a Deka Fahrenheit battery system to lithium iron phosphate.

[Chart: Overall TCO: Deka Fahrenheit Wins (1036.3kWb - 480VDC Battery System) - total cost of ownership over 1 to 15 years in service, Lithium Iron Phosphate vs Deka Fahrenheit]

Data Center TCO Analysis Factors - 1 MW System (1036.3kWb - 480VDC Battery System)

                                      Lithium Iron Phosphate   Deka Fahrenheit
                                      (10-yr Warranty)         (7-yr Warranty)
Initial System Cost                   $236,420                 $180,489
Maintenance Cost Per Battery          $39                      $5
Replacement Cost Per Battery          $1,750                   $525
Replacement Labor Cost Per Battery    $25                      $40
Battery End-of-Life Value or Cost     $91 cost per kWh         $33 credit per kWh
Total Cost of Ownership (TCO)*        $832,662                 $568,111

Approximately $264,551 in Savings

* Space calculations assume floor space costs of $60 per ft2, and Net Present Value (NPV) of 6%. Space assumptions include 2018 NFPA855 requirements with 4’ aisle. Does not include additional costs for UL9540A design changes or facility insurance for lithium iron phosphate systems. Total decommissioning costs for a 1MW Li-Ion battery based grid energy storage system is estimated at $91,000. Source: EPRI, Recycling and Disposal of Battery-Based Grid Energy Storage Systems: A Preliminary Investigation, B. Westlake. https://www.epri.com/#/pages/summary/000000003002006911/ Terms and conditions: Nothing contained herein, including TCO costs and assumptions utilized, constitute an offer of sale. There is no warranty, express or implied, related to the accuracy of the assumptions or the costs. These assumptions include estimates related to capital and operating expenses, maintenance, product life, initial and replacement product price and labor over a 15-year period. All data subject to change without notice.
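As a rough illustration of how a comparison like this is assembled, the sketch below sums discounted costs over a 15-year service period from per-battery figures of the kind shown in the table. The battery counts, replacement schedule and discount handling are invented assumptions for illustration only; this is not East Penn's model and will not reproduce the table's exact figures, which rest on the detailed assumptions in the footnote.

```python
# Hedged illustration of a total-cost-of-ownership comparison of the kind shown
# in the table above. Battery counts, replacement intervals and the discount
# rate are invented assumptions, NOT East Penn's published methodology.

def tco(initial_cost, batteries, annual_maint_per_batt,
        replacement_cost_per_batt, replacement_years, years=15, npv_rate=0.06):
    """Sum initial cost plus discounted annual costs over the service period."""
    total = initial_cost
    for year in range(1, years + 1):
        cash_flow = annual_maint_per_batt * batteries
        if year in replacement_years:
            cash_flow += replacement_cost_per_batt * batteries
        total += cash_flow / ((1 + npv_rate) ** year)   # discount to present value
    return total

# Hypothetical 1MW string: 240 AGM batteries replaced twice in 15 years, versus
# 60 lithium modules with no replacement inside the period (illustrative only).
agm     = tco(initial_cost=180_489, batteries=240, annual_maint_per_batt=5,
              replacement_cost_per_batt=525 + 40, replacement_years={6, 12})
lithium = tco(initial_cost=236_420, batteries=60, annual_maint_per_batt=39,
              replacement_cost_per_batt=1_750 + 25, replacement_years=set())

print(f"Illustrative 15-year cost, AGM (Fahrenheit-style): ${agm:,.0f}")
print(f"Illustrative 15-year cost, lithium iron phosphate: ${lithium:,.0f}")
```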


Specifications: The High Tech Behind Deka Fahrenheit
• Advanced AGM front access design decreases maintenance, improves safety and longevity
• IPF® Technology — Optimizes capacity and reliability
• Microcat® Catalyst — Increases recombination and prevents dryout
• Sustainably designed for recyclability — End-of-life value enhances profitability
• Exclusive Thermal Management Technology System:
  ° THT™ Plastic — Optimizes internal compression
  ° Helios™ Additive — Lowers float current and corrosion
  ° TempX™ Alloy — Inhibits corrosion

Deka Shield Protection Allow Deka Services to install and maintain your Deka Fahrenheit batteries and your site will receive extended warranty benefits. Deka Services provides full-service turnkey EF&I solutions across North America. Ask East Penn for details.

Do you have the best battery system for your data center? You can’t afford downtime or extra costs. Contact East Penn for a full TCO analysis.


610-682-3263 | www.dekabatteries.com | reservepowersales@dekabatteries.com Deka Road, Lyon Station, PA 19536-0147 USA


The new chip bestiary Max Smolaks looks at the future of AI workloads, and the uncertainty around hardware that’s aiming to dethrone the GPU Max Smolaks Contributor

In 1971, Intel, then a manufacturer of random access memory, officially released the 4004, its first single-chip central processing unit, thus kickstarting nearly 50 years of CPU dominance in computing. In 1989, while working at CERN, Tim Berners-Lee used a NeXT computer, designed around the Motorola 68030 CPU, to launch the first website, making that machine the world’s first web server. CPUs were the most expensive, the most scientifically advanced, and the most power-hungry parts of a typical server: they became the beating hearts of the digital age, and semiconductors turned into the benchmark for our species' advancement.



New hardware for new AI

Few might know about the Shannon limit or Landauer's principle, but everyone knows about the existence of Moore’s Law, even if they have never seated a processor in their life. CPUs have entered popular culture and, today, Intel rules this market, with a near-monopoly supported by its massive R&D budgets and extensive fabrication facilities, better known as ‘fabs.’

But in the past two or three years, something strange has been happening: data centers started housing more and more processors that weren’t CPUs. It began with the arrival of GPUs. It turned out that these massively parallel processors weren’t just useful for rendering video games and mining magical coins, but also for training machines to learn - and chipmakers grabbed onto this new revenue stream for dear life.

Back in August, Nvidia’s CEO Jen-Hsun ‘Jensen’ Huang called AI technologies the “single most powerful force of our time.” During the earnings call, he noted that there were currently more than 4,000 AI start-ups around the world. He also touted examples of enterprise apps that could take weeks to run on CPUs, but just hours on GPUs.

A handful of silicon designers looked at the success of GPUs as they were flying off the shelves, and thought: we can do better. Like Xilinx, a venerable specialist in programmable logic devices. The granddaddy of custom silicon, it is credited with inventing the first field-programmable gate arrays (FPGAs) back in 1985. Applications for FPGAs range from telecoms to medical imaging, hardware emulation, and of course, machine learning workloads.

But Xilinx wasn’t happy with adapting old chips for new use cases, the way Nvidia had done, and in 2018, it announced the adaptive compute acceleration platform (ACAP) - a brand new chip architecture designed specifically for AI (see DCD Issue 27).

“Data centers are one of several markets being disrupted,” CEO Victor Peng said in a keynote at the recent Xilinx Developer Forum in Amsterdam. “We all hear about the fact that there's zettabytes of data being generated every single month, most of them unstructured. And it takes a tremendous amount of compute capability to process all that data. And on the other side of things, you have challenges like the end of Moore's Law, and power being a problem.

"Because of all these reasons, John Hennessy and Dave Patterson - two icons in the computer science world - both recently stated that we were entering a new golden age of architectural development."

He continued: “Simply put, the traditional architecture that’s been carrying the industry for the last 40 to 50 years is totally inadequate for the level of data

generation and data processing that’s needed today.”

“It is important to remember that it’s really, really early in AI,” Peng later told DCD. “There’s a growing feeling that convolutional and deep neural networks aren’t the right approach. This whole black box thing - where you don’t know what’s going on and you can get wildly wrong results - is a little disconcerting for folks.”

Salil Raje, head of the Xilinx data center group, warned: “If you’re betting on old hardware and software, you are going to have wasted cycles. You want to use our adaptability and map your requirements to it right now, and then longevity. When you’re doing ASICs, you’re making a big bet.”

Another company making waves is British chip designer Graphcore, quickly becoming one of the most exciting hardware start-ups of the moment. Graphcore’s GC2 IPU has the world’s highest transistor count for a device that’s actually shipping to customers - 23,600,000,000 of them. That’s not nearly enough to keep up with the demands of Moore’s Law - but it’s a whole lot more

"The traditional architecture that’s been carrying the industry for the last 50 years is totally inadequate for the level of data generation and data processing needed today" transistor gates than in Nvidia’s V100 GPU, or AMD’s monstrous 32-core Epyc CPU. “The honest truth is, people don’t know what sort of hardware they are going to need for AI in the near future,” Nigel Toon, the CEO of Graphcore, told us in August. “It’s not like building chips for a mature technology challenge. If you know the challenge, you just have to engineer better than other people. “The workload is very different, neural networks and other structures of interest change from year to year. That’s why we have a research group, it’s sort of a longdistance radar. "There are several massive technology shifts. One is AI as a workload - we’re not writing programs to tell a machine what to do anymore, we’re writing programs that tell a machine how to learn, and then the machine learns from data. So your programming has gone kind of ‘meta.’ We’re even having arguments across the industry

about the way to represent numbers in computers. That hasn’t happened since 1980.

“The second technology shift is the end of traditional scaling of silicon. We need a million times more compute power, but we’re not going to get it from silicon shrinking. So we’ve got to be able to learn how to be more efficient in the silicon, and also how to build lots of chips into bigger systems.

“The third technology shift is the fact that the only way of satisfying this compute requirement at the end of silicon scaling - and fortunately, it is possible because the workload exposes lots of parallelism - is to build massively parallel computers.”

Toon is nothing if not ambitious: he hopes to grow to “a couple of thousand employees” over the next few years, and take the fight to GPUs, and their progenitor.

Then there’s Cerebras, the American start-up that surprised everyone in August by announcing a mammoth chip measuring nearly 8.5 by 8.5 inches, and featuring 400,000 cores, all optimized for deep learning, accompanied by a whopping 18GB of on-chip memory. “Deep learning has unique, massive, and growing computational requirements which are not well-matched by legacy machines like GPUs, which were fundamentally designed for other work,” Dr. Andy Hock, Cerebras director, said.

Huawei, as always, is going its own way: the embattled Chinese vendor has been churning out proprietary chips for years through its HiSilicon subsidiary, originally for its wide array of networking equipment, more recently for its smartphones. For its next trick, Huawei is disrupting the AI hardware market with the Ascend line - including everything from tiny inference devices to the Ascend 910, which it claims is the most powerful AI processor in the world. Add a bunch of these together, and you get the Atlas 900, the world's fastest AI training cluster, currently used by Chinese astronomy researchers.

And of course, the list wouldn’t be complete without Intel’s Nervana, the somewhat late arrival to the AI scene (see p34). Just like Xilinx and Graphcore, Nervana believes that AI workloads of the future will require specialized chips, built from the ground up to support machine learning, and not just standard chips adapted for this purpose.

“AI is very new and nascent, and it’s going to keep changing,” Xilinx’s Salil Raje told DCD. “The market is going to change, the technology, the innovation, the research - all it takes is one PhD student to completely revolutionize the field all over again, and then all of these chips become useless. It’s waiting for that one research paper.”



Intel seeks nirvana

TIME FOR AI TO GET

Specialized? Behold, the Nervana Neural Network Processors (NNP). Alex Alley reports on the two new chips that Intel says will "revolutionize" the use of AI

At a certain point, it pays to stop generalizing. Artificial intelligence has come a long way using graphics processing units (GPUs), a sector led by Nvidia. Now Intel is pitching specialist Training and Inference chips as a lower-cost alternative.

AI specialist Nervana was just two years old when it was bought by Intel for around $400m in 2016. Since then the company has been absorbed as a division of the chip giant and is hard at work developing application-specific integrated circuits (ASICs) designed for Training and Inference. Nervana founder Naveen Rao is now Intel's corporate VP and GM for Artificial Intelligence. In November, DCD caught up with him at Intel’s AI Summit in San Francisco.

He said: “With this next phase of AI,

Alex Alley Reporter

we’re reaching a breaking point in terms of computational hardware and memory." In other words, GPUs just aren't going to cut it. “Purpose-built hardware like Intel’s Nervana NNP range is necessary to continue the incredible progress in AI,” Rao said. "You're going to see this benefiting everybody because the whole purpose of the computer is shifting to be an AI machine. "It appears that pretty much every application is going to need AI [in the future], probably a lot of Inference and, at some point, Training.” The two core aspects of deep learning are Training and Inference. AI Training involves feeding large amounts of data into an infant AI model or neural network, again and again until the model can make an accurate prediction. Inference is the deployment of the trained model to make decisions in the field.
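The split between the two phases is easier to see in code. The toy sketch below is not Intel or Nervana software - just a minimal, generic illustration under invented numbers: a one-parameter model is trained by repeatedly nudging its weight to fit sample data, then deployed to answer queries without further learning.

```python
# Minimal illustration of the two deep learning phases described above.
# Toy example only: a one-parameter linear model learns y = 3x from samples
# (Training), then answers queries on new inputs (Inference). Real workloads
# do the same two things at vastly larger scale, which is why the NNP line
# splits them across separate Training (NNP-T) and Inference (NNP-I) chips.

samples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0), (4.0, 12.0)]  # (input, target) pairs

weight = 0.0            # the model's single learnable parameter
learning_rate = 0.01

# --- Training: feed the data through the model again and again,
# adjusting the weight to reduce prediction error on each pass.
for epoch in range(200):
    for x, target in samples:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x   # gradient descent step

# --- Inference: the trained model is deployed to make predictions.
# No further learning happens; it is pure forward computation.
for x in [5.0, 10.0]:
    print(f"input {x} -> prediction {weight * x:.2f}")  # roughly 15 and 30
```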


These roles are usually conducted by GPUs, because of their ability to perform vast numbers of mathematical operations in parallel. The newest, cheapest and lowest-wattage GPU that Nvidia has is the 70W Tesla T4 series.

The NNP range replaces the GPU with specialist hardware, conducting Training and Inference on separate NNP-T and NNP-I chips, which use less power and are much more scalable. For example, the Inference chip, the NNP I-1000, is available in two products: the NNP I-1100, a 12W card which holds a single NNP-I chip, and the NNP I-1300, a 75W card which holds two NNP-I chips. “Power matters. You can't keep throwing up a power rack computation to solve these problems in the IoT world with data centers,” said Rao.

The more powerful Training chip, the NNP-T 1000, contains up to 24 tensor processing cores, along with memory, and a fast inter-chip communications link (ICL) with 16 channels of 112Gb/sec each. Like the Inference chip, the Training chip is available in two products: the 300W NNP-T 1300 card and the 375W NNP-T 1400 card.

Despite the prominence of the NNP range, Intel is still pitching GPUs for more specialist cases. Mere days after the AI Summit, it announced that America’s first exascale system, Aurora, would house the company’s new Ponte Vecchio GPU. Described as the 'workhorse' for HPC and AI convergence, Ponte Vecchio is a powerful GPU designed to take on more taxing roles.

Intel’s VP and GM of Enterprise and Government Group, Rajeeb Hazra, made it clear that the company’s grand plan for AI is to have a chip for any particular need, specific or general. Aurora will be required to multi-task, hence the necessity for powerful GPUs. Hazra said: "As high-performance computing moves from traditional modeling and simulation to the advent of data, there will be a drive for diverse computing needs, which will [spur] a new tailwind for heterogeneous computing.


"One size doesn’t fit all. We must look at the architectures [and how they are] tuned to the various needs of this era. "If you need a general-purpose solution then Ponte Vecchio has already described its leadership performance [for] when you get those workloads that have tremendous bandwidth requirements and dense floatingpoint operations. "If you were then to take the next step and say ‘I am very, very interested in the best possible performance for AI deep learning, Training and Inference,’ that's what our NNP families are for. They are less generalpurpose in some sense than the GPU, which runs a broader set of workloads, but [are] acutely and solely focused on deep learning and scale. "And so that is what we believe is the right approach to a diverse set of workloads that are also morphing quite quickly as the industry experiments and innovates." To cut a long story short, Intel wants to give customers a choice depending on what circumstances they are facing. Naveen Rao said: “CPUs have hooks in them to help with Inference and Training so customers can start on Xeons [CPUs], which they probably already have, or they may eventually move to an NNP or even to an FPGA depending on what kind of flexibility they need.” Splitting deep learning functions onto ASICs such as the NNP devices is by no means a "novel" idea. It has also been used by Google, in its Tensor Processing Unit (TPU) and its Inference and Training variants, released in 2017 on its cloud service. Google has data centers full of TPUs that are available to rent. Notable customers who use the processors include Lyft, Twitter, and HSBC. At the AI Summit event, Intel showed a 10-rack pod with 480 NNP-T cards, using their ICL links, and with no external switch. This platform trained multi-billion parameter models in reasonable amounts of time. For field deployments, NNP-I chips will be placed inside a regular server rack. Intel says. Intel’s tests implied that the NNP-I would out punch its rival, in a quarter of the physical space. Intel deploys the NNP-I in a single rack unit (1U) chassis, which holds up to 32 NNP-I chips in the “ruler” form factor,

a long slim module. Intel said this had 3.7 times the density of a 4U module which would be required to hold 20 Nvidia Tesla T4s. Nvidia declined to comment to DCD in time for print. Naveen Rao said: “I think a lot of data center operators have already latched onto that and have an expandable infrastructure as it is. The NNP-I is probably the new order. It's an order of magnitude more efficient than that in general purpose.

"Pretty much every application is going to need AI in the future" “So, if you know Inference is 30 percent of your total workloads. Then it makes a lot of sense to incorporate something like this.” Facebook’s machine learning compiler Glow already uses the NNP-I. The social media giant’s AI director, Mikhail Smelyanskiy, said: “With 2.4 billion users today, there are a lot of seemingly unrelated products or services, but in reality, there are many AI algorithms that are running underneath. And some examples are photo tagging [or translation]."

Similarly, Baidu is an early adopter of the new NNP-T. Kenneth Church, an AI Research Fellow at Baidu, said the company has focused on implementing the Training chip for PaddlePaddle, an open-source deep-learning platform used by 1.5 million developers in China, and on using the chip to power its X-Man 4.0 Open Accelerator Infrastructure.

Gadi Singer, AI Products group VP and head of the design team on the NNP-I, gave DCD some extra details on its deployment. “Unlike some other services that were very specifically focused on solving a particular problem, we built it for a family of [general deployment] issues,” Singer said. “This is a plugin… so you would see different types of deployment. In some cases, you would see a rack like today or you have sockets that [are] today used for SSDs.

“Because of data centers… we needed something that will work within existing infrastructure, allowing [data center operators] to simply plug the rack onto extension sockets.

“It's built as a toolbox. When new usages arise, you can use this in a very diverse manner and use more of these to scale up.

“One thing that is very clear in our space is that, by the time you're finished finding a solution to a problem, the problem has already changed.”

What’s in an NNP? Intel claims its AI products will generate over $3.5bn in revenue in 2019, and by 2022 it hopes to bring in $10bn. The NNP-I and NNP-T are complementary pieces of hardware, but have relatively different microarchitectures. The NNP-I is made up of around 12 Inference Engines (ICE) whereas the NNP-T comes with 24 Tensor Processing Clusters (TPC). Intel says these can hit performances of around 119Tops (Tera operations per second), or 119 trillion operations per second. Unlike previous performance measures such as TeraFlops (trillions of floating point operations per second), this figure is hard to compare because AI operations combine data types. The NNP-T is equipped with the PCI Express 4.0 x16 graphics interface (PCIe x16) as opposed to the NNP-I’s PCIe v3 x8. The Training chip has 32GB of High Bandwidth Memory (HBM) connected to four HBM ports. The amount of distributed memory on the chip is 60MB (2.5MB per TPC). The NNP-I, on the other hand, has 4MB of Static RAM and Dynamic RAM bandwidth of 68GBps. The amount of internal memory is 75MB.



>Awards | 2019

Category Winners Following months of deliberations with an independent panel of expert judges, DCD Awards is proud to celebrate the industry’s best data center projects and most talented people. Enterprise Data Center Design Award

Sponsor

Winner: DUG GeoSolutions (Left-Right) Angelique Davis | Ralph Spitsnaugle, DellEMC Ben Barritt, Keysource

Edge Data Center Project of the Year

Dan Oldham, DigiPlex Mark Lommers, DUG GeoSolutions

Winner: DellEMC The micro Modular Data Centres (MDCs) designed by DellEMC’s Extreme Scale Infrastructure team, helped a multinational automotive company in the rollout of its connected vehicle technology - deploying semiautonomous vehicles.

Completely designed in-house, the DUG 250 Petaflops, 15MW High Performance Cluster is entirely cooled by complete liquid immersion. They use their supercomputer to produce high quality seismic processing services.

Hyperscale Data Center Innovation Award

Winner: CyrusOne

TM

At over a million square feet, CyrusOne's Kincora is one of the largest data centers in North America with a contiguous floor plate on two floors. Sterling 6 is phase 2 and was built to accommodate a hyperscale customer. The ‘Massively Modular’™ approach means that additional buildings can be deployed in as little as 12 weeks.

Multi Tenant Data Center Design Award

Sponsor ™

Winner: AirTrunk

Damien McHugh, Linesight

AirTrunk has developed a range of innovative designs that aim to increase the speed of deployment, deliver an industry-low PUE, ensure customer flexibility and customization while minimizing costs at its rapidly expanding 130MW capacity campus in Western Sydney.


Peter Patsalides, CyrusOne


Awards 2019 Winners

Data Center Modernization Project of the Year

Sponsor

Winner: ServerFarm

Ainsley Harriott, Chain of Hope Charity Partner representative. We have raised over £150,000 in the last 3 years.

Hybrid IT Project of the Year

Sponsor

Sharon Besley, Serverfarm Luke Neville, i3 Solutions Group

ServerFarm acquired a 120,000 sq ft, 10.5MW data center originally commissioned in 2002 that was operating well below its design capacity and through a program of capital expenditure managed to optimize facility performance and institute a new operations management model without impacting existing tenants.

Energy Smart Award

Winner: L&T Metro Rail Hyderabad The Hyderabad Metro Rail Project is the world's Largest Public-Private Partnership Project (PPP) in the Metro Sector. The project has harnessed the power of multiple clouds alongside on-premise IT, to build towards the rapid scaling of metro operations.

Sponsor

Winner: CoolDC

Barry Maidment, Rittal Ltd, Tim Chambers, CoolDC, Markus Mandemaker, Asperitas

Eschewing traditional air-cooling in favor of liquid-based technologies, LDC utilizes a combination of the most energy-efficient cooling solutions in a design that facilitates a *collectively* more efficient output.

Data Center Operations Team of the Year

Sponsor

PRO

Winner: Rack Centre

Mission Critical Tech Innovation Award

Despite a very challenging operating terrain, characterized by lack of reliable grid power and limited access to skill sets, the team has continued to achieve set objectives at Nigeria’s first carrierneutral Tier III colocation data center, using purely local talent.

Sponsor

Winner: ZutaCore

Erez Freibach, Zuta-Car Systems Shahar Belkin, ZutaCore

ZutaCore’s HyperCool2 direct-on-chip evaporative cooling technology brings a unique combination of self-regulation, on-demand operation and low pressure in a single system.


>Awards | 2019 Data Center Manager of the Year

Sponsor

Winner: Farooq Al-Jwesm, Saudi Aramco

Farooq Al-Jwesm, Saudi Aramco

Over and above the daily duties of data center manager at what is reputedly the world's most profitable company, Farooq has demonstrated an aptitude for solving engineering problems - which are many when your data centers are distributed across harsh environments. For his efforts, he has been granted two patents by the US patent office.

Nonprofit Industry Initiative of the Year

Sponsor

Giordano Albertazzi, Vertiv | Daniel Mace, Bouygues | Simon Anderson, Virtus Data Centres

Data Center Construction Team of the Year

Winner: Virtus Dealing with challenging local planning requirements, and product recalls mid-construction, the various teams pulled together to deliver the Virtus 5 project on time, on budget and to quality.

Winner: Boden Type DC

Alan Beresford, EcoCooling, Laszlo Kozma, H1 Systems

Funded by the European Horizon 2020 project, this prototype 500kW facility in the small town of Boden uses every trick in the book to lower its environmental impact: it runs on renewable energy and doesn't have batteries or gensets.

Sponsor

Young Mission Critical Engineer of the Year

Winner: Sarah Davey, Arup Sarah has demonstrated a maturity well beyond her years in terms of taking responsibility for major clients; the size and scope of those projects on which she has also taken responsibility and the thought she has put into relating her work experience to its broader impact on design engineering.

Sarah Davey, Arup

In one instance she developed a prefabricated reference design that reduces time to market and is now favored by the client to deploy in new untested locations.


Jason Okroy | Kristen Vosmaer Salute Mission Critical

Corporate Social Responsibility Award

Winner: Salute Salute has taken a potentially vulnerable group of people (some recruits were actually homeless when hired) and used their mission-critical aptitudes to transform them into world-class data center technicians, at the same time addressing the data center skills shortage.


Awards 2019 Winners

Sponsor

Public Vote: Best Mainstream Press Coverage of the Data Center Industry

Winner: Adam Satariano, The New York Times This year's public vote category recognizes journalists and mainstream publications which have helped the data center community by delivering useful information about the sector to a broader audience.

Outstanding Contribution to the Data Center Industry

Gary Cook, Greenpeace

Winner: Gary Cook, Greenpeace
As someone from outside of the industry daring to comment on its drawbacks and failures, Gary Cook started as a much-vilified figure. Working with his team at a somewhat controversial activist organization, he published his first report in 2010. It was divisive and difficult to read. People found things to criticize in it, and made excuses for why they lagged behind their competitors. But, ultimately, they listened, and they improved. We believe the Clicking Clean report has done more to change mindsets within the industry than many would care to admit. At a time when scientific reports paint a stark future, when thousands take to the streets to protest emissions, and when consumers are buying based on their carbon footprint, the quest to go green has never been more urgent. Now the industry appears ready to come together to help fight this crisis.

Business Leader of the Year

Peder Nærbø, Bulk Infrastructure

Winner: Peder Nærbø, Bulk Infrastructure
This year’s winner personifies not only the entrepreneurial approach to doing business but also the strong sense of social responsibility and environmental stewardship that we need to see more of. He moved from shipping to data centers a decade ago and soon realized that a lack of connectivity would hold him and the rest of his native Norway back. Rather than wait for consensus and government support, he went ahead and used his own money - a novelty in the modern age - to build the first submarine cable connecting Norway to North America. The Havfrue cable was lit in October. All this in parallel with his data center developments, one of which is the size of Central Park. Next, the Arctic.

Stephen K. Amos, Awards Host

Issue 35 • Dec/Jan 2020 39


Awards history

A brief history of the DCD Awards

Nick Parfitt Lead Analyst

From 2007 to 2019, the categories, the sponsors and the chosen winners can tell us a lot about the data center industry.

The Datacenter Dynamics Awards launched in August 2007 with ten categories designed to test “data center leadership, achievement and best practice.” On December 4, 2007, 500 guests gathered in the comfortable surroundings of The Brewery, London EC1, to find out who’d won.

The first Awards established the principles of independence, and 2007’s four-month program set in stone a pattern of structured entries, evaluation and further questioning by independent judges before the final re-evaluation and the announcement of the winners. One sign of the pace and consolidation of the industry is the fact that only four of the original 10 sponsors survive under their name and brand from 2007 - although all still exist in some form or another.

From 2007 to 2019, the Awards grew - in number of entries, categories, attendees and judges. In 2010, the Datacenter Dynamics Awards platform was extended into Spain and Latin America, and in 2014 into Asia Pacific. As well as acting as a focus on achievement in both of these regions, these programs feed into the entry round of what has since 2016 become the Global Awards.

Adapting to trends
To keep relevant, the Awards have had to change with the industry’s technological and operational evolution and its geographic spread. Cloud was represented early: from 2009 on, the EMEA awards had a series of IT Optimization categories; as the Awards entered emerging markets, a Cloud Journey category was added, and a Hybrid IT award followed in 2019. Edge started as a category in 2017, and the longstanding Mega-Data Center Award evolved into Hyperscale Innovation in 2019.

The Awards have included categories for people from the early days. In 2009, Business Leader of the Year joined the staple Outstanding Contribution and the Operational Team Awards. The Data Center Manager and Construction Team categories were added more recently. Four of the categories have been

DCD Awards 2007 - London

relatively consistent in what they look for and how they go about identifying top quality entries, right from the original 2007 Awards:

• The ‘Green’ Data Center Award ran from 2007 to 2014, the Improved Data Center Energy Efficiency Award ran from 2008 to 2018, and there is now an Energy Smart Award.
• Operational Team of the Year has remained from 2007 to the present.
• Innovation in the Mega-Datacenter became Hyperscale Innovation in 2019.
• Future Thinking became Mission Critical Innovation in 2017.

The Road to Green
For more than half of its history, there have been two categories for data center energy efficiency. The Green Data Center Award honored best practice at the design stage, and a year later the Green Grid sponsored the Improved Energy Efficiency Award, intended to recognize upgrades to older data centers. Both aspects, together with energy sourcing, have now been rolled into a single Energy Smart category.

Efficiency is an obvious subject for scrutiny and reward, given the impact of best practices, monitoring, and management. It’s

40 DCD Magazine • datacenterdynamics.com

worth noting that energy consumption is also often featured in other categories. For example, a majority of the winners of the Future Thinking category have been initiatives driven by energy conservation. In 2009, Infinity SDC won the Future Thinking Award for its Infinity One data center in Suffolk, which moved the target from ‘green’ to ‘dark green’ by using energy generated from bio-matter supplied by a group of local farms.

Of the 15 listed finalists in the Future Thinking (2016) and Mission Critical Innovation (2017 onwards) Awards, four focused on cooling efficiency and six on other energy efficiency initiatives. Other finalists have looked at improving security and resilience.

The single ‘Energy Smart’ category recognizes a change in the sector towards service data centers (for colocation and cloud), while the industry is moving away from a narrow focus on power usage effectiveness (PUE) as a measure of efficiency. The Energy Smart Award aims to tell a broader story including sourcing, technology, decision making, automation, site location, embedded energy and recycling.
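For readers new to the metric, PUE simply compares everything a facility draws from the grid with what actually reaches the IT equipment - which is precisely why, on its own, it says nothing about energy sourcing or embedded carbon. The numbers below are purely illustrative, not drawn from any award entry:

\[
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}
\qquad \text{e.g.}\quad \frac{1.32\ \mathrm{GWh}}{1.10\ \mathrm{GWh}} = 1.2
\]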


DCD Awards 2012 - China

DCD Awards 2019 - Singapore

For instance, in 2018, STT Global Data Centres won for finding a way to bring free cooling into its Singapore data centers. This year, one finalist is using AI to manage the efficiency of cooling.

Not just another working day
The Operations Team category divides the Awards era very clearly into two time periods. In the first, from 2007 to 2013, the majority of entrants and all of the winners came from large scale, high profile enterprises. While the best entries told of dedication, initiative and achievement, the focus of many of these entries was very much inside the data center and focused on data center tasks. Possibly with this in mind, the category was renamed from 2011 to 2014 as the Special Assignment Team category.

Facebook’s win in 2014 set the category on a new course. It was the first victory by a non-enterprise data center, and from 2016 to 2018 the Operations Team category was separated into different categories for Enterprise facilities, and Service data centers used by colocation and cloud providers. In a re-combined form, the 2019 entrants and finalists reflect the service data center sector and also the increasing presence of third-party facility management service providers.

In recent years, Award-winning teams seem

DCD Awards 2015 - Brazil

DCD Awards 2019 - London

to be doing a far greater variety of tasks. While this may be attributed to the change of focus of the Award, the best entries have always shown teams dealing with the complexities of live data center tasks.

In 2010, for example, Britannia won the award with a team that had become proactive around power downs and decided to have an annual power down at each of its data centers. In 2009, Computacenter won for forming a nine-person migration team tasked with moving eighteen customers from their existing data centers into a new Tier III facility.

"The APAC Awards had its first entirely non-human entry for the Data Center Manager of the Year" In 2014, the Facebook entry broke the mold. It told the story of the social media giant bringing a ‘flat pack’ style of pre-prepared, quickassembly racks to Sweden, the land of Ikea. It was also an entry which combined in-house

personnel with half a dozen specialist suppliers across different time zones. This project helped form the concept of building bigger, better and faster, and the entry illustrated the role of people. (Unfortunately, the design was later wrapped up in controversy, after it was found that it relied on trade secrets stolen from BladeRoom - Facebook settled out of court, while co-conspirator Emerson Electric lost the case.)

A glimpse of the future?
Over the past three years, skills availability and skills limitations have begun to have an impact across several categories. For instance, entries in Operations Team have made use of systems automation and management software since that category began, while many of the Innovation Awards have been for ideas that explicitly or implicitly address the skills shortage.

This year, however, we have seen an entry which just might be a sign of things to come: the APAC Awards had its first entirely non-human entry, for the Data Center Manager of the Year category. The AI data center manager did not attend the finals - no one (or thing) wants to spend six hours stashed as cabin luggage - but it is quoted as saying “6f 6d 67!” for being nominated.

"73 75 62 6d 69 74" your 2020 entry now.
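For readers who don't think in hexadecimal: the quotes are plain ASCII codes. A two-line Python sketch of our own (not part of the AI's entry) decodes them:

# Decode the hex-encoded quotes above (assuming plain ASCII text)
for quote in ("6f 6d 67", "73 75 62 6d 69 74"):
    print(bytes.fromhex(quote.replace(" ", "")).decode("ascii"))
# -> "omg" and "submit"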

Issue 35 • Dec/Jan 2020 41


Nordic competition heats up

SHARING WARMTH THE NORDIC WAY
Sebastian Moss heads to Sweden to watch data turn into pellets

42 DCD Magazine • datacenterdynamics.com

Sebastian Moss Deputy Editor


Heat means different things to different people. To a data center operator, it is the enemy, a problem to be overcome and removed as quickly as possible. But to a family in the middle of a fierce winter, heat is a friend, a vital source of life.

There are some who hope to turn that dichotomy to good use, and harness the waste heat produced by data centers. DCD visited a campus in Sweden which is doing just that.

“When we started, it was all about reusing heat in the district heating system,” Jan Fahlén, EcoDataCenter site development manager, told DCD. Located in the small city of Falun, the company’s eponymous Swedish campus hopes to use its heat for a variety of purposes. “We have expanded our plans so that we can use it in wood pellet production. In the future - depending on the location - we have thought about greenhouses and fish farms.”

EcoDataCenter has installed large underground pipes that carry hot water to the combined heat and power (CHP) generating plant next door, run by Falu Energi & Vatten. The CHP facility, which provides heat for both Falun and nearby Borlänge, also produces wood pellets that are sold across the country for heating.

Standing in front of a mountainous pile of sawdust, Falu Energi’s sustainable development engineer Lars Runevad explained: “So we put the sawdust on six meter-wide mats, and blow hot air over it.” Sawdust comes to the plant with 50-55 percent moisture content, but needs to leave it with no more than 10 percent. “Otherwise we can't produce wood pellets that will last, and it will start to degrade and mold,” Runevad said.

Currently, the CHP plant uses its own heat from burning residual wood and biomass in two giant 30MW boilers. But, during the winter months, all that heat has to be used for the city, so pellet production shifts to using propane gas, and then shuts down entirely. “Now, with a data center nearby, we can prolong the season,” Runevad said. “We can produce more wood pellets than we would otherwise have been able to. We may also be able to [stop] the propane excess needed to top it up.”

B1, the first data center hall on the EcoDataCenter campus, will send up to 10MW of waste heat to dry pellets. As it consumes electricity from renewable sources, EcoDataCenter bills itself as not just climate neutral, but as ‘climate positive.’
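How far does 10MW of waste heat go in a pellet dryer? A back-of-envelope sketch of our own - the 2.6 MJ per kilogram of evaporated water (latent heat plus drying losses) and the throughput that follows from it are illustrative assumptions, not figures from EcoDataCenter or Falu Energi:

# Back-of-envelope: wet sawdust that 10MW of waste heat could dry per hour.
# Illustrative assumptions: 50% -> 10% moisture (wet basis), ~2.6 MJ to evaporate 1 kg of water.
wet_tonne_kg = 1000
dry_matter_kg = wet_tonne_kg * 0.50                               # bone-dry wood in one wet tonne
water_left_kg = dry_matter_kg / (1 - 0.10) * 0.10                 # ~56 kg of water remains at 10% moisture
water_removed_kg = (wet_tonne_kg - dry_matter_kg) - water_left_kg # ~444 kg evaporated per wet tonne

energy_per_tonne_mj = water_removed_kg * 2.6                      # ~1,160 MJ per wet tonne
heat_per_hour_mj = 10 * 3600                                      # 10 MW for one hour = 36,000 MJ
print(round(heat_per_hour_mj / energy_per_tonne_mj))              # ~31 wet tonnes per hour

On those assumptions, B1's waste heat alone could keep roughly 30 tonnes of wet sawdust moving through the dryer every hour.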

Issue 35 • Dec/Jan 2020 43


Nordic competition heats up

"I think that sustainability will be a requirement for customers moving forward," CEO Lars Schedin told DCD. But with one hall built and barely occupied, most of the company’s plans are still up in the air pending major customer contracts. During our tour earlier this year, we counted just a handful of operational racks in the 800 square meters (8,611 sq ft) of white space in B1. "That hall was opportunistic in its nature," site manager Dan Andersson said. "B2-3 and the rest of this site will be business-driven." The wooden skeleton of B2 looms large over the site, looking odd to those not used to seeing wooden frames - but it’s a relatively common approach in Sweden. “We use cross-laminated glulam wood for two reasons,” Fahlén said. “The first is sustainability, we don't use as much carbon dioxide as with concrete. The other reason is really for fire protection. “You will ask me, why, wood burns?,” Fahlén preempted. “Yeah, it does, but laminated wood doesn't burn that way - try to put fire on this, you won't be able to.” Using wood can also be quicker to deploy, and a wooden ceiling makes it easier to hang things from, he claimed. B1 is still covered in steel plates, however, to get a higher security rating. “In Sweden, we thought eternal peace came at the end of the ‘90s,” Andersson said. Standing to attention with the alert posture of a two-decade military veteran, he went on: “Now we realize that this isn't the fact, so the government is putting a lot of pressure on the state departments, on the regional department, and on municipalities. They have to step up to have more resilience. “This is an awakening for society, which is good for us because we decided to be able to deliver data center services according to this security law.” The security rating allows it to offer space to government departments, while other businesses have asked it to get ISO certifications and meet the EN5600 standard. CEO Schedin said that there is currently no demand to get an Uptime certified tier rating, but claims B1 meets Tier IV specifications, while B2-3 may be Tier III. Work is underway on the second hall, but “we won't fully build B2 until we have a customer,” Fahlén said. “That way we can have a choice if we like HPC or colocation or wholesale.” The company is in talks with a large potential customer for a 30MW wholesale deal, Fahlén said, but that could fall through. After acquiring Fortlax in June 2019 and its two facilities in Piteå, northern Sweden, EcoDataCenter is also trying to convince Fortlax’s clients to buy space in Falun.

44 DCD Magazine • datacenterdynamics.com

Learn more about data center efficiency at DCD>Energy Smart on 27-28 April


Photography: Sebastian Moss

“The largest customer in Fortlax is the automotive company BMW,” Schedin said. “That kind of core manufacturing industry is currently a big user of high-density capacity, but that market segment grows quite rapidly right now. In two years’ time, you will have a lot of companies in that industry looking for 2-3MW of capacity.”

Andersson added: “BMW will come here and look at what we've got - the needs they have are greater than the capacity we have left up in Piteå.”

Schedin hopes that once a large anchor tenant is found, it will lead to others quickly following, but the company can’t wait until that happens, he said: "We have to have an extreme pace."

For Schedin, who has a background in movie and TV production, taxi services, and shirt manufacturing, turning to data centers was a marked shift. "It's all about understanding the DNA of the industry. Here, it is important to become the number one or number two. Otherwise, I don't want to say you're a nobody, but…”

The goal for EcoDataCenter, Schedin said, "is to become the main Nordic service provider that is not an American company. We should be the Nordic service provider for anyone that is looking for high-density solutions for artificial intelligence or big data. We have a huge potential to develop a really large business," he claimed. "I am just about to close a deal here in Stockholm, a big one that I can't reveal yet."

Beyond Falun, EcoDataCenter has ambitions in the surrounding Dalarna County: “Our goal is to have somewhere in the area of 500MW across three or four sites,” development manager Fahlén said. “It's already out. We have an MoU for a place just 50 kilometers from here in Smedjebacken, for 150MW.”

The company hopes to dub the region ‘Dala Quincy’ after the small US settlement. “Quincy is just this little town in Washington state with numerous data centers,” Fahlén said. “It’s our model.”

Eyeing locations near dwindling steel and paper mills that are steadily reducing their power loads, EcoDataCenter hopes to acquire and build facilities in the Nordic area, and may even venture further out into the wider FLAP market - the region including Frankfurt, London, Amsterdam, and Paris. Backed with SEK1 billion (US$105m) from REIT Areim, its priority is to find places "where the heat could also be reused," Schedin said. "Our main target is always to reuse the heat."

Issue 35 • Dec/Jan 2020 45


A look ahead

Key trends for 2020 and beyond

Dan Loosemore CMO

As we approach the end of 2019, the data center sector continues to evolve and the IT infrastructure ecosystem supporting it becomes more complex. Demand is certainly not slowing down, with 2.5 quintillion bytes of data generated every day. Indeed, we are all now familiar with the mind-blowing social media numbers that our industry is responding to every minute:

• Snapchat users share 527,760 photos
• More than 120 professionals join LinkedIn
• Users watch 4,146,600 YouTube videos
• 456,000 tweets are sent on Twitter
• Instagram users post 46,740 photos

This, inevitably, brings challenges alongside opportunities. Having chaired over 40 webinars and panels across the year with respected industry experts, I have heard greater confidence around edge use cases, increased uptake of hybrid IT models, and plenty of fantastic technology innovation. But, at the same time, we have also seen concerns around environmental impact and energy usage, frustrations around 5G and DCIM, and ongoing questions around resiliency that require collaboration and careful planning. Here are the trends that I see shaping our debates in 2020:

1) Continued movement from core to edge
Data demand is unrelenting, and use cases are emerging beyond gaming into new sectors like retail distribution, Industry 4.0 and government/education. More data will be processed away from the core, and a greater level of distributed architecture will be required. Watch the 2019 webinar with Vertiv and EdgeConneX for more.

2) The emergence of modular & micro
As per the edge deployment point above, 5G will be a major catalyst for greater uptake in micro data centers and modular solutions. Andy Lawrence, Executive Director, Uptime Institute, anticipates demand accelerating from 2022, but this topic will see increased coverage in 2020, with the economics and technological challenges evolving fast.

3) AI and automation for data center efficiency
As the convergence of OT/IT continues, next-generation DCIM combines with the evolution in artificial intelligence to drive data center automation to the next level. The business case continues to develop, with growing evidence for cost reduction and energy efficiency improvements. Watch the discussion with ScaleMatrix and McMaster University for more information.

4) Fuel cells, lithium-ion (Li-ion) batteries and cloud-based software on the rise?
Diesel generators may remain imperative for most and, of course, this will not change dramatically across 2020. We are starting to see, however, a drive towards new solutions. With costs decreasing, lower power requirements at the edge, and software-based systems (demand response etc.) gaining traction, is the groundwork being laid for less reliance on generators?

46 DCD Magazine • datacenterdynamics.com

5) As-a-service to become the order of the day?
With enterprises now shifting towards an increasingly hybrid model of operation, the scrutiny shifts from capital expenditure to operational expenditure. Rather than an expensive outlay on on-prem hardware, the industry is seeing increased demand for outsourced pay-as-you-go / as-a-service offerings.

6) Evolving investment
With continued growth comes continued investment. With the sector moving from a niche investment to a more mainstream option for institutional buyers, we are seeing a move towards larger deals and longer return timelines that look set to stimulate the market further in 2020.

7) Can we offset increasing energy use?
Alongside colocation growth, the rise of 5G and edge deployments looks set to put more pressure on energy infrastructure. 90 percent of respondents to a recent 451 Research survey believe 5G will result in higher costs, and Vertiv predicts the move to 5G is likely to increase total network energy consumption by 150-170 percent. It looks unlikely that technology innovation and efficiency alone will offset this increase.

8) Investment in new talent
Across 2019, many of our experts have spoken extensively about both hiring for new sets of skills (analytics, data etc.) as the IT/OT convergence continues, and driving increased diversity in the industry. To encourage a new talent pool into the sector, we must better promote the vast opportunities that IT infrastructure can offer.

9) Climate change bringing CSR and regulation
Gary Cook of Greenpeace winning Outstanding Contribution to the Industry at this year’s DCD>Awards is perhaps reflective of the growing support for green initiatives and the importance now being placed on them. As regulators, government and, of course, the public press harder, it is safe to assume that greater regulation and increasing efficiency standards will soon follow.

10) Distributed resiliency
With several high-profile outages across 2019, a more distributed architecture, and an increasingly complex landscape, end-to-end resilience will continue to drive more discussion across 2020 and beyond. What does best practice look like, and how do we get there?

Issue 35 • Dec/Jan 2020 47


Critical thinking for critical infrastructure

Securing the next decade

“Backdoors work for anyone who finds them”

48 DCD Magazine • datacenterdynamics.com

The new decade is sure to be a troubled one, wracked with the conflict and chaos that besets our civilization. As operators of critical national infrastructure, data center companies need to ensure that they do not become pawns in the endless struggle between states.

The US assassination of Qasem Soleimani in Iraq emphasized how delicate global relations currently are and, as we go to press, the long term implications are not clear. The biggest victims of any escalation will likely be those in the region, who bear the brunt of any conventional warfare. But many also fear the impact of a conflict waged in another sphere: cyberspace.

Iran’s capabilities are advanced, with the country believed to be behind 2019 espionage campaigns targeting digital infrastructure supporting government and industry in Saudi Arabia and the US, as well as a hacking campaign against banks, local government networks, and other public agencies in the UK. Shortly after the death of Soleimani, the US Cybersecurity and Infrastructure Security Agency warned that Iran may turn to “offensive cyber operations” as a form of retaliation. The agency recommended several security best practices for digital infrastructure operators.

Even those that look after information that seems innocuous can be at risk, with hackers using one data point to get hold of another - for example, Russian state-backed hackers used compromised Yahoo accounts to gain access to the corporate email of an International Monetary Fund official.

If we’re lucky and cool heads prevail, this current spat will not lead to major cyber conflict - but operators should prepare for the worst nonetheless. This will surely not be the last crisis of the decade. Should something terrible happen, either digitally or physically, it is then that we must be even more careful. Security agencies, ever ready to not let a tragedy go to waste, will undoubtedly double down on pressuring tech companies to allow backdoors, weaken encryption, and demand they become part of the state surveillance apparatus.

Such calls must be viewed rationally, and not with the heated emotions that follow a disaster. We must weigh not just the potential for greater security from more informed intelligence agencies against the risks to freedom, but also the very real possibility that such actions will make us less secure. Backdoors work for anyone who finds them. In March 2016, an advanced persistent threat group began using hacking tools and backdoors created by the NSA. Fourteen months later, a group calling itself the Shadow Brokers published many of those tools for all to see - and we’re still dealing with the fallout.

As we face an uncertain decade ahead, we need to be certain about one thing: Security has to come first.



Still relying solely on IR scanning? Switch to automated real-time temperature data.

Introducing the Starline Temperature Monitor

Automated temperature monitoring is the way of the future. The Starline Critical Power Monitor (CPM) now incorporates new temperature sensor functionality. This means that you’re able to monitor the temperature of your end feed lugs in real time - increasing safety and avoiding the expense and hassle of ongoing IR scanning. To learn more about the latest Starline CPM capability, visit StarlineDataCenter.com/DCD.

