DCD Magazine Issue 39: Building an Internet for the Moon

Page 1

Flexential’s CEO On merging Peak 10 and ViaWest

The Cooling supplement Using waste heat, AI cooling, quantum, & more

Awards Special

Here’s who won in this most unusual year

Issue 39 • December 2020 datacenterdynamics.com

INTERNET FOR THE MOON We go behind the scenes of NASA’s LunaNet project, the first step towards connecting the Solar System



ISSN 2058-4946

Contents December 2020


6 News A huge year for data center M&A, SolarWinds used to hack the US, billions in CIA cloud contracts, and the biggest stories of the past two months

12 Building an Internet for the Moon An exclusive first look at LunaNet, NASA's plan to develop an Internet for the Moon, Mars, and beyond

The CEO interview


“ I don’t think anybody’s deployed Edge at scale - but I also don’t think ‘Build it and they will come’ is a smart Edge strategy. We’re in a good position to be what I call ‘tangible’ on the Edge,” Flexential CEO Chris Downie tells DCD

18 Awards time! Here's who won this year's DCD>Awards

23 Data centers and batteries When will they tie the knot?

28 Diverse cooling Find the right cooling for your niche: There's no one size fits all



30 Cooling smarter Artificial intelligence heats up

34 Using waste heat Examining the different use cases for your unwanted thermal excretions

37 Cooling quantum computers Keeping qubits stable requires mindbogglingly low temperatures

41 Don't let the hardware let you down Grease up your systems

44 Google's big battery bet The hyperscaler is testing out eliminating a diesel generator in Belgium


45 IBM's hybrid reality Talking to IBM Cloud's CTO about transforming a struggling business in this unusual year

46 Economic centralization harms



Uptime is everything—

So don’t fall for the imitators. Trust 30 years of innovation and reliability.

Originally released nearly 30 years ago, Starline Track Busway was the first busway of its kind, and Starline has been refining and expanding the offering ever since. The system was designed to be maintenance-free, avoiding bolted connections that require routine torquing. In addition, Track Busway's patented u-shaped copper busbar design creates constant tension and ensures the most reliable connection to power in the industry, meaning continuous uptime for your operation. For more information visit StarlinePower.com/DCD.


Keep watching the skies!

This holiday season, a lot of people will be looking upwards. It's the Great Conjunction, and that means that on 21 December, Jupiter and Saturn will come closer together in our skies than they have for 400 years. Digital infrastructure people may be looking upwards for other reasons. Satellites and stratospheric balloons are becoming part of our Net, but we're about to go one step beyond that. There are solid plans for an Internet on the Moon, and beyond that, a definite need for data centers on Mars.

This year, a trip to the office might as well have been an interplanetary journey

The Edge of space

Missions to the Moon are back in vogue. China has just brought samples back from the Moon, NASA is sending a man and a woman, and there's talk of permanent bases. It no longer makes sense to send all communications via Earth, so a LunaNet is on its way (p12). Mars missions are coming too, and that's when space infrastructure becomes essential. Signals take at least three minutes (and up to 22 minutes) to reach Earth. Not many applications can tolerate that sort of latency, so Mars needs Edge resources. Is Edge a Mars-shot rather than a Moonshot? Chris Downie, CEO at US provider Flexential, tells us the first step is to build a platform that works (p16).

Our brightest stars

Data centers on Mars might seem especially fanciful this year, when even a trip to the office might as well have been an interplanetary journey. Of course, the data center industry has been working to provide the tools that ensure that life can go on, even when we're stuck at home. So this year's DCD Awards were streamed online, and recognized the brightest and best stars of an industry that has been even more essential this year than ever before. If you weren't watching along, the results are online, and you can meet the winners in this magazine (p18).

From the Editor

384,000 km: the distance from the Earth to the Moon (equivalent to ten circuits of the Equator)

Frontiers of cooling

Not all exploration is in space. There are plenty of problems to solve in our existing data centers. And everything needs a regular rethink. That's why we are excited to present a supplement on cooling. If you thought this part of data centers was done and dusted, think again. HPC needs different cooling than enterprise colocation, and hyperscalers have developed their own solutions. And the Edge? Edge facilities are going into use now - and they have to handle extreme temperatures and humidity. They might as well be on Mars (p25).

Batteries included...

Finally, a festive note. Unlike many of the gadgets you will find under the Christmas tree, this issue (at least in paper form) doesn't need electricity. And in any case, batteries are included (p23 and p44). Happy holidays!

Peter Judge
DCD Global Editor

Meet the team

Global Editor Peter Judge @Judgecorp
Deputy Editor Sebastian Moss @SebMoss
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Designer Dot McHugh
Head of Sales Erica Baeta
Conference Director, Global Rebecca Davison
Conference Director, NAM Kisandka Moses
Conference Producer, APAC Chris Davison
Chief Marketing Officer Dan Loosemore
Cover photography by Gregory H. Revera

Head Office
DataSantaDynamics
22 York Buildings, John Adam Street, London, WC2N 6JU

Dive even deeper
Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color, shown below.
Intelligence | Training | Events | Debates | Awards | CEEDA

PEFC Certified
This product is from sustainably managed forests and controlled sources
PEFC/16-33-254
www.pefc.org

© 2020 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.


Whitespace

News

NEWS IN BRIEF

Whitespace: The biggest data center news stories of the last two months

AWS develops its own in-rack UPS “Rather than using a big, third-party UPS, we now use small battery packs and custom power supplies that we integrate into every rack,” global infrastructure VP Peter DeSantis said.

FTC claims Facebook is a monopoly, sues to break up WhatsApp and Instagram Instagram was originally built on AWS, and it took Facebook a year to shift to its data centers. It took three years to get WhatsApp off of IBM Cloud.

Google AI ethics co-lead says she was fired for raising ethical concerns Dr. Timnit Gebru criticized Google for retracting a paper on AI bias, said that the company did not care about diverse hiring, and told staffers to “stop writing docs because it doesn’t make a difference.”

2020 was a record-breaking year for data center M&A

Totaling almost $31 billion

It would be an understatement to say that 2020 has been a year of extremes, with the worst global pandemic in a century and profound economic shocks roiling markets. But a shred of economic optimism can be found within the data center industry, where the total value of mergers and acquisitions has beaten all records, according to Synergy Research Group. The previous record was set in 2017, when the total exceeded that of 2015 and 2016 combined. The $30.9bn sum was realized through 113 acquisition deals, despite the inevitable delay to a number of deals caused by Covid-19 - with more set to be completed before December 31st. The largest single deal was Digital Realty’s acquisition of Interxion for $8.4 billion - a deal struck last year but that closed in the first quarter of 2020 - and the total was boosted by a number of other transactions worth more than $1bn, as well as a $2bn secondary share listing. Throughout the 2015-2020 period, Digital Realty and Equinix were by far the most prolific investors, with massive


acquisitions under their belts including the $7.6 billion Dupont-Fabros merger in 2017 and Equinix’s $3.6bn purchase of Verizon’s data centers and $3.8bn Telecity deal. Other acquisitive companies include Colony, CyrusOne, GDS, Digital Bridge/DataBank, Iron Mountain, NTT, GI Partners, Carter Validus, QTS, and Keppel. Synergy’s chief analyst John Dinsdale called 2020 a “bumper year” for data center M&A activity, adding: “We are also aware of almost $7bn in deals and IPOs that are at various stages of closing, so the pipeline remains robust despite the flurry of activity in 2020. “This drive to find new sources of investment capital is being fueled by an almost inexhaustible demand for data center capacity.” Demand for data center services has done nothing but grow this year, a trend which was already in motion in 2019 but was compounded by the pandemic. Data center construction also flourished this year - again, despite inevitable roadblocks caused by the pandemic. bit.ly/ABreakingYearIndeed


Foxconn to build Google servers at controversial Wisconsin plant The factory was announced in 2017 by President Trump, who called it the “eighth wonder of the world,” but it has repeatedly been delayed, seen its purpose changed, and failed to hit job targets, despite record tax breaks.

US Commerce Department decides not to enforce TikTok ban President Trump originally signed an executive order banning US companies from working with ByteDance back in August, giving a deadline of September 20. He then signed off on a semi-forced sale that would see TikTok hosted on Oracle. But now it looks like that won’t happen.

CBRE: US data center market to grow by 13.8% in 2021 The first half of 2020 saw 134.9MW of wholesale data center space taken up across key markets (Northern Virginia, Dallas, Silicon Valley, Chicago, Phoenix, New York Tri-State, and Atlanta). 373MW of capacity is being built, including 239MW in Northern Virginia.


AirTrunk opens Singapore, Hong Kong data centers

Yotta Infrastructure plans $950m data center campus in Delhi, India

Yotta Infrastructure is developing a 20-acre hyperscale data center park in the Greater Noida region of Delhi for around $950m. The park will consist of six interconnected data centers, 30,000 racks, and a capacity of about 200MW. “The Hiranandani Group has demonstrated their vision of building a data center park in Uttar Pradesh even before we have formally launched our Data Center Policy and UP’s first data center park will unlock a lot of possibilities, which will play a key role in realizing the PM’s Digital India vision,” said Uttar Pradesh chief minister Yogi Adityanath. Work on the campus will begin next month and the first facility is expected to be operational before July 2022.

In July this year, the company signed an MoU with local officials in Chennai, India for another 200MW campus. Yotta and its owner, the Hiranandani Group, declared their intention to invest around 3,000-4,000 crore ($450m-$600m) over the next decade in the 13-acre campus. In May, Yotta also announced its 50MW Yotta NM1 data center in Panvel, near Mumbai, had received a Tier IV Design award. The 820,000 sq ft (76,180 sq m) facility is located on an 18-acre plot of land that will eventually grow to a campus of five data center buildings. bit.ly/ThatsaYottaMoney

AirTrunk has opened hyperscale data centers in Singapore and Hong Kong. These are the company’s first data centers to go live outside Australia, where the independent data center provider has three hyperscale facilities. When fully built out, SGP1 and HKG1 will offer a total of 80MW combined. A sixth hyperscale campus in Tokyo, Japan is currently under construction. SGP1 has a PUE of 1.25, while HKG1 has a PUE of 1.35, the company claimed. bit.ly/OverThereTrunk

Equinix announces SG5 - the tallest data center in Singapore

But another is being built that is even bigger

Equinix has announced its fifth data center in Singapore, a greenfield facility located at Tanjong Kling, previously known as the Singapore data center park. With a first-phase investment of US$144 million, the nine-story SG5 will be the tallest data center in Singapore when it opens in H1 2021 – at least until Facebook’s $1 billion, 11-story data center is ready in 2022. SG5 will provide an initial capacity of more than 1,300 cabinets in 1,710 sq m (18,400 sq ft) of colocation space in the first phase. At full buildout, the facility will provide up to 5,000 cabinets, with a total colocation space of close to 12,000 sq m (129,000 sq ft). Equinix says SG5 will strengthen its cross-island presence and location diversity: SG5 is within sight of SG2, SG1 and SG3 are located side by side, while SG4 is located in Tai Seng in the east. Excluding SG5, Equinix’s four facilities currently provide more than 43,500 sq m (468,000 sq ft) of colocation space. bit.ly/ThingsBeLookingUp


Whitespace

TerraScale to deploy Ambri liquid metal battery at Energos Reno Project

Clean infrastructure firm TerraScale plans to deploy Ambri’s Liquid Metal Battery technology at its planned data center campus in Reno. The news comes just days after TerraScale hired Google’s head of global data center operations, Michael Coleman, as its CIO. Set next to the Tahoe Reno Industrial Center - home to Switch, Apple, and Google data centers, as well as Tesla’s Gigafactory - TerraScale’s Energos Reno Project has equally lofty ambitions. The company hopes to build a 3,700-acre mixed-use development within ten years, which will include data centers and a logistics hub with 500MW of renewable power generated on-site and distributed by a microgrid. The site currently has a fiber optic trunk line installed, 23MW of geothermal power, and 10MW of solar power. In its first phase, the company aims to develop a 20MW modular data center and a 600kW pre-fab data center, in collaboration with undisclosed data center partners. bit.ly/ARenoGamble

Switch and Dell team up to build Edge data centers at FedEx locations

Starting with one site in Memphis

Switch and Dell plan to build Edge data centers at FedEx locations across the US. The companies are currently working on one project in Memphis, Tennessee, which is expected to go live early next year, and then plan to add modular Edge data centers to further FedEx sites. Switch will build a MOD 15 modular data center, with Dell servers, hyperconverged infrastructure, storage and networking products, on secured FedEx land. The MOD 15 is 15ft wide by any length, built in container pods and holding from 24 to 100 cabinets. The systems will connect to four main Switch data center locations, and primarily operate without need for human intervention. When maintenance or repairs are required, they will be carried out by Switch partner Vertiv.

“Teaming up with FedEx and Dell allows the three of us to create and demonstrate how enterprise customers can maintain independent control of their technology futures in the age of hybrid multi-cloud,” said Rob Roy, CEO and founder, Switch. Already a Switch customer, FedEx will serve as an anchor tenant in the Edge locations. The two companies previously sponsored the FIRST Robotics competition created by Segway creator Dean Kamen, and each operate similar-looking robots - Switch for security, FedEx for deliveries. The companies declined to comment on the nature of their robotic relationship. bit.ly/EdgeShippingNow

Peter’s liquid-metal factoid

Ambri batteries utilize a liquid calcium alloy anode, molten salt electrolyte, and a solid antimony cathode. Heated to 500°C (932°F), the cell melts, creating a liquid-metal battery.

ITRenew and Blockheating combine Edge data

Circular economy specialist ITRenew has teamed up with Dutch cloud host Blockheating to offer all-in-one containerized data centers that recycle waste heat for greenhouses. Blockheating makes 200kW Edge data centers in containers, using liquid cooling so the servers’ waste heat can be provided as water at 65°C (149°F) to nearby greenhouses. Under the partnership, the containers will be stocked with ITRenew’s Sesame rack-scale compute and storage systems,



made from OCP hardware which has been recycled from hyperscale players like Facebook. A containerized data center next to a greenhouse can heat two hectares in the summer and half a hectare in the winter, which Blockheating says is enough to grow tons of tomatoes every year. More importantly, the heat provided as a waste product from the data center is much cheaper than natural gas, cutting emissions. bit.ly/WarmsYourHeart


Supply chain attack on SolarWinds used to breach US government

Russia is thought to be behind the huge hack

IT systems of several US government agencies were breached as part of a widespread hacking campaign believed to be the work of the Russian government. Hackers were able to gain access to agencies including the Treasury and Department of Energy using a malicious software update introduced in a product from SolarWinds. Microsoft and FireEye were also hacked. The attack was discovered by FireEye during the cybersecurity company’s investigation into its own breach. Hackers added sophisticated malware to SolarWinds’ network monitoring software Orion in updates sent out to customers in March and June - among them FireEye. SolarWinds has more than 300,000 customers, including 425 of the US Fortune 500 companies. Among its customers are all five branches of the US military, the State Department, NASA, the Department of Justice, the Office of the President of the United States, the Federal Reserve, the National Security Agency, the Secret Service, contractors Booz Allen Hamilton and Lockheed Martin, and the top ten largest US telcos. Following the hack’s discovery, the Cybersecurity and Infrastructure Security Agency (CISA) issued a rare emergency directive calling on all federal civilian agencies to disconnect or power down SolarWinds Orion products immediately. bit.ly/ThisIsReallyBad


Whitespace

ICE plans $100 million in cloud spend on AWS and Azure

The US Immigration and Customs Enforcement agency plans to spend at least $100 million over five years on cloud services. The government body is looking for “cloud infrastructure hosting in AWS and Microsoft Azure environments,” despite protests by staff at both companies. ICE is seeking a “Solution Provider to provide access to FedRAMP authorized AWS and Microsoft Azure Cloud Service Provider (CSP) marketplace products and cloud based CSP resource offerings including Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) within the ICE Cloud,” according to a procurement notice. In 2018, more than 100 Microsoft employees signed an open letter to CEO Satya Nadella calling for Microsoft to “cancel its contracts with ICE.” Microsoft already provides cloud services for ICE to support “legacy mail, calendar, messaging and document management workloads” for $19.4m. bit.ly/Atrailofdeaths

CIA awards multibillion C2E cloud contract to AWS, Microsoft, Google, Oracle, and IBM

When you’ve gotta go, you’ve gotta go - even if you’re a data center

The Central Intelligence Agency has awarded its Commercial Cloud Enterprise contract to Amazon Web Services, Microsoft, Google, Oracle, and IBM. The C2E contract was previously revealed to be worth tens of billions of dollars over the next decade and a half. Under the award, each company will be able to compete for specific task orders at various classification levels for the CIA and the 16 other agencies in the intelligence community. Some orders will be small, while others are expected to be considerable. “We are excited to work with the multiple industry partners awarded the Intelligence Community (IC) Commercial Cloud Enterprise (C2E) Cloud Service Provider (CSP) contract,” CIA spokesperson Nicole de Haay told Nextgov, which first reported the contract award. “[We] look forward to utilizing, alongside our IC colleagues, the expanded cloud capabilities

resulting from this diversified partnership.” Since 2013, the CIA’s cloud computing needs have been met by Amazon Web Services, with the cloud provider also hosting much of the wider intelligence community in contracts worth more than $1bn. Under C2E, the CIA will shift to a multicloud posture, picking the best cloud provider for specific workloads. It will also pick cloud providers on behalf of the other intelligence agencies. The move is in contrast to the Department of Defense’s Joint Enterprise Defense Infrastructure contract, which was controversially awarded to just one company. Microsoft Azure was awarded JEDI back in October 2019, but the contract is currently being contested by AWS over claims of political interference. bit.ly/TheNewSurveillanceState

IBM reportedly plans to lay off 10,000 staff in Europe

IBM is reportedly planning to lay off 10,000 employees in Europe. The figures, first reported by Bloomberg, account for about 20 percent of its staff in the region. The UK and Germany are expected to be the worst hit. “Our staffing decisions are made to provide the best support to our customers in adopting an open hybrid cloud platform and AI capabilities,” a company spokesperson said. “We also continue to make significant investments in training and skills development for IBMers to best meet the needs of our customers.” The company announced a global firing spree in May that could total 20,000. Next year, it expects to spin off its legacy IT services business. IBM instead plans to double down on hybrid cloud services. We talk to IBM Cloud’s CTO about what this shift means for the company, and whether it can hope to compete with hyperscale giants on page 48. bit.ly/HireThemIfYouCan



MILLIONS OF STANDARD CONFIGURATIONS AND CUSTOM RACKS AVAILABLE

THE POSSIBILITIES ARE ENDLESS... MADE IN THE USA

www.amcoenclosures.com/data

847-391-8100

an IMS Engineered Products Brand


Cover Feature

Building an Internet for the Moon

NASA’s Moon-based LunaNet will be one step towards bringing the Internet to the rest of the solar system

Sebastian Moss
Deputy Editor

It's time for the Moon to get online. NASA is embarking on an ambitious project to build an Internet for the Moon, opening up its far side, setting the groundwork for human habitation, and preparing us for connected civilizations on Mars.

We don't have much in the way of operating technology on the Moon, but what little we have keeps in touch by direct communication with Earth. "So far, all of the existing lunar stuff has been direct-to-Earth," NASA exploration and space communications projects division architect David Israel told DCD. There are two notable exceptions - China's Chang'e 4, which achieved humanity's first soft landing on the far side of the Moon last year, and the Queqiao relay satellite it uses to keep in touch with Earth. "But for our missions, right now, we don't have any relay capability," Israel said.

That's a problem if the US ever wants to explore the far side of the Moon, which never faces Earth. Even on the side that faces us, craters and valleys can block direct line of sight, while the need for every mission and every device to have its own direct-Earth connection capabilities can limit their scope and ambition.

It's clear we need infrastructure to communicate with people and equipment on our nearest neighbor, because new missions are planned. After nearly five decades of neglect, the US plans to return to the Moon in a big way. The Artemis program will take the first woman and the next man to the Moon in 2024, if everything goes to plan, and the incoming administration doesn't make any changes. By the end of the decade, NASA hopes to have set up sustainable operations on the planetary satellite, with an eye to a

manned Moon Base. To pull all of this off, potentially along with fleets of rovers, sensors, and exploration projects on both sides of the Moon, direct-Earth communication for every system just isn't going to cut it. Instead, it needs the LunaNet.

"The way to think of the LunaNet is to substitute the word Internet for LunaNet, and then that's the mindset," Israel, who heads the project, explained. The plan is to deploy a whole interconnected network of lunar science orbiters, lunar exploration orbiters, lunar surface mobile and stationary systems, Moon and Earth orbiters that provide relay and PNT (positioning, navigation, and timing) service to lunar systems, lunar ascent and descent vehicles, and associated Earth ground stations and control centers.

"Once we have relays in place, that's where you have the ability to give the network connection to the lunar South Pole, or the lunar far side," Israel explained, with both regions currently set to be visited in 2024. Equally important is PNT, with GPS-like technologies crucial for human navigation and autonomous systems on the Moon.

Of course, putting anything on and around a body some 240,000 miles from us is prohibitively expensive, so a core aspect of LunaNet is to ensure it can be expanded modularly, bringing connectivity to areas only when it is needed. "The analogy that I use is that when the mobile networks started you could get your phone coverage when you were in the city," Israel said. "But when you went out to the country you didn't have coverage anymore. You didn't need a new phone, they just needed to put base stations out there. So the build up of the LunaNet is very analogous to the build up of mobile networks and the Internet."

Much like the terrestrial Internet, which sprung out of US military labs but has since grown into an international endeavor, the aim is to make LunaNet a joint effort. Standards are built from existing efforts by the Interagency Operations Advisory Group (IOAG) and the Consultative Committee for Space Data Systems (CCSDS), which include all major space agencies. "We could start to have different types of providers for LunaNet," Israel explained. "International partners, commercial partners, NASA things, European Space Agency things, etc., all part of the larger infrastructure that provides LunaNet service to build out the larger LunaNet."

The more LunaNet Service Providers and technology providers the better, Israel believes. "We need to make sure that this doesn't become one commercial provider with proprietary things where everybody that's going to the Moon has to buy this company's systems."

"No one organization owns LunaNet. The LunaNet community includes government, commercial and academic entities; it could eventually include individuals as well.”



In procurement documents, NASA notes that “LunaNet relies on standards and conventions to achieve interoperability among Service Providers and Service Users. As a result, no one organization owns LunaNet. The LunaNet community includes government, commercial and academic entities; it could eventually include individuals as well.” Part of that played out in late October 2020, when Nokia was awarded a contract to deploy a 4G network on the Moon by the end of 2022. Part of the Tipping Point program, the effort is "really its own thing," Nokia Bell Labs VP and head of the project Thierry Klein explained. "The ambition of the Tipping Point program is really to explore advanced new technologies to support future missions on the Moon or ultimately Mars." The $14.1m contract will deploy cellular technologies that are similar to those used by telcos on Earth, to see if they work just as well on the Moon. "So the mission is to put the equipment on a lunar lander, and what would be the base station and the network side of the solution is integrated into the lunar lander," Klein said. Then the lander will deploy a rover, which will act as the equivalent of a phone user down here. "A cellular link will be established between this rover and the equipment on the lander, and we will both explore short range surface communication at 1-300 meters as well as much longer range where the rover can go up to 2-3 kilometers away from the lander." While mission details are still being finalized, Nokia expects to have to run the cellular network for several weeks to validate if it is space hardened. But Klein is confident the system will do well, as work began long before the Tipping Point contract was awarded. "We've been working on this for several years," he said. "We have built a unit that is space hardened already," with the company putting it through trials akin to lunar operation. First there's the shock, vibration, acceleration of the journey, then there's the temperature changes, vacuum, and radiation to handle on the surface itself. "We put the equipment through those tests, as much as we can on Earth." Nokia built a simulation model to capture the RF performance on the Moon, where there are no buildings or trees, but there are craters and rocks. "And then we found a place on Earth that has similar lunar scape characteristics, the island of Fuerteventura in Spain,” Klein said. “And we set up our entire system exactly in a configuration that we would expect the system to be on the Moon, and validated from a communications

"Cloud computing and storage, and all this stuff which was enabled by networking protocols and then network access between Edges and devices, then becomes possible in our lunar scenarios.” perspective, from an RF perspective, as far as throughput, latency, coverage, and so forth is concerned." Should the Nokia trial prove successful, along with a possible 5G follow up, it could serve as the stepping stone to the wider LunaNet. “That's where if you're an astronaut, and you're cruising around on the surface of the Moon, then you get your network access through the equivalent of your cell

phone through the Nokia cell tower,” Israel said. “That's your LunaNet access point, and if your data is all sitting on the Moon, then maybe it all stays within that local lunar surface network. But if you're trying to get data back to Earth or back around to the far side of the Moon, then maybe it's going to go from that base station, and work itself straight back to Earth through a relay or any combination thereof - the same way



that the Internet traffic is kind of bouncing around."

Just like our earthly efforts, building a network will not just be about connection points, but require compute and storage. "Somebody could land something on the far side of the Moon, that takes raw data from all the different things around it," Israel explained. "All the sensors themselves don't have to be that smart, you could just have an Edge computing device on the Moon. Cloud computing and storage, and all this stuff that we see here which was enabled by networking protocols and then network access between Edges and devices, then becomes possible in our lunar scenarios."

Some of that lunar communication will likely be carried out with standard TCP/IP communications protocols, but for the perilous voyage to Earth that simply won't suffice. For all its comparisons to the Internet, back on Earth network infrastructure is primarily designed to be static. Data centers, submarine cables, and cell towers all stay put. LunaNet has to be designed for space systems moving at different orbital speeds. Satellites can come into view of potentially connecting systems, and move in and out of view within minutes, requiring rapid establishment and disestablishment of connections. Then there's potential radio signal interference and other issues that can cause data loss.

With this in mind, LunaNet will rely on the disruption tolerant networking (DTN) bundle protocol which uses a store-and-forward mechanism along with automatic retransmission to ensure data makes it to its destination. DTN came out of work started by NASA back in 1998 for an 'Interplanetary Internet' which, after a series of false starts and changed priorities, has reformed as a plan to build a Solar System Internetwork (SSI) with DTN at its core.

This is where LunaNet may prove to be most valuable. There is a huge scientific benefit in Moon operations - with the recent discovery of significant quantities of water showing just how little we know. But it also serves as an important staging ground for operations elsewhere.
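To make the store-and-forward idea concrete, here is a minimal sketch of a DTN-style bundle node in Python. It is illustrative only - the class, method names, and output are invented for this example rather than taken from NASA's DTN software - but it captures the core behavior: bundles are held in local storage whenever no contact is available, and are only released once the next hop takes custody.

```python
class BundleNode:
    """Toy model of DTN-style store-and-forward (names are illustrative)."""

    def __init__(self, name):
        self.name = name
        self.storage = []      # bundles waiting for a contact window
        self.link_up = False   # in reality, orbital geometry decides this

    def send(self, bundle, next_hop):
        self.storage.append(bundle)   # always store first
        self.flush(next_hop)

    def flush(self, next_hop):
        if not self.link_up:
            return                    # no contact: hold the bundle locally
        for bundle in list(self.storage):
            if next_hop.receive(bundle):      # custody acknowledged
                self.storage.remove(bundle)
            # otherwise the bundle stays queued for the next contact window

    def receive(self, bundle):
        print(f"{self.name} took custody of {bundle!r}")
        return True


# A far-side rover queues data while the relay is out of view, then forwards it.
rover, relay = BundleNode("rover"), BundleNode("relay")
rover.send("farside_sensor_readings", relay)  # stored, not sent (no contact)
rover.link_up = True                          # relay satellite comes into view
rover.flush(relay)                            # bundle is finally forwarded
```

A real bundle agent also handles expiry times, fragmentation, and custody signaling, but the store-until-contact loop is the part that distinguishes DTN from TCP/IP's assumption of a continuous end-to-end path.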

"Working out things at the Moon is difficult, but it's more accessible than going to Mars," Israel said. If we can hope to reach the goal of a man on Mars by the 2030s, such steps to iron out all the kinks will be vital. “If and when we do make it there, however, network connectivity becomes even more interesting. At some 140 million miles away, speed of light latency really starts to become apparent. "If you went to the Moon on vacation 20 years or so years from now, certainly exchanging emails and posting stuff on the web would be fine," Israel said. "A phone call

"If you're an astronaut, and you're cruising around on the surface of the Moon, then you get your network access through the equivalent of your cell phone through the Nokia cell tower." 14 DCD Magazine • datacenterdynamics.com

would be possible, but it would be difficult and annoying because of the few second delay." On Mars, however, "it's measured in minutes," with delays compounded by "data interleaving, where you store up a buffer of data, and you kind of shuffle it in a predictable way. It's a powerful way to help deal with certain types of errors in the system, but there's a penalty of buffering time." Should we build civilizations on Mars, as Elon Musk claims we will, its connection to Earth will be inherently slow. That will require more local storage and communication - and, yes, data centers on Mars. Such notions are a long way off, of course. Before we bring connectivity to Mars, Venus, or even asteroids, we must first focus on our closest neighbor. That’s no mean feat - and one that could unlock significant scientific advances in its own right. Connecting the far side of the Moon will help us explore space well beyond our solar


system, says Israel, by giving astronomers access to the clearest signals ever from space: "We don't have anything on the far side of the Moon, but that side has been the dream of radio astronomers forever, because it's like the quietest place,” Israel said. “All of the racket coming from Earth is blocked by the whole Moon." LunaNet will also expand the vision of agencies like NASA, Israel hopes. He pointed out that until the 1990s, software developers had to individually work out how to exchange data between computers for their projects. "Once the Internet hit, then suddenly, all those clever people didn't have to spend any brain power or time at all wondering how to get the data from here to there, they could just spend it all on just dreaming up their applications,” he said. "My goal with LunaNet is that it'll be just as enabling as the Internet was to the Earth. Once this whole network-based mindset gets into the user side, the people planning the missions, then there'll be all sorts of new types of missions and applications that just grow out of it.”
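As a rough check on the light-time figures above, here is a quick back-of-the-envelope calculation. The Earth-Moon distance comes from earlier in this issue; the Mars distances are the standard closest-approach and farthest-separation values, not figures from the article.

```python
# One-way, speed-of-light delays behind the "few seconds" and "minutes" quoted above.
C_KM_PER_S = 299_792.458   # speed of light in vacuum

def one_way_delay_s(distance_km):
    """One-way light-travel time in seconds."""
    return distance_km / C_KM_PER_S

moon = one_way_delay_s(384_400)           # average Earth-Moon distance
mars_min = one_way_delay_s(54_600_000)    # Mars at closest approach (assumed value)
mars_max = one_way_delay_s(401_000_000)   # Mars at its farthest (assumed value)

print(f"Moon: {moon:.1f}s one way, {2 * moon:.1f}s round trip")
print(f"Mars: {mars_min / 60:.0f} to {mars_max / 60:.0f} minutes one way")
# Moon: 1.3s one way, 2.6s round trip
# Mars: 3 to 22 minutes one way
```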

Buried lasers

Back in 2013, David Israel was part of a team testing a lunar laser communications system from the LADEE spacecraft. The test went perfectly, successfully demonstrating the ability to send 622 megabits per second from the Moon by laser. "It worked great," Israel said. "Unfortunately, our ride for the experiment was this mission that by design crashed into the Moon to stir up some dust to see what happens. So we took a perfectly functioning laser communication system and crashed it onto the surface of the Moon." For LunaNet, thankfully, there are no plans to crash it into the lunar surface. But, Israel joked, the buried comms system may prove useful in the future to a stranded astronaut like in the book and movie The Martian. "Maybe somebody will dig it up and try and get the laser to work again."

"If you went to the Moon on vacation 20 years or so years from now, certainly exchanging emails and posting stuff on the web would be fine. A phone call would be annoying."



CEO Interview | Chris Downie

Building a US giant

Chris Downie heads a US colocation brand with ambitions for hyperscale customers and the Edge. What's the common thread?

Flexential is a provider that handles all the major data center types, from Edge to hyperscale, across the US. Its leader, Chris Downie, is a man who has been connected with more billion dollar mergers than most.

Downie spent eight years at Telx, building the data center provider into a well-liked brand. After six years as CFO he stepped up as CEO in 2013. Then in 2015, Digital Realty came calling, in the midst of a major period of data center acquisitions. Digital bought Telx for $1.9 billion, a significant deal which marked Digital's entry into retail colocation.

Downie went to Peak 10 - a provider which had itself been recently acquired by GI Partners for around $900 million. With GI's backing, Peak 10 expanded in 2017, with the $1.7 billion purchase of ViaWest - an operator with 30 data centers. It was an interesting deal because GI Partners had previously owned ViaWest (jointly with Oak Hill), and sold ViaWest to Canadian telco Shaw for $1.2 billion in 2014. By 2017, telcos were getting out of data centers, and GI bought it back again, combining it with Peak 10 to create what became Flexential.

"GI Partners has been a big investor in the data center industry for a long time," said Downie - reminding us that GI seems to have been involved with every other major player: "They were a founding investor in Digital Realty, they created SoftLayer which was acquired by IBM. They owned Telx where I came from, which was ultimately folded into Digital Realty. We are fortunate to have them as a partner."


Peter Judge Global Editor


GI had a specific plan in creating Flexential, said Downie: "Peak 10 and ViaWest were large regional operators. Together, they provided a new market opportunity in that neither legacy company was positioned to serve national demand sets. There's a fair amount of folks that want multiple facilities in East, West, North and South geographies."

Three years into the merged company, he admits, "not everything clicked overnight. We went through a little over a two-year integration process, harmonizing both the platforms. Now we've got some large, capable and scalable operating systems that support the need to grow the business. As we round out 2020 all that integration is well behind us."

Of course, there's more to 2020 than that: "It's certainly been an interesting year. In Q1, I'd say things just sort of seized. It was very difficult for our customers to make decisions, because no one knew exactly where they were working or what was going to happen. Since then, every quarter has accelerated back towards normal. We're looking forward to 2021 - partly because 2020 will be behind us - but some of those things can't pause forever. They have to reengage, and early readings on the vaccine are hopeful."

The pandemic itself had surprisingly little impact on the data center environment: "Our staff worked 24x7 to watch over facilities, and our environments were deemed essential, so there were no complications from local protocols. It confirmed a fundamental part of our value proposition: resilience. Our environments are built to withstand natural disasters, so we've got the physical resilience, and process resilience to make sure these environments are not impacted through what could still be a difficult time."

Like other operators, Flexential restricted access to multi-tenant environments, and locked hyperscale suites for access by single tenants like AWS. "Our corporate personnel went remote, and we had to control foot traffic through the facility. We had to trust that we could do most of the things that needed to be done remotely." Since everything worked so well, will there be long-term changes? "I think so, but it's a bit too early to tell. We do a lot for some customers, but some like to do changes on their own."

Stepping back from that, the first thing that strikes us about Flexential is it wants to do everything, including Edge, colocation and enterprise, as well as selling wholesale space to hyperscalers. Does all that make sense? He clarifies: "We don't sell large hyperscale footprints, the 10-plus MW deals. Our largest deals tend to be 3-5MW.

We do sell to the hyperscalers, but it's more for their localized caching nodes, in multiple markets around the country. And those can be 500kW to 2.5MW, and the scale of our facilities is certainly adequate for those."

"I don't think anybody's deployed Edge at scale - but I also don't think 'Build it and they will come' is a smart Edge strategy. We're in a good position to be what I call 'tangible' on the Edge."

"In the last couple of years, hyperscalers have been advancing into Tier Two markets like Atlanta, Denver and Nashville," he says, and those deals need good geographical coverage and networking: "Neither legacy company [Peak 10 or ViaWest] could position itself well in front of the Microsofts, the AWSes or the Facebooks, but now our scale and reach provide a platform that is highly relevant to their requirements."

The network is important: "I'd say much of the service fabric came from the ViaWest side. And when I first looked at it, I thought it might have been a bit over-architected. But what I didn't appreciate is that it was effectively protected at scale. It's ultimately leased from Lumen and Zayo, but it was purchased at 100Gbps, scalable to 400Gbps - and that gives us the ability to provide it to our customers at a very competitive rate without having to go manage third parties."

The network also links in 16 carrier hotels: "I'd argue that there's not many, if any, that really have that comparable type of network, that really ties everything together and allows for federation to the number of carrier hotels that we have. It allows us to deliver virtualized services - private cloud, data protection, backup, and storage - in a uniform and consistent way across all 20 of our markets."

There's diversity within that: "The Hillsboro marketplace up in Oregon is a very good place for larger requirements, given its proximity to California, with the power and the network resources that are available up there. But we've also engaged with a lot of large enterprise requirements in not so technology-centric markets." These include graphics companies in Nashville and a financial institution in Minneapolis: "We have the scale to serve multi-megawatt requirements in several of our markets."

The hyperscalers, or "technology companies" as he calls them, have uniform requirements, and need more power: "The technology companies tend to be more

advanced, if you will. In the enterprise sector, they're density sensitive but they're not built with that same server structure."

Enterprises are likely to have different needs: "In the financial vertical, they tend to have very significant requirements on the security side. They'll want different physical security, a different entryway, and different access because they've got their own regulatory requirements that they need to solve for."

The Edge is a natural development for a provider with facilities in Tier 2 and Tier 3 cities along with a good network, and for Downie it's important to make it real: "I don't think anybody's deployed Edge at scale - but I also don't think 'Build it and they will come' is a smart Edge strategy. We're in a good position to be what I call 'tangible' on the Edge. There are people using the Edge word who are not in geographic areas that are more 'Edgy' than New York or Chicago, and nor do they have the ability to federate from remote environments. I do think it's not tangible if you don't have the network."

Flexential's Local Edge project is a partnership with cell-tower provider American Tower, which will own the locations where it is deployed: "They could put a modular data center unit just about anywhere, given the scale of their platform. That partnership really gave us the ability to establish some prototypes and begin to appreciate how to deploy in these environments."

It's practical work: "For us it was very important to make it tangible, otherwise it is just marketing." Flexential worked through use cases including the different Edge requirements of content delivery networks and enterprises, and the availability of power, fiber and backhaul networks - along with the crucial issue: How much will it cost for the end user to deploy?

"We are beyond the experimental stage. We are deploying, but the true market has yet to be defined," said Downie. But right now, he can walk into a room and say: "Hey, our Local Edge product can be deployed anywhere you think you need it. We can put it just about anywhere, we have the network that can support that workload, and those compute requirements. Let's work together and figure out where you want to go."



>Awards | 2020

Category Winners

In a year when everything else changed, DCD found a way to honor the industry's best projects and its most talented people

Edge Data Center Project of the Year

Winner: DUG GeoSolutions

This award recognizes unique and strategic approaches to housing IT at the edge that can act as an exemplar to the wider industry. DartPoints collaborated with broadband, content and infrastructure providers, local networks and municipalities to aggregate networks and create the Eastern Iowa carrier-neutral interconnection point. The project provided enhanced quality and performance for residents' WFH needs, bridging the digital divide with collaboration and innovation.

Enterprise Data Center Design Award

Winner: OCP Group In collaboration with Jacobs Engineering Group

This award recognizes innovation in data center design within the enterprise space. Often referred to as on-premise data centers, they make up a large part of the global data center inventory. Located at the Techpark of the Green City of Mohammed VI, the largest solar energy research site in Africa, this new Tier III/IV data center provides world class infrastructure serving Morocco and the African continent. It brings great flexibility to OCP, their customers and partners specializing in IT services and cloud computing solutions.

Multi Tenant Data Center Design Award


Winner: EcoDataCenter In collaboration with Total Data Center Solutions

This award recognizes innovation in greenfield and brownfield data center design within the fast-paced colocation sector. This unique facility from EcoDataCenter is climate positive - actively supporting carbon reduction. Every aspect of the design, construction and operation considers that as the main priority. The radical construction materials were critical in achieving a low carbon footprint for the site and contributing to the sustainable design.



Awards 2020 Winners

Hyperscale Data Center Innovation Award


Winner: Facebook In collaboration with Cundall, Fjernvarme Fyn


This award recognizes original designs or technological solutions within the hyperscale data center segment that will benefit the wider data center community. The Odense Data Center is one of the most advanced, energy-efficient data centers in the world. It uses innovative cooling, achieving a PUE of 1.12 (target design value), and is supported by 100 percent renewable wind energy. The recently announced expansion will also heat 11,000 local homes.


Energy Smart Award

Winner: Microsoft In collaboration with Power Innovations

This award recognizes the world's most energy-aware and innovative approaches to building sustainable digital infrastructure. This innovative project tested generators composed of hydrogen-powered fuel cells rather than traditional diesel equivalents. The resulting system responded to sudden load shifts, ran very quietly, and emitted only warm water vapor as exhaust. A 48-hour duration test showed the fuel cells, originally designed for use in automobiles, could perform the technical requirements of the generator task, pointing towards a smarter energy future.

Mission Critical Tech Innovation Award


Winner: Enel X In collaboration with Digital Realty

This award recognizes cutting-edge technology solutions from the world of critical power, cooling tech, monitoring and operational management systems. Digital Realty has innovated the way it consumes energy to support the Australian electricity grid and ensure reliability. The two Australian data centers are available to switch over to their UPS and backup generation, allowing additional capacity as part of a Virtual Power Plant (VPP), building resilience and reducing pressure on the utility.

Data Center Operations Team of the Year

Winner: Salute Mission Critical



In association with Cloudflare

This award recognizes teams convened for a special task, at any stage in the data center life cycle, which goes above and beyond day-to-day operations. To meet demand, Cloudflare turned to Salute Mission Critical to provide skilled personnel to help deploy, maintain, manage and secure their data centers. Salute's workforce development strategy - upskilling military veterans - provides the advantage of building a culture that has been tested in the most adverse environments and bridges the industry's growing skills gap.


Awards 2020 Winners

Data Center Construction Team of the Year

Winner: CyrusOne In collaboration with Mercury Engineering

This award is about recognizing the commitment and initiative shown by teams that deliver something better than expected. CyrusOne Frankfurt III (FF3) is a brand-new purpose-built data center covering 11,500 sq m (123,786 sq ft) at an ultra low PUE. It provides secure and resilient data center solutions within a key business hub.

Young Mission Critical Engineer of the Year

Winner: Vinodhkumar Sampathkumar, Cundall In collaboration with Power Innovations

This award recognizes the best of the next generation of mission-critical engineers - the community needs a new generation. Vinodhkumar has worked on numerous 20-200MW data centers developing solutions for maximum uptime, effective space utilization and reduced carbon footprint. He leads and supports projects from concept design stages through to construction.

Public Vote: Data Center Architecture Award

Winner: Yotta Infrastructure, Mumbai

This year's public vote category recognizes a data center that achieved a new level of design, so it doesn't just do the job, but it looks great as well. Hundreds of votes were cast worldwide to determine the winner of this Award.

Outstanding Contribution to the Data Center Industry

Winner: Peter M. Curtis In association with Cloudflare

Critical infrastructure operations safety standards have been part of Peter M. Curtis' DNA since his early teen years when he managed complex medical equipment for his ailing sibling. Surviving 9/11 then propelled Peter M. Curtis to focus his 30 years' experience in the mission-critical engineering industry to design and implement tools to protect critical infrastructure and public safety by establishing the best practices and standards for facility professionals. Since that day, he has devoted himself to the protection of critical infrastructure and to the defense of public safety and our country's assets. Professor at the New York Institute of Technology and Marist College, Curtis is the founder of PMC Group One and author of "Maintaining Mission Critical Systems in a 24/7 Environment."




Thank you to our sponsors who made the DCD>Awards 2020 show possible in this unusual year

Headline Sponsor


For more information on the awards winners, and on how to support the charity Chain of Hope, visit bit.ly/DCD_Awards



Battery Power

Andy Patrizio Freelance

When Will Data Centers Run 100% on Battery Power?

Despite the increased efficiency of battery power and renewable energy, data center operators still have diesel generators out back. That may be changing.

Data center operators have been mindful of their power consumption for some time. Just how mindful is open for debate - and very hard to ascertain since many countries don't break out statistics for data center power consumption. Some estimates suggest data centers account for as much as three percent of the worldwide electricity consumption, but the most authoritative study, done in 2010 by Stanford University professor Jonathan Koomey, put the worldwide draw at between 1.1 percent and 1.5 percent. Koomey and his colleagues have updated this since and found that the growth in energy demand is mitigated by increases in efficiency. Whether they adopt hydro, wind, or solar power, data center operators are striving to be as green as possible. One thing is for certain, however: data center operators want to get off fossil fuel as quickly as possible, but out back behind every data center is a diesel generator - normally a set of them - just in case. These diesel generators are there to

deliver energy for short periods when there is a power outage. That sounds like a job that could be done by batteries, but that is not going to happen overnight, according to Adam Kramer, executive vice president of strategy at US-based provider Switch: "Do we think eventually we will be able to use battery storage to replace that? Maybe. But our number one job is to make sure our clients are up 100 percent of the time."

The same goes for Equinix, the largest data center provider worldwide, according to Craig Pennington, vice president of global design. "I'm hoping that we can prove over the space of a few years that the only time those diesel generators ever run is to just do the monthly performance test to make sure that they're there if we ever need them," said Pennington. "I'm hoping that we can get to the point where we feel confident that we can build a facility without diesel."

Diesel lives… for now

Jennifer Cooke, research director for data center issues with IDC, doesn't see the transition away from diesel happening

“I'm hoping that we can get to the point where we feel confident that we can build a facility without diesel.” quickly. “Like anything in the data center space, obviously, it's mission critical. Changes will happen slowly, because it's more important to run the business,” she said. Having said that, customers are demanding change from operators, she said: “I do hear of more data center builders saying that their customers who are engaging with them to build data centers are increasingly asking, ‘Can we do this without a generator?’ A lot of that is coming from their desire to become a ‘cleaner’ company, a greener company.” At this point, customers are asking and operators are in the investigative stage. Most



operators are not actively shunning diesel generators yet. "I think that it will happen, maybe to some extent in shrinking, but probably not within the next ten years," said Cooke. One of the few exceptions is Google's battery project in Belgium, which we explore on page 44.

The UPS also lives on

Switch also made news last July when it announced a partnership with First Solar and Tesla to store solar power in Tesla Megapacks. Switch has been running its Nevada data centers on 100 percent solar power since 2016 but couldn't store energy gathered during daylight hours because the technology to store mass amounts of electricity wasn't ready. Tesla's Megapack is a beefed-up version of the Tesla Powerwall used in residences to store solar power for just a house. Each Megapack can hold up to three megawatt-hours (MWh) of energy and provides 1.5MW of inverter capacity.

Those Megapacks don't do the whole job though - Switch's uninterruptible power supply is hanging on for practical reasons, said Kramer: "We use lithium-ion Megapacks outside the data center, and the UPS is a lead acid battery inside the data center. The switching technology we deploy on our UPS batteries to provide dual conversion of the energy is better suited on a data system to maintain the resiliency inside the data center. They serve distinctly different purposes." But Tim Hughes, director of strategy and development at data center provider Stack, views the batteries as an opportunity to extend UPS systems rather than replace them. "I have looked at the idea of essentially just expanding the size of your UPS battery. I think you could view batteries as a potential functional augmentation of UPS versus a replacement of UPS," he said.

Next steps in battery power

Battery technology chemists continue to make impressive strides in lithium-ion technology. In October 2020, the US Energy Information Administration (EIA) issued a report that showed grid-scale battery-project costs in the United States dropped nearly 70 percent in just a few years: between 2015 and 2018, average project costs decreased from $2,152 per kilowatt-hour of storage to $625. But while a good advance, it still falls short of data center needs, said Hughes. "What they're solving is on a daily need level, but what happens when I need it for a week? We've seen events happen that are not common but do happen, in which we had data centers running on backup generators for days on end, that would have created pretty significant issues, because you


And last year, the Uptime Institute noted that lithium-ion is still unproven in data center scenarios compared to vented lead-acid (VLA) or valve-regulated lead-acid (VRLA) batteries, particularly when it comes to failure modes.

So there are a variety of efforts taking place in battery technology beyond just improving Li-ion. One of them is an Australian startup called Lavo Hydrogen Technology Ltd., which uses power from rooftop solar panels to produce hydrogen from water by electrolysis. The gas is stored in a metal hydride container and converted back into electricity when needed using a fuel cell. Lavo claims its energy unit will be able to hold 40 kilowatt hours (kWh) of energy, compared to 13.5kWh for Tesla's Powerwall. Tesla did not return requests for comment, nor did Bloom Energy or Lavo.

Another battery technology hitting the market is nickel-zinc (NiZn), promoted by a startup called ZincFive. ZincFive claims its NiZn Battery Energy Storage Solution is backward- and forward-compatible with megawatt-class UPS inverters, and offers rapid charge and discharge across a wide temperature range without the threat of thermal runaway. NiZn batteries promise a greater than 10-year operating life with minimal maintenance requirements, contributing to a low total cost of ownership relative to short-lived lead-acid batteries with high annual maintenance costs. They also check the sustainability box, with non-toxic, widely available materials that are easily recycled.

The space problem

While battery technology continues to advance, it is only part of the problem of powering data centers, especially hyperscale data centers, which can be the size of football stadiums. The biggest issue, argues Pennington, is having access to adequate solar power for these large systems, because the energy density of solar is relatively low. "To power a data center you need acres of space, at least four acres of land to generate a megawatt or store hundreds of megawatt hours," said Kramer.

Switch's Reno solar farm is expected to generate around 550MW.


By way of comparison, the Topaz Solar Farm in San Luis Obispo County, California, offers 550MW and covers 4,500 acres with solar panels. That comes out to 8.2 acres per MW. Ideal for the Nevada desert, perhaps, but a non-starter for data centers in large cities. "It's not like you could do that in downtown Frankfurt," said Pennington. "Even if you cover the entire roof of your data center with solar cells, you're not going to generate enough power to power the whole facility."

The Edge opportunity

While solar might be a tough ask in the city, battery power could be good for the urban Edge, where data centers are much smaller and don't need as much power, said Cooke: "If you think of Edge data centers, they're not going to be in a place where a lot of times people want something so noisy and smelly as a diesel generator, so I think we'll see a lot more different battery technologies and different backup power sources, as it shifts to more distributed locations, where it's just not feasible or allowable to have generators."

She continued: "I think local regulations on noise and pollution will prohibit diesel generators in many places. If it's very disruptive, like in a hospital or retail location, things that are quieter, less obtrusive, will be the preferred solutions."

As with so many new technologies, change will come slowly in the risk-averse data center industry. The technology will improve, but another driver is pushing the industry as well: the sustainability pledges providers are making. It seems the change will be forced as much as embraced.



Cooling Supplement

INSIDE

Frontiers of thermal management

> AI for efficiency: It turns out that heat removal is a great application for machine learning

> Quantum cooling: The frontiers of computing are pushing temperatures close to absolute zero

> Using waste heat: When you reach the limits of efficiency, the next step is to reuse your waste energy





Contents

28. Cooling fragmentation - The old consensus has gone, and each market has its own ideas on cooling
30. AI cooling: more than a pipe dream - It turns out that thermal management is a great application for machine learning
32. Advertorial: Data centers that make sense - With demands for higher density and efficiency, decisions are needed
34. Hot properties - Re-using waste heat has long been a dream, but there are practical difficulties
37. Cooling quantum computers - Even close to absolute zero, removing heat is still an issue


Cool operations

A cold data center may seem like one where the cooling is tangibly effective. If it feels chilly, that must mean the cooling systems are working!

But a cold data center is one where chillers are overworking and energy is being wasted. That's been the simplest expression of the efficiency drive of recent years. Facilities without chillers are driving PUE down ever closer to 1.0, but that's left the industry with a question: What do you do next?

Recycle your heat

Reusing waste heat is the next horizon of cooling. When PUE hits its practical limit, there's no way to further reduce the energy going into the facility. The only way to improve efficiency is to radically rethink what happens to that energy once it's been used. Fundamentally, cooling is the removal of a waste product: heat. Find someone else who wants that heat, and you've saved primary energy usage. Of course, it's not quite that simple in practice... (p10).

Fragmented cooling

The first thing to note is that data centers are no longer the standard monolithic beasts they used to be. The sector is developing in several different directions, and each one makes its own demands on the systems which keep it cool (p4). Hyperscale providers have giant uniform spaces to play with, and the economies of scale are producing their own specialized solutions like fan-walls. Meanwhile, colocation players have to deal with different densities in one room, while HPC providers are pushing to higher rack densities - both of which are creating an opening for liquid cooling.

Cooling quantum

When the next generation of computers comes in, will we still need to cool it? We're pretty sure the answer is yes. We spoke to the scientists developing quantum computers, the radical new paradigm that could change the face of computing. They operate close to absolute zero, applying advanced physics. And they still have to remove heat. Will we see the end of cooling? We think not (p13).

Smart cooling

Since Google claimed big savings with AI-driven cooling, smart systems have promised better environmental control than humans. It's time now to examine how much of this promise can be delivered in practice, either through predictive control or spotting patterns in the data (p6).

Future cooling

When data centers enter any new area, there are new challenges for cooling. In the pandemic, cooling systems have had to operate with less personal attention as travel has been minimized. That's going to be a valuable lesson for the future, as untended operation is exactly what's needed for the hotly-touted new sector - the Edge. In 2021, we expect to get good answers to Edge cooling. And to find new cooling problems. Join us on that journey.



Peter Judge Global Editor

Cooling: there's no longer one answer

Data centers used to be uniform. Today there are many different kinds of facilities - and an array of techniques to keep them cool

In December 2020, when Japanese giant NTT opened a data center in London, one big item of equipment was missing. Data center managers from a few years ago would have been surprised to see that the 32MW building in Dagenham has no air conditioning units. In the last few years, the old consensus on how to cool a data center has gone. And there are further changes on the way.

"The latest technology removes the need for compressors and refrigerants," said Steve Campbell-Ferguson, SVP design and engineering EMEA for NTT Global Data Centres, at the virtual launch event of the Dagenham facility.

This was not the first data center to be built this way, by a long chalk. In 2015, Digital Realty claimed that a 6MW London facility it built for Rackspace was the first in the UK to have no mechanical cooling.

And there are simple reasons why operators should want to move in that direction.


Data center designers want to reduce the amount of energy spent removing heat from the IT load in the building. Before energy conservation was a big concern, data centers were built with air conditioning units which could consume as much energy as the IT racks themselves. In the 21st century, this "wasted" energy became a key concern, and builders aim to reduce it as close to zero as possible, driving towards a PUE figure of 1.0.
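As a rough illustration of what those PUE targets mean for overhead energy - a minimal sketch using the standard definition of PUE, with illustrative values rather than figures from any particular facility:

```python
# PUE = total facility energy / IT energy, so the overhead (cooling, power
# conversion losses and so on) per unit of IT load is simply PUE - 1.
# The 2.0 and 1.2 values below are illustrative assumptions.

def overhead_per_mw_of_it(pue: float) -> float:
    """Non-IT energy drawn for every megawatt of IT load."""
    return pue - 1.0

legacy, chiller_free = 2.0, 1.2
print(f"PUE {legacy}: {overhead_per_mw_of_it(legacy):.1f}MW of overhead per MW of IT")
print(f"PUE {chiller_free}: {overhead_per_mw_of_it(chiller_free):.1f}MW of overhead per MW of IT")

saving = 1 - overhead_per_mw_of_it(chiller_free) / overhead_per_mw_of_it(legacy)
print(f"Overhead reduction: {saving:.0%}")  # around 80 percent, matching the figure below
```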


Replacing air conditioning units with more passive cooling techniques is one way of doing that, and can reduce the energy used in cooling by around 80 percent: NTT promised a PUE of 1.2 this year, while Rackspace claimed 1.15 five years ago.

The change does not just reduce energy consumption: it also reduces the amount of embodied energy and materials in the building, and cuts the use of refrigerants, which are themselves potent greenhouse gases.

This option doesn't work everywhere in the world: in warm or humid climates, there will be a large number of days in the year when chillers are needed. But there's a principle here. At the start of the century, it was assumed that there was one way to keep a data center cool: mechanical chillers driving cold air through contained racks of equipment. Now, that assumption has broken down.

Along with the drive to make data centers more efficient, there's another reason: data centers are no longer uniform. There are several different kinds, and each one has different demands. Colocation spaces, as we described, have a well-established path to reducing or removing the use of mechanical cooling, but there are other steps they may need to take. There are also newer classes of data center space, with different needs. Let's look at a few of these.

High Performance Computing (HPC)

Supercomputers used to be rare beasts, but now there's a broader need for high performance computing, and this kind of capacity is appearing in existing data centers. It's also pushing up the density of IT, and the amount of heat it generates, sometimes to more than 100kW per rack.

With efficiency in mind, data center operators don't want to over-cool their facilities, so there simply may not be enough cooling capacity to add several racks of this sort. Adding HPC capacity can mean putting in extra cooling for specific racks, perhaps with distributed systems that place a cooling unit such as a rear-door heat exchanger on the racks that need it. Alternatively, an HPC system can be built with a separate cooling system, perhaps using circulating fluid or an immersion tank, such as those provided by Submer, Asperitas or GRC.

Hyperscale

Giant facilities run by the likes of Facebook, Amazon and Google have several benefits over the rest of the world. They are large and uniform, often running a single application on standard hardware across a floorplan as big as a football field.

The hyperscalers push some boundaries, including the temperatures in their data centers. With the ability to control every aspect of the application and the hardware that runs it, they can increase the operating temperature - and that means reducing the need for cooling.

Hyperscalers Microsoft and Google were among the first to go chiller-free. In 2009, Google opened its first facility with no mechanical cooling, in Saint-Ghislain, Belgium. In the same year, Microsoft did the same thing in Dublin.

Giant data centers are cooled with slow-moving air, sometimes given an extra chill using evaporation. It has turned out that the least energy-hungry way to produce that kind of flow is with a wall of large, slow-turning fans. The "fan-wall" has become a standard feature of giant facilities, and one of its benefits is that it can be expanded alongside the IT. Each new aisle of racks needs another couple of fan units in the wall, so the space in a building can be filled incrementally.

Aligned Energy builds wholesale data centers, and makes its own Delta3 cooling system, a fan-wall which CEO Andrew Schaap describes as a "cooling array" to avoid trademark issues. It supports up to 50kW per rack without wasting any cooling capacity, and scales up. "No one starts out with 800W per square foot," Schaap told DCD in 2020. "I can start a customer at a lower density, say 100W per square foot, and in two years, they can densify in the same footprint without any disruptions."

Cooling specialist Stulz has produced a fan-wall system called CyberWall, while Facebook developed one in association with specialist Nortek.

Edge

Distributed applications like the Internet of Things can demand fast responses from services, and that's led to the proposal of Edge data centers - micro-facilities which are placed close to the source of data to provide low-latency (quick) responses. Edge is still emerging, and there will be a wide variety of Edge facilities, including shipping-container sized installations, perhaps located at cell towers, closets or server rooms in existing buildings, or small enclosures at the level of street furniture.

There's a common thread here - putting IT into spaces it wasn't designed for. And maintaining the temperature in all these spaces will be a big ask.

Some of this will be cooled traditionally. Vendors like Vertiv and Schneider have micro data centers in containers which include their own built-in air conditioning. Other Edge capacity will be in rooms within buildings, which already have their own cooling systems. These server rooms and closets may simply have an AC duct connected to the building's existing cooling system - and this may not be enough.

"Imagine a traditional office closet," said Vertiv's Glenn Wishnew in a recent webcast. "That's never been designed for an IT heatload." Office space air conditioning is typically designed to deal with 5W per sq ft, while data center equipment needs around 200W per sq ft - a gap quantified in the short calculation at the end of this piece.

Adding cooling infrastructure to this Edge capacity may be difficult. If the equipment is in an open office environment, noisy fans and aircon may be out of the question. That's led some to predict that liquid cooling may be a good fit for Edge capacity. It's quiet, and it's independent of the surrounding environment, so it won't make demands on the building or annoy the occupants. Immersion systems cocoon equipment safely away from the outside, so there's no need to regulate outside air and humidity. That's led to vendors launching pre-built systems such as Submer's MicroPod, which puts 6kW of IT into a box one meter high.

The problem to get over, of course, is the lack of experience in using such systems. Edge capacity will be distributed and located in places where it's hard to get tech support quickly. Edge operators won't install any system which isn't thoroughly proven and tested in the field - because every site visit will cost hundreds of dollars.

However, liquid cooling should ultimately be a good fit for Edge, and could even provide higher reliability than air cooling. As David Craig of another immersion vendor, Iceotope, points out, these systems have no moving parts: "Immersive cooling technology removes the need for intrusive maintenance and its related downtime."
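As promised above, a quick comparison of those per-square-foot figures - the 100 sq ft closet is an assumption for illustration:

```python
# Comparing the cooling designed into a typical office closet with the heat a
# small Edge IT deployment produces. The 100 sq ft closet size is an assumption.

office_cooling_w_per_sqft = 5   # typical office air conditioning allowance
it_load_w_per_sqft = 200        # typical data center equipment density

closet_sqft = 100
cooling_budget_kw = office_cooling_w_per_sqft * closet_sqft / 1000
it_heat_load_kw = it_load_w_per_sqft * closet_sqft / 1000

print(f"Closet cooling budget: {cooling_budget_kw:.1f}kW")
print(f"IT heat load:          {it_heat_load_kw:.1f}kW")
print(f"Shortfall factor:      {it_heat_load_kw / cooling_budget_kw:.0f}x")
```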



Smarter Cooling

Max Smolaks Contributor

AI for data center cooling: more than a pipe dream

Thermal management is emerging as one of the most promising applications of AI

Today, artificial intelligence can be found all around the data center – helping manage and protect the network, filtering alerts, and moving workloads. However, the industry has been slow to apply AI to the problems of operational technology, rather than IT - and specifically, to the realm of cooling, sometimes responsible for as much as a third of the overall power consumption of a server farm.

You have likely heard the story about Google using a deep learning-based recommendation engine developed by DeepMind to "consistently" reduce the amount of energy used for cooling in one of its data centers by 40 percent.

In 2018, the company went a step further and allowed the algorithms to make adjustments automatically, under human supervision.

Given the industry's concern over emissions, and its enthusiasm for AI, one might have hoped that this development would be in widespread use by now. Unfortunately, creating such systems requires a combination of deep technical expertise in data centers and cutting-edge machine learning research; we have to remember that DeepMind is a 'moonshot factory' with a seemingly unlimited budget and no commercial products. Since data centers represent the core of Google's business, the company is unlikely to share whatever it has developed with the industry.


It is up to established data center software vendors to introduce machine learning into white space management. The process has been slow, hampered by general distrust in AI tech, but in the past two years, promising case studies have been coming to light, outside Google as well as within. Data center operators are reporting that AI does indeed reduce the amount of energy they spend on cooling, shrinking their energy bills and their carbon footprint.

Real-time control of cooling equipment presents a fitting problem for machine learning models, since they can consider much more data in their decisions than human teams possibly could, and can produce solutions that might seem unconventional and even counter-intuitive.


AI to reduce the carbon footprint of 5G

Another interesting application of AI is being proposed by Nokia - the company has just launched the AVA Energy Efficiency service to help telecommunications providers reduce energy bills by up to 20 percent. The idea is fairly simple: AVA dynamically powers down parts of the radio network when traffic levels are low, helping conserve energy. In this case, machine learning is used to predict traffic levels based on historic data and maximize energy savings without compromising service quality.

There’s plenty of data to feed the models: unlike some other industrial environments, data centers are already chock-full of sensors, and can easily add more. At the end of 2020, German industrial giant Siemens published a whitepaper in which it highlighted some of the benefits of AI-based cooling in action. It said machine learning enabled cooling systems to adjust their output in real-time to match facility cooling needs to the cooling output as IT loads change. That’s a worthy goal, as it directly cuts energy use by avoiding the issue of overcooling – which is widespread in data centers. This industry loves to err on the side of caution. AI for cooling also minimizes the need for staff supervision and personnel on site, allowing employees to be assigned to other critical tasks and reducing the number of people who need to visit, which is important in a pandemic when site access is limited. Siemens’ own approach to AI for cooling combines two products: Demand Flow, which focuses on monitoring and control of chilled water delivery, and something called White Space Cooling Optimization (WSCO) – a platform that collects temperature and air supply sensor data and calculates the required adjustments in airflow to maintain the correct temperature for each aisle of racks.

In December, the platform was deployed to supervise cooling at the first Tier IV-certified data center in Paris, built for French state-owned bank Caisse des Dépôts. The facility is expected to operate at a power usage effectiveness (PUE) of 1.2.

Siemens' WSCO was developed in partnership with a fascinating company called Vigilent (formerly Federspiel Controls), a tiny Oakland-based firm that specializes in one thing, and one thing only - mission-critical cooling. Vigilent has developed (and patented) a dynamic cooling management system powered by supervised learning that can take control of equipment, just like the system developed by DeepMind. The software learns by continuously analyzing sensor data for environmental changes, basing its recommendations on historic behavior. It can establish the contribution of every single CRAH unit in the building, and point out the ones that are wasting their cooling efforts.

Vigilent promises similar levels of energy savings to those seen in the Google experiment, claiming an average 38 percent reduction in power spent on cooling across more than 500 installations. The company supplies machine learning tech not just to Siemens, but to a plethora of DCIM and BMS software vendors like ABB, Hitachi Vantara, and Schneider Electric.
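To show the general shape of that supervised-learning approach - and only the general shape, since Vigilent's actual models are proprietary - here is a toy sketch on synthetic data, using scikit-learn:

```python
# A toy illustration of the supervised-learning idea described above, not
# Vigilent's product. A model is fitted to (synthetic) sensor history mapping
# CRAH fan speeds and IT load to an aisle temperature; the learned feature
# importances hint at which units actually influence conditions and which are
# wasting their cooling effort.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_samples, n_crah = 2000, 4

fan_speeds = rng.uniform(0.2, 1.0, size=(n_samples, n_crah))  # per-unit fan speed
it_load_kw = rng.uniform(200, 400, size=(n_samples, 1))       # room IT load

# Synthetic "physics": units 0 and 1 do most of the real cooling, unit 3 barely matters.
cooling_effect = fan_speeds @ np.array([6.0, 5.0, 2.0, 0.2])
aisle_temp = 18 + 0.03 * it_load_kw.ravel() - cooling_effect + rng.normal(0, 0.3, n_samples)

X = np.hstack([fan_speeds, it_load_kw])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, aisle_temp)

names = [f"CRAH-{i}" for i in range(n_crah)] + ["IT load"]
for name, importance in sorted(zip(names, model.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:7s} importance {importance:.2f}")
```

In a real deployment the training data would come from the building's own sensor history, and any recommendation would be sanity-checked by operators before setpoints change.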

Another business leading the charge on AI for automated cooling management is Chinese conglomerate Huawei. Last year, the company launched iCooling, a cloud-based service that uses deep learning to process sensor data, find the relationships between parameters of different pieces of equipment and systems, and match the output of pumps, chillers and cooling towers to the IT load.

The company claimed that the service improved power usage effectiveness (PUE) by eight percent when deployed at one of its own cloud data centers. When China Mobile trialed iCooling, it shaved off 3.2 percent of the overall power consumption of its facility in Zhongwei, or over 400,000 kWh. The system is expected to produce even greater energy savings as it continues to learn from data.

The conversation about AI for data center cooling ties into the wider debate around AI for sustainability: a recent report from Capgemini Research estimated that innovative applications of artificial intelligence could reduce global greenhouse gas emissions by 16 percent within the next three to five years. Capgemini identified a number of positive AI use cases within the report, and noted that energy optimization platforms, and algorithms to identify defects and predict equipment failures without interrupting business operations, are among those set to make the biggest impact.

Following the success of AI deployments in other industries, AI for cooling and other elements of data center infrastructure management is the next big thing – it's just a matter of turning theory into practice. "AI promises to reshape data center operations in the coming years," Siemens warned in its whitepaper. "However, data centers need to prepare today to be relevant tomorrow."



Submer | Advertorial

Data centers that make sense

As the era of the smart data center approaches, there are three things to understand: densities and efficiency; sustainability; and scale and automation

The era of the smart data center is upon us. Born out of ever-increasing demand for digital services and a growing digital economy, the traditional data center can no longer support the business needs of a hyperconnected world. A smart data center is optimized at every level, from facility design to utilizing renewable energy sources, enhanced energy distribution and cooling systems, high density networks and robotics. Data centers are now faced with the decision of which technologies best align with their business objectives.

There are three pillars which must be considered when designing or constructing a smart data center:

1. Densities and efficiency
2. Sustainability
3. Scale and automation

Densities and Efficiency

Catalyzed by an increase in the use of AI, IoT, Edge and Cloud computing, the amount of data being consumed has grown exponentially over the past 10 years. This situation has only amplified since the start of 2020, when, as a result of the global Covid-19 pandemic, global internet traffic grew 40 percent.

Naturally, data-intensive workloads require more compute power, which in turn increases the amount of electricity used by servers and the amount of heat the servers produce. Under these conditions, the standard methods of cooling popularized when data centers first began are unable to meet the demand, and are highly inefficient.


Data centers require advanced cooling systems which are able to deal easily with the increase in power density. Immersion cooling offers data centers the ability to keep up with the increasing demand for higher power density, along with many other advantages, such as reduced CO2 emissions, lower power usage, and a better total cost of ownership. Additionally, liquid cooling offers protection from the surrounding environment, including dust, heat and water, all of which could damage the hardware.


In light of this, Submer has developed a number of products powered by liquid immersion cooling. The SmartPodX and SmartPodXL are the very first commercial immersion cooling systems designed according to OCP principles: compact, modular LIC solutions with more than 100kW of dissipation capacity and a PUE of 1.02, accommodating 21-inch and/or 19-inch hardware and a flexible busbar system supporting multi-zone 12V and 48V configurations.

Submer's most recent launch, the MicroPod, incorporates Submer's industry-leading technologies in a compact, modular, plug-and-play, data-center-in-a-box configuration that allows for fast deployment in any location - including direct sunlight - with an unrivalled energy footprint.

"The first step to creating a smart data center starts with a smarter infrastructure, that is, utilizing the space in the most efficient way possible," explains Daniel Pope, CEO of Submer. "With Submer's solutions, data centers can dramatically reduce space, while simultaneously improving their power density and efficiency. To put this into perspective, 100 racks with 10kW/rack is equivalent to just 10 Submer pods."

Sustainability

Data centers place a huge strain on the world's natural resources. According to a report by the US Lawrence Berkeley National Laboratory, US data centers will have consumed around 73 billion kilowatt hours of energy during 2020. Electricity consumed by data centers resulted in the release of 100 million metric tons of carbon dioxide (CO2) during 2020, according to the US Natural Resources Defense Council.

PUE also represents a huge challenge for data centers. As the world becomes increasingly conscious of the environment and human impact, data centers are under increasing pressure to reduce their global footprint - and this cannot be achieved using traditional air cooling methods.

The idea of a 'green data center' has grown in popularity, but just what does it mean? Characterized by the use of renewable energies and the reduction of power, water and land use, a green data center should not contain inactive or underused servers.

Submer's solutions have been specifically manufactured to enable data centers to operate as efficiently as possible. Immersion cooling allows for energy reuse: recovered heat can be used by local municipalities and other businesses, or sold back into the grid, enabling the idea of a circular economy to become a reality.

"The properties of Submer's Smart Coolant, when used within a system powered by immersion, empower data centers to be truly sustainable," says Peter Cooper, PhD, Chemical Engineer. "Our Smart Coolant offers zero waste of water, 50 percent savings on electricity consumption, and use of a certified biodegradable and non-toxic (for people and the environment) coolant."

Scale and Automation

In the past, human intervention was an integral part of the day-to-day running of a data center. Fast forward to today and, thanks to big developments in cloud computing, the complexity of those tasks continues to grow, placing growing pressure on data center staff to complete them at impossible speed. Automation can relieve IT professionals of monotonous daily tasks such as updates, security checks and file backups, and allow them to focus on other management tasks which help improve the data center as a whole. Gartner predicted that, by 2020, more than 30 percent of data centers that fail to sufficiently incorporate automation would no longer be operationally or economically viable.

Data center automation is predominantly implemented through software. In order to meet these needs, Submer has designed the Network Backplane. The Network Backplane is just the first step towards smart data centers: it has been conceived to offer a reference design compatible with traditional and OCP servers, facilitate autonomous serviceability of networking and server nodes, take intra-rack communication to the next level, and enable a new generation of ToR switch density gains.

Scott Noteboom, CTO of Submer, puts it like this: "When we launched the Smart DC project, we wanted to offer data centers, hyperscalers and supercomputers a series of solutions to help them improve their energetic and operational efficiency. Submer and Samtec, together with 2CRSI, have designed a solution that takes the cables off the table. Thanks to a robust blind-mated connector solution, this technology allows an endless range of possibilities."

Smart data center: the future is now

To conclude, smart data centers are the future. Whether you are ready or not, data centers must invest in smarter technologies in order to remain competitive and to meet the increasing restrictions and demands of governing bodies in relation to sustainability and efficiency. A smart data center is a data center that makes sense. Contact Submer today to see how our solutions can help you prepare for the future, today.

Find out more:

White paper: How thermodynamics influences density and efficiency
Use case: Practical benefits of immersion for HPC
The Submer Network Backplane
Request a demo



Waste Heat Warms Up

Hot property

Re-using waste heat from server coolers is not a new idea, but it remains mired in challenges



"In essence, the amount of waste-heat recovered compared to the electrical input will remain the same” – Erik Barentsen, a senior policy officer, energy and sustainability, at the Dutch Data Center Association

Graeme Burton Contributor

When Facebook was looking to Denmark as a key location in its network of global data centers, part of the brief for the designers was to maximize every possible aspect of sustainability. One item on that list was the idea of reusing waste heat from data center cooling systems - a proposal that has excited interest for some time, but has often foundered on the practical realities of implementation.

It doesn't take complex technology to convert waste heat into something useful.

The challenge is the expense, combined with the fact that data centers are often a long way from the locations that can actually do something useful with their second-hand energy. But that's changing, and Facebook is just one high-profile proponent, along with Amazon Web Services, Google, and others.

In Denmark, Facebook was able to secure a tie-up between its third facility, in Odense, and the local district heating company, Fjernvarme Fyn, to recycle warm air extracted from the 'hot aisle' of the data center. "The warm air is directed to our cooling units. This warm air is directed over a coil – cold water comes in, the air heats up the water, and the warm water is then piped across the street to the heat pump," says Lauren Edelman, energy program manager at Facebook.

Once the water reaches Fjernvarme Fyn, its temperature is boosted using a heat pump – powered by renewable energy – before it is delivered into the district heating network. This scheme is expected to recover some 100,000 megawatt hours of energy per year, which Facebook estimates will warm around 6,900 homes.

The drive behind such sustainability initiatives comes after 10 to 15 years of technology companies and data center operators pushing power usage effectiveness (PUE) down from above two – the US average is around 2.5 – to figures ever-closer to one, says S&P Global Market Intelligence senior research analyst Daniel Bizo.

Improving PUE means data centers consume less energy. Reusing their waste heat is an additional benefit, but it needs infrastructure. And, while the European Union has been pushing district heating schemes as an environmentally friendlier alternative to electric or gas central heating, these are not widely used outside of Germany, Scandinavia (excluding Norway), and a handful of other places.

The difficulty of finding alternative uses for data centers' waste heat is illustrated by Google's latest data center opening, in Middenmeer, the Netherlands.

While the company claims it has been able to radically slash power consumption per unit of compute power in its data centers, compared to the data centers it opened less than ten years ago, its waste heat re-use at Middenmeer doesn't currently extend any further than helping to heat the office space at the data center for its 125 employees.

Staying cool

Erik Barentsen, a senior policy officer for energy and sustainability at the Dutch Data Center Association, notes that there are basically three main forms of data center cooling. The first, direct air cooling, "is not really applicable for recovering waste heat," says Barentsen.

"The second is where you have computer room air conditioning, in which the air in the IT room is chilled," he adds. With servers arranged in 'cool' and 'hot' aisles, the exhaust air can be extracted, run through a heat exchanger and returned to the cool aisles, helping to lower air conditioning costs, as well as extracting heat for re-use. However, the typical exhaust heat from an ordinary air-cooled data center is only between about 25 and 35 degrees Celsius, he adds, limiting its value without the addition (and expense) of a heat pump to boost its temperature.

The third approach, says Barentsen, is liquid cooling. "Liquid cooling can be done either through immersed technology, where the whole system is immersed in oil and the oil itself is conditioned to a certain temperature, or you can use a closed-loop liquid cooling system," he says.

The main benefit of liquid cooling, adds Andy Lawrence, executive director of research at the Uptime Institute, is that servers can be run hotter and harder, while higher exhaust temperatures widen the scope for re-use. Using liquid cooling, server racks can also be more densely packed. "The exhaust heat is going to come out piping hot – above 50 degrees Celsius would be quite common – and using it for hot water or heating would make a lot of sense," says Lawrence.

But, notes Barentsen, liquid cooling doesn't improve overall data center efficiency – it merely makes exhaust heat re-use more viable.
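A rough sense of scale helps here. The sketch below uses the Odense figures quoted earlier plus textbook water properties; the 1MW example facility and the 30-degree temperature rise are assumptions for illustration only:

```python
# Back-of-the-envelope numbers for waste heat re-use. The Odense figures come
# from the article above; the 1MW example and the 30C water temperature rise
# are illustrative assumptions.

odense_mwh_per_year = 100_000
homes_warmed = 6_900
print(f"Heat delivered per home: {odense_mwh_per_year / homes_warmed:.1f}MWh a year")

# How much water does it take to carry away 1MW of heat with a 30C temperature rise?
heat_w = 1_000_000    # a 1MW IT load, nearly all of which ends up as heat
specific_heat = 4186  # joules per kg per degree C, for water
delta_t = 30          # e.g. 35C return water heated to 65C by a liquid-cooled loop
flow_kg_per_s = heat_w / (specific_heat * delta_t)
print(f"Water flow needed: {flow_kg_per_s:.1f}kg per second")
```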





"In essence, the amount of waste heat recovered compared to the electrical input will remain the same: 90 percent of the thermal energy that goes into a data center can be recovered," says Barentsen. "However, at least for the time being, the residual heat temperature will make a difference, because with liquid cooling the residual heat is easier to use in a district heating system."

Even then, there are challenges over how to re-use this resource if there isn't a friendly neighborhood district heating company willing to take it off the data center operator's hands. The US National Renewable Energy Laboratory, for example, used excess warm air to heat the pathways around its HQ in Golden, Colorado, in order to keep them free from snow and ice in winter - but that is scarcely an efficient use of a valuable resource.

A more practical alternative has been developed by Dutch tech company Blockheating, together with consultancy IT Renew. It has devised a containerized data center that can use liquid cooling to maximize heat capture, piping the result to commercial greenhouses – for which the Netherlands is famed – to help keep tomatoes and bell peppers growing throughout the autumn and winter months without the use of gas.

Its 200kW Edge data centers use liquid cooling – enabling more compute power to be packed into a relatively small space – and deliver their waste heat as water at a toasty 65 degrees Celsius. However, the demand for such Edge data centers next to greenhouse facilities is likely to be highly niche, and while gas prices are low it's unlikely to gain much traction, suggests Barentsen.

Tighter regulation

What may help drive data center exhaust heat re-use is a combination of the broader corporate push towards carbon neutrality and sustainability – especially among well-heeled organizations that can most easily shoulder upfront expenses – and regulation, particularly in the European Union.

For around the past decade, the EU has been pushing member states to implement district heating schemes, providing funds for start-ups and arguing that district heating is more efficient and less carbon-intensive than either electric or gas central heating.


Indeed, part-funding from the EU is behind a district heating scheme in Dublin, Ireland. South Dublin County Council – under whose authority the Castlebagot 'digital business hub' falls – has established its own publicly owned energy company, called Heatworks, to pipe heat from data centers in the hub to the newly established Tallaght District Heating Network.

More recently, there have been calls for tighter regulation of the data center industry, especially following a December 2020 United Nations report claiming that carbon emissions from the construction and operation of buildings now account for 38 percent of total global energy-related CO2 emissions. Heating (and cooling) buildings around the world is responsible for just under 10 gigatonnes of CO2 emissions, it claimed.

But for the time being, warns Lawrence, proponents of liquid cooling need to convince an industry geared towards air cooling that it is the way forward. "The case for liquid cooling is quite strong, but… every designer knows air systems. There's lots of equipment out there, lots of standardized designs and it's cheaper in terms of capital outlay," says Lawrence.


A Quantum State

Cooling quantum computers

Keeping your qubits stable requires some of the most extreme cooling equipment around

Sebastian Moss Deputy Editor

As most of the cooling industry braces for ever-denser racks and debates how to cool 1000W processors, there is one sector struggling to handle chips that consume barely any power.

"On our quantum chip, power consumption is very, very low," Intel's Jim Clarke told DCD. "We don't measure it in terms of watts, we measure it in terms of how much does our system warm up during processing," the company's director of quantum hardware explained. That's because even small temperature increases can render the entire system unworkable, Clarke said. "Typically, we are trying to avoid a warm up of even 10 millikelvin."

For the superconducting quantum computers under development by companies like Google, IBM, and Rigetti, or the silicon spin qubit systems at Intel, the enemy is noise. To keep systems in a quantum state, designers have to minimize the risk of anything disrupting that fragile state.

The slightest temperature increase can mean that atoms and molecules move around too much, potentially causing a quantum bit (qubit)'s voltage to spike and flip from one quantum state to another. "Quantum chips have to operate at very low temperatures in order to maintain the quantum information," Clarke said.

To do this, Intel uses cryogen-free dilution refrigerator systems from specialist Bluefors. The refrigerator features several stages, getting colder as you go down - all the way "down to temperatures just a fraction of a degree above absolute zero - that is really cold. In fact, it's 250 times colder than deep space," Clarke said. "We use a mixture of helium isotopes as our refrigerant to get down to these very cold temperatures, in the tens of millikelvin."

While the refrigeration system can bring temperatures down to extremes, it can't remove heat very quickly - so if you have a chip in there that's creating a lot of heat, you're going to have a problem. "You're probably familiar with the power dissipation of an FPGA," Clarke said.

"This would overwhelm the refrigeration cooling capacity. At the lowest level of a fridge, you typically have about a milliwatt of cooling power. At the four Kelvin stage [higher up in the fridge], you have a few watts."

Future fridge designs are expected to improve things, but they are unlikely to massively increase the temperature envelope. "That imposes limitations on the power dissipation of your control chips."

Beyond the quantum chip itself, quantum systems need other hardware to make them fully-fledged computers. Some of that can be handled outside the fridge, with most quantum computers paired with a few racks of conventional servers. But other parts, particularly the control chip, need to be inside the fridge itself - which means yet another thing that needs to work under extreme conditions, while not giving out too much energy itself.

"Controlling the quantum chip is actually pretty difficult, and that's what we do with Horse Ridge," Clarke said. Its second-generation Horse Ridge II is a CMOS chip "with more than 100 million transistors, produced on our 22-nanometer node."
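A simple budget check shows why an FPGA-class controller cannot live at the coldest stage. The per-stage cooling powers follow Clarke's figures above; the chip dissipation numbers are illustrative assumptions, not Intel specifications:

```python
# Comparing chip dissipation with the cooling power available at each fridge
# stage. Stage budgets follow Clarke's figures; chip numbers are assumptions.

stage_cooling_w = {
    "mixing chamber (tens of millikelvin)": 0.001,  # about a milliwatt
    "4 kelvin stage": 3.0,                          # a few watts
}

chips_w = {
    "typical FPGA": 20.0,         # assumed tens of watts
    "cryo-CMOS controller": 0.3,  # assumed sub-watt budget for a control chip
}

for chip, dissipation in chips_w.items():
    for stage, budget in stage_cooling_w.items():
        verdict = "fits within" if dissipation <= budget else "overwhelms"
        print(f"{chip} ({dissipation}W) {verdict} the {stage} budget of {budget}W")
```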



The hardware, revealed this month, is verified as operating at 4 Kelvin, but the company hopes to push that in the years ahead. "Going forward, you would probably focus not only on additional capability, but on additional power optimization of Horse Ridge inside the fridge," Clarke said. "And you would probably also focus on improving the cooling capacity of the fridge, which our external suppliers and the quantum ecosystem is working on."

But Intel has taken a slightly different route than some of its quantum competitors. Like Google and IBM, it has worked on superconducting qubit systems, including its 49-qubit Tangle Lake processor, but in recent years it has started to shift to researching spin qubits with partner QuTech.

A debate rages about which approach is best, but one huge advantage of spin qubits, which more closely resemble existing semiconductor components, is that they are expected to have a much higher operating temperature. Instead of 20 millikelvin, they can run at around one kelvin. That might not sound like a huge difference, but, as Clarke puts it, "believe it or not, it makes things tremendously easier."

Spin qubits are also far smaller than superconducting ones, with a square millimeter theoretically having enough room for up to a billion spin qubits. But we're a long way from that, and current efforts to build more powerful quantum systems will require bigger fridges.

"I think the form factor of the fridge will be bigger than it is now," Clarke said. "Right now, it's about the size of a keg of beer. When I think about what it would take to get to a million qubits, you still aren't at a point where you're worried about the form factor of the fridge or the system - it's not going to be a similar size as a typical supercomputer, with rows, rows, and rows of racks.

"It may, in the end, look like one of the gas cylinders you might see out of a hospital, that sort of size, but that's still, I think, quite manageable from a form factor in the data center."

Others think they could do better, arguing that we could eschew giant fridges entirely. "We run at room temperature," said Peter Chapman, the CEO of quantum computing startup IonQ.

"Yeah, there's cooling, but it's laser cooling, and it's on individual atoms. How much energy does it take to cool 40 atoms?"

Instead of a whole rig to cool a quantum chip, IonQ posits that you could just cool the atoms you need to be in a quantum state. In laser Doppler cooling, atoms are surrounded by lasers which are tuned to a frequency slightly below an electronic transition in the atom. Should an atom move towards one of the lasers, the light is blue-shifted to where the atom can absorb it, slowing its motion. The atom then emits a photon in a random direction. For trapped ion quantum computers - which are also being developed by companies like Honeywell - to work, ions (charged atoms or molecules) need to be approximately stationary, operating at a temperature of about a microkelvin.

To do all this at the atomic level sounds immensely complex, but Chapman's company is able to take advantage of decades of research from an adjacent industry. "The secret is, what we're doing is based on atomic clocks - it's the same exact technology. We've managed to push an atomic clock onto a chip, and that's used by many industries."

The company in October unveiled a 32-qubit chip. According to one benchmark, that is the most powerful quantum computer ever made - but quantum computing measurement is a whole other feature entirely. Earlier this year, the company opened its very own data center to test out its three main quantum systems, and offer remote access to researchers. By 2023, it hopes to be in other data centers - at the rack level. "Our goal is to get it to a rack-mounted system," Chapman said.

He pointed out that the company's approach vastly differs from the superconducting system used by IBM (which, according to a different benchmark, claims to be the world's most powerful quantum computer). "IBM said that they could no longer buy a dilution refrigerator large enough," Chapman said. "So they had to go start building their own - they released a picture a couple of weeks ago, and it looked like a kind of a missile silo. The exact same thing at IonQ is the size of a half-dollar.

"Quantum chips have to operate at temperatures just a fraction of a degree above absolute zero that is really cold. In fact, it's 250 times colder than deep space.” 38 DCD Supplement • datacenterdynamics.com

"And they acknowledge that you would have to get to hundreds of those things, so for them, you would have to get to be the size of a football field. All at damn close to zero degrees. There's probably not enough helium in the world to do those things."

Laser cooling wouldn't work for IBM or Intel's approaches, Clarke said. "You're not only worried about what temperature you can get, but also cooling power. It's a question of how many watts of power can you remove from the system and still maintain that temperature.

"And so with Doppler cooling, the question to ask is not what temperature you can get to but how much cooling power you have. I think that for these systems, you're primarily looking at the helium dilution refrigeration technique."

As for which system approach is better, well, that's a rather contentious matter among academic and corporate circles, with the winner potentially netting billions and changing the face of computing.

But Amazon Web Services, the dominant cloud provider, doesn't mind that there are so many different approaches. That just means more options it can sell.

"There's a real open field in the industry, in terms of which of these machines will be most suitable to build a long term powerful device," said Richard Moulds, the general manager of AWS' quantum computing service Braket. "There are a dozen or so companies that are building quantum machines, essentially competing with different technologies to prove that their particular chosen path is superior to another. In the end, it might turn out that a single technology for building a quantum computer doesn't necessarily win the race, it might turn out there are multiple different technologies that are each well suited to different applications."

With Braket, the company offers cloud access to several of the approaches, including Rigetti (gate-based superconducting qubits), D-Wave (quantum annealing, operating at 15 millikelvin), and IonQ. Signs point to Amazon also developing its own system, something it would not comment on. It's also worth noting that Amazon invested in IonQ (along with Google Ventures), and Peter Chapman was previously the head of Amazon Prime.

"Braket was the first time that an IonQ machine was publicly accessible," Moulds said. "And we expect to increase the range of technologies that will be available to the Braket service over time - our goal is to make all the different ways you can build a quantum computer available to our customers."

Each approach will have different cooling needs, AWS director of quantum computing Simone Severini told DCD. "For example, light can be used for quantum computing. Take a photon, it is a type of qubit because you can prepare it in two distinguishable states by polarizing the photon. And they of course are around us, so they exist in an environment that doesn't require extremely cold temperatures."

This technique was recently used by Chinese state-backed researchers to develop a highly-specialized system capable of achieving quantum supremacy: that is, pulling off a calculation no conventional computer could do. The group claimed it was 10 billion times faster than Google's system, which achieved supremacy last year, but both are only useful for one specific calculation.

"So in general, the thing to keep in mind is that there are many different approaches to build quantum computers," Severini said. "Some of these approaches will require a cryogenic environment, some of these will not. All these approaches are equally interesting, and all these approaches are currently subject of research."

Whether cryogenic cooling proves to be the de facto approach to quantum computing or not, Clarke notes that there are other benefits to such low temperatures. "Even our conventional transistors have quantum effects," he explained. "And by studying something at low temperature, you remove a lot of the variability or noise and you can really start answering fundamental questions about your device."

The company built a 'cryoprober' with Bluefors and Afore, a fast electrical characterization tool that can operate in the quantum regime at about a kelvin. That means the company can probe how its systems - be they quantum chips, controllers, or other hardware - operate at extreme temperatures.

"It allows us to get massive amounts of information in a very short time, we think by 100× or 1000×," Clarke said. "It's gonna take something like that to really accelerate quantum computing, but it will serve a purpose as we try to understand the performance of our more conventional devices."

Advances in extreme cooling technology funded by the quantum computing industry could also open the door for another 'it'll-be-big-in-10-years' technology: superconducting supercomputers. First posited by electrical engineer Dudley Buck, the idea of taking advantage of the lack of electrical resistance exhibited by superconducting materials at near-zero temperatures has been a goal of the National Security Agency for decades. But, despite hundreds of millions being sunk into various public and top-secret projects, efforts have struggled.

"There's the idea of using superconducting logic," Clarke said. "So it looks something different than a typical transistor, but because of superconducting logic, the question would be, can you essentially have a device that has almost no power dissipation?"

Low temperatures can offer benefits even to traditional computing systems, Clarke said. "If you take a transistor, and you operate it at very low temperature, perhaps with some process optimization, it's probably either 2× faster or 2× power efficient."

Currently the effort it takes to get that improvement isn't worth it, but some argue that once costs come down it could make sense. "I think that the jury is still out on that," Clarke said, adding that the cryoprober could help that jury make a decision. "We built this device for this tool for quantum computing, but it will serve a purpose as we try to understand the performance of our more conventional devices."

As for the quantum computer itself, it may take up a large amount of space, and it may require expensive and complex cooling systems. That's okay, Clarke said: "If it provides you something that no computer on Earth can provide you, then it's worth the space."




Grease lightning

Don't let the hardware let you down

If you want your data center to be reliable, you may need to use grease

Peter Judge Global Editor

A data center is a complex system comprising technology and the humans that run it. Increasing reliability is a combination of many things. You need a good design, with no single points of failure. You also need to make sure your staff are well trained, to minimize the risk of human error. And you need to maximize the fundamental reliability of your hardware.

Research suggests that there's an upper limit to reliability, and no systems can be expected to work continuously for more than 200,000 hours (more than 22 years).

But the only way to get anywhere near this figure is to address the mechanical issues that will eventually bring down any hardware. Preventing hardware failure is a combination of making sure you have reliable, high-quality equipment, and taking all steps to make sure it doesn't wear out. Most of the hard work is going to be done by vendors, to ensure kit is reliable before it arrives, but once inside the data center, it's up to the owner of that equipment to take care of it.

"Maintenance is an insurance policy," says Brian Kinkade, market development manager at Nye Lubricants. "It makes sense to do it."

While most data center technicians already keep a close eye on temperature, humidity, and dust in the circulating air, Kinkade says the industry should also be aware of physical wear and tear. His company specializes in preventing it, and his social media slogan is: "Solving reliability and performance challenges with grease."

Data centers include plenty of moving parts, and also some non-moving parts which benefit from lubrication, says Kinkade, who has been developing business in the sector recently.



To keep data centers up and running and minimize unplanned downtime, engineers have to know about potential failures and address the issues, he says. Cooling fans and hard drives need specialized lubrication, and so does the large-scale mechanical and electrical plant, including cooling units, air handlers, and diesel generators.

Surprisingly, some of the more static parts of the data center also benefit from lubrication, Kinkade explains. The part of the data center which first got him involved in the sector is the busbar. Busbars deliver power from the electrical room to the racks and servers. Although they're stationary, they can still suffer from "fretting corrosion," where a steady period of micromovements produces wear on the contacts. The wear can remove plating on the contacts, exposing the underlying copper, which can then be oxidized. The oxide layer acts as an insulator, making the power connection less efficient, and eventually causing it to fail.

A lubricant film prevents that by minimizing contact between the metal surfaces, says Kinkade. It's standard practice in high-vibration applications such as motor vehicles, but he has found from experience that data center operators need to consider it on their busbars.
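Why a degraded contact matters can be seen with a simple I²R estimate; the current and resistance values below are illustrative assumptions, not measurements from Nye or any operator.

```python
# Heat dissipated at a busbar tap joint before and after fretting corrosion.
# The load current and contact resistances are illustrative assumptions only.

current_a = 400                 # assumed load current through the joint
clean_contact_ohm = 50e-6       # assumed healthy contact resistance (50 micro-ohm)
corroded_contact_ohm = 500e-6   # assumed resistance after plating wear and oxidation

for label, resistance in [("clean", clean_contact_ohm), ("corroded", corroded_contact_ohm)]:
    power_w = current_a ** 2 * resistance  # P = I squared x R
    print(f"{label:8s} joint: {power_w:.0f}W dissipated as heat at the connection")
```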

In normal operation, most data centers have little serious vibration - though this is something that might change when Edge roll-outs start placing some near subway stations (see box) - but to push reliability high, lubricant on busbars can be good insurance.

But there's another issue: to make the most of data center technicians, many organizations have moved to "rack and roll" installation, where racks are populated with equipment in a central factory, then shipped to the data center for installation. "When they plan on shipping racks fully loaded to the data center, operators and OEMs should do a transport, shock and vibration test," says Kinkade.

The vibration and shock during transport may be more than the equipment is designed to handle - and applying a lubricant during assembly may be a way to ensure the equipment isn't harmed during transport. At this point, the operator needs to look at the warranty, he says: "It's a certain bunch of agreements as to where some equipment will work and for how long."

The connections between a server and a busbar will be designed to support a certain number of insertions, and a certain amount of vibration and shock.

busbar will be designed to support a certain number of insertions, and a certain amount of vibration and shock. Servers and switches are mostly designed to be inserted in a stationary rack, not inserted in a rack which is transported. It’s conceivable that shipping a rack with the servers inserted might exceed the lifetime expectation for the physical wear on those contacts. As rack and roll has become standard, the OEMs have understood how to handle the equipment, and Kinkade believes hardware vendors have taken it into account, perhaps thickening the plating on contacts.

POWER TO KEEP YOU CONNECTED Protect your critical data with backup power that never stops. Our priority is to solve your data center challenges efficiently with custom continuous, standby, and temporary power solutions you can trust to keep you connected. Our trusted reputation and unrivaled product support demonstrate the value of choosing Caterpillar. For more information visit www.cat.com/datacenter. © 2020 Caterpillar. All Rights Reserved. CAT, CATERPILLAR, LET’S DO THE WORK, their respective logos, “Caterpillar Corporate Yellow”, the “Power Edge” and Cat “Modern Hex” trade dress as well as corporate and product identity used herein, are trademarks of Caterpillar and may not be used without permission.


Grease lightning

Servers are mostly designed to be inserted in a stationary rack, not in a rack which is transported - shipping a rack with servers might exceed the lifetime expectation for the physical wear

“There are still people that choose to ship stuff separately and install it at the data center. There is a mix,” he says. It will be down to the customer to make sure that the equipment they use is up to the job it is being asked to do. Operators should keep careful track of the tolerances - to make sure that the equipment is sufficiently rugged - and apply any extra lubrication that is needed. Another point about lubricating those contacts is that applying more lube can mitigate existing damage to connectors and prevent further fretting corrosion. Lubricants designed for the automotive industry are a good choice, as a car is an environment with plenty of vibration.

Nye also provides lubrication for cooling fans. Their bearings are sealed inside the unit, and the best kind are sintered bearings, which consist of pressed metal powder whose pores are impregnated with a lubricant designed to keep the fan lubricated for its entire lifetime. That lubricant should include antioxidants and be rated to work at high temperatures. Because the bearings are sealed, the lubrication is down to the fan manufacturer - and the same goes for hard drives, which have fluid dynamic bearings to allow the spindle motor to move freely without friction and wear. As well as the right physical properties, that lubricant should have the right thermal range, so it doesn’t oxidize and create debris.

Most of the time, apart from busbars and connectors, lubrication is handled directly by the equipment vendor, and is a one-off intervention during manufacture. But there is one exception - and it may be on the increase. As the industry becomes more focused on its environmental impact, moves to refurbish and reuse data center equipment are coming to the fore. “If there’s a refurbish in the business model, where last-generation servers are repurposed, you could want some grease in that process,” says Kinkade. “As an insurance policy to minimize failures, it might make sense to do it anyway. You would also want to test and measure it to make sure it works.”

And of course, any reuse process is going to involve transportation again. If equipment is re-shipped loaded into a rack, that will introduce further vibration and wear, and might call for further grease. It is worth underlining this point: IT people may focus on refurbishing the hardware and clearing any data by scrubbing memory and drives, but the physical danger to IT systems might only be obvious to someone who lives and breathes lube.

DCD>Operations Supplement: Upgrading and retrofitting data centers

Out now

This article featured in our free digital supplement on data center operations. Read today to learn about how data centers can reduce carbon emissions in concrete, reduce water waste in cooling, and find out what the cyber security risks of a humble chiller are. bit.ly/DCDOperations

Also out now: the DCD>Magazine Artificial Intelligence Supplement. Read today to learn about rack density, deep learning technologies, the role of CPUs in inferencing, the quest for fusion power, and much more. bit.ly/AISupplement

Maintaining the Edge

“People aren’t putting data centers next to subway stations,” says Kinkade - but he might be wrong there. Edge data centers are emerging which have to be positioned close to the sources of data and the users of that data, to support applications. This means that small facilities are being proposed at sites such as the bases of cell towers, and in urban environments.

Some of these micro facilities could end up close to subway stations, or in other locations with environmental hazards, and concerns have been growing that those installing them may not have taken the new environment into consideration. Edge providers should be careful not to assume that the maintenance routines applied in large data centers will transfer unchanged to the Edge.

In a recent technical bulletin, ASHRAE - the group which specified much of the best practice used in large data centers - examined how small Edge facilities should be treated differently. As well as the possibility of vibration, Edge data centers may get more exposure to outside atmospheric conditions. They are likely to have only one door between the equipment and the outside air, so any maintenance visit could expose the electronics to condensation and dust, including gypsum and salts.

“If they get down inside a contact, like a DIMM or a processor, and you do a service on those, what you can do is actually smear the particles onto the contact. I liken it to smearing peanut butter or Nutella on toast,” said Jon Fitch, a data scientist from Dell and a member of ASHRAE’s TC 9.9 committee, in an interview with DCD (see DCD Magazine issue 38). “It's very thick and viscous and, by golly, if you want to get it off the toast, it's pretty hard to do!”

Blocking dust and condensation is another problem that lubrication and other maintenance practices can help address.



Getting off generators

Google’s big battery bet The cloud goliath takes its first steps towards a diesel-free future

Sebastian Moss Deputy Editor

Hyperscale giant Google plans to replace a generator with lithium-ion batteries, in a trial that could herald the end of diesel in its data centers.

“We expect this battery to be operational towards the end of the summer of next year, and this is really going to be a first of its kind in the data center industry,” the company’s carbon-free energy lead Maud Texier told DCD.

The company will swap out a generator for “an equivalent battery, same capacity, that’s 3MW,” at its St Ghislain, Belgium data center, Texier said. Should the grid go down, the company expects the battery to be able to run "specific workloads" for an hour.

The former Tesla exec declined to comment on which battery provider Google will use, but noted that the company chose lithium-ion batteries not just because of recent price decreases, but because they are a tried and true approach it was willing to risk putting in a data center. Lithium-ion batteries have also seen a remarkable improvement in energy density, making finding space for the system relatively simple. “Today, the system that we have is actually very close to the physical space that a diesel generator would take,” Texier said. “In our early investigation phase, that was one of the key questions, but because the lithium-ion industry has made a lot of progress, this is really not as much of a concern as it would have been a few years ago.”

Diesel generators are essentially useless for most of their life, springing into action only when power fails; Google hopes its batteries can have a much closer relationship with the grid.

“So for this pilot, we are looking at frequency regulation,” Texier said. “When the battery is not used for backup, we would use the capacity to help the imbalances of the grid, generally coming from excess or deficits between the production of electricity and consumption in real time.”
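What frequency regulation asks of a battery is easy to sketch. The snippet below is purely illustrative - it is not Google’s or any vendor’s control system, and the droop gain, deadband, and power limit are invented values - but it captures the basic relationship Texier describes: absorb power when grid frequency runs high (excess generation), inject power when it runs low (excess demand), and stand down whenever the battery is reserved for backup.

```python
# Illustrative droop-control sketch - not Google's control system.
# Assumes a 3 MW inverter on a 50 Hz (European) grid; all constants are invented.

NOMINAL_HZ = 50.0        # target grid frequency
DEADBAND_HZ = 0.02       # ignore tiny deviations
DROOP_MW_PER_HZ = 15.0   # response aggressiveness (hypothetical)
MAX_POWER_MW = 3.0       # inverter/battery power limit

def regulation_setpoint_mw(grid_hz: float, available_for_regulation: bool) -> float:
    """Return charging (+) or discharging (-) power for frequency support.

    A real system would also track state of charge, ramp limits, and market
    signals; this only shows the basic droop relationship.
    """
    if not available_for_regulation:
        return 0.0  # battery reserved for backup duty, or the grid is down
    error = grid_hz - NOMINAL_HZ
    if abs(error) < DEADBAND_HZ:
        return 0.0
    # Over-frequency (too much generation): charge, absorbing the excess.
    # Under-frequency (too much demand): discharge, injecting power.
    setpoint = error * DROOP_MW_PER_HZ
    return max(-MAX_POWER_MW, min(MAX_POWER_MW, setpoint))

if __name__ == "__main__":
    for hz in (49.90, 49.99, 50.00, 50.05, 50.15):
        print(f"{hz:.2f} Hz -> {regulation_setpoint_mw(hz, True):+.2f} MW")
```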

Frequency regulation makes sense for its Belgian site, but future deployments could be more ambitious. “As we expand to other data centers, depending on the local conditions, we can see other applications and ways of using this battery behind the meter, either for our own energy portfolio optimization, or for specific services that we can deliver,” Texier said.

The company currently purchases enough renewable energy to cover its data center needs, but that doesn’t always mean its data centers are running on renewable energy: when the sun isn’t shining or the wind isn’t blowing, grids may only offer fossil fuel-based electricity. In future, Google could turn to batteries - charged during renewable hours - to power its data centers through those periods.

But this isn’t just about doing what is right during a growing climate catastrophe - there’s also a business case, with money to be made in helping the grid. “This is really why we decided to move forward with the pilot, actually,” Texier admitted. “Between the aspect that we can replace diesel generators with shorter duration batteries, plus the cost curve of these technologies, plus the potential benefits that you can make from those applications moving forward, we believe that this becomes less of a sustainability project, and more like a business operations project that makes sense from an operations perspective.”

Looking at the wider data center industry, Google estimates that there are around 20 gigawatts of backup diesel generators dotted around the world. Replacing them all would remove a major carbon polluter, swapping in something that actually helps renewable deployments. However, even Google plans to roll this out initially only to greenfield facilities - so should the trial prove successful, it will likely shape the next 20 gigawatts of backup power rather than the existing fleet.

Beyond the data center industry, the need for energy storage is growing rapidly as renewable plants come online. To meet this demand, Google expects to become more involved with battery storage deployments behind and in front of the meter, Texier said. This could mean exploring energy storage technologies beyond lithium, she said, though she would not give specifics. Sister company X, for example, has a project exploring the use of molten salt to store energy, while countless companies and government research projects are pushing different ways to store energy when renewables falter.

But before all of that, Google needs to prove that swapping a generator for a bunch of lithium-ion batteries is a good idea, setting itself a number of undisclosed milestones to beat before moving ahead. “I hope it can be as quick as a few months, but we can't tell at this point,” Texier said.


Changing Lanes

IBM’s hybrid realities Facing strong headwinds, IBM hopes to find new life in the cloud

Sebastian Moss Deputy Editor

IBM has a problem. The storied company is trying to reinvent itself as a cloud computing business, shedding entire business segments and staff as it pivots away from services and legacy hardware. Getting out of declining sectors makes sense, but it means that IBM is instead putting all of its hopes into one of the most competitive markets around, one dominated by well-funded titans like Amazon, Microsoft, and Google.

Pulling this off means not engaging head-on, but trying another approach. "I think we are going for something different," IBM Cloud CTO Jason McGee told DCD. "And that difference is really a recognition of the realities of enterprise IT and the complexities of the mission critical workloads that people need to run to operate their businesses. Our approach has been to build location independence, hybrid and multicloud, deep into our architectures."

Part of that strategy is seen in the company's embrace of OpenShift, developed by Red Hat - which IBM acquired for a whopping $34bn last year. “We're really pushing our clients to build their applications on a platform that allows them the freedom to run wherever,” McGee said.

The container platform is the backbone of a new service Big Blue launched this year, IBM Satellite. This is "really our distributed cloud model, where we can take IBM cloud services and we can run them back in your data center, or we can run them on Azure or Amazon, or we can run them at the Edge of the network,” McGee explained. “And so you can get that kind of full as-a-service public cloud experience in any location that a customer needs to be able to run their workloads.”

There are numerous companies, large and small, trying to serve as a cloud facilitator for businesses early in their cloud transition, ideally helping firms best place their workloads. Few, however, offer their own cloud services - something that can be seen either as an advantage for IBM or as a conflict of interest.

McGee asserts that IBM believes companies should “use the best provider for the capability that you're targeting, and then we can wrap that in a common approach.” That could mean using IBM Public Cloud “for pockets where it is really strong,” like high-resiliency, high-security, or regulated workloads - or it could mean using other clouds. “Like for SQL Server databases on the cloud, Azure might be a good destination for that.” Then there’s “a lot of grey area where, frankly, IBM Cloud, or Azure, or Amazon could fit the bill. So then it just comes down to what the customers' current choices are."

This strategy has seen some success for McGee’s division, with Q3 revenue increasing by 19 percent to $6 billion, its best quarterly growth yet. "It's been a great year for IBM Cloud,” McGee said. “I don't think this is Earth-shattering information, but with the pandemic, and people shifting to working at home, it has really accelerated cloud adoption.”

But that acceleration was unable to offset the wider decline at IBM. In May, the company began what is thought to be its biggest round of layoffs in a decade, totaling some 20,000 employees. This October, around 10,000 European employees were told they would be let go. Much of this was due to issues that predated the pandemic, but the virus exacerbated them, causing revenues to plummet.

It’s unclear how heavily IBM Cloud was affected by the ongoing layoffs, although the related Power division is thought to have been cut significantly. Our amicable conversation screeched to a halt when the topic was raised. “It’s not something I’m going to comment specifically on,” McGee said.

The company is also spinning off its Global Technology Services (GTS) managed infrastructure services unit to create a whole new company, currently known as NewCo. "That's a different part of the [wider IBM] business," McGee said. "Obviously that part of the business does a lot of work building and running applications on cloud and moving workloads to the cloud. They've done that as part of IBM, and I suspect they will do that as partners of ours going forward, as they become an independent organization."

Should that happen by the end of next year, IBM will be a much smaller company - and one that it hopes will be much leaner. The decision was made before Covid, but McGee believes that a bet on the cloud makes more sense now than ever. "I think that the acceleration of cloud adoption will stick, because I think people were forced to accelerate given the circumstances," he said. "And by being forced, they realize that they can move more workloads to cloud than they thought, so I think it's helped to eliminate some of the wait-and-see skepticism or hesitation."



Economic Centralization Hurts The Net

So much for decentralized

The Internet was built to be resilient at its core. A decentralized system, it has no single owner that can control it or shut it off. But the cloud risks changing that.

It’s hard to grasp the rapid rise of the hyperscale giants simply from their overflowing quarterly reports, laid out dryly as numbers without context. It’s when they suffer a major outage that their breathtaking reach finally comes into view. This year has seen several such incidents at the major cloud providers, and the impact of even brief disruptions has been felt far and wide. When AWS’s US-East-1 region started playing up in November, Flickr, Adobe Spark, Anchor, Roku, and tons more all stopped working - as did the company’s own status page. When Google had issues in December, Gmail, YouTube, and Google Drive were suddenly unavailable, along with other services such as Google Classroom.

Efforts to map the scope of cloud dependence often fail because they don’t take into account how many mission critical services use the cloud for just a small, but vital, part of their operations. A cursory glance at a company’s IT footprint can miss how even a small reliance on a cloud provider puts its platform at risk. Those outages weren’t caused by a hardware fault at a data center, but by issues with the software and APIs provided by hyperscalers.

Those services are finding their way into more and more applications, ThousandEyes CEO Mohit Lad told me over a coffee before the lockdowns began. “It’s being made easier for you to develop something because instead of building this whole sequence of actions, it’s now a Twilio API or someone else’s, and you’re adding more and more third parties.

“While building applications becomes easier, the complexity of the application is actually increasing dramatically. And so there’s a ripple effect if something goes down. Suddenly, a Google outage can take down home networks where people could not connect to their Nest cams or smart locks anymore.”
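The compounding risk Lad describes can be put into rough numbers with a back-of-the-envelope calculation (ours, not his). If an application takes hard, serial dependencies on several third-party services - all of them must be up for the application to work - its best-case availability is roughly the product of theirs, and three respectable “three nines” services already add up to more than a day of expected downtime a year.

```python
# Back-of-the-envelope sketch, not figures from the article: availability of an
# application with hard, serial dependencies on several third-party services.

def composite_availability(*availabilities: float) -> float:
    """All dependencies must be up, so their availabilities multiply."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# Hypothetical numbers: three dependencies, each promising 99.9 percent uptime.
app = composite_availability(0.999, 0.999, 0.999)
print(f"Composite availability: {app:.4%}")                         # ~99.7003%
print(f"Expected downtime per year: {(1 - app) * 8760:.1f} hours")  # ~26.3 hours
```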

This economic centralization harms the entire concept of a decentralized network, at a time when we are relying on connectivity more than ever. Cloud services are usually more reliable than on-premises or colo efforts, with more resources to fight against outages, but they also pose a greater threat should they fail.

If your colocation service goes down, your business and a handful of others will be impacted. But we live in a world where most of the largest websites, banks, intelligence agencies, and healthcare providers could soon rely on a single cloud provider. If something goes wrong, what then?

Applications are easier to build, but their complexity is increasing. A Google outage can take down entire networks

Sebastian Moss Deputy Editor


Save your cloud from drowning You can never stop the rain, but you can harden your data center to avoid water ingress. Use Roxtec cable and pipe seals to prevent flooding, protect critical infrastructure and ensure business continuity. #nomoredowntime roxtec.com/datacenters


DATA DOESN’T DO DOWNTIME DEMAND CAT® ELECTRIC POWER

Cat® generating sets and power solutions provide flexible, reliable, quality power in the event of a power outage; maintaining your operations, the integrity of your equipment and your reputation. Learn more at http://www.cat.com/datacentre

© 2019 Caterpillar. All Rights Reserved. CAT, CATERPILLAR, LET’S DO THE WORK, their respective logos, “Caterpillar Yellow”, the “Power Edge” and Cat “Modern Hex” trade dress as well as corporate and product identity used herein, are trademarks of Caterpillar and may not be used without permission.

