DCD>Magazine Issue 34 - The Great Disconnect


OVHcloud’s CEO: A European approach to data

AWS’s spotty pricing: When cloud costs change

Genomic treasures: Mining with Wellcome Sanger

Issue 34 • November 2019 datacenterdynamics.com

Plus: Show preview and highlights for DCD>London

Supplement

The Telco Story: 5G cometh

Readying yourself for a data deluge

Head to space

The skies are filling with new opportunities

Sell it all

Telcos are lining up to sell their data centers



High Rate & Extreme Life High Rate Batteries for Critical Power Data Center Applications

RELIABLE Narada HRL and HRXL Series batteries deliver! Engineered for exceptional long service life and high rate discharges, Narada batteries are the one to choose. Narada can provide solutions to meet any Critical Power need.

ISO9001/14001/TL9000 Certified Quality Backed by Industry Leading Warranties

Narada...Reliable Battery Solutions www.mpinarada.com - email : ups@mpinarada.com - MPI Narada - Newton, MA Tel: 800-982-4339


ISSN 2058-4946

Contents November 2019

6 News Police raid NATO bunker used as illegal data center


12 Globalization and its disconnects Nations are shutting down the Internet in an effort to quell protests. But they risk fracturing the web as great powers disentangle, fundamentally changing the net as we know it

Industry interview


20 Michel Paulin, OVHcloud “We do believe that our roots are in Europe and in openness. Transparency, GDPR and open source are European values. When we go to Asia and Australia, do we have to change these key DNA values? I don’t think so.”

22 Uptime on downtime Why outages have vastly different impacts on different companies


23 The Telco supplement A special supplement delving into all things telco - from 5G and Edge, to the CORD initiative, to space-based Internet. Plus, learn about fiber’s renaissance and why telcos are selling off their data centers

39 Amazon’s spotty pricing AWS said they were making pricing smoother. It ended up more expensive and less predictable

46 Virginia’s land dilemma When space is at a premium, perhaps it is time to look elsewhere

49 Building a home for AI Prepare for high-density workloads of the future, today


51 The London show preview What to watch and where to go

59 Mining genomes We head to the Wellcome Sanger Institute’s data center


63 Storage wars Batteries still rule when you want to keep charged

66 Do I even have to say it? Stop arguing about scientific facts



From the Editor

The Internet wasn't meant to be like this

The Internet was going to change everything. We'd have frictionless access to any goods, cutting waste and freeing up resources. We'd have anonymous virtual worlds to dream up utopias. And online access to information would blitz conspiracy theories and educate us all. Instead, it seems like things are the same or worse. Social media has empowered the far right, e-commerce has powered inequality.

Block protesters from the Internet, and it may increase violence, not reduce it

The great disconnect. Efforts by nation states to intervene with online power have turned out to be either futile, or sometimes worse than the problem they perceive. Some countries have protocols in place to limit access to the Internet. China has the most hardline control over its Net, but the results are mixed. To take one example, some states have the ability to shut down communications completely to hamper the activity of protesters or political opponents. Even if you think that's a good idea, there's a problem. Research suggests protesters without the Internet may actually be more likely to turn violent. Sebastian Moss found plenty of surprises in his investigation of state control over the Internet (p12).

GDPR versus the CLOUD Act? The US has caused controversy with its expectation that it should have access to private communications of citizens elsewhere in the world. The European GDPR guarantees online privacy of individuals. But it's not just a political issue: for OVHcloud, the leading European cloud player, it's a business model. CEO Michel Paulin (p20) told us the world needs a cloud provider outside of the increasingly intrusive US and Chinese regimes.

Curing illness with DNA data. It was a pleasure to step away from politics and visit the Wellcome Sanger Institute, to see a data center in harmony with scientific research (p59). Data center manager Simon Binley faces unprecedented demand, as the Institute's genetic sequencers generate petabytes of DNA data. But his spend comes from a budget that also saves lives through genomic research - and any money he saves enables more of that research. The Institute's facility is evolving in tandem with the Institute itself, and Binley only takes the upgrades that really support its work. I'm looking forward to joining Simon on stage at DCD>London on 5-6 November, to tell the story.

Elsewhere this issue we explore the most revolutionary frontiers of the telecoms world in a supplement (p23). We also hear about AWS pricing discrepancies (p39), look in detail at DCD>London, and round up the most vital news in the field (p6). bit.ly/DCDMagazine

Peter Judge
DCD Global Editor

55PB - Amount of storage the Wellcome Sanger Institute says it has in its data center... but it's growing at 30 percent per year

Meet the team

Global Editor Peter Judge @Judgecorp
Deputy Editor Sebastian Moss @SebMoss
Reporter Alex Alley
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Head of Design Dot McHugh
Designer Mandy Ling
Head of Sales Martin Docherty
Conference Director, Global Rebecca Davison
Conference Director, NAM Kisandka Moses
Chief Marketing Officer Dan Loosemore

Head Office
DatacenterDynamics
102–108 Clifton Street
London EC2A 4HW
+44 (0) 207 377 1907

Dive deeper
Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color, shown below: Events • Training • Intelligence • Debates • Awards • CEEDA

PEFC Certified - This product is from sustainably managed forests and controlled sources. PEFC/16-33-254 www.pefc.org

© 2019 Data Centre Dynamics Limited. All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.


Connect to maximum density with faster installations.

As the popularity of cloud computing and big data grows, the demands for high-speed transmission and data capacity are greater than ever before. Address your most challenging data centre concerns with our high-fiber-count MTP® trunks, a preterminated solution offering increased density and easier cable management, and enabling reduced installation time.

© 2019 Corning Optical Communications. LAN-2472-BEN / September 2019

Visit www.corning.com/emea/en/dc-solutions to learn more about how to overcome cabling challenges in larger networks operating at higher speeds.


Whitespace: The biggest data center news stories of the last three months

NEWS IN BRIEF

Equinix’s $1bn hyperscale joint venture with GIC set to begin

Equinix has completed the formation of a $1bn joint venture with its Singaporean sovereign wealth fund partner, GIC. The unnamed JV will develop hyperscale-focused data centers in Europe.

LinkedIn to move to Microsoft Azure in multi-year effort

After acquiring professional networking site LinkedIn for $26.2bn back in 2016, Microsoft is set to shift the company to its Azure cloud service.

Cray wins $600m supercomputer contract for National Nuclear Security Administration

El Capitan is expected to be delivered in late 2022 and will run classified nuclear weapons simulations. It will have a peak performance of 1.5 exaflops.

Police raid illegal NATO bunker
The German data center was hosting all sorts of crimes

A “criminally operated data center” in a former NATO bunker was shut down after it was discovered hosting sites for child porn, drug dealing, and botnets. More than 600 police officers stormed the ‘CyberBunker’ data center in Traben-Trarbach, western Germany, where they seized roughly 200 servers. Seven people were arrested at a local restaurant and in Schwalbach, outside Frankfurt. Thirteen other people aged between 20 and 59 are under investigation. None of the suspects were at the data center when the arrests took place.

The former NATO facility was bought in 2013 from the Office for Geoinformation of the Bundeswehr by an unidentified Dutchman - the chief suspect. The Dutchman, now 59, upgraded the bunker “to make it available to clients, according to our investigations, exclusively for illegal purposes,” regional criminal police chief Johannes Kunz said. “I think it’s a huge success... that we were able at all to get police forces into the bunker complex, which is still secured at the highest military level.”

When the bunker was purchased in 2013, the buyer was not identified, but said that he was also involved with CyberBunker, the alleged operator of a Dutch data center in its own Cold War bunker. In 2013, the now-defunct data center company ‘Bunker Infra’ claimed CyberBunker was using images of its bunker but was not based there. CyberBunker previously said it would host “services to any Web site ‘except child pornography and anything related to terrorism.’” The company’s website has been seized by the German police.

The location of the Traben-Trarbach facility matches that of Calibour, a company that said it operated a NATO-bunker based secure data center. Its website is now also unavailable. The CEO and MD of Calibour, Herman-Johan Xennt, claimed to own CyberBunker as of 2010. The cases are still developing, and there are as yet no formal identifications or charges. While 200 servers were seized, some reports suggest that there could be as many as 2,000 at the facility. Kunz told reporters the data analysis could take years.

bit.ly/NATOdiversifies


Apple to sell Irish land after data center site construction fails to get off the ground

The proposed location of the $1bn Derrydonnell data center in Athenry, County Galway, is up for sale. Five years of protests and court battles delayed the project, culminating in the tech giant throwing in the towel.

Trans-Saharan and Tatweer to deploy Libyan data center

The small prefabricated facility will be deployed in Tripoli next year. “There are multiple challenges, mainly the political unrest, electricity instability & infrastructure,” Ehab Elghariani, Trans-Sahara’s DC unit manager, told DCD. “We are willing to overcome the latter two challenges and minimize the impact of the first.”

Share-online.biz shut down in police raids across three EU countries

Police from Germany, France, and Holland have conducted raids on multiple data centers in a “crackdown” on an illegal hosting site. Share-online.biz, the largest file hosting site in Germany, was taken down in raids led by Cologne prosecutor Christoph Hebbecker and the Cybercrime Nordrhein-Westfalen (ZAC NRW) division.


US sanctions hit China’s tech firms
The companies have been blacklisted by the US amid claims of human rights violations against the Muslim minority in China

Eight tech companies are among the 28 Chinese public security bureaus and companies on the US Commerce Department’s “Entity List,” essentially blocking them from doing business with American firms. The blacklist is purportedly over their involvement in human rights violations against Muslim minorities in Xinjiang.

SenseTime, the world’s most valuable artificial intelligence startup (at least before the ban), the large AI company Megvii, and facial recognition firm Yitu Technologies were among those put on the Entity List. Also listed were surveillance companies Hikvision and Zhejiang Dahua Technology Co, voice recognition company iFlytek, cybersecurity group Meiya Pico, and nanotech firm Yixin Science and Technology.

Hikvision, one of the world’s largest security camera makers, could be among the hardest hit, with its servers likely impacted. A Commerce Department spokesperson told reporters the ban was unrelated to the US and Chinese trade negotiations, despite the talks resuming on the same day. It follows a similar blacklist against supercomputing companies Sugon and Hygon, and telecoms giant Huawei.

READ MORE: Understand the deeper US-China struggle, p12

bit.ly/SenseCrime

China’s data centers primarily coal powered

According to a study by Greenpeace and the North China Electric Power University, China’s data center industry was responsible for 99 million tonnes (109m US tons) of CO2 emissions in 2018. The industry consumed 161TWh of electricity in 2018. The facilities - which spanned 150 million square meters (1.6bn sq ft) in 2017 - mostly relied on grid energy, with coal providing 73 percent of the power used by data centers in 2018. Without serious changes, carbon emissions are expected to rise drastically.

bit.ly/ChokeOnThis
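As a rough cross-check (not part of the study itself), the two headline figures above imply an average carbon intensity of roughly 615 grams of CO2 per kilowatt-hour, which is consistent with a supply mix that is nearly three-quarters coal. A minimal back-of-envelope sketch, using only the numbers cited in the article:

```python
# Back-of-envelope check of the Greenpeace / North China Electric Power
# University figures for China's data center industry in 2018.
EMISSIONS_TONNES = 99_000_000   # 99 million tonnes of CO2 (2018)
ELECTRICITY_TWH = 161           # 161 TWh of electricity consumed (2018)

grams_co2 = EMISSIONS_TONNES * 1_000_000       # tonnes -> grams
kilowatt_hours = ELECTRICITY_TWH * 1_000_000_000  # TWh -> kWh

intensity = grams_co2 / kilowatt_hours         # grams of CO2 per kWh
print(f"Implied average carbon intensity: {intensity:.0f} gCO2/kWh")
# Prints roughly 615 gCO2/kWh - in line with a grid that is ~73% coal.
```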

Mission Critical Training

Colocation/MTDC

Managing training within organizations that provide “Infrastructure as-a-Service” is complicated given the variety of learning requirements needed, but it is essential to maintain the competitive edge. DCPRO has a flexible approach to workforce development and we even work with you to ensure our materials fit your bespoke requirements exactly.

Head over to www.dcpro.training for more

“DCPRO’s training courses are always informative and interactive. The trainers are very experienced and knowledgeable. I recommend these courses not only to the operations team, but to anyone who works at a data center to understand the criticality of running a data center,” Charlene Gomez | Digital Realty



“A profound sense of sadness and loss:” Oracle co-CEO Mark Hurd passes away
The CEO lauded as one of the ‘greatest salesmen in Silicon Valley’ has died at 62

Oracle co-CEO Mark Hurd has passed away. His death was announced in a company-wide email by Oracle Chairman Larry Ellison on October 18. Back in September, 62-year-old Hurd began a leave of absence for unspecified health-related reasons. It was understood co-CEO Safra Catz and Oracle founder Ellison would assume his responsibilities during this time.

Ellison wrote in the email: “It is with a profound sense of sadness and loss that I tell everyone here at Oracle that Mark Hurd passed away early this morning. “Mark was my close and irreplaceable friend and trusted colleague. “I know that many of us are inconsolable right now, but we are left with memories and a sense of gratitude… that we had the opportunity to get to know Mark, the opportunity to work with him… and become his friend.”

It’s likely the company will look for a new partner soon to replace Hurd, since Ellison has reportedly grown to appreciate the dual-CEO system. One option is Jeff Henley, Oracle’s vice chairman and former CFO. According to Bloomberg, Ellison once mentioned Don Johnson, head of Oracle’s cloud infrastructure division, and Steve Miranda, head of Oracle’s applications unit, as possible replacements for Hurd. As of yet, no announcements have been made.

Hurd was appointed president of Oracle Corporation by then-CEO Ellison in 2010, alongside Safra A. Catz. In 2014, he and Catz were named joint CEOs when Ellison stepped down.

bit.ly/SiliconValleysGreatestSalesman

We’re putting the power in data. Temporary power and battery solutions for data centres. Our temporary power and battery solutions will provide your data centre with the power you need, when you need it, for as long as you need it. Bridging any gap in demand while you’re building a permanent off-grid solution. Aggreko put the power into data.

Tell us what you need: 0333 016 3475

aggreko.com


NERSC shuts down amid blackout
California goes dark after PG&E cuts power over risk of wildfires

On October 9, as utility PG&E cut power to hundreds of thousands of Californians in an effort to reduce the risk of wildfires, supercomputers were forced to shut down. The National Energy Research Scientific Computing Center (NERSC), part of the Lawrence Berkeley National Lab, turned off its supercomputers as power went out.

“PG&E has informed us that they will definitely be cutting power to the Berkeley Lab campus sometime between 12:01 am (Pacific) and noon Wednesday. Berkeley Lab is closed effective Wednesday, October 9 at 12:01 am. NERSC will continue to operate until power is cut by PG&E,” user engagement group leader Rebecca Hartman-Baker said in an email to NERSC users ahead of the cut.

All of the high-performance computing facilities at NERSC had to be shut down, including the 30 petaflops Cori supercomputer. NERSC’s HPC systems are used by 7,000 scientists working on various research projects. “Our users run large scale climate models, they run large scale simulations of exploding stars, they run large scale simulations of a fusion model,” Katie Antypas, division deputy and data department head at NERSC, told DCD earlier this year.

NERSC has detailed research it is working on to simulate the ‘Camp Fire’ wildfire that last year killed 86 people and burned more than 150,000 acres. With PG&E trying to avoid a Camp Fire scenario reoccurring, such work had to be paused, as were other research projects studying the impact of anthropogenic climate change. Systems returned online on October 12.

bit.ly/NERSCgetsNerfed

Banking services across Mexico down due to Prosa outage

An outage at a data center in August brought much of Mexico’s banking services offline, with customers unable to make purchases or withdraw cash. Electronic transaction services firm Prosa said that an electrical fault at its data center in Santa Fe, Mexico City, was to blame. It impacted customers of Banorte, HSBC, Invex, Santander, Scotiabank, and Banjército.

The company said at the time: “We want to inform you that today we are having an outage on our Santa Fe data center. The management team and the entire IT and Innovation team are working as a priority in resolving this incident.”

It took several hours for services to start to resume, and hours more for cards to work. The outage comes at a time when Mexico wants to cut down on cash and move to electronic banking systems.

bit.ly/TakingThePeso

Software outage knocks 500 stocks off London Stock Exchange

The London Stock Exchange staggered when it opened on August 16 after a “technical software issue” caused its longest outage in eight years. 489 stocks were unable to trade for an hour and forty minutes after London started trading at 8am, including those in the FTSE 100 and 250 indexes. An LSE spokeswoman refused to rule out whether its trading software was at fault.

The LSE handles £5bn ($6bn) worth of trading each day and is currently in talks to acquire Refinitiv, a financial markets data and infrastructure firm, for $27bn. A similar technical error caused an outage in June 2018. The LSE also suffered outages in 2011 and 2009.

bit.ly/StockShocker



US military to acquire three Cray supercomputers for $71m

The US Air Force will deploy a Cray Shasta supercomputer, while the Army Research Lab (ARL) and the US Army Engineer Research and Development Center (ERDC) will each deploy a Cray CS500. The contracts are worth more than $71m. The Air Force’s $25m system will be acquired by the Air Force Life Cycle Management Center in partnership with Oak Ridge National Laboratory. Named HPC11, it will be used for meteorology to help the US Air Force and Army operate in numerous theaters.

The other Cray CS500 supercomputer will be deployed by ERDC, which manages the DoD Supercomputing Resource Center (DSRC) at Vicksburg, Mississippi. DSRC typically operates two or more supercomputers on an average four-year life cycle. Work underway at the site includes research into nanotechnology.

DoD awards controversial $10bn JEDI cloud contract to Microsoft
Azure goes to war

After delays, legal fights, employee protests, and an intervention by President Trump, the US Department of Defense has awarded the long-discussed JEDI cloud contract. Microsoft will provide its services for the Joint Enterprise Defense Infrastructure cloud in a deal that could last 10 years and be worth as much as $10bn.

“This contract will address critical and urgent unmet warfighter requirements for modern cloud infrastructure at all three classification levels delivered out to the tactical edge,” the Department of Defense said in a statement. “The DoD will rigorously review contract performance prior to the exercise of any options,” the Department said.

Amazon, once the front runner for JEDI, is thought to be considering a legal challenge. With the President already known to be negatively inclined towards Amazon CEO Jeff Bezos - due to his ownership of The Washington Post - and with the frequent Fox News-watching world leader tuning into segments about possible JEDI corruption by Amazon, there were several rumors throughout the year that he would intervene to stop AWS winning JEDI. In Holding The Line: Inside Trump’s Pentagon with Secretary Mattis, author Guy Snodgrass claimed that Trump called Mattis in the summer of 2018 and directed him to “screw Amazon” out of a chance to bid on JEDI.

bit.ly/ExpectToSeeThisInCourt

Peter’s military factoid: Palantir is developing an $800m Distributed Common Ground System (DCGS-A) to act as the Army’s primary system to track troop movements, enemies, weather and more.

bit.ly/CrayCrayCray

US Army buys $12m IBM shipping container supercomputer

The IBM system is housed in a shipping container with on-board uninterruptible power supply, chilled water cooling, and fire suppression systems. The HPC-in-a-Container is designed to be deployable to the tactical edge, with deployment opportunities to remote locations “currently being explored and evaluated.”

It is unlikely that the system would be deployed on a battlefield itself, but it could be placed near to the theater of war. It will be deployed at the US Army Combat Capabilities Development Command Army Research Laboratory DoD Supercomputing Resource Center at Aberdeen Proving Ground, Maryland, later this year. The system is capable of six petaflops of single precision performance.

bit.ly/ComesAlreadyWrapped


READ MORE Learn about genomics and data centers, p59

EU biological data unit moves into Kao campus, London
EMBL-EBI takes 1.5MW for genomic sequencing and other bioinformatics work

The European Bioinformatics Institute (EMBL-EBI) has signed up for 1.5MW of data center capacity at Kao Data, the science-focused campus being built in North London. Cambridge-based EMBL-EBI has taken a substantial chunk of capacity at Kao Data One, the first of four projected 8.8MW data centers scheduled to be built on the Kao campus, within the Harlow Enterprise Zone, close to the M11 motorway between London and Cambridge. EMBL-EBI already holds some 270 petabytes of biological data, and its job is to make it available to the scientific community.

Kao Data plans to sell capacity either a whole building at a time, in 2.2MW suites (a quarter of a building), or else in smaller quantities called “cells” or “pods.” The Institute took space in Kao Data so its data center engineers could have easy access to the equipment, saving on operational expenditure. EMBL-EBI’s data storage demands are growing daily, and it could scale quickly into TS02 if required in future, according to the company.

“The biological data we store and share through our data resources are used by life science researchers all over the world to power new discoveries,” said Steven Newhouse, head of technical services, EMBL-EBI. “As such, data center space, physical security and infrastructure availability were critical in our decision-making.”

bit.ly/NewhousesNewHouse

WHEN REPUTATION RELIES ON UPTIME. Solving your data center power challenges is our priority. Our engineering teams can create a vertically integrated solution for your facility. No matter the fuel type, our trusted reputation and unrivaled product support allow you to maintain uptime with backup power that never stops. DEMAND CAT® ELECTRIC POWER. © 2019 Caterpillar. All Rights Reserved. CAT, CATERPILLAR, LET’S DO THE WORK, their respective logos, “Caterpillar Yellow”, the “Power Edge” and Cat “Modern Hex” trade dress as well as corporate and product identity used herein, are trademarks of Caterpillar and may not be used without permission.


Cover Feature | The Great Disconnect

Governments are shutting down the Internet, using digital sieges to quell unrest, and threatening the Balkanization of the web. Sebastian Moss reports

Sebastian Moss Deputy Editor



Every time web traffic suddenly drops in a particular country, an alert goes off in Cloudflare’s headquarters in California. “It could be that there's something wrong with one of our points of presence, or that something’s wrong with the connectivity,” John Graham-Cumming, the web infrastructure company’s CTO, told DCD. “Our first reaction is ‘did we break something?’ And we want to be able to fix it.”

The alert is repeating again and again, but there’s nothing Cloudflare can do. The outages are real, but nothing is broken - it’s an intentional disconnection. “We actually have an internal chat room called Internet Shutdown Tracking, because we see these things happening pretty regularly.” One moment a country is part of the Internet, a piece of the whole. The next, darkness, a nation winking out of digital existence, unmoored and alone.

“The thing to note about state-sponsored cut-offs of the Internet is how widespread they are,” Graham-Cumming said. “Even this year, there’s been Venezuela, Sudan, Indonesia, Sri Lanka, and Ethiopia, to name but a few. The Democratic Republic of Congo shut down the Internet completely for 20 days - it's a long list of shutdowns, and some of them are quite large.”

Authoritarian regimes are the most common abusers of this power, forcing telecommunications companies, which are often state-owned, to shut down operations. But it is in a young democracy that the worst offender is found.

INDIA'S SHUTDOWNS

“There’s been quite a lot of advocacy around some of the shutdowns that happened in Africa, which were largely on a national level and executed by central authorities,” censorship and connectivity researcher Jan Rydzak told DCD.

“There were surprisingly few people focusing on the number one country in which the most shutdowns were taking place: India. Since 2012, the country has had approximately 350 cases of shutdowns of various kinds at various levels. This is just orders of magnitude more than any other country in the world.”

That count, ever-growing, has been carefully tallied by the Software Freedom Law Center, India, which began tracking the outages in lieu of any official announcements. “In 2012, we started noticing - in addition to website and content blocking - complete blanket shutdowns of the net in certain areas,” Mishi Choudhary, ‘SFLC.in’ founder and human rights lawyer, said. “It started with around three shutdowns. By 2014, it was still in the single digits. Then in 2015, we saw a spike in the number of instances to around 14. “Last year, we had the highest numbers we've ever seen - 134.”

Choudhary’s figures are on the conservative side, she noted. Only outages that the center can confirm as intentional are included, with media reports and tips used as a starting point, backed up by SFLC’s legal challenges to individual states, demanding information. India’s federal government has a long process for deciding whether to initiate a shutdown, with checks and balances in place, but the country’s 28 states are not bound by the stipulations of the Information Technology Act. “What the states started doing is use the Police Power, a very different statute,” Choudhary said.

In cases of civil unrest, police in mostly northern and western states are turning to shutdowns to try to defuse the situation. “It can become a checklist for the police when they do their job: They think ‘first shut the Internet down, that will stop this viral spread of messaging, and then go and control the situation on the ground.’”

This approach can be tempting for those in power, Choudhary admitted. “India has a long history of communal riots exacerbated by our colonial masters. Social media has really amplified the messaging and aggravated the situation, and made it very, very easy for a large group of people to be able to receive messages and assemble in one place. “As much as I am a free speech advocate, as much as I love the Internet, I am not blind to the fact that if a police commissioner or magistrate who thinks there are going to be 2,000 people [converging] that want to kill each other because of some WhatsApp message, they may want to shut it down.”

The problem is there’s no data to confirm that shutting down the network actually helps law enforcement agencies do their job, or stops the messaging, Choudhary said. “It's not that the riots didn't happen before these tools were there. People use phones, there was word of mouth, people live in areas together, there are ghettos. There's a lot that goes on in a riot.”



Working at Stanford’s Global Digital Policy Incubator, Rydzak has tried to analyze whether the shutdowns were actually effective. His paper ‘Of Blackouts and Bandhs: The Strategy and Structure of Disconnected Protest’ took SFLC’s Indian outage data, as well as datasets on protests (their location, length, who was involved, and whether it was violent), to see how the shutdowns changed their nature. “Essentially, I was trying to look at whether the protesters’ strategy changes with a lack of access to information and communication,” he said. “In an information vacuum, does information travel differently, and does it lead to different outcomes for protests?”

In non-violent protests, the results proved “very ambiguous and inconsistent,” Rydzak said. “Shutdowns are sometimes effective against peaceful demonstrations, but it's by no means guaranteed. It's practically no different from a coin toss.”

It was in violent protests that Rydzak saw a real difference. “Shutdowns are followed by an escalation in violent protests. It's a very strong effect that doesn't just refer to the first day that a riot takes place, but to several subsequent days as well. “People will always find a conduit for protest. Social media is just a platform for people to vent their frustration and anger. This is just a hypothesis, but it's possible that anger that is normally spilled out on social media can spill out into the streets instead.”

$2.4bn - The cost of shutdowns in 2015

THE SIEGE ON KASHMIR

The majority of intentional outages last up to 72 hours, short disruptions across a state, or sometimes a smaller area. Other times the blackout is widespread, long-lasting and total. Rydzak calls these events Digital Sieges. “What's happening in Kashmir is absolutely a siege. It's just unprecedented. Landlines are not usually affected. But in this case, the government decided to take no risks and just cut off all communication completely. It's a siege in every sense of the word; not only a militarized siege, but also a siege of all forms of communication. You'd be hard-pressed to find an equally extreme example even among the hundreds of shutdowns that we've seen so far.”

The longest shutdown ever recorded happened in the state of Jammu and Kashmir in 2016, lasting 133 days. Now, in October, the state is offline again, but this time it’s different. “This is the first time that phones were completely shut down - landlines, mobiles and Internet - everything,” Dr. Mudasir Firdosi, a Kashmiri psychiatrist and writer based in London, told DCD. In Jammu and Kashmir - a troubled state with high levels of unrest, an insurgency movement, and regular terrorist attacks - darkness has prevailed since August 4th. “Even in the modern world today, it is possible to isolate a large population of eight million people and not let them talk,” Choudhary said. “Again, there are national security reasons for it, we can’t deny them but it has a real impact.”

72 days into the siege, a small opening was allowed. On October 15, phone calls from ‘postpaid’ contract cell phones were let through, while calls from the more commonly used top-up phones remain blocked. “I believe the reason for that is when you take a postpaid connection in Kashmir, they verify your identity, they know who you are,” Firdosi said. With limited connection resumed, the Kashmiri diaspora is finally able to connect with loved ones in the state. In some cases, the news has been dire - relatives have learned of illnesses or deaths; funerals have been missed, weddings delayed. “This has got so many costs,” Firdosi said. “We are living in the Internet age, students are sitting at home, people have to fill in forms for jobs or for higher education. Businesses are run on the Internet. Everything is down.”

“People at some places didn’t get immediate medical help, resulting in deaths and aggravation of terminal illnesses,” Aakash Hassan, Kashmir correspondent at CNN-News18, told DCD. For journalists such as Hassan, the situation is fraught with danger. “There have been multiple cases of journalists being detained and even injured while covering stories. One photojournalist was injured with pellets - this is the physical aspect,” he said. “[But the] intangibility of this clampdown has affected reporters the most, because they are the ones who have to write about it and get news out… We have been provided a facilitation center by the administration where we can use the Internet for around half-an-hour in 24 hours. Each day, we bring our stories and have to wait in line to file. There has never been a time when journalists were so disempowered.”

Fear prevails. No one knows who could be listening. Firdosi recalled conversations with doctors in Kashmir who were granted limited mobile access: “When I start asking them how the situation is, they just start saying, ‘oh, the weather is good.’ They don’t talk about it. People are afraid.” The distress is not limited to the region. Firdosi and colleagues are studying the impact of the disconnection on Kashmiris living abroad. “We have a survey with around 450 responses,” he said. “Though we can’t diagnose people on surveys, it just gives an indication, but 88 percent showed abnormal scores pointing to cases of depression or anxiety.” Using the ‘Hospital Anxiety and Depression Scale,’ more than 90 percent scored high on the ‘frightened feeling as if something bad is about to happen’ section of the survey. “It’s the not knowing,” Firdosi said. “It has taken over our lives - I am at work right now, and I am still thinking about it.”

So far, these outages have primarily impacted less influential areas. “If something were to happen in Delhi, Bombay, or Calcutta, the noise would be heard and ricochet all around the world,” Choudhary said. This is partly because they are seats of power, globally recognized regions deeply integrated with the wider world. It may also be because these are areas with higher levels of Internet penetration, where a shutdown would have a far more profound impact. “One of the things which we've always struggled with is that, because of the Digital India initiative, so many of the services are now going online,” Choudhary said. “I'm supposed to pay my taxes online, I'm supposed to keep all my important documents with the government online; after demonetization, we’re expected to go cashless completely. Almost my entire life is going to be online.


“And then I'm handing over the power of that kill switch to these police guys who have not even thought about these things in a nuanced way. And, unfortunately, constitutional rights and laws mean nothing to them.”

HONG KONG ON THE BRINK

We may soon find out what a shutdown of a highly-connected global financial center could look like. Over in Hong Kong, the threat of disconnection is growing. On October 4, Hong Kong chief executive Carrie Lam enacted the colonial-era Emergency Regulations Ordinance, which allows the government to “make any regulations whatsoever” that it considers to be in the “public interest,” if faced with “an occasion of emergency or public danger.” This would include communications shutdowns.

"Any such restrictions, however slight originally, would start the end of the open Internet of Hong Kong" “The use of the Internet for both the [2014] Umbrella Protests and the current protests is vital,” Nathan Law, Hong Kong politician and activist, told DCD. “In terms of the current protest, we use online platforms to generate ideas, making our protest more fluid and more influential. Using the Internet can also make us better at reaching the international community. The protester can participate in the agenda… broadcasting our message rather than letting the media interpret everything.” Law, founding chair of the Hong Kong youth activist group Demosistō, said the movement already assumes it could be under digital surveillance: “We don’t talk about sensitive issues through social networks or online communication software. If we have to do so, we will use a more secure app like Signal.” Protesters are also “preparing for the possible shutdown of the Internet,” Law said. For instance, they have apps that don’t need the Internet to work, like FireChat, which uses wireless mesh networking to enable smartphones to communicate directly. But being taken offline would still stymie the movement, he admitted.

“It is important that we retain a free flow of information and a channel to reach the world,” he said. “If you take a look at the situation in China, it is like a black box, it is difficult for the activists to be connected and for the outside world to understand what is happening inside. So it is very important for us to remain open.”

Others in Hong Kong are concerned, including members of the business community. In August, as rumors of impending digital censorship spread, the Hong Kong Internet Service Providers Association sent out an urgent statement: “Technically speaking, given the complexity of the modern Internet, including technologies like VPN, cloud and cryptography, it is impossible to effectively and meaningfully block any services, unless we put the whole Internet of Hong Kong behind [a] large scale surveillance firewall. “Therefore, any such restrictions, however slight originally, would start the end of the open Internet of Hong Kong, and would immediately and permanently deter international businesses from positing their businesses and investments in Hong Kong.”

The association added: “Hong Kong is the largest core node of Asia’s optical fiber network and hosts the biggest Internet exchange in the region, and it is now home to 100+ data centers operated by local and international companies, and it transits 80 percent+ of traffic for mainland China. All these successes rely on the openness of Hong Kong’s network.”

A Chinese approach to the Internet would mark a radical shift for Hong Kong, which - at the time of publication - has a relatively free and open network. “China is the prime example of a preventive regime,” Rydzak said. “Instead of reacting to protests, they try to smother them in advance. They are operating under the assumption that criticism and protest born on the Internet can spill over into the streets, so nipping it in the bud is their priority.”

Threats from the sea
With the vast majority of intercontinental data transfer occurring in cables under the oceans, some fear that submarine cables could be easy targets for malicious actors. In August, the UK banned the export of submarines to Russia, citing the threat. "This additional control is a consequence of Russia developing certain capabilities - including the ability to track, access and disrupt undersea communication cables," the International Trade Department’s export-control unit said. "These activities represent a risk to our national security and the new control is intended to mitigate this risk." In 2015, a Russian Defense Ministry-owned news channel said that Russia can “both cut the special communication cables on the ocean floor and scan the signals they carry.” Earlier this year, Russia's AS-12 Losharik submarine caught fire. US officials claim the vessel was designed to tamper with submarine cables.

A WEB OF ITS OWN

China’s Internet is unlike anything else. “The sophistication of the infrastructure and the censorship system in China is much more superior to anything that we've seen,” Professor Christopher Leberknight, online censorship researcher at Montclair State University, told DCD. “China has gotten it down to ‘we can block specific keywords, we can block specific pages of a website.’ They also have a huge army of people that are just looking at blogs, websites, and if there's something that's a little bit ambiguous, then the information doesn't get posted. It sits in limbo for 24 hours. “And then there are individuals that actually read the content and make the decision as to whether or not to publish it."

Circumvention tools in the country are hard to use, because “in China, you can't use encryption-based technologies unless you register with the authorities. If you try to transmit unregistered encryption-based data, the packets just get dropped,” Leberknight said.

Beyond the software and man-power required to run the Great Firewall, the nation - with the fervor of a technocratic regime - made sure its Internet was built in its image. "Most developed nations have a large number of non-domestic carriers with a presence in-country. This means that foreign telecoms are interconnected with local and other international carriers at physical locations (Internet Exchanges) within these countries," Dave Allen, Oracle's VP of business operations and strategy, said in a research report. "China is different: there are no observable foreign carriers with a presence in China’s borders. The general trend globally is that countries - both developed and developing - are becoming increasingly connected. China, on the other hand, has had no meaningful foreign telecom presence over our many years of historical data.



"Nevertheless, we know Chinese citizens can still connect with the global public Internet, subject to the restrictions placed on them by the Great Firewall. China’s connections to the rest of the global Internet just aren’t in China. They are in Western Europe and the United States, along with a few other locations."

This offers a crucial advantage for a censorial regime, Mohit Lad, CEO of network monitoring company ThousandEyes, told DCD: “If you can concentrate all your traffic to a certain set of points, then you can technically have the ability to inspect every single packet that goes through there and be able to apply rules and so on. “The interesting part about China is they built it very early in the Internet's rise. And as a result they've built it, they've scaled it, they've tuned it. And they are able to handle the kind of volume that they see through their firewall at scale.”

Other states are envious, but may struggle to achieve the same level of control over their network, Lad said. “If you think about countries like Russia, it's going to be pretty challenging, because it's at a scale where you can't just turn it on; it's a very different problem.” But that doesn’t mean they're not trying.

BUILDING RUNET

“There are a lot of small ISPs [in Russia] who have this transitional traffic flow, and these lines are still working,” Ilona Stadnik, a cyber security researcher at Saint Petersburg State University, told DCD. “[Most] Russian traffic still goes inside the territory and just one percent goes out, but these lines are still working,” Stadnik said. “And if you issue an order that now just one state network operator, Rostelecom, will be able to move traffic abroad, it won't work - the authority would have to go and dig out all the lines that are going outside Russia. It will take time, and it's not feasible.”

That may change, however, with incoming laws that impose strict demands on network operators that “are so high that they will probably be deprived of their business, and will have to sell it because the expenses will be too high,” Stadnik said. “This could lead to the absorption of small ISPs and network operators by the biggest one. So the number of independent, unknown transborder lines will be reduced.” Stadnik could not say whether this was an intentional result of Russian policy or a side effect, but the outcome is clear: “Just change the market itself, and then it will be easier to control.”

Russia has also - perhaps more aggressively than any other state - sought to shut down the Internet of other nations. The country is linked to numerous distributed denial-of-service attacks on foreign territories, although attribution can always be tricky.

BRINGING DOWN YOUR ENEMIES

“You've got a situation where outages are technopolitical,” Martin Rudd, CTO of cyber security and government infrastructure company Telesoft, told DCD. ”You can use an outage to enforce an aim, whether the goal is espionage, sabotage or theft. You can use that outage against either that nation-state or against a multinational competitor or multinational organization.”

In 2007, following the removal of a statue of a Soviet soldier, a series of coordinated DDoS attacks battered the tiny Baltic state of Estonia. The attacks grew in intensity, hitting government, banking, and media sites, among others, and threatened to bring the nation to a standstill.

"When the 2007 cyber attacks happened, it was kind of like ‘this is the real deal’"

“Basically we closed off Estonia from abroad, the Internet became an intranet in Estonia. We could still operate it, except that there were no [outside] connections - you don't see CNN or BBC or whatever you need to use, but you can still use the services inside,” Cybernetica CEO Oliver Väärtnõu told DCD.

Väärtnõu’s company, best known for developing Estonia’s ‘X-Road’ network layer and the Internet voting system that has allowed Estonia to become a highly advanced digital nation, is all too aware that attacks by nation states could happen again. “When the 2007 cyber attacks happened, it was kind of like ‘this is the real deal.’ So we looked at how we can actually get over this, what are our vulnerabilities, etc. It’s about not only creating systems using the secure software development process, but also how you develop and refine the architecture of the e-government.”

The country is actively preparing for the worst. In 2017, Estonia announced plans to build a data center in Luxembourg to store crucial government and citizen data as a backup. More data centers in different countries were planned, Väärtnõu said, but the process has been slow. “Today, we are not fully in this position where we can say that if Estonia is completely shut down, then the government will continue in cyberspace after being taken over. It is a very fancy thing to say that our government is backed up to Luxembourg, or in the future to whatever country it is - but we also have to bear in mind that when we create the infrastructure inside that country that it's fully resilient, that we have the failovers, and that everything is being copied.”

The idea of a digital nation unencumbered by the threat of physical attack remains a dream. The nightmare of an attack on sovereign soil is still a terrifying possibility. And, despite Estonia and Cybernetica’s efforts to improve cyber security, there’s little one can do against certain events. “If there is no electricity, I think then we're going back to the Stone Age,” Väärtnõu said.

Taking out a larger nation may prove trickier. "I'm not of the belief that one single attack could take down America's Internet at all," Winn Schwartau, the cyber security researcher who in 1991 warned Congress of the threat of an 'Electronic Pearl Harbor,' told DCD. “You'd have to cut too damn many wires.” Motivated by nothing more than their business interests, companies in the US have helped strengthen the Internet, pushing for resiliency, backups and redundant connections.

It’s also not clear if an adversary would want to take America out. “It's much more profitable to keep it going, because of social media and the access it gives you,” Mark Carney, pentester and security researcher for Security Research Labs, said. “So motivated and intelligent attackers think ‘okay, there is now such redundancy that we can't bring it down, however, there's such connectivity that we can influence that in a way where we can have an effect.’”

Russia, meanwhile, maintains that it too could face attacks on its own network. In April, the country passed the controversial ‘Internet isolation’ bill “providing for the safe and sustainable functioning” of Russia’s Internet, that by November is meant to allow the state the ability to cut itself off from the wider web. “We should be afraid of [an] external kill switch - this is how it is explained to us,” Stadnik said. “This is a unique discourse that the Russian authorities have, nobody in the world is really talking about an external shutdown. This is a story that can be really kind of favorable for other countries to pull.”

Many are concerned that the law has more to do with Russia controlling its own territory than any real fear of foreign attacks. “This August, there were documented shutdowns of the mobile Internet during the protest in Moscow. I think they are trying to test how far they can go with this, and to what extent they can execute this, and to see how people react to it. “The trend is obvious,” Stadnik said. “The government really wants to keep track of what's happening in the information sphere, what's inside the traffic. But the only problem is that the amount of such data is so enormous that even if you pass this regulation, you won't find enough equipment to do so.”

Unable to build the tech on its own, Russia appears to be turning to China for help, with a number of technological agreements signed over the past three years. This October, the two nations signed a joint treaty aimed at tackling “illegal Internet content,” which lacked specifics, but might include China sharing some of its Great Firewall hardware and software.

As for Russia’s Internet isolation law, it’s light on technical detail, partly for security reasons, and partly because "our legislators don't know in detail how everything works, because for them the Internet is more like a telephone," Stadnik said. "You have people in the legislative bodies that don't have enough technical expertise to make such laws. And they are sometimes very wrong, sometimes very illogical. But it doesn't prevent legislators from moving them forward. That's how it works in Russia."

Key aspects of the law remain shrouded in mystery, but the country is barreling ahead nonetheless. In September, Alexander Zharov, head of the federal communications regulator Roskomnadzor, said that “equipment is being installed on the networks of major telecom operators” to allow RuNet to be separated from the greater Internet.

"Countries are always threatening to cut themselves off," Professor Milton Mueller, author of Will the Internet Fragment? and one of the founders of the Internet Governance Project, told DCD. "But when you look more closely at those proposals, they really are not about cutting themselves off from the Internet. Other than in emergency situations, that's really pretty stupid and self-destructive.”

Instead of a complete separation of a nation’s Internet, “the number one fear for me would be the Balkanization of the Internet,” Schwartau said. Mueller agreed: “Countries are trying to align national boundaries with their digital economies as much as they can. That's kind of pushing the stone uphill, because it's just not the way the Internet is constructed. What's interesting now is that it's gone beyond the network layer into equipment and software, where we're discovering how damaging that decoupling and that disintegration will be.”

GREAT POWERS DIVERGE

He fears what the deteriorating relationship between China and the US will do to the Internet. “There was hope that China would gradually become aware of the degree to which their power and their wealth relies on openness, interconnection, and trade with the rest of the world,” Mueller said. “And they couldn't play the game both ways: they can't shield themselves from foreigners, and at the same time expect foreigners to be open to them.”

But, while he conceded “there was some need for a confrontation or systemic change in the way [China does] things,” Mueller believes that “President Trump went about it in a very, very terrible way.” Tariffs, sanctions on globally-focused companies, and restrictions on trade in technology will only make the divide worse, he argued. “What the US is doing is using our dominance of chips to shut the Chinese out of high technology, because they have this nostalgic sense that the technological and economic dominance that the US enjoyed after the decline of the Soviet Union is going to last forever. “And they think that they can stunt China's growth and keep it subordinate indefinitely by cutting them off from US technology. All that's going to do is make China develop its own chips and its own advanced services in a way that is not integrated with the West. I would much rather have China buying US chips and having technology flowing both ways, than to have these fragmented blocks around these technological superpowers.”

The idea of a sovereign digital state in charge of all the bytes within its borders is growing in popularity, with data residency laws spreading across the globe. "What we're seeing is that all those data centers around the world are becoming sovereign assets,” Telesoft’s Rudd said. “The boundaries in cyber are so blurred, stuff like banking, finance and the stock market - are they part of the sovereign state, while being commercially owned?”

For the normal day-to-day operation of the Internet, which country a data center resides in is mostly irrelevant. Latency, grid infrastructure, and cooling costs are all important, but an average user in the UK will not notice a difference between a facility in Denmark or Belgium, for example. However, when things become strained, the location of data can be crucial. Imagine a scenario where hackers attack a “national telecoms operator in a particular country that is using a cloud provider,” Rudd said. “Do you really want the data [on the attack] which is being brought up from that network to be classified by a third party in another country? “That's the same data which is used to optimally train the AI algorithms in defense for anomaly detection or machine learning. It tells you who is attacking what and how.”

Conversely, if nations require data to be stored in a country of origin, information on how an attack happened may be withheld from a multinational business. Suddenly, it is unable to fully study the intrusion attempts it has suffered in certain regions, and is less secure as a result.

We don’t know how this will play out; it is not clear how much further the Internet will Balkanize. In a world where authoritarianism is on the rise and democracies are weakening, the future appears bleak. “I think the norms and the levels of cooperation that are happening across North America and Europe are going to move towards a more integrated system,” Mueller said. “And then we're seeing the rest of the world, the more authoritarian world, detaching from that and possibly creating a bipolar information technology world.”

The West, to be clear, is not without sin. It exports the tools of surveillance and censorship and is experimenting with small shutdowns of its own, on San Francisco’s BART network and the London Underground. It also surveils its people, and blocks certain websites. “It is very difficult for the Western world to complain about things at this point, because we're putting so much pressure on the social media companies to censor things,” Mueller said. “So what basis do we have to say that China should have this freewheeling open social media environment? “And what happens immediately is that they pick up on that and say, ‘Oh look, you just did this. Why can't we do that?’”

It may be, Mueller said, that the only way to overcome the nationalistic issues of the Internet is to overcome the nationalistic issues of the world. “It's a big ask,” he added. “I just don't think that's gonna happen.”




For more than a decade, the DCD Awards platform has showcased stories of innovation and cutting-edge design globally, with our APAC Awards adding a regional focus over the past six years. Our entrants have shared examples of best practice and innovation, as the region gears up for a major new era of digitalization.

1. The Edge Data Center Project of the Year
Winner: GPX India Pvt Ltd - Project: GPX Mumbai Data Center

2. The Multi Tenant Data Center Design Award
Winner: AirTrunk - Project: AirTrunk SYD1

3. The Enterprise Data Center Design Award
Winner: DoIT Government of Rajasthan in conjunction with Sterling and Wilson - Project: State Data Center, Jaipur

4. Hybrid IT Project of the Year
Winner: China Life Insurance Company - Project: China Life Hybrid IT, Shanghai

5. The Energy Smart Award
Winner: Huawei Technologies Co. Ltd - Project: iCooling at the Langfang Cloud Data Center

6. Operations Team of the Year
Winner: Datacom - Project: New Zealand Rebuild and Expansion Project Teams

7. Data Center Manager of the Year
Winner: Rathish Mani, Digital Realty

8. Data Center Construction Team of the Year
Winner: Bangladesh National Data Center in conjunction with ZTE Corporation - Project: The National Data Center Construction Team

9. Business Leader of the Year
Winner: Jeremy Deutsch, President Asia Pacific at Equinix

10. Outstanding Industry Contribution
Winner: Jacqueline Chan, Director at DSCO Group



>Awards | 2019 Public Vote: Best Mainstream Press Coverage of the Data Center Industry - NEW!
Everyone deserves to understand a subject as important as data centers. When the press gets it right, governments will regulate the sector better, and make better economic decisions about it, while the public will engage constructively with the facilities they live alongside. This year's public vote category will recognize journalists and publications which helped the data center community by delivering useful information about the sector to a broader audience.

www.dcdawards.global/best-mainstream-press

Public voting closes Nov 25! See web link above.


CEO | In focus

For more high-level insights, check out our C-Level Summit on 31 March

Vive la différence! In a world dominated by US and Chinese providers, does the world need a European cloud player to preserve privacy? OVHcloud’s CEO, Michel Paulin, talks to Peter Judge

OVHcloud is the only global cloud provider based in Europe, and that simple fact is guiding its next steps, says the CEO, Michel Paulin. The company backs open standards and choice, and it champions European privacy measures against the state-based intervention of the US and China. He believes the world needs a European alternative to interventionist states and the AWS monopoly. The company is not well known outside France, but turned 20 this year. The OVH hosting business was founded in 1999 by Polish-born entrepreneur Octave Klaba. It is still often seen as a web hoster, but has a thriving public cloud and hosted private cloud business, served from 30 data centers on four continents (in the US, Canada, Singapore, Australia, and five European countries). In 2018, Klaba appointed Paulin as CEO, and at this year's annual OVH Summit in Paris, the company announced a new name: OVHcloud. "It's a way to demonstrate to a market where we are not well known, that we are a cloud provider," he told us at the summit. "And we are not to be perceived as a small web hosting company any more."


Peter Judge Global Editor

The new name and new CEO may underline the company’s message, but there’s no major change in direction. Paulin comes to the cloud from a different sector: in the telecoms world, he managed the IPO of Neuf Cegetel and its merger with the telco SFR, later becoming SFR’s CEO. But he’s not there to change things. Paulin is adapting to OVHcloud’s world, rather than the other way round, and says the cloud is different to telecoms: “There’s much less regulation. And it is very, very new technology.” Telecoms is slower, he says: “Even 5G uses IP technology, and fiber is nearly 50 years


old. We're not talking about pure software like Kubernetes, and Hadoop servers. Telecoms took 15 years to go from GSM to 2G. In the cloud the rhythm is days, or weeks.” Klaba remains as chairman of the board, and the two have daily discussions: “Octave has a vision, and we are trying to execute as fast as we can.” Klaba’s role as a visionary - a “geek in a good way” - is clearly important. The Summit is designed to demonstrate OVHcloud’s status and continuity with OVH. Three thousand delegates (and maybe a few thousand more over the web) enjoy a keynote bristling with senior speakers. The French Digital Economy Minister Cédric O says business needs a European cloud, vice-admiral Arnaud Coustilliere from the Ministry of Defence says everyone needs private data. They are joined by senior executives from Capgemini and Deloitte - and finally, in an OVH tradition, Klaba performs a musical number with two friends. At the Summit, Paulin tells us his mission is to broaden the company’s appeal, and raise the profile of its cloud services, countering the widespread perception that it is primarily a hoster, and increasing awareness of what it’s actually doing. “We are sometimes a ‘best kept secret,’” Paulin says. Not everyone knows that 70 percent of OVHcloud’s business comes from the cloud, he says. “Of the world’s top ten cloud providers, all the others are US or Chinese, and there is one Japanese operator. We are the European cloud provider with a worldwide presence.” And that cloud business is diverse. OVHcloud aims to offer an alternative to AWS and the other giants, and Paulin says it’s always had a more mature vision of the reality of cloud. “We do PaaS (platform-as -a-service) and IaaS (infrastructure-as-aservice) offerings somewhat similar to the AWS behemoth, but also include bare metal, containers, and private cloud.” Other people have overestimated the power of public cloud: “Five years ago, people expected public cloud would solve all the problems for everybody, that it would be the Holy Grail, and everything would be perfect. Octave always said no - it would work, but it would have some problems. The solution is hybrid cloud.” Early on, OVH launched a product for hosted private cloud based on SDDC (software-defined data centers), to use alongside public cloud: “You can put the appropriate workload with the right architecture, to be sure that you have the same SLAs, the same resilience and the same latency - as if it was on your own premises.” Like those hybrid clouds, OVHcloud’s business itself isn’t monolithic, like the offerings of big players like AWS and Azure. It needs partners: “We do need an ecosystem,

and we don’t want to be monolithic. We rely on a lot of partners when our solutions go to the market. Because when you are three you can go faster than alone, and when you are 100 you can even accelerate.” There’s one drawback to this. If the customer only sees the partner’s brand, doesn’t OVHcloud remain a “best kept secret?” “This is something we are aware of,” says Paulin. “Some of our customers don't want to say our name,” he admits. Some PaaS and SaaS customers and hosters want to maintain the illusion that they have their own facilities. For instance, giant hoster GoDaddy is understood to be an OVH customer, but as both offer hosting, they are also competitors. In some countries (like the UK) the majority of partners don’t promote the OVHcloud name, says Paulin: “So we are under the radar.” But in future he believes that OVHcloud’s unique approach will become something that people will actively want to share.

"A French politician says there's a war between GDPR and the CLOUD Act" The company’s distinctive hardware is definitely winning customers, he says: “We don't have air-conditioning. We do water cooling. It is much more eco-friendly - for the same level of servers, we use 10-50 percent less energy than a classical data center. Our PUE [power usage effectiveness] is 1.09; the latest generation of air cooling is at 1.2 and many facilities are at 1.6.” Water cooling makes things simpler in the long run, he says: “It's complicated to monitor, it's complicated to design. But because you have fewer elements, you use fewer components. For example, there is no air cooler. It’s less expensive in capex, less expensive in opex, and less expensive to manage.” It also saves on building expenses. The liquid cooled racks don’t need traditional contained aisles and raised floors, so they can be installed rack by rack in ordinary warehouse space: “We don't need to have white rooms, filtering and freon. We don't have all those ugly words.” OVH hosts cloud services based on the OpenStack open source platform, and VMware’s vCloud - both of which were heavily promoted in the early 2000s, as public cloud alternatives to the AWS juggernaut. As those public cloud

contenders fell by the wayside, OVHcloud remains as an exception. The company has doubled down on that role, adding more unusual services, including a cloud service which offers Kubernetes containers implemented on bare metal servers, instead of the virtual machines (VMs) favored by other providers. This gives customers more performance, but is more of a management headache for OVHcloud, CTO Alain Fiocco explained at the same OVHcloud event. In 2017, OVH picked up VMware’s failing public cloud effort vCloud Air, and consolidated the underlying hardware into its own data centers. This gave it a US footprint, and Octave Klaba spent the first half of 2019 living in Dallas to understand how to integrate the US-heritage business into a European organization. Will OVHcloud have to modify its European attitudes as it becomes more international? No, says Paulin. If anything, the European DNA is even more important: “We do believe that our roots are in Europe and openness - transparency, GDPR and open source are European values. “When we go to Asia and Australia, do we have to change these key DNA values? I don't think so,” he says. If anything, adherence to the European GDPR gives OVHcloud an edge over US-based firms that have signed the CLOUD Act. “The CLOUD Act is a legal tool which gives access to your data if you're on Amazon, Google or Microsoft. So companies are a little bit afraid that without notice, someone can ask for access to their data.” OVHcloud actually offers compliance with the CLOUD Act as an option, since some firms may require it. Its US subsidiary is based in the States, so it has been established as a strictly separate entity, bound by the Act. He’s surprised that the UK has effectively signed up to the CLOUD Act with a new treaty, and amused at the US reaction to the bilateral agreement. “In the US, some organizations said 'Oh, this is a shame. We gave the right to the UK to have access to some data in the US'." This is "bizarre," he says, since that's exactly what the US has demanded from the rest of the world. Paulin thinks there’s more opportunity in holding out an alternative: “The government in France and the European Commission asked us to think about what we need to do to protect European data. The French government doesn't want other states or jurisdictions or companies to have access without any limitation.” “A politician in France said there's a war between GDPR and the CLOUD Act,” Paulin says. “I don't know if it's a war. But I think it is a debate.”
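The PUE figures Paulin quotes earlier in the piece can be sanity-checked with simple arithmetic. The sketch below is a back-of-envelope illustration only: the 1MW IT load is an arbitrary assumption, while the PUE values of 1.09, 1.2 and 1.6 are the ones he cites.

```python
# Back-of-envelope check of the PUE figures quoted in the interview.
# The 1MW IT load is an arbitrary assumption for illustration.

IT_LOAD_MW = 1.0
pue = {"OVHcloud (quoted)": 1.09, "modern air cooling": 1.2, "typical facility": 1.6}

for name, value in pue.items():
    total = IT_LOAD_MW * value        # total facility power, MW
    overhead = total - IT_LOAD_MW     # cooling, power distribution, losses
    print(f"{name}: total {total:.2f}MW, overhead {overhead:.2f}MW")

# Relative saving of PUE 1.09 against the other two, for the same IT load:
for name in ("modern air cooling", "typical facility"):
    saving = 1 - 1.09 / pue[name]
    print(f"vs {name}: {saving:.0%} less total energy")
# ~9% less than a PUE of 1.2 and ~32% less than 1.6 - broadly in line with
# the "10-50 percent" range quoted, with the upper end implying further
# savings beyond cooling overhead alone.
```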



Outages | Andy Lawrence

Creeping criticality Why do some industries and organizations suffer more serious, high profile outages than others? Andy Lawrence explains

In June 2019, the US Government Accountability Office (GAO) issued a report on the IT resilience of US airlines. The GAO wanted to better understand if the all-too-frequent IT outages and resultant chaos passengers face have any common causes and, if so, how they could be addressed. Since then, the UK carrier British Airways suffered its second big outage in two years, once again stranding tens of thousands of passengers and facing heavy costs. The GAO report didn't uncover much new: in some cases, the airlines needed better testing, a little more redundancy here and there, and some improved processes. But despite suspicions of under-investment, there was nothing systemically wrong. The outages had varied causes. They were often avoidable in hindsight, but not predictable. But there is still an undeniable pattern. Our own analysis of three years of public, media-reported outages shows that two industries, airlines and retail financial services, do appear to suffer from significantly more, highly disruptive (category 4 and 5), high profile outages than other industries. To be clear: these businesses do not necessarily have more outages, but rather they suffer a higher number of highly disruptive outages, and as a result, get more negative publicity when there is a problem. Cloud providers are not far behind. Why is this? The reasons may vary, but these businesses very often offer services on which large numbers of people depend, for which almost any interruption causes immediate losses and negative publicity, and in which it may not be easy to get back to the status quo. Another trait that seems to set these businesses apart is that their almost complete dependence on IT is relatively recent (or they may be a new IT service or industry). They may not yet have invested to the same levels as, for example, an investment bank, stock exchange or a power utility. In these last examples, the mission-critical nature of the business has long been clear, they are probably regulated, and so have investments and processes fully in place.

Organizations have long conducted business impact analyses, and there are various methodologies and tools available to help carry these out. Uptime Institute has been researching this area, particularly to see how organizations might specifically address the business impact of failures in digital infrastructure. One simple approach is to create a “vulnerability” rating for each application/service, with scores attributed across a number of factors. Some of our thinking - and this is not comprehensive - is outlined below:

"Outages were often avoidable in hindsight, but not predictable"

Profile. Certain industries are consumer facing, large scale or have a very public brand. A high score in this area means even small failures - Facebook's outages are a good example - will have a big public impact.

Failure sensitivity. Sensitive industries are those for which an outage has immediate and high impact. If an investment bank can't trade, planes can't take off or clients can't access their money, the sensitivity is high.

Recoverability. Organizations that take a lengthy time to restore normal service will suffer more seriously from IT failures. The costs of an outage may be multiplied many times over if the recovery time is lengthy. For example, airlines may find it takes days to get all planes and crews in the right location to restore normal operations.

Regulatory/compliance. Failures in certain industries either must be reported or will attract attention from regulators. Emergency services (e.g., 911, 999, 112), power companies and hospitals are good examples - and this list is growing.

Platform dependence. Organizations whose customers include service providers - software-as-a-service, infrastructure-as-a-service, colocation, hosting and cloud-based providers - will not only breach service level agreements when they fail, but also lose paying clients. (There are many examples of this.)

One of the challenges of carrying out assessments is that the impact of any particular service or application failing is changing, in two ways. First, in most cases, it is increasing, along with the IT dependence of all businesses and consumers. And second, it is becoming more complicated and harder to determine accurately, largely because of the interdependence of many different systems and applications, intertwined to support different processes and services. There may even be a hockey stick curve, with the impact of failures growing rapidly as more systems, people and businesses are involved. Looked at like this, it is clear that certain organizations have become more vulnerable to high impact outages than they were a year or two previously, because while the immediate impact on sales/revenue may not have changed, the scale, profile or recoverability may have. It may be that airlines, which only two years ago could board passengers manually, can no longer do so without IT; similarly, retail banking customers used to carry sufficient cash or checks to get themselves a meal and get home. Not anymore. These organizations now have a very tricky problem: How do they upgrade their infrastructure, and their processes, to a level of mission criticality for which they were not designed? All this raises a further tricky question that Uptime Institute is researching: Which industries, businesses or services have become (or will become) critical to the national infrastructure - even if a few years ago they certainly were not (or they are not currently)? And could regulation help? We are seeking partners to help with this research. Organizations are not the only ones struggling with these questions - governments are as well. More information on this topic is available to members of the Uptime Institute Network: bit.ly/UptimeInstituteNetwork
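As a rough illustration of the vulnerability rating idea - and emphatically not an Uptime Institute formula - the factors listed above could be combined into a single weighted score. In the sketch below the weights, the 1-5 scales and the example scores are all assumptions.

```python
# Illustrative sketch of a weighted "vulnerability" rating for an IT service.
# Factor names follow the article; weights and the 1-5 scoring scale are
# hypothetical assumptions, not an Uptime Institute methodology.

FACTORS = {
    "profile": 0.20,             # public visibility of the brand/service
    "failure_sensitivity": 0.25, # immediacy and size of impact when it fails
    "recoverability": 0.25,      # how long it takes to restore normal service
    "regulatory": 0.15,          # reporting duties and regulator attention
    "platform_dependence": 0.15, # other providers/clients depend on the service
}

def vulnerability_score(scores: dict) -> float:
    """Combine per-factor scores (1 = low exposure, 5 = high) into 0-100."""
    weighted = sum(FACTORS[name] * scores[name] for name in FACTORS)
    return round(weighted / 5 * 100, 1)  # normalize the 1-5 scale to 0-100

# Example: an airline reservation platform scored against each factor.
airline_booking = {
    "profile": 5,
    "failure_sensitivity": 5,
    "recoverability": 4,   # days to reposition planes and crews
    "regulatory": 3,
    "platform_dependence": 2,
}

print(vulnerability_score(airline_booking))  # 80.0 on this illustrative scale
```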


> Telco | Supplement


INSIDE


The 5G promise

The Telco Edge

The big sell-off

> A new generation of connectivity means a new world of opportunities

> Fixed and mobile network operators shift virtual workloads to the Edge

> Telcos are selling data centers by the dozen to pay mounting piles of debts



Telco Supplement


Contents
26 A competitive Edge for telcos: Shifting virtual workloads to the Edge has multiple advantages

Hailing on all frequencies

Telecoms is an equal partner with data centers in the digital infrastructure which is vital to our lives. Either without the other would be much less useful. Telecoms has been seen as a slow-moving sector compared with the digital world and the cloud (see comments by OVHcloud CEO Michel Paulin, on p20, for instance). That is now changing. The vast and rapid shifts which are continuously remaking the data center sector are shaking up the telecoms sector, transforming the ways in which operators interact with customers, transmit data, and choose their battles. This supplement takes a look at the biggest of those changes.

28 5G, Edge and the revolution: A lot of 5G predictions sound like hype. Here is the reality
30 Advertorial: Getting up to speed on 5G strategy and micro data centers
32 Flood the world with fiber: If we can lower the cost to deploy fiber, we could connect the planet
34 Routers in the sky: Terrestrial fiber has ruled the world. Now satellites are challenging it
36 The telco sell-off: Telcos thought they'd be great at colo services. Now they are moving out

A competitive Edge
Data center resources are being directed to the edge of the network, to support emerging applications like the IoT (p26). This move depends on having access to fast networks to transmit the data those applications need. So Edge is as much an issue for telecoms as it is for facilities. There's an unprecedented crossover in the moves to deliver the hardware that these applications need. Standards for telecoms facilities are emerging from the data center world.


5G brings a revolution
The fifth generation of mobile services (5G) is more than just a new and faster kind of network. It is delivered through small cells, will be fundamentally data driven, and will drive a new dependence on resources embedded into the network.

Using satellites
Fiber has been seen as the best way to distribute telecoms services. After the losses, limits and power demands of copper, glass fiber sparked a revolution when it began to displace metal wires in long distance communications. But now satellites are seeing an unprecedented comeback. It turns out that low earth orbit (LEO) constellations can take a lot of data. And even though the route up to space and back down seems like a long way to go, there are fewer hops, and light travels faster in the vacuum of space than it does in glass (p34). So expect to be offered satellite-based services for applications like backup and recovery.

Funding fiber
Despite that, fiber roll-outs continue apace. As well as the obvious international submarine cable connections, projects are aiming to find new ways to fund fiber for areas where cost is a barrier (p32).

Seeing sense
Finally, this rush of exciting telecoms activity shouldn't make operators feel invulnerable. They are still recovering from a rash of over-investment in data centers in the early years of this decade (p36). As telcos shift their data center assets to the specialists who can handle them, we are seeing an object lesson in focus. Telcos need to think on their feet, and move smartly.



A competitive Edge for Telcos Shifting virtual workloads into the Edge gives both fixed and mobile network operators multiple advantages. Martin Courtney reports

As a new approach to architecting telecommunications networks, there is no doubt that Edge computing has significant potential to change the way that carriers and service providers deliver a range of services to their business and consumer customers. But with the technology at such an early stage, specific use cases are still under development - particularly as telcos work out the best way to exploit software-defined networking (SDN) and network functions virtualization (NFV) to drive down their own infrastructure costs and streamline provisioning, configuration and management processes. There is still debate over what the network Edge actually is, and no strict definition to clear up the confusion. Most see it as smaller data center hosting/processing facilities located closer to the end user, but others feel it could incorporate local workloads running on customer premises equipment and other points of presence on local area networks (LANs) rather than wide area networks (WANs). Instead of a location, the Edge defines only a workload hosted at some indistinct node within the provider or customer infrastructure. According to network and telecommunications equipment manufacturer Cisco, the point of the Edge is threefold: to deliver lower latency to the end device to benefit application performance and improve the quality of the experience; implement Edge offloading for greater network efficiency; and perform computations that augment the capabilities of devices and reduce network transport costs. To that end, much of the ongoing innovation has so far focused on the enablement of fifth generation (5G) cellular networks. Indeed, most mobile operators agree that cost efficient 5G service delivery is simply unfeasible without the deployment of some form of Edge data hosting and processing

capability to remove the need to transmit and crunch large volumes of information via centralized data centers and the core. But Cisco's imperatives apply equally to applications and services delivered over wired broadband connections as they do to 5G links, as much to Edge infrastructure workloads in cable broadband and gigabit-capable passive optical network (GPON) access as to the 5G radio access network (RAN). As such, fixed line carriers and service providers too are looking at where Edge computing solutions can help them deliver wired broadband connectivity - and the range of IP-based voice and data services that it supports - to customers previously served via local loop telephone exchanges.

"Edge computing will process data to facilitate services as close to the user as possible" US telco AT&T, long at the vanguard of SDN/NFV adoption, is currently working to convert some of its estimated 4,700 telephone exchanges into mini data centers. The fixed and mobile network giant is close to achieving its goal of virtualizing 75 percent of its infrastructure by 2020, having already deployed SDN enabled broadband access (SEBA) to deliver superfast fiber broadband services to consumers and businesses in US cities such as Irving and Atlanta. SEBA is a set of open networking components that virtualize the software to run optical network terminals (ONTs) and optical network units (ONUs) on fiber networks, though it can be extended to other types of network including fixed wireless and Gfast that use copper cabling. In the UK, BT is extending core network



functions currently hosted within five to ten of its exchanges to around 100 metro locations. BT's own Network Cloud will evolve to reduce data and application latency, again initially for 5G applications and services. But, once the infrastructure is in place, it can be used for a variety of different functions, including broadband, IP telephony, and unified communications as a service (UCaaS) provision to customers. BT currently has around 1,200 local exchanges in the UK which serve as a first point of aggregation, more of which could be migrated to Edge facilities to meet the needs of different cloud hosted services in the future. The base of 5G cell towers is another proposed location for Edge compute resources, and can also be used to accommodate fixed line operators' equipment. Defined by the Open Networking Foundation (ONF), the Central Office Re-architected as a Datacenter (CORD) initiative combines NFV, SDN and commodity clouds to bring cost efficiency and cloud agility to the Telco Central Office (in UK parlance, the local telephone exchange), allowing operators to dynamically configure new services for residential, enterprise and mobile customers in real time. As voice as well as data traffic becomes IP enabled, routing and switching functions can be virtualized, making them easier to provision, configure and manage remotely. It is envisaged that the reference implementation of CORD will be built from commodity servers and white-box switches defined by the Open Compute Project (OCP) - which are cheaper to buy than proprietary telecommunications hardware - alongside disaggregated access technologies (vOLT, vBBU, vDOCSIS) and open source software (OpenStack, ONOS, XOS). Elsewhere, the European Telecommunications Standards Institute (ETSI) multi-access Edge computing (MEC) specification was designed to promote the


convergence of mobile base stations and IT and telecommunications networking, ostensibly to support anticipated new business cases around video analytics, location services, IoT, augmented reality, data caching and optimized local content distribution (what used to be known as a content delivery network - CDN). Those use cases were defined specifically with 5G in mind, but as software overlays devolved from the underlying network, there is no reason why they cannot be applied equally to wired broadband connections too (multi-access is included in the acronym for a reason). Similarly, Edge routers, designed to process data collected from thousands of different devices and end users, already provide various interfaces to both wired and radio-based transmission technologies and communication standards - everything from 5G and WiFi to Bluetooth and Ethernet. Edge can compete with cloud, after a fashion. Telcos like AT&T and BT need to be able to deliver fast, reliable hosted voice and data services. They are a crucial element of commercial cloud strategies, but the cloud is delivered from centralized facilities, a sector where the telcos have failed. IT giants like Amazon Web Services, Microsoft, Google, IBM and others have won in the enterprise space by investing heavily in building their own hyperscale facilities. After finally admitting defeat, AT&T sold off its core data center assets to Brookfield Infrastructure for $1.1bn earlier this year, following similar divestitures by other telcos (see p36). Having a distributed compute infrastructure at their disposal gives telcos something the cloud service providers do not, and would find very difficult to obtain for themselves: dedicated Edge hosting and processing facilities closer to the customer which are better able to support a range of latency sensitive applications for business customers. Those could include everything from infrastructure- (IaaS), platform- (PaaS), network- (NaaS) and unified communication-as-a-service (UCaaS) to industrial IoT (IIoT) and high definition video capture (e.g. CCTV surveillance and consumer retail applications), the provision of which (telcos hope) could be supplemented by value added systems integration and managed services contracts. And building out their Edge facilities puts telcos in a prime position to make themselves indispensable to bigger cloud providers when it comes to delivering more latency sensitive services and applications to their own customers - a potential market carve up that plays to both sides' strengths and reach.



5G, the Edge and the service revolution A lot of 5G predictions sound like hype. Vlad-Gabriel Anghel explains the reality

In mainstream media during 2019, the term 5G has been increasingly seen and touted as the future for mobile communications and data processing. But infrastructure industry giants have been hard at work for quite some time getting ready to tackle the challenges that come with the vast range of possibilities that 5G will allow. It is tempting to see 5G as an incremental step up from 4G/LTE, but 5G is exponentially better. It is capable of reaching speeds of up to 20Gbps and supporting up to a million devices per square kilometer (that's a lot of IoT devices) while providing an alleged 1ms latency. 5G is ultimately the true foundation for the Internet of Things. Since the emergence of IoT devices, network limitations have placed numerous boundaries in terms of real-life use cases, while fields like HPC applications have seen

limitations on data handling requirements. A ridesharing app can only reach out to an AI prediction algorithm within a data center so many times per minute, and the same applies for other types of apps that rely on data processing in the cloud. This is true regardless of where the data is obtained from: Ultimately, network constraints will not allow for a fully seamless and instantaneous end user experience if the processing is centralized in the cloud. The digital infrastructure industry has proposed a solution to this in the form of Edge computing - a way to make distributed systems more efficient by taking out parts from a centralized core and making them available closer to the data source or the Edge. In simpler terms, this means storage, data services and computing power being redistributed accordingly based on their



service or function. The ones that benefit from lower latency are moved closer to the Edge, while the rest remain at the core. Edge computing can reduce latency by placing critical resources close to the end users and increases resiliency as it creates alternate data transmission routes. It does, however, fragment the system and that can pose a risk in terms of both physical and logical security, and because it relies on additional hardware it requires a significant upfront investment. In practice, the current capabilities of Edge computing are far from being able to support the innovative use cases envisioned in one form or another for several decades. It all comes down to network latency and availability. 5G can go a long way in removing these constraints, because it effectively increases the capability of the network edge. The demand for data storage and processing


power there will tremendously increase. If 5G delivers on that promise of 1ms latency and one million devices per square kilometer, it will reshape a lot of industries, right down to their best practices and design standards. For example, the majority of distributed system architects have been limited in their design choices by bandwidth and latency considerations. If 5G brings these barriers down, then instead of monitoring 30 sensors in real time, systems might manage 1,000 - if this would bring a competitive or strategic advantage. The same goes for mobile apps. Instead of querying a cloud endpoint every minute, why not every second? As these design choices evolve, it will have a massive impact on the digital infrastructure supporting these systems, and businesses will need to embrace HPC technologies and design strategies. Essentially, 5G will rely heavily on high-performance computing elsewhere. Current mobile networks are capable to a certain extent of providing services for technologies like autonomous cars, drones and weather forecasting, but these applications will be truly unlocked by the use of 5G. What gives 5G networks their tremendous speeds and bandwidth is their technology. They operate on what is known as millimeter waves - radio signals with a frequency between 30GHz and 300GHz (4G operates between 1GHz and 5GHz). These shorter wavelengths have less range than the longer wavelengths used by 4G, so the area previously covered by one 4G transmission tower must now be covered by a multitude of smaller, inexpensive 5G antennas fixed to buildings and streetlights. Telco Edge data centers are the first to see this change and keeping up will be tricky. Revisiting the field of autonomous cars: with their numbers on the rise, telemetry data will be gathered by multiple 5G antennas on a continuous basis. To analyze this in real time, and keep all the cars in lane and on the road, will require high-performance computing

through storage and AI predictive services. This raises considerations about deploying the proper HPC equipment in an efficient and sustainable manner. With 4G, devices connect on a one to one basis: the cellphone connects to the telecoms tower; the tower connects to another tower and so on. 5G will allow devices to connect to multiple antennas and this presents the possibility of the utopian scenario of 100 percent reliability. However, operators will need to embrace HPC technologies like distributed file systems, in-memory data grids etc. and implement proper design methodologies as traffic scales exponentially to millions of writes per second. The possibilities and requirements of 5G will reshape how data centers are built and operated and current service providers will need an overhaul on their infrastructure in order to keep up. The backend services supporting 5G will need to be much more


scalable than the previous 4G equipment, however these services will most likely be born out of already existing cloud native tools and technologies which are the center point of building scalable cloud services. Furthermore, storage and computing power will shift towards the Edge, as close to the end users as possible. As mentioned before, these will need to be designed and deployed through an HPC methodology. With data increasing vastly, the need for data analytics and management will also increase. This will happen again through HPC technologies like object stores, distributed databases and file systems. Operators will require these tools and technologies in order to streamline the deployment, management and scalability of larger data volumes. As the data moves from the secure core cloud data

center to the network Edge, new security issues arise. The more fragmented a system is, the harder it becomes to provide proper security. This need is further underlined by the critical nature of the applications which are likely to use 5G services, such as connected traffic control systems, autonomous cars, drones and the like. The arrival of 5G has propelled a diversification exercise throughout the data center industry, and the landscape is changing. Businesses like Vapor IO and EdgeConneX are attempting to create a new ecosystem of Edge modular data centers and have predicted tremendous growth for this sector due to 5G. Meanwhile, already established players within the data center sector could need to shift their focus towards deploying micro data centers, while making additional investments in already existing data centers and colocation facilities to keep up with the upcoming demand. This should not be seen as blocker in the expansion of these businesses but rather a necessary step in fully releasing the potential of mobile technology and communications. Ultimately, 5G is evolving to become what is known as a general-purpose technology (GPT) - a type of technology that has the ability to drive fundamental change across the entire global economy. Previous examples of GPTs have been the printing press, the automobile and the steam engine. As data center owners and operators come to grips with the needs and challenges of 5G and adjust their infrastructure and facilities accordingly, 5G end user numbers will soar, making it the latest and most impactful GPT to date. With the number of IoT devices connected to the Internet expected to reach the order of billions in the near future, it is through 5G that all of these devices will be able to interconnect and exchange data, more quickly and more reliably than before. It does, however, require data center owners to first find huge investments while managing stakeholders’ expectations on short term ROI. Deployment of these methodologies will need precise and careful planning and implementation. The industry as a whole is not there yet, still a lot needs to be done in order for 5G to become the network and service revolution it aspires to be, but it seems, at least for now, it is on the right path. The future is around the corner!
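A quick calculation shows why these bands are called millimeter waves, and why cell sizes shrink so dramatically. The sketch below derives wavelengths only from the frequencies quoted above; the speed-of-light constant is the only other input.

```python
# Wavelength = speed of light / frequency. The frequencies are those quoted
# in the article; everything else follows from the formula.

C = 3.0e8  # speed of light, m/s

for label, freq_hz in [("4G low band", 1e9), ("4G high band", 5e9),
                       ("mmWave low", 30e9), ("mmWave high", 300e9)]:
    wavelength_mm = C / freq_hz * 1000
    print(f"{label}: {wavelength_mm:.1f}mm")

# The 4G bands come out at 300mm and 60mm; the 30-300GHz bands at 10mm down
# to 1mm - true millimeter waves. Shorter wavelengths are absorbed and
# blocked more easily, which is why one 4G tower's coverage area needs a
# multitude of small 5G cells.
```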



Advertorial: Schneider Electric

Getting up to Speed on 5G Strategy and Micro Data Centers Greg Jones, Schneider Electric's VP of Strategy and Offer Management for the Cloud & Service Provider Segment, talks to Steven Carlini, VP of Innovation and Data Centers

Everywhere I go, it seems that people are talking about 5G. I wanted to find out more about the specific role of micro data centers in 5G so I sat down with Steven Carlini, our Vice President of Innovation and Data Centers. Steven is responsible for developing integrated solutions and communicating the value proposition for Schneider Electric's data center segment. I knew he'd have a wealth of timely insights on the topic and he didn't disappoint. Here is part of our conversation. What is your definition of micro data centers? I would classify micro data centers as two or fewer IT racks, where a massive amount of computing power or storage can be managed. Today, we have micro data centers as small as 6U that can hang on a wall or even be put in a ceiling! I see micro data centers as a critical extension of cloud data center architectures to reduce latency and add redundancy in a hybrid cloud environment. These micro data centers are a key building block. However, because they are spread out all over the place, they do present challenges in the form of troubleshooting, maintenance, and repair. Also, energy usage becomes a critical operating expense at scale. For example, let's say you have 2,000 sites and 10kW per site - that's 20MW of power! That equates to roughly $20 million in electric bills a year (operating at average efficiency). This is why the efficiency of micro data centers is top of mind for a lot of companies. Is Schneider Electric investing in micro data centers and/or infrastructure changes? For Schneider Electric's Secure Power Division, edge data centers including micro data centers are a top priority. Many projections show that the

market for local edge micro data centers will approach or exceed the market for hyperscale mega data centers. Historically speaking, the data center market has been cyclical between centralized and distributed. I see it coming again, but it may move to a more balanced architecture as core and edge become an integrated architecture. Why are micro data centers important? Applications and operations are moving closer to the user or the data on the edge and micro data centers are handling many business functions. Think of the hotels that solely rely on local data centers for coding digital room keys and managing reservations. In the future, many hotels are considering facial recognition for completely automated experiences. As the world gets more automated, it will rely more and more on micro data centers. In a 5G architecture, micro data centers are essential. And, local clusters are necessary to meet 5G performance targets. But we have the speed of light limitation of 300 million meters per second. So, with less than 1 ms of latency spec for 5G, the maximum distance is less than 200 miles round trip – and that’s a theoretical best case. We are dealing with many carriers and they are laying out their clusters in circles with a much smaller radius than 100 miles, especially in densely populated areas. Can you provide some background on 5G micro data centers? Let’s start with the two main enablers of 5G: one is the new radio access network (RAN) and the other is the data center architecture. 5G uses microwaves and millimeter waves at very high spectrum so the signal does not go far. An easy way to think of it is the three-house rule: to operate 5G you need

an antenna for every three houses. That is how close together they need to be. And, let's go back to the speed of light at 300 million meters per second. The only way to achieve the required latency of less than 1 ms is to build local clusters that will include micro data centers. In a recent 5G test in Chicago, a 4K movie was downloaded in 20 minutes using 4G and 19.5 minutes using 5G. Why did that happen? It was because the only 5G portion on the connection was from the small cell hanging on the light pole to the phone - about 100 feet. The movie was in a data center many miles away. In the near future, micro data centers will serve the function of mobile edge computing (MEC). They will have traditional telco functions, like call routing, and also IT functions, like content delivery. For example, that movie could have been stored in the small cell and downloaded almost instantly. What will 5G micro data centers do for energy savings? This is a lively discussion topic as Schneider and our industry have been focused on energy efficiency for larger data centers over the years. Our goal is to make the micro data centers as efficient as the hyperscale data centers with no increased OpEx. Schneider, like many companies, has carbon neutral goals and edge data centers are a big part of those goals. We know that globally, millions of units will be needed to support 5G. As we



discussed, the energy use is being shifted from the core to the edge, and it's a top-of-mind issue to address energy efficiency. For 20MW - 2,000 10kW micro data centers - a 20 percent efficiency reduction will roughly cost an extra $4 million per year. That's why it's important to make sure the designs of micro data centers include highest efficiency cooling technologies like liquid cooling, for example. And we can't forget the management and maintenance aspect. A cloud-based management system is critical for looking at thousands of sites. And accurate reporting is absolutely essential so it's clear when maintenance is needed, and it can be performed quickly and efficiently. This has been an area of focus for Schneider. Can you provide updates on Schneider Electric Secure Power strategy for 2019, and key focus areas for next year? Schneider is working across different stakeholders that are vying for a leadership spot in the data center architecture needed for 5G deployment. Telco used to be an exclusive country club with only a few members. But 5G, a new technology that embraces openness, is a public course where everyone from Tiger Woods to Happy Gilmore can show up with clubs. In addition to the carriers and traditional telco equipment providers, we are dealing with cable TV service providers, IT companies, and internet giants. Our strategy is to enable these companies to add value in the 5G ecosystem. We are focused on integrating the safest enclosures, latest battery technology, most innovative cooling, and cloud-based management systems. Of course, in 5G, as in golf, the best player will win.
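Carlini's sums are easy to reproduce. The sketch below reruns his back-of-envelope arithmetic; the electricity price is an assumption chosen to be representative, not a figure from the interview, while the site count, per-site load, efficiency penalty and 1ms latency budget are the ones he quotes.

```python
# Reproducing the back-of-envelope sums from the interview.
# The $0.11/kWh electricity price is an assumption; the site count, per-site
# load, 1ms latency budget and 20 percent efficiency penalty come from the text.

SITES = 2000
KW_PER_SITE = 10
PRICE_PER_KWH = 0.11          # assumed average commercial rate

total_mw = SITES * KW_PER_SITE / 1000
annual_kwh = SITES * KW_PER_SITE * 8760
annual_bill = annual_kwh * PRICE_PER_KWH
print(f"Fleet load: {total_mw:.0f}MW, annual bill: ${annual_bill/1e6:.1f}m")
# ~20MW and roughly $19m a year - close to the "$20 million" quoted.

extra = annual_bill * 0.20    # a 20 percent efficiency penalty across the fleet
print(f"20% efficiency penalty costs an extra ${extra/1e6:.1f}m per year")

# The speed-of-light bound behind the "less than 200 miles round trip" remark:
C_KM_PER_MS = 300_000 / 1000  # light covers ~300km per millisecond
round_trip_km = C_KM_PER_MS * 1.0   # path allowed by a 1ms latency budget
print(f"1ms budget allows ~{round_trip_km:.0f}km of path, "
      f"i.e. under {round_trip_km * 0.621 / 2:.0f} miles each way")
```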

Leveraging Cell Sites for Mobile Edge Computing 4.5-5G requires computational power in closer proximity to users - creating a unique edge computing opportunity for cell site owners. However, this transformation is not without its challenges. End-to-end solutions focused on power, cooling, enclosures, and management software are needed.

Schneider Electric Contact Details Steve Carlini Steven.Carlini@se.com


Flood the world with fiber If we can lower the cost to deploy fiber, we could connect the planet. Sebastian Moss reports

Sebastian Moss Deputy Editor



Fiber to the people

W

orking in this industry, discussing the roll-out of 5G and the impact of a world full of connected machines, it can feel like we have already solved the basics of the digital age. But we’ve left half of the planet behind us. “We've reached four and a half billion Internet users,” Isfandiyar Shaheen told DCD. “How do we onboard the next three to four billion people on the Internet?” This question, of how to connect billions and provide a stable connection for billions more, has consumed the lives of many. Giant corporations have sunk millions into ambitious schemes involving balloons, satellites and huge drones. Shaheen, founder and CEO of NetEquity, believes there’s a simpler approach: Fiber. “When you compare the bandwidth of fiber to everything else, whether it's a LEO satellite, microwave, millimeter wave - those things are not even in the same quadrant. It’s literally, thousands of times less bandwidth than a single set of fiber.” His company believes that it is possible to reach a vast number of the disconnected or poorly connected people in the world by deploying fiber across utility networks. “The theory I'm working with is saying that if you follow the electrical grid to deploy fiber, you can fiberize 80-90 percent of cell towers in the world,” Shaheen said His hope is that he can convince utilities to deploy fiber along their grid infrastructure as “utilities need fiber to achieve substation automation,” Shaheen said. “Utilities have also created a standard called IEC 61 850 that relies on fiber to do key things such as integrating renewables, integrating storage, and achieving a hell of a lot more automation to reduce their line losses, which will lower operating expenses and through which they can start lowering their capital expenditures. Because when a substation goes digital, its copper footprint goes down by 80 percent.” The utilities he hopes to convince are, however, “mostly bankrupt - they're running on state subsidies. So the solution I've come up with is to say to them ‘I will build you a fiber network on my expense, and you get to lease it based on whatever the going rate for dark fiber in your market is.’ It's usually about 60 cents to 70 cents a meter a year.” “That kind of lease payment makes it feasible to raise about 80 percent of the project costs as debt.” Once the network is underway, then it becomes easier to turn to telecoms companies and say “‘hey I am bringing fiber to your tower, but instead of charging you whatever the market rate is for this service, let me make this an opex neutral deal for you.

If you are spending $500 a month per tower on running microwaves, pay me the same $500 per tower per month, and I will give you a fiber connection,’” Shaheen said. “But I won't give it as dark fiber, I will give it as a lit service. Because if I give it to you as dark fiber, you will lock away bandwidth. My goal is to make bandwidth abundant. So I would offer you abundant bandwidth, for the same operating costs of running microwaves. Now that combination of selling services to cell towers and getting these revenues from dark fiber leases from the utility that makes the whole business case hold.” By playing to the pain points of the utility and the telecoms company, Shaheen hopes to avoid the aggressive legal fights and turf wars that have stymied others’ attempts at creating utility bandwidth. A few examples of municipal fiber exist, with Shaheen pointing to efforts of small San Juan-based Orcas Power & Light Cooperative (OPALCO), which “deployed fiber at the cost of about $30 a meter. “What's been cool about this cooperative is they published a very nice payback analysis in one of their board publications. And that shows a payback period of about 12

“Currently, the dry powder of infrastructure funds is sitting at $180 billion” years. It shows the breakdown of the sources of savings, which came largely from saving the number of trips that their guys were making for repairs, because fiber gave them visibility across the grid.” For Shaheen’s plan to work, however, requires fiber to be deployed at a lot less than $30 a meter. “Fiber cables are a commodity, it costs less than $1 per meter. Yet most deployments end up costing $30 to $50 a meter, and that's because all the cost is in labor and right of way. If there is a way to bypass labor and right of way, there's no reason you can't deploy fiber at $4 a meter or around that benchmark.” Shaheen believes that there is a way to avoid these cost barriers. But to achieve a radical price cut, Shaheen has to rely on an ambitious secret hardware project by another company to be pulled off without a hitch. “What allows me to pull this stunt off is that I have this relationship with Facebook,” Shaheen said. “I signed an MoU with Facebook that gives me access to some

technologies that can substantially lower the cost of fiber deployments. It's not a completely ready technology, that's the risk associated with the project.” Facebook declined to detail the technology, which is currently under development. The project is part of Facebook’s Connectivity division, home to several projects focused on improving Internet penetration, including an ill-fated drone project and the controversial Free Basics effort. Shaheen met Facebook representatives at a Telecom Infra conference, advised the company as a consultant, and then became its first ‘Entrepreneur In Residence.’ “I have no funding from them,” Shaheen clarified. “This is important and by design, because if I were to have a deeper relationship with them, then I am subjected to their bureaucracy - they’re a 60,000 person company. I don't need the entire company, I just need to talk to a couple of really solid engineers.” Those engineers are key to Shaheen’s strategy, and roll-outs can’t proceed without the technology working. “My bet isn't so much on Facebook the company, my bet is on who I am collaborating with.” In the meantime, Shaheen is working on “building a business around the promise,” he said. “My timeline is such that for the next year to year and a half, some of the useful tech that Facebook is working on will get finalized and will get prepared. During that period, I want to generate a pipeline of deals, sign NDAs with utilities, get their grid maps, turn them into investor documents, and then start putting together investor consortiums.” So far, he says he has “made progress with utilities in Pakistan and South Africa,” and is working with a large East Asian investment group to explore a partnership. If the technology works and $4 a meter fiber is possible, Shaheen is convinced he can sign up more investors: “Currently, the dry powder of infrastructure funds is sitting at $180 billion.” But to pull off that dream of connecting the world will require a large chunk of those funds. “There’s 25 million route kilometers of power lines that are suitable for fiber deployments and at $4 a meter that’s $100 billion; at $5 a meter, that's $125 billion.” Should he be able to line up investors, Shaheen hopes to “flood the world with fiber, make sure it's not under exclusive contracts, and create a true public utility that can that can help many billions of people access the Internet at a price that they can afford.”

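Shaheen's case rests on a handful of unit prices: a dark fiber lease of 60 to 70 cents a meter a year, an opex-neutral $500 per tower per month, and a target build cost of around $4 a meter. The sketch below strings those together for a single hypothetical route; the route length and tower count are invented for illustration, and only the unit prices come from the interview.

```python
# Illustrative unit economics for a utility-grid fiber build, using the
# per-meter and per-tower figures quoted in the article. The 1,000km route
# and 100 towers are hypothetical inputs, not NetEquity projections.

ROUTE_M = 1_000_000          # 1,000km of fiber following power lines (assumed)
TOWERS = 100                 # cell towers served along the route (assumed)

BUILD_COST_PER_M = 4.00      # target deployment cost, $/meter (quoted)
LEASE_PER_M_YEAR = 0.65      # dark fiber lease to the utility, $/m/year (quoted range midpoint)
TOWER_FEE_MONTH = 500        # opex-neutral fee per tower, $/month (quoted)

capex = ROUTE_M * BUILD_COST_PER_M
annual_revenue = ROUTE_M * LEASE_PER_M_YEAR + TOWERS * TOWER_FEE_MONTH * 12

print(f"Build cost: ${capex/1e6:.1f}m")
print(f"Annual revenue: ${annual_revenue/1e6:.2f}m "
      f"(simple payback ~{capex/annual_revenue:.1f} years)")
# At $30-50 a meter - today's typical deployment cost - the same route would
# cost $30-50m, which is why the whole plan depends on stripping out labor
# and right-of-way costs.
```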


Routers in the sky Terrestrial fiber has ruled the telecoms world for a generation. Now satellites are promising to match its performance, says Doug Mohney

G

igabit-speed satellite broadband promising performance equal to or better than terrestrial fiber is almost here. Newcomer OneWeb and industry darling SpaceX are launching satellites in earnest by the end of 2019 to build global high-speed networks. Dozens of satellites at a time will go into space on each rocket with each mission building towards constellations of thousands of spacecraft overhead. With full global coverage expected to be completed in 2021, OneWeb and SpaceX’s Starlink project aims to extend broadband to underserved and unserved regions of the world, along with more profitable markets including aviation, maritime, government and enterprise sectors. In contrast, LeoSat and Telesat plan to launch satellites in 2021 specifically designed for enterprise-class services - “MPLS routers in the sky” with “fiber-like” performance according to executives from both firms with global service turnup expected by the end of 2022 to mid-2023. Looming behind them all is Amazon, with its own ambitions and needs. “Our approach to [low earth orbit broadband] is we’re building a Layer 2 satellite system,” said Erwin Hudson, vice president of Telesat LEO. “We’re designing our system to be compatible with MEF standards from the bottom up to provide enterprise-quality service with business class SLAs.” Appealing to the corporate IT department is a strategy start-up LeoSat is finding success with. “I can explain what we do to a data guy in five minutes,” chief commercial officer Ronald van der Breggen said. “We’re putting a bunch of MPLS routers in the sky, connecting them with lasers, and you can use them on any point in the world to connect. ‘When is that available?’ is the first question. We have a service which is unique, resonating with enterprise and governments.” Both satellite constellation designs

include optical cross-links between satellites, a feature providing more speed and security over traditional fiber. Customers can choose to set up a connection between sites that exclusively rides over the satellite network, bypassing traditional network exchange points and reducing the number of hops found with terrestrial connectivity. Laser light traveling through the emptiness of space in a straight line between satellites moves faster than it does traveling through a glass fiber following a meandering path under the sea, through cities, and along railroads, highways, and gas pipelines. Fewer ground hops mean fewer access points for interception or disruption, providing a level of resilience against the threat of backhoes and other fiber disruptions.
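The physics behind that latency claim is easy to check. The comparison below assumes a 10,000km straight-line path, a textbook refractive index of about 1.47 for silica fiber, and a 30 percent routing detour for the terrestrial cable - all illustrative values, not figures from LeoSat or Telesat.

```python
# Back-of-envelope comparison of light in vacuum vs light in glass fiber.
# The 10,000km path, the fiber refractive index (~1.47 for silica) and the
# extra routing factor are illustrative assumptions, not company figures.

C = 299_792            # speed of light in vacuum, km/s
N_FIBER = 1.47         # typical refractive index of silica fiber (assumed)
PATH_KM = 10_000       # straight-line distance between two endpoints (assumed)
ROUTE_FACTOR = 1.3     # terrestrial fiber rarely follows a straight line (assumed)

t_vacuum = PATH_KM / C * 1000                            # one way, in ms
t_fiber = PATH_KM * ROUTE_FACTOR / (C / N_FIBER) * 1000  # one way, in ms

print(f"Vacuum, straight line: {t_vacuum:.1f}ms one way")
print(f"Fiber, meandering route: {t_fiber:.1f}ms one way")
# Light in glass travels roughly a third slower, and the cable's detours via
# cities, railways and seabeds add more - which is why inter-satellite laser
# links could undercut submarine fiber between distant financial centers.
```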

“Data centers are a key component of our network when it is fully rolled out” LeoSat and Telesat expect to have their respective networks ready to deliver service in the same timeframe, but the companies have very different financial paths to get there, with LeoSat facing a bigger hill in front of it. “It’s no secret, we’ve had difficulties in raising the equity,” said van der Breggen (see Note). “We had hoped to close the Series A right before the summer. Now we’re working very hard to find additional investors to get us over the hump. There’s a lot of interest from venture funds, strategic funds. We need another investor to line up with what we already have and get us over that hump.” LeoSat estimates it will take $3 billion to build and launch 90 satellites for its initial network. Satellite providers SKY Perfect JSAT and Hispasat placed early investments in LeoSat, but exact amounts have not been disclosed. The company also has logged

34 DCD Supplement • datacenterdynamics.com

Doug Mohney Contributor

$2 billion of customer commitments in Memorandum of Understandings (MoUs), with a diversified mix of firms including high-frequency trading markets, oil and gas, and telecom firms for good measure. Broadband speed offerings for LeoSat start at 50Mbps and range up to 1Gbps, with the most popular requests in the MoU stack 100Mbps. Latency is expected to be in the 20 millisecond (ms) range for a simple trip up and down between ground and satellite, but van der Breggen played down fixating on the simplest of latency examples. “It’s a meaningless case and only tells you so much what an individual satellite is able to do,” van der Breggen said. ”The capabilities of a service are going to be far more important. You have to add in all the fiber [involved] as well as the satellite.” Routing via satellite optical links between major financial centers is expected to be significantly faster than submarine fiber connections, he said, providing a competitive advantage to banks and stock trading firms. In comparison, privately held Telesat is one of the oldest and largest satellite operators in the world. The Ottawa, Canadabased company publishes quarterly and annual financial reports since it issues publicly traded debt, providing a transparent window into its financials. Last year, Telesat closed around $680 million in revenue and had a contracted backlog for future services of nearly $2.8 billion. Telesat plans to launch 198 satellites for global coverage in the first half of 2023, with service available in the polar regions by Q3 2022. An additional 100 satellites will be added by the end of 2023 for a total of 298 satellites in the initial constellation. User service speeds are expected to be scalable to the Gbps range with one-way latency “less than 50ms” in the same continental region. Pricing to build the system, including satellites and ground equipment, and launch it into orbit hasn’t been finalized with the constellation expected to cost “several billions of dollars,” Erwin said. A primary contractor for the system should be selected


Routers in the sky - Source: LeoSat

by the end of this year, with Telesat asking the winning company to build a factory for its satellites in Canada. “We intend to finance Telesat LEO with a combination of cash, equity and debt,” said Erwin. ”As a leading global satellite operator, we generate substantial cash flow and are able to provide a significant amount of funding for the LEO development ourselves.” With an established customer base and a large sales and marketing organization, Telesat also has the Government of Canada as an anchor customer, with a commitment of $500 million over 10 years as a part of efforts to expand broadband access. Canada will use Telesat to deliver backhaul services for ISPs and phone companies in underconnected communities. Both LeoSat and Telesat see data centers as primary partners for delivering network access. “The short answer is data centers are a key component of our network when it is fully rolled out,” van der Breggen stated. “A large portion of traffic will go into data centers.” Multi-tenant facilities such as Equinix are prime real estate, enabling a gigabit satellite service provider or a third-party handling the work within a physical facility to connect

customers to high-speed interconnection points, existing terrestrial networks, Edge computing, and other colocated resources. Telesat plans to establish PoPs at major Internet and cloud exchange points to interconnect with customer networks. “Our network architecture will drive more traffic to data centers on our global WAN,” Erwin said. Telesat also plans to work with data center operators as well as cloud service providers to simplify access to “cloud on-ramps” for its existing enterprise customers. “There’s about a dozen network access points around the world where we aggregate three to five of our earth landing stations at a common point,” said Erwin. “Regional customers can connect to our network at that point of presence.” However, data centers could be more than where LeoSat and Telesat spend money to connect into the rest of the world. “We’re not really focused on fiber replacement, but we can provide disaster recovery if [a data center] lost terrestrial connectivity,” Erwin said. Telesat can deliver 1Gbps service using a 1m satellite dish, with higher speeds possible with larger dishes. Delivering 1.2Gbps to 2Gbps of connectivity

using a 1.8m antenna is feasible. While there has been skepticism in some circles about the commercial viability of LEO broadband constellations, LeoSat, OneWeb, SpaceX, and Telesat received both validation and a headache this spring, when Amazon announced Project Kuiper, its satellite broadband project, after its ITU spectrum filings became public. The e-commerce and cloud giant wants to launch over 3,200 satellites for internal broadband use as well as to provide broadband to customers, but hasn’t discussed a timetable for when it will start putting hardware in the sky or offer services. Amazon may start launching satellites by 2023, but it isn’t yet clear whether it will be a direct competitor to the enterprise-designed services of LeoSat and Telesat.

Note: Less than 24 hours before DCD>Magazine went to print, LeoSat laid off several staff, including COO Ronald van der Breggen and at least two other executives. The company did not reply to multiple requests for comment. Updates to this story will be posted at datacenterdynamics.com



Telco sell-off

Peter Judge Global Editor

The Telco data center sell-off Telecoms providers thought they’d be great at colocation data center services. Now they’re mostly getting out of the game, says Peter Judge


Photography: Sebastian Moss

The news that Telecom Italia is looking to spin off 23 of its data centers and list them on the stock market is only the latest in a series of moves that see telecoms service providers backing away from earlier plans to make a lot of money out of data center colocation. It seemed so simple in the early years of this decade. Data centers were booming, and they are a service industry based on infrastructure hardware. To telecoms operators, it looked like a logical expansion, and many of them dived into the market. Ten years on, most of them are exiting. “Despite many telcos making moves into the data center and cloud infrastructure markets, more and more are now realizing that they would rather concentrate on their core business and let someone else manage their data centers,” says Massimo Bandinelli, marketing manager at Telecom Italia’s compatriot Aruba. Many telcos simply bought existing data center providers, often at high prices. Verizon, for instance, acquired data center provider Terremark in 2011 for $1.4 billion. The company later decided that offering colocation services did not fit with its business model, and sold off its data centers to Equinix for $3.6 billion, in a deal that closed in 2017. Also in the US, AT&T painstakingly accumulated a network of data centers, only to sell them off to Brookfield Infrastructure and other institutional partners for $1.1 billion in 2018. Brookfield relaunched them as a new data center provider, Evoque. In 2017, CenturyLink sold 57 data centers for $2.3bn to a consortium that became another standalone data center provider, Cyxtera. It wasn’t a sudden change. DCD first noticed the phenomenon in 2015, when some smaller telcos unloaded their data centers. For instance, in that year Arkansas telco Windstream sold its holding of 14 data centers to TierPoint for $575m, giving that provider 179,000 sq ft (17,000 sq m) of space. Rumors started about imminent sales at the telco giants AT&T, Verizon and CenturyLink back in 2015, but the deals took a couple of years to come to fruition. The move took in telcos which had built out their own data centers, as well as those which acquired them. Telecom Italia, for instance, had at least some of its facilities built by a partner from the telecoms industry - Ericsson, which is primarily a network provider. Also in Europe, Telefónica SA sold off its 11 colocation data centers. They went to Asterion Industrial Partners for €550m ($600m). In the UK, BT seems to have been


selling its data centers off one-at-a-time in deals like a 2015 sale which saw a Tier III facility near Gatwick go to operator 4D. The trend extends to younger markets as well, where the telcos’ data center investment may have been much more recent. In Latin America, Mexican telco Axtel sold three data centers to Equinix for about $175m. Indian telco Tata flipped dramatically. In 2013 and 2014, it saw a period of rapid data center expansion, building or acquiring 44 data centers in India and elsewhere in

"The workforce required is very different from the one needed as a telco operator” Asia. It then floated them as an independent subsidiary, Tata Communications Data Centres (TCDC). However, in 2016, Tata decided they were more of a liability than an asset, and sold TCDC to Singapore’s ST Telemedia for around $650 million. It’s easy in hindsight to think that these telcos stumbled into data centers by mistake, getting into an area they did not fully understand, where they would face more focused competition that could run rings round them. That analysis is pretty much true, but in the latter years of this decade, telcos faced quite a few financial pressures, analysts have pointed out, Back in 2015, The Motley Fool’s Adam Levy suggested US telcos needed money because they had paid heavily for wireless spectrum, but were not yet gaining huge revenues from mobile data. “Both [AT&T and Verizon] spent heavily in the FCC’s AWS-3 spectrum auction, acquiring valuable airwave licenses for their wireless

businesses,” wrote Levy. Heavily burdened with debt, many telcos have been looking for ways to raise money in recent years. And the data center field has been a good potential source of cash as, during a period of rapid growth, facilities are valued very highly. While selling CenturyLink’s data centers, company chief executive Glenn Post told Barclays Capital’s Amir Rozwadowski that a lot of the motivation was simply that buyers were prepared to pay big money for those assets. And telecoms operators, who have been burnt in previous market crashes, have been understandably keen to get hold of that cash, ahead of any potential future crash. “First of all, as to why now is an opportune time… valuations are obviously good right now. They can always change, but we know the market’s good,” Post said, back in 2015. “We think our cashflow could be used for investments that can drive higher returns, and better shareholder value. So that’s why we’re looking at divesting data center assets.” But there are other reasons why data centers are not such a good fit for telcos as they might have been once. There have also been changes in the data center industry over the last several years, which have moved data centers ever further from the comfort zone of telecoms providers. “The workforce required is very different from the one needed as a telco operator,” warns Bandinelli, adding that this specialization has increased as data centers have become more evolved and more commoditized. There are also large investments required to keep up to date, adjusting to industry-wide regulations, adopting standards, getting certifications, and moving to renewable energy. Alongside this, new business models like Edge resources are emerging, while at the other end of the scale, hyperscale providers are building a market for huge facilities

which are not cost-effective for a services company to deliver. When the sell-off began, Zahl Limbuwala of data center analytics company Romonet, now a subsidiary of CBRE, felt that the telcos may have taken a view that their data centers were likely to go down in value because of the investment requirements: “If your data centers are approaching 10 years old and have not had a major reinvestment, you are in for a nasty surprise,” he told DCD in 2015. Some organizations have invested in data centers, seeing them as a kind of commercial real estate with very large returns. Limbuwala pointed out that they also had high costs, with a “reinvestment time” of 10 years before new investment is needed - about half that of mainstream commercial property. Despite this gloomy picture, there are some telcos which are apparently exceptions to this trend. Japan’s NTT has a thriving data center subsidiary, which has absorbed RagingWire in the US, NetMagic in Asia, e-shelter and Gyron in Europe, and is in the process of forming them into a single coherent unit. NTT, however, operates its data centers at arm’s length. RagingWire CEO Doug Adams contrasts its approach with that of US telcos like Verizon and AT&T: “[The US telcos] were very shortsighted, very quarterly focused,” he said in a DCD interview earlier this year. “They were getting their tushes handed to them by the Equinixes, Digitals and RagingWires of the world, and they backed out. I think NTT was extraordinarily intelligent for doubling down on this business.” Whether it is standalone data center firms, or subsidiaries like NTT Data Centers operating independently, Bandinelli believes that the world is shifting towards pure-play data center providers that are able to meet market requirements and provide competent technicians at a lower cost.

Telco data center sell-offs, 2015 - 2020?

Windstream sells to TierPoint
Tata Comms sells to ST Telemedia
AT&T sells to Evoque
CenturyLink sells to Cyxtera
Verizon sells to Equinix
Telefónica sells to Asterion
Axtel sells to Equinix
Telecom Italia sells to ???



Colocation providers see faster GROWTH with lower risk.

EcoStruxure™ for Cloud and Service Providers brings efficiency to any site, on time. • Lower total cost of ownership with efficient, sustainable CapEx and OpEx management. • Achieve higher operational performance while reducing energy costs up to 60%.* • Get support throughout your data center’s lifecycle — from design to construction and operation.

#WhatsYourBoldIdea se.com/colo * Based on previous data, 2017. This is not a guarantee of future performance or performance in your particular circumstances. ©2019 Schneider Electric. All Rights Reserved. Schneider Electric | Life Is On and EcoStruxure are trademarks and the property of Schneider Electric SE, its subsidiaries, and affiliated companies. 998_20607840_GMA-US

Galaxy™ VX UPS EcoStruxure IT



Amazon’s spotty pricing AWS changed its Spot pricing to be smoother. The result ended up more expensive, and less transparent. Sebastian Moss reports

In November 2017, AWS changed how it charged for a service. The switch, made suddenly and with little fanfare, was touted as a small improvement - but raised prices and accidentally stymied a government cloud project.

AWS EC2 Spot Instances, launched in 2011, have always been something of a gamble. Available at a significantly lower price than standard EC2 instances, the Spot market allows users to bid for the remaining capacity in an AWS data center. The more bids, the higher the price - or at least, that’s the claim. While Spot Instances are cheaper, users run the risk of the work being terminated if the Spot price exceeds the maximum price bid by the user, or if the capacity is no longer available. “What you're looking at there is our attempt to recover the marginal cost of that as-yet unused capacity, capacity that has not yet been sold for demand usage or for reserve instances,” Ian Massingham, AWS director of developer technology and evangelism, told DCD last year. “So that's essentially what the Spot market is; it is AWS recovering the marginal cost of having large amounts of capacity deployed and unused by customers around the world.” At the time, however, AWS had already changed its algorithm - and Amazon has since declined numerous requests for comment from DCD. In the early years, the potential cost savings from Spot pricing proved enticing for many, including the US National Science Foundation. Rich Wolski, professor of Computer Science at the University of California, Santa Barbara, was part of a team building a federated cloud for several US universities with NSF backing. The aim of the Aristotle Cloud Federation was for the institutions to share computing resources across their data centers. “But

at some point, if all of the institutions get full, what we want to do is burst from the Federation into Amazon,” Wolski told DCD. The group decided to use the Spot market to maximize cost savings. Jamie Kinney, AWS senior manager scientific computing, said in a press release at the time: "We are excited to work with the Aristotle team to provide cost-effective and scalable infrastructure that helps accelerate the time to science.” But as it was university-led scientific research, backed with government money, the ‘bursts’ required some level of predictability. “Universities do fixed budget resource allocation, you get ‘this’ many dollars, and it has to last ’that’ many years,” Wolski said. So Wolski and his team developed an algorithm to predict Spot price changes, and the likelihood that a workload would be terminated early. “We would be able to say if you bid ‘this’ much, you'll get a day's worth of time, guaranteed with 99 percent


probability. It was a great success,” Wolski said. “This went on for a couple of years.” Then in late 2017, something happened. “We saw in the press that Amazon had changed the pricing. At first, I was overjoyed - we thought, wow, this is great. If you smooth things, the technique that we had developed should just become much more accurate. “And we started looking at it, and it didn't look right. From a mathematical perspective, from a data analysis perspective, it just didn't look like what the press was saying was happening. “Why doesn't this look right? Has something else changed? Is our method wrong?” Wolski’s team scrambled to work out what had happened. “We started digging into it, we read everything we could read, and we started seeing reports from the popular press about companies that had their own internal algorithm for optimizing their use of the Spot market. And those algorithms were breaking. “We went back in and just did a very



careful analysis,” Wolski said. The results, published in the research paper Analyzing AWS Spot Instance Pricing (August 2019), found that prices were higher by an average of between 37 percent and 61 percent. But price increases were not the real issue for Wolski’s team: “If you're doing fixed budget stuff, that just means you have less work to do,” he said. The problem was that it became far harder to predict which workloads would be terminated, with the system relying less on

auction-like market forces, and instead on a hidden algorithm to decide costs and when to end workloads. “This had an impact,” Wolski said. “It was suddenly unreliable, people who were depending on the fact that we can make this prediction could no longer use it.” The change crucially shifted Spot Instances away from what they used to be, Wolski said. “There's no indication that after the change it's a market at all, it's just retail pricing - it's dynamically changing retail pricing. Amazon has full price control of the Spot Instances.” “I felt pessimistic,” Professor John Brevik, who worked on the original NSF technique and the subsequent paper, said. “It's sort of like this door's been shut on what was an interesting thing to figure out - this dynamic pricing mechanism and how to predict it. I'll leave it to the more corporate or economically astute to infer why that kind of change happened.” But while the 2017 shift was the final nail in the coffin for the market system, Spot Instances have always relied on hidden algorithms and an invisible hand to control pricing, Orna Agmon Ben-Yehuda told DCD. Her team at the Israel Institute of Technology studied Spot Instances from their launch in 2011 up to 2013.

“In 2011 we first showed that during the first two years of the operation of the AWS spot instances, 98 percent of the price traces were consistent with being the result of an artificial algorithm. This algorithm computed a reserve price: a price under which AWS were not willing to rent the instance.” Her work discovered “the existence of several unnatural, artificial characteristics, which had no economic justification.” She added: “I would like to stress that the problem was never that AWS had a reserve price, or even that they changed it. The problem was that they declared their prices were based on supply and demand... and people believed it, and based their academic work and economic plans on that.” There are legitimate reasons for some control of the market, Steve Fox, CEO of AWS reseller AutoScalr, said. Users began to realize that bidding ridiculously high prices - forcing others onto the standard On Demand service - would clear the marketplace: "I only have to pay that high price for a couple of minutes, and then I'm back down at the cheap price - and I am never interrupted." Fox told DCD: "So it turned into this game of chicken, where people started bidding higher and higher and higher, just trying to keep from getting interrupted. And it got so bad, where prices were going extraordinarily high, like 1,000 times. So Amazon put a cap on it to say prices could never go above 10 times the On Demand price." For AutoScalr and its customers, the 2017 change did bring about price stabilization but left the company in the dark as to when workloads would be terminated. "So now

an IMS Engineered Products Brand

STANDARD AND CUSTOM SIZES


5400 lbs load rating

UL LISTED RACK HEIGHTS UP TO 62 RU


you had a very predictable price that didn't change very fast, but you never knew when it was going to go away," Fox said. Previously, prices would rapidly rise when more users requested Spot Instances, and it was obvious that the chance of being terminated would rise with it. "And so we had algorithms that would diversify away from the risk and go to more stable spot markets, which meant our overall interruption rate was lower," Fox said. Now the price change is much slower - "it looks like it's on the order of days or weeks, whereas before, it was minutes. “So the challenge is, when a lot of people come in and start to use an instance,

eventually they run out, and the price doesn't change. But how do they go about picking who to terminate? That's a big mystery."

“[AWS uses] unnatural, artificial characteristics, which had no economic justification”

Fox, whose company is a certified AWS partner, was keen to point out that, despite the change, AWS Spot Instances are still cheap - and added that Amazon regularly makes price cuts across all of its services. "It's like, yeah, maybe the prices have crept up a little bit. But it's a dramatic way to

save on cloud compute. Maybe it wasn't as good a deal as it was a few years ago, but it's still a good deal." Customers of AutoScalr, he said, continue to use the Spot market. "You just have to lean heavier on diversification, as opposed to prediction." But for Wolski’s ‘burst’ method for the Aristotle Federated Cloud, the change proved fatal. "I don't want to ascribe to them a nefarious purpose here,” Wolski said. “I think it was more that we were so far underneath their radar that they just missed it." He believes that his technique, which was publicly available, added value to the Spot Market, making it better for other users. “You have the science community doing something that might make other people use Amazon in a more efficient way, and we're not charging for it,” Wolski said. “I sense that if the Amazon people had thought about that, maybe they would have announced this thing differently, or they would have contacted us.” His team is currently working on a replacement tool that hopes to rank various cloud companies' instances and compare them against internal capacity for a given workload. "So if I'm gonna burst with a retail product, what's my spend going to be? What's say the minimum I have to spend to get the same power? Or if I want twice as much power, how much do I have to spend? Or if I can wait twice as long and I want half the power, what do I have to spend? And

because it's retail pricing, that will be stable." While he is working on a solution, the whole experience has given Wolski pause. “It was an important lesson for the science community. Normally, we buy machines. And when you buy a machine, it's that machine until you throw it away. “It doesn't morph into something else halfway through its lifetime. If on a Tuesday it's an x86 box, it's going to be an x86 box on a Wednesday. But when you buy a service, that can happen. It was still called Spot Instances, it still had the API. “It's just that on some day, you went to it, and it behaved completely differently.”
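Wolski's actual predictor isn't detailed in the article, but the underlying idea - replaying a historical price trace to estimate how likely a given bid is to survive a given runtime - can be sketched in a few lines. The trace, bid and duration below are made-up values; this is a hypothetical illustration, not the Aristotle team's algorithm.

```python
# Hedged sketch: estimate the chance a Spot bid survives a workload's runtime
# by replaying a historical price trace. Illustrative only.
from datetime import datetime, timedelta

def survival_probability(trace, bid, duration):
    """trace: list of (timestamp, price) tuples sorted by time.
    Returns the fraction of candidate start times at which the price stays
    at or below `bid` for the whole `duration`."""
    survived = attempts = 0
    for i, (start, _) in enumerate(trace):
        end = start + duration
        if end > trace[-1][0]:
            break  # not enough trace left to judge this start time
        attempts += 1
        window = [price for (t, price) in trace[i:] if t <= end]
        if max(window) <= bid:
            survived += 1
    return survived / attempts if attempts else 0.0

# Made-up hourly price trace and bid, purely to show the mechanics
t0 = datetime(2017, 1, 1)
prices = [0.031, 0.033, 0.030, 0.045, 0.032, 0.031, 0.052, 0.033, 0.031, 0.030]
trace = [(t0 + timedelta(hours=h), p) for h, p in enumerate(prices)]

print(survival_probability(trace, bid=0.05, duration=timedelta(hours=3)))
```

On the pre-2017 traces, where prices moved with demand, this kind of replay could give usable bounds; on the smoothed post-2017 traces the price no longer signals termination risk, which is exactly the problem Wolski describes.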

Learn more at amcoenclosures.com/dcd

CUSTOM RACKS MANUFACTURED DAILY

CUSTOM RACKS CAN SHIP IN 2 WEEKS

MADE IN THE USA






Introducing the

DEKA SHIELD from

Our exclusive Deka Shield program gives you peace of mind and exclusive additional warranty protection for your Deka batteries. This benefit is just one offering of Deka Services, the North American service network operated by East Penn.

Exclusive DEKA SHIELD

Deka Shield is an innovative and exclusive program to provide optimum battery performance and life to your Deka Batteries no matter the application: Telecom, UPS, or Switchgear. By allowing Deka Services to install and maintain your batteries, your site will receive extended warranty benefits.

How do I sign up for the Deka Shield program?
• Installation must be completed by Deka Services or a Deka Services approved certified installer
• The application and installation area must be approved by Deka Services prior to installation
• Access for Deka Services to perform annual maintenance during the full warranty period

What coverage does the Deka Shield program provide?*
• Full coverage labor to replace any defective product
• Full labor to prepare any defective product for shipment
• Freight allowance for new product to installation site
• Full return freight for defective product
• Extended warranty

* Terms and conditions apply – please contact us for additional information.

Extensive DEKA SERVICES

Deka Services provides full service turnkey EF&I solutions across North America. Their scope of services include, but are not limited to:

• Turnkey EF&I Solutions • Battery Monitoring • Battery Maintenance

• Battery Capacity Testing • Removal and Recycling • Engineering

• Project and Site Management • Logistical Support • Installation

All products and services are backed by East Penn, the largest single site lead battery manufacturer in the world. With over 70 years of manufacturing, battery, and service expertise, let Deka Services be your full-scale power solution provider.


Don't just look to the north

Virginia’s land dilemma With Northern Virginia bursting at the seams, new submarine fiber could create new hubs in the south of the Commonwealth. Will Calvert reports

Northern Virginia has arguably the highest concentration of data centers on the planet. In 2018, 115MW of data center capacity was leased by companies in the region alone. This is almost double the 59MW of capacity absorbed in any other US market in a single year, according to real estate firm Jones Lang LaSalle. But is having that much capacity flowing through one market something to be celebrated? Or is it something to worry about? Many experts think it’s time to start spreading capacity across the Commonwealth of Virginia as a whole instead of letting it concentrate in the north. Sean Baillie, chief of staff at data center provider QTS, told DCD: “I’ve lived in Ashburn for 19 years and we're running out of easements [permits to dig across land]. We have so much fiber here, and there is nowhere else to dig. “There is one main core ring in Ashburn, and there's a bunch of sub rings that hang out of it. The main core ring is being upgraded by a company called Fiberlight, and they're digging in the median [the central strip of the road, referred to in some countries as the central reservation]. “The public rights of way outside are full and there's nowhere else to put anything. And as soon as the median fills up, there's nowhere else to go. So, we’ve got a geography problem.” One of the ways to address the diversity problem is to bring in more fiber from outside the Commonwealth. Until recently, all fiber links to Northern Virginia had to come from other states, with the nearest submarine cables making landfall in New Jersey. In recent years, Virginia Beach in the south of the Commonwealth has become established as a fiber landing point. Telxius’ Brusa and Marea cables in Virginia Beach arrived in 2018, offering a shorter journey for terrestrial fiber connections to Northern and Southern Virginia, compared with the nearest East Coast landfall 300 miles north in New Jersey. Spanish for tide, Marea was built by Telxius, the infrastructure arm of the

Telefónica group, and funded by Facebook and Microsoft. The submarine cable stretches across the Atlantic from Virginia Beach to Bilbao in Northern Spain. Marea is capable of reaching 200 terabits per second (Tbps) of transmission capacity. Also built by Telxius, but funded by parent company Telefónica, Brusa (Brazil-USA) is a private cable that offers low latency communication links between the US and Brazil. The cable also departs from Virginia Beach, but lands in Rio de Janeiro, Fortaleza, and San Juan in Puerto Rico. The need for capacity and the arrival of submarine fiber in the south of the state are enabling growth in southern counties like

"We were hit with the outages when Hurricane Sandy rolled through the Northeast" Henrico and Virginia Beach itself, alongside the established Northern players like Loudoun and Prince William County. The Virginia Beach authorities are priming the pump with an incentive: sales tax on power, cooling and IT equipment has been dropped to 0.4 percent, and cheaper land prices are also in its favor: earlier in 2019, Digital Realty bought a 13-acre plot in Ashburn, Northern Virginia for a record $2.14 million per acre. In Richmond, Henrico County, Facebook has invested $1.75 billion in a 970,000 sq ft (90,000 sq m) data center, which already has a 1.5 million sq ft expansion planned. Just down the road, QTS opened a 1.3 m sq ft (120,000 sq m) data center back in 2010. Henrico County is only 100 miles from Virginia Beach, so this year QTS opened a NAP (network access point) there, to offer peering with the Marea and Brusa cables. Telxius also connects to other facilities including Globalinx’s carrier-neutral data center, virtually next door to the landing site.


Will Calvert Reporter

Don't miss out on our new DCD>Virginia event in October 2020

Two more cables are on their way: next year, Google’s Dunant cable, named after Henry Dunant, founder of the Red Cross and co-recipient of the first Nobel Peace Prize, will connect Virginia Beach to the west coast of France. And SAEx International is building the South Atlantic Express cable, which starts in Cape Town, South Africa. “Diversity is why we ended up in Henrico County, Virginia,” said Najam Ahmad, VP of network engineering at Facebook, at the launch of the QTS NAP earlier this year. “Ashburn has got a lot of compute power, but it then becomes a very large single point of failure and that is a concern for Facebook.” There’s one problem, says Ahmad: “We need multiple paths to provide diversity. The trouble with subsea, as always, is that if a cable is cut, it might be weeks before it can be fixed. If something's out for that long, you have a good chance to pick up a second failure and cause really massive problems.” As if in answer to this, Telxius announced in October that it would connect Virginia Beach to New Jersey - the first direct fiber between two landing stations, and a useful extra route for intercontinental traffic. Marea’s other funder, Microsoft, agrees, says Frank Rey, the company’s director of global network strategy: “Like any other network provider or network user, we were hit with the outages when Hurricane Sandy rolled through the Northeast of the US in 2012. Because of this we saw a huge need to bring additional diversity to the East coast.”


Advertorial: ZTE

A data center to transform Bangladesh To support the growth and digitization of Dhaka, Bangladesh’s government decided to build a national data center. ZTE delivered the project on time, to a world-leading standard

Bangladesh needs digital infrastructure. The population of the capital Dhaka has reached 15 million, and that creates an urgent need to handle vast quantities of urban data, both for the city itself and for the digital consumers that live there. The Bangladesh government decided to establish high-level infrastructure, and had the help of ZTE to get a national data center built in the Kaliakoir high tech park, about 50km (30 miles) north of Dhaka. The facility is a full turnkey project in which ZTE carried out civil work, building the data center facility and supplying ICT equipment, including servers, storage and network equipment as well as software applications. The contract was signed in May 2016, with the facility to be delivered in three years. It was formally accepted in May 2019, meeting that goal.

Demanding schedule
Bangladesh’s extreme weather was an issue during the building. “There are floods in succession,” N M Zeaul Alam, the Secretary for ICT in the Bangladesh Ministry of Posts, explained to DCD. Construction had to be scheduled for favorable weather windows. “ZTE has designed a perfect plan and construction table to overcome the impact of weather. This year’s floods seem to be better than previous years, and we are lucky to achieve our goals to the fullest extent possible.” Mr Alam is not exaggerating: “It rains so frequently, that nearly six months of the year is not suitable for outdoor concrete work,” explained Tongbing Huo, marketing director for data center products at ZTE. Even after the rain clears, the roads can be very, very muddy, making it difficult to transport equipment, he told DCD. Despite this, the project was finished on time, and to a very high standard. Civil engineering started in September of 2016, and was accepted in December 2018. Because the ICT part of the project was being completed in parallel, the whole project right up to the

software stack was finished and accepted by the end of December. To de-risk the project, the team built in quality by choosing top-quality devices and materials. They were inspected before delivery and re-inspected after arrival, said Tongbing Huo: “They will be verified again before and after the installation.” The facility has such a crucial role that Dhaka needs the “best data center in the world,” said Mr Alam. To ensure the facility is reliable, the government asked for Uptime Tier IV accreditation, “the highest data center certification standard in the world”.

Top reliability
Specifically, the data center construction must meet the requirements of “shockproof, flood-proof, explosion-proof performance for the purpose of risk prevention,” explained Tongbing Huo. The final building’s resilience is “even higher than the initial design,” he said. Uptime accreditation is in two stages: ZTE’s co-operation got the facility’s design documents accredited before work started. The Tier IV demonstration test of the completed facility was done in December 2018, and the facility was certified in March 2019. It’s not only the first Tier IV certified facility in Bangladesh, but the first in the whole South Asia region. The Tier IV specification is designed to deal with issues such as power blackouts, and these may happen more often than in comparable facilities, as Bangladesh’s grid is being developed. This challenge means that the facility’s diesel generators will be called on more often and must be maintained carefully.

Developing talent
Staff is another key concern, and the government-backed data center will have a key role in ensuring that talented personnel are available, and the nation has a stream of tech talent for the future. To get excellent data center talent, the government offered “very favorable conditions,” said Mr Alam. The long-term plan involves training: “In the future, the government will vigorously develop education, train more data center talents, and do a good job in talent reserve,” said Mr Alam. And ZTE has been playing its part here, training local staff to work in the data center field, and providing on-site job training. “This is what our country and region need,” said Mr Alam. “I believe our cooperation with ZTE is a long-term cooperative relationship, which can bring common interests to both sides.” Tongbing Huo echoed that: “The team built a state-of-the-art data center. At the same time, they trained a number of experts for the country. This project will become the accelerator of the digital transformation of Bangladesh.”

Award winning
The data center has a potential power consumption of 8,500kVA, and holds more than 600 cabinets, providing 2,000TB of cloud storage, in an area of 16,000 sq m. It sits alongside two 920 sq m power buildings, in a campus area of 28,000 sq m. The facility includes a power subsystem, HVAC subsystem, and a fire prevention system. During the building, the team paid close attention to the actual performance of the building and its contents, measuring the temperature in the cold aisles, and performing adjustments to make sure that the airflow was optimal and delivered high efficiency. The project won this year’s DCD APAC Award, for Data Center Construction Team of the Year, with Alam commenting: “We are honored to win this special glory.” The Kaliakoir data center may be the first in Bangladesh to achieve Tier IV facility accreditation, but it won’t be the last. “I believe that we can continue to build better data centers to meet the growing needs of our country,” said Mr Alam. “Of course, we will continue to cooperate with ZTE when conditions permit.”

Contact Details Huo Tongbing, Data Centre Product Marketing Director, ZTE Corporation 86-18623012306 huo.tongbing@zte.com.cn


Advertorial: Server Technology

Intelligent PDUs from Server Technology, such as the recently launched HDOT Cx with Switched POPS (Per Outlet Power Sensing), enable edge data center IT managers to maximize uptime while also providing a rich data source for the latest AI, which is focused on finding the right balance between minimizing power consumption and reducing latency.

Come visit us at DCD>London exhibition stand 19

The industry leading and award-winning HDOT Cx PDU brings limitless possibilities to data centers with just one standard PDU offering the ultimate in flexibility:

Everything Needs Power - Powering the Next Revolution Finding the right solutions to power the systems running AI applications is critical

Smart Cities around the globe are relying on an ever-expanding deployment of IoT devices coupled with forthcoming 5G wireless infrastructure and edge computing to bring new levels of management, coordination, service and information to their citizenry. The systems used to bring IoT, 5G, and edge computing together will require machine-based AI to provide insight, optimize performance, and improve efficiency of the systems that make Smart Cities truly smart. But the race towards AI adoption across industries and applications has really only just begun. Corporations around the globe are looking to deploy AI in their quest to reduce headcounts, improve yield, and increase profits. Trucking companies use AI to optimize routing. Hospitals employ AI to improve diagnosis, read X-rays, and deliver an all-around better quality of care. Retail and online merchants utilize AI to recommend products based on user preference and prior purchasing trends. Banks implement AI to detect fraudulent transactions and support credit scoring. City and state governments

use AI to help determine traffic patterns, improve traffic flow, and deliver improved services to their citizens. These are but a few of the many possible areas where AI can and will be utilized in the future.

Artificial Intelligence Runs on Intelligent Power
For the time being, all AI applications run on silicon-based computational hardware, usually accessed through a public cloud. And that hardware all requires power, be it DC (like a battery) or AC (from the grid), to deliver on the promises of better information and better control, resulting in improved lifestyles, improved citizen satisfaction, and greater efficiency. Finding the right solutions to power the systems running AI applications that enable Smart Cities, 5G networks, IoT, and edge computing is critical. The vast majority of the power that fuels them will need to be remotely monitored and managed through automation to ensure uptime while maintaining efficiency and avoiding those expensive truck rolls. When it comes to powering a server rack, data center managers are looking for density, flexibility and longevity.

• 2 different types of outlets in 1 unit (accommodates either a C14 or C20 plug)
• The most densely-packed outlets per form factor
• Easy load balancing with alternating phase technology
• Available in Smart, Switched and POPS
• Future ready for equipment updates
• Highly customizable in 4 simple steps
• Fast delivery - ships in 10 days

HDOT Cx is available with a variety of feature sets and price points to suit even the most demanding of customer applications. For more information use the below resources:
• Build Your Own Customized Data Center PDU bit.ly/STCustomizedPDU
• Powering the Next Revolution - Artificial Intelligence (AI) Industry Brief download bit.ly/STAIIndustryBrief

Everything Needs Power: IoT | Edge | Core | AI | Smart Cities | Cloud | 5G. Your Power Strategy Experts

Contact Details Tel: (1) 775 284 2000 Email: sales@servertech.com www.servertech.com


AI ready

Verne Global

Building a home for AI Design for the future today, lest you get left behind, Max Smolaks warns

Data center designers and builders have to stay on top of the latest developments in server hardware: the environments they create require a massive upfront investment and are expected to last at least 20 years, so they have to be ready for housing the IT equipment of the future. The latest trend in IT workloads that’s set to impact the way data centers are constructed is machine learning. The ideas fueling the boom in artificial intelligence are not new - many of them were proposed in the 1950s - and the power of AI is undoubtedly being over-hyped, but there are plenty of use cases where AI tech in its current state is already bringing tangible benefits. For example, algorithms are much better than people at securing corporate networks, able to pick up on anomalies that humans and their rules-based tools might miss. Algorithms are also great at analyzing large chunks of


boring text: lawyers use AI-based software to scan through case files and contracts, while universities use something similar to establish whether a paper was written by the student who submitted it, or a freelancer hired online. And finally, there’s plenty of research to suggest that optical image recognition will match and even surpass the best human doctors at spotting signs of disease on radiology scans. The variety of machine learning applications is only going to increase, introducing radically different demands on storage performance, network bandwidth and compute, more akin to something seen in the world of supercomputers. According to a recent research report by Tractica, an analyst firm focused on emerging technologies, AI hardware sales to cloud and enterprise data centers will see compound annual growth rate (CAGR) of almost 35 percent for the next six years, increasing from around $6.2 billion in 2018 to $50 billion by

2025. All this gear will have higher power densities than traditional IT, and as a result, higher cooling requirements. Up until now, most machine learning tasks have been performed on banks of GPUs, plugged into as few motherboards as possible. Every GPU has thousands of tiny cores that need to be supplied with power, and on average, a GPU will need more power than a CPU. Nvidia, the world’s largest supplier of graphics chips, has defined the current state of the art in ML hardware with DGX-2, a 10U box for algorithm training that contains 16 Volta V100 GPUs along with two Intel Xeon Platinum CPUs and 30TB of flash storage. DGX-2 delivers up to two petaflops of compute, and consumes a whopping 10kW of power - more than an entire 42U rack of traditional servers running at an average load.
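To put those numbers in perspective, here is a quick back-of-envelope calculation - my own illustration using standard air properties, not vendor figures - showing both the rack-density jump and roughly how much air a 10kW box has to move for a given temperature rise between inlet and exhaust.

```python
# Back-of-envelope: rack density and cooling airflow for a ~10kW AI training box.
# Standard air properties assumed; all figures illustrative, not vendor data.
AIR_DENSITY = 1.2         # kg/m^3 at roughly 20C
AIR_HEAT_CAPACITY = 1005  # J/(kg*K)
M3S_TO_CFM = 2118.88      # cubic meters per second to cubic feet per minute

def airflow_cfm(power_w: float, delta_t: float) -> float:
    """Volumetric airflow needed to remove power_w watts of heat at a given
    inlet/outlet temperature difference (K), assuming all heat goes into the air."""
    m3_per_s = power_w / (AIR_DENSITY * AIR_HEAT_CAPACITY * delta_t)
    return m3_per_s * M3S_TO_CFM

box_power_w = 10_000  # one 10U training system

# Density: three such 10U boxes roughly fill a 42U rack's usable space (illustrative)
print(f"Rack load with three boxes: {3 * box_power_w / 1000:.0f} kW")

# A smaller temperature spread between inlet and exhaust means faster airflow
for delta_t in (10, 15, 20):
    print(f"dT {delta_t:>2} K -> {airflow_cfm(box_power_w, delta_t):,.0f} CFM")
```

At a 10°C spread, a single box needs on the order of 1,700-1,800 CFM of air - which is the airflow problem Verne Global's Bob Fletcher describes later in the piece when he warns about boxes blowing hot air at each other.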

"If you don't support water-cooled processors, you're excluding yourself from the top end of the market” There are two parts to almost any machine learning project: training and inference. Inference is easy, you just take a fully developed machine learning model and apply it to whatever data you want to manipulate. This process can run facial recognition on a smartphone, for example. It’s the training



that’s the intensive part - getting the model to look at thousands of faces to learn what a nose should look like. “There are large amounts of compute used and the training can take days, weeks, sometimes even months, depending on the size of the [neural] network, and the diversity of the data feeding into it and the complexity of the task that you’re trying to train the network for,” Bob Fletcher, VP of strategy at Verne Global, told DCD. Verne runs a data center campus in Iceland which was originally dedicated to industrial-scale HPC, but has recently embraced AI workloads. The company has also experimented with blockchain, but who hasn’t done that? According to Fletcher, AI training is proving to be much more compute-intensive than traditional HPC workloads that the company is used to, like computational fluid dynamics. “We have people like DeepL [a machine translation service] in our data center that are running tens of racks full of GPU-powered servers; they are running them at 95 percent, the whole time. It sounds like a jet engine and they have been running like that for a couple of years now,” he said. What many people don’t realize is that machine learning models used in production have to be re-trained and updated all the time to maintain accuracy - creating a sustained need for compute, rather than a one-off spike. When housing machine learning equipment, it is absolutely necessary to be religious about aisle containment and things like blanking plates. “You go from 3-5kW to around 10kW per rack for regular HPC, and if you are going into AI workloads, which are generally going to be around 15-35kW air-cooled, then you have to be more careful about air handling,” Fletcher explained. “If you look at something like DGX-2, it blows out 11-12kW of warm air. And it has a very low temperature spread between input and output, so the airflow is quite fast. If you are not thoughtful about positioning, and you have two of these at the same height, pointing back to back and maybe 30 inches between them, they are going to blow hot air at each other like crazy, and you lose the airflow. “So you have to place them at different heights, or you have to use air deflectors, or you can spread the aisles further apart, but whatever you do, you’ve got to make sure that all of the hot air that’s coming out of one of these devices isn’t going to be banging into hot air coming out of another.” The advent of AI hardware in the data center could at last mark the moment when water cooling finally becomes necessary. “One of the real changes from a data center operator's perspective is direct water-cooled chips to support some of these applications. These products were once on roadmaps, and they are now mainstream - the reason

they are available is because of some of these workloads around AI,” said Paul Finch, CEO at Kao Data. Kao has just opened a wholesale colocation campus near London, inspired by hyperscale designs, with 8.8MW of power capacity available in Phase 1, and 35MW in total. The project was developed with an eye to the future - and according to Finch, that future includes lots of AI. He says that data halls at Kao have been designed to support up to 70kW per rack. “Many of these new processors, the real Ferraris of chip technology, are now moving

"It sounds like a jet engine and they have been running like that for a couple of years now" towards being water-cooled. If your data center is not able to support water-cooled processors, you are actually going to be excluding yourself from the top end of the market,” Finch told DCD. “In terms of the data center architecture, we have higher floor-to-ceiling heights – obviously water is far heavier than air, so it's not just about the weight of IT, it’s going to be about the weight of the water used to cool the IT systems. All of that has to be factored into the building structure and floor loading. We see immersion cooling as a viable alternative, it just comes with some different challenges.” Floors are not the most glamorous data center component, but even floors have to be

Sean Lie, Cerebras


considered when housing AI gear, since high density racks are also heavier. Like many of the recent data center projects, Kao houses servers on a concrete slab. This makes it suitable for hyperscale-style pre-populated racks like those built by the fans of the Open Compute Project, and enables the facility to support heights of up to 58U. And in terms of networking, Fletcher said AI requires more expensive InfiniBand connectivity between the servers - classic Ethernet simply doesn’t have enough bandwidth to support clusters with dozens of GPUs. Cables between servers in a cluster need to be kept as short as possible, too. “Not only are you looking at cooling constraints, you are looking at networking and connectivity constraints to keep the performance as high as possible,” he said. In the next few years, hardware for machine learning workloads will get a lot more diverse, with all kinds of novel AI accelerators battling it out in the marketplace, all invariably introducing their own acronyms to signify a new class of devices: these include Graphcore’s Intelligence Processing Units (IPUs), Intel’s Nervana Neural Network Processors (NNPs) and Xilinx’s Adaptive Compute Acceleration Platforms (ACAPs). Google has its own Tensor Processing Units (TPUs), but these are only available in the cloud. American startup Cerebras Systems recently unveiled the Wafer Scale Engine (WSE) - a single chip that measures 8.5 by 8.5 inches and features 400,000 cores, all optimized for deep learning. It’s not entirely clear just how this monster will fit into a standard rack arrangement, but it serves as a great example of just how weird AI hardware of the future might become.


18th Annual

> London

Global Content Partner

5-6 November 2019 // Old Billingsgate

Show Preview

See Inside
• Audience demographics and ecosystem
• Full 2-day London conference program
• Key themes and conference producer highlights
• Exhibition floor plan
• Preview our 2020 global events calendar

1800+ Delegates
70+ Sponsors & Exhibitors
85+ Speakers
20+ Hours of Dedicated Networking
32+ Hours of Expert Thought-leadership
16 In-depth panel discussions & fireside chats

Headline Sponsor

Lead Sponsors

Community Partner


www.datacenterdynamics.com

Follow us: @dcdevents #dcdlondon


London Ecosystem
> Insights from registration data

Registration data gathered from more than 1,400 pre-qualified buyers of data center products and services in advance of the DCD>London conference points to the fact that Edge, backup power generation, and thermal management solutions remain key technology areas heading into 2020.

57% of Enterprise operators have upgrade projects in the pipeline. We have service providers attending from 36 different countries, making this our most international event.

[Charts: major buyside groups; what percentage of the enterprise audience's data center footprint remains on-premise; attendee industry verticals; attendee geography (UK, Europe, rest of world)]

As we approach the 18th annual DCD>London we are excited to bring together a stellar line-up of global industry leaders to share their insights and experiences across what promises to be a first-rate conference program.

Our content themes for this year’s event have been specifically curated to tackle the biggest challenges and opportunities to shape the future of the industry. Day One sees us kick off the big discussion of the global climate emergency and the rising tide of environmentalism that will see governments and consumers increasingly hold the industry to account. And Day Two develops the edge story and how 5G is about to challenge how we think about data centers.

New for 2019 are our Challenge Panels, injecting more end-user discussions into the agenda, and providing opportunity to dig into the issues really keeping the industry awake at night. We encourage your full engagement in every aspect of this event - join the debate, ask questions, challenge assumptions, find out what people are doing that’s at the leading edge, and how this might benefit your work and organization.

Enjoy the next two days and I hope to meet with you at some point.

Rebecca Davison, Global Conference Director

>Speakers Adam Pool, Xtralis Adrien Viaud, Kingston Technology Alaa Salama, Google Alex Sharp, Iron Mountain Ali Moinuddin, Uptime Institute Amy Daniell, NTT Ltd Andrew Wettern, 1Energy Group Andy Lawrence, Uptime Institute Anna Kondratenko, Systemair Anthony Robinson, Corning Astrid Wynne, Techbuyer Atle Haga, Statkraft Avner Papouchado, Server Farm Realty Bill Kleyman, Switch

Brenden Rawle, Equinix Brian Conroy, Moy Materials Cara Mascini, EdgeInfra Chester Reid, CyrusOne Chris Brown, Uptime Institute Chris Downing, Siemens Christian Belady, Microsoft Ciarán Forde, Eaton Dave Johnson, Schneider Electric David Hall, Equinix Dean Nelson, Infrastructure Masons Deborah Andrews, London South Bank University Diarmuid O'Sullivan, PM Group Dr Mark Coughlin, EnerSys


Dr Mike Hazas, Lancaster University Dr Rabih Bashroush, Uptime Institute Ed Ansett, i3 Solutions Group Emma Fryer, techUK Gary Cook, Greenpeace Garry Connolly, Host in Ireland George Rockett, DCD Heather Dooley, Google Ian Lovatt, FNT GmbH Jack Ke, China Mobile Jack Pouchet, Natron Energy Jason Simpson, Liberty Global Jeff Omelchuck, Infrastructure Masons Jim Smith, xScale at Equinix


> Top 15 most viewed speaker profiles online
1. Noelle Walsh, Microsoft
2. Gary Cook, Greenpeace
3. Jason Simpson, Liberty Global
4. Mario Müller, Volkswagen
5. Rhonda Ascierto, Uptime Institute
6. Jim Smith, Equinix
7. Alaa Salama, Google
8. John Wilson, Sumitomo Mitsui Banking Corporation
9. Dave Johnson, Schneider Electric
10. Christian Belady, Microsoft
11. Mark Thiele, Edge Gravity by Ericsson
12. Dean Nelson, Infrastructure Masons
13. Cara Mascini, EdgeInfra
14. Heather Dooley, Google
15. Mike Bennett, Cyxtera

John Booth, Carbon3IT John Laban, Open Compute Foundation John Wilson, Sumitomo Mitsui Banking Corporation Kevin Brown, Schneider Electric Kevin Kent, OSU Wexner Medical Center Kurtis Bowman, Gen-Z Consortium & Dell EMC Lee Kirby, Salute Mission Critical Lex Coors, Interxion Maikel Bouricius, Asperitas Marc Garner, Schneider Electric Mario Müller, Volkswagen Mark Harrop, Arcadis Mark Howell, Ford Mark Thiele, EDGE GRAVITY by Ericsson

w Theme | The Business of Data Centers

Theme | 5G, AI & The Connected Edge

Access to capital, site selection, energy procurement, and connectivity are just some of the factors that developers and operators must consider when planning a new facility. This year, we look at how the European data center market is adapting to power shortages, regulation, economic and political uncertainty.

The digital infrastructure landscape is evolving fast with 5G and IoT technologies set to ignite exponential data growth and a new edge data center industry. This year’s conference brings together the foremost thinkers on the subject to help delegates better understand how data center deployment and capacity models will radically change.

2019 sees the launch of our new Challenge Panels. Mark Trevor from Cushman & Wakefield will moderate the “Financially Challenged Panel”, joined by experts including John Wilson from Sumitomo Mitsui Banking Corporation, Chester Reid of CyrusOne, Avner Papouchado of ServerFarm Realty and Romain Le Mélinaidre from InfraVia Capital to debate how the investment model for data centers is evolving. In another Challenge Panel Kevin Kent of OSU Wexner Medical Center, Sean Moloney of GWLE, and Paul Jennings of Imperial College London will discuss how the modern data center manager continues to span the chasm between IT and facilities, and how the role of the data center manager is changing in response to Hybrid IT.

Producer's Highlight:

Major Panel: Fiber, energy, politics and demand – What will the European data center map look like by 2025? How have cities like Frankfurt, London, Amsterdam and Paris (FLAPs) managed the epic growth rates of the data center sector and how will government regulation and public opinion shape their development going forward? Does the Nordic promise of carbon-free compute offer the answer to absorbing continued growth or is this a pipe dream? Join techUK, the Dutch Data Center Association, Basefarm, Statkraft and Host in Ireland as they share the challenges they face and debate how the European data center map is likely to take shape over the coming years.

Mark Trevor, Cushman & Wakefield Mike Bennett, Cyxtera Mike Hughes, Schneider Electric Nick Ewing, EfficiencyIT Noelle Walsh, Microsoft Patrik Öhlund, Node Pole Peter Hannaford, Portman Partners Paul Jennings, Imperial College London Peter Hewkin, SmartEdge DC Petter Tømmeraas, Basefarm René Kristensen, DEIF Rhonda Ascierto, Uptime Institute Rob Cooper, CS Technology Robert Thorogood, HDR | Hurley Palmer Flatt

Debating how the data center industry will deal with exponential data growth as Moore’s Law slows down will be Christian Belady from Microsoft, Kurtis Bowman of the Gen-Z Consortium & Dell EMC, Dean Nelson of iMasons and Bill Kleyman from Switch. Are we facing a doomsday scenario or a new era of creativity? Rhonda Ascierto from the Uptime Institute will share the Institute’s latest research on how artificial intelligence (AI) will enable very smart data centers and power operational decisions. Mark Thiele of Ericsson will kick off the 5G discussion with why 5G is about to challenge how we think about data centers. Mark will examine how 5G will unleash high connectivity and change the technological landscape, and why the ‘data center’ in all its forms - from core to edge, to micro, to pico - will be at the center.

Producer's Highlight:

Plenary Panel: The race to build the mobile edge - What will it look like? Who will own it? Who will pay for it? 5G will require edge compute, no doubt. But what shape will it take, literally? Will it come in ISO-container micro data centers? And who will own it - the incumbent telcos, the mobile network operators, the major cloud players, a new breed of neutral colocation provider, or will it be a municipal play? So many questions, so little time… we ask the Uptime Institute, Ericsson, EdgeInfra, SmartEdge DC, Equinix, and Arcadis what they think.

Rod McAllister, Penguin Computing Romain Le Mélinaidre, InfraVia Capital Seán Moloney, Great-West Lifeco Simon Allen, Infrastructure Masons Simon Binley, Wellcome Sanger Institute Sophia Flucker, Operational Intelligence Stephen Lorimer, Keysource Stewart Grierson, Upnorth Group Stijn Grove, Dutch Data Center Association Susanne Baker, techUK Susanna Kass, UN Sustainable Development Group Tim Chambers, coolDC Tobias Spilker, Siemens Tor Kristian Gyland, Green Mountain



London

Day 1 | Tuesday 5 November

Content themes: Energy Smart Infrastructure

07:30

Registration open

09:10

Opening remarks: George Rockett, DCD

09:20

Hall 1 - Plenary Keynote: A balancing act – delivering cloud at the intersection of sustainability, community and innovation Noelle Walsh, Microsoft

09:40

Hall 1 - Plenary Panel: How should the data center industry now respond to the global climate emergency? Andy Lawrence, Uptime Institute | Noelle Walsh, Microsoft | Dr. Mike Hazas, Lancaster University | Susanna Kass, United Nations Sustainable Development Group | Gary Cook, Greenpeace | Dave Johnson, Schneider Electric | Emma Fryer, techUK Moderator: George Rockett, DCD

10:40

Coffee Break | Expo | Innovation Stage Presentations | Speed Networking | VIP Brunch Briefings

Hall 1 | Hall 2 | Hall 3

Content themes: Modernization & Lifecycle Management | Planning for Hybrid IT | 5G, AI & The Connected Edge | The Business of Data Centers | Building at Scale & Speed

11:40

Next Generation DCIM: Re-inventing a platform for the brave, new hybrid world Kevin Brown, Schneider Electric

Top trends impacting data centers today Anthony Robinson, Corning

Sustainability: Just a word or something more? Stephen Lorimer, Keysource

12:10

Mastering complexity – A case study on fire safety and consistency in planning and data center operations Chris Downing, Siemens Tobias Spilker, Siemens

Power network transformation – designing and operating the power network as a truly connected and intelligent ‘System’ to prepare for the energy transition ahead Ciarán Forde, Eaton

China Mobile Case Study: The challenge of global expansion Jack Ke, China Mobile

12:40

Vertically Challenged: How is the role of the consulting engineer changing in the era of hyperscale? Ed Ansett, i3 Solutions Robert Thorogood, HDR | Hurley Palmer Flatt Sophia Flucker, Operational Intelligence Amy Daniell, NTT Ltd Moderator: Peter Hannaford, Portman Partners

Vertically Challenged: What conversation should tenants be having with colos about technology adoption? Mike Bennett, Cyxtera Lex Coors, Interxion Rob Cooper, CS Technology Ian Lovatt, FNT Moderator: Dan Loosemore, DCD

Financially Challenged: How is the investment model for data centers evolving? John Wilson, Sumitomo Mitsui Banking Corporation Chester Reid, CyrusOne Avner Papouchado, ServerFarm Realty Romain Le Mélinaidre, InfraVia Capital Moderator: Mark Trevor, Cushman & Wakefield

13:20

Lunch | Expo | Networking | Innovation Stage Presentations | VIP Lunch Briefings

15:00

Major Panel: What are the barriers to building at scale and speed and how can the industry overcome them? Mike Hughes, Schneider Electric Alex Sharp, Iron Mountain Jim Smith, xScale at Equinix Diarmuid O’Sullivan, PM Group Moderator: George Rockett, DCD

Major Panel: How will the data center industry deal with exponential data growth as Moore’s Law slows down? Kurtis Bowman, GenZ Consortium & Dell EMC Dean Nelson, iMasons Christian Belady, Microsoft Bill Kleyman, Switch Moderator: Sebastian Moss, DCD

Major Panel: Fiber, energy, politics and demand – What will the European data center map look like by 2025? Stijn Grove, Dutch Data Center Association Petter Tømmeraas, Basefarm Atle Haga, Statkraft Garry Connolly, Host in Ireland Moderator: Emma Fryer, techUK

16:00

Why now is the time for immersion cooling to go from niche to scale Maikel Bouricius, Asperitas

Make the global CO2 challenge an opportunity for your business Patrik Öhlund, Node Pole

Rightsizing and resilience through modularity – the next generation of modular UPS to solve the capacity challenge John Booth, Carbon3IT

16:30

Case study: Data center lifecycle management for the human genome project Simon Binley, Wellcome Trust Nick Ewing, EfficiencyIT Peter Judge, DCD

Panel: The journey towards a circular economy for the data center industry Susanne Baker, techUK Astrid Wynn, Techbuyer Alaa Salama, Google Rod McAllister, Penguin Computing Moderator: Deborah Andrews, LSBU

A Fireside Chat with Google: Are we running out of staff and out of options? Heather Dooley, Google Rhonda Ascierto, Uptime Institute

17:15 Isle of Harris Gin Drinks Reception & Networking on Expo Floor

17:30 >Awards | Global Finalists Announcement

19:00 Close of Day One




Day 2 | Wednesday 6 November

07:30

Registration open

09:10

Opening remarks: George Rockett, DCD

09:20

Hall 1 - Plenary Keynote: Why is 5G about to challenge how we think about data centers? Mark Thiele, EDGE GRAVITY by Ericsson

09:40

Hall 1 - Plenary Panel: The race to build the mobile edge – What will it look like? Who will own it? Who will pay for it? Mark Thiele, EDGE GRAVITY by Ericsson | Cara Mascini, EdgeInfra | Peter Hewkin, SmartEdge DC | Brenden Rawle, Equinix | Mark Harrop, Arcadis Moderator: Rhonda Ascierto, Uptime Institute

10:40

Coffee Break | Expo | Innovation Stage Presentations | VIP Brunch Briefings

Hall 1

Hall 2

Hall 3

11:30

OREO to ESCO: Innovation in legacy data center energy strategy – Virgin Media case study Jason Simpson, Liberty Global Stewart Grierson, Upnorth Group

Critical IT facilities at the Edge Mark Howell, Ford

Resiliency in the age of Cloud: 99.99 red flags? Andy Lawrence, Uptime Institute

12:10

Very smart data centers: How artificial intelligence will power operational decisions Rhonda Ascierto, Uptime Institute

Energy storage technology trends and implications for mission critical infrastructure Jack Pouchet, Natron Energy

What upgrades and modifications have the biggest positive impact on your data center’s energy efficiency? John Booth, Carbon3IT

12:40

Technologically Challenged: What role will data centers play in “the grid of the future”? David Hall, Equinix Andrew Wettern, 1Energy Group Bill Mazzetti, Rosendin Electric Mohan Gandhi, Sustainable Digital Infrastructure Alliance (SDIA) Moderator: Peter Judge, DCD

Technologically Challenged: What’s holding the industry back from mass adoption of liquid cooling in the data center as talk turns to 100kW per rack? John Laban, Open Compute Dr Rabih Bashroush, Uptime Institute Tim Chambers, CoolDC Chris Brown, Uptime Institute Moderator: Kisandka Moses, DCD

Vertically Challenged: How is the role of the data center manager changing in response to Hybrid IT? Kevin Kent, OSU Wexner Medical Center Sean Moloney, GWLE Paul Jennings, Imperial College London Moderator: Ali Moinuddin, Uptime Institute

13:20

Lunch | Expo | Networking | Innovation Stage Presentations

Hall 1

Hall 2

Workshops | Day 1

15:00

A deep-dive into the newly released data center sector energy routemap Emma Fryer, techUK

iMasons Global Member Summit 1. End User Summit Read-Outs 2. What industry challenges should the iMasons membership be tackling in 2020? Interactive roundtable discussion session Jeff Omelchuck, iMasons, Simon Allen, iMasons, Patrik Öhlund, Node Pole

12:10 - 13:00 Exec Workshop: An academic perspective on quantifying energy usage per IT service - Dr. Mike Hazas, Lancaster University | Cityside West Room

16:00 - 17:00 Exec Workshop: Why has the #clickingclean report become so important to the data center industry? - Gary Cook, Greenpeace | Cityside East Room

17:00 End of conference

VIP Briefings | Day 1

VIP Briefings | Day 2

Private Brunch 10:50-11:40am

Private Brunch 10:50-11:40am

Private Brunch 10:50-11:40am

Cityside East Room

Cityside West Room

Cityside West Room

Private Lunch 1:30-2:40pm

Private Lunch 1:30-2:40pm

Cityside East Room

Cityside West Room

VIP Briefings by invitation only

Workshops | Day 2

12:40 - 13:20 Fireside Chat: Energy Smart meets Circular Economy – step by step towards the next-gen sustainable data center | Susanna Kass, United Nations Sustainable Development Group | Alaa Salama, Google | Cityside West Room



London

Theme | Planning for Hybrid IT

Effectively managing capacity and operations between on-premise, colocation and the cloud is an increasingly complex task that requires new types of risk assessment and skill sets. This year’s conference program provides new perspectives on the intersection between facilities and IT operations in a hybrid world.

Producer's Highlight:

Vertically Challenged: What conversation should tenants be having with colos about technology adoption? When an organization chooses to colocate its critical data center infrastructure in a third-party facility, the lines of who is responsible for which aspects of the physical infrastructure and management systems can quickly get blurred. This panel brings together industry leaders including Mike Bennett of Cyxtera, Lex Coors of Interxion, Rob Cooper of CS Technology and Ian Lovatt of FNT, as they explore the challenges that both the end-user and the colocation operator face in getting the most out of their infrastructure investment and delivering a win-win approach to energy efficiency, availability, and capacity management.

Sponsors, Exhibitors & Partners

> Exhibitors (stand numbers)

Armstrong Fluid (48), Asperitas (65), Bouygues E&S (23), CBRE (43), China Mobile (46), Corning (2), Cummins (33), Datacenter People (30), DCPRO (68), E+I Engineering Ltd (58), East Penn (36), Eaton (60), EkkoSense (37), EnerSys (49)

European Data Center Associations Pavilion: Danish Data Center Industry (4), Data Centers By Sweden (18), Dutch Data Center Association (17), Innovation Norway (5)

Finning CAT (45), FNT GmbH (29), Future Facilities (1), Green Revolution Cooling (3), Hewlett Packard Enterprise (66), InfraNorth (32), Jacobs (7), Janitza Electronics GmbH (47), Kingston Technology (42), Legrand (19), Moy Materials (50), Nlyte Software (25), Operational Intelligence (8), PermAlert (26), PM Group (35), Schneider Electric (55), Siemens (64), Socomec (44), Starline (28), Structuretone (54), Submer (27), Sunbird Software (39), Systemair (31), Tate Europe (52), Tileflow (53), Uptime Institute (38), Winthrop (56), Xtralis (51)


Innovation Stage | Day 1 - Tuesday 5 November 2019

10:50am | A low carbon footprint for your data center: Using direct and indirect free-cooling units - Anna Kondratenko, Systemair

11:10am | How can Kingston support your technology needs for performance in data centers? - Adrien Viaud, Kingston Technology

11:30am | Batteries - the most critical component in an evolving data center - Mark Coughlin, EnerSys

1:25pm | Green roofing the data center: leading in mission critical environmental sustainability - Brian Conroy, Moy Materials

2:05pm | Accelerate your data center capacity planning with the Digital Twin - Mark Fenton & Adam Smith, Future Facilities

2:25pm | Optimized power train for highly resilient data centers - Marc Garner, Schneider Electric

Innovation Stage | Day 2 - Wednesday 6 November 2019


>Knowledge Partners


Theme | Energy Smart Infrastructure

Theme | Modernization & Lifecycle Management

Theme | Building at Scale & Speed

From future battery technology to microgrids and the latest cooling tech, this year’s conference program will provide data center operators with expert opinion and guidance on how to design and operate highly efficient and highly reliable mission critical environments that support the next generation of high density IT.

Retrofitting existing data centers to manage modern workloads and get the most from existing capex might be less glamorous than greenfield data center design and build, but 60% of attendees say these are priority projects. This year’s conference brings together operators, technology vendors and design consultants to share best practice for data center upgrades.

It’s crucial for data center developers and operators to reduce time to construction, especially within the colo and hyperscale sectors. This content theme explores new greenfield data center design, construction techniques, framework agreements and supply chain dynamics.

Noelle Walsh from Microsoft will open the conference with a plenary keynote on delivering cloud at the intersection of sustainability, community and innovation. Noelle will share her perspective on this unique intersection, along with incremental and profound changes she believes the industry will need to make to deliver on the world’s digital needs, responsibly.

The Wellcome Genome Campus will be the focus of a fireside chat with the data center manager and design consultant. Simon Binley from the Wellcome Trust and Nick Ewing of EfficiencyIT will look at the journey through the lifecycles of 4 data halls over 15 years – the upgrades, the challenges and the present-day deployments for high density HPC environments that are AI-ready.

John Laban of Open Compute, Dr Rabih Bashroush of Uptime Institute and Tim Chambers from CoolDC will debate what’s holding the industry back from mass-adoption of liquid cooling in the data center as talk turns to 100kW per rack.

Kevin Brown from Schneider Electric will address how DCIM failed to live up to early market hype, and how next generation DCIM tools may well deliver a re-invented platform for the brave, new hybrid world.

Participants in the CEDaCI Project, led by Deborah Andrews of LSBU and joined by Alaa Salama from Google, will discuss the journey towards a circular economy for the data center industry. Hear the approaches they are taking to minimize waste, design for repairability, and keep products and materials in circulation for as long as possible to ensure sustainable data center operation.

Producer's Highlight:

Plenary Panel: How should the data center industry now respond to the global climate emergency? DCD’s George Rockett will be joined by global experts from across the digital infrastructure ecosystem to tackle the increasingly critical issue of how the data center industry is going to sustain growth rates that support exponential data usage in the face of a global climate emergency. Will this ever be a sustainable industry? Find out where the industry goes next from Microsoft, Greenpeace, Schneider Electric, Uptime Institute, Lancaster University, techUK and the United Nations Sustainable Development Group as the ‘heat’ turns up.

Producer's Highlight:

OREO to ESCO: Innovation in legacy data center energy strategy – Virgin Media case study

Producer's Highlight:

Panel: What are the barriers to building at scale and speed and how can the industry overcome them? Not a week goes by without a new mega-data center build being announced somewhere in the world, often with power consumption cited in the hundreds of MW and technical areas the size of many Premier League football grounds. This expert panel brings together Mike Hughes of Schneider Electric, Alex Sharp of Iron Mountain, Jim Smith of xScale at Equinix, and Diarmuid O'Sullivan of PM Group to share their insights on the latest construction techniques and the many supply chain challenges faced when building at scale.

Virgin Media is in the final phase of a four-year, award-winning energy efficiency project - Project OREO - that has seen nearly 200 ‘subscale’ legacy telco sites across the UK receive free-air cooling upgrades and improved monitoring and controls, leading to a 20 percent improvement in PUE. But what's next? Jason Simpson, who leads energy strategy at Virgin Media’s parent company, Liberty Global, and Stewart Grierson, CEO of VM’s key strategic partner in energy infrastructure, Upnorth Group, will have a robust, and at times uncomfortable, debate about unrealistic expectations, untested technologies, unclear policy and regulation, aggressive sustainability commitments and the art of the possible – audience participation will be expected!

45% of the audience are exploring backup generation, UPS, water and air cooling solutions

Check out our new website for the most up-to-date event details: datacenterdynamics.com


>2020 Event Calendar

> LATAM Digital Week 17-19 March
> New York 31 March - 1 April
>Datacenter-nomics 31 March (colocated with DCD>New York)
> Jakarta 15 April
>Energy Smart Stockholm 27-28 April
> Madrid 27 May
> Shanghai 11 June
> San Francisco 15-16 June
> Bangalore 16 July
> Sydney 13 August
> Santiago 9 September
> Singapore 15-16 September
> Mexico 30 September
> Virginia 5-6 October
> Mumbai 16 October
> Dallas 26 October
>São Paulo 3-4 November
>London 10-11 November
> Beijing 3 December
>Canada Digital Week 8-10 December

New events for 2020: LATAM Digital Week, >Datacenter-nomics and >Virginia

New York | 31 March - 1 April 2020 | New York Marriott Marquis

DCD>New York is the largest marketplace of its kind in the United States.

More than half of last year’s enterprise audience was made up of Financial Services organizations, including buyers from every Fortune 100 bank in NYC with more than $8.7 trillion in assets. Colo companies that attended last year’s event operate 14,700,000 sq ft of data center space in the New York Tri-state area alone. Construction companies that attended last year’s event had in excess of $13.5 billion worth of data center revenues in 2018. And consulting engineers and architects that attended recorded more than $382 million in fees for data center projects.

Energy Smart | 27 - 28 April 2020 | The Brewery, Stockholm

DCD returns to Stockholm, probably the most energy aware capital in the world, for the third edition of DCD>Energy Smart. This international conference with 600+ attendees, focused on helping the data center and cloud infrastructure industry meet capacity demands sustainably, connects the builders of digital infrastructure with the builders of energy infrastructure to have an energy smart conversation.

The 2020 agenda will explore how circular economy thinking can be applied to the data center industry, how the industry can speed up adoption of the UN sustainability goals, and the latest energy efficient technologies that are ready for deployment.

Check out our new website for the most up to date event details: datacenterdynamics.com


Genomic treasures

Mining genomes

As genomic data emerges at ever-increasing rates, can the data center at the Wellcome Sanger Institute keep up?

Peter Judge, Global Editor

You could say the Wellcome Genome Campus, near Cambridge, UK, is the CERN of bio-sciences. It leads the world’s efforts to apply genomic research to benefit human health, while CERN leads the world’s particle physics research from its base in Geneva. The Campus has built up around the Wellcome Sanger Institute, set up in 1992, and now includes a rapidly-expanding cluster of other bio-science and bioinformatics organizations (see box: Wellcome to the world of genomics).

Everything on the campus seems new, and DCD’s visit begins in the Ogilvie Building (opened 2016) with a graphic illustration of progress in DNA sequencing. On display is some of the early equipment from the Human Genome Project, which took 13 years to sequence the first reference human genome. Next to it are subsequent systems, which do the job much quicker (see box: Faster genomes).

The lights dim, and a video plays on one wall, illustrating the rapid progress in the field. Now, we are told, fresh genomes are sequenced every day by banks of dozens of machines. There’s a glut of data: sequences for humans, for cancer cells, for parasites, and for bacteria.

These genomes hold the keys to new medicine. So far, we have failed to eliminate malaria, which kills half a million people a year in Africa. One problem is that the parasite Plasmodium falciparum develops resistance to drugs. Checking the genome of malaria samples lets health bodies track that resistance and keep one step ahead, targeting new drugs where they are needed.
Meanwhile, hospitals live in fear of the “superbug” MRSA, which is resistant to standard antibiotics. In fact, analysis of MRSA’s genome has shown that several strains can be dealt with by specific antibiotics. Sequencing and analysis could save the lives of patients in hospitals struck by the superbug.

And a full genome sequence could potentially improve the healthcare an individual receives. Our genetic fingerprint influences how likely we are to suffer specific diseases or conditions. Beyond that, our genome determines which treatments will be effective, and which will have side effects if we succumb to illness.

As the video ends, the wall it’s projected on slides apart. Through a two-way mirror, in a bright gleaming laboratory, we see a bank of NovaSeq 6000s - the latest sequencing machines from Illumina. Each is fed a continuous stream of genetic samples, and each takes just one day to repeat the task which took the Human Genome Project 13 years. Between them, they are pumping out petabytes of genomic data.

Everything that happens at the Wellcome Sanger campus flows from this firehose. Keeping up with the data deluge is the mission of the Wellcome Sanger Institute’s data center manager, Simon Binley. It’s his job to share it and make it useful to scientists on the campus and around the world.

“We are the single largest user of sequencing consumables in the world,” says Binley. He’s proud of his facility, but has no illusions about who is the star of the show: “Our priority is to make sure the science gets the grunt it needs to perform world-class science.”

That goal places unique demands on the Wellcome Trust’s data center, he says: “The original sample, that piece of human tissue, or organic matter, that will be lost. Eventually, it will decay. So the only reference we've got to that is the data stored here. If it is referenced in a paper, we have to retain the data forever.”



To see the data center, Binley walks us to the Morgan Building, an older space which opened in 2005. On the lowest floor, there are four 250 sq m (2,700 sq ft) data halls, color-coded red, yellow, green, and blue, which hold some 35,000 computer cores in 400 racks.

The life-cycle of this data center is far more interesting than even these raw figures suggest. When it opened in 2005, the Sanger Institute planned to adapt to technology changes by reserving a “fallow” hall. Three rooms were gradually populated, and the blue hall was left empty, waiting for a new generation of equipment.

The fallow hall remained empty for a long while, while the first three halls got some updates over the data center’s first 14 years. As a result, the three “legacy” halls have some quite recent equipment, and they’ve recently implemented comprehensive cloud-based data center infrastructure management (DCIM) using Schneider’s EcoStruxure. The infrastructure management extends beyond the data center to communications rooms throughout the campus - and also to the crucial sequencers in the Ogilvie building. While we were there, Binley pointed out individual UPS systems sitting by each one, all under the DCIM control.

Binley shows us one of the legacy halls: there is conventional air conditioning with no aisle containment, and the air around is quite chilly as it’s drawn upwards through the racks. The racks each have about 10kW of load, and the room totals 750kW, with a PUE (power usage effectiveness) which Binley is bringing down from around 1.8 towards 1.4, partly by raising the temperature from 19°C (66°F) to 21°C (70°F).

In the blue hall we feel the difference. Engineers are installing racks, but equipment there is already in use, and the hall is noticeably warmer than the others. Here more power is available, and more is going where it is needed.

"We've got to keep the sequencers running. If they dry up, our reason to be here goes away" as the other three halls combined. “We needed something that we could start wrapping much heavier workloads into,” Binley explains. “We needed a number of racks where up to 30kW could be accommodated.” This density demands liquid-cooled back-of-rack chillers, says Binley: “These coolers can handle 35kW and burst to 40kW,” Binley says. “The temperature is 34 degrees (93°F) at the back of the rack, and goes up to 50 or 60 (122-140°F) six inches later. The back-of-rack coolers take it back to half a degree below the inlet temperature.” This method allows Binley to tailor the cooling in different parts of the room. There are 25 racks with back-of-rack cooling, and half of them were occupied when DCD visited. There are also air-cooled rows, which now have state-of-the-art aisle containment. Fully populated, the 25 water-cooled

Faster genomes

IT professionals are used to rapid increases in performance as solid state circuits improve. Genomic technology has been advancing at even faster rates. The latest NovaSeq 6000 from Illumina can deliver 6Tb or 20Tb reads of an entire genome in less than two days. Overall, costs are falling even faster. The last generation of sequencers cost £700,000 ($900,000) for a unit that produced 2Tb a day. The new ones cost £300,000 ($390,000) to half a million and generate 4Tb per day.

While earlier sequencing methods required manual effort and collation, newer systems automatically generate results in a form compatible with a laboratory information management system (LIMS). The end result is huge quantities of data, delivered ready for use and vitally important to science. It is up to the Wellcome Sanger Institute’s data center to keep up with that flow of data, by using the best affordable IT.
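As a rough illustration of how quickly those economics shift, here is a back-of-envelope comparison of capital cost per terabase of daily throughput, using only the figures quoted in the box (an indicative sketch, taking the lower end of the new machines' quoted price range):

# Back-of-envelope: capital cost per Tb/day of sequencing throughput,
# using only the figures quoted in the "Faster genomes" box.
old_cost_gbp, old_tb_per_day = 700_000, 2   # previous generation: £700,000, 2Tb a day
new_cost_gbp, new_tb_per_day = 300_000, 4   # new generation: £300,000 (lower end), 4Tb a day

print(old_cost_gbp / old_tb_per_day)        # 350,000 GBP per Tb/day
print(new_cost_gbp / new_tb_per_day)        # 75,000 GBP per Tb/day - roughly a four- to five-fold improvement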


Fully populated, the 25 water-cooled racks would consume some 750kW, as much as an entire legacy hall. This still leaves more than 1MW to take up, he says: “We’re planning to use this for the next 15 years, and technology is not going to stand still.” A quick mental sum suggests that he could pretty much fill the room with 30kW racks.

The increase in energy demands from the blue hall might be a concern, as the campus has rather average-quality power. It’s stuck at the end of the grid, Binley says, and suffers occasional brownouts and outages. One current proposal to deal with unreliable grids is to use a microgrid, where some power is generated locally to increase reliability.

The Sanger Institute is on trend here: for more than seven years it’s had a combined cooling, heat and power (CCHP) system on the Morgan building’s roof. It uses natural gas to deliver 2MW of electrical power, captures waste heat for use in the buildings, and also provides energy for cooling systems, so it can deliver about 1MW of cooling. This makes good economic sense, since gas costs around half the price of electricity per kWh, and it could conceivably provide primary power, with the grid relegated to backup and office use.

But that’s not how the Wellcome campus works, says Binley: “We are not the only essential service on campus. We've got to keep the sequencers running. If they dry up, our reason to be here goes away.” So the CCHP doesn’t support the data center directly. It puts 2MW into the campus ring, and increases reliability overall.
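To make Binley’s “quick mental sum” a few paragraphs above explicit, here is a rough sketch of the blue hall arithmetic using only the figures quoted; the numbers are indicative, not a statement of the actual design:

# Rough capacity arithmetic for the blue hall, based on the figures in the article.
hall_capacity_kw = 2200                  # potential capacity of the blue hall
water_cooled_racks = 25                  # back-of-rack cooled racks installed
rack_load_kw = 30                        # high-density rack load

high_density_load_kw = water_cooled_racks * rack_load_kw   # 750 kW - about one legacy hall
remaining_kw = hall_capacity_kw - high_density_load_kw     # ~1,450 kW of headroom

# In principle, how many more 30 kW racks would that headroom support?
extra_racks = remaining_kw // rack_load_kw                 # ~48 racks
print(high_density_load_kw, remaining_kw, extra_racks)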


Another data center efficiency trend fell foul of economics, however. Some data centers that are keen to reduce their environmental footprint are making their waste heat available for use by their neighbors. It’s normally thought that liquid cooling makes this easier, because it delivers waste heat in a more concentrated and easily usable form. But it’s not that simple, as the Sanger Institute found.

The warm air from the legacy halls was relatively easy to plumb into the CCHP’s heat reclamation system, but in the blue hall, the use of liquid cooling means the waste air is no longer warm enough to consider. “The heat recycling in the legacy hall is legacy heat recycling,” he explains. “It supplements the building‘s heating in the winter. In the new hall, the air is cooler.”

Meanwhile, it turned out recycling the heat from the water wasn’t viable, because of the level of investment required: “This is a £9 million ($11m) room, directly supporting the science,” he explained. Adding heat reuse would have added £4m ($4.8m) to the cost, and saved a much smaller figure. “That £4m could pay for a number of PhD students,” he says. “They could work on a program like eradicating malaria.”

“For us, the important thing is to enable world-class science,” he says. “The Wellcome Trust is a charity. It gives its results away and doesn’t get government grants. We have to be as careful with our money as a commercial organization.”

All data center managers look to the future, as the demands on their systems develop and the technology evolves. For Binley, there are multiple developments to follow, as the technology of sequencing develops in parallel with the IT. Once again, the sequencers get priority, and the Institute has to balance the available resources between doing more gene sequencing, and ensuring that there’s enough IT resource to handle the output.

This could have interesting results in future. As IT evolves into ever more compact forms, Binley thinks he may be able to continue to grow the power of the data center, while shrinking its size. It could soon be possible to take all three legacy halls and provide a larger IT resource in a single hall. In five years, he can imagine fitting all the resources the campus needs into two halls.

When that happens, he says, it might be possible to switch two halls for lab space, allowing for more sequencers right next to the data halls. Why would he be considering this? Those machines would have the benefit of close network links to the data center, and they’d be directly on its protected power supply, no longer needing remote dedicated UPS.

Once again, it comes back to the primacy of the research. This could be the largest biosciences data center in Europe - but it’s still just there to serve the scientists.

Hear Simon Binley talk at DCD>London on 5 November at 16:30 in Hall 1

Wellcome to the world of genomics

The Wellcome Sanger Institute dates back to 1992, when it was launched to take part in the Human Genome Project, the world’s largest biological science project. The same year, the European Molecular Biology Laboratory (EMBL) created the European Bioinformatics Institute (EMBL-EBI) - a resource to store and share the growing number of DNA sequences emerging from genetic research - and decided to locate it on the same campus at Hinxton, near Cambridge.

Few people involved at that time could have predicted the speed with which the field would take off. The Human Genome Project took thirteen years, and completed its sequence of a single reference human genome in 2003. Now full genomes can be sequenced in less than a day, and it is possible to look for DNA variations within a single individual.

EMBL-EBI began in 1972 as a paper-based library for the few DNA sequences then known, says Steve Newhouse, IT manager of EMBL-EBI. Now, from its outstation at Hinxton, the institute handles the long-term archiving of petabytes of genomic data, which are served freely to researchers around the world. As well as occupying a large chunk of one of Binley’s legacy halls, EMBL-EBI is opening a resource in the Kao Data Park, just outside London, a short hop along the M11 motorway. “As well as genomic data, we've just started an imaging archive. We also have protein array data and structural data,” he tells DCD. “Our archives take in approximately six petabytes a year.”

As the science exploded, other organizations have located at the Wellcome Genome Campus, including Genomics England, whose 100,000 Genomes Project is paving the way to personalized medicine on Britain’s NHS. There is also the BioData Innovation Center - an incubator for genomics startups - and a conference center.



POWER DISTRIBUTION UNITS ISO 9001: 2015 Quality Assured

Bespoke Offering

Steel Construction

INTELLIGENT PDUs CUSTOMISED TO YOUR SPECIFICATION

Quick delivery

The Olson intelligent remote monitoring PDU is a modular system comprising a main monitor module with a built-in graphical display and keypad. It offers the option to connect up to four 8x output switching modules, allowing a total of 32 16A switched outputs to be controlled (32A total). • Monitor energy consumption • Remote monitoring & switching • External temperature & humidity sensors

• Programmable UPS • SNMP enabled

FOR MORE INFORMATION VISIT OLSON.CO.UK

+44 (0)20 8905 7273 sales@olson.co.uk olson.co.uk

Designed & manufactured in the UK


Storage wars

Batteries lead the charge for a better grid

In future, energy will be stored in huge concrete towers and underground compressed air vaults. For now, batteries are still the best option for data centers, says Natron Energy's Jack Pouchet

The increasing deployment of renewable energy systems is leading to greater grid instability, demand for additional grid services, and local energy storage. Data center owners and operators have an interesting opportunity to add stability to the grid, add new revenue models for internal and external clients, and add new levels of resiliency to their operations - all with ‘new’ energy storage systems. Although ‘new’ is perhaps not much more than a fresh approach to existing, proven systems, albeit with innovative chemistries.

When it comes to energy storage, the news media and corporate board rooms are enamored with swashbuckling energy storage plays like ‘I’ll ship you 100MW in 100 days’ and Popular Science cover projects such as giant concrete blocks, caverns of compressed air, and nearly perpetual motion machines. But when we look behind the curtain, we find that there are several practical energy storage systems commercially available today that may align well with the power profiles required to sustain our mission critical applications.

Breaking it down: what types of storage requirement are there? Or, how do we describe our power profile? In terms of power (kW), speed, acceleration, and duration (kWh). Think of it as ranging from ultracapacitors, for 10 to 30 seconds, all the way to months of duration from pumped hydro. From a power perspective, we may need 100 percent of the available power immediately and for as long as it will last, as with batteries.

From a storage perspective, speed refers to the rate at which the storage system reaches full power capacity - instantaneous in the case of ultracapacitors and certain battery chemistries, ten minutes or more when speaking of gravitational or compressed-media storage systems. Acceleration refers to peak loads and highly variable load profiles, and the ability of the energy storage system to provide bursts of power as and when needed.

In the data center world, we prefer to control our destiny - hence the reliance on closely coupled batteries and on-site generators. That said, the combination of new, nonflammable energy storage systems of varying capacities, coupled with alternative energy resources that can be dispatched in seconds (fuel cells) or minutes (turbines), may change the standby generator status quo. More on diesel as a convenient energy source later.

From a technology perspective, we are seeing renewed interest and innovation in gravity and pumped/compressed-media systems. This is in large part due to their ability to store energy in high capacity - albeit not necessarily at high density. In the case of pumped hydro (technically gravity on the generation side), this could be measured in GW-months. On a global basis, 95 percent of today’s stored energy comes from pumped hydro electric storage (PHES). Unfortunately, we won’t see many new large-scale PHES systems, due to challenges from environmentalists and a lack of political willpower.

Large inertia and compressed gas systems certainly draw media attention. Their practicality has yet to be demonstrated, but there are real examples where these apparently sci-fi concepts have come to life.




For instance, Energy Vault, a Swiss/Californian startup, has proposed a system to store gravitational energy that does not require flooding a valley with a dam. The company’s “energy tower” has a six-armed crane, which raises and lowers giant 35-tonne concrete blocks in a 35-story tower, storing and retrieving electrical energy from the grid. It’s still at an early stage, with a one-seventh scale prototype, but it has had $110 million of investment from the SoftBank Vision Fund.

In Australia, the Hydrostor project has had some AU$9 million from government sources, including the Australian Renewable Energy Agency (ARENA) scheme, to develop an underground compressed air storage facility at the abandoned Angas Zinc Mine near Adelaide. The Advanced Compressed Air Energy Storage (A-CAES) project will cost $30 million in total, and should be able to hold 10MWh of energy and deliver it at a rate of 5MW, synchronized and regulated to be compatible with the local electric grid.

The system uses surplus electricity (off-peak or from renewable sources) to compress air, which is stored underground, kept at pressure by water displaced into a reservoir. During discharge, the water flows back and the expanding air turns a turbine to deliver electricity. To increase efficiency, the heat released during compression is stored, and used to warm the air again during expansion.

These kinds of systems have the potential to store a large amount of energy for perhaps four to 24 hours. However, their power profiles are not instantaneous. They do not go from zero to 100 percent (or more for peak loads) of capacity quickly, and will require batteries at or near the load for bridging purposes.
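The A-CAES figures give a feel for the power-versus-energy distinction (kW versus kWh) set out earlier: duration is simply stored energy divided by discharge power. A minimal sketch, using the quoted 10MWh and 5MW, and an assumed (hypothetical) 1MW data center load with a 15-minute battery bridge:

# Duration = stored energy / discharge power.
acaes_energy_mwh = 10                     # quoted storage capacity
acaes_power_mw = 5                        # quoted discharge rate
print(acaes_energy_mwh / acaes_power_mw)  # 2.0 hours at full power

# By contrast, a bridging battery for an assumed 1 MW load and a 15-minute
# ride-through only needs to hold a fraction of that energy,
# but it must deliver its full power instantly.
load_mw, ride_through_hours = 1, 0.25
print(load_mw * ride_through_hours)       # 0.25 MWh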

"There is a simple reason for all the interest in batteries: they work!" A tower in the desert could add value in a blended, smart-grid ecosystem - so long as it doesn’t shade the solar field or block the wind turbines. Compressed gas in an abandoned mine could do the same. Just don’t plan on one near your data center any time soon. Coming back down to earth, the vast majority of venture capital and government funding (grants to companies and academia research grants) goes towards R&D associated with batteries. This tends to be clustered around a few hot subject areas including chemistry, with lithium still top of the list, anode/ cathode chemistry and material science, break-through approaches such as thermal batteries applications like EV Fast Charging. There is a simple reason for all the interest in batteries: they work! And this includes the tried-and-true lead-acid battery, which is not going away any time soon. The lead-acid battery is well known, the characteristics and life-cycle well understood, it can be relatively inexpensive,


The lead-acid battery is well known, its characteristics and life-cycle well understood, it can be relatively inexpensive, and it is the easiest material to recycle, with over 90 percent of the lead in new batteries derived from recycling. On a global basis, lead-acid batteries are already providing somewhere on the order of 15 minutes of backup for four to eight percent of the grid - perhaps more when we include legacy telco and industrial plants with half-hour to four-hour backup requirements.

Batteries receive the greatest amount of investment and R&D of any energy storage system. Most of the focus is on lithium-based batteries, seeking new chemistries that reduce the use of rare earth and conflict minerals, reduce the potential for fire and explosions, improve electron flow through the cathode and anode (reducing fire and explosion risks), and improve manufacturability and the supply chain. Fortunately, research continues on promising chemistries including zinc derivatives (in early commercial production) and various sodium-ion options such as Prussian blue and ceramic carbon (still in R&D).

Of these, the Prussian blue sodium-ion battery exhibits many of the traits we seek in the data center space: nonflammable, no thermal runaway conditions, extremely high power capacity, a wide operating temperature range (-20°C to +45°C, negating the need for special battery room cooling), extremely fast recharge (eight minutes), and very high cycle rates and life, with more than 60,000 cycles a reasonable expectation.

In summary, the energy storage market will continue to see significant financial investment. There will be exciting announcements of incredible engineering projects, and headline news of huge battery plants. In the meantime, the data center will continue to deploy practical systems based upon lead-acid, lithium, and sodium batteries, and back them all up with diesel gen-sets, as diesel fuel remains the best energy source for continuous operations once the batteries expire.

Jack Pouchet is at Natron Energy


www.fossilfreedata.com


Power + Cooling

Do I really have to say it?

Learn more about data center efficiency at DCD>Energy Smart on 27-28 April

“When building a data center, we don’t argue whether it needs sturdy floors because gravity is up for debate”


Last month, on the way to visit a renewable energy-powered data center in Sweden, I found myself trapped in a turgid conversation with someone in this industry. There is not enough space on this page to describe the feeling of mounting dread that overcame me as I slowly realized that this man was a climate change denier. No, he was not quibbling about whether we should invest more in wind and solar, or whether nuclear is the best approach - he was denying the need to do anything at all.

It’s a viewpoint that still exists among some in this industry. Sometimes it rears its head in the comments section of our website. Other times it is muttered by an audience member at a conference during an energy efficiency panel. Occasionally it is said during a very long train ride through Sweden.

This is meant to be an industry of engineers, whose actions are based on scientific certitude. This is meant to be an industry of entrepreneurs, whose accurate forecasts of the future form the basis of robust business plans. Hell, this is meant to be an industry of data professionals. How are there still people refusing to accept the data?

The Intergovernmental Panel on Climate Change’s multiple reports, created by hundreds of scientists, based on thousands of research articles, using multiple high-resolution supercomputer simulations, across decades of study, paint a stark picture: Climate change is real, and we are to blame. This is not a point to be argued.

When building a data center, we don’t argue about whether it needs sturdy floors because ‘gravity is up for debate,’ we don’t argue about connecting it to the grid because ‘there’s no proof electricity exists,’ and we don’t argue about cooling it because ‘we don’t know what will happen to this data center’s climate.’

Now, this is not to say that we should end all discussion on the matter. There are tough questions that need answering: Which renewables should we invest in? Who should be responsible for deploying energy storage solutions? Are power purchase agreements enough? Should Google and Microsoft fund think-tanks and lobbyist groups that deny climate change? Okay, that last one isn’t that tough.

We need to start arguing about how to actually combat this problem, not whether there is one. It is too late for us to pretend that anthropogenic climate change is a myth dreamed up by scientists for reasons unclear. Once we accept the data, then the real change can begin.


DATA DOESN’T DO DOWNTIME DEMAND CAT® ELECTRIC POWER

Cat® generating sets and power solutions provide flexible, reliable, quality power in the event of a power outage, maintaining your operations, the integrity of your equipment and your reputation. Learn more at http://www.cat.com/datacentre

© 2019 Caterpillar. All Rights Reserved. CAT, CATERPILLAR, LET’S DO THE WORK, their respective logos, “Caterpillar Yellow”, the “Power Edge” and Cat “Modern Hex” trade dress as well as corporate and product identity used herein, are trademarks of Caterpillar and may not be used without permission.

Visit Cat Dealer Finning UK at DCD London 2019, 5th - 6th November Stand No. 45


Still relying solely on IR scanning? Switch to automated real-time temperature data.

Introducing the Starline Temperature Monitor

Automated temperature monitoring is the way of the future. The Starline Critical Power Monitor (CPM) now incorporates new temperature sensor functionality. This means that you’re able to monitor the temperature of your end feed lugs in real time - increasing safety and avoiding the expense and hassle of ongoing IR scanning. To learn more about the latest Starline CPM capability, visit StarlineDataCenter.com/DCD.

