DCD>Magazine Issue 29 - CERN New Data Centers


August/September 2018 datacenterdynamics.com

Our 16-page supplement uncovers the truth behind the hype

PROBE THE ORIGINS OF THE UNIVERSE

VERTIV'S CEO OPENS UP ON SELLING INFRASTRUCTURE TO MILLENNIALS

Going cloud native: rewriting the virtualization rulebook

The inventor of Lithium-ion batteries is still charged with ideas


[Corning advertisement: "We've got your back! Connect to maximum density with faster installations." A structured cabling diagram shows a Main Distribution Area linked by inter-room backbone cabling/high-fiber-count trunks to Intermediate Distribution Areas in Computer Rooms 1 and 2, with intra-room backbone trunks running on to the servers/compute and storage Equipment Distribution Areas.]

As the popularity of cloud computing and big data grows, the demands for high-speed transmission and data capacity are greater than ever before. Address your most challenging data center concerns with Corning's high-fiber-count MTP® trunks, used in the data center backbone - a preterminated solution offering increased density, easier cable management, and reduced installation time.

• Reduced installation time due to single trunk pull
• Increased density with less conduit congestion
• Easier cable management due to reduced number of cables
• Ensures consistent termination quality
• Reduced risk

Are You Corning Connected?
Visit www.corning.com/edge8 to learn more about the benefits of our high-fiber-count MTP trunks.
Visit Corning in Booth #97 at DCD Enterprise in New York and learn more about the benefits of our high-fiber-count MTP trunks.

© 2018 Corning Optical Communications. LAN-2187-AEN / July 2018


Contents

ISSN 2058-4946

August/September 2018

6 News: Rising sea levels to impact data centers
14 Calendar of Events: Keep up to date with DCD event and product announcements
16 CERN: Probing the Universe - The particle accelerator is getting upgraded, along with its data centers
20 CEO Focus - Rob Johnson, Vertiv: Who pays for critical infrastructure hardware in an everything-as-a-service world? The CEO of Vertiv discusses change - from market change to generational change
23 The Edge supplement: A special in-depth look at the Edge Computing landscape
26 What is the Edge? Sifting through the hype to define a nascent concept
28 The shape of Edge: Data transmission is changing, and Edge will bring new IT architectures
30 The Telco Edge: We talk to the companies who want to put compute at the base of cell towers
36 Edge problems: Why the mobile network Edge might not deliver on its promises
39 The restless inventor: The father of the lithium-ion battery hopes to change storage once again
42 Who's afraid of cloud native? New software tools are remaking the data center, but don't panic
46 Extreme Data Center Award: We need your help to find the most extreme data center in the world
48 Growth in Singapore: The region's booming, so here's a look at the biggest highlights of the year
50 Getting rid of serverless computing


Touching the Edge ...of the Universe

When I first spoke to the European physics lab at CERN, I thought I'd found the ultimate Edge computing use case. Edge manifests when an application needs a quick response, so resources have to be moved from the distant cloud to servers close to the users. Experimenters at CERN's large hadron collider (LHC) deal with such huge amounts of data that they are placing servers right by the equipment (p16). But as we found in preparing our Edge supplement, this is not what most people mean by Edge computing.

Can Edge applications curb their need for money and power?

Edge is more about modest amounts of data which have to be delivered and consumed fast by the Internet of Things and services like Netflix. Our supplement (p23) looks at a fairly predictable range of products aimed at the sector (many of them micro data centers) but finds little so far in the way of actual rollouts, apart from the aforementioned video streaming service and CDNs like Akamai. The power available in most Edge sites may not be enough for the job, and the overheads of a small box could crush the economics. We see the business case, but a lot of people will be waiting for Edge 2.0.

That's not a problem for equipment vendors. There's plenty of demand for data center space which needs their kit. The question is, who exactly will pay for the cooling units and power equipment, and how will they monetize their purchase, when everyone wants to buy things as-a-service? Vertiv CEO Rob Johnson put this eloquently: "Millennials don’t want to own anything. They want to pay for things in chunks as they use them.” Read our interview (p20) for more insights from the industry leader.

Extreme Data Centers are the audience-vote category in this year's Datacenter Dynamics Awards. But what do we mean by that? Our initial shortlist (p46) includes facilities in space, deep underground, and colocated with a nuclear power station. But we want more. Last year, reader submissions showed us there were more Beautiful Data Centers than we could ever have imagined. This year, show how you and your colleagues are taking things to extremes.

From the Editor

Lithium-ion batteries have changed the world since their invention at the beginning of the 1980s, enabling mobile devices and starting the electric vehicle revolution. Inventor John B. Goodenough wants more (p39). At the age of 96, he has a new idea that could solve our energy storage crisis.

Peter Judge
DCD Global Editor

800k
The number of processor cores available as part of the Worldwide LHC Computing Grid, a distributed network of data centers dedicated to research in physics, combining 170 facilities across 42 countries

Dive deeper
Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color, shown below.
Intelligence | Events | Debates | Training | Awards | CEEDA
bit.ly/DCDMagazine

Meet the team
Global Editor Peter Judge @Judgecorp
News Editor Max Smolaks @MaxSmolax
Senior Reporter Sebastian Moss @SebMoss
Reporter Tanwen Dawn-Hiscox @Tanwendh
Editor LATAM Virginia Toledo @DCDNoticias
Assistant Editor LATAM Celia Villarrubia @DCDNoticias
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Head of Design Chris Perrins
Designer Mar Pérez
Designer Ellie James
Head of Sales Yash Puwar
Conference Director, NAM Kisandka Moses
Conference Director Giovanni Zappulo

Head Office
DatacenterDynamics
102–108 Clifton Street
London EC2A 4HW
+44 (0) 207 377 1907

PEFC Certified
This product is from sustainably managed forests and controlled sources
PEFC/16-33-254
www.pefc.org

© 2018 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@ datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.


BECAUSE YOUR CUSTOMERS DEPEND ON YOU…

Cat® Electric Power understands that loss of power means loss of reputation and customer confidence. Your customers demand an always-on, robust data storage solution without compromise, 24 hours a day, 365 days a year. Cat power solutions provide flexible, reliable, quality power in the event of a power outage, responding instantly to provide power to servers and facility services, maintaining your operations and the integrity of your equipment.

Your Cat dealer and our design engineers work with you to design the best power solution for your data center, helping you to consider:
• Generator sizing for current and anticipated future operations growth, fuel efficiency and whole life costs
• Redundancy for critical backup and flexible maintenance
• Remote monitoring for constant communication and performance analysis
• Power dense designs optimising space dedicated to data center equipment
• Interior or exterior installation requirements, from enclosure design for noise and emissions to exhaust and wiring designs

After installation, trust Cat to provide commissioning services to seamlessly integrate the power solution into the wider data center system. Our dealers also provide training, rapid parts and services support alongside a range of preventative maintenance offerings. To find out more about Caterpillar Electric Power and our Data Center experience, come visit us at DCD > South East Asia, or visit: www.cat.com/dcd0809

© 2018 Caterpillar. All Rights Reserved. CAT, CATERPILLAR, their respective logos, ADEM, “Caterpillar Yellow” and the “Power Edge” trade dress, as well as corporate and product identity used herein, are trademarks of Caterpillar and may not be used without permission.

Demand Cat Electric Power


Whitespace
A world connected: The biggest data center news stories of the last two months

News in brief

AT&T to sell 31 data centers to Brookfield for $1.1 billion
The telecoms giant will continue to use the facilities to deliver its services, and plans to use the money to pay down its debt - which could rise to $180bn should its Time Warner acquisition continue.

Carter Validus Mission Critical REIT sells its last data center
“I am pleased to announce we consummated the sale of our last data center asset,” CEO Michael A. Seton said.

Arista to pay Cisco $400m to settle patent lawsuit
In 2014, Cisco alleged that Arista infringed on intellectual property in its software, namely the commands used for network equipment configuration.

Virus shuts down TSMC factories, impacting chip production
$255m revenue hit predicted after WannaCry variant ran rampant across unpatched systems.

Kaiam stockpiles optical transceivers in case of US-China trade war
“As patriots, we believe a transceiver reserve is necessary for our domestic security.”

Reports: AWS building site burns in fatal Tokyo fire
A building under construction in the Tokyo suburb of Tama which caught fire in late June appears to have been a data center, due to be completed later this year. The building site is “highly likely” to be a nascent AWS data center, according to “several industry stakeholders,” Nikkei xTech reports. The blaze continued for eight hours, killing five people and injuring 50.


Data centers, fiber optic cables at risk from rising sea levels

Rising sea levels are set to damage fiber optic cables, submerge network points of presence (PoPs) and surround data centers, researchers have warned.

In a study analyzing the effects of climate change on Internet infrastructure in the United States, University of Wisconsin-Madison and University of Oregon researchers found that a significant amount of digital infrastructure will be impacted over the coming years, and cautioned that mitigation planning must begin immediately.

The peer-reviewed study Lights Out: Climate Change Risk to Internet Infrastructure, authored by Ramakrishnan Durairajan, Carol Barford and Paul Barford, combined data from the Internet Atlas - a global map of the Internet’s physical components - and projections of sea level changes from the National Oceanic and Atmospheric Administration (NOAA).

“Our analysis is conservative since it does not consider the threat of severe storms that would cause temporary sea level incursions beyond the predicted average,” it notes.


At particular risk are fiber optic cables buried underground, which - unlike submarine cables - are not designed for prolonged periods of submersion. According to the study, in 15 years some 1,186 miles (1,908km) of long-haul fiber and 2,429 miles (3,909km) of metro fiber will be underwater, while 1,101 termination points will be surrounded by the sea.

“Given the fact that most fiber conduit is underground, we expect the effects of sea level rise could be felt well before the 15 year horizon.” Additionally, “in 2030, about 771 PoPs, 235 data centers, 53 landing stations, 42 IXPs will be affected by a one-foot rise in sea level.”

The US networks most at risk belong to AT&T, CenturyLink, and Inteliquent, with a particularly strong impact expected across the New York, Miami, and Seattle metropolitan areas.

“Given the large number of nodes and miles of fiber conduit that are at risk, the key takeaway is that developing mitigation strategies should begin soon.”

bit.ly/GoIntoTheArk



Vox Box

Nancy Novak
SVP, Compass Datacenters

Why is standardization important during build out?
You have to be standardized enough that you are not killing the workforce trying to put this stuff in place, you stay competitive, you build on time and you can meet the needs of smaller clients, large cloud clients and edge point clients. You need appropriate flexibility: you’re not chaotic, you’re not so rigid that you can’t be flexible, and you have this kit of parts that can be adjusted to meet someone’s needs.

bit.ly/AStandardCompass

Kelly LeValley Hunt
Global VP, BlockApps

What should colos keep in mind when dealing with crypto miners?
Data center operators must work with the miners to evaluate how much infrastructure they actually need to put some GPUs or Antminers in their data center. If we are going to stack 15 miners into a typical rack, that’s 25kW of power, give or take. These demands are very important in the industry right now. But, most importantly, I would ask for a heavy deposit up front.

bit.ly/CrypticCurrencies

Microsoft wants to deploy 72 gensets in Quincy, WA

Microsoft has asked for permission to deploy 72 generator sets on its hyperscale data center campus in Quincy, Washington. The company has requested an updated air quality permit from the Washington Department of Ecology – which is accepting comments from local residents.

The move marks a considerable change of direction: back in 2012, Microsoft said it was going to eliminate backup generators from a number of data centers in the US, including its facilities in Boston, Chicago and Quincy.

Microsoft opened its first data center in Quincy back in 2007. Today, the campus stretches across 270 acres of land and supports hundreds of megawatts of IT equipment, with its electricity provided by a nearby hydroelectric station and its water plant shared with the City of Quincy in return for a symbolic $10 annually.

In 2012, the company announced plans to reduce its reliance on generator sets, ensuring a reliable power supply through fuel cells and diverse grid connections; in the worst case scenario, Microsoft would simply move the workloads to another data center. But it looks like this strategy hasn’t worked out, and Microsoft will have to install millions of dollars’ worth of power equipment to provide a total of 210MW of backup power.

The Department of Ecology said that, even though diesel generators release pollutants like carbon monoxide, nitrous oxide, and volatile organic compounds into the air, the data center will meet health criteria if operated according to the permit.

The expansion places Microsoft into a new air permitting category and the company will also be required to submit an Air Operating Permit application within one year after receiving approval to install the new equipment. In addition to the generators, the updated permit includes 136 evaporative cooling towers.

Microsoft’s generators in Quincy have caused controversy before: in 2010, the campus attracted complaints from local residents, including former Mayor Patty Martin, who were concerned about air quality and attempted to get the air quality permit revoked, without success. Microsoft was operating nearly 40 gensets at the time.

bit.ly/ReadyGensetGo

Peter’s diesel factoid
Diesel backup generator sets are mostly idle, but can fall foul of pollution legislation. For instance, the EU emissions trading scheme (ETS) requires registration based on capacity, not output. That may change in 2020, with Phase IV of the EU’s ETS, which could include a minimum emissions threshold.

SECURE THE LAND. SECURE THE DATA.
Registrations of Interest are now open for data centre operators looking to expand into Western Australia.
• Subsea and cross continental cables linking to Asia, Europe and the USA
• Political and geotechnical stability
• A moderate climate
• Sunshine hours excellent for power generation
• A shared time zone with around 60% of the world’s population
• Reliable water and power supplies
For more information or to register your interest visit landcorp.com.au/data




Sentinel buys Washington Dulles Gateway for $82.5m

Sentinel Data Centers is buying a 280-acre plot in Dulles, Loudoun County, for $82.5m. One of the largest available properties for data center development in the region, the site was marketed by real estate management company JLL on behalf of its majority owner, H. Christopher Antigone. Named the Washington Dulles Gateway by JLL, it offers a net constructible area of 140 acres.

QTS Data Centers came close to buying the property in October last year; however, whether the wholesale provider pulled out - or Sentinel put a better offer forward - is unclear. QTS did not immediately respond to requests for comment made by DCD.

JLL managing director Mark Levy said that his team had “worked diligently through leveraging our global platform to identify the most qualified buyer.

“Given the significant demand that continues to exist for data center product in Loudoun County, we were able to achieve a tremendous outcome for our client,” he added.

Northern Virginia is indeed one of the world’s densest data center markets, offering rich interconnectivity and availability of qualified professionals and power. Furthermore, a favorable tax regime and regulations have helped turn the region into a data processing hub.

bit.ly/ADullesNewsItem

Accenture secures US Library of Congress data center contract

Accenture’s Federal Services (AFS) division has won the contract to build, fit out and manage a new data center for the Library of Congress. As part of a $27.3 million deal, Accenture will be responsible for the facility’s design, sourcing vendor-neutral hardware and software, and will assist with systems migration.

Based on an assessment of the library’s 250 applications, the company will define where each app should be hosted, with a choice between shared hosting servers, private cloud systems, managed colocation facilities and externally managed services.

The Library of Congress is the largest library in the world, with 218 years of history. It is said to contain more than 167 million items on approximately 838 miles of bookshelves, including 24,356,449 books, 72,061,060 manuscripts and 14,897,266 photographs. It is also home to the world’s largest collection of cartographic materials, fire-insurance maps of US towns and cities, comic books and US telephone directories.

bit.ly/SomebodyPleaseSendMeThere

Intuit sells largest data center to H5, moves to AWS

Financial software company Intuit has sold its largest data center, as it shifts to Amazon Web Services’ public cloud. H5 Data Centers will acquire the 240,000 square foot (22,300 sq m) facility in Quincy, Washington, with the sale expected to result in a GAAP operating loss of $75 to $85 million for Intuit. But the impact is expected to be offset by tax benefits related to the sale, share-based compensation and the reorganization of a subsidiary.

“We chose to move to Amazon Web Services to accelerate developer productivity and innovation for our customers, and to accommodate spikes in customer usage through the tax season,” H. Tayloe Stansbury, Intuit EVP and CTO, said. “Our TurboTax Online customers were served entirely from AWS during the latter part of this tax season, and we expect to finish transitioning QuickBooks Online this year.

“Now that most of our core applications are in AWS, the time is right to transition the ownership and operation of this data center to a team who will expertly manage the infrastructure through the remainder of this transition.”

bit.ly/TheyreJustNotThatIntuit




Still relying solely on IR scanning? Switch to automated real-time temperature data.

Introducing the Starline Temperature Monitor

Automated temperature monitoring is the way of the future. The Starline Critical Power Monitor (CPM) now incorporates new temperature sensor functionality. This means that you’re able to monitor the temperature of your end feed lugs in real time – increasing safety and avoiding the expense and hassle of ongoing IR scanning. To learn more about the latest Starline CPM capability, visit StarlinePower.com/DCD_Aug.



Proposed Irish government data center could hit planning law trouble

Plans to build a €30 million ($34m) data center to host the Republic of Ireland’s public sector data on the Backweston campus in Celbridge (Dublin) could be held back by the country’s statutory planning process, according to a tender issued by the Office of Public Works (OPW) seen by TheJournal.ie.

The OPW, which manages the government’s property portfolio, addressed an estimate to the engineering consultants, detailing its proposal for a 7,000 square meter (75,350 sq ft), two-story data center. Under the proposal, construction would begin early next year and would be completed within eighteen months. Another tender will soon be issued to secure a building contractor for the project.

The Backweston campus is a €200m ($228m) development launched by the OPW in 2002. It currently houses the State Laboratory and the Department of Agriculture, Food and the Marine (DAFM) Laboratories. The State Laboratory undertakes analyses required by government departments, such as the verification of food quality and safety, or screening for illegal practices in the pharmaceutical or automotive industries. The DAFM laboratory offers services to the country’s farming and agri-food businesses, and is shared by two veterinary group laboratories, the Pesticide Control Service, the Dairy Science Laboratory and the Seed Testing and Plant Health Laboratories.

Ireland’s planning laws remain a hot topic for the data center industry and communities in areas of interest for potential developments.

Microsoft to open two cloud regions in Norway

Microsoft is building its first two cloud regions in Norway to provide its Azure, Office 365 and Dynamics 365 cloud services locally. Each region will comprise multiple availability zones, which can be made up of one or multiple data centers. The first will be in Oslo, and the second in Stavanger. While Azure services are expected to be made available as soon as next year, the company says the other two will follow, without specifying a date.

The move is seen as a victory by the Norwegian government, which Minister of Trade Torbjørn Røe Isaksen said is “deeply committed to helping Norway thrive as a hub for digital innovation.” Isaksen hopes that Microsoft’s investment will “ensure the competitiveness and productivity of Norwegian businesses and government institutions” whilst having a positive impact on what it calls its “responsibility to [its] citizens to create an inclusive working life, to the environment and to [its] economic development and job growth.”

While it may be slightly optimistic to expect so many benefits to stem from the new data centers, Microsoft’s decision to build in Norway rather than in Sweden or Denmark - where Facebook and Apple have chosen to erect their facilities, and where Google has bought land (twice in Denmark, once in Sweden) - is another sign that the Norwegian data center industry appears set for growth. The country exempted data centers from paying property taxes from the start of 2018. Further exemptions were proposed as part of the country’s ‘Norway as a Data Centre Nation’ proposal put forward in February.

bit.ly/OsloNewsDay

bit.ly/HeldBackweston

Peter’s heatwave factoid
The market for waste heat may shrivel as the climate heats up. In 2018, Sweden had the hottest summer since records began, and suffered a continuous wave of wildfires during July and August.

DigiPlex data center will help keep Oslo warm

Nordic data center operator DigiPlex has announced plans to recover waste thermal energy from its campus in Ulven, Oslo, and use it to heat residential properties. The company has agreed a partnership with a subsidiary of utility provider Fortum, which has similar arrangements with data center owners across Scandinavia. The company will also provide DigiPlex with cold water for its cooling needs. The facility in Oslo could heat as many as 5,000 apartments once the system is fully operational.

Waste heat recapture can considerably improve data center efficiency, helping avoid venting valuable energy into the atmosphere, but it requires a district heating system - an extensive network of pipes carrying hot water to its intended destination, as opposed to heating it on the spot. Such networks are especially popular in Northern Europe – for example in Stockholm, nearly 90 percent of buildings are plugged into the district heating system. Smaller examples exist elsewhere, such as Amazon’s US headquarters being warmed by a (non-Amazon) data center.

bit.ly/TheInternetofWarmth




US Air Force, IBM unveil world’s largest neuromorphic digital synaptic supercomputer

The Air Force Research Laboratory has built the world’s largest neuromorphic digital synaptic supercomputer, using IBM’s TrueNorth technology. Blue Raven, installed at AFRL’s Information Directorate Advanced Computing Applications Lab in New York, is the culmination of a 2017 partnership between the Air Force’s R&D division and IBM.

AFRL claims the new supercomputer has a processing power that is equivalent to 64 million neurons and 16 billion synapses, while only consuming 40 watts (impressive, but up from the 10W touted last year). The system fits in a 4U-high (7in) space in a standard server rack.

“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” Daniel S. Goddard, AFRL’s director of information directorate, said. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

TrueNorth chips were originally launched in 2014 after several years of research as part of DARPA’s $53.5 million SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) program. AFRL notes that it is currently investigating applications for the technology, but highlighted pattern recognition and integrated sensory processing, as well as the challenge of autonomous systems.

bit.ly/ABrainMachine

China Telecom to host Apple’s China iCloud until 2020

Guizhou-Cloud Big Data Industry Co (GCBD), which recently took over the operation of Apple’s iCloud service in China, has announced that it will be using China Telecom’s cloud storage service.

The announcement triggered fresh consternation that the Chinese government might be able to gain access to Chinese users’ content, according to The Verge. However, Apple maintained that only company employees hold the encryption keys required to access data, and promises that its procedures will handle government requests for customer information.

Apple handed over management of its iCloud operation in China to GCBD, a government-controlled hosting company, in February, when it lost its fight against new government requirements compelling it to use Chinese hosting providers for user content. Apple will set up a new data center in Guizhou, to be operational in 2020, and GCBD will rent servers from China Telecom and other local companies as an interim measure.

Elsewhere, a report in state mouthpiece Global Times focused on the business and technical aspects of the deal, noting that Apple will be using China Telecom’s object-oriented storage (OOS) system. Designed as a highly scalable system to store and retrieve “any amount” of data, the service is understood to be commercialized and already deployed in more than 20 Chinese cities.

An expert cited by the report hailed working with China Telecom as a smart move. The rationale? China Telecom, as one of three incumbent telecommunications operators, can deliver the network performance required for a continued seamless experience with Apple’s iCloud service in China.

The news comes as Google is thought to be considering how to return to China, planning a raft of services - including its Cloud Platform. Again, with restrictions on Western businesses operating in the region, Google will have to seek a partner. Leading the pack, Bloomberg reports, are Tencent and Inspur Group, who would own the data centers Google would end up using.

bit.ly/StoreDifferent

Etix Everywhere, Compunet set to build Tier IV data center in Colombia

European data center specialist Etix Everywhere and managed services provider Compunet have partnered up to build a colocation facility in Cali, Colombia.

The modular data center will be located in Zonamerica, the recently established free trade zone that offers tax benefits and a comprehensive portfolio of corporate services, replicating the model that had proved successful in Uruguay. Once fully built out, the data center will offer 840kVA of power capacity and enough space to house 240 server racks. It is expected to obtain Tier IV certification from the Uptime Institute, testifying to its advanced redundancy features.

“As we want to offer world class services and the best protection to our customers handling mission-critical projects, it is logical for us to build a Tier IV certified data center,” said Guillermo Lopez, founder and Board Member of Compunet.

bit.ly/EtixOneMorePlace




Visa details cause of widespread outage, blames data center switch failure

Visa has explained what led to a significant system outage in Europe this June, after the UK’s Treasury Committee asked the company to detail what happened. The company operates two active-active redundant data centers in the UK, each of which is meant to be able to independently handle 100 percent of the transactions for Visa in Europe.

“Each center has built into it multiple forms of backup in equipment and controls. Specifically relevant to this incident, each data center includes two core switches (a piece of hardware that directs transactions for processing) - a primary switch and a secondary switch,” European head Charlotte Hogg said in a letter to the Treasury Committee.

“If the primary switch fails, in normal operation the backup switch would take over. In this instance, a component within a switch in our primary data center suffered a very rare partial failure which prevented the backup switch from activating.” When the switch failed, it took some five hours to deactivate the system due to the complexity of the fault.

The issue caused two periods of peak disruption, one lasting 10 minutes, another 50 minutes, where the failure rate was 35 percent. Over the course of 10 hours, around 5.2 million transactions failed to process. Hogg offered “unreserved” apologies for the outage.

bit.ly/ACashlessWorld

Bloom Energy goes public on NYSE

Fuel cell specialist Bloom Energy made an initial public offering on the New York Stock Exchange on July 25, pricing 18 million shares at $15 apiece. The share price saw a 67 percent increase in the first day of trading, closing at $25 - an early indicator of success for an innovative technology business.

Bloom makes solid oxide-based fuel cells that can be used to transport and store energy for extended periods of time. Among other applications, these Bloom Energy Servers have been used to ensure uninterruptible power supply in data centers. Bloom is yet to make any profit – it lost $281.3 million in 2017. But it expects to become cash-flow positive this year.

bit.ly/ABusinessInBloom

Intel CEO Brian Krzanich resigns over consensual employee relationship

Brian Krzanich, Intel’s CEO, has resigned after the chip company discovered he had had a past consensual relationship with an employee, a violation of Intel’s manager-level non-fraternization policy. Robert Swan, the company’s CFO, has been appointed as the new interim CEO, effective immediately. The board has begun a search for a permanent CEO.

bit.ly/IntelInsideKrzanichOutside

Diane Bryant quits COO position at Google Cloud

The former leader of Intel’s data center business, Diane Bryant, has left Google after serving less than a year as COO of the company’s cloud business. After 32 years at Intel, where she led its DCG, Bryant has been credited with Intel’s success in the data center industry, which ensured the company’s growth despite slowing PC sales. When she left, the group was responsible for $4.2 billion of Intel’s quarterly $14.8 billion in revenue.

As per Intel’s last proxy statement, Bryant is the company’s second largest individual shareholder, with 315,246 shares, including 204,300 stock options. Over the course of its 50 year history, Intel - which is currently searching for a new CEO (see above) - has never appointed a CEO who had not previously worked at the company in some capacity.

bit.ly/UpdatingHerLinkeIntel



NEW CATEGORY 8 cabling and connectivity provide huge benefits in data centres. Learn about the eight ways an RJ-45 based Cat 8 system creates a robust infrastructure for today's network needs and prepares you for future network demands. Learn more at Leviton.com/Breasons



DCD Calendar

Stay up to date with the latest from DCD - as the global hub for all things data center related, we have everything from the latest news, to events, awards, training and research

Events

DCD>London | United Kingdom
Old Billingsgate | Nov 5-6, 2018

Lightning plenary keynote: The rise of reference architecture – How is Uber responding to its unique computing requirements?
Rapid changes to the digital infrastructure landscape are largely driven by the demands of advances in high-tech applications such as cloud, IoT and AI. The data center and cloud infrastructure industry needs to keep up with the speed of R&D which is reshaping the world’s digital highways. Join this session as one of the world’s most renowned transportation service companies shares their reference architecture with our world – a common software term which serves as a template solution for an architecture for a particular area.
Dean Nelson | Uber

What are the building blocks required by the data center ecosystem when catering to the IT infrastructure needs of cloud and high-tech IoT applications such as autonomous cars? How do we approach on-prem vs. cloud vs. edge strategies to deliver on these?

bit.ly/DCDLondon2018

DCD>Colo+Cloud | Dallas
Hyatt Regency Dallas | Oct 30, 2018

Plenary panel: Where are the hyperscalers headed?
What are the drivers that determine where a hyperscaler plants its next huge data center campus? Does this mean other smaller players will inevitably follow to new locations, and how does it affect the plans of colo players across the USA?
Andrew Schaap, Aligned Energy | David Liggitt, datacenterHawk | Michael Lahoud, Stream Data Centers | Tag Greason, QTS

bit.ly/DCDDallas2018

DCD>México | México City
Expo Santa Fe México | Oct 3, 2018

DCD>Converged | Hong Kong
The Mira Hong Kong | Nov 15, 2018

DCD>Verticals eBook Series: Mission Critical IT in Retail
Digital transformation is affecting every sector. The latest in our DCD>Verticals series examines infrastructure in retail.
bit.ly/DCDRetailFocus

Keep up-to-date
Don’t miss a play in the data center game. Subscribe to DCD’s magazine in print and online, and you can have us with you wherever you go! DCD’s features will explore in-depth all the top issues in the digital infrastructure universe.

Subscriptions: datacenterdynamics.com/magazine
To email a team member: firstname.surname@datacenterdynamics.com
Find us online: datacenterdynamics.com | dcd.events | dcdawards.global



DCD>Debates
A new webinar format bringing the dynamism of our live panel discussions to a global audience

DCD>Debates: How is the data center responding to Industrial IoT demands? | Watch On Demand
For companies to store, process and act upon the vast quantities of data that a factory can create, they need extensive digital infrastructure. Our panel investigates the opportunities and risks of Industry 4.0.
Mark Bartlett, Arup | Mark Howell, Ford Motor Company | Victor Avelar, Schneider Electric
Content Partner:
Register here: bit.ly/DCDdebate

DCD>Debates: How does Edge computing transform enterprise data center strategy? | Watch On Demand
For more information visit bit.ly/SchneiderEdgeComputing

DCD>Debate: What does the next generation of hyperscale network architecture look like? | Watch On Demand
For more information visit bit.ly/NextgenHyperscale

DCD>Debate: What are the benefits of evaporative cooling designs for hyperscale? | Watch On Demand
For more information visit bit.ly/EvaporativeCoolingDesign

DCPRO
From industry-certified courses to customized technology training, including in-house development, DCPRO offers a complete solution for the data center industry with an integrated support infrastructure that extends across the globe, led by highly qualified, vendor-certified instructors in a classroom environment as well as online.

New Course Launched: M&E Cyber Security
DCPRO has developed a new two-hour online module that covers M&E Cyber Security fundamentals. The ‘Introduction to M&E Cyber Security’ course will teach students all the basic principles and best practices that prevent cyber-attacks from happening. Learn about Industrial Control Systems (ICS) and the influence that IoT has on them, as well as why Operational Technology (OT) is a huge target for breaches. The course also covers policies, regulations and organizations as well as prevention, defense, operation and monitoring.

2018 Course Calendar
Data Center Design Awareness – London | Sep 17-19
Energy Professional – London | Sep 25-27
Energy Efficiency Best Practice – Singapore | Sep 20-21
Critical Operations Professional – Singapore | Sep 24-26
Cooling Professional – Madrid | Sep 17-19
Energy Professional – Mexico City | Oct 18-19
For more course dates visit: www.dcpro.training

Take our free Data Center Health & Safety course today!
Safety should not have a price. Take our one hour online health & safety course for free today!
www.dcpro.training/dc-health-safety

DatacenterDynamics | DCDnews | DatacenterDynamics



PROBING THE UNIVERSE

The world’s largest particle accelerator, CERN’s large hadron collider, is getting an upgrade. So is CERN’s data center infrastructure, discovers Peter Judge

Peter Judge
Global Editor

The large hadron collider (LHC) sings. Its superconducting magnets hum and whistle as they fling particles at nearly light-speed around a 27km (17 mile) circular tunnel that straddles the Swiss/French border. But next year, the world’s largest particle accelerator will go silent.

In 2012 the scientists at CERN, Europe's nuclear physics laboratory, used the LHC to find the Higgs boson - the elusive particle that explains why matter has mass. Then, after increasing the energy of the LHC, they began to probe the Higgs’ properties.

The LHC works by smashing particles together, as fast and as hard and as often as possible, by crossing two precisely focused beams rotating in opposite directions around the underground ring. Sifting the shrapnel from such collisions proved the existence of the Higgs particles, and there are more to discover.

In 2019, it’s time for a second upgrade. The “luminosity” of the LHC will be increased tenfold (see box), creating 15 million Higgs bosons per year, after an upgrade process which will shut the particle accelerator down twice.

Luminosity
The high-luminosity HL-LHC upgrade will generate many more collisions. In the last three years, the LHC has generated 1.5 x 10^16 collisions, referred to as 150 “inverse femtobarns.” After the upgrade, it will produce some 60 percent more - a figure which could eventually increase to 4,000 inverse femtobarns. In other words, the current LHC setup produced around three million Higgs bosons in 2017, and the high-luminosity version will churn out 15 million each year - along with other potential discoveries. The HL-LHC upgrade will be installed in two Long Shutdowns, between 2019 and 2021, and 2024 to 2026.
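The arithmetic in the box above is simple to check. Below is a rough Python sketch; the Higgs production cross-section of around 55 picobarns, the ~50 inverse femtobarns recorded in 2017 and the ~270 per year assumed for the HL-LHC era are outside estimates used for illustration, not figures taken from CERN or from this article.

# Rough cross-check of the Luminosity box (a sketch, not CERN's own calculation).
# Assumed inputs: total Higgs cross-section ~55 pb at 13 TeV; ~50 fb^-1 recorded in 2017.
HIGGS_CROSS_SECTION_FB = 55_000.0   # 55 picobarns expressed in femtobarns

def expected_higgs(integrated_luminosity_inv_fb: float) -> float:
    """Expected Higgs bosons: N = integrated luminosity (fb^-1) x cross-section (fb)."""
    return integrated_luminosity_inv_fb * HIGGS_CROSS_SECTION_FB

print(f"2017 run, ~50 fb^-1:      {expected_higgs(50):,.0f}")     # ~2.8 million - 'around three million'
print(f"HL-LHC year, ~270 fb^-1:  {expected_higgs(270):,.0f}")    # ~15 million per year
print(f"Full HL-LHC, 4,000 fb^-1: {expected_higgs(4000):,.0f}")   # ~220 million over the programme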


Two of the experiments spaced around the ring, called ALICE and LHCb, aren’t waiting for the new LHC. When the installation goes quiet next year, they will upgrade their equipment to prevent scientific data going to waste.

LHCb is looking at the b or “beauty” quark to try and solve a puzzle in physics: what happened in the Big Bang to skew the universe so that it contains mostly regular matter, with very little anti-matter (see Box: baryogenesis)?

The LHCb experiment will be updated to capture more data, said Niko Neufeld of the LHCb’s online team. It already captures information from a million collisions each second - a rate of 1MHz. That’s an almost unimaginable feat, but LHCb is missing most of the action. When the experiment is active, there are 40 million collisions each second. The detectors can quickly select a million likely collisions to capture. But what is the team missing in the other events?

“Right now we trigger in an analogue way at 1MHz,” Neufeld said. The detectors quickly decide whether to take a picture of specific events. But LHCb wants to collect all the data, and search it more carefully after the event. “We want to upgrade the sampling to 40MHz, and move the decision to a computer cluster,” he explained. Instead of making a selection of collision data at the sensor, all information will be captured and analyzed.

ALICE, near the village of St Genis-Pouilly in France, is also doing fundamental work, recreating conditions shortly after the Big Bang by colliding lead ions at temperatures more than 100,000 times hotter than the center of the Sun, so quarks and gluons can be observed. As with LHCb, ALICE is currently losing a lot of potentially useful data, so it will be getting a similar upgrade.

These upgrades will cause certain fundamental problems. The detectors digitize the data, but vast quantities must be transmitted in fibers which gather readings from thousands of sensors. The data is encoded in a special protocol, virtually unique to CERN, designed to carry high bandwidth in a high-radiation environment. The transfers add up to 40 terabits per second (Tbps) carried over thousands of optical fibers - and Neufeld said the HL-LHC upgrade will increase this to 500Tbps in 2027. This kind of cable is fantastically expensive to run, so the experimenters have been forced to bring their IT equipment close to the detectors. “It is much cheaper to put them right on top of the experiment,” Neufeld explained. ”It is not cost-efficient to transport more than 40Tbps over 3km!”
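Those rates pin down roughly how much data each collision carries. Here is a short sketch using only the figures quoted above; the per-event sizes are derived estimates, and assume the full link budget corresponds to the 40MHz readout rate.

# What the readout figures imply per collision - a sketch using only the rates
# quoted in the article; the per-event sizes are derived estimates.
COLLISION_RATE_HZ = 40e6     # 40 million collisions per second, all read out after the upgrade
LINK_BUDGET_TBPS = 40.0      # aggregate fiber bandwidth for the upgraded LHCb readout
HL_LHC_LINK_TBPS = 500.0     # aggregate bandwidth Neufeld expects by 2027

def avg_event_size_kb(total_tbps: float, events_per_second: float) -> float:
    """Average event size in kilobytes, given aggregate bandwidth and event rate."""
    bits_per_event = (total_tbps * 1e12) / events_per_second
    return bits_per_event / 8 / 1e3   # bits -> bytes -> kB

print(f"Upgraded LHCb: ~{avg_event_size_kb(LINK_BUDGET_TBPS, COLLISION_RATE_HZ):.0f} kB per collision")  # ~125 kB
print(f"HL-LHC era:    ~{avg_event_size_kb(HL_LHC_LINK_TBPS, COLLISION_RATE_HZ):.0f} kB per collision")  # ~1,560 kB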

It’s not possible to fit servers below ground right next to the detectors, Neufeld said. Apart from the lack of room in CERN’s tunnels, there’s the problem of access. The environment around the LHC is sensitive, and access is fiercely guarded, so random IT staff can’t just show up to upgrade systems. So the computing is hosted above ground in data center modules the size of shipping containers, specially built by Automation Datacenter Facilities of Belgium. There’s only a 300m length of fiber separating them from the experiments.

Two “event building” modules handle LHCb’s I/O, taking the data from the fiber cables and piecing together information from the myriad detectors to reconstruct the events detected by LHCb. There are 36 racks in the I/O modules, all 800mm wide and loaded with specialist systems built for CERN by a French university, which translate the data from the CERN protocol to a regular form, and enable it to be collated into actual events.

The event data is transferred to four modules which search for interesting patterns, using general-purpose GPUs for number crunching. These contain 96 of the more customary 600mm-wide racks. All the racks in the modules are rated for 22kW, except for 16 of the compute racks (four in each module) which are rated up to 44kW. The total power drawn by all six modules is capped at 2.2MW. Meanwhile, the ALICE experiment will have four other modules for its own event filtering, with a similar total power draw.

40Tbps
data rate LHCb wants to capture

The modules could be placed anywhere, said Juergen Cayman of Automation: the company's SAFE MultiUnit Platform just needs a concrete base, power and water. At CERN, they sit between the cooling towers and the refrigerant systems for the LHC itself. The modular facility has a reliable power supply, but Neufeld points out there is no need for fully redundant power: “We are providing a power module with A and B feeds. There is a small amount of UPS on the ALICE system, but it’s built with no redundancy. If the electricity drops, there are no particles.”

Removing heat efficiently is not a problem in the near-Alpine climate of CERN. Modular data centers are cooled by air using Stulz CyberHandler 2 chiller units, mounted on top of the Automation modules, with indirect heat exchangers supplemented by evaporative cooling, and no compressors. The whole system will have a power usage effectiveness (PUE) rating of below 1.075.

Liquid cooling could appear in the future, but Neufeld isn’t ready to use it on the live LHCb systems. He’s evaluating a system from European liquid cooling specialist Submer (see box, p19) but thinks the technology won’t be necessary during the available upgrade window: “You have to use the technology that is appropriate at the time,” he explained.

The technology has to be solid, because Neufeld gets just one shot at the upgrade. The first two modules arrive in September 2018, and the rest of the technology has to be delivered and deployed while the LHC is quiet. Before the experiments resume in 2021, the new fiber connections must be installed, and the IT systems must be built, tested and fully working.

Physicists will be waiting with great interest. When the LHC finds its voice again in 2021, the LHCb and ALICE will be picking out more harmonies, making better use of the available data. Even before the luminosity increases, this upgrade could shine an even brighter light on the structure of the universe.

Baryogenesis - where did the anti-matter go?

According to relativistic quantum mechanics - our best theory to make sense of the Universe - every kind of particle has a corresponding anti-particle, with exactly opposite properties. No one has explained to everyone’s satisfaction why we hardly ever see anti-matter. If the Universe came from nothing in a Big Bang, the laws of symmetry and conservation would predict an equal amount of matter and anti-matter. Since Paul Dirac proposed anti-matter in 1928, physicists have been asking: where is it, exactly?

In 1967, dissident and physicist Andrei Sakharov proposed that the conservation of matter (the “baryon” number) could be violated under certain conditions, allowing so-called “baryogenesis”. LHCb is searching for collisions which could prove or disprove Sakharov’s ideas, and answer a question which is really very fundamental. Like our physical universe, we ourselves only exist because of baryogenesis.



CERN: THE SECRET SAUCE

Want to run the same infrastructure stack as CERN? You certainly can, says Max Smolaks

Max Smolaks
News Editor

For the past two decades, there has been plenty of cross-pollination between the worlds of open source, higher education and science. So it will come as no surprise that CERN – one of Europe’s most important scientific institutions - has been using OpenStack as its cloud platform of choice.

With nearly 300,000 cores in operation, digital infrastructure at CERN represents one of the most ambitious open cloud deployments in the world. It relies on application containers based on Kubernetes, employs a hybrid architecture that can reach into public clouds for additional capacity, and its current storage requirements are estimated at 70 Petabytes per year – with the storage platform based on another open source project, Ceph.

70PB
The amount of data stored by CERN each year

The OpenStack cloud at CERN forms part of the Worldwide LHC Computing Grid, a distributed scientific computing network that comprises 170 data centers across 42 countries and can harness the power of up to 800,000 cores to solve the problems of the Large Hadron Collider, helping answer some of the most fundamental questions in physics.

Today, CERN itself operates just two data centers: a facility on its campus in Geneva, Switzerland, and another one in Budapest, Hungary, linked by a network with 23ms of latency. According to a presentation by Arne Wiebalck, computing engineer at CERN, these facilities contain more than 40 different types of hardware.

“The data center in Geneva was built in the 1970s,” Tim Bell, compute infrastructure manager at CERN and board member of the OpenStack Foundation, told DCD. “It has a raised floor you can walk under. It used to have a mainframe and a Cray – we’ve done a lot of work to retrofit and improve the cooling, and that gets it to a 3.5MW facility. We would like to upgrade that further, but we are actually limited on the site for the amount of electricity we can get – since the accelerator needs 200MW.

“With that, we decided to expand out by taking a second data center that’s in Budapest, the Wigner data center. And that allows us to bring on an additional 2.7MW in that environment.

“The servers themselves are industry-standard, classic Intel and AMD-based systems. We just rack them up. In the Geneva data center we can’t put the full density on, simply because we can’t do the cooling necessary, so we are limited to something between six and ten kilowatts per rack.”

When CERN requires a new batch of servers, it puts out a call for tenders with exact specifications of the hardware it needs – however bidding on these contracts is limited to companies located in the countries that are members of CERN, which help fund the organization. Today, there are 22, including most EU states and Israel. “We choose the cheapest compliant [hardware],” Bell said.

CERN initially selected OpenStack for its low cost.


Cover Feature

Why CERN is not diving into liquid cooling For possible future use, the LHCb team is testing a liquid-cooled unit, Submer Technologies’ SmartPod, which can take power densities up to 50kW. Submer’s CEO Daniel Pope told DCD that future upgrades of the experiment may require the computing to be installed underground, even closer to the experiment: "They can be right next to the detectors." Down there, space is at a premium, so the high density would be important, and running separate chillers wouldn’t be practical. Also, the cooling systems of the LHC’s superconducting magnets could potentially remove heat from computing modules. Niko Neufeld of CERN's LHCb experiment is interested, but he’s not keen to use cutting edge technology in an upgrade where he has only one shot to get it right, and he doesn’t want to have his IT system underground, where the LHC’s technicians tightly control access. However, Neufeld is very keen to get his hands on the Submer units, which should arrive in September 2018, at the same time as the modular units intended for production use (see p17). The reason? The units can be installed above ground and used for extra capacity processing actual data from the LHC experiments. “I’m very pleased to support leading-edge European technology,” said Neufeld. “We want to test the viability of the hardware in a public area, and allow other experiments to get access to it.”

staff [numbers] and budget are going to be flat. This meant that open source was an attractive solution. Also, the culture of the organization, since we released the World Wide Web into the public domain in the nineties, has been very much on the basis of giving back to mankind. “It was very natural to be looking at an open source cloud solution – but at the same time, we used online configuration management systems based on Puppet, monitoring solutions based on Grafana. We built a new environment benefiting from the work of the Web 1.0 and Web 2.0 companies, rather than doing everything ourselves, which was how we did it in the previous decade. “It is also interesting that the implementation of high performance processes [in OpenStack], CPU pinning, huge page sizes - all of that actually came from the features needed in a telecoms environment," he added. “We have situations when we see improvements that others need, that science can benefit from.” While investigating the origins of the universe, the research organization is not insulated from the concerns of the wider IT industry: for example, at the beginning of the year CERN had to patch and reboot its entire cloud infrastructure - that’s 9,000 hosts and 35,000 virtual machines - to protect against the infamous Spectre and Meltdown vulnerabilities. Like any other IT organization, the team at CERN has to struggle with the mysteries of dying instances, missing database entries and logs that grow out of control. “Everyone has mixed up dev and prod once,” Wiebalck joked during his presentation. “Some have even mixed up dev and prod twice.” Everything learned in the field is fed back to the open source community: to date, the team at CERN has made a total of 745 commits to the code of various OpenStack

projects, discovered 339 bugs and resolved another 155. “Everything that CERN does that’s of any interest to anyone, we contribute back upstream,” Bell said. “It’s certainly nice, as a government-funded organization, that we

are able to freely share what we do. We can say ‘we are trying this out’ and people give us advice – and equally, they can look at what we’re doing and hope that if it works at CERN, chances are it is going to work in their environment.”
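Bell’s mention of CPU pinning and huge pages maps onto standard OpenStack Nova flavor properties. A minimal sketch, in Python, of how such a flavor could be described - the name and sizes below are illustrative assumptions, not CERN’s actual configuration:

    # Hypothetical compute flavor; hw:cpu_policy and hw:mem_page_size are the
    # standard Nova extra specs behind the CPU pinning and huge pages Bell mentions.
    pinned_flavor = {
        "name": "hpc.pinned.16",              # illustrative name
        "vcpus": 16,
        "ram_mb": 65536,
        "disk_gb": 40,
        "extra_specs": {
            "hw:cpu_policy": "dedicated",     # pin each guest vCPU to its own host core
            "hw:mem_page_size": "large",      # back guest memory with huge pages
        },
    }

An operator would apply these properties through whatever flavor-management tooling they use; the point is that the high-performance features Bell credits to telecoms requirements are exposed as ordinary scheduling hints on a flavor.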



Freedom from hardware Who pays for critical infrastructure, in an everything-as-a-service world? Peter Judge gets some thoughts from Vertiv’s CEO Rob Johnson


Critical infrastructure company Vertiv has taken a long hard look at its business and has come to a painful realization, according to CEO Rob Johnson: “We provide a necessary evil,” he told DCD. “People don’t want to own it.” That’s a tough pill to swallow, but it’s true. The company, which spun off from Emerson Electric in 2016, is justly proud of the products and services it offers, mostly in power distribution and backup. But, to the customers, those products are just a means to an end.

The customers want reliable computing. To get it, they have to have reliable power. So they will buy batteries, UPSs and power distribution if needed, but Johnson admitted in an interview: “No one wants to own the physical infrastructure.” They’d much prefer to have power, battery and UPS provided as a service. “We have to do business the way our customers want to do business.” “It’s the way of the world,” he explained. “Millennials don’t want to own anything. They want to pay for things in chunks as they use them.” It’s important to understand this, because it crystallizes a lot of other things, he said. Just like every other CEO, Johnson told us his company is moving more towards selling “solutions” and “focusing on the channel.” Those might sound like clichés but, in Vertiv’s case, there’s real meaning behind them. The solutions it develops are packages that start from customer needs, and its channel work is about finding the link in the supply chain which is best able to foot the bill and monetize the product. “We are doing some things today, working with telcos, where they pay a monthly fee,” Johnson said. “That is where the world is going.” With the rise of ride-sharing apps like Uber, and self-driving cars moving closer to reality, consumers are beginning to consider letting go of even the most high-status possessions - and data centers are moving even quicker. “Hyperscalers are doing deals with colos, so they don’t own the space,” he said. “They are counting on SLAs. Now that is moving to the M&E infrastructure as well.”

This model will never be universal, he said: “There will always be organizations like banks, that need to own the physical assets, because of regulations.”

This change is just part of several developments happening simultaneously. The most notable is the transformation of Vertiv itself, from a division of Emerson into an independent company. After 18 months, Johnson says there are real results: “Emerson was customer-focused, but our organization structure can take the innovation dollars and focus on solving real problems.” A big part of that change involves Vertiv adopting the same solutions which the industry is selling to its customers: the much-hyped road to “digital transformation.” Applying digital techniques to its own internal processes and customer interactions is delivering hundreds of millions of dollars in benefits, he told us. It also underpins the fundamental shift in customer relations, enabling the company to replace some of its products with pay-as-you-go services: “We traditionally had a direct sales model, but we want to be more channel-friendly,” he said, adding, “part of our transformation is to go from point products to solutions.” And Vertiv is driving its solutions further into the white space of the data center. “We haven’t participated much on the data center floor,” he said. This is set to change now, as Vertiv plans to provide the entire mission critical stack, standardized globally as far as possible. This model could effectively allow the company to provide a complete data center, or at least the supporting infrastructure, but it won’t sell data centers itself: “We use appropriate general contractors, not just looking at individual pieces, but looking at the entire system, from power to thermal, down to making the racks ready for servers, and power distribution to the racks.”


Peter Judge Global Editor


Meanwhile, the data center sector continues to be both the origin point and the victim of rapid change. Johnson sees three areas of interest set to shape the market: hyperscale and colocation are rapidly growing; and then there’s the much-hyped Edge sector. Starting with colocation and hyperscale, he says these fields are converging: “The two worlds were completely different,” he said. Roughly speaking, hyperscale was bigger than 50MW, and colo was anything smaller. But now, hyperscale providers are buying capacity differently. They may buy it from wholesale colocation providers, so the difference is truly eroding: “Over the last 18 months, we’ve seen hyperscalers doing half their capacity themselves, and half with local colos,” Johnson told us. "That’s a really interesting model to see - the hyperscalers will always own half the capacity, to keep the colos honest.”

The hyperscalers are also buying capacity in smaller chunks, because their products are becoming more speculative: “When Azure, Apple, Google and the rest want to launch a new product, they aren’t sure what the demand will be,” Johnson said. “So they want to build in bite-size chunks. They may only need two, three or four megawatts at a time.” The cloud providers also chase each other round the globe - so if Microsoft extends Azure to a new country, Google will want to be there too, and if it’s not there already, Amazon will also move in quickly: “They try to keep their plans secretive, but we work with all of them." With demand growing rapidly and in small increments, Vertiv is packaging up its infrastructure into building blocks, putting everything required for 3MW to 4MW of colo or hyperscale capacity on a single power skid. This pre-packaged approach has another benefit, one he returned to later in our talk: “There’s a huge shortage of talent, and one way of getting round this is to pre-engineer it in the factory, and ship it out for a final install.” With cloud and colocation covered, we turned to the Edge, bracing for a blast of the kind of evangelism that’s become somewhat customary, as vendors tell us that Edge capacity is a huge change and a giant opportunity.

To our surprise, Johnson was much more cautious: “We’re seeing the beginnings of growth in the Edge.” Like anything new, he said, the Edge will be over-hyped, then when it can’t live up to the hype, it will likely be under-invested in. Vertiv wants to avoid these extremes, by addressing the actual needs of digital companies. “The reality,” said Johnson, “is that the edge is happening.” But the industry needs to understand that this is not a single market: Vertiv, for example, believes there are four main Edge “archetypes” or variations on the Edge use case. Edge will appear in unexpected ways, Johnson said. For instance, traditional retailers trying to compete with a new generation of online businesses will use their brick-and-mortar sites for edge capacity, adding small-scale, high performance IT resources on-site. Retail is an Edge case which is actually taking off, he said: “As retailers upgrade their infrastructure, at retail stores as well as warehouses, they are using their presence to ship locally. And this means availability is becoming more and more important.”

Vertiv has to be on top of technology changes, and Johnson talked knowledgeably about changes in refrigerants, as well as the variety of lithium-based batteries on offer, along with the rules which regulate their use. Even though it’s a new chemistry with different safety requirements, he sees lithium as a big opportunity once those are understood: “It’s going to be great. It’s what we need to move to.” He also predicted that micro-grids will make some big strides, with more energy storage in data centers, and more sites will want to distribute power at medium voltage - and promised Vertiv will lead there. But Johnson's biggest concern is not the market, or the technology. It’s the skills, and getting the younger generation involved. “The industry is lacking talent for building and operating data centers,” he said. “We just don’t have enough youth coming forward and being trained. And those in the industry are coming up for retirement.” To this end, the company has several hundred interns around the world, but there is no magic bullet for the crisis. Johnson wants to raise awareness in schools, and amongst people who currently might not consider a career in data centers: “We are working hard to bring more diversity and more women into the product management areas.” Artificial intelligence also addresses some of the problems. “Without AI, we are not going to get predictive failure analysis,” he said. He wants augmented reality too, to help data center staff learn with AI guidance. This approach needs more and more sensors on the equipment, and multi-talented staff who can - with AI support - pick up tasks across a diverse range of hardware. In the end, Johnson wants AI to become a “trusted colleague,” but admits we are still in the early days of the technology.

"Millennials want to buy things in chunks, as they use them"

The other big Edge opportunity is for telcos to install equipment at cell towers and fixed line telephone exchanges. Both types of locations are giving telcos an advantage over traditional colo players, enabling them to become a local computing provider: “We’ve seen this in China,” Johnson said. “The traditional telco will take an exchange and convert it into space for local compute colocation, perhaps putting in a four to five rack data center.” Phone exchanges have a building but cell towers usually don't, so they need a different class of equipment adapted for purpose: “Cell towers need hardened data centers on the site, in a container or a small building.” Whether it’s at cell towers or fixed line exchanges, the telco Edge is something he welcomes: “We’ve been in the telecoms space for a long while.” And it's no surprise - Vertiv is set up to provide the DC power plant favored by telecoms providers: “We understand what to do with the DC power business, the AC power business and the combination thereof.” One major difference between the Edge and older architectures is the need for remote management and security. The Edge capacity in the field will have to be controlled from a distance, and operated in a lights-out environment, with only an occasional visit from an engineer. As a specialist equipment provider,

That’s a vast range of products and services to cover, and some of the areas could demand expertise in the form of acquisitions, like the recently purchased Energy Labs and Geist. More Vertiv acquisitions could be on the cards, he hinted, as they bring in useful new technology. "[Since the spin-off] we’ve really flattened our organization to move quickly and make quick decisions. We can decide where to spend our R&D dollars, and we don’t have to compete with the other parts of the organization." Overall, he said Vertiv is the right size to act “like a $4.4 billion startup.”



Entry Deadline September 14, 2018

> Awards | 2018

The Data Center Awards are Open for Entries!

Category 1: Living at the Edge Award
Category 2: The Infrastructure Scale-Out Award
Category 3: The Smart Data Center Award
Category 4: The Data Center Eco-Sustainability Award
Category 5: Energy Efficiency Improvers Award
Category 6: Mission Critical Innovation Award
Category 7: The Open Data Center Project Award
Category 8: Cloud Migration of the Year
Category 9: Data Center Operations Team of the Year – Enterprise
Category 10: Data Center Operations Team of the Year – Colo+Cloud
Category 11: Data Center Manager of the Year
Category 12: Design Team of the Year
Category 13: Industry Initiative of the Year
New!
Category 14: Young Mission Critical Engineer of the Year
Category 15: Corporate Social Responsibility Award
Category 16: Business Leader of the Year Award
Category 17: Outstanding Contribution to the Industry
New!

Charity Partner

Public voting For more sponsorships and table booking information, please contact: global.awards@datacenterdynamics.com

The Most Extreme Data Center on the Planet

www.dcdawards.global

Do you know an extreme Data Center? If so, please email us at: extreme@datacenterdynamics.com


> Edge | Supplement

INSIDE

Powered by

What is it?

The shape of it

The telco factor

The one percent

> While some say it is a location, others say applications will define it - but all agree it will be everywhere

> How putting processors close to end users will create new data hierarchies

> Are cell towers the prime data center locations of tomorrow, or a vendor’s pipe dream?

> Dean Bubley says the mobile network edge is important, but overhyped


GET THIS CLOSE TO THE EDGE When latency becomes a real issue and milliseconds of downtime equals a wreck, you can rely on Vertiv for reliable mission critical edge infrastructure.

Explore more solutions at: VertivCo.com/OwnYourEdge VertivCo.com/OwnYourEdge-EMEA ©2018. Vertiv and the Vertiv logo are trademarks or registered trademarks of Vertiv Co.


A Special Supplement to DCD August/September 2018

Contents

Powered by

Bringing the infrastructure to your door

Features
26-27 What is the Edge?
28-29 The shape of Edge
30-32 The telco Edge
36-37 The one percent Edge


Edge is coming to get you. Digital infrastructure is moving ever closer to users, sensors and intelligent devices. Tiny data centers will be popping up in cell towers, office buildings and the white goods in your kitchen. Or will they? According to the theory, all of these trending technologies have something in common. They cannot work without fast connections to local data storage and processing. So the industry is scrambling to deliver just that - or so we are told. This supplement aims to examine the reality behind the Edge hype. We explain what the Edge is (p26), what architectures it might use (p28), along with a look at the currently fashionable implementation - the telco Edge (p30). In recent news, HPE has promised to invest another $4bn in what it describes as the "Intelligent Edge." The budget is earmarked for data collection, and tools to translate that data into intelligence and business actions. HPE promises to deliver security, machine learning and automation products to support Edge computing, a strategy it has been working on for the past two years. HPE clearly sees this as an emerging market, not a set of products in a catalog. With the arrival of Edge, even old hardware has a new pitch, and there are plenty of containerized data centers aimed at the same niche (see Edge Contenders, p26-27).

The telco Edge is an interesting play. Cell towers and the "central offices" of the fixed telephone networks all have fast links to end-users' homes and devices. But there are hurdles. The industry has spent years aggregating resources into giant cloud data centers to save costs. Any Edge facility will lose the economies of scale, and each megabyte and core will cost much more there. The benefits of low latency must outweigh the extra overhead of a small unit of IT resource. It's also not clear what the business model will be. If the Edge resource is at the telco tower, will it be owned and operated by the telco, or by a conventional cloud or colo player, running a virtual facility across a vast number of locations? One class of company has already deployed in this manner: content delivery networks (CDNs). Their business model is based on boosting content to users, and turning that distributed content at the Edge into a protection against low speeds and network failures. Other business models will emerge, but for now the Edge is an existing niche for specialists, combined with a large theoretical market. It's still possible that market may not take shape as predicted. Both Google and Microsoft have an alternative vision, which pushes the Edge processing further out, into AI chips on the mobile devices which will consume Edge services. In the following pages, we look at how the big picture of Edge will emerge.
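To put the latency argument in rough numbers (the figures here are illustrative assumptions, not taken from the article): a signal in optical fiber travels at roughly two-thirds the speed of light, around 5 microseconds per kilometer one way, before any switching or queuing delay is added.

    # Propagation-only round-trip estimates for a few assumed Edge distances
    US_PER_KM = 5  # ~microseconds per km, one way, in fiber
    for distance_km in (2, 50, 1000):   # cell tower, metro colo, distant cloud region
        rtt_ms = 2 * distance_km * US_PER_KM / 1000
        print(f"{distance_km:>5} km -> ~{rtt_ms:.2f} ms round trip")

A 1,000km round trip costs around 10ms before the data center has done any work at all; that propagation floor is the gap that Edge deployments are trying to close.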



What is the Edge? Edge is more than a surge of hype. Tanwen Dawn-Hiscox talks to the pioneers to find just how real it is


Tanwen Dawn-Hiscox Reporter

"Data is now coming from the customer and being distributed from point to point."

Everything’s got to change, we hear. Computing has to move from big data centers to the “Edge” of the network, closer to users. Is this just a periodic surge of hype for a new buzzword, or is real change happening? As with all hype, we can predict two things. Firstly, Edge will change. The Edge that succeeds won’t be the Edge that is being debated now, just as the things you are doing with mobile data aren’t the things that were predicted ten or twenty years ago. And secondly, if and when Edge wins, you’ll never hear the term again, because it will be like air: all around, without needing a second thought.

What is Edge right now? According to some, it describes micro facilities at cell towers (see p28-29). Others say it represents responsive applications, wherever they happen to be. Today, we centralize computing in remote locations, to benefit from economies of scale. But as processing and content distribution requirements grow, compute will need to be placed ten miles, five miles or less from the end-user. “The smart thinker’s answer is that the Edge is at the telecoms base station,” Peter

Edge contenders

Hopton, the founder and CEO of liquid cooling systems provider Iceotope, told DCD. Usually “within a mile and a half to two miles from your location,” mobile network towers are also increasingly used for media content on phones, for smart vehicles, and for the sensors which make up the Internet of Things (IoT). But the Edge will go further than this. The next generation of mobile networks, 5G, is still being defined; it promises faster links - but over a shorter range. Cell towers are “going to be getting a lot closer,” as close as “hundreds of meters” away. Just as Green IT was a great marketing buzzword ten years ago, Edge is the marketing gold mine of the moment. But Hopton says it is a real trend, and those who grasp it will “come out on top.” Edge also helps handle the change in the way data circulates. “It used to be that everything was made in Hollywood and distributed to customers. We had huge downloads but small uploads,” Hopton explained. “With the growth of social media, everyone wants to be a YouTuber and upload

A selection of vendors who want a piece of the Edge

AWS

Compass

DartPoints

EdgeMicro

Iceotope

Snowball is a 20TB portable storage device designed to ship data to Amazon Web Services (AWS) data centers. It can now run EC2 cloud instances as an appliance intended for temporary Edge capacity.

Data center builder Compass Datacenters’ EdgePoint subsidiary makes 10-rack, 80kW micro data centers. Two 40 sq m demonstration units can be seen at its Texas headquarters.

Since 2014, early entrant DartPoints has been selling Schneider micro data centers custom designed for the Edge. The first customers are businesses and real estate owners in the Dallas area, the next target is telcos.

Aimed at the telco Edge, EdgeMicro takes Schneider containers and adds its own Tower Traffic Xchange (TTX) network technology. A demonstration unit was seen with Akamai servers inside.

The EdgeStation is a sealed, portable liquid-cooled system that immerses electronics in a dielectric fluid. It can run free-standing without a raised floor or air conditioning.



their own content. “Data is now coming from the customer and being distributed from point to point. It’s no longer from the core outwards, it’s from the outside back to the core and then back out again.” This new dynamic is likely to evolve further as new technologies emerge.

Developing and commercializing these technologies may be dependent on the Edge’s distributed infrastructure, but some distributed approaches already exist, and will be improved by it. Arguably the oldest Edge companies are content delivery networks (CDNs) such as Akamai. For them (see box), the Edge has evolved into a protective shield that keeps content safe against attacks and outages. For the newcomers, Edge is up for reevaluation. A group of vendors including Vapor IO, bare metal cloud provider Packet and Arm recently published a report detailing their understanding of Edge and defining its terms. The State of the Edge describes

[Chart: Edge use cases, plotted by power requirement (under 3kW to over 12kW) and location (office, warehouse, outside). Examples include financial modeling, on-demand 3D rendering for customers, a facial recognition program for pharmacy security, machine learning for robotic factory machines, AI for an unmanned shipping container, cold storage for account documents, collecting data from sensors, and expanding WiFi on a college campus. Source: Maggie Shillington, IHS Markit 2018]

the creation of Edge native applications, but points out that some apps may merely be enhanced by the availability of local processing power. Some companies already run applications that rely on distributed infrastructure: HPE, which recently announced a $4bn investment in Edge technologies, said this model is being used in large businesses and stadiums enabling WiFi connectivity; and in manufacturing environments, for predictive maintenance. For example, Colin I’Anson, an HPE fellow and the company’s chief technologist for Edge, said its IoT sensors and servers are used by participants in the Formula 1 Grand Prix for airflow dynamics: "There are rules from the FIA and they allow you to only have a certain amount of energy use, a certain amount of compute use. “We've purposed that capability for the IoT so we've got a low power ability to place a good server down at the Edge. We are capable then of running significant workloads.” On this basis, it’s clear that Edge is not a single thing, but a dynamic term, for a dynamic set of functions, delivered on a highly varied set of hardware.


Edge as a defense Akamai, the world’s largest content distribution network, was arguably the first company to succeed with an Edge service as we understand it. Using global request routing, failover and load balancing, it caches and offloads content close to end-users. James Kretchmar, the company’s vice president and CTO for EMEA and APJ, explained: “In the early days, we saw that the Internet was going to experience future problems as more and more users accessed heavier and heavier content, and that a centralized infrastructure wasn’t going to make that possible.” Early on, he told DCD, this largely consisted of “delivering video at really high quality bit rate," but now the distributed network also serves to “absorb the largest attacks on the Internet.” Distributed Denial of Service (DDoS) attacks are hard to defend against, because they come from all sides and can be colossal in terms of bandwidth: Kretchmar described a recent case in which the company defended a site against bogus traffic totaling a terabit per second. Operating a distributed network not only means that attacks can be absorbed, but that they can be “blocked at the Edge, before they get to centralized choke points and overwhelming something or getting anywhere close to a customer’s centralized infrastructure.” As well as DDoS defense, the company provides web application firewalls and bot detection tools at the Edge.


Rittal

Schneider

Stulz

Vapor IO

Vertiv

Rittal’s pre-configured Edge Data Center modules include climate control, power distribution, uninterruptible power supply, fire suppression, monitoring and secure access, holding OCP Open Rack and Open19 designs.

Schneider’s preassembled Micro Data Centers have shock resistant 42U racks, two UPS systems with a total capacity of 5kVA, switched rack PDUs offering a 230V AC supply and a Netbotz rack monitor 250.

The 48U micro Edge unit from Stulz combines direct-to-chip liquid cooling from CoolIT, air-cooling, and optional heat reuse. It is available for HPC and standard applications.

The six-rack circular Vapor Chamber, with 36U racks, is shipped in a custom-built 150kW container with three mini racks in the upper chamber for switches or low-power equipment.

Vertiv offers micro data centers designed for cell towers, but believes the Edge will not be a single opportunity, and offers compact UPS and cooling units for custom applications.



The shape of Edge

Tanwen Dawn-Hiscox Reporter

Not all of your Edge data has to hit the core. If you build it right, it could even save your phone battery, Tanwen Dawn-Hiscox finds


Edge applications imply that data is collected and acted upon by local users or devices, and that not all of this data has to travel to the "core." In fact, it is preferable if most of it doesn’t. To illustrate, Iceotope’s Peter Hopton gave the following example: “Imagine you walk into a room and there’s a sensor recording masses of data, such as ‘is there someone in the room this second.’ “Eventually, you turn that data into a bunch of statistics: ‘today on average so many people were in the room, these were the busy times of the room.’ That’s processed data, but the raw data is just bulk crap. You want to pull the gems out of that in order to make it usable.” In the case of autonomous cars, for instance, you might have “seven videos in HD and a set of sensor radar,” but “all you want to know is where are the potholes.” “So you take that data, you put it into a computer at the Edge of the network, finding potholes, diversions and traffic conditions, and reporting them back to the core. Next day other vehicles can be intelligent, learn, and change their rule set.” Similarly, HPE installed AI systems for a customer in the food and beverage industry, and placed servers at the Edge for the collection and observation of data, sending it back to the core for the learning part of the process. You can’t shift all the data to the core, whether centralized colocation or cloud data centers, Hopton explained, because of the limits of physics. “If you transmitted all that raw data from all seven cameras and that radar and sonar from every autonomous car, you would just clog the bandwidth.” He added: “There are limits for the transmission of data that are creeping up on us everywhere. Every 18 months we get twice

as much data from the chips using the same amount of energy because of Moore’s Law, but we’ve still got the ceiling on transmitting that.” For Steven Carlini, director of innovation at Schneider Electric’s IT division, putting compute at the Edge is “an effort from customers that want to reduce latency, the primary driver of Edge - in the public eye, at least.” A lot of the cloud data centers were built outside urban or very populated areas, firstly because infrastructure operators didn’t want to have to deal with an excess of network traffic, and secondly because it cost them less to do so. As the shift to the cloud occurred, however, it became clear that latency was “more than they would like or was tolerable by their users." “The obvious examples are things like Microsoft’s Office 365 that was introduced and was only serviced out of three data centers globally. The latency when it first came out was really bad, so Microsoft started moving into a lot of city-based colocation facilities. And you saw the same thing with Google.” As well as addressing issues of bandwidth and latency, the Edge helps lower the cost of transmitting and storing large amounts of data. This, Carlini said, is an effort on service providers’ part to reduce their own networking expenses. Another, perhaps less obvious argument for Edge, explained Hopton, is that as 5G brings about a massive increase in mobile data bandwidth, an unintended consequence will be that mobile phone batteries will run out of charge sooner, because they will be transmitting a lot more data over the same distance. The obvious answer, he said, is to make the data “transmit over less distance.”


“Otherwise, everyone’s going to be charging their phones every two hours.” However distributed it is, the Edge won’t replace the cloud. All of the results and logs will be archived using public infrastructure. “It’s not a winner takes all,” Loren Long, co-founder and chief strategy officer at DartPoints, told DCD. “Neither is the IT world in general.”
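Hopton’s room-sensor example boils down to a few lines of edge-side reduction: the raw per-second readings stay on the local node, and only summary statistics travel to the core. A minimal sketch, with invented data and field names:

    from statistics import mean

    def summarize_occupancy(raw_readings):
        # Collapse a stream of per-second occupancy samples into the "gems".
        return {
            "samples": len(raw_readings),
            "average_occupancy": round(mean(raw_readings), 2),
            "peak_occupancy": max(raw_readings),
        }

    raw = [0, 1, 2, 2, 3, 1, 0]           # hypothetical sensor stream, kept at the Edge
    summary = summarize_occupancy(raw)    # only this small dict is shipped upstream

The same pattern - filter and aggregate locally, forward only the result - is what the autonomous-car and food-and-beverage examples above describe at much larger scale.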

“The outlook that the Edge is going to compete with or take over the core is very binary”



“The outlook that the Edge is going to compete with or take over the core,” he said, “is a very binary outlook,” which he joked is probably only used “to sell tickets to conferences.” Long compared the situation to city planning, and the human body. Building residential streets, he said, doesn’t “lessen the need for highways,” nor does having capillaries at our fingertips reduce the need for large arteries. “So as the Edge grows, the core is going to continue to grow.” This doesn’t mean that applications at the core, or the cloud computing model, will remain the same, however. “But it’s not going to kill anything; everything is going to continue to grow.” Long talks about a stratification of processing, “where a lot more data analytics and processing and storage happens at the core, but onsite processing happens at the Edge.” “There’s no competition,” he said. “It’s all complementary.” While the cloud might continue to grow, HPE’s Colin I’Anson thinks the percentage of data processed at the Edge will increase.

Research firm Gartner agrees: according to a recent study, half of all data will be created or processed outside of a traditional or cloud data center by 2022. At the moment, Edge is tantalizingly poised between hype and implementation. Some say the telecom Edge is ready to go, while others point to problems (see article). We’ll leave the last word to the most optimistic provider, Schneider’s Carlini:

“What we’re seeing is a lot of companies mobilizing to do that, and we’re actually negotiating and collaborating with a lot of companies. We’re actually looking at and rolling out proof of concept 5G test site applications.” The opportunity is there, but it’s still taking shape. While that happens, everyone’s pitch is changing to meet the current best ideas about how Edge will eventually play out.

[Chart: Edge applications by product type, plotted by power requirement (under 3kW to over 12kW) and location (office, warehouse, outside). The rack and rPDU combinations range from a basic four-post open frame rack with a basic rPDU, through enclosures with electrical locking doors and metered-outlet rPDUs, and NEMA 12 enclosures (with or without cooling) with switched rPDUs or switched rPDUs with outlet metering, up to micro data center enclosures with cooling and monitored, metered-outlet or switched-with-outlet-metering rPDUs. Source: Maggie Shillington, IHS Markit 2018]



The telco Edge

Tanwen Dawn-Hiscox Reporter

Where will the Edge be implemented? For many people, it will be located at the telephone exchanges and cell towers which are close to end-users and devices, reports Tanwen Dawn-Hiscox


While the cloud made a virtue of abandoning specific geographic locations, the Edge is all about placing data and services where they are needed. But there’s an ongoing debate about where exactly that is. DartPoints co-founder Loren Long said we shouldn’t fixate on where the Edge is: “Edge isn’t a geographical location, it’s not pointing at cell towers and saying ‘that’s the Edge.’ “Edge is an instance. It is the place where the compute and storage is required to be to get the optimal performance. That could be a cell tower, it could be further back.” For Long, Edge computing isn’t new. He cites EdgeConneX, which deploys servers in conventional data centers across small cities for Comcast: “because that’s as far as Comcast needs to go.” Schneider calls this the regional Edge, and says it is mainly used for high bandwidth content delivery and gaming - which Steven Carlini, senior director of Schneider's data center solutions division, describes as a very large market, and also one which is keeping the desktop computing industry alive. The recent State of the Edge report proposes the following four principles to define the Edge: “The Edge is a location, not a thing; there are lots of edges, but the Edge we care about today is the Edge of the last mile network; the Edge has two sides, an infrastructure edge and a device edge; compute will exist on both sides, working in

coordination with the centralized cloud.” Others are less reticent about location. Several vendors are looking hard at cell towers, which are the confluence between the digital infrastructure and its users, both humans and connected devices. Cell tower Edge installations need the characteristics of the traditional data center - high availability, power and cooling - overlaid with network hardware that is the traditional domain of mobile operators. DartPoints is creating small data centers for just this niche. The company’s CEO Michael Ortiz says content providers and application providers will benefit from the so-called “telco Edge,” but the carriers will gain most, through new services like 5G, IoT and autonomous cars that “can’t live anywhere except the cell tower.” Schneider agrees that the next aim of Edge computing is to move even closer to the users, which can be on-premise, at the base of cell towers, but also bringing in the fixed-line operators' facilities, so-called “central offices.” Central offices were originally built decades ago by legacy telco providers to store analog telephony equipment. As the networks went digital, this was replaced by much more space-efficient digital equipment, leaving empty rooms with great power and networking, potentially ideal for local data center usage, or else using the CO as “a prime location to cache a lot of data.” In the Central Office Rearchitected as a Datacenter (CORD) initiative, IT and telecoms vendors

40,039

cell towers owned by Crown Castle in the US


aim to create definitions and standard platforms to make it easy to roll out data center services in these telecoms spaces, using technologies including software-defined networking (SDN) and network function virtualization (NFV). The closely related OpenCORD aims to offer this as open source. Central offices are set up in prime locations and have direct access to networks, “because they’re in the same building,” Carlini said. “So it’s kind of a win-win.” “We see this happening either before or in parallel with the cell tower build-out,” which Schneider predicts will happen soon, even though Carlini admits we are not seeing a massive effort yet. Caching data at cell towers, he said, will allow “5G to operate and deliver on its promises. “Information is going to have to be cached within a community - a small area that’s going to share that data where the 5G network is going to operate. “Whereas 4G operates in a very one-to-one relationship with the devices, 5G operates in a shared model where we have a bunch of towers interfacing with the devices and antennas instead of one.” Akamai is also considering equipment at the base of cell towers in the future - but this option is “more in an R&D phase right now,” explained James Kretchmar. Specifically, the company is exploring the use cases that would make it worthwhile, and weighing the pros and cons. How real is all this?


Schneider’s Carlini says he’s seeing this model emerge, but as a “continuation of the build-out of 4G.” Until the successor, 5G, comes along, there will be no urgent need to cache data at cell towers or in central offices en masse. “When 5G kicks in, that’s when you’re going to see a huge wave,” Carlini said. That’s a drawback, as the actual technologies 5G will be using are still being defined. Despite this, there’s real activity now, he assured us: “From a Schneider perspective, there are actual projects that we’re working on now.” In the long term, the arguments for cell tower and central office deployments are strong. The hurdles include developing the business model and sorting out ownership. In practice, building the mobile network Edge won’t be easy, but vendors persist in claiming it won’t present any problems. DartPoints’ Long recalls pitches along the lines of ‘you put it out there and it solves all your problems.’ “But that’s ridiculous,” he laughed. “Nowhere in technology has anything deployed itself that easily, and especially something as complicated as the Edge because a lot of things have to change from

the core out: all the networks, all the way to how routing cables are done today.” Companies may not be willing to make these changes, because they perceive them as threatening their competitive advantage over one another. “If you think about wireless carriers, AT&T, Verizon, Sprint, T-Mobile, they all fight about who has the best network. They see that as their unique differentiator.” And so, he said, “when you bring compute, storage, content and applications to the Edge, that are now on a completely different backhaul network, the unique networks each of the carriers operate disappear, and the carriers are almost relegated to antennas, so they may not necessarily be that excited.” If carriers were keen to adopt such a model, they would still need to collaborate

with companies up and down the stack to define it. First, Long said, businesses will need to stop “claiming” the Edge as their own. “In fact, it is very much going to be a micro ecosystem.” Carlini concurs: “At the cell tower, there are multiple stakeholders. There’s the landowners, there’s the tower companies that actually own the enclosures that the equipment goes in, there’s the service providers, and there’s even governments that are involved, regulating what can and can’t go into these sites.” Nor is there a standard process for deployment: “It’s not clear whether the equipment can go in the huts that they have there already or if there’s going to be prefab containers that we’re going to have to drop on site as an additional hut.”

“This market is going to be colossally huge, so we shouldn’t be fighting over it”



The latter option - placing modules on the property beside the cell tower - would complicate things more, he said, because “that’s when you run into a lot of these issues with government regulations and local jurisdiction on what can be there and what can’t. So that opens a whole ‘nother can of worms.” In any case, such an upheaval will mean that many mistakes are likely to be made: “This applies to our level, the tower operators, mobile operators, all the way up to the content and application providers.” Another potential issue is outlined by telecoms specialist Dean Bubley (p36-37): the power capacity at cell sites might not be sufficient to accommodate much compute. He argues that instead, device-based applications will offload workloads to the network, or cloud providers will distribute certain aspects of their applications. The network Edge, he stated, will serve as the control point for applications like security, and as an evolution of content distribution networks, but nothing more. Most of the compute-heavy applications will either be processed on device or in the cloud. Even content distribution specialist Akamai has its doubts: while cell towers and central offices may provide perfect locations for data offloading closer to users, and are being considered among “a number of different environments,” they have drawbacks, too: concessions on disk and storage space. “In some of these locations you’re going to have a smaller amount of disk space available, or storage space available than you would have in other spaces, so you’d be making a trade off for ‘which content does it make most sense to have there, that would get a benefit from offload that a lot of end-users would be requesting’ versus if you were to make a step or two out, more like the traditional Edge of today, then you have more disk space available, and you can take other trade-offs into consideration.”

What such a business model would look like is still unclear, and this will be the company’s final concern once it has identified which use cases are worthwhile “from any combination of making the user experience better, making it easier for the network operators by offloading the network, and making it better for our customers in being able to do that.” But physical expansion isn’t everything, and being in as many locations and geographies as possible is only one aspect of Akamai’s ambitions to improve its services; in parallel it is developing more efficient software to avoid having to multiply servers, despite increasing demand.

10%

The share of enterprise-generated data created and processed outside a traditional centralized data center or cloud in 2017

Rest assured, there is light at the end of the tunnel for the cell tower Edge. As DartPoints’ Long puts it, “nobody is going to own [the Edge], there’s not a single definition, there’s not a single implementation and this market is going to be colossally huge, so we shouldn’t be fighting over it.” Driving these decisions, he continued, will be the customers, all of whom are likely to have their own set of specifications to suit their needs. “Whether it’s Google, Microsoft, Amazon, Facebook, LinkedIn, Netflix,” content and application providers are most likely to “know where they need to go for themselves.” Customers will require modular, tailored solutions to deploy capacity at the Edge. Depending on their purpose, the configuration, deployment, security layer and redundancy requirements will vary. “The intent here is that our components may have similarities, but our solutions are very different.” Long said cell tower applications are “most likely to be a smaller containerized modular solution.” But it’s not a single solution: “Not all cell sites are cell towers; they’re called buildings with antennas on top,” and different products might meet the need for data centers in storage rooms or office blocks.

Living at the Edge Award DCD>Awards | 2018

Open for Submissions

From a rooftop in Manhattan to a car park in New Delhi, a factory floor in Frankfurt to a mobile mast in Manila, the Edge has many definitions. This updated award category seeks to celebrate the practice of building innovative data centers at the Edge, wherever that may be. bit.ly/DCDAwardsEdge


Lessons from history To understand what the Edge will entail, and what forms it will take, businesses would do well to learn from a previous technological transition telcos had to adapt to, Jason Hoffman, CEO of Deutsche Telekom subsidiary MobilEdgeX, said at DCD>Webscale this June. "If we look at what’s happened in mobile networks over the last 25 years, it started with people talking to each other, moved to messaging, and now we also consume video on it - the vast majority of network traffic today is video consumption." This pattern will be replicated in the new Edge world, he said, but with some crucial updates: "There's going to be the equivalent of messaging for this world but it's going to be between machines instead of human beings. "And then there’s going to be a video analog, and that's going to be a tremendous change, as it's going to be video coming into the network instead of going out. We'll have devices and machines out there generating video – automobiles, security cameras, body cameras, things like that." How we build the Edge, "from a network design to a base infrastructure to what type of fundamental capabilities will start to show up" will be defined by this, Hoffman believes. "And that’s going to be a major transformation for the industry. If you just think about how the telcos went from voice calls to video, that was a big deal."


4th Annual

> Colo+Cloud | Dallas October 30 2018 // Hyatt Regency Dallas

The future of digital infrastructure for Colo, Telco, Cloud & MSP EDGE FOCUS DAY

October 29 2018 Building the Edge

DCD>Debates How is the ‘Edge’ transforming the Data Center Services Sector?

Oct 2 11.00am CST

As a prequel to this event, we are hosting a live webinar discussion starring the conference keynote speakers. If you are unable to join us in Dallas, or want to get ahead of the game please join us. Watch the full debate:

bit.ly/DCDDebates

Principal Sponsor Lead Sponsors

To sponsor or exhibit, contact: alastair.gillies@datacenterdynamics.com @DCDConverged #DCDColoCloud

Datacenter Dynamics

For more information visit www.DCD.events

Global Content Partner

DCD Global Discussions


Advertorial: Vertiv

Enabling a Future of Edge-to-Core Computing The growth in edge computing represents one of the bigger challenges many organizations will face in the coming years.


The growth in edge computing represents one of the bigger challenges many organizations will face in the coming years. Cisco projects that there will be 23 billion connected devices by 2021, while Gartner and IDC predict 20.8 billion and 28.1 billion by 2020 respectively. These devices have the potential to generate huge volumes of data.

Of course, this represents only one side of the edge equation. While the amount of data being generated at the edge is growing, so too is the amount of data being consumed. According to the Cisco Visual Networking Index, global IP traffic is expected to grow from 1.2 zettabytes in 2016 to 3.3 zettabytes by 2021. Video, which accounted for 73 percent of IP data in 2016, is expected to grow to 82 percent by 2021.

The edge will play a role in both enabling the effective use of data from connected devices and in delivering data to remote users and devices. Part of the challenge will be one of scale: how quickly can we deploy the distributed computing infrastructure required to support these rapidly emerging use cases? But there is also another challenge to be considered. In many cases, the growth of the edge will require a fundamental shift from the current core-to-edge computing model, in which the majority of data flows from the core to the edge, to a model that reflects more interaction and more movement of data from edge to core.


Advertorial: Vertiv

To download the full report on edge archetypes, and access other edge resources, visit www.VertivCo.com/Edge

Taking a Data-Centric Approach to Edge Infrastructure Despite the magnitude of its impact, there exists today a lack of clarity associated with the term edge computing and all that it encompasses. Consider the example of a similarly broad term: cloud computing. When IT managers make decisions about where their workloads will reside, they need to be more precise than “in the cloud.” They need to decide whether they will use an on-premises private cloud, hosted private cloud, infrastructure-as-a-service, platform-as-a-service or software-as-a-service. That does more than facilitate communication; it facilitates decision making. Vertiv has attempted to bring similar clarity to edge computing by conducting an extensive audit and analysis of existing and emerging edge use cases. What emerged was the recognition of a unifying factor that edge use cases could be organized around. Edge applications, by their nature, have a data-centric set of workload requirements. This data-centric approach, filtered through requirements for availability, security and the nature of the application, proved to be central to understanding and categorizing edge use cases.

3. Machine-to-Machine Latency Sensitive The Machine-to-Machine Latency Sensitive Archetype, while similar to the Human-Latency Sensitive Archetype in that low latency is the defining factor in both archetypes, is even more dependent on edge infrastructure. Machines not only process data faster than humans, requiring lower latency, they are also less able to adapt to lags created by latency. As a result, where the cloud may be able to support Human-Latency Sensitive use cases to a certain point as they scale, Machine-to-Machine Latency Sensitive use cases are enabled by edge infrastructure. 4. Life Critical The Life Critical Archetype includes use cases that impact human health or safety and so have very low latency and very high availability requirements. Autonomous Vehicles are probably the best-known use case within the Life Critical Archetype. Based on the rapid developments that have occurred, and the amount of investment this use case is attracting, it is now easy to envision a future in which Autonomous Vehicles are commonplace. Yet, we’ve also had recent reminders of both the criticality of this use case and the challenges that must be addressed before that future vision becomes a reality. Once the technology matures and adoption reaches a tipping point, this use case could scale extremely quickly as drivers convert to autonomous vehicles.

About Vertiv Vertiv designs, builds and services critical infrastructure that enables vital applications for data centers, communication networks, and commercial and industrial facilities. For a more detailed discussion of edge archetypes, read the report, Four Edge Archetypes and their Technology Requirements.

P

2. Human-Latency Sensitive The Human-Latency Sensitive Archetype includes applications where latency negatively impacts the experience of humans using a technology or service, requiring compute and storage close to the user. Human-Latency Sensitive use cases fall into two categories: those which are already widely used but supported primarily by cloud or core computing, such as natural language processing, and those that are emerging, such as Smart Security and Smart Retail. In both cases, edge infrastructure will be required to enable these use cases to scale with the growth of the businesses or applications that depend on them.


The result of our analysis was the identification of four archetypes that can help guide decisions regarding the infrastructure required to support edge applications. These four archetypes are:

These four archetypes are described in more detail in the Vertiv report, Defining the Edge: Four Edge Archetypes and their Technology Requirements. They represent just the first step in defining the infrastructure needed to support the future of edge computing. But it is not one that should be understated. When we shared the archetypes with industry analyst Lucas Beran of IHS Markit, he commented that, "The Vertiv archetype classification for the edge is critical. This will help the industry define edge applications by characteristics and challenges and move toward identifying common infrastructure solutions." Edge computing has the potential to reshape the network architectures we’ve lived with for the last twenty years. Working together, we can ensure that process happens as efficiently and intelligently as possible.


Defining Edge Archetypes

1. Data Intensive The Data Intensive Archetype encompasses use cases where the amount of data is so large that layers of storage and computing are required between the endpoint and the cloud to reduce bandwidth costs or latency. Key use cases within this archetype include High-Definition Content Delivery and IoT applications, such as Smart Homes, Buildings, Factories and Cities. With bandwidth the limiting factor in Data Intensive use cases, these applications typically scale by the need for more data to improve the quality of service.




Martin Olsen, Vice president, global edge and integrated solutions E: Martin.Olsen@VertivCo.com VertivCo.com Martin Olsen brings more than 15 years of experience in global mission-critical infrastructure design, innovation and operation to his role as vice president of global edge and infrastructure solutions at Vertiv.


The one percent Edge

Dean Bubley Disruptive Analysis

Mobile edge devices, and nodes to support them, will represent less than one percent of the power of the cloud, says Dean Bubley


I keep hearing that Edge computing is the next big thing - and specifically, in-network edge computing models such as MEC. (See the box for a list of different types of "Edge"). I hear it from network vendors, telcos, some consultants, blockchain-based startups and others. But, oddly, very rarely from developers of applications or devices. My view is that it's important, but it's also being overhyped. Network-edge computing will only ever be a small slice of the overall cloud and computing domain. And because it's small, it will likely be an addition to (and integrated with) web-scale cloud platforms. We are very unlikely to see Edge-first providers become "the next Amazon Web Services, only distributed."

Why do I think it will be small? Because I've been looking at it through a different lens to most: power. It's a metric used by those at the top and bottom ends of the computing industry, but only rarely by those in the middle, such as network owners. This means they're ignoring a couple of orders of magnitude. Cloud computing involves huge numbers of servers, processors, equipment racks, and square meters of floorspace. But the figure that gets used most among data-center folk is probably power consumption in watts, or more usually kW, MW or GW. Power is useful, as it covers the needs not just of compute CPUs and GPUs, but also storage and networking elements in data centers. Organizing and analyzing information is ultimately about energy, so it's a valid, top-level metric. The world's big data centers have a total power consumption of roughly 100GW. A typical facility might have a capacity of 30MW, but the world's largest data centers can use over 100MW each, and there are plans for locations with 600MW or even 1GW. They're not all running at full power, all the time of course.

This growth is driven by an increase in the number of servers and racks, but it also reflects power consumption for each server, as chips get more powerful. Most racks use 3-5kW of power, but some can go as high as 20kW if power - and cooling - is available. So "the cloud" needs 100GW, a figure that is continuing to grow rapidly. Meanwhile, smaller, regional data-centers in second- and third-tier cities are growing and companies and governments often have private data-centers as well, using about 1MW to 5MW each.

So what about the middle, where the network lives? There are many companies talking about MEC (Multi-access Edge Computing), with servers designed to run at cellular base stations, network aggregation points, and also in fixed-network nodes. Some are "micro data centers" capable of holding a few racks of servers near the largest cell towers. The very largest might be 50kW shipping-container sized units, but those will be pretty rare and will obviously need a dedicated power supply.

The "device edge" is the other end of the computing power spectrum. When devices use batteries, managing the power-budget down to watts or milliwatts is critical. Sensors might use less than 10mW when idle, and 100mW when actively processing data. A Raspberry Pi might use 0.5W, a smartphone processor might use 1-3W, an IoT gateway (controlling various local devices) could consume 5-10W, a laptop might draw 50W, and a decent crypto mining rig might use 1kW. Beyond this, researchers are working on sub-milliwatt vision processors, and ARM has designs able to run machine-learning algorithms on very low-powered devices. But perhaps the most interesting "device edge" is the future top-end Nvidia Pegasus board, aimed at self-driving vehicles. It is a 500W supercomputer. That might sound like a lot of electricity, but it's still less than one percent of the engine power on most cars. A top-end Tesla P100D puts over 500kW to the wheels in "ludicrous mode." Cars' aircon alone might use 2kW. Although relatively small, these deviceedge computing platforms are numerous. There are billions of phones, and hundreds of millions of vehicles and PCs. Potentially, we'll get tens of billions of sensors. So at one end we have milliwatts. multiplied by millions of devices, and at the other end we have Gigawatts in a few centralized facilities.


Definitions of the Edge:
• Data-center companies call smaller sites in 2nd/3rd-tier cities the Edge.
• Fixed and cable operators think of central offices (exchanges) as mini data-centers at the Edge, or perhaps white-box gateways/servers on business premises.
• Mobile operators think of servers at cell-sites or aggregation points as the Edge. Some vendors pitch indoor small cells as the Edge.
• IT companies refer to small servers at company sites, linked to cloud platforms, as the Edge.
• Mesh-network vendors/SPs think of gateways or access points as the Edge.
• IoT companies think of localised controllers, or gateways for clusters of devices, as the Edge.
• Device and silicon vendors think of a smart end-point (eg a car, or a smartphone or even a Raspberry Pi) as the Edge.
• Some cloud players also have a "software Edge", such as Amazon's Greengrass, which can be implemented in various physical locations.


[Figure: The computing power spectrum, from device edge to hyperscale, on a logarithmic scale from 10mW to 1GW. The Edge (network-edge and device-edge) sits between the enterprise/regional and hyperscale domains, with the typical telco domain for computing/cloud at the upper end. Example points: EQSCALE vision processing chip = 0.2mW; Raspberry Pi Zero = 0.5W; GE Mini Field Agent IoT Gateway = 4W; Virtuosys Edge Platform = 30W; NVIDIA Pegasus = 500W; normal mobile cell-tower = 1kW; standard server rack = 5kW; EdgeMicro = 50kW; typical DC = 1-5MW; Switch Las Vegas = 300MW. Source: Disruptive Analysis Ltd, 2018]

The actual power supply available to a typical cell tower might be 1-2kW. The radio gets first call, but if perhaps 10 percent could be dedicated to a compute platform (a generous assumption), we get 100-200W. In other words, cell tower Edge-nodes mostly can’t support a container data center, and most such nodes will be less than half as powerful as a single car's computer. Cellular small-cells, home gateways, cable street-side cabinets or enterprise "white boxes" will have even smaller modules: for these, 10W to 30W is more reasonable.

Five years from now, there could probably be 150GW of large-scale data centers, plus a decent number of midsize regional datacenters, plus private enterprise facilities. And we could have 10 billion phones, PCs, tablets and other small end-points contributing to a distributed edge. We might also have 10 million almost-autonomous vehicles, with a lot of compute.

Now, imagine 10 million Edge compute nodes, at cell sites large and small, built into Wi-Fi APs or controllers, and perhaps in cable/fixed streetside cabinets. They will likely have power ratings between 10W and 300W, although the largest will be few in number. Choose 100W on average, for a simpler calculation. And let's add in 20,000 container-sized 50kW units, or repurposed central-offices-as-data-centers, as well. On these optimistic assumptions (see Box 2: Energy for the Edge) we end up with a network edge which consumes less than one percent of total aggregate compute capability. With more pessimistic assumptions, it might easily be just 0.1 percent.

Admittedly this is a crude analysis. A lot of devices will be running idle most of the time, and laptops are often switched off entirely. But equally, network-edge computers won't be running at 100 percent, 24x7 either. At a rough, order-of-magnitude level, anything more than one percent of total power will simply not be possible, unless there are large-scale upgrades to the network infrastructure's power sources, perhaps installed at the same time as backhaul upgrades for 5G, or deployment of FTTH.

Could this 0.1-1 percent of computing be of such pivotal importance that it brings everything else into its orbit and control? Could the "Edge" really be the new frontier? I think not. In reality, the reverse is more likely. Either device-based applications will offload certain workloads to the network, or the hyperscale clouds will distribute certain functions. There will be some counter-examples, where the network-edge is the control point for certain verticals or applications - say some security functions, as well as an evolution of today's CDNs. But will IoT management, or AI, be concentrated in these Edge nodes? It seems improbable. There will be almost no applications that run only in the network-edge - it’ll be used just for specific workloads or microservices, as a subset of a broader multi-tier application. The main compute heavy-lifting will be done on-device, or on-cloud. Collaboration between Edge Compute providers and industry/hyperscale cloud will be needed, as the network-edge will only be a component in a bigger solution, and will only very rarely be the most important component.

One thing is definite: mobile operators won’t become distributed quasi-Amazons, running image-processing for all nearby cars or industry 4.0 robots in their networks, linked via 5G. This landscape of compute resource may throw up some unintended consequences. Ironically, it seems more likely that a future car's hefty computer, and abundant local power, could be used to offload tasks from the network, rather than vice versa. Dean Bubley is founder of Disruptive Analysis www.disruptive-analysis.com.

Energy for the Edge
• 150GW large data centers
• 50GW regional and corporate data centers
• 20,000 x 50kW = 1GW big/aggregation-point "network-edge"
• 10m x 100W = 1GW "deep" network-edge nodes
• 1bn x 50W = 50GW of PCs
• 10bn x 1W = 10GW "small" device edge compute nodes
• 10m x 500W = 5GW of in-vehicle compute nodes
• 10bn x 100mW = 1GW of sensors
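As a rough check of the arithmetic in the box above, the categories can be totted up directly. The short Python sketch below simply encodes the article's own assumptions - the unit counts and per-unit wattages are Bubley's estimates, not measured figures - and computes the network-edge share:

# Back-of-the-envelope check of the edge-vs-cloud power estimates.
# All figures are the article's assumptions, expressed in watts.
ESTIMATES = {
    "large data centers":              150e9,           # 150GW
    "regional/corporate data centers":  50e9,           # 50GW
    "aggregation-point network edge":   20_000 * 50e3,  # 20,000 x 50kW = 1GW
    "deep network-edge nodes":          10e6 * 100,     # 10m x 100W = 1GW
    "PCs":                              1e9 * 50,       # 1bn x 50W = 50GW
    "small device-edge nodes":          10e9 * 1,       # 10bn x 1W = 10GW
    "in-vehicle compute":               10e6 * 500,     # 10m x 500W = 5GW
    "sensors":                          10e9 * 0.1,     # 10bn x 100mW = 1GW
}

total_w = sum(ESTIMATES.values())
network_edge_w = (ESTIMATES["aggregation-point network edge"]
                  + ESTIMATES["deep network-edge nodes"])

print(f"Total estimated compute power: {total_w / 1e9:.0f} GW")   # ~268 GW
print(f"Network-edge share: {network_edge_w / total_w:.1%}")      # ~0.7%

The result - roughly 268GW in total, with the in-network edge at about 0.7 percent of it - is consistent with the "less than one percent" conclusion above.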



GET THIS CLOSE TO THE EDGE When latency becomes a real issue and milliseconds of downtime equals a lost sale, you can rely on Vertiv for reliable mission critical edge infrastructure.

Explore more solutions at: VertivCo.com/OwnYourEdge VertivCo.com/OwnYourEdge-EMEA ©2018 Vertiv and the Vertiv logo are trademarks or registered trademarks of Vertiv Co.


Power + Cooling


The restless inventor The father of the lithium-ion battery believes he has invented a worthy successor. Others aren't so sure. Sebastian Moss reports

“I'm living on borrowed time,” John B. Goodenough told DCD. “So we have to live day by day. We never know what's going to happen.” At 96, the inventor of the lithium-ion battery could be forgiven for taking a break and enjoying the fruits of his labor. But he is still working, still trying to create a battery that will solve the crisis he first saw coming nearly 50 years ago. To understand Goodenough’s quest to build the ultimate battery, one must revisit key points in his life that worked together to push him towards this goal. The first came at the start of World War II, when Goodenough was studying classics and mathematics at Yale. Knowing he would be spending some time in the Army, the young academic was at a loss over what to do in life. Then he read Alfred North Whitehead's

seminal book Science and the Modern World, which analyzed the impact of scientific discovery on different historical periods. “I just had a feeling that what I was supposed to do was science,” Goodenough said. “I had no money - how was I going to go to graduate school? I didn't have the vaguest idea. But I knew if I had the opportunity I should go study physics.” When the war ended, he was given his chance with President Roosevelt's G.I. Bill which granted university stipends to help veterans readjust. “I was very lucky,” he acknowledged. After studying solid-state physics under Professor Clarence Zener, Goodenough was given a job at the Lincoln Laboratory of the Massachusetts Institute of Technology (MIT). There, he helped develop the SemiAutomatic Ground Environment (SAGE) system for air defense. Not only was he part of a team responsible for inventing random access memory (RAM), Lincoln also served as the next pivotal moment in his life. “It was interdisciplinary, so physics, chemistry and engineering were all involved together,” Goodenough said. “And that gave me the opportunity to really move in the direction of materials science and engineering. It's that opportunity to work

Sebastian Moss Senior Reporter

with chemists, physicists and engineers that really matured and developed me.” Unfortunately, after a few more years studying magnetism and ceramics, outside forces once again conspired to change the course of his life. In 1969, Congress passed Section 203, an amendment put forward by Senator Mansfield that forbade the use of military funds for research on projects that were not related to specific military functions. “There came a moment where they said ‘well you're in a laboratory that is sponsored by the Air Force, and the basic research that you're doing is not targeted to a mission. You can't do it anymore.’” This proved to be one of the most fortuitous lay-offs in modern history. After spending a little over a year working on a traveling-wave amplifier, “the energy crisis came.” Seeing people lined up at gas stations, Goodenough knew he had to work on something related to energy. “So that's why I turned to studying energy materials and then I was invited to go to Oxford. And I officially became a chemist at that point.” The 1970s energy crisis also inspired others. Early in the decade, Stan Whittingham discovered a way to diffuse lithium ions into



sheets of titanium sulfide, something that could be useful for building a rechargeable battery, and was given a job at Exxon. Lithium ions are able to quickly and easily intercalate within layers of titanium sulfide, a reversible process which could allow the creation of a highly conductive, rechargeable battery. With a huge budget, Whittingham’s team set to work on creating the first room-temperature, rechargeable lithium cell. They succeeded, but there was a catch - "within months they discovered they were having fires and explosions in their batteries," Goodenough said. "When you charged them, the lithium formed dendrites that grow across the electrolyte and give you an internal short circuit. And that causes fires and explosions. So, at that point, that program was stopped." In addition to dendrite issues, the titanium sulfide was expensive, hard to synthesize, and posed health risks, so research was needed to find alternative compounds, such as other layered oxides that could be used. Goodenough, asked by Exxon to join the team but having chosen Oxford instead, watched the program with interest. "I thought, now I've got students that need something to do, and I've been working on oxides for a while." So he decided to try to solve the problem. By 1980, he had an answer - to use LixCoO2 as the cathode for the battery. The lithium-cobalt oxide was more stable and lighter than other oxides, and soon proved a success. "That was the birth of the first lithium-ion battery, which was licensed to the Sony Corporation to make the first lithium-ion camcorder and launched the wireless revolution," Goodenough said. With that invention, and the subsequent boom in electronics, Goodenough became a much-lauded figure in scientific circles, receiving the Japan Prize, Enrico Fermi Award and National Medal of Science, to name a few. But he was not satisfied with his invention, nor subsequent innovations, such as the identification of LixFePO4 as a cheaper cathode material. "When we were doing it, we had in mind that the people at that time were developing wind farms in order to get wind energy, and photovoltaic cells in order to get solar energy and they needed batteries to store it or those technologies aren’t any good," he said.
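For readers who want the mechanism behind "intercalation," here is a simplified, textbook-style sketch of the half-reactions in a lithium-cobalt-oxide cell of the kind Sony commercialized (with a carbon anode; x is the fraction of lithium shuttled between the electrodes on charge and discharge - an idealization for illustration, not Goodenough's own notation):

\text{Cathode: } \mathrm{LiCoO_2} \;\rightleftharpoons\; \mathrm{Li}_{1-x}\mathrm{CoO_2} + x\,\mathrm{Li^+} + x\,e^-

\text{Anode: } \mathrm{C_6} + x\,\mathrm{Li^+} + x\,e^- \;\rightleftharpoons\; \mathrm{Li}_x\mathrm{C_6}

Because the lithium is stored inside layered host materials at both electrodes, rather than plated as metal, the cell avoids the dendrite growth that doomed the early lithium-metal designs.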

“The wireless revolution was an intermediate step which was very useful for what it did and I didn't anticipate how clever the electrical engineers were going to be in making all the things that you're now putting in your pocket… but it doesn't solve what we originally set out to do - this realization that modern society and its dependence on the energy stored in fossil fuels is not sustainable." Lithium-ion batteries have now matured to a point where they are viable for grid-scale energy storage solutions, although concerns still exist about how many charge cycles they can support, as well as supply chain issues, with both lithium and cobalt being relatively rare. Goodenough wants to improve on this, and also wants "to be able to make electric cars that are safe and that you can charge fast - not overnight." The current liquid electrolyte battery system needs to be charged slowly, and often with the help of an expensive battery management system - otherwise it can form dendrites. The issue is nowhere near as severe as in Whittingham’s original invention, but it still plagues batteries, and haunts Goodenough: “You’ve got to eliminate dendrites, you've got to eliminate the safety problem." Now, just four years away from his centenary, the inventor is convinced he has the solution, another trick that will remake the world. But this time, something is different. His work has not been hailed as game changing - instead, it has remained shrouded in controversy, with others in the field casting doubt. Goodenough's new battery relies on solid glass electrolytes that enable the use of an alkali-metal anode without the formation of dendrites. But the science behind the technology includes claims that it can be cycled 23,000 times without degrading, and that it gains capacity over time. These sound fantastical - and many in the industry say this is exactly what they are. "It is my belief that the claimed mechanism by which the 'Goodenough battery' is said to work is not supported by
the evidence given, and appears to violate the first law of thermodynamics," battery researcher Matt Lacey wrote in an extensive refutation of the new technology on his personal website. "Most of the claims related to the properties of this battery are also not supported by the available evidence." “There are skeptical people out there, because they don't understand,” Goodenough told DCD. “We did answer [the criticism] but they wouldn't allow our rebuttal be published. That's because of vested interests and competition. “We'll publish. I believe we've got a different route. And the problem is the electrochemists do not understand the physics well enough to know what's going on in their batteries. We have no problems. The point is we do know what we are doing.” Goodenough has set a timeline: "give us another year," he said. Even if the science behind the battery, which has both lithium and sodium variants, is sound, it requires a manufacturer to make it. “I have one company that has got government money, and I think they have the equipment and the people that are able to do what needs to be done. “Prototypes will be made, I am sure within a year or two at most.” The clock is ticking, but Goodenough remains hopeful. Should he fail, there are plenty of others who are trying to become the battery revolutionaries of the modern age. “This field is moving very rapidly and I don't know who's going to put together the best package," he said. "But I think there will be several different types of batteries that will break into the market within a few years that will use lithium or sodium as the anode and a solid electrolyte, at least for part of the electrolyte.” Should they succeed, they could join the pantheon of scientific visionaries that can be traced from Alessandro Volta, the inventor of the electrical battery, to Gaston Planté, inventor of the lead–acid battery, to John B. Goodenough. “I'm very happy with where we are, what we’ve done and what we're doing, I'm very optimistic. And if I don't do anything else, we've had a lot of fun learning some new physics.”

DCD>Debates How is the data center industry rethinking mission-critical power?

Sept 12 11am CDT

In this DCD>Debate, our expert panel looks at innovation in power management - whether at the edge of the network, applied to IoT, or in the hyperscale cloud. Listen to speakers from Raritan, Morgan Stanley, Exyte Group and DCD to learn more.



bit.ly/DCDDebatesPower


Advertorial: Moy Materials

Keeping a lid on the data center Moy Materials is the partner of choice on some of the largest data center roofing projects in the EMEA region, delivering a premium, world class service for over 25 years. Its global Factory Mutual Approved (FM) project experience and technical support set the benchmark for major multinational construction projects

Moy Materials offers a number of FM-approved waterproofing systems, each allowing for rapid deployment of individual system components, delivering a watertight roofing system at the earliest possible stage so as to allow for internal construction works to commence. Its innovative, long-lasting waterproofing solutions backed by Factory Mutual accreditations and its award-winning technical support have earned Moy Materials

a raft of high-profile clients and established it as a category leader in the field of data centers. Recognizing the crucial steps in delivering a successful data center project from pre-construction phase all the way through to the maintenance of its premium waterproofing systems, Moy Materials insists on regular communication with the wider design and construction team throughout a data center project’s lifecycle. Cathal Quinn (Director), Head of DC Projects for Moy Materials, recognizes “early preparation to be key to success.” Selecting the appropriate system and offering a bespoke design enables Moy Materials to meet all the necessary design objectives, reduce lifecycle costs and eliminate potentially costly legacy issues on such sensitive and high-value construction projects. Site monitoring and reporting by its team of roofing professionals ensures compliance with system installation requirements and provides a guarantee to the construction team that each system component is being installed as set out in its detailed specification and detailing package. Moy Materials’ wealth of experience in multinational data centre projects has attracted the trust of Project Architects across the globe. Mr Patrick Carney, Director

of RKD Architects, speaks positively of his experience of working with the company in the data center arena: “RKD Architects have established a strong relationship with Moy Materials, due to their commitment to the client’s needs and trust in them to continuously deliver high performance roofing systems on every project. “In the past two decades the collaboration between RKD, Moy and our mutual clients has resulted in over half a billion square meters of space covered, protecting over 500MWe of data for the respective end-users. With the roof on a data center you have to be sure, and have the confidence that Moy Materials bring to these projects allows all of us, from designer to client, to have that assurance.” Quinn explains the value of FM accreditations covering a variety of system options: “Moy Materials provide more than just waterproofing systems: we supply peace of mind, from the initial design meeting through the life-cycle of the roof, which could be up to 25 years. “The Moy systems are Factory Mutual (FM) Approved, something we have worked extremely hard on for over a quarter of a century. FM Approvals are by far the most difficult but also the most valuable accreditation available, testing and certifying products to withstand the toughest weather phenomena. We have 25 years’ experience in FM certification and testing and we are a European leader in this specialist market. “Moy Materials has an array of systems that have been tried and tested by their research and development team and take into account the design, construction and life-cycle of each data center project we work on, and we believe our bespoke technical approach and various FM-approved assemblies are unique in the European market.”

Contact Details London +44 (0) 1245 707 449 Glasgow +44 (0) 141 840 660 Dublin +353 (0) 1 463 3900 www.moymaterials.com dcenquiries@moymaterials.com DC Projects Lead cathal.quinn@moymaterials.com



Who’s afraid of cloud native? New software tools are remaking the data center but there’s no reason to panic, says Max Smolaks


Max Smolaks News Editor

The world of IT is changing at breakneck speed, and much of it is down to just one technology - application containers, and Kubernetes in particular. Designed for cloud computing, this approach has given birth to an entire software ecosystem under the banner of ‘cloud native,’ including projects like Prometheus, Fluentd, rkt, Linkerd and many others. But what exactly is cloud native? And is it going to automate data center jobs out of existence? Luckily, just a few months ago a helpful definition was developed (and argued over) by a team from the Cloud Native Computing Foundation (CNCF) – the non-profit organization tasked with the management of Kubernetes.

In a true open source fashion, it has been published on GitHub and remains a work in progress. It states: “Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach. “These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.” This movement emerged from the experiences of some of the world’s most successful digital businesses. “Over the previous decade, a lot of the younger, forward-moving companies have been


figuring this stuff out,” Dan Kohn, executive director of CNCF, told DCD. “Google is certainly the best known, but really, it’s everybody who tried to operate at a higher scale – Amazon, Twitter, Yelp, Pinterest, SoundCloud – all of them have come up with relatively similar solutions. Once they established that, it was somewhat natural for them to start putting all that learning into a set of open source projects, having common infrastructure underneath, and then they can move up the stack to innovate. I think it’s a really great thing that this innovation is now becoming available to anyone.” One of the important things to note is that going cloud native doesn’t necessarily require a public cloud – you can just as easily run Kubernetes in your on-premises data center, as long as it supports some form of private cloud architecture. “The key idea is that, rather than running an application on one machine and saying ‘OK, this is my database server, this is my


web server, this is my middleware server,’ you containerize all of your applications, establish a set of requirements for them, and then have them automatically orchestrated onto your machines for you," Kohn said. "It really is a paradigm shift in how you go about managing and deploying infrastructure. "Kubernetes is one of the fastest-growing open source projects in history – depending on how you count, it's basically number two behind Linux, and often described as ‘the Linux of the cloud.’ There is enormous momentum behind it and, in a lot of ways, it is becoming the default choice for people who want to migrate to modern infrastructure." "The whole ecosystem is moving faster," said Jacob Smith, co-founder and SVP of engagement at Packet – a public cloud service for application developers. "If you ask an IT person from 10 years ago, they would be making a big bet and investing in, let's say, VMware, for seven years on an amortization schedule. Kubernetes is barely four years old and yet it’s driving the strategies of all the top companies.”
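To make the declarative model Kohn describes concrete, here is a minimal sketch using the official Kubernetes Python client: the application is declared as containers plus resource requirements, and the cluster decides where to run the copies. The image name, replica count and resource figures are illustrative, and it assumes a reachable cluster with a local kubeconfig - a sketch of the idea, not a production manifest.

# Declare a containerized app and let Kubernetes place it on the machines.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at a cluster

container = client.V1Container(
    name="web",
    image="example/web-app:1.0",  # illustrative image name
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},  # what the app needs
        limits={"cpu": "500m", "memory": "512Mi"},
    ),
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state; the scheduler picks the nodes
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

If a node fails or load changes, the orchestrator recreates or reschedules the containers to match the declared state - which is the shift away from managing individual machines described above.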

The cloud native umbrella also includes CI/CD - continuous integration and continuous deployment - a term used to describe agile development methodologies, closely related to DevOps. "CI/CD or DevOps culture should be a goal for enterprises, regardless of whether they want to go cloud native or not," said Patrick Hubbard, ‘head geek’ and marketing director at SolarWinds. "Solving for how agile do you want to be is probably the first step that ends up making cloud more successful, because cloud native services are all designed to be managed by APIs, they are designed to be managed programmatically." Hubbard added that some cloud vendors talk about DevOps like it comes with an SKU – like organizations can pay for it, install it, and suddenly they have adopted an agile methodology. Unfortunately, that's not the case: like containers, working with DevOps requires additional skills, but also a structural change within the business itself - one of the reasons why it is still considered an alternative, rather than a standard. But should the old guard of the data center be worried about this sudden influx of new technologies? After all, they survive on their technical knowledge, and Kubernetes presents new and unfamiliar territory. "They should see this as an opportunity," Kohn said. "The skillsets build on each other. If you’re an experienced sysadmin and you’re comfortable configuring machines and working in Linux, it really is very feasible for you to apply almost all of those skills to this modern world of DevOps and microservices and containers. "Rather than burying your head in the

sand, my argument is look at some of the training courses on offer, spend the time learning this new space and you’re going to find – because there’s still Linux underneath and there are still servers underneath - all of your existing knowledge is still useful, there’s just some additional functionality that we are layering on top.” Hubbard had a similar opinion: “We’re finding that – when you look at our IT trends report, when you look at a number of other surveys – there seems to be this career reboot opportunity with cloud that we didn’t really have with virtualization. “Before VMs, you had a ‘rack and stack’ person in your data center who could terminate cables and make sure that the physical hardware was going in and out of the rack. That desk is now empty, and you use a contractor who comes in and pulls all of the Ethernet and fiber, and then they leave and you don’t see them for two years because you don’t need them anymore. “Learning to code, and I don’t mean being a developer, but being able to take a basic policy in your head and converting it into Python that can execute while you sleep is how people will have a job five years from now. “In SMBs and smaller operations, you will still have people who like using command line, who like managing on-prem resources, but I’m finding that there are a lot of people who have been in technology for 15-20 years and are discovering this ability to wield enormous resources that are API-delivered - especially cloud native services - and the ability to quickly transform, to scale provisioning up and down as necessary. “When they encounter it, they are almost reborn into this thing that they love; it’s what they came to do in IT in the first place.”
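What might the kind of policy-turned-into-Python that Hubbard describes look like? A minimal sketch, using only the standard library, could be a housekeeping rule of this sort - the log directory, thresholds and schedule are purely illustrative:

# A tiny "policy as code" example: free up space by pruning old logs.
import shutil
import time
from pathlib import Path

LOG_DIR = Path("/var/log/myapp")   # illustrative path
MIN_FREE_FRACTION = 0.10           # act when less than 10% of the disk is free
MAX_AGE_DAYS = 30

def enforce_policy() -> None:
    usage = shutil.disk_usage(LOG_DIR)
    if usage.free / usage.total >= MIN_FREE_FRACTION:
        return  # plenty of space - nothing to do
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for log_file in LOG_DIR.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()  # delete logs older than the cutoff

if __name__ == "__main__":
    enforce_policy()  # run from cron or a scheduler, "while you sleep"

The point is not the specific rule but the habit: once a policy is written down as code it can be versioned, reviewed and executed automatically across far larger estates than any rack-and-stack routine could cover.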



> London | United Kingdom

17th Annual

5-6 November 2018 // Old Billingsgate

Europe’s high-density expert meet-up for data center and cloud infrastructure Core Sponsor

Principal Sponsor

Lead Sponsors

To sponsor or exhibit, contact: alastair.gillies@datacenterdynamics.com @DCDConverged #DCDLondon

Free passes for qualified end-users and consultants

Datacenter Dynamics

For more information visit www.DCD.events

Global Content Partner

DCD Global Discussions


Advertorial: TIME dotCom

AIMS Data Centre: Redefining data centres in Malaysia & ASEAN

As technological advances and data trends continue to shape business operations, AIMS strives tirelessly to provide the international connectivity and enhanced infrastructure required for growth. With nearly three decades of industry experience, AIMS is well-placed to fulfil modern data centre demands thanks to its status as ASEAN’s leading carrier-neutral data centre and managed services provider. AIMS provides internationally certified data storage facilities and ancillary services, augmented by an unrivalled platform for inter-connectivity.

Data centre requirements are evolving. With bandwidth needs now driven by trends such as wearable technology, heavy-content applications, big data, artificial intelligence (AI) and others, organisations are viewing, building and planning their data centres differently. As a result, data centre providers need to keep pace with the times and anticipate customers’ needs.

International Footprint and Connectivity

Strategically headquartered in Malaysia since 1990, AIMS has emerged as the most densely populated communications facility in the ASEAN region. Besides the Malaysian cities of Kuala Lumpur, Penang and Cyberjaya, it also counts data centre presence in four other key regional locations with satellite centres in Singapore, Hong Kong, Thailand and Vietnam. Leveraging on parent company TIME dotCom Berhad’s strategic investment in an extensive international network, AIMS is able to link worldwide Points-of-Presence (PoPs) in a competitive package. This value-add enables AIMS to serve as a single platform for everything from data storage to international connectivity, evolving

into more than just a commodity data centre service provider. Customers can bypass the hassle of managing multiple vendors from data storage and interconnectivity to international networks, consolidating their IT architecture management into one centralised organisation. As the anchor site for the Malaysian Internet Exchange (MyIX), AIMS hosts all domestic and over 80% of the foreign telecommunication carriers based in Malaysia, with close to 100 peering partners in AIMS. Its carrier-neutral status allows it to extend services from all local telecommunication providers to its customers, granting customers the freedom and flexibility to choose the services they want. This unrivalled connectivity builds a vast and dynamic ecosystem of networks, carriers and IT service providers, to which AIMS customers enjoy instant and direct access. AIMS’ client base is extremely diverse, hailing from industries that range from financial services and telecommunications to social media and e-commerce and many more.

Looking to the Future AIMS thinks that content-intensive applications, online services and cashless transactions will be the game changers for how businesses operate in Malaysia in 2018 as they signal an increase in data and analytical potential for businesses to discover. The growing conversation around the importance of data and how it will be managed positions data centres as an increasingly crucial success factor for business in the age of information. Businesses will demand more comprehensive and innovative data centre services to meet their needs. It is a challenge that AIMS is determined to rise to in keeping with their motto: Designed to Adapt, Built to Last.

Contact Details www.aims.com.my marketing@aims.com.my 1-800-18-1818 (Malaysia) or +603 5021 2122 (International)


Taking it to the extremes This year a new category in DCD’s Awards will honor the most extreme data centers on the earth - or beyond. Peter Judge found it’s a tricky subject

Each year, DCD’s industry awards invite the industry to nominate the best projects, teams and practitioners in the world, and a team of judges decides on the winners. Last year we asked our readers to decide one category themselves, and it was an interesting experience.

In 2017, the voting category was Beautiful Data Centers. Readers contacted us to nominate facilities which not only did their job, but added looking good into the bargain. The winner was a supercomputer in a medieval church.

This year, we’d like you to tell us about extreme data centers: facilities that operate from places no one would expect, like in outer space, a battlefield, or the Arctic wastes; or sites which have some ability which makes them tougher or harder than any other. To start the ball rolling, here’s a short list of suggestions, featuring nuclear bunkers, bomb-proof sites, and data processing in hot climates and high altitude.

To boldly go
In 2017, HPE built a Spaceborne Computer, using off-the-shelf equipment, to go to the International Space Station (ISS). A rugged casing - and some software designed to react to external conditions - is all it takes to make a system which can continue to operate in spite of radiation, solar flares, subatomic particles, micrometeoroids, unstable electrical power and irregular cooling. Courtesy of SpaceX and the ISS, the system will experience the same conditions as a mission to Mars.

Peter Judge Global Editor


Nuclear option
Udomlya in Russia has a data center with a highly reliable power source - though it’s in a location which might make some think twice. It’s in the town’s nuclear reactor. The site will have some 10,000 racks, and the





nuclear utility, Rosenergoatom, has reserved up to 80MW of the power station’s constant 4GW output.

Neutrinos at the South Pole
The IceCube Neutrino Observatory is a huge particle detector buried 2.5 kilometers below the South Polar ice sheet. It has detected high energy astrophysical neutrinos, ushering in a new era of astronomy. The observatory needs sophisticated data processing on site: it has 1,200 computing cores and three petabytes of storage, and communicates with the rest of the world via the Iridium satellite network.

Bomb-proof facility
CenturyLink’s data center at Moses Lake, Washington, was a command center for the US Titan missile defense program, and was designed to withstand a 10 Megaton detonation a quarter of a mile away. To live up to its setting, the data center installed there is designed to N+1 concurrently maintainable standards - and it also has renewable power, with 85 percent of its needs supplied by hydroelectric dams on the nearby Columbia River.

Going underground
125 feet beneath Kansas, Cavern Technologies has more than 300,000 sq ft of data center space, in a colossal complex of abandoned limestone mines which now holds a huge range of businesses, along with archives including original Hollywood film negatives and John Kennedy’s autopsy report. The limestone caves in the Meritex Business Park, Lenexa, are protected from natural disasters and provide a fully contained environment with self-closing doors.

Not so extreme?
The category opened up a philosophical issue. Are these data centers really "extreme?" All of them provide a safe and secure environment for their servers but, to do that, they have to "normalize" their chosen environment, so their servers have conditions as good as those provided by conventional facilities. In many cases, they also have to operate without any hands-on maintenance. The atmosphere inside the data center, in other words, is anything but extreme.

Two possible contenders, whose data center projects have surmounted huge engineering challenges, surprised us by opting out: “We don’t see Project Natick as extreme,” said Ben Cutler of Microsoft, who has now placed two separate data centers in chilly water at the sea bed. We had a similar response from Nautilus, whose idea is to float a data center on a barge, tethered to the shore or river bank, fed by power and data cables and cooled by the water beneath it.

“Placing a data center in the ocean is the opposite of extreme,” said Cutler, pointing out that Natick’s subsea location allowed his team to provide constant cooling, and a sealed, low humidity, oxygen-free environment for the servers. The servers are also completely safe from EMP events. Nautilus made a similar point.

This is a category for reader nominations. Tell us about data centers you consider to be extreme, and we’ll add them to a list. But you can’t vote for Project Natick or Nautilus.

DCD>Awards 2018 The Most Extreme Data Center

Nominations open

Do you have an extreme data center in mind? Are you aware of, or part of, a facility that dares to do something different and looks to push the boundaries of possibility? Nominations are open now - just let us know at extreme@datacenterdynamics.com. Public voting opens on October 1st. For more information, visit the link below: bit.ly/DCDExtremeShoutOut



Growth in Singapore

Peter Judge Global Editor

A round-up of the latest from APAC's leading data center hub, from Peter Judge and Paul Mah

Singapore is a crucial hub for Asia, with more than 230,000 square meters (2.5m sq ft) of data center space. And it's not standing still. Cloud giants and colo providers are moving fast to match escalating demand from users.

Google announced its third data center in Singapore this August, looking to scale up capacity for its services in the region. The announcement came hot on the heels of the news of a new availability zone for Google Cloud Platform, offering segregated mechanical and electrical systems for greater reliability. The new multi-story facility will be located in Jurong West, down the road from the company’s two existing data centers. It is expected to come online in 2020. The building will use machine learning technology to reduce energy consumption, and recycle 100 percent of its waste including heat and water. The new facility will be built on a plot of land the size of the first two data centers combined, so it will double the company’s footprint in the Asian business hub. Those two earlier data centers were launched in 2013 and 2017. The expansion will bring Google’s long-term investment in infrastructure in Singapore to US$850 million - from which we calculate the cost of the third facility to be $350 million, taking into consideration a statement in which the cloud giant had estimated the cost of its previous data center in the country at US$500 million. Could this new data center support a further availability zone? The company wouldn’t tell us, but it’s not that likely. Only one Google Cloud region in the world currently has four zones, and it is located in Iowa, United States. Google says the third data center will support rapid growth of the digital economy across the whole of Asia, including nearby countries such as Indonesia and India, where usage is growing “really quickly.”

But if that’s the case, why build this data center in Singapore and not in Indonesia or India, given the far larger population base in those Asia Pacific countries - and the fact that Google already has Singapore covered? “[Google chose Singapore] because of the strategic location here. Having a stable infrastructure in place is something we look out for. It has also been a home for our regional data centers for a long time,” Google spokesperson Angeline Leow explained, pointing out that Singapore also has a highly skilled workforce. “In the last year alone, we’ve expanded our cloud infrastructure in Asia. This includes opening Southeast Asia’s first Cloud Platform Region in Singapore and launching our third Cloud Platform Zone to provide companies in the region with greater reliability and faster access to our products and services. With our third data center in Singapore, we hope to build on this momentum to help more businesses benefit from our cloud services,” Rick Harshman, the managing director of Google Cloud in the Asia Pacific and Japan, said. According to Google, new customers that have hopped onto its cloud include Singapore Airlines, logistics firm Ninjavan and travel search engine Wego. China Mobile announced its first data center in Singapore in July. It will be China Mobile International’s (CMI) second facility in the wider region, following the global network center (GNC) located at the Tseng Kwan O industrial estate in Hong Kong. The Singapore facility will be built at Tai Seng, with a gross area of 7,330 square meters (79,000 sq ft) and a total capacity of 2,100 racks. The facility is understood to be currently undergoing Tier Certification with Uptime Institute for both Design and Construction. The data center will be interconnected with other China Mobile sites to offer telecommunications solutions and connectivity services such as data, voice and cloud computing.


Where do Singapore users put their IT?
• Colocation - 38%
• In-house - 26%
• Public cloud - 16%
• Managed service - 11%
• Other options - 9%
Source: DCD Intelligence


Both AWS and Google claim Singapore Airlines as a customer. AWS also mentions Air Asia, DBS Bank and the Genome Institute of Singapore. Worldwide, Google has 15 Google Cloud Platform (GCP) regions, while AWS has 16. Global Switch has turned to Gammon Construction, the joint venture between infrastructure giant Balfour Beatty and British conglomerate Jardine Matheson, to build a data center in Singapore. The S$253 million (US$189m) project will see Gammon build a six-story facility with ten data halls in the northern region of the island. The data center is due to open in 2018, offering 25,000 square meters (270,000 sq ft) of technical space, supported by 30MVA of utility power. Aiming for a LEED Gold rating and a Platinum BCA Green Mark, it has a design power usage effectiveness (PUE) of 1.38. The Singapore Woodlands data center will be the first building on the island to adopt prefabricated mechanical, electrical and plumbing (MEP) techniques on a large scale. Sixty percent of the MEP elements will be put together off-site, and over 70 percent of the facility’s structure has been precast.
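For context, power usage effectiveness is simply the ratio of total facility power to the power delivered to IT equipment, so a design PUE of 1.38 implies roughly 0.38W of cooling, power-conversion and other overhead for every watt of IT load. A quick illustrative calculation - the 20MW IT load below is a hypothetical figure, not a Global Switch specification:

# PUE = total facility power / IT equipment power
it_load_mw = 20.0     # hypothetical IT load
design_pue = 1.38     # the facility's stated design PUE
total_facility_mw = it_load_mw * design_pue
overhead_mw = total_facility_mw - it_load_mw
print(f"Total facility power: {total_facility_mw:.1f} MW")  # 27.6 MW
print(f"Non-IT overhead: {overhead_mw:.1f} MW")             # 7.6 MW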

China Mobile currently owns parts of several submarine cable systems in the APAC region, including SJC, APG and SMW5, as well as the upcoming SJC2. It is planning at least four other submarine cable projects linking Asia Pacific, Eastern Europe, North America and Oceania. “CMI Singapore data center, being the first CMI overseas data center commencement of construction, marks the kickoff of CMI’s global data center deployment,” said Dr Li Feng, chairman and CEO of CMI, citing Singapore’s advantageous location as an economic, financial and shipping center. Microsoft is putting solar panels on rooftops, with Sunseap Group. The 60MWp (megawatt-peak) distributed solar project is the largest solar deal in Singapore. The 20-year initiative will put panels on hundreds of rooftops across the nation-state, leasing space and exporting energy to the national grid. Microsoft will buy 100 percent of the renewable energy for its Singapore data center, which delivers a variety of cloud services including Azure and Office 365. The Singapore government has backed green IT since 2014, but the state has very little wind and hydropower. The government wants Singapore to get five percent of its power from tropical sunshine by 2020, but the country has a lack of land area for large-

scale solar farms. In 2016, looking for somewhere else to put solar panels, Singapore launched the world’s largest floating solar energy test-bed, which could be replicated in the 17 reservoirs that make up a significant part of the country’s surface area. Amazon Web Services (AWS) is not falling behind. Three months before Google’s expansion, AWS had already opened its third availability zone in Singapore. Both cloud providers say their latest efforts will make it easier for customers to build highly available services in the region, by increasing the number of zones in Singapore, all of which are served from third party data centers, rather than facilities built and owned by AWS.

Ask Engie - French utility Engie’s energy efficiency innovation center in Singapore is offering a service which can spot problems in digital infrastructure before they cause any damage. The service uses analytics and machine learning, and is based on Engie’s Avril Digital platform. The service has been launched in Singapore because the city-state is the top Asia Pacific location for data center operations, predicted to overtake Europe by 2021. Despite this, Pierre Cheyron, CEO of Engie Services Asia Pacific, says: “Most data centers in Singapore were designed and constructed without sustainability and energy conservation in mind.” Cheyron believes that the Engie initiative will help data center operators to comply with requirements such as the BCA-IMDA Green Mark for Data Centers scheme, launched by Singapore’s Infocomm Media Development Authority (IMDA) in November 2017.

DCD>South East Asia | Singapore

Sept 11-12 2018

DCD>South East Asia will cover the full ecosystem, from how data centers are being redefined by the economics of digital business, to how IT and data center service delivery are being reshaped. Supported by associations and statutory boards such as IMDA, IASA and itSMF, the two-day event will gather 1,300 IT experts, and feature more than 100 hours of conference tracks across three halls. bit.ly/DCDSingapore2018



Colo + Cloud

Glory be to the server

“Why do you all keep doing this to yourselves?” Jamie Zawinski (jwz) on cloud computing

Ah, the humble server – the foundation of our digital world. These beautiful machines, full of precious minerals and dressed in sheet metal, are architected down to a nanometer and embody some of the most advanced scientific principles known to man. They are the Morlocks to our Eloi, hidden away in the dark corners of the world, working endlessly for our benefit. They keep us nourished and clothed - but fortunately don't try to eat us.

Imagine my dismay when I'm reading about serverless computing – the latest trend in public cloud infrastructure. Of course, it still requires servers, since you can't conjure up processing capacity out of thin air. The term refers to simplified cloud services called 'functions' or 'actions' that are so far removed from the underlying infrastructure that you'll never have to think about provisioning, scaling, and managing any hardware. All you need to do is configure an API, and suddenly you'll have a distributed, auto-scaling database with pay-as-you-go pricing.

The term 'serverless' entered mainstream consciousness around 2016, and was popularized by Amazon Web Services. Microsoft, Google, Oracle and IBM are all offering their take on this novel idea, and the Apache Foundation has adopted OpenWhisk – a serverless framework originally developed at IBM - as one of its Incubator projects.

But here's my problem: calling it 'serverless' suggests that this model doesn't require operations and facilities people; no servers also means no floor tiles, no power, no cooling, and here at DCD we happen to hold all of these things in quite high regard. Calling it 'serverless' is a disservice to the hard-working men and women who actually run the infrastructure that provides us with entertainment, stores our money, educates our children and makes the trains run on time. Downplaying the importance of servers in delivering those vital services is an affront to hardware designers and engineers. And finally, calling it 'serverless' is just plain confusing – quite a feat in a field full of meaningless buzzwords. I hope it is one of those terms that will eventually be consigned to the dustbin of history, along with unified inboxes, netbooks, phablets and microservers.

Next time you need to talk about serverless computing, just say Functions-as-a-Service (FaaS) - a broadly synonymous term that is also self-explanatory. This industry frequently struggles to maintain a positive public image, so be sure to mention your servers whenever you can: at work, while having lunch, in public transport, even in church. Talk about the number of cores, and the great things they do; talk about the storage, and all the wisdom contained therein. Be proud of your servers.
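For anyone who has not met the model, a "function" in FaaS is typically nothing more than a handler the platform invokes on demand, scales, and bills per invocation. A minimal AWS Lambda-style sketch in Python - the event shape and the greeting logic are illustrative, not any one provider's required format:

import json

# A minimal Functions-as-a-Service handler: the platform provisions,
# scales and bills it per invocation - no servers for the developer
# to manage (though plenty of servers for someone else to run).
def handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }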

Max Smolaks News Editor



PRO

Mission Critical Training

Data Center M&E Cyber Security 2 hour online course www.dcpro.training/cyber-security

Whilst most organizations have a dedicated team for network security, facilities’ security concerns often fall between the gaps. New regulations in the financial services sector have put a spotlight on the need for more education. Security breaches are happening more and more often. Don’t be next! www.dcpro.training | +440203771907 | info@dc-professional.com


Best server chilled CyberAir 3PRO from STULZ stands for maximum cooling capacity with minimum footprint. Besides ultimate reliability and large savings potential CyberAir 3PRO offers the highest level of adaptability due to a wide range of systems, variants and options. www.stulz.de/en/cyberair-3-dx

