DCD>Magazine Issue 25


December 2017/January 2018 datacenterdynamics.com

CHILDREN OF THE COLD WAR

Picking up good regulations

Building the edge

Cables in depth

The EU’s data center efficiency standard, 10 years on

A huge infrastructure project requires someone to foot the bill

A special supplement all about copper and fiber cables


Connect to improved inventory management.

Jumper in a box is an innovative packaging solution holding up to 70 jumpers in a small, stackable footprint. With a selftracking inventory feature and bold, color-coded part identification, it’s never been easier to grab what you need. Options include single-mode and multimode LC uniboot configurations in a variety of lengths up to 20 m.

Are You Corning Connected? We pack. You stack. It tracks. It’s that easy! Visit booth #8 at DCD London for more information. © 2017 Corning Optical Communications. LAN-2237-AEN / September 2017


ISSN 2058-4946

Contents December 2017/January 2018 ON THE COVER

16 Children of the Cold War Bunkers from the last century are finding a new role as reliable data centers


7 NEWS

CABLES AND RACKS

25 Our special supplement examines the state-of-the-art in the most basic IT infrastructure

FEATURES 21 Building the Edge Telcos will have more power in edge networking

39 Ten years of the EU Code of Conduct A decade of chasing efficiency

REGIONAL FEATURES 40 LATAM: Peru's standard A Peruvian technical standard for best practices in data center design is emerging

42 APAC: Hungry markets Asian markets have a rapidly growing population of demanding consumers

CALENDAR 14 Events and training DCD’s Advent Calendar of special happenings and training courses

EDITOR’S PICK

44 DCD Global Awards 2018 Who are the data center leaders? DCD recognized them in a glittering gala at London's Lancaster Hotel: our annual celebration of infrastructure excellence!



From the Editor

Decking the bunkers with copper cables

We should be pleased to have abandoned nuclear bunkers. At some level, this must mean that the perceived risk of nuclear war is lower now than it used to be. It also means that there are resilient facilities available - for inventive builders to turn into data centers. Sebastian Moss spoke to some of them (p16) and found two distinct viewpoints. One developer was happy that the government did the "heavy lifting" for the project. Another found that drilling through hardened concrete was so much trouble he has shifted to building his own bunkers.

The DCD Awards winners come from all around the world. Turn to p44!

Building at the edge is a challenge. Peer-to-peer networks and the Internet of Things need to place resources close to users and devices, making the network component crucial, and altering the balance of power between the telecoms providers and the IT and facilities people who make traditional data centers. DCD's series of edge-focused events began at DCD>Colo+Cloud in Dallas, and continues at DCD>Enterprise in New York in 2018. In this issue (p21), we report on the debate from Dallas and look forward to New York.

The DCD Awards brought a festive flavor to the latter part of 2017. The leaders of the industry were honored at a glittering gala dinner in London's Royal Lancaster Hotel. The winners come from all across the globe. Who are they? Turn to p44 for a list of the industry's top talent.

Cables are changing - but maybe not as fast as some might expect, according to our special focus on cabling (from p25). Network speeds within data centers are now far beyond what anyone expected copper to provide, so why are they not entirely linked with fiber? Because standards-makers have found ways to extend the capacity of backwards-compatible copper. Also in this infrastructure special, we look at the two open standards emerging for racks. There's also a focus on testing - that vital process which sometimes gets overlooked. And Max Smolaks found something unexpected: new MiFID II financial regulations (p35) will actually make some people re-cable their data centers, to provide a level playing field for customers.

He sees you when you’re sleeping, He knows when you’re awake, Santa broke GDPR, With his festive data lake! * 1 Santabyte = 1m Zettabytes

Tinsel and baubles have no place in a data center, or anywhere needing to give Wi-Fi signals free passage. But we know - with help from intelligent automated systems - you will find time and space to relax and celebrate this holiday season. DCD wishes you everything you most desire and a happy holiday.

bit.ly/DCDmagazine

To email one of our team firstname.surname@datacenterdynamics.com Find us online datacenterdynamics.com | dcd.events | dcdawards.global | dcpro.training DatacenterDynamics DCDnews DatacenterDynamics


Meet the team Global Editor Peter Judge @Judgecorp News Editor Max Smolaks @MaxSmolax Reporter Sebastian Moss @SebMoss Reporter Tanwen Dawn-Hiscox @Tanwendh US Correspondent David Chernicoff @DavidChernicoff Editor LATAM Virginia Toledo @DCDNoticias Assistant Editor LATAM Celia Villarrubia @DCDNoticias SEA Correspondent Paul Mah @PaulMah Brazil Correspondent Tatiane Aquim @DCDFocuspt Head of Design Christmas Perrins Senior Designer Holly-n-Ivy Tillier Designer Marzipan Perez Head of Sales Yash Puwar Global Account Manager Aiden Powell

Head office DatacenterDynamics 102–108 Clifton Street London EC2A 4HW +44 (0) 207 377 1907


Subscriptions datacenterdynamics.com/magazine




PEFC Certified This product is from sustainably managed forests and controlled sources PEFC/16-33-254

Peter Judge DCD Global Editor

www.pefc.org

© 2017 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to Jon McGowan, jon.mcgowan@ datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.


0845 123 2222 | info@node4.co.uk | node4.co.uk


You didn’t set out to build a network. You set out to supercharge your path to the cloud.

For decades, IT’s top priority has been managing total cost of ownership (TCO). Recent PwC findings show that TCO is now being eclipsed by a strong focus on security and automation. Put simply, enterprise IT leaders are reflecting on the new reality, which is that the datacenter has evolved from a cost center to a strategic powerhouse for the business. To view the PwC report and learn more visit juniper.net/automateyournetwork


Whitespace

News in brief Marvell offers to buy Cavium for $6bn The deal has been unanimously approved by the boards of directors of both companies and could create a technology giant with $3.4 billion in annual revenues.

A world connected: The biggest data center news stories of the last two months

Facebook to spend $1bn on expansion of Los Lunas campus Construction of the initial building began in October 2016, with the company stating it could see up to six phases of expansion. This was followed by another building which broke ground earlier this year.

Google buys another 131 acres in Denmark But says it has no plans to build anything. Yet.

Zayo to acquire Spread Networks for $127 million Spread operates an 825 mile (1,328 km) fiber route connecting New York and Chicago known for its low latency.

DHL to invest $365m in Malaysian data center The Deutsche Post subsidiary plans to spend the money on more energy-efficient equipment, which will allow the company to adopt a hybrid cloud model.

AWS takes on colo providers with bare metal instances Amazon Web Services has announced a new bare metal offering for workloads that need to run directly on the server hardware, in a move that could undermine one of the selling points of colo providers. Currently available in preview, the new I3 Bare Metal instance will allow customers to run their applications directly on an Intel server with access to the rest of the AWS platform. It was launched together with two other EC2 instances: the H1, optimized for big data and data-intensive workloads, and the M5, which offers greater computation, memory and enhanced networking. Applications running on the I3 bare metal instances can gain direct access to Intel Xeon E5-2686 v4 processors, 512GB of memory, 36 hyperthreaded cores, and 15.2TB of NVMe SSD storage. It was designed for workloads that are not virtualized, require specific types of hypervisors, or have licensing models that restrict virtualization, says AWS. The cloud giant took pains to emphasize that I3 Bare Metal instances are not simply "repackaged" bare metal servers, but offer

the same flexibility and interoperability found in normal EC2 instances. For instance, they support security group settings, can leverage Elastic Load Balancers, use Elastic IP addresses and access Amazon Elastic Block Store (Amazon EBS) volumes. Amazon has also created a dedicated hardware platform called Nitro to run the bare metal service and other new EC2 instances. Nitro stems from Amazon's long-term goal to make EC2 instances indistinguishable from bare metal, Peter DeSantis, vice president of AWS global infrastructure, said at re:Invent, AWS' annual event, attended by DCD in November. "AWS continues to expand and enhance what was already the cloud's broadest and most capable compute service. Most of our customers have diverse computing needs, and they've told us having the right instance for the right workload really matters," Matt Garman, vice president of AWS Compute Services, said.
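For readers who want to see what that looks like in practice, the snippet below is a minimal sketch of requesting a bare metal instance through the boto3 Python SDK, in the same way as any other EC2 instance type. The i3.metal instance type name, the AMI ID and the key pair are placeholders and assumptions on our part rather than details taken from AWS's announcement, and the preview program may require sign-up before such a request succeeds.

import boto3

# Minimal sketch: a bare metal instance is requested like any other EC2 type.
# Placeholder values: swap in a real AMI ID and key pair, and note that the
# instance type name is assumed here, not confirmed by the announcement.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder AMI
    InstanceType="i3.metal",     # assumed name for the I3 Bare Metal instance
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",        # placeholder key pair
)

print(response["Instances"][0]["InstanceId"])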

Intel’s Diane Bryant joins Google Cloud as COO Diane Bryant, former general manager of Intel’s Data Center Group, will join Google Cloud as its chief operating officer. Reporting to Google Cloud’s head Diane Greene, Bryant will join a division that, despite strong growth, lags far behind Amazon Web Services, as well as Microsoft and IBM.

bit.ly/FirstWholeFoodsNowWholeWorld



Whitespace

Altice to sell Green.ch and Green Datacenter to InfraVia Netherlands-based multinational telecoms company Altice plans to sell its Swiss data center businesses Green.ch and Green Datacenter to InfraVia Capital Partners. The 214 million Swiss franc (US$217m) transaction comes as Altice struggles to deal with €50 billion ($59.6bn) in debt and a rapidly falling share price - that puts its market cap at a fifth of its debt. Altice grew aggressively, becoming Europe's biggest issuer of junk debt as it financed acquisitions the world over, but now faces lowered profit outlooks, difficulties at its French division SFR, and changing EU and US monetary policy that could make future debt refinancing harder. bit.ly/TheGreenchWhoSoldAtChristmas



NEWS

Whitespace

Qualcomm launches 48-core, 10nm Arm server processor, the Centriq 2400 Qualcomm's long-anticipated Arm-based Centriq 2400 server processor family has finally been launched. The 48-core, 64-bit Centriq line aims to take on Intel, with Qualcomm highlighting cost and performance advantages over the Skylake processors. The Centriq has a list price starting at $1,995, which the company claims gives it more than 4x better performance per dollar and up to 45 percent better performance per watt than Intel's highest-performance Skylake processor, the Intel Xeon Platinum 8180. The processor's 48 cores are connected with a bi-directional segmented ring bus with 250Gbps of aggregate bandwidth. The design has 512KB of shared L2 cache for every two cores, and 60MB of unified L3 cache distributed on the die. It offers six channels of DDR4 memory and can support up to 768GB of

total DRAM capacity, with 32 PCIe Gen3 lanes and 6 PCIe controllers. At the same time, it consumes less than 120 watts. At a launch event in San Jose, the company announced a host of partners who have tested the Centriq 2400 series - including Alibaba, Cloudflare, Canonical, HPE, MariaDB, Mellanox, Microsoft, Netronome, Packet, Red Hat, Solarflare, SUSE, Uber and Xilinx. Google also welcomed fresh competition to the space. “[This] announcement is an important achievement and the culmination of more than four years of intense design, development and ecosystem enablement effort,” Anand Chandrasekher, SVP and GM of Qualcomm Datacenter Technologies, said. “We have designed the most advanced Arm-based server processor in the world that delivers high performance coupled with the highest energy efficiency, enabling our customers to realize significant cost savings.” The server processor launch comes as Qualcomm itself faces an unsolicited takeover bid from Broadcom, which has offered $130 billion for the company, including debt. Qualcomm rejected the unsolicited offer, but negotiations are ongoing. bit.ly/ArmsLatestHope
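It is worth unpacking how a "performance per dollar" multiple like that is constructed: it is simply the ratio of two throughput-per-price figures, and the gulf in list prices does most of the work. The sketch below is illustrative only; the throughput numbers are placeholders rather than published benchmark results, and the roughly $10,000 figure for the Xeon Platinum 8180 is our assumption, not a number from Qualcomm's launch material.

# Illustrative only: how a performance-per-dollar comparison is derived.
# Throughput values are placeholders, NOT published benchmark results.
centriq_price = 1995            # Qualcomm's quoted list price, USD
xeon_8180_price = 10000         # assumed approximate list price, USD

centriq_throughput = 100.0      # placeholder relative throughput
xeon_throughput = 125.0         # placeholder: Xeon assumed 25% faster here

ratio = (centriq_throughput / centriq_price) / (xeon_throughput / xeon_8180_price)
print(f"Performance per dollar advantage: {ratio:.1f}x")   # ~4.0x with these inputs

Even if the Intel part were a quarter faster outright, as in the placeholder numbers above, the five-fold gap in list price would still produce a multiple of about four, which is why figures of this kind are best read alongside absolute performance results.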

Vox Box

Kevin Kettler CTO Flex What will 5G mean for edge computing? 5G will lead to new demands on the infrastructure, with the high bandwidth that 5G will bring to the table, as well as the low latency requiring a shift to the edge. It really requires that you think about whether all of that traffic coming in from 5G will need to flow the whole way back to a cloud data center in a centralized location, or whether there’s a better model for that data to reside more locally. bit.ly/MoreGsMoreDCs

To Lease: Raw Data Center Space in Zurich, Switzerland We lease 215sqm of raw space in a city data center building in Zurich, Switzerland. The building is 11mins from Zurich airport (by public transportation), features a 24x7 security lodge and is host to all major network carriers. Ideal for companies seeking to establish an office / data center / network presence in Switzerland / Europe with a need for elevated security and excellent network connectivity.

Interested parties please email to: datacenterinquiry@gmail.com

Peter’s chip factoid The technology roadmap for semiconductors predicts 5nm silicon should be available around 2020. IBM demonstrated a 5nm chip in June 2017

Kushagra Vaid General Manager Microsoft Azure One year on - what's the status of Project Olympus? The hardware is 100 percent complete, it has been fully open sourced, so people can download the specifications, the CAD files, the schematics, the software. The most important part about today's announcement is that it's in volume production in Microsoft data centers. It's not just a specification on paper, it has been deployed globally in our data centers, and it is running production services. bit.ly/OlympusHasLanded



Whitespace

Google to open Hong Kong cloud region in 2018 Google Cloud Platform will open an infrastructure region in Hong Kong next year, consisting of three separate data center zones. It will become its sixth region in Asia Pacific, after Mumbai, Sydney, Singapore, Taiwan and Tokyo. The company also teased that there would be further announcements about upcoming Asian regions ‘in the coming months.’ “Hong Kong is an international commercial hub and is among the world’s leading service-oriented economies,” Rick Harshman, Google Cloud’s managing director for Asia Pacific, said. “By opening this region, customers in Hong Kong will benefit from low latency and high performance of their cloud-based workloads and data.”

Google had actually planned to open a data center in Hong Kong some time ago - back in 2011 it held a groundbreaking ceremony for a $300m facility set to come online within two years. Despite some progress, plans for the facility were shelved in 2013 over claims that the company found it difficult to acquire “spacious land.” With it pursuing sites in the even more spatially-challenged Singapore, as well as already owning a 2.7-hectare plot of land in Hong Kong, some have speculated that the reasons might have been political. Now, with Amazon Web Services set to open in Hong Kong next year, Google is ready to try and enter this market once again. Part of the push includes trying to integrate Google Cloud with Hong Kong’s smart city initiatives. “The GCP Hong Kong region dovetails with our commitment to boosting Hong Kong’s digital economy and smart city efforts,” Harshman said. bit.ly/GoogleGoesEast

Data Centre Cooling Your partner for ultra-efficient solutions For more info on how to obtain a low TCO in your data centre, please visit www.systemair.com/dcc. dcc@systemair.com www.systemair.com



Whitespace

OVH rolls out private cloud service in the US French hosting giant OVH is now offering its Hosted Private Cloud in the United States, including enterprise cloud services as well as expanded disaster recovery and hybrid cloud solutions. The move follows the company's acquisition of VMware's vCloud Air business earlier this year, and with it, the data centers used to offer the Hosted Private Cloud service. The company's Enterprise Dedicated Cloud runs on VMware's software-defined infrastructure and Intel hardware, with VMware vSphere, vCenter Server and NSX platforms also available. The new disaster recovery service can be used between on-premises and cloud environments, or to back up from cloud to cloud, with software options including VMware's vSphere Replication, Zerto and VMware HCX. And finally, customers are able to use VMware's HCX integration service alongside OVH's private cloud, to create a hybrid cloud environment. OVH recently announced it will be closing two of its French data centers in Strasbourg and migrating customers to other facilities on campus, following a two-hour electrical failure, which caused it to question the viability of facilities made out of shipping containers.

Daimler and HPE research hydrogen fuel cells for data centers

bit.ly/BonAppetitOvh

Flexenclosure to deliver up to 20 data centers to Australian telco Virtutel Australian telecommunications provider Virtutel is planning to create a network of up to 20 edge data centers around the country. Its data center subsidiary, VirtuDC, has sought the services of Flexenclosure, a Swedish manufacturer of prefabricated data centers, which will ship its modules across the globe over the course of three years. The pair have stated that the facilities, designed to order and built to meet Uptime Institute's Tier III requirements, will allow VirtuDC to provide network and interconnection services at end-point level, and fill gaps in currently underserved areas. Some of the upcoming facilities will be equipped with 'dark site' functionality, allowing them to be operated, monitored and managed without human intervention, from a central network operations center. "We selected Flexenclosure for their deep experience in deploying prefabricated data centers in very challenging environments and their flexibility in making sure that the ultimate design was exactly what we wanted," said David Allen, managing director at Virtutel.

"We are confident that Flexenclosure is the right partner for what is to be a significant rollout of IT infrastructure across Australia." Telcos are increasingly branching out into the edge data center market, since they have the necessary real estate and decentralized infrastructure in place. Companies like Vapor IO and EdgeMicro are seeking to capitalize on the industry trend by engineering dedicated edge systems - the former in a distinctive chamber shape, the latter in modular containers. The Flexenclosure facilities will be the first data centers to be fully owned by Virtutel, which has points of presence in colocation sites across Australia, as well as in New Zealand, the US, Hong Kong and Singapore. To date, Flexenclosure has deployed 43 of its eCentre modules in 27 countries, including Colombia, Myanmar, Ethiopia, the Republic of Palau and Samoa. The company also makes eSite, an integrated power system for telecommunications sites, designed for the outdoors.

German automotive firm Daimler could bring its hydrogen fuel cells to the data center, in partnership with Hewlett Packard Enterprise (HPE), LiteOn’s Power Innovations (PI) and the National Renewable Energy Laboratory (NREL). Starting this year, the partners aim to develop prototype continuous power solutions and stationary power systems for data center backup. “The maturity of automotive fuel cell systems is unquestioned today. They are ready for everyday use and constitute a viable option for the transportation sector,” Prof. Dr. Christian Mohrdieck, fuel cell director at Daimler AG and CEO of Daimler’s NuCellSys, said. “However, the opportunities for hydrogen beyond the mobility sector - energy, industrial and residential sectors - are versatile and require the development of new strategies.” Hydrogen fuel cells combine hydrogen and oxygen to produce electricity, with water created as a byproduct. bit.ly/Hydrogenerators

bit.ly/APrefabricatedStory



Whitespace

China leads supercomputer pack in latest Top500 ranking

The 50th Top500 list of the fastest supercomputers in the world, as ranked by the group of the same name, shows an HPC landscape that continues to be dominated by China. The country claimed more supercomputers in the Top500 than ever before, at 202, far outranking the US which descended to an all-time low of 143. For the first time in the list's 25-year history, China also overtook the US in aggregate performance, claiming 35.4 percent of the Top500 flops, while the US was responsible for just 29.6 percent. China's Sunway TaihuLight continues to hold the top spot with a High Performance Linpack (HPL) score of 93.01 petaflops. It is followed by China's Tianhe-2, at 33.86 petaflops, and Switzerland's Piz Daint at 19.59 petaflops, which was upgraded last year with Nvidia Tesla P100 GPUs. The fourth fastest supercomputer is a new entrant to the top ten - Japan's upgraded Gyoukou system, which is used by the Agency for Marine-Earth Science and Technology, and has reached 19.14 petaflops. At number five is America's five-year old Titan, which manages 17.59 petaflops. The US also takes the sixth spot with Sequoia, a Lawrence Livermore National Laboratory system that is capable of 17.17 petaflops. There's a new American entry at number seven - Trinity, recently upgraded to hit 14.14 petaflops. It is followed by the US-made Cori with 14.01 petaflops, deployed at the National Energy Research Scientific Computing Center. Japan rounds out the top ten with two supercomputers: the 13.55 petaflop Oakforest-PACS and the 10.51 petaflop Fujitsu K supercomputer.

bit.ly/MadeItInChina

Verne Global rolls out HPC-as-a-service

Icelandic data center provider Verne Global is expanding beyond its wholesale offering with the introduction of a cloud-powered high-performance computing (HPC) as-a-service platform, hpcDirect. The clusters are built with Intel's Xeon Skylake processors, which can be provisioned on any scale and incrementally increased over time. Verne Global, launched five years ago, offers colocation and wholesale data center services at its 44 acre data center campus in a former NATO command center in Keflavík, Iceland. The country abounds with cheap hydroelectric power, which has helped attract colocation and bare metal customers from distant regions - something it hopes the new cloud service will help it grow further. Additionally, the company thinks that the offering will allow it to capitalize on the growth in demand for HPC by giving customers the flexibility of purchasing it as-a-service. Dominic Ward, the company's managing director, said: "We take the complexity and capital costs out of scaling HPC and bring greater accessibility and more agility in terms of how IT architects plan and schedule their workloads." But Verne Global is not the first to explore the HPCaaS model - earlier this year, Cray teamed up with Markley Group to offer supercomputing as-a-service, and followed that with a partnership with Microsoft to offer supercomputing-as-a-cloud-service. At the time, Cray VP Dominik Ulmer told DCD: "In order to reduce the barrier to entry we thought maybe it would be good to reduce the work that you have to do to get to a HPC system."

bit.ly/OnDemandSuperPower

Volkswagen in Google quantum computer research partnership

Volkswagen has entered into a research partnership with Google to use one of its universal quantum computers. The car company aims to use the experimental computing platform to explore traffic optimization, research new materials, with high performance batteries in particular, and AI with new machine learning processes. "Volkswagen has enormous expertise in solving important, real-world engineering problems, and it is an honor for us to collaborate on how quantum computing may be able to make a difference in the automotive industry," Hartmut Neven, director of the Google Quantum Artificial Intelligence Laboratory, said. The collaboration is not the car company's first foray into quantum computing - this March, Volkswagen turned to D-Wave to use the Canadian company's quantum annealing computer to calculate traffic flow for 10,000 taxis. Google's quantum computer, of which little is officially public, is more complicated than an annealing system. Dr Dominic Walliman, applications engineer for D-Wave, explained the difference in 2015: "In quantum annealing, what you're trying to do is harness the natural evolution of quantum states, although you don't have any control over that evolution. "You set up the problem at the beginning and you let quantum physics do its natural evolution - and the configuration at the end corresponds to the answer you are trying to find. "In gate model quantum computing, the aim is a lot more ambitious. What you're trying to do there is to try to be able to control and manipulate the evolution of that quantum state over time. Now this is a lot more difficult because quantum systems tend to be incredibly delicate to work with, however having that amount of control means you can solve a bigger class of problems." Google was previously rumored to be releasing a 50 qubit universal quantum computer by the end of this year, with internal slides suggesting it will offer access to the machine on Google Cloud. bit.ly/DontMentionWaymo


PRO

www.dcpro.training

Take the Mission Critical Awareness certificate online We’ve upgraded the industry’s most flexible series of online training. Join 1000s of your peers and enroll today! info@dc-professional.com

1. Mission Critical Engineering 2. Reliability & Resiliency 3. Electrical Systems Maintenance 4. Fundamentals of Power Quality

Can you put a price on safety? We couldn’t. Take our 1 hour Health and Safety course online for free.

Take Free Course

www.dcpro.training/dc-health-safety

info@dc-professional.com

www.dcpro.training


THE 12 DAYS OF DCD As we celebrate these festive times, it’s always good to know that the future can be just as bright - filled with the joy that only an informative data center event or training course can bring...

1 Submarine cable in the sea

5 Golden racks

DCD>Energy Smart Stockholm March 13 2018 // The Brewery Conference Center, Stockholm Responding to the digital infrastructure energy challenge www.dcd.events

DCD EVENTS >Thailand | Bangkok Feb 13 2018 // Centara Grand and Bangkok Convention Centre, Bangkok >Indonesia | Jakarta Apr 5 2018 // The Ritz-Carlton Jakarta, Mega Kuningan, Jakarta >Focus on | Hyderabad Apr 26 2018 // The Westin Hyderabad Mindspace, Hyderabad


8 Servers a stacking


PRO

Training Courses Data Center Design Awareness London February 26-28 Data Center Cooling Professional Melbourne February 26-28 Energy Efficiency Best Practice London March 1-2


DCD Calendar

10 Start-ups a folding

PRO

Energy Efficiency Best Practice Melbourne March 1-2 Data Center Technician Sydney March 12-13

Data Center Power Professional London March 12-14

“The holly and the ivy, When they are both full grown Of all the trees that are in the wood The holly bears the crown O the rising of the sun And the running of the deer The playing of the merry organ Sweet singing of the choir...”

Data Center Design Awareness Perth March 14-16

2

Data Center Design Awareness Singapore March 19-21

Jingle bells, jingle bells, Jingle all the way. Oh! what fun it is to ride in a one-horse open sleigh.

12 Routers routing

11 Switches switching

PRO

Data Center Design Awareness | Singapore January 29-31 Data Center Cooling Professional | London February 19-21

Ho Ho Ho... Merry Christmas from DCD!

DCD>Enterprise New York May 1-2 2018 // New York Marriott Marquis The Enterprise Data Center & Cloud Infrastructure Transformation Summit New York Focus Day Building Out the Edge April 30 www.dcd.events


CHILDREN OF THE COLD WAR

Sebastian Moss Post-War Reporter

Sebastian Moss finds out how an apocalyptic legacy is being used to build data centers ready for the next major disaster

During the height of the Cold War, with the Doomsday Clock minutes from midnight, the United States government drew up plans for all-out war with the Soviet Union. Secretly investing billions, it created huge underground structures - some for launching devastating strikes, and others for surviving incoming nuclear attacks. Remains of that vast infrastructure can be found in Texas, where huge missile silos lie hundreds of feet beneath unassuming farmland. Elsewhere, in Florida, the backbone of a post-apocalyptic communications system sits behind 42-inch walls, dug into the side of a hill.


Cover Feature

Since the threat of nuclear disaster receded and the Soviet Union collapsed, these sites have lain dormant for decades. Now, they could find new life as data centers - because of fresh security concerns. "I believe that we're getting into a time in the world where things are becoming less and less secure," Mark Oxley, CTO and founder of Florida-based Data Shelter, told DCD. "You need to build to the highest level that you can really afford to." A nuclear bunker might be considered overkill for some people, but not for Oxley. His data center is located just 10 miles away from the coast of Florida, and yet he can say: "if there's a hurricane coming, I really wouldn't worry." This confidence in surviving nature's worst is based on the history of the site, which was originally part of the North American Aerospace Defense Command (NORAD). It was built in 1964 by AT&T and the Department of Defense (DoD) as an independent, self-powering installation for Autovon - a long-distance military telephone system designed to withstand enemy attacks. Autovon emerged from the Army's Switched Circuit Automatic Network (SCAN) system, and used a complex non-hierarchical routing structure to survive the destruction of multiple nodes - something that eventually helped inspire the Internet. Even though Autovon could withstand the loss of some nodes, its facilities were built to remain operational in almost all eventualities. "I have 3,000 pound blast doors," Oxley said. "The external wall is 42 inches thick, the internal walls are 12 inches thick, poured concrete, with metal rebars in them." In the 1990s, Autovon was replaced with the Defense Switched Network, leaving a hardened shell which Oxley is turning into an extremely well-protected data center. The project has been his "sole vision of the last ten years," and he hopes to open the facility in late 2018. "I wanted to fix all the problems that I've seen in the data center industry," Oxley explained. "Fifteen or so years ago, it was acceptable for eBay to go offline for three hours every weekend. That's not acceptable today, and I don't think data centers have actually caught up to that." Oxley became fascinated with the concept of downtime and how to minimize outages: "I realized there are probably three areas of critical impact to a data center - equipment failure, natural disaster and human error." For the last three years, Data Shelter has tried to design its site to mitigate these areas as best they can - "we've looked at how to solve these problems using intelligent design, hardened infrastructure and process education," Oxley said. The facility is an Uptime Tier IV Design Certified site.

The threats Oxley hopes to protect against include hurricanes, tornadoes, and even chemical spills on the highway, he said, and the data center "can withstand a pretty large amount of radiation, including electromagnetic pulses (EMPs)." The sheer strength of the physical walls is almost incidental. Oxley admitted to DCD: "I wasn't necessarily looking for a former nuclear bomb shelter. I'm not building this to withstand a nuclear blast, but I am looking to really mitigate everything surrounding it." Oxley may not have been searching for a nuclear bomb shelter, but for others such shelters have formed the basis of their entire business model. Larry Hall, owner of the Survival Condo Project, has specifically sought out Atlas missile silos to engineer modern-day fallout bunkers. Hall's work building luxury doomsday condos for the super rich has seen widespread media coverage and - we are told - consumer interest. But he is also building data centers, Hall revealed to DCD in his first ever interview about his company's plans. "I used to own an Internet company in Florida. I built a couple of different data centers, just Internet service and colocation facilities, but nothing massive. After 9/11 I thought there would be a need for nuclear-hardened data centers," he said. "I was going to sell data centers, so companies can buy one or more floors and put their equipment there and it would be protected from aircraft flying into the facility, or bombs going off, or EMPs." With previous clients including defense contractors Northrop Grumman and Harris Corporation, Hall says his proposal got a great response at first, but then the dotcom crash happened: "You could buy data centers for 20 cents on the dollar. They opted for buying multiple locations." Instead, he turned to selling luxury bunkers: condos with all the features of a high-end living space - including indoor pools and spas, cinemas and gyms - that happen to be in F-series Atlas missile silos. The last of the Atlas ICBM sites to be built, the F-series are made from concrete mixed with epoxy resin, along with roughly 600 tons of steel rebar, creating some of the strongest structures ever built by man. Now, Hall believes the market is once again ready for him to try and build data centers: "Eight years later, it's coming back."

Hall has designed data center projects that could turn silos into facilities which would protect personnel, data and hard physical assets. Most are covered by nondisclosure agreements, but one company based in Texas “is actively looking for me to convert one of the silos into a data center that is 70 percent for data, and 30 percent for key personnel.” Hall claims that “one big national company” is “secretly going out there trying to get commitments for occupancy,” and has asked “for a price to convert five silos into these data centers” that again saves space for personnel. Such sites would provide a perk for senior staff, he said: “What they're doing, in my mind, is finding a way to use their corporate needs for security to incorporate, as a fringe benefit, protection for executive-level families.” In the last eight years, companies have started reassessing their vulnerabilities in the light of emerging threats: “Because of terrorism, the North Korean situation and floods, they're coming up with new requirements. “One of the scenarios doesn't have Kim Jong Un dropping a nuclear bomb. He sets off an EMP over Kansas, in the middle of the US, and watches the Stone Age return.” Hall thinks his silos offer a unique cost advantage, since the military did the heavy lifting back in 1960 when the US government spent about $15m (~$125m in 2017) on each site. “The outfit in Texas told me that when they build a new Tier IV data center, they're looking north of $12-14,000 per square foot for their facilities. I can bring them into a nuclear hardened facility, which is beyond the physical protection that they're paying that price for - for $750 a square foot or less.” But over in Florida, Oxley isn’t so sure that the cost benefit is fully worth it. When asked about the challenges of designing a data center in a fixed, hardened structure, he let out an audible sigh: “Well, the current facility actually posed a lot of challenges because of the footprint of different rooms.” “I can't just blow out walls,” he said. “I can't move a wall six inches because I don't have enough room. So there was a huge challenge in finding the right HVAC equipment and power equipment that could fit in certain rooms to operate the facility as it currently stands.”



That's why any future Data Shelter facilities could be entirely new sites. "If this concept really is what I think it will be to the industry, then the next one we would do will be a ground up build. "I would build to the same or similar design standards as the current facility; underground, in a heavy-duty concrete bunker. But I wouldn't look to retrofit. I think we spent a lot more time and a lot more energy retrofitting than we would have, had we built from the ground up. And that's a lesson learned."

Hall also hopes to build new data centers, and is in talks with one "customer that already owns large tracts of land in remote areas that don't have a missile silo on them, and they want me to build a bunker for them." To meet this need, the company has developed what it calls 'the next generation of bunkers:' "We're looking at building some of these in Texas, and we're pricing another one out for a client in Idaho. They are essentially underground domes connected with tunnels." There are only 72 Atlas F bunkers, so "you need something more scalable, and I think the underground monolithic dome is a perfect solution," he said.

Oxley thinks facilities like those from Data Shelter and Survival Condo will create a whole new segment of the data center market, built to last: "A lot of the issues that are occurring are because people are building these warehouse-style facilities that are thrown up very quickly, inexpensively, with very little design thought. "They're aging and having failures. The processes and the education level in running the facilities are not up to par. I believe that's where the industry needs to change."

He thinks customers "should demand more - you have to look at the data center lifespan as a whole, and say: over the lifespan, how many possible outages could we have and does it make sense financially to not build to the best design you can? I don't believe that you should settle for anything less." In Texas, Hall sees this as a dramatic reversal of his life's work, essentially forging swords into plowshares. "I used to work for the government and saw what they're doing on continuity of government structures. In data centers, I worked for some of the big military-industrial complex companies and it was a very challenging job," he said. "But there's a big difference between building weapons of mass destruction and converting those same facilities into lifesaving facilities."




Cover Feature

A NUCLEAR FAMILY

Outside of America, there are several other data center projects located in the shells of nuclear bomb shelters, offering levels of security beyond the usual tall fence, CCTV and a handful of security guards.

In Paris, as Cold War tensions reached new levels after the assassination of JFK, the French government ordered the construction of a secret nuclear fallout shelter capable of housing 300 people. The site was built in 'Abri Lefebvre,' a passive defense shelter dating back to 1937 that was extensively upgraded and expanded by Parisian authorities to survive attacks on the capital. It was kept operational until 1991, but left abandoned after the fall of the Soviet Union. After two decades of neglect, the state auctioned off the derelict property in 2012. Colocation company Online.net picked it up, made necessary repairs and opened a data center last year.

A similar story played out in Riga, Latvia in 2009. European cloud and colocation specialist DEAC has converted a Soviet army command bunker into a small data center, home to some 80 racks. 'Grizinkalns' is buried 12 meters underground, beneath a 1.5m-thick lead dome, ready to absorb the radiation from a nuclear blast.

Perhaps the best known data center-in-a-bunker conversion is Bahnhof's facility in Pionen. A DCD 'World's Most Beautiful Data Center Award' runner-up, Pionen is a sight to behold - found 30m down, protected by 40cm-thick steel doors, its design was chosen intentionally to look like a James Bond villain's lair, and makes several references to Silent Running, the 1970s classic sci-fi film in which all plant life on Earth has gone extinct.

But cities are always at risk of nuclear attack - after all, that's where the people are. To be truly safe, one might be advised to head for the mountains. Deltalis is located deep in the Swiss Alps, near the Saint-Gotthard Massif. Formerly housing the Command and Control center for the Swiss Air Force (with space for about 1,500 personnel), it is now home to 10,000 square meters (107,639 square feet) of data center space. Surrounded by reinforced concrete, heavy steel doors and granite rock, it is designed to survive most attacks.

50 megatons The yield of the most powerful nuclear weapon ever detonated, the Soviet Union's Tsar Bomba, "The King of Bombs"

There are others dotted around the world: DSN's 64,000 sq ft (5,946 sq m) bunker in Nova Scotia, Canada, UK ISP Bogons' planned site in Comrie, Scotland (bought for just £150,000/$200,000) and Finger Lakes' New York munitions storage conversion, to name but a few. Together they ensure that - if nothing else - should the end come, at least some racks will survive, a small island of civilization in a bleak, empty world.



>Webinars

2017

Greatest Hits Archive!


We have selected five of our favorite webinars from 2017. Have a listen and share with colleagues and friends.

Highest number of live attendees

Highest number of leads delivered

Highest audience engagement scores

Featuring:

Featuring:

Featuring:

Chris Ortbals, QTS Data Centers Andrew Boardman, QTS Data Centers

John Schmidt, CommScope Peter Judge, Global Editor, DCD

Moderator: Stephen Worn, DCD

Peter Panfil, Vertiv Tony Gaunt, Vertiv Thomas McKinney, Forsythe Data Centers

Watch Now: bit.ly/HybridITWebinar

Watch Now: bit.ly/BankingOnBatteries

Just released

Most resources downloaded

Watch Now: bit.ly/HighSpeedMigrationSteps

Also available to view

How ready is your Edge infrastructure for the high-velocity demands of the IoT? Watch Now: bit.ly/ReadyingTheEdge

Featuring:

Featuring:

Sushmita Singal, PwC Richard Northrop, UK/I Data Center & Cloud Lead

Antti Romppanen, Nokia Networks Bill Carter, Open Compute Project Foundation (OCP)

Industry 4.0, Data centers & The Internet of Things

Moderator: Stephen Worn, CTO, DCD

Moderator: Stephen Worn, DCD

Watch Now: bit.ly/Industry40AndDataCenters

Watch Now: bit.ly/BeingCloudReady

Watch Now: bit.ly/TheOpenRoadToCloud

Catch up with our extensive archive of webinars and new releases by visiting www.datacenterdynamics.com/webinars


Core > Edge

Edge networks demand a whole new compromise Peter Judge Global Editor

Data has to be located close to people and devices: Edge could mean a new surge of strength for telcos, reports Peter Judge

We all think we know what edge computing is. Resources have to be close to sensors and end-users, to handle the volume of data demanded by real-time Internet of Things (IoT) applications. But what does this mean in practice? On one hand, networks will become crucial; but on the other, the applications and edge devices using those networks will also be vital, so the players from all sides of the infrastructure industry will have to collaborate, in accordance with their strengths. DCD's Edge summit in Dallas earlier this year heard some clear indications of how that compromise will be reached. Networks will come to the fore because wireless bandwidth is scarcer at the edge. The finite capacity of a wireless link will be more important than the ever-increasing power of processors. Caroline Chan, Intel's vice president for 5G, previously told us: "You have Moore's law; we have Shannon's" (DCD 24, p38). Telecoms companies will have more sway, as cell towers are the sites where limited wireless networks meet the much higher capacity of fiber: "What we see is a shift in power back towards the telcos," said Cole Crawford, CEO of Vapor IO. "What drives a prime site?" asked Alan Bock, VP of corporate development at Crown Castle.

“You need a location near a substation and near fiber.” Crown Castle owns 40,000 cell towers and will be adding micro data centers based on Vapor’s eponymous "chamber" enclosure to offer localized, small-scale edge colocation. Other telcos will offer micro-cells within buildings. “We are supporting the densification of wireless networks, with small cells based on distributed antenna systems inside buildings,” said Cliff Kane, co-CEO of New York fiber network Cleareon. “We can aggregate traffic and feed it back through fiber.” The edge transformation will be driven partly by users handling transactions on their phones while on the move, said Eddie Schutter, head of critical infrastructure at eBay: “That requires ubiquity of networking and access to stored data,” he said. “You need connectivity, you need speed, and then you need a user experience to keep your customers.” This edge will be more distributed than anything we’ve built before, and that means humans will be unable to manage it unless it is virtualized and automated - since tasks will be repetitive, time consuming and uneconomical. Crawford said: “The question isn’t about how you manage 40 sites, it’s how you manage 40,000 sites. You can’t solve that by putting a person in front of a spreadsheet, you have to start doing site selection algorithmically.”



Edge will also be more flexible than today's networks: "We have a business based on long-term customer contracts," Kane said. "We are looking to transition that into something else." Customers used to get an Ethernet circuit for one to three years; now they want it for an afternoon. Schutter agreed: "The only way you can scale is to deploy virtualization in a way that gives you the dynamic orchestration you need, where businesses need to be connected."

Connected cars, augmented reality (AR) and virtual reality (VR) are the three most-often quoted applications making big technology demands of edge networks. All three require fast round-trip response times. Vehicles may need to refer to networked resources while taking real-time evasive action, while VR and AR equipment must show a realistic response to users' movements, or else the inner ear will detect a lag.

"Connected cars need a four or five millisecond round-trip time," Crawford said. "You have a sub 5ms decision time, so you need to have GPUs very close to that car, so data can get to that car and back. Likewise, if you have anything more than a 5ms round-trip [for VR data], you will feel ill."

50% of enterprise data will be processed outside the data center by 2022 (Gartner)

Network companies have a good position as the edge emerges, but they will need help. Mobile network providers have become expert at building backhaul capacity quickly enough to meet expanding demands from human users. The trouble is, the automated applications of the new edge will demand scaling at a much faster rate.

"The edge is the biggest part of the network. The biggest part of spending is always going to be at the edge," Bock said. But that spending will be partly justified in getting more revenue out of the already-existing backhaul networks. "Carriers, to their credit, have been very good at deploying capex and building networks," Bock added, but argued that the long-haul part of the network will be small compared with the edge: "There will be more capital spend on the edge in the next few years than has been spent in the entire history of telecoms. It's crazy amounts of money."

If all this is going to happen, the infrastructure industry will have to find ways to reduce that cost by sharing investments, the panel said. The technology to do this will include network functions virtualization (NFV), which allows multiple service providers to share the big, fat, dumb network pipe, and offer revenue-generating services, with much lower capital investment.

"We know for a fact that margins for spectrum are going away," Crawford said. "What we are seeing is innovation in software, while managed network operators (MNOs) are seeing innovation in capital reduction, which benefits everybody."

Data center operators and telecoms operators are already building digital infrastructure. Both operate in a margin-based business, but network builders are experts at adding capacity, while data center operators excel at reliability.

It's not exactly clear what form the hardware required by the edge will take, but elements of both will be there, along with entirely new ideas designed to combine these strengths. The discussion has begun - DCD will be following its future direction very closely.
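Crawford's sub-5ms budget translates directly into geography. The short calculation below assumes light in fiber covers roughly 200 km per millisecond (about two thirds of its speed in a vacuum) and ignores radio access, switching and processing delays, so it is an upper bound rather than a realistic serving radius.

# Rough edge latency budget: how far away can the compute sit for a 5ms round trip?
# Assumes ~200 km/ms propagation in fiber; ignores radio, switching and compute time
# unless passed in explicitly.
FIBER_KM_PER_MS = 200.0

def max_one_way_distance_km(round_trip_budget_ms: float, overhead_ms: float = 0.0) -> float:
    """Distance the signal can travel one way within the round-trip budget."""
    usable_ms = round_trip_budget_ms - overhead_ms
    return max(usable_ms, 0.0) / 2.0 * FIBER_KM_PER_MS

print(max_one_way_distance_km(5.0))        # ~500 km with zero overhead
print(max_one_way_distance_km(5.0, 3.0))   # ~200 km if 3ms goes on radio and compute

Once realistic overheads are subtracted, the remaining budget points to metro-scale sites such as cell towers and local aggregation points rather than distant regional data centers, which is the argument for putting GPUs "very close to that car."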

Building at the Edge The biggest issue associated with edge capacity is how to build it. Possible answers range from micro facilities as small as a single cabinet, to customized pods installed within existing colocation data centers. The edge will also place resources in new locations, including cell towers, where they will be integrated with mobile networks (see main feature). Installing resources at cell towers will require network services and some will provide extra network resources. There is also specialized hardware aimed at cell towers, such as the Vapor Chamber from Vapor IO, or the units offered by EdgeMicro which are based around Schneider Electric containers. One recent announcement is the RuggedPOD, a liquid-cooled outdoor chassis designed to operate unattended at a cell tower, created by French startup Horizon Computing and currently being tested in Italy by a service provider. The edge could also involve more experimental approaches such as small amounts of server capacity distributed and installed within homes and offices, handling anonymized calculations while their waste heat is warming the building. A number of startups, such as France's Qarnot, are offering services on this model, and increasingly proposing them as an edge solution. These and other edge issues will be covered at DCD's Enterprise event in New York in early May, which will include a special Building at the Edge focus day. The event will explore strategies for deploying data centers at the “edge,” both on-premises and in colocation facilities. It will put those technologies in front of business leaders and IT services companies that need to get resources closer to users and devices to reduce latency.

bit.ly/buildingattheedge



Uptime is everything—

So don’t fall for the imitators. Trust 30 years of innovation and reliability.

Originally released nearly 30 years ago, Starline Track Busway was the first busway of its kind and has been refining and expanding its offering ever since. The system was designed to be maintenance-free; avoiding bolted connections that require routine torqueing. In addition, Track Busway’s patented u-shaped copper busbar design creates constant tension and ensures the most reliable connection to power in the industry—meaning continuous uptime for your operation. For more information visit StarlinePower.com


> Enterprise | New York

THE DATA CENTER & CLOUD INFRASTRUCTURE TRANSFORMATION SUMMIT May 1-2 2018 // New York Marriott Marquis Limited sponsor and exhibitor opportunities - contact chris.hugall@datacenterdynamics.com Headline Sponsor

Global Content Partner

#DCDEnterprise

For more information visit www.DCD.events


> Core > Edge | Supplement

INSIDE

Powered by

The future is both copper and fiber

> Standards for copper and fiber cables are evolving but can they match the demand for bandwidth?

When financial rules change your cables

> The EU's MiFID II is a set of financial standards. Who knew they would affect cabling?

There's more than one way to open a rack

> Webscale data centers demanded a new rack design, then enterprises wanted another


WHETHER YOUR NETWORK IS PHYSICAL OR VIRTUAL The Need For Testing Is Real T-BERD®/MTS-5800-100G This compact handheld network tester with dual 100G ports is the one tool that network technicians and engineers need to install, turn-up, and maintain their networks. Learn more at viavisolutions.com/hyperscale


A Special Supplement to DCD December 2017/January 2018

Powered by

Contents

Features
28 Blending copper and fiber
32 Testing Times: Viavi Advertorial
35 Financial standards rewire colos
36 Open Rack Standards

Plugging in, stacking up

I am sure that data center engineers have the best fairy lights. At your homes, I bet your treetops are glistening, and children are listening for the sound of sleighbells integrated in a high-tech display of synchronized festive wizardry. And those lights will also be cabled perfectly. Power and data connections will all meet relevant standards, the installation will be tested, and the extra cable lengths will be neatly and safely coiled to avoid any health and safety issues. We live by cables, racks and the physical fixings of our facilities - so it's a festive pleasure to spend some time discoursing on the subject in this special section.

Copper and fiber have been symbiotically entwined for decades, linking data centers and handling the data traffic within them. Like so many other inventions, optical fibers are based on long-established techniques. Total internal reflection and "light pipes" have been known since the 1840s. Digital transmission over fibers began in the 1960s - and it is fitting that a data center park in London is named after one of the pioneers, Sir Charles Kao, and located where he developed fibers for telecoms. Fiber has been poised to take over for a long time inside the data center, but copper remains stubbornly cheap, and new standards keep squeezing more speed out of it. The innovations of webscale organizations in the Open Compute Project look like pushing copper into a smaller part of the in-building network (p28), but there's still room for copper.

"Only connect," said novelist E. M. Forster. If he'd been an engineer, he would have written "Only connect - and test!" but perhaps Howard's Termination would not have had the same literary success as Howard's End. Testing is a vital part of any network project, and any innovations which make it faster and more reliable have a direct impact on costs and profits. Our sponsor, Viavi, gives an update (p32) on how testing techniques are keeping up with newer networks.

Financial standards might seem to be well up the software stack, and have little impact on the physical infrastructure, but it turns out otherwise. European standards known as MiFID II, which come into effect in 2018, demand that customers get equal performance - and that means the same latency. This affects how colocation providers must cable their facilities (p35).

Opening presents is a Christmas pleasure, and the industry believes that opening (and changing) racks can be equally rewarding. The 19in rack, like fiber optics, is older than the data center, and it seemed completely fixed, but in recent years, webscale providers led by Facebook and Microsoft have proposed alternatives, Open Rack and Open19, which fit more kit into the same footprint, and distribute power better. I reckon data center people also have neater stacks of presents under their trees than other people. Peter Judge DCD Global Editor

DCD Core>Edge Supplement • datacenterdynamics.com 27


Blending copper and fiber

Copper and fiber cables are evolving to meet the needs of data centers, but both will have a place in the future of networks, says Martin Courtney

Martin Courtney
Freelance Analyst

Not a week goes by without a new data center opening somewhere, or a large hosting provider expanding its existing facilities. Recent research from iXConsulting backs up that trend. Its 14th Data Center Survey polled companies each controlling around 25 million square feet of data center space in Europe, including owners, operators, developers, investors, consultants, design and build specialists, large corporates, telcos, systems integrators, colocation companies and cloud service providers.

All expressed a desire and intention to build out their current data center footprint, both in-house and through third parties, with 60 percent saying they would increase in-house capacity in 2017 and 38 percent in 2018. Over a third (35 percent) said they would expand their third party hosting capacity by 2019.

More than any other part of the market, it is the hyperscale cloud service providers which appear to be currently driving that expansion. Canalys suggests that the big four cloud players on their own - Amazon Web Services (AWS), Google, IBM and Microsoft - represented 55 percent of the cloud infrastructure services market (including IaaS and PaaS) by value in the second quarter of 2017, in total worth US$14bn and growing 47 percent year on year.

Irrespective of the size of the hosting facilities being owned and maintained, the unrelenting growth in the volume of data and virtualized workloads being stored, processed and transmitted as those data centers expand will put significant strain on the underlying data center infrastructure. And that is especially true for internal networks and underlying cabling systems, which face an acute lack of bandwidth and capacity for future expansion with current technology and architectural approaches.

In each individual data center the choice of cabling will depend on a number of different factors beyond just capacity, including compatibility with existing wiring, transmission distances, space restrictions and budget. Unshielded (UTP) and shielded (STP) twisted pair copper cabling has been widely deployed in data centers over the past 40 years, and many owners and operators will remain reluctant to completely scrap existing investments.

As well as being cheaper to buy, copper cabling has relatively low deployment costs because there is no need to buy additional hardware, and it can be terminated quickly and simply by engineers on site. Fiber needs additional transceivers to connect to switches, and also requires specialist termination. By contrast, copper cables use the same RJ-45 interfaces, backwards compatible with previous copper cabling specifications, which simplifies installation and gradual migration over a longer period of time. Standards for copper cabling have evolved to ensure this continuity (see box: copper standards evolve).

Data center networks that currently rely on a combination of 1Gbps and/or 10Gbps connections at the server, switch and top-of-rack layers are likely to see 25/40Gbps as the next logical upgrade. But in order to avoid bottlenecks in the aggregation and backbone layer, they will also need to consider the best approach to boosting capacity elsewhere, particularly over longer distances, which copper cables (even Cat8) are ill equipped to support. Many data center operators and hosting companies have plans to deploy networks able to support data rates of 100Gbps and beyond in the aggregation and core layers, for example.


That capacity will have to cope with the internal data transmission requirements created by the hundreds of thousands, or millions, of VMs expected to run on data center servers in 2018/2019, and most operators are actively seeking solutions that will lay the basis for migration to 400Gbps in the future.

Where that sort of bandwidth over longer cable runs is required, the only realistic choice is fiber - either multi-mode fiber (MMF) or single-mode fiber (SMF). MMF is cheaper and allows lower bandwidths and shorter cable runs. It was first deployed in telecommunications networks in the early 1980s and quickly advanced into enterprise local and wide area (LAN/WAN) networks, storage area networks (SANs) and backbone links within server farms and data centers that required more capacity than copper cabling could support. Meanwhile, telecoms networks moved on to single-mode fiber, which is more expensive and allows greater throughput and longer distances.

Most in-building fiber is still multi-mode, and the network industry has created a series of developments to the fiber standards, in order to maximize the data capacity of those installations (see box: making multi-mode do more).

As data centers have continued to expand, however, the distance limitations of current MMF specifications have proved restrictive for some companies. This is particularly true for hyperscale cloud service providers and those storing massive volumes of data, like Facebook, Microsoft and Google, which have constructed large campus facilities spanning multiple kilometers. Social media giant Facebook, for example, runs several large data centers across the globe, each of which links hundreds of thousands of servers together in a single virtual fabric spanning one site. The same is true for Microsoft, Google and other cloud service providers, for whom east-west network traffic (i.e. between different servers in the same data center) requirements are particularly high.

Copper standards evolve

Most facilities currently rely on a mixture of Category 6 (Cat6) and Cat7 copper cabling that supports 10Gbps bandwidth over 100m, and higher data rates of up to 40Gbps over much shorter distances. But the evolution of those copper cabling specifications is now fundamental to meeting the requirements of not only hyperscale cloud service providers, but also larger enterprises and telcos with big ambitions to expand their use or delivery of either private or hybrid cloud hosted applications and services.

In 2016, the Telecommunications Industry Association (TIA) TR-42 Telecommunications Cabling Systems Engineering Committee approved the next stage in that evolution - Cat8, compatible with 25/40GBase-T over short runs of 5 to 30m of shielded twisted pair cabling with a standard RJ-45 Ethernet interface. Due to its relatively short reach, Cat8 is, for the moment, targeted at switch-to-server connections in top-of-rack or end-of-row topologies.

Issue 25 • December 2017/January 2018 29


What these companies ideally wanted was single-mode fiber in a form that was compatible with the needs and budget of data centers: a 100Gbps fiber cabling specification with a single-mode interface that was cost competitive with existing multi-mode alternatives, has minimal fiber optic signal loss and supports transmission distances of between 500m and 2km.

Facebook shifted to single-mode because it designed and built its own proprietary data center fabric, and was hitting significant limitations with existing cabling solutions. Its engineers calculated that to reach 100m at 100Gbps using standard optical transceivers and multi-mode fiber, it would have to re-cable with OM4 MMF. This was workable inside smaller data centers, but gave no flexibility for longer link lengths in larger facilities, and it wasn't future proof: there was no likelihood of bandwidth upgrades beyond 100Gbps. Whilst Facebook wanted fiber cabling that would last the lifetime of the data center itself, and support multiple interconnect technology lifecycles, available single-mode transceivers supporting link lengths of over 10km were overkill: they provided unnecessary reach and were too expensive for its purposes.

So Facebook modified the 100G-CWDM4 MSA specification to its own needs for reach and throughput. It also decreased the temperature range, as the data center environment is more controlled than the outdoor or underground environments met by telecoms fiber, and set more suitable expectations for service life for cables installed within easy reach of engineers.

Four possible specifications were created by different groups of network vendors. Facebook backed the 100G specification from the CWDM4-MSA, which was submitted to the Open Compute Project (OCP) and adopted as part of OCP in 2011.

Making multi-mode do more

Defined by their core and cladding diameters, multi-mode fiber types are designated by the IEC as OM1 through to OM4. When OM1 bandwidth requirements surpassed 100Mbps, its 62.5 µm diameter was reduced to 50 µm (OM2) to improve capacity to 1Gbps and even 10Gbps over shorter link lengths of 82m. That was boosted again with OM3 (or laser optimized multimode fiber, LOMMF) in the 1990s. OM3 used vertical cavity surface emitting laser (VCSEL) rather than LED-based equipment to increase the reach of OM2, supporting transmission rates of 10Gbps over 300m. Various enhancements to OM3 pushed bandwidth and reach to 40/100Gbps over distances up to 100m, but the arrival of OM4 (which uses the same 50 µm diameter and VCSEL equipment) extended 10Gbps bandwidth to 550m and allowed 100Gbps data rates over 150m.

All four types of MMF cabling are still found in many of today's data centers, but OM3/4 predominate due to their higher bandwidth, longer reach and VCSEL compatibility.

A fifth implementation - OM5, previously known as wide band MMF (WBMMF) - uses short wave division multiplexing (SWDM) and was published as the TIA-492AAAE standard in 2016. It uses the same 50 µm diameter and VCSEL equipment as OM3/4 and is fully backward compatible with its predecessors, but increases the capacity of each fiber by a factor of four to support much higher data rates: up to 100Gbps over duplex fiber connections and, in the future, 400Gbps over the same 8-fiber MPO interfaces.

There has been little OM5 deployment in data centers to date, largely because few manufacturers have produced appropriate transceivers in any volume. Suppliers only formed the SWDM MSA group in March 2017, whilst Finisar announced it had started to produce QSFP28 SWDM transceivers supporting 100Gbps over a single pair of fibers the following November. There is little doubt that OM5 will rapidly become the de facto MMF implementation for new data centers in 2018, whilst operators will also begin to upgrade existing facilities with new cabling and transmission equipment as required.


The OCP now has almost 200 members, including Apple, Intel and Rackspace. Facebook also continues to work with Equinix, Google, Microsoft and Verizon to align efforts around an optical interconnect standard using duplex SMF, and has released the CWDM4-OCP specification, which builds on the effort of the CWDM4-MSA and is available to download from the OCP website.

The arrival of better multi-mode fiber (OM5 MMF) and the lower-cost single-mode fiber being pushed by Facebook could change the game significantly, and prompt some large-scale providers to go all-fiber within their hosting facilities, especially where they can use their buying power to drive the cost of transceivers down.

In reality, few data centers are likely to rely exclusively on either copper or fiber cabling – the optimal solution for most will inevitably continue to rely on a mix of the two in different parts of the network infrastructure for the foreseeable future. The use of fiber media converters adds a degree of flexibility too, interconnecting different cabling formats and extending the reach of copper-based Ethernet equipment over SMF/MMF links spanning much longer distances.

Future upgrades to the existing Cat6/7 estate will involve Cat8 cabling, whose 25/40Gbps data rates will handle increased capacity requirements over short-reach connections at the server, switch and top-of-rack level for some years to come; data center operators can then aggregate that traffic over much larger capacity MMF/SMF fiber backbones for core interconnect and cross-campus links.
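Taken together, the reach and data-rate figures quoted in this feature amount to a rough rule of thumb for matching media to links. The sketch below is purely illustrative - the thresholds are simply the numbers cited above, and a real design would also weigh the installed cable plant, transceiver cost and pathway space:

```python
# Illustrative media chooser built only from the reach figures quoted in
# this article. Not a design tool: it ignores cost, the installed base,
# transceiver availability and pathway constraints.

def suggest_medium(link_length_m: float, data_rate_gbps: float) -> str:
    if data_rate_gbps <= 10 and link_length_m <= 100:
        return "Cat6/Cat7 copper (10Gbps over up to 100m)"
    if data_rate_gbps <= 40 and link_length_m <= 30:
        return "Cat8 copper (25/40GBase-T over 5-30m, RJ-45)"
    if data_rate_gbps <= 100 and link_length_m <= 150:
        return "OM3/OM4 multi-mode fiber (40/100Gbps over 100-150m)"
    if data_rate_gbps <= 100 and link_length_m <= 2000:
        return "Single-mode fiber (CWDM4/PSM4-class 100G optics, 500m-2km)"
    return "Single-mode fiber with longer-reach telecoms-class optics"

if __name__ == "__main__":
    for length_m, rate_gbps in [(25, 40), (120, 100), (800, 100)]:
        print(f"{rate_gbps}G over {length_m}m -> "
              f"{suggest_medium(length_m, rate_gbps)}")
```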

Single mode for the rest of us

Recognizing the gap in provision and the potential size of the market opportunity, several network cabling suppliers formed multi-source agreements (MSAs) to collaborate on delivering single-mode fiber in a form usable in data centers. Four potential candidates for a suitable specification have emerged in the last few years.

The 100G CLR4 Alliance, spearheaded by Intel and Arista Networks, aimed to create a low power, 100G-CWDM solution in QSFP form factor supporting 100Gbps bandwidth over duplex SMF at distances up to 2km.

The OpenOptics 100 Gigabit Ethernet MSA was jointly founded by Mellanox Technologies and optical start-up Ranovus. It proposed a 100GbE specification and 1550nm QSFP28 optical transceiver with a 2km reach, using a combination of SMF and silicon photonics to offer capacity of 100G/400G and beyond based on WDM. Supporters include Ciena, Vertilas, MultiPhy and cloud service provider Oracle.

The CWDM4-MSA also targets 100G optical interfaces for 2km cable runs, using four lanes of duplex 25Gbps SMF. The five founding members were Avago Technologies, Finisar, Oclaro, JDSU and Sumitomo Electric, with additional members including Brocade, Juniper Networks and Mitsubishi Electric. Though an interface was not specified by the consortium, the expectation is that the QSFP28 form factor will be applied.

The Parallel Single Mode 4-Lane (PSM4) MSA defined a specification with a minimum 500m reach that transmits 100Gbps over eight single-mode fibers (four transmit and four receive), each transmitting at 25Gbps and supporting QSFP28 optical transceivers. Original members included Avago, Brocade, Finisar, JDSU, Juniper Networks, Luxtera, Microsoft, Oclaro and Panduit.

Issue 25 • December 2017/January 2018 31


Advertorial: Viavi

Testing times for the data center of the future

By Amie Cox, PhD, Global ICP Sales Leader, Viavi Solutions


The technology titans – Alphabet, Amazon, Apple, Facebook and Microsoft – are transforming the data center. Content is driving network traffic growth at breakneck pace – and data centers are at the epicenter of this trend. According to a recent study from Cisco, network traffic will grow to 15 zettabytes by 2020 with a CAGR of over 27 percent. As new technologies and applications come online, traffic volumes will only intensify and could even surpass these estimates over the next two to three years.

Along with this data growth, data center managers also face challenges such as interoperability, complex multifiber infrastructure and rising costs. Changes are afoot that require a new mind-set in the realm of test and measurement to ensure 24/7/365 uptime.

For operators, the network is their product and they conduct extensive tests – and more tests – almost every time they introduce a new service. Internet Content Providers (ICPs) have a different mind-set. Their product is content and the network is the vehicle to reach the user. Often ICPs conduct limited tests prior to the launch of a new service – they find it time consuming and a constraint on the pace of their growth. Understanding the ICP mind-set and its impact on network infrastructure along with their requirements is fundamental to the continued rapid expansion of data centers.

Increasing automation

Usually, telecom operators have hundreds of engineers to manage the network and most are hardware proficient. They consider the lifespan for hardware and technology to be around 10+ years. ICPs, on the other hand, have much smaller operations teams and their forte is typically routing and software. As such, they create open application programming interfaces (APIs) and software-based automation to maximize workloads and networks. What's more, the phenomenal growth rate of ICP infrastructure requires them to rip out and replace technology every three to five years, so they view hardware as having a limited lifespan.

ICPs often find that the standards bodies move too slowly to meet the needs of their business model. As such, they frequently white-box technology from multiple vendors – often before industry standards have even been agreed. The fact is, ICPs have experienced rapid growth and need to be agile to operate in an ultra-fast paced world – yet for data center infrastructure, all this can create problems of interoperability and downtime. If those challenges were not enough, data centers are also having to buckle up for faster network speeds.

Maximize speed, reduce power

Speeds at data center interconnects (DCIs) and intra-connects are already at 100G, and soon 400G will be the norm. Yet as speeds increase, infrastructure managers will have to maintain that momentum while living within their power constraints. A major challenge for data centers is to reduce power consumption across their infrastructure while delivering high-speed connectivity and feeding the growing demand for data.

A study last year found that data centers globally had consumed well over 400 terawatt hours of electricity – far higher than the UK's total consumption – and this could triple in the coming decade. As pressure mounts on data centers to reduce energy consumption, some ICPs have looked to colder climates such as the Nordics for facilities. Kolos, a US-Norwegian joint venture, is working on the world's largest data center in the Arctic Circle that could tap into hydropower and cut energy costs by 60 percent.

Tested to the limit

As ICPs continue to expand, they will build more data centers to accommodate the rising levels of content and require seamless DCI to deliver services to users at lightning fast speeds. Given the pace at which these businesses have grown, ICPs have had little time to put in place the rigorous procedures necessary for testing to ensure seamless DCI. This has been a major challenge for some data center managers, who have also had to grapple with the rising costs of cabling infrastructure as well as a plethora of protocols to interoperate. All these challenges might seem like a tsunami, but there are steps that data center managers can take in the realm of test and measurement - inside the data center, within DCI and in network monitoring - to steady the ship.

Within the data center, automated testing tools can inspect and certify fiber end-faces for faster network build-outs and test functionality for MPOs (multi-fiber push-on). Effective AOC (Active Optical Cables) and DAC (Direct Attach Cable) test practices are essential to ensure optimum network performance and to address the challenges brought on by the growth of multifiber connectivity.

To stay ahead and prepare for increasing DCI speeds, ICP engineering labs need to test 400G interfaces with a versatile platform that can handle different applications and ports. Running simultaneous test modules, comparing and evaluating the results and performance of open APIs/protocols such as NETCONF/YANG on racks at high speeds of 100G, 200G and 400G can help to pinpoint potential issues and troubleshoot infrastructure complications before they arise.

Network monitoring needs to be automated and virtualized so that data center managers have the capability to monitor, diagnose and resolve anomalies on virtual, physical and cloud-based infrastructure. The fiber networks they depend on require robust testing from end to end to maintain peak performance.
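As one minimal illustration of that kind of API-level check - using the open-source ncclient library and the standard ietf-interfaces YANG model rather than any particular vendor's test platform - a lab script might poll a device's interface state over NETCONF like this (the address and credentials are placeholders):

```python
# Minimal NETCONF/YANG polling sketch using the open-source ncclient
# library and the standard ietf-interfaces model (RFC 7223).
# The host, port and credentials below are lab placeholders.
from ncclient import manager

IF_STATE_FILTER = (
    "subtree",
    '<interfaces-state xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces"/>',
)

def fetch_interface_state(host: str, username: str, password: str) -> str:
    """Return the device's interface operational state as raw XML."""
    with manager.connect(
        host=host,
        port=830,                 # default NETCONF-over-SSH port
        username=username,
        password=password,
        hostkey_verify=False,     # acceptable in a closed lab, not in production
    ) as conn:
        reply = conn.get(filter=IF_STATE_FILTER)
        return reply.data_xml

if __name__ == "__main__":
    print(fetch_interface_state("192.0.2.10", "lab", "lab"))
```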

The data and content mind-set

Similar to how oil transformed economies in the 20th century, data is the world's most valued resource today. The ICPs driving the new 'data economy' have experienced a meteoric rise. It is a similar tale with data traffic – it has grown in leaps and bounds. Data center managers have had to adopt a new mind-set to stay ahead of new platforms, practices and protocols – not to mention data speeds – to keep pace with the ICP business model. Test and measurement is no different. To support data center infrastructure and stay ahead of the curve for ICPs, test and measurement needs to be agile, virtual and automated.

Contact Details
Phone: +1 844-468 4284
www.viavisolutions.com/hyperscale



Downtime. Latency. Throughput. Network performance challenges can cost you revenue and customers.

What are you waiting for? Ensure the … in today's ever changing data market with the most comprehensive triple-threat solutions available. Utilizing these technologies in tandem allows you to stabilize your current network architecture, as well as create a highly agile and scalable infrastructure to quickly and co…tly transition with next generation technologies.

Signature Core™ Fibre Optic Cabling System HD Flex™ 2.0 Fibre Cabling System PanMPO™ Fibre Connector

www.Panduit.com/hdflex | P: 020 8601 7219 | E: marketing_emea@panduit.com quoting Ref: DCD11-17


Core > Edge

How long is a piece of string?

Max Smolaks News Editor

Upcoming European regulation makes colocation providers pay attention to the length of their network cables

The European banking industry is about to experience a major shake-up: in January 2018, the Markets in Financial Instruments Directive II (MiFID II) will become law, causing misery among algorithmic traders, investment bankers, hedge fund managers and anyone else employed to grow and multiply money. It will also affect colocation facilities and trading exchanges that serve more than one financial organization.

MiFID II includes the requirement to provide customers with "equal conditions" in terms of data center resources like power, cooling and networking. Power and cooling are easy, but networking is not, since latency depends on the length of the cable used to transfer data. This means a server located closer to the router - and thus having a shorter fiber cable - would have a minuscule yet measurable speed advantage over a server that is located further away. This might not have much of an impact on traditional data center workloads, but it could mean the difference between profits and losses in high-frequency trading.

The only way to eliminate this difference is to ensure that everybody's cables are of equal length. This will require lots of additional fiber, and it will require lots of measurement.

"A decade or so ago, a lot of these businesses located right inside big trading exchanges, as opposed to enterprise data centers, because the biggest contributor to latency is distance. For every meter of distance you get about five nanoseconds [0.000000001s] of latency," Stephen Morris, senior product manager for Panduit's data center connectivity solutions, told DCD.

"What's happened under this EU law is the focus has shifted from latency to the distance between where the equipment is located and where the transaction is taking place. If Customer A is 100 meters further away from the exchange equipment than Customer B, someone is going to have an unfair trading advantage. If it's 40-50 meters, it's going to be 250 nanoseconds."

The main text of MiFID II is 196 pages long, and obviously, not all of it relates to cabling. The new rules were designed to restore investor confidence following the 2008 financial crisis, and to make European markets more transparent in order to avoid another banking-induced meltdown. That's where the requirement for equal conditions originates: the notion is that latency variations within a facility are not fair, and could even lead to shady deals between traders and infrastructure owners.

In general, data center operators go to great lengths to improve network performance for their customers, and that's what makes this piece of legislation so unusual. "We normally talk about reducing latency, but in some cases [with MiFID II] we are actually increasing latency: think about it like a speed bump or a traffic light," Morris explained. "We can't physically bring them down to the shorter [cable] length because of the space constraints in a data center. So what you have to do is actually take them all out to the maximum length. If the maximum length between all the customers is 100 meters, then the guy who's on 20 meters now also needs a 100-meter fiber optic cable."

It sounds simple in theory, but there are several considerations to keep in mind. One is mechanical - you will actually need to stash away hundreds of meters of cable. According to Morris, in preparation for MiFID II, organizations have been hiding additional fiber all around their data centers: in cable trays, under the floor, and in the racks themselves - this is where structured cabling can be of great help.

Another consideration is the precision of measurement - to ensure compliance, cables have to be pre-terminated and cut to very specific lengths. "When you think about a length of 170-180 meters, that's twice the length of a football pitch, you can't get that exactly right; there's got to be a tolerance there, which is normally about 10 percent. What we found with one particular customer is they were asking for precision that was unheard of in the industry - around half a percent.

"So not only did we have to change our process internally, we also had to work with partners that install the product to test and verify on site, with equipment that's not been used in the UK before." Morris wouldn't reveal the type of equipment used, but he did hint that it was borrowed from the world of telecommunications: "Very few people involved in installing data centers would know this stuff exists."

New EU regulations like MiFID II and GDPR show that legislators are becoming increasingly aware of the role of data centers as the backbone of modern business, and how even minuscule changes on the digital infrastructure side can have far-reaching consequences. It's safe to say we can expect this type of interference to increase in the future.
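As a rough illustration of the arithmetic involved - using the roughly five-nanoseconds-per-meter figure Morris quotes, and made-up customer cable lengths - a few lines of code can show how much extra fiber each customer needs to bring everyone up to the longest run:

```python
# Sketch of MiFID II-style cable equalization, using the ~5 ns per meter
# propagation delay quoted in the article. The customer cable lengths
# below are invented examples, not real data.

NS_PER_METER = 5.0  # approximate latency added per meter of fiber

def equalize(cable_lengths_m: dict) -> dict:
    """Pad every customer's fiber run out to match the longest one."""
    target = max(cable_lengths_m.values())
    plan = {}
    for customer, length in cable_lengths_m.items():
        extra = target - length
        plan[customer] = {
            "current_m": length,
            "extra_fiber_m": extra,
            "latency_added_ns": extra * NS_PER_METER,
        }
    return plan

if __name__ == "__main__":
    lengths = {"Customer A": 20, "Customer B": 60, "Customer C": 100}
    for customer, row in equalize(lengths).items():
        print(f"{customer}: add {row['extra_fiber_m']}m of fiber "
              f"(+{row['latency_added_ns']:.0f} ns)")
```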

Issue 25 • December 2017/January 2018 35


Opening the racks

Rack standards are getting an upgrade, but will Open Rack or Open19 come out on top? Dan Robinson reports

Dan Robinson Correspondent

Servers in the data center get refreshed on a regular basis, like other equipment such as network switches and power distribution systems, but the one thing that stays constant is the physical infrastructure that houses it all, the rack. Or does it?

In the last few years many of the largest names on the Internet have started to look at whether the humble rack needs updating for the hyperscale era. The 19in rack has been with us for some considerable time, with some sources indicating it was originally created to house relay circuits for the rail industry before being adopted by telecoms firms. Later, it was co-opted by the computer industry as a handy ready-made infrastructure solution for housing equipment in the server room or data center.

But with the rise of large Internet companies and their sprawling data centers, there has been a perceived need to adapt rack design for large-scale deployments. In particular, there has been a desire to cram in more compute capacity, and to cut down costs through greater efficiency. "In terms of the rack, if you look at the 19in standard, which was the only one until the last five years, typically, weight loadings have got higher and racks have got bigger," said Andy Gill, engineering director at Rittal, a firm specializing in IT infrastructure.

The first concerted effort at change came from the Open Compute Project (OCP), a consortium of various big names in the industry that was founded by Facebook as a way to jointly develop technology optimized for the data center. OCP's Open Rack standard specifies a wider IT equipment space of 21in while maintaining the same 24in column width as a 19in rack, which is driven by standard floor tile pitch. This design allows for three half-width server motherboards to fit side by side, or for a chassis with five 3.5in drives arranged side by side instead of four. It also specifies a slightly bigger rack unit height of 48mm, called an OpenU or OU, which allows for increased airflow for cooling.

A more significant feature of the Open Rack design is a power supply busbar that extends the full height of the rack. This distributes a 12v feed from a dedicated power shelf to every node in the rack, eliminating the need for each individual server to have its own internal power supply. This not only cuts costs, but does away with the power distribution unit and the cluster of power cables taking power to each individual node - and it allows kit to be easily slid in from the front of the rack.
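To see why the busbar voltage matters - Open Rack 2.0 later added a 48v option, discussed below - here is a back-of-the-envelope comparison of resistive distribution loss. The rack load and busbar resistance are invented, illustrative numbers, not figures from the specification:

```python
# Back-of-the-envelope I^2*R comparison of busbar distribution loss.
# The rack load and busbar resistance are illustrative assumptions,
# not values taken from the Open Rack specification.

def busbar_loss_watts(rack_load_w: float, busbar_v: float,
                      resistance_ohm: float) -> float:
    current_a = rack_load_w / busbar_v      # I = P / V
    return current_a ** 2 * resistance_ohm  # P_loss = I^2 * R

if __name__ == "__main__":
    RACK_LOAD_W = 12_000       # assumed 12kW rack
    BUSBAR_R_OHM = 0.0005      # assumed 0.5 milliohm end-to-end

    for volts in (12, 48):
        loss = busbar_loss_watts(RACK_LOAD_W, volts, BUSBAR_R_OHM)
        print(f"{volts}V busbar: about {loss:.0f}W lost in distribution")
    # Quadrupling the voltage quarters the current for the same power,
    # cutting the resistive loss by a factor of sixteen.
```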

In fact, as DCD has previously noted, the Open Rack design is rather like a blade server architecture scaled up to rack level, but with no proprietary lock-in. According to Gill, Open Rack hardware is largely being taken up by the big hyperscale companies like Facebook and Google, the latter of which joined the OCP about 18 months ago, although there are some enterprises and finance companies adopting it in the US.

The picture is complicated, however. Some OCP projects, such as Microsoft's Project Olympus server, are designed to fit standard 19in racks. This is because Microsoft needs kit based on these specifications to be able to fit into its existing rack infrastructure. In fact, until there's significant demand for a fully engineered Open Rack version of any given piece of kit, there's been a trend for a lot of equipment to continue in the older 19in form factor, with vendors bolting it to a 21in sled.

"The bottom line is that the standard 19in rack is still the dominant technology, but as you go forward, you could easily see in two to three years' time that Open Rack will start to take market share. I would estimate somewhere between 15 and 25 percent of the hyperscale market could be using an Open Compute platform of some description by 2021," said Gill.

With Google now in the OCP, the newer Open Rack 2.0 specifications have also gained a 48v busbar option for power distribution, which Google claims is 30 percent more energy efficient than 12v equipment. This may mean that Open Rack will find favor with telecoms companies, as much of their equipment already runs at this voltage.

Meanwhile, a similar rack initiative has been started by LinkedIn, which formed the Open19 Foundation to oversee the development of its specifications. The goals of Open19 are to cut the cost of the physical infrastructure, as well as the time needed to install the actual IT hardware into it. "We wanted to bring in a situation where we are reducing the cost of the racks. Racks in general are really expensive: look at smart PDUs - they cost thousands of dollars," said Yuval Bachar, Principal Engineer of Architecture and Strategy at LinkedIn, speaking at an event earlier in 2017.

In contrast to the OCP, Open19 takes as a starting point the need to fit into existing 19in rack infrastructure, and specifies a rackmount enclosure for this purpose. Dubbed a Brick Cage, this is actually little more than a metal frame into which server, network and power modules simply slot. These modules - or Bricks - also conform to 19in rack norms. Thus a standard Brick is 1U high and half the width of the rack, so that two can fit side by side, and customers can fill the Brick Cage with any combination of standard, double-width or double-height Bricks.

But the real beauty of Open19 is in the backplane. Each Brick Cage accepts a snap-on cable harness at the rear that forms what Bachar calls a virtual chassis, distributing power and data to each Brick from a power shelf and a network switch fitted into the Cage. This arrangement dramatically cuts the cost, he claimed. "A typical cable to connect 100GbE to a server is between $60 and $80. In this architecture, we're sub-$10 per server. Just by that, we knocked down the cost by $70 per server," he said.

The Open19 Foundation has won backing from over 75 companies, including HPE, Supermicro, Inspur and QCT, partly because the Brick form factor is a good match for existing half-wide motherboards from these firms in many cases. Because it melds well with existing 19in infrastructure, Open19 may well appeal more to enterprises and mid-size hosting firms than OCP's Open Rack. However, despite all the updates and enhancements, the 19in rack looks set to remain a part of the data center in one form or another for decades to come.

Why change the 19in rack?

Despite being incorporated into a US Electronic Industries Association standard (EIA-310), there is actually little about 19in racks that is standard: the depth, type of mounting hole and number of support posts can vary greatly, which has driven the need for tighter standardization, as seen in the Open Rack specifications.

But many of the new initiatives like the Open Compute Project go beyond this, and are looking to address broader issues faced by enterprises and data center operators when modernizing their infrastructure to meet the challenges of today's more dynamic environment. Part of this involves moving facilities such as power supply and cooling fans into the rack itself rather than into each node, thereby making the nodes simpler and less costly. Spread out over a large number of racks or even data centers, this can equate to a substantial saving for operators over a certain size.

Another related issue is modularity - enabling component parts to be swapped out or upgraded instead of an entire node. Intel, among others, is one firm aiming to address this with its Rack Scale Design, which ultimately aims to allow memory, storage, or even processors to be upgraded without having to replace the whole server.

Issue 25 • December 2017/January 2018 37


Cut Test & Certification Time in Half

It’s Better with VIAVI.

LEARN MORE viavisolutions.com/hyperscale


Power+Cooling

With good intentions

Tanwen Dawn-Hiscox
Reporter

Ten years on from its inception, the prophesied legislation has still not come, reports Tanwen Dawn-Hiscox

Ten years ago, European bureaucrat, policy analyst and editor-in-chief of Energy Efficiency Journal, Paolo Bertoldi, started an industry project hoping to limit data centers' effect on the environment. Prior to 2007, Bertoldi organized roundtables with European government department representatives and manufacturers, discussing how to improve energy efficiency in the data center - from the manufacturing of components to operational practices and IT, power and cooling equipment.

The text of the framework offers advice on day-to-day operations, refurbishment, and building an energy efficient data center from scratch. It asks that participants monitor their facilities, engage in an action plan to reduce their energy consumption, and keep it low over time. The effort is not exclusive to EU companies, and involved parties include the Carbon Trust, Energy Star, the Green Grid, specialist vendors, facility operators, industry bodies and equipment manufacturers. The consortium finally released the EU Code of Conduct for Energy Efficiency in Data Centres in November 2008.

So, as 2018 draws nearer, what can be said for how the guidelines have impacted the data center industry? We're now on version 8.1.0, written this year, and over 300 data centers have joined the initiative. At the recent DCD>Zettastructure event at the Old Billingsgate in London, contributors and industry members gathered to take stock of how far the set of guidelines has evolved in the past ten years, and where it is expected to go from here.

Rather than putting red tape in the way of economic benefit, as was feared when the code of conduct was first published, the framework is being promoted as a way of keeping a green standard in industry hands and ensuring mutually beneficial ends, i.e. reducing greenhouse gas emissions on the one hand, and saving costs on the other, by establishing best practices.

John Booth of CarbonIT, one of the co-authors of the document, said: "In the EU we have 100 standards that apply [to data centers], including the EPI and the Tier V standards that aren't standards. But ultimately, it's a standard, you don't have to abide by a standard. What I will say is that it's probably not a good thing not to be aligned with your competitors in this space."

For Lex Coors, an ardent supporter of the Code of Conduct, it's a no-brainer, as operators having joined have all, without fail, saved on costs, and this, rather than a guilty conscience, is what drives the decision to "go green." The question one must ask, he said, is: "Do you want to wait, and see what happens, or do you want to sit at the table and make decisions?"

The idea of making the standard enforceable is not just undesirable but, according to Tomoo Misaki, senior researcher and senior manager at the Nomura Research Institute, also highly impractical, as standardization would be both difficult to implement and difficult to enforce. "There was a voice in a Brussels meeting with Paolo, saying 'why don't we make an EU-wide standardized rule?' But across Europe that's impossible. The Code of Conduct is best practice and recommendations, that's it."

One drawback to the possible expansion of the program is that the text only exists in English, said Mark Acton, head of data center technical consulting at CBRE, who also helped write the latest version of the CoC. “This can get in the way of adoption for countries outside of the Anglophone world, especially so as it is a technical document,” he said. But as adoption of the Code of Conduct grows, and the regulation of greenhouse gas emissions across all industries in the EU looms, the group unanimously agrees that it is in operators’ best interests to follow the program, sooner rather than later.

Issue 25 • December 2017/January 2018 39


Latin America

Peru's standards drive

Virginia Toledo
Editor LATAM

A new initiative could improve the state of digital infrastructure in Peru, reports Virginia Toledo

Peru's best practice
* Created by the National Institute of Quality (INACAL)
* INACAL operates under the Ministry of Production and reports to ISO
* Six working groups: energy and protection, architecture and construction, air conditioning, telecommunications, security and governability
* Contributors include end-users, suppliers and consultants
* It will consider standards from ANSI/BICSI, ANSI/TIA, Uptime Institute, ICREA and ISO/IEC, among others
* The standard is due by early 2018

40 DCD Magazine • datacenterdynamics.com

A Peruvian technical standard for "best practices in design, construction and implementation of data centers" has taken its first steps, with the creation of a technical committee that aims to spread good engineering approaches among industry representatives. The goal is to professionalize a sector that currently suffers from inefficient designs and implementations, although much progress has been made in raising awareness of the importance of following standards in building robust data centers.

A technical committee was created in April 2017, but the movement was born at least four years earlier, according to Juan Francisco Cisneros, technical secretary of the CTN center for data and environments, organized by the National Institute of Quality (INACAL). This is the Peruvian standards body which reports to the International Organization for Standardization (ISO) but operates under the local Ministry of Production.


Before the committee could be created to take on the job of developing a technical data center standard, IT professionals and others responsible for data centers had to be made aware of the importance of the issues that are involved. "A favorable climate had to be created in the country for maintaining the availability, reliability and security of ICT operations," said Cisneros. This inevitably required events and training.

INACAL made certain requests, which were captured and organized, to help shape the entity that would lead on the issue. This required administrative work as well as plenty of paperwork and technical meetings, arguing for the importance of a local critical infrastructure standard, and the creation of a technical standardization committee to make it a reality.

The technical committee is made up of end-users, as well as industry members (manufacturers and suppliers), and technical experts (consultants, specialists and academics). It set up six working groups: energy and protection, architecture and construction, air conditioning, telecommunications, security and governability. In each working group, a team led by a coordinator will collect and analyze technical information to report on its specific subject area, with input from external experts, as well as commercial companies.

The teams plan to interview and visit those behind major private and public sector data centers in Peru, to learn about the needs, experiences, concerns and expectations for the future standard. The ideas will be made public so the industry can see the progress and the shape of the deliverables.

The objective is to create a mandatory technical standard to tighten up the design and construction of data centers, which can currently be somewhat "informal" in Peru - both in the public sector and private enterprise. Foreign companies will have to comply with these technical standards for good practice and relate the ideas emerging from the global standards community to the local reality in Peru, and the country's current legislation.

The standard in question will consider both resilience and energy efficiency, and will be in line with existing standards in the global market, including those from ANSI/BICSI, ANSI/TIA, Uptime Institute, ICREA and ISO/IEC, among others.

"Those of us who have made the links for the authorization believe that gunpowder no longer has to be invented - you don't have to start again from scratch," Cisneros said. "We will try to get the best out of them and adapt them to the reality of the country, since there are several factors that influence the designs and implementations." Peru has coast, sierra and jungle, as well as places more than 5,000m (16,400ft) above sea level, he pointed out, and its own distinctive energy and telecommunications issues, among others.

It has not yet been decided whether all data centers will have to be certified, because the issue has not yet been addressed in its entirety in the working sessions of the technical committee. The question will be answered as more details of the scheme emerge. "It is clear that as best practices for the continuity of the business and operation of the infrastructure, the ideal is to have a certified data center in the future, and another subcommittee will be formed to see the issue of the certification of a data center," says Cisneros.

The standard will most likely include several different levels, with four or five under consideration. This may be necessary because "there is a big gap in ICT expertise and infrastructure between the central, regional and local governments," says Cisneros. "The idea is that the norm covers the expectations of all public and private organizations according to the reality of the country," he added.

The committee expects to have its standard ready as this article goes to press, by the end of 2017 or in the first quarter of 2018. It will then be delivered to the permanent technical committee, which will review it and consider any contributions from professionals as well as any improvements that INACAL might suggest, to make sure this complies with present and future laws.

As more details emerge, it will be decided whether all data centers have to be certified

Best practice standards are often maligned: by their nature, they do not contain leading-edge developments or technologies, but rely on trusted technology with a history of success. However, they have a role in spreading technology to a wider audience as a market matures. A Peruvian best practice standard will be a sign of technical strength for Peru.

Issue 25 • December 2017/January 2018 41


Asia: the hungry markets

The data center markets in the Asia Pacific region are keen to grow. Peter Judge reports

Peter Judge
Global Editor

The Asia Pacific (APAC) data center markets are poised to become as big as those in the US and Europe, according to a report from CBRE. If current trends continue, the APAC region could require a further 140MW of data center supply right away, which equals roughly two to four million square feet of raised floor space.

In financial terms, this means data center revenues in the region would double to $32 billion by 2022, according to a Frost & Sullivan estimate. Total hyperscale cloud power demands in the region could reach 2,000MW by 2020, adding around 34 million square feet, says CBRE. The real estate consultancy notes that outsourcing is vital for large corporates to keep pace, and Asia is catching up after a slow start. Research by BroadGroup suggests only around 12 percent of Asia Pacific-based enterprises had outsourced their data centers in 2013, but this number doubled in the last four years to almost 25 percent - nearly the same as in Western Europe.

These are dramatic figures. Behind them is a region which made a late start but could capitalize on that by jumping to a modern, cloud-based infrastructure: “Asia Pacific’s relatively late adoption of cloud computing could turn out to be a blessing rather than a curse, depending on how the transition is planned and executed,” says the report. As with the US and Europe, Asia has gateway cities leading the way: Singapore, Tokyo, Hong Kong and Sydney. These are the places where data centers serve the surrounding territories, as opposed to the more locally-based data centers elsewhere. Tokyo is the rising star, CBRE believes, with the potential to become a market similar to Northern Virginia in the US.

Asia’s gateways are still a little smaller than their equivalents elsewhere. The data center sector in each of Asia’s gateway cities consumes an average of 213MW, compared with 253MW for the equivalent sector in the US and 249MW for Europe. The biggest difference between APAC and the other regions is fragmentation, which stands in contrast to the relatively homogeneous markets of the US and Europe: “In Asia, there are many different sets of laws and regulations to negotiate, meaning that formulating the right strategy based on reliable intelligence and advice will be critical for service providers and end users.” Another difference is that Asia has a very strong manufacturing sector - the region is the leading producer of physical goods across many industries. This means that the move to so-called Industry 4.0, where factory floors are automated and sensor-driven, will be crucial, driving demand for Edge services.



And finally, APAC has a huge population. Including China and India, the Asia Pacific region is home to 60 percent of the planet's inhabitants. As technology moves to an era of personalization, there's clearly room for a huge expansion that can happen there.

To give an idea of the potential, and the current state of development: the APAC region currently has around 27MW of data center capacity for every million residents in its cities. This is less than half the figure in Europe, which has 56MW per million of the urban population. This is the reason that CBRE thinks the four gateway cities in Asia Pacific could easily grow by 16 percent, taking on 140MW of new supply, equivalent to between five and nine new data centers in the Tier 1 cities.

In fact, more than two-thirds of the new data center supply will be built outside the Tier 1 cities - the majority in China and India. Hyperscale development will be particularly demanding. Cisco believes there were 86 data centers operated by 24 hyperscale operators across the region at the end of 2016, which is expected to nearly double to 160 by the end of 2020. Such companies tend to build 500,000 sq ft (46,451 sq m) per site, and need multiple sites for resiliency. They tend to require a minimum of two locations within a market, with mature markets needing three locations, and the possibility of phased expansion.

As a more immature market, where there's still unmet demand, rents for data centers in Asia Pacific are high: some 20 percent to 40 percent more than those in the US, CBRE says. However, in 2016, as the market matured, they declined pretty steeply - by some 8.8 percent. In the long term, rents should come close to parity, as the biggest investment is in the mechanical and electrical equipment which has a consistent global cost. However, while there's not enough supply to meet demand, the rents will stay high in some places: rents in Japan actually increased by five percent in 2016.

CBRE predicts that more localized data centers will develop in APAC, for two reasons. First, the far-flung geography of the region means customers in Australia and Singapore can't be served from the same facility. And second, the privacy concerns will bring a demand for data sovereignty, where citizens' data remains in their own country.

Given all these factors, it's no wonder that the rest of the world views the Asia Pacific region as a hotbed of data center opportunity, says CBRE. New players are lining up to make a bid to enter the market, but generally find the competition is intense, and often need to partner with local companies - if only for connectivity (Equinix and China Unicom). Existing APAC players are also using their local strength to launch into the West: for instance, Alibaba has a facility in California. There are risks involved with any market that is changing this rapidly, but the Asia Pacific has the potential to become the world's leading data center region.

From Niche Space To Mainstream Market: Writing The Next Chapter In The Asia Pacific Data Centre Evolution is available from bit.ly/CBREresearchgateway

SMART SWITCHGEAR & DATACENTER SOLUTIONS

WWW.THOMSONPS.COM

1.888.888.0110

Issue 25 • December 2017/January 2018 43


>Awards | 2017
WINNERS 2017

WINNER: Vapor IO
Living at the Edge
Sponsored by Panduit
Project Volutus
Project Volutus puts a data center inside a cell tower, and offers a retail colocation model, using Vapor IO's cylindrical edge design. A proof of concept in Austin, Texas is being rolled out in multiple cities across the USA.

WINNER: ICICI Bank
The Smart Data Center Award
Sponsored by Uptime Institute
DCIM and Analytics Project
ICICI's data center drives more than 14,000 ATMs and 4,700 branches worldwide. The bank has combined IoT based data center environmental management, centralized building management (BMS), adaptive capacity management and predictive analytics into a comprehensive software defined data center tool.

WINNER: İşbank
The Infrastructure Scale-Out Award
Sponsored by STULZ
Project ATLAS TURKEY
İşbank has dedicated 800,000 man-hours to a state-of-the-art 38,500 sq m data center in Istanbul, completed in June 2017. It is the first Uptime Institute Tier IV constructed facility in Turkey.

Issue 25 • December 2017/January 2018 45


WINNER: Shea McKeon
Young Mission Critical Engineer of the Year
Sponsored by Microsoft
Shea McKeon from Morrison Hershfield
Many say that one's problem solving isn't best tested until 'the *proverbial* hits the fan.' When Superstorm Sandy hit New York, a disaster that had epic repercussions for the local data center market, Shea was on call and selflessly braved a flooded home and the chaos of an unprecedented natural disaster to serve his local client.

WINNER: Octave Klaba, OVH
Business Leader of the Year
Sponsored by DigiPlex
Octave Klaba, OVH
Octave Klaba is founder and CEO of OVH, a European cloud provider specializing in open technologies. In particular, OVH has spearheaded the use of services based on the OpenStack open source cloud platform. OVH also picked up VMware's public cloud, vCloud Air, when VMware tired of it. After all, you should always have options!

WINNER: Dean Nelson, Uber Compute
Outstanding Contribution to the Data Center Industry
Sponsored by Mercury

As well as founding the Infrastructure Masons group of distinguished data center engineers, Dean built data centers for organizations including eBay. He is now Head of Uber Compute.

WINNER: MareNostrum - Barcelona Supercomputing Center
The World's Most Beautiful Data Center
Sponsored by Quality Uptime Services

MareNostrum is the name of the main supercomputer in the Barcelona Supercomputing Center (BSC). It is the most powerful supercomputer in Spain and is housed in the most stunning location. The Chapel Torre Girona in Barcelona dates from the 19th century and MareNostrum has been operational since 2005, demonstrating that beauty doesn’t age.

46 DCD Magazine • datacenterdynamics.com


Awards Winners 2017

WINNER: Teraco
Energy Efficiency Improvers Award
Sponsored by StarLine
Isando Data Centre 7, in collaboration with STULZ
Teraco is a vendor neutral data center operator in South Africa making energy efficiency one of its highest priorities. Working closely with Stulz, it upgraded the Isando Data Centre 7 in Johannesburg to achieve PUE values as low as 1.1 in a very challenging climate.

WINNER: Hydro66
The Data Center Eco-Sustainability Award
Sponsored by Schneider Electric
Hydro66 Data Center, Sweden
Hydro66 is led and financed by Internet industry veteran David Rowe, founder and CEO of Easynet. Its first data center is located in the leading cloud and data center cluster in the Nordics and is powered using locally generated green hydropower.

WINNER: Microsoft
Mission Critical Innovation Award
Sponsored by CBRE
The Stark and Simple Data Center
Microsoft's 'Stark and Simple' design collapses the entire energy supply chain into a small fuel cell at the top of each rack. The result is a very efficient generator that produces electricity and by-products that can be reused, like clean water, high grade heat, and pure CO2.

WINNER: Sardina Systems, Estonia
The Open Data Center Project Award
Sponsored by Rittal
FishOS
FishOS is the world's first energy-optimizing and utilization-improving automation system for OpenStack clouds. It addresses the full lifecycle automation requirements of OpenStack, encompassing deployment, operation and upgrade phases. It contains OpenStack software and has been validated to provide API compatibility for OpenStack core services.

Issue 25 • December 2017/January 2018 47


Awards Winners 2017

WINNER: Repsol
Cloud Journey of the Year
Sponsored by Anixter
Hybrid Ready Project
Repsol's IT transformation project has successfully moved the company from a traditional IT operating model to a service-oriented cloud model, making the business more cost effective and agile in meeting global capacity demands.

WINNER: Indra Sistemas S.A.
Data Center Operations Team of the Year - Enterprise
Sponsored by Future-tech
Indra DCSO, Keystone Services Delivery Project
Indra Sistemas - the Spanish information technology and defense company - convened a team to develop a highly successful infrastructure management capability at the San Fernando Data Center in Madrid, originally commissioned in 2011.

WINNER: DigiPlex
Data Center Operations Team of the Year - Colo+Cloud
Sponsored by Eaton
DigiPlex Norway - Ulven Upgrade Team
Ulven was the oldest and worst energy performing of all DigiPlex's data centers. In summer 2016, a team was convened to improve the facility's performance, whose work resulted in annual energy cost savings of £400,000 for a capital outlay of £1.2 million.

WINNER: Page
Design Team of the Year
Sponsored by Vertiv
RagingWire Dallas TX1 Data Center
RagingWire selected Page to partner on the design of a new 1 million sq ft TX-1 campus in Garland, Texas. Whilst the client has traditionally used in-house design and construction personnel, this unique hybrid integrated project delivery model was developed to work at scale, which improved design, quality and speed-to-market, and saved costs.

WINNER: RISE SICS North AB
Best Data Center Initiative of the Year
Sponsored by DCPRO
RISE SICS North Research Data Center
RISE SICS is a leading research institute for applied information and communication technology in Sweden, founded in 1985. Its new ICE research data center supports universities and industrial companies with an experimental environment for cloud and infrastructure technology.

48 DCD Magazine • datacenterdynamics.com


> Energy Smart | Stockholm

RESPONDING TO THE DIGITAL INFRASTRUCTURE ENERGY CHALLENGE
March 13, 2018 // The Brewery Conference Center, Stockholm
Limited sponsor and exhibitor opportunities - contact chris.hugall@datacenterdynamics.com

#DCDEnergySmart

For more information visit www.DCD.events


Configure your turnkey private cloud solution.

Discover it. Rittal Edge Data Center. Build IT environments to meet the challenges of Industry 4.0 and the Internet of Things quickly and easily - with standardised, preconfigured infrastructure modules from Rittal. A Rittal Edge Data Center comprises two, four or six Rittal TS IT racks, plus components for climate control, power distribution, UPS, fire protection, monitoring and access protection, tailored to the specific application.

www.rittal.com


Viewpoint

You wanna play rough?

"Nice data you have there. It would be a shame if something happened to it."

In the past ten years, cybercrime has transformed from something mostly seen in (terrible) works of fiction - films like Hackers and books like Digital Fortress - into a subject that concerns all levels of corporate leadership, national governments and even my mother. But in 2017, we've crossed a new boundary.

You see, previously, one of the most difficult aspects of cybercrime was monetization - stealing millions of credit card numbers or valuable intellectual property is the easy part; actually getting paid for the effort is another matter. In the case of credit card details, you would need to clone the cards and send armies of 'cashers' to multiple ATMs to get your hands on the money. Another method would have cyber criminals enlist 'stuffers' to purchase goods from online vendors using stolen data, send them to several 'drop' sites, then repackage and resell them. In the case of IP, you would need to find a corporate buyer prepared to negotiate with an anonymous party instead of going to the police - getting rid of stolen goods is always risky.

Considering just how much work goes into getting paid for hacking, it is little wonder that stolen customer data is mainly used for shady marketing purposes - and that's not going to put anyone's kids through college. The same data could also be used for various forms of phishing, but that takes time, effort and ability.

This landscape has changed with the arrival of a new threat: ransomware. In essence, ransomware is a classic shakedown or extortion tactic, employed by criminal entrepreneurs since time immemorial - the model where your assets aren't actually damaged as long as you pay, something they call 'pizzo' in Italy. It is the equivalent of saying: "Nice data you have there. It would be a shame if something happened to it."

In May 2017, Britain's National Health Service was hit by ransomware, with the attack affecting hospitals, surgeries and pharmacies. In the US, Butler County in Kansas, Montgomery County in Alabama and Mecklenburg County in North Carolina were among those suffering serious disruption after ransomware infected local government systems. We've seen similar attacks in Scotland and Ukraine, India and Japan.

Ransomware is gaining popularity because it enables cyber criminals to be paid directly, using one of the countless new cryptocurrencies that are also making the news [don't invest in Bitcoin, it's a bubble]. In the past, cyber criminals would go after online shoppers and social media mavens. Now, they are going after our sick and disabled, our public servants and our elderly relatives.

We need to fight back. We need proactive security policies and thousands of young specialists who can track the attackers back to their homes. We need judges who understand the cyber domain. We need to reshape our firewalls into minefields. They send one of ours to a virtual hospital, we send one of theirs to a virtual morgue.

In this context, the recent announcement by NATO is a step in the right direction. The alliance is considering a more 'muscular' and 'aggressive' approach to state-sponsored hackers, especially if they come from Russia. We should all learn from NATO, and break their virtual legs.

Max 'Scarface' Smolaks
News Editor



Structural Ceiling Grid by Tate

Versatile. Flexible. Engineered.
• Supports a wide selection of heavy data center accessories, including cable trays, bus bars, and containment
• Continuous threaded slot allows for unlimited flexibility
• Decreases both installation time and cost
• Multiple extrusion types available - including 3⁄8”, 1⁄4”, 1⁄4” hidden slot, M10, and light structural support

A Kingspan Group Company


Access Floors | Airflow Panels & Controls | Containment | Structural Ceiling Grids

www.TateInc.com

800-231-7788


Will your data centre handle your next big idea?

In a connected world, IT service availability is more important than ever. EcoStruxure™ for Data Centers ensures that your physical infrastructure can quickly adapt to the demands of the cloud and the edge — so you’ll be ready for that next big idea. Join the conversation #WhatsYourBoldIdea

schneider-electric.com/ecostruxure-datacenter

©2017 Schneider Electric. All Rights Reserved. Schneider Electric | Life Is On and EcoStruxure are trademarks and the property of Schneider Electric SE, its subsidiaries, and affiliated companies. • 998-20120074_GMA-GB

