June/July 2018 datacenterdynamics.com
LAND OF THE GIANTS
Resilience matters
But do you go with the public cloud or rely on 2N hardware?
Keeping cool
A special 14-page supplement on the fascinating world of cooling
Nigeria's awakening: MainOne's CEO explains how she's bringing a nation online
You say challenge. We say opportunity.
RocketRibbon™ Extreme-Density Cables Can your network support emerging 5G technology, where high-fiber availability is critical? We created RocketRibbon™ extreme-density cables, designed for data center and telecom applications, to help meet your network performance challenges. Experience industry-leading density from a cable that is also easy to manage, identify, and trace. And suddenly, your network challenges may feel more like opportunities. Are you 5G ready? Visit www.corning.com/rocketribbon/dcd to learn more about our cabling solution for emerging network requirements.
Are You Corning Connected? © 2018 Corning Optical Communications. LAN-2317-AEN / April 2018
Contents
ISSN 2058-4946
June/July 2018
6 News - Apple's Irish woes; three colos reign supreme
14 Calendar of Events - Keep up-to-date with DCD event and product announcements
16 Land of the giants - Google, Amazon, Microsoft and Facebook share what it takes
23 Reinventing retail - Robots and open source software lead a shopping revolution

CEO FOCUS
26 Funke Opeke, MainOne - When she launched MainOne in Nigeria eight years ago, she faced a region lacking basic telecoms infrastructure. Now, the company provides connectivity to most of the country, as well as colocation, cloud, and hosting services

31 The cooling supplement - A look at the biggest data center markets, and how to cool their facilities
34 The hottest and coolest markets - These locations are a hotbed of activity, defining the industry's cooling solutions
37 Liquid cooling at the edge - After HPC, will edge computing bring a new wave of liquid cooling systems?
40 The great refrigerant shortage - Unintended consequences arise from greenhouse gas-busting EU regulations
42 Getting into hot water - The story of how SuperMUC pioneered hot water cooling, and what's next
46 A new cooling frontier - Take advantage of a scientific phenomenon: Send heat into space
49 2N versus hybrid resilience - Build it on the cloud, or build it double? Both can lead to trouble
52 Powering the golden state - The California Energy Commission talks about a renewable future
54 Mark Shuttleworth's candid moment - The open source community has a new bad boy
From the Editor
Meet the team
The hyperscale giants operate in a league of their own - behemoths like Google, Amazon and Microsoft have had to overcome challenges others have never even imagined. On p16 we talk to the big three about their data center designs, their capacity planning efforts, and what they expect from the future. Plus, on p20, Facebook shares why it has just one data center operator per 25,000 servers - the secret is machine learning.

With their vast scale, cloud companies claim to offer high reliability, while others insist that owning redundant infrastructure is a better approach. The truth lies somewhere in between (p49).

"The ultimate goal is to create the illusion of infinite capacity"

Whatever the method, the facility hosting your data will need to be cooled. In our supplement, we detail the various approaches companies have taken to beat the heat, and discuss the challenges still to come (p31). Perhaps liquid cooling will make a resurgence at the edge (p37), or operators may decide to beam their excess heat into space (p46). Alternatively, they could look to a German supercomputer for inspiration, and embrace the concept of hot water cooling (p42).
Taking an unconventional approach to cooling seems like the smart choice, with new EU regulations putting the squeeze on traditional refrigerant gases, forcing change (p40). How to adapt to these shifts? Depends on where you are. The biggest data center hubs each have different benefits and disadvantages that you need to be aware of (p34).

It may not be the largest market just yet, but Africa's online presence is growing fast. In the west, much of the transformation can be credited to one person - Funke Opeke, MainOne's CEO. What began as a submarine cable project evolved into an ambitious plan to turn Nigeria into a digital state (p26).

California, which is almost a nation unto itself, has had a slightly easier ride, with the state being one of the most digitally advanced locations on the planet. But now it is dealing with a different issue: how to power the servers, as well as everything else, using renewable energy. The California Energy Commission tells us how it hopes to pull off the monumental task ahead (p52).

While the US state's goals seem achievable, those in retail have a far more serious challenge: survival. In the face of the world's largest cloud company's 'other division,' the rest of the industry risks oblivion. Adopting new tech is the only way to prosper, with some pursuing open source solutions, while others are turning to robot revolutions (p23).
Land of the giants: The hyperscale story
628 - The number of hyperscale data centers Cisco believes will exist by 2021. Owned by just 24 companies, these data centers will account for 53 percent of all servers - nearly double the current share. By then, Cisco expects 94 percent of enterprise workloads to be hosted in the cloud.
Peter Judge
DCD Global Editor
bit.ly/DCDMagazine

Global Editor Peter Judge @Judgecorp
News Editor Max Smolaks @MaxSmolax
Senior Reporter Sebastian Moss @SebMoss
Reporter Tanwen Dawn-Hiscox @Tanwendh
Editor LATAM Virginia Toledo @DCDNoticias
Assistant Editor LATAM Celia Villarrubia @DCDNoticias
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Head of Design Chris Perrins
Designer Mar Pérez
Designer Ellie James
Head of Sales Yash Puwar
Global Account Manager Aiden Powell
Conference Manager, EMEA Merima Dzanic
Global Conference Director Rebecca Davison
Conference Director Giovanni Zappulo

Head office: DatacenterDynamics, 102–108 Clifton Street, London EC2A 4HW, +44 (0) 207 377 1907

Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color, shown below.
Training | Debates | Intelligence | Events | Awards | CEEDA | Dive deeper

PEFC Certified - This product is from sustainably managed forests and controlled sources. PEFC/16-33-254 www.pefc.org
© 2018 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@ datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.
High Rate, Long Life Batteries for Critical Power - Data Centers
RELIABLE When you need long term reliable backup power, Narada High Rate and High Operating Temperature batteries deliver. Engineered and manufactured in state of the art facilities, Narada provides solutions to meet your Critical Power needs.
ISO9001/14001/TL9000 Certified Quality Backed by Industry Leading Warranties
Narada...Reliable Battery Solutions www.mpinarada.com - email : ups@mpinarada.com - MPI Narada - Newton, MA Tel: 800-982-4339
Whitespace
A world connected: The biggest data center news stories of the last two months

News in brief

Etix Everywhere receives 'significant' capital investment - Says this will allow it to pursue its vast expansion plan

Telehouse announces London data center expansion - Opening two more floors in the multistory facility
Nlyte launches 'cognitive' DCIM with IBM Watson - Announced at DCD>Enterprise, the new 'cognitive' platform promises the ability to predict upcoming issues
Digital Realty's CIO and SVP of sales and marketing to step down - Scott Peterson, one of the company's cofounders, was responsible for executing over $17 billion in investments

Cologix appoints Bill Fathers as Chairman & CEO - Fathers has spent the last 20 years working in the data center industry

US launches world's most powerful supercomputer - IBM and Nvidia's Summit system is live
CyrusOne plans two data centers in Dublin
The Texas-based data center operator will build both facilities two stories high, accompanied by an office block over three levels. Altogether, the data centers will span almost 35,500 square meters (382,120 sq ft). The company's chief executive, Gary Wojtaszek, told The Times that he hoped to "deliver capacity" in 2019, also stating that the company's European expansion constituted a "top priority."
Apple cancels $1bn Irish data center after planning delays

Apple has canceled its planned €850m (US$1bn) data center campus in Athenry, County Galway, Ireland, after years of delays. The project was announced in early 2015 alongside a planned Danish data center, but was beset by legal challenges that stymied development. The data center in Denmark, meanwhile, opened in 2017 and is set to be joined by another site in 2019.

Galway gave permission for the site to be developed as a campus in September 2015, but more than 20 appeals were soon lodged. Multiple hearings were held. In August 2016, planning authority An Bord Pleanála approved the data center, as well as an electrical substation, but that too was challenged.

Local residents Allan Daly, Sinead Fitzpatrick and Brian McDonagh believed that the Environmental Impact Assessment (EIA) that An Bord Pleanála carried out was flawed, as it only looked at the first data center, and not the potential eight facilities that Apple was considering. Daly and Fitzpatrick also raised concerns over power consumption, greenhouse gas emissions and the site's size, and claimed that it "is not of strategic importance and is not supported by regional policy."

Over a year later, despite efforts to fast-track a decision and after a delay in finding judges for the case, the Commercial Court agreed with An Bord Pleanála. Justice Paul McDermott ruled that the EIA had been carried out in an appropriate manner, and that Dublin-based McDonagh was not living physically close enough to the site. McDonagh had been trying to build a data center in Wicklow, a town neighboring Dublin on the east coast of Ireland. He had also previously tried to sell the land to Apple.

Daly and Fitzpatrick then attempted to appeal the decision - a process that was itself delayed by Hurricane Ophelia - but were refused permission by the High Court. In December, they applied to the Supreme Court for permission to appeal the refusal. On the 3rd of May 2018, the Supreme Court agreed to hear the appeal. Now, Apple has said it will no longer fight the case. bit.ly/ADelayaDayKeepsTheAppleAway
Facebook plans 1m sq ft data center near Salt Lake City

Facebook has confirmed that it plans to build a $750m, 1,000,000 square foot (92,900 sq m) data center campus on a 487.5-acre property in Utah's Eagle Mountain, following weeks of secret deliberations. Operating under the alias of Stadion LLC, the social networking and advertising giant pushed for hefty tax incentives, with local bodies criticizing the speed of the process, the company's anonymity, and the size of the tax breaks. Eventually, however, each group capitulated and approved the tax package.

For the latest project, Facebook will receive $150m in tax subsidies to offset the cost of building the promised power, water, sewage and road infrastructure, as well as a 20-year personal property exemption estimated at $375m over the entire period, and an additional $375m break on the value of its real estate investment. In return, Facebook agreed to invest $100m in local infrastructure, as well as the $750m data center, which it says will be powered exclusively by renewable energy sources. This is not the first time Facebook has considered opening a data center in Utah.
Back in 2016, the company played the state off against New Mexico as it tried to exact the best deal from each location. After again keeping itself secret, it lobbied for tax break legislation at a state level, and fielded increasingly enticing offers from the communities of Los Lunas in New Mexico, and West Jordan City, Salt Lake County, Utah. But when Utah’s tax break pitch went for a vote with Salt Lake County, SLC’s Mayor and The State School Board, it was rejected, leaving Facebook to choose New Mexico. In return for the jobs, construction, and an annual payment from Facebook that starts at $50,000 and rises to less than $500,000, Los Lunas will not collect any property taxes for the next 30 years. Facebook will also receive tax breaks on the cost of computer equipment it will install in the facility. While the tax breaks were significant, an impact report funded by the Village of Los Lunas paints a positive picture. Should Facebook expand the site to six data centers, as it has said it is considering, total economic output of the site over the next 10 years is projected to reach $1.88 billion. “Although the project is expected to require a considerable amount of public infrastructure, the project is also expected to generate significant new revenues for the Village of Los Lunas,” the report stated. bit.ly/LessTaxMoreSelfies
Vox Box
Svein Atle Hagaseth, CSO, Green Mountain AS
What do you think of Norway's new data center strategy?
We're very happy. Ever since the oil and gas price dropped in Norway about four or five years ago, the government has looked for alternative sustainable industries. And now they have announced that data centers are something they would specifically like to have as a growth engine moving forward. There are going to be a lot of tax incentives - electricity tax cut by 97 percent, property tax gone. But also connectivity investments. bit.ly/NorwaysNewPlan
The top three colocation providers keep growing faster than the market

The world's largest colocation providers are continuing to grow their revenues faster than the rest of the market, increasing their market share and consolidating the industry in the hands of the few. According to Synergy Research Group, Equinix, Digital Realty and NTT all had a bumper quarter, due in part to their "aggressive" merger and acquisition strategies.

"Enterprises are pushing more of their data center operations into colocation facilities and are also aggressively driving more workloads onto the public cloud, where cloud providers themselves use a lot of colocation facilities," John Dinsdale, chief analyst and research director at Synergy, said.

According to the study, Equinix, Digital Realty and NTT maintained their market lead by a comfortable margin: in the first quarter of the year, Equinix controlled 13 percent of the worldwide market, Digital Realty was responsible for around eight percent, and NTT around six percent. bit.ly/ThePowerTrio
Ari Kurvi, Data Center Manager, Yandex Oy
Does Yandex Oy's data center reuse waste heat?
We started to reuse heat soon after our opening three years ago. At the moment we can reuse 30 percent of the energy that we take in; we sell it to the local community and to the grid we serve. We export about 17 gigawatt hours annually and are expanding our system to reach 30. It's profitable - we see a return in three years. Plus it helps our reputation as an innovative company. bit.ly/WasteHeatHeatsUp
Jury rules in BladeRoom's favor in IP theft case

Emerson Electric Co. is set to pay $30m in damages to UK-based prefabricated data center manufacturer BladeRoom, after a jury in California agreed that the company copied the designs used to build Facebook's data center in Sweden. BladeRoom will receive $10m in compensation for the profits lost, and $20m for Emerson's "unjust enrichment." While Facebook settled the case in April, Emerson and BladeRoom spent 20 days in court. The jury took less than a day to reach a verdict, concluding that Emerson either disclosed or used two of the four trade secrets in the construction of Facebook's Luleå facility, acting in a "willful and malicious" way.

The outcome of the case rested on six crucial documents which appear to show that Facebook and Emerson conspired to use BladeRoom's designs without commissioning its services. The documents include an email in which the pair agreed to have a "direct debrief" after meetings with BladeRoom, an internal document that stated their intention of working "with Emerson, to build a BladeRoom solution" and an email between Emerson employees that discussed "leveraging what BladeRoom has done." Emerson said that the company would "definitely" appeal the verdict. bit.ly/WhatWouldYouSpend30mOn
Disused coal plant near Chicago eyed for massive data center campus

A former coal-fired power plant in Northwest Indiana could be turned into a hyperscale data center, as a group of developers are working on plans to build a million square foot campus, with its own associated renewable energy generation facilities. Each of the 100,000 sq ft (9,290 sq m) facilities would be equipped with solar arrays, as well as additional panels to be installed on the ground. Developers said that they would "likely" build wind turbines as well, and that they are even exploring the possibility of generating hydropower by installing turbines in Lake Michigan.

The site sits on the shore of Lake Michigan, and remains home to several electrical substations. The property is located fifteen miles outside of Chicago, but in the state of Indiana - hence the coal plant's name, State Line Energy - meaning that the project is entitled to approximately $20m in tax subsidies. bit.ly/MakingCoalGreatAgain
Kaspersky Lab to build Swiss data center in bid to ease national security concerns

Global cyber security firm Kaspersky Lab has confirmed plans to transfer the data of a majority of its customers to a new data center located in Switzerland. Alerts from the international intelligence community suggested that the company has been collecting sensitive customer data on behalf of the Russian establishment. Various government departments in the UK, US and The Netherlands were subsequently banned from using the software. Kaspersky Lab continues to deny the allegations, but has acknowledged that the international community's concerns must be addressed.

The facility, due for launch in late 2019, will be built in Zürich at a cost of £12 million ($16.26m). The company is also planning to relocate its programming team to Switzerland, where it will open a "transparency center" to allow foreign bodies to verify and vet its anti-virus software as it is being developed. Additionally, an independent organization will be created to supervise the implementation of new processes throughout Kaspersky Lab.

The UK's National Cyber Security Centre (NCSC) chief executive, Ciaran Martin, stated that the move constituted a "step in the right direction," but that it would require further assurances from the company to drop its ban on the software. bit.ly/OutsideRussiaWithLove
Russia’s RSC deploys hot water cooled supercomputer
For more on hot water cooling, see p42
Russian supercomputing company RSC Group has installed a hot water-cooled high performance computing (HPC) system at the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast. Named after the former Director of JINR's Computing and Automation Lab, Nikolay Nikolayevich Govorun, the system is expected to have a peak performance of 500 teraflops (double precision).

"We are glad that RSC's innovative HPC solutions will help to advance the Russian science and to improve the efficiency of international scientific research cooperation that has a long history in JINR," Alexander Moskovsky, CEO of RSC Group, said. bit.ly/HavingOblast
DISA to close St. Louis data center

The United States Department of Defense combat support agency, the Defense Information Systems Agency (DISA), will close its data center in St. Louis, Missouri. As part of a wider federal data center consolidation initiative, the agency will also cut staff at a number of its facilities around the world, shifting to a "lights dim" status. DISA now says it expects to realize cost savings of more than $695 million over the next 10 years. bit.ly/DimmingoftheLight
Fire suppression failure at DigiPlex brings down Nordic Nasdaq

Helsinki's Nasdaq Nordic stock exchange was closed for most of April 18 due to a problem in a DigiPlex data center. As a result, trades were halted in Copenhagen, Helsinki, Reykjavik, Riga, Stockholm, Tallinn and Vilnius. Oslo's exchange was spared, however, as it operates independently from Nasdaq.

A gas-based fire suppression system was triggered at the DigiPlex data center in Väsby, 30km (18.6 miles) north of Stockholm, taking a data hall occupied by the Nasdaq Nordic stock exchange offline. DigiPlex told DCD that no other customers were affected - and there was no actual fire.

Nasdaq Nordic migrated its operations to the site in 2015, as part of a plan to diversify the technology services it offers to the global financial services industry. The data center's backup system should have kicked in immediately, but according to Maria Rekola of Finland's Financial Supervisory Authority, it took some time for this to happen. She stated that the body would be leading an investigation into what caused the delay. Meanwhile, DigiPlex said it was taking the issue "very seriously" and that it was "performing a thorough root cause analysis," for which it brought in an external specialist.

Fire suppression systems are often guilty of bringing down data centers. Gaseous, inert fire suppression systems can cause issues as the eruption of gas sends a shockwave, which can damage a data center's equipment. In the past three years, such incidents have struck data centers belonging to ING Bank, an unnamed university in London, Glasgow City Council, and Microsoft. bit.ly/TheStopMarket
'Power event' at AWS data center disrupts US-EAST-1

Amazon Web Services suffered a disruption to its operations in the US, with a "power event" affecting one of its cloud data centers in Northern Virginia, comprising the US-EAST-1 region. A single Availability Zone saw connectivity issues, impacting services like RDS, Redshift, WorkSpaces, EC2 and EBS for approximately 30 minutes. The company warned that the outage led to hardware failures, meaning that some customers that were keeping their workloads in a single Availability Zone might never be able to recover their instances. The issue was made worse by the fact that, around the same time, customers were experiencing minor problems with the US-EAST-2 region located in Ohio.

The problems with US-EAST-1 started at 3:13 PM PDT on June 1st, with some customers unable to reach their compute instances. "We can confirm that there has been an issue in one of the data centers that makes up one of US-EAST-1 Availability Zones. This was a result of a power event impacting a small percentage of the physical servers in that data center as well as some of the networking devices," a report on the AWS Service Health Dashboard said. "Customers with EC2 instances in this availability zone may see issues with connectivity to the affected instances. We are seeing recovery and continue to work toward full resolution." Numerous AWS customers took to Twitter to apologize to their own users for the outage.

Northern Virginia, where US-EAST-1 is located, is the largest data center market in the United States. It is home to hyperscale facilities run by Google and Microsoft, and an upcoming $1 billion data center campus from Facebook. In terms of colocation, local operators include Iron Mountain, Sabey Data Centers, COPT, Infomart Data Centers, DBT Data, H5 Data Centers, RagingWire, and Equinix. bit.ly/PowerEventThenPowerWent

Fire suppression system causes State of New Jersey data center outage

The triggering of a fire suppression system in the State of New Jersey's data center brought down local government agencies for more than 24 hours, the New Jersey Globe reports. One of the affected agencies was the Election Law Enforcement Commission (ELEC), meaning that campaigns were unable to file contribution reports, which must legally be submitted within 48 hours of expenditure, ahead of the primary elections taking place on May 5th. Although the deadline was not enforced for those who failed to submit reports within the obligatory timeframe, there are no compensatory measures planned to make up for any lost advantages which could have been gained through the disclosure of contributions. These can make all the difference in such instances, as voters can base their decisions on which interest groups back candidates.

Systems were reported to be back online the next day, and New Jersey ELEC deputy director told StateScoop that any notices filed on the day the outage took place would be posted online before 5pm EDT.

The incident in New Jersey is also a reminder that democratic elections are increasingly reliant on digital infrastructure for such instances as registering to vote, and, in some cases, enabling online voting. Last year, Kenya's electoral agency launched two data centers ahead of presidential elections, after major security flaws were found in the existing registration system. bit.ly/TheStatesContribution
Cloud & 2N resiliency are explained on p49
Peter's random factoid: In June, an unidentified "hardware failure" on Visa's European payment network left customers and businesses unable to make or receive payments, amidst a Twitter campaign promoting #CashFreeFriday
Uptime is everything—
So don’t fall for the imitators. Trust 30 years of innovation and reliability.
Originally released nearly 30 years ago, Starline Track Busway was the first busway of its kind and has been refining and expanding its offering ever since. The system was designed to be maintenance-free; avoiding bolted connections that require routine torqueing. In addition, Track Busway’s patented u-shaped copper busbar design creates constant tension and ensures the most reliable connection to power in the industry—meaning continuous uptime for your operation. For more information visit StarlinePower.com/DCDjune.
Google's latest machine learning chip to use liquid cooling

At its annual I/O conference, Google unveiled the latest generation of its Tensor Processing Unit, the TPU 3.0. Due to the high power density of the hardware, the application-specific integrated circuit (ASIC) will be liquid cooled - a first for the search and advertising giant. The company expects to begin large scale deployments within a few months.

"These chips are so powerful that, for the first time, we've had to introduce liquid cooling in our data centers," Google CEO Sundar Pichai said. "And we've put these chips in the form of giant pods. Each of these pods is now eight times more powerful than last year's [TPUs], well over 100 petaflops. This is what allows us to develop better models, larger models, more accurate models, and helps us tackle even bigger problems." Google did not, however, share the benchmark used to claim 100 petaflops performance.

"For a while we've been investing in the scale of our computational architecture," Pichai said. "[The TPUs] are driving all the product improvements you're seeing today, and we've made it available to our cloud customers." Each TPU 3.0 has 128GB of high-bandwidth memory, twice the memory of its predecessor. Further specifications of the architecture were not provided. bit.ly/GettingTensorsWet

Find out about how Google works on p16
Qualcomm may give up on Arm servers; Cavium launches ThunderX2

American chipmaker Qualcomm is considering giving up on Arm-based CPUs for the data center market, sources told Bloomberg in early May. Later that month, the head of the company's data center division, Anand Chandrasekher, stepped down. He was originally reassigned to the position in 2013 from his job as Chief Marketing Officer after he called Apple's 64-bit A7 chip a "marketing gimmick." The world's largest smartphone chip supplier launched its first server processor line, the Centriq 2400, only last November. The company declined to comment.

But there is still life in the Arm server market - Cavium has announced the general availability of the ThunderX2, the Arm-based system-on-a-chip it first announced in 2016. The chip is primarily based on Broadcom's Vulcan SoC. Cavium quietly bought the designs and hired the engineers behind the unfinished chip after Broadcom left the market. bit.ly/LosingAnArm
Microsoft previews FPGA-based machine learning service for Azure

Microsoft is set to capitalize on its work on integrating field programmable gate arrays (FPGAs) into servers for machine learning workloads, by launching a specialized Azure cloud service. Project Brainwave will be offered in preview, with a limited set of capabilities and allocations, and will only be available in the East US 2 region to begin with. The company also plans to offer Brainwave servers as on-premises edge deployments. "I think this is a first step in making the FPGAs more of a general-purpose platform for customers," Mark Russinovich, chief technical officer for Azure, said at the Microsoft Build conference.

Brainwave uses Intel's Stratix 10 FPGAs, and builds upon Microsoft's previous work with these chips. In Project Catapult, the company deployed FPGAs to speed up Bing searches, and FPGA-based network interface cards are currently used to accelerate its networks. Every new Azure server comes with FPGAs pre-installed, already totaling more than an exaflop of compute power. "FPGAs allow us to unlock our software smarts by being able to put it in the server and continually update the algorithms and the neural networks that run on these FPGA clusters," Microsoft's senior director of data center strategy David Gauthier told DCD.

Soon after Microsoft's announcement, Intel launched a multi-chip module integrating a Xeon Skylake processor with an FPGA. The Xeon Scalable 6138P includes an Arria 10 GX 1150 FPGA, Intel's most powerful FPGA, linked to the CPU cores using Intel's Ultra Path Interconnect (UPI). bit.ly/AzuresBrainwave

Peter's random factoid: Facebook is rumored to be developing its own FPGAs or application-specific integrated circuits (ASICs) for its data centers
Flexible Power: From overhead to in-rack
When data centre power requirements call for higher density, multiple types of receptacles, and future flexibility, Starline goes above and beyond. Though it may look like other power distribution systems, Starline’s innovative design provides users with the flexibility to choose and use different types of receptacles on the Cabinet Busway system. To see how Starline is changing power distribution in data centres, visit StarlinePower.com/DCDjune.
DCD Calendar
Stay up to date with the latest from DCD - as the global hub for all things data center related, we have everything from the latest news, to events, awards, training and research
Events

DCD>Energy Smart | San Francisco - San Francisco Marriott Marquis - Jun 25 2018

DCD>Webscale | San Francisco - San Francisco Marriott Marquis - Jun 26 2018
Plenary panel: Gaining customer intimacy at webscale, how ready are we? 10:00am – June 26 Where are most companies today on redesigning their infrastructures for digital transformation? Where are the bottlenecks that hold back corporate management - In technology components? In system architectures? In human capital? Panelists will give their top three priority recommendations.
DCD>Australia | Sydney International Convention Centre
Panel: What does ‘energy smart’ mean within the context of the Australian data center industry? Experts will discuss data center energy efficiency in the Lucky Country: What is needed to make renewables work, and how will data centers and smart grids intersect? Featuring Dennis Lee, NABERS Head of Technical Standards; Kevin C. Kent, data center operations manager at The Ohio State University; and Peter Blunt, general manager of data center development, FKG Group and Pulse Data Centre. bit.ly/DCDAustralia
DCD>SE Asia | Singapore Marina Bay Sands
bit.ly/DCDwebscale
Aug 23-24 2018
Sep 11-12 2018
South East Asia’s most in-depth data center and cloud event returns
Jason Hoffman MobiledgeX
Mark Thiele Apcera
Rebecca Wanta One Degree World
Cole Crawford Vapor IO
Jul 18-19 2018
DCD>Webscale | Bangalore Sheraton Grand Bangalore Hotel
The 8th annual Webscale Summit in Bangalore, India features the region’s largest players discussing the industry’s biggest topics across a packed two-day conference. bit.ly/DCDBangalore
Satyavathi Divadari Wells Fargo EGS India
Suresh Shan Mahindra Finance
Yuval Bachar LinkedIn
DCD>Colombia | Bogotá Ágora Bogotá Convention Center
Mustapha Louni Uptime Institute
Jun 20 2018
Learn about digital transformation and the rise of edge computing; living between on-prem, colo and cloud; being data center energy smart; and consolidating data centers and extending their useful life. With new networking formats, the APAC Awards, world-leading speakers, Uptime Institute content, lightning panels, in-depth roundtable discussions, and live demonstrations... this is an event not to be missed. bit.ly/DCDSingapore2018
Manik N. Saha SAP Asia Pacific & Japan
Lee Kirby Uptime Institute
Dan Thompson 451 Research
Krupal Raval Digital Realty Data Centres
Read the latest issue of the Spain and LATAM DCD Magazine today: bit.ly/DCDSpain
DCD>Mexico | Mexico City Expo Santa Fe Mexico
Oct 3 2018
DCD>Debates - A new webinar format bringing the dynamism of our live panel discussions to a global audience
How is the data center responding to Industrial IoT demands?
From industry-certified courses to customized technology training, including in-house development, DCPRO offers a complete solution for the data center industry with an integrated support infrastructure that extends across the globe, led by highly qualified, vendor-certified instructors in a classroom environment as well as online.
New Course Launched
Webinar: July 10 | 3:00 BST Content Partner:
For companies to store, process and act upon the vast quantities of data that a factory can create every day, they need extensive digital infrastructure. And, for optimum results, that infrastructure should be located right there, near the factory itself. Register here:
PRO M&E Cyber Security Online Course DCPRO has developed a new 2 hour online module that covers M&E Cyber Security fundamentals. The ‘Introduction to M&E Cyber Security’ course will teach students all the basic principles and best practice that prevents cyber-attacks from happening. Learn about Industrial Control Systems (ICS) and the influence that IoT has on them, as well as why OT (Operational Technology) is a huge target for breaches. The course also covers policies, regulations and organizations as well as prevention, defense, operation and monitoring. For more info go to: www.dcpro.training/cyber-security
bit.ly/SchneiderElectricIoTDemands
2018 Course Calendar
Energy Efficiency Best Practice – Monterrey | July 5
Data Center Design Awareness – Bogota | July 9
Data Center Design Awareness – Melbourne | July 11
Critical Operations Professional – Singapore | July 16
Data Center Design Awareness – London | September 17
Energy Professional – London | September 25

Mark Bartlett Arup
Mark Howell Ford Motor Company
Victor Avelar Schneider Electric
DCD>Debate: What does the next generation of hyperscale network architecture look like? Watch On Demand
For more course dates visit:
www.dcpro.training
Take our free Data Center Health & Safety course today!
For more information on our ‘On Demand’ webinars, please visit: www.datacenterdynamics.com/webinars
Keep up-to-date Don’t miss a play in the data center game. Subscribe to DCD’s magazine in print and online for free, and you can have us with you wherever you go! DCD’s features will explore all the top issues in the digital infrastructure universe in unparalleled depth. Subscriptions datacenterdynamics.com/magazine To email one of our team firstname.surname@datacenterdynamics.com Find us online datacenterdynamics.com | dcd.events | dcdawards.global | dcpro.training
Safety should not have a price. Take our 1 hour online health & safety course for free today! www.dcpro.training/dc-health-safety
LAND OF THE GIANTS
Sebastian Moss, Senior Reporter
Sebastian Moss reports on how Google, Amazon and Microsoft create the illusion of infinite capacity
The construction of the world's digital infrastructure has been a uniquely collaborative affair, with governments, research institutions and corporations all playing their part in the creation of a monumental web of data centers, cables, towers, satellites and sensors. But there are a few companies whose contributions to the whole has been unrivaled, firms that built networks responsible for a vast portion of digital traffic, and which are spending billions to extend their dominance even further.

"Fifteen years ago when I started at Google I didn't imagine that we would be building the world's largest network, or the world's largest compute infrastructure," Benjamin Treynor Sloss told DCD. As vice president of 24/7, his job is to keep Google online - all of it, from Search to Maps to Cloud Platform. "If Google ever stops working, it's my fault," he said. "That's my job. You know, one year at a time."

When Sloss joined the business, he couldn't predict how large the company and its infrastructure requirements would become. "I just knew that we had a set of things that we needed to do in the next three months and I could extrapolate out with a great deal of confidence for the next two to three years."

At Microsoft, the experience was similar for David Gauthier, the company's senior director of data center strategy and architecture. "I've been at Microsoft about 19 years, and I've been involved in our data center infrastructure that whole time," he told DCD. "It's been quite a journey - coming up from the early days of Microsoft, with MSN, and then the original push into algorithmic search with Bing. We thought we were hyperscale back then: I don't think we had any real grasp of what was coming. This thing has just taken off in a way that is really unique to any industry."

Google, too, has had to deal with extraordinary growth, further exacerbated by its entry into the cloud services market. "As we started to offer a public cloud product we used the same data centers and really the same infrastructure, the same network, same servers, the same everything, that we were already using," Sloss said. His goal, now, is to enable Google Cloud Platform customers "to build a service with the same availability and the same performance and the same feature richness as Google Search or Gmail."

Handling this challenge requires a careful balancing of ideas, roadmaps and priorities, Sloss said, likening his job to that of a portfolio manager. "I've got 5,000 people in my team, and in each area I've got some people working on things that are not needed in the next three to six months." A lot of Google staff are working on iterative improvements and things that will "eventually become forced moves," while "a fair fraction" are focused on "larger leaps that have a lower probability of success," Sloss said. "The two halves have to go hand in hand." Employees are encouraged to follow Google's 70-20-10 philosophy (70 percent on core business, 20 on core related projects and 10 on unrelated projects). "We invest in a number of those [further out] efforts each quarter in order to get the few that actually do pan out, to turn into projects that can move the needle quite significantly.

"For example, using machine learning
to make huge power efficiency gains - the person who proposed that was from one of the iterative teams,” Sloss said. In 2016, Google’s DeepMind division announced it had achieved a 15 percent improvement in power usage efficiency (PUE) at one of the company’s data centers. Details are limited on how widely Google has adopted the algorithm: “I would say this: It is being rolled out, and we will continue to roll it out as we build or retrofit new data centers. But if you were to look at the majority of data center capacity that we have at this point, they're already benefiting from it," Sloss said. As for Microsoft, Gauthier said: “We're all using AI and machine learning to optimize our infrastructure, bringing down the energy consumption and the water and other resource consumption.” Machine learning has also been enlisted to help with capacity planning. Last year, Amazon Web Services’ CEO Andy Jassy said that “one of the least understood aspects of AWS is that it’s a giant logistics challenge, it’s a really hard business to operate.” “We are, of course, using machine learning in many areas,” AWS technical evangelist Ian Massingham told DCD. “Capacity forecasting is a classic sequence prediction machine learning use case. So why wouldn't we be doing it? We actually have customers that are doing that as well. Games publisher Electronic Arts is using machine learning for planning its own EC2 capacity fleets so when they launch new games, they've got enough capacity ready.” “They haven't been terribly specific about what they meant but I can take a pretty educated guess,” Sloss said about AWS’ capacity planning. “Demand planning isn't just a simple extrapolation of a logarithmic curve. There are actually predictable peaks and troughs,” like how demand for Google Search grows between September and May and then flattens between June and August, when people are not in school. “So you can see that there are these historical effects. We can plan our capacity, and it becomes important to get those five percent efficiency improvements when you're talking about billions of dollars of infrastructure. I'm assuming Amazon is roughly doing the same thing. We haven't
said anything about it, but we've been doing things like this for more than 15 years.” Gauthier was equally coy: “I think it would be fair to assume that we could be using that - I can’t confirm it.” To be certain that there is enough capacity to meet sudden demand increases, every time Microsoft Azure launches a new region, it ensures that its data center locations have space for new data halls, and its utility providers have additional resources. “The last thing we want to do is go open a brand new region and not have options for growth,” Gauthier said. “I like to say that we maintain the illusion of infinite capacity. That's really the challenge in cloud computing as a lot of the infrastructure and the hardware is getting mature. How we do capacity planning to maintain that illusion is really where a lot of ‘special sauce’ is today.” One way to keep some control over sudden capacity shifts that AWS has pursued is introducing limits on how many instances a client can start without discussing their plans with the cloud company. “So if you want to go above those account limits, you raise a request form and that’s subject to a quick review of your use case,” Massingham said. “Then we know what the potential consumption footprint is, and we can use that to inform our capacity planning.”
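Neither company details its models, but the seasonality Sloss describes - search demand that grows from September to May and flattens over the summer - is the kind of signal even a very simple sequence model can pick up. The sketch below is purely illustrative (synthetic numbers and hypothetical function names, not any provider's production system): it separates a growth trend from a repeating monthly effect, then provisions a buffer above the forecast.

```python
# Illustrative only: a toy capacity forecaster, not any cloud provider's real system.
# Fits a linear trend plus a repeating monthly-seasonality term to a demand history,
# then projects forward and adds a headroom buffer for provisioning decisions.
import numpy as np

def forecast_capacity(monthly_demand, months_ahead=6, headroom=0.15):
    """monthly_demand: 1-D array of observed demand (e.g. peak cores per month)."""
    y = np.asarray(monthly_demand, dtype=float)
    t = np.arange(len(y))

    # Linear trend via least squares: y ~ a*t + b
    a, b = np.polyfit(t, y, 1)
    residuals = y - (a * t + b)

    # Average residual for each calendar month captures the seasonal peaks and troughs
    seasonal = np.array([residuals[m::12].mean() for m in range(12)])

    future_t = np.arange(len(y), len(y) + months_ahead)
    trend = a * future_t + b
    season = seasonal[future_t % 12]
    forecast = trend + season

    # Provision above the forecast so the "illusion of infinite capacity" holds
    return forecast * (1.0 + headroom)

# Three years of synthetic history: steady growth plus a dip every June-August
rng = np.random.default_rng(0)
months = np.arange(36)
history = 1000 + 25 * months - 120 * np.isin(months % 12, [5, 6, 7]) + rng.normal(0, 20, 36)
print(forecast_capacity(history).round())
```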
$30bn - The amount Google spent on infrastructure in the three years leading up to 2018
To allow the company to overprovision without losing too much money as a result, AWS also operates the EC2 Spot market, a discounted auction-style market where customers bid for resources which can be reclaimed if another customer buys them using a classic market model. "What you're looking at there is our attempt to recover the marginal cost of that, as yet unused, capacity; capacity that has not yet been sold for demand usage or for reserve instances," Massingham said. The spot market was an initiative requested by the AWS community, a community which Massingham believes gives the cloud company a unique edge.
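For completeness, this is roughly what bidding into that spot market looks like from the customer's side with the boto3 SDK. The sketch is illustrative only: the AMI ID, key pair, instance type and price are placeholders, and a real deployment would also handle the interruption notice sent when capacity is reclaimed.

```python
# Illustrative sketch of requesting discounted EC2 Spot capacity with boto3.
# The AMI ID, key pair, instance type and max price are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",          # maximum price we are willing to pay per hour (USD)
    InstanceCount=2,
    Type="one-time",           # capacity can be reclaimed if on-demand buyers need it
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
        "InstanceType": "m5.large",
        "KeyName": "my-keypair",              # placeholder key pair
    },
)

for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```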
"We were early, and for some reason other people that may have become competitors didn't realize the potential impact of cloud computing for quite a few years, so we had a head start," he said. That head start has resulted in a massive collection of customer feedback and usage data, "so we have really good insight into what customers find important about the existing services and have a great opportunity to talk with customers about what they want us to add to the platform," he said.

When it comes to cloud market share, AWS might be ahead at the moment, but each company has a huge R&D department trying to find the next big thing, the improvement, feature or innovation that will give them the edge - or at least cut the cost of their internal services. "We run a number of different programs that span two years to five years and beyond, to try to keep ourselves abreast of where technology is going," Gauthier said. "We look at the rest of the things in the data center that are taking time and money and energy to run, from the generators, to the UPSs, to the power distribution, and then see how necessary those really are if you have a well-designed hyperscale system that handles faults in software and handles availability challenges by distributing workloads."

The company is also trying to escape its reliance on the electric grid, experimenting with hydrogen and methane gas-powered fuel cells at the rack level for the past five years. "You take out all the losses of the grid, take out all the distribution challenges
of transformers, and bring them into one extremely efficient package. Our pilot data center is running very well for us, and it allows us to show the proof of possibility to the supplier ecosystem around fuel cells.”
Another advantage, Gauthier said, is the fact that anything that eliminates the need for diesel generators in data centers will make gaining permits for new sites significantly easier. "I can't give you a timeline for when fuel cells will be in a production data center, but I can say that it's definitely a top priority for us. It's a super interesting technology and it's something we're sharing with the ecosystem. We have a regular conference where even some of our competitors come and talk about the technology and how we can mature it for the industry."

A source at the US Department of Energy's National Renewable Energy Laboratory, which has collaborated with Microsoft on testing fuel cell technology, confirmed to DCD that representatives from Google have expressed an interest in the tech, visiting the government laboratory to learn more.

Another area Microsoft's R&D is very much engaged in, Gauthier said, is the topic of high density rack cooling, potentially using liquids. With the number of AI and ML workloads growing, "we definitely are seeing density increasing, and in the air-cooled space that is something we're watching very closely. We maintain a little bit of a trigger point where we start moving in the direction of other cooling technologies."

For more on cooling, check out our supplement on p31

Google is also looking into liquid cooling. With its latest generation Tensor Processing Unit, the TPU 3.0, it has turned to this technology for the very first time. "Other things being equal, liquid cooling is more expensive than air cooling because you have more pipes and more copper and more heat exchangers and you have to have a little thing sitting on top of every chip," Sloss said. "So you don't do it unless you really need to, but physics requires that you do it because of the power density of these machine learning systems."

Before adopting TPUs and other internal hardware products, Google usually tries out the equipment among its tens of thousands of staff. "When we first came out with them, let's face it, they were clunky," Sloss said. "You had maybe 20 of them and they needed constant service to work. It was not really in a form where you could offer it as a service." In cases like this, Google turns to 'dogfooding,' the process of using its own employees as a test base. "It's a large enough user base that you're going to find all sorts of things that you would find in public, but with a much more forgiving audience. Googlers internally may make a meme when you give them something that doesn't work well, but you don't end up with press headlines about it."

This process, and the focus on innovation, has helped the company stay ahead, with Sloss seeing Google as "the first company to do cloud computing at scale, as we were building this stuff back in 1998. Now we have several companies that are building cloud computing at scale and a lot of the folks who have historically had bespoke systems and bespoke data centers are appreciating that, actually, cloud computing brings a large benefit both in terms of flexibility and in terms of economics.
35% - The amount of energy supplied by a power plant that is actually delivered to the data center, due to generation, transmission and distribution losses
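A rough calculation shows why cutting the grid out is attractive. Taking the 35 percent figure above and assuming a facility PUE of 1.1 (an assumption for illustration, not a number quoted by either company), less than a third of the primary energy ends up powering IT equipment:

```python
# Back-of-the-envelope sketch: how much primary energy ends up doing IT work
# when only ~35% of a power plant's output survives generation, transmission
# and distribution, and the facility itself runs at an assumed PUE of 1.1.
delivered_fraction = 0.35      # from the figure quoted above
pue = 1.1                      # assumed facility PUE, for illustration only

it_fraction = delivered_fraction / pue
print(f"Share of primary energy reaching IT equipment: {it_fraction:.1%}")
# ~31.8% - which is why removing grid losses with on-site fuel cells is appealing
```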
DCD>Webscale | San Francisco - San Francisco Marriott Marquis - June 25-26 2018
Join representatives from Google, Microsoft, Facebook, Apple, eBay, Digital Realty and more at the premier conference for those looking to design, build, manage and operate the infrastructure of the zettabyte era. With 40+ hours of presentations, panels, keynotes, lightning rounds, solutions briefings, ‘big discussions,’ and lunch-and-learn sessions, there's something for everyone. bit.ly/DCDwebscale
"So I'm not worried about us becoming stuck with the design that we've got - the design that we've got now is the thing that everybody's investing in. But a more interesting question may be: what comes after cloud computing?"

Again, he sees it as a portfolio matter: "How much of Google's total engineering is going into using the infrastructure that we've got today versus using the next generation? I don't know if I could accurately predict what infrastructure will look like in 15 years. But I will observe that Google appears to be on the forefront of machine learning infrastructure, which is barely different from cloud computing infrastructure. To me, that is an interesting new angle on where computing is going."

But innovation requires sacrifice, Sloss warned. He created the concept of Site Reliability Engineering - a discipline that incorporates aspects of software engineering and applies it to IT operations problems - and, in the book of the same name, he notes that product development and SRE teams can suffer from "a structural conflict between pace of innovation and product stability." This can be resolved "with the introduction of an error budget," a set percentage of errors and downtime.

With this in mind, as the VP of 24/7, does he have an error budget? "Yes, it's just very small," Sloss said. "Google's availability targets are typically in the five nines range." If you consider all the other pieces of non-Google infrastructure involved in making an online search, Sloss said, when a user cannot access Google "it is almost always because of something that has nothing at all to do with us. So you, as a user, can't actually tell the difference between full availability and five nines of availability. To you it appears identical. But the level of effort and cost that's required, and the drain on engineering resources and feature velocity that's required to go from five nines to, say, six nines is actually immense."

Sloss believes it is this realization, that 100 percent is not the right availability target for most services, that is key. "Even if you were 100 percent perfect, actually people's experience of you is going to be perhaps two and a half nines. Once you've got that, then the question is: What availability target is the right balance between making your users extremely happy and being able to deliver them lots of new products at a rapid pace and at very low price point? And then it is just about picking the correct point on that, which is crucially important."

Achieving high availability across massive systems, while still growing and rolling out new features, has presented difficulties for all of the major cloud companies, each having suffered unplanned outages and downtime. "We've hit scaling challenges within AWS that most providers will never get to," Massingham said. "We architected systems to address those challenges that most providers have never had to architect yet.
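Before returning to how the providers handle incidents, Sloss's error budget arithmetic is worth making concrete: each additional nine divides the allowed downtime by ten, which is where the 'immense' cost of chasing six nines comes from. The snippet below is just that arithmetic, not Google's internal tooling:

```python
# Back-of-the-envelope error budgets: allowed downtime per year for a given
# availability target. Illustrates why going from five nines to six nines is
# so expensive - the budget shrinks by a factor of ten each time.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in range(2, 7):
    availability = 1 - 10 ** -nines
    budget_minutes = (1 - availability) * MINUTES_PER_YEAR
    print(f"{'9' * nines:>6} ({availability:.6%}): "
          f"{budget_minutes:8.2f} minutes of downtime allowed per year")

# Five nines leaves roughly 5.3 minutes a year; six nines barely 32 seconds.
```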
"What comes after cloud computing?"
u "We have had service incidents in the past, to fixate on the things that are dramatic like: of course, but the one thing you might have 'What if there was a fire? What if there was an noticed is the frequency of these incidents is explosion? What if there was a huge power much, much much lower outage? What if there was than it ever has been an earthquake?' No, the historically,” he said. actual problem is software One of the ways to bugs, by far and away." reduce the number of incidents that Microsoft For this reason, the has found useful is to company has invested simplify the data center. heavily in cold storage, as “I've been in some data “resilience requires you centers that are to have offline storage not ours, where some that a bug can't touch.” of the maintenance To avoid future errors, transfer procedures are after an outage Google 75 or 80 steps long and has “a no blame postthat person's going to do mortem culture, because Ian Massingham that on 10 or 12 different if something goes wrong, power trains,” Gauthier people didn't intend to said. break it, but they broke it anyway. “It's just the law of big "Even with the most numbers. You're going to make a mistake spectacular problems that we've had, the in there somewhere. And so we spent a lot focus is never 'who can we blame for this?' of time in the design of our data centers, "It's about: 'How can we fix our analysis, thinking about how we minimize the steps our processes, our system and so on, you have to take in a maintenance situation.” so it doesn't happen in the future?' That But the biggest cause of outages, Sloss philosophy plays perfectly into what we're said, was software bugs. "When people think doing in the public cloud now because in the about disasters in cloud computing they tend public cloud you can't control who's using
“We've hit scaling challenges that most providers will never get to,”
The social network's network While not a public cloud company, Facebook is another firm operating at a remarkable scale - with 1.368 billion daily users on its social network, another 1.5 billion on WhatsApp and 500 million on Instagram, it has had to face scaling issues that few in this industry have ever had to contemplate. “It feels to me like we're always laying the tracks in front of the train and moving as quickly as we can so that the software team and the product teams can deliver whatever solutions they want without necessarily being concerned whether they have enough data center capacity,” Facebook's Infrastructure and Site Services VP Delfina Eberly told DCD. “When you're running and operating these data centers at the scale that we run and operate, one of the things that is very cool about being part of a software company is having the software engineering talent to be able to develop specific solutions that make data center
operations and facility operations more efficient."

One of the systems the company relies upon is called Facebook AutoRemediation (FBAR), an automated service for handling hardware and software failures, and the first step towards building a fully self-healing data center. "Machine learning is not something that you normally would think you would apply to a repair function in a data center, and we're using it in multiple places in how we run and operate facilities," Eberly said. "There are just so many things to keep track of, at some point human beings are no longer the most effective thing to use."

This also extends to logistics, Eberly said: "We've done some really innovative things for managing the amount of material moving through a data center: parts that are being replaced and replenished, parts that are being returned, and so on."

Sometimes the technology is used to augment humans, rather than replace them, for example by building "a single system where you can do a look-up and see where a part is at any data center or location around the world."
You can't go and say 'no, don't do it this way, do it this other way.'" Instead, "you have to have it so that the systems are supporting people taking best practices seriously, and actually make it easy for them to make good decisions."

Together, these companies have had to change the way they operate, and, in doing so, have had to change how everyone else operates. But as these goliaths focus on building and fixing systems at a scale never before seen, Microsoft's Gauthier isn't thinking solely about that. Instead, he's reminded of his father: "My dad worked at NASA during the Space Race and I'm like: did he know what was going on at the time? Because I'm thinking, am I going to look back in 15 or 20 years and go 'holy crap, how did we do that?'"
"You can see what previous problems existed in that space, without having to go run a specific report or having somebody else give you additional insights into the problem you may be looking at." She added: "Logistics isn't necessarily something a lot of people innovate on and we think it's a game changer for us - simply that handling and managing of materials, things that have to happen across multiple data centers."

This has enabled Facebook to have just one data center operator per 25,000 servers, an unprecedented ratio. "We've done some very cool things in places where people have historically not focused," Eberly said. "I think that's where we can credit the server-to-tech ratio. And with our use of machine learning, we're saying 'let's try this thing in a place where most people would likely not look first,' and we're excited about the early performance of that technology in this space."
Advertorial: Technimove
Technimove – Data Centre migration experts Thinking of moving your data centre? Technimove’s CEO, Ochea Ikpa, explains why overcoming the fear of change is much better than tolerating long term discomfort
By failing to prepare, you are preparing to fail - Ochea explained: "The number one reason for a project failure is a lack of preparation. The battle for success and failure takes place long before the first device is moved. Preparation and planning is everything. A recent project for University of Central London saw a six-month analysis phase; the preparation and logical migration phase took just over a year, while the physical migration phase was completed in a month."

We are seeing more and more Enterprises adopting various Digital Transformation initiatives, and early engagement is one of the key factors in ensuring the successful transformation of critical infrastructure environments. Technimove engages with the customer at key and varying levels to ascertain what success looks like for the business, the customer and key stakeholders, and begins with the end in mind. Our consultative approach undertakes a deep-dive discovery and analysis across all the critical infrastructure that is in scope, as well as the dependencies from the core to the edge and interconnected (Internet of Things - IoT) business applications. Programme and project management services can then be aligned to deliver the desired outcome, while the customer remains focussed on their 'live production' business operations.

A laser-like focus on quality and service is what is needed when moving clients' critical environments. We ask ourselves: 'why would you settle for anything less?'
We've built our reputation on delivering high-quality migration services around the globe, and we're a highly valued partner to organisations such as IBM, HP, Dell, Fujitsu, CDW and Insight, as well as Enterprise-class data centre providers, including Equinix, Ark, Interxion, Cyxtera and Global Switch. When undertaking migrations and transformations of digital infrastructure, Ochea says no other company can deliver the level of control, expertise and accountability available from Technimove - something the company has been improving for two decades. Undertaking both the logical and physical sides of the migration, together with the project management, Technimove puts its position as market leader down to its unparalleled dedication, stretching over the last 20 years. Noting customer satisfaction as one of its key drivers when taking on a project, Ochea enthuses: "We are a one-call, low-risk solution. We can manage the entire project for our customers, literally handing them the key to their new environment at the end. Servers and related equipment are amongst the most valuable assets a company can have."
Early preparation is the key to success
The complete Technimove service • Transformational Consultancy Services • Rationalisation or Consolidation Consultancy • Migration Programme and Project Management • Application and Infrastructure Auditing • Cabling Solutions • Logical and Physical Migration Services
The perceived risk is far greater than the reality
"Often, companies will avoid relocation even when it is by far the best option, because of the perceived danger involved. We take that fear and worry away from our customers - they know they're in safe hands when they hand over the project to us."
Why would an organisation wish to move data centres? Poor connectivity, sub-standard service levels from the incumbent provider, lack of space to expand, or reduced space requirements? "I've witnessed situations where an organisation has all of the above reasons and more, yet inertia, driven almost purely by a fear of moving, has prevented a change," says Ochea.
Ochea says: "We will audit the client environment at an application level, devices, cable connections and power draw, amongst other things. From here, we will design the client's infrastructure and new data centre layout, by way of size, type, alignment and number of racks, structured cabling, enclosures and any other requirements needed. Technimove pre-cable the client's new data centre location, with both structured cabling and patching. The next step is to shut the equipment down in its existing location, remove all cabling, de-rack, pack, move, re-rack, re-cable and power up. We then re-establish connectivity of all devices, inclusive of storage equipment. All of this is undertaken whilst providing full insurance for each and every migration, so again the client has complete peace of mind and can concentrate on their day-to-day business." We have thousands of success stories from clients all around the world. If your organisation is considering its data centre footprint, then give us a call.
Contact Details Europe Office Technimove House, Spitfire Business Park, Hawker Road, Croydon, Surrey, CR0 4WD United Kingdom T: +44 (0)208-686-8800 E: info@technimove.com US Office 525 North Tryon Street, Suite 1600, Charlotte, North Carolina, 28202 USA T: 1-800-675-0538 E: info@technimove.com
Colo + Cloud
Reinventing retail using robots and free software Ocado and Gap are thriving among the 'retail apocalypse' - both make good use of their data centers, reports Max Smolaks
Max Smolaks News Editor
The retail sector is changing at an incredible speed. The global brands that dominated it for decades are under threat, undermined by the convenience of online shopping. Some of them are already gone - Toys R Us, RadioShack, Maplin and Claire's. Many others are on shaky ground. The past few years were notable for retail bankruptcies to the extent that the industry press started referring to the trend as the 'retail apocalypse.' But the new royalty of the online world have their own concerns, having to compete against the likes of Amazon and Alibaba. The former is busy making online shopping near-instantaneous, the latter is enabling customers to contact manufacturers directly, bypassing large swathes of the supply chain. It seems that the only way to survive in retail is to embrace emerging technologies - American clothing retailer Gap and British grocery specialist Ocado are doing exactly that. The former is diving head-first into open source, while the latter is building a robot army. Ocado started in 2000 as an online supermarket, and gradually evolved into a technology company focused on logistics. Imagine a warehouse staffed by units the size of a washing machine, running on top of a rail-based grid system, constantly talking to each other and transporting boxes in the most efficient way possible. When their batteries are empty, the units automatically recharge.
When they are broken, a dedicated recovery bot collects them. This is exactly what Ocado has built. “We started out, like lots of other people, trying to go online by buying software. And we realized, probably 15 years ago, that we are just going to have to do it ourselves, because nobody had actually done it and made money out of it before,” Anne Neatham, chief operating officer at Ocado, told DCD.
Ocado has never owned a retail store - instead, it designs and runs highly automated warehouses similar to those operated by Amazon, currently the world's third largest retailer. At first, the company was creating robots for internal use, but it soon realized it could make more money by selling this capability to competitors, which had larger shares of the market but were lagging behind in technology.

Today, major Ocado customers include Waitrose and Morrisons in the UK, Casino Group in France and Sobeys in Canada. In May 2018, Ocado announced a deal with US retail giant Kroger, causing its share price to jump 44 percent.

Like a true technology company, Ocado runs its own digital infrastructure, with two modular data centers per warehouse to control the robots. The servers are housed in repurposed shipping containers and equipped with all the necessary power and cooling equipment. Meanwhile, core enterprise systems and websites are hosted in the cloud.

"In our warehouses, we have small data centers because of the latency – if we are going to run robots, we can't afford the latency that the cloud would give us," Neatham explained.

Ocado's software is based on APIs and microservices, and the company has been running on a platform designed entirely in-house since 2014. It also wrote proprietary communications protocols for its robots. "A lot of that is our own specific IP that we sell to others," Neatham said. That doesn't mean Ocado is not well-versed in open source – it uses Kubernetes across its infrastructure stack, and has released some of its own code to the open source community, sharing Kubermesh, a tool designed to simplify data center architectures for smart factories.

Open source software is also a big hit at Gap, which has been part of the American Main Street for nearly 50 years. When a retail organization decides to switch to a cloud architecture, there's a great temptation to outsource the process, and simply start buying resources from one of the market leaders – like AWS, Google or Microsoft. Gap did the opposite - it decided to craft a custom private cloud platform to run its websites, using its own infrastructure, in-house expertise and popular open source tools like OpenStack and Cloud Foundry. Today, this cloud platform is powering one of the largest consumer retail experiences in the world.

"We have a mixture of data centers. We own a data center, we also colocate at a data center, and then there's a data center at our HQ. I have clouds at all three of those locations," Elijah Elliott, Cloud Domain Architect and SME at Gap, revealed at the recent OpenStack Summit in Vancouver.

To be fair, OpenStack is not the only kind of cloud at Gap: as a whole, the organization uses a combination of cloud providers, including Microsoft Azure, and has an ongoing partnership with Rackspace. But in terms of new applications, Elliott said "almost everything" now runs on OpenStack.

He said Gap required an environment that could create new Virtual Machines in minutes, not hours. This drove the infrastructure team to experiment with open source software. "Spin up a VM from the pipeline, put up the new code, test it extensively, tear it down - I'm not saying anything that anyone hasn't heard before - but it was a big difference in the paradigm at Gap." (A generic sketch of such a pipeline step appears below.)

The retail giant used OpenStack to build a cloud based on microservices, enabling developers to rapidly modernize apps. Around the same time, Gap began shifting its operations to Cloud Foundry, the open source cloud application platform that's become the darling of the DevOps and CI/CD (continuous integration / continuous delivery) movement.

Cloud Foundry is used primarily to manage engineering, deployment and lifecycle of cloud-native software - but it is nothing if not versatile. Among other things, Gap uses it for price optimization based on local customer demand, making thousands of price adjustments every hour.

In the process of building a private cloud, engineers at Gap had to make sure that the infrastructure had all the reliability expected from a commercial cloud service; they architected high availability into software, and tested the results in the chaos of the Thanksgiving weekend, also known as Black Friday. The DIY approach enabled the retailer to create infrastructure that is affordable and remains under total control, but requires investment in people and skills.

It looks like this investment is paying off: Gap reported that while its retail store sales shrunk 1.2 percent over the course of fiscal 2017, its online sales grew 18.8 percent. E-commerce now represents nearly 20 percent of the company's total revenue.

$2.3tn - Retail e-commerce sales worldwide in 2017 (eMarketer)

Data center transformation in retail - A special report, coming June 2018
With online sales cannibalizing profits on Main Street, shop owners have to aggressively increase their investment in IT. On the one hand, they have to offer a top-notch online experience, backed by analytics and personalization; on the other, they have to breathe new life into brick-and-mortar stores, using digital tools to make shopping more exciting. To find out how these changes might impact the data center, we have teamed up with Vertiv to create a dedicated microsite and a special report, launching at the end of June. Sign up for more information: bit.ly/DCDRetailFocus
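For readers who want to picture the 'spin up, test, tear down' loop Elliott describes, the sketch below shows one way such a disposable-VM pipeline step could be scripted against the standard OpenStack command-line client. It is purely illustrative and is not Gap's pipeline: the image, flavor and network names are placeholders, and the deploy-and-test step is stubbed out.

# Illustrative pipeline step: create a throwaway VM, deploy and test, tear down.
# Placeholder names throughout; not Gap's actual tooling.
import subprocess

def run(cmd):
    # Run a shell command and fail the pipeline step if it errors.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def test_build(server_name="ci-test-vm"):
    try:
        # 1. Spin up a VM from the pipeline, using the OpenStack CLI.
        run(["openstack", "server", "create",
             "--image", "ubuntu-18.04",   # placeholder image
             "--flavor", "m1.small",      # placeholder flavor
             "--network", "ci-net",       # placeholder network
             "--wait", server_name])
        # 2. Put up the new code and test it extensively (stubbed here; a
        #    real pipeline would push the build and run its test suite).
        run(["echo", "deploy-and-test", server_name])
    finally:
        # 3. Tear it down, whether or not the tests passed.
        run(["openstack", "server", "delete", "--wait", server_name])

if __name__ == "__main__":
    test_build()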
Entry Deadline September 14, 2018
> Awards | 2018
The Data Center Awards are Open for Entries!
Category 1: Living at the Edge Award
Category 2: The Infrastructure Scale-Out Award
Category 3: The Smart Data Center Award
Category 4: The Data Center Eco Sustainability Award
Category 5: Energy Efficiency Improvers Award
Category 6: Mission Critical Innovation Award
Category 7: The Open Data Center Project Award
Category 8: Cloud Migration of the Year
Category 9: Data Center Operations Team of the Year – Enterprise
Category 10: Data Center Operations Team of the Year – Colo + Cloud
Category 11: Data Center Manager of the Year - NEW
Category 12: Design Team of the Year
Category 13: Industry Initiative of the Year
Category 14: Young Mission Critical Engineer of the Year
Category 15: Corporate Social Responsibility Award - NEW
Category 16: Business Leader of the Year Award
Category 17: Outstanding Contribution to the Data Center Industry
Public voting: The World's Most Extreme Data Center
Do you know an extreme Data Center? If so, please email us at: extreme@datacenterdynamics.com
For sponsorship and table booking information, please contact: global.awards@datacenterdynamics.com
www.dcdawards.global
Bringing West Africa online, one step at a time MainOne’s founding mission was to bring Internet access to Nigeria. While some of its efforts have paid off, battles must still be waged on many fronts, reports Tanwen Dawn-Hiscox
Developing an entire region's telecommunications infrastructure, with all the technical, logistical and legal challenges that entails, was never going to be a stroll in
the park. Having more than twenty years of experience in the US telecoms industry, MainOne’s founder and CEO, Funke Opeke, witnessed first hand how deeply technology has altered societies around the world, which is why she decided to bring this change to the country she was born and raised in: Nigeria.
Tanwen Dawn-Hiscox Reporter
Africa's most populous nation has a unique set of social, political and economic conditions: for thousands of years, the region has been culturally and religiously diverse, and to this day it counts more than 500 ethnic groups. After British colonial rule, which lasted from the early 19th century until 1960, the country was plagued by institutional corruption and prebendalism at the hands of autocrats installed during military coups. But since the early 2000s, the Federal Republic of Nigeria has enjoyed relative political stability, and, thanks to an abundance of natural resources, has become one of the fastest growing economies in the world. An increasingly important part of the country's economy has been the telecommunications sector, crucial to its booming financial services industry, and required by its youthful population. Thus, the Main Street Technologies subsidiary, MainOne, was founded in 2010, on the premise of building and operating an international
communications cable – a 7,000km (4,350 mile) submarine link connecting Nigeria and Portugal.

Opeke explained: "We set out to bridge the digital divide by addressing what was perceived to be the most critical infrastructure bottleneck at that time, which was a submarine cable."

The realization soon dawned that improving connectivity in West Africa would take more than laying fiber across the bottom of the ocean. It became clear, Opeke said, that the Main One Cable had merely shifted the bottlenecks and that there were other critical infrastructure elements missing in the region, "be it in terrestrial distribution networks or data networks," with end-user connectivity often requiring 3G or 4G coverage; not to mention data centers, to bring data or content closer to the end-users, and to ensure "better performance and an enjoyable experience online."

Faced with a glut of submarine capacity, she said, they still lacked "the carrier-neutral or open access data centers where everyone can get their content," as well as the "robust infrastructure to move the traffic across the country, across the region."

"I would argue that all the submarine cables on the west coast of Africa are probably running at 20 percent of capacity or less," Opeke noted.

And so, the company got to work, building terrestrial networks, interconnection points, landing stations, and its flagship data center, MDXi, located in Lagos. Today, in addition to connectivity services, MainOne offers colocation, cloud, and managed hosting to business customers across West Africa. Earlier this year, it set out plans to expand MDXi and to construct another data center, this time in the south western city of Sagamu.

Far from plain sailing, MainOne has faced all manner of challenges in the planning, deployment and operation of its infrastructure. According to Opeke, this has often meant creating entire frameworks from the ground up.

"Typically we're doing projects that have not previously been done in the region. We're having to put the templates together for how you do it, and how you operate it successfully."

Working with the government also presented its difficulties: "we were dealing with regulation; regulation that may or may not exist, so we had to educate and convince the authorities."

But ultimately, Opeke explained, the success of the project boiled down to building a strong team that could overcome numerous, often unpredictable obstacles. This team had to share a core vision for the company, which she described as wanting to "address some of the developmental challenges that Africa, or Nigeria, or West Africa face."

The company wasn't founded "just to make a quick buck," she said. "If it just were about making money, we would set up a franchise and sell some fast-moving consumer goods, and we could make money that way. But this is about impact and development and technology, and access to the Internet is a critical element in terms of bridging the digital divide. So that's really what we set out to do, to impact lives.

"But we're also a business. I've been able to attract investors who are equally committed to the development of Africa. I'll admit this is not the easiest kind of business [to run]," she said with a laugh.

Thankfully, the efforts of providers such as MainOne are starting to pay off, revealing some of the benefits brought about by the establishment of modern telecommunications infrastructure.

The United Nations projects that by 2050, approximately 70 percent of all Nigerians will live in urban areas - where the Internet is more accessible - compared to less than 10 percent in 1950, and approximately 50 percent now. This will likely be accompanied by the desire for better access to online services, and MainOne wants to be first in line to deliver the underlying infrastructure.

47.4% - the percentage of Nigerians with Internet access in 2017 (United Nations)

According to the most recent census, Opeke said, more than a hundred million people living in Nigeria "have accessed the Internet at one time or another" (though this may not account for double-counting or multiple SIM cards). Smartphone adoption, she guessed, "is probably in the 30 percent range."

And indeed, last year, according to a study by Nigerian online retailer Jumia, the country saw more than 150 million mobile subscribers and 97.2 million Internet users, out of 216 million Internet users in the whole of Africa.

Nigerians' appetite for digital services is manifesting itself in various sectors, Opeke explained, including mobile banking and e-commerce. "I think you can file your taxes online starting this year. There are things starting to happen."

Major cloud providers, however, are yet to commit their infrastructure to the West African market, meaning cloud customers at the moment have to deal with much higher latency than their counterparts in Europe or the US.

"We're doing projects that have not previously been done in the region"

DCD>Africa | Johannesburg | Jul 24 2018 | Hilton Sandton, Johannesburg
After three years away, DCD is excited to return to Johannesburg for an expansive debate on the future of digital infrastructure across the continent. The African data center industry is poised for growth as demand for digital services increases.
Participants include MainOne's Funke Opeke, East Africa Data Centre's general manager Dan Kwatch, Liquid Telecom's CTO Ben Roberts, Djibouti Data Center CEO Anthony Voscarides and icolo.io founder and CEO Ranjith Cherickel, as well as representatives of Google, Microsoft and the Uptime Institute. For more information, sign up: bit.ly/DCDAfrica
Having rated the level of adoption of digital services in Nigeria at "maybe three to four" on a scale from one to ten, Opeke said: "I would also agree with them that it is still early days and quite a bit of work needs to be done in the ecosystem. But I don't think their services would sit idle and not be utilized."
By all indications, Nigerians are ready to consume more digital services; from a cultural standpoint, Opeke explained, barriers to adoption have largely been overcome. "I think there's a great openness to digital adoption. Of course, you can talk about literacy and the kind of content you find online, although I think more players are making Google or local language content more available now." Any remaining limitations to the impact and adoption of Internet-based services, she said, are related to the need for infrastructure "and the pervasiveness of access." This has led to some companies turning to more traditional means of interaction with their customers.
"I've been able to attract investors who are equally committed to the development of Africa. I'll admit this is not the easiest business to run"
"If you can't guarantee that everyone who needs to use the service is going to be online, then you're still going to need to provide another channel for your services to get to market. What we're finding is high quality content-producing companies are also primarily going to traditional media channels such as satellite TV or broadcast distribution, rather than the other way around. Again, that's
because of the limitations in the distribution infrastructure." When big content providers finally come to shore, it will enable MainOne and other digital service providers to have a bigger impact on West African societies. "We think that will put pressure on the ecosystem in a positive way, because now there are better-connected people who can have a really rich experience." End-users will then require better access, and, as the number of users expands, "the price per user can come down, and we'll see some benefits of the economies of scale." But Opeke remains realistic in the face of the hurdles that lie ahead. "There's still quite significant challenges in the distribution of content across the region. Until that is addressed, I don't think we'll see the kind of explosion that's taking place in some other parts of the world."
DCD>Debate | Register Now! Building data centers in Africa
Jun 20 2018 3.00pm SAST
How fast is Africa's data center industry evolving? Our panel of experts will discuss the challenges they face and the solutions they are developing to build critical infrastructure, in advance of the DCD>Africa conference on 24th July. Panelists include Funke Opeke, Liquid Telecom's Ben Roberts and The Uptime Institute's Phil Collerton
bit.ly/DCDDebatesAfrica
POWER DISTRIBUTION UNITS • Vertical/Horizontal Mounting • Combination Units • Power Monitoring • Remote Monitoring • Rated at 13A, 16A and 32A • Bespoke Units • Robust Metal Construction • Availability From Stock • Next Day Delivery
THE NUMBER ONE CHOICE
FOR BESPOKE POWER SOLUTIONS
Designed and Manufactured in the UK
+44 (0)20 8905 7273
sales@olson.co.uk
www.olson.co.uk
Global Content Partner
> Webscale | San Francisco
15th Annual
Focus Day on ENERGY SMART | June 25
DATA CENTERS FOR HYPERSCALE, SILICON VALLEY START-UPS & EVERYONE IN BETWEEN June 26 2018 // San Francisco Marriott Marquis Limited free passes for end users and consultants - last chance to register! Lead Sponsors
For more information visit dcd.events/conferences/webscale @DCDConverged #DCDWebscale
Datacenter Dynamics
DCD Global Discussions
> Cooling | Supplement
INSIDE
Cooled by:
The hottest and coolest > A few key locations are defining the cooling market, and how it evolves
The great refrigerant shortage > New EU regulations are putting the squeeze on data center refrigerants. Take note
Getting into hot water > An energy efficient supercomputer could hold the key to the future
At the edge, liquid cooling returns > As the edge is built out, we may have to revisit old ideas, creating something new
Best server chilled CyberAir 3PRO from STULZ stands for maximum cooling capacity with minimum footprint. Besides ultimate reliability and large savings potential CyberAir 3PRO offers the highest level of adaptability due to a wide range of systems, variants and options. www.stulz.de/en/cyberair-3-dx
Cooling Supplement
Keeping cool without costing the Earth
Sebastian Moss Senior Reporter
No matter what form they take, or where they are located, data centers will need to be cooled. The trick, says Sebastian Moss, lies in how you do it
Fish swimming off the coast of the Orkney Islands in Scotland are due for a surprise. Should they head into the depths, they may come across a strange object lying on the sea floor: a giant cylinder, vibrating ever so slightly, and warm to the touch. This bizarre sea creature is not the kraken of old, nor is it a sunken ship or Atlantean artifact. No, it is - perhaps - a glimpse of the future. The cylinder represents the latest efforts by Microsoft to operate data centers under the sea, building a digital kingdom among the crabs. 'Project Natick' began as a whitepaper in 2013, starting in earnest the next year. By 2016, Microsoft was ready for a test in the wild, running a three-month trial off the Pacific coast of the US - a single server rack in an eight-foot (2.4m) diameter submarine vessel, filled with inert nitrogen gas. Now it looks like the company is ready to shift the project into high gear, submerging a 12-rack cylinder featuring 864 servers and 27.6 petabytes of storage into the North Sea. The icy water is expected to provide more than enough cooling for the data center, which is powered by a cable connected to the shore. This power will come from the European Marine Energy Centre's tidal turbines and wave energy converters, which generate electricity from the movement of the sea. After this test, Microsoft envisions larger roll-outs, dropping clusters of five cylinders at a time. Indicatively, last year saw the company patent the concept of artificial reefs made out of data centers. Things, it seems, are going to remain confusing for the fish.
On land, removing heat is a very different challenge, and one that may be about to get a whole lot more complicated - new EU regulations, designed to cut greenhouse gas emissions, have had the unintended sideeffect of causing a data center refrigerant shortage (p40). New refrigerant gases may be the way forward, but they too come with knockon effects. Another approach could see the gases removed entirely, relying on hot water sent straight to the chip - it seems counterintuitive, but for the right power densities, this is surprisingly effective. Plus, it could one day lead to super-dense computers, the likes of which we have never seen before (p42). Another company is aiming even higher - in fact, its ambitions are literally out of this world. By harnessing the scientific phenomenon of sky radiative cooling, it hopes to beam excess heat into space, taking advantage of the interstellar heat sink that surrounds us (p46). Closer to home, there are those wondering whether liquid cooling could find a home at the edge, with disused cupboards one day set to house micro data centers, cooled by water or oil (p37). Whether you pursue any of these approaches, or try something else entirely, may depend less on your budget than on your location. This sector is led by a few key markets, locations with specific ambient temperatures and humidity levels, sometimes blessed with natural resources, and sometimes cursed with perennial challenges like land scarcity. Understanding these markets, and their requirements, is key to understanding how to keep your cool (p34).
Facebook embraces Nortek's membrane Facebook is rolling out a new indirect cooling system, developed in collaboration with Nortek Air Solutions. According to Facebook, StatePoint Liquid Cooling (SPLC) can reduce data center water usage by more than 20 percent in hot and humid climates, and by almost 90 percent in cooler climates, when compared to alternative indirect cooling solutions. In development since 2015, the technology - which has been patented by Nortek - uses a liquid-to-air heat exchanger, which cools the water as it evaporates through a membrane separation layer. This cold water is then used to cool the air inside the facility, with the membrane layer preventing cross-contamination between the water and air streams. “The system operates in one of three modes to optimize water and power consumption, depending on outside temperature and humidity levels,” Facebook's thermal engineer Veerendra Mulay said. Facebook uses direct evaporative cooling systems as long as the climate conditions permit this. “But the SPLC system will allow us to consider building data centers in locations we could not have considered before,” he added.
The hottest and coolest data center locations
Peter Judge Global Editor
Climate may affect data centers, but the overriding factor will be where the demand is, says Peter Judge
Data centers are affected by many things. Climate can influence the choice of location, but there are usually many additional factors such as the state of the local economy, proximity to consumers, availability of power and networking connections and, very importantly, politics. In this article we look at some key data center locations, and draw out the patterns behind the most exciting (hottest) and most fascinating (coolest) locations on the planet. Wherever data center demand is strong, those building the facilities have no choice.
They must meet the environmental needs through a series of technology choices and trade-offs, designed to ensure the facility delivers a reliable digital service to its ultimate consumers. Energy can make up more than half the overall cost of the data center during its lifetime, and operators will do everything in their power to reduce their expenses. This means picking technology which will run the facility more efficiently - but also making geographical choices, such as going where the energy costs are cheap (or where there is a supply of renewable energy that will reduce environmental impact). There are also political decisions to be made. Facebook and the other large hyperscale operators famously play off different American states or European countries against each other, locating their facilities where they get the most generous tax breaks. In Scandinavia, Sweden, Denmark and Finland, each country has offered competing levels of tax exemption for data centers. And in the US, Utah and New Mexico were placed in open competition to give Facebook the best terms for a data center in 2016: New Mexico eventually won.
Builders know in advance what cooling technology they will require, and what will be practical in a given location. Servers use electricity, and all the power used in a data center will ultimately be emitted as heat, which must be removed to keep the equipment within its working temperatures. It’s easiest to remove that heat in a cool climate, where the outside air can do most of the work - subject to being safely filtered and run through heat exchangers. Thermal guidelines from ASHRAE show which parts of the world can use free-cooling and for how many hours. In most of the populated regions of the Northern hemisphere, free-cooling can be used for at least a part of the year. In Northern countries like Iceland and Sweden, it can be used all year round. Near the equator, in places like Singapore, mechanical cooling is required all the time. At the same time, Iceland and Sweden have plenty of cheap renewable electricity, while Singapore does not. Despite all this, Singapore is thriving, while Iceland remains a relatively exotic data center destination. The reason? Location still carries more weight than anything else, except for providers with applications which can live with a long response time.
All of this could be changing. The Internet of Things and the demand for digital content have led to the growth of so-called "edge" resources which are located where the data is needed most. This means that all locations where there are people will need digital infrastructure.

But it will also boost requirements for back-end resources that can be accessed with a greater latency, such as analytics and reporting. All the important parts of the data collected at the edge will need to be backed up and analyzed. And all the customer data which doesn't need regular access (think old Facebook posts, or bank statements) can be safely put elsewhere. That's where the specialized hyper-efficient data centers will come into their own.

In a sense then, almost every location on earth could find a role in the digital landscape which we are building. The role of the technology is to deliver the digital resources to where they have to be.

1. Sweden
Capital of Energy Smarts
Summer max: 71.6°F (22°C)
Winter min: 33.8°F (1°C)
Typical cooling tech: Outside air free-cooling, heat reuse

Sweden is a small data center market when compared with giants like the UK (London) and Germany (Frankfurt), but it is positioning itself as an energy-efficient hub for data centers, and has had some significant wins - the most well-known being Facebook's expanding campus in Luleå, which is growing to three data centers and will use hundreds of megawatts of power. The Luleå area is also home to a data center location called The Node Pole.

Sweden gets most of its energy from renewable sources, and the capital, Stockholm, plans to be carbon-neutral by 2040. The city has a district heating system, run by Stockholm Exergi, a joint venture between the utility, Fortum, and the city of Stockholm, which will help make the economics of urban data centers more positive. It provides hot water to homes and offices - and pays industrial facilities for their waste heat. Stockholm Data Parks, run by Exergi, offers tenants up to $200,000 per year per MW of heat.

Sweden's government is backing data centers, having slashed the country's energy tax for the sector, a move designed to persuade wavering providers, who might be considering other Nordic countries, to join Facebook.

97% - reduction in Sweden's electricity tax for data centers, approved in 2017

To learn more about Sweden, sign up for DCD's Energy Smart event in Stockholm next April: bit.ly/DCDEnergySmart2019

2. London
Blooming despite Brexit
Summer max: 73.4°F (23°C)
Winter min: 48.2°F (9°C)
Typical cooling tech: Chillers with free/evaporative cooling

The capital of the UK remains the largest and most vibrant data center hub in Europe, despite a number of apparent obstacles. London has some 495MW of data center power capacity, according to CBRE, comfortably ahead of other European cities including Paris, Frankfurt and Amsterdam. Colocation and cloud providers have flocked here to service the city's financial hub, and use the abundant fiber networks. London also serves as an English-speaking European base for foreign multinationals.

All this might have been called into question by a number of factors. Real estate in London has eye-watering prices, the country's energy costs are high, and the electrical grid suffers from poor forward planning and a high dependence on fossil fuels.

These conditions mean that historically, the UK has none of the flagship data centers designed and built by hyperscale operators such as Facebook, Microsoft and Amazon. The giants are leasing space locally in wholesale colocation sites, but place their big data centers in countries like Sweden, Denmark and Ireland, where the taxes, land prices and energy costs are much more favorable.

On top of this, Britain's narrow 2016 vote to leave the European Union (the so called "Brexit" decision) caused a fall in the value of the pound and might be expected to impact London's future as the financial hub, and its value as a European beachhead for foreign organizations.

So far, there has been no sign of any impact, and investment has continued unabated. This is partly because most political decisions are yet to be made. Moves towards Brexit have been so slow and confused that it is still possible to hope for an outcome which changes little. In the absence of real data, Brexiteers are still able to promise a bright future outside of the EU, while moderates say the UK must continue doing business with the EU after Brexit, so surely the country will maintain the alignment with European laws and regulations which benefit the digital sector.

The UK government has seen the importance of data centers and backed the industry with a climate change agreement which exempts them from energy taxes as long as they collectively improve their efficiency.

Alongside these factors, the weather has less of an impact. The country's cool, temperate climate enables outside air cooling for most of - if not all of - the year, but data centers still require mechanical chillers for reliability reasons.

495MW - total colocation power capacity in London (CBRE)

London will be home to DCD's flagship annual event, DCD>Zettastructure, this November 5-6: bit.ly/DCDZettastructure2018
3. Ashburn, Virginia
Boom town
Summer max: 87.8°F (31°C)
Winter min: 41°F (5°C)
Typical cooling tech: chillers with some evaporative cooling

Northern Virginia is not just the largest data center hub in the world, it continues to be among the fastest growing. With a total of more than 600MW installed, the region is adding more than 100MW every year. It accounts for 20 percent of the US data center market, and more than ten million square feet of data center space.

Northern Virginia is a place close to the Beltway of Washington, with a lot of consumers and businesses eager for capacity - but the fundamental reason for its strength as a hub has little to do with this and more with a historical accident.

In the nineties, fiber networks were built out fast, and early investors settled in Ashburn. AOL built its headquarters in Loudoun County, and Internet providers got together to interconnect their infrastructure, building an exchange point that became known as MAE-East, which was quickly designated by the National Science Foundation as one of four US Network Access Points. Colocation giant Equinix arrived in the area, and the rest is down to network effects.

The local economy has become skewed towards data centers - staff and land are available, and local regulations and taxation simplify the building of new space there. Power is available, and local rulings make backup power easy to implement.

Once again, the climate has little effect on this. Virginia's summer heat precludes data centers relying on free cooling, but the winter is cool enough to turn off the chillers for extended periods.

70% - the share of the world's Internet traffic that flows through Northern Virginia data centers

4. Singapore
Asia Pacific's Data Center Capital
Summer max: 89.6°F (32°C)
Winter min: 86°F (30°C)
Typical cooling tech: Chillers all year round

Like Virginia, Singapore is something of an accidental data center hub, but in Singapore's case, it is definitely working against the climate. It has some 290MW of capacity according to CBRE, and is growing rapidly.

Data center operators have no choice but to locate in Singapore, as it is a crucial financial and business center for the Asia Pacific region. However, it is a punishing place to build. The temperature is hot all year round, and the very high humidity makes any reliance on evaporative cooling a laughable suggestion. And the cost of land in the tiny island state is very high indeed.

Singapore has a heavily fossil-fuel based electricity grid, so data center providers locating there will take a hit on their corporate environmental footprint. Even though the climate might allow for solar power, it's very hard to exploit that in Singapore, because the city-state has very high property values, and little space for solar farms.

The Singapore government is taking a proactive approach to data centers. It has backed projects to explore ways around these problems. A government-sponsored project, spurred by a possible shortage of land for data centers, is considering how to build multi-story facilities, and keep them energy-efficient despite the local climate. The Info-communications Media Development Authority of Singapore (IMDA), along with Huawei and Keppel Data Centres, is currently testing a high-rise green data center building. Another project aims to create solar farms despite the land shortage, by floating them on the island's reservoirs. A further solar development plans to distribute capacity by renting space on city floors.

The heavy planning of the Singapore economy, combined with the continued need, seems set to keep the country at the forefront.

370MW - total colocation power capacity in Singapore (Cushman & Wakefield)

In September, we're heading to Singapore for the region's leading data center and cloud event. Be sure to join us: bit.ly/DCDSingapore2018

5. Beijing
Planning at Scale
Summer max: 87.8°F (31°C)
Winter min: 35.6°F (2°C)
Typical cooling tech: chillers needed for large parts of the year

China is experiencing an impressive level of growth in its data centers, as the country undergoes rapid development. With its huge population rapidly taking up mobile services and other digital activity, it has increased investment in infrastructure, with cloud players like Alibaba and Tencent becoming global giants.

China's capital and its third largest city, Beijing was one of the first Chinese data center hubs, along with Shanghai, Guangzhou, and Shenzhen. However, by 2016, the city was less willing to allow data centers, owing to a shortage of land, along with the high power demands - and substantial carbon footprint - of data centers, which didn't endear them in one of the world's most polluted cities. In 2016, Beijing issued a ban on data centers with a PUE rating of more than 1.5.

Smaller cities are developing data center sectors of their own, but Beijing remains a crucial location.

China's biggest hyperscale companies come together this December: bit.ly/DCDBeijing2018
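PUE itself is straightforward arithmetic - the total energy drawn by the facility divided by the energy that reaches the IT equipment - so a cap of 1.5 effectively limits how much overhead cooling and power distribution may add. A minimal illustration (the kilowatt figures are invented for the example, not Beijing data):

# PUE = total facility power / IT equipment power. Illustrative numbers only.
def pue(it_load_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    total_kw = it_load_kw + cooling_kw + other_overhead_kw
    return total_kw / it_load_kw

# A hypothetical 1,000kW IT load with 400kW of cooling and 100kW of other
# losses gives a PUE of 1.5 - right on Beijing's 2016 limit.
print(pue(1000, 400, 100))   # -> 1.5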
DCD>Debate The benefits of evaporative cooling
Watch On Demand
Join our expert panel to examine how designing for pre-built evaporative cooling technology can reduce both mechanical systems and PUE costs while improving data center performance and resilience. Sign up here: bit.ly/DCDDebatesHyperscaleCooling
Cooling Supplement
Will liquid cooling rule the edge?
Tanwen Dawn-Hiscox Reporter
The advent of edge computing could increase the popularity of liquid cooling systems, says Tanwen Dawn-Hiscox
Some may remember when the wider data center industry caught wind of liquid cooling technologies. Though the physics of using liquid - rather than air - to remove heat from server racks made sense, the concept seemed too risky to most, so investment in legacy HVAC systems continued unabated. Nonetheless liquid cooling found its applications, chiefly in High Performance Computing (HPC). This is a perfectly suitable use case for the technology: whether using chilled water or dielectric fluid, liquid cooling systems are an efficient match for high density, high power server nodes, and are much less prone to failures than their air-based counterparts. Iceotope's Edgestation, an enclosure the size of an electric radiator, is liquid cooled on the inside and passively air cooled on the outside, supporting about 1.5kW of IT. A variant of this can be placed on a roof or mounted on a wall, and the company offers to engineer bespoke products on demand. The company's 'ku:l' product range, which comes in vertically or horizontally mounted form-factors, is designed for significantly higher densities than the Edgestation, between 50 and 100kW, with output temperatures in the 50°C (122°F) range, which can, for instance, be used to heat a building. Iceotope's direct immersion system is in use at the University of Leeds and at the Poznan Supercomputing and Networking Center - in other words, used to sustain otherwise difficult to cool chips. But it can, in theory, be placed in, say, a disused cupboard, or any poorly utilized space in an office block or a factory. While liquid cooling may not be the go-to
approach of the early edge computing adopters, it can complement other technologies for added efficiency. In Project Volutus, Vapor IO chose to partner with BasX, whose chief engineer founded Huntair (now owned by Nortek) and invented the idea of fan-wall cooling. The technology used for Vapor's modules, which are being deployed at the base of Crown Castle cell towers across the US, is essentially an adaptation of airside free cooling. Instead of an evaporative system, air circulates in a closed loop with a chilled water cooling coil which runs to an outside projection coil, which, if the temperature difference ranges between 12 to 15°F (7-8°C), can reject all of the heat from the small data center. The size of the outside coil is adapted according to its geographic location to ensure maximum efficiency, but for hotter days, the system also contains a liquid cold plate refrigeration circuit. With Project Volutus, it is likely that several tenants will be using different technologies in each module, making it less likely that they are suited to the use of cold plate technologies. But the prime reason liquid cooling isn't used to cool average densities is the cost. And, for the time being, primarily due to the complex engineering the manufacture of such systems requires, this still stands. What's more, widespread adoption of novel technologies often awaits the endorsement of enough competitors to take off. But edge computing may well be the
springboard that propels liquid cooling into mainstream use. The dynamics of data distribution are evolving. It used to be that data was transmitted following a 'core to customer' model, but increasingly, it moves peer to peer before traveling to the core, and back again. Consequently, the network infrastructure will likely be forced to adapt, bringing compute much closer to the user. And without the barrier of having to replace legacy cooling systems, this could bring about liquid cooling's heyday.
Advertorial: STULZ
Give edge data centres some liquid refreshment STULZ is pioneering the use of direct contact liquid cooling (DCLC™) as a way to extract heat from processing components, servers and equipment in edge and micro data centres.
The exponential adoption of the cloud, the Internet of Things (IoT), Industry 4.0, and Web 2.0 applications, along with our desire to view increasing amounts of streamed content via services like Amazon and Netflix, has resulted in a growing number of data centres being built closer to where users are. Latency, speed and bandwidth are key challenges. Edge and micro data centres allow for the reliable distribution of compute assets and carrier links to process workloads in a multitude of locations, while still keeping core functions in a central location. They
are therefore meeting the demand for uninterrupted availability of data, audio and visual content, and eliminating the challenges around latency, connectivity and cloud outages. Furthermore, in the near future autonomous vehicles are projected to consume terabytes of data with continuous sensing, data interchange, analysis and management. These applications need the continuous high-speed connectivity and availability of a large volume of processed data. This leads to architectural changes in data management, processing, analysing, relaying and storing and is already changing
the data centre form factor. The design of the data centres is evolving from data centres at the edge to facilities at the mobile edge. Energy efficiency is a major concern when considering the future for production processes and IT infrastructures, and the trend is to decrease floor space and invest in more compact and powerful computer systems that are able to process at faster speeds. Data centres of all kinds consume vast amounts of energy for powering their servers, and the pressure is on to reduce the level currently used. That is why Power Usage Effectiveness (PUE) has become such a prevalent industry metric – the closer it is to 1.0, the better the facility is doing in managing its use of energy – and DCLC can help lower this figure. Maintaining optimum climate conditions is just as important within edge and micro data centres as it is for enterprise, colocation and hyperscale facilities. To combat higher cooling costs, STULZ has partnered with CoolIT to develop solutions that serve multiple applications and diverse customer needs across many verticals in this data-driven world. Their innovations have lowered operating costs and, due to the physics principle that liquids have a higher heat transfer capability than air as the medium of exchange, the inherent benefits of DCLC are gaining in popularity. DCLC is a disruptive cooling method that can be applied for heat extraction from IT equipment. This patented technology uses cold plate heat exchangers that are directly mounted on the heat-generating surfaces.
These plates transmit extracted heat into the atmosphere, enabling equipment to operate at optimal temperature for higher processing speeds and enhanced reliability. Due to compact servers with higher capacities, the kW per rack ratio is significantly increased, with economic benefits that help to maximise the white space usage in data centres. Operational efficiencies can therefore be improved to positively impact bottom lines. DCLC uses the exceptional thermal conductivity of liquid to provide dense, concentrated, inexpensive cooling. It drastically reduces dependency on fans and air handlers – therefore, extremely high rack densities are possible and the power consumed by the cooling system drops significantly. This results in more power availability for computing, as each server in each rack can be liquid cooled – significantly lowering operating costs. STULZ and CoolIT's technical leadership and record of reliability and innovation is meeting the exploding need to rapidly cool the huge increase in data traffic demand. Their joint technology leadership is resulting in higher rates of data centre availability, reliability, resiliency and, therefore, a lower cost of operation. They have provided DCLC solutions to major server and processor manufacturers like HPE, AMD, Apple, Intel and Dell, while Bitcoin mining firms have become DCLC users because it enables high densities and lower cooling costs when compared with traditional air cooling. As the density of installed equipment in the data centre has risen, so too has the amount of heat generated.
STULZ Micro DC - High Performance Version
While being able to fit more kit into a smaller space is generally considered a good thing, the need to control temperature has led to the growing use of liquid cooling. While the initial capital expenditure (CapEx) and the estimated operating expenditure (OpEx) will vary for every edge and micro data centre, what will not alter are the significant savings that owners and managers will achieve across the value chain by applying DCLC.
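The physics behind the liquid-versus-air claim is easy to put rough numbers on. The comparison below is a generic back-of-the-envelope sketch - the 50kW load and 10K temperature rise are illustrative assumptions, not STULZ figures - but it shows why moving heat in water takes roughly three orders of magnitude less volume flow than moving it in air.

# Volume flow needed to carry away 50kW of heat at a 10K temperature rise,
# for water versus air. Textbook fluid properties; the load and delta-T are
# illustrative assumptions, not vendor figures.
def volume_flow_l_per_s(heat_w, delta_t_k, cp_j_per_kg_k, density_kg_per_m3):
    mass_flow_kg_per_s = heat_w / (cp_j_per_kg_k * delta_t_k)
    return mass_flow_kg_per_s / density_kg_per_m3 * 1000  # litres per second

water = volume_flow_l_per_s(50_000, 10, 4186, 998)   # ~1.2 L/s
air = volume_flow_l_per_s(50_000, 10, 1005, 1.2)     # ~4,150 L/s
print(f"water: {water:.1f} L/s, air: {air:,.0f} L/s")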
STULZ and CoolIT A summary of the benefits of STULZ and CoolIT’s capabilities can be seen in Table 1 and Table 2.
Contact Details Norbert Wenk Product Manager STULZ GmbH Hamburg GERMANY T: +49 40 55 85 0 E: wenk@stulz.de
Table 1: The benefits of DCLC
• Performance: Facilitates peak performance for higher powered or overclocked processors
• Density: Enables 100 per cent use of rack and data centre spaces
• Quiet: Relieves employees from the disruptive screaming of server fans
• Efficiency: Benefits from a significant reduction in total data centre energy consumed
• Scalability: Meets fluctuating demands through the ability to modify data centre capacity
• Savings: Generates immediate and measurable operating expense benefits, reducing overall total cost of ownership and thus increasing return on investment

Table 2: The features, benefits and impacts of the STULZ Micro DC with DCLC
• Video surveillance system: CCTV recordings for monitoring the unit and area around the unit
• Fire alarm system: Fire monitoring and release of extinguishing agent
• Cable management: Universal cable tray with horizontal cable management
• Power distribution: Smart PDUs with environmental probe and temperature humidity sensor
• Monitoring and security: Remote infrastructure management
• Electronic cabinet access: Security access with integrated card reader
• Rack construction: Heavy duty steel construction with powder coat finish
• Drop-in solution: Rapid installation
• Modular design: Easy to expand as need increases
• Integrated cooling solution: DCLC integrated into the unit
• Data centre in a box: Suitable for data centres and non-data centres
The great refrigerant shortage European regulations are phasing out certain refrigerants, with major effects on data center cooling. Peter Judge reports
Peter Judge Global Editor
Efforts to reduce the impact of climate change by limiting greenhouse gas emissions could have a big impact on data centers, causing changes to one of their main components - the chillers.
While many data centers aspire to free cooling (just using the outside air temperature), that's not possible in all locations all year round, so data centers will usually have some form of air conditioning unit to cool the IT equipment. Air conditioning systems have come under fire for their environmental impact, and a major component of this is the global warming potential (GWP) of the refrigerants they use. Rules are coming into force that will reduce the use of current refrigerants and replace them with more environmentally friendly ones - while having a profound effect on equipment used in data centers.
The HFC refrigerants used in chillers are being phased out because of their high GWP. The effect is to increase the price of HFCs and push vendors towards other chemicals. So equipment makers will have to put up prices or use new refrigerants. Under the current F gas rules, the price of HFCs will go up each year, and the pressure to change will increase. The trouble is, the replacements have drawbacks.
They are generally more expensive. More surprisingly, the replacements are flammable. Why would international environmental rules demand we use flammable liquids in AC units? Natascha Meyer, product manager at Stulz, explains it is actually inevitable: "A low GWP means that the refrigerant degrades rapidly as it enters the atmosphere. The only way to ensure this is to make it chemically reactive. However, high reactivity also generally means high flammability, entailing safety risks for people and machines."
There are some products which have a low GWP and relatively low flammability, but they are possibly even less acceptable, says Roberto Felisi of Vertiv, because they are toxic: "There is a lobby in northern countries pushing for the use of ammonia. Ammonia is natural, and not flammable, but is it safe? Would you allow ammonia in your house?"
As well as being toxic, these fluids can be expensive. Meyer says one of the possibilities, R1234yf, is out of the question: "It reacts with water to form hydrofluoric acid [...and] its sparsity on the market makes it too expensive at present."
The best possibility is R1234ze. It is possible to modify chillers to work with this fluid, but there are still issues, says Meyer: "We have specially modified the CyberCool 2 to work with this refrigerant.
F gas rules attack HFCs
Chillers currently use HFCs - hydrofluorocarbons - which have a GWP thousands of times larger than that of carbon dioxide. The two main culprits are R410a, used in systems up to a few hundred kW, and R134a, used in larger systems. If this gives you a sense of déjà vu, that's because refrigerants have been changed regularly on environmental grounds. HFCs themselves only came in as a replacement for CFCs (chlorofluorocarbons), which were banned for a different environmental impact: they depleted the ozone layer. "A few years ago, we passed from R22, then we made a change to R407C, then the industry changed to R410a," says Roberto Felisi, product marketing director at Vertiv. "So it is the third time we changed refrigerant in 15 years."
This time, the refrigerants are being phased out gradually, using the 2015 "F gas" regulations in Europe, which set a cap on the amount that can be produced and sold (or imported) by the large chemical companies that supply the products. That's just a European rule, and one response is for manufacturers to ship units empty if they are going outside the EU, to be filled on arrival. However, a global agreement to cap and reduce F gases - the Kigali Amendment - was passed in Rwanda in 2016, and should start to come into force from 2019.
It's worth remembering that data centers are only a small part of the air conditioning market, which is dominated by "comfort" air conditioning. The whole market is so large, and the global warming potential of HFCs is so extreme, that the Rwanda deal was billed as the greatest single step in heading off global warming. It is possible that the Trump administration in the US might become aware of the Rwanda deal and back out of it, as it did with the Paris agreement on climate change. However, at present, it remains in place.
However, R1234ze has a low volumetric cooling capacity. Consequently, a chiller that originally delivered a cooling capacity of 1,000kW over a defined area now achieves just 750kW over the same area."
The modified chillers are less energy efficient, and customers need larger units that take up more space - which may be a serious consideration in a built-up area. So companies will continue to buy, and maintain, chillers based on R134a and R410a, and face the impact of the F gas regulations. They will have to pay more for refrigerants, and these price changes will be unpredictable. Meyer warned that users might have stocked up in 2016 - and sure enough, the big price increase was delayed. Speaking in 2018, Felisi says: "The price has gone up much more than we forecasted. The price of R410a went from €7 to €40 (US$8-46) per kg - something like a five times increase."
It's possible to overstate the current impact, of course. As Felisi points out: "The cost of refrigerant is only a few percent of the price of running a chiller." However, that cost will keep increasing. In the long term, it may mean existing chillers will have higher maintenance costs, and may be replaced sooner.
These cost changes may be harder to bear for smaller manufacturers, while larger manufacturers may be able to use their purchasing muscle to get hold of F gas more cheaply, and compete to maintain and replace those older systems.
Taking the longer-term approach of changing the refrigerant, it is possible to blend coolants and bring the GWP down from, say, 1,500 to 600, says Felisi, with a coolant that is "mildly flammable." This will have an impact on data center design - making split systems less popular and boosting the prospects of systems which circulate chilled water. Split systems, which circulate refrigerant to provide localized cooling - even putting the actual cooling into the racks - have seemed a good idea. However, they have long pipes, which need a lot more refrigerant, so they will become too expensive (or dangerous, if more flammable coolants are being circulated). "In a split system, you might have 100m of piping," says Felisi, estimating that refrigerants could be as much as ten percent of the running cost of a split system. "Split systems have become much less viable."
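The GWP of a blend is treated as the mass-weighted average of its components, which is why blending can pull a figure of 1,500 down towards 600. A minimal sketch of that arithmetic follows; the component GWP values are indicative (roughly the AR4 figures used by the EU regulation) and should be checked against the official tables rather than taken from here.

```python
def blend_gwp(components):
    """Mass-weighted average GWP of a refrigerant blend.

    components: list of (mass_fraction, gwp) tuples; fractions should sum to 1.
    """
    total_fraction = sum(frac for frac, _ in components)
    if abs(total_fraction - 1.0) > 1e-6:
        raise ValueError("mass fractions must sum to 1")
    return sum(frac * gwp for frac, gwp in components)

# Indicative example: a blend of R32 (GWP ~675) and R1234yf (GWP ~4),
# similar in spirit to the "mildly flammable" low-GWP blends Felisi describes.
if __name__ == "__main__":
    blend = [(0.7, 675), (0.3, 4)]
    print(f"Blend GWP ≈ {blend_gwp(blend):.0f}")  # ≈ 474
```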
The most likely result of the regulations will be to reinforce an existing trend towards placing air conditioning units outside the facility and circulating chilled water within the site. This keeps the volume of refrigerant down, and avoids circulating flammable material in the white space areas. "We see an increased use of chilled water solutions," says Felisi. "In a chilled water system, the refrigerant is installed in a packaged unit, outside the building." This move also potentially improves efficiency, as it is easier to combine external chillers with adiabatic evaporative cooling systems. Of course, the use of water has drawbacks, as it may be expensive or in short supply in a given location. As with all environmental decisions, there are trade-offs. Increasing water use in order to reduce the impact of the refrigerant might have a negative environmental impact in some locations. Meanwhile, a move which pushes data center builders to use less efficient chillers will mean that data centers consume more energy (and possibly have a higher carbon footprint) in order to reduce the impact of their refrigerants.
Getting into hot water Fears over climate change and rising power densities have led to the creation of a new wave of liquid cooled systems. Sebastian Moss traces the history of hot water cooling, and peers into a future where supercomputers could become vastly more efficient, and more powerful
Sebastian Moss Senior Reporter
Ideas can strike at any time. In 2006, Dr Bruno Michel was at a conference in London, watching former head of IBM UK Sir Anthony Cleaver give a speech about data centers. At the end, attendees were told that there would be no time for questions, because Cleaver had to rush off to see the British prime minister. "He had to explain to Tony Blair a report by Nicholas Stern," said Michel, head of IBM Zürich Research Laboratory's Advanced Thermal Packaging Group.
The Stern Review, one of the largest and most influential reports on the effects of climate change on the world economy, painted a bleak picture of a difficult future if governments and businesses did not radically reduce greenhouse gas emissions. "We didn't start the day thinking about this, of course," Michel said in an interview with DCD. "What Stern triggered in us is that energy production is the biggest problem for the climate, and the IT industry has a share in that. The other paradigm shift that happened on the same day is that analysts at this conference, for the first time, said it's more expensive to run a data center than to buy one.
"And this led to hot water cooling."
IBM's history with water cooling dates all the way back to 1964, and the System/360 Model 91. Over the following decades, the company and the industry as a whole experimented with hybrid air-to-water and indirect water cooling systems, but in mainstream data centers, energy-hungry conventional air conditioning systems persisted.
LRZ's SuperMUC. Source: IBM
"We wanted to change that," Michel told us. His team found that hot water cooling, also called warm water cooling, was able to keep transistors below the crucial 85°C (185°F) mark. Using microchannel-based heatsinks and a closed loop, water is supplied at 60°C (140°F) and "comes out of the computer at 65°C (149°F). In the data center, half the energy in a hotter climate is consumed by the heat pump and the air movers, so we can save half the energy."
Unlike most water cooling methods, the water is brought directly to the chip, and does not need to be chilled. This saves energy costs but requires more expensive piping, and can limit flexibility in server design.
By 2010, IBM had created a prototype product called Aquasar, installed at the Swiss Federal Institute of Technology in Zürich and designed in collaboration with the university and Professor Dimos Poulikakos. "This [idea] was so convincing that it was then rebuilt as a large data center in Munich - the SuperMUC - in 2012," Michel said. "So five and a half years after Stern - exactly on the day - we had the biggest data center in Europe running with hot water cooling."
SuperMUC at the Leibniz Supercomputing Centre (LRZ) was built with iDataPlex Direct Water Cooled dx360 M4 servers, comprising
more than 150,000 cores to provide a peak performance of up to three petaflops, making it Europe's fastest supercomputer at the time.
"It really is an impressive setting," Michel said. "When we first came up with hot water cooling they said it will never work. They said you're going to flood the data center. Your transistors will be less efficient, your failure rate will be at least twice as high… We never flooded the data center. We had no single board leaking out of the 20,000 because we tested it with compressed nitrogen gas.
"And it was double the efficiency overall. Plus, the number of boards that failed was half of the number in an air-cooled data center because failure is temperature change driven: half the failures in a data center are due to temperature change and since we cool it at 60°C, we don't have temperature change."
The system was the first of its kind, made possible because the German government had mandated a long-term total cost of ownership bid, which meant that energy and water costs were taken into account. As a closed system running with the same water for five years, the water cost was almost zero after the initial installation.
The concept is yet to find mass market appeal, but "all the systems in the top ten of the Top500 list of the world's fastest supercomputers are using some form of hot water cooling," Michel said.
There are signs that the technology may be finally ready to spread further: "We did see a big change in interest in the last 18 months," said Martin Heigl, who was IBM's HPC manager in central Europe at the time of the first SuperMUC. Heigl, along with the SuperMUC contract and most of the related technology, moved to Lenovo in 2015, after the company acquired IBM's System x division for $2.3 billion.
"There are more and more industrial clients that want to talk about this," Heigl, now business unit director for HPC and AI at Lenovo, told DCD. "When we started it in 2010, it was all about green IT and energy savings. Now, over time, what we found is that things like overclocking or giving the processor more power to use can help to balance the application workload as well."
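As an aside, the heat such a loop can carry follows directly from the 60°C-in, 65°C-out figures quoted earlier: Q = ṁ·cp·ΔT. Here is a minimal sketch, assuming water's specific heat of about 4.18 kJ/kg·K; the rack load and flow below are illustrative, not SuperMUC figures.

```python
def water_flow_for_load(load_kw, delta_t_k, cp_kj_per_kg_k=4.18):
    """Mass flow of water (kg/s) needed to carry `load_kw` with a `delta_t_k` temperature rise."""
    return load_kw / (cp_kj_per_kg_k * delta_t_k)

# Illustrative example: a 30 kW rack cooled by a loop running 60°C in, 65°C out (ΔT = 5 K).
if __name__ == "__main__":
    flow = water_flow_for_load(load_kw=30, delta_t_k=5)
    print(f"Required flow ≈ {flow:.2f} kg/s (≈ {flow * 60:.0f} litres per minute)")
```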
With hot water cooling, Lenovo has been able to push the envelope on modern CPUs' thermal design point (TDP) - the maximum amount of heat generated by a chip that the cooling system can dissipate. "At LRZ, the CPU will support 240W - it will be the only Intel CPU on the market today running at 240W," Heigl said. "In our lab we showed that we can run up to 300W today, and for our next generation we're looking at 400-450W."
He added: "Going forward into 2020-2022, we think that to get the best performance in a data center it will be necessary to either go wider or go higher, so you lose density but can push air through. Or you go to a liquid cooling solution so that you can use the best performing processors."
Hot water cooling has also found more success in Japan, "as there's a big push for those technologies because of all the power limitations they have there," as well as in places like Saudi Arabia, where the high ambient temperatures have made hot water a more attractive proposition.
Meanwhile, under Lenovo, the SuperMUC supercomputer is undergoing a massive upgrade - the next generation SuperMUC-NG, set to launch later this year, will deliver 26.7 petaflops of compute capacity, powered by nearly 6,500 ThinkSystem SD650 nodes.

A data center in a shoebox
In 2012, the Dutch government launched the DOME project - a joint partnership between IBM and ASTRON, the Netherlands Institute for Radio Astronomy - to design computing technology for the Square Kilometre Array (SKA), the world's largest planned radio telescope. Building upon the hot water cooling technology in SuperMUC, DOME led to the creation of IBM Zurich's MicroDataCenter, a computationally dense and energy efficient 64-bit computer design based on commodity components. But with IBM mostly out of the low-cost server market, it is currently licensing out the technology, with the first company utilizing the product coming from the Netherlands: ILA microservers, a startup, offers variations on the hot water cooled server.
SuperMUC from above. Source: IBM
"The first SuperMUC is based on a completely different node server design," Heigl said. "We decided against using something that's completely unique and niche; our server also supports air cooling, and we're designing it from the start so that it can support a water loop - we are now designing systems for 2020, and they are planned to be optimal for both air and water."
As power densities continued to rise, Lenovo encountered another cooling challenge - memory. "DIMMs didn't generate that much heat back in 2012, so it was sufficient to have passive heat pipes going back and forth to the actual water loop," Heigl said. "With the current generation 128 Gigabyte DIMMs, you have way more power and heat coming off the memory, so we now have water running between them allowing us to have a 90 percent efficiency in taking heat away."
The company has also explored other ways of maximizing cooling efficiency: "We take hot water coming out of it that's 55-56°C (131-133°F), and we put it into an adsorption chiller," which uses a zeolite material to "generate coolness out of that heat, with which we cool the whole storage, the power supplies, networking and all the components that we can't use direct water cooling on," Heigl said.
The adsorption chiller Lenovo uses is supplied by Fahrenheit, a German startup previously known as SolTech. "We're the only people in the data center space working with them so far," Heigl added. "We do think that this will be used more often in the future, though. In our opinion, LRZ is an early adopter, finding new technologies, or even just inventing them. The water cooling we did back in 2010 - no one else did that. Now, after a few years, others - be it SGI, HPE, Dell - they have adopted different kinds of water cooling technologies."
Michel, meanwhile, remains convinced that the core combination of microchannels and direct-to-chip fluid can lead to huge advances. "We did another project for DARPA, where we etch channels into the backside of the processor and then have fluid flowing through these microchannels, reducing the thermal resistance by another factor of four to what we had in the SuperMUC. That means the gradient can then become just a few degrees."
The Defense Advanced Research Projects Agency's Intrachip/Interchip Enhanced Cooling (ICECool) project was awarded to IBM in 2013. The company, and the Georgia Institute of Technology, are hoping to develop a way of cooling high-density 3D chip stacks, with actual products expected to appear in commercial and military applications as soon as this year.
20m: The number of servers Lenovo will have shipped when it upgrades the SuperMUC
“It is not single phase, it’s two phase cooling, using a benign refrigerant that boils at 30-50°C (86-122°F) and the advantage is you can use the latent heat,” Michel said. “You have to handle large volumes of steam, and that’s a challenge. But with this, the maximum power we can remove is about one kilowatt per square centimeter. “It’s really impressive: the power densities we can achieve when we do interlayer cooled chip stacks - we can remove about 1-3 kilowatts per cubic centimeter. So, for example, that’s a nuclear power plant in one cubic meter.” In a separate project, Michel hopes to be able to radically shrink the size of supercomputers: “We can increase the efficiency of a computer about 5,000 times using the same seamless transistors that we build now, because the vast majority of energy in a computer is not used for computation but for moving data around in a computer. “Any HPC data center, including SuperMUC, is a pile of PCs - currently everybody uses the PC design,” Michel said. “The PC design when it was first done was a well-balanced system, it consumed about half the energy for computation and half for moving data because it had single clock access, mainboards were very small, and things like that. “Now, since then, processors have become 10,000 times better. But moving data from the main memory to the processor and to other components on the mainboard has not changed as much. It just became about 100 times better.”
Cooling towers on the rooftop of LRZ Source: IBM
This has meant that “you have to use cache because your main memory is too far away,” Michel said. “You use command coding pipelines, you use speculative execution, and all of that requires this 99 percent of transistors. “So we’re using the majority of transistors in a current system to compensate for distance. And if you’re miniaturizing a system, we don’t have to do that. We can go back to the original design and then we need to run fewer transistors and we can get to the
factor of about 10,000 in efficiency." In a research paper on the concept of liquid cooled, ultra-dense supercomputers, 'Towards five-dimensional scaling: How density improves efficiency in future computers,' Michel et al. note that, historically, the energy efficiency of computation has doubled every 18-24 months, while performance has increased 1,000-fold every 11 years, leading to a net 10-fold energy consumption increase "which is clearly not a sustainable future." The team added that, for their dense system, "three-dimensional stacking processes need to be well industrialized."
An iDataPlex Direct Water Cooled dx360 M4 server Source: IBM
However, Michel admitted to DCD that the road ahead will be difficult, because few are willing to take on the risks and long-term costs associated with building and deploying radically new technologies that could upend existing norms.
"All engineers that build our current computers have been educated during Moore's Law," Michel said. "They have successfully designed 20 revisions or improvements of data centers using their design rules. Why should number 21 be different? It is like trying to stop a steamroller by hand."
The other problem is that, in the short term, iteration on existing designs will lead to better results: "You have to go down. You have to build inferior systems [with different approaches] in order to move forward." This is vital, he said: "The best thing is to rewind the former development and redo it under the right new paradigm."
Alas, Michel does not see this happening at a large scale anytime soon. While his research continues, he admits that "companies like ours will not drive this change because there is no urgent need to improve data centers." The Stern Review led to little change, its calls for a new approach ignored. "Then we had the Paris Agreement, but again nothing happened," he said. "So I don't know what needs to happen until people are really reminded that we need to take action with other technologies that are already available."
A new cooling frontier
Sebastian Moss Senior Reporter
Look to the skies for an innovative approach to cooling, says Sebastian Moss
When trying to cool a data center, consideration is often given to outside temperatures, to the cooling capacity of chillers, and to the airflow of fans. Less thought is given to the coldness of space, the vast bleak void that envelops us all, just begging for warmth.
That may change, with a small startup planning to take advantage of radiative sky cooling, a fascinating natural phenomenon that allows one to beam heat into space. SkyCool has begun to develop panels that emit heat at infrared wavelengths between 8 and 13 micrometers - wavelengths that are not absorbed by the Earth's atmosphere. "This essentially allows us to take advantage of the sky, which it turns out is very cold," SkyCool's co-founder and CEO Eli Goldstein told DCD. "And more broadly, space is the ultimate heatsink and is around three kelvin - it's extremely cold."
While the phenomenon has been known and studied for centuries, the company's panels, made out of a thin layer of silver covered by layers of silicon dioxide and hafnium oxide, can remove heat during the day - a first. A prototype of the system was installed
on a two-story office building in Las Vegas in 2014. Under direct sunlight, the panel remained 4.9˚C (8.8˚F) below ambient air temperatures, delivering "cooling power of 40.1 watts per square meter."
"What our surfaces do, is they're able to not absorb heat from the Sun, but at the same time they're able to simultaneously radiate heat to the sky in the form of infrared light. And that combination of properties has really never been present in any natural material, and it's been very difficult to make up until recently by design," Goldstein said.
The startup was formed by three researchers from Stanford University: Goldstein, his post-doctorate advisor, Aaswath Raman, and his professor, Shanhui Fan. SkyCool was started last year to commercialize the research first carried out at Stanford. "At the end of this summer, we hope to have a couple of installations in California," Goldstein said. "After that the plan would be to deploy panels in more locations and scale up."
The company hopes its panels, which circulate water-glycol, will find success in sectors that require high cooling loads: data centers, the refrigeration sector and commercial cooling. "Early on we're more focused on edge
data centers,” Goldstein said. “I think there's absolutely interest in the larger ones: provided that you wanted to cover the entire load, you would need to have adjacent space for the panels, not just the roof. You could also use the panels in conjunction with traditional cooling systems to reduce water use from cooling towers, or electricity use from cooling in more traditional ways.” He added: “We've had a number of conversations with data center companies. I think the biggest challenge for us right now is, because we're such a small company, to think about a 5MW data center or more is a lot, it's a big installation for us.” Another challenge, he said, was that “unfortunately, no one wants to be the first person to try a technology, especially at facilities like data centers. We know the technology itself works - I can heat up water and pump it through the panels and show that we can cool the water. The next thing we need to demonstrate is not on the technology side, but on our ability to execute deployments in a cost-effective way, and tie it into actual systems. “The energy side of things is pretty straightforward, we know how much heat needs to be removed, and how much heat can be removed by a panel, now it's about how we do that at scale.”
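For a rough sense of scale, the cooling power quoted above implies a lot of panel area for data center loads. A minimal sketch follows, assuming the 40.1W per square meter figure from the Las Vegas prototype holds everywhere (real deployments would vary with climate, panel design and how the panels are tied into the cooling loop):

```python
def panel_area_m2(it_load_kw, cooling_w_per_m2=40.1):
    """Panel area needed to reject an IT heat load, at a given specific cooling power."""
    return it_load_kw * 1000 / cooling_w_per_m2

# Illustrative example: a 50 kW edge site versus a 5 MW facility.
if __name__ == "__main__":
    for load_kw in (50, 5000):
        print(f"{load_kw:>5} kW load -> ~{panel_area_m2(load_kw):,.0f} m² of panels")
```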
Fast and flexible. From network edge to factory floor. The STULZ Micro Data Center: A complete solution in a single unit. Includes rack, cable management, cooling, UPS, power distribution, ambient monitoring and firefighting. It can also be installed anywhere, is rapidly ready for use, and can be technically expanded in many ways. Thanks to direct chip cooling, for example, heat loads of over 80 kW are no problem. www.stulz.de/en/micro-dc
The picture shows the STULZ MDC high performance version, the standard version differs in terms of equipment
Advertorial: Nortec
Cooling trends: Why evaporative cooling is becoming so popular
A single adiabatic humidifier can provide up to 680kW of evaporative cooling while operating on as little as 0.3kW of electricity
[Diagram: water in; warm dry air enters; cool humidified air is supplied]

Humidifiers are required in data centres to prevent electrostatic discharge damaging servers, and they offer high capacity, low cost evaporative cooling. ASHRAE recommends a humidity level of 41.9°F (5.5°C) dew point to 60% RH, and an allowable range of between 20-80% RH. In most parts of the world, at some time in the year, humidification will be needed to meet these internal conditions.
Humidifiers are often used alongside free air cooling systems to either boost the cooling capacity with an evaporative cooling effect or provide high load, low cost humidification to the large volume of air flowing through the data centre's ventilation system.
In temperate climates, where free air cooling alone cannot meet the required internal conditions all year round, adiabatic humidifiers will provide additional cooling on hot days. This increases the operating window of the ventilation system without having to rely on traditional DX chillers. In colder locations, adiabatic humidifiers can be used to economically add huge volumes of moisture to the incoming air. Heat from the data halls is used to warm the incoming air prior to humidification, increasing the moisture content and reducing the temperature to the necessary supply condition.
For every kilo of humidification delivered by an adiabatic humidifier, 0.68kW of evaporative cooling is achieved. As a single adiabatic humidifier can provide up to 1,000kg/h of humidification and a resultant 680kW of evaporative cooling, while operating on as little as 0.3kW of electricity, their potential for delivering low cost, low energy cooling to an air handling unit is great.
There are three main strategies for evaporative cooling in air handling units.

Direct Evaporative Cooling
Humidity is added to the incoming fresh air stream, reducing its temperature whilst increasing its humidity. This conditioned air is supplied directly to the room, with a high percentage of the room air being exhausted, rather than re-circulated, to maintain an appropriate humidity level in the room. The amount of cooling that can be achieved depends upon the humidity level of the incoming air stream. Air with a lower humidity will absorb more moisture, resulting in a greater evaporative cooling effect.

Indirect Evaporative Cooling
Outside air is used to cool an internal environment without any mixing of the internal and external air streams. Outside air is run through a heat recovery (HR) unit and is then exhausted. The return air from the room is cooled by the HR unit before being reintroduced to the room. By humidifying the external air stream prior to the HR unit, its temperature is reduced, enhancing the cooling capacity of the system. This enables indirect evaporative cooling to be used when the outside temperature is higher than the desired room supply condition. A higher velocity on the external air stream than the internal one further increases the cooling capacity of the system.

Exhaust Air Evaporative Cooling
The return air extracted from the room is cooled by the humidifier before being run through an HR unit and then exhausted outside. The cool thermal energy provided by the humidifier is transferred to the incoming air stream by the HR unit, cooling it and reducing the required load on DX air conditioning systems. As there is no mixing of the humidified exhaust air and the incoming fresh air, there is no moisture added, so cooling occurs irrespective of the incoming air's humidity level.
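The 0.68kW-per-kg/h figure follows from the latent heat of vaporization of water. A minimal sketch of the arithmetic, assuming a latent heat of roughly 2,450kJ/kg at typical supply conditions:

```python
LATENT_HEAT_KJ_PER_KG = 2450  # approximate latent heat of vaporization of water

def evaporative_cooling_kw(humidification_kg_per_h):
    """Cooling effect (kW) from evaporating a given mass of water per hour."""
    return humidification_kg_per_h * LATENT_HEAT_KJ_PER_KG / 3600  # kJ/h -> kW

if __name__ == "__main__":
    print(f"1 kg/h     -> {evaporative_cooling_kw(1):.2f} kW")    # ~0.68 kW
    print(f"1,000 kg/h -> {evaporative_cooling_kw(1000):.0f} kW") # ~680 kW
```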
Contact Details Caine Ruckstuhl Director Advertising & Sales Promotion T: 1.866.667.8321 E: caine.ruckstuhl@humidity.com
2N versus hybrid resilience
Peter Judge Global Editor
It has been said that the cloud will make reliable data centers unnecessary. The truth is, the cloud is revealing deep issues with our legacy IT, reports Peter Judge
For decades, data center builders have gone to great lengths to make facilities which can deliver services with extreme levels of reliability. Now, a new generation of IT people say the cloud can do this just as well. Where should CIOs turn for resilient services - the cloud, or redundant data center hardware? Big surprise: the answer is neither. True resilience comes from properly understanding the services you are running.
Traditionally, data centers have been made more reliable by using redundant architectures. There are backup servers with backup storage ready to take over, and the facility itself is made reliable by battery backup from uninterruptible power supplies, duplicate power feeds and spare cooling capacity. The wisdom of data center reliability has been codified in the Tier system of the Uptime Institute, as well as the European standard EN 50600. It comes down to duplication - sometimes referred to as "2N" - of power and cooling, and concurrent maintainability, the ability to service the facility without downtime.
Duplicating resources can push operators to maintain two separate data centers, far enough apart (say 100km) that a natural disaster won't strike them both, mirroring live data in so-called "active-active synchronization."
That's complicated and expensive, say enterprise IT staff, who have been slyly adopting cloud services as an alternative, and have noticed that these are pretty reliable, by and large. They are implemented on duplicated hardware, running in multiple data centers. They are accessed at the level of virtual machines or applications, which run across multiple servers. This can be a big plus in terms of reliability, if it is designed to minimize fault
propagation (see box). Services like Netflix are built so that individual modules can pick the cheapest resources to run on, says Liam Newcombe, CTO of data center analytics and TCO firm Romonet: "You can take out entire servers or buildings and Netflix doesn't notice. When the building comes back on, it replicates back the few transactions it missed, and these are replicated out to a distributed database."
What if this is all the duplication you need? If applications are running in the cloud, then reliability becomes something for the cloud provider to worry about. It's especially tempting since using cloud services is simpler - and often cheaper - than managing your own facility.
Reliability experts say it's not as simple as that - but they acknowledge that a major shift is happening. Richard Hartmann builds data centers for colocation provider SpaceNet in Germany. He makes them reliable, and is a passionate advocate for the EN 50600 reliability standard. Despite this, he says, there's a new generation of users - the cloud natives - who will see standards like this as irrelevant: "No one in the cloud will have EN 50600."
Cloud native people don't care about having redundant sites, he says, and even if they did, you can't build reliable architectures like active-active synchronization on top of cloud services. To move to the cloud, one has to accept that reliability is implemented differently: "Cloud natives have a totally different view of what redundancy means. They no longer care about the underlying infrastructure as long as they have enough components. If you have ten database servers, you don't care if half of them go down."
Of course, cloud outages do happen. To quote Metallica's James Hetfield, it's all fun and games till someone loses an eye. In 2017, Amazon made an error, and
accidentally deleted servers providing the index to its Simple Storage Service (AWS S3). All over the Internet, services went down, including Quora, Giphy, Instagram, IMDb, American Airlines, Imgur, and Slack, to name but a random few.
Fault isolation Whether you are in the cloud or an on-premises facility, the big problem is how faults in one part of a complex system can affect the rest of the system. “You want fault isolation, but often a complex system is interconnected in such a way as to create a fault propagation path,” says Newcombe. For example, if a database is continuously mirrored, then a copy is always available if the server fails. But if the data is corrupted, then both copies will be bad, because those errors have propagated to the other instance. “In large distributed databases, such as Cassandra, the software is itself intended to be distributed, and it is architected to be fault tolerant,” he says, “knowing how to retry and degrade in a graceful manner.” Legacy applications were designed to run on a “really solid monolithic block,” says Newcombe, but that approach “never really worked. When those horrible monoliths go bang, they go bang in the most appalling way and take the longest time to fix.” By contrast, “cloudy” microservices “tend to go wrong in minor ways.” Fixing such an app should be considerably quicker.
The error at AWS had such serious consequences because these services all had unknowingly been built with S3 as a single point of failure. So the new cloud-native mindset doesn’t dispense with the need for a considered approach to reliability - it shifts the responsibility up into the design of the service. And how will they know if their design is reliable? The Uptime Institute, the source of the Tier standards for reliable facilities, wants to help here, with an Uptime Hybrid Reliability assessment, to gauge the reliability of hybrid cloud implementations - but it’s not as simple as offering a new set of Tier guidelines for the cloud. “A company may have databases right across the cloud, but the CIO is still responsible for providing availability,” says Todd Traver, VP of IT optimization strategies at the Institute. In some ways the situation is worse, because that CIO is expected to offer guarantees for services provided using elements from third parties. As Newcombe explains, even if those underlying cloud services offer a service level
agreement (SLA), it can’t be used to deliver a service level agreement to customers down the line: “Service penalties don’t flow through an SLA chain,” he says: “You can’t pass losses down the chain.” Cloud providers will only repay the service fee of their direct customers, not the much higher fees paid for applications built on top of those cloud services. In the end, a cloud provider is not an insurance company, and won’t reimburse losses. All this means that CIOs feel they have lost control, says Traver: “They no longer provide or even manage the various elements of their IT. And it is not good enough to just cross your fingers and hope AWS will be there!” “In the past, they had a 2N data center, which would never go down,” he says. “The applications would be non-resilient; they 100 percent depended on a data center. Now they are spread across multiple locations. The data center is critically important still - but also the application you use. There are a lot more pieces and parts.” A CIO who might previously have implemented a service under their own
control in a Tier III or Tier IV data center now has to contend with a hybrid that combines their own IT resources with multiple cloud services. That’s a complicated task. Uptime’s approach is to add in the other factors: as well as the in-house data center, the hybrid assessment takes in the networks, platforms and applications involved, and the nature of the overriding organization. In other words, it shifts responsibility back to the CIO, or the provider of a service. Resilience is no longer about building a reliable data center. It’s about whether you yourself are managing a reliable service, constructed from multiple components. In the end, Traver says, a CIO tasked with providing reliable infrastructure must do due diligence on all the services used, perhaps even including a check of the data center architectures of their cloud providers paying attention to the issues that are created by the combination that has been adopted. It’s also going to be a continuous responsibility - those who get the Hybrid Reliability stamp of approval should check and recheck throughout the year, in case
architectures or components subtly change.
"Companies are coming to us who have either had a large outage in the past, or are concerned they don't know how resilient they are," Traver says. At the time of writing, no stamps of approval have been issued, but those working towards them include a cloud database spread across multiple locations that "recently had a bit of an outage," and wanted to know: "Why did we not see this coming?"
Uptime consultants will carry out the assessment over the course of a week or so. During that time, Traver expects some flaws will be found and fixed, and the process should educate the in-house staff to a level where they can self-assess to stay online until the next audit: "It will be like going for your annual physical."
Uptime plans to cover what happens in the event of a failure, perhaps helping the provider define a degraded service level, which should help in aligning expectations with what can be delivered.
So has the cloud changed everything? Actually no, says Newcombe. It is just making visible a gap between customer assumptions and reality which has always existed in conventional data centers.
For instance, a data center which promises 99.95 percent uptime might imply the site will be down for less than 0.05 percent of the year - about four hours. However, Newcombe points out: "Your outsourced provider might well be entitled to take a couple of hours every Sunday." Over a year this would massively outweigh the 0.05 percent allowed for unplanned outages - and would also probably be longer than outages in cloud services over that time.
Bringing this issue to the surface could benefit everyone, because the techniques used in web service design are available, to some extent, to all. When services have been broken up, the architect should be able to decide how critical each part of the service is, how important the data it produces is, and how vital its availability really is. Marketing retweets may be expendable, but paid transactions are not. The data in the payroll system is crucial, but it only needs to be continuously available for a short period when the wages are being calculated.
More fundamentally, firms are relying on legacy applications which are simply not written in a way which is compatible with the cloud, or with modern microservice architectures. "Legacy apps should have been rewritten that way in the '90s, but they weren't," Newcombe says. Such apps can be run fairly well in legacy data centers, but they can't be migrated into the cloud in a stable way. Breaking those applications up and moving them to web services will be fantastically complex. And while it may be tempting to blame this crisis on the cloud, the real culprit is the inertia which has left those legacy applications in use.
"We need to have apps which are fault isolating and degrade gracefully," Newcombe says. "That is an entire discipline that people like Netflix have got right, and other people have got wrong."
In the end, the cloud model is inevitable, and users and businesses must adapt to it. In doing so, users will be changing one set of risks and threats for a different set of risks - and hopefully increasing their awareness in the process.
"If you already own your own data center it's an extraordinarily cheap resource if you can run it," Newcombe adds. It's also the best place to keep legacy applications like SAP which are designed to run there. The in-house data center will continue to be popular with companies wanting control of customer data, either because of measures like Europe's GDPR, or simply because they just like to have control.
Cloud infrastructure will be more open and usable, and - if done correctly - the cloud model can match or exceed the reliability of over-complicated enterprise data centers. Whichever way you move, the important thing will be to understand the trade-offs you are making.
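To make the uptime arithmetic above concrete, here is a minimal sketch converting an availability percentage into allowed annual downtime; the weekly maintenance window is illustrative of the "couple of hours every Sunday" point, not a figure from any particular SLA.

```python
HOURS_PER_YEAR = 365 * 24

def allowed_downtime_hours(availability_percent):
    """Annual downtime permitted by an availability promise, in hours."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

if __name__ == "__main__":
    print(f"99.95% uptime    -> {allowed_downtime_hours(99.95):.1f} hours of unplanned downtime a year")
    # Two hours of planned maintenance every Sunday dwarfs that allowance:
    print(f"Weekly 2h window -> {52 * 2} hours of planned downtime a year")
```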
One of the most is, how important is the data it produces, and striking approaches was taken by how vital its availability really is. Netflix, which created a tool called Marketing retweets may be expendable, Chaos Monkey (now part of a wider but paid transactions are not. The data in the project called the Simian Army). payroll system is crucial, but it only needs to Chaos Monkey tests network be continuously available for a short period resilience by randomly turning when the wages are being calculated. off parts of Netflix’s production network to see the result: “What More fundamentally, firms are relying on better test of having written a fault legacy applications which are simply not tolerant loosely coupled application written in a way which is compatible with could you have?” Newcombe asks. the cloud, or with modern microservice “And doing it on your live network architectures. “Legacy apps should have is the only way to be sure.” been rewritten that way in the ‘90s, but they weren’t,” Newcombe says.
205 minutes: Total outage time at AWS in 2017 (CloudHarmony)
Powering the golden state America’s most populous state wants to go all in on renewable energy, but doing that requires smart regulations that help businesses and consumers alike. Sebastian Moss reports
Sebastian Moss Senior Reporter
California believes it can change the world: the state is at the forefront of a shift away from fossil fuels, hoping to vastly improve the efficiency of renewable energy generation and, at the same time, rearchitect the aging electrical grid. At the heart of these efforts are regulations, state-level mandates that aim to build a greener future without imposing burdensome costs on businesses.
"We cannot regulate energy efficiency
standards in California unless they are technically feasible and economically beneficial,” Albert Lundeen, deputy executive director for strategic planning at the California Energy Commission, told DCD. “You've got to be able to show that, whatever the technology is, it is market-ready and that ultimately, it will save the consumer money. One of the things that the private sector desires is certainty - the public sector can establish policies that let you know where this state or that government is headed in the near and the far future.” This renewable energy drive dates back decades, but its most ambitious efforts are yet to come. “Our renewable energy goal at the moment is 50 percent by 2030,” Lundeen said. “Eighteen months ago it was 33 percent by 2020, and we know that milestone is going to be achieved. Utilities saw the writing on the wall and planned for it because they can't make the decision in a month, or even 18 months. They need to look years ahead.”
The move to renewables will also force a rethink of how electricity is stored and delivered. In 2013, California set itself another target - 1.3GW of energy storage by 2020. "And a lot of that is already in place. In fact, there's got to be half a dozen or more reasonably-sized projects, mostly in Southern California.
"It's a necessary component of the grid," Lundeen said, admitting that due to its heavy investment in solar, California had to pay other states to take excess energy several times last year - instead of storing it locally. "Most of these storage facilities have been based on batteries at this point, but we don't want to close the door on alternatives. We've done a lot of R&D on some other possibilities as well - different types of batteries, pump storage, air-compressed storage."
Complementing new energy storage capabilities are plans to change how America's antiquated grid system works: one, on a local scale, using several microgrid systems; the other takes aim at a larger problem, a common grid for western states. Governor of California Jerry Brown has proposed a new grid for as many as 14 states, but the plan, which has gone through multiple iterations, faces stiff opposition. "We've got a lot of partners through Utah and Arizona, our neighboring Intermountain states, and a big concern of theirs is that they don't want to be overly influenced by California," Lundeen said. Equally, there are those in California who fear that expanding the grid will increase the number of politicians, agencies and businesses that have influence over it, putting California's renewable goals at risk.
Challenges remain, but there are some examples of interstate cooperation. "Right now we get wind from Wyoming," Lundeen said. The goal is to allow states to share
in each other’s natural resources, rather than each being a kingdom unto itself. For example, California, despite being blessed with copious sun, lacks the abundant wind supply found in some mid-western states. “We have a deep shoreline, so the idea of attaching a wind turbine tower to the base of the ocean floor is just not a possibility off the coast of California, but we're exploring offshore wind on floating platforms,” Lundeen explained.
Already, California’s progress has had a global impact: “We think of ourselves as kind of like a laboratory for the world. Every week we have visitors from around the world that come to our HQ and ask how we have succeeded at making the changes we have.” But while the world may be taking notice, at a federal level, the United States appears intent on reverting to the old ways, with the Trump administration pulling out of the Paris Agreement, and rolling back numerous EPA
regulations. Where does that leave California? "We certainly have a different dynamic now than a couple of years ago," Lundeen said. "But we're seeing a big move toward renewable energy, despite what some would see as a conflict."
Even with the coal-fueled fever that's gripping the White House, it appears that renewables have already passed a crucial point - no matter the regulation of the day, they are simply more economical. Data centers, ever the power hogs, are turning to renewables in droves - not just for the good PR, or that warm, fuzzy feeling. No, they are doing it because the prices of green energy are low and, crucially, stable. Operators can plan and budget years in advance, instead of being at the mercy of a fluctuating fossil fuel market. Lundeen said: "On the contracts for big commercial facilities, solar and wind are very competitive, if not the best price out there." But he cautioned: "We have made significant strides, but much more needs to be done."

Energy Smart Focus Day - June 25, 2018
DCD>Webscale San Francisco: Sustainable energy, microgrids and energy storage will be a major theme at the Energy Smart Focus Day, which kicks off DCD's San Francisco event, opened by the California Energy Commission Chair, Robert Weisenmiller. bit.ly/DCDwebscale
Open Source
The agony and the ecstasy of Mark Shuttleworth
"There is a perception that we can act as if we don't compete - that's simply not true" - Mark Shuttleworth

Last month at the OpenStack Summit in Vancouver, Mark Shuttleworth went to war. The CEO of Canonical, the company responsible for the popular Ubuntu Linux and Ubuntu OpenStack distributions, used his keynote to talk down another open source organization. The presentation was so aggressive that the recording of it never made it to the Summit website - it is the only keynote not available online.
Shuttleworth started by taking a swipe at VMware - unusual, but still in keeping with the spirit of the event; there's not much love for proprietary code here. But then he launched a scathing attack on Red Hat, the well-known vendor of Red Hat Enterprise Linux, criticizing it as too expensive - actually more expensive than VMware.
"Google, IBM, Microsoft are all investing and innovating to drive down the cost of infrastructure. Every single one of those companies engages with Canonical to deliver public services - not one of them engages with VMware or Red Hat to offer those public services, they can't afford to," Shuttleworth said. "You have to think carefully who you're listening to. And if you're only listening to Red Hat then you wouldn't have heard that every single major public cloud Kubernetes service uses Ubuntu to deliver Kubernetes.
"Half the cost of VMware, one third the cost of RHEL, and truly portable, multi-cloud Kubernetes."
In a surreal sequence of events, Shuttleworth used his time on the stage to make a sales pitch, complete with two special offers. It was decisively undiplomatic.
The open source software movement emerged in the nineties, promoting the values of universal access, community and collaboration. But it looks like some of its disciples are growing up. The recent acquisition of GitHub by Microsoft drives this point home - OSS is big business, and one of the largest open source events reflects this. Shuttleworth is under pressure to turn Canonical into a public company after 14 years of pouring his own money and energy into Ubuntu.
"What I said there was factual and defensible," he told DCD. "I do think it's important that the OpenStack community have a very clear view on the business case for OpenStack."
"There's a perception that we can act as if we don't compete - that's simply not true. What I wanted to do is remind people that OpenStack set out with a mission, which was to provide cost-effective data center automation and infrastructure-as-a-service."
And he might be right - maybe the open source community needs a reminder that in business, success is measured in money. Shuttleworth's outburst is not without precedent; the original bad boy of the open source community is, of course, Linus Torvalds. The creator of Linux is famous for courting controversy: outspoken and sometimes abusive, he uses a wide variety of expletives and raises his middle finger at everyone and everything - including major hardware vendors. I guess we should be grateful that Shuttleworth minds his language.
Max Smolaks News Editor
Cools down without even warming up The CyberHandler 2 from STULZ is the complete energy efficient solution for Data Centers. Year-round free cooling, low total cost of ownership and high-quality components leave nothing to be desired. www.stulz.de/de/cyberhandler-2