DartPoints' CEO: The Edge lives in Tier 3 cities
The Power supplement Hydrogen, Open19, renewable tech, & more
Loon balloon
Connecting the world from the sky
Issue 38 • November 2020 datacenterdynamics.com
VOYAGE OF DISCOVERY What Microsoft learned about data center reliability from Project Natick
High Rate & Extreme Life High Rate Batteries for Critical Power Data Center Applications
RELIABLE Narada HRL and HRXL Series batteries deliver! Engineered for exceptional long service life and high rate discharges, Narada batteries are the one to choose. Narada can provide solutions to meet any Critical Power need.
ISO9001/14001/TL9000 Certified Quality Backed by Industry Leading Warranties
Narada...Reliable Battery Solutions www.mpinarada.com - email : ups@mpinarada.com - MPI Narada - Newton, MA Tel: 800-982-4339
ISSN 2058-4946
Contents November 2020
6 News: Rise of the FLAD, chip consolidation, Goldman Sachs targets data centers, London outages, and the biggest stories of the past two months
14 Voyage of discovery: For two years a data center has lived under the sea. We talk to Microsoft about Project Natick, and explore whether it is the wave of the future
18 The CEO interview: "Networks driven by a few centralized hyperscale locations won't meet the future demands of the traffic across the networks, from applications such as AI, autonomous vehicles, telehealth and agriculture," DartPoints CEO Scott Willis tells DCD
21 Mind the doors! ASHRAE explains how to run the Edge
27 Despite everything, a successful year (for data centers)
30 The failure of 5G? 5G was supposed to be a revolution. So far, it's not even been a great evolution
33 The Power supplement: Open19 lives on, the fight for renewable energy, hydrogen generators and more in this special supplement on Power
49 Growing underground: Bluebird Network found out how to upgrade diesels in a disused mine
53 Connecting the world via balloon: Loon's CTO on Internet by helium
55 Who's got 5G? Which countries have got 5G, what standards are in use, and how important are they for telcos?
60 Edge in the next normal: The Covid epidemic accelerated digitization. How will this change the deployment of Edge?
62 Is it time to break up big tech?
No matter the environment, Starline’s at the center.
Hyperscale, Colocation, Enterprise. Time-tested power distribution for every environment. Starline Track Busway has been the leading overhead power distribution provider—and a critical infrastructure component—for all types of data centers over the past 30 years. The system requires little to no maintenance, and has earned a reputation of reliability due to its innovative busbar design.
StarlineDataCenter.com/DCD
From the Editor

Exploring sea, air and caverns
Digital infrastructure really is everywhere: in the stratosphere, deep underground, and even on the sea bed. Microsoft's Project Natick is surely the most striking data center experiment of recent years. For two years, a data center ran on the sea bed - even helping fight Covid-19 - and those aquatic servers were more reliable than land-based equivalents. Project leader Ben Cutler told us how Microsoft plans to apply the lessons on land and sea, in a true deep dive of an interview (p14).
Underground caverns in Springfield, Missouri are home to a well-established data center. This is no experiment: it's been there 20 years, and Bluebird Network's Todd Murren told us the story of how to do major upgrades 85 feet underground when you have no room to move (p49). Twelve miles up in the stratosphere, Google's Loon project uses weather data to navigate helium balloons that deliver Internet access to some of the world's remotest areas (p53). Loon started as a crazy idea, much like Project Natick; the fact that it's being rolled out is down to the determination of its creators. Loon's CTO Sal Candido tells us how blue sky thinking turned into commercial reality.
Edge isn't what you thought
Both Loon and Natick are Edge facilities of a sort, and we see both of them as signs that Edge networking - the move to bring computation close to users - may be very different from the way it's been portrayed so far. ASHRAE, the body which revolutionized large scale data centers, has a stark message for those who think equipment in tiny closets can be treated just the same as in big data centers. Open the doors of a cabinet in the open air, and you could destroy the electronics inside, according to an ASHRAE technical bulletin. If you haven't factored in environmental protection, you need to rethink your Edge deployment (p21).
5G could be another problem for the Edge, and we have a whole supplement on the subject coming soon. For now, consider this. 5G undoubtedly has technological benefits, and telcos have ambitious plans to deliver it (p56). But there are serious issues with a partial roll-out, which offers small benefits over current 4G networks, at a significant cost increase (p30). One Edge pioneer, DartPoints, has taken a pragmatic decision to concentrate on small versions of traditional facilities. CEO Scott Willis told us why (p18).

Wait! There's more... This issue also has an update on the pandemic (p27), and an entire supplement on power distribution (p33). And next issue, we're hoping to take you even further afield than this issue. How? That would be telling...

Peter Judge
DCD Global Editor

753 days: Length of time Microsoft's SSDC-002 spent underwater

Dive even deeper
Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color, shown below.
Events • Debates • Training • Intelligence • Awards • CEEDA

Meet the team
Global Editor Peter Judge @Judgecorp
Deputy Editor Sebastian Moss @SebMoss
Reporter Alex Alley
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Designer Dot McHugh
Head of Sales Erica Baeta
Conference Director, Global Rebecca Davison
Conference Director, NAM Kisandka Moses
Conference Producer, APAC Chris Davison
Chief Marketing Officer Dan Loosemore

Head Office
DatacenterDynamics
22 York Buildings, John Adam Street, London, WC2N 6JU

PEFC Certified: This product is from sustainably managed forests and controlled sources. PEFC/16-33-254 www.pefc.org
© 2020 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.
Whitespace: The biggest data center news stories of the last quarter

NEWS IN BRIEF
IonQ opens quantum computing data center The 23,000 square foot (2,100 sq m) facility will help with the company’s hardware R&D, while the platforms will be made available to researchers across the US.
Former Equinix CEO Steve Smith takes top job at Zayo Smith was previously CEO of Equinix, but in 2018 resigned with immediate effect “after exercising poor judgment with respect to an employee matter.”
Amazon to invest $2.8 billion in second Indian data center region It is expected to launch in the state of Telangana by mid-2022. “We are set to see a transformation in the way businesses in south India harness the power of IT,” said IT Minister K.T. Rama Rao.
Dublin replaces Paris in the top four, as European hubs accelerate
Investment is faster than ever, and it's FLAD not FLAP

Europe will have four "Gigawatt markets" for data centers, and Paris won't be one of them, as data center investment in Europe continues to accelerate at unprecedented levels, according to the first data center report from global property adviser Knight Frank. All of Europe's data center markets are growing, but Dublin has overtaken Paris, joining the cities of Frankfurt, London, and Amsterdam, all of which are expected to exceed 1,000MW of data center capacity before 2023, according to the first Knight Frank Data Centre Report, put together by market analytics firm DC Byte. Investment is continuing strongly, with over $25 billion of total data center investment expected to complete in 2020, and a higher take-up than the first half of 2019. "The period of M&A activity under Covid-19 [has been] one of the busiest periods in data center history, with over $25bn of total data center investment expected to complete in 2020," says the report, listing completion of Digital Realty's $8.4bn Interxion purchase, and EQT buying Zayo for $14.3bn and EdgeConneX for a rumored $2.5bn.
Investment in the first half of 2020 was over four times the annual average, and "a colossal increase on last year's $2bn investment volumes," says the report. Take-up in H1 2020 was also 50 percent higher year-on-year at 282MW. The report looks at enterprise and colocation data centers as an asset class, in twelve key European markets. It combines the number of MW of live data center power in each market with the amount under construction, along with data centers which are in the pipeline, having been paid for in advance by the customer. "Phased IT power means a committed development," explained DC Byte CEO Ed Galvin to DCD. "You have a field with planning consent and you have signed it to Google." According to DC Byte's graph of the twelve hubs, the front four are jostling for position. It looks as if Dublin is currently ahead of the rest of the FLAD, with 769MW of live power, but London has nearly 600MW phased or under construction - enough to take the top spot. bit.ly/FLADonTop
HPE to build 375 petaflops LUMI supercomputer in Finland Were it to launch today, it would be the world’s second most powerful system, behind Fugaku, but by the time it launches in mid-2021 the US may have launched its first exascale supercomputer.
After submarine cable accident, Facebook abandons equipment and 6,500 gallons of drilling fluid under Oregon seafloor Officials claim Facebook took over a month to come clean about the incident, a narrative it disputes. There are currently no plans to retrieve the equipment, which it says poses no risk.
CBRE: North American data center boom continues despite Covid-19 The first half of 2020 saw 134.9MW of wholesale data center space taken up across key markets (Northern Virginia, Dallas, Silicon Valley, Chicago, Phoenix, New York Tri-State, and Atlanta). 373MW of capacity is being built, including 239MW in Northern Virginia.
Marvell to buy Inphi in $10bn semiconductor deal Marvell supplies chips that move data on copper-based cables, but Inphi makes chips that traffic data over fiberoptic cables. Inphi’s customers include Amazon, Google, Microsoft, and Facebook, which use its chips for optical connections inside their data centers. Inphi’s assets are also used for 5G hardware. As part of the purchase, Marvell will give Inphi shareholders $66 in cash and 2.32 shares of stock in the combined company for each share of Inphi.
Nvidia to acquire Arm for $40 billion
If regulators will let it

GPU giant Nvidia has agreed to acquire British chip designer Arm from SoftBank for $40bn. Unlike Nvidia, which designs and sells its chips, Arm licenses out its designs to other chip companies, including Apple, Qualcomm, and Amazon. This approach means Arm earns significantly less per chip sold, but equally its cores end up in a lot more products. More than 180 billion Arm-designed chips have been shipped since the company was founded nearly 30 years ago, making it the world's most popular instruction set architecture. The transaction is expected to close in 18 months, subject to what could be significant regulatory hurdles. Nvidia will pay SoftBank a total of $21.5bn
in common stock and $12bn in cash, as well as issue $1.5bn in equity to Arm employees. The remaining $5bn in cash or common stock is subject to Arm meeting undisclosed financial performance targets. “Arm’s business model is brilliant,” Nvidia CEO Jensen Huang said in a blog post. “We will maintain its open-licensing model and customer neutrality, serving customers in any industry, across the world, and further expand Arm’s IP licensing portfolio with Nvidia’s world-leading GPU and AI technology.” Huang added that the company would remain headquartered in Cambridge, UK, and retain its brand identity and management.
bit.ly/NotThatMarvel
bit.ly/ArmingUp
AMD to acquire FPGA company Xilinx for $35 billion
Every day the industry gets smaller - or at least, less populous

AMD will acquire Xilinx in an all-stock transaction valued at $35 billion. The transaction has been unanimously approved by both companies' boards, but is subject to approval by AMD and Xilinx shareholders, regulatory approvals, and other customary closing conditions. It is currently expected to close by the end of 2021. Xilinx is best known for its field-programmable gate arrays (FPGAs), which consist of logic blocks that can be configured by the customer on the spot to accelerate specific workloads. The chips have proved popular in automotive, aerospace, and military sectors, as well as 5G networking and data centers. Its primary rival, Altera, was acquired by Intel for $16.7 billion back in 2015. "Our acquisition of Xilinx marks the next leg in our journey to establish AMD as the industry's high performance computing leader and partner of choice for the largest and most important technology companies in the world," said AMD CEO Dr. Lisa Su. bit.ly/IntelLooksOnNervously
Vantage Data Centers raises $1.3bn for refinancing and expansion

The money will come in the form of two low-rate loans and help it refinance outstanding debts. Vantage says the move will reduce overall costs by about 30 percent on average across its capital structure and extend debt maturities. The money was generated via securitized notes, rated "A-" by financial company Standard & Poor's, and will be issued in two chunks during Vantage's 2020-2021 and 2020-2022 funding rounds. The proceeds will be used to refinance debt, but a portion of the money will go towards data center construction in North America. Securitization financing is essentially a company mortgaging assets, a common practice among telcos and the housing market. Vantage was the first company in the data center business to raise funds by securitizing its debt, back in 2018. The company raised around $1.12bn. This allowed it to pay lower interest rates on its debt of $900m.
Goldman Sachs sets up new data center builder Global Compute with $500m
Plans to build and buy $1.5 billion of properties round the world

Goldman Sachs' merchant banking division has hired a top data center management team and given it $500 million to join the infrastructure boom. Its first investment is in a Polish data center business, ATM. Global Compute will build and buy data centers in North America, Europe, Asia Pacific, and Latin America. The Goldman Sachs investment, from its Wall Street Infrastructure Partners fund, will enable $1.5 billion in acquisitions and development. It will be led by Digital Realty co-founder Scott Peterson, along with several other former Digital executives. Global Compute follows similar new ventures set up by finance houses in recent months, including GI Partners, Stonepeak, and KKR.
CEO Peterson led investment at Digital Realty, from its founding in 2004. This amounted to some $17bn in deals in 14 years, at what became the largest data center builder in the world. He left the company in May 2018. COO Christopher Kenney is another co-founder of Digital Realty. Global Compute will be led in Europe by Stephen Taylor, previously DLR's senior executive in EMEA. It will buy ATM from a consortium of funds managed by MCI Capital and Mezzanine Management. Under the Atman brand, ATM runs three data centers in Poland totaling 42MW: WAW-1 and WAW-2 in Warsaw, and KTW-1 in Katowice. bit.ly/TheVampireSquidCometh
Peter’s ByteDance factoid TikTok’s owner ByteDance has leased 53MW of data center space in Northern Virginia, but doesn’t know if courts will force it to use Oracle Cloud as part of a sale.
bit.ly/VantageDebt
Morgan Stanley fined $60m for data center oversight failures

The US Office of the Comptroller of the Currency (OCC) has fined Morgan Stanley $60 million for failing to properly decommission two wealth management data centers in 2016. The bank failed to properly oversee its contractors, and how they wiped data from servers and other hardware. Some customer information remained on the equipment after it was sold to recyclers, but there was no indication that any of the details were misused.
Plaintiffs in two class-action lawsuits filed against the bank this summer claimed the data left on the devices included Social Security numbers and passport information. The bank “engaged in unsafe or unsound practices that were part of a pattern of misconduct” and failed to effectively assess or address the risks associated with the decommissioning of its hardware, the OCC said. bit.ly/2008Redux
Nokia to build the Moon's first cellular network
Another territory Huawei's not in

NASA has turned to Nokia to build a 4G network on the Moon. The LTE system will be deployed in late 2022, and could be upgraded to 5G at a later date. The $14.1m contract, awarded to Nokia's US subsidiary, is part of NASA's Artemis program which aims to send the first woman, and next man, to the Moon by 2024. The new network will be used for vital command and control functions, remote control of lunar rovers, real-time navigation, and streaming of high definition video. It will be part of a wider communications network. "Leveraging our rich and successful history in space technologies, from pioneering satellite communication to discovering the cosmic microwave background radiation produced by the Big Bang, we are now building the first ever cellular communications network on the Moon," said Marcus Weldon, CTO at Nokia and Nokia Bell Labs President. "By building the first high performance wireless network solution on the Moon, Nokia Bell Labs is once again planting the flag for pioneering innovation beyond the conventional limits." In the next issue of the DCD Magazine, we look at the quest to build an Internet for the Moon. bit.ly/BringingTheMoonOnline
Naver plans "Cloud Ring" - a second Korean data center

South Korean online giant Naver plans to build a cloud data center near Sejong City, with designs that refer to a nearby native village heritage site. Naver announced the planned $420m (500 billion won) facility in October 2019, after strong opposition from residents forced it to abandon a previous plan for a data center in Yongin, Gyeonggi Province, according to Korea Times. Like the company's previous "Gak" data center in Yongin, the new site has an architectural design which uses natural cooling where possible, and references local history and architecture. The architect, Behive, has published plans for a 250,000 sq m facility which "floats" on the mountainous site and uses the local air circulation, as well as thermal mass and ice storage to minimize air conditioning. bit.ly/RingNaver

T5 Data Centers acquires former Apple data center in San Jose
And plans to expand the Newark site

T5 Data Centers has acquired a data center in Newark, San Jose, spanning 128,000 square feet (11,900 sq m) with 17MW of critical power. The company described the previous owner as a "Fortune 100 enterprise," but images of the T5@Silicon Valley facility match with that of a data center previously owned by Apple. "We're excited to bring T5's lifecycle solutions to one of the top data center markets in the country," said Pete Marin, T5's President and CEO. "Not only will our new facility showcase our best-in-class development solutions, but this facility also gives T5 a solid position in one of the most supply-constrained data center markets in the US."

The company plans to develop an additional 32.1MW, 180,000 square foot (16,800 sq m) expansion at the site. The data center features flexible data hall density to support variable loads, renewable solar solutions, and N+1 mechanical cooling. That facility was originally built by communications company MCI WorldCom, but was never finished when WorldCom entered into financial difficulties - ultimately declaring bankruptcy in 2002 amid an accounting scandal. The site then was picked up by Stream Realty Partners for an unknown sum, who in 2006 sold the nearly-completed data center to Apple for an estimated $45m-$50m. bit.ly/TheAppleDoesntFallFarFromTheT5
AirTrunk plans 300MW hyperscale campus in Japan
The campus's first 60MW phase will be built in late 2021

Australian data center company AirTrunk is building a hyperscale campus in Inzai, near Tokyo. TOK1 will initially be a 60MW project launched in late 2021. The campus will have 56,000 sq m (600,000 sq ft) of technical data hall space, 9,600 sq m (103,000 sq ft) of office and storage space, and 42 data halls. When finished, the entirety of TOK1 will include seven buildings across more than 13 hectares (32 acres) of land on Inzai's data center hub. The facility will come with a dedicated 66kV substation and is expected to have a power usage effectiveness of 1.15. Set to be one of the largest independent data centers in Asia, TOK1 will be scalable to over 300MW. It is the company's sixth facility in the APAC region and, if fully built out, will bring AirTrunk's total capacity to more than 750MW across five markets. bit.ly/BigInJapanAirTrunk
Equinix LD8 data center experiences major outage
UPS failure causes network issues

Equinix's Docklands IBX LD8 experienced a power outage this August causing significant network issues. The problems at the London data center started early in the morning, and lasted more than seventeen hours. "Equinix IBX LD8, in the Docklands, London, UK, experienced a power outage. This has impacted customers who are based there. The outage may have also affected customers' network services," Equinix said in its first public statement at 12:04pm BST some eight hours after the issue started. "Equinix engineers have diagnosed the root cause of the issue as a faulty UPS system and we are working with our
customers to minimize the impact. We regret any inconvenience this has caused.” “Due to this incident, we are allowing customers more flexible access to LD8 working within our Covid-19 restrictions including mandatory temperature checks and face coverings. The safety of our employees and our customers is our highest priority.” bit.ly/L8Reply
Faulty UPS triggers fire at Telstra’s London data center Emergency services were called in August to put out a fire at a data center operated by Australian telecoms provider Telstra, located in London’s Isle of Dogs. A message on the company’s Docklands office answering machine said the site has “lost the green system” providing its power, as the fire tripped the busbar breakers. It is believed the fire was started by a faulty UPS system. bit.ly/UPSandDowns
Introducing ECHELON LCY10
A new state-of-the-art data centre in the heart of London
Significant New Capacity Purpose built 12,000 sqm data centre, providing 20MW of capacity. Ideal Location Adjacent to London’s financial districts in the City of London and Canary Wharf, within easy reach of London City Airport.
Facilities and Flexibility 1,200 sqm of ancillary office space. Powered shell adjustable to meet tenant specifications.
For more info contact: info@echelon-dc.com www.echelon-dc.com
Keppel Data Centres signs another MoU for LNG and hydrogen floating data center project Keppel Data Centres has signed a memorandum of understanding with City Gas and City-OG Gas Energy Services to explore using liquefied natural gas and hydrogen to power Keppel DC’s Floating Data Centre Park. The idea was first pitched back in 2019, and took a step forward this April when the company signed an agreement with Toll Group to study the feasibility of developing the floating facility, and another with Royal Vopak to explore the use of LNG to power infrastructure in Singapore. City Gas is the producer and retailer of piped town gas in Singapore, and retails natural gas to commercial and industrial customers in Singapore through City-OG, a joint business venture between City Gas and Osaka Gas. City Gas is wholly owned by Keppel. The three parties will jointly explore and evaluate LNG procurement strategies and the long-term potential of transitioning to hydrogen. bit.ly/HydrogenDreams
City of Santa Clara to study impact of data center discharge on sewage system
When you've gotta go, you've gotta go - even if you're a data center

The City of Santa Clara plans to study the impact of data center discharge events on its sewage system, as it mulls upgrades. The city is concerned about excess water discharge from facilities during heat waves, and what it could mean for the overall capacity of the network. The study could lead to restrictions on how data centers dispose of waste water. Back in 2018, clean water infrastructure and environmental consulting firm Woodard and Curran were awarded a contract to maintain the city's sanitary sewer system hydraulic model and evaluate the sanitary sewer system's capacity. That model is regularly updated and used to help decide which sewer mains are in need of repair, with non-urgent fixes carried out on an annual basis. Now, the Californian city has said it needs to expand the scope of its efforts, analyzing deficiencies in the system in case larger
capacity improvement projects are needed. As part of that, the city wants Woodard and Curran to complete a study of data center discharge, DCD can reveal. “Data centers can experience ‘extreme discharge events’ from their cooling systems associated with heat waves,” council meeting documents state. “This may present a challenge to the City’s sewer system since such heatwaves could be expected to impact operations of other nearby data centers at the same time, resulting in significantly increased discharge to the sewers from these facilities.” The city wants to understand the number of data centers in its area, where they are, what their regular flow patterns are, what it takes for an extreme discharge to occur, and what such events entail. bit.ly/BeCarefulWhenYouFlush
Microsoft funds Quincy wastewater treatment plant

The City of Quincy plans to launch a wastewater treatment plant for data centers next summer, funded by Microsoft. "Microsoft has been a good partner," city administrator Pat Haley told DCD. "But they're the big dog and so they thought they could kind of do what they wanted to do," he explained. "Regulation finally slowed things down, because the Department of Ecology was saying, 'you can't quite do what you want to do there... the wastewater that we have that is going to our municipal plant is increasing a certain concentration of minerals in that water. That's outside the boundary of what Microsoft is permitted for.'" The result is the construction of an entirely new 'Reuse Wastewater Central Facility,' which "is specifically for the treated water that comes from data center cooling." bit.ly/NotMakingASplash
an IMS Engineered Products Brand
MILLIONS OF STANDARD CONFIGURATIONS AVAILABLE IN TWO WEEKS OR LESS MADE IN THE USA
WWW.AMCOENCLOSURES.COM/DATA
847-391-8100
Voyage of discovery

Microsoft's underwater data center experiment employed wave energy and post-quantum cryptography on the seabed off Scotland. But the big lessons could be learnt on land

Peter Judge, Global Editor
On a grey July day, the barnacle-encrusted cylinder broke surface off Scotland's Orkney Islands. It might have been taken for some unexploded World War II ordnance, but this was bigger than any bomb. Twelve meters long, two meters in diameter, it was the size of the X Class midget submarines which trained off the Scottish coast in 1942. But the gantry barge wasn't retrieving a piece of military history. In harbor, pressure hoses revealed a gleaming white tube. This was more like a time capsule from the future. The logo on the end made its identity clear: "Northern Isles" or SSDC-002 was
not lost treasure. It was the property of Microsoft. In 2018, Microsoft tethered it to land by power lines and fiber optic cables, and deliberately sank it. For the next two years, under 117 meters of sea water, 12 racks of IT equipment inside continued to run, processing workloads for the aptly-named Microsoft Azure. The underwater data center was the latest experiment in Project Natick, an ongoing effort to run servers unattended; to find whether the cloud can work underwater. In July 2020, it was time to retrieve the capsule and assess the results. Natick began in 2013, when Microsoft researcher Sean James, previously a US Navy sub-mariner, wrote a paper proposing
underwater data centers. In 2014, the company decided to do it for real, and put together the Natick team under Ben Cutler. In 2015, Cutler’s team took the plunge, sealing a single rack of servers in a vessel and sinking it in shallow water off the coast of California. The micro data center ran reliably for three months, before it was retrieved for assessment. That was a proof of concept. Northern Isles - SSDC-002 - would be the real test. It had 864 servers and 27.6 petabytes of storage in a cylinder filled with unreactive nitrogen gas. It used a grand total of 240kW of power, working out to some 20kW per rack. “The system is designed to do up to 900W/U, and has 42U racks, so we can
manage more than 30kW per rack," Cutler told DCD. The system worked - and it beat expectations. Cutler says SSDC-002 didn't just equal land-based equivalents. In some respects it outperformed them. Microsoft has said that the hardware which spent two years in the underwater data center was eight times more reliable than equivalent servers running on land. Cutler gave DCD some more detail behind that figure. Why did the seaborne servers do better than those on land? How was the comparison made? And what are the implications for the future?

Seaborne reliability
There was nothing special about the kit in the box, Cutler explained: "Basically, this data center is designed to use off the shelf IT equipment. It's not even brand new equipment; the servers had been retired from a data center. We did put in brand new SSDs for boot drives and brand new hard disks for data. That's literally, standard off the shelf equipment." Cutler's team took a batch of nearly 1,000 servers that had been retired from a data center: "We split them between the land and the water, and ran the same workload in both places."
855 of those servers went into the Northern Isles canister, and worked unattended underwater for 25 months and eight days. The remaining 135 servers toiled in a couple of racks of a normal data center, alongside hardware running Microsoft's Azure cloud. Internal Microsoft projects sent jobs which were distributed fairly between the two sets of servers. “So the same thing was happening both in land and water,” says Cutler. “We wanted the same life experience for these servers, so just the environment is different.” Neither group got any maintenance, he says, although “there was a lot of normal data center work activity going on all around [the land-based servers].” SSDC-002 never handled customer data, but it ran so well it had spare capacity this year. Microsoft donated that to the fight against Covid-19, through a project led by Folding@home to understand the protein structures of the virus. Cutler explains: “We turned over all of our data center assets on Project Natick, through to decommissioning.” Long cables to land could look like a security risk, so SSDC-002 picked up on modern security methods: “The path between the data center back to land was protected by post-quantum cryptography. If
"Oxygen reacts with stuff which causes corrosion. Our hypothesis is that removing the oxygen could result in higher reliability" somehow, there was a quantum computer and someone was listening in, they wouldn’t have gotten anything. So we were a testbed for a variety of other technologies beyond straight data center technologies.” After two years, the aquatic servers outperformed those on land. “From the 135 land servers, we lost eight,” says Cutler. “In the water, we lost six out of 855.” Cutler makes a point of saying these failure rates of untended second-hand systems - aren’t comparable outside the project. The temperature in the underwater vessel was steady, and there were no passing people. But it’s likely that a major factor in the aquatic servers’ reliability was the inert nitrogen atmosphere. Earth’s atmosphere is mostly nitrogen but the 21 percent of oxygen, which we need to breathe, is reactive: “Oxygen reacts with stuff which causes corrosion. So our hypothesis was that removing the oxygen, and going to this nitrogen atmosphere, could result in higher reliability.” Natick did detailed tests on long term changes to this atmosphere (see box), but the major finding is that - unsurprisingly - IT is very comfortable in an unreactive nitrogen atmosphere. Once the unit was ashore, the internal data center was sent to Microsoft’s partner, Naval Group in France, so the equipment could be analyzed in detail, says Cutler: “to understand how different parts wore, relative to expectations." The eight servers which failed are probably the most intensively examined pieces of IT hardware ever, as Microsoft does a root cause analysis: “We have things like scanning electron microscopes, we have the ability to X-ray parts, and do detailed surface analysis.” That analysis was nearly complete when we spoke to Cutler, and no surprises had yet emerged. But one conclusion seems to be that hardware can be more robust than expected. “There’s a bathtub curve for the lifetime of parts, and a sweet zone for temperatures. If you’re too hot or too cold, you can have problems. We were outside the sweet zone on the hard disks: we operated them colder than normal, and that did not hurt us. Sometimes people have preconceptions about what matters.”
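A back-of-the-envelope check (our arithmetic, not Microsoft's) shows how the figures quoted in this feature fit together - the rack power budget, and the roughly eight-fold reliability gap between the land and subsea fleets:

\[
900\,\text{W/U} \times 42\,\text{U} \approx 37.8\,\text{kW per rack (design ceiling)}, \qquad
\frac{240\,\text{kW}}{12\ \text{racks}} = 20\,\text{kW per rack (as deployed)}
\]
\[
\text{Land: } \frac{8}{135} \approx 5.9\,\% \text{ failed}, \qquad
\text{Subsea: } \frac{6}{855} \approx 0.7\,\% \text{ failed}, \qquad
\frac{5.9}{0.7} \approx 8.4
\]

which is consistent with Cutler's "more than 30kW per rack" and Microsoft's "eight times more reliable" claims.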
Environmental impact
Normal data centers maintain a steady temperature and humidity, and are concerned about airflow. With a sealed container, the Natick team also had to include equipment to vary pressure. "Remember your ideal gas law from school? Now if we raise the temperature, the pressure goes up. So things are a little bit different in this environment." Cooling was by an air-to-liquid heat exchanger between each pair of racks, says Cutler: "Each of those heat exchangers has data center fans on it, that push the air through as needed." Seawater was pulled in from outside and run through the heat exchanger, and back out to the ocean. That's a big plus for Cutler: "Data centers can use a lot of water for cooling, but we don't use any potable water. We're just driving sea water through and then right back out. That allows us to deploy this anywhere we want to, without the need to tap into a water supply.

"That's important, because there's a lot of places on the planet where water is the most valuable natural resource. Even right now in the United States, half the country is in drought conditions, and we don't want to compete with consumers and businesses for that." The effectiveness of the cooling also means that Natick's data centers can be deployed in seas from the Arctic to the equator, says Cutler. They wasted very little energy in cooling, so most of the unit's power could go to the servers, giving it a power usage effectiveness (PUE) of only 1.07. A low PUE is good, but did the SSDC-002 affect its local environment? "The water we discharge is a fraction of a degree warmer than what comes in from the ambient ocean. And it's a very small percentage of the water that's passing outside. So literally a few meters downstream, you can't measure the temperature difference anymore." Heat dispersal is important: "Wherever we would put these things, we would look for persistent currents, and the density at which we put these things is small, so we don't have any harmful local effects." In fact, says Cutler, "the sea life likes these things. It's an artificial reef. It becomes a nice place to go foraging for stuff, and it's a place to hide from bigger creatures." The cylinder was supporting large healthy anemones when it came ashore - but the seabed where it lay has returned to the way it was before Northern Isles arrived. Although renewable energy was not in the scope of this project, Scotland has a lot of local green energy sources, and Orkney is home to the European Marine Energy Center, a test bed for wave and tidal power generation. "It's a facility where people go to test these renewable energy devices, and they have 'berths' you can lease. We actually leased one of those berths in the wave energy area." SSDC-002 was connected to the same grid, but as a consumer: "One of the things we liked up there was the dynamic that it was a renewable environment. That's consistent with the direction we want to go."

Re-launch?
The obvious question now is: what next? "This statistical result is very strong," says Cutler. "But what do you learn in terms of tangible things you can go off and do? We know some next steps to try and do on land." Will there be more subsea facilities? Previously, Cutler has said the sea could be a haven for data centers, and he's still keen to see it happen. The environment there is anything but harsh, with cooling available for nothing, he says. And real estate is cheap. Finally, the sea floor is actually a convenient location, with more than half the world's population living within 120 miles of the sea: "It's a good way to get applications close to people, without the societal impact of building a giant data center in London." In 2017, Cutler filed a patent for Microsoft, describing a large underwater data center in which eight or more containers are lined up as an artificial reef. Such a site could benefit from renewable energy: "Europe's the leader in offshore wind, and a lot of those offshore wind farms are relatively close to population too. So imagine a data center, deployed at scale, and colocated with an offshore wind farm. Now I don't have long distance power lines to get my power to the data centers, and I've taken out a bunch of capital costs and a bunch of risk from all those transformers in the power lines."
"We don't use any potable water. That's important, because there's lots of places where water is the most valuable natural resource" Given the steady power from some wind farms, he says, “imagine a data center that has no backup generators, that has no batteries. It sits there, it’s a small fraction of the overall size of the wind farm, and it draws power from it. On a rare day, when there is no wind, it pulls power from land. “Now, that’s not quite how that infrastructure works today. But it gets us to a mode where we take out a lot of capital costs, a lot of risks, and become much more environmentally friendly relative to infrastructure today. Batteries are an environmental challenge, and a supply challenge as cars and other things adopt batteries more broadly. So we like this idea of truly locally generated renewable power, close to customers, with very good environmental characteristics.“
Learning on land
But right now, it's too soon to say whether Microsoft will follow up SSDC-002 with a bigger seabed facility. And Cutler says Microsoft has learned plenty, even if it never puts another data center underwater. "We want to understand what learnings we can take from this experience and bring back to land data centers," he says. "One aspect of the analysis that's going on now is to understand that, and then maybe spin up some work that would be low impact, and improve reliability on land." "In a normal data center if something breaks, someone goes in to replace the part," says Cutler. "In this case, we can't do that. If something dies, it's broken, whether it happens the minute before we take it out of the water, or right after it goes in the water." In fact, that model is very like the new data centers emerging on land, in remote
locations, at the Edge of the network. “They will tend to be lights out, like what we did. We operated this thing for 25 months and eight days, with nobody touching it. "And when you think about the Edge you’re gonna end up with things that operate on their own. People don’t go there for a long time because it’s too hard to get there.” Edge data centers will tend to be identical units, deployed to varying environments, and Cutler says this process could look like an extension of the Natick idea: “The vision here is a handful of global factories with a controlled environment. You manufacture the shells, inject the servers, seal them, and you can quickly deploy them, and have a much more common experience for the servers, regardless of where we send them.” One problem with lights-out operation has been the need to keep upgrading
"In a post Moore's Law world, there's no reason to change the infastructure every two years" hardware, but that could lessen as the continued performance improvements predicted by Moore's Law come to an end. “A huge percentage of the cost of a data center over its lifetime is the servers. In a post Moore’s Law world, there’s really no reason to change the infrastructure every two years,” he says. In this world, it will pay to arrange longer life expectancies, “because that then drives out not just cost, but environmental impact.” He’s talking about the embodied energy and materials in hardware, as well as shipping costs and warranty work. “All that might be better spent on other things like designing smarter, better machines, rather than a lot of churn. “High reliability is not just for Edge,” he says. “Since the 1980s, we’ve been on this curve of increased reliability. We’re trying to drive further out on that curve.” SSDC-002 sounds historic, but it won’t end up in a museum. Cutler’s team took their commitment to recycling to extremes and, when the equipment had been dismantled and tested, recycled the canister. When Cutler spoke to us, it had already been cut up and was scheduled to be melted down. After all, he says, the value of the project is in what we can learn from it, not in the metal container.
Checking the atmosphere
Before the seal was broken, researchers inserted test tubes through a valve at the top of the vessel to collect air samples: "We took atmospheric samples when the thing came out of the water, so we can do mass spectroscopy and gas chromatography," says Cutler. Why do that? With no air coming into the vessel, SSDC-002 provided a unique opportunity to find out whether a data center creates its own air pollution. "We have a sealed environment. So we don't have to worry about any forms of pollution, either natural or manmade, that come in. We don't have to worry about oxygen. But, on the other hand, we have plastics in there." Over time in the capsule, the plastics that coat Ethernet cables and the like might give out vapor or gases, changing the atmosphere.
Before sinking SSDC-002, the Natick team had already practiced the best way to create a comfortable 100 percent nitrogen atmosphere for the IT equipment, starting with Earth’s normal air of 78 percent nitrogen and 21 percent oxygen. For the first test in California, the Natick team simply lowered the pressure in the cylinder and then injected nitrogen. “When we do that the humidity goes down, because I’ve taken whatever was there, and I’ve replaced it with pure nitrogen. It’s got no water vapor in it. But then if you wait a few hours, the humidity goes back up,” says Cutler. “Things like network cables all have moisture in them." A dry atmosphere causes moisture to evaporate from the cables, but it has a worse effect: “If you got rid of all the water,
then you get electrostatic effects, so you really want to have some humidity.” For SSDC-002, the Natick team injected nitrogen at one end of the cylinder while pulling air out of the other. They adjusted the moisture content in the air before launch, and remotely during the experiment. “We were targeting about 30 percent humidity, that’s very much like a reasonable sort of land atmosphere.” After two years, there was one small concern to check. “We weren’t extracting moisture from the cables. But we still had to worry about what sort of compounds in those cables might gradually come out over time, and be a problem for the electronics.” To Cutler’s relief, analysis showed that there were no problems in the atmosphere in the cylinder.
Edge under new management

Forget 5G, the IoT, and micro data centers at cell towers (for now) - DartPoints CEO Scott Willis says the Edge opportunity is small facilities in Tier 3 cities

Peter Judge, Global Editor

Edge computing is a new field. There are lots of startups, they're making lots of noise, and they have lots of different approaches. So how do you tell where the real opportunity is? Scott Willis thinks he has the answer. Last year, Willis was at equity firm Astra Capital, helping it look at the Edge market. "I got involved with Astra last Fall, evaluating target companies," he tells DCD. This year, Astra chose to invest in DartPoints, an eight-year old Edge data center player - and appointed Willis as the new CEO in March. Astra Capital took a macro-level look at Edge: "Networks driven by a few centralized hyperscale locations won't meet the future demands of the traffic across the networks, from applications such as AI, autonomous vehicles, telehealth and agriculture. Today's architecture won't support that type of demand. The networks are decoupling and pushing capability out to the Edge where requests need to be processed locally." But what's required to meet those needs? While some Edge start-ups are pushing ideas like small Edge boxes at cell towers, Astra was looking for a section of the Edge market that was mature enough to invest in. "If you are going to be a player in that sector, you have got to understand what your strategy is and what's the market sector you're going after," says Willis. "Even today, Edge is very much in an evolving state," he explains. "From the original buzzword, it has developed into something different from what people originally thought. Some companies will be successful at cell towers, and there will be a number of different types of distribution points that will be referred to as Edge."

DartPoints has a defined goal: extending
the virtues of carrier-neutral colocation data centers to customers outside of the main hubs: "We are focused on building Edge data centers in underserved markets: Tier 2 and Tier 3 cities, outside the largely saturated Tier 1 markets." That's a good investment bet because those "traditional" customers are there already, while augmented reality and autonomous vehicles are still in development, and DartPoints uses existing fiber, not proposed 5G networks. "Those central locations are in no one's imagination going away," he says, as they have the advantages of cheaper land and power. DartPoints aims to pick up customers for whom that model does not work. "We lose some of the benefits of concentration and scale, but there are disadvantages to users in those underserved communities. Higher end enterprises have to pay a significant transport cost, to backhaul to these locations. Edge can give
them the same benefit at a much lower cost and better performance level." DartPoints can offer its customers the benefits of lower latency and cheaper communications, but it still has to compete with large facilities that have the economies of scale. And that takes some engineering. Hyperscale players serving big hubs can have many MW of power, and hundreds of thousands of square feet of space, he says: "We're booking from 60kW up to 200kW; 10 racks or maybe 40 at the high end, delivering the same capability to that underserved community." "We have to develop an architecture that is more robust and delivers the same performance, so end users get the same experience, the same well-tuned fundamentals, as someone in Dallas or New York. When a request is made locally, we process and store it locally, with the ability to go out to the wider Internet."
Hyperscale players have to manage multiple locations, but DartPoints has to go further: “We are managing many times more locations, with smaller footprints,” he says, and that affects design, operational support, customer care and security, among other things. DartPoints has to implement all this consistently across data centers, to streamline its own costs, and also to allow customers to deploy to multiple locations. Despite this, DartPoints does not have a single standard design. “We don't try to apply one box or one size to every single location,” he says. “We come into a market and determine the market need by analytics - and then look at the best solution.” Some DartPoints facilities are outdoors on land which has been purchased or leased. Others are in leased space inside an office building or in an existing building that DartPoints has bought and converted. “It is what happens to be the best for that particular market,” he says. DartPoints has a central network operations center (NOC) but each facility is more or less autonomous. “We can’t afford to have manned resources at every one of those locations.” Willis plans to have a regional structure to support those locations, perhaps with a support partner: “As we get more scale, we could look at working with a third party organization that has a large footprint of tech resources round the US.”
"From the original buzzword, Edge has developed into something different from what people originally thought. We are focused on building Edge data centers in underserved markets" Repairs and maintenance could be controlled from a central location and dispatched using third parties, he says. This sort of challenge is inevitable for the multiple locations which Edge demands, and Willis wants DartPoints to be the first company to crack the problem. The end result could be that DartPoints is a logistics company for virtual environments, he muses: “Mobile people who fix things will be a critical part of it.” Willis’ future plans are based in three year chunks. DartPoints is currently focused on four US regions: the Southwest, central midWest, Southeast and mid-Atlantic: “That’s 32 of the 50 states. Over the next few years, those are the target markets. The Northeast is too competitive.” After starting in Dallas, DartPoints is using its Astra money to extend to places like Iowa: “If you have an agriculture or educational application, it can be delivered in a cost effective way without backhaul into Dallas or Chicago.” Willis is using analytics to pick the right locations, where there are customers and expertise, and upstream fiber connections. “DartPoints is by no means a build-it-andthey-will-come player. We do the analytics, and then target those customers, including enterprises and content delivery networks (CDNs), that want to participate in the market we want to create.” One major requirement is fiber networks: “Whether I’m a hyperscaler sitting in Virginia or New York, a Tier 3 city, or a midtier city like Atlanta, the backbone of what enables this is fiber connectivity”. At the higher end of Tier 3, Atlanta has many fiber providers. Other locations may not have the same number, admits Willis: “But the requirement doesn’t go away.” The trouble is, realistically, if the fiber isn’t there, an Edge player like DartPoints with smaller customer won’t have the muscle to get it in: “If that foundation isn’t
"Covid has been a double-edged sword. It's slowed us down, but it's amplified the way we use the infrastructure. The pandemic has accelerated the recognition that our architecture has to change"
there, there's not much I can enable to put it there. DartPoints is not in the fiber business." But in new markets, DartPoints has to have an open mind: "We may get into locations where we can make investments to put fiber out to a meet-me location to a fiber provider that is compelling to us," he says. "That's not our core, but if we truly want to build robust connectivity in a market, there are clearly going to be environments where we have to deploy some fiber to support those initiatives." With Astra's backing, Willis does have capital to deploy: "Where we've determined there's a need, with a potential and prospective customer base, we can start deploying capital."
Targeted acquisitions of buildings and networks are likely: “We’re looking for regional facilities that give us greater reach within those regions.” Those acquisitions will be looking at people as well as data centers, “or a combination of both.” Despite starting from a fairly traditional view of data centers, Willis can’t help looking at the mobile experience as well. Before his time at Astra, he was CEO at Zinwave, an established wireless company with a distributed antenna system (DAS). DartPoints doesn’t have plans in that direction but, given the poor delivery (so far) of 5G, DASs are exactly the kind of thing that Edge players are now pinning their hopes on to deliver localized applications in buildings and on campuses: “Most people would agree that the in-building experience remains a challenge,” Willis tells us. “75 percent-plus of all mobile traffic originates indoors.” Starting all this in a pandemic has made surprisingly little difference to DartPoints, especially as others have presented Covid-19 as both a doom and savior of the Edge. “Covid has been a double edged sword. Yes, it’s probably slowed us down, as customers have been impacted,” he admits. Some firms have cancelled new projects. “On the flipside, as a result of Covid-19, the demands put on our architecture have been amplified. It’s created a realization across the industry, and amplified almost overnight the way we use the infrastructure. The pandemic has accelerated adoption, and the recognition that our architecture has to change.”
Corning takes care of the
cleaning process for you!
As a leader in optical fiber and connectivity, Corning understands the value of clean connectors. That’s why we developed a new factory cleaning and sealing process, Corning® CleanAdvantage™ technology, ensuring a pristine end face upon first use for all our EDGE™ and EDGE8® solutions. So, go ahead and uncap that CleanAdvantage connector and connect with confidence. To learn more about our CleanAdvantage technology, visit corning.com/cleanadvantage/dcd © 2020 Corning Optical Communications. All rights reserved. LAN-2650-AEN / February 2020
Save up to 17% install time Save up to 95% on consumables
Let's get technical
Mind the doors!

Edge facilities will be built in hostile environments. If you open them without due care, you could destroy the whole business model, warns ASHRAE

Peter Judge, Global Editor
Even after three or four years of hype, there are still plenty of questions about Edge computing - the movement to put small micro data centers close to end users and applications, to provide low latency processing power. But one of the biggest questions could be a surprising one: "What happens when you go to an Edge facility… and open the door?" That sounds trivial, but it turns out to be crucial to data center deployment. And the best answer so far has come from ASHRAE, the industry body which literally wrote the book on building reliable data centers. ASHRAE's name stands for the American Society of Heating, Refrigerating and Air-Conditioning Engineers, and 20 years ago its members were being asked to equip a new class of special buildings designed to house IT equipment. ASHRAE's Technical Committee 9.9 considered the airflow, temperature and humidity required by the equipment within those new data centers. "Most buildings are there to serve people," says Jon Fitch, a data scientist from Dell and a TC 9.9 member. "But data centers and mission critical facilities are there primarily to serve equipment. It's a very, very, very different take on what a building is there to do." TC 9.9 produced a series of books and recommendations which have become the Bible for data center buildings, and
have been incorporated into building regulations. They’re in use around the world, because, despite the “American” in its name, ASHRAE has been international since it was formed in 1894. Fast forward to now, and there’s a hype machine in action, telling us that centralized data centers aren’t enough. We urgently need to build a whole lot more tiny data centers outside brick and mortar buildings, in smaller buildings, cabinets and containers which are close to mobile phones, people, autonomous vehicles and the sensors used by the Internet of Things. The Edge hype says new applications need a fast response, that can only be delivered by these Edge facilities. The hype also assumes that, because it’s needed, this can be delivered. and it will use the same IT equipment and at the same cost as in a brick and mortar data center. Face reality But there’s a problem with that. Jon Fitch is lead author in the team that explains what that is, in ASHRAE’s new Technical Bulletin Edge Computing: Considerations for Reliable Operation. It’s a short document, that distills ten years of work, he tells us. “Edge is driven by proximity to the data, not by factors like disaster avoidance. It’s all about getting computers closer to customers and data,” he explains. Traditional data centers can be sited using a risk aversion map, which says shows the risk of natural disasters, so you can choose “a lovely area that is very risk-averse.”
"The challenge we face today is how do we achieve Telecom results with economical equipment. We show how to make Edge data centers with off the shelf equipment, that achieve similar uptimes"
Peter Judge Global Editor
Edge data centers don’t get that option: “These Edge data centers can go in a dirty metropolitan area where there's all kinds of pollution and vehicle exhaust. They could go in an agricultural area, they could go into a dusty area where there are seasonal winds which blow up dust storms.”

Edge data centers are also typically small and modular, in shipping containers, phone-box sized modules, or even smaller units, and this has consequences: “Many items that are non-issues for brick and mortar data centers are real issues for small edge data centers.”

Telecoms networks are already deployed at the Edge of course, but they use hardened equipment. Specifications like NEBS, defined by AT&T in the 1970s, mean the kit is resistant to changes in temperature, humidity, airborne pollution, and dust. Telecoms engineers can work on equipment in all weathers. “It's hardened to a 55°C temperature excursion capability, with a dust filter on the bezel,” says Fitch. “And this equipment comes with a higher price tag, it's a higher cost structure, a more expensive business model.”

Hardened equipment is too expensive for the mass deployment of IT to the Edge that is envisaged today. Effectively, we are asking engineers to deploy equipment into a hostile environment for which it was not designed.

“There are two schools of thought,” says Fitch. “One is you can harden the hardware at a higher cost point. Or you can take care of the environment. I think the technical bulletin we've written provides a pretty good blueprint for how you can control the environment and use lower cost structure, commercial off the shelf [COTS] IT equipment for edge applications. That's the type of equipment that most providers are used to using.

“The challenge we face today is how do we achieve Telecom results with
economical COTS equipment. Our bulletin does tell you the steps you need to take to engineer those Edge data centers, so they are compatible with commercial off the shelf IT equipment, and achieve similar uptimes.”

He explains: “Compare a small modular edge data center to a large brick and mortar cloud data center for a moment. The cloud data center probably has at least three doors between the outside environment and the IT equipment. The distance between those doors is probably 30 meters or more, and there's no way all those doors will be open at the same time.

“If you open a rack door in a data center, what happens? A whole lot of nothing. Because you're opening a rack door from an environment that's 20°C and 50 percent relative humidity, to an environment that's 20°C and 50 percent RH.”

It’s different in an Edge data center, where the outdoor environment enters the enclosure the moment the door is opened, bypassing HVAC and filtration: “You can’t always choose the place and timing of your service. If you've got a winter blizzard and your Edge data center goes down and it needs service, you've got to go out there. And when you open that door, the cold winter air rushes in immediately - or it might be desert air, dusty air, or moist air from the morning dew.”

Surviving these effects means more than just physically engineering the Edge facility. When ASHRAE developed its recommendations for buildings, it rapidly found that manufacturers define what you can do with their IT kit in warranties. These warranties now include an allowance for a certain amount of time outside the ideal conditions - so-called “excursions.” If something goes wrong, and you can’t show you kept the equipment within its tolerance, you’ve voided the warranty.

“Most IT equipment specs for temperature and humidity are written for 7x24 steady state operation in a brick and mortar data center where the environmental conditions are well controlled,” says Fitch. When you open the door to a small edge data center, you can change the temperature and humidity.

“A lot of IT equipment has the capability to record temperature,” he points out. Hard drives have a SMART [Self-Monitoring, Analysis and Reporting Technology] data sector that records temperature periodically, and most servers capture temperature data too. So there is going to be a discussion between the customer and the IT equipment supplier. This data is being recorded, and at some point somebody is going to notice and say: ‘Hey, on October 16 you opened the door to that Edge data center and that cool
"If you open a rack door in a data center, what happens? A whole lot of nothing. If your Edge data center needs service and you open that door, cold moist or dusty air rushes in immediately" air went on and your equipment dropped below its rated warranties.” Technicians need to step carefully. Humidity and dew point is a non-obvious problem, warns Fitch: “A technician may service an Edge data center on a beautiful 74°F (23°C) morning in Atlanta. He’s thinking it’s a great day to be a data center technician, but Georgia has a lot of humidity and there’s dew on the grass. When the technician opens the door, humid air rushes into the facility, which is at 68F°. Within minutes his equipment is covered in condensation.“ The solution is to carry a handheld temperature and humidity monitor: “Environmental conditions like inrush are non-obvious. You may need to train service personnel on how to interpret readings to determine whether condensation is going to be a problem. A $100 to $200 monitor could save $1,000s." Alternatively, technicians can use the temperature readouts from the IT equipment itself, but the main thing is awareness: “These would be non-concerns in a brick and mortar data center, but need to be part of the mindset of the Edge service technician.” Use a tent To open the door safely, Fitch says we’ll need to work under a shroud: “The best solution is something simple like a small tent that encloses the door and still provides enough room to work.” Another solution might be an accordion between the Edge enclosure and an airconditioned bay at the back of the service truck: “These are off the wall ideas but things we need to think about.” The tent will help with other issues such as air pollution: “You own the land but you don’t own the air stream!” he warns, and pollution can be seasonal. Crops are only sprayed at certain times of year, coal-fired heating systems may be switched on or off, and prevailing winds can change. Pollution and corrosion are cumulative failure risks. Material that enters on one occasion will remain there, and accumulate. As well as potentially causing short circuits, extraneous matter in Edge data centers can cause corrosion. Real-time corrosion monitors are available, and - importantly for Edge
facilities - they can be networked and checked remotely. “You want advance notice of corrosion problems, with enough lead time to install corrosion abatement filtration,” says Fitch. “If you wait until you are seeing corrosion-related failures, all of your IT equipment has likely been compromised and you will either have to live with a high failure rate or do an expensive rip-and-replace with new equipment. Neither is a good option.”

Full-scale data centers have filters to keep out dust, but Edge facilities don’t: “Dust is usually removed by MERV 11 and 13 class filtration,” Fitch explains. “When you open the door to one of these modular data centers, you completely circumvent the filtration.”

This dust accumulates. “You might say ‘Why can’t I just take compressed air and blow it out?’ Well, here's the problem. A lot of dust is comprised of silicon dioxide, but some dust also has stuff like gypsum, salts, and other materials. If they get down inside a contact, like a DIMM or a processor, and you do a service on those, what you can do is actually smear the particles onto the contact. I liken it to smearing peanut butter or Nutella on toast. It's very thick and viscous and, by golly, if you want to get it off the toast, it's pretty hard to do!”

This is made worse, he says, by the sheer number of contacts. One 2U server can have 10,000 contacts (288 per DIMM, 3,600 per CPU, and 64 or 98 per PCIe slot). “Some of these contacts are redundant - like power and ground - so they're non-critical. But a lot of these are single point of failure contacts. So if you get a smear on that contact, you have a failure.”

Lower efficiency

These considerations mean Edge data centers end up lowering the overall efficiency of the data center fleet, says Fitch. Data center builders are working to eliminate air conditioning from brick and mortar facilities, and minimize redundant equipment, but Edge data centers will have to be designed for reliability. This means some redundant equipment: “If you have a remote data center that’s going to take several hours or more to reach, it needs to have some level of
"Dust also has stuff like gypsum and salts. If they get down inside a processor contact, it's like smearing peanut butter or Nutella on toast" Issue 38 ∞ November 2020 23
"Lights out operation is an aspirational goal, but I don't think it's very practical right now. Sometimes there's no substitute for being able to go out there and troubleshoot the equipment firsthand" failover and redundancy. Think about a phone booth sized edge data center, maybe it’s got a 42U rack and it’s got 20U of compute in it - that’s ten 2U servers. If any one of those fails, you’ve got only 90 percent of your compute capability, and you’re going to need a failover spare. Failures or service interruptions in a small data center can have a bigger impact than in a large cloud data center, which might have tens of thousands of servers available.” Most Edge facilities will have some sort of air conditioning: “Most regions of the world have some form of extreme temperature or humidity, and will need some sort of aircon or mechanical environmental control. If 20 percent of your data center fleet is now dispersed, and all those Edge data centers have direct expansion (DX) air conditioning, that can reduce the efficiency of your fleet. It’s difficult to implement airside or water side economizers, in Edge data centers.” Another challenge is power distribution and backup in a small space: “UPS or batteries that you would locate in a separate gray area in a large brick and mortar data center, now may reside in the same enclosure as the computer equipment. That’s an additional engineering challenge. For example, batteries need to have very good ventilation to make sure there’s no buildup of gases like hydrogen. You have dissimilar equipment in the same environment. You need to control the environment to the narrowest specification range of whatever facilities equipment you have.” Brick and mortar data centers can have a contained hot aisle, he says, “but how do you implement that in a phone booth sized facility?” Can new tech help? Given these constraints, some vendors have suggested that Edge could be a breakthrough use case for liquid cooling. They argue that liquid cooling systems don’t need raised floors or contained aisles, and can operate quietly in environments alongside people. Fitch cautions against bringing in new tech for remote installations. “Liquid cooling is not ready. Piping and tubing is an opportunity for leaks, and an Edge data center may be hours or days
from a service person,” he says. That would be long enough for a small loss of coolant to result in a significant high-temperature excursion. While liquid cooling systems have been developing rapidly, their pipes and tubing don’t have enough hours in service to be used remotely yet: “For facilities that are fairly remote, approach new cooling technologies cautiously and conservatively. If it’s five min from a service technician, then maybe it’s a different story.” Likewise, while Microsoft’s underwater Natick experiment has shown that it’s possible to run a data center for years without opening it up, but Fitch says lights-out operation isn’t ready yet: “That’s an aspirational goal, but I don’t think it’s very practical right now. DIMMS need resetting, and servers and software need to be upgraded. Sometimes there’s no substitute for being able to go out there and troubleshoot the equipment firsthand. So I would say a sealed never touch. a data center is a good aspirational goal. All this can sound daunting, and it sounds as if there are inevitably losses of efficiency. But there are choices an operator will have to make: “Either you harden the hardware and have a higher cost point, or you take care of environment, and you can continue to use COTS equipment. ASHRAE believes the bulletin will enable operators to do just that - and it’s made efforts to communicate the urgent information succinctly. “This technical bulletin is a new form of communication, which ASHRAE is going to use going forward. It’s there to communicate succinctly and rapidly actionable information that the industry needs. We’re taking the information from what used to be 30 to 50 page academic white papers, and we’re rolling it up into a crisp 10-page actionable document.” It’s going to have to communicate it well because, unlike the original ASHRAE TC 9.9 work, data center builders will work with it directly, not via building codes and legallybinding regulations. “I don’t see this as something that’s going to be taken up and rolled into legislation, like has been done for buildings,” says Fitch. “These are small facilities. And so I think keeping this information at the user level makes a lot more sense.”
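Fitch’s phone-booth arithmetic - ten 2U servers, so a single failure removes 10 percent of capacity - extends naturally into a rough sizing exercise for unstaffed sites. The sketch below is purely illustrative and not an ASHRAE method: the annual failure rate, visit interval and capacity target are assumed numbers, and a real design would also weigh repair times, travel time and the cost of a truck roll.

```python
def check_edge_site(servers: int, min_working: int,
                    annual_failure_rate: float, service_interval_days: float):
    """Rough check: will an unstaffed Edge site keep at least `min_working`
    servers running between service visits, given an assumed failure rate?"""
    tolerable_failures = servers - min_working
    # Expected failures before the next truck roll (a simple Poisson mean)
    expected_failures = servers * annual_failure_rate * (service_interval_days / 365.0)
    return tolerable_failures, round(expected_failures, 2), expected_failures <= tolerable_failures

# Fitch's phone-booth example: ten 2U servers, nine needed to hit the capacity target.
# Assume (hypothetically) a 5 percent annual failure rate and quarterly service visits.
print(check_edge_site(10, 9, 0.05, 90))   # -> (1, 0.12, True): one spare's worth of headroom
```

On those assumed numbers the site carries comfortably more headroom than it is expected to need between visits, which is the reasoning behind Fitch’s single failover spare.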
Check list
• Check the specifications for each piece of IT equipment
• Design the enclosure to support the narrowest of those specification ranges
• Find a way to maintain those conditions when opening the enclosure door, perhaps by using a tent
• Select cooling systems which meet capex, opex, and sustainability targets for the site
• Consider uninterruptible cooling for remote/unstaffed sites
• Design-in enough redundancy and remote servicing capability to support your service staffing strategy, whether that is onsite or remote
• Maintain ASHRAE monthly angstrom corrosion limits: silver = 200, copper = 300 (may require filtration)
• Integrate DCIM-based remote monitoring of environmental parameters (i.e., temperature, humidity, corrosion)
• Set up alerts and service alarms (i.e., for air filter performance)
• During service, monitor IT equipment inlet air (both temperature and humidity)
• Stay within the rate of change limits and above the dew point
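Several of the checklist items - inlet monitoring during service, the corrosion limits, the dew point - lend themselves to a simple automated check. The sketch below is illustrative rather than anything from the ASHRAE bulletin: it uses the standard Magnus approximation for dew point, the silver/copper limits quoted in the checklist, and made-up sensor readings based on Fitch’s Atlanta example.

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point using the Magnus formula."""
    a, b = 17.62, 243.12  # Magnus coefficients for water vapor over liquid water
    gamma = math.log(rel_humidity_pct / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)

def door_open_checks(outdoor_c, outdoor_rh, enclosure_c,
                     silver_angstroms_month, copper_angstroms_month):
    """Flag the two door-opening hazards discussed above: condensation and corrosion."""
    alerts = []
    # Condensation risk: outdoor dew point at or above the enclosure temperature
    if dew_point_c(outdoor_c, outdoor_rh) >= enclosure_c:
        alerts.append("Condensation risk - use a tent/shroud or wait for drier air")
    # ASHRAE monthly corrosion coupon limits from the checklist
    if silver_angstroms_month > 200 or copper_angstroms_month > 300:
        alerts.append("Corrosion rate over limit - plan abatement filtration")
    return alerts

# Fitch's Atlanta morning: 74F (23C) outside with dew on the grass (say 90% RH),
# an enclosure at 68F (20C), and a copper coupon reading slightly over its limit.
print(round(dew_point_c(23.0, 90.0), 1))          # ~21.3C, above the 20C enclosure
print(door_open_checks(23.0, 90.0, 20.0, 150, 320))
```

On those numbers the outdoor dew point sits around 21°C, so the script would warn before the door is opened - exactly the condensation scenario Fitch describes.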
Advertorial | SUNeVision
More Choice, Anywhere, Sooner

Asia’s infrastructure needs to keep up with multi-cloud demands. Fiona Lau explains how the SUNeVision Cloud eXchange makes multi-cloud accessible, flexible and cost-effective

Fiona Lau, Executive Director & Commercial Director, SUNeVision

The sheer scale of digital growth across Asia dwarfs that of any other continent. By 2023, the Asia-Pacific region will have 3.1 billion Internet users, up from 2.1 billion in 2018, according to the Cisco Annual Internet Report (2018-2023). As Fiona Lau, executive director and commercial director at SUNeVision, explains, this scale of regional data growth is a key driver behind the launch of the SUNeVision Cloud eXchange (SCX). Equally important is the consensus that cloud, and in particular multi-cloud, is the way to meet Asia’s surge in digital demand, and the recent impetus from Covid-19.

“The way the economy is growing through innovations such as AI, 5G and IoT is off the charts,” says Lau. “We have seen significant growth in data traffic in the past few years. Our customers understand digital transformation is inevitable. In the past, they knew they needed to migrate to cloud; now they know they have to do it sooner rather than later.”

The Era of Multi-cloud

The SCX is designed to meet the growing demand for multi-cloud, which offers greater efficiency, flexibility and ease of use. Increased regional access to multi-cloud is only possible through a unique provision system which is carrier and cloud-neutral and available on demand. Users in key upcoming markets including Vietnam, Myanmar, Cambodia, Brunei and the Philippines can now access major cloud providers via virtual connections into the software-defined SCX in Hong Kong.

“Multi-cloud has been around for some time, but it’s not used uniformly across Asia, so we have developed strategies to enable them to maximise the performance and benefits,” says Lau. “Our data centre campus and SCX serve the local economy and help the neighbouring regions grow digitally. Access to SCX is through a single portal - the system will then enable one-to-many connectivity as required by each individual user.”

Delivery of SCX is based on reliable data centre infrastructure and secure networks which connect users to multi-cloud. The SCX is hosted from SUNeVision’s Asia-leading MEGA Campus. Its core resides in a strong ‘eco-system’ built up over 20 years, founded on strategically connecting telcos, ISPs, CDN, OTT, new economy players and enterprises from different sectors with all major cloud providers. Lau considers this the key asset and most unique benefit of the SCX platform, since it opens up greater business opportunity, instant collaboration and more effective business decision-making across the region.

There are more than 10 submarine cables which land in Hong Kong, and the majority of these have chosen to locate their PoPs within SUNeVision’s facilities. Hong Kong’s proximity to the markets of East and South East Asia also facilitates faster regional data exchange.

SUNeVision’s rich ecosystem provides customers with the competitive advantages of choice, accessibility and opportunity. “Migration is much easier than before, when the client actually needed someone to go to their rack to switch provider,” says Lau. “That might take a couple of days, not forgetting associated labour costs and security risks. The beauty of the SCX is that it is all remotely managed. So, with the click of a button, you’ll be able to switch to any major cloud provider, enabling greater flexibility and better user experience for customers.”

Deployment of cloud has not always met expectations. A key feature of the SCX is that clients know what they are using, for what purpose, and only pay for what they use.

“Especially in larger organisations there will be different use of cloud in different departments, and nobody really has oversight over what is being used and how they can optimise it,” says Lau. “Therefore, we have developed a totally technologically viable and proven platform that means now we can create significant convenience and extra value for customers – you pay for it only if you use it. That is what the SCX represents: a better way of enabling businesses to use multi-cloud effectively and efficiently in order to harness the opportunities that technology can bring.”
www.scx.sunevision.com
We made it this far
Despite everything, a successful year (for data centers) Exploring the only silver lining to what has been a terrible year
If, many thousands of years from now, historians picking through the rubble of a long lost civilization tried to piece together the year 2020 from data center industry figures, they would find something remarkable. Poring over charts of growth and expansion, they would assume that this year was one of prosperity and good fortune. They would simply not see the depressing roller coaster that is 2020, ever lurching from disaster to disaster.

Data centers’ integral role as the fabric of digital society has meant that they were spared from most of the harm that 2020
has wrought. For many in the industry, the world’s rush online has even meant business is booming. “The move to digital business, migration to hybrid cloud, and network optimization - all of those things have been accelerated,” Equinix UK head Russell Poole told DCD.
In its annual report on global traffic trends, the company found that interconnection was growing at a compound annual growth rate of 45 percent. “I don't think Covid caused anything new to be happening, there's just more going on.”
Sebastian Moss Deputy Editor
While there have been some customers in industries that have struggled, the overall trend has been for businesses to require more space - be it in colocation data centers, or through the cloud. “Even companies that are not doing so well are looking to take cost out of their supply chain, and I think digitizing is a way of doing that,” Poole said. On tangible supply chains, data centers were also caught up in the early worries of equipment being delayed when coming from China. But, despite an uptick in demand for networking equipment, this did not cause serious issues for the data center
industry, with many operators having stock on hand. Equally, while Facebook was among a few data center operators to pause construction during the onset of the crisis, most carried on. “Because of the critical nature of what we do, our contractors were able to carry on,” Poole said. “They had to change working practices to make it safer, and in some cases that made things take longer. But we haven't seen any substantial delays to any of our projects.”

In its latest earnings report, Equinix's CFO Keith Taylor admitted that there were some costs to the crisis: with some customers going out of business and others seeking discounts, Taylor believes Equinix took a $20m-30m hit. Overall, though, the growth more than made up for the losses, with Equinix raising its revenue guidance to just under $6bn for the year. The company is far from alone, with the industry broadly seeing revenues rise as businesses shift to digital - the exception being smaller providers that have significant exposure to industries hit hard by Covid.
Another struggle for some has been winning lucrative hyperscale contracts, which can bring in huge revenue hauls but are often an all-or-nothing proposition. "We have talked about the lumpiness of the hyperscale business and how the timing of these larger deals can impact bookings from quarter to quarter," CyrusOne CEO Bruce Duncan said in an earnings call presenting "disappointing" leasing results. The company, which was struggling and underwent layoffs before the virus hit, has pinned its hopes of a turnaround on winning a slew of hyperscale contracts in the years ahead. "We are going to be very focused on closing the valuation gap between us and our very good competitors," Duncan admitted.

In a year of accelerating trends, that of an industry ever more beholden to hyperscale business was among the most rapid advances. For the first half of 2020, hyperscale cloud ecosystem revenues were up by 20 percent to an enormous $187 billion, Synergy Research found. “Cloud technologies and services continue to disrupt the market and to open up new opportunities for operators, technology vendors, and corporate end users,” Synergy chief analyst John Dinsdale said. “Amazon and Microsoft may be the poster children for this movement, but many others are benefitting too and most have seen relatively few negative impacts from the pandemic.”

Of course, with cloud providers building their own huge data centers and
"All of the data center managers are being encouraged to make sure that their teams take time out and properly just go away and disconnect"
related networks of infrastructure, their relationship with the wider data center industry has always been an uneasy one. They both serve as wholesale providers’ main customers, and as competitors to the concept of hosting the Internet across a plethora of providers.

But Equinix’s Poole believes that the balance will stay in the providers’ favor for the foreseeable future: “Hyperscaler consumption of data center capacity that they don't own is growing - look at the performance of the operators out there who focus solely on that,” he said. “And when we look at the demand that we currently see and expect to continue to see, the hyperscalers will build their own data centers, but the scale of their growth and their low latency requirements mean that they are likely to continue using third-party operators.”

It’s a bet Equinix has made with the launch of its xScale business, focusing on wholesale business to hyperscalers - although its traditional colocation and interconnection efforts also bring in significant revenue from the cloud giants. The company hopes to place itself as the intermediary between the cloud providers, with its Equinix Cloud Exchange Fabric serving as a crucial (and lucrative) link in a multi-cloud world. This last quarter, it passed an annualized run rate of $100m.

This, and the general success of the industry in 2020, is clearly a positive note to help make the year a little more bearable. But we are not automatons who solely derive pleasure from steady share price increases, nor are we future historians reading ancient traffic trends. In the symphony of terror that has been this year, we’ve been humans.

The lack of major disruptions to supply chains, construction timelines, or Internet connectivity has come at the expense of an incredible joint effort by those in the industry. The rush to digitization has meant transitions that used to take months were pulled off in weeks. While they were mostly shielded from the same job security anxiety as other
industries, data center staff have had to face the same stresses as everyone else - the constant fear of the virus impacting themselves or their loved ones, the decline of democracies, the rise in police brutality, the growth of climate-related disasters, and the gnawing feeling that something else is going to happen. Together, we have all had to endure lockdowns, and the shared weight of so much tragedy.

With that in mind, employees’ mental health has to be a focus of any company, including those in the data center field, Poole said. “We've been encouraging people to take time out,” he said. “One of the things we found was people stopped using their holiday allowance. We were just working, working, working. So as senior leaders, we have been very publicly taking time off and being properly out of office.”

The company instituted a policy of two extra mandatory days off, worldwide, around Mental Health Day (with the exception of critical facility staff, who were given days in lieu). "The whole company had two days off - 10,000 people - we had a company-wide shutdown." While the company is built around 24/7 operations, human employees aren't. "All of the data center managers are being encouraged to make sure that their teams take time out and properly just go away and disconnect."

While it’s unclear how much longer the pandemic will rage, or what other disasters the future may hold, caring for employee mental health will remain a mission-critical task for data center operators. This year has been a good one for data centers, but here’s hoping we never have another like it.
Waiting for 5G
Vlad-Gabriel Anghel Contributor
The failure of 5G? 5G was supposed to be a revolution. So far in 2020, it’s not even been a great evolution
We are reaching the end of 2020, a tumultuous year that saw us realign our priorities, re-imagine our surroundings and adapt to a new way of living - at least for the duration of the pandemic. This was supposed to be the year in which 5G hit the big time, with wide rollout, breakneck speeds and huge bandwidth. And while it has appeared in certain instances (p56), it seems that the real world results of 5G are, at best, underwhelming, and at worst offer no improvement over former generations of mobile communications.

I have previously covered the incredible promise 5G offers once mass rollout is achieved. Historically, rollout of different generations of mobile networks has always taken several years. And while 5G standards are adopted at a quicker rate thanks to some level of interoperability between this and prior network standards, the near snail’s pace of true stand-alone 5G rollout is
starting to dent the idea that 5G will reshape industries and markets.

There are two variants of 5G: non-stand-alone (NSA) and stand-alone (SA) architecture. The majority of current deployments have been of the non-stand-alone variant, which relies on LTE (“4G”) for its control channels. The difference between the two, in simple terms, is that communications are under LTE control, and only shift to 5G when a device wants to exchange data. At that point, the connection to that device is sent through 5G NSA equipment. This approach enables 5G speeds to kick in for the right data transfers, at least in theory, but excludes all other intelligent functions of 5G, such as idle-mode management and mobile control. So it is not actually 5G, but something in between.

A very small number of 5G SA (stand-alone) networks have seen the light of day. In the US, T-Mobile was the first operator to launch a nationwide 5G SA architecture, on August 4, 2020, with Verizon following suit shortly afterwards. The UK saw its first claim of 5G SA when Vodafone ran a showcase at Coventry University (a far cry from a metropolitan-area, real-world scenario).

The deployment of true 5G networks is much slower than anticipated, for a number of reasons, the most important of which is politics. When the US deemed Huawei a national security risk, many 5G projects ceased - some of which were already in deployment. These operators then had to choose other vendors, so everything had to be redrawn and renegotiated.

As one might expect, Covid-19 has also caused a delay, but the pandemic has had a much wider impact than was first thought. Restrictions and lockdowns have caused supply-side workforce constraints, at the same time as reducing demand by squeezing the purchasing power - and mobility - of the average consumer. Given these issues, operators have slowed down their 5G deployments.

5G turns out to need more masts than 4G - and that has proven to be an issue. Because it uses higher frequencies, 5G relies on shorter wavelengths, which have difficulty penetrating obstacles such as buildings. One way to combat this is to build more telecom masts, closer together, specifically for 5G. However, operators who pursue this first have to find the money to invest, and then face a surprising level of pushback from residents living in neighborhoods where future 5G masts are planned.

The future of 5G as the technology that will truly unlock the Edge as well
as the industrial Edge is uncertain. To build true 5G SA networks would require significant investment, with base stations on “every street pole," as well as software standardization to provide interoperability with previous networks.

Non-stand-alone 5G enables 5G speeds but excludes other functions. It's not actually 5G

During this delay, operators are now wondering whether technologies already in development could alter the course that 5G is currently aiming for. There are new Wi-Fi standards, including Wi-Fi 6, with an increased data rate and enough density to meet a lot of needs. Meanwhile, Internet-beaming satellite startups and organizations are rapidly deploying their equipment, with some claiming speeds of over 100Mbps, making this another - somewhat surprising - option. In addition, with most of us working from home for the near future, adoption could see another delay, as we switch from mobile data to our home Wi-Fi networks.

Sadly, in an attempt to change the public perception of 5G, carriers have aggressively exaggerated its capabilities. Verizon has claimed it will revolutionize everything, almost pitching it as a cure for cancer through remote consultation, AR and VR tools. This claim has been debunked, as experts pointed out that the majority of medical institutions in the US already have the gigabit speeds and low latency that Verizon promised - using Wi-Fi or Ethernet.

Even for those consumers who can get 5G, it’s been a disappointment. 5G is not fully polished, and devices which support it are more expensive and demand a higher power draw compared to 4G. For the end consumer, it’s a minimal benefit. And even before it’s arrived, some are making it sound obsolete by shifting the conversation to 6G. At the recent IEEE 5G++ Online Summit Dresden, Ericsson’s chief researcher Magnus Frodigh set out the company’s vision in a session titled “The Journey to 6G," promising the “Internet of senses” and networks with extremely high frequency band operation, and alleged zero-cost sensors with commercial availability in the early 2030s.

Judging 5G by its current results, it’s a modest evolution at best, and far from the revolution we were promised. This is not to say that 5G’s promises will never come to fruition, but they need operators to get moving on stand-alone 5G deployments. The question remains: is 5G the future? It’s simply too early to tell.
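To put rough numbers on the mast problem described above: coverage area scales with the square of cell radius, so halving the usable range roughly quadruples the number of sites needed to cover the same area. The figures below are illustrative assumptions, not operator data, and real radio planning uses link budgets and hexagonal cell layouts rather than circles.

```python
import math

def sites_needed(area_km2: float, cell_radius_km: float) -> int:
    """Sites needed to blanket an area, treating each cell as a simple circle."""
    return math.ceil(area_km2 / (math.pi * cell_radius_km ** 2))

metro_km2 = 1500.0                       # hypothetical metropolitan area
print(sites_needed(metro_km2, 2.0))      # 4G macro cells, ~2 km radius   -> ~120 sites
print(sites_needed(metro_km2, 0.5))      # higher-band 5G cells, ~0.5 km  -> ~1,910 sites
```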
Power Supplement
INSIDE
The energy journey

Turning to batteries
> How one supercomputer is using huge batteries to keep its system online

Rack it up
> Learn about the fate of Open19 and what can be achieved by simplifying the rack

Is hydrogen the future?
> After Microsoft ran 10 racks off of hydrogen power for 48 hours, we look at what comes next
Contents
36. Data centers can lead the move to renewable energy - Climate change is real. It's time to make some changes.
38. Simply the BESS - How a university data center adopted batteries
40. Advertorial: Combatting the human error factor with reliable power distribution
42. Simplifying rack power distribution - Wiring racks is complex. We examine two alternatives.
45. Could hydrogen kill off diesel? - There's a new game in town. Hydrogen could power your facility
Power brokers

Energy is a journey. It starts with sourcing the power you need, then using it well, and finally dealing with the times it disappears.
Data centers depend on energy more than any other resource. It is the biggest operational expense by far, and it's as necessary as air. As soon as it stops, your data center fails. This supplement follows that journey.
Renewable sources

Climate change is real, and it's not enough to make small steps to reduce our environmental footprint (p36). Big data center players increasingly demand to have 100 percent renewable energy. That can mean challenging local utilities and lawmakers, as Switch found in Nevada. It can also mean a more fundamental challenge, in a country where renewable energy is not easy to find, and land prices make nearby solar farms untenable. Our sector has some of the world's most inventive minds, so yes: why can't data centers lead the way?

Batteries beat diesels

When Rachel Moorehead was preparing a new supercomputer at the University of Alabama, she decided to avoid the perennial problems of diesel backup (p38). Her answer was a battery energy storage system (BESS). Tesla Megapack batteries make sure that power is available when required, and the facility has a high level of reliability.

Simplifying the racks

Once it's in the facility, overcomplex power distribution systems can waste energy, time, and money (p42). Multiple steps in the power train result in losses in transformers. Too many loose wires mean that airflow is restricted and more energy is wasted getting the heat back out. As racks get denser, these issues get more pressing, and it could be time to look at an alternative way to route energy to your servers.

Running on hydrogen

Microsoft has run ten racks for 48 hours entirely on hydrogen. That may not sound much, but it could be the first step towards a new economy (p45). Hydrogen answers multiple questions. It's a great system for energy storage and transport. It can provide a way to save surplus energy that is produced by renewable sources. It can even provide a novel source of cooling. But there are plenty of issues to sort out before we switch over to hydrogen.
All power to you Every data center operator has their own power journey, and plenty of stories to tell on the way. The way data centers are powered today is noticeably different from ten years ago, but there is a strong thread of continuity, kept by this industry's determination to keep things reliable and only use tested technologies. In the next few years, we expect to witness more changes and look forward to sharing them with you.
Data centers can lead the renewable energy transition We can’t just wait for the energy market to improve on its own
“You have to fight.” Adam Kramer is insistent. Climate change is real, and we all need to do our part. Speaking at DCD’s San Francisco event this October, Switch’s EVP of strategy pulled no punches: “To build a credible renewable energy plan is not simply to state that we want to be 100 percent renewable by 2050, or whatever it is. It has to be better.

“I just saw one of our peers announce they’re up to 18 percent renewable energy this year,” he said. “That’s a bit unacceptable, honestly, for our industry. It shows a lack of true desire to actually become renewable.”

Switch has been fully powered by renewable energy for years, and Kramer says it had to fight to get where it has.
Back in 2010, local utility NV Energy used Nevada’s laws to block Switch from changing suppliers and using renewable energy from First Solar, a utility with an industrial-scale solar farm. Switch contested this in the courts of law and public opinion, eventually suing NV Energy.
Casinos, lobbyists, and politicians were dragged into the brawl, along with electric car maker and Nevada neighbor Tesla. Millions were spent on both sides for the largest ballot initiative in the state’s history, with Switch ultimately winning the right to leave the utility back in 2016. “Sometimes it doesn’t have to be a fight with the utility - which is funny coming from Switch - but it also can be a partnership,” Kramer said. “We worked
very closely with Georgia Power on getting renewable energy tariffs, working with the utility to bring on more renewable energy that’s locally produced.”

Over in Michigan, where the company now plans to build a huge campus with 1.8 million square feet of white space and 320MW of IT power, Kramer says that Switch was willing to risk the project for renewable energy. “We worked with [the local utility] and told them we wouldn’t even come if they wouldn’t build this, and we worked with political leaders, and we invested the money and the time to make that happen.”

When it sources renewable energy, Switch now demands that energy is newly added to the grid, instead of using simple credits to consume existing capacity. Where
Sebastian Moss Deputy Editor
possible, it also seeks renewable power generation projects that are close to its data centers, keeping the carbon reduction within the community. Over in Tahoe, the company plans to take this philosophy to the extreme. The campus eventually hopes to grow to over 10 million square feet of data center space, requiring colossal quantities of power. “Just to the east of us, we are building the largest behind-the-meter [onsite] solar plus storage project in the world. That’s 100MW of solar and 240 megawatt-hours of energy storage,” Kramer said. “We’re building a micro grid system that is both behind the meter, but also interconnected into the grid, so we can pull any additional resources we need from the grid, or maybe sell back excess energy if there is any.” The project, which uses First Solar panels and Tesla Megapack storage, is funded by Capital Dynamics, requiring no up-front investment from Switch. It is also planning two other similar projects across Nevada, for a total of 555MW of power generation and 800 megawatt-hours of battery storage. “We have availability of land [by the data center campus], so we can triple the size of our project and go to 300MW of energy behind the meter. And we can do as much energy storage as we see fit, just because it
really is a small footprint for those megawatt hours of storage.” Such proximity to the renewable power source is not something every data center can take advantage of, of course. “What we can do for a new 100MW greenfield data center in Virginia is very different than what we might do for a multi-story, urban data center in Singapore,” Digital Realty’s director of sustainability programs Aaron Binkley said. “What options are available just within that state or country can vary significantly and you multiply that by 280+ data centers around the world and 45 metros, you can see how we need to have consistent themes and approaches but have a lot of ability to tailor and suit solutions to those individual circumstances.”
On-site solar is a problem where space is tight, for instance. “For a typical greenfield data center, we can cover the entire roof with solar panels, and only offset about half of one percent of that building’s energy load,” Binkley said. Adjacent land is expensive - it’s land able to support data centers - “so that quickly shifts us to looking at off-site utility-scale renewable solutions. “And that’s simply to address the scale of the loads at a given site, but also to address the realities that there are just some things
you can’t do on the site, given the densities that we need to hit to make those developments profitable.”

Both companies have seen customers demand a more aggressive approach to renewable energy commitments. Hyperscalers in particular already mostly demand 100 percent renewable energy. Increasingly, smaller customers are asking about it too, but it’s still not at the top of everyone’s agenda.

“We’ve got 4,000+ customers around the world,” Binkley said. “For many customers, it’s not even on the radar,” he admitted. “For some, it might be on the radar, but they’re silent to us about it. And others are aggressively pursuing us to find solutions and collaborate with them. And we have to address that whole spectrum from a customer expectation standpoint.”

Often a company’s public pronouncements don’t line up with the reality of what it contracts, Binkley revealed. “The amount of times we see a corporate commitment from a business around renewables and then we see absolutely nothing at all about it in an RFP for data center space is pretty common.”

For many of Digital’s data centers, the company offers renewable energy via PPAs, if customers ask for it. “And we would love to compete with all of our peers to provide the best data center solution with the most renewables,” Binkley said. “If that’s a requirement of an RFP, let’s put everybody on a level playing field and let’s see who’s got the best solution.”

For that to happen at a greater scale requires users to be more proactive in demanding renewable energy, Binkley argued. “We’re kind of looking to the marketplace of users to make it a requirement, and put screws to the industry.”

But for Kramer - who was keen to note that “Digital Realty has done a really good job” with its sustainability efforts - data center companies need to become more proactive in making the world’s energy grids transition to renewable sources. “I think that many of our peers have allowed the lack of availability of green energy tariffs, or sometimes the cost of it, to stop them from actually procuring it, without realizing that with a little more work and digging a little bit deeper, you can procure renewable energy and actually reduce your energy costs,” he said.

“If renewable energy isn’t available on your local system right now, you can work with each other, you can work with politicians,” Kramer continued. “But you have to be willing to do the work, you have to be willing to engage - there is still not an easy button in most of these places to get renewable energy. You have to do the work.”
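For a sense of scale on numbers like Tahoe’s 240 megawatt-hours of storage, the run-time a battery buys depends entirely on the load it has to carry. The arithmetic below is purely illustrative - the load figures are assumptions, not Switch’s - and it ignores inverter losses and depth-of-discharge limits.

```python
def hours_of_autonomy(storage_mwh: float, load_mw: float) -> float:
    """Hours a battery could carry a constant load, ignoring conversion losses."""
    return storage_mwh / load_mw

for load_mw in (10, 40, 100):            # hypothetical campus IT loads, in MW
    print(f"{load_mw} MW load -> {hours_of_autonomy(240, load_mw):.1f} hours")
# 10 MW -> 24.0 hours, 40 MW -> 6.0 hours, 100 MW -> 2.4 hours
```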
Simply the BESS A new supercomputer is using Tesla batteries to stay online, but the implementation is more complex than you might think
Sebastian Moss Deputy Editor
Rachel Moorehead is used to outages. For the past five years, she’s had to keep a pair of ancient data centers online for the University of Alabama at Birmingham, a task that has proved exhausting. “Power outages, generator failures, chilled water disruptions and cooling emergencies happen about every four months,” she said at DCD>San Francisco. “And that is enough to make anybody have grey hair, and I’m not 40 yet, and I don’t want grey hair, so we need to find a way to fix that.”

Now, the university plans to launch a new supercomputer in a brand new building with all-new equipment. Finally. “It’s not quite done yet - pretty much all of the exterior is done, and so we’re all working on the inside now,” Moorehead said. “We’re setting up the data center now, the racks are here, we’re getting in the containment systems. It’s all coming together and we’re super excited.”
Rather than building a standard facility, the UAB’s Innovation Center looked to live up to its name, and try something a little different. But it also had to occupy a small footprint, be affordable, meet the university’s sustainability goals, and last up to 50 years. To help pull this off, the university turned to design consultants RSP Architects. “Before we actually put any paper to pencil, or designed on CAD, we really had to understand what the owners’ requirements were to determine what the product will look like,” RSP’s Rajan Battish explained. “The fundamental requirements were basically a Tier II data center, because this is a university data center. We’re not looking to build a financial bank Tier IV type data center solution,” he said. “In addition to that, we needed to make sure that the operations and maintenance of the IT equipment and the infrastructure that supports it provided a 2N solution because
that’s where most of the human interaction occurs. And typically, most of the failures occur downstream of that critical system.”

The site was fortunate to have power coming in underground, with three 15kV circuits connecting to a series of network transformers in a vault. “And we have a network transformer configuration feeding the data center at an N+1 configuration,” Battish said. “If you can get a solid stable utility source, it basically helps increase your resiliency to the facility, and also the reliability.”

The university could have fed that power into a traditional data center setup with UPS systems and generators, but it was already burned by the experience of its existing facilities. “We have the traditional model of UPS and generator, and we knew that didn’t always work for us,” Moorehead said. “There were just too many times we had the same component fail on both legs of our power, that we wanted to make a wise choice.”
The team wanted to diversify but “we didn’t want to go too much off the reservation,” she said. The result was to use a traditional generator-UPS model for one leg of the facility’s power, and on the second leg, “we wanted something with fewer moving parts” that was less likely to fail.

“And so then stage-right, enter the BESS system,” Moorehead said. What she calls the ‘best energy storage system’ is a battery energy storage system featuring Tesla Megapacks converted to work as backup power in case of critical power failure.

“The Megapack is an all-in-one system,” Tesla’s senior project development manager Jake Millan said. “It’s completely pre-assembled and pre-tested in a single enclosure from our Gigafactory in Nevada - this includes battery modules, bi-directional inverters, thermal management systems, an AC main breaker and controls. There’s no further assembly required on site. All you need to do is connect the Megapack’s AC output to your site wiring.”

But to actually get the battery system fit for a data center required a little bit more work, Battish said. The three 15kV power circuits came via a “network transformer sitting outside to a switchgear that was designed to accommodate all three circuits at 40V, and the power flow actually comes through a static switch that is a fast disconnect that we needed to provide to adapt the BESS UPS system for commercial data center application,” Battish said. “In addition to that, we ended up putting in galvanic isolation between the utility source and the BESS system by putting a Delta Y transformer.”

Power comes in through A and B feeds,
with BESS serving as the power conditioner and standby power for A, and the traditional UPS serving as power conditioner for B, while a generator is used as B’s standby power. “You’ve got a battery on one side, and you’ve got a traditional generator-UPS system on the other side,” Battish said. “It provides that extra level of resiliency to maintain uptime, not just electrically, but mechanically as well.”

The work required to make this possible was considerable, Battish said. “Anybody can do this, but there’s a lot of thought process that needs to happen for this to work.” While much of the technology in BESS and standard UPS systems is quite similar, “the controls and the software can be quite different,” he said.
“You’ve got a battery on one side, and you’ve got a traditional generator-UPS system on the other side. It provides that extra level of resiliency to maintain uptime, electrically and mechanically." “That’s the key: how do we get those two different technologies to work properly in the output so that there’s no outage to the critical load. That was something we had to flush through during the design,” he added. “Also the sequence of operations. When we have all these cross ties between A and B sources of future loads, we have to be very mindful of how do we develop that sequence of operation, the failure modes associated with the sequence of operations and, obviously, the cost.” Now, the team is planning to add a third leg of power, with another BESS system, perhaps using Tesla batteries, but potentially someone else’s. “We definitely do not want to be sole sourcing to Tesla solutions, per se,” Battish admitted. “We want to look at different manufacturers, and see if the market will garner that desire and the interest to create that technology.” The idea of big battery storage solutions is not new, he noted, pointing to the hundreds of megawatts being installed around the world. “But it’s a new application, and it challenges the status quo.” He continued: “It’s like, why not? Why can’t we do this?”
Where next for batteries?

Lithium-ion batteries have already powered multiple revolutions, including the arrival of mobile phones and electric vehicles - and they are at the heart of the Tesla Megapacks used at the University of Alabama (main story). But researchers worldwide are searching for better batteries.

Data centers need energy when there is a power cut, but better storage has a more fundamental promise. Utilities have to run dirty fossil generators to meet our demands for power, because renewable sources like wind and solar are intermittent. Cheap batteries could save green energy when there is a surplus, for use when demand is high.

Possible improvements to batteries come from several directions. Lithium batteries can store more energy and deliver it faster if the surface area of their electrodes increases. Scientists are using nanotubes to allow more reactions in the same volume - and some of these use different materials for the electrode, including silicon.

Others are looking at different chemistries. Replacements for lithium are being considered, including the much cheaper chemical sodium, which proponents such as Natron say can provide a safe, long-lived battery. Lithium-sulfur is another option being tested, with Sony planning to bring it to market.

For large-scale energy storage to balance the grid, big tanks of vanadium electrolyte are being provided by Invinity Energy for testing in California. Some energy storage systems make use of simple physical processes, including pumping water uphill or raising large concrete blocks to store potential energy, or compressing and liquefying gas.

Each of these developments has advantages, but none has become a magic bullet that can deliver everything required in an energy storage system.
Combating the Human Error Factor With reliable power distribution
Human error is a major factor in failures in the data center. It is well known that a data center must be designed with the highest level of reliability that the business model requires, or the budget allows. Because this essentially means the data center can never be turned off, it must include provisions for concurrent maintenance and the replacement or changing of components without a disruption of power.

Flexible overhead track busway facilitates ease in change management, as it allows circuits to be added or removed without entry into critical panel boards or PDUs. In addition, overhead power provides improved visual circuit management, which reduces the probability of inadvertent operation of incorrect circuit breakers. It is nearly impossible to mistake which circuit breaker feeds which load when the cabinet power connection is made directly overhead.

Under-floor cabling requires extensive labeling of where a cable originates and terminates
at the time of commissioning. Once load is applied it will be extremely challenging to trace, particularly when complicated with redundancies. Overhead power distribution simplifies this task and mitigates the potential for human error as cabinet connections are made directly above the cabinet. Improved circuit management is just one of the many well-known benefits of overhead power distribution. But while there are commonalities between different busway systems, there are still critical areas where busway design can help further mitigate the risk of human error. Simplicity of design Failure can and does most often occur at connection points along any power distribution system. The more components of a design, the more room for error, which in a busway power distribution system includes: The design and maintenance features of each of these critical points in the power chain are paramount in delivering total system reliability.
40 DCD Supplement • datacenterdynamics.com
• End feeds - where source power is supplied to the busway run • Joints - where sections of busway are joined • Plug-in units - where the power is directly distributed End power feed units The end feed is the first critical connection in the busway power distribution system. It is also the only customer landed connection, meaning the installer will be required to torque the incoming power supply to the busway lugs. In the event this is not done correctly, a faulty connection occurs which can lead to catastrophic downtime and damages to infrastructure. Furthermore, regular maintenance on end feeds can require operators to open the end feed for IR thermal scanning to detect abnormalities.
Starline | Advertorial compression joint design, there is no maintenance required, eliminating the need for planned thermal imaging and providing a reliable connection without any future maintenance needs. The potential for installation error is also eliminated because the joint kit cannot be installed incorrectly. Starline reduces the total number of joint kits needed in the system by providing busway sections up to 20 ft (6 meters) long. Twist-in contact electrical connection There are numerous installation mechanisms and plug-in unit designs manufactured by busway providers, often with wide ranges of configurations and customer-specific customization capabilities. As plug-in unit designs grow more complex, the possibility of installation error increases when compared to a robust, simple design with fewer moving parts. Some plug-in units deploy mechanical handles that may inhibit full engagement of the plug-in unit to the copper busbars. Proper installation of plug-in units
is paramount to ensuring a reliable connection to power in data centers. Some manufacturers require copper grease to ensure a strong electrical connection which ultimately is another potential failure point due to human error. Starline deploys a simple, protective earth-first twist-in connection design ensuring a reliable connection to power without grease or complicated mechanical processes. Further, IR window technology can be integrated into plug-in unit designs, mitigating the risks associated with live work and reducing planned maintenance windows. Total system reliability These three critical parts of busway power distribution systems can be a source of downtime at worst or ongoing maintenance at best for data center operators. By utilizing a simple yet robust design and integrated power monitoring features, Starline mitigates any risks of downtime and offers enhanced reliability from power feed to plug-in unit, reducing the human error factor.
Starline offers several solutions in its Critical Power Monitor (CPM) platform to mitigate the risks associated with monitoring these critical connection points. Integrated IR windows allow for scanning of the end feed lugs without the risk of live work and the extended downtime required when opening the end feed box to perform this task. In addition to IR Window technology integrated into end feeds, Starline offers real-time temperature data of the end feed lugs. This allows operators to trend temperature and power data and includes alarm capabilities to trigger alerts when certain temperature thresholds are exceeded. This thermal technology, when combined with power data from the CPM, allows users to move beyond reactive and preventative maintenance and into predictive maintenance scheduling. Joint design Outfitting a facility can require thousands of feet of busway sections. A typical section of busway is 10 feet (3 meters) long – this means deploying numerous coupling systems and joint packs to support the power demands of each mission critical facility. Some busway systems utilize bolts as part of the joint pack, which can become loose over time and require retorquing. Furthermore, dedicated thermal image scanning intervals are recommended as ongoing maintenance for these systems. However, with Starline’s unique
Starline, a brand of Legrand, is a global leader in power distribution equipment. Field-proven for more than 30 years, Starline Track Busway has provided data centers with the most flexible, reliable, and customizable overhead power distribution systems on the market. www.starlinepower.com info@starlinepower.com
Simplifying rack power distribution
Data centers include a massive number of electrical devices, powered by buses and cables. Let’s look at moves to simplify that
Peter Judge Global Editor
Electrical power enters data centers via transformers and is then routed round the facility. But the ways power gets to the servers and switches have been changing - largely because of the evolution of servers and switches. Servers and switches have been getting more powerful, and are installed in racks which are cooled by air. That’s meant more power cables and that’s a problem - not just because of the time and effort involved in complex cabling. Cables sitting under the raised floor deliver energy - but at the same time obstruct the air that removes that energy once it has been consumed and turned to heat. To improve this, power can be delivered from overhead, either from cables in trays or (increasingly) through busways which support flexible connectors down to the racks, where power distribution units (PDUs) provide power outlets for individual switches and servers. In the last few years there’s been a movement to simplify the way power is distributed within racks - and higher densities, along with the move to the cloud, could drive further changes. About ten years ago, Facebook set up the Open Compute Project (OCP), to share
its designs for data center hardware, and allow others to improve it. Facebook, and other OCP founders, are hyperscale players, with large data centers running monolithic applications. A really standardized rack system suits them, so at the time OCP
members designed a new “Open Rack,” which held more equipment (in 21 inches instead of 19), and replaced the PDU with a DC busbar, a copper connector similar to the overhead busway, which goes down the back of the rack.

Menno Kortekaas of Circle B likes the simplified power distribution in OCP racks, but his customers are much smaller than most OCP users, and they need his help. Kortekaas runs a room full of Open Compute equipment in the Maincubes AMS01 data center in Amsterdam. It’s a refurbished space, and so is some of the kit: “There’s a few racks of renewed equipment that’s come back from Facebook,” he says, provided by circular economy player ITrenew. “Customers are receptive to using DC power distribution instead of PDUs,” he tells DCD. “As long as the server gets power, they don’t mind.” Working with OCP kit does require care and understanding, but he thinks traditional PDUs are vulnerable to errors. “We also have some 19in racks, and when one network card failed, we went in to change it, and we turned off the server by mistake. Luckily it was redundant.”
OCP racks are different, and that makes them specialized - unless you happen to have a big data center full of them. “The takeup of OCP-powered racks depends on skill,” he says. “Companies big enough to build their own data center don’t need me.” His customers have between 6kW and 11kW in each rack, and the racks are put together by Rittal, with Circle B handling the installation. “We provide remote hands,” he says. “If there is anything wrong, they log in and we fix it. They don’t have to have any specific hardware knowledge.” Perhaps because OCP equipment is that specialized, 2017 saw another group develop a rack power distribution alternative. This time, one designed to appeal to mid-size companies. Open19 was launched by Yuval Bachar, who was LinkedIn’s chief data center architect. He spearheaded a move by the social media company to commission its own network hardware and design its own infrastructure, in order to save money - and then set up the Open19 Foundation to share that design with other users. “The main difference between OCP and Open19 power distribution is shared versus dedicated,” says Bachar, who is now working on data center efficiency at LinkedIn’s new owner, Microsoft. “In OCP, power is distributed through a busbar, and the whole rack shares that busbar,” he says. “Any fault that happens on it
will knock down the whole rack. In Open19 racks, each server is fed directly from a power shelf which provides low-voltage DC.”

The Open19 power shelf delivers power at 12V to cages for servers and switches. The IT hardware “bricks” have no power supply and slot into these cages, where they clip onto the power bar. Because servers are powered individually, Open19 racks can include a level of server monitoring that is not possible with OCP - and which OCP’s typical users don’t need - says Bachar: “The main difference is between a shared environment for power distribution versus a dedicated environment.”

It all depends on what building block you deal with, says Bachar. Facebook has tens of thousands of racks, so it manages at the rack level, and can reboot a whole rack if necessary. “In Open19, every server counts - that‘s why we created it.” Dedicated feeds in Open19 can monitor and control servers in the traditional way, while those in an OCP rack can only be managed with a daemon on the server itself.

The OCP-style rack-level busbar isn’t even right for all OCP members, even those with their own hyperscale services. OCP implementations at Facebook and other hyperscale companies have diverged, and we understand that Microsoft’s own implementation of the OCP system eschews the busbar in exchange for more dedicated control. Open19 contributed its specifications as a standard within OCP, but it’s unclear at this point which OCP members, if any, see the need for it.

Open19 itself has had low visibility for the last year or so. Microsoft bought LinkedIn in 2016, and in 2019 announced that LinkedIn would be moving out of its own data centers, and onto the Microsoft Azure cloud. In 2020, when Covid-19 made travel impossible, the Open19 Summit was canceled outright, instead of moving online. The equivalent OCP event happened online, and some drew the conclusion that Open19 had folded.

But these rumors are exaggerated. Open19 still has the distinctive features that Bachar points out, and LinkedIn still uses many Open19 racks. LinkedIn’s move to Azure will take some years, leaving it on Open19 racks for some time to come. And meanwhile, a new champion for the Open19 rack and power distribution system is emerging.

In 2019, given his changing responsibilities at Microsoft, Bachar handed over the presidency of Open19 to Zachary Smith, CEO at Packet, a network company
"You put all your capital in, between a half million and a million dollars in silicon and memory, and wheel it in. And then you hope you never change it. Because the second you have to it becomes a mess."
which delivered bare metal services using Open19 racks - a similar concept to the way Circle B plans to deliver infrastructure as a service on OCP racks. Packet became the most public proponent of the Open19 rack design. But in early 2020, colocation giant Equinix bought Packet, and its future seemed unclear. Most of Equinix’s customers take space in its facilities, and many of them offer cloud services. Would Equinix step into IaaS? Late in 2020, it became clear that was in fact the case. Equinix relaunched the Packet service under the brand Equinix Metal. Zac Smith is now leading that part of Equinix, and he predicts a massive boost to the use of the standard. Smith thinks Open19 is ideal for a business that wants to quickly provision any amount of IT resource for enterprise customers in an IT environment which has been pre-wired for power and networking. The magic of it is that it’s pre-plugged and commoditized, but flexible and manageable down to the level of individual components. “Most Equinix customers are not running a giant server farm of a million servers, where it’s okay if some go down. That’s not
the scenario of most enterprises, especially in a distributed world,” he tells DCD. For users of space at a colocation facility, getting hardware installed is key, but so is the ability to change and manage it once it is there. And for customers with hardware in multiple data centers, working at remote locations is an issue.
In recent years, some enterprises have moved to a fully pre-loaded rack, in a system called “rack and roll,” where racks are pre-integrated with all their wiring, servers and switches at a special center and then shipped to the data centers, where they are installed - cabled and ready to use. But there are problems when you examine this concept, says Smith: “Let’s piece apart a standard rack. Let’s say you’re not even doing crazy density, you’re just doing 40 servers per rack, with redundant power per server, and 2 x 25Gbps network ports. We’re talking about five cables each just for the servers, so you’ve got well over 200 cables at the back of your rack.” Integrating a row of racks in advance, offsite, makes your data center very inflexible: “You put all your capital in, between a half
million and a million dollars in silicon and memory, and wheel it in. And then you hope you never change it. Because the second you have to have some remote hands tech in Melbourne go touch it, it becomes a mess.” The alternative means very expensive technicians have to visit that data center and cable everything up on-site. “That’s a very, very high cost per server, because you have no efficiencies when you’re in a data center doing system integration with ten servers.” The Open19 approach disaggregates that, building the power and network cables into the rack before it arrives, but leaving the expensive technology to be slotted in without any expertise, once the racks are in place. “It basically says, what if you could deploy a small amount of capital, your sheet metal and cables, at a different time from when you do your big capital, your CPU and your memory,” says Smith. “We’re talking thousands of dollars, not hundreds of thousands of dollars, to have your cabling and sheet metal done right - and then to add your expensive parts more incrementally, more just in time.” That’s actually a neat summation of the same benefits which Menno Kortekaas promises with his refurbished OCP kit in Maincubes. His remote hands in Amsterdam are a smaller version of the armies of Equinix technicians with which Smith plans to deploy Equinix Metal. Both systems offer pre-wired infrastructure on demand. The two models will also come close physically, because Amsterdam is one of the first four markets where Equinix is offering Metal. Equinix bought a data center in Amsterdam in 2019. At the time, Switch Datacenters’ AMS1 was the home of Circle B’s first OCP Experience Center, and the purchase is the reason Circle B moved to its current home in Maincubes. If IaaS based on pre-wired racks takes off, then one model (Open19 wiring) could replace another (OCP busbars) in what used to be OCP’s main shop window in Europe. Kortekaas will smile wryly.
Could hydrogen kill off diesel?
Sebastian Moss Deputy Editor
Microsoft ran 10 racks on a hydrogen fuel cell for 48 hours. Here’s what comes next
We need to get off diesel. At a large scale, diesel generators are helping contribute to the warming of the planet, serving as yet another source of carbon emissions. Locally, they produce particulates and nitrogen oxides that harm communities, and make it harder for data centers to get permits. That’s why Microsoft, as part of a wider plan to be carbon negative by 2030, plans to stop using diesel generators by the end of the decade. But that leaves the hyperscaler with a problem - how else to power its myriad data centers when the grid fails? For the past few years, the company has experimented with using hydrogen as an alternative backup power source. This July, that work culminated in a successful 48-hour test run, where hydrogen was used to power a row of servers without failure.
Microsoft used a 250kW proton exchange membrane (PEM) hydrogen fuel cell from the automotive market to power the 10 racks, with PEM fuel cells combining hydrogen and oxygen in a process that produces water vapor and electricity (the basic cell reaction is sketched below). Relying on an automotive product has its advantages - it’s one of the biggest markets for hydrogen fuel cells, so costs are falling. The products are also designed for fast acceleration, so they can give the rapid reaction time to take over from a UPS in case of grid failure. But these products were not designed with data centers in mind, project lead and principal infrastructure engineer Mark Monroe explained. “They had never run a fuel cell for longer than six hours because they couldn’t have a fuel tank bigger than that [on a vehicle].” To extend the usable time of the fuel cells, Microsoft brought in tankers full of
hydrogen, said Monroe: “We said, ‘we’re gonna run these.’” In a 24-hour trial in December, Microsoft discovered “some things in the fuel cells that needed slight modification, such as air filters and things like [that].” The company got those fixed before doing this year’s 48-hour test. “It went without a hitch,” Monroe said. For its standard data centers, Microsoft currently keeps 48 hours’ worth of diesel fuel on-site, but also has refueling contracts if power outages stretch longer than that. “It’s pretty rare that that [happens],” Monroe said. “The feeling in most of the industry is that with 48 hours of backup you have plenty of time to warn your customers and they have time to migrate their tasks.” While the company may test longer
periods of running on the fuel cells, it is confident that it has passed a crucial milestone. Now, the focus is on size. “If you look at Microsoft or the other hyperscale companies, we build very large facilities and the 250kW system that we tested on for 48 hours needs to scale up to multiple megawatts,” Monroe said. “That’s where we’re headed next.”
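For reference, these are the standard PEM fuel cell reactions - textbook chemistry rather than anything specific to the unit Microsoft tested - showing why the only exhaust is water vapor and heat:

\begin{align*}
\text{Anode:}\quad & 2\,\mathrm{H_2} \rightarrow 4\,\mathrm{H^+} + 4\,e^- \\
\text{Cathode:}\quad & \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2O} \\
\text{Overall:}\quad & 2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O} + \text{electricity} + \text{heat}
\end{align*}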
The company is currently seeking to procure a 3MW fuel cell, with an eye to testing it out in 2021. “And then that will take us one step closer, by running it at the scale and for the duration that we need.” That’s when the real work begins. For hydrogen to make sense at Microsoft’s facilities, the company hopes to be part of a wider shift to the hydrogen economy. Space agencies and the petroleum industry are already huge hydrogen users, while every day sees new hydrogen-powered vehicles roll out onto the streets. Huge projects are planned that could turn Australia into a hydrogen exporter and convert all the gas lines in the north of the UK to carry hydrogen, while Japan has promised to become a ‘hydrogen society.’ And yet, given the vast challenge of transitioning an entire energy network, and the numerous false starts of hydrogen in the past, it has yet to reach critical mass. Microsoft envisions scenarios in which its hydrogen fuel cells could be deeply integrated into the wider community. When liquid hydrogen is stored, comparatively small amounts boil off into gas when heat gets into the tanks. Monroe believes that it could be used elsewhere: “It could involve vehicle fueling, where we partner up with a vehicle fueling company and enable the creation of hydrogen fleets around the area for our own use or for others’ use,” he said. “Or we might do things like capacity planning, peak shaving, or demand response and interacting with the electric grid
because we’re not limited by the number of hours that we can run that generator, and so we could just use the boil-off to knock off parts of our electric bill.” More ambitiously, Microsoft could produce hydrogen on-site, pairing the fuel cells with electrolyzers so that excess renewable energy during the day “could be turned into hydrogen for long term storage, and then reconverted back into electricity by a fuel cell,” Monroe said. For this to happen, various industries need to get on board the hydrogen bandwagon, and in July the company joined the business-led Hydrogen Council consortium, which pushes for hydrogen products and standards. Still, even Microsoft isn’t prepared to go all-in as things stand now. “I wouldn’t say yet we’re head over heels with hydrogen, but we’re definitely investigating it thoroughly,” Monroe said.
To meet its 2030 diesel-free target, it is “pursuing several alternatives in parallel,” Monroe explained. “One of them is just simply big batteries, right? Being able to run for several hours on batteries might be enough in some locations… the crossover point is somewhere in that four to 10 hour range, where a large battery versus a hydrogen generator becomes more economically viable.” The company will also use diesel derived from biomass, rather than fossil fuels, as it transitions to whichever post-diesel solution it picks. “So a large portion of the infrastructure will still run on an internal
"Using hydrogen as a coolant and as a fuel is something that is quite interesting as that opens up a lot of opportunities." combustion engine that has some sort of renewable fuel [after 2030],” Monroe said. Diesel is incredibly easy to procure and, along with biomass alternatives, it has greater energy density than hydrogen, so it won’t go away any time soon. Still, Monroe says Microsoft is “optimistic about hydrogen and the technology. We think that probably three to five years out will be when you might see data centers deploying this at a large scale [as backup power], and there is definitely the possibility of using hydrogen as a primary power source.” There are others who hope to use hydrogen for even more. “Using hydrogen as a coolant and as a fuel is something that is quite interesting to me as that opens up a lot of opportunities for innovation,” Dr. Victor Nian told DCD. The senior research fellow at the Singapore government-funded Energy Studies Institute is fascinated by the idea of using hydrogen as a multi-energy carrier. “You’ve already spent the energy to make it liquid hydrogen,” which needs to be at least −252.87°C/−423.17°F to liquefy at atmospheric pressure. “Now, when you use the liquid hydrogen or highly compressed
hydrogen it gives you the cold energy, whether you like it or not,” Nian explained. “Whereas with water, you actually have to cool the water.” Meanwhile, for the cold liquid gas to be used in fuel cells, it is warmed slightly - something that could instead be done by servers. “You can go from the hydrogen fuel cell, cool the data centers, and then come back to become a fuel at a reasonable temperature and then use that fuel to produce electricity to power the data center,” Nian said. “So maybe we can achieve triple benefits - on paper this is workable.”
There doesn’t appear to be any data center company that has embraced the idea just yet, with the closest being Keppel Data Centres. Also based in Singapore, a nation with limited landmass, it is exploring building a floating data center that could use hydrogen power, and is also studying using the cooling from liquefied natural gas regasification to help data centers. But all of its efforts are in a very early stage, and the company declined to comment. Another idea that is possible, on paper, is that of hydrogen cryomagnetics. Instead
of storing energy in a kinetic form in a flywheel, supercooling the system with hydrogen would allow the coil to become superconducting, so the energy could be stored in the magnetic field. “It’s not a new concept, but is barely mentioned in the literature because the application is very, very niche,” Nian said. “But it could be super convenient because you can discharge energy from this storage at super short time intervals at a very high power.” When asked about Microsoft’s view on using hydrogen for cooling, Monroe was reticent. “It’s possible,” he said. “We’ve looked into a couple of schemes where we might use the hydrogen for some cryogenic type cooling applications for maybe super performance or quantum or things like that.” He added: “Definitely having more experience with cryogenics as a result of having large amounts of liquid hydrogen around will definitely help us as a company as we move into those kinds of spaces.” That’s still a long way off. Microsoft first has to convince itself that it wants to use hydrogen - then it needs to convince everyone else. Plus, it needs to make sure that people use the right kind of hydrogen.
The first wave
Shifts to new technologies don’t happen overnight. Should they be adopted by the data center industry, hydrogen fuel cells will have to integrate with existing facilities as well as new builds. “We’ve asked our suppliers to make this a drop-in replacement for a diesel generator,” Microsoft’s Mark Monroe said. “And so electrically, the way that we test the system looks just like the diesel gen does. And from an architecture standpoint, we want the data centers to be exactly the same, beyond the generators and storage.” But, he noted, long term “that may or may not be the best way to do hydrogen generators.” For example, they produce direct current, so Microsoft could theoretically “shortcut some of the electric system and do a redesign that takes advantage of the way that the fuel cells work.”
While hydrogen itself is a green energy storage product, the process of creating it can be less so. Currently, roughly 95 percent of hydrogen is produced from fossil fuels, emitting carbon dioxide and carbon monoxide as byproducts. This often comes from natural gas. Keppel, for example, operates gas businesses as sister companies to its data center arm. For hydrogen to improve things, rather than simply moving the problem downstream, it needs to be produced with electrolysis, where electricity from renewable sources is run through water to separate the hydrogen and oxygen atoms. Here, Monroe hopes, data centers can lead: “Where we think the size and speed of the data center industry will help the hydrogen industry the most is in driving down the cost of the fuel cell modules and driving up the supply of green hydrogen. We think the data center industry will start out by saying we have to have green hydrogen, period.” Once a company like Microsoft decides to shift to hydrogen, Monroe believes that everyone else will soon turn to it. “Diesel generators are an accepted difficulty for the data center industry because they’re the best that’s available today,” he said. “But there are other folks that would like to get rid of their diesels as well. And so we think that once one of us proves out the technology, others will follow along quickly.”
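The electrolysis route described above is, in essence, the fuel cell reaction run in reverse - standard chemistry, not a Microsoft-specific process:

2\,\mathrm{H_2O} \;\xrightarrow{\text{renewable electricity}}\; 2\,\mathrm{H_2} + \mathrm{O_2}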
Growing underground
Alex Alley Reporter
How Bluebird Network got a data center - and how it expanded
What makes someone look at a mine and see an ideal spot for a data center? In the case of Bluebird Network’s underground facility in Springfield, Missouri, the story goes back at least 20 years. Bluebird’s general manager Todd Murren told us the history. In 1946, the Griesemer Stone Company began mining limestone from beneath the city of Springfield. The “room and pillar” method left huge empty underground halls. By the 1960s, Griesemer was offering warehouse space in these vast caverns. In the 1980s, the city of Springfield got its own telecoms firm, SpringNet, when the community-owned utility decided to roll out broadband. At SpringNet, Murren began looking at the idea of setting up a secure data center in the 1990s, and eventually realized the ideal site was 85 feet below him, in Griesemer's combined mine-and-warehouse, now renamed Springfield Underground.
“You can’t find anywhere else with the same level of protection,” he told DCD. “Some people look at the mine and feel scared or think it’s a bit ominous, but human beings have been looking to caves for protection since the beginning. We wanted something that would be safe from dangers that most data centers faced.”
Missouri is prone to tornados, and a data center underground is one way to stay safe, but moving to a once-working mine brings other concerns. “When you’re underground in a mine you may want to check that you’re not building next to a TNT factory. We set up a seismograph back before we even built the data center, because we’re dealing with hard drives and the last thing you want to do is shake them. “But we found no problems. I’ve been there when they detonated explosives in the mine, there’s nothing quite like it,” he said. “Of course, I was a safe distance away.” Limestone is structured like a rocky sponge,
he explained. It can absorb shock and is flexible enough to dissipate energy. SpringNet bought space in the mine, and soon began construction - with some of the work already done for them: “The walls, floor, and roof are already built for you.” The low ambient temperatures reduced the cooling bill, and expansion was never an issue as there’s almost unlimited space to expand into. Originally, the data center was only going to be partially in the mine, with auxiliary systems such as generators kept above ground. But then 9/11 happened. “It caused us to pause and rethink some of the design features. The mine offers some very reliable natural disaster reinforcements, but unnatural events caused us to rethink them. That pretty much brought the project to a halt.” Suddenly, SpringNet became very aware of the damage a plane could do. The redesign brought all of the data center’s surface assets, generators, and mechanical cooling systems underground. “Cooling and power,” Murren said. “They’re much like our critical organs in the human body. Our heart and lungs are protected under our ribcage. So, our redesign after September 11 brought those elements underground.” With the rethink, work began in earnest to get the system ready for a launch in 2003. “It took about three years,” Murren said. “Three years to properly prepare the place, smooth out the walls, and install ventilation and the electrical equipment. We had to come up with a whole design philosophy and kept on making changes because of the challenges we faced.” Before Murren and his team ever moved in, the company that operated the mine was contracted to prepare the space. Once enough room had been made, level floors were laid with concrete. Of course, the underground roads into the facility weren’t exactly smooth, so most of the heavy equipment brought into the mine had to be dismantled or shipped in on special trucks equipped with suspension controls, and specialists who knew how to navigate tight spaces were brought in to help. “We had to remove two old generators and bring in three new ones. As with most generators, these things are brought to the job site on a semi-truck. In normal circumstances, a crane could be used, but when you are in a mine, with only 25 feet of space to the ceiling, you can’t use a 100-foot crane. Getting that generator off a truck and getting it into place was a challenge,” Murren added. “We had to do a certain amount of dismantling.” Another problem was the fuel for the backup generators, a potential fire hazard in the enclosed space: “When we started, we
had to store around 3,000 gallons of fuel and now we’re storing around 12,000. When you bring those underground in a mine right next to critical operations, you’ve brought a risk that you normally don’t have in a [surface] data center.” Generators also have to breathe: when they’re turned on, gases such as carbon monoxide and carbon dioxide are released. The mine is essentially an enclosed environment, so those generators needed sufficient ventilation. By 2014, SpringNet had a thriving underground business, with 84 tenants including regional healthcare providers, but the telco wanted to invest in fiber, so it sold the data center to local telecom provider Bluebird Network.
Mining ceased in 2015, just one year after Bluebird took over, and the new owners embarked on an upgrade process, backed by tax incentives from the State - Murren stayed on. Bluebird has expanded the space in three phases, with help from Schneider Electric’s data center team. In 2016, it added 4,000 sq ft. In 2018, it kicked off another 4,700 sq
ft expansion, and a third one adding some 7,000 sq ft was begun in 2019, leading to an eventual site with some 30,000 sq ft. If any more space is needed, it’s no problem as the mine still has some five million square feet of tunnels and caverns. The expansion meant increasing the number of UPS systems, cooling systems, and generators. This last addition meant something more had to be done with the exhaust. Poisoning the workforce was not part of the plan. “Part of the work on the mine was addressing the exhaust and getting rid of it. The first thing we do is scrub it, getting it as clean as we can, and then get it out of the mine.” Until the expansion, the underground complex's regular ventilation systems could do the job, but for the expansion, Bluebird drilled a hole, 65 feet deep and 13 feet wide, to act as a chimney for exhaust gases and hot air from the chillers. During intense lightning storms, something Missouri faces quite often, electrical spikes were discovered. These ‘anomalies,' as Murren calls them, led to the realization that the data center’s grounding system wasn’t good enough. “So, since buildings up on the surface get to ground themselves by digging down;
"Human beings have been looking to caves for protection since the beginning.” what do you think you can do when you’re underground?” The team decided it would be best to dig 340 feet down into Springfield’s water table. It may sound strange, but limestone is an insulator and the company needed to find a good conductor to dissipate the voltage. “So, we grounded ourselves to the water table. There are many wonderful properties to stone, one being you can’t electrify it, but the issue is you want something conductible when you want to ground electricity.” Modernization can be a challenge to any building, but underground it’s a major challenge. The Bluebird data center is Murren’s pride and joy and he’s proud to have seen it through all these stages. “It’s a blessing in disguise. Because now, anything can be going on, above, and has zero impact on the resiliency of this data center."
POWER TO KEEP YOU CONNECTED Protect your critical data with backup power that never stops. Our priority is to solve your data center challenges efficiently with custom continuous, standby, and temporary power solutions you can trust to keep you connected. Our trusted reputation and unrivaled product support demonstrate the value of choosing Caterpillar. For more information visit www.cat.com/datacenter. © 2020 Caterpillar. All Rights Reserved. CAT, CATERPILLAR, LET’S DO THE WORK, their respective logos, “Caterpillar Corporate Yellow”, the “Power Edge” and Cat “Modern Hex” trade dress as well as corporate and product identity used herein, are trademarks of Caterpillar and may not be used without permission.
Providing Speed the Market Needs.
With Gray’s 60-year history in the industrial sector, we are more than equipped to meet the unique challenges data centers require. Gray has built mission critical facilities for domestic and international customers, including a tier 4 facility, which house cloud-based services. From concept to commissioning and beyond, you can count on Gray to make your vision a reality. Zach Lemley Senior Manager, Business Development zlemley@gray.com gray.com
Internet by helium
Connecting the world via balloon Alex Alley hopes you have a head for heights
Alex Alley Reporter

A high-altitude balloon sounds like an eccentric way to deliver the Internet but Loon, an Alphabet company, has spent the last decade figuring out how to do it. Loon is using unconventional means to solve an otherwise impossible problem: bringing the Internet to rural regions which basically cannot afford it. As Loon’s CTO, Sal Candido told DCD: “We provide connectivity, the idea is to do it all the time in all places. There are over a billion people in the world who don't have connectivity right now.” Loon started in 2011 at Google's research division Google X, with a plan to beam the Internet down to rural and isolated areas from high-altitude balloons that float in the Earth’s stratosphere between 11 and 16 miles above the surface.

After a few years of research, Loon was spun off as a separate subsidiary of Google's parent in 2018. Since then it has tested its technology in various countries including the US, New Zealand, Sri Lanka, and Brazil. Candido has been on the project since it started in 2011, and has seen it transform from a blue-sky prototype to a solid proposition in the real blue skies - as a for-profit organization which inserts itself into countries as a stop-gap that links up isolated areas which local telcos can’t afford to cover on their own. The company uses helium balloons, 15m across, made of polythene less than one-hundredth of a millimeter thick. Each craft is equipped with 10kg (22lb) of electronics, including a radio hot-spot, LTE networking equipment, and solar panels. Each balloon lasts about six months before it is landed and picked up by collection teams.

A remarkable amount of flexibility comes with these balloons and the cost of each craft comes in at around $40,000. “What balloons allow us to do is to cover medium-density areas,” Candido added, “a little bit more efficiently than you would be able to do with towers on the ground.” The CTO made it very clear that Loon isn’t a one-size-fits-all measure. It has limited bandwidth for multiple users. It is designed to support dozens of people in a square mile of countryside, not the hundreds or thousands that you would see in London or New York. The “pull,” Candido says, is that Loon can be inserted in areas where it is usually “not economical to put down towers because there are not enough people to use those towers.” Essentially, Loon takes the equipment that would normally sit at the top of a cell tower, adapts it for long-distance operations, and sets it floating in the stratosphere. “We look at ourselves as a long-term piece of infrastructure… we’re able to add resiliency if a terrestrial network goes down,” he adds.

In 2017, Hurricane Maria hit the Caribbean and the advantages of floating cell towers became blindingly obvious. With the help of AT&T and T-Mobile, the company was able to get its aerostats in place and start beaming the Internet to civilians in Puerto Rico soon after the disaster. More than 3,000 people were killed by the hurricane and, in many places, terrestrial infrastructure such as cabling and cell towers were wrecked. Survivors in remote areas suffered due to a lack of communication, hampering relief efforts, and any hope of a return to normal. In response, Loon got into gear and placed balloons over the affected areas, with its “floating cell towers” allowing previously disconnected people to communicate from inside refuge centers and even to stream Netflix. What’s most impressive is the way the balloons placed themselves in response to the crisis. At an instruction from Loon, they “chose” their own routes and positioned themselves to provide the connection the survivors needed. The balloons are autonomous, given the nearly impossible task of controlling numerous systems floating in the stratosphere. Each system uses data about air currents to navigate itself according to instructions. Each vehicle uses AI to intelligently adjust its altitude, so the balloon rides on different currents to “sail” to different regions in the world, and maintain its location once it is there. This AI has led to some shocks. Shortly after the hurricane, Candido checked in
on his fleet of balloons one early morning, and almost spat out his coffee. He saw that four of his balloons were not following orders. As air currents caused them to drift away from their location, he’d given them a pre-programmed circuit that would bring them back to Puerto Rico - but they were ignoring that. “Normally,” Candido said, “balloons that had finished a round of service above the island would float southwest, cross South America and eventually loop back to Puerto Rico to resume serving those needing connectivity.” Instead, the balloons were hovering in a kind of “holding pattern.” Candido thought this was an error and attempted to investigate only to find the Loon system knew better than its masters. “Diving deep into the forecasts,” he said, “made it clear I had missed something. The winds, as it turned out, were expected to change in the coming days. The new winds would allow the balloons to simply drift straight back toward Puerto Rico, rather than taking the longer, circular route (which they were programmed to do) through South America.” The Loon system used the data, and spotted it would be more efficient to wait. More recently, the company worked with Telefónica to help people out after a magnitude-eight earthquake in Peru cut off connections in remote parts of the country.
Loon’s experience in Peru was different from how things went in Puerto Rico - right from the time it took for the balloons to arrive. It took four weeks to begin providing service in Puerto Rico, but in Peru the network was up in two days, partly because the balloons were already operating nearby, but also because navigational techniques had improved. As a company, Loon is based out of a Google campus in Winnemucca, Nevada, where a lot of Alphabet’s engineering R&D departments test out some of its other innovative ideas. But as it has grown, it’s turned out that Nevada is not the best site for balloon launches. To get service to new sites, Loon has the balloons deliver themselves. Some have gone up from Nevada, and sailed off on high-atmosphere currents, but the team has switched its primary launch site to the US territory where it provided a much-needed emergency service. “We didn't want to launch balloons off of Google's main campus and Puerto Rico is just a really good site for us to get balloons into the Tropics,” explained Candido. With balloons that can travel long distances, the company doesn’t have to ship
"Loon can cover areas where there are not enough people to make towers economical" them, he told us: “What is closer to Kenya? Is it Italy or is it Puerto Rico? We did a whole bunch of calculations like that to figure out what are good launch sites for Loon based on the places that we want to be working on in the near term. The point is the balloons follow the wind.” Prevailing winds make this a little more complex, as balloons often have to travel an apparently circuitous route. “When we want to provide service in Chile,” Candido added, “you may think you would want to launch in Argentina, it's right next to it, but it's on the wrong side because the wind is always going East.” Sometimes a balloon that wants to travel east to west has to follow a wind in the opposite direction until it can be looped around by the current. The wind travels in different directions and at varying speeds depending on the altitude. When a balloon is launched the vehicle will float up and can adjust its height by changing the amount of helium in the envelope - choosing an altitude where the wind is favorable. Puerto Rico is ideal because of its tropical latitude, making it possible for a Loon balloon to easily navigate to Africa - a continent where Loon suspects it will find many customers for its services. Loon uses meteorological data from the National Oceanic and Atmospheric Administration (NOAA), to allow its craft to make navigation decisions. Each craft receives data from ground stations that are
in turn connected to distant Google data centers, where updates to Loon’s navigational plans are computed. “Most of the wind data and the navigation smarts occur in our data centers,” Candido said. “And we just send the balloons a simple command, like ‘here's your plan for the next five minutes, and here is the plan for the next five hours’. It's in the data center that all the wind data is being processed; there's just not enough computation on the balloon itself.” Before the company operates in any country, Loon has to install ground infrastructure and connect to a local network, as well as secure flight permissions from the authorities.
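To make Candido's division of labor concrete - wind forecasts crunched in the data center, only a tiny plan uplinked to the balloon - here is a toy sketch in Python. It is not Loon's code: the function names, the data layout, and the simplified wind-direction convention are our own assumptions.

import math

def best_altitude(forecast, bearing_to_target_deg):
    """forecast maps altitude (m) to (wind_speed_m_s, wind_heading_deg),
    where the heading is the direction the wind blows toward (a simplification)."""
    def progress(speed, heading):
        # Component of the wind vector along the bearing to the target.
        return speed * math.cos(math.radians(heading - bearing_to_target_deg))
    return max(forecast, key=lambda alt: progress(*forecast[alt]))

# Ground side (the data center): crunch a wind forecast, e.g. derived from NOAA data...
forecast = {17000: (12.0, 90.0), 19000: (8.0, 45.0), 21000: (15.0, 270.0)}
plan = {"hold_altitude_m": best_altitude(forecast, bearing_to_target_deg=80.0),
        "valid_minutes": 5}
# ...and uplink only this small plan to the balloon.
print(plan)  # {'hold_altitude_m': 17000, 'valid_minutes': 5}

In reality Loon's planner weighs far more factors - power budget, airspace, station-keeping - but the shape of the exchange, heavy computation on the ground and a small time-limited plan on the craft, matches what Candido describes.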
Loon beams Internet derived from local mobile operators or partners such as a regional telco, which is wirelessly connected to deployable Edge locations operated by Loon. These ground stations send signals to the balloons, and the signals can be routed between balloons before finally reaching the customer on the ground. In July this year, the company finally launched its first commercial offering. After several tests and successful trials, the company now interconnects 35,000 people over 50,000 square kilometers (about 19,300 square miles) in Kenya’s Radad region. The 35 balloons being used offer average speeds of 18.9Mbps downlink, and 4.74Mbps uplink, with a 19-millisecond latency. Following this, Loon is looking to be deployed in Mozambique and Peru. As of 2020, the fleet of balloons has flown for more than a million hours and traveled nearly 40 million kilometers (24 million miles) - enough to make 100 trips to the moon.
Exploring 5G's spread
Who’s got 5G?
Which countries have got 5G, what standards are in use, and how important are they for telcos?
Martin Courtney Contributor
Faster fifth generation (5G) mobile networks offering download speeds of up to 1Gbps are currently being rolled out in almost every continent. Yet the pace and scale of progress varies considerably from one country to another, while different mobile network operators (MNOs) are relying on different standards and wireless frequency allocations. Figures published by network test, monitoring and assurance specialist Viavi estimated that as of January 2020 commercial 5G networks had been deployed in 378 cities across 34 countries. Elsewhere the GSM Association (GSMA) estimates that 5G networks will rapidly expand over the next five years to account for as many as 1.2bn connections by 2025, covering up to a third of the world’s population.

Different 5G standards
Rather than being defined by a single common approach, 5G encompasses a broad set of different technologies and standards included in the ITU-R M.2083 framework set out by the International Telecommunication Union (ITU) and the 3rd Generation Partnership Project (3GPP). The 3GPP has identified three core capabilities which no 5G network can be without - namely eMBB (enhanced mobile broadband), ultra-reliable low-latency communications (URLLC), and massive machine type communications (mMTC).
As with its 3G/4G predecessors, 5G will not be switched on overnight. Rather it will arrive in a series of waves likely to gain download speed and capacity as use case, penetration, and coverage accelerate. What you get in terms of bandwidth, latency, and availability will depend on which portion of the frequency spectrum 5G networks use: low-band, mid-band, or high-band. Low-band 5G networks use the 600/800/900MHz frequencies which allow broader geographical coverage because they are less susceptible to interference from static objects like walls and ceilings or atmospheric conditions. A single tower can cover hundreds of square miles and low band networks can offer connection speeds of anything between 30Mbps and 250Mbps according to recent tests, depending on the user’s proximity to the base station. Most operators plan to use low-band spectrum as a way to connect large numbers of 5G subscribers in less densely populated areas with the minimum amount of mobile bandwidth (in some cases not much more than 4G offers now) while faster speeds will be provided by other frequencies in cities and urban conurbations. For example, mid-band, or 5G New Radio (NR) Sub-6 could deliver average download speeds between 200Mbps and 900Mbps using frequencies under 6GHz - most commonly 2.5GHz, 3.5GHz, and 3.7-4.2GHz. Governments around the world are also mooting other wavebands within the Sub-6
spectrum which currently have uses that will eventually disappear, such as analog TV broadcasting. High-band or millimeter wave (mmWave) 5G networks require a large number of small, low-range nodes to deliver dense coverage that can support huge volumes of connected devices concurrently over shorter distances – ideal for cities, transport hubs, and other congested sites. High-band frequencies operate in the 24GHz-40GHz wavebands and are expected to eventually deliver average download speeds of 1-3Gbps, though manufacturers like Samsung have successfully demonstrated 5G high-band connections of up to 7.5Gbps. Both mid- and high-band 5G infrastructure are defined as 5G New Radio (NR) by the 3GPP, meaning they can operate as non-standalone (NSA) infrastructure that additionally uses existing 4G networks to carry signals through dynamic spectrum sharing (DSS) to increase coverage in the early stages of 5G evolution. Those networks are eventually expected to graduate into standalone (SA) 5G infrastructure that operates independently of other cellular technologies and relies exclusively on 5G packet core architecture, which uses 5G cells for both signaling and data transfer. Some carriers (AT&T for example with its 5GE infrastructure) are also deploying an interim 5G technology that uses 4G transmission technology upgraded with MIMO support and fiber optic backhaul.
South Korea and China well ahead
Countries which have fostered faster implementations of 5G have invariably been supported by government initiatives intent on accelerating the respective mobile economies. South Korea led the way with 85 5G-connected cities, following early implementations in April 2019, for example. Progress was boosted significantly by government investment of up to US$26bn designed to deliver a fully-fledged 5G environment by 2022. The country’s Ministry of Science and ICT raised US$3.3bn from spectrum auctions in the summer of 2018, with operators including LG Uplus (a subsidiary of electronics giant LG), Korea Telecom and SK Telecom buying six different frequency allocations within both mid-band 3.4-3.7GHz and mmWave 26-29GHz wavebands between them. Tests conducted in August this year calculated that average 5G download speeds exceeded 650Mbps in Seoul and six other major cities in the country, around four times faster than similar tests conducted on 4G networks a year before. For the moment, 5G infrastructure uses NSA networks that use the 3.5GHz spectrum, with mmWave versions planned for later in 2020.

With China Mobile, China Telecom, and China Unicom all launching 5G services in November 2019, Chinese government reports suggest the country will expand coverage by building over 10,000 5G base stations a week in 2020, with over 600,000 planned for the end of the year. That infrastructure will cover almost all of China’s 300 major cities, offering download speeds of up to 1Gbps using the low and mid band portions of the frequency spectrum. Reports suggest China’s 5G users had already exceeded 88m by the end of July 2020, at which point they represented 80 percent of global users. State support and finance, coupled with the dominance of telecoms equipment supplier Huawei, have helped drive that rapid expansion, as did a faster recovery from the disruption caused by the coronavirus pandemic this year.

US continues rollouts at pace
The three major MNOs in the US (AT&T, Verizon, and T-Mobile after the latter’s acquisition of Sprint) have all launched 5G networks and will continue to expand coverage in 2020/2021. Verizon’s UWB 5G is officially available in 36 US cities so far, though its use of the 28GHz frequency waveband means that while bandwidth is high at up to around 1.4Gbps, coverage remains patchy for the time being. The telco is also expected to deploy a supplementary 5G New Radio (NR) Sub-6 network that additionally uses DSS to utilize existing 4G infrastructure to offload 5G traffic when needed.

T-Mobile first built a mmWave network last year [2019] before adding further infrastructure that uses the low-band 600MHz spectrum. The mid-band 2.5GHz spectrum acquired through its US$26bn merger with Sprint in April 2020 covers over 200 cities and gives T-Mobile a considerable advantage inasmuch as it covers all the spectrum bases - mmWave in cities and densely populated urban locations, mid-band in metro areas, and low band nationwide. However, tests of the 600MHz Sub-6 network revealed little incremental advantage in terms of download speeds over 4G LTE, leaving T-Mobile working hard to upgrade more of its transmission masts to standalone 5G in order to cover greater parts of the US population.

AT&T too has been quick to deploy 5G using both mmWave and low-band 850MHz network infrastructure, additionally using DSS to share portions of the spectrum previously used by 4G LTE to maximize coverage. The telco’s mmWave 5G Plus service offers average download speeds of around 1.5Gbps (recent tests indicate 1Gbps is more likely) and will eventually cover more than 200m people in 395 locations around the country, but for the moment is only available in 35 cities across 17 states. In contrast, the low-band service already reaches 200m Americans, the company reported.

European progress beset by delays
Europe is at risk of falling behind other regions when it comes to 5G rollouts, with delays in implementations being caused by a combination of staff shortages, coronavirus lockdown restrictions, and budget constraints. A recent report from PwC suggests that the Covid-19 pandemic could delay 5G rollouts in Europe by 12-18 months, with telco investment falling by €6-9bn between 2020 and 2022. Some European governments postponed auctions of the wireless frequency spectrum which 5G uses to transmit data, for example. Spain has put its upcoming 700MHz frequency 5G auction on hold indefinitely, while the Czech Republic postponed its auction of 700MHz frequencies and 3.5GHz wavebands. Austria’s Telekom-Control-Kommission and France’s ARCEP also postponed their second 5G spectrum auctions, as did the telecoms regulator in Poland. In Switzerland, protests and political opposition have led local MNOs to halt antennae and mast deployments until more data on any associated health risks is
Japan, Singapore, Australia lead APAC
All four of Japan’s MNOs (NTT DoCoMo, KDDI, SoftBank, and new market entrant Rakuten) had launched 5G services by July this year, after the government assigned the country’s wireless spectrum in the mid and high bands (3.7GHz, 4.5GHz, and 28GHz) for free in April 2019, in return for pledges on minimum investment ranging from US$1.7bn for Rakuten to US$7bn for NTT DoCoMo. Rollouts initially planned to coincide with the 2020 Tokyo Olympics were delayed by pandemic restrictions but have since picked up pace. KDDI, which covered 15 of Japan’s 47 prefectures at launch in March, forged a network sharing agreement with SoftBank to increase rural coverage, while SoftBank aims to install over 10,000 5G base stations by the end of March 2023 and cover 90 percent of the population by the end of 2021. Singapore remains at the testing phase, partly due to the restricted availability of mid-band 3.5GHz spectrum, though MNOs have announced plans to cover the whole island with 5G connectivity by 2025 using parts of the low-band and millimeter wavebands. In Australia, Telstra and Optus
are set to roll out services initially using the mid-band Sub-6 frequencies, advancing to mmWave connectivity in late 2021/early 2022.
MENA well advanced
By the beginning of 2020, ten operators in the Middle East and North Africa (MENA) region had already rolled out commercial 5G services, with another 12 countries set to follow by 2025, according to the GSMA. The majority of activity is limited to the wealthier Gulf Cooperation Council (GCC) states and Israel, however. Zain’s 5G network already covers 95 percent of Kuwait’s urban areas, for example, with the telco starting its rollout in Bahrain this year. By the end of 2019, Zain had switched on 2,600 5G towers across 26 cities in Saudi Arabia, relying on Nokia to provide MIMO equipment transmitting in the mid-band 2.6GHz and 3.5GHz wavebands. The operator will also use E-band microwave technology in certain areas of the country to deliver ultra-high capacity backhaul links in the 60-90GHz spectrum range, able to transmit data at speeds of up to 4Gbps or 10Gbps, depending on the specification, over distances of up to 10km. Huawei’s RTN 380/380H units, part of its cloud-based radio access network (C-RAN) solution, are designed to backhaul traffic between 5G base stations in areas where deploying wired fiber optic infrastructure is either problematic or prohibitively expensive. Rival carrier Saudi Telecom Company (STC) launched its 5G service in June 2019, also using a portion of the 3.5GHz band. Ooredoo has launched commercial services in Qatar ahead of the planned Fifa World Cup tournament in 2022, having successfully conducted trials using Ericsson NR RAN technology that saw transmission speeds of up to 4.2Gbps using 200MHz of mid-band spectrum. More recently, Etisalat rolled out fixed wireless access (FWA) 5G infrastructure for residential broadband customers in the United Arab Emirates (UAE), having earlier implemented a standalone mobile network
operating in the 3.5GHz waveband in 2018. The GSMA estimates that 5G adoption will reach 16 percent in the GCC Arab States by 2025, with 15 countries served by 5G networks, slightly ahead of the global average.
The pace of implementation is significantly slower in Latin America, though Claro is working with Ericsson and Qualcomm in Brazil, and Antel is partnering with Nokia in Uruguay, to conduct trials of the technology. And while pilot 5G services have been launched in Africa (notably by Vodafone in Lesotho, using mid-band frequencies), a lack of available spectrum and poor existing mobile penetration are likely to delay significant adoption for a few years to come.
The future of telecommunications
Few, if any, MNOs can afford to delay their 5G rollouts for long - arguably even more so with the coronavirus putting strain on existing 3G/4G networks as use of streaming, gaming, and conferencing applications increases, along with the personal productivity apps that support many hours spent working at home. In March 2020, Vodafone reported up to a 50 percent rise in Internet use across its networks in some European countries, with Verizon noting a 75 percent jump in gaming traffic and a 30 percent surge in virtual private network (VPN) usage. While 4G networks may be able to handle that additional strain over the short term, a growing number of Internet-connected smartphones, tablets, smartwatches, and other mobile and Internet of Things devices is expected to eventually outstrip 4G capacity, particularly if end users start to demand high definition 4K video, augmented reality (AR) and virtual reality (VR) applications, and high speed gaming. More importantly, telcos need 5G to help them win customers in a competitive market, increase their average revenue per user (ARPU), and launch innovative new services that generate sufficient turnover to offset declining revenue from traditional voice and messaging services.
DCD>5G Supplement
Preparing for change
Coming soon
This article will feature in a free digital supplement on 5G’s impact on digital infrastructure. As the new network technology arrives it brings opportunities, but also challenges. How will 5G-enabled facilities be deployed, powered, and used? bit.ly/DCDSupplement
An Edge Supplement
Edge in the Next Normal
Peter Judge Global Editor
The Covid epidemic accelerated digitization. How will this change the deployment of Edge?
Edge computing is the much talked-about paradigm where computing resources are placed close to the devices and consumers they address, in order to minimize latency and reduce the cost of communications back to centralized cloud resources. How has the Covid-19 pandemic affected the way Edge is being deployed?
Before we answer that, we have to note that actual figures on Edge deployments, even before the coronavirus pandemic, are hard to get hold of (see box). Edge is predicated on a rapid growth of applications including the Internet of Things, streaming media, and smart cities. The last few months have seen a change in the way these applications are being deployed, which will have implications for the deployment of Edge.
Many have pointed to an acceleration in “digitization” brought about by the pandemic as people shift activity online. Enthusiasts believe this is a blanket boost to the Edge. “The pandemic is likely to accelerate the change curve by more than 12 to 18 months,” Mark Thiele, CEO at startup Edgevana, told DCD’s Building the Edge event in May. “If you had five years to make the change before, you have three-and-a-half years now.”
Thiele is uncompromising. He thinks the role of Edge is now to facilitate automation and cut people out of the equation. “Visualize a team of executives around a virtual table. The CEO asks his or her C-suite ‘What’s our biggest constraint to how we manage through this pandemic?’ At least one person around the table will raise their hand and say ‘well, it’s
people,’ and the CEO will say, ‘yes, that’s right, people are our constraint.’”
Months after he said that, companies are starting to shed staff in large numbers. This may be more due to the economic downturn than to automation, but Thiele predicted that the pandemic will lead to a new surge in automated resources. “Edge is not taking a rubber band of old IT and stretching it out to a new place,” he said. Previous Edge initiatives have been “traditional data centers in a smaller footprint,” he said, but we need “something entirely different, as cost effective as a centralized data center - and you can assume you’ll have to do it without people.”
Not everyone takes this extreme view. For one thing, Edge is not booming across the board. Business travel and meetings are moving to Zoom, while public concerts and performances are being replaced with Netflix and Disney. But other Edge subsectors, like the rollout of major transport and smart city projects, will likely be postponed or delayed by the pandemic.
For years, organizations have been expecting to communicate with a much more distributed user base. As the reality arrives, it turns out that user base is largely at home, and that has reshaped this year’s Edge deployments, according to Dean Bubley of Disruptive Analysis. “[Compared with four months ago] there’s a lot more focus today on the fixed network as well as the mobile network,” said Bubley in a conversation he chaired at DCD’s online event Building the Edge. Where previous Edge discussions had focused on 4G and 5G mobile data, he now predicts “converged networks or even a fiber-only Edge.”
"Edge discussions focused on 4G and 5G, now there's more focus on the fixed network" 60 DCD Magazine • datacenterdynamics.com
How fast is Edge growing?
Most Edge market research was published before the pandemic got properly started, and may therefore be overtaken by reality. It also suffers from the difficulty of tracking any technology that is simultaneously “new” and “hot”: Edge is growing from such a small base that it is hard to quantify, at a rate which is pretty much guesswork.
Among various predictions, it is popular to claim a figure of around $3 billion for the size of the market in 2019, and to predict a growth rate of 30-40 percent. For instance, at the end of 2019, an outfit called Meticulous Research predicted a global market of $28 billion by 2027 based on a growth rate of 34 percent (implying a $2.7 billion value in 2019). In March 2020, Grand View Research reckoned Edge computing was worth $3.5 billion in 2019, and predicted a 37 percent growth rate, which would imply a global market of something like $32 billion by 2027.
However, treat these figures with caution. A superseded Grand View report from 2018 (still traceable in press releases) predicted a growth rate of 41 percent, but a total figure of only $3.24 billion in 2025. That suggests Grand View was then using a figure of about $290 million for the market in 2017. If it was right about the actual size in both 2017 and 2019, then the actual growth over those two years was about 250 percent per annum.
Some market research published in June 2020 makes no mention of the effect of the Covid-19 pandemic, but other research suggests a reset. Bizwit predicts a dip in deployment in 2020 and 2021 before growth resumes, leading to a market of $16.9 billion in 2025. According to researchers who divide out the sectors within Edge, hardware will make up around 40 percent of the market, with the rest divided more or less equally between platforms, services, and software.
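For readers who want to check this back-of-envelope arithmetic, the sketch below reproduces it in a few lines of Python. It assumes plain compound annual growth, and the number of compounding years is inferred from the report dates quoted above rather than taken from the underlying research, so treat it as an illustration of the reasoning, not a restatement of any analyst’s model; the function names are ours.

```python
# Back-of-envelope check of the Edge market-sizing figures quoted in the box above.
# Assumptions: simple compound annual growth, and year spans inferred from the
# report dates cited in the text (not taken from the research itself).

def implied_base(forecast_value, cagr, years):
    """Work back from a forecast market size to the implied starting-year value."""
    return forecast_value / ((1 + cagr) ** years)

def implied_cagr(start_value, end_value, years):
    """Annual growth rate implied by two market-size estimates (as a fraction)."""
    return (end_value / start_value) ** (1 / years) - 1

# Meticulous Research: $28bn by 2027 at 34 percent growth implies roughly $2.7bn in 2019.
print(round(implied_base(28.0, 0.34, 2027 - 2019), 1))   # ~2.7 ($bn)

# Superseded Grand View report: $3.24bn in 2025 at 41 percent growth implies a base of
# roughly $0.29bn in the late 2010s (the exact base year is ambiguous in the report).
print(round(implied_base(3.24, 0.41, 7), 2))              # ~0.29 ($bn)

# Jumping from ~$0.29bn (2017) to ~$3.5bn (2019) implies annual growth of roughly
# 250 percent - far above the 30-40 percent rates the same firms forecast.
print(round(implied_cagr(0.29, 3.5, 2) * 100))            # ~247 (percent)
```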
Even wireless networks have become more diverse, as small providers and indoor networks outmaneuver the big operators and carrier networks, in applications including factories and smart cities.
It’s not clear what office occupancy will be like in the next normal, but there’s a clear opportunity to develop Edge applications that help enable social distancing in offices and other premises, and which anticipate people spending much less time at desks.
“I’ve seen something of a shift from an Edge focused purely on low latency requirements, to other use cases for Edge,” said Bubley. “Things like localization of data, or even conferencing. If your company is in the city and your staff are working from home, why are your conferences hosted in a data center 1,000 or 10,000 miles away - why not have localized web conferences?”
“There’s a lot of new stakeholders involved,” he said, referring to organizations that have had to deliver services differently: “They may well want Edge facilities for themselves, their visitors, or their tenants.”
Cole Crawford, of Vapor IO, sees “standalone Edges,” where customers need data sovereignty and may localize and control the routes of their packets, either because of privacy or simply to avoid a network which is handling more traffic than ever before. With that network densification, it may become harder to guarantee the delivery of a packet from an IoT sensor in a hospital - which could be doing life-saving work - without changing the way traffic is handled.
It’s possible that for some applications the Edge will be “closer,” as usage condenses off streets and into homes, while for other applications, such as healthcare, where people are less likely to attend a hospital, the Edge will be “further away.”
As someone involved in the development of applications for the Edge, Jason Hoffman believes the shift in applications over the last few months will be significant. He’s CEO of MobiledgeX, established by Deutsche Telekom to create an ecosystem of Edge developments, and he thinks there’s a change in emphasis. “It’s early to truly understand the extent of this,” he told a DCD audience, “but things on the consumer side are either stopping or shifting.” Citizens have not been getting together in shops or for sports, so those applications are on hold. But thanks to the pandemic, there are obvious needs for new applications, to track cases and help minimize the spread of the virus. “There’s definitely a shift to an interest in consent-type systems to trace, track and locate,” he said. “We’ve gone from worrying about use cases around entertainment or consumption, to things that are closer to impacting human health.”
Is Edge deployment corona-resistant?
A lot has been said about the architectures required for Edge computing resources. Could these attributes actually help deploy them in a world suffering from restrictions designed to limit the Covid-19 pandemic?
Edge computing resources have to be autonomous and operate in a federated manner, because they will be installed in locations where there is no IT support. They have to be shipped in by a delivery truck and installed by someone who is not a tech expert, and then those systems have to set themselves up automatically by connecting to a central control system.
In theory, this sort of rollout should be more feasible during a time of reduced travel than many tech functions. Delivery services are one of the few things that have continued with little alteration during the pandemic.
Enterprises that are moving to remote working need more applications to support it - applications which can easily shift workflows as workers move around. All the routine functions of the office, such as expenses and HR, have been ripe for digitization for years, but this has been the final push to get them completely online, with as much localization as they need.
Not So Fast
Overall, Edge deployment appears to have been less explosive than the optimistic predictions of vendors a couple of years ago. This could be because Edge is based on two assertions which might not be as cut-and-dried as they first appeared. Firstly, it’s assumed that IoT, virtual reality, smart cities, and the like need Edge resources, both to minimize latency and to reduce the bandwidth cost of communications back to the cloud. Secondly, it’s assumed that because Edge resources are needed, they’ll be affordable on the revenue generated by those applications. At this stage, either of those assumptions may turn out to be an overestimate.
Edge is actually about the applications, not where they run. The key focus is on making things happen, not on how it is done. So if it turns out that a centralized data center is good enough, the application will stay there; and if it emerges that an Edge project costs more than the budget, it will be delayed.
What about cell towers?
Three years ago, DCD met a number of Edge startups plotting to put Edge resources at cell towers, normally in the form of micro data centers. Mobile networks are everywhere, and we were told Edge applications would use 4G and 5G. These containers would hold distributed colocation centers, hosting federated applications close to the wireless last-mile network.
Since then, we’ve heard little concrete news about these plans, beyond a few pilots, partnerships, and ecosystem deals. We now suspect hopes for a fast rollout of Edge at cell towers have run up against practical issues such as land leasing and electrical power at those sites. The pandemic hasn’t helped, and today’s Edge boxes are not yet as autonomous as they need to be. Also, 5G is delayed, and lockdowns are shifting traffic from mobile to broadband, so these facilities aren’t as necessary as we were told.
We expect the pandemic will be a further blow to the cell-tower model of Edge, and expect to see reduced estimates and consolidation amongst those players this year.
DCD>Edge Supplement
Edge in the Modern Age
Out now
This article featured in a free digital supplement that examines the challenges and opportunities of the Edge. As well as the impact of Covid-19, we look at how Edge relates to transport and smart cities - and how it made peace with the cloud. bit.ly/DCDEdge2020
Trustbusting
It’s time to break up big tech
If Teddy Roosevelt were alive today, he’d say two things: first, “how the hell am I still alive?” and second, “what do you mean there are three tech companies worth more than $1.5 trillion?”
The overwhelming power of just a handful of goliaths has led more than just reincarnated presidents to ask whether power concentrated in the hands of so few is a problem. One of the few bipartisan issues of this divided age has been the question of tech monopolies. There are different takes on both sides of the aisle, but the general consensus is that something needs to be done, with a recent Congressional report highlighting numerous ways big tech is hurting competitors and customers.
Most likely, following extensive lobbying and protracted legal fights, the tech giants will make small concessions and face minor restrictions. But at the extreme looms the possibility of a break-up. Perhaps YouTube will be taken away from Google, or Instagram from
Facebook. Less likely, but still not out of the bounds of possibility, is that Amazon Web Services could be forced to spin out of its e-commerce parent. It’s something that former AWS VP Tim Bray has called for in a speech to Amazon union workers.
“Why on earth should an online retailer, a cloud computing company, a smart speaker company, an organic supermarket company, and a video production company all be conglomerated into one corporate entity controlled by one person?” he asked, adding that the US needs “aggressive antitrust legislation to pry these operations apart.”
Such moves could inject more energy into the cloud space, opening it up beyond companies whose deep pockets make competition impossible. This would prove valuable for the many data center companies who have yet to win the favor of the handful of hyperscale firms dictating the future of this sector. It may also lead to more data center operators - a welcome addition to an industry increasingly dominated by giants of its own.
Sebastian Moss Deputy Editor
THE FUTURE OF DATA CENTER UPS POWER AT SCALE
A 30-minute live webinar no large-scale data center professional should miss.
17 November: San Francisco 12.30pm PST | Houston 12.30pm CST | New York 12.30pm EST | London 12.30pm GMT
18 November: Frankfurt 12.30pm CET | Moscow 12.30pm MSK | Dubai 12.30pm GST
19 November: New Delhi 12.30pm IST | Beijing / Hong Kong / Shanghai 12.30pm CST | Singapore 12.30pm SST
20 November: Tokyo 12.30pm JST | Sydney 12.30pm AEDT
www.piller.com/en-US/piller-webinars
Nothing protects quite like Piller
Piller will be donating US$20 to the WHO COVID-19 Solidarity Response Fund for each attendee.
www.piller.com
DISCOVER THE LATEST IN PDU INNOVATION
PRO3X PDUs MORE BENEFITS IN A SINGLE PDU SOLUTION
JUST SOME OF THE PRO3X PDU BENEFITS:
• Improve uptime with mechanical locking outlets
• Increase flexibility with outlets accommodating both C14 and C20 plugs in a single outlet
• More intelligence from a technology platform that enables complete power chain visibility, remote management, advanced alerting and security
• Improved airflow and load balancing with alternating phase design
STAY POWERED AND ENSURE UPTIME WITH INDUSTRY LEADING INNOVATION
Learn more about the PRO3X PDU at: https://www.servertech.com/solutions/pro3x-pdu-solutions
www.servertech.com | sales@servertech.com | 800-835-1515