Issue 40 • April 2021 datacenterdynamics.com
HOW DATA CENTERS survived the Texas storm
Diesel deliveries and mutual aid kept the Internet up in Texas. But there are echoes of 2012’s Hurricane Sandy in New York. Will we learn the lessons?
The Edge frontier
New applications need fresh thinking
SPAC attack
Data centers are going public, but should they?
Digital builder
What next for the man who built a $39bn firm?
Don’t build here
Treat communities right, or face the consequences
ISSN 2058-4946
Contents April 2021
6 News
Fire destroys an OVHcloud data center, while other facilities are hit by terrorism, storms and drought. We round up a tough three months

12 Surviving the Texas storm
Arctic conditions and a failing grid knocked out America's second-largest state. Data centers prevailed... just

14 Reliving Sandy
How New York data centers coped with the 2012 storm
20 The CEO interview
"I spent three years trying to get us to merge with Equinix, but it fell apart." Scott Peterson tells us his part in the building of Digital Realty, and his plans for Global Compute Infrastructure, his Goldman-backed venture
23 When will the SPAC bubble burst?
Special purpose acquisition companies float quickly. But watch out!

28 Edge computing for health
Don't prescribe every sector with the same course of treatment

30 Bringing the Edge down to size
Edge applications are different to cloud, and need different hardware
34 The Edge of Mars
NASA's Perseverance rover is the most extreme Edge application ever

37 Is OpenRAN in the running?
5G needs open radio standards, but OpenRAN is still being finished

41 Just how resilient are satellites?
They're integral to our world, but can we depend on space technology?

46 The space between satellites
LyteLoop has a plan to store data cheaply in orbit. Are they serious?

52 Data center NIMBYism
The way to head off local opposition is to engage with communities
56 The next crisis is coming
After the Texas storm and the OVHcloud fire, do you still feel lucky?
From the Editor

If it can go wrong, it probably will
Since our last issue, our industry has been dealing with disasters. Winter storms battered Texas, and then OVHcloud's data center in Strasbourg, France burnt down. In the meantime, terrorists struck digital infrastructure in the US, and drought hit chip supplies from Taiwan. Data centers are built to be resilient, however, and only one of those incidents has impacted users severely. OVHcloud customers are counting the cost of poor backup. Read more on the DCD site, and in our news pages (p6).
"The problem in Texas wasn't Mother Nature, but humanity's failure to prepare"

Learning from disaster

February's Arctic conditions in Texas stretched the preparedness of data centers, but most had enough diesel fuel to keep working (p12). Disaster isn't new, so we spoke to veterans who were there when New York's data centers faced a battering from Hurricane Sandy (p15). With this experience, why is it that in 2021, one of the richest parts of the world has an electricity grid, vital to society, which is not adequately resilient (or "winterized")? The data center community is not shy of pointing out that disasters needn't have such extreme consequences: "This isn't a Mother Nature story, it's a lack of preparedness story."
Money to spend

In the last year, the whole world has faced a much greater economic and social disaster, in the Covid-19 pandemic. But it brought a boom in online working that has led to greater investment than ever. That's an opportunity for Scott Peterson. He helped build Digital Realty to a $39bn behemoth, but never quite scored the big merger - with Equinix. Now he's got Goldman Sachs backing to build another data center company (p20).
18 flights
Stairs climbed by Peer 1's bucket brigade after Hurricane Sandy in 2012. For sixty hours. (p15)
The Edge... of space

Once more, this issue, we look at the prospects for Edge (p25). Edge computing is not just a new kind of localized cloud. Edge resources will have to be carefully designed for different sectors such as healthcare (p28), they'll need special hardware (p30), and standardized networks (p37). NASA's Perseverance rover on Mars is the most extreme Edge application ever - but it's a specialized system developed on old tech that's been ruggedized (p34). And talking of space, satellites are becoming more important to digital infrastructure. That means concern over their reliability (p41). It also creates opportunities for new storage techniques (p46).

Community service

The Texas storm may teach us one thing. Data centers offered shelter to customers in weatherproof offices. That kind of spirit needs to be more obvious. Data centers need to win over communities as the Edge reaches residential areas (p52).

Peter Judge
DCD Global Editor
Meet the team
Global Editor Peter Judge @Judgecorp
Deputy Editor Sebastian Moss @SebMoss
News Editor Dan Swinhoe @DanSwinhoe
Head of Partner Content Graeme Burton @graemeburton
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Designer Dot McHugh
Head of Sales Erica Baeta
Conference Director, Global Rebecca Davison
Conference Producer, APAC Chris Davison
Conference Director, NAM Kisandka Moses
Chief Marketing Officer Dan Loosemore
Head Office DatacenterDynamics 22 York Buildings, John Adam Street, London, WC2N 6JU
PEFC Certified - This product is from sustainably managed forests and controlled sources. PEFC/16-33-254 www.pefc.org
Dive even deeper
Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color, shown below.
Training | Debates | Intelligence | Events | Awards | CEEDA
© 2021 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.
Whitespace: The biggest data center news stories of the last three months

NEWS IN BRIEF
Biden signs executive order for 100-day review of semiconductor shortage
Says he will "push" for $37bn in funding as automakers idle plants amid a global semiconductor crisis, but held back on concrete proposals.

FCC seeks to exclude three more Chinese telcos from US networks
The FCC will revoke China Unicom Americas', Pacific Networks', and ComNet's authorization to provide telecommunications services to the US.

Federal Reserve interbank payment system suffers outage, disrupting crucial piece of US economy
Usually used to process more than $3tn every day, the system was down for hours due to an undisclosed "operational error."
Fire destroys OVHcloud's SBG2 data center in Strasbourg
SBG1 also badly damaged, customer data lost

OVHcloud's SBG2 data center in Strasbourg was destroyed by a fire in the early hours of March 10. Thankfully, no one was hurt, but the blaze also damaged the adjacent SBG1 facility - as did a separate smaller fire at the site a few days later.
"We don't plan to restart SBG1. Ever," company founder Octave Klaba said after a week trying to fix the facility, although some servers are being cleaned in the hope they can be saved. Efforts are currently focused on restoring SBG3 and 4, as power cables to the data centers were damaged. Other issues have also arisen, with SBG3's elevators not working.
More than 100 firefighters were able to control the initial blaze after six hours of tackling the disaster. The cause of the fire is yet to be disclosed, with police and insurance investigations ongoing. Klaba appeared to suggest the cause was the UPS systems, which were serviced the day before by their manufacturer. In early April, firefighters were again called to the site, due to apparent issues with a UPS at SBG3 - but the incident was quickly resolved and the system restarted.
With all the fires, Klaba announced that the company would set up a lab to explore how fire prevention systems work (or don't work). He also said that customers would receive free backups of their data in the future. Both moves will be little comfort to those whose servers went up in smoke, many of whom lost data completely, or suffered extended downtime. Among those impacted was the dystopian video game Rust, which lost 25 servers holding game data. The data centers were also used by malware businesses, with the fire temporarily cleaning up some of the Internet.
The incident is one of the largest single disasters to impact the data center sector, and is likely to be studied for years as others try not to suffer the same fate. But data center fires are not wholly new - for more, see DCIRN's overview of similar events on page 11. bit.ly/SmogComputing
Google Cloud loses $15bn in three years, counts it as a success
"We continue to invest strongly in the business given the momentum we are seeing," CEO Sundar Pichai said in an earnings call. The business trails the highly profitable Amazon Web Services and Microsoft Azure.

Atos and HDF Energy plan hydrogen-powered data center for 2023
The two companies will build a "full production data center" powered by hydrogen in 2023 as a test bed to drum up business. "The goal is to make the data center energy-independent from the grid, and supplied 100 percent by decarbonated energy," Atos told DCD.

NOAA data center floods, sinks buoy data
A National Oceanic and Atmospheric Administration data center used to process data from marine buoys has suffered an outage caused by a burst pipe. NOAA's National Data Buoy Center was brought offline by the flooding, knocking out a critical early warning system and data point for mariners, as extreme weather events increase in frequency.
AWS warns employees as right wing extremists threaten to bomb data centers

Amazon has warned data center staff to "be vigilant" amid threats to facilities and Internet infrastructure, following the company's decision to deplatform right wing social media site Parler. When AWS announced it would drop Parler due to violating its ToS, users threatened the company. One wrote "It would be a pity if someone with explosives training were to pay a visit to some AWS Data Centers - the locations of which are public knowledge." bit.ly/Stayvigilant
Nashville bombing caused fires and floods at AT&T facility

The Christmas morning Nashville bombing disrupted wireless services, including that of the First Responder Network Authority. The suicide attack by Anthony Quinn Warner was carried out directly in front of AT&T's central office facility, but it is not known if the telecoms company was intentionally targeted.
The explosion caused significant damage to the AT&T building on 2nd Avenue, although thankfully no one was killed beyond Warner, as he had issued a warning before his attack. The bomb damaged multiple floors, with the explosion tearing through the building's façade, beams and columns, and elevators. Commercial power connections were severed, while two local water mains were destroyed, causing severe flooding in the building's lower floors.
The building was evacuated on Friday, December 25, and again the next day when a fire reignited overnight. Technicians drilled access holes into the building to reconnect power to critical equipment via external generators on the 26th, while technical teams began rerouting services to other facilities. By the 27th, power was restored to four floors, and three feet of water was pumped out from the basement, but it was still deemed off-limits. By Tuesday, December 29, the local utility had restored commercial power to the building. The facility is still undergoing repairs. bit.ly/datacenterterrorism
NYPD: White supremacists and conspiracy theorists are targeting cell towers

Conspiracy theorists and far-right white supremacist groups are "increasingly targeting critical infrastructure to incite fear, disrupt essential services, and cause economic damage within the United States and abroad." The warning, published in a New York Police Department intelligence report seen by The Intercept, comes after several attacks have already occurred.
On December 14, 2020, someone broke into a cellphone tower ground station in West Virginia. There, they severed the main power cable and removed the primary and back-up generator batteries. This impacted wireless coverage throughout West Virginia, Pennsylvania, and Maryland, and led to damages of more than $28,000. Five days later, someone cut fiber optic cables, and damaged equipment at a cell tower site in Decatur, Tennessee. A third cell tower attack occurred in New York.
The NYPD also detailed a neo-Nazi group whose "members strongly supported exploiting civil unrest in the United States by attacking the country's infrastructure." bit.ly/TheNewFront
Quantum computing startup IonQ announces $2bn SPAC merger

Quantum computing startup IonQ has announced a $2bn merger with a Special Purpose Acquisition Company. The startup has entered into a merger agreement with dMY Technology Group, Inc. III, to become the first publicly traded quantum computing company. The combined company is expected to be valued at around $2 billion and IonQ shares will trade on the NYSE under "IONQ."
"This transaction advances IonQ's mission, to solve critical problems that impact nearly every aspect of society," said Peter Chapman, CEO & president of IonQ.
SPACs are 'blank check' shell companies that list on a stock exchange and then acquire or merge with an operating private company. This route to the stock market is often quicker and involves fewer steps than a traditional IPO - and much less regulatory oversight. For more on how SPACs work, and which data center companies have turned to them, check out our explainer on page 23. bit.ly/SPACtacular
UK Gov to choose Microsoft for £1.2bn supercomputer, despite Atos lawsuit
Cloud company is the surprising winner of a weather system contract

The UK Government has denied breaking the law in awarding Microsoft a £1.2 billion contract for the Met Office's new supercomputer.
Last year the Met Office announced it would spend US$1.56bn on building the world's most powerful supercomputer dedicated to weather and climate. Within the total contract, the supercomputer itself is expected to cost £854 million ($1.2bn).
In February, Atos filed a lawsuit against the Met Office and UK Government, claiming procurement laws were breached in awarding the contract to Microsoft. Law360 reports the secretary of state for the Department for Business, Energy and Industrial Strategy and the Meteorological Office have dismissed all allegations brought against them by Atos, arguing that the French company's proposals did not meet their requirements.
After the case was heard in court last month, a Met Office spokesperson told DCD that the Court had lifted the injunction, allowing it to proceed with its supercomputer procurement. "We will continue to robustly defend our selection decision at any future hearings and remain confident that any issues can be worked through," said the spokesperson.
Microsoft wouldn't confirm or deny at the time whether it had won the tender, only advising DCD to contact the Met Office. The award would be one of the first public supercomputing procurements the cloud provider has won. bit.ly/TheCloudIsTheNewHPC
Peter's supercomputing factoid
Microsoft built a cloud supercomputer for OpenAI back in 2020, after investing $1bn in the startup. If it had competed in the Top500 ranking, it would have come in at number five.
AWS to build at site of UK's Didcot A Power Station

An unnamed company has proposed building two large data centers on the site of the former Didcot A Power Station, in Oxfordshire, UK. DCD can reveal that the firm, operating under the pseudonym Willow Developments LLC, is actually Amazon Web Services. A combined coal and oil power plant, Didcot A opened in 1970, but was demolished between 2014 and 2020. At peak, it was capable of generating 1,440MW.
Ahead of the release of the new planning documents, Willow has begun a charm offensive, with over 3,000 leaflets sent to nearby homes by public affairs company Tristan Fitzgerald Associates, working on behalf of Willow. bit.ly/TurningCoalIntoData
EdgeConneX and Adani form 1GW data center joint venture in India
AdaniConneX aims for India's Edge and hyperscale
Adani Group and EdgeConneX have formed a new joint venture named AdaniConneX, and aim to construct 1GW of data center capacity over the next decade. The Indian conglomerate and US data center firm will take an even split in the venture, which will develop and operate
Edge and Hyperscale data centers throughout India. The new JV will build a network of hyperscale data centers across India, starting with the Chennai, Navi Mumbai, Noida, Vizag and Hyderabad markets. Development and construction at these sites have already begun.
AdaniConneX will also construct a number of Edge data centers throughout India, with these locations reportedly designed to scale with demand and become full scale data center campuses if needed. Both the hyperscale and Edge data centers will mostly be powered by renewable energy. “In Adani, we have the ideal partner in India,” said Randy Brouckman, CEO of EdgeConneX. “They possess the necessary capabilities and unique expertise in India required to build out critical digital infrastructure that can best support our customers across the entire country. We look forward to investing in the digital economy of India and meeting our customers’ needs throughout the region in collaboration with Adani.” Jeyakumar Janakaraj, previously CEO of Adani Global Singapore, will become CEO of AdaniConneX. bit.ly/ConnexingUp
Iron Mountain forms Indian joint venture with Web Werks
Will expand into Bangalore, Hyderabad and Chennai

Iron Mountain has formed a joint venture with Indian data center firm Web Werks. Web Werks currently operates carrier-neutral data centers in Mumbai, Pune, and Delhi NCR, totaling 225,000 square feet (20,900 sq m) and 4MW of capacity. Iron Mountain expects to invest $150 million over the next two years, which will enable Web Werks to expand its operations in its three existing markets and subsequently expand into Bangalore, Hyderabad, and Chennai.
"This investment reflects Iron Mountain's commitment to invest in high growth, good return global markets to continue to meet our customers' requirements. The India data center market is projected to grow rapidly in the coming years and we are excited to be an early mover into a market where the demand is high and the supply is low," Mark Kidd, EVP and GM of Iron Mountain Data Centers, said.
"Joining forces with the Iron Mountain Data Center team will further solidify Web Werks' leadership position in the pan-India region and among the broader set of global customers," Nikhil Rathi, Web Werks CEO, added.
The deal is one of a number in the growing Indian market: NTT recently acquired six acres of land to construct
a 70MW data center in Noida, STT GDC is working towards building a 36MW campus there, and Yotta Infrastructure is developing a 20-acre hyperscale data center park in the area too. Reliance Jio is also planning a $950 million data center campus in Uttar Pradesh. bit.ly/WerkHardPleyHard
Taiwan cuts water supply to TSMC and Micron, but chipmakers say they have sufficient reserves

Taiwan has issued a red alert on its water supply, as reservoirs plummeted to dangerously low levels. The government will reduce water supply by 15 percent to two science parks in Taichung, home to chip fabs operated by Taiwan Semiconductor Manufacturing Co. (TSMC) and Micron Technology. TSMC said that production would not be affected and that it would ramp up efforts to truck water in. Micron declined to comment.
Economics minister Wang Mei-hua previously said that Taiwan would be able to keep its tech companies going on water reserves until the seasonal rains in May. The country is experiencing its worst drought in more than half a century, with low rainfall and zero typhoons making landfall in 2020. A warming planet has led to high-pressure zones merging in the upper atmosphere over the Pacific and Southeast Asia, reducing landfall. bit.ly/ThirstyChips
Renesas semiconductor fab catches fire, impacting chip production
No casualties in five-hour clean room blaze; cause unknown

A fire broke out at a Renesas Electronics Corporation semiconductor factory on March 19, 2021. The fire, which started in the clean room, was extinguished after 5 1/2 hours. There were no casualties, and no damage to the building frame. 300mm wafer production at the N3 Building has been halted.
The fire broke out on the first floor of the N3 Naka Factory, in Hitachinaka, Ibaraki Prefecture at 2:47am local time. It was extinguished at 8:12am. An issue with plating equipment was to blame - the casing of the equipment and the plating tank have a relatively low resistance to heat, and the equipment ignited due to overcurrent. Why the overcurrent happened is yet to be determined. Some utility equipment was damaged, including the pure water supply and the air conditioning. About two percent of the fab's manufacturing equipment was damaged.
The area burned totaled 600 sq m (6,460 sq ft), around 5 percent of the entire clean room. “We would like to give our sincerest apologies to neighboring residents, customers, partner companies, relevant authorities and all those involved for the trouble,” the company said. The fire is expected to exacerbate an already difficult global semiconductor shortage. About two-thirds of the affected production were automotive chips. Car companies previously idled plants due to limited chip supplies, reducing the output of vehicles. The damage is expected to halt production for at least a month, but some of the lost production is expected to be taken up by other fabs due to Renesas’ diversification efforts following the 2011 Tohoku earthquake and tsunami. In addition to the cost of replacing equipment, the company expects to lose 17 billion yen ($156m) a month due to the reduced production. bit.ly/HotChips2021
Samsung shuts Texas S2 fab amid storm

The temporary shutdown of Samsung's Austin-based S2 foundry may have reduced around 1-2 percent of the global 300mm wafer supply for the month of February. The facility was forced to cease production due to rolling power outages enforced by utilities during Storm Uri, as natural gas supplies froze over.
A TrendForce analysis of the shutdown notes that the facility produces about five percent of the global supply of 300mm products, primarily at 14nm and 11nm nodes, as well as some 65nm to 28nm chips and automotive semiconductors for Tesla and Renesas. With Samsung shutting the site down slowly, after being given sufficient warning, it is not believed that any supply was damaged. Instead, the production was simply delayed. But the outage - which also impacted NXP and Infineon fabs - will add additional strain to an industry struggling to meet demand. bit.ly/FrozenChips
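Those two figures hang together. A rough back-of-the-envelope check - assuming the fab produced nothing while shut, and taking February at 28 days - suggests the 1-2 percent monthly loss implies roughly one to two weeks offline:

```python
# Rough consistency check on the TrendForce figures (assumptions: the
# fab was fully offline while shut, and February has 28 days).
global_share = 0.05                  # S2's share of global 300mm output
lost_low, lost_high = 0.01, 0.02     # reported loss of February supply

days_low = lost_low / global_share * 28    # ~5.6 days
days_high = lost_high / global_share * 28  # ~11.2 days
print(f"implied outage: {days_low:.0f} to {days_high:.0f} days of production")
```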
How frequent are data center fires - and how damaging?
Dennis Cronin, CEO-Americas for the Data Center Incident Reporting Network (DCIRN), looks at a history of fire

With the recent total loss of two data centers (single site) due to a fire at a cloud provider in France, the question arises: how frequent, and how damaging, are data center fires?
To answer this question, we started roving the Internet to see how many published incidents of data center fires there were and what insights we could glean from those reports. All in all, from June of 2003 to March of 2021 (18 yrs.) we were able to identify 31 data center fire reports. The challenge was that details and specifics as to the causes, durations and impacts were difficult to come by. It seems that they only make the news when there is a fire or reports of a fire. If the causes are not readily known, follow-ups with more specific details seldom occur, or the details are buried behind non-disclosure agreements (NDAs) prohibiting all from speaking out.
One would also expect, with the loss risk being so large, that the insurance companies would demand data center operators take greater precautions and share experiences to teach better risk mitigation. Further, data center operations are inherently opposed to sharing information even if it means preventing future incidents. This culture of not sharing information on common issues made our research a bit more difficult and limited.
Here are a few stats of what we found:
• 65 percent (20 of the 31) were confirmed as real fires
• 23 percent (7 of the 31) were reported as fires, however there was no confirming data
• 6 percent (2 of the 31) were at new data centers under construction
• 3 percent (1 of the 31) was construction dust setting off a live data center's fire suppression system
• 3 percent (1 of the 31) was an interruption due to a fire suppression system test
Because of the industry's secrecy there is no doubt that there were many more data center fire incidents that did not make the news, but even so, we were able to identify
an average of 1.5 major fires per year. Even more telling of the severity of your typical fire (excluding the recent incident in which two data centers were lost), the downtime averaged 17.5 hours due to fire incidents. And that was the time taken to get the facility operational - exclusive of the IT reboots that followed. So, 1.5 data centers out of thousands of data centers around the world does not seem like much - unless, of course, it is yours. If it is, are you prepared for 17.5 hours of downtime before you can reboot servers? Don't forget the 17.5 hours assumes the servers are not damaged by heat, soot, water, or fire. Then there are other factors, like the data loss due to the hard crash when the power suddenly goes away.
Interestingly, of the incidents where the time the fire started was provided, only 27 percent were in the PM hours. The other 73 percent started in the morning hours, where all but one started between 9:00 AM and 11:00 AM local time. This we find interesting because everyone expects incidents to happen on the graveyard shift when no one is around. Perhaps this statistic reinforces the need to keep people out of data centers, because things always happen when people are around. While we did not find any contributing factors related to human activities, the inference remains strong. Perhaps when more details are shared in the future, we will eventually be able to establish a statistical correlation.
Another factor we were able to look at was the frequency by year. Curiously, we
found years 2011, 2014, 2015, and 2018 all tied with four fire incidents each. More importantly, since 2011 we have never gone more than three years with fewer than four fire incidents. If that holds true, then we are due to have two more fires by the time 2021 ends. Certainly, this is not something to look forward to.
To be real about all these facts and figures, the datasets are just not sufficient to be statistically reliable. For that we need much more data to achieve the degree of confidence that statistical analysis requires and assure the data is not skewed. Achieving these goals will require the industry to start opening up and sharing common non-proprietary operational data providing insights to operational improvements. As data center incidents have more and more impact on people's daily lives, the industry will have to stand up and contribute to this process or risk intervention by multiple governments with differing reporting requirements.
Leadership by example is a powerful tool and we need more everyday leaders like Octave Klaba, who can stand up in the face of disaster, muster his team to deliver herculean efforts in restoring services to their clients and commit to full transparency of the factual causes that led to the total loss of two adjacent data centers to a fire. We will learn much from this incident, but this is only one data center site. The industry can learn so much more by sharing common data across all data centers.
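The arithmetic behind those headline figures is simple enough to check. A minimal sketch, using only the counts quoted above (the underlying incident list has not been published, so this is illustrative only):

```python
# Reproducing the arithmetic behind DCIRN's figures. Counts are taken
# from the article above; the incident list itself is not public.
counts = {
    "confirmed as real fires": 20,
    "reported as fires, no confirming data": 7,
    "at new data centers under construction": 2,
    "construction dust tripped suppression": 1,
    "fire suppression system test": 1,
}
total = sum(counts.values())   # 31 reports, June 2003 to March 2021
years = 18

for label, n in counts.items():
    print(f"{label}: {n} ({n / total:.0%})")
print(f"total reports: {total} over {years} years")

# One reading consistent with the '1.5 major fires per year' figure is
# to count the 27 confirmed-or-reported fires: 27 / 18 = 1.5.
print(f"major fires per year: {(20 + 7) / years:.1f}")
```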
Surviving the Texas storm
A record storm and a failing grid knocked out America's second-largest state. But data centers prevailed. We hear the stories of those that were there

Sebastian Moss, Deputy Editor
When a record winter storm hit Texas earlier this year, chaos reigned. A poorly maintained grid was unable to cope with freezing temperatures, which led to rolling blackouts and closed roads, bringing the state to its knees for an extended period in February. As residents huddled at home and industries closed, data centers mostly were able to persevere despite the sudden storm. How they managed to keep online proved to be a lesson in the value of preparation - and in luck.
"We were watching what local weather forecasts were saying about a storm coming in, and that there would be some cold temperatures," Digital Realty's Dallas/Fort Worth technical operations manager Benny Furtick remembers.
"The interstates were shut down on the south part of Texas coming in from Louisiana to Houston... That night, another coating of ice came in." "Our guys have been through severe weather before, so they know what that looks like, whether it be a hurricane or ice and such, but I think what surprised us was how widespread it was." When the storm hit on a cold Sunday evening in February, it blanketed the entire state and crept into neighboring ones. "As it started to hit, we understood pretty quickly how big this was," he said. "Our teams rose to the occasion, and got through it with hardly any issues." Other businesses were less fortunate, with around 80 percent of the state's
chemical production brought offline, along with chip manufacturers, and other factories. Data centers, however, are built with a preternatural distrust of the grid, as even brief moments without power can bring things crashing to a halt. Each provider DCD spoke to said they had at least 48 hours of diesel fuel on site. "And we wanted to make sure that we kept the fuel coming," Furtick said. "Our agreement with [our supplier] is that they will have a truck at whichever site we need within a 24 hour period. And then after that
they just keep the trucks coming." This worked for Digital, which experienced grid power outages in bursts of 10-14 hours at some sites in Houston, and 6-12 hours at others in Dallas. "And then we have properties in Dallas and Austin that never even saw a flicker."
It was not a stress-free experience, however. With power out across Texas, diesel suppliers could not pump their fuel into trucks. Instead, deliveries had to come in from out of state. "[But the] interstates were shut down on the south part of Texas coming in from Louisiana to Houston," Furtick explained. "We didn't get to a point where we're running out of fuel, but the 24 hour period was getting pretty close to time's up by the time the trucks arrived."
Equally, when the company realized it had more than enough supply in Houston, it switched deliveries to Dallas. "That night, another coating of ice came in, and the interstate between Houston and Dallas became treacherous." The trucks drove at around five miles an hour, slowly crawling to their destination.
While the trucks ultimately arrived, it's not hard to imagine a slightly worse storm that might shut down more roads and make supply harder - if not impossible. "There's a version of this scenario where that did happen," Akamai's Americas VP of network infrastructure Todd Lawrence said. "And that's where you start to get worried about when your next delivery is."
Netrality's COO Josh Maes agreed: "These sorts of winter storms have more risk than I think we all originally anticipated. We didn't need to bring fuel in, but anecdotally I think we learned that fuel could be an issue." The company was fortunate to have a large on-site supply, which "is a really big value add that we didn't appreciate the full extent of," Maes said. Its 1301 Fannin facility has 65,000 gallons of fuel on-site which can support the building for 7-10 days.
It's not clear how much supply every provider had, and how quickly they were running out. "Nobody wants to say 'we're this close,'" FiberTown's VP and business unit manager Anthony Froelich said. "Someone with a deployment at Equinix in Dallas was telling us they were down to the last eight hours of fuel."
"We had heard that Equinix was getting down to about 12 to 16 hours of fuel, and not providing good information," Akamai's Lawrence said. "I think there was a miscommunication there, but we started to get very nervous and started to figure out how to de-risk by moving applications, moving traffic around."
The facility stayed online, like most of the industry, but it highlighted a crucial flaw
"If all the interconnectivity is in a few hands and there's a disaster, then the Internet is fundamentally screwed." in our fragile network ecosystem. “The stark reality is that on the Internet, particularly in the US, there's a few facilities that have a lion's share of the interconnectivity,” Lawrence said. “And so if we have a critical failure, it won't matter where my servers are. If all the interconnectivity is in a few hands and there's a disaster, then the Internet is fundamentally screwed. A truck bomb in [the right] nine facilities in nine cities going off simultaneously would cripple the US economy.” Akamai is working on taking control of its own backbone, and its own interconnection, but it - like everyone else - is reliant on the interconnected nature of the Internet. Unlike data center operators, it is also reliant on the companies it hosts in. As the years have progressed, and the number of 'once in a 100 years' disasters has increased to a steady drumbeat, Lawrence said that the company's focus on solid fuel
preparations has steadily crept up. "We have eliminated people due to poor responses or lack of infrastructure related to not only how they store the fuel, but how old their equipment is, how often they maintain it, and how strong their fuel supply contracts are." Froelich concurred, saying that some customers care more now about data center resiliency plans, and want details on the age of equipment and how they hope to keep them running. His company, which operates a data center in Bryan/College Station and leases a Digital Realty data center in Houston, was also fortunate with the grid. "In Bryan, we were never asked to go off of utility power and run because we had 911 operations and other critical functions happening out of the site," he said. "Their call center was down so we have an emergency one for them below the data center."
Another customer was an energy supplier, which relied on the facility to help get its natural gas to producers. "That's one thing that I've tried to bring back to our network operations teams - the emergency operations team that came and were sleeping overnight in the little bunk below the data center, and in their call center, coordinating rescues of people that were trapped in homes or elderly that had no heat before they froze to death.
"I want the team to realize that if our data center wasn't up and running, none of [the emergency workers] would have been able to do what they needed to do, which was saving lives."
The Texas Advanced Computing Center also wanted to help. Classed as critical infrastructure, its power was prioritized, but the center decided that it was more ethical to reduce its load to help reduce strain on the grid. "We never actually had our circuits turned off, but we do draw an enormous amount of power," TACC's executive director Dan Stanzione explained. "And we knew hundreds of thousands of residents didn't have power at home, so we started shedding load on a Monday morning, and then eventually turning stuff off as it went idle over 24 hours, and we stayed that way until Friday morning." The supercomputer usually consumes about 6MW (9MW at peak), but was brought down to less than 1MW.
A supercomputer center with a different view on uptime than commercial operators, TACC does not have diesel generators. Commercial facilities mostly did not go off the grid without being forced, with the exception of Evoque, which confirmed it voluntarily ran on a generator for 15 hours at its Allen data center. "Our clients saw no interruption in uptime during that time," Drew Leonard, VP/Strategy at Evoque, said.
While it sought to shut down as much as possible, TACC could not risk fully switching off. "We do chilled water storage, about a million gallons," Stanzione said. That's hard to freeze, but some of it is in the pipes that span the data center. "As long as we can just keep it circulating," it will be okay. The site has three chillers and numerous redundant pumps. "We just left one pump running a little bit and kept the water moving through the pipes," he said. "Had we actually turned off the chilling
plant completely…” he trailed off. “Our facility didn't suffer burst pipes but another University of Texas at Austin building across the street from us did.” Each operator found that the data centers provided much needed shelter to staff and customers. “From a preparedness standpoint, we had folks who weren't planning to be at the data center, but they got stuck there,” FiberTown’s Froelich said. “So we didn't want them leaving, because you're taking your life in your own hands at that point. In future we would prepare for more people, to ensure greater comfort.” Digital Realty was asked by the city of Lewisville if they had spare space that was warm. "We happened to have an office-type facility that a customer moved out of, and we made it available," Furtick said, although ultimately the city never used the space. TACC also saw an influx of people looking for warmth, Stanzione said. “We had more people than normal in the building because it had power and water and a lot of people's
"We knew hundreds of thousands of residents didn't have power at home, so we started shedding load on a Monday morning, and then eventually turning stuff off, and we stayed that way until Friday" 14 DCD Magazine • datacenterdynamics.com
houses did not. We have a couple of showers on site, and I think somebody came in to do dishes at one point."
Others weren't so fortunate. Nearly 70 percent of those served by the state's main power grid, ERCOT, went without power at some point during the subfreezing temperatures of Storm Uri, while almost half had a water outage, a University of Houston study found. Outages lasted on average 42 hours. For those sheltering at home, such outages were sometimes fatal, with at least 111 people thought to have died.
"There's critical infrastructure in the state that is not built to withstand the extremes of weather that, frankly, should be expected," Stanzione said. "This wasn't a natural disaster, this was a few days that were cold," he added, pointing to cities like Chicago that easily handle similar events. "This was self-inflicted."
Studies, lawsuits, and eulogies will unpick exactly what went wrong that week in February, but at a high level an unregulated grid failed to enact basic winterization or redundancy measures, despite warnings - leaving it open to collapse. "Utilities act in response to the incentives that are there," Stanzione said. "If there are no incentives to invest in common good infrastructure, then we're going to have issues.
"This isn't a mother nature story, this is a lack of preparation story."
Here We Go Again
Reliving Sandy
How data center operators coped during the record storm, and what lessons we still need to learn

Sebastian Moss, Deputy Editor
"We really weren't afraid going into it. In fact, we were live streaming it. It was exciting… up until the point where the surge hit."
Hurricane Sandy was merciless. In 2012, the terrifying storm killed hundreds and cost billions across eight countries. In New York's financial district, home to a cluster of data centers, it proved equally devastating.
"I would do data center tours all the time, and I would show off about our redundant fuel tanks, redundant pumps, wired to separate power sources, and the generators and all that. It sounded really good," Alex Reppen recalls. "But when the storm hit our basement it dislodged our fuel tanks, it was so powerful that the rush of water coming in through the grates on the street was enough to rip the bolts out of the concrete."
At the time the founder and CEO of Internet service provider Datagram, based at 33 Whitehall, Reppen got to watch his flagship data center falter before his eyes. It took down big name sites including Gawker, Gizmodo, Buzzfeed and Mediaite.
Due to regulations enacted after 9/11, New York data centers keep their diesel fuel in the basement, and their generators on the roof. This is fine in normal outages, but proved disastrous in Sandy. With the Datagram site offline, "on the evening of the surge, we were in the streets with our electricians building makeshift pump controllers on plywood and holding them up 12 feet up in the air to keep them dry," Reppen said. "At the same time, our building was working on pumping out the basement which took a very long time. We really didn't even know what we were dealing with - a couple of guys went down in the water, and when we found that it was unsafe we were forbidden from going back down there until it was pumped out." Once it was finally pumped out, they
discovered that the building's electrical riser was destroyed from the bottom. "So we had to run cable up 25 stories just to get to a street generator," Reppen said. "Once the street generator arrived, we put it on the new main riser that we built. And then that generator failed because it had dirty fuel." Fuel vendors, seeing opportunity in crisis, sold whatever they had on hand. Heating fuel was sold as generator fuel. "We got new generators, we had about three generators at one time on the street. I was down there with buckets of fuel trying to filter them out and it was a nightmare," Reppen said. For Datagram, the disaster did not alleviate as the storm ebbed. The sheer scale of the damage to the building and - Reppen claims - a slow and difficult landlord, meant that the company ran its data center off of diesel for three months. "They just took their time
"We had to run cable 25 stories and put a street generator on a new main riser that we built. Then the generator failed because it had dirty fuel" Issue 40 ∞ April 2021 15
fixing that electrical riser, which you don't realize how dependent you're on until it gets flooded."
Running off of a generator for so long was a deeply exhausting and fraught experience, he recalled. "I lived like six blocks away, and I remember even a month later getting the alarm at three in the morning, and then I'd be sprinting as fast as I could.
"You'd be looking to see if every light was off. It was just such a nightmare," he reiterated. Many of the staff slept at the data center helping keep it going, or communicating with customers. "We didn't have customers immediately canceling," he said. "We had little $29 and $100 a month customers screaming bloody murder and canceling, but the customers who paid $1,000s and tens of $1,000s a month were fine. They understood that this is a disaster."
Adding to the pressure, Datagram was in the midst of potential sale discussions, ultimately being acquired by SingleHop in 2015 (the site is now owned by INAP). "We probably lost tens of millions in potential valuation," Reppen said. "We also lost a ton of money just paying for diesel fuel, paying for customer credits, paying for moving customers. Building up a 25 story copper riser alone was $400,000-500,000, just for the copper, which then the building got to keep."
The company ultimately sued the building in "a five-year lawsuit that really went nowhere," he said. "We sued our insurance company and went into arbitration. [Mayor] Bloomberg reached out to us and offered us help and assistance, but it turned out to be nothing. It really was not great."

The legendary bucket brigade

Datagram was not alone in its struggle. The chaos took out other providers in Manhattan: Internap, Steadfast Hosting, Init7, and Cogent, among others. A few managed to survive, notably one which turned to a now-legendary solution - the bucket brigade.
Less than 200 yards away from Datagram, Peer 1 had a similar issue in a carrier hotel at 75 Broad Street. Most of its fuel storage was underwater, except for a small diesel tank by its generator. Thanks to Peer 1's small size (400kW), this equated to about six hours of data center runtime. So, the company figured, as long as it could get the diesel fuel up to the generator
it would be able to keep running. At first, the team tried to carry 55-gallon diesel drums on hand trucks one flight at a time - for 18 flights. "That wasn't sustainable," Michael Mazzei, the data center's manager at the time, told DCD. "So you go through Plan A, Plan B, Plan C, and then when you start getting further into the alphabet you actually start coming up with some plans that work," he said. "What ultimately worked for us was transferring the diesel into smaller five-gallon barrels and carrying those up." The team, building staff, and even customers like Squarespace and Fog Creek all chipped in. The bucket brigade carried on for sixty hours. Mazzei used other techniques to extend the useful life of the fuel tank: turning off the CRAC units and air conditioning in the Peer 1
"We ran a single generator for 10 days, and it felt like forever. You're one fuel injector, one piston, one belt, one hose away from bye-bye." 16 DCD Magazine • datacenterdynamics.com
space, and allowing the temperature to rise. "I'm terribly out of shape, and after doing just one of those trips I was really wondering if I was going to be one of those young people that was going to have a stroke," staff member Jeff Burns said in a promotional Peer 1 video a year after the storm. Peer 1’s one-time neighbor Reppen is critical of the effort, telling DCD "at the time we heard the stories of the guys next door carrying the buckets up and slipping downstairs and everything. "I mean, I'll be honest with you, I walked up the stairs once or twice. I was dead, and I'm a fairly in shape guy. With the humidity and everything going on, I couldn't imagine going up [18] flights with buckets of diesel." He added: "It wasn't worth the risk, and that was stupid. I'm surprised somebody didn't die." Mazzei counters that the team was careful and "smart about it. Diesel is a safe fuel from the perspective of transportation," he said. "I don't think we would have done this with gasoline, just because of a light bulb dropping... diesel is a very stable fuel, you have to atomize it, and you have to compress it for it to be problematic.
"It was just one of these things that, you know, you get initial momentum behind it, and it just kind of acted like a flywheel and we kept going. You're running on adrenaline, you know that there is a working running data center and it was vital to Squarespace as well as other customers." Keeping the facility live was also deeply personal for Mazzei. "I felt like as much as it was a Peer 1 data center, it was a data center that I actually started from day one. I just felt that if I could keep that data center running for my customers, I was ready to go the extra mile. And we obviously had to make it through a lot of obstacles. And we did." Plus, he argued, the facility was able to "provide a lot of good.” With no working phones in the building, and mobiles not connecting, people would come to the data center to make calls. “We had all sorts of vendors that were helping the building coming up to the second floor and using our open WiFi so that they could communicate and get emails out.” The bucket brigade worked, and Peer 1 stayed up throughout, while other providers in the same building, including Internap, went dark. "Fortunately for us, no one got hurt, we had no mishaps," Mazzei said. "We were spilling a little bit of diesel fuel on a sidewalk here and there, of course," he admitted. "But it made me laugh - we had people complaining [about the spills], and at the same time we had cars floating past, gasoline and diesel from the floods, and sewage. You're complaining about five
"You're running on adrenaline, it was vital to customers." gallons of diesel that I might have spread here and there?" Nine years on, from the comfortable vantage point of hindsight, it's hard to judge the actions taken at the height of the storm. Given a similar situation happening at the time, "I would have done it again," Mazzei said, but added that with most customers now spread across multiple data centers he wouldn't do it in the current climate. Still, even with all that effort, the whole endeavor could have easily collapsed. "All it would have taken was one clogged fuel filter, and it would have all been for nothing," Mazzei said. "We ran a single generator for 10 days, and it felt like forever. Generators are amazing pieces of equipment but you're just one fuel injector, one piston, one belt, one hose away from bye-bye"
Lessons unlearned

Everyone we spoke to about both the Texas and New York storms, on and off the record, was proud of the camaraderie in the face of crisis. "Verizon was right below us," Reppen said. "I got to meet their emergency response team, which they are an amazing group of gentlemen, and we got to piggyback on some of their logistics, their outreach, and all that. They helped us get some generators."
Mazzei equally shared stories of other data center crews that helped out, and were looked after in turn. "But when you're in a private interest industry, companies aren't really having long-term collaboration," he said. "A lot of these big players in the industry are pretty tight-lipped about what they're doing." That means that the knowledge of how to handle disasters "gets siloed," he said, making it harder to prepare for the next one.
Anyone who has been in the industry for more than a decade has likely experienced some crisis, Reppen said. "But there's those who have just never gone through a real disaster that makes you really question everything - your life, your place on Earth, you know.
"It's a real rough experience, I wouldn't wish it on anybody. But if you have gone through that, then you're not going to be someone who's turning your nose up at secondary and tertiary redundancy."
PLIABLE RIBBON ALLOWS FOR LESS SPACE WITH MORE FIBER
HOW PLIABLE RIBBON TECHNOLOGY IS CHANGING THE FIBER OPTIC CABLE LANDSCAPE
WHAT IS PLIABLE FIBER OPTIC RIBBON?
Pliable ribbon cable designs fill central tube cables with more fiber than ever before. The pliable structure has no preferential bend, therefore allowing the fibers to collapse on top of one another while still attached in ribbon form. This feature allows the circular central tube of a cable to be completely occupied with fiber, rather than having space left empty by using a rectangular or square stack of traditional flat ribbons. There are also pliable ribbon cable designs that utilize 200µm-sized fiber to make even more densely packed, space-saving cables for today's ever-increasing demand for more bandwidth.
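The geometry does a lot of the work here. A hypothetical sketch of the two density gains - a circular fill versus an inscribed square stack, and 200µm versus conventional 250µm coated fiber. These are idealized numbers, not the manufacturer's published design math:

```python
import math

# Why filling the circle matters: a stack of flat ribbons can at best
# occupy a square inscribed in the circular central tube, while a
# pliable ribbon can fill the whole circle. Idealized geometry only.
d = 1.0                              # tube inner diameter (normalized)
circle = math.pi * d ** 2 / 4        # area available to a pliable ribbon
square = (d / math.sqrt(2)) ** 2     # largest inscribed square stack

print(f"geometric fill gain: {circle / square:.2f}x")        # ~1.57x

# Dropping from 250um to 200um coated fiber compounds the gain:
print(f"200um fiber density gain: {(250 / 200) ** 2:.2f}x")  # ~1.56x
# Together the two factors comfortably cover a doubling of fiber count
# within the same outside diameter.
```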
TERMINATING RIBBON-BASED CABLES
Ribbon-based cable constructions offer multiple advantages over loose tube and tight buffer cable constructions in the area of fiber termination. The process for splicing the fiber is the same - Strip, Clean, Cleave, Splice, and Protect. The only change is using a heated jacket remover for removing the ribbon matrix. Splicing 144 fibers one at a time, at around 120 seconds per splice, would take an experienced splicing technician around 288 minutes (4.8 hours); however, splicing 144 fibers in a 12-count ribbon construction will yield a splice time of only 24 minutes. Splicing 12ct ribbon is 92 percent more efficient than splicing single fiber!
MPO Splice-On Connectors are field-installable connectors designed for customized, on-site terminations with ribbon cabling. Logistical delays associated with pre-engineered cables are eliminated because of the flexibility of determining exact cable lengths and easy terminations on the work site.
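Those timings are easy to verify, assuming the roughly 120-second splice cycle implied by the figures above:

```python
# Verifying the splice-time comparison (assumes a ~120-second cycle per
# splice, which is what makes the quoted figures work).
fibers = 144
sec_per_splice = 120

single_min = fibers * sec_per_splice / 60          # one fiber at a time
ribbon_min = (fibers / 12) * sec_per_splice / 60   # 12-fiber ribbons

print(f"single-fiber splicing: {single_min:.0f} min ({single_min / 60:.1f} h)")
print(f"12ct ribbon splicing: {ribbon_min:.0f} min")
print(f"time saved: {1 - ribbon_min / single_min:.0%}")   # 92%
```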
FREEFORM RIBBON® TECHNOLOGY DESCRIPTION
Sumitomo Electric Lightwave's Freeform Ribbon® allows for dense fiber packing and a small cable diameter with a non-preferential bend axis, thereby increasing density in space-constrained applications. Sumitomo Electric's patented pliable Freeform Ribbon® construction is designed to both pack densely in small form factor cables while still being capable of transforming quickly, without any tools, to a splice-ready form similar to standard/flat ribbon for fast and easy 12ct ribbon
splicing (for both in-line and fusion splice-on connector splicing applications). Whether installing high fiber count cables, such as 1728, 3456 and higher, to fit into existing 1.5" or 2" ducts, or needing to work with smaller and easy-to-terminate interconnect cables, the Freeform Ribbon® is the central component to achieve both.
For more information visit SumitomoElectricLightwave.com
[Diagram: cross-sections of an 864F standard flat ribbon cable and a 1728F Freeform Ribbon® cable - outer jacket, strength members, central tube, ribbon fibers. Double the fiber, same outside diameter.]
Bulging pockets
He helped Digital Realty grow to its current size - and nearly pulled off a merger with Equinix. Now Scott Peterson has a new gig that's a lot smaller. But will it stay that way?

Peter Judge, Global Editor
Scott Peterson is head of a would-be data center giant. It is grandiosely named Global Compute Infrastructure but, right now, his outfit is so small and new it only has four staff. I just checked, and it doesn't even have a website.
But he's worth talking to for two reasons: his track record and his backing. He cofounded and helped to build the world's largest data center operator, and now he's moved on and got himself a big pot of money from Goldman Sachs, the uber-investment house whose efficiency earned it the sobriquet "the great vampire squid."
Oh, and one more thing. Global Compute may be a fledgling formed in late 2020, but it's already got a purchase under its belt: a Polish data center operator with 42MW of capacity.

Building Digital Realty

Back in 2004, Peterson was one of the founders of Digital Realty, a company he stuck with for 14 years, during which time he engineered $17 billion in deals and mergers, leaving it the largest data center company in the world - currently with more than 275 facilities and worth some $35 billion.
"I was there," he says. "We were five people in a conference room, and we had $500 million from CalPERS [the first investor was California Public Employees' Retirement System], to execute a strategy that was technology related real estate."
What became Digital Realty started out in GI Partners, with the idea of owning technology companies that had real estate applications, but it "morphed into more of a middle market kind of buyout shop," he says. "We bought a facilities management firm, which was focused more on technology related real estate, out of the Enron bankruptcy."
In the early days, GI Partners bought different things, including a Hollywood studio and a chain of British pubs, but quickly found its niche: "All that was
"All that was interesting, but the best traction we got was in the data center side."

Internet-related properties were drastically undervalued after the crash of the early 2000s: "The dot-com implosion had happened, and people said the Internet's dead. There's no use for it; it's just a novelty. Our fundamental belief was the Internet's not going away. It's here to stay; it will come back; it is the future.

"A lot of people thought we were crazy. But I thought, at the very least it's a really good value play. We were buying these things at huge discounts to replacement cost, and great yields."

It rang bells for Peterson. Working at GIC in the 1990s, he'd bought office buildings in Singapore during a slump in commercial property: "In my view, it was a very similar model. There was a real dislocation of capital that was mispricing the opportunity. So we did very, very well."

In late 2004, Digital Realty took on its identity and floated on the stock market: "We decided to take it public. And that's when it became a real company."

As demand for data centers came back, Digital started developing new assets, buying land for development, and acquiring portfolios: "We took the business across the rest of the US, we started on a campaign to expand through Europe, we got into APAC, and did a couple of deals in Latin America.

"And along the way, we came up with this full spectrum strategy of space, power and connectivity on a global scale."

Peterson engineered deals: in 2015, Digital Realty got into the colocation business by buying Telx for $1.8 billion. In 2016 it bought "Equicity" - a set of eight European data centers which rival Equinix was forced to sell off by the EU, as a condition of Equinix buying Telecity. In 2017, Digital bought its rival DuPont Fabros for $8 billion.

"I'd been fighting since 2006 to get us into the colo business," he said. "We acquired Telx; the following year, we acquired Equicity.

"Shortly after that I actively started working on David Ruberg [the CEO] at [European colocation leader] Interxion. I worked on that for many years, and they finally got it done the year after I left, which is good! The price was a little high at the end of the day, but I'm happy for Dave."

All that established Digital Realty as a $35 billion behemoth. "Whether it was a $5 million piece of dirt, or a $7 billion acquisition of DuPont Fabros, or anything in between, our group really was the one that took us around the world and expanded the global footprint," he says. "Before I left, we'd done about $17 billion worth of transactions and costs."

If you put a value to them, he says, "these deals created the vast majority of the enterprise value of the company and I'm extremely proud of what we did."

But there was one deal that eluded Peterson: "I spent three years trying to get us to merge with Equinix, which would have really been transformational in the industry. But unfortunately, we just couldn't quite get over the last couple of deal points, and then it fell apart."

These days, regulatory concerns would prevent the two largest data center players from combining, he says: "But boy, if you imagine back in 2012 or so, it would have really created a platform that would be very difficult for people to compete with. But that ship has sailed."
Moving on

Peterson left in May 2018. He was CIO [chief investment officer] of the eighth largest publicly traded REIT in the US, and Digital was listed in the S&P 500, but his time was heavily committed: "Board meetings, earnings calls, investor conferences, and all that other stuff chews up about six or seven months a year - that's pre-committed January 1 - and you have to have a life and take a vacation. And somewhere in there, you've got to work. I was like, do I want to do this for the rest of my life?"

He didn't want to retire: "I like working." Things had changed by 2018, he said. "It used to be that [Digital Realty] were the only guys that had access to significant equity, but there was a lot more equity coming into the business. Digital always had the lowest cost of capital in the industry, but I thought, well, I don't think they really do."

Debt was coming back, he says: "Investors wanted into data centers, but couldn't quite figure out how to do it."

At the same time, large colo customers' demands were changing: "I had watched them go to these far flung areas like Oregon and Nebraska to put their capacity - only to realize that it was a lot more expensive to push data around than they thought, and the latency was not great for their customers."

Large colo customers - and enterprises - wanted to come back to the population centers. "They call it the Edge, we call it core markets. I realized that that was going to continue to occur across the globe - and that presents an awful lot of opportunity.

"If we could find a competitive source of capital we could put all that together and come up with a better solution for customers. They don't need cheap money, right? They need somebody who can get them capacity, at a reasonable number, in the timeframe and the quantities they want."

Peterson is clear that he doesn't want to create another Digital Realty, but he could see opportunities that Digital couldn't grasp: "When you're Digital Realty, you get stuck. You have to do things that fit within the REIT mold, and you have to do them in a volume that is meaningful. You've got a $35 billion denominator, right? You can't go do something that's going to generate $20 million of profit, because it doesn't go anywhere."

He said Digital "had trouble doing deals with some cloud providers," whose model was a "build and sell model: you build a building and you sell it."
The rates for that were low: "Digital can't just build and sell buildings. It's a one-time game that the investment community completely discounts. You've got to deploy capital, and you've got to do it in a way where you get long term rental revenue streams.

"There are all these things that Digital can't do that are attractive and interesting, and we know how to do them. That's really what led to this," he says. "Smaller, nimble competitors [have] got a smaller team to cover, fewer mouths to feed. If you have really good people in the right positions, you can outperform the companies that have a lot of people."

When he says small, he means small: the company was four people when we spoke, and expects to grow to maybe ten. The COO is another Digital Realty co-founder, Chris Kenney ("he actually was there a few months before me"); the European head is Stephen Taylor, Digital's former head of EMEA for seven years, who actually worked with Digital Realty before that, while at CBRE. "We had a sourcing relationship with him to find deals for us in EMEA, starting around 2006." From CBRE, Taylor joined Sentrum, which was bought by Digital.

The CFO is Doug Lane, formerly at GLC Advisors: "He's a banker. We realized we needed somebody that was more than a pure real estate CFO, somebody who had more of a structured finance background. He's a guy I've known for 25 years."

The founders invested their own money in the venture, but the plan was to get a strong financial backer: "I went to a very focused group of potential partners and had a real solid list of what we were looking for. It was all terms and economics and governance and that, but the biggest one, which you can't really put in a spreadsheet, is chemistry: finding the right guys, right?

"Do they have a global presence? Are they used to investing globally? Are they good guys? Do we see eye to eye on execution?" he says. "And I really found that in the Goldman guys, and they found that with us as well."

Global Compute was set up in August with $500 million from Goldman Sachs, a figure that could translate into $1.5 billion in deals when other investors come on board. By the end of the year, it had made its first acquisition: ATM SA, a Polish company with three data centers in Warsaw and Katowice.
"We think the fundamentals in Europe are particularly attractive. I think they're also attractive in Asia. Latin America is also attractive, behind the other two markets“ Katowice: “I think that's a good example of one that it'd be tough for Digital to do. It's a small company in a secondary market. I think we found a great opportunity and market that's poised to grow really well. We're partnered up with the best platform in the country, and through that we can address Central and Eastern Europe.” The company could do other types of deals: “We will buy assets, we will buy development dirt, or we will do developments on our own, we will do joint ventures with other groups. We can make investments outside of the pure real estate side of this. We can do connectivity plays. I don't think we’ll do cell towers, but anything that allows the customers of the cloud to consume the cloud is something that we could be doing. But it'll mostly be data centers.” Within that, he’s definitely aiming for large deals: “Most of the customers we're focused on are hyperscalers. That's where the demand is. You know, large transactions with a high credit counterparty? Of course, they're attractive. But I think there's more to the business than that. There are more opportunities, so we don't want to be myopically focused on that.” Geographically, Global Compute will focus on Europe and Asia to start with, but it can afford to be opportunistic: “We think the fundamentals in Europe are particularly attractive. I think they're also attractive in Asia. Latin America is also attractive, behind the other two markets.“ He has deals in the pipeline in Europe and Asia, and in Brazil: “we have a couple opportunities there that we're actively working on.” The need for data centers in Africa, likewise, is interesting: “I think it's the Wild West. But with the connectivity conductivity that we see going there… if you're in subSaharan Africa, you've got two choices.
"We will buy assets, we will buy development dirt, we will do developments on our own, or joint ventures with other groups. We can make investments outside of pure real estate" 22 DCD Magazine • datacenterdynamics.com
"But with the connectivity that we see going there… if you're in sub-Saharan Africa, you've got two choices. You're going to send everything North, or you can send it South to South Africa."

He comes back to size: "When you're Digital Realty, you'd have to say 'I'm going to build five or six buildings or I just can't go there, because I can't allocate the resources.' Well, we could do one building in a market like that. If it's successful our goal would be to do more, but we can easily accommodate doing a one-off transaction in these more distant markets."

Don't expect any Global Compute business in North America just yet, he says: "I do like North America, but I think it's a little late cycle right now. Northern Virginia is, and will remain, the best data center market in the world. But it's crowded. Everybody wants to go, and the dirt price! Land prices are astronomical, and there's a lot of competition.

"I think we can be successful there. But the question is, how much time do you have to devote to do something? Given the difficulty of acquiring the land, and the cost and the competition, we just think our time is better invested someplace else.

"I'd rather do something that's creative and unique and provides a solution to a customer, as opposed to slugging it out in a market where they've got a ton of options. We're not differentiated at that point to the customers. We're just one of the crowd."

According to Peterson, "our real value is in finding markets with good supply and demand fundamentals. Our real value-add is not buying a piece of dirt and spending four years trying to develop it. We can do that, but we'll do that in joint ventures with other people.

"We call ourselves a tweener or a hybrid. We're not pure operators and not pure capital allocators. We live in that space in between. We have a capital allocator's intelligence and discipline, with operational capabilities, so we can adjust our focus within any particular transaction."

At the moment, Peterson would be very happy to have a queue of people proposing deals Global Compute might enable. He doesn't predict any slowdown. "I've been in it for almost 20 years now, and I think the demand fundamentals are gonna continue to be good throughout however long I'm willing to work."
Attack of the SPAC
SPACs are coming for the data center industry, with investors and companies treating SPAC mergers like just another VC funding round. Will the bubble burst before they get there?
Special purpose acquisition companies are shell companies that list on a stock market. Known as blank check companies, they hold large amounts of capital and go public specifically to merge with another company.

Broadly speaking, investors group together and fund (or sponsor) an empty company, which then goes through an IPO. That SPAC finds a target (known as an operating company), merges with it, and the result becomes a "deSPAC." Operating companies get to avoid the high fees and lengthy steps of the usual IPO process, and the SPAC investors get a large equity slice of the newly public deSPAC.

Though they have been around since the early 1990s, SPACs have only recently become a major trend; the 248 SPAC IPOs in 2020 were more than those of the previous 10 years combined, and 2021 has already seen almost 300 SPAC IPOs raise almost $100 billion. This is driven in part by market volatility in the traditional IPO process, coupled with low interest rates.
"With the pandemic, private companies needed capital, and raising funds through SPAC transactions is one of the best and most suitable ways to get such capital," says Evgenii Tiapkin, executive director of Freedom Finance Europe. "Getting ready for a regular IPO requires time, from a few months to a year, while creating a SPAC is much easier and can be completed in just three weeks.

"SPACs changed a lot and are now a very viable IPO alternative for many private companies. Although they can't replace traditional IPOs, they can provide more flexibility and efficiency, and this is why some companies choose to go public this way."

SPAC money coming for data centers

The SPAC hype had largely avoided the data center industry. Until this year, Vertiv's 2019 merger with GS Acquisition Holdings Corp was the only such merger of note in the space, and mergers in data center-adjacent industries were far more common.
Dan Swinhoe News Editor
The last two years have seen deSPACs in telecoms (satellite firms Spire Global and AST SpaceMobile), semiconductors (Achronix, indie Semiconductor), batteries (FREYR, Microvast, QuantumScape), software (AvePoint, Computex), energy (SolarMax), IoT (KORE), and even industrial doors (Janus International).

However, 2021 has seen the SPAC industry make more moves into the realm of data centers, both as acquisition targets and to lead the new shell companies being listed. Ex-Telecity CEO Michael Tobin is reportedly seeking to launch a new $250 million technology SPAC in Amsterdam with the help of Torch Partners investment bank, while CyrusOne's former CEO and CTO, Gary Wojtaszek and Kevin Timmons, are both part of the leadership team for InterPrivate IV InfraTech Partners Inc.

In February, Cyxtera Technologies announced a $3.1 billion SPAC merger with Starboard Value Acquisition Corp, which was founded by the Starboard Value hedge fund. Less than a month later, quantum computing startup IonQ entered into a merger agreement with dMY Technology Group, Inc. III, to become the first publicly traded quantum computing company, in a deal worth $2 billion.

"I view SPAC mergers for quantum computing hardware as a very positive development," says Matt Johnson, CEO of quantum computing startup QC Ware. "It can very well take the technology out of the hype cycle and serve as an accelerant toward the development and commercial viability of full-scale quantum computers. Ready access to a lot of capital allows for more rapid investment into engineering and fabrication, and also allows quantum computing startups to compete at eye-level with the quantum computing groups inside the tech giants."

For operating companies, Series C, D, and E funding rounds have almost converged with SPACs as just another way to raise capital, says Randy Pond, CFO at Edge computing startup Pensando.
Issue 40 ∞ April 2021 23
"This kind of crazy speculation... is a sign of an irritating bubble” Charlie Munger, Berkshire Hathaway “SPACs are just continuing to move broader in the market. For technology companies like us looking for growth money, it's much easier than an IPO.” There are more SPACs out there looking for data center targets; the likes of Power & Digital Infrastructure Acquisition Corp, and Prime Impact Acquisition I, lead by former Western Digital execs Michael Cordano and Mark Long, are actively looking for targets in the space. Dish founder Charlie Ergen’s Conx and the Timothy Donahue-led Cerberus Telecom Acquisition II are hunting in the telecoms sector. There are also a number of real estate SPACs on the horizon from the likes of CBRE and Benchmark Real Estate Group. And while the SPAC trend is largely concentrated in North America, investors are ready to look further afield if the pool of quality targets in the US begins to dry up. “Interest has increasingly turned to Europe, Asia, and other regions, as a fruitful place to look for targets,” says Rob Brown, founding partner, CEO, Clearthink Capital. “Even SPAC that originally focused principally on the US generally maintain the flexibility to work in other jurisdictions and frankly to look in other verticals.” “As long as that capital is willing to back the structure they're going to keep creating more SPACs. Even in this environment, we're
Is the SPAC boom sustainable?

For some data center companies, the prospect of a relatively quick and cheap injection of cash and a listing might be appealing. But will the bubble burst before it can make significant waves in the sector?

Previous SPAC booms in the 1990s and prior to the 2008 crash were known for 'pump and dump' investors looking to make a quick buck, and there are still risks that the market could bottom out. SPACs have 18-24 months to find acquisition targets; if one fails to merge, it must liquidate and return money to the investors, minus IPO fees. Though the recent boom has seen more deSPACs and fewer liquidations, this pressure to find targets could result in some subpar deals, or SPACs merging with companies in sectors their leadership has no experience in - an issue potentially compounded by the fact SPAC IPOs are currently outpacing mergers.

The arrival of industry heavyweights like Goldman Sachs, KKR and BlackRock, and increasingly focused SPACs, is a good sign, even if an influx of 'celebrity SPACs' led by the likes of former NBA and NFL stars Shaquille O'Neal and Colin Kaepernick indicates to some that the market is over-hyped.
David Solomon, the CEO of Goldman Sachs - which invested in a large number of SPACs in 2020 and 2021 - has questioned whether it's a bubble, saying in January he didn't think current SPAC issuance "was sustainable" in the medium term. Charlie Munger, vice chairman of Warren Buffett's Berkshire Hathaway, said the world would be better off without them, and that "this kind of crazy speculation... is a sign of an irritating bubble."

Despite this, Clearthink's Brown says the SPAC has become a more accepted structure than in previous years. Having been involved with SPACs since their inception in the early 1990s, he thinks the institutional acceptance that wasn't there in previous waves means the structure will evolve and eventually look very similar to the regular IPO cycle.

"Hedge funds and other institutions can't be paid when they're holding unemployed capital," he says. "SPACs let them deploy that capital; it's still in treasuries because of the structure of the SPAC, and they can collect fees, so it's very positive from a hedge fund standpoint."

On the funding side, SPACs can be thought of as closer to an investment vehicle for investment firms: private investment in companies, carried out in public markets. In a recent analysis of SPAC investment returns, Loup Ventures likened SPAC investment to venture capital funding in public markets, in that greater returns are funnelled through a few big winners, which offsets losses elsewhere. Many of the companies going through SPAC mergers today are 'pre-revenue' or even still in the concept stage, which means little chance of profits for years to come.

"The SPAC market is now broadened out and some of the traditional reputable players are now sponsoring SPACs," says Pensando's Pond. "The quality of the investor has gotten better - these are not pump and dump guys - and they're choosing wisely amongst the markets.

"They're taking some long shots, but I don't think these guys would be putting the kind of money that they're putting behind these things if they didn't believe they were real," he says. "They're making bigger bets but they're now demonstrating that these things have sustainability."

Pond warns that although experienced institutional investors can "winnow out the weak links," the market is still "kind of goofy" at the moment and could have a "day of reckoning," especially for some of the companies that are still pre-revenue 'science experiments.'

"We won't really know whether some of these gigantic market caps can actually stand up to the scrutiny for multiple years of no revenue until we get a little more time under our belt."
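Loup Ventures' venture-capital comparison is easiest to see with a toy portfolio. The exit multiples below are invented purely to illustrate the power-law pattern, not drawn from any real SPAC data:

```python
# Toy illustration of the power-law return pattern: most positions lose or
# tread water, a few big winners carry the whole portfolio.
import statistics

multiples = [0.2, 0.5, 0.8, 1.0, 1.1, 0.6, 0.3, 12.0]  # invented exit multiples
stake = 100                                             # equal stake per deal

total_in = stake * len(multiples)
total_out = sum(stake * m for m in multiples)
print(f"median deal: {statistics.median(multiples):.1f}x, "
      f"portfolio: {total_out / total_in:.1f}x")        # median 0.7x, portfolio 2.1x
```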
Sponsored by
Edge Supplement
INSIDE

Fresh thinking for a new frontier

Health and the Edge > One size doesn't fit all when trying to bring compute to the healthcare sector

Right-sizing the Edge > Maybe racks of kit aren't the best way to support remote applications

Edge of Mars > Seeking signs of life on our neighbor, Perseverance is the most extreme Edge project ever
Contents

28. Edge computing in medicine - Don't prescribe every sector with the same course of treatment
30. Bringing the Edge down to size - Edge applications don't need the same hardware as cloud ones
32. Advertorial: Decoding the Edge - A Systematic Approach to Edge Deployments
34. The Edge of Mars - NASA's Perseverance rover has more latency than your Edge application!
37. Is OpenRAN in the running? - The Edge needs multi-vendor 5G. OpenRAN could be the answer
Honing a fresh Edge
Each time we return to the subject of Edge, it becomes ever more clear that there is still plenty of thinking to be done about it.
We've been told many times that Edge is a frontier territory, where infrastructure has to support localized applications that need responsive, low-latency processing. So it shouldn't be a surprise when the old techniques don't quite match the new demand. This supplement looks at some ways to make Edge work, on this planet and beyond.
The healthcare Edge

As the past year has sadly shown, nothing is more important than healthcare - but for the industry to advance, it will need more compute, at lower and lower latencies. Edge deployments at hospitals and pharmaceutical companies are on the rise, but one size does not fit all. Fitting a facility into a small hospital, or dealing with the tight budgets and complex bureaucracy of the healthcare sector, can require custom solutions, or careful planning. Plus a new wave of healthcare tech means a new focus on compute. Hopefully it can help us prepare for the next pandemic (p28).
The distributed Edge

Edge applications are not quite the same as cloud applications. They don't need exactly the same hardware. The network may take the lead; small identical nodes on a mesh might do just as good a job as traditional racks, and they might do it more resiliently (p30).
The open, networked Edge

Taking that thought further, what kind of network do we need? 5G is a convenient catch-all phrase for the next generation of mobile networks, which will surely be there when we interconnect our Edge applications. But what kind of 5G will we have? There's a race on - to create an open, standards-based set of protocols called OpenRAN, before the urgent need for bandwidth forces everyone to install the network vendors' latest proprietary offerings, ushering in another era of high-cost infrastructure. OpenRAN could win the race, and we might all benefit (p37).
The extra-terrestrial Edge

Finally, as a reminder of just how revolutionary Edge technology is, we take you to another world. NASA's Perseverance rover is exploring Mars. It is the most remote application imaginable, with 20-minute latency times that would cripple any traditional remote control systems (p34). Hands-on repairs and real-time control have been literally impossible, during a risky one-off landing and a lengthy search for signs of life on our nearest neighbor in the solar system. Do these extreme conditions need cutting-edge tech? Nope. The rover is powered by a processor out of a 20-year-old Macintosh. Its drone copter carries a processor from a six-year-old smartphone, which your child would scorn. As we said at the start, Edge needs fresh thinking. On Mars, that led to a choice of trusted technology and appropriate tools.
Edge computing in the health sector

Don't prescribe every sector with the same course of treatment

Nick Booth Contributor
The health computing sector is suffering from high data deposits and networking sclerosis. A prescription of Edge computing might help the patient pull through, but the treatment will need experts to administer it without serious side effects.

Unfortunately, the health sector is experiencing rapid infrastructure metamorphosis, exacerbated by Covid-19, which could severely impede the thought processes of any organization. It's a widespread problem, which has been diagnosed in many industry sectors, for instance by Digital Realty's Data Gravity Index, which reveals that large amounts of data, stored without due care, can create stasis and resistance to change in organizations.

The patient (global computing in all sectors) has a data metabolism of 1.5 exaflops, but will need to show massive levels of improvement if it is to summon the extra 9 exaflops needed to process the 15 zettabytes of information that will be stored in its brain by 2024, according to Interxion's analysis. In short, without support, the extremities of the network may experience the IT equivalent of peripheral neuropathy.

The syndrome is particularly acute in the health sector because of two of the many reactions to Covid-19: remote working and research.
Permanently remote

The trend for remote consultancy won't be reversed post Covid-19. A permanent consequence of lockdown is that installations to support digital therapeutics will increase by 69 percent every year, according to Juniper Research, which projects revenues of $53.4 billion by 2025.

Estimates of the projected volumes of medical research data fluctuate wildly, because growth is so furious, with the likes of GlaxoSmithKline building 400 petaflops supercomputers across Europe to speed drug research. Suffice it to say that bodies such as the International Covid-19 Data Alliance (ICODA) see IT infrastructure problems, rather than data volumes, as the primary challenge to their progress, according to an ICODA seminar given in March 2021.

Edge computing cannot be a universal panacea for the health sector, because this is a category of service with a complicated history. And the challenges vary according to culture and infrastructure.
In the UK's National Health Service (NHS), the management culture can be harder to reconcile than power, cooling or comms connections. While private enterprises will have an ops manager and a facilities manager with well-defined roles, a hospital has IT staff who know nothing about power, cooling and comms racks. This is not a problem if the service provider can manage these decisions for the client, but getting the facilities laid on is a massive initial challenge.

Get a room

Just finding a place in a hospital for a small self-contained micro data center is a problem, according to one service provider we spoke to, who has worked on installing Edge IT into hospitals: "The hardest part of our Edge computing work for the NHS is finding a room."

The biggest challenge is the time scale. An IT project leader may make an instant decision on the need for servers, but suppliers can spend three months just trying to get a meeting with the right head of department. It could be the operations boss; it could be the estate manager. There is no one person cracking the whip.

In one project, a hospital appointed an installer to create a local data center. The service provider advised the installer on the best options and awaited instructions while the reports were processed. At the initial consultation, the installer saw a room that was ideal for hosting a local data center. It was months later when they heard another room had been allocated. Now they had to get the requisite power connected, and that took eight months. When it was all ready, the news came that the building was about to be torn down.

"At one hospital we had to walk around for ages to see if we could shoehorn something in," said an installer who asked not to be named. "If nobody gives you a space, you have to go in the car park or the corner of a room."

Monitoring and cooling

There is a general pattern of installation in hospitals: one or two big data centers (a computing site and a disaster recovery facility), with multiple discrete and informal comms rooms tucked away, each with some sort of uninterruptible power supply (UPS). There can be 200 UPS systems dotted around the hospital campus, and these are hard to find and impossible to manage. The upside is that this creates more sales for service providers; they can monitor the units remotely via the inbuilt DCIM platform and bring all the information together for the client.
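The kind of fleet roll-up such a DCIM service performs can be sketched in a few lines. Everything below - the record format, field names and example values - is a hypothetical illustration, not any vendor's actual API:

```python
# Hypothetical sketch: aggregate status from a scattered UPS fleet and
# surface the units that need attention.
from dataclasses import dataclass

@dataclass
class UpsStatus:
    location: str
    battery_pct: float
    on_mains: bool

fleet = [
    UpsStatus("Ward 3 comms room", 98.0, True),
    UpsStatus("Radiology closet", 41.5, False),   # running on battery!
    UpsStatus("Pathology rack", 88.0, True),
]

# Flag anything off mains power or with a low battery.
alerts = [u for u in fleet if not u.on_mains or u.battery_pct < 50]
for u in alerts:
    print(f"ALERT {u.location}: mains={u.on_mains}, battery={u.battery_pct}%")
```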
Providing containerized data centers that use liquid cooling, as part of an Edge computing package, solves multiple problems in the case of a hospital. Since it immerses the servers in liquid, it prevents any likelihood of dust fires ignited by overheated circuitry (dust is a major problem in the type of rooms that will be on offer in a hospital). Immersion also stops any flakes of metal from being blown onto the boards and short-circuiting the computer.

AI for care

In private health service providers, where the decision making is quicker, the challenge is to be sufficiently adaptable to circumstances. Care homes exemplify another aspect of how Edge computing can solve congestion and compliance problems. Covid-19 put a huge strain on staff in care homes, making it harder for them to find the time to keep tabs on their patients. Meanwhile, they also had to contend with new privacy laws that limited the amount of camera surveillance that could be carried out.

One solution to the problem was created at the University of Amsterdam (UoA), where Dr. Harro Stokman invented a way to use artificial intelligence to make sense of the patterns of events in each room. The legislation restricted the time that humans can watch patients through video cameras. However, there are no such limitations on a computer and, if it is deemed intelligent enough, the machine's judgment can be trusted on the well-being of a patient. This was the logic of Stokman's Kepler Night Nurse (KNN) AI system, which observes patients and decides if events (such as a fall) need intervention.

The problem is that the KNN system creates too much data to load into the cloud without creating huge bottlenecks and comms bills. In response, the UoA spin-off company, Kepler Vision Technologies (KVT), built an Edge Box to handle all the data locally, using Nvidia's small form factor Jetson Xavier NX module. The Edge computing node can process data locally and improve on the quality of intelligence gathered. By localizing the analysis, less data is sent to the cloud to be processed.

Infrastructure still needed

Creating Edge computing hardware is one thing. But where will the supporting infrastructure come from? Mobile telco industry watcher Dean Bubley, founder of Disruptive Analysis, warns that the expectations created for 5G are unrealistic, especially in regard to supporting systems that need instant response times.
"The low-latency 5G Emperor is almost naked," says Bubley. In some cases, he concedes, the ultra-reliable low-latency communication (URLLC) associated with 5G could minimize network round-trip times for new apps and devices that need instant responses. "In that respect mobile Edge computing can cater to them, in the form of regional computing facilities or servers at each base station," says Bubley.

However, there are many new applications where the latency has to be a lot better. An endoscope or microsurgery tool might need to respond to controls and send haptic feedback 100 times a second, i.e. every 10ms. Drones are being proposed for drug transport between hospitals, but these flying devices must react within two milliseconds to a control signal, or to a locally-recognized risk. It's also doubtful if 5G could offer the latency needed by the photon sensors used in research, which need to operate at picosecond durations.
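Those requirements amount to a simple latency-budget check. In the sketch below, the application deadlines are the figures quoted above, while the round-trip times for each network option are illustrative assumptions, not measurements:

```python
# Which network options fit which application deadlines? (Illustrative.)
deadlines_ms = {
    "endoscope haptic feedback": 10.0,   # 100 updates a second
    "hospital drug drone": 2.0,          # reaction to a control signal
}

assumed_rtt_ms = {
    "5G to regional cloud": 25.0,
    "5G to base-station Edge": 8.0,
    "local fiber / on-site node": 1.0,
}

for app, budget in deadlines_ms.items():
    viable = [net for net, rtt in assumed_rtt_ms.items() if rtt <= budget]
    print(f"{app} ({budget:g} ms): {', '.join(viable) or 'no network option fits'}")
```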
Fiber to the rescue?

One of the US's answers to the infrastructure challenge is the open access or competitive fiber optic network, such as SiFi Networks' FiberCity offering. SiFi promises this allows access to multiple service providers and geographically diverse paths on a fiber optic network that gives '99.9999' percent reliability. In this model, a city-wide fiber network passes by each home and business, effectively giving each company a private network. This could create a citywide private network, secure from the Internet, to deliver data to hospitals and research bodies via high-speed symmetrical connectivity.

This would give them much less of a problem sending data over the cloud, according to SiFi Networks CEO Ben Bawtree-Jobson. Storing files locally creates issues for sharing with consultants offsite and internationally. Suitably sized cloud storage could come closer to seamless collaboration between consultants. "The question is then about how much bandwidth is required, and 100 percent fiber optic networks solve this problem," says Bawtree-Jobson.

Meanwhile, Juniper Research reports that mobile players are partnering across the globe to build the mobile Edge computing infrastructure. Between them, the likes of AT&T in the US, LG/Google in South Korea and the 5G Future Forum are spending $8.3 billion by 2025 on the networking equivalent of life support systems for all those Edge systems. So the patient's prospects are looking better.
Bringing the Edge down to size
Edge applications aren’t the same as cloud ones, so they don’t need the same hardware
Edge computing has been one of the major trends of the past several years, as applications have started to require lower latencies, and the volume of data handled by endpoint systems has grown to the point where streaming it all back to a cloud data center may be too costly, slow and bandwidth-hungry.

But one of the issues with Edge computing is that it is a fairly nebulous term that means different things to different people. Does the edge of the network refer to endpoint devices, or to the communications equipment that links such devices back to the core, or does it cover both of these examples and more?

Gartner, for example, defines Edge computing as solutions that facilitate data processing at or near the source of data generation, but goes on to add that Edge computing serves as the decentralized extension of campus networks, cellular networks, data center networks or the cloud.

For the telecoms industry, Edge computing has been closely identified with the development and deployment of 5G networks, with their goals of handling data rates of gigabits per second, minimal latency, and the ability to support a large number of simultaneously connected endpoint devices.
These requirements are expected to see cellular base stations increase their compute power to the point where they effectively become miniature data centers.

Meanwhile, enterprises and service providers have also been investing in so-called micro data centers to serve the needs of Edge computing. These micro data centers vary in size, but a typical product is the equivalent of a data center rack, with power distribution units and cooling encased in a protective enclosure, which can be populated with standard rack-mount servers, storage and switch kit.

[Image: Micro data centers are good for factories, but not for every use case]

Such solutions are perfect in a factory setting, for example, where a significant amount of compute power is required to monitor and control production lines, especially where multiple machine vision systems are employed, and fixed wiring is likely to be already in place for communications and power. However, Edge computing covers such a broad range of applications and use cases that no one solution fits every problem.
A broad spectrum of capabilities is needed to fit every niche, and many Edge systems will need to be more compact, with different capabilities. "There's actually a hierarchy of processing that you would want as you move from the edge of the network all the way into the core," says Kurt Michel, senior vice president of marketing for Edge infrastructure firm Veea.

Veea develops what it refers to as smart edge nodes. A deployment can start with just a single node and scale by adding more, as nodes can communicate with each other via a built-in mesh networking capability. Each node is a tiny box that looks like a WiFi access point, but contains a 64-bit quad-core Arm processor running Linux. According to Michel, this model emphasizes both computing and connectivity, which is important for Edge applications, and the nodes can operate as if they were a single system via mesh networking.

"These separate nodes, you deploy them, and they will connect to each other. And what they do is they basically create a single virtual, connected compute platform. And they can connect to all of your different IoT-type devices, so cameras, thermal sensors, air quality sensors, vibration sensors, and the ways they connect might be Bluetooth, or LoRaWAN or ZigBee, or WiFi, or just plain old physical Ethernet," he says.

Because the hubs operate as a distributed system, any IoT device connected to any of the nodes is visible to, and can be accessed by, applications running on any of the other nodes. It also means that the devices can share workloads. "The applications themselves run in Docker containers. And that makes these applications incredibly portable. So you can move them from one node to another node. And if you find a particular node becoming overwhelmed, you can deploy another node in that location," Michel explains.

One upshot of all this is that a mesh network can provide a decent amount of aggregate processing power if needed - perhaps as much as a micro data center - but that is not the way these nodes are intended to be used. Instead, they are aimed at fitting into locations such as smart buildings, retail outlets or outdoor smart city environments, in sites where there may not be the space or power available to support a micro data center.

The range of applications that such devices might be used for is diverse. Michel cites the example of a retail outlet that might have a node connected to a security camera monitoring the entrance to the premises. The device could run a machine learning visual recognition model to detect people entering and whether they are wearing a Covid face mask, and generate an alert if not.

This hypothetical example illustrates some of the justifications for such edge deployments: streaming the video back to a cloud data center for processing may introduce unnecessary delays in generating a response, and incur unnecessary costs in network bandwidth. "Anything that requires real time responsiveness, any control systems for robotic systems, industrial factory settings, whatever, all that real stuff that really can't handle the delay that going back to the cloud gives you," Michel says. "You have just got to find the balance, you basically take your tasks, and you break them up into the things that need a rapid response and the things that require deeper processing."

It isn't just specialist vendors that are looking to address the broad spectrum of device requirements that Edge deployments encompass. In March, Lenovo expanded its range of ThinkEdge systems with a pair of ruggedized devices, the ThinkEdge SE30 and ThinkEdge SE50. Both are essentially PC hardware in compact enclosures designed for harsh industrial environments, but they can be configured with 4G or 5G wireless modules in addition to WiFi, and feature RS232/422/485 serial ports for industrial peripherals. However, products such as these largely leave it up to the user or a systems integrator to provide a suitable software stack for their Edge computing application, whereas a specialist like Veea offers a turnkey Edge node platform that allows the user to focus on making their application work.

Edge computing has been enabled by advances in computing that make it possible to add intelligence almost anywhere, and also by the spread of pervasive communications networks. But organizations need to take care when deciding whether Edge or cloud is the best place for data processing to happen, and also when choosing an appropriate platform from the wide choice available.
Vertiv | Advertorial

Decoding the Edge: A Systematic Approach to Edge Deployments

By Alex Pope, Vice President, Integrated Rack Solutions – EMEA, Vertiv
According to Ericsson, global mobile data traffic is estimated to reach 226 exabytes per month sometime in 2026. Let's put that in context. An exabyte is 1 billion gigabytes. If you collected and stored all the words spoken in human history, that would equal about five exabytes. If you did so 45 times over, you would have 226 exabytes – the amount of data we'll be generating each month just five years from now.

The applications driving this growth range from streaming videos and gaming, to telehealth and pandemic-driven remote work, to pilot projects for autonomous vehicles. The disparate technologies making it all possible are linked in one critical way: their increasing reliance on computing at the edge of the network.

The edge of the network presents a number of unique challenges. Whereas traditional data centers are somewhat homogeneous – different in size and details, but unquestionably data centers – the edge is comprised of a universe of small IT spaces, ranging from the single-server IT closet to far more sophisticated cloud deployments. As these sites have become more and more critical, they have become more complex, and today's edge bears little resemblance to the earliest distributed sites.

One of Vertiv's first steps toward bringing some order to this new world was to categorize edge sites based on the applications they support. We started by examining dozens of edge use cases, focusing on workload requirements and corresponding needs for performance, availability and security. Ultimately, we identified four edge archetypes. We use these models to better understand and equip edge sites to meet the needs of the organizations and end users that rely on them. The four archetypes are:

• Data Intensive: This includes use cases where the amount of data makes it impractical to transfer over the network directly to the cloud or from the cloud to point-of-use due to data volume, cost, or bandwidth issues. Examples include smart cities, smart factories, smart homes/buildings, high-definition content distribution, high-performance computing, restricted connectivity, virtual reality, and oil and gas digitization.

• Human-Latency Sensitive: This archetype includes use cases where services are optimized for human consumption, and it is all about speed. Delayed data delivery negatively impacts a user's technology experience, potentially reducing a retailer's sales and profitability. Use cases include smart retail, augmented reality, website optimization, and natural language processing. Increasingly, these applications are becoming the way people interact with brands, institutions, and each other.

• Machine-to-Machine Latency Sensitive: Speed also is the defining characteristic of this archetype, which includes the arbitrage market, smart grid, smart security, real-time analytics, low-latency content distribution, and defense force simulation. Because machines are able to process data much faster than humans, the consequences of slow delivery are high. For example, the continuous optimization of our energy consumption, quality, and usage of renewables requires speed of analytics and decision implementation on a scale only machines can achieve.

• Life Critical: This archetype encompasses use cases that directly impact human health and safety. Consequently, low latency and reliability are vital. Use cases include smart transportation, digital health, connected/autonomous cars, autonomous robots, and drones. For example, as transportation becomes increasingly automated, the onboard processing of the vehicles and drones will be augmented by connectivity to real-time traffic, safety, scheduling, and routing intelligence processed remotely.

Of course, applications are only one variable, and they are virtual. The physical assets enabling these applications have to actually live somewhere — locations at the edge — which tend to follow one of these four deployment patterns:

• Geographically Disperse: These sites are similarly sized and spread across large geographies — typically a country or region. Retail, with stores scattered across a chain's footprint, or consumer finance, which includes bank branches, are good examples.

• Hub and Spoke: This also typically covers a large area, such as a country or region, but the sites are organized with multiple smaller deployments around a larger hub. Communications and logistics networks tend to embrace this model.

• Locally Concentrated: These are smaller networks, often servicing campus settings, such as those common to healthcare, education, and industrial sites. They also tend to feature a number of small deployments connected to a larger central facility.

• Self-Sustained Frontier: This pattern, with widely spread footprints ranging from regional to global, consists of the largest individual edge sites. They often carry many traditional data center characteristics but tend to be of modular construction. These sites are often employed by cloud providers to serve sizable areas. Smaller versions are commonly used for disaster recovery as well.

After categorizing first by archetype, which is focused on the virtual application, then by geography, we go even deeper, slicing these populations by physical environment and corresponding characteristics of sites within a given group. This provides the final layer of site analysis and allows us to quickly and easily configure these edge sites to meet the specific needs of our customers. The categories are:

• Conditioned and Controlled (<6 kW per rack or >6 kW per rack): These are purpose-built spaces that are climate controlled and secure. The only difference in sites is rack density.

• Commercial and Office: These are occupied spaces with existing, but limited, climate control and sites that are typically less secure.

• Harsh and Rugged: These require more robust systems and enclosures to protect against large amounts of particulate in the air. These often are industrial sites with the threat of water exposure and in proximity to heavy traffic or machinery. They lack climate control and are far less secure.

• Outdoor Standalone: These are outside and unmanned sites, exposed to the elements and requiring a shelter or enclosure. They can be in remote locations that require some time to reach for planned or unplanned service.

• Specialty: These sites likely share characteristics with one of the above categories but must be handled differently due to special regulatory requirements that could be tied to application, location, or other factors.

To be clear, this is not an academic exercise, but a practical methodology to understand (1) the IT functionality and characteristics each site must support; (2) the physical footprint of the edge network; and (3) the infrastructure attributes required of each deployment. Once we have those data points, we can configure, build, and deploy exactly what is needed. We can do it faster and more efficiently while minimizing time on site for installation and service.

We have been conditioned to see the Edge as some sort of IT wilderness that can't be defined or aligned with our traditional approaches to the data center. This is not the case. By applying a systematic approach to site analysis, we can decode the Edge and take a major step toward standardizing the Edge deployment process. Ultimately, this will help our customers achieve their primary goal of reducing the time and cost required to deliver the application experience they designed for their users.

Alex is grateful to have had opportunities in Vertiv to design, assemble, and lead multiple businesses in the Americas and in Europe, the Middle East and Africa. Over 13 years with Vertiv, in roles spanning marketing, strategy, channel, and operations, his focus has been consistent: working with the partner community to architect continuity and simplify whitespace deployment from the core data center through to the Edge. Prior experience includes six years of technical sales between earning degrees in mechanical engineering (BSME) and business administration (MBA), both from the University of Notre Dame.
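One way to see how such a taxonomy becomes a practical tool is to encode it as data. The sketch below uses the category names from the piece above; the site record and example values are hypothetical illustrations, not Vertiv's actual tooling:

```python
# Minimal sketch: encoding the three-layer site classification as data, so a
# misspelled category fails fast at site-intake time.
from dataclasses import dataclass

ARCHETYPES = {"Data Intensive", "Human-Latency Sensitive",
              "Machine-to-Machine Latency Sensitive", "Life Critical"}
PATTERNS = {"Geographically Disperse", "Hub and Spoke",
            "Locally Concentrated", "Self-Sustained Frontier"}
ENVIRONMENTS = {"Conditioned and Controlled", "Commercial and Office",
                "Harsh and Rugged", "Outdoor Standalone", "Specialty"}

@dataclass
class EdgeSite:
    name: str
    archetype: str     # the virtual application the site supports
    pattern: str       # geographic deployment pattern
    environment: str   # physical conditions at the site

    def __post_init__(self):
        for value, allowed in [(self.archetype, ARCHETYPES),
                               (self.pattern, PATTERNS),
                               (self.environment, ENVIRONMENTS)]:
            if value not in allowed:
                raise ValueError(f"{value!r} is not in {sorted(allowed)}")

# Example: a hypothetical bank branch micro site.
branch = EdgeSite("branch-042", "Human-Latency Sensitive",
                  "Geographically Disperse", "Commercial and Office")
print(branch)
```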
The Edge of Mars

What's the most remote IT system in everyday use right now? The Perseverance rover on Mars

Peter Judge Global Editor
Edge computing is designed to help when applications need a fast response, but are a long way from central IT resources. The most extreme example of this right now is a self-driving vehicle doing detailed science work on the surface of Mars, which was around 200 million km away from Earth when the rover landed.

NASA's Perseverance rover has to handle its environment in real time, but signals take around 12 minutes to travel from there to NASA's Mission Control. Besides the delay, Internet communications over that distance are unreliable (see box: Phoning home), so Perseverance has to be prepared to make a lot of decisions locally.
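The delay itself is simple physics: divide the Earth-Mars distance by the speed of light. A quick sketch, using approximate distances (the gap varies from roughly 55 million km at closest approach to around 400 million km):

```python
# One-way light time over approximate Mars-Earth distances.
C_KM_PER_S = 299_792.458   # speed of light

for million_km in (55, 200, 400):
    minutes = million_km * 1e6 / C_KM_PER_S / 60
    print(f"{million_km} million km -> {minutes:.1f} minutes one-way")
```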
Despite these demands, the tech deployed to Mars is quite modest: the whole Perseverance rover is managed by the same type of PowerPC 750 processor which powered Apple's Bondi Blue iMac back in 1998. There's already an installed base on Mars: the Curiosity rover, which landed in 2012, has the same processor, and is still in operation. And the Martian environment provides even more compelling reasons to stick with this technology (see box: The Brain).

Due to its more recent design, the tiny Ingenuity drone copter, a passenger on the Mars mission, actually has a somewhat more powerful processor: the Snapdragon 801,
which featured in 2014-era smartphones such as the Sony Xperia Z3 (see box: Flying buddy).

Yet all this kit is achieving unbelievable things. Even before it began its scientific study, Perseverance handled its February 18 landing perfectly, analyzing wind patterns and the behavior of its heatshield during its supersonic entry into Mars' atmosphere, and then using AI to identify a landing site and steer towards it for touchdown.

The entry, descent and landing (EDL) had to be fully autonomous. The probe plunged through Mars' atmosphere at a speed of 12,500mph and a peak temperature of 1,300°C, but NASA engineers on Earth could not take a hand at all, because the whole descent took less than seven minutes. Before NASA saw Perseverance start to fall, the rover was already sitting on land.
Flying buddy

The 1.8kg Ingenuity drone copter, due to make its first flights in April, is the first flying vehicle on Mars, so it's not tasked with any major experiments. It carries a camera for use in scouting the terrain for Perseverance and future rovers, and has a compass, gyro, altimeter and all the sensors needed for autonomous flight - because all Earth-based engineers can do is program in a planned journey.

It's got 4ft diameter rotors - bigger than you'd need on Earth, because the atmosphere is 100 times less dense than ours. It's got six lithium-ion batteries, charged with a solar panel. The copter uses 350W of power, and the batteries store 35Wh, so it's limited to flights a few minutes long, but it is expected to travel up to 50m, flying three to five meters from the ground. A Zigbee radio link gives it 250kbps communication with the rover.
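The few-minute flight limit falls straight out of those two numbers - a rough upper bound, since real flights also budget power for spin-up and margins:

```python
# Rough flight-time ceiling from the stored energy and flight power above.
battery_wh = 35.0       # energy stored across the six lithium-ion cells
flight_power_w = 350.0  # power drawn in flight

max_minutes = battery_wh / flight_power_w * 60
print(f"~{max_minutes:.0f} minutes of flight per charge")   # about 6 minutes
```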
Entering the atmosphere

NASA has operated five rovers on Mars, but Perseverance was the first to land with its eyes open. The heatshield and back shell were studded with 28 sensors; for the first four minutes of its descent, the searing temperature and pounding of the atmosphere were recorded by thermocouples, heat flux sensors, and pressure transducers.

When the parachute opened, the heatshield and its sensors were jettisoned. The data was stored for transmission back to NASA - and represents the first detailed data from a Mars landing. This means future Mars missions can have heatshields designed with data from an actual landing, not a simulation. NASA expects this will allow them to make better heatshields which weigh 35 percent less.

The pressure sensors will tell NASA about the real dynamics of the Martian atmosphere, including the low-altitude winds the craft hit as it slowed from supersonic speed. Future missions will be able to predict the weather, and land with more control, in a smaller footprint. Perseverance's landing target was 4.8 miles by 4.1 miles, already three times smaller than Curiosity's landing target of 15.5 x 12.4 miles. Thanks to the data captured in February, the next probe will land in a space 30 percent smaller.

Controlled descent

What happened next is even more impressive. As the parachute opened, Perseverance's radar measured its altitude. With the heatshield gone, the rover's cameras could scan the ground, and on-board pattern recognition picked out features and looked for the landing spot.

When it slowed down to 200mph, the parachute cut loose, and the rover's rockets took over, slowing it right down. At this point the lander vision system (LVS) took over, using "terrain relative navigation" (TRN) to match the rover's camera images to a map of the terrain, and guide it to a smooth landing on the jumbled terrain of Jezero Crater. The system was tested as much as possible, with helicopters and suborbital rockets on Earth, but, for obvious reasons, could not do a full live test till the day of the actual descent.
Phoning home

Perseverance communicates back to Earth by relaying signals via Mars orbiters, including the Mars Reconnaissance Orbiter, which has been orbiting Mars since 2006, and became a full-time relay system in 2010.

For the leg from Mars orbit to Earth, the interplanetary Internet connection uses a store-and-forward network, designed to deal with frequent errors and disconnects, and long delays which can vary by a huge amount. Given the light weight of interplanetary craft and their low power budget, communications systems are pretty asymmetrical. Big antennas on Earth are needed to pick up the whispers from Mars, and they must be trained and adjusted to catch signals from pre-arranged directions.

Data is carried by packets defined by the Consultative Committee for Space Data Systems (CCSDS) telemetry standard. Each packet carries a variable amount of data, from 7 to 65,542 bytes including the packet header. Error correction is also included.
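For the curious, those packet sizes follow from the six-byte CCSDS primary header, whose 16-bit length field counts the data bytes minus one: a one-byte payload gives the 7-byte minimum, a 65,536-byte payload the 65,542-byte maximum. A sketch of that header - the field layout follows the public CCSDS Space Packet standard, while the values here are made up:

```python
# Pack the 6-byte CCSDS Space Packet primary header (CCSDS 133.0-B layout).
import struct

def ccsds_primary_header(apid: int, seq_count: int, data_len: int) -> bytes:
    """Primary header for a telemetry packet carrying data_len payload bytes."""
    assert 1 <= data_len <= 65536, "data field is 1..65,536 bytes"
    word1 = (0 << 13) | (0 << 12) | (0 << 11) | (apid & 0x7FF)  # version/type/sec-hdr/APID
    word2 = (0b11 << 14) | (seq_count & 0x3FFF)                 # unsegmented + sequence count
    word3 = data_len - 1                                        # length field is len - 1
    return struct.pack(">HHH", word1, word2, word3)

hdr = ccsds_primary_header(apid=0x0AB, seq_count=42, data_len=512)
print(hdr.hex())   # 6-byte header; total packet = 6 + 512 = 518 bytes
```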
If there ever was life on Mars, Jezero Crater is the best place to look for signs it was once there
Before the landing, NASA's TRN lead Swati Mohan said: "If we didn't have terrain relative navigation, the probability of landing safely at Jezero Crater is about 80 to 85 percent. But with Mars 2020, we can actually bring that probability of success of landing safely at Jezero Crater all the way up to 99 percent every single time." On the day, as the public face of NASA calling out the telemetry, she said: "It wasn't until after I called 'touchdown confirmed' and people started cheering that I realized, 'oh my gosh, we actually did this. We are actually on Mars. This is not a practice run. This is the real thing.'"

Science mission
Jezero is the hardest landing site NASA has chosen for any Mars mission, and it picked it for a reason. Perseverance touched down in an ancient river delta that fed a lake filling the crater three billion years ago. If there ever was life on Mars, here is the best place to look for signs of it. Perseverance is kitted out with scientific instruments to look for signs of ancient life in the delta deposits (see box: The Toolbag).
It will also drill out and cache interesting rocks for recovery by a later mission - one which will require whole new techniques, and is due to launch in 2026. Perseverance will also carry out a key test for possible manned Mars missions in future: producing oxygen from the Martian atmosphere. All this work will be done more or less autonomously, guided by high-level instructions from Earth and sending back a payload of scientific data. It really is the farthest Edge computing has ever gone, and it embodies several extremes: low data rates, unreliable links, and a "right-sized" processor and memory architecture. It also has absolutely zero chance of a human maintenance visit. Compared to the tight budgets of Perseverance, Earth-bound Edge systems have an embarrassment of riches, with 5G networks, mains electricity, and the possibility that someone might come by and reboot them. While NASA leads scientists around the world in learning from this Mars mission, digital infrastructure builders will be able to learn a lot about the limits of Edge computing.
The Brain
The rover is controlled by a chip that's been in circulation for more than 20 years: the PowerPC 750 processor, which also saw service in the vintage Apple iMacs launched in 1998. It's not a cutting-edge processor: it's got only 10.4 million transistors, about a thousand times fewer than a smartphone chip. And, while it can run at 233MHz, Perseverance's is clocked at just 133MHz. There's a reason for this apparently low spec. It's a ruggedized version of the processor, costing $20,000 and built under license by BAE Systems into the RAD750 single-board computer, retooled from the ground up with radiation protection and error correction logic to repair any damage to data in memory - because a single cosmic ray could fry an unshielded computer. James LaRosa of BAE Systems told New Scientist: "You have this multi-billion-dollar spacecraft going out to Mars. If it has a hiccup, you're going to lose the mission. A charged particle that's racing through the galaxy can pass through a device and wreak havoc." The processor runs Wind River's VxWorks real-time operating system, which dates back to 1987. Perseverance has three computers on board, each with two gigabytes of flash memory (about as much as a small USB stick) and 256 megabytes of RAM. One takes care of the rover's main functions, one analyzes navigation images, and the third is a backup.
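The RAD750's defenses are implemented in hardware, but the idea behind masking a cosmic-ray bit-flip can be shown in a few lines. A common approach in radiation-tolerant designs is triple redundancy with majority voting - a sketch of the principle, not NASA's actual implementation:

```python
def majority(a: int, b: int, c: int) -> int:
    """Bitwise majority vote across three redundant copies of a word:
    each bit survives if at least two of the three copies agree."""
    return (a & b) | (a & c) | (b & c)

word = 0b1011_0110
hit = word ^ 0b0000_0100        # a charged particle flips one bit in one copy
assert majority(word, hit, word) == word  # the corrupted copy is outvoted
```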
The Toolbag
The Mars rover has the following equipment:
• Mastcam-Z, a panoramic, stereoscopic, zooming camera that helps with navigation and mineralogy.
• SuperCam, an instrument which does chemical analysis and mineralogy at a distance.
• PIXL (Planetary Instrument for X-ray Lithochemistry), an X-ray fluorescence spectrometer which maps the elemental composition of the Martian surface in more detail than ever before.
• SHERLOC (Scanning Habitable Environments with Raman & Luminescence for Organics and Chemicals), the first Raman spectrometer on Mars, which uses an ultraviolet laser to map mineralogy and organic compounds.
• MOXIE (Mars Oxygen In-Situ Resource Utilization Experiment), an experiment to test generating oxygen from the carbon dioxide in Mars' atmosphere. Oxygen will be needed by any future astronauts for breathing - and to burn the rocket fuel which will get them home.
• MEDA (Mars Environmental Dynamics Analyzer), sensors measuring temperature, wind speed and direction, pressure, relative humidity, and dust size and shape.
• RIMFAX (the Radar Imager for Mars' Subsurface Experiment), a ground-penetrating radar to explore the local geology.
• A 2m robot arm, with a rock-coring attachment for gathering samples to be stored in sterile tubes.
• Three antennas operating in UHF (at up to 2Mbps) and X-band.
• A 110W power supply, the MMRTG (Multi-Mission Radioisotope Thermoelectric Generator), powered by heat from decaying plutonium-238.
Edge Supplement
Is OpenRAN in the running?
The Edge will rely on 5G and other radio communications, which can be expensive and proprietary. OpenRAN might change that, but it's still being finished

Dan Swinhoe News Editor
Edge deployments place resources close to applications and the source of their data. But applications like the Internet of Things and autonomous vehicles are so full of moving parts that the only practical way to link them up is through radio networks. The development of the Edge has been closely linked with the arrival of 5G, the short-range, high bit-rate evolution of mobile phone networks that is still being delivered. But Edge applications will have to be flexible, and use whatever technology suits their needs - and that could be a problem.

Radio Access Networks (RANs) connect devices to the core network via base stations. Though incredibly important, the technology used is often proprietary, so equipment from one vendor will rarely interface with components from rival vendors. As a result, mobile operators face vendor lock-in and use end-to-end solutions from a small set of providers, which can drive up costs and lead to sub-par equipment being used in certain areas.

OpenRAN, however, aims to break the RAN into component parts and create unified open interfaces to connect them. In theory, this allows operators to create bespoke, interoperable, best-of-breed deployments. The goal is to create more diversity in the supply chain and allow smaller, specialized companies to enter the market and compete with the incumbents. Increasing amounts of virtualization, as well as software-defined and cloud architecture in telco infrastructure, also mean less dedicated hardware is required, offering more opportunities for software vendors and more use of commodity off-the-shelf hardware.
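That disaggregation idea can be sketched as data: units from different vendors interoperate as long as both ends of a link implement the same open interface. A toy model - the RU/DU/CU roles and interface names follow O-RAN terminology, while the vendors are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    role: str              # "RU" (radio), "DU" (distributed), "CU" (centralized)
    vendor: str
    interfaces: frozenset  # open interfaces this unit implements

FRONTHAUL, F1 = "open-fronthaul", "F1"   # RU<->DU and DU<->CU links

def can_link(a: Unit, b: Unit, iface: str) -> bool:
    """Best-of-breed mixing works iff both sides speak the same open interface."""
    return iface in a.interfaces and iface in b.interfaces

ru = Unit("RU", "VendorA", frozenset({FRONTHAUL}))
du = Unit("DU", "VendorB", frozenset({FRONTHAUL, F1}))
cu = Unit("CU", "VendorC", frozenset({F1}))
assert can_link(ru, du, FRONTHAUL) and can_link(du, cu, F1)  # a three-vendor RAN
```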
But while there is interest in the technology, is it mature enough for prime-time deployment?

Operators like OpenRAN
Overall, everyone DCD spoke to said that OpenRAN is developing at a decent pace. And while it has succeeded in its goal of diversifying the RAN market, challenges remain, especially around interoperability, and around proven deployments in urban areas or where legacy technology is a consideration. Dell'Oro Group predicts OpenRAN will account for more than 10 percent of the overall RAN market by 2025, totaling $10 billion, but company vice president Stefan Pongratz acknowledges that existing suppliers are well positioned to do well with OpenRAN, and that the approach won't shift all new investment over to new players.
"OpenRAN is currently trending upwards, although it has yet to reach an inflection point," says Matt Melester, CTO of venue and campus networks at CommScope. "It is succeeding in its broader goals to have the potential of creating a larger ecosystem, but at this point in time, it is too early to tell how successful it will be."

Operators are seemingly keen to at least give OpenRAN an opportunity to mature, as it gives them more leverage over equipment makers. As well as joining groups like the OpenRAN alliance and the Telecom Infra Project, Deutsche Telekom, Orange, Telefónica, Vodafone, and TIM recently signed a memorandum of understanding around OpenRAN in Europe, signaling their commitment to make it the "technology of choice" for RAN.

Vodafone has been a major supporter of OpenRAN. Last year the company said it planned to deploy OpenRAN technology across 2,600 sites in rural Wales and the South West of England, replacing existing Huawei hardware, with deployment starting in 2022. Andrea Donà, UK network & development director for Vodafone, recently told Telecom TV the company had already deployed two OpenRAN sites to its production network as part of its testing process. Though there are no commercial deployments yet, Vodafone's tests and pilots are among a number currently in development. Outside the UK, Vodafone is working with Parallel Wireless and others on OpenRAN trials in Turkey, Ireland, and the DRC. Telefónica has an OpenRAN test underway in Peru, and Orange in the Central African Republic.

In the US, new wireless provider Dish plans to cover 70 percent of the US population by June 2023 with a standalone 5G network based on OpenRAN architecture, through Fujitsu and Altiostar. In Japan, Rakuten Mobile is rolling out standalone 5G networks using OpenRAN technology in Tokyo, Osaka, and Nagoya. In Germany, Deutsche Telekom is creating an "O-RAN town" in Neubrandenburg, and will work with Dell, Fujitsu, NEC, Nokia, Mavenir and others to deploy equipment at 25 "O-RAN compatible sites" providing 4G and 5G services.

"All these deployments are using disaggregated network architectures with multiple vendors able to contribute different elements," says John Baker, SVP of business development at Mavenir, whose company has been involved in several OpenRAN deployments. These deployments are important, argues Paul Rhodes, OpenRAN and 5G principal consultant at World Wide Technology (WWT), as they give operators an opportunity to see and validate good over-the-air performance. "Rather than theoretically in a lab with a
"We're having to rip out a lot of Huawei equipment by 2023. We don't have the luxury of waiting for this emergent technology to actually emerge" controlled environment, now they're actually exposing it to the real world,” he says. Still work to be done Mavenir, Parallel Wireless, and Altiostar Networks have found success in the OpenRAN space, while IT infrastructure providers like HPE and Dell are positioning themselves to provide that commodity hardware from which to run virtualized RAN technology. At the same time, incumbent vendors such as Nokia and Ericsson are looking at being involved and are virtualizing some of their offerings. “Even the traditional network equipment manufacturers, who were governing the space for a long time are now having to open up,” says Kalyan Sundhar, VP & GM of 5G edge to core products at Keysight Technologies. “Which tells you that the market is certainly moving in that direction and they have no choice but to move along with it.” However, despite its fans, few believe OpenRAN is ready for prime time deployment in large urban environments yet. Vodafone’s Donà acknowledged there was still ‘work to be done’ around the maturity of the technology, including interoperability, which is a core issue if the multi-vendor ‘best of breed’ approach is to ever come to fruition. TIM’s network engineering director Marco di Costanzo recently told BNamericas it would be “foolhardy” to say OpenRAN is ready for massive roll-out in large centers. “There are still many hurdles and challenges to overcome, such as supporting advanced features such as carrier aggregation, MIMO, beamforming/steering and others, which require complex, latency sensitive interaction between different RAN blocks,” says Prakash Sangam, founder and principal at Tantra Analyst. “OpenRAN has finally graduated from an interesting concept to reality, but it will take considerable time to be mainstream and a default option.” At the same time, purported national security concerns have led some countries, such as the UK, US, and Australia, to exclude Chinese companies such as Huawei and ZTE from new telco network deployments and - in some cases - to rip their equipment out of existing networks. These moves highlight the need for market diversification, which could benefit OpenRAN - but the timing could be off: the technology may not yet be ready to take full advantage of the switch.
"We're having to rip out a lot of Huawei equipment by 2023 whilst these interface specifications are still getting developed," says Paul Graham, partner for technology, media and telecommunications at law firm Fieldfisher. "They don't have the luxury of sitting back and waiting for this emergent technology to actually emerge."

Not all the business that was going to Huawei is flowing to local incumbents such as Nokia and Ericsson, but the urgency with which some operators need to remove now-forbidden technology means many aren't willing to wait for OpenRAN. "They've got to do it now, and order the equipment now, and it has to be the equipment that's available on the market right now, as opposed to something that might come onto the market in 12 months' time," he says.

A tale of two OpenRANs
OpenRAN is currently making inroads on greenfield sites, and that will continue. In theory it can be backward-compatible with existing radio networks for 4G, 3G or even 2G, but a lack of mature integration options means standalone 5G OpenRAN technology is easier to deploy. Rakuten in Japan and Dish in the US are opting for greenfield deployments using OpenRAN, and many of the deployments by incumbent operators are in rural and under-served areas. Vodafone's first deployment in Wales was at the Builth Wells showground, an area that wouldn't have much capacity requirement for large parts of the year, and therefore couldn't previously justify the investment of a large roll-out.

"Early greenfield adopters are more likely to include more components from the broader OpenRAN vision, while migration will be more gradual with existing networks, with initial deployments focusing on the O-RAN interface," says Dell'Oro's Pongratz. "2021 will be a pivotal year for the OpenRAN movement to assess the readiness of brownfield deployments."

"Companies like Rakuten or Dish have taken a different, more proactive approach to OpenRAN," adds CommScope's Melester. "This is because they will not have to start satisfying massive amounts of users right away. They have more latitude to deal with the teething pain of OpenRAN."
Likewise, private LTE deployments could be an area where OpenRAN finds success, partly due to the greenfield nature of such rollouts, and the appeal of removing the need to install fiber or rely on satellite at the desired sites. OpenRAN's use of open standards and commodity hardware is also a boon. "If you're building a network from scratch, and you're not looking for compatibility with anything legacy, then standalone represents a great opportunity [for OpenRAN]," says WWT's Rhodes. "There's a great opportunity for OpenRAN to take an early lead in a sector, and not have to prove itself versus an established competitor."

Mavenir's Baker says his company was involved with 12 such deployments in 2020, including two 'Industry 4.0' applications in Germany, Naresuan University in Thailand, two indoor pilot projects in Spain, and the Ørsted windfarm in the Irish Sea alongside Vilicom. "OpenRAN is already well equipped to meet the needs of rural and suburban deployments. Development of some of the more sophisticated technologies required for high-demand urban centres is proceeding at pace," he says.
"The industry needs to go through some teething pains"
It's coming, slowly but surely
A number of people DCD spoke to predict operators are likely to deploy OpenRAN in greenfield, rural, and standalone networks during 2021 and 2022, as well as in private deployments. "By this time next year, I think everybody will have a pilot deployment that's live and broadcasting over the air," says WWT's Rhodes. "The majority, if not all, of the MNOs in any particular country will have an OpenRAN presence and will be nodding rather than shaking their heads."

Many operability standards are quickly being firmed up - the O-RAN Alliance released more than 40 specifications in 2020 - and many of the current stumbling blocks will naturally fall away as the technology matures and the first commercial deployments are rolled out. "I think we will definitely see some very small targeted deployments [in the next 12 months]," says Keysight's Sundhar, "but the integration is going to be very daunting, and for it to be a very general-purpose thing is going to take longer."

"The industry needs to go through teething pains. New companies will have more time to work out the bugs, as they don't have legacy infrastructure to support at the same time," says CommScope's Melester, adding that security and power consumption also need work, as well as interoperability. "In 2021, we'll start to see some of the teething pain associated with real-world OpenRAN deployments. This is only natural. 2024-2025 could see parity with traditional legacy OEMs, and the gap will start to close in terms of what traditional vendors will be able to produce versus new entrants."

System builders with Edge projects will be watching developments closely, as OpenRAN could be a vital component in turning their ideas into reality.
Comprehensive Critical Infrastructure Delivered in Days.
Keep IT On. Prioritize your network edge continuity with the latest in racks, power conditioning, remote access and thermal management.
What’s Your Edge? Vertiv.com/DCDEdge
© 2021 Vertiv Group Corp. All rights reserved. Vertiv™ and the Vertiv logo are trademarks or registered trademarks of Vertiv Group Corp. All other names and logos referred to are trade names, trademarks or registered trademarks of their respective owners.
Space Resiliency
Just how resilient are satellites?
They've become integral to our world, but how robust are the satellites on which we depend?

Dan Swinhoe News Editor
The digital infrastructure on which our world now depends can at times be surprisingly fragile. Sharks, anchors, or unfriendly nations can cut submarine cables. Construction work can sever fiber-optics buried in the streets. And extreme weather, power cuts, or equipment failures and fires can put data centers out of action. But what about satellites? GPS has become integral to daily life, weather and observation satellites provide information services to commercial companies, and now a number of commercial companies are beginning to provide broadband and 5G connectivity from orbit. Are the satellites we depend on as robust as we need them to be?

In space no one can hear you stream
Here on terra firma, data center resiliency is relatively easy to measure: is there an ample supply of water, power, and connectivity? Is the area likely to see floods or other extreme weather? Does it have redundant sources and routes of power and connectivity for backup? Likewise, cables can be shielded and buried, cell towers built sturdy and guarded. But in space there's no one to give you an Uptime Tier rating.

The good news is that, despite the harsh and unpredictable conditions of space,
satellites are usually well-engineered, highly redundant machines designed to keep the elements at bay and survive the bumpy ride into space. The cost of building and launching large satellites runs into the tens, if not hundreds, of millions of dollars per launch, and preparation can take months, so the multi-ton satellites flown to geostationary Earth orbit (GEO), 35,786 kilometers (22,236 miles) above the Earth, are routinely built with multiple layers of redundancy on key systems and payloads, and rigorously tested.

"Satellites are reliable in the sense that they get strapped into a rocket and blasted into space through several Gs of acceleration and a ton of heat, noise and vibration, and then operate in a vacuum with significant temperature shifts as they go from sunlight into the shadow back into sunlight, and radiation," says Dr. Brian Weeden, director of program planning at the Secure World Foundation. "In that sense, they are pretty durable."

Assuming a satellite survives the launch and calls home without any trouble, it faces a constant battle for survival in the harshness of space. Even Earth satellites in low orbits can see temperature swings from -50°C to +50°C every 90 minutes, which can have a big effect on the equipment on board, as can the lack of air. "Materials that you thought were quite solid can actually have some liquid or gaseous components which can leave into the vacuum of space, changing the properties of the material and causing it to shrink or become brittle," says Andy Vick, head of disruptive technology at RAL Space.

Space weather is another major contributor to satellite failures. Many of these bus-sized, multi-ton satellites are out in GEO, thousands of miles from Earth, where there is little atmospheric protection from extreme conditions and large amounts of radiation. And the void can be surprisingly active and unpredictable when it comes to weather.
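That 90-minute figure is simply the orbital period: Kepler's third law fixes how long each hot/cold lap takes at a given altitude. A quick sketch using standard constants:

```python
from math import pi, sqrt

MU = 3.986e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6   # mean Earth radius, m

def period_minutes(altitude_km: float) -> float:
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_km * 1e3
    return 2 * pi * sqrt(a**3 / MU) / 60

print(period_minutes(550))     # ~96 min: a thermal cycle every lap in LEO
print(period_minutes(35_786))  # ~1,436 min: GEO, one lap per day
```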
Cables can be shielded and buried, cell towers built sturdily and guarded. But in space there’s no one to give you an Uptime Tier-rating.
X-rays, ultraviolet rays, radiation, and geomagnetic storms can all wreak havoc on board: components can be damaged by high currents discharging into the satellite, or by high-energy particles that penetrate it. Space dust - literally tiny particles of rock - can hit satellites fast enough that impacts turn to plasma and damage equipment.

Sun outages, where the satellite passes directly in front of the Sun, don't harm the satellite. However, the Sun's interference swamps the signal from the satellite, causing a loss of data. These outages affect the signals from geostationary satellites, and can last for around ten minutes a day around the equinoxes - but they are predictable.

The University of Reading recently recorded the first 'space hurricane,' which it described as a '1,000km-wide swirling mass of plasma raining electrons several hundred kilometers above the North Pole.'

The most notorious space weather event was the Carrington Event, a solar flare in 1859 that caused auroras as far south as the Caribbean, woke people in the night thinking it was morning, and caused telegraph lines to fail. Smaller events in 1989 caused blackouts and communication failures. A Carrington-level event today would cause worldwide electronics failures, and could wipe out all the satellite networks of the world if action wasn't taken ahead of time. In a disaster report, space insurance consortium Atrium warned that a single anomalously large proton flare - or a number of flares in quick succession - from our sun could result in a loss of power to all satellites in geosynchronous orbit and cost billions of dollars to fix.

Dr. Holger Krag, head of the Space Safety Programme Office for the European Space Agency, tells DCD there is little that can be done to protect satellites from the impact of a solar flare beyond turning off key electrical systems ahead of time. But the unpredictable nature of the sun can make this a difficult task. To better predict coronal ejections from the sun and provide more notice of potential space weather events, the ESA has planned a mission called Lagrange, in which spacecraft will be positioned at "Lagrange points," where the gravity of the Earth and Sun balance, providing stable locations to observe the sun's activity a few days ahead of the Earth's position.

"From [the L5] position, it can see the surface of the sun that would turn towards Earth three days later. We can see an advanced view of the activity area on the sun as it's rotating around its axis towards the Earth," says Dr. Krag. "At the same time, you can have a side view on the line between Earth and Sun, so it can see coronal mass ejections traveling from the Sun to the Earth, and it can measure the velocity of the ejections."

The Lagrange mission is expected to fly in 2027, and Dr. Krag says: "It will give us a much more reliable forecast."
"You've got to design for the entire mission life upfront; there isn't somebody who can watch them 24x7, or go and repair them when they do break" Outages in space are no joke Before satellites launch, they go through a rigorous testing regime that can see them placed into climate chambers to simulate the super cold and hot vacuum of space, as well as vibration and shock tests to see how machines cope with the rigors of launch and booster separation en-route to orbit. Satellites are built on the assumption they will never be touched again, so operators want to make sure their investments are built to last. “The vibration environment, the acoustic vibrations of the supersonic airflow over the fairing and things like that are quite extreme,” says RAL's Vick. “We have got the ability to put the package satellite in front of what is basically Deep Purple's 1980s speaker stack: a stack of speakers about three stories high. You completely surround the satellite and you blast it with sine waves and simulate the kind of acoustic blast that the thing will get on its launch.” The fact that satellites are untouchable once up in orbit has also required as much redundancy and backup capability as possible being out into each satellite. “The systems are built to be resilient and operate autonomously,” says Kevin Bell, VP of space program operations at the Aerospace Corporation, “and have several different kinds of fault management systems built into them; either to self-repair and recover, or to go into a safe mode where a human can come in and figure out what happened and recover them. “You've got to design for the entire mission life upfront; there isn't somebody who can watch them 24x7, or go and repair them when they do break, and you can't refuel them or put new parts on either.” Atrium says nearly $11 billion in insurance claims has been paid out to the space industry in the 20 years leading up to 2014. The most common points of failure were communications payloads, attitude and orbit control systems (AOCS – essentially the navigation and maneuvering systems) & computers, power systems, and data handling components. Similarly, a 2005 study of 156 satellite failures found that AOCS (including gyroscopes, momentum wheels, and thrusters) and power systems were responsible for more than half of failures, with mechanical failures around solar panels and short circuits of electrical systems also common issues. Over 40 percent of all failures happen within the first year of in-orbit activities. Space phenomena
were directly involved in 17 percent of all failures.

"You get a lot of early lifetime failures up to the first year after launch, then nothing for a while, and then you get a spike at the end of the lifecycle," says Weeden of SWF. A large satellite may have to go through an unfurling process once it disengages from its rocket, and then realign itself, before finally calling home. "That deployment stage can be where there's a fair number of problems. If that initial contact doesn't happen, and the satellite never orients its solar panels to the sun, it runs out of battery and dies."

Accidents can occur before the machines make it to orbit, and sometimes before they even make it to the launchpad. In 2009 NASA's Orbiting Carbon Observatory (OCO) satellite failed to separate from its launch rocket, and the whole assembly crashed into the ocean 17 minutes after liftoff. In 2003 the 1.4-ton NOAA-19 satellite needed $135 million worth of repairs after Lockheed Martin employees dropped it on the floor during manufacturing.

Reliability and testing have improved over the years, and satellites are now less over-engineered, as we learn what actually causes satellites to fail in orbit. "[In the past] they weren't looking at what happened to a previous satellite, because they didn't know," explains Vick. "They would simply have tried to shield everything because they didn't know what was most susceptible to radiation. By being able to simulate things in the lab, including using facilities like the ISS to simulate radiation, we've become more aware of what matters more to what's really happening. We're now focusing on the things that actually need to be shielded."

We are also slowly starting to open up the possibility of repairing, refueling, and potentially upgrading existing satellites even after years in orbit. Northrop Grumman's Mission Extension Vehicle is the first satellite that can service other satellites and extend their lifespan. MEV-1 completed the first docking with a client satellite, Intelsat IS-901, in February 2020, keeping it operational for a further five years, while MEV-2 is due to dock with the Intelsat IS-10-02 satellite in early 2021. NASA is working on a similar in-orbit servicing satellite as part of the agency's OSAM-1/Restore-L project.

Satellite failures are bad for Earth and space
Though relatively rare, in-orbit failures do happen. Despite a successful launch in December 2020, SiriusXM's new seven-ton SXM-7
satellite, built by Maxar to provide digital radio to consumers, failed during in-orbit testing. SXM-7, along with the SXM-8 satellite due to launch later this year, was meant to replace the Boeing-built XM-3 and XM-4, which were launched in 2005 and 2006 and are now approaching the end of their lives. Though its engine was able to move it to the right orbit, some of SXM-7's payload failed and the satellite has since been classed as a "total loss." The company is making a $225 million insurance claim, and will launch SXM-8 later than planned.

In 2019, the six-ton Intelsat 29e satellite failed after a fuel leak. Its propulsion system experienced damage that caused a leak of the on-board propellant, disrupting service to the satellite's customers. A second anomaly occurred during recovery efforts, after which the satellite was judged lost. Launched in 2016, it served just three of its planned 15 years.

That same year, almost the whole Galileo network - Europe's equivalent to the US GPS - went down. It was later recovered, but two of the network's 26 satellites have had to be retired early due to on-board issues, and at the time of writing a further two have been 'temporarily down' for over a month.

Satellites generally remain in service for between seven and 10 years in low Earth orbit
(LEO), below 2,000 km (1,200 mi), and more than 15 years in GEO. Aside from the loss of service and the impact unexpected failures have on Earth, severely damaged or failed satellites create risks for operational satellites in close proximity. At best, large failed satellites are multi-ton hunks of metal traveling at thousands of miles per hour on uncontrolled trajectories that could collide with functioning satellites and interfere with signals. If laden with fuel - whether in the form of propellant or energy in batteries - they become potential weapons of destruction. Astrophysicist Jonathan McDowell of the Harvard-Smithsonian Center for Astrophysics described the failed Intelsat 29e as "a floating bomb in GEO," given it was now slightly off its planned orbit and could potentially cross paths with other GEO satellites in the future.

At higher orbits, satellites are larger and move somewhat slower, which means they can survive impacts with small pieces of debris. But at lower orbits even tiny pieces of debris can be highly destructive. In 2016, a fleck of paint was enough to damage a window on the International Space Station. The ISS does have metal shielding - panels of layered thin metal sheets akin to Kevlar vests - to protect it from larger pieces, but this isn't practical or possible
for most satellites due to cost, weight, and size restrictions, leaving most to either make evasive maneuvers or cross their fingers and hope for a near miss.

New small satellite mega-constellations
Where once space was purely the domain of military, government, and large telecoms companies, a new fleet of commercial startups is sending up huge numbers of small satellites, which are changing the industry. There are currently around 3,000 operational satellites in orbit, but that number is increasing rapidly, with a massive potential impact on the future of the industry. Over 1,000 satellites were launched in 2020 alone, the vast majority of them coming from commercial actors looking to deploy huge numbers of small satellites. It's not uncommon now to see a single rocket launch more than 100 satellites at a time.

SpaceX's Starlink is the biggest player amongst the new wave of space satellites. Elon Musk's company has launched over 1,000 satellites since 2019 to provide high-speed broadband Internet connectivity, and has permission from the FCC to launch more than 40,000 into LEO. These satellites weigh around 260 kg (570 lb), are about the size of a large table, and generally operate at altitudes around 550 km (341 mi). But Starlink is just one of a growing number of companies looking to fill the skies and provide connectivity from LEO. Amazon's Project Kuiper will see the company invest $10 billion to launch 3,000 satellites over this decade. Though it has scaled back plans since emerging from bankruptcy, the UK's OneWeb still plans to have almost 650 satellites in orbit by June 2022, with a second generation of sats arriving in 2024-25. Its satellites are smaller than Starlink's at just 150kg, but orbit at an
altitude of 1,200 km (750 mi). There are other commercial players: Planet has launched over 350 of its 4kg Dove cubesats since 2013, and currently has more than 200 in operation. Kleos Space plans to launch up to 20 clusters of small sats to offer maritime intelligence to commercial and defense companies. Californian company Swarm is planning to build a space-based Internet of Things (IoT) for uses such as vehicle tracking, logistics, and water and resource monitoring; it plans to have 150 satellites up by the end of 2021. In February LyteLoop raised $40 million for its vision of in-orbit data storage on up to 300 satellites weighing 250kg each (see page 44). The US Defense Advanced Research Projects Agency (DARPA) is looking to get in on the act and turn satellites to military use: Project Blackjack is investigating how LEO smallsats can supplement and/or replace the US' GEO satellites for activities such as surveillance.

One space mainframe or a cloud of satellites
The arrival of these constellations means the industry is seeing a divergence. There are huge, highly resilient individual machines in high orbits; and large swarms of small, breakable machines in low orbits that, while individually fragile, create a more resilient overall system because there can be tens or even hundreds of failover points. "If you have a geosynchronous satellite, the critical system redundancy might be threefold," says Bell. "Now I've got 1,000-fold; it makes the system much more resilient and reliable from the failure standpoint."

SWF's Weeden likens the change to the switch from mainframe computing to distributed servers in the data center industry. "You go from a few very large, very expensive, very powerful things to a more distributed set of satellites. Maybe each one individually is not quite as powerful but you've got dozens to hundreds or thousands of them, which is a different kind of resilience," he says. "The bigger ones are more resilient on an individual basis. We're seeing a shift towards individual satellites that are probably less resilient, but a system that is more resilient on the whole. If you've got one satellite and it fails you're screwed. If you've got 100, and five of them fail, you're probably okay."
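The reliability arithmetic behind Weeden's point is binomial. Assuming each satellite fails independently with the same probability - an assumption that the fleet-wide design flaws discussed below would break - the constellation wins easily. A sketch with an assumed per-satellite reliability of 0.95:

```python
from math import comb

def p_at_least(n: int, k: int, p: float) -> float:
    """P(at least k of n satellites survive), independent failures, reliability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.95  # assumed per-satellite reliability over the period of interest
print(f"1 of 1 up:    {p_at_least(1, 1, p):.3f}")     # 0.950 - lose it and you're done
print(f"90 of 100 up: {p_at_least(100, 90, p):.3f}")  # ~0.99 - five failures barely matter
```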
"It's lurking design problems that worry me. If a generic design flaw pops up years down the road in a mega constellation design that could be very bad" to fail prematurely.” In the 1990s, a number of Boeing 601 satellites were found to have a design flaw in their spacecraft control processor (SCP) where a tin-plated relay formed crystalline ‘whiskers’ that could cause an electrical short. Though each satellite contained two SCPs, there were cases of both SCPs failing. At least eight 601s have seen SCP failures and four of them were lost, including the Galaxy IV communications satellite, which caused 80 percent of pager services in the US to go down. Similar issues in thousands of satellites could be catastrophic. “It's lurking design problems that suddenly appear [that worry me],” says McDowell. “If there were a generic design flaw lurking that pops up, years down the road in one of these mega constellation designs that could be very bad. You could end up with high failure rates.” Move fast and break things - in space Even when generic design flaws are ruled out and system resilience is increased, questions remain over how many of those individual satellites might fail. LEO satellites have some natural protection from some of the worst space weather thanks to their lower orbits, but their smaller size means they are generally less
protected from any adverse weather effects they do see, and the speed of orbit means they would be unlikely to survive any collision with debris or other satellites. With so many new players in space, some of them lacking the manufacturing nous of the incumbents, there is an increased possibility of on-board failures. Christopher Jackson, director of Acuitas Reliability, has previously said around 35 percent of small satellites fail to complete their mission, with almost 20 percent being dead on arrival (DOA). Small sats often suffer from design, manufacturing, and testing flaws, and their makers often fail to conduct proper analysis after failures, he claimed.

While that might be the case with many cubesats, the new commercial companies are taking a highly iterative approach to developing their satellites, and failure rates are dropping quickly. In a series of tweets last year, Harvard's McDowell noted how SpaceX went from a 13 percent failure rate with its V0.9 prototypes, to a 3 percent failure rate with its first V1 sats, to just 0.2 percent after that. "They really improved reliability, about halfway through last year," McDowell tells DCD. "The more recent ones have had almost no
failures."

While that failure rate is good, in a constellation of thousands it could still create added space debris risk if dead satellites are not de-orbited properly. "The sort of failure rates you can tolerate in a constellation of 100 satellites, you can't really tolerate in a constellation of 30,000," he adds.

What the industry is missing, argues RAL Space's Vick, is standards applicable to this new breed of smaller and cheaper satellite. "At the high level, ESA's ECSS and NASA's NTSS are very prescriptive; they're very engineered, and they do provide an ultimate best solution," he says. "But they are not necessarily affordable, and the problem is that there is no cheaper alternative to those standards at this point in time, and I think there does need to be."

Without a middle or lower tier of standards, Vick says, smaller commercial companies are left with nothing to inform them about standards they could be working to that aren't the most costly, highly engineered option. "There are attempts to provide those standards, but it's difficult to produce them, because there will always be people who will be trying to argue that we shouldn't allow anybody to do anything less than the best."

SWF's Weeden agrees on the need for more standards: "Just like we have different rules and standards for semi-trucks and bicycles and station wagons, we probably need them for satellites as well, but so far we really haven't done the science to figure out what those different rules should be."
Small sats teach incumbents about risk
Tight regulation around slotting, combined with the harsh conditions and the cost of getting there, means GEO and higher Earth orbits will remain the domain of large satellites and incumbent operators, at least for the foreseeable future. But it remains to be seen whether the two approaches continue in parallel, or whether we end up with a general move towards smaller units built at scale across the industry. "We're at a pivot point right now where space needs to be more agile, and we don't have a production footing to do that," says Aerospace Corp's Bell.

The massive fleets of new small satellites provide the chance to apply mass production techniques that previously haven't been applicable to the small-scale manufacturing of low numbers of large buses. "Right now we're on a cycle of about every five or ten years, which doesn't allow you to keep pace with technology," says Bell. "We don't have a way to turn out a new vehicle with upgrades once a year and a block upgrade or a brand new model every four or five years like the automotive or phone industry.

"Something like GPS is putting out two to three satellites a year, which is by no means a production run. If it costs more, then you want to make sure it works, so you end up in a spiral where you're spending more money to put in the redundancy, put in the fault management, and test it to make sure it works on the ground perfectly."

The arrival of ambitious startups has also forced large incumbents to act. As part of its Lightspeed constellation, Telesat plans to launch almost 300 satellites weighing 700 kilos each, which will provide high-speed broadband by 2023. The rapid iteration, smaller units, and large numbers of machines mean the new commercial players can improve technology more quickly, improve testing capability, and glean more reliability data from a larger pool of sources, which can provide
new learnings for the incumbents. "The new players have effectively scaled for production," Bell adds. "They're able to evolve because of quantity and the amount of industrial base; it's huge compared to the kinds of quantity and scale we have. They're trying to look at what it takes to build production lines where you can stabilize the production line and build large unit counts, and they've actually been able to spend more time optimizing testing."

Smaller satellites can also be tested more easily. They no longer need cranes and high bays, but can be pushed around on a wheeled cart, which can massively simplify assembly, integration and test. And once in space, companies can glean more information about what causes failures. "Thousands of units are now giving you a statistical sample of parts reliability," says Bell. "You can monitor and start to get a feel for the environment, and how bad the environment is, even embedded inside the spacecraft."

The higher risk appetite and 'test and re-flight' policy is closer to how the software industry operates than the traditional space industry, according to RAL Space's Vick, but it has positive effects throughout the sector. "I think that's good for all of us, because it does mean we're getting new ideas put into practice in space far quicker and with less direct investment from government," he says. "Those highly engineered satellites can't really afford to trial new technologies and new methods for the first time. But once those technologies are proven in the new space environment, they can find their way into the bigger and more highly engineered satellites, so the older-style satellites are actually benefiting from what's happening in technology."

The fact that these commercial companies are willing to take risks and fail on some iterations of satellites marks a change from the more traditional players, which are reluctant to accept the larger costs of failure, and the political ramifications if government or military agencies are on board. "Somebody like Elon Musk and Starlink - he's obviously answerable to his shareholders, but they won't be too worried about the political fallout of it going wrong," says Vick. "Whereas that's not necessarily true for a big government-led mission; the risk appetite is much lower in big multi-agency, multi-country development."
Highly engineered satellites can't afford to trial new technologies. But once those technologies are proven in small units, the incumbents benefit
Light storage
The space between satellites
We talk to LyteLoop's CEO about using light to store data

Sebastian Moss Deputy Editor
This article is stored on a hard drive, a solid-state drive, or on paper. Years from now, if it proves popular enough, it might live on tape. Then, when someone wants to read it, the data will be transferred to light, sent across fiber cables, and beamed to the user's local device. It's a setup we're all used to: data kept on static storage systems before being converted to photons for communication. But why not keep it as light the whole time?

"We just keep light moving around and around," CEO Ohad Harlev explained. In theory, this could prove far more power-efficient than storage on a fixed medium: "What anybody else is doing in megawatts, we're doing in kilowatts," he said.

Storing an exabyte on Earth in conventional hard drives could take megawatts, although the latest 6.5W, 18TB units could reduce the average power for the drives themselves to as low as 360kW - and of course long-term storage would not use actively powered media. By comparison, LyteLoop proposes it would take 35kW to 40kW to store exabytes of data in space. And then, of course, there's the energy it would take to get the satellites up there.
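The hard-drive figure is easy to reproduce from the numbers quoted, counting only the drives themselves (not the servers, networking or cooling around them):

```python
EXABYTE_TB = 1_000_000          # 1EB in TB, decimal units
drive_tb, drive_w = 18, 6.5     # the "latest" drives cited above

drives = EXABYTE_TB / drive_tb  # ~55,556 drives to hold an exabyte
kw = drives * drive_w / 1000    # ~361kW - the "as low as 360kW" figure
print(f"{drives:,.0f} drives, {kw:.0f}kW just to keep the platters spinning")
```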
"Between the $40m we just raised, and what we raised in the past, we have enough to hire all the people that we need and launch the six satellites"

Bizarre as LyteLoop's idea sounds, the concept has deep roots. In the late 1940s, the first computers, EDSAC and EDVAC, used "delay lines" (see box) which stored data as a wave reflected and circulated in columns of mercury. LyteLoop has a patent (US20170280211A1) for a system which recirculates light between satellites, amplifying and regenerating it as required. The company has made earthbound prototypes, including one in 2018 which used a coil of 2,000 kilometers (1,242 miles) of fiber: "We stored over a gigabyte for about 30 days - that means that each packet traveled over 300 billion kilometers. That's the equivalent of 17 times what Voyager One has been traveling for the last 40 years."

The company also proposes an earthbound system called Tube, which attempts to reproduce the spaceborne version: a long light path is created in metal chambers 10m to 100m long, using reflections and angle multiplexing, where light is reflected from a large number of
apertures, designed to increase the distance of travel. LyteLoop's site proposes a 100m-long near-vacuum Tube, 30m wide, which could hold 10 exabytes. "And then you can take the same cube and miniaturize it to anything in the centimeters to meters range," Harlev claimed. "You could put it in a data center, or in a vehicle. And that can store anywhere between tens of gigabytes to hundreds of petabytes, depending on the form factor." LyteLoop's site describes a Cell which would be 2.5m long and hold 200 petabytes.

Each time the light hops, a small amount of energy is lost. In its most recent prototype, the company has managed 432 hops on a single beam, but hopes to reach 5,000 hops by June.
Delayed reaction
LyteLoop's space-age storage is actually similar to the "delay lines" used in the very earliest electronic computers of the 1940s and 1950s. The UK's National Museum of Computing is restoring EDSAC, the world's first practical computer, using nickel wires instead of the original tubes of poisonous mercury.

"The issues in such things are getting sufficient bandwidth and distance so that the delay line stores a useful amount of data," commented Andrew Herbert, leader of the EDSAC restoration project, "but then the longer the pipe in bits, the longer the latency you have to tolerate before the bits you want re-emerge.

"The LyteLoop concept is sound, but I would have questions about the bandwidth required to compete with electronic/electromagnetic storage in terms of scale and access time."
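Herbert's bandwidth-versus-latency point can be quantified: a recirculating loop holds exactly the data in flight, which is bit rate multiplied by loop delay. A sketch using the 2018 prototype's 2,000km coil (light in fiber traveling at roughly two-thirds of c is standard physics; the aggregate bit rate is our inference, not a company figure):

```python
C_FIBER = 2.0e8                 # m/s, approximate speed of light in fiber (~2/3 c)

loop_m = 2_000_000              # 2,000km coil from the 2018 prototype
lap_s = loop_m / C_FIBER        # ~0.01s of "storage time" per lap
stored_bits = 8e9               # "over a gigabyte" kept in flight

rate_bps = stored_bits / lap_s  # ~8e11 b/s: ~800Gbps aggregate across channels
print(f"{lap_s*1000:.0f}ms per lap, ~{rate_bps/1e9:.0f}Gbps to keep 1GB aloft")

# Sanity check on the distance claim: 30 days circulating at fiber speed
print(f"{C_FIBER * 30 * 86400 / 1000:.1e} km")  # ~5.2e11 km: "over 300 billion" holds
```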
The company will have to convince customers to trust a novel storage concept - and the idea of shunting their data beyond the surly bonds of Earth
The technology uses optical networking equipment, but with a difference, said Harlev: "We are talking with ourselves - we know what packets we're sending, and we're sending them to us. So we can eliminate a lot of the digital processes needed in the telecom industry. We're able to do it very efficiently, both on the hardware components and the digital processes."

These earthbound systems require effort to recreate something which LyteLoop can get for free in the vast expanse above the atmosphere, says Harlev. LyteLoop's proposed network of 300 small satellites would continually shift light between them, before beaming it to a third-party satellite that would handle uplink and downlink. "We're focused on doing space first, both because I think it adds more value to the customers, with different value propositions," Harlev explained. "But on top of that, we are slightly cheating - because if we do
the space, then we have developed all the technology we need for the [terrestrial solutions]." There is still a long way to go before it gets to that point. Operating in stealth for the past five years, LyteLoop now hopes to launch six prototype satellites within three years. "And the final product ready for deployment in five," Harlev said. The project has become significantly more feasible, Harlev noted, thanks to the dramatic drop in rocket launch costs, as well as satellite uplink/downlink. "Between the $40m we just raised, and what we raised in the past, we have enough to hire all the people that we need and to launch the six satellites in space," he said, even assuming no further rocket launch discounts happen. Should all go to plan, the 300 satellites should be able to store around two exabytes of data, which could increase as more systems are added to the constellation. LyteLoop plans to own and operate its satellites as a "cloud above the clouds" storage provider, and then later sell the terrestrial versions as hardware. The company will have to convince customers to trust a novel storage concept and the idea of shunting their data beyond the surly bonds of Earth. "I believe that you need to show the benefits of [a new approach] overwhelmingly,
to be adopted," Harlev admitted. "People may keep a copy on the ground for a while."

Another advantage Harlev sees is the vast need for data to go somewhere. We have insane, mind-boggling, terrifying quantities of it squirreled away in corporate data centers, sloshing between cloud providers, and stored in great reams of tape. LyteLoop doesn't need it all, but Harlev hopes for some of it: "Let's be honest, what's 100 petabytes between friends?"

Light storage still has one great nemesis: darkness. The light has to keep moving, topped up by a trickle of energy, and any power disruption is instantly and irrevocably fatal. Harlev says the systems have layers of redundancy, safety features, and self-healing software. But, as we learned in Texas and elsewhere, power cuts happen, and this approach cannot handle even a brief outage. LyteLoop says the loss of a single system is not too concerning in itself, as the lower cost of light-based storage means customers would simply keep multiple copies in ground Tubes or satellites in different locations. But it still requires a leap of faith for companies to put their data in a system that must keep moving forever, lest it die.

Over the next five years LyteLoop hopes to bring its photonic space storage platform out of the prototype stage and into reality, making a bold bet that others will see the light. "We believe that technology will work," Harlev said. "So far, we have not encountered a challenge we haven't been able to overcome. But it doesn't mean that it won't happen tomorrow morning.

"We're really confident we can do this. Space is always hard, but we will get it done."
The Smartest Battery Choice for Resilient, Profitable Data Centers
Reduce Your Risk, Cost, Hassle — with Proven Technology.
We get it. Data center leaders and CIOs face endless demands — greater efficiency, agility and operational sustainability — while mitigating risk and lowering costs. Deka Fahrenheit is your answer. It's an advanced battery technology for conquering your biggest data center battery challenges.

The Deka Fahrenheit Difference
Deka Fahrenheit is a long-life, high-tech battery system designed exclusively for fast-paced data centers like yours. Our system provides the most reliable and flexible power protection you need at the most competitive Total Cost of Ownership (TCO) available.
Your Biggest Benefits:

Best TCO for Data Centers: Slash lifetime TCO with lower upfront cost, no battery management system required, longer life and less maintenance.

Proven Longer Life: Field testing and customer experience show an extended battery life that reduces the number of battery replacements over the life of the system.

Environmentally Sustainable: Virtually 100% recyclable. End-of-life value and recycling helps lower the cost of new batteries and ensures a self-sustaining supply chain.

Safe, Dependable: A technology known for its long history as a safe, reliable, high-performance solution — for added peace of mind.

Flexible, Scalable: Expand and adapt as needed, without making a long-term commitment to an unproven battery technology chosen by a cabinet supplier.

Trusted Battery Experts: Located on over 520 acres in Lyon Station, PA, East Penn is one of the world's largest and most trusted battery manufacturers. We'll be there, for the long-term.
Let Facts Drive Your Decision. Balancing your data center needs isn't easy. Deka Fahrenheit simplifies your battery decision by comparing the TCO of a Deka Fahrenheit battery system to lithium iron phosphate.

Overall TCO: Deka Fahrenheit Wins (1036.3kWb - 480VDC Battery System)
[Chart: Total Cost of Ownership ($200,000 to $1,000,000) against years in service (1 to 15); the lithium iron phosphate curve rises above the Deka Fahrenheit curve.]
Data Center TCO Analysis Factors, 1 MW System (1036.3kWb - 480VDC Battery System)

Factor                              | Lithium Iron Phosphate | Deka Fahrenheit
Warranty                            | 10-yr                  | 7-yr
Initial System Cost                 | $236,420               | $180,489
Maintenance Cost Per Battery        | $39                    | $5
Replacement Cost Per Battery        | $1,750                 | $525
Replacement Labor Cost Per Battery  | $25                    | $40
Battery End-of-Life Value or Cost   | $91 cost per kWh       | $33 credit per kWh
Total Cost of Ownership (TCO)*      | $832,662               | $568,111

Approximately $264,551 in Savings
* Space calculations assume floor space costs of $60 per ft2, and Net Present Value (NPV) of 6%. Space assumptions include 2018 NFPA855 requirements with 4’ aisle. Does not include additional costs for UL9540A design changes or facility insurance for lithium iron phosphate systems. Total decommissioning costs for a 1MW Li-Ion battery based grid energy storage system is estimated at $91,000. Source: EPRI, Recycling and Disposal of Battery-Based Grid Energy Storage Systems: A Preliminary Investigation, B. Westlake. https://www.epri.com/#/pages/summary/000000003002006911/ Terms and conditions: Nothing contained herein, including TCO costs and assumptions utilized, constitute an offer of sale. There is no warranty, express or implied, related to the accuracy of the assumptions or the costs. These assumptions include estimates related to capital and operating expenses, maintenance, product life, initial and replacement product price and labor over a 15-year period. All data subject to change without notice.
Specifications: The High Tech Behind Deka Fahrenheit
• Advanced AGM front access design decreases maintenance, improves safety and longevity
• IPF® Technology — Optimizes capacity and reliability
• Microcat® Catalyst — Increases recombination and prevents dryout
• Sustainably designed for recyclability — End-of-life value enhances profitability
• Exclusive Thermal Management Technology System:
  ° THT™ Plastic — Optimizes internal compression
  ° Helios™ Additive — Lowers float current and corrosion
  ° TempX™ Alloy — Inhibits corrosion
Deka Shield Protection
Allow Deka Services to install and maintain your Deka Fahrenheit batteries, and your site will receive extended warranty benefits. Deka Services provides full-service turnkey EF&I solutions across North America. Ask East Penn for details.
Do you have the best battery system for your data center? You can’t afford downtime or extra costs. Contact East Penn for a full TCO analysis.
610-682-3263 | www.dekafahrenheit.com | reservepowersales@dekabatteries.com Deka Road, Lyon Station, PA 19536-0147 USA
Data center NIMBYism:
Dan Swinhoe News Editor
How to engage with local communities properly during data center projects

Opposition to data center development often stems from a lack of understanding of, and engagement with, local communities and stakeholders.
Data center construction continues to accelerate at a rapid pace globally. While many projects get the green light with little objection, some face opposition from the local community. Though it might often be only a vocal minority, attempting to railroad data center proposals through without engaging the local community can lead to larger and more organized resistance, with the potential to kill a project before it can be realized. To avoid Not In My Back Yard (NIMBY) opposition, data center firms need to understand and engage with the location and surrounding community of any proposed site.

Data center opposition is real, and can sometimes kill projects

Though less noticeable or controversial than wind farms and cell towers, and less space-hogging than solar farms, data centers do occasionally face opposition. "It's relatively rare [to see opposition]," says Chris Sumter, EVP of acquisitions at Prime Data Centers. "Most decisions on where to develop data centers are generally made within an understanding of not building near housing or schools and staying in more industrial or open land development sites."
NIMBYism can come in different forms, from different sources, for different reasons. A farmers' group is appealing against a Microsoft data center in the Netherlands because it doesn't think the local agricultural sector can spare the land. A proposed data center on an Illinois golf course has drawn the ire of locals due to its impact on the landscape. Protest groups opposed a Google data center in Luxembourg because of the large energy draw a hyperscale facility would have in a relatively small country (accounting for 12 percent of the nation's power use). A proposed floating data center in Ireland was opposed due to the large amount of space the facility would take up in the dock, and the impact that would have on other businesses in the area. And Welsh data center provider Next Generation Data saw locals argue that its proposed three-story facility in Newport would damage residents' quality of life. Although the lure of local investment, tax receipts, and local jobs often ensures data center proposals get the go-ahead, large-scale projects can occasionally be halted in their tracks in the face of particularly strong opposition. Apple's
proposed data center in Galway, Ireland was met with fierce resistance, including local protest gatherings, and environmental protesters eventually took the iPhone maker to the Supreme Court. After nearly five years in development, Apple canceled its plans and sold off the land in 2019. In 2014, a proposed data center and cogeneration power plant on the University of Delaware's Science Technology & Advanced Research Campus saw strong opposition from the Newark Residents Against the Power Plant group, eventually leading the university to cancel the project. Data Center Knowledge called it the "Battle of Newark." Sumter says Prime has experienced a small amount of resistance on some of its California projects, where local concerned citizen groups have filed appeals with city planners to slow down a development over environmental concerns. "These concerns are generally baseless and are a precursor to a PLA or union labor agreement that is offered up in order to call off the opposition," he claimed. "This has been largely driven by the unions to ensure that all subcontractors on jobsites are union-based contractors."
Work With Communities

Developers building on industrial sites, or on land and buildings which previously held large industrial facilities such as factories or power plants, are less likely to meet opposition, as those communities are generally accustomed to infrastructure and industrial development. But building in areas not historically used to industrial facilities is more likely to cause uproar. "Opposition to a development may typically come from those immediately bordering [the development], or those who have a particular issue," says Andrew Turner, associate director at communications firm Madano. "But you tend to find the opposition doesn't happen overnight. And large, mobilized, well organized opposition is a result of either ignoring or not responding to early warning signs."

Covid and social media are changing outreach and opposition

The ever-increasing reach of social media, coupled with Covid-19, has meant data center firms have had to change how they engage with local communities. It's also
meant the stakeholder landscape and scope for opposition have changed. "In the last year, we've seen that sense of community grow greater and ever stronger, ironically using the data that we're trying to build infrastructure to support," says Turner. "Social media means that opposition groups can move a lot faster. They can mobilize their networks and grow their reach and their numbers a lot quicker." Social media also means the stakeholder landscapes are much bigger than they were previously. Companies are forced to change how they engage with communities, especially now that Covid-19 has restricted in-person gatherings. "It's not so much about dropping 100 flyers through local homes
now. Influencers, stakeholders, and representatives of the community may not necessarily live just around the corner from the development site any more." "We've moved away from parish council meetings and village halls, to having to be more digital. But that has meant that some engagement is lost, and we are working out how to ensure those that may choose not to engage digitally can get involved." Online petitions have become increasingly popular in 2020-2021 because they are easy to set up and can quickly gather attention and sign-ups, although they can also draw resistance from special interest groups well beyond the locality of the facility itself.
Turner adds that these special interest groups can often be very quick to mobilize opposition. In the UK these could be social groups such as horse riders, night joggers, or cyclists, and they can be hard to account for when trying to understand local communities. "They're already active, they're already together, and they may not choose to engage with traditional channels; they may not be a recognized group with the council or a recognized residents' association."

How to engage with locals on data center construction projects

It's important for data center firms to understand the location and proposed site: its history and previous use, the community in the surrounding area and any connection it has to the proposed site, and the political landscape the developer is seeking to enter. Having the right information, and getting it to the right people, can ensure that local communities accept, or at least don't resist, proposed data centers. "If you're a new developer or an investor that wants to come into a community, you really need to take time to get to know that community to understand their priorities, their concerns," says Turner. "Work with that community to understand, engage, and be part of it rather than just being an imposition on it." "Developers need to be authentic. People live there, they know the area, and if you try and tell them that this is the right site and
"If you're a new developer or an investor that wants to come into a community, you really need to take time to get to know that community to understand their priorities, their concerns.” you haven't got the proper evidence to back it up, they'll see straight through it and you lose credibility.” Robert Thorogood, executive director at Hurley Palmer Flatt’s data center division, says there may often be concerns around local environmental issues such as noise, power usage, water usage, and emissions, and it’s important to understand and satisfy any such concerns that might arise around how you’re planning to address them. “If a data center has really large flues, for example, some people could be alarmed at seeing such large chimneys,” he says. “But explain the rationale behind them. When you do that, there’s more tolerance. For those concerned with noise generation, we work closely with the local environmental officer and develop up a strategy and significant details showing how we won’t lift the local background noise level. “More and more planning authorities are becoming increasingly sensitive around the specification of data centers in terms
of where the energy is coming from, particularly with regards to any local carbon offsetting commitments." As well as noise, power, and pollution, aesthetics can be a common concern, so be conscious of how a data center might impact its surroundings, and be willing to offer concessions around facades, landscaping, and tree planting. Turner thinks it's important to pitch data centers as a new form of utility, on a par with the likes of power lines and gas pipes: an essential part of today's world that people just have to accept. "We need the Internet in our day-to-day lives, and we need to accept that there is associated infrastructure," he says. "Being data-hungry, going paper-free, streaming something to watch, finding something to buy, all has an associated impact. "Developers need to take communities on that journey. The need case for data centers is very strong, but there's a lot of work still to do around getting people to understand it."
Secrecy and speed don't help the cause

Data center companies often use subsidiaries to hide their involvement in projects. Facebook has used companies such as Greater Kudu LLC, Raven Northbrook LLC, Starbelt LLC, and Stadion LLC. Google has previously used Jasmine Development LLC, Fireball Group LLC, and Montauk Innovations LLC. Amazon recently used Willow Developments LLC to hide its involvement in redeveloping the former Didcot A Power Station site in Oxfordshire, UK. Hurley Palmer Flatt's Thorogood says this could be for commercial reasons that might impact the cost of land or construction, for example. But it can come with a risk. Challenges can arise when developers try to do things quietly or secretly, as the community feels like it is being intentionally left out. It can also be hard to get away with, as people will always be interested in new large-scale developments in their area. "Flying under the radar can sometimes be a benefit. But if you come into problems, you'll get caught out," he says. "If you turn up and you say, 'hello, I'm looking to put this data center around the corner and the application's going in next Thursday' – and I have seen that done – the community will feel rushed and that there's no opportunity to influence the development." While some companies might be willing to gamble on operating quickly and quietly, Turner says that doesn't excuse the need to come prepared in terms of understanding the heritage of the site or the potential benefits it can bring the community. "You can go under the radar, but have that compelling story. That is where people have fallen down; in the process of not explaining the client and the end user, the message hasn't been there too. That compelling narrative can absolutely still come through, regardless of who the applicant is."

Highlight the positives of a data center

Pointing out the positives of a data center can be critical in developing a support network for your projects. Emphasizing job creation, especially if building in an area where employment has suffered, is a simple way to gain support. Another example would be to highlight the biodiversity net gain if the site includes
plans for additional green space, or the energy benefits if there's a district heating element to the development. "A lot of developers forget that," says Turner. "Not only are we providing data infrastructure for the wider community to do their shopping on, but fundamentally why it's a good thing for their community is an often forgotten part of the development story." "Some people will always oppose sites, but if you structure your communications and engagement and you're able to understand their priorities and their needs, you can work with the community to shape the proposals to form an acceptance." Sometimes firms will do local outreach and help the community completely outside of the day-to-day running of data centers. Sabey recently provided a $15,000 grant to purchase Chrome tablet computers for the district's youngest early learners in Douglas County, Washington. Google has made similar donations of equipment. Rennie Dalrymple, managing director at Concert, says that on one data center development scheme the company is working closely with Uxbridge College to see what the team can do in terms of technical qualifications such as BTECs and NVQs, and to help provide specialist skills to young people in the area. "Emotions can snowball," adds Prime's Sumter. "Data center developers can do a better job at explaining the benefits that a data center can bring to a region, and thereby guiding the narrative. "Be prepared to discuss the good and defuse the perception of big box, noisy, and polluting sites. Be proactive: reach out to community leaders and neighborhood organizations, invite them in for discussions, and show them the renderings and the data collected to demonstrate the care taken to ensure a well thought out, low-impact project." "Don't be afraid to tell a compelling story," says Turner. "People care about their local area, wherever they live. They're looking for partners, for someone to come and genuinely add to their community. "It might seem like an added cost or a hidden cost or an ongoing burden, but forming partnerships in the community is the key to successful and sustainable development."
The Dos and Don'ts of community engagement
Do your homework: Understand the site, its history, and that of the community. This should inform the story you tell about your development.

Do get your messaging right: Lean into the site's heritage, and emphasize positives such as job creation or environmental/community benefits.

Do understand your stakeholder landscape: Speak to local stakeholders and understand the community, to ensure you don't miss anyone out when you're trying to engage.

Do research special interest groups: Groups that focus on one particular topic can be vocal and quick to mobilize.

Do engage early: Ensure the community has time to share its concerns and can have input on some of the proposals.

Do make information available: Have a clear communication strategy, make information easy to find, and give residents clear channels to engage with you.

Don't come unprepared: Not understanding the site or the community and just showing up with proposals will anger locals.

Don't forget to follow through: If commitments are made around meetings, communications, or engagement, developers need to follow them through; failure to fulfill promises will quickly turn people against a project.
The Next Crisis
The next crisis is coming
In the few short months since the last magazine, we've seen data centers and semiconductor fabs burn down. We've suffered record storms and lengthy droughts, and watched the global supply chain grind to a halt when one wayward ship blocked the Suez Canal. Meanwhile, a conspiracy theorist blew himself up next to a telecoms facility, while far-right extremists have targeted cell towers and network infrastructure. All this while we're still recovering from the great pandemic. The data center industry has always been a cautious one, relying on technology and practices that work, instead of putting too much faith in what's new. That has served it well, helping facilities keep going when other industries have struggled.
But too much conservatism can also be a problem. Designs and procedures from years ago treat 'hundred year floods' as something that happens once in a lifetime, if that. Now, they can happen annually. Weathering the next great wave of natural and human disasters will require keeping the focus on what works, but it will also demand new thinking and aggressive innovation. A slightly longer storm in Texas, a slightly worse drought in Taiwan, a more carefully planned terrorist attack... the systems we have in place now would have faltered. The fact that data centers have mostly survived recent storms, and have even prospered during the pandemic, should not fool us into thinking the network is secure. The Internet is ridiculously fragile. A reliance on the past is not going to fix that.
Sebastian Moss Deputy Editor
MILLIONS OF STANDARD CONFIGURATIONS AND CUSTOM RACKS AVAILABLE
CUSTOMIZATION IS OUR STANDARD.
THE POSSIBILITIES ARE ENDLESS... MILLIONS OF CONFIGURABLE RACKS AVAILABLE IN TWO WEEKS OR LESS
MADE IN THE USA | www.amcoenclosures.com/data | 847-391-8100
an IMS Engineered Products Brand