Issue 44 • April 2022 datacenterdynamics.com
The world’s largest data center: We search for the biggest facility on the planet
The battle for Northern Virginia: The industry’s primary market turns against it
Dublin’s data center dilemma: The end of data centers in Ireland’s capital?
Trouble in Soviet Florida: Cryptocurrency risks upending Abkhazia
ISSN 2058-4946
Contents April 2022

6 News: Meta fallout, data center fires, bankruptcies, Russia’s invasion, ISIS, and how AWS makes money
12 A forgotten history of computing: Three decades of photographs and films risk being lost. We publish some of them for the first time.
19 The CEO interview: “We had a strong conviction from day one that we are only in Asia,” says PDG’s Rangu Salgame. “So no other geographies.”
23 The biggest on the planet: We go in search of the world’s largest data center
28 Dublin calls time on data centers: Is this it for data centers in Ireland’s capital?
31 The Automation supplement: Drones, network automation, and AR construction
45 The battle for Northern Virginia: Locals fight back against more facilities in the data center heartland
49 Singapore’s comeback: The country hopes to bounce back after its own moratorium
52 Trouble in Soviet Florida: Abkhazia risks being torn apart by Russian-backed crypto entrepreneurs
56 SMART cables: Using submarine cables for science
60 Spinning up a quantum computer: Quantum Motion turns to silicon
65 Data warehouses: Logistics giants are coming for the data center
68 Terrestrial radiation & FPGAs: How to protect your kit
70 Sustainable financing: Green ways to get your green bucks
74 Op-ed: The end of techno-optimism

Automation Supplement (p31)
INSIDE
> Toward the self-driving data center
> The Network Conundrum: Automated networks can be a double-edged sword. We need intelligent automation
> AR for building: Engineering grade augmented reality can help get a project right first time
> Drones in a data center: Security guards may need a helping hand to patrol today’s mega-campuses
From the Editor Can data centers be good neighbors?
As we write, there is a war in Europe - and the data center sector is tangled up in it. We have news of the technological fallout from Russia's unprovoked invasion of Ukraine (p7-8). But it seems there's increasing tension within our sector, quite independent of the wider geopolitical struggles. Everywhere we look, data centers are at the center of disputes, with protests, bans and moratoriums the order of the day.
How data centers relate to their neighbors matters, because they are so huge.
In the US, an actual battlefield from the US Civil War might just be the site where the growth of the Northern Virginia mega-hub finally comes to a halt. Residents near the Manassas Historic Battlefield don't all agree on the benefits of a QTS-driven development along a rural belt (p45). Meanwhile, Irish politicians have accepted a de facto halt on developments near Dublin that threaten the greening (and the stability) of the country's electrical grid (p28). They do things more quietly in Singapore, where a moratorium no one spoke of is ending with conditions that are mostly secret. There's a conflict between the country's size and its ambition to be a hub (p49).
In Abkhazia, data center conflict is more extreme, and likely to lead to actual gunfire. Russian cryptominers seized energy in the post-Communist chaos of the separatist state. And that gives the Russians an opportunity (p52).
Size brings stress
Data centers can be good or bad neighbors. That's become a serious issue, because they now have a significant size and weight compared with the states where they live. We look at other aspects, including how they are coming up against adjacent sectors like logistics (p65), and how they can be financed sustainably (p70). In APAC, Rangu Salgame wants to avoid conflicts - he's CEO of the pan-Asian developer Princeton Digital Group (p19). We also asked ourselves a simpler question: just how big is the world's largest data center? The answer wasn't quite what we expected (p24).
3.7m sq ft (and counting): the current size of Meta's Prineville campus
Partner Content Editor Claire Fletcher
Head of Partner Content Graeme Burton @graemeburton
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Designer Eleni Zevgaridou
Head of Sales Erica Baeta
Conference Director, Global Rebecca Davison
Conference Director, NAM Kisandka Moses
Conference Producer, APAC Chris Davison
Head Office
PEFC Certified This product is from sustainably managed forests and controlled sources PEFC/16-33-254
Peter Judge DCD Global Editor
Dive even deeper
Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color: Events, Debates, Training, Awards, Intelligence, CEEDA.
News Editor Dan Swinhoe @DanSwinhoe
DatacenterDynamics 22 York Buildings, John Adam Street, London, WC2N 6JU
Editor-in-Chief Sebastian Moss @SebMoss
Chief Marketing Officer Dan Loosemore
Tech challenges
DCD always focuses on the technology challenges of the sector - past, present, and future. Our cover features exclusive images from a superb archive documenting crucial years in IBM's history (p12). Our automation supplement predicts self-driving data centers (p31). We learn how subsea cables could sense global disasters (p56). And we meet a startup that wants to build quantum tech with old-school silicon (p60). If that leaves you feeling optimistic for tech, your reality check is on p74.
Meet the team Executive Editor Peter Judge @Judgecorp
www.pefc.org
© 2022 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.
Whitespace
News
NEWS IN BRIEF
The biggest data center news stories of the last three months
Samsung joins Tomorrow Water to combine data centers and sewage
Data centers placed at sewage plants can benefit from locally-created biofuel, and waste heat can assist the water treatment process.
Spanish police raid indoor marijuana plantation, find it is actually an illegal crypto mining operation Either way, the operation is smoked; the makeshift data center was being used to mine Bitcoin, using public electricity.
Japanese snow-cooled data center opens an eel farm The White Data Center (WDC) in the city of Bibai has imported 1700 elvers (young eels) and will farm them in tanks at the data center. The data center is cooled with melted snow, gathered in winter and used all year round.
Facebook owner Meta suspends Zeewolde, Netherlands data center due to political pushback Facebook owner Meta has paused plans to build a huge data center in the Netherlands. The facility in Zeewolde gained approval from the local council, but has faced increasing political opposition - including the Dutch Senate voting in favor of reconsidering the project. The data center would have been the Netherlands’ largest, and one of the largest in Europe, with five halls and 200MW. “We strongly believe in being good neighbors, so from day one of this journey we stressed a good fit between our project and the community is foremost among the criteria we consider when initiating and continuing our development processes,” Meta said in a statement. “Given the current circumstances, we have decided to pause our development efforts in Zeewolde.” After the local council approved the project, party Leefbaar (Liveable) Zeewolde ran on a platform of opposition to the data center, citing environmental concerns and a lack of local input. The party gained a majority in the municipality. The Dutch government also in February enacted a nine-month moratorium on
permits for data centers larger than 10 hectares. Last June, the Dutch province of Flevoland, where Zeewolde is located, announced an indefinite halt on data center developments. Meta’s project predated these bans, but the company put it on hold anyway. Meta said that it would continue to cooperate with the municipality, and may restart the project at a later date. “Our space is limited, so we have to make the right choices,” said housing and planning minister Hugo De Jonge in a letter to the House of Representatives, when the Dutch government said it would enact its moratorium. “Hyperscale data centers take up a lot of space and consume a disproportionate amount of available renewable energy. That is why the cabinet wants to prevent hyperscale data centers being built throughout the Netherlands.” Minister De Jonge will investigate the possibility of only allowing new hyperscale data centers at coastal landing points for wind energy, “if there is room for this.” bit.ly/FacebookVSZeewolde
Small plane makes emergency landing across from NTT’s Dulles data center campus The 1977 Cessna 210 landed without major damage or fire, coming to a halt when it hit an embankment across the street from the data center. The plane went down due to engine troubles.
Microsoft accused of spending hundreds of millions on bribes A former senior director at Microsoft has accused the company of paying illegal bribes to win business deals in the Middle East and Africa. He estimates that “a minimum of $200 million each year” goes to Microsoft employees, partners, and government employees.
Top US markets switched on 493MW of data center space in 2021 In the top seven US markets, data center leasing was 31 percent bigger than the previous record year, 2019, according to CBRE. It was also a full 50 percent bigger than 2020, which dropped due to the pandemic. Northern Virginia was the biggest market, with more than 60 percent of the nation’s total new data center space.
Sungard files for second US bankruptcy in three years Sungard Availability Services has filed for Chapter 11 bankruptcy, three years after emerging from its last bankruptcy. The US and Canadian filing comes just two weeks after its UK division entered into administration, blaming surging UK energy prices and landlords declining to reduce rent costs despite the pandemic. The managed IT company filed in the US Bankruptcy Court for the Southern District of Texas with about $424 million in secured debt. During its last Chapter 11 it was able to reduce its debt by more than $800 million, but said that that did not resolve “challenges inherent to the company’s operating structure.” Among the issues listed were high leasing
costs and underused space. CEO Michael Robinson blamed the Covid-19 pandemic and rising energy prices - although neither existed at the time of the first bankruptcy. “Like many companies, our business has been affected by challenges in our capital structure, driven by the global Covid-19 pandemic and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation, and reduction in demand for certain services,” said Robinson. “Over the past three years, we’ve made significant network, product, and infrastructure investments which are being well-received by customers and gaining
significant traction. We believe the Chapter 11 process is a right and critical step forward for the future of our business and our stakeholders.” Sungard has taken out a $95m loan to fund operations during the bankruptcy, which is expected to continue until mid to late summer. Sungard has about $5m in cash, and is considering a sale of its assets or converting its existing debt to equity. Its UK business is also looking for buyers. Court documents show that it owes money to landlords North Broad Lessee, NetWorks Associates, Russo Family Ld, Redwood DC Assets, Landmark Infrastructure Partners, Landmark Dividend, and 410 Commerce. Other creditors include Amazon Web Services, Microsoft, Vertiv, Ensono, and Micro Focus. The company operates around 16 data centers and 14 workplace recovery facilities across North America. Sungard’s largest shareholder is investment firm Angelo Gordon. Blackstone Credit, Carlyle Group, FS/KKR Advisor LLC, and Arbour Lane Capital Management LP also hold stakes in the company. The company noted its operations in Ireland, France, India, Belgium, Luxembourg, and Poland are not impacted by the proceedings in the US, Canada, or UK. Sungard AS is advised by Akin Gump Strauss Hauer & Feld LLP, Jackson Walker LLP, Cassels Brock & Blackwell LLP, FTI Consulting, Inc., DH Capital, LLC, and Houlihan Lokey Capital, Inc. bit.ly/Sunsetgard
Ericsson workers were kidnapped when telco sent them to negotiate with ISIS
Swedish telecoms giant Ericsson sent contract workers into land controlled by the Islamic State in Iraq, leading to some of them being kidnapped. The revelation comes after Ericsson was forced to admit it had paid ISIS to allow it to operate in, and travel through, land controlled by the terrorist group. After it admitted to the crime earlier this month, shares in the company fell more than 14 percent. In a leaked Ericsson report obtained by the International Consortium of Investigative Journalists and seen by BBC News Arabic, the company said that it had sent contractors into IS-held territory. The 2019 internal investigation into corruption and bribery said that a senior Ericsson lawyer recommended shutting down the company’s operation in Iraq after ISIS took over Iraq’s second-largest city, Mosul, in June 2014. But he was overruled by senior managers who claimed the action was “premature,” and would “destroy” Ericsson’s business in the country. The company insisted that contractors continue working in the region, putting their lives at risk, the report found. An undisclosed number of contractors were taken hostage. bit.ly/DoNotWorkWithISIS
US Treasury exempts Internet communication providers from Russia sanctions The US Treasury has exempted the provision of Internet communication services from US sanctions against Russia, imposed in response to its unprovoked invasion of Ukraine. The move was welcomed by human rights and open access groups. “The exportation or reexportation, sale, or supply, directly or indirectly, from the United States or by US persons, wherever located, to the Russian Federation of services, software, hardware, or technology incident to the exchange of communications over the Internet, such as instant messaging, videoconferencing, chat and email, social networking, sharing of photos, movies, and documents, web browsing, blogging, web
hosting, and domain name registration services, that is prohibited by the RuHSR, is authorized,” Bradley T. Smith, the deputy director at the Office of Foreign Assets Control, said. There are a few exceptions, however, such as working with certain Russian financial institutions or other transactions prohibited by Executive Order (E.O.) 14066 or E.O. 14068. Several companies providing the ‘exchange of communications over the Internet’ have reduced their business in Russia, or ceased altogether. Internet backbone companies Cogent and Lumen said that they were ending data transfers to Russia, although the latter business appears to still be connected.
The London Internet Exchange (LINX) disconnected Russian telecoms companies Megafon and Rostelecom. Cisco, Google, Microsoft, and Oracle have also suspended work in the country, along with Apple, Ericsson, PayPal, Mastercard, Visa, Intel, AMD, Nvidia, TSMC, and Nokia. Amazon Web Services will not accept new customers in Russia and Belarus. But content delivery network (CDN), Edge, and web infrastructure companies Cloudflare and Akamai this week said that they would continue to operate in Russia, arguing that Russian citizens need access to the Internet. The Ukrainian government has called on organizations like RIPE NCC and ICANN to help disconnect Russia from the global Internet, but human rights groups say a cutoff would block free information access. “This is exactly what Access Now, Human Rights Watch, Electronic Frontier Foundation, Article 19, and 50+ global, regional & local orgs & individuals from Russia, Ukraine, etc. have been asking for,” Access Now tech-legal counsel Natalia Krapiva said. “The US Treasury decision helps Russian independent media, human rights defenders and anti-war protesters who have been relying on US technologies to report on and oppose Russian aggression in Ukraine to continue their work & communicate & organize in a safer way.” Krapiva previously criticized Namecheap, Slack, and Mailchimp for dropping services to all Russian customers, including human rights groups and independent media. “What’s your excuse now?” she said. bit.ly/KeepAnOpenChannel
Russian gov’t faces computing shortage due to sanctions, may seize data center IT
The Russian government may take over the IT resources of companies that have departed the country due to its unprovoked invasion of Ukraine. The country’s public sector faces an acute computing shortage due to sanctions, and is also considering taking up additional space in Russian-based commercial data centers. The government may have as little as a month and a half of data storage supplies on hand. It is exploring potential emergency measures, with the Ministry of Digital Transformation holding a meeting with executives from Sberbank, MTS, Oxygen, Rostelecom, Atom-Data, Croc, and Yandex. Much of that storage is used for Russian ‘smart city’ surveillance efforts. Russian business publication Kommersant reports that authorities are preparing to buy out all the capacity at commercial data centers (potentially including IT already contracted out), and take over the IT resources of companies that have announced their withdrawal from Russia. Russian businesses are seeking data center space in Russia, owing to the effects of sanctions, with companies like AWS not serving new customers in Russia, while others like Equinix have ended all Russian business. bit.ly/PanicShopping
Elon Musk’s SpaceX was paid by US to send Starlink terminals to Ukraine The US government gave SpaceX millions to send Starlink terminals to Ukraine to ensure connectivity during Russia’s invasion. Elon Musk’s rocket company previously said that it had not received money for its Starlink deliveries, and cast the whole effort as a charitable endeavor. However, the company did donate thousands more terminals, along with those funded by taxpayers. SpaceX President Gwynne Shotwell told CNBC: “I don’t think the US has given us any money to give terminals to the Ukraine.” But documents seen by The Washington Post show that USAID purchased around 1,500 Starlink terminals at $1,500 apiece, and spent $800,000 for transportation, adding up to over $3 million in public funds. It later bought another 175 units. USAID also paid for the shipping of nearly 3,700 terminals, which were likely donated by SpaceX. The French government also covered the cost of delivering 200 Starlink kits, while Poland is believed to have helped with some deliveries. bit.ly/MuskScoresAnotherGovContract
Viasat: Our network was hit by a “multifaceted and deliberate” cyberattack
Satellite operator outlines how threat actors targeted its KA-SAT network
Viasat has provided an overview of the cyberattack that crippled its European satellite services, especially in Ukraine. On the same day that Russia invaded Ukraine, Viasat began suffering issues with its KA-SAT network. The company later acknowledged that it was engaging with cybersecurity firms to investigate the issue in the midst of a suspected cyberattack. “On 24 February 2022, a multifaceted and deliberate cyberattack against Viasat’s KA-SAT network resulted in a partial interruption of KA-SAT’s consumer-oriented satellite broadband service,” the company said in a breakdown of the incident. “While most users were unaffected by the incident, the cyberattack did impact several thousand customers located in Ukraine and tens of thousands of other fixed broadband customers across Europe.” Viasat said the attack was focused on a consumer-oriented partition of the KA-SAT network that is operated on Viasat’s behalf by a Eutelsat subsidiary, Skylogic, following a 2021 acquisition arrangement. On the day in question, high volumes of “focused, malicious traffic” were detected emanating from several SurfBeam2 and SurfBeam 2+ modems and/or associated customer premise equipment (CPE) located
within Ukraine, making it difficult for many modems to remain online. Other modems emerged on the network to continue the targeted DDoS attack throughout the next several hours, degrading the ability of other modems to enter or otherwise remain active on the network. Viasat and Skylogic then began to observe a decline in the number of modems online in the same consumer-oriented partition. Tens of thousands of modems dropped off the network and never tried to re-connect. Viasat said the attack impacted a majority of previously active modems within Ukraine, and a “substantial number” of modems in other parts of Europe. “We believe the purpose of the attack was to interrupt service. There is no evidence that any end-user data was accessed or compromised, nor customer personal equipment (PCs, mobile devices, etc.) was improperly accessed, nor is there any evidence that the KA-SAT satellite itself or its supporting satellite ground infrastructure itself were directly involved, impaired or compromised.” A German wind turbine manufacturer said remote operation of more than 5,000 turbines had been impacted by the disruption. bit.ly/SatAttack
Nokia helped build Russian state surveillance network Nokia helped build Russia’s state surveillance network, the System for Operative Investigative Activities (SORM). The company, which this month said that it would stop sales in Russia, spent more than five years providing equipment and services to link SORM to Russia’s largest telecom service provider, MTS. According to company documents obtained by The New York Times, Nokia worked with state-linked Russian companies to plan, streamline and troubleshoot the SORM system’s connection to the MTS network. It provided SORM-related work at facilities in at least 12 cities in Russia, and worked with the company that manufactured the SORM hardware, Mavin. SORM is used by Russia’s intelligence service, the FSB, to eavesdrop on phone calls, intercept text messages and emails, and track Internet communications. It is used to help trace dissidents and is tied to the assassination of Kremlin critics. bit.ly/AidingandAbetting
Data center fire hits Manila’s Supreme Court
The fire started at around 6am and was swiftly brought under control by the Bureau of Fire Protection, although traffic was heavy for a time because streets were closed. The Supreme Court’s website remained offline for more than a day. Witnesses told Dobol B TV’s Manny Vargas that a UPS at the data center had caught fire, reporting the sound of a blast and a smell of burnt wire. The Supreme Court’s chief public information officer said the UPS had exploded, according to a Manila Times report. The Supreme Court was able to continue with plans to announce Bar examination results on the same day, according to Associate Justice Marvic Leonen: “This will NOT affect the release of the results of the bar,” he tweeted. “We have secured the data files and our chambers and OBC [Office of the Bar Confidant] are fully operational. As has characterized our operations, we have preparations for every contingency. Keep safe everyone.”
OVHcloud fire report: SBG2 data center had wooden ceilings, no extinguisher, and no power cut-out
One year after the devastating fire which destroyed an OVHcloud data center in Strasbourg, the local firefighters have issued a report with strong criticism of the French operator’s facility. The Bas-Rhin fire service says that the SBG2 data center had no automatic fire extinguishing system and no general electrical cut-off switch. The whole building went up in flames on March 10, 2021, and other data centers on the site were also damaged. More than 130 customers have joined a class-action suit alleging that OVHcloud failed in its responsibility and has not given enough compensation to businesses that suffered. According to the report, firefighters on the scene found electrical arcs more than one meter long flashing around the door to the power room, and it took three hours to cut off the power supply
because there was no universal cut-off. “The flashes were impressive and the noises deafening,” the report said. The power room had a wooden ceiling designed to withstand fire for one hour, and the electrical ducts were not insulated. Once the fire escaped from the power room, it grew rapidly. The report says that “the two interior courtyards acted as fire chimneys.” Journal Du Net claims the spread of fire may have been accelerated by the site’s free cooling design, which encourages the flow of outside air through the building to cool the servers. OVHcloud told DCD that it could not respond in detail to the fire report, as it is still working with its insurers and government agencies on a formal report. bit.ly/OVHupinaCloud
Peter’s OVHcloud fire factoid 103 firms have joined the OVHcloud class action, and four large players are taking action individually. Law firm Ziegler & Associés says OVHcloud has offered a flat rate of €900 in compensation each.
bit.ly/InterruptiblePowerSupply
Fire at Iranian telco data center causes widespread Internet outages
A fire at a telecommunications building in Tehran caused Internet outages across Iran in early March. Firefighters were seen outside Telecom Infrastructure Company (TIC), the monopoly provider of telecom infrastructure to all public and private operators in Iran. Mahdi Salem, Deputy Telecommunication Minister, said that an “electrical connection” had caused the fire, but it was later repaired. The data center is used by Iran to help censor the local Internet - making it a bottleneck that, if disrupted, brings down a large chunk of local connections.
Around 64-69 percent of Iranian citizens are believed to use the Internet, as of 2018. But the government heavily censors the Internet that they can reach, blocking platforms like YouTube, Facebook, Twitter, Blogger, Telegram, Snapchat, Medium, Netflix, Hulu, and most major western news outlets. Its data centers are built with black market tech due to US and European sanctions. Last year it launched Simurgh, its most powerful supercomputer to date. With one petaflops of power, it is 100 times more powerful than previous Iranian supercomputers. bit.ly/OneWayToCensorTheInternet
Amazon Web Services owns 11.9 million square feet of property, leases 14.1 million square feet
In its latest 10-K filing, Amazon Web Services has revealed how much property it has control over - 26 million square feet (2,415,500 sq m). That is split between 11.9 million sq ft (1,105,500 sq m) that it owns and 14.1 million sq ft (1,310,000 sq m) that it leases. The filing does not disclose how much of that is actually data center white space, rather than other parts of its data center, sales, and additional property. However, it does not include corporate facilities or headquarters. “Property and equipment acquired under finance leases was $11.6 billion and $7.1 billion in 2020 and 2021, reflecting investments in support of continued business growth primarily due to investments in technology infrastructure for AWS,” the filing states. Amazon operates 84 availability zones in 26 regions. It does not disclose the location of its data centers, nor which wholesale data center companies it partners with. In 2018, WikiLeaks revealed the locations of Amazon’s data center footprint from 2015. At the time, the company operated some 38 facilities in Northern Virginia, eight in San Francisco, another eight in its hometown of Seattle and seven in northeastern Oregon. In Europe, it had seven data center buildings in
Dublin, Ireland, four in Germany, and three in Luxembourg. Over in the APAC region, there were 12 facilities in Japan, nine in China, six in Singapore, and eight in Australia. It also housed infrastructure in six sites in Brazil. It also disclosed that the company had partnerships with Equinix, CyrusOne, Digital Fortress, Hitachi, Terremark, KVH, KDDI, Keppel, Tata Communications, Colt, Global Switch, iseek-KDC, NextDC, and Ascenty (now owned by Digital Realty). Since then, Amazon has aggressively expanded its data center presence. Last year, it spent $32.5 million and $40m just to buy two plots of land in Virginia, where it already operates around 40 data centers in Haymarket, Manassas, Ashburn, Chantilly, and McNair. It also plans to build on a 41-acre plot in Fauquier County and has filed to build four data centers on an empty plot of land in Manassas. That year, DCD discovered the company was behind a large data center in Oxfordshire, UK, and a smaller facility in Swindon - both of which it tried to keep secret. It is also trying to build up to four upcoming data centers in Clonshaugh, Dublin, and a data center design site in Croatia, to name but a few. bit.ly/TheLandBaron
Amazon planning 100MW data center campus in the City of Gilroy, California

AWS is looking to build a data center in the City of Gilroy, California. The company paid $31.3 million for the plot along Highway 10 in 2020.

According to documents filed with the City, the company is looking to build two 49MW data center buildings on the site, totaling 438,500 square feet (40,700 sq m), along with two 50MW Battery Energy Storage Systems (BESS) and some ancillary buildings.

The site would be developed in two phases. Phase I would include the first single-story data center building, totaling 218,000 sq ft, a 50MW BESS facility, and 25 diesel generators. Phase II, constructed within 4-7 years, would consist of the second single-story data center building, a second 50MW BESS facility, and as yet undetermined 'alternative backup generation technologies' to avoid using diesel-fired generators.

The company said the facilities would have an average PUE of 1.18. bit.ly/BESSBuy
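For context, PUE (power usage effectiveness) is the ratio of total facility power to the power delivered to IT equipment, so 1.18 implies fairly modest overhead. A rough sketch of what that figure means in practice (our own illustration; it assumes the full 49MW of one building is IT load, which the filing does not state):

```python
# Rough overhead implied by the stated average PUE of 1.18.
# PUE (power usage effectiveness) = total facility power / IT equipment power.
pue = 1.18
it_load_mw = 49  # assumed IT load: nameplate capacity of one building

total_facility_mw = it_load_mw * pue
overhead_mw = total_facility_mw - it_load_mw  # cooling, power conversion, etc.

print(f"Total facility draw: {total_facility_mw:.1f}MW")
print(f"Overhead beyond IT load: {overhead_mw:.1f}MW")
```

On those assumptions, each building would draw around 58MW in total, with under 9MW going to cooling and electrical losses.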
Global governments gave Amazon $4.7bn+ in subsidies for data centers & warehouses

Over the past decade, Amazon received at least $4.7 billion in state and federal government subsidies around the world to support its build-out of data centers, warehouses, offices, call centers, and film production projects.

A new report by watchdog group Good Jobs First and labor federation UNI Global Union collated data from public records, investor reports, company statements, and government marketing materials. But it noted that "due to poor disclosure practices in most countries, the costs of such deals are hidden: the total is undoubtedly significantly higher."

Among the known subsidies are those for data centers in the US, Argentina, Bahrain, Chile, and Ireland, as well as for two data centers in China, and four in India.

"We especially urge all nations, including the United States, to disclose subsidies given to data centers (including energy discounts and utility tax exemptions), which are the most opaque," the report states. bit.ly/TheyNeedTheHelp
Issue 44 • April 2022 | 11
DCD Magazine #44
A lost history of Canadian computing risks being forgotten

Sebastian Moss
Editor-in-Chief

Photography courtesy of IBM
Forgotten Photographs
In 1957, George Dunbar made one of the best decisions of his life. He joined IBM. "I feel so lucky, because IBM was looking for a photographer, and I was hired as their first," he told DCD.
"They were buying all of their photography from commercial studios - mainly employee pictures, because they had an extensive employee magazine, a country club, a golf course, a baseball diamond, lawn bowling, and tennis courts," he explained. "I was pretty lucky to get that job and be the first photographer there."
Art from mainframes

Dunbar would go on to work with IBM out of Canada for an incredible 32 years, from 1957 to 1989, amassing an enormous portfolio that documents a little-studied computing revolution. Now those records could be lost.

"I was there when they had the huge mainframe computers in 1957, and I saw all the changes in computers over the coming years. Now, of course, I'm amazed that people are walking around with a computer in their pockets," he said. "It's more powerful than those mainframes."

Dunbar's photography is powerful in its own way, too. Given what could be seen as a rather lifeless and corporate remit, he managed to inject art and beauty into his job.

"I would create my own assignments," he said. "I photographed all the manufacturing procedures, something the company had never done before. They were quite surprised to see the kind of photographs I was able to produce; they really ate those up and started publishing them in newspapers and magazines, and in some of the trade magazines."
He also had to do what he called 'grip-and-grins,' the more traditional photographs of white men in white shirts and dark suits shaking hands. "I took thousands of those, I never really enjoyed that," he said.

Where he took joy was in photographing the systems, as well as use cases. "I was sent in to do assembly plants, steel mills, and all kinds of industries across Canada, and see all those technical operations."

Among his favorite works are his images of memory cores. "I've seen a lot of pictures of memory cores. They are all basically close-ups. I took a photograph with a face behind it, which I think was very unique."
Another was his 'glass head' photograph, where he used a prop his son brought home: "I saw that it would pick up the reflection from the computer screen. I played with it for a while and that was what I came up with."

Despite the photo being from the early 1970s, Dunbar still thinks it could be relevant. "I would hope someday that that photograph might be used to illustrate artificial intelligence," he said, noting that nebulous modern concepts like AI are much harder to visualize than the physical hardware of his day. "Fortunately, it's not my challenge anymore."

Today's photographers have another challenge: "Data centers and servers are not as interesting as they used to be in the old days, right? They're just grey boxes, filled with rows and rows of grey boxes. There's nothing to take a photograph of these days, because they're not very interesting."

Back then, they were much more visually fascinating, he said. "We had them in glassed rooms, open to the public in some cases, and you could see the spinning tape drives and a lot of the mechanisms at work."

The physicality of the servers and storage was a side effect of the level of technology of the day. The open and glassed nature was a choice. "A lot of computer rooms were in show windows on the ground floor of an IBM office building," he said. "It was kind of a promotional thing. People in the streets of Toronto or Montreal would always see a big IBM computer room on Main Street," with the mainframes used to sell the concept first to enterprises, and then solidify the company's brand with consumers for PCs.
One IBM data center was even located at the ground floor of a major Toronto hotel. When The Beatles came to town, they stayed at the hotel, with crowds filling the street as the IBM facility whirred in the background. "It was a huge corporate propaganda win," Professor Zbigniew Stachniak recalls.

As an associate professor at the department of Computer Science and Engineering at York University, now retired, Stachniak has spent decades trying to piece together a history of the Canadian computing industry. After writing a book about one of the first personal computers, Inventing the PC: The MCM/70 Story, Stachniak began to delve into "what else was done in Canada, and it turns out that there were very many world firsts and significant achievements on the hardware side, computer services side, and on the software side."

In an industry of perpetual change and product launches, forever in search of the next big thing, it's easy to forget the early history of computing - especially outside of the much-studied and mythologized Silicon Valley. It's also easy to forget that, for a time, there was one undisputed tech king: IBM.

"When I mentioned IBM to my students, they were really thinking 'why am I talking about a company nobody really knows much about?' I have to explain that in the 1960s, 70s, 80s, and well into the 90s, IBM was the company. It was the Apple, Google, Microsoft, and everything combined."

The technology king

IBM began in 1911 as an amalgamation of four companies, under the clunky name 'the Computing-Tabulating-Recording Company.' "It was a very convoluted name that basically consisted of the original companies, but a Canadian subsidiary called International Business Machines was formed in 1917," Stachniak explained. "It began being used in adverts, and by 1924, it was the name of the whole company."

With this early change, IBM's Canadian division played a crucial role in ensuring the corporation's success, helping usher in the information age as we know it.

In turn, IBM's business in the country swelled, bringing commercial computers, early artificial intelligence systems, and vast data centers to Canada for the first time. But the history of IBM's impact in Canada has been little studied, and mostly forgotten. Stachniak believes that the key to understanding some of that impact lies in Dunbar's work.

Over his lengthy career, Dunbar took tens of thousands of photographs and films, many of which are of artistic or historical significance. "Some of them were awarded prizes, but they were not distributed widely," Stachniak said. "There is this purely artistic experimentation in his work - if you look at some of these photographs, it's just amazing, you will immediately want it hanging on your wall."

Stachniak regrets that Dunbar was unable to have his images published outside of IBM and technical periodicals, "because he would be a world-renowned photographer by now, that's no doubt.

"In my life, I saw so many corporate technological photographs of various technologies, and it's not often that you see that type of photography."
Unfortunately, most of that photography is sitting undisturbed in an air-conditioned room, with little chance of it ever seeing the world.

In the early 2000s, Dunbar offered to give a large portion of his life's work to Stachniak, who runs the York University Computer Museum in Toronto. But the idea initially fizzled due to copyright issues and funding concerns.

A few years later, Stachniak tried to get some money from IBM Canada for his museum. "They listened and said 'we will let you know, but also would you look at these boxes of photographs in the next room?'"

There, Stachniak discovered Dunbar's work, along with other photographs, trade magazines, newsletters, and films from the IBM film studio Dunbar set up. He was astonished at the neglected treasure trove, and kept visiting. "Eventually, they got tired of me coming," he recalled. Instead, they offered to donate it to his museum. "IBM really wanted to get rid of it," he said.
But his museum lacked the necessary air-conditioned rooms for the proper storage of old materials, and the two groups settled on donating it to York University's Special Collections archive, where the materials now remain.

The hope was that, with funding, the many thousands of pieces would be digitized for the world to see. But, instead, they languish as low-priority material in an overstretched archive. Representatives of the archive declined to comment on the record, but one
employee spoke to DCD under the condition of anonymity. "It is still unprocessed, we just don't have the staff to deal with it," they said, adding that the archive's backlog has ballooned with the pandemic. "We're still, I believe, trying to get some money out of IBM to have it processed. But the minute it was no longer on their loading dock and entered our vault it became less of their problem and more of ours."
Saved from the trash

The person "didn't want it to go into the garbage. But at the same time, they don't seem to be assisting us and getting it further along."

IBM declined to comment on its efforts to help digitize the images or fund the archiving process, but DCD understands it is no longer in communication with the archive.

Thousands of images chronicling IBM's golden era and a fascinating part of Canadian history remain in limbo. It is not clear if they will ever see the light of day.

"There's no other collection that visually shows the computing industry in Canada," Dunbar said. "And I tried to emphasize to them that this is a very valuable historical collection - it's the history of computing in Canada for 30 years. And it is just nowhere else."

A few photographs have been viewed by outside eyes, some of which were exhibited by Stachniak at a limited event at the museum, titled 'Portraits of a Digital Canada.' A few others are published here, for the first time in more than thirty years. The rest, however, risk being lost to history.

Should these photos be all that remains, Dunbar hopes that they stay with you. "I've always judged photographs by how long a person will look at them," he said. "If you give them a stack of photographs, and they go through them, giving each a one-second glance and going on to the next one, then the photographs are not very good.

"But if the person goes through the stack of photographs, and suddenly pauses at one and says, 'Wow, this is something I'm going to look at for a while,' that indicates that it's a good photograph," he said. "And that's what I've always tried to do - something different that will cause the viewer's attention to last a little more than just one second."
THE PAN-ASIAN OPERATOR

Rangu Salgame is building a hyperscale data center business across Asia, from his adopted home in New York and new head office in Singapore
Peter Judge
Executive Editor

Rangu Salgame comes from India, and has built at least three big businesses there - but he sees himself as a citizen of the world, most at home in New York and Singapore.

He caught the technology wave early at M S Baroda University, Gujarat. He got funding for a solar project while still a student - and then lit out for the US, where he spent the first two decades of the Internet in New York building a technology career, rising to become a Verizon president. "I'm originally from India, but that has no bearing on my business life," he says. "I'm from New York and that is the center of my business life."

In the dot-com boom, he had a startup. A content delivery platform called Edgix, its name and premise sound oddly prescient, like one of today's would-be unicorns. But in the early 2000s, like a lot of other dot-com babies, it died.

Growing Cisco, building Tata

In 2003, Salgame became Cisco's leader in India, building the network giant's Indian business to $1 billion in the five years he worked there. However, he points out that he ran that business from New York.

After Cisco, he was involved in the Indian network startup Tejas Networks for a brief time, leaving after only four months. A few years at network security firm Niksun followed, after which he joined the Indian telco Tata Communications in 2012, running a set of businesses as CEO of growth ventures and the service provider group.

While at Tata, he had an impact on data centers, and India, but he downplays both: "I was based in New York, not India," he reiterated. "I ran a portfolio business in Tata Communications. I ran a wholesale business, a carrier business, and a media business."

At that stage, he says: "One of my businesses was data center, which included the India business, but we had data centers in Europe and the US - in San Jose, New York City, and LA, as well as Slough in the UK."

At this stage, a lot of telcos were quickly building up data center arms, seeing the sector as a growth opportunity. Tata's growth in India was striking, expanding quickly to a total of 44 data centers, giving it a quarter of India's data center market.

This wasn't just a colocation business, but under Salgame's guidance, it became a separate business, dealing with hyperscale customers: "I took the Asia part of the data
center business - India and Singapore - and I put that into a separate subsidiary [in 2014]. Then we invested capital to transform that to a hyperscale business." Tata was early in focusing on hyperscale, he said: "Particularly, at that time, in India and Singapore."
That business was all "about anticipating where the hyperscale market is going, anticipating the demand and making speculative bets from a capital perspective, so you can build something that you believe that the customers are gonna come to. I think that's what we did in Tata."

The telco sell-off

Then abruptly, in 2016, Tata sold its data center business: "I was part of that journey," he told us. "Now the world is doing hyperscale, but six years ago Tata decided to exit."

It was a global phenomenon, as telcos around the world thought better of their data center involvement: "Telecoms companies couldn't afford to be a data center business. The scale of capital and the economics were very different than the network economics. It was a trend that started and it is just going to continue, where the industry is quite bifurcated now."

He explains: "Public shareholders look at the telecom business very differently than the data center business - where it takes half a billion dollars to build one project. In data centers, the economics are very long term, and you get better returns as a data center company - but in a different way."

Tata was one of the first telcos to see the writing on the wall: "We were one of the first telecoms companies to get out of the data center business, and then many others followed," he says, mentioning Verizon, AT&T, and CenturyLink, as well as Telefonica in Europe.

Tata sold its majority stake in the data center business to STT: "We saw it as very logical to let the business grow with a different kind of capital. We sold the majority stake to STT - and after the transaction, I left to form PDG."

Birth of Princeton

Princeton Digital Group would emerge in 2018, based in Singapore, with backing from Warburg Pincus, and a goal to build "a multi-billion data center portfolio" across Asia. Salgame's co-founder is Varoon Raghavan, who worked with him at Tata: "He was my key guy on data centers. He and I were very closely involved in building the hyperscale business, and then in doing the M&A transaction to sell it."

PDG had a good chance, because "the hyperscale business for data centers in Asia is becoming a sector in itself," he says. "It has a large-scale capital requirement, and you need design engineering capabilities that can be replicated across different markets. I think that's core to how the industry is going."

The company began acquiring capacity, starting with five facilities from Indonesian telco XL Axiata in 2019. This, like Tata's sale, was another fallout from the telcos' big data center divestment, he explains.
PDG also bought IO's facility in Singapore - which became available after Iron Mountain bought the provider's US facilities.

But as well as buying, PDG was building from the start, making informed guesses about where the hyperscalers would want to be. Its first $500 million phase included land in the Chinese industrial cities of Nanjing, Nantong, and Wuxi, along with a 40MW data center campus in Shanghai.

"We have a multi-country data center play, to engage with the hyperscalers, strategically across different markets," he tells us. "And that has played out very well for us."

New investors have come on board, including the Ontario Teachers' Pension Plan Board, and China Merchants Bank. And during 2020 and 2021, PDG began projects in India and Japan, taking it to five nations, with a 600MW portfolio of 20 data center projects - 10 in operation and 10 in various stages of development.
Indian campuses

"India is a long-term play, with an aggressive starting point," says Salgame. "It's a 48MW campus in Navi Mumbai, just outside the city." The building is a data center cluster, with the same building design PDG uses in other markets: "That gives us entry into the market at scale."

Looking forward three to five years, he is ambitious, predicting that PDG's
India operation itself could be bigger than the 600MW size of the whole company's portfolio today: "I won't be surprised if PDG has that scale across three to four cities in India," he says, calling out Chennai and Delhi as potential targets. Another Tata alumnus, Vipin Shirsat, is head of India for PDG.

Recently, India classified data centers as critical infrastructure. "That is already helping," says Salgame. "Long term, it's going to be a tremendous boost to the industry. When the nation classifies this as strategic infrastructure, then there is a lot of impetus for financial institutions. The cost of capital for the industry goes down, which is going to spark a lot of investment in India."

Among other things, mobile operators now have the confidence to invest in 5G networks, he says, "and that is going to spawn more apps and services." In the last year, he reckons "30 or 40 unicorns have popped up in India, on the tech side of the market."
Abu Dhabi money

When we spoke, PDG had just received another $500 million, led by the Abu Dhabi government's investment arm, Mubadala,
and Salgame is looking back on significant growth in 2021. "In 2021 we entered new markets like Japan, we announced a project in Mumbai, and we started a new project in Jakarta," he tells us. "We are expanding further in China. So we were coming out of the year in a much stronger position than we entered the year."

Now he says that PDG has "proven its thesis," and is consolidating its position as "the de facto leading operator in the Asia Pacific region." That gives it the clout to get Arab money from Mubadala, with no strings attached: "This is a straight out equity investment at the highest level of the company. The money's in the bank."

To reach that spot, PDG has been "very disciplined in our investment approach," and chose its markets carefully, picking nations that were investing heavily in digitization and were open to developers. "The economies we serve are investing more in digitalization," he says, and with demands growing "even faster than anticipated," PDG is accelerating its already-impressive growth plans.
Multi-country strategy

But being the leading multi-country developer in Asia does have its positives and negatives. On the one hand, it isn't a global player. Is that limiting? "No, we had a strong conviction from day one, that we are only in Asia. We are not taking our eyes off Asia. So no other geographies."

On the other hand, maybe it loses through not focusing on a single country? "No, I think we are not spread too thin across Asia. We are in key markets that matter to our customers. A little over four years ago, when we started the business, we had a very strong view that a time was just about to come when hyperscalers' investment in Asia was going to go up dramatically. And when that happens, they would want their partners, suppliers, and operators to be strategic, and multi-country, because doing business in Asia is not easy."

The multi-country model enables PDG to iron out the differences between countries for its giant customers: "They are a business which is growing fast across multiple countries, and they need partners who can help them grow across multiple countries at the same time," he says.

"The thesis of being a provider across multiple countries is that our customers want the company's liquidity to be across multiple countries. They want to have the same quality of service, the same SLAs across markets. They want to set the same set of expectations to an operator, so that no matter what geography, what city they go to, they can call up PDG and work with PDG."

He says PDG's relationship with its customers is "symbiotic." PDG gets insights into what hyperscalers want, and delivers that.
Choosing target countries

"I think the multi-country strategy is really playing out fortunately for us. Asia has so many countries, we can't afford to be in every one of them, so we have chosen our markets carefully from two perspectives. First, what matters to our customer. And second, the long term size of the market has to be reasonable to deploy capital and get good financial returns," he said.

There are some smaller Asian countries, he says, "that we will not have the energy to pay attention to in the coming years. The countries we are in - Singapore, China, Indonesia, India, and Japan - are the five countries that we are able to execute."
"We may enter a couple of more countries, maybe two or at a maximum three," he says. He won't be specific about which ones, but agrees that Korea is a large market - though "it's a very difficult country to enter for data center customers."

Australia is also an important market, where an operator with no ties to China can get government business, and PDG is "backed by capital from North America and sovereign funds from Abu Dhabi."

But don't look for any sudden moves: "We are very disciplined. We look for the right opportunity to get in. We also have very patient capital management, so we don't rush for the sake of rushing."
After the Singapore moratorium

Singapore is a potentially awkward choice, given the country has had a moratorium, which is opening up, but is likely to restrict the size of new data center projects. Salgame says he will build in Singapore, if PDG gets the chance, but has a strong base in its acquired IO data center: "Singapore will continue to be an important market. We are headquartered here and I think our customers want us to be doing more in Singapore. We don't know how the new policies will play out, so we will watch it and see."

He's still keen, even if size is limited: "Singapore is not going to be the campus scale that we have in Tokyo, Mumbai, or China. We understand that, but I think it's going to be a digital hub.

"I think we are going to watch for the policy to really get laid out. And we've been told that, in the next month or so, it will be clearer in terms of the process and some of the criteria details."
Renewable energy

Salgame says renewable energy is "front and center," but it has to be done "market by market, because national grids and policies vary so much."

"We are maximizing our green acquisition where we can and doing long term thinking in each market," he says. "In Indonesia, we are the first company to get renewable energy credits (RECs) from the PLN utility in Jakarta."

Behind the scenes, PDG is "working with policymakers and others to be at the front of the industry, to be able to acquire more green power," he says, and Indonesia has invested quite a bit in that sector.

India, meanwhile, "has had a pretty healthy renewable energy sector for a while," he says. "Not as much as a lot of people would like to see," but it's a sector the company knows quite well. In India, PDG is getting quite a lot of green energy, but it varies by state: "Solar is very big in the Western part of India, while in Tamil Nadu, wind power is much more prevalent. We are working with the market to procure solar on a project in Mumbai, to get wind power in Tamil Nadu."

Right now, of course, PDG has comparatively few working data centers, so its current energy use - and renewable energy use - is relatively small, and the company has not set a target. "We are working quite intensely within the company to make a policy framework," he says. "This year, we will be paying some attention to share it in the public domain."

Some data centers just use fossil power and replace it later, but Salgame wants to do better: "I think I have an opportunity and an obligation to do better. What we as an industry need to do is be at the forefront in each of our markets, working with utilities, policymakers and regulators, to create more supply and easy procurement frameworks.

"If green power is available and we don't apply it into our mix, then shame on us! We have to be driving adoption and consumption at the same pace, or even more than the growth of the industry in those markets."
Digital sovereignty

As nations move to have their citizens' data stored in-country, this will benefit data center growth, he says. "The way we look at it, almost every country in the world is dealing with data sovereignty issues from a legislative perspective and from a regulation perspective. And for the data center operator, from a long-term perspective, it's a very good thing, because there needs to be a lot more capacity in-country."

The legislation will push data into countries, even if the growth of fiber might
allow cloud operators to move it to the US or China, he says: "Nations are dealing with a complex problem. Rather than being influenced by the capacity of submarine cables or data centers, they truly are driving towards the importance of data."

He thinks governments are "dealing with it truly as a national issue," but "in India, once the regulations are set, the industry gets played out by private industry, right? So it's a private sector driven model."

Demand is only going to increase as people create more data: "A lot more data is going to come, whether it is consumer data or from AI. We believe that is an extremely good trend for our business."

Across Asia, he sees data center markets growing rapidly to become independent of hubs outside the country: "The shift of workloads in-country is becoming very important for hyperscalers. In Indonesia and India, part of that evolution is becoming independent. They had to bring a lot more compute and storage into the market. I think we're gonna see a hub and spoke layout."
Sometimes small, never retail

But not all countries need massive capacity, he says: "We don't just do big campuses. A 22MW data center may not look big in China, India, or Japan, but in the Jakarta market, 22MW is a very good size. Three or four years from now, it will become a 50 to 100MW campus, but for now, for that market, we are in scale."

However small PDG's facilities are, they are never retail colocation: "We don't do retail, as a strategy. It's still hyperscale business for us, even in Indonesia."

He describes PDG as the biggest hyperscale data center operator in Asia. Would he call it the Digital Realty of Asia? "We're not at that level yet!" he laughs. "But we are the only data center operator which is in all these five countries. None of them are in these five markets. PDG is the only one."

That's good from the hyperscaler customer's perspective, he says. One operator can help in multiple countries, and provide a standardized offering across all of them.

Of course, if PDG does a really good job of covering countries which Digital Realty and Equinix can't, that could make it an acquisition target when they want to fill in those countries, as they did in Europe. He shrugs that idea off: "We are enjoying the confidence our customers have placed in us, and focusing on scaling our business, expanding our capacity, and deepening the markets we are in, as well as entering a couple of new markets."
IN SEARCH OF THE WORLD'S LARGEST DATA CENTER
Sebastian Moss Editor-in-Chief
Finding out who owns the biggest data center is harder than you might think
Who owns the world's largest data center?

Earlier this year, I tried to answer this question for a feature I was working on, assuming the result would be a quick Google search away. But, after trawling through a number of simple listicles, it became clear that the reality is a little more complex.

According to numerous publications, the world's largest data center is the China Telecom-Inner Mongolia Information Park.
At a cost of $3 billion, it spans one million square meters (10,763,910 square feet) and consumes 150MW across six data halls. There's one problem, however: It's not clear the 2013 project is actually anywhere near that scale.

We asked China Telecom, and the company - recently banned in the US - would neither confirm nor deny its existence. On its website, the company does show that there is a China Telecom-owned data center in the region. Further digging finds that it is on Jinsheng Road in Horinger, Hohhot.

State media reports from 2017 show images of six halls, which we matched to satellite photography. Satellite measurements put each hall at 89m long and 46m wide, giving the buildings around 4,361 sq m per floor (we confirmed the accuracy of the measurements against cars visible in the same images). State media photographs suggest four floors per building, making each data center span 17,444 sq m. Across the six halls, that's 104,664 sq m (1.1 million sq ft) - or a tenth of the public figure - and it’s not clear how much of that space is dedicated to data centers.
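The arithmetic behind those figures is simple enough to sketch. The snippet below reproduces the totals quoted above; note that the per-floor footprint is our approximate satellite-derived estimate, and the floor and hall counts come from state media photographs:

```python
# Back-of-envelope check of the satellite-derived figures quoted above.
per_floor_sq_m = 4_361       # approximate measured footprint per hall floor
floors_per_building = 4      # suggested by state media photographs
halls = 6

per_building = per_floor_sq_m * floors_per_building
campus_total = per_building * halls

print(per_building)   # per-hall floor space, in sq m
print(campus_total)   # total across six halls, in sq m

# Convert to square feet, and compare against the claimed one million sq m
SQ_FT_PER_SQ_M = 10.7639
print(round(campus_total * SQ_FT_PER_SQ_M / 1e6, 1))  # millions of sq ft
print(round(campus_total / 1_000_000, 2))             # fraction of the public figure
```

Run as written, this gives 17,444 sq m per building, 104,664 sq m overall, roughly 1.1 million sq ft, and about a tenth of the publicly claimed one million square meters.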
Issue 44 • April 2022 | 23
DCD Magazine #44
Even with a margin for error on satellite photography, or an extra floor we couldn’t see, it’s hard to understand how this cluster of buildings could be seen as the world’s largest data center. We reached out to data center technicians working at the facility, but have yet to hear back.

The visible buildings appear to be the beginning of a centrally planned urban and industrial project, where development was either abandoned midway, or is still ongoing a decade after the campus began. Empty ten-lane highways end abruptly, giving way to dry grasslands.

So let's move on to the next largest data center. It turns out that it's also in Hohhot - a region with low ambient temperatures, cheap power, and lots of land, making it an attractive investment. In fact, it’s meant to be right next to the China Telecom data center, on what is referred to as the Shengle Modern Services Cluster.

"With 5G and China’s ambitious plans, a lot more data centers will need to be built, and China wants these to be inland (coastal cities are too crowded), and also wants them to use less energy (as it aims for carbon neutrality by 2060)," Jeroen Groenewegen-Lau, at the Mercator Institute for China Studies, told DCD. "Places like Inner Mongolia, with abundant renewable energy and a cool climate, are set to benefit."

Back in 2014, China Mobile said that it would spend $1.92 billion on a 715,000 sq m (7.7 million sq ft) data center campus in Hohhot. This project is real - but, again, the figures might not be. The US Green Building Council lists show a 7,400 sq m (80,000 sq ft) China Mobile Hohhot Data Center Office which is LEED-
certified, confirming its existence, and in 2019 the company said that it had completed the development of some data halls. At the time, it had space for 9,000 racks, with 100,000 servers. A stage-two development will add 15,000 racks and 150,000-200,000 servers, while a further stage after that could add yet more space (although square footage calculations are complicated because the company is also building an exhibition center).

This all could indeed make it one of the world's largest data centers, but it's not completed as far as we can tell. We searched the area again for data center-like buildings, and only found two 12,240 sq m (131,750 sq ft) structures that fit the profile of a data center.

Chinese publication Sohu reports that three computer rooms were built, including a spare parts storage center (which matches a nearby building we found). A second phase was reportedly underway, but Sohu said that work had yet to begin on new data halls as of 2020.

China Unicom also claims to operate facilities at the Shengle campus, so it is not clear whether it owns any of the visible buildings. Representatives for both companies were not available for comment.

There are similar issues with several
other purported massive China Mobile data centers. Companies often report the total size a facility could grow to, either based on what planning permission allows, or what sounds good to investors. Then they grow in stages, hoping to reach that ultimate number.

This may have been difficult for many of these eastern Chinese data centers. In 2018, China’s Ministry of Industry and Information Technology found that demand for data centers in Beijing and Shanghai outstripped supply by 20-25 percent, but that in the northeast there were twice as many facilities as required. It is possible that the companies began with large multi-billion dollar plans, and scaled back as demand failed to keep pace with their ambitions. The nation is now trying to incentivize data center construction in the east and offload resources from cities, with major state subsidies - but such efforts will have been too late for the above projects.

A similar scaling back of ambitions may have happened for the next facility that is often touted as the world's largest: The Range International Information Hub. Located in Langfang, China, and co-built with IBM, this was originally meant to be a 585,000 sq m (6.3 million sq ft) facility, according to numerous reports. Situated between Beijing and Tianjin, it is a perfect place for a data center.

Steven Sams, an IBM executive at the time, told DCD that "we had a series of conceptual meetings [about 10 years ago] with the technical and executive teams for the project to talk about designing and building highly energy-efficient and scalable data centers.

"Our view was that traditional data centers were very inefficient from both an energy and technology use perspective, and that scalable technology virtualization, which was emerging through cloud computing models, required different data center designs flexible for different computing models and technologies.

"I had visited the site that had been defined for the massive multi-building complex and the Chairman of the project in China. In January 2011, coinciding with a state visit to Washington by President Hu Jintao, the chairman and I signed an extensive agreement in Chicago in which IBM was the design principal."

He added: "The work has obviously proceeded significantly over the last ten years, but without my involvement."
But the question that is critical to our search is just how much work has proceeded, and whether the focus is on data centers or other space. While many reports claim Range is a giant data center as large as the Pentagon, initial documents state that it will also include offices, apartments, and a hotel - so a lot of that space is not data center-related.

The Range Technology website still talks about the data center as a future project, saying that it has a "planned professional data center room area of one million square meters" - which is actually more than the initial pitch. According to the plan, that size will be spread across 22 data centers. The website says that six have currently been built and two more are in construction (although, as the site refers to 2020 as a future date, this figure may be out of date): "It is estimated that by 2020, the park will have a computer room environment of 550,000 square meters."

In a post from November 19, 2021, the company said that the top of the main structure had just been completed - suggesting the project still had some way to go. The whole complex was initially planned for 2016 completion.

IBM declined to say if it was still involved in the partnership, suggesting that its spun-out Kyndryl business might have an answer. Kyndryl did not respond to
requests for comment.

Moving on. Multiple reports say the next largest data center is AT TOKYO's Chuo Data Center, with a total floor area of 140,000 square meters (1.5 million sq ft). The site is real, and indeed it is huge - it’s simply a hefty cube in the middle of Tokyo. The colocation facility is the largest single data center building in all of Japan. But is it the biggest in the world?

It's time to break away from poorly researched listicles and look for ourselves at the thousands of facilities we have written about over the past two decades. Most companies don't build that large - economies of scale only go so far, and cloud providers and colo customers often value geographic redundancy over mega projects. But there are still those that like to go big.

Perhaps one of the largest proponents of massive data repositories is the National Security Agency. The NSA tried to keep most of its giant Utah data center a secret, but being a giant building in Utah, that's not been entirely successful. Spanning two large data center structures, each with two halls, as well as surrounding infrastructure, the facility is believed to be around 139,000 sq m (1.5 million sq ft), of which only 9,300 sq m (100,000 sq ft) is data center space, and more than 84,000 sq m (900,000 sq ft) is technical support and administrative space. That's large, but just shy of what's found in Tokyo.

Another fan of largesse is Digital Realty, one of the biggest data center companies out there. It owns the Lakeside Technology Center (350 East Cermak), a huge carrier hotel in Chicago. With more than 70 tenants and a robust business from financial firms serving Chicago’s commodity markets, the building spans 102,200 sq m (1.1 million sq ft). There are some caveats: Do we count a building that contains multiple data center companies as a single site? Plus, Digital Realty operates another giant building with seven data centers within it just 2.5km away - should we include that?
Before we get too deep into the weeds, let's keep looking. Another potential mega data center can be found in China - Centrin’s data center in Wuhan. The company, which recently partnered with SpaceDC, claims it spans around 207,000 sq m (2.2 million sq ft) in the city’s Lingkonggang Economic and Technological Development Zone.

However, the facility is not finished - it is still in the first phase, totaling 70MW of IT load, and hopes to grow to 225MW. Work appears to be ongoing, with the company recently being awarded an Uptime
"The really critical question here is just how much power can be delivered to the facility, and what is the density? Just because the size is large doesn't mean you can fit as much as a smaller data center" Tier IV design work for upcoming data halls at the site. For now, the facility is too small. A different possible huge campus is one from Huawei, in the Gui'an New Area of southwest China's Guizhou Province. The company often styles its campuses in an almost fairytale-like rendition of European architecture, with this one based on Prague facades. It's a peculiar-looking campus, more akin to a Disney theme park or a movie set, and yet the company claims it will be home to one million servers. As Huawei's largest data center campus, the site currently spans 480,000 square meters (5 million sq ft), if local media can be believed. However, that also includes 98 training rooms capable of holding 3,000 people, R&D labs, an IT Maintenance Engineer Base, and what appears to be a 'Huawei University.'
to open two more 41,800 sq m halls, and this time they will be two stories each. That means a total of 427,000 sq m (4.6 million sq m).
Can we beat that? It’s time to look at Switch, which has five large campuses dotted around America that it calls "Switch Primes." The company has never been shy about its love of embracing scale, and its Switch Citadel campus is set to be its largest Prime site. At full build, the 650MW+ campus will include 'up to' 761,800 sq m (8.2 million sq ft)
of data center space across 12 data centers. That would make it, without any doubt, the largest data center complex in the world. But it is not at full build, and 'up to' includes a lot of wiggle room. Switch told us that campus currently features 120,800 sq m (1.3 million sq ft) of operational data centers, with two additional buildings under construction for another 111,500 sq m (1.2 million sq ft). That means it’s a huge site, but not the largest. At the moment, it is not even Switch’s biggest: That prize goes to its Las Vegas Core site, with 250,800 sq m (2.7 million sq ft) with additional buildings 15,16,17 and 18 “currently under various stages of construction for a total square footage of 390,200 sq m (4.2 million sq ft),” Switch told DCD. Neither site is bigger than Facebook’s current 344,000 sq m, but Core could briefly overtake it until Facebook grows to 427,000 sq m. Should Citadel fully build out - to a timeline Switch declined to disclose other
10,000 people are expected to visit the campus a year - much more than would come to see a standard data center. Of the dozens of buildings visible in photographs, only nine hold data center servers, a state media visit appears to suggest. "Each data center equipment room has three floors, and each floor has two modules," Huawei's William Dong told them. "In these modules we deploy servers, storage, and network devices." Eventually, there may be 14 data centers on the site.
Credit: Switch
Whatever the true size of the site - we have asked Huawei - it does not appear to be fully finished. The campus officially opened on December 20, 2021, but the main structure is expected to be completed this August. Images from a state media visit earlier this year show that the artificial river and lake are currently dry. For now, it is too early to crown this facility as the world’s largest - although it gets points for being one of the strangest to look at.
Let’s head back to the US Facebook loves to go big, building multiple 41,800 sq m (450,000 sq ft) data centers on sprawling campuses around the world. The largest of those is in Prineville. Across nine buildings and 344,000 sq m (3.7 million sq ft), it is an astronomically large site - and it's getting bigger. By 2023, the company plans
26 | DCD Magazine • datacenterdynamics.com
Credit: AT TOKYO
The Substantial, Voluminous, The Brobdingnagian, Guargantuan,Mountainous Elephantine than “within 10 years” - it would then comfortably overtake Facebook. While this was mostly a thought experiment, Switch’s EVP of technical solutions, Bill Kleyman, noted that "size is certainly important because you're able to do more and facilitate more customers." However, he told DCD, "the other really important critical question here is just how much power can be delivered to the facility, and what is the density? Just because the size is large doesn't ultimately mean you can fit as much as some smaller data center." Talking about other large data centers, Kleyman said that “if you have 10 million square feet, but your density is like 5kW a rack, then you're wasting a lot of space. You're doing something wrong.” He added: "And when people tell you how much power is available at the facility, are they're saying how much power is available at the substation, or how much power is actually going into the facility? With Citadel, it's 900MVA that's in the building, and 1.5 gigawatts at the substation." Still, that is a ways off. If we look at future promises of data centers, then we should also consider Quantum Loophole. The company has bought a 2,100-acre property in Frederick County, Maryland,
where it hopes to develop a 1GW campus consisting of 30-120MW data center modules it sells to other companies. That's twice as much land as Switch has for its Citadel - but, again, one could debate whether Quantum's planned community of data centers counts as a singular data center campus, or rather a collection.

Other large potential data center projects include the 157,935 sq m (1.7 million sq ft) Digital Crossroad campus in Indiana, Corscale’s planned 213,700 sq m (2.3 million sq ft) campus in Northern Virginia, and Amazon Web Services’ planned 162,600 sq m (1.75 million sq ft) data center in Loudoun County.

But, as it stands, and as far as we can tell: The largest data center cluster owned by a single entity is Meta/Facebook’s Prineville data center campus.

How about the largest single data center building? It is not actually AT TOKYO's Chuo Data Center, but may instead be another Facebook facility - the 170,000 sq m (1.8 million sq ft), 11-story Singapore data center (although it is still in its first phase). There, land constraints meant that it made sense to concentrate a lot of servers in a single structure, something that companies usually avoid.

As for the largest potential data center
project, it is either Switch’s Citadel, Range, or Huawei’s cloud campus, depending on whose publicity you believe.

However, we have to admit: we may be wrong. Many of these projects are shrouded in secrecy, or obscured by intentional marketing manipulation. There may still be a larger data center out there, quietly humming away in a Chinese province we have yet to scour. If you think you know of a larger data center project, let us know at editorial@datacenterdynamics.com
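A footnote on Kleyman’s density point: raw floor area only translates into IT capacity once you multiply through rack footprint and power density, which is why a compact high-density hall can out-deliver a far larger low-density one. A rough sketch (all figures are hypothetical assumptions, not data from any facility named above):

```python
# Illustrative sketch of the density point: floor area alone does not
# determine deliverable IT capacity. All numbers here are hypothetical.
def it_load_mw(white_space_sq_m: int, sq_m_per_rack: int, kw_per_rack: float) -> float:
    """Approximate IT load for a hall, given area, rack footprint, and density."""
    racks = white_space_sq_m // sq_m_per_rack
    return racks * kw_per_rack / 1000  # kW -> MW

# A sprawling low-density hall vs a much smaller high-density one
big_low_density = it_load_mw(100_000, 5, 5.0)     # 100,000 sq m at 5kW per rack
small_high_density = it_load_mw(25_000, 5, 30.0)  # 25,000 sq m at 30kW per rack

print(big_low_density)     # 100.0 MW
print(small_high_density)  # 150.0 MW
```

Under these assumed figures, the hall with a quarter of the floor space supports half again as much IT load - "if you have 10 million square feet, but your density is like 5kW a rack... you're doing something wrong."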
Dublin and data centers: The end of the road?
Dan Swinhoe News Editor
Though Northern Virginia remains the capital of data centers, the key ‘FLAP-D’ markets in Europe (Frankfurt, London, Amsterdam, Paris, and Dublin), along with Singapore in Asia Pacific, continue to be major hubs for new development. But success brings problems.

While Singapore looks to emerge from its three-year moratorium, existing markets such as Ireland, the Netherlands, and parts of Northern Virginia are seeing major pushback. In Amsterdam, regulators are trying to create rules to limit data centers’ demands for energy and land.
In Virginia, local residents and some officials are opposing plans to expand the state’s data center sprawl from Loudoun County into neighboring Prince William County on a massive scale.

In Dublin, however, it’s not politics or environmentalism that’s causing issues, but infrastructure limitations.

Data centers in Ireland still enjoy a large amount of political cachet, especially from the ruling coalition. Opposition to new developments has sprung up in more rural areas, but is minimal for developments in existing industrial parks.

Yet grid issues and concerns about capacity in the greater Dublin area have led the state-owned electric power provider, EirGrid, to impose a de facto moratorium on new data centers that could last until 2028.

A halt in Dublin
Pressures on the nation’s energy grid and strong renewable energy goals have effectively halted new data center developments since the turn of 2021.

EirGrid says it will only consider new applications for connection to the grid on a case-by-case basis, in the wake of a November 2021 Commission for Regulation of Utilities (CRU) decision to limit data centers’ impact.

The CRU stopped short of a nationwide ban, saying a full moratorium was not a "suitable response," but said that there will be limits on where data centers can be built, and they may be asked to provide on-site dispatchable energy storage or generation equivalent to their demand, so they can reduce demand or take themselves off-grid completely.

The Irish Industrial Development Agency (IDA), a body charged with attracting investment into Ireland, says data centers bring in foreign money and boost the economy. It has asked EirGrid for assurance that applications will at least be considered on that case-by-case basis.

Despite the IDA’s plea, DCD understands that EirGrid has denied several applications, even some with on-site generation.

“When the one-to-one meetings took place between EirGrid and industry, it's effectively a moratorium in all but name,” a source told DCD.

“There have been some interactions with some companies directly with EirGrid, and they're effectively doubling down on the moratorium despite the assurances given to the IDA," they added.

More than 40 companies responded to the CRU’s 2021 consultation, and DCD understands a number of them are “puzzled” by the way EirGrid interpreted the CRU’s decision.

IDA CEO Martin Shanahan recently conceded that new data centers “are unlikely to happen in Dublin and the East Coast, at this point,” but the IDA remains committed to developing new data centers.

“Shutting down the whole of the data center industry for 10 years is going to have a massive impact on the country,” says Eddie Kilbane, CEO of data center operator Dataplex.

Google’s submission to the CRU said that any moratorium on data center development in the area needed to be avoided “at all costs,” threatening to walk away from future investments there.

In its submission, obtained by the Irish Times through a Freedom of Information request, Google said that any such ban would send the “wrong signals” about Ireland’s ambitions as a digital economy and render any further investments in its infrastructure in the country “impossible.” Google suggested a new tariff system could be imposed for data center operators who reserved more capacity than they ultimately needed, or were too slow to grow into that capacity.

“Unfortunately, none of those comments were actually taken on board,” says Garry Connolly, founder of Host in Ireland. “I think that was the crux of so many people's disappointment; they thought that there was an opportunity to influence the recommendations that were coming out the other side, but they were effectively discounted.”
Ireland was all set to be a major hub. Then data center power demands came up against the limitations of the grid.

Is the grid the problem?
Data centers make up a sizable part of Ireland’s energy demand, but some within the industry argue they are being scapegoated for EirGrid’s failure to invest in energy transmission infrastructure to move renewable energy from wind farms in the west to the east, where it is most needed.

“It's a network issue,” says Kilbane. “The issue is that EirGrid have not invested in the infrastructure; we don't have the grid infrastructure to get that to the area where the demand is.”

EirGrid has said data centers currently use 12 percent of electricity in Ireland, and this could grow to between 21 and 30 percent by 2030. At the same time, the country has set a target that 70 percent of Ireland’s electricity will come from renewable sources by 2030.

Data from Ireland's Central Statistics Office (CSO) suggests electricity consumption by data centers in the country increased by 144 percent between 2015 and 2020. Niamh Shanahan, a statistician in the Environment and Climate Division of the CSO, noted: "This is the first time the CSO has published figures on electricity consumption by data centers. Data center consumption increased from 290 gigawatt hours in January to March 2015 to 849 GWh in October to December 2020."

That explosion in consumption, coupled with transmission issues, has led to fears over adequate supply. The country is looking to move away from fossil fuels and close legacy power plants, but renewable projects have been slow to come online. Environmentalists argue the facilities take an unfair proportion of the green energy available, and will prevent the nation’s overall transition to sustainable power.

However, EirGrid has more immediate and local concerns. In 2021, it issued at least seven amber alerts, warning of a potential shortfall in power. Six of these alerts were due to a “reduced margin” between the level of electricity generation and demand, Eamon Ryan, Minister for the Environment, Climate and Communications, told the Irish Parliament last year.
A seventh, in April, was related to a temporary systems failure in EirGrid’s control center. Two power plants, in Cork and Dublin, had been offline for parts of the year due to maintenance.

The current government has promised at least five Renewable Electricity Support Scheme (RESS) auctions – where projects bid for capacity and get a guaranteed price for their output – before 2025, but only the second auction is currently underway. RESS 2 is expected to boost renewable energy generation by up to 3,500 GWh by the end of 2024.

Host in Ireland’s Connolly says the situation is a “challenge of success”: “If you look back 25 years, Ireland was the largest
exporter of software in the world; that has evolved on to data centers.

“The increased digitalization thanks to Covid has meant that projects have started already in the greater Dublin area that weren't meant to start until 2025. With the decarbonization of society and the decarbonization of the grid, this incredible demand is a real challenge.”

More offshore wind on the east coast could help alleviate some of the capacity issues in the area, but would take several years to bring online. Grid projects in the Dublin area, such as the Kildare-Meath Grid Upgrade - a high-capacity electricity connection between Dunstown substation in Kildare and Woodland substation in Meath - have yet to begin.

Wind makes up around 40 percent of Ireland’s energy mix, but the nation has little energy storage. ESB operates a 292MW pumped storage station in Co. Wicklow called Turlough Hill. Energy developer
Gaelectric had more than €100m ($108m) of funding for a compressed air energy storage project in salt caverns near Larne. The scheme was intended to provide a 250-330MW buffer for six to eight hours, but was canceled after Gaelectric entered administration in 2017.

Amazon Web Services told the CRU that Ireland had ‘missed opportunities’ in the past to deal with supply issues. “During the previous decade, there were opportunities to deploy reinforcements, prepare the grid for growth and investment, and equip the grid for the integration of more intermittent resources,” the company said.

DCD understands EirGrid staff have gone as far as suggesting some data center operators develop their own interconnectors to connect data centers directly with renewable energy projects in other parts of the country.
Can developers go west?
Draft proposals put forward by EirGrid as part of its Shaping Our Electricity Future consultation suggested moving future data centers west, out of Dublin and close to renewable energy projects on the coast.

However, there seems to be little appetite to go west, especially amongst the global players. “They won't pick other places in Ireland,” argues Kilbane. “They're around the Dublin area because of the latency connection to their market, which is Europe. The further west you go, the more latency, the more compromises to the network, and they're not prepared to do that. It's client-driven.”

Some local players are building facilities in the west, aiming to be close to the subsea cables that land on the coast. But Edge demand is generally low in the area, as small local populations can largely
be served by existing facilities.

Apple is the only hyperscale player known to have looked at the west of the country. In 2018, after years of local opposition, it publicly abandoned plans for a facility in Athenry, County Galway. In 2021, Apple was granted an extension to its planning permission, in case it should change its mind.

If Apple ever does build in Galway, its facility would house iCloud and App Store data, rather than real-time applications. Not every company has that much back-end data, says Kilbane: “You're not going to get multiple companies doing that, because the rest of them don't have that in their stack."

Political battles
In the Netherlands and Singapore, moratoriums were imposed by the government. In Ireland, the current coalition government – which includes the environmentally-focused Green Party – remains pro-data center.

Opposition parties including People Before Profit and the Social Democrats have both put forward proposals that would ban data centers at the national level.

Neither has passed, although South Dublin County Council (SDCC) recently voted in favor of a PBP-led motion to ban any new data centers for the duration of the new County Development Plan 2022-2028; the plan is still in draft and hasn’t been finalized.

In November 2021, People Before Profit staged a protest at the Data Centres Ireland trade show in Dublin, arguing that data centers already use 12 percent of the country's electricity, and any increase in the sector would make Ireland's climate goals impossible. A number of industry people DCD spoke to dismissed the opposition, with one person describing PBP as “Trotskyites” - though Kilbane maintains the problem is infrastructure: “It's not political. It’s a network issue.”

IDA and IBEC, the Irish Business and Employers Confederation, say foreign direct investment in IT and data centers generates employment within Ireland. Government figures suggest more than a third of the country works in computing and electronics products or related IT service industries. Operators say the majority of employers are foreign firms, and data centers are essential extensions of those IT services.

“Data centers are critical infrastructure for all businesses, especially the technology sector contributing €52 billion ($56.2bn) to the economy and employing 150,000 people,” IBEC’s Cloud Infrastructure Ireland (CII) group told DCD. “In 2018 the IDA published a study that found between 2010-2018 DCs contributed €7bn ($7.5bn) in economic activity in a wide range of services including jobs in construction and engineering. Data centers are an essential part of that rich ecosystem of job creation.”

The group added: “Data centers are driving and will continue to drive the decarbonization agenda. Data centers are part of the solution to climate change, and DC operators are leading purchasers of renewable electricity.”

What’s next in Dublin?
As of June 2021, EirGrid had agreed to connect 1,800MW of data centers, but had received applications for another 2,000MW, with more than 30 proposed facilities at risk.

However, with more than 1GW in installed capacity, and the aforementioned 1.8GW or so EirGrid has already agreed to, there is already a sizable pipeline of
developments still to be completed in the city – assuming EirGrid is still willing to connect, and the developers are still interested in building. However, the long-term effect is less clear.

In February 2022, Environment Minister Eamon Ryan said his department is working with the Department of Enterprise on a new policy in relation to data centers. Responding to a parliamentary question, Ryan said officials were also working with relevant agencies to ensure a “plan-led, regional, balanced approach” to large developments. This would take into account “existing grid availability” and the opportunity to “colocate” data centers alongside renewable energy sites, seemingly reaffirming that Dublin is a no-go zone for new development applications for the foreseeable future.

DCD understands at least one global player has already had a major customer cancel plans to occupy space in Dublin, reportedly saying ‘there’s no future to expand in Ireland.’

Dataplex’s Kilbane argues companies are more likely to look to the Nordics, where energy is cheap. “Our company has no choice but to do nothing more in Ireland,” says Kilbane. “EirGrid’s decision is a commercial decision they've made, and it's going to be a death knell for the industry.

“Go to Norway. Ireland, as far as the industry is concerned, is finished. We just need to move on and get over it.”

However, even if another data center is never built in Ireland, an existing 1GW is no small cluster, and the area continues to have a large number of cable landing stations, along with a skilled workforce.

Host in Ireland’s Connolly says the large pipeline of approved developments means growth will continue for now. He also notes that as existing facilities go through hardware refreshes, the amount of data stored and processed in existing facilities on the next generation of IT hardware will increase without a corresponding increase in energy use.

“It’s taken us around 15 years to become a 1GW cluster,” he says.
This will rise to at least 2GW, which he says is a big cluster. ”Will we see a barren period? I can't say, but what we certainly will see is the incredible retrofit stuff. Billions will be spent on refresh cycles using the same electrons to give you more output. “I'm confident Ireland will play a part in the globe’s continued digitization, and it may not be on 500-800MW of data centers a year, but it's all about the value of the data and the packet.”
Automation Supplement
Contents
34. The network conundrum: Automated networks can be a double-edged sword. We need intelligent automation
38. Augmented reality for building: Engineering grade augmented reality can help get a project right first time
41. Drones in a data center: Security guards may need a helping hand to patrol today’s mega-campuses
Self-driving data centers
Digital infrastructure is expanding at a colossal rate, and only technology can keep pace with that. In other words, data centers must automate to survive.
The job of managing our digital infrastructure, from the physical buildings all the way up to the fiber connections, network links, and code, is too much for unaided humans. So data center builders and operators are turning to technology for help. This supplement looks at the progress we are making towards a data center that controls, heals, and patrols itself - and maybe, one day, can even build itself.
Networks that think
It used to be that the network was a fixed thing, and the software had to fit in with it. Now the network has to adapt to the demands of applications, which run on multiple clouds to precise performance levels. The only way to deal with that situation is with an intelligent network which can adapt its performance to handle whatever is thrown its way.
The first attempts to automate networks created complex systems of rules, which broke when they encountered the unexpected. Now, network equipment is approaching a level which could actually be called intelligent. It understands the goals of designers and the intent of users, works to deliver the best possible performance - and knows when to call for help.
Headsets that think
Augmented reality (AR) tends to look science-fictional and a bit unrealistic, while construction is the ultimate hands-on, muddy-boot occupation. And yet, data center builders are among the first in the construction industry to see the present and future of their project combined through AR headsets. It's all down to having a headset that can locate real and virtual data points to within millimeters, and spot errors before they happen. We don't have a self-building data center, but this is a serious step to help automate the construction process.
Robots on patrol Physical security has always meant staff pounding a physical beat, patrolling perimeters with their eyes and ears open. Maybe not for much longer. Autonomous drones can make independent airborne patrols of the campus perimeters, reporting back on anomalies. Truly integrated drone security is in its early stages, but data centers are getting a head start on using smart flying devices.
What's coming next? More and more of the data center sector is becoming self-managing and self-monitoring. This may be cause for some concern. When outages happen, it's often because complex systems are running automatically with no one seeing every consequence of a change. Next time we return to automation, we may have to check out security and resilience.
DCD Magazine #44
NETWORK AUTOMATION:
the complexity conundrum
Peter Judge Executive Editor
Automation can be a double-edged sword in the network arena. What we need is intelligent automation
Data center networks have changed, and there’s no going back. They have become so complex that they cannot be understood in real time by human beings.
They have to be automated. But that brings a danger: how can you be sure they have been automated correctly?
The complexity comes from the demands of the applications which are running - and the need to run them on multiple clouds and infrastructure. “There are a lot of new demands and pressures being placed on data center networks,” says analyst Brad Casemore of IDC, speaking at a DCD online event. “Application architectures have really redefined data center networking requirements. As a result of the evolution of application architectures, there's a need for modernization within data centers.”
Applications rule the network
Networks used to dictate terms to the applications that ran on them, says Sanjeevan Srikrishnan, senior global solutions architect at Equinix: “In the good old days, we'd go out and build a killer network. We'd be like, hey, I want 100G backbone links and all of this crazy infrastructure. Then the business would come to us and say, ‘Can I run my app on your network?’ and we’d say, ‘No, sorry, it doesn't meet our needs.’”
Network architects could actually ask the business to go away and rebuild applications to suit the network: “Take it away, break it into these three tiers. Bring it back to me like this.”
It’s not like that anymore: “Nobody does it that way anymore. Application is the king Kahuna. It's the bottom of the triangle. It’s the base of the technology equivalent of Maslow's hierarchy of needs. User experience is king.”
Meeting user experience demands would be complex enough, but networks are now constructed from diversified parts, and have to respond coherently: “Everything is really responsive to the applications,” says Sagi Brody, CTO of managed service provider Opti9. “And the production environment for an enterprise organization today is typically hybrid. It spans across colocation, private clouds, public clouds, and SaaS.”
These complex networks have been put together from parts which were historically siloed - and under the covers, some of that hasn’t changed: “You're seeing organizations go from fixed and siloed configurations into this new digital world,” says Srikrishnan, “and it's never a clean migration. You always have ‘tech debt’ that sits there, and it may stick around for 10 to 15 years.”
Alongside that, responsibilities shift: “Many of the things you thought the provider was going to own, you still own. You are jumbling together four different types of services, and you have to own the compliance and security of all of them individually, as well as how they work together,” says Brody.
These networks also distribute more network decisions, says Russ White, infrastructure architect at Juniper: “From a network architecture perspective, how do I build these networks that can handle this Edge traffic and distribute stuff intelligently and still have some sort of a core?”

Fast is not good enough
The services that run on these hybrid distributed networks have to respond instantly - but also very consistently - says White: “When I work on hyperscale networks, it's not even really the delay that matters. It's the jitter.”
Delay is when network packets take a long while to arrive. Jitter is when they arrive, but the delay is variable, garbling real-time traffic such as voice calls, he explains: “Consistency is a huge key right now. How can I make the network perform consistently all the time?”
With all these different parts of the network to manage, it’s impossible for network administrators to respond quickly enough to keep up with changing demands.
“The hybrid use cases are forcing us into scenarios where we need to deploy things like VPN and VxLAN,” says Brody. “These are just literally not configurable by hand anymore.”
The obvious thing to do is to use automated tools to control the network’s response to changing conditions, and to take the burden off the admin: “What I believe in is automating the crap out of everything,” says Srikrishnan.
But what exactly is being automated? Srikrishnan says the network is a “nebulous” term. “Are we talking about the virtual networks that the developers see? Or are we talking about the underlying infrastructure that powers all of that? Those are two very different things.”
Another issue is that automation is not simple. The first approach was to make a set of rules which provide a canned version of the response an administrator would make to specific events. That works fine most of the time, but if an event is slightly outside the possibilities considered by the network programmer, the response may actually be wrong - and sometimes disastrously so.
Fast automation is dangerous
“Automation is important,” Brody says. “But it could also be dangerous. It has to be done right. It has to be use case-specific.”
“The intelligence has to come in and add some layers of logic,” he adds, to check if any action will cause problems. “An example is IPAM [IP address management]. If the IPAM says that a subnet is free and not in use, before we go and assign an IP address, let's check if it's routable.”
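As a rough sketch of the guarded check Brody describes - in Python, with `ipam_reports_free` and `is_routable` as hypothetical stand-ins for a real IPAM query and a live network probe - the "check before you act" layer might look like:

```python
import ipaddress

def ipam_reports_free(subnet: str) -> bool:
    """Hypothetical IPAM lookup: does the address database say this subnet is unused?"""
    return subnet not in {"10.0.1.0/24"}  # placeholder inventory

def is_routable(subnet: str) -> bool:
    """Hypothetical live check: does anything actually answer on this subnet?
    A real implementation might consult routing tables or send probes."""
    return False  # placeholder: assume the wire is silent

def safe_to_assign(subnet: str) -> bool:
    # Validate the input before trusting any downstream tool
    ipaddress.ip_network(subnet)
    # Naive automation stops at the IPAM answer; the intelligent layer
    # also verifies reality before acting on the database's claim.
    return ipam_reports_free(subnet) and not is_routable(subnet)

print(safe_to_assign("192.168.50.0/24"))  # True: free in IPAM and silent on the wire
print(safe_to_assign("10.0.1.0/24"))      # False: IPAM already lists it as in use
```

The point is the extra logic layer: the database's "free" verdict is cross-checked against observed state before the automation acts on it.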
People used to think they could build things as complex as they wanted, he says, “as long as we automate it. And I think we need to get away from that line of thinking and start thinking about how do I make my network more intelligent, so I can actually automate less, but have the automation be more intelligent.”
Brody says: “I think we're moving away from a world where you can half-automate, and half do things manually. We have to focus on simplicity; everything has to be as simple as possible.”
Srikrishnan thinks the answer may be automating early, from the ground up: “If we talk about automation early on, and you use best practice, you're not using hands-to-keyboard to deploy anything unless you're using a product like Terraform or Ansible to push your code up into production, into your infrastructure. As you do this, you should be validating it.”
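The validate-before-push discipline Srikrishnan describes can be sketched as a simple pipeline gate. This is not Terraform or Ansible itself; `validate_config` and its rules are invented for illustration:

```python
# Minimal sketch of a "validate before you push" gate: nothing reaches
# production until the configuration passes its checks.

def validate_config(config: dict) -> list[str]:
    """Return a list of problems; an empty list means the config passes."""
    errors = []
    if config.get("mtu", 1500) > 9216:
        errors.append("MTU exceeds hardware maximum")
    if not config.get("links"):
        errors.append("no links defined")
    return errors

def deploy(config: dict) -> str:
    errors = validate_config(config)
    if errors:
        # Fail closed: reject the change rather than push a broken config
        return "rejected: " + "; ".join(errors)
    return "pushed to production"

print(deploy({"mtu": 9000, "links": ["spine1-leaf1"]}))  # pushed to production
print(deploy({"mtu": 9000, "links": []}))                # rejected: no links defined
```

Real infrastructure-as-code tools apply the same shape at much larger scale: a plan/validate stage that must succeed before any apply stage touches the network.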
Casemore wants to see verification: “So when you automate a change at scale, it's not going to cause all sorts of problems or potentially break down part of your network.”
“We're in a transition phase,” says White. “We had all these really smart people who could type on the keyboard and get the console working. And we thought we would just automate them out of a job. But we haven't turned that corner.”
Brody has been through that cycle: “Years ago I built a lot of network automation myself. There's always this natural progression. You build the automation, and then at some point, it fails. At some point, it takes down your network; it does the exact opposite of what you want it to do.”

Intelligent automation
Brody says the answer is to make a network which is not just automated, but intelligent: “You add some intelligence, you add some logical checking, and so on.”
White agrees: “I want the network to be down as little as possible. And I think we're almost over-relying on automation, and under-relying on intelligent automation. We should put the emphasis on intelligence and not on automation.”
IDC’s Casemore says: “The automation not only becomes more comprehensive, but it becomes smarter and a little more anticipatory. We move to a more proactive form of automation.”
But this has to be done without adding layers of complexity. Brody wants to bring it back to a simpler view: “We have to turn it around. We need to focus on declarative models, imposing our ideal configuration on the network.” Instead of the automation configuring the network, he wants to see a “single point of truth,” a configuration imposed on the physical network. “This is a new paradigm,” he says. “We need to move towards machine-to-machine interfaces. And we need to rethink.”

Observability for security
Network behavior also has to be “observable,” a key word emerging in network discussions. “I think it's a whole new genre of software,” says Brody. “I was at [the AWS tech event] Re:Invent this year, and the big buzzword was observability. Because we've made things so complex, we now have this new challenge of how do we observe what's happening where? And how do we troubleshoot it? That wasn't a problem years ago.”
Srikrishnan agrees that “observability is huge,” and says a network has to be able to “receive logs and respond to events in real time.”
For instance, what if a user is normally in Toronto, but suddenly shows up in Manila? “What's going on there? Is this a legitimate use case? Or is this a bad actor?” says Srikrishnan. “That user in Manila may have left their iPad at home, and the iPad is now checking in for emails, but the user is physically in Manila.
“Do you now take the traditional SecOps approach and kill their user account because you notice malicious activity? Or do you say, hey, wait a minute, this could be legitimate? Let me prompt them for credentials. And if it is a legitimate use case, do I need to now spin up digital infrastructure in Singapore to support them? Because they need reliable, secured connectivity back to my core infrastructure.”
In a zero-trust network, user authentication is automatic and continuous, says White: “When I was at Cisco, and I talked about network security, I had a slide which said we could do a crunchy edge with a really nice DMZ [De-Militarized Zone]. And the inside of the network could be really chewy, like a chocolate chip cookie. Nowadays, I’m sorry, but the entire network has to be crunchy all the way through. Security has got to be built in from the ground up and all the way through.”
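The triage Srikrishnan walks through - re-authenticate a known device in an unexpected place, rather than killing the account outright - might be sketched like this, with the device inventory and locations invented for illustration:

```python
# Sketch of location-anomaly triage: an unexpected location triggers
# re-authentication first, and a hard block only for unknown devices.

KNOWN_DEVICES = {"alice": {"laptop", "ipad"}}   # invented inventory
HOME_LOCATION = {"alice": "Toronto"}            # invented baseline

def triage(user: str, location: str, device: str) -> str:
    if location == HOME_LOCATION.get(user):
        return "allow"
    if device not in KNOWN_DEVICES.get(user, set()):
        return "block"                  # unknown device in a strange place
    return "prompt-for-credentials"     # known device, unexpected location

print(triage("alice", "Toronto", "laptop"))  # allow
print(triage("alice", "Manila", "ipad"))     # prompt-for-credentials
print(triage("alice", "Manila", "phone"))    # block
```

A real zero-trust policy engine weighs far more signals, but the shape is the same: graded responses instead of a single kill switch.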
Lifecycle
Network automation also has to be able to handle the lifecycle of a network, during which time it will be maintained by multiple people with different levels of skill.
“If you're a senior network engineer, and you've been in the trenches, you know what to build,” says Brody. “But someone newer and more junior may be tasked with simply deploying hardware and plugging it into some automation software. My fear is, how do I ensure that it's not going to do more damage than good?”
It‘s tempting to design a network for Day Zero and deliver it on Day One, expecting it to carry on working, says Casemore: “It’s not just Day Zero and Day One. When you plan and design something, you have to deal with things like troubleshooting and remediation, and that closed loop.” Automation has to work on Day N, he says: “So you're able to optimize change management, and ensure that the network is continually refined so that it produces the results that it needs to deliver for those applications that it supports.”
For Brody, the important thing is to have a reference architecture that determines how different clouds and services can be combined as needed. White says it’s a matter of trying to build networks in a simple, modular way that can be automated: “Because there's a limit to how much you can hold in your head. And if you've made it too complex, you can't be flexible, because nobody can figure out how to make it work.”
How it works in practice
So far, so theoretical. But what happens when you want to actually deliver an automated network? Let’s take as an example the Apstra network automation system that Juniper acquired and uses. Apstra coined the term “intent-based” network for the jump from automation to intelligent automation, explains Juniper network engineer Mikko Kiukkanen: “You're describing the intent. What you want to do, not how you get there.”
Some tools automate tasks like IP address generation, but don’t verify them. An intent-based network will be based on a reference design or “graph” which describes what the network is meant to achieve. This is mapped onto a network which can be made from multiple vendors’ hardware. “We generate the syntax, after validating that the configuration and the design is correct, and push the configuration to the switches,” he says. “This happens on Day One, where you implement it, and hand it over to operations. After that, we do the day-to-day operations, which means the monitoring and troubleshooting side of things.”
The network behavior is generated from the network design, which is stored in a graph database on Day Zero, says Kiukkanen: “It's a complex data store, which is connected to a router. It gives us now a much, much more granular view into the data center.” When the network is running, the graph runs in sync with the real network, he says. “Rather than querying devices and looking at log files in real time, we can query the graph, because it's a single source of truth.”
The automation runs on the “control plane,” the management interface of the switches, not the “data plane,” the general bits they transfer: “This gives you the flexibility to add equipment, because when you add things to it you're pushing things into the graph using a graphical interface.”
The system can continually probe whether the graph matches the intent, i.e. whether there is a fault or a failure. The intents can include service levels, so if a network link needs to operate at no more than 90 percent capacity, the system will flag up when a change breaks that intent, he says. “If there is an anomaly like a duplicate address, we flag it. And then you just hit one click, and it will say don't let it happen. We can make an alarm for an anomaly,” he says, and issue a trouble ticket automatically for the fix if human intervention is required.
“It’s like autonomous cars,” says Kiukkanen. “We want a self-driving or self-operating network. Are we there yet? Not quite. But we have the pieces.”

No going back
In the pandemic, network automation was put to the test, as thousands of users started to work from home: “An inflexible core data center architecture would not have allowed that.”
Intelligent networks have to operate autonomously, adjusting to deal with faults and surges in demand. “This is the coolest time when you're talking about intelligence and automation,” says Srikrishnan. But it’s always going to be a limited kind of autonomy, he says: “I don't want to put the intention forward, or the message forward, that we're trying to build Skynet here with intelligent automation. It's a little different.”
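The service-level intent Kiukkanen mentions - a link that must stay under 90 percent utilization - hints at how such a check might work. A minimal sketch, with invented link names and readings:

```python
# Minimal sketch of an intent check: compare observed telemetry
# against declared intent and flag any violations as anomalies.

INTENT = {"max_utilization": 0.90}  # the declared service level

def check_intent(telemetry: dict[str, float]) -> list[str]:
    """Return anomaly messages for links that break the intent."""
    anomalies = []
    for link, utilization in telemetry.items():
        if utilization > INTENT["max_utilization"]:
            anomalies.append(f"{link}: {utilization:.0%} exceeds 90% intent")
    return anomalies

# Invented readings for illustration
readings = {"spine1-leaf1": 0.42, "spine1-leaf2": 0.95}
for anomaly in check_intent(readings):
    print(anomaly)  # spine1-leaf2: 95% exceeds 90% intent
```

A production system runs this loop continuously against the graph rather than a dict, and wires the anomaly list into alarms and trouble tickets, but the declared-intent-versus-observed-state comparison is the core of it.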
HEADSETS that think
Engineering grade augmented reality headsets could make the construction process more reliable
Peter Judge Executive Editor
Construction might seem like the ultimate muddy-boot occupation, but there’s a company aiming to disrupt it with augmented reality (AR). And unlike a lot of AR headsets, this is not just a high-tech toy: XYZ Reality reckons that a high-tech hard hat on an engineer’s head can pay for itself in six months.
It’s hard to grasp the proposition at first. The XYZ Reality Atom headset looks more like a futuristic bike helmet than something you’d see on a construction site. Shimmering mirrors glint inside the goggles. Videos of the HoloSite AR system show virtual buildings springing into existence before the engineers’ augmented eyes. But CEO and founder David Mitchell is very definite that the headsets are not enabling science-fiction castles in the sky. “I do like sci-fi,” he tells us. “But I am very much focused on the human aspect, and this can allow humans to build better.”
The headset includes laser-driven AR that is accurate to within 5mm, and can load up schematics for every aspect of the building including civil, mechanical, architectural and structural, so the wearer sees the electrics, pipework, and the eventual physical structure, all overlaid over the current state of the project.
Right first time
“The value proposition is, build it right the first time,” Mitchell tells us. “The industry is plagued by rework.” All building projects suffer when parts are not quite correct and need to be worked over, he says: “Seven to 11 percent of each project is rework. That is wasted effort.”
AR can change that, he says, citing a very concrete example from one project. “After they broke ground, two months in, we came onsite with the headset, and they spotted problems left, right, and center.” One concrete pad had been made 500mm too high. “That would have incurred substantial delays later,” he explains. “It normally takes two weeks to identify that kind of error. And if you verify the work after it is completed, the cost has already been incurred.” With a two-week delay, the project managers would have been faced with the choice between knocking down the work so far, or engaging in expensive adaptations. As it was, the fix was relatively quick.
But the benefits could have been far greater: “That’s using our tech as a reactive process. Drive it deeper into the quality process, and you can go in advance.” If the AR system had been in use at the start of the project, the error would never have occurred. It would have been spotted and fixed when the rebar was installed, before the concrete was poured.

Architect’s view
Mitchell has a family background in construction. He learned to walk on his father’s building sites on the West Coast of Ireland and got involved in commercial construction, helping to build small tower blocks and hotels. “Then I became a bit of an architect,” he says, studying the subject and working in Paris for a time. “But I missed the construction side, and moved back to London. I got involved in some incredible projects - like the Shard and Battersea Power Station.” He also helped build hyperscale data centers in Europe, though he’s not saying who the clients were.
“I started questioning why builders use 2D drawings on site,” he tells us. Architects and engineers use 3D CAD systems, and the designs exist as BIM (building information modeling) data. But on site, the construction staff have to work from a hard copy, which holds less information and is fiendishly hard to marry up with the real world. “As an architect, sometimes I struggled to understand my own drawings,” he says.
Working for J. Coffey Construction, he wanted to give the building workers something better. “Back in 2015, we deployed paperless construction on a building in Ireland,” he says. The project gave the builders more direct access to BIM data. “It was a huge success - everyone was sold on the idea.” At that stage, he could see the potential, and wanted to take it further. He told his firm: “This is the future. I’m going for it.” And Coffey agreed, giving him pre-seed funding.
Five years later, XYZ Reality has 70 staff. Backed by $26.5 million in venture funding, its headsets have showed up at projects worth a total of nearly $2 billion. Data centers are a major part of its market - albeit a secretive one. All Mitchell will say about a major current project is that it’s a hyperscale data center in Europe. Elsewhere, however, XYZ’s partner PM Group mentions Denmark, so we can be pretty sure we’re talking about the Meta/Facebook site currently expanding in Odense.

Engineering grade AR
XYZ designed a health and safety compliant helmet, with waveguides to project a hologram in front of the wearer’s eyes. The company opted for a laser-driven positioning system to improve the accuracy over consumer-grade AR systems, which sometimes use cameras for positioning: “You cannot rely on feature points and the camera position.” The positioning system works with a site coordinate system to give millimeter accuracy, and the images are validated by laser-scanned reality capture. “There are a number of prosumer devices that are marketed in a way that makes this kind of thing look feasible, but we quickly realized that, for AR to be usable, it needs to be accurate within construction tolerances.” Prosumer devices like Microsoft’s HoloLens can often drift by 100mm, and be affected by factors such as sunlight, he says: “We guarantee we can position AR with engineering-grade accuracy.”
Those holograms can selectively show different disciplines including civil, mechanical, architectural, and structural engineering. “We are 3D creatures. It makes more sense to see things positioned accurately,” he says. “When someone puts on that headset, they get a big smile on their face.” He sums up: “Ultimately our vision is builders building from holograms.”
Data centers - the perfect use case
Historically, construction is seen as conservative, but data centers are more aware of the potential of technology - and they also have a skills shortage that has become acute during a surge in demand for new facilities. Those facilities are also often very similar, with one provider wanting to build the same building over and over again. And that’s a big opportunity for a new data-driven technology to show its worth.
“Data centers are cookie-cutter projects, so we can measure benefits directly against previous buildings,” says Mitchell. Data center builders that adopt AR can directly see that, where one project required 10 percent rework, the next one needed less than one percent. Given the large costs of that rework, he says the return on investment can work out well, with one system paying for itself nine times over in six months: “The drop-off in rework happens instantly.”
It’s also been used to verify the equipment going into the data center. “We have done factory witness tests, streaming into the factory environment.” In these tests, engineers measure skids of equipment designed for a data center before they are shipped. “We’re able to check that the site can host it accurately, and all key interfaces are built correctly.”
Mitchell won’t say how much the headsets cost, explaining that they are provided as-a-service: “The cost depends on the duration of the project. We have a subscription model, charging per unit per month.” This means that customers can increase the number of users with headsets as required, during construction peaks, and can roll on headsets to new projects. “One project started with one headset, now it has three, and it’s moving to eight-plus. They keep coming back for more.”

Can you automate it?
But can the system evolve into a level of automation? Mitchell says it already allows for an increase in productivity from existing workers: “This industry is suffering from a massive skills shortage. Giving this to people in the industry, assisted reality will assist them in building what they are building.”
The system also allows remote staff to share the view from the headset: “We are happy to stream into the headset,” he says. “We prioritized streaming and remote working as soon as the pandemic came in. We wanted to help the market.” Adapting what the company had, XYZ patched users into Teams calls so some customers could work remotely on-site using Teams and HoloSite. “We reduced the amount of labor and travel,” he says. “40 people streaming into a headset were able to do a virtual handover and walk-down of the project.”
He doesn’t see it going further any time soon, where a robot or drone provides the remote eyes. “We are not potentially replacing people. We can delay the invasion of the robots. But we could feature ‘cobots’ that interface with people.”
So the headset isn’t a step towards a self-building data center, but there are fascinating additions in the pipeline. “The next phases involve ‘assisted reality,’” he explains. In this development, the headset can monitor aspects beyond the current user’s job definition, and passively detect what’s been done on site. Once again, it’s about spotting errors, but in a wider context. The system can be worn by a construction worker, and passively detect that a mechanical system is out of tolerance. It would then pass an alert directly to the mechanical engineering team - assigning the work to the right people automatically. “We might have made mistakes historically,” says Mitchell. “Now we know it will be built right first time.”

Audits and support
Plenty of virtual reality and augmented reality headsets have already been seen in working data centers, and in virtual simulations of them. Some have been demonstrations or prototypes, while others have helped with troubleshooting.
Back in 2017, Future Facilities, a company that optimized airflow using computational fluid dynamics (CFD), investigated using VR to explore CFD simulations of data centers. Future Facilities used VR to let users view the internals of several different generations of data center, including temperature levels and airflow.
In 2018, veteran data center engineer Greg Sherry launched a company called Virtual Augmented Reality for Critical Environment Technical Infrastructure (VARceti), proposing a training program based on VR and AR technologies aimed at data center engineers and technicians, called Avros DC. Describing it at the time as a “flight simulator for data centers,” Sherry told DCD: “We can start a fire in the data center, or flood it, or make a tank come crashing through it.”
During the Covid-19 pandemic, Microsoft's cloud operations team used the company’s HoloLens 2 AR glasses to conduct data center audits remotely. This allowed the team to comply with Covid-19 precautions and made the inspections quicker and cheaper. The team used Microsoft Dynamics 365 Remote Assist running on its Azure cloud, with HoloLens 2, having first assured themselves that the Azure platform complied with standards such as the PCI data security specifications.
Meanwhile, some hardware manufacturers have added AR capabilities to their remote management products. Again inspired by travel restrictions early in the pandemic, ABB Electrification added immersive augmented reality to its support and maintenance offerings. The company launched Closer, a set of interactive step-by-step guides to help less-technical staff get through troubleshooting tasks. It also offered a product called Raise for remotely guided repairs and maintenance of ABB products.
Drones at (and even in) a data center Do today’s mega-campuses need aerial security as much as human patrols?
D
ata centers are continually driving to become more efficient, and to get more useful work out of staff. In the case of giant hyperscale data centers, increasing automation may be calling time on the practice of people patrolling thousands of yards of data center perimeters. Novva data centers, led by former C7 CEO Wes Swenson and backed by CIM Group, has a 100-acre flagship in West Jordan, Utah - a campus which could reach to over 1.5 million sq ft (139,350 sq m) of data center space. The first phase, involving a 300,000 sq ft (28,000 sq m) data center was completed in late 2021 and includes a 120MW substation as well as
an 80,000 sq ft (7,500 sq m) office building for Novva’s headquarters. To cover that much space, Novva is turning to autonomous drones and robots. The company is deploying Boston Dynamics’ Spot robot to patrol data halls, as well as semi-automated security drones (also known as Unmanned Aerial Vehicles, or UAVs). “When you run a 100-acre campus, you really should have an aerial view of the operation,” says Swenson.
Dan Swinhoe
News Editor

Drones take flight
In 2021, Novva deployed two Blackbird drones from Nightingale Security – and plans to have four on the Utah campus. “For the most part, it just does its own thing and then just autonomously goes back to its landing site,” says Swenson. Equipped with 4K cameras, LiDAR, and infrared, the quadcopter drones perform regular autonomous perimeter checks, as well as responding to ad hoc alerts. The drones run around 10 pre-defined missions – covering perimeter checks and facility inspections – and can react to certain alerts automatically. “We have detectors on our fencing that detect any kind of vibration, and an algorithm that detects whether that vibration is due to the wind speed or some other interference,” says Swenson. “If we think it’s somebody trying to climb over the fence or cut it, the drone will
automatically launch towards that sensor and get eyes on it.” Robots in data centers are still niche: while they might be used during the construction phase, use of drones at data centers for security and ongoing inspection is rare. As well as Novva, Nightingale has reportedly been working with ‘a very large Silicon Valley company that runs their own data centers’ for around two years, and is in the process of conducting a pilot with an e-commerce company in Washington. The company’s drones were also deployed at the Ohio data center site of a US investment bank during the construction phase. Data center operator BCS is also considering drones at its facilities. Hector Castenada, senior director, service delivery and technology for BCS, says the company is hoping to deploy a pilot project at a 300,000 sq ft data center close to its Texas HQ by Q3 of 2022. As well as perimeter security drones, Castenada is considering drones for facilities inspection – so visiting engineers can inspect chillers or other equipment on the roof without climbing up themselves – as well as wider land and data center surveying. While the project is still in the research phase, he would expect to see around two permanent security drones at each facility – ensuring one can patrol while another recharges – alongside smaller hand-held machines engineers can bring on-site for inspections. “I definitely see this becoming the standard in a few years,” he says.
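The fence-sensor triage Swenson describes – distinguishing wind-driven vibration from a possible intrusion before dispatching a drone – is, at its core, a simple thresholding algorithm. The sketch below is purely illustrative; the model, sensor names, and thresholds are hypothetical assumptions, not Novva’s actual system:

```python
# Compare each fence sensor's vibration reading with the level the current
# wind speed would explain; anything above that (plus a margin) is flagged
# for a drone to investigate. Toy model only - not Novva's real algorithm.

def expected_wind_vibration(wind_speed_ms: float) -> float:
    # Hypothetical model: fence vibration grows roughly with the
    # square of wind speed
    return 0.02 * wind_speed_ms ** 2

def sensors_to_investigate(readings: dict[str, float],
                           wind_speed_ms: float,
                           margin: float = 1.5) -> list[str]:
    """Return IDs of fence sensors whose vibration wind alone can't explain."""
    threshold = expected_wind_vibration(wind_speed_ms) * margin
    return [sensor for sensor, level in readings.items() if level > threshold]

# 10 m/s of wind explains ~2.0 units of vibration; the margin lifts the
# dispatch threshold to 3.0, so only the east fence sensor is flagged.
readings = {"fence-north-3": 0.8, "fence-east-7": 4.5}
print(sensors_to_investigate(readings, wind_speed_ms=10.0))  # ['fence-east-7']
```

The real system would presumably use a learned model of wind response rather than a fixed curve, but the decision it feeds – launch a drone at this sensor or not – is the same.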
Integrating data center and machines
Nightingale offers its drones, base station, and software as a package to buy outright alongside a yearly subscription, or on a lease basis. Swenson says Nightingale was the UAV company which offered Novva the closest to a complete package, as opposed to point solutions. “Most drone companies are basically remote control aircraft companies,” says Jack Wu, founder & CEO, Nightingale Security. “They’re just building something based on RC [remote control] technology that’s been around for decades. We’re trying to marry the autonomous technology with robotic
software and robotic infrastructure.” Drones have improved in leaps and bounds in recent years. They can now remain in the air for long periods and are able to fly autonomously in difficult conditions. The software and wider support ecosystem isn’t quite as advanced. If data centers are to become more automated, the robots deployed there will need to be easily integrated with existing management systems, something which is still in the nascent stages. “Automation is a huge trend,” says Wu. “You’re going to have ground-based robots, drones, other automated systems. We’re not at the age where we’re all integrated yet. But that is coming.” Wu suggests we are still in the ‘discovery phase’ where customers learn and understand the potential use cases of different robots, while the robotics companies are working on getting each platform working properly, independently. After that, the industry can focus more on integration and putting more robot platforms into a single dashboard. “First phase is the introduction of these select platforms. And then after that, it will be the phase of integration, where all the different platforms will start working together.” While Swenson says Nightingale was one of the most ‘enterprise-ready’ offerings on the market, deployment still takes work. “This is not something that you just go buy off the shelf and drop in. It does take some work. It takes some on-site consultations and setup,” he says. “The drones themselves are pretty mature, but to integrate them and run missions, that is still pretty advanced.” Castenada says BCS is researching the integration of drones for extended duties. “I’m really keen to use the drone not only
for perimeter security, but to interact with a chiller or an external generator; external devices and external critical equipment.” “What we’re looking for is integration capabilities with our BMS [building management systems]. Right now we’re researching which drone to integrate with video management systems like Genetec.” Currently, Azur Drones of France appears to be the only UAV company with a publicly announced integration with Genetec, says Castenada. He also says there are only a few data centers that integrate with drones. The company is also reportedly exploring Wingtra, which offers a vertical take-off fixed-wing drone that can be used for surveying.
Humans need drone training
While different drone companies will have different processes, prior to deployment Nightingale will come to a location, survey the environment, map the buildings and potential obstacles, and help train the staff. Wu says the process normally takes about a week. “Each local environment is different,” he adds. “They have different wind patterns, different temperatures, and also their operational tempo is different; some use it 12-13 times a day, others use it twice a day.” The software system includes predefined limits around altitude and no-fly areas. At Novva, drones won’t fly over the on-site substation or peek into neighboring buildings. Nightingale will help establish those initial restrictions but the customer can change them as required. It’s surprisingly straightforward to pass FAA requirements to fly a drone. The Nightingale flight system is designed to meet US Army standards. Wu says this means it is usable by people with a minimum IQ of around 80. “Part 107 [the FAA’s drone certification
“If your data center is in a residential, super metropolitan area, it’d be a little bit risky to be sending a drone up in tight quarters. People are gonna get sensitive about it and you’re gonna get calls; they’re just too dystopian”
program] is only a 60-question, multiple-choice questionnaire,” he explains. “Anybody who can use an Uber app and Google Maps can use our software.” Nightingale put DCD in charge of a drone in a live demonstration, allowing this writer to fly a drone round the company’s facility in California, from the comfort of a flat in London. Moving the machine was a simple case of point and click, and despite my efforts I wasn’t able to crash or break through the no-fly zones. During the flight, the drone was able to identify and flag vehicles in the company’s parking lot outside usual working hours. Swenson says everyone at Novva’s control center, including himself, is FAA-certified to fly a drone, and the company has different internal pilot statuses. Less-certified members of the team can pause and rewind pre-set missions to check for anything the drones may have missed – and control viewing angles – but those with top-flight status can take full control of the drones. “The drone itself requires very little human intervention once it’s pre-programmed,” says Swenson. “And even then it has all sorts of safeguards; emergency landing zones, no-fly zones, altitude zoning.” “It’s also something that we felt like we could copy and paste to other facilities and build a training structure for every data center. It’s something we can replicate more easily.” Wu is clear, however, that his company’s machines are complementary to humans, rather than a replacement. “We’re not here to replace all human guards because there are tons of things that humans can do that we can’t. So robots are good at repetitive tasks, dangerous environments, and the ability to be very cost-effective, but we can’t answer telephones, and we can’t provide directions.” “And we definitely can’t discern the context of a bunch of kids smoking weed, or a bunch of guys with guns, in the parking lot. The context is very important, and humans have that context.”
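The predefined altitude limits and no-fly areas described above amount to a geofence check applied before any flight command is accepted. As a minimal illustrative sketch – the coordinates, zone, and function names are hypothetical assumptions, not Nightingale’s software – such a check might look like this:

```python
# Accept a flight command only if it stays below an altitude ceiling and
# outside operator-defined no-fly polygons (e.g. an on-site substation).
from dataclasses import dataclass

@dataclass
class Point:
    x: float  # meters east of the base station
    y: float  # meters north of the base station

def in_polygon(p: Point, poly: list) -> bool:
    """Ray-casting point-in-polygon test."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        a, b = poly[i], poly[j]
        if (a.y > p.y) != (b.y > p.y):
            # x-coordinate where the polygon edge crosses the ray at p.y
            x_cross = a.x + (p.y - a.y) * (b.x - a.x) / (b.y - a.y)
            if p.x < x_cross:
                inside = not inside
        j = i
    return inside

def command_allowed(target: Point, altitude_m: float,
                    ceiling_m: float, no_fly: list) -> bool:
    if altitude_m > ceiling_m:
        return False
    return not any(in_polygon(target, zone) for zone in no_fly)

# A square no-fly zone over a hypothetical on-site substation
substation = [Point(50, 50), Point(70, 50), Point(70, 70), Point(50, 70)]
print(command_allowed(Point(60, 60), 30, 120, [substation]))  # False - inside zone
print(command_allowed(Point(10, 10), 30, 120, [substation]))  # True
```

The customer-adjustable part is just the data – the ceiling and the list of polygons – which is why operators can change restrictions as required without touching the flight software.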
Building a base for your robots The Blackbirds live within a designated enclosure with a retractable roof that acts as a landing pad and charging pad. Swenson describes Utah as a ‘cold desert’, just as likely to see snow as intense heat, so it is important the base station is weatherproof
and can deal with snow without needing a person to clear the landing pads. The razor-wire-covered drone compound where the base stations reside sits at ground level on the Utah campus. It measures around 16 feet by 20 feet – the base stations are around 7 feet by 7 feet – and the pad has dedicated power and fiber running to the stations. “We needed something that could sit in a weatherized vestibule. You could put this on the top of the building but it does take some planning; it’s not something that you can just go in and drop in in a day with a WiFi signal. But compared to building a data center, putting in one of these drones was relatively easy.” Wu describes the base station as an ‘Edge-based computing device’; the station stores the video feeds and other data collected by the drones and runs the footage collected through machine learning algorithms, for example, to detect cars or people. “It has a large PC inside, tons of memory, tons of storage, and a lot of horsepower to crunch numbers,” says Wu, adding that processing and storing the video streams at the base saves cloud costs and provides added security. When asked if the drones have helped prevent or deal with any incidents, Swenson says the company has not caught anybody trying to get into the facility since it opened, but the drone has made some avian friends. “Sometimes we have some birds of prey out here that find the drone quite interesting when it’s flying around; we have a few eagles and some hawks that find it interesting but for the most part, they keep their distance.”
When drones make sense
Wu suggests campuses of less than 30 acres probably won’t need a drone for patrols, as people and cameras can do the job well enough. For larger campuses where it can take minutes to get across the site even in a car, drones potentially offer a more efficient patrol and quicker response option. “If you have large perimeters, it just doesn’t make sense for you to put cameras on them because it’s very expensive,” he says. “It’s not just the cameras - you also have to run power. And then you also have to run data, so that becomes a huge cost factor.”
While large remote industrial sites are suitable for drones, deploying a drone at an inner-city carrier hotel might not make sense. Wu notes the more urban a proposed site is, the more difficult it is to gain approval from regulators to fly. “If your data center is in a residential, super metropolitan area, it’d be a little bit risky to be sending a drone up in tight quarters,” adds Swenson. “People are gonna get sensitive about it and you’re gonna get calls; they’re just too dystopian, and people might feel like they’re being spied on.” While the company helps train staff, Wu notes it’s ultimately up to the customer company to understand restrictions and potential liabilities in relation to flying drones, likening the relationship to that between a carmaker and the driver.
Drones inside data halls
In future, Wu suggests, the systems will be more autonomous and less reliant on a dashboard and point-and-click interfaces. “Eventually you’ll just install the system, and the person will just talk to it: ‘Hey, Blackbird. Can you do a perimeter patrol of Section A or Parking Lot 1,’ and the drone will take off and go do it.” “And then when it lands, it will give an alert to the person that the mission is complete, or, if it sees something suspicious, will automatically say, ‘you should take a look at this,’ and provide the live video.” Castenada says if his pilot project goes well, he’d like to see BCS start to use headsets such as Microsoft’s HoloLens to control the drones. He also wants to share the drones’ footage with customers. “We would like to introduce transparency. A customer wants to see their facility, and we’d like to offer a secure portal for them to be able to view and watch their property.” Today Novva only has medium-sized drones that operate outdoors, but in the future, the company wants to have microdrones within the data halls - perhaps flying eight to 10 microdrones per building. “Right now we’re experimenting with smaller drones that are three inches in diameter,” says Swenson. “They would operate within the data center and be able to monitor things [and watch for] anomalies within an aerial view.”
Northern Virginia
THE BATTLE FOR NORTHERN VIRGINIA: SATURATION POINT?
Dan Swinhoe News Editor
Residents are fighting against proposals for huge new data centers across Northern Virginia. While there might be ample land and power, are residents reaching saturation point?
Northern Virginia is the undisputed capital of data centers worldwide. A Cushman & Wakefield report suggests the market measures more than 1,600MW of installed capacity, more than double the next largest market.
Within that, Loudoun County is the epicenter, home to the “Data Center Alley” neighborhood; official figures suggest the county currently has around 26 million sq ft of data centers. However, as Loudoun continues to fill, a number of developers are looking to expand to neighboring counties. Developments in Prince William are growing, and a new project could see thousands of acres rezoned, paving the way for tens of millions of square feet of new data center development. Locals, however, are rallying against the proposals, as are others throughout the county and wider Northern Virginia. Even if there is enough land and power, the residents may be reaching saturation point.
Prince William Digital Gateway - a rival zone? Reports surfaced last year of PW Digital Gateway, a development in Prince William County. First reports described an 800-acre development that would string together 30 parcels of agricultural land along Pageland Lane owned by 15 property owners, to be developed by a single unnamed data center developer. Since then, the PW Gateway project has expanded massively to something that, if fully built out, could see Prince William overtake Loudoun as the data center hotspot of the world. More than 200 landowners have elected to join the proposal; the current version of the project would replan 2,133 acres of the county's "Rural Crescent" for data centers. Approval could pave the way for up to 27.6 million square feet of data centers; more than all the facilities in Loudoun right now - and on top of other existing developments in PWC.
It’s worth noting, however, that even if the Gateway plan is approved, individual developments would still need planning permission; data center developments outside the county’s existing designated ‘Data Center Overlay Zone’ also need special permission. The original developer for the 800-acre project was revealed as QTS, when the company’s logo was spotted on slides shown at a town hall meeting. QTS confirmed this to the press: “QTS has been approached and is seeking to acquire land as part of the Prince William County Digital Gateway project,” a QTS spokesperson wrote in a statement to Prince William Times. “This is a unique opportunity to play a role in what could be the most significant economic development initiative in the county’s history. QTS has a strong track record of being a considerate, supportive and sustainable neighbor and is committed to a thoughtful development strategy that will preserve the historical significance and aesthetic beauty of the area,” the statement continued. QTS declined to expand on these
comments to DCD. Not everyone was convinced by QTS, and protests focused on the Manassas National Battlefield, a historic Civil War site adjacent to the development. Around 50 people from the Coalition to Protect PWC gathered outside QTS facilities in Manassas, with chants and signs saying “stay out of the rural crescent” and “save our sacred battlefield.” “We are here to say to QTS data centers… go away,” said Elena Schlossberg, executive director of the coalition, at the event. Schlossberg told DCD that, when QTS was revealed as the project backer, the Coalition thought: “We need to just let QTS know that, clearly, they haven’t been paying attention to the concerns of the community. “We needed to make sure that they hear us, and had the idea to have a press conference/little rally/introduction in front of the QTS building.” QTS officials have not acknowledged the rally, on the day or subsequently. “They didn’t come out. They haven’t reached out to us. Their response was to file the rezoning application. I think that they’re hoping that we’ll get tired and we’ll go away, but we’re not going anywhere.”
The fight for Pageland Local officials are yet to vote on the Digital Gateway proposals, but local residents and officials have voiced concerns over inviting large amounts of new development to
a mostly rural area and worry about the potential impact the rezoning could have on the National Battlefield. The fight has even made it to the national press, with the WSJ and Reuters reporting on the uproar from local residents and heritage & conservation groups, which even include a local retirement home. A public hearing about the proposal in January lasted more than three hours, and 73 people spoke. Board of Supervisors Chair Ann Wheeler has warned that rural land doesn’t have guaranteed protection: “We can’t just take things off the table because it’s in an undeveloped rural area,” she said. But there are other concerns. Parts of the development would disturb the graveyards of enslaved people dating from before the Civil War. And the project could also impact local watersheds that run to the Occoquan reservoir – a major source of drinking water in the area. “I call it the plunder of Pageland. You are always going to fight to protect your land in the rural area,” says Schlossberg, “because developers see it as empty space. They don’t see it for its value, especially for its environmental resources and how we fight climate change.” Brandon Bies, superintendent of the National Battlefield Park, has written to the Prince William Board of County Supervisors and the county planning office, outlining “grave concerns,” saying the plans could cause “potential irreparable harm” to the
park’s historical, environmental, and visual aspects. Documentary filmmaker Ken Burns has said the proposals could have a “devastating impact” on the National Battlefield. Environmental officials at Prince William County asked the board of supervisors to reject the proposal, and US Rep. Jennifer Wexton, a Democrat representing Virginia’s 10th Congressional District – encompassing parts of Loudoun, Fairfax, and Prince William Counties – wrote to the Board of Supervisors last month. She said the proposal would have a “significant negative impact on the surrounding environment and community.” Even those outside the county are wary of the proposals. In letters submitted to PWC, officials from neighboring Fairfax County laid out a series of objections to the land-use change. “Fairfax County staff has significant concerns regarding the impacts that will accrue from adoption of this CPA (Comprehensive Plan Amendment) and encourages Prince William County to reconsider the proposal,” warned Fairfax County officials in the letter, signed by Fairfax Planning Director Barbara Byron. “We have an overarching concern about the proposal to permit higher density development within the larger Occoquan Watershed due to cumulative impacts on the reservoir which provides drinking water to a large portion of Northern Virginia,” the letter reads. Loudoun County officials, by contrast, do not oppose the proposals.
“We told them it would be the most expensive transmission line fight they would ever experience and the longest, and we were right”
PW County’s own finance projections suggest the proposals won’t generate as much tax revenue as previously thought. Prince William County Deputy Finance Director Tim Leclerc told county officials the Gateway would eventually generate about $400.5 million a year in local tax revenue
under current tax rates – not the $700 million the project’s applicants estimate. Leclerc also said this tax revenue would ramp up slowly, rising from about $9.8 million in the first year of operation, to $204 million in 10 years’ time, reaching about $336.8 million in year 15.
Another fight in PWC
The Coalition to Protect PWC originally started around 2015 to oppose a proposed power transmission line. Dominion Energy was seeking to build a new line to serve an AWS facility in Haymarket. The case went to court and, eventually, the power line was partially buried. “We told them it would be the most expensive transmission line fight they would ever experience and the longest, and we were right,” says Schlossberg. The Coalition is now focused on fighting the PW Gateway project and protecting the Rural Crescent, arguing the County’s existing allocated land for data centers is sufficient. “It’s fun to be called an activist, but that’s not really how I see myself. I see myself as someone who tries to inject common sense and rational growth patterns which are based on Smart Growth principles,” says Schlossberg. “We just do it because we believe that what we leave behind matters. Anybody who’s paying attention to climate change should be terrified that this is the direction that we’re moving.” As with many outside the industry, Schlossberg and the members of the coalition were not well-versed in the data center industry until facilities and the accompanying infrastructure started showing up on their doorsteps. “We all live our lives in ignorant bliss. That was a big eye-opener and learning curve; I had to learn about megawatts and the load that could be handled at a substation, and transmission lines. It was surprising that we had not been having these discussions earlier.” The area in question, along Pageland Lane and close to Manassas Battlefield, is called the Rural Crescent because it’s designed as a rural barrier to prevent sprawling, uncontrolled development. The area has fought off several proposed
developments over the years, most notably a major mall and a proposed Disney theme park in the 1990s. Referring to that battle, Schlossberg warned QTS: “You think Disney had it hard? Just wait.” The Coalition first heard about the Gateway project when Dominion Energy alluded to it during a public meeting about another project outside the existing overlay zone. The Coalition says the Council is now pro-development, and has lost interest in protecting PWC’s rural areas. PWC Council officials declined to speak to DCD. Schlossberg says today the PW Gateway project is being led by several local landowners that have long had hopes of developing their plots – including some who opposed the likes of Disney – and have simply pivoted from residential or commercial towards data centers as the opportunity arose. “They’ve abandoned the higher density development into the Rural Crescent and they’ve gone right for data center development,” says Schlossberg. One who switched sides is County Supervisor Pete Candland, who opposed the proposals at first, but eventually joined the PW Gateway group. His home is in the development area and Candland says he has no choice but to sell, or see his home surrounded by data centers. He is now excluded from participating in Board of County Supervisors discussions or votes on the proposal. Some locals called on him to resign. As landowners like Candland join up, the project has expanded from 800 acres to more than 200 landowners with over 2,000 acres. The Coalition says the smaller landowners close to the larger plots leading the project feel compelled to join out of desperation. “The developers spin this narrative that this is going to happen and you could either join them or see your home and your quality of life flushed down the toilet,” says Schlossberg. “The landowners can’t talk anymore because they’ve subsequently signed an NDA, but we know people did not want to sign on. “They don’t want to sell, but they also feel trapped.
If they have three acres and end
up surrounded by these massive buildings and substations, their quality of life is going down. And that’s what the developers did; convince people that either they were going to make a million dollars an acre, or, if they didn’t buy in, they would be doomed. “I don’t believe that you should screw your neighbor so that you can make a bunch of money. It’s ripping apart the fabric of our community.” The county has an existing data center overlay zone – an area where data centers are permitted and require fewer zoning and planning permission applications. Proposed by the Coalition, the overlay zone aims to promote the development of data centers within areas of the County where there is existing infrastructure to support them. It is largely centered on the land between Manassas and Gainesville. The existing overlay encompasses around 10,000 acres, mostly in close proximity to high-voltage transmission lines. As of December 2021, approximately 500 acres of the land within the overlay is currently developed with data centers. An additional 1,600 acres is in preconstruction (permit review or planning stages) or under construction. A report from the review suggests approximately 600-1,110 acres of remaining land in the overlay are considered ‘market viable.’ The County is currently exploring whether or not the overlay zone should be expanded. This is another bone of contention with the Coalition, but is entirely separate from the PW Gateway project, and could see even more land in the county designated for data centers. “[With the introduction of the overlay zone] we wanted to prevent that kind of thing from happening again,” says Schlossberg. “This new board has taken that effort with the overlay zone and they’ve crushed it beneath their feet and I don’t know why. What a joke.”
Are Northern Virginia’s residents hitting data center saturation point? Even beyond the Digital Gateway Project and the overlay zone, smaller fights against data center campus proposals are popping up both in PWC and neighboring counties. Housing developer Stanley Martin is
The ‘ladies of Pageland’ change sides
Two of the local residents leading the charge in favor of the PW Digital Gateway project were among the area’s staunchest defenders. Page Snyder and Mary Ann Ghadban have told press that it was the relentless encroachment of new developments and loss of rural land, not the lure of money, that caused them to change sides. “We’ve spent our entire lives fighting one thing after another, it’s just gotten worse and worse,” Page Snyder, 71, told the Wall Street Journal. “Basically, we’ve just thrown in the towel.” During the 1950s, Snyder’s mother, Annie Snyder, fought against the widening of local roads and a motocross speedway racetrack. The Snyders had previously fought against an amusement park, a large retail mall, and the proposed Disney theme park. Snyder and Ghadban were key players in a years-long battle against a planned ‘Bi-County Parkway’ which would take acres of the Battlefield and connect I-66 in Prince William to Dulles International Airport in Loudoun. One profile piece from that time dubbed them ‘the ladies of Pageland lane.’ Now Ghadban, 68, who has a history as a local developer, says that “the writing is on the wall” and the bypass is inevitable as the county continues to develop. “We all thought we’d get to die here,” Ghadban said. But “it’s time for Prince William to evolve.” According to a piece in Prince William Times, Ghadban helped the current project along, hiring an expert to see if power lines and the fiber-optic cable along Pageland Lane would interest data center operators in the wake of a 2019 county decision to approve a 2.3 million-square-foot data center complex outside the overlay zone and less than two miles away. The publication notes she managed to attract more landowners to the project with the promise of up to $1 million an acre.
Proponents of the PW Digital Gateway argue that it will “turn the transmission line lemons into lemonade for all PWC residents” and bring prosperity to an otherwise overlooked area. Some ‘half a dozen’ data center firms were reportedly looked at as partners for that initial 800 acres, before settling on QTS. The PW Times suggests Compass Data Centers may be involved in buying some of the other acreage.
seeking to rezone more than 250 acres in PWC’s Bristow to allow for up to 4.25 million square feet (394,800 sq m) of data center development. A neighboring plot has already been given the greenlight for a 1 million sq ft (93,000 sq m) data center campus, likely for Yondr. Locals opposed the proposals in the local press.
"We believe you don't have to sacrifice your natural resources, your environment, your hallowed ground, your clean drinking water, for economic development"
Another 80MW development in PWC’s Haymarket moved forward on a technicality after county officials couldn’t agree on whether the development should go ahead, especially before the results of the data center overlay review. Near PWC’s Dale City, Plaza Realty Management Inc. is proposing a 1.16 million sq ft data center campus. Culpeper County has eschewed data centers, aside from four Equinix data centers that opened 14 years ago, but it is now a target for AWS. The cloud giant wants to put up two buildings spanning up to 430,000 sq ft (40,000 sq m) on 243 acres of land in Stevensburg. The county Planning Commission voted to recommend denying the application during a five-hour meeting that saw dozens of people speak and ran till after midnight. More than 40 people addressed the board during the next meeting, the vast majority against the proposal. The plans were also opposed by State Sen. Bryce Reeves. The county still ruled in
48 | DCD Magazine • datacenterdynamics.com
favor of the zoning change. However, in Fauquier County, the town of Warrenton changed its zoning laws to allow for data centers after AWS expressed an interest in developing there. Even Loudoun may be hitting its limit. The county is currently considering a rezoning proposal that could add another 56 million sq ft of potential data center space. But local officials have been hostile towards such proposals. “If we turn this large chunk of land into data center alley… it would be totally opposite of the majority of public input we just got recently from our residents,“ said Loudoun County Supervisor Tony Buffington at a public hearing. “I will fight this so hard, every step of the way,” he added, saying that if the current space zoned for data centers is all used up, “then maybe we have just run out of data center space.” Back in Prince William, local officials are due to vote on the PW Digital Gateway project in April. But even if the vote is passed – previous reports suggest the Council’s members are broadly in favor – opposition from local residents will continue. “I don't think QTS understands the passion of this community,” says Schlossberg. “And I don't think that this new board does either. The overlay was adopted for a reason. We believe you don't have to sacrifice your natural resources, your environment, your hallowed ground, your clean drinking water, for economic development. “It's not this Hobson's choice of either you have your environment or you have economic development. Sustainability begins with site selection. And when you have the local state and national conservation groups telling you you're in the wrong place, you're in the wrong place.” “And I think that this is a time when data centers should really be trying to avoid this kind of community opposition. It's a bad look.”
Singapore's Comeback
AFTER THE MORATORIUM: HOW SINGAPORE PLANS TO STAY AHEAD IN THE DATA CENTER RACE
Paul Mah APAC Correspondent
As it sets limits for fewer, smaller data centers
Despite an “implicit” moratorium on new data centers in place since 2019, Singapore was ranked this year as one of the top data center markets in the world, tying with Silicon Valley for second place, according to Cushman & Wakefield’s 2022 Data Center Global Market Comparison.
With the moratorium officially lifted in January this year and new guidelines revealed, what does the future hold for the island state of 721 square kilometers?
The story so far

We first reported on the moratorium on new data centers in 2019. It was subsequently confirmed in a terse joint statement by the Singapore Economic Development Board (EDB), Infocomm Media Development Authority (IMDA), and JTC Corporation (JTC) to a local broadsheet at the end of 2020.

Various reasons were cited earlier for the moratorium, including the strain that data centers put on the electrical grid. However, a long list of initiatives since 2016 suggests that the key concern had
always revolved around sustainability. For instance, Singapore announced the trial of a tropical data center in 2016 and high-rise data centers in 2017, and also funded research into technologies such as water cooling.

And through Keppel Data Centres, which is owned by the government-linked Keppel conglomerate, Singapore is actively exploring the use of cold energy from the Singapore LNG Terminal to cool data centers, as well as building a floating data center park with the ultimate aim of harnessing seawater for cooling.

In its quest to reduce its carbon emissions, Singapore has embarked on a plan to eke out as much renewable energy as possible. One prong of this plan involves increasing solar capacity to two gigawatt-peak (GWp) by 2030, which will entail maximizing the deployment of solar panels onto available surfaces such as rooftops, reservoirs, and even the near-shore sea.

Despite this herculean effort on Singapore’s limited land area, solar power will only be able to produce four percent of Singapore’s current electricity demand by 2030. In the meantime, data centers alone took up a staggering seven percent of Singapore’s total electrical consumption in 2020.
The hyperscale carbon dilemma

A Parliamentary response from the Minister for Trade and Industry in February succinctly summed up the conundrum hyperscale data centers pose to the resource-scarce country. “In the last five years, 14 [data centers] with a total IT capacity of 768MW were approved to be constructed on industrial State land. This was a rapid increase compared to the 12 [data centers] with a total IT capacity of 307MW in the preceding five-year period.”

Even if the size of data centers stays unchanged, the unbridled growth of hyperscale facilities will likely see additional power requirements surpass the previously-recorded 768MW over the next five years – practically doubling the 1,000MW consumed by data centers in 2021.

In the face of this intractable consideration, merely improving energy efficiency will not alter the underlying
reality. Clearly, the powers that be in Singapore have now concluded that hard limits must be established before new data centers can be built.
After the moratorium

News of the moratorium lifting broke in January, again in a parliamentary response. Minister for Trade and Industry Gan Kim Yong, who took up the role after a Cabinet reshuffle, confirmed that the moratorium is set to end, but noted that the country will be “more selective” about data center projects moving forward.

For now, Singapore’s Communications Minister has confirmed that the city-state will begin accepting applications for new data center developments from Q2 as part of a pilot phase.

At a closed-door virtual meeting in late January, organized by trade body SGTech and attended by at least two dozen key data center leaders, IMDA and EDB outlined their proposed criteria for data centers. In a nutshell, Singapore wants data centers that are “best in class” in terms of resource efficiency and which can contribute to the country’s economic and strategic objectives. In addition, measures will be put in place to raise the efficiency of these data centers over time.

To this end, a pilot will be conducted this year to allocate up to 60MW of capacity for new data centers. Applications for data centers with capacities between 10MW and 30MW will be considered. While relevant government agencies say they have land designated for these data centers, other sites, including proposals to expand existing facilities, will be considered. New facilities should have a PUE (power usage effectiveness) of 1.3 or less, and must incorporate efficiency or sustainability innovations. Applications by consortia will be considered, presumably to allow larger enterprises such as banks to get a foot in the door.

An RFP will be called in Q2 this year, and a decision made before the end of the year. During this pilot phase of 12 to 18 months, the government will monitor the awarded projects and continue to evaluate its policies.

Crucially, Singapore will henceforth focus on quality, not quantity, and new facilities will have to bring something to the table on the resource efficiency front.

An initial 5MW cap had reportedly been mooted, based on at least two reports citing industry feedback, but appears to have been dropped. And while some reports cited a maximum of three applicants, industry insiders DCD spoke to said there is no cap.

“[We will] seek to anchor a range of [data centers] that can meet both industry and society’s needs, are best in class in terms of resource efficiency, and that continuously innovate to push the boundaries of resource efficiency of data centers in a tropical climate,” wrote Minister Chan Chun Sing in his parliamentary response.

Late to the party

The latest developments do put certain loose pieces of the puzzle to rest, such as Facebook’s US$1 billion, 150MW facility announced in September 2018.
At the launch briefing, Tom Furlong, Facebook’s VP of data center infrastructure, told DCD that the facility was scheduled to complete phase one and commence operations in 2022, four years later. The lengthy time frame was puzzling: while the building will be a mammoth 11-story facility, it is located on a piece of vacant land at the Tanjong Kling data center park served by two dedicated – and operational – power substations.

We now know that the moratorium came into effect months later in 2019, and it seems possible that Facebook brought the announcement forward to get the project started before new data center projects were halted by the moratorium.

Likewise, SPH and Keppel announced a joint venture in June 2020 – after the moratorium took effect but before it was officially acknowledged – to "maximize economic returns and improve the return on capital of an existing asset and to enter and participate in a growing sector.” Though there was no mention of its power capacity, the agreement to acquire SPH’s leasehold land at 82 Genting Lane for the facility was pegged at S$50 million (US$35.8m), which makes it unlikely to be a small data center.

While anecdotal evidence suggests that a decision on Singapore’s data center roadmap was only made recently, the moratorium and the possibility of a decision to limit hyperscale developments should logically have been known at that point.

What is clear is that the days of new hyperscale data centers in Singapore are over, and any players that have not got projects under way have missed the boat. They are now left with the option of acquiring an existing facility, vying
for limited slots with far more stringent rules and modestly-sized data centers, or building new data centers elsewhere.

For instance, Singapore-based Bridge Data Centres originally told us of its expansion plans in Singapore back in 2019. Derailed by the moratorium, it dramatically accelerated its expansion in Malaysia instead. Late last year, Bridge Data Centres announced plans to build a data center campus in Johor with a combined capacity of 100MW – almost double the capacity available under Singapore’s post-moratorium pilot.
A journey of continuous innovation

Where does the pilot phase leave potential data center contenders? While energy efficiency has always been important to data centers in general, the stipulation for innovation should prioritize novel approaches over the tried-and-tested technologies that normally get used.

Suddenly, floating data centers and the ability to leverage seawater for cooling look a lot more enticing. Keppel’s interest in leveraging cold energy from LNG regasification makes sense in this context, as do the plans Big Data Exchange (BDx) announced last year for floating data centers off the coast of Singapore. Meanwhile, larger enterprise data centers with disparate IT systems and a higher PUE may be more likely to move to liquid cooling to bring the PUE back down.
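PUE, the metric behind Singapore's 1.3 cap, is simply total facility power divided by useful IT power. A minimal sketch of what the cap implies, using illustrative figures rather than numbers from any real application:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility draw over useful IT load."""
    return total_facility_kw / it_load_kw

# Hypothetical facility at the pilot's 30MW IT ceiling: to meet the
# 1.3 cap, cooling and all other overheads must stay within 9MW
# on top of the 30MW IT load (39MW total / 30MW IT = 1.3).
print(pue(39_000, 30_000))
```

In a tropical climate, where cooling is the dominant overhead, squeezing under that 1.3 ceiling is what makes seawater and liquid cooling attractive.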
What remains unclear is the road ahead for existing data centers. As a country that prides itself on its business-friendly climate, Singapore will likely allow redevelopment of older facilities. How that will happen is not yet clear, though. A representative from IMDA declined to comment, pointing us to a press statement released for the 27 January virtual meeting.

“We will [put] in place measures to raise the efficiency of existing [data centers] over time. We are seeking feedback from the industry on the proposed criteria that [were] presented during the discussion. More details will be announced in the coming months,” said the statement that DCD obtained.

Depending on how the guidelines pan out, old facilities that are due for redevelopment might become a lot more valuable overnight – as redevelopment may be the only way to create substantial new data center capacity.
The Singapore hub of the future

Is Singapore destined for a long fall from its current pinnacle? A possible answer might be found in an anecdote. When we visited the Iron Mountain Singapore data center in 2019, general manager Michael Goh was quick to highlight its direct high-speed link to another data center in Singapore (Equinix SG1) as a major selling point.

This was hardly the first time we had heard such a pitch. Operators in Singapore
commonly tout their access to various “carrier hotels” in their sales decks. These carrier hotels are not coveted for their colocation space, but for their extensive network connectivity and the critical mass of key players that have deployed systems in their facilities.

With top public cloud players like AWS, Microsoft, Google Cloud, Alibaba Cloud, Huawei Cloud, and OVH, Singapore is ranked among the most advanced public cloud markets in the APAC region, according to the Boston Consulting Group.

Coupled with more than 1,000MW of data centers and exceptional international and regional network connectivity, Singapore might well have the critical mass of digital infrastructure and its accompanying ecosystem to serve as a regional hub for new hyperscale developments in Southeast Asia and across Asia.

In this vision, major cloud and colocation players will find it beneficial to build smaller data centers in Singapore and run their data center operations for the Asia region there. New hyperscale facilities elsewhere will all be connected through subsea cables to Singapore for its digital ecosystem. And far from fading into irrelevance, cutting-edge innovations for enhancing energy efficiency in tropical environments will be developed and promulgated across the region from Singapore.

Of course, the future is far from certain. But the journey ahead can only be an eventful and interesting one for the data center hub of Singapore.
Trouble in Soviet Florida On Russia’s border, cryptocurrency miners risk inflaming tensions in the small separatist state of Abkhazia
In the world of cryptocurrency, there is only one question that matters: ‘Is the cost of electricity less than the value of what I am mining?’ Once the initial setup has been paid for, it is this margin that makes a miner’s business possible.

In Abkhazia, cryptominers have found virtually free electricity, making mining viable even when crypto prices crash. But, for the people of the small breakaway state, there’s another cost entirely.

Tense geopolitical and cultural conflicts dating back decades are being stoked by an influx of cryptominers, who are siphoning off power from a crumbling dam that lies directly in the middle of an uneasy and unofficial border. Their presence risks upsetting an already fragile peace.
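That margin question is simple arithmetic. As a rough sketch (the tariffs are the ones cited later in this piece; the rig's power draw and daily revenue are hypothetical):

```python
def daily_margin(rig_kw, tariff_usd_per_kwh, daily_revenue_usd):
    """Daily profit for one mining rig: mined value minus electricity cost."""
    electricity_cost = rig_kw * 24 * tariff_usd_per_kwh
    return daily_revenue_usd - electricity_cost

# A hypothetical 3kW rig earning $8 of crypto a day:
abkhazia = daily_margin(3, 0.005, 8.0)  # at $0.005/kWh, profit is ~$7.64
georgia = daily_margin(3, 0.08, 8.0)    # at $0.08/kWh, profit falls to ~$2.24
```

At a near-free tariff the electricity term is negligible, which is why mining in Abkhazia stays viable even through price crashes that wipe out miners elsewhere.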
Where is Abkhazia? To most of the world, Abkhazia is not a country. "It was an autonomous region that was part of Soviet Georgia. Large parts of the population ideally wanted to stay in the Soviet Union, but when that broke up [in 1991] they felt they had no place in independent Georgia,” explained Thomas de Waal, senior fellow with Carnegie Europe and author of several books on the region, including The Caucasus: An Introduction. “They had links to Russia, and they also
identified Georgia as an oppressive place for them, because of all the history - including when Tbilisi suppressed their language and culture and forced them to switch to the Georgian script in the 1930s.”

In its heyday, when its Sukhumi region was favored by Stalin and the elite, Abkhazia was billed as ‘Soviet Florida,’ and saw an influx of immigration from Georgia. By the time of the collapse, “Georgians were about just under half the population, the Abkhaz were about 20 percent, and then the others were lots of Russians, a lot of Armenians, and some Greeks - so it was a very mixed population.”

Georgians had the demographic claim as the largest group, as well as the legal claim to the land, which had been split along Soviet Republic lines.

“It was this incredibly naive political culture in which people believed that all you had to do was just declare independence and paradise would begin,” de Waal recalls. “But there were a lot of criminalized armed groups who were ready to fight over all the resources. The Russian military and Soviet military were suddenly impoverished and had tons of weapons to show around and many of them were happy to fight on the Abkhaz side against the Georgians. It was just lots and lots of ingredients for conflict, which eventually, unfortunately,
Sebastian Moss, Editor-in-Chief
did break out in 1992.”

The Georgians won the first wave of the civil war, taking over Sukhumi. But then the Abkhaz sought help from Russians and Chechens, turning the tide in their favor. “Ninety percent of the 200,000-plus Georgians were either driven out of their homes or fled of their own accord,” de Waal explained.

“You ended up with Abkhazia suddenly losing pretty much all of its Georgian population, who lived in terrible conditions in Georgia-proper - some of them still don't own their own houses 30 years after the conflict,” he said.

It is not known how many Georgians died in the ethnic cleansing, but thousands were killed by the Abkhaz and Russian forces. Others were sexually assaulted amid widespread war crimes and human rights abuses.

“Abkhazia was sanctioned and isolated pretty much since then, until 2008 when Russia recognized it,” de Waal said. “Now it has a whole new set of problems from Russia.”
The Enguri Dam After the conflict, the border line between Abkhazia and the rest of Georgia went partly along the Enguri river, putting the world's second-highest concrete arch hydroelectric
dam in a disputed zone. Part of the Enguri Dam lies in Georgia, while the powerhouse is found in Abkhazia, requiring cooperation for the plant to work.

Despite the continuing tensions, both sides have agreed to a truce of necessity around the dam, which provides a huge portion of the electricity for both regions. As part of the 1997 ceasefire agreement, Abkhazia gets around 40 percent of the energy generated by the 1,320MW power station - virtually for free.

That arrangement worked for years, with Abkhaz locals nominally charged a tiny $0.005 per kilowatt-hour (compared to Georgia's $0.08 per kWh). In practice, most residents lack electricity meters or any effective way of tracking usage, and the small costs are primarily covered by the Abkhaz government.

“For the first 15 years, that was sufficient, it was enough to keep everything in Abkhazia going,” said Prof. Dr. Theresa Sabonis-Helf, chair of Georgetown University's Science, Technology and International Affairs program. “And then in 2016, demand started going up,” with Abkhazia consuming more than its allotted 40 percent.

Neither the Georgian government nor the de facto Abkhaz government really understood what was going on, she said. “Then they realized that it was crypto.”

Sukhumi remains a popular resort for Russians. “A lot of young crypto entrepreneurs went there," Sabonis-Helf said. “Particularly because Russia regulates cryptocurrency, while Abkhazia has no capacity to do that. They discovered that they can rent beach houses and get cheap
electricity for crypto, without regulation.”

In the story of this crypto power surge, the shadow of Enguri looms large. Opened in 1978, the dam has been slowly deteriorating after decades of neglect. The European Bank for Reconstruction and Development (EBRD) and others launched a 15-year emergency renovation project in 1996, implementing critical fixes that staved off total collapse. The effort appeared set to restore the dam to its former glory.

Then war broke out.
The Russian invasion

On 8 August 2008, after escalating tensions, Russia invaded Georgia, taking sides with South Ossetia, another separatist territory that is internationally regarded as a part of Georgia. A day later, Abkhaz forces opened a second front, attacking the Kodori Gorge, the only part of Abkhazia then under Georgian control.

The conflict was brief, but pushed the two breakaway regions closer into Russia's orbit. Following the ceasefire of 12 August, Russia kept troops in both regions, recognizing them as independent states. Georgia maintains that they are both Russian-occupied Georgian territories.

For Enguri, this presented a problem. Georgia did not want to be seen to be investing in and working with an entity it did not recognize as legal. Six months after the war it entered into a secret agreement with Russia to manage Enguri, but the deal was leaked - drawing public furore in Abkhazia. The deal was scrapped, slowing work.
In the meantime, with so much power sent to Abkhazia for free, USAID estimates that the dam only brings in about 50 percent of the revenues required to sustain operations and repairs. It still relies on donors and cooperation, both of which can be fickle.

Last year, the hydropower plant had to be shut down for more than three months for repairs. Russia stepped in to supply Abkhazia with energy, for a price.

"The grid in Abkhazia was already crashing before the shutdown due to crypto, so the de facto government put in a ban," Sabonis-Helf said. It has had a limited effect, with Russian nationals splashing cash around in Sukhumi, widespread rumors of politicians running their own crypto farms, and the lack of metering making it hard to track such sites.

Around the same time as the ban, the price of Bitcoin happened to surge - so mining actually increased. By 2020, roughly 30 percent of power used in Abkhazia was believed to go to cryptomining.
Mining at gunpoint It’s also hard for the authorities to actually do anything if mining is suspected. “USAID approved me going across the border and spending a day with the (now former) de facto Minister of Energy,” Sabonis-Helf said. “He was quite adamant that if we're going to cut people off, for any reason, we have to be able to do it remotely. Abkhaz households are all heavily armed, and they can't find any low-level electricians who are willing to go and cut people off in a place where they might get shot.”
Remote disconnects are not possible with Abkhazia’s antiquated grid. “The downside of the Enguri deal was that they've just had this drug of almost free electricity all these years, and they haven't sorted out their energy sector,” de Waal said.

The crypto ban has simply forced the practice further underground, which has mixed dangerously with the excess of guns. Last year, a man was killed during an attempted robbery at a cryptocurrency mining data center, after men defending their facility accidentally shot one of their own people. Robberies at gunpoint and late-night burglaries have become increasingly common as mining is seen as a rare opportunity for wealth in a region with unemployment estimated at between 40 and 70 percent.

Locals, most of whom do not have enough money to cover the initial setup fees, are not the major beneficiaries of the crypto boom. Some may be operating small rigs, leased from others in a profit-sharing agreement, that they run at their homes. But the larger facilities appear to primarily benefit Russian nationals, as well as their local political backers.

Enter the oligarchs

"The owners of the largest crypto farms are those who are usually called the 'political elite,' regardless if they’re in power or in opposition," Inal Khashig, the Abkhaz editor of the Caucasus-based publication Jam News, said in an opinion piece. "These 'managers,' periodically replacing each other, have long mastered the golden rule that the best business is one making profit off of the state."

This influx of investment appears linked to nodes of power within Russia. "When oligarchs get connected to crypto, it becomes yet another way to asset strip from the state," Sabonis-Helf said.

While the Russian miners are making huge profits, that money is not going back into the system. “They’re hosting big parties and doing the things that oligarchies do, like spending money on the Black Sea coast while they vacation, but they’re not paying much in taxes,” she explained.

In fact, it’s quite the opposite: “The way they pay taxes is bizarre enough that it gives the appearance that the Abkhaz government is still subsidizing them, because when Abkhazia has to buy electricity from Russia, the price they pay is greater than the price that the crypto entrepreneurs pay in tariffs or taxes,” Sabonis-Helf said.

It’s not clear what the relationship between the Russian cryptominers and the state is, and how much tension or cooperation there is. What is clear, however, is that the Russian government sees it as an opportunity.
Not only are they benefiting from the current arrangement, but they also seek to consolidate their power by gaining a deeper involvement in Abkhazia’s grid.
Between Enguri’s problems, crypto’s rising demands, and the other strains on Abkhazia’s grid, Russia sees itself as the salvation.

Russia's Gazprom is pushing to install a gas grid, and the government is offering to fix other, smaller hydropower facilities in Abkhazia that were looted in the last war. But such offers come with a price, one greater than simple financial transactions. Terms of the deal are shrouded in secrecy and misinformation, and are subject to change. But last year Aslan Bzhania, the president of Abkhazia, said a new substation would be used to power “a technopark for cryptocurrency mining" after holding a summit with Russian partners. Russia appears to have offered new energy supplies - which must be dedicated to crypto.

With power comes property

The country also wants another major concession: the ability for Russian nationals to buy property in Abkhazia. For all its closeness to Russia, and the presence of 30 Russian military bases within its borders, Abkhazia is not a simple vassal state. Unlike South Ossetia, it does not wish to become a part of Russia.

"They're very eager to preserve some kind of autonomy,” Sabonis-Helf said. “So it's very difficult to buy land, and they have blocked the Russians from buying it. This has been a Russian ask for years, and now it's an ask on behalf of the crypto entrepreneurs.”

This time, they may get what they want. Enguri’s issues and the rise of crypto have put pressure on the local regime. "With 20 years of no reform, grid crashes are relatively common, so people's expectations for outages are higher than most," Sabonis-Helf said, comparing the situation to Kazakhstan, which saw widespread political unrest when its grid crashed following its own crypto boom after China's ban. "But it's one thing to have some crashes, it's another thing to have the continuous crashes that they've seen,” she added.

Climate change raises tensions

Enguri faces another crisis, one imperiling us all: climate change. Discharges and sediment content data are not routinely tracked, and hydro-meteorological data is limited, but water levels appear to be dropping. "This presents a significant economic risk as it is expected that in the medium term Georgia will experience increased variability of hydrological patterns, whereas in a more remote future, more profound climate changes may materialize," the EBRD has warned. Anecdotally, locals complain of less snowmelt, and of signs that the water supplying the dam is being disrupted by a changing climate.

The people versus crypto?

Last year, local residents in the village of Duripsh seized a substation in anger at the outages. A few months later, a mob shut down local mining farms.

Sensing the change, President Bzhania - who was pro-cryptomining - enacted the ban, and said he was against the free-for-all approach currently crippling the grid. He also pledged to crack down on government officials that were themselves mining, but some claim he operates his own facilities, a rumor that has not been confirmed.

Giving Russia the ability to buy property and land, as well as more control over the grid, would help alleviate such tensions, but opens the door to Russia solidifying its power over the region. "Russia has gotten increasingly sophisticated in the ways in which it uses energy as a weapon," Sabonis-Helf said. "And if Russia comes bearing any kind of energy gifts or infrastructure or crypto entrepreneurs, beware."

Such weaponization could also be turned on Georgia, she added. "Georgia is very adversely affected by shutting down the Black Sea, which if the Russians succeed in doing that, will do substantial damage to the Georgian economy." Many in the Georgian government see Abkhazia as a puppet regime of the Russian state, even if Abkhazians would disagree.

The endgame

This fear has only been made worse by Russia's unprovoked invasion of Ukraine. "You've got a new 'foreign minister' in Abkhazia who was straight from Moscow who did work in Donbas, so he's on the Western sanctions list," de Waal said. "The majority of locals are in the Russian information space, and are likely supportive of Russia rather than Ukraine."

But, he added, "they still have this funny relationship with Russia, who is their big protector, patron, everything - and yet they still don’t want to be part of it."

What they want, however, may not be what happens, Sabonis-Helf fears: “I really do think that if you do any more asset stripping out of Abkhazia, you end up with systems that don't work at all, and then you have to turn to Russia to rebuild them. And now, Russia owns your grid.”
Making subsea cables smart, and maybe saving the planet How SMART repeaters could help the fight against global warming
Understanding the oceans is critical to understanding our planet. It is also key to understanding and measuring climate change, and the changing patterns of the ocean that result from rising temperatures and melting ice caps.

Of the 400 or so subsea fiber cables in operation today, cables dedicated to carrying scientific data number in the dozens. However, a number of upcoming subsea cables are looking to marry commercial cables carrying regular Internet traffic with sensors that could provide researchers with critical information about the status of our oceans.
Fiber cables: good for science, if rarely deployed

The first subsea telegraph cables date back to the mid-1800s, with the first fiber cable going live in the 1980s. And researchers have long understood the potential use of undersea communication cables for scientific purposes.

The Dumand project (Deep Underwater Muon And Neutrino Detector) was an early example of a dedicated science cable. The system proposed placing an underwater neutrino telescope in the Pacific Ocean off the coast of Hawaii, 5km beneath the surface. The project existed from about 1976 through 1995, but was never completed. The hardware was donated to other underwater neutrino projects.

“My whole career has been partly dominated by trying to measure large-scale ocean temperature using acoustic transmissions through the ocean to infer that temperature,” says Professor Bruce M. Howe, Research Professor at the Department of Ocean and Resources Engineering within the University of Hawaii’s School of Ocean and Earth Science and Technology (SOEST). “My goal is to get more acoustics on all these cable systems to ultimately come up with the basin-scale system that can make these temperature measurements on a routine basis.”
Dan Swinhoe, News Editor

There are few purpose-built science cables, but scientists also use retired commercial cables that have been moved and/or repurposed for research projects. The Hawaii end of the retired HAW-4 cable is now being used for the ALOHA Cabled Observatory and is managed by Howe. The NPS Pt Sur cable runs 50km from Point Sur to Sur Ridge, and is a retired US Navy acoustic cable operated by the Naval Postgraduate School.

“Typical telecom submarine cables are designed to transport data and don’t have environmental sensors, meaning they’re not intended to measure undersea temperature, pressure, vibrations, and so on,” says Brian Lavallée, senior director of solutions marketing at Ciena. “This limits the reusability of telecom cables as scientific cables, unless the repurpose is to transport scientific data between two locations, such as shore-to-shore communications."

Underwater telescopes make up a notable
proportion of the dedicated and purpose-built science cables in the world. The Cubic Kilometre Neutrino Telescope (KM3NeT) aims to build on the previous Antares project off the coast of France, as well as the Italian Nemo and Greek Nestor neutrino telescope projects (both deployed prototypes but were never fully built). The telescope will be distributed over three locations in the Mediterranean: Toulon, France; Sicily, Italy; and Peloponnese, Greece. KM3NeT aims to search for neutrinos from distant astrophysical sources like supernova remnants, gamma-ray bursts, supernovae, or colliding stars, and hopes to find dark matter. ASN is working with Italy’s Istituto Nazionale di Fisica Nucleare (INFN) to deploy an undersea laboratory cable incorporating a high-energy neutrino telescope. The IDMAR project, off the coast of Sicily, aims to provide scientists with data to better understand physical processes in deep marine environments. Orange has worked with French research institute CNRS on the Meust and Prima science cable projects. Other cabled observatories include the 2006 coastal Venus system and the Neptune observatory in 2009, both now within Ocean Networks Canada (ONC), and the Ocean Observatories Initiative (OOI). “Dedicated scientific cables are still relatively expensive and can only measure where they’re physically deployed, meaning a limited scope,” notes Lavallée. “They are still hindered by a variety of challenges around sensors, how they are attached to repeaters, who manages (and pays for) the installation and maintenance, etc.”
SMART repeaters offer a new use for subsea cables

Instead of spending money on dedicated science cables or repurposing old and retired commercial cables, Science Monitoring And Reliable Telecommunications (SMART) cables could combine the two, allowing commercial traffic to flow uninterrupted while also collecting important data for researchers and governments. Spaced every 100km or so like standard repeaters, SMART repeaters would also contain a variety of sensors that collect important oceanographic data without interfering with the cables’ commercial operations. SMART repeaters could measure ocean temperature, circulation, and sea-level rise, as well as provide early warning detection for earthquakes and tsunamis. “There may be only 50 or 100 sites maintained around the world with deep ocean temperature [sensors]. And those rely on ships at a cost of $50-60,000 a day,” says Howe. The Joint Task Force (JTF) for SMART cables envisions a ‘planetary-scale array’ that monitors ocean heat, circulation, and sea-level rise, and provides real-time warning systems for earthquakes and tsunamis. With more than one million kilometers of cable in operation and more than 10,000 repeaters, the potential number of data points for a global ocean monitoring system is massive. Repeaters could have the required systems embedded in the standard modules, or as a small
additional external node, and could be stored and installed on subsea cables in the same manner as traditional repeaters without interfering with a cable’s normal traffic. “It's providing a new window into the deep ocean because there are just so few measurements, and whatever measurements there are, are typically very intermittent,” explains Howe. “By adding 2,000 repeaters of deep temperature measurements, that's 2,000 measurements we don't have [currently]. “We would be providing a time and space-dependent measurement of that sea-level rise, and it really won't all be the same as waters melting from different locations flow around the globe and try to equilibrate.” While distributed acoustic sensing (DAS) offers a way to detect potential movement of the cable – a number of projects have seen DAS fiber deployed on ice-sheets to measure movement while other undersea projects are using DAS on dedicated cables for earthquake and tsunami detection – DAS is currently only deployable at either end of a cable and usable for around 100km. It also doesn’t provide the same number of potential data points as a repeater full of sensors.
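The scale of the opportunity is simple arithmetic. A back-of-envelope sketch using only the figures quoted above (cable in service, repeater spacing, ship and buoy costs; all rounded, none of them engineering data):

```python
# Rough arithmetic behind the SMART cable pitch, using the figures
# quoted in the article (all rounded; illustrative only).

CABLE_KM = 1_000_000          # "more than one million kilometers" in service
REPEATER_SPACING_KM = 100     # repeaters roughly every 100km

potential_sites = CABLE_KM // REPEATER_SPACING_KM
print(f"Potential sensor sites: ~{potential_sites:,}")  # ~10,000

# Today's alternatives, for comparison:
ship_day_cost = 55_000        # midpoint of Howe's $50-60,000/day figure
dart_annual_upkeep = 300_000  # reported per-buoy maintenance cost
noaa_dart_buoys = 30          # NOAA maintains more than 30 of them

print(f"One month of ship time: ${ship_day_cost * 30:,}")
print(f"Annual upkeep of NOAA's DART fleet: >${noaa_dart_buoys * dart_annual_upkeep:,}")
```

Even if only a fraction of new cables carried SMART repeaters, the sensor count would dwarf the 50-100 ship-serviced deep-ocean temperature sites in use today.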
SMART cables vs tsunami buoys and floats

A number of observatory cables have been deployed off the coast of Japan – including Donet 1 and 2 and the S-net cable – that are dedicated to research and early warning: the former has nodes containing seismometers and bottom pressure sensors, the latter nodes with accelerometers for seismic measurements and pressure sensors for tsunami detection and seafloor level monitoring. The 100km Ocean Bottom Cable Seismic and Tsunami (OBCST) cable off Sanriku uses acoustic sensing to detect seismic activity. While they offer proof the technology works, these cables do not, however, carry commercial traffic in the way SMART repeaters could potentially allow. Non-cable options for ocean monitoring exist, but are often pricey and suffer reliability issues. Fixed Deep-ocean Assessment and Reporting of Tsunami (DART) buoys are in place across the Pacific Ocean to act as an early warning system. DART buoys are designed to sense pressure changes at the bottom of the ocean caused by passing tsunamis and to communicate these changes to tsunami warning centers. Each DART system consists of a bottom pressure recorder anchored to the ocean floor and a separately moored companion surface buoy. However, the system was down for a number of days last year, and other units have previously been inoperable for months at a time. They are also costly, with each DART buoy reportedly costing more than $500,000 to install and $300,000 to maintain each year; the US National Oceanic and Atmospheric Administration (NOAA) maintains more than 30 of them. The Argo array, in operation since the early 2000s, uses thousands of floats to collect important oceanographic data; Argo was collecting 12,000 data profiles each month in 2020. Despite its importance to the scientific community, the floats have depth limits and are limited in their coverage compared to what a
network of fully equipped and persistent intercontinental SMART cables could provide. In 2020 the National Science Foundation approved a $53 million grant to deploy 500 robotic ocean-monitoring floats around the globe. The network of floats, called the Global Ocean Biogeochemistry Array (GO-BGC Array), will collect observations of ocean chemistry and biology between the surface and a depth of 2,000 meters. Startup Sofar Ocean has recently raised $39 million to develop its Spotter floating sensor buoys. Howe says SMART cables would be complementary to the likes of DART and Argo, providing greater coverage at deeper depths.

Advocacy for the SMART cables concept began with a 2010 paper in Nature by Yuzhu You, Harnessing telecoms cables for science. Uptake since has been slow, partly down to the cost and lack of commercial incentive.

“A light bulb went off in his mind about putting sensors in commercial cables. And he followed that enough to write an article in Nature,” says Howe. “And then that caught attention in the International Telecommunications Union (ITU) as a possible green activity.”

The concept has UN backing, having recently been endorsed by the UN Decade of Ocean Science for Sustainable Development, and technically there’s no reason repeaters can’t have more sensors built in. The JTF is working with the ITU to ratify resolutions recommending that SMART cables be implemented as standard in new cable projects.

Making the business case

“The industry has always said they can do it technically,” says Howe. He and the task force estimate SMART repeaters could add up to 10 percent to the cost of deploying a cable – retrofits would be much more expensive and harder to do – which could be one of the reasons commercial operators are yet to adopt the concept. Howe said he had hoped hyperscalers might be amenable to the concept, given their increasing investments in subsea cables and large resources, but notes that governments are a better fit for leading such projects to prove their utility, given the public nature of the research data they can generate and the public safety benefit of early warning systems. Howe hopes that government, state ocean, and early warning agencies, and development banks will be able to get a number of these projects off the ground to prove their value.

“We've found that it's easier to try and sell this for tsunami warning, just because people can understand that more easily, and so we've concentrated on countries that need that capability – and Portugal is a perfect example,” he says. “I think once the first SMART cable is under contract, then the hope and expectation is that the other companies will follow suit and offer the same, or slightly improved, capability.”

Introducing SMART to commercial cables

Some SMART concepts are starting to make their way into commercial operations. In 2020, Alcatel Submarine Networks announced it would begin including SMART capabilities in its portfolio.

“Our entire portfolio will benefit from this new climate change philosophy to propose dedicated applications such as TEWS (tsunami early warning system), monitoring of underwater seismic activity, global warming, and water temperature and level,” the company said.

Subsea Data Systems, a new partnership between Samara/Data and Ocean Specialists, Inc., is looking at developing SMART repeater technology, and has been awarded a $250,000 grant by the NSF. The company aims to develop a prototype by the end of the year.

While not using SMART repeaters, the EllaLink cable, which runs from Brazil to Portugal, is launching a GeoLab, which will use DAS technology on dedicated fiber in the Madeira branch of the system. It will collect data along the route which will be optically transmitted back to the shore, independently and without impacting either telecoms traffic or the design life of the cable. The cable itself is a 75km stretch between Funchal, Madeira, and the junction box on the main EllaLink cable; it contains the GeoLab fiber pair and the primary EllaLink telecoms fiber pair. A Febus A1-R module from Febus Optics has been installed in the Funchal cable landing station to measure acoustic changes in the fiber, but other parties can connect equipment to the GeoLab cable.

In the US last year, seismologists at Caltech worked with Google to develop a method to use existing subsea cables to detect earthquakes. Caltech created a way to analyze the light traveling through “lit” fibers to detect earthquakes and ocean waves without the need for any additional equipment. During nine months of testing between December 2019 and September 2020, researchers detected about 20 moderate-to-large earthquakes along Google’s US-Chile Curie cable, including the magnitude-7.7 earthquake that took place off Jamaica on January 28, 2020. Caltech is now developing a machine learning algorithm able to determine whether detected changes in polarization are produced by earthquakes or ocean waves, rather than by a ship or crab moving the cable.

Ciena’s Lavallée says the company’s WaveLogic 5 Extreme modems can be deployed on commercial in-service submarine cables to detect earthquakes and tsunamis. “The challenges of widescale SMART cable deployments are still daunting. However, detecting earthquakes and tsunamis using the over 400 in-service telecom submarine cables is promising,” he says.
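In essence, the Caltech technique treats the fiber's state of polarization as a seismometer reading: the signal is stable for long stretches, so a sudden, large excursion from the recent baseline is a candidate event. A minimal sketch of that kind of threshold detector, on purely synthetic data (the window size, threshold, and injected spike are all invented for illustration; real deployments use far more sophisticated processing):

```python
# Toy anomaly detector in the spirit of polarization-based sensing:
# flag samples that sit far outside a rolling baseline.
import random
import statistics

random.seed(1)
signal = [random.gauss(0.0, 1.0) for _ in range(500)]
signal[300] += 12.0  # inject a large excursion as a stand-in "event"

WINDOW = 100      # samples used to estimate the baseline
THRESHOLD = 6.0   # how many standard deviations counts as an event

events = []
for i in range(WINDOW, len(signal)):
    baseline = signal[i - WINDOW:i]
    mu = statistics.fmean(baseline)
    sigma = statistics.pstdev(baseline)
    if sigma > 0 and abs(signal[i] - mu) / sigma > THRESHOLD:
        events.append(i)

print(events)  # the injected excursion at index 300 is flagged
```

The hard part, which Caltech's machine learning work addresses, is the step this sketch skips: deciding whether a flagged excursion is an earthquake, an ocean wave, or just something bumping the cable.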
Portugal and Antarctica get SMART

The SMART cable JTF – where Howe is a chair – notes the first major SMART projects are funded and underway in Portugal and Sicily, with around seven others in various stages of planning and funding.

In 2020, the Portuguese government announced the Continent, Azores, and Madeira Islands (CAM2) system, which would include seismic and environmental sensors; the 3,700km cable is estimated to cost around €120 million ($131.7m) and go live in 2024. As the name suggests, the cable would run from mainland Portugal to the Azores and Madeira Islands. As well as providing connectivity to the islands, the government is aiming to use the cable to detect earthquakes and tsunamis in the area. The Azores sits close to where the boundaries of three tectonic plates intersect: the North American Plate, the Eurasian Plate, and the African Plate. The aim would be to further geophysical research, and generate early warnings for potential disaster events.

Portugal’s Anacom tells DCD that the current CAM Ring submarine cable system should reach the end of its useful life around 2024/25, and a replacement cable with SMART repeaters offers a cost-effective way to combine a new cable with its goal of having greater tsunami early warning capabilities, rather than developing a dedicated early warning system. A vendor for the system hasn’t been chosen yet.

“We believe that the time has come for telecom submarine cables to start giving an additional return to society beyond establishing people-to-people communications, and SMART cables are a good example of this and are fully aligned with the UN Sustainable Development Goals,” an Anacom spokesperson told DCD. “We also hope that the SMART CAM, serving as some kind of a guinea pig, will be a catalyst for the establishment of a global network of SMART cables.”

While not known as a regular earthquake or tsunami hotspot, Portugal has suffered in the past. In 1755 an earthquake, along with subsequent fires and a tsunami, almost completely destroyed Lisbon. Estimated to have killed up to 50,000 people, the incident impacted the country’s colonial ambitions for the rest of the century, and led to the start of modern seismology studies.

“We have slowly been getting more endorsement, and I think working with the Portuguese has been transformational,” says Howe. “With these systems that are really geared towards early warning, there's a clear business case; the economic benefit is clear, but it's just persuading governments to put up the money.

“The Portuguese have done simulations showing that if there's one event in the 25-year life of one of these systems, even tens of seconds of early warning for earthquake purposes would more than pay for the cable itself; the entire thing, not just the incremental cost.”

The InSEA project off the coast of Italy is also due to demonstrate SMART repeater technology in the water later this year.

Other potential SMART cable projects in the works include Vanuatu-New Caledonia (partially funded); Indonesia (a pilot system is under development); the Medusa system in the Mediterranean; French Polynesia; Namaste (India-Oman); New Zealand-Chatham Islands; and the Nzadi cable in Angola. The University of Hawai‘i (UH) at Mānoa recently received $7 million in funding from the Gordon and Betty Moore Foundation, which will hopefully help install SMART repeaters in the proposed Vanuatu-New Caledonia cable system.

The presence of SMART repeaters could help Antarctica get its first subsea cable. The National Science Foundation in the US is considering a cable to connect its McMurdo Antarctic station to New Zealand in order to provide greater connectivity to researchers and base staff. Workshops discussing the cable highlighted the potential benefits of turning the cable itself into a science instrument through the use of SMART repeaters. Patrick Smith, manager of technology development and polar research support at the National Science Foundation, previously told DCD there was a lot of enthusiasm around the concept, and the potential for science instrumentation on the cable had “generated a lot of interest.” Preliminary studies on the costs and benefits of a cable are being conducted by the NSF before it makes a final decision. Howe says an Antarctic cable running to Australasia would be especially useful as a SMART cable because of its influence on sea levels and other ocean currents.

“The coldest water in the ocean is formed in Antarctica, and sinks down and spreads out through the global ocean, so you would have direct measurements of temperature versus time going out, over 5,000 kilometers,” he says. “And then also the pressure would inform us about the Antarctic Circumpolar Current, which is the strongest, biggest current in the world.

“Even in the doom and gloom of climate change that we're seeing now, I see that countries will have to invest in all aspects of climate change. And part of that is detection and observation of the change.”

The future of subsea networks: the power & connectivity infrastructure for underwater robotics

A number of firms are developing autonomous ships to sail the oceans; some will collect data, others carry cargo, or even act as tug boats. And subsea cables might become as essential to underwater operations as satellites or cell towers are to operations above the water and on solid ground.

The European Multidisciplinary Seafloor and water column Observatory (EMSO) is deploying BathyBot, an underwater robot that will roam the observatory site in the Mediterranean sea for several years. BathyBot will use onboard cameras and sensors to study biogeochemical dynamics and biodiversity, including bioluminescence, for a better understanding of the deep sea environment. On the ocean floor, the tracked robot will be connected to the Scientific Junction Box developed by Ifremer, which provides energy and an Internet connection to the instruments at the site.

Firms such as Bluefin Robotics [acquired by General Dynamics Mission Systems in 2016] build torpedo-like unmanned and autonomous underwater vehicles (known as UUVs or AUVs) which can be used for ship salvage, mapping, environmental monitoring, and more. On the surface, Liquid Robotics offers the Wave Glider: an autonomously navigating, solar-powered floating sensor platform and communication relay. Sensor options include weather, wave measurement, camera, chemical sensors, hydrophones, and more. The Glider is able to collect and process data in real time and send it via cell, WiFi, or satellite to ship, shore, or nearby aircraft as required. James Gosling, creator of the Java programming language, was the company’s chief software architect and had previously described the Wave Glider as “a data center rack designed to sit in saltwater.” The company was acquired by Boeing in 2016; the defense firm also offers a number of larger autonomous submersible drones.

But to really enable truly autonomous robotic operations that are run remotely, the infrastructure needs to be in place to provide power and connectivity, and Howe thinks subsea cables can take that role. “My dream, which is still unrealized, is to put a charging docking station for autonomous undersea vehicles on cables to extend the spatial footprint of the fixed infrastructure,” says Howe, adding that while he wants the data to understand the ocean, he hopes cables can eventually provide an equivalent of a GPS system and provide long-range navigation services to autonomous vehicles on and below the water.

“Once you get down 100 meters or so, you're basically in a new domain where you do not have those services [power and radio communication] at all, and very little hardware down there,” says Howe. “As I see it, undersea cable systems have to play a role in ocean observing and really, in any future development of the deep ocean. Everything depends on power, and just given the realities of the difficulties of deep-sea observations, it's really the only way to get a reliable power source down there.”
Spinning up a universal quantum computer

Quantum Motion is a small startup with big ambitions. It hopes to change the world, but is it all just spin?
Sebastian Moss Editor-in-Chief
About 30 minutes into our conversation about the merits of quantum computers and his company’s specific approach to developing them, Professor John Morton leant forward. “I want to build the most powerful computer on the planet,” he said.
He radiated the sure-fire certainty only found in preachers, lunatics, and scientists convinced that they are at the cusp of a revolution. A spin-off from Oxford University and University College London (UCL), Morton's British-based Quantum Motion believes that it has picked the right
approach to building a truly universal quantum computer, capable of outperforming classical systems across a wide variety of workloads. It's a big claim. Universal quantum computers remain theoretical - forever just five years away, forever right around the corner - and Quantum Motion is far from the only one chasing that goal. Among the many well-funded competitors is Google, which controversially claimed 'quantum supremacy' after its system significantly outperformed the world's most powerful supercomputer at one single, highly technical calculation. Then there's IBM, which claimed Google had overhyped its achievements, while trying to roll out its own benchmark for quantum systems that put itself in the lead. Microsoft and Amazon Web Services are also working on their own quantum systems, although they have been less public about their progress. Competing with the hyperscalers is early pioneer D-Wave, which plans to IPO via a $1.6bn SPAC merger. There's also IonQ, which did a $2bn SPAC merger, and Rigetti Computing, which had a $1.5bn SPAC merger. There are yet more: Honeywell and Cambridge Quantum this year merged efforts, launching Quantinuum with a $300m investment. Pasqal and Qu&Co merged, targeting a 1,000-quantum-bit (qubit) system in the coming years. Over in Canada, Xanadu Quantum Technologies has raised $145m for a photonic quantum computer. Nation-states are also heavily investing, most notably China - which has pledged to spend tens
Photography: Sebastian Moss
of billions on quantum computers, with the research filtering down into Alibaba, Tencent, and others. Against all of this stands Quantum Motion, a small startup with less than £20 million ($27m) to its name, and a little over 30 employees. Evening the odds, Morton argues, is the hundreds of billions of dollars invested into semiconductors over the past five decades. While most of the competition is trying to build entirely new superconducting quantum systems, Quantum Motion is taking a different path. It hopes to develop quantum computers using traditional CMOS chips, an approach favored by Intel but few others. "There's different ways to build a quantum computer," Morton, Quantum Motion's CTO and co-founder, explained on a tour of the company's new London lab. "Anything that obeys quantum mechanics you can use, whether it's superconducting circuits or trapped ions," he said. Those two approaches were long used to carry out quantum physics experiments in the lab, so "they were
natural to use as qubits," he said. "But if you're really serious about developing this as a technology that can build a universal quantum computer, then you need to think about scale and error correction, and being able to get not just to tens, or even hundreds of qubits, but hundreds of thousands or millions of qubits. And that's quite a daunting obstacle." Silicon transistors have already overcome the challenge of scaling to extreme levels with the latest chips featuring tens of billions of transistors. "Research from us and others over the past 10 years has shown that you can use similar kinds of structures with silicon and metal gates to trap individual electrons, and to use their magnetic state - their spin - as a qubit," Morton explained.
"And actually, it's not a bad qubit, it has a lifetime in the seconds range." The company logged its largest success to date last year when it performed the first measurement of a single electron spin in a transistor device made at 300-millimeter wafer scale. “We were able to measure its spin state, whether the single electron was pointing up or down,” Morton said. “It stays in that state for up to nine seconds. Some might think that’s not long, but if you're talking about a single quantum particle made using this industrial process, it is extremely exciting.” Quantum Motion was built out of the research Morton, a professor of
nanoelectronics at UCL, and co-founder Simon Benjamin, a professor of quantum technologies at Oxford University, were already doing into spin qubits. "We realized five years ago that if we're going to build this, then this isn't something that can happen in a university - we need to bring together integrated circuit engineers, software engineers, and quantum computing architects,” Morton said. “And so what we're doing is combining that with commercial silicon processors, and trying to see if we can build high-quality qubits using industrial CMOS, and then use that to integrate qubits with all of the control and measurement electronics that you need to build a fully integrated quantum processor." The idea is enticing, with promise for
significant scale. "One transistor is one qubit. And that's why, at a simplistic level, you could say, ‘well, we have billions of transistors, therefore, we can get billions of qubits,’" Morton said. Of course, such a system is far away, even with Morton's ambitious projections. First, the company plans to get a few working qubits. Quantum bits are extremely fragile. Different approaches lead to qubits of varying qualities, but all are ephemeral and easy to disrupt. In the beginning, the plan is to use hundreds or a thousand 'noisy' qubits to act as "one perfect qubit," Morton said. As time goes on, the number of qubits required to act
as one noiseless one should go down, just as the number of qubits goes up. But before all that, the company has to pull off one good qubit, John M. Martinis, professor of physics at the University of California, Santa Barbara, told DCD. Martinis, who cautioned he had not been able to study Quantum Motion's approach in detail, explained that "various people have been pitching silicon qubits for a while already." But, he said, "the key question is when they will be able to make a qubit system that shows one can integrate all the necessary components for a quantum computer. "Good demonstrations occur at the 10-20 qubit level, especially if they make
a square (2D) array like what was done for the superconducting quantum supremacy experiment. Before that demonstration, these ideas are still a bit at the proposal stage." Martinis led the development of Google's first quantum computers, including the one that achieved quantum supremacy. He left in April 2020 after disagreeing on the strategy, and joined Australian startup Silicon Quantum Computing, a rival silicon-based quantum company. He also founded Quantala, a business that works with quantum computing firms to overcome specific technical challenges. "The real challenge is to make it work, which depends on many system-integration details," Martinis explained. While Quantum Motion points to small transistors as an advantage for rapid scaling
and density, Martinis views it as a potential weakness. "I think the bottleneck for integration is that the qubits are so small," he said. "You then have to build 'escape wiring' from the electrodes out to pads to connect to the control. If you look at every experiment published in silicon, even for a few qubits there is a lot of wiring to deal with. I think an important milestone is to see someone build a large enough system in silicon to demonstrate a good solution. "The fundamental issue here is that control electronics and its wiring is more like a millimeter size, at best,” he continued.
"So this size scale matches to superconducting qubits because they are of millimeter size, but not to many other qubits, because they are so small. I am not saying this size mismatch makes things impossible, just that it is an important 'systems-integration' issue to demonstrate when scaling up the number of qubits," Martinis said. As with most quantum computing companies, there remains a gulf between what is theoretically doable and what is currently possible. Some of the businesses will fail because they picked a dead end. Others will be onto a workable solution, but run out of money before they can pull it off. COO James Palles-Dimmock declined to share the company’s burn rate, but said Quantum Motion hadn’t struggled to raise money from investors despite the esoteric subject.
“The question is: ‘Can standard CMOS-SoI technology be used to make a qubit?’ If the answer is yes, why on Earth would you do it any other way?” he said. “That will be a $100 billion+ company. If the answer's no, then you've lost a few tens of millions funding something that is eventually going to be incredibly useful anyway, because it's silicon.” He added: “But all of what we've been doing over the past five years is just knocking down those milestones to say, yeah, it looks more and more convincingly like the answer is: ‘Yes, this is the technology to build a quantum processor.’”
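Morton's arithmetic of hundreds or a thousand noisy qubits acting as "one perfect qubit" is the logic of error correction. A classical repetition code with majority voting is the simplest illustration of why redundancy helps; real quantum codes, such as the surface code, are far more involved, and the 1 percent error rate below is an invented figure:

```python
# Why many noisy qubits can act as one good one: a classical
# repetition-code analogue. Encode one bit as n copies and take a
# majority vote; the vote is wrong only if more than half the copies
# flip. Quantum error correction is more subtle, but the scaling
# intuition - more redundancy, far fewer logical errors - carries over.
from math import comb

def logical_error(p: float, n: int) -> float:
    """Probability a majority vote over n copies is wrong,
    given independent per-copy error probability p (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

p = 0.01  # assumed 1% physical error rate, purely illustrative
for n in (1, 9, 101):
    print(f"{n:>4} copies -> logical error ~{logical_error(p, n):.2e}")
```

With these invented numbers, nine copies already push the failure rate below one in ten million, and a hundred make it astronomically small; the catch, as Martinis notes, is wiring up and controlling all those physical qubits.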
Playing both sides: How logistics real estate firms are moving into data centers

Despite a boom in their home turf, logistics real estate players are moving into building data centers
For a long while, private equity investors have invested in both digital infrastructure and industrial real estate. Now, logistics-focused real estate firms are beginning to eye up the data center industry as an opportunity for growth. Despite an e-commerce-driven boom in logistics, these firms are playing both sides and looking to tap into the burgeoning data center sector at the same time in an effort to grow further, even if both sectors might have competing demands for the same sites. Big players in logistics and industrial real estate, like GLP, Prologis, Segro, and ESR, are looking to data centers, many of them partnering with established data center players to use available facilities and land for both brown- and greenfield developments.
Boom times for both

Both data centers and logistics are benefitting from the same underlying trends, but competition may be tight. After pent-up demand due to Covid, the logistics, industrial, and warehouse real estate space is enjoying a boom. In a February 2022 report, Prologis said vacancies are at record lows across the globe and high
demand is leading to ‘bidding wars’ and global rent increases of around 15 percent. Another report from October 2021 said that “space is effectively sold out.” Despite huge amounts of speculative building, pre-lease rates are around 70 percent, the company said. As with data centers, warehouse construction costs have been driven up by increasing land, material, and labor costs, but that hasn’t stopped many of the large real estate firms in the sector posting healthy profits. CBRE says that each $1 billion in e-commerce sales needs an additional 1.25 million sq ft (116,000 sq m) of distribution space to support it. Prologis estimates a five percent increase in inventory levels would require an additional 300 million sq ft (28 million sq m) of industrial space, while JLL predicts demand for industrial real estate could rise by an additional 1 billion sq ft (93 million sq m) by 2025. At the same time, work and personal lives are being increasingly digitized, producing an ever-increasing amount of data and driving data center demand to record highs. The biggest data center markets in the US absorbed 493MW of capacity in 2021, setting
Dan Swinhoe News Editor
a new record 31 percent higher than the previous record year, according to real estate specialist CBRE. Gartner predicts that the data center market will grow year-over-year through to 2024. The fact that e-commerce giants AWS, Alibaba, and JD.com are major players in both sectors shows their potentially synergistic nature. As both industries continue to heat up, this is leading to increased competition for land and power. DCD has previously reported that in major markets where the data centers are historically located close to distribution facilities, rents and land acquisition costs are being driven up by high demand and low availability. Earlier this year, a former Speedcast data center in New York was bought for $21 million by a Blackstone-backed logistics firm that plans to demolish the facility to build a new distribution center. In Staffordshire in the UK, a former Royal Bank of Scotland data center is due to be demolished and replaced with a new warehouse by logistics firm PLP. A former Fidelity Investments data center in Irving, Texas, was last year turned into an industrial space and sold.
Issue 44 • April 2022 | 65
DCD Magazine #44

Logistics people want data centers

Despite the e-commerce boom, logistics-focused real estate firms are increasingly eyeing the data center industry as a growth opportunity.

APAC logistics firm Logos last year announced a partnership with Pure Data Centers to develop data centers across Asia Pacific. Logos owns, develops, and leases logistics properties across Australia, China, Singapore, Indonesia, Malaysia, Vietnam, India, and New Zealand. The two companies have announced plans to build a new 20,000 square meter (215,000 sq ft), 20MW data center in Jakarta, Indonesia.

Pure had also been planning a similar move with Panattoni in the UK. The two companies were aiming to develop a three-story, 41,763 sq m (449,500 sq ft) data center due to offer up to 50MW of capacity on a site where the logistics firm had already started developing warehouse space. The plans were later dropped, though DCD understands this was due to extended timelines around connecting the proposed facility to the power grid.

Hong Kong-based real estate company ESR Cayman, traditionally focused on logistics, has made a number of moves into the data center space since 2021. In April 2021, the firm announced it was buying a data center in Osaka, Japan, which it plans to expand into a 78MW campus. It has since bought a building in Hong Kong to convert into a 40MW data center, saying the expansion was “a natural move” for the company. It went on to acquire ARA Asset Management, including Logos.

Today, the firm’s data center portfolio consists of six owned development assets totaling 260MW across Hong Kong, Osaka, Sydney, Mumbai, Jakarta, and Singapore. In its 2021 annual results, ESR said data centers were a “key strategic focus” for the company going forward.

“With the closing of the ARA acquisition, which brings together Logos to form a multi-pronged platform, the enlarged ESR Group has a combined data center pipeline of over 1,200MW of capacity across the region,” the company said. “E-commerce acceleration and digital transformation will continue to drive demand for logistics infrastructure and data centers.”

Singapore-based logistics real estate firm GLP – a major player with $89 billion in assets and more than 60 million square meters of real estate under management – has been investing in data centers since 2018, and claims that its assets, including those under construction, will deliver about 1,400MW of capacity across China and Japan upon completion. In 2019, GLP acquired a 60 percent stake in local data center company Cloud-Tripod.

In Spain, logistics-focused real estate firm Renta Corporacion is building a data center in Barcelona, and Merlin is building a number of facilities across the Iberian region.

Last year, US real estate investment firm PRP said it was diversifying away from office properties and had set a goal of spending $2 billion acquiring logistics and data center properties across the country.

Goodman Group – again largely focused on warehouse and logistics real estate – is listed by French data center builder Cap DC as a ‘key partner,' especially in markets such as Frankfurt, Paris, Amsterdam, Milan, and Madrid. The company has also partnered with STT GDC for two facilities in Tokyo, Japan.

Segro is traditionally focused on warehousing and light industrial properties, but also manages the UK’s main data center cluster at its Slough Trading Estate. It recently bought an additional 1 million sq ft of nearby office space it plans to convert and merge into the trading estate as more data center space. Equinix, Virtus, IO, and KKR’s Global Technical Realty all use Segro facilities.

Prologis has partnered with Skybox Datacenters in the US (see box), and in the UK has previously supplied Virtus with land to develop a data center at Stockley Park outside London that was previously earmarked for a warehouse.

What’s the appeal of partnering?

Amid a competitive market where land and power can be difficult to source, data center operators are always looking for ways to deliver projects in key markets quickly.
“It is quite a difficult market to get into,” says Tom Glover, head of data center transactions, EMEA, at JLL. “The significant amount of capex you require to create a data center is still a barrier for a lot of companies despite the amount of capital out there.

“Supply and demand, and speed to market are drivers here. It's difficult to find the right pieces of land, and the warehouse logistics marketplace has been for a long time buying land to develop it for industrial logistics uses. Their proposition is ‘we'll get it there quicker than you can probably do it on your own.’”

As a result, he says, large logistics companies offering data center operators potential facilities could create an appealing proposition.

“If you've got a shed that you can bring power into, get the relevant permitting permissions to put a data center there, and the market agrees it's in a good location, it's going to be very appealing for an operator to explore taking a long-term lease on the site.

“It will never be a short-term lease, because of the amount of investment that the operator will be putting into the ground.”
Data Warehouses

Glover says he would be surprised if companies like Prologis, Segro, and others don't look at exploring the “cookie-cutter effect” of providing power shells in other markets in the future. “I can see that still being a business model they will look to benefit from,” he says.

In major markets where there is little available land, relying on a partner that has a sizable portfolio of land and potentially convertible facilities could be appealing. However, in Edge or secondary markets, where there is less fiber or less pressure on available land, such partnerships may not be as necessary.

“In markets with high demand and limited supply, ergo very strong business case, this can be interesting to operators,” says Glover. “In markets with medium demand and a high supply of potential space, I think there's going to be a harder sell unless they have good relationships that they formulated in those major marketplaces.”

Most of the above-mentioned partnerships are focused on using the logistics firms’ landbanks rather than developing facilities in existing buildings, but operators might appreciate the ESG benefits of retrofits. Serverfarm calculates that reusing existing buildings can deliver embodied carbon savings of 88 percent compared with the material carbon cost of
new projects. “This offers another string to that bow of development. It would be unwise not to at least consider the idea of potentially working in a partnership with a logistics provider of sheds,” says Glover. “Anybody that is a landowner of suitable property is going to find that those symbiotic relationships will evolve.” “Most operators prefer to own their own freehold,” he adds. “And for some operators, [such partnerships] won't be a path that they will ideally go down. But the quid pro quo is they accept that they may not be able to deliver product to a marketplace as quickly as they would like.”
Prologis moves into data centers with the help of Skybox

Prologis, the largest industrial real estate company in the world, owns more than 4,000 buildings spanning almost 1 billion sq ft across 19 countries in North America, Latin America, Europe, and Asia. Though it is the dominant force in warehouses, the company is also starting to make moves into data centers.

In the US, Prologis has partnered with Skybox Datacenters. In early 2021, the two companies filed to convert an empty warehouse owned by the real estate firm into a data center in Elk Grove, Illinois. In December, Prologis filed for permission to develop a new 500,000 sq ft (46,500 sq m) data center on a 19.5-acre industrial site that it currently owns in Sterling, Virginia. Additional filings suggest Prologis was again partnering with Skybox Datacenters for the project, though the companies haven't confirmed this yet. In March 2022, the two companies announced plans for a greenfield 30MW, 141,240 sq ft (13,100 sq m) data center on Prologis land in Austin, Texas. On its website, Skybox lists the partnership as ‘Skybox, Powered by Prologis.’

R. Haynes Strader, Jr, chief development officer at Skybox, says Prologis wanted a data center partner that could “come in and help them activate real estate that they owned as well as identify new opportunities.
“For over a year they interviewed a number of data center operators, from small private groups like Skybox all the way up to some of the largest REITs in the world,” he says. “For North America, they selected Skybox. We did an exhaustive study of a number of their sites, and the original process resulted in the identification of an opportunity in Chicago and so that was our first project with them.” Though the arrangement of each project is different, broadly Prologis is the majority partner in a joint venture which owns the land or property, while Skybox works on the development, marketing, and operation of the resulting data center. “The concept is using the strength of Prologis' real estate portfolio, general expertise, as well as their balance sheet, to drive fast data center developments in very tight markets where they control real estate,” says Strader. “Skybox brings the expertise around data center development, power delivery, marketing for data centers. And then we also operate data centers, so we can offer a full package solution to an end-user. “What Prologis can uniquely offer that few others can is many dots on the map in some of the tightest markets in the world.” Haynes says that both companies are aiming to be
flexible, looking at greenfield and brownfield developments, both speculative and build-to-suit. He notes that while not under an exclusive partnership, the company is focusing solely on developing through Prologis at the moment, and believes it could yield “two to five” development opportunities in the US per year.

“We're mostly focused on larger wholesale-type users. Prologis already has relationships with a lot of the large end-users from an industrial standpoint, and Skybox typically has relationships from a data center component. It's a partnership combining strengths to create a unique opportunity.”

In terms of stability and security, Haynes claims that having assets backed by Prologis, a company with a market cap of $121.9 billion, offers more security. That is much bigger than the largest data center players: Equinix (currently $64bn) and Digital Realty ($41bn).

“Prologis are larger than any data center [firm] in the business, so from a financial perspective for a customer, there's more security doing a deal with Skybox & Prologis than you would find with any other operator in the world.”

When asked if there could be competing priorities as to where Skybox may want to build versus where Prologis may be willing to put up land or property, Haynes
says the decision remains in the hands of the real estate firm, but adds that the company has been “very open” to considering opportunities.

“They have a rigorous analysis process that is applied. For us, the key is understanding where there are opportunities that are relatively low impact to the industrial relationships that they have, yet high impact for the data center. And with the amount of real estate that Prologis has, there are ample opportunities to do that.”

Another benefit, according to Haynes, is Prologis’ clout in the supply chain, and the benefit that can bring to getting projects done quickly.

“They have incredible expertise in the development of industrial buildings and the procurement of key components: when we're ordering steel and concrete and roofing materials etc., Prologis' supply chain access and leverage is tremendously higher than most other data center users'.

“They're able to get timelines expedited, they can order things speculatively. A lot of companies couldn't do that, but they can always reposition it elsewhere in the portfolio.

“Prologis is the 800-pound gorilla in the room. Skybox is certainly fortunate to have the partnership that we do with them here in the United States.”
Understanding how terrestrial radiation impacts FPGAs at scale
Sebastian Moss Editor-in-Chief
Radiation shouldn't pose a threat to the data center if you know how to mitigate it
Operating at scale can have interesting side effects. Events that are highly unlikely to occur at the local scale suddenly become near certainties when the low-probability event is multiplied by the number of hardware components in use.
Take background radiation. The small amount that bathes us all is mostly harmless, both to humans and to modern electronics. On a chip like a field-programmable gate array (FPGA), the chances of minuscule levels of irradiation causing damage are equally tiny. At the scale of something like a hyperscale data center, that calculation begins to change.

What if, for example, there was a data center in Denver, Colorado, with 100,000 operational FPGAs? Would the large sample size mean that the low probability of radiation-induced errors suddenly became a lot more likely? Researchers at Brigham Young University hoped to find out, testing the susceptibility of FPGAs to normal background radiation.
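The intuition can be made concrete: if each device independently has a small chance p of an upset in some time window, the chance that at least one of N devices is hit is 1 - (1 - p)^N, which approaches certainty as N grows. A quick sketch with illustrative numbers (not figures from the study):

```python
# How a per-device long shot becomes a fleet-level near-certainty.
# The probabilities below are illustrative, not taken from the BYU paper.

def p_any(p_device: float, n_devices: int) -> float:
    """P(at least one event) = 1 - (1 - p)^N for independent devices."""
    return 1.0 - (1.0 - p_device) ** n_devices

# Say a single FPGA has a 1-in-100,000 chance of an upset in some window.
print(f"{p_any(1e-5, 1):.6f}")        # one device: 0.000010
print(f"{p_any(1e-5, 100_000):.3f}")  # a 100,000-FPGA fleet: 0.632
```

At N = 1/p the probability lands near 1 - 1/e ≈ 63 percent, which is why a one-in-a-hundred-thousand event is routine for a hundred-thousand-device fleet.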
Vulnerable chips

All hardware at scale can be susceptible to radiation issues, but the researchers focused on FPGAs for two reasons. First, the CPUs and GPUs designed by the likes of AMD
and Nvidia are made with error-correcting code (ECC) and data backups in mind, while FPGAs are reprogrammed for a desired application, so they may not include as much protection.

Second, as the study's lead researcher, Andrew Keller, explained: "While there's a lot of state in GPUs and CPUs as well, none of that state necessarily is used to configure how things are connected.

"There's a large portion of configuration memory in the FPGA that is dedicated to configuring routes, electrical connections between components, and those connections can be configured in many, many different ways."

Radiation could corrupt a configuration bit "and electrically disconnect components" in an FPGA, Keller said. "Whereas on an application-specific integrated circuit (ASIC), those are hard-wired connections; they're not configurable, they're not going to change with radiation."

When ionizing radiation passes through an FPGA, it can deposit enough energy to disrupt the proper flow of electricity through the device. This 'funneling phenomenon' can have different effects, including altering the value stored in a memory cell. In an FPGA, that data could include circuit configuration and state, which means that a change in the value could
disconnect components, or short them together. It could also change the intended behavior of a component. Across 100,000 FPGAs, this slightly higher susceptibility becomes noticeable. In the paper, The Impact of Terrestrial Radiation on FPGAs in Data Centers, Keller found that such a deployment would experience a configuration memory issue every half-hour on average and silent data corruption (SDC) every 0.5-11 days. It is that latter issue that is more concerning. "One of the challenges that an FPGA might face is a radiation effect causes your design to wedge or stall without the system knowing that it is wedged or stalled,” Keller said. “With SDC, the FPGA was still processing bits. But the data it gave you was wrong. And it didn't know it was wrong.” The study’s advisor, Prof. Dr. Mike Wirthlin, called silent data corruption "the biggest risk - you get a wrong computation," he told DCD. "If you're doing something small, this is such a low risk, but for some people this is a really big deal. For example, if you do financial calculations." Another potential impact of the radiation is that an FPGA could be rendered completely unusable - although this is probably preferable for a data center operator, because at least they know something is wrong.
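Those fleet-wide figures imply very long per-device intervals: because independent event rates add linearly, 100,000 FPGAs seeing one configuration upset every half-hour in aggregate implies a single device sees one only every 50,000 hours or so. A back-of-envelope sketch (the per-device figure is derived here, not quoted from the study):

```python
# Back-of-envelope conversion of the paper's fleet-wide upset rate into a
# per-device interval. Fleet rate = N * per-device rate for independent
# devices, so mean time between upsets per device is N times longer.

FLEET = 100_000          # FPGAs in the hypothetical Denver deployment
FLEET_UPSET_HOURS = 0.5  # one config-memory upset per half-hour, fleet-wide

per_device_hours = FLEET * FLEET_UPSET_HOURS
per_device_years = per_device_hours / (24 * 365)

print(f"{per_device_years:.1f} years between upsets per device")  # ~5.7 years
```

That contrast is the whole story of scale: an interval of years per device still means an error somewhere in the fleet twice an hour.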
Radiation Bath

Luckily, the industry already has a way to significantly minimize the risk of these events. Configuration scrubbing, an upset mitigation technique that detects and corrects upsets in an FPGA's configuration memory, can reduce the occurrence of SDC by three to 22 times, Keller and Wirthlin found.

"The vendors provide scrubbing, but most of the products that use FPGAs probably do not use it," Wirthlin said, adding that larger hyperscalers appear to be better at enabling such protections, but there is no published data on this.

Vendors also appear to be getting better at building FPGAs that, on a per-bit level, can handle higher and higher amounts of radiation. But there is yet again a matter of
scale to consider - while they are becoming more reliable per bit, FPGAs are featuring more and more bits per chip.
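Mechanically, configuration scrubbing is a read-back, compare, and repair loop over the FPGA's configuration memory. A conceptual sketch in Python — the frame addresses, contents, and interfaces here are hypothetical stand-ins, not a vendor API:

```python
# Conceptual sketch of configuration scrubbing: periodically read back each
# configuration frame, compare it against a known-good ("golden") image, and
# rewrite any frame an upset has corrupted. All values are hypothetical.

GOLDEN = {0: 0b1011, 1: 0b0110, 2: 0b1110}  # known-good configuration frames
config_memory = dict(GOLDEN)                 # the device's live configuration

def scrub() -> int:
    """One scrubbing pass: repair any frame that no longer matches the
    golden image. Returns how many frames were repaired."""
    repaired = 0
    for addr, golden in GOLDEN.items():
        if config_memory[addr] != golden:
            config_memory[addr] = golden     # rewrite the upset frame
            repaired += 1
    return repaired

config_memory[1] ^= 0b0100   # simulate a radiation-induced single-bit upset
print(scrub())               # prints 1: one frame repaired
print(scrub())               # prints 0: nothing left to fix
```

Real scrubbers in FPGA silicon work frame by frame with per-frame ECC rather than a full golden copy, but the detect-and-rewrite loop is the same idea: an upset only causes lasting harm if it survives until the logic it configures is exercised.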
Larger chips, larger risks?

It's not clear yet what this means for overall reliability, Keller said. "We are doing some studies trying to map how many bits there are versus how much radiation a single bit takes to upset, and it's not conclusive. My gut feeling says we're getting better."

There is yet another question of scale to consider: as chip components get smaller and smaller, "as a single particle passes through the device, it now affects multiple cells," Keller said. "It's a multi-cell upset, not a single-bit upset." But, curiously, "we are not seeing that,"
Keller said. "There's something that's being done that we don't know about, that's preventing that climax." He added: "Vendors are doing a great job, they're aware of the problem, they're pushing to make their systems as reliable as possible." With companies like Xilinx and Intel working to reduce the risk, and mitigation efforts like scrubbing available, Prof. Dr. Wirthlin noted that the most important thing was for the user to be aware of the risk. "Some people get overly alarmed, but we're not trying to scare people," he said. "It's an issue that you just need to be aware of, and there's plenty of techniques and hardware to address it. Just follow the proper recommendations and the risk can be adequately mitigated.”
PUT YOUR MONEY WHERE YOUR MOUTH IS:
How sustainable financing is helping data centers go green

Data center companies are increasingly looking to green bonds and sustainability-linked loans as a way to meet their ESG targets
Dan Swinhoe News Editor
Data Centers Go Green
Data centers and other digital infrastructure such as cell towers and subsea cables require huge amounts of capital investment. They often rely on credit, loans, and bonds to help cover much of the upfront costs of these expensive projects.

At the same time, many digital infrastructure firms are looking to become more sustainable and make climate commitments, joining various pledges and initiatives. That’s a good start, but some companies are going a step further and tying their finance directly to sustainable projects and ESG targets.

Green Bonds and Sustainability-Linked Loans (SLLs) force companies to spend their money on environmental projects and impose financial penalties for failing to meet their stated goals. They sometimes come with tax incentives.
Sustainable financing

In the data center and cloud space, companies including Equinix, Digital Realty, Nabiax, Atos, Baidu, AirTrunk, and Penta Infra have all jumped into sustainable financing, raising new funds or shifting existing debt to packages focused on green projects or with interest rates tied to sustainability and ESG goals. Telcos KPN, NTT, and Verizon, and industry vendors including Johnson Controls, have also adopted this approach.

In December 2021, Flexential completed a new $2.1 billion securitization offering. The company said it was the largest single asset-backed securities (ABS) issuance to date in the data center industry. The notes were issued under its Green Finance Framework, which requires any new-build data centers funded through the offering to demonstrate a power usage effectiveness (PUE) of 1.4 or below, as well as zero water usage in cooling.

Chris Downie, CEO of Flexential, tells DCD the funding was the first time the company was able to formally tie its sustainable ethos to capital investment.

“We're using the loan to support the development of our data center platform,” he
says. “With a ‘legacy’ debt format, you don't really make any commitments relative to sustainability, and in this vehicle we had the opportunity to effectively link our facilities to sustainability outcomes that we're looking to deliver. “We're making a bit of a social contract, if you will, to do our part; to demonstrate we'll be responsible in the consumption of the raw materials that we use to run our business. We've made a multi-year, if not multi-decade, commitment here and it's going to be something that you're going to be able to come back to us in a year plus and say, ‘How did you do? Where are you on this?’” Sustainable financing can broadly be categorized into two buckets. Green Bonds are use-of-proceeds financing, where money is raised for a particular goal or goals; funds raised will be spent on sustainability projects such as installing solar panels at facilities or making power purchase agreements to procure renewable energy. The other bucket, sustainability-linked loans (SLLs), are more flexible business loans with interest rates tied to particular ESG goals. The business can spend the money how it likes, but pays an interest rate tied to a particular metric such as PUE targets, carbon emissions, or water use. The more targets a company achieves, the lower its interest rates will be. Sometimes the two can be linked, with Green Bonds taken out to fund projects that will help a company reach wider ESG targets defined under SLLs. “The upside (to SLLs) is that there's no limitation on the size, you're not tying it to specific proceeds,” says Dan Shurey, director of sustainable finance at ING, which has worked on a number of these raises. “But there's the risk that they might not meet those KPIs, and therefore, cost of financing will become more expensive for them.” In 2020 Aligned took out an SLL, raising $1 billion for sustainability initiatives and general expansion, with the interest rate tied to sustainability goals. 
The company added a further $250 million in July 2021 after hitting its previous targets, before adding another $375 million later in the year. The company has also issued $1.35 billion in securitized notes under its sustainability framework.
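The pricing lever behind an SLL can be pictured as a margin ratchet: each year, the loan's interest margin steps down for every KPI the borrower met and back up for every one it missed. A toy model with hypothetical KPI thresholds and basis-point steps (real SLL terms are negotiated per deal and set in the loan documentation):

```python
# Toy sustainability-linked loan margin ratchet. All KPIs, thresholds, and
# basis-point figures are illustrative, not terms from any deal in the story.

BASE_MARGIN_BPS = 250  # hypothetical base interest margin over the benchmark

# Hypothetical KPI targets of the kind described above: PUE, renewables, water.
TARGETS = {"pue_max": 1.4, "renewable_pct_min": 75.0, "water_l_per_kwh_max": 0.5}

def annual_margin(results: dict, step_bps: int = 5) -> int:
    """Margin for the coming year: each KPI met shaves step_bps off the
    base margin; each KPI missed adds step_bps back on."""
    met = 0
    met += results["pue"] <= TARGETS["pue_max"]
    met += results["renewable_pct"] >= TARGETS["renewable_pct_min"]
    met += results["water_l_per_kwh"] <= TARGETS["water_l_per_kwh_max"]
    missed = len(TARGETS) - met
    return BASE_MARGIN_BPS - step_bps * met + step_bps * missed

# A year where two of three targets are hit: the margin nets out 5 bps lower.
print(annual_margin({"pue": 1.35, "renewable_pct": 80.0, "water_l_per_kwh": 0.6}))
```

The step sizes look small, but on a billion-dollar facility a handful of basis points is real money — which is why, as the borrowers quoted here put it, targets with "actual teeth" change behavior in a way a pledge does not.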
“We started on the sustainability-linked loan piece, as we thought it was more impactful for our first foray into it,” says Aligned’s CFO Anubhav Raj. “Sustainability has been at the core of our company's DNA from its origins. However, we challenged ourselves and asked, 'How do we more directly tie sustainability into our financing? As opposed to just saying that we are a sustainable company, how do we go to specific KPIs?'”

Environmental goals are now directly linked to Aligned’s finance costs.
Investors and customers want it

Green bonds are not new. Dating back to 2007, they were traditionally used by utility and energy companies to fund large-scale projects. As sustainability issues come to the fore, companies of all stripes have looked to them to raise money that goes directly to green projects. SLLs are newer, and offer a way for companies to tie environmental goals to their bottom line.

Aligned’s Raj says the primary drivers for choosing sustainable financing are investor interest, customer demand, and employee retention. “This is valuable from an employee retention and recruiting standpoint. Making investments in this space and trying to be at the forefront within our industry will help us retain and attract top talent in this space. Investors want these type of financing products that incentivize the right behavior.

“When there are significant dollars behind it, it does drive the business; you can't just put it off until next year when there are actual teeth and benefits of getting it done this year.”

Flexential’s Downie adds: “Raising capital is something that we've been doing year on year to support the growth of our business, and ultimately this enables us to demonstrate to our customers that we're putting our money where our mouth is.”

Parties funding sustainable bonds or loans span the spectrum, including traditional infrastructure-focused funds, as well as more ESG-focused investors and the ESG desks of larger investment companies. The move doesn’t just let data center companies show green credentials. It also opens up a potentially new selection of ESG-focused investors that previously may not have considered funding digital infrastructure.

Mark Richards, partner and regional practice group leader - energy, environment and infrastructure at law firm BCLP, tells
"You've got to a point where you're looking at a much longer-term and you're looking to also lock in some good financing. These ESG financing packages benefit from much longer strategic perspectives in terms of compliance" DCD that private equity infrastructure companies are very sophisticated in their approach to ESG and these types of funding mechanisms. Likewise, companies like hyperscalers are very sophisticated in how they raise capital. He notes that real estate investors are “coming up to speed and catching up quickly” on ESG and are often “very savvy” when it comes to ensuring they get the best cost of capital for financing.
“We would ask the engineers; are you thinking about water usage? Are you thinking about PUE? Do you have guidelines within your architectural blueprint for what you expect the PUE to be? Are you working with LEED, for example, to obtain a green building certification?
framework, and should be allocated (though not necessarily spent) within a certain timeframe, usually a year. After that, there’s no requirement for ongoing reporting of the bond’s use, though much of it would be going towards projects likely included in any ESG report. For SLLs, companies have to report progress against stated targets annually, which are then audited by third parties. After that, depending on the results, the pricing levers get put into place for the next year. Flexential developed its framework by first creating an ESG committee consisting of cross-functional leaders in the organization that are responsible for the different components that together impact
“The quantity, as well as the quality, of conversations that we're having is increasing,” says Aligned’s Raj. “[Compared to our 2020 raise], there were more institutions that had done sustainabilitylinked loans and were more familiar with it in the conversations I had last year. “Investors’ partners are requiring that a certain percentage of their overall allocations be dedicated to green loans, sustainabilitylinked loans, things that check that box.”
When to do it BCLP’s Richards says SLLs may make the most sense for companies that are looking for long-term stable financing platforms: “You've got to a point where you're looking at a much longer-term and you're looking to also lock in some good financing. These ESG financing packages benefit from much longer strategic perspectives in terms of compliance with the data collection and other requirements.” It’s not for everyone, he says: “If you're in the development phase, where you're growing fast and you're breaking things, that [type of] financing may be less relevant.” If companies want finance quickly, without sophisticated ESG audits and compliance and due diligence, he says, “I suspect that green finance is not the ideal solution to start with a capital raising journey.” ING’s Shurey says green finance is a collaborative process: “We'll speak to the financial team, and the sustainability team, as well as engineering, operations, legal, communications, accounting. “We have found that is an incredible value-add for a company, even if they never actually go ahead with a capital raise,” he adds. Sometimes we're putting together groups within the business that have never spoken before.
“We try and push the maximum feasible threshold that the company can achieve within a five or 10-year plan, so the company is going beyond business as usual, but it's doing so in a way that it's actually going to be able to achieve.”
its sustainability objectives. Downie says the committee meets regularly to ensure the integrity of the framework and ensures the company has an internal process to ensure that it’s meeting its green commitments as it deploys capital.
The green playbook
“It is a living and breathing framework,” he says.
ESG reports often outline sustainability targets alongside current projects and progress. Green finance frameworks outline the company’s sustainability goals and define the projects it considers suitable to helping them towards those targets, and therefore suitable for financing under green bonds. Funds raised via green bonds should go towards projects outlined in the
72 | DCD Magazine • datacenterdynamics.com
The International Capital Market Association (ICMA) has published its Green Bond Principles – as well as SustainabilityLinked Bond and Sustainability Bond principles – that provide guidelines and recommendations around structuring features, disclosure, and reporting for sustainable financing vehicles. First published in 2014, they were last updated in 2021. Companies including Digital Realty, Equinix, Verizon, and
Data Centers Go Green Aligned have made their frameworks available online. Frameworks outline key sustainability goals, the types of projects the company will be using the proceeds towards, and processes for determining eligibility, reporting and auditing. As well as the likes of ING, external third parties such as SustainedAnalytics will assess the company and its green frameworks to determine its suitability for raising money through green bonds and SLL. “We had to meet those foundational elements,” says Downie, “but I'm sure those will evolve over time and we'll need to ensure that we continue to monitor those [goals].”
Setting targets

Different companies use different SLL targets. Many will cross over with targets outlined in the green finance frameworks, and proceeds will be used towards those goals in conjunction with proceeds from green bonds. Nabiax's included the percentage of renewable electricity, water consumption (m³ per MW), and a target for hiring more women in the workplace. Atos will pay less if it reduces its annual greenhouse gas emissions (Scopes 1, 2, & 3) by 50 percent in 2025 compared to 2019.

"I always recommend that companies use very specific KPIs that can be measurable," says ING's Shurey. "We found that it's most effective to use things like an absolute greenhouse gas (GHG) or carbon equivalent number, or an intensity-based number."

He says: "Renewable energy and PUE are all trying to do the same thing, which is reducing carbon emissions. Therefore, you should focus on the most effective way of demonstrating that, which is tons of carbon, or tons of carbon per revenue, or whatever the intensity-based metric may be.

"We also want to make sure that the targets that they set off those KPIs are meaningful, that they are benchmarkable so they can compare them to their peers, if possible, and that they are ambitious. We typically ask clients for at least three KPIs, but we prefer five KPIs, and we also encourage companies to align their decarbonization targets to things like science-based targets."

While companies can pick their own targets, they need to choose goals that are ambitious, otherwise investors won't be interested – especially if competitors with grander targets are seeking financing at the same time. Raj says Aligned's KPIs are designed to be impactful and go beyond table stakes, and tells a story of how this works.

"We focus on safety within our projects, and so when we were selecting the target for safety, we started with 'we're less than the median from a total recordable incident rate standpoint,'" he says. "Our sustainability advisor ING said that's not an aggressive enough target, so we revised it to be in the top quartile in order to get interest rate savings.

"There's a balance, and that's an example of one where it's in our best interest to have targets that are not just the status quo. But I don't want to sign up for something that I think there's no chance I'm going to be able to achieve; I would not be doing my job if I just signed up for extremely low-probability stretch goals.

"The bar is going to continuously be set higher and higher, which is a good thing. What wasn't table stakes two years ago when we set our first SLL target may be viewed as such five years from now, so it's going to change."

How hard some of those targets will get in the future is unclear. Gains through PPAs and PUE reduction can only go so far, but many of the people DCD spoke to for this piece suggested targets will still be there. Goals will be broken down into smaller measures, such as GHG emissions, rather than resorting to all-encompassing single targets such as becoming carbon neutral or negative. This is partly because such binary long-term goals would still need yearly targets against which companies can be measured, in order to reset interest rates on long-term ESG-linked loans.
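The distinction Shurey draws between absolute and intensity-based KPIs can be sketched in a few lines. This is a minimal illustration only: the figures, function names, and the revenue-based intensity metric are hypothetical, not taken from any company's actual framework.

```python
# Illustrative sketch: absolute vs intensity-based carbon KPIs.
# All figures below are invented for demonstration purposes.

def absolute_emissions(scope1_t, scope2_t, scope3_t):
    """Total GHG emissions in tons of CO2-equivalent (tCO2e)."""
    return scope1_t + scope2_t + scope3_t

def carbon_intensity(total_emissions_t, revenue_musd):
    """Intensity metric: tons of CO2e per million dollars of revenue."""
    return total_emissions_t / revenue_musd

baseline = absolute_emissions(12_000, 45_000, 140_000)  # baseline year
current = absolute_emissions(9_000, 30_000, 110_000)    # reporting year

# An Atos-style absolute target: halve annual GHG emissions vs baseline.
reduction = 1 - current / baseline
print(f"Absolute reduction so far: {reduction:.1%}")

# Intensity can improve even while absolute emissions grow with the business,
# which is why lenders scrutinize which form of KPI a borrower picks.
print(f"Intensity: {carbon_intensity(current, revenue_musd=800):.1f} tCO2e/$M")
```

The design point is the one Shurey makes: both metrics aim at the same thing, but an absolute number cannot be flattered by revenue growth, while an intensity number allows a fast-growing operator to show progress per unit of business.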
Paying the penalties

The benefit of SLLs over merely publishing ESG reports and making pledges is that failing to meet your stated goals carries a financial penalty.

None of the companies DCD spoke to said they had yet failed to meet their SLL goals – Aligned publicly said it extended its SLL facility after meeting its original 2020 goals – and all seemed comfortable with the risk they had taken on.

It's not necessarily about profit for the banks, either: BCLP's Richards claims he's seen one example where, if a company fails to meet its SLL targets, the additional interest the bank receives is spent on additional ESG consultants, with some seconded to a sustainability charity.

Breaking the terms of green bonds doesn't carry direct penalties like the increased rates on SLLs, but there would be significant reputational damage. Future raises would become much more difficult, and failure could even leave the company open to legal repercussions, depending on the terms of the agreement.

"There would be a significant reputational impact if Flexential is unable to complete the audit and allocate the proceeds as required," notes Downie. "The organization would lose a lot of credibility with the investors who bought the bonds, and the green program would be dead."

Ultimately, the goal of ESG financing isn't to make companies securing such funding pay more, but to drive better behaviors. "We think of sustainable finance as a really key part of the journey," says Shurey. "But it's certainly not the end result. What we're trying to do is help companies to better integrate ESG decision-making into their overall business strategy."
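The margin mechanics behind an SLL can be sketched simply. The basis-point figures and the function below are hypothetical, for illustration only – real margin ratchets are negotiated case by case and set out in the loan agreement.

```python
# Hypothetical sketch of a sustainability-linked loan (SLL) margin ratchet.
# Figures are illustrative, not drawn from any real loan agreement.

BASE_MARGIN_BPS = 150  # margin over the reference rate, in basis points

def sll_margin(kpis_met: int, kpis_total: int,
               discount_bps: int = 5, penalty_bps: int = 5) -> int:
    """Return the adjusted margin for the next period: a small
    discount if all sustainability KPIs were met, a step-up if any
    were missed. This is the 'financial penalty' for missed targets."""
    if kpis_met == kpis_total:
        return BASE_MARGIN_BPS - discount_bps
    return BASE_MARGIN_BPS + penalty_bps

print(sll_margin(5, 5))  # all KPIs met -> pay less: 145 bps
print(sll_margin(4, 5))  # one KPI missed -> pay more: 155 bps
```

The step sizes are typically small relative to the total cost of borrowing, which is why, as the article notes, the mechanism is less about bank profit than about keeping the borrower's sustainability claims honest.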
It's Time To Panic
The end of techno-optimism
The huge scientific and technological advances of the past few decades have brought immeasurable knowledge to billions, lifted entire nations out of abject poverty, and given us wonders few could have imagined.
It is easy to feel like a species that can solve anything: with sheer force of will, we can surmount any challenge and survive any disaster. But such a view blinds us to the reality of what lies ahead.

As climate change shifts from the issue of tomorrow to the crisis of today, relying on technology to save us is a dangerous folly – and one that is promoted by the same fossil fuel corporations that are profiting from our existential risk.

Many in this industry have laudably set 'science-based' climate targets to reduce emissions dramatically by 2030. But that's when every nation and sector needs to have at least halved emissions. In reality, as we all sadly know, that won't happen – there will be many who ignore the science. Data centers need to beat that target as soon as possible to do their part to make up for others' failure.

It's worth reiterating: even if everyone does meet that goal, the emissions between now and 2030 will ensure catastrophic temperature changes that will destroy fragile ecosystems and displace hundreds of millions. It's just a matter of how bad it will get.

Some would disagree. Technology will save us, they claim, pointing to carbon dioxide removal equipment. Indeed, in the latest terrifying IPCC report, scientists note some removal may now be necessary given our past inability to reduce emissions. But the technology is wildly unprepared for the
challenge ahead. Sector leader Climeworks operates the world's largest direct-air capture plant in Iceland, capable of capturing about 4,000 tons of CO₂ each year (which then must be subtracted from the emissions of building and operating the facility, as well as flying out dozens of journalists to the site). By 2030, the company hopes to remove more than a million tons of CO₂. That's essentially meaningless against the roughly 18 gigatons of CO₂ expected to be emitted in 2030 alone (a figure that will likely end up being higher).

What about trees? Can't we just plant our way out of the apocalypse? Even putting biodiversity issues to one side, relying on trees for carbon offsets is fraught with corruption and double counting, and ultimately means we are shifting carbon into places that may soon burn. Last year, forest offsets bought by Microsoft burned down amid a climate-change-induced wildfire. Such fires will only get more frequent and more intense.

So that just leaves a Hail Mary: a yet-to-be-discovered technology that will save us, just like the Haber–Bosch process, which ushered in the era of man-made fertilizer, and Norman Borlaug's Green Revolution, which fed billions. But can we really stake our future on a blind bet? Such a technology would have to operate at a planetary scale in less than a decade. And yet the political will for funding such equipment does not exist. There is no reason to believe it is on the way.

Yes, we should fund such research. Yes, we should build CO₂ capture machines. Yes, we should plant trees where they make sense. But above all, we should reduce emissions immediately. Like our lives depend on it.
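The scale gap above is easy to verify with back-of-the-envelope arithmetic, using only the figures quoted in this piece (the 18-gigaton number is the projection cited here, not an independent estimate):

```python
# Back-of-the-envelope check of the capture-vs-emissions figures above.
climeworks_now_t = 4_000        # tons of CO2 captured per year today
climeworks_2030_t = 1_000_000   # Climeworks' stated 2030 removal goal, tons
emissions_2030_t = 18e9         # ~18 gigatons of CO2 projected for 2030

# Even the 2030 goal covers a tiny sliver of a single year's emissions.
share = climeworks_2030_t / emissions_2030_t
print(f"2030 goal covers {share:.4%} of one year's emissions")  # ~0.0056%

# Today's flagship plant, as a fraction of one year's 2030 emissions:
print(f"{climeworks_now_t / emissions_2030_t:.2e}")
```

One plant's annual capture is on the order of two ten-millionths of a single year's projected emissions, which is the arithmetic behind the editorial's "essentially meaningless" verdict.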