ISSUE 50 • October 2023 • datacenterdynamics.com
THE UNMAKING OF BROADBAND
DIGITAL REALTY CEO | CYRUSONE CEO | OMNIVA UNCOVERED
Connect Data Center Solutions Delivering global value at every phase of data center development. Whether you operate a multi-tenant data center, work exclusively within edge environments, require a strategic point of distribution for your hyperscale, or have new enterprise facility needs around the world, Wesco can help.
We build, connect, power and protect the world.
Wesco.com/datacenters
Contents October 2023
6 50 in review: A look at magazines past, and a celebration of our present
8 News: SMRs, crypto, & giant Tracts
16 The unmaking of Enron Broadband: Fraud, failure, and futures lost
24 CyrusOne's CEO - Eric Schwartz: On 300kW racks for AI
26 Digital Realty's CEO - Andy Power: On operating at scale
29 The Cooling supplement: Air, nature, and cleaning
45 AI moves to Norway: AQ Compute's CEO on AI plumbers
48 Meta's data center redesign: A new look for a new compute era
51 Qbits come of age: Data centers get ready for quantum
57 A laser challenge: Computing with light
61 Inventing the mobile phone: From the man who did it
65 All of the above: Aalyria hopes to connect the world
69 AMD's CTO: On how to compete with Nvidia
72 Behind Omniva: The secretive startup uncovered
76 Germany in review: Regulation and growth
81 Will the real 5G please stand up? 5G Standalone explained
85 The subsea tide: Climate change & cables
89 Space data centers: A feasibility study
92 Staying up: IEC UPS standards
95 Going metric: Beyond PUE
98 Op-ed: A choice of futures
From the Editor
DCD Magazine hits 50

We've been thinking about time a lot. This issue is our 50th, and is our largest ever in terms of team size, and the range and number of features - too many to mention below. We've been fortunate to grow alongside a sector that is bracing for another expansion amid the generative AI boom, and are currently hiring for two more journalists. But we'd be stupid to think that success is guaranteed and that the industry won't contract in the future.

"We're in the midst of an AI explosion, but we don't know when the good times will end"

$70bn - Enron's valuation at peak. It would claim that Broadband accounted for $40bn.

The big crash
We live in a cyclical sector. The dot-com crash and telco winter claimed many businesses that overexpanded during a boom. For the cover of this issue, we look at a company that not only overexpanded, but committed fraud at a breathtaking scale. Read our four-year investigation into the history of Enron Broadband Services, and learn about how it helped birth a new wave of data center companies.

This AI moment
Virtually every feature in this magazine touches on AI, but a few take a deep dive into the impact on different parts of the industry. We talk to Digital Realty's CEO about how the data center giant is adapting to what's coming, CyrusOne's CEO about its 300kW-per-rack pitch, and AQ Compute's CEO about the need for plumbers. On the hyperscaler side, we get the exclusive from Meta about why the company scrapped its data center designs and started from the ground up. We also look at the compute, talking to AMD's CTO about how the company plans to compete with Nvidia.
Finally, we investigate Omniva, a secretive startup hoping to build giant AI data centers. But all is not well at the company, as we uncover its troubled past.

The next black swan
Quantum computing is coming. At some point. Maybe. Across two features we look at the state of quantum computing today, and a promising startup of tomorrow.

Connect them all
Data centers are only one part of the story. We talk to Google X spinout Aalyria, about connecting the next billion, as well as telco giants about the future of 5G. Oh, and we interview the inventor of the mobile phone.

Plus more
What does climate change mean for subsea cable infrastructure? Can data centers live in space? How is Germany regulating data centers? What comes after PUE? We try to answer all this and more in this issue. Here's to the next 50.

Sebastian Moss
Editor-in-Chief

Meet the team
Publisher & Editor-in-Chief Sebastian Moss @SebMoss
Executive Editor Peter Judge @Judgecorp
News Editor Dan Swinhoe @DanSwinhoe
Telecoms Editor Paul Lipscombe
Reporter Georgia Butler
Head of Partner Content Claire Fletcher
Partner Content Editor Graeme Burton @graemeburton
Partner Content Editor Chris Merriman @ChrisTheDJ
Designer Eleni Zevgaridou
Head of Sales Erica Baeta
Conference Director, Global Rebecca Davison
Content & Project Manager - Live Events Matthew Welch
Content & Project Manager - Live Events Gabriella Gillett-Perez
Channel Management Team Lead Alex Dickins
Channel Manager Kat Sullivan
Channel Manager Emma Brooks
CEO Dan Loosemore

Head Office
DatacenterDynamics
22 York Buildings, John Adam Street, London, WC2N 6JU

Dive even deeper
Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color, shown below.
Events | Training | Intelligence | Debates | Awards | CEEDA
© 2023 Data Centre Dynamics Limited. All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.
Looking back

Peter Judge
Executive Editor

The last fifty issues have seen some big changes

"Will rack densities go beyond 5kW?" That was one of the questions posed in the Datacenter Dynamics Magazine Issue #1, published back in February 2015.

Eight and a half years later, we know the answer to that question, but there have been plenty more issues to explore in the 48 magazines that came between that first publication and the 50th issue we're bringing you today.

To tell the truth, Issue #1 wasn't the start of the story. DCD's publication history goes back more than 20 years, to the foundation of the company at the turn of the century.

From 2008 to 2014, there were 38 issues of DCD Focus magazine, and DCD produced a series of newsletters and other publications from the start, supporting its events and building a community of data center professionals.

The cornerstone of the publication from the beginning has been concise and analytical news coverage, combined with longer articles that explain and explore the deeper issues in the sector.

Continuing stories
Our first issue covered a protest in Virginia, against new powerlines demanded by an AWS data center, alongside a piece about a 15-story facility in Hong Kong.

We also covered Telx's refit of two iconic New York buildings - 60 Hudson Street and 32 Avenue of the Americas. We've visited those buildings multiple times since then, following other owners and tenants since Digital Realty bought Telx later in 2015. Among the features in that first issue, we speculated that open source would drive changes in the IT provision within data centers.

Some of our other thoughts have not aged all that well. We examined a new idea, hyperscale data centers, and wondered whether they could possibly deliver better energy efficiency than the existing colocation facilities.

In the following years, we visited as many sites as possible - preferably where data centers were deep underground or in striking refurbished buildings, or serving pioneering scientific research.

We found out as much as we could about every significant part of the sector - as regular supplements have given us a chance to dive deep into topics including cooling and power.

And we reached back into the past. We found a trove of beautiful photographs from the early years of mainframes, and spoke to a team reconstructing the first programmable computer. This issue we continue that thread with the story of the surprising role that the Enron financial disaster played in the development of data centers and the Internet.

The last three years dealt us some of the biggest changes in the magazine's history. When the Covid-19 pandemic struck, DCD's face-to-face events were all canceled or postponed indefinitely. The company executed a pivot to online events and communications, and the magazine took a central role in that.

Around the same time, we hit pause on print. The magazine went PDF-only. This made distribution simpler to a readership who were suddenly absent from their offices and working from home - and wanting the communication we could provide. It also unlocked the page limits we'd previously worked within. From now on, articles could be as long as the subject demanded. Both moves were a success.

Readership increased, and so did our value and recognition in the industry. We now have more journalists, with their own individual beats, so you can look forward to deeper coverage of all subjects. That has only been made possible with support from our readers, to whom we are forever indebted. And as to that question we started with: we predicted power densities would stay below 5kW per rack until 2020. I think you know how well that one turned out.

[A collage of covers from past issues of DCD Magazine, from Issue #1 (February 2015) onwards]
Whitespace
News
NEWS IN BRIEF
The biggest data center news stories of the last three months
AWS launches Dedicated Local Zones offering The cloud company has launched a new on-premise cloud service called Dedicated Local Zones. The service is a larger version of its Outposts on-premise rack offering.
CyrusOne offers 300kW-per-rack AI data center design The colo firm has launched Intelliscale, an artificial intelligence (AI) workload-specific data center design. The company said the facilities will be able to handle GPUs and TPUs on a smaller footprint.
StackPath quits CDN biz, Akamai takes on contracts Content delivery firm Akamai gained 100 enterprise customers this quarter after rival firm StackPath announced it was quitting the CDN business. StackPath's CDN operations will cease in November.

Blockchain firm Standard Power to procure 24 SMRs to power two US data center sites
Blockchain firm Standard Power says that it will procure power for two data center sites in the US with nuclear energy. October saw the company announce plans to develop two facilities backed by nearly 2GW of energy from 24 small modular reactors (SMRs).
Standard Power will work with technology provider NuScale Power Corporation, the only producer of SMRs that has obtained US regulatory approval, and Entra1 Energy, an independent global energy development and production company, to support Standard Power's two projects.
If they are built, the facilities will be located in Ohio and Pennsylvania; NuScale will provide SMRs to the two locations. NuScale will reportedly end up providing 24 units of 77MWe modules collectively producing 1,848MWe across the two locations. Timelines for delivery or value of the deal were not shared.
"We see a lot of legacy baseload grid capacity going offline with a lack of new sustainable baseload generation options on the market especially as power demand for artificial intelligence computing and data centers is growing. We look forward to working with Entra1 and NuScale to deploy NuScale's proven SMR technology to deliver carbon-free, baseload energy to address this large gap in the generation market," said Maxim Serezhin, Standard Power CEO.
SMRs, which are also under development at companies including the UK's Rolls-Royce, have been proposed as a potential source of low-carbon energy for data centers, which could effectively allow facilities to operate independently from the local grid. The NuScale Voygr design was the first SMR to gain final approval from the Nuclear Regulatory Commission (NRC) for deployment in the US earlier this year. Entra1 Energy, part of investment firm Entra1, is the exclusive commercialization partner of NuScale Power.
Standard Power has previously signed a deal with energy firm Energy Harbor Corp to place a data center at the latter's Beaver Valley nuclear facility in Shippingport, Pennsylvania. The two companies previously formed an agreement to power Standard's Ohio cryptomining facility in Coshocton with nuclear energy from one of Energy Harbor's plants.
bit.ly/StandardCryptoReactor
Microsoft hiring for SMR strategy lead A job listing, first reported by DCD, reveals that the company is hiring for someone to be “responsible for maturing and implementing a global small modular reactor (SMR) and microreactor energy strategy” to help combat the data center power crunch.
Fujitsu exits US data center business, may do same in South America Fujitsu has closed its North America data center business, and is exploring options in South America. Fujitsu’s recently-appointed CEO for the Americas, Asif Poonja, said the company “didn’t have the size and the scale” to compete in North America.
Solar-powered data center to heat surfing lagoon Leisure park developer Aventuur is planning to heat a surfing lagoon in New Zealand using a solar-powered data center. The Auckland site, which will include a 2.2-hectare surfing lagoon, will house a data center powered by a nearby solar farm that will use waste heat to warm the water. Aventuur says it will invest $50-100 million in each of its parks.
Tract announces plans for 2,200-acre data center park in Nevada
A new company from the founders of Cologix is planning a massive new data center park outside Reno, Nevada. October saw Tract, a new developer of master-planned data center parks, announce its recent acquisition of more than 2,200 acres of land inside the Tahoe-Reno Industrial Center in Storey County. The total acreage comprises two areas commonly referred to as the Peru Shelf and South Valley. In addition to the land, Tract also controls over 1,100 acre-feet of water rights and has commitments from NV Energy to deliver over two gigawatts of power, beginning in 2026. "We appreciate the relationships we have built with NV Energy, Storey County, TRI, and the State of Nevada. We look forward to building on those partnerships for decades to come," said Grant van Rooyen, CEO of Tract.
"Our customers are facing challenges resulting from their rapid growth. We believe our master-planned, shovel-ready campuses will allow them to leverage our investments to gain speed and certainty." Colorado-based Tract describes itself as a company that acquires, zones, entitles, and develops master-planned data center parks for data center end users. The company says it has real estate holdings throughout the United States. News of the company surfaced last year – at the time the company had reportedly identified 40,000 acres of potential investment sites, including prospective sites in Eagle Mountain in Utah, and the since-
announced development in Reno. Grant van Rooyen is president of the van Rooyen Group, which founded US data center firm Cologix alongside ColCap in 2010. Stonepeak Infrastructure Partners acquired a majority stake in the colo company in 2017. Van Rooyen was president and CEO from 2010 until 2018. “We have been working with Tract for nearly a year now and are excited to partner with them on these projects,” explained NV Energy President and CEO Doug Cannon. “These data center parks will be some of the biggest consumers of energy on our system. Tract’s approach of long-range planning allows us to engage and collaborate early to ensure reliable, affordable, and sustainable power will be delivered.” Tract acquired the land from Blockchain LLC, but terms weren’t shared. Tract was advised by L. Lance Gilman Commercial Real Estate. Blockchain had planned a 5,000-acre blockchain-powered smart city/innovation zone that would house 35,000 people. After acquiring the land in 2018, the plans were dropped around 2021 amid concerns from local officials. “We look forward to working with Tract on their future plans for northern Nevada and welcome them to the state,” said Governor Joe Lombardo. “As the Nevada economy continues to diversify, technology companies will be a key component of our growth.” bit.ly/GrantingTraction
Cogent to convert 45 Sprint switch sites into colo data centers
Fiber firm Cogent Communications is planning to turn the former Sprint switching site real estate footprint acquired from T-Mobile into a sizeable colocation data center portfolio. T-Mobile sold its Wireline business to Cogent for a symbolic $1 last year, with the telco operator set to take a $1 billion charge on the deal. Much of the business sold to Cogent was Sprint's legacy US long-haul fiber network, which T-Mobile had acquired as part of its $26bn merger with Sprint in 2020. However, the sale to Cogent also included more than 40 data centers totaling some 400,000 sq ft (37,160 sqm) of space and a significant real estate footprint totaling 482 technical spaces and switch sites. During Cogent's Q2 2023 earnings call in August, Cogent founder and CEO David Schaeffer said the company can repurpose a number of switch sites and sell colocation. The largest 45 sites comprise 1.3 million sq ft (120,775 sqm) and already have 160MW of power, and are the ones the company aims to convert into colocation facilities. Schaeffer added there are almost 300,000 square feet of leased technical space that Cogent will be exiting.
bit.ly/CogentColo
In a relatively quiet quarter for acquisitions, Digital Transformation Capital Partners (DTCP) has become the majority owner of German data center firm Maincubes, acquiring an additional stake from co-owner Art-Invest Real Estate. Art-Invest retains a ~25 percent stake in the company – down from around 75 percent before DTCP's initial investment – while CEO and founder Oliver Menzel also remains a shareholder. Terms of the deal weren't shared. Maincubes currently operates the FRA01 data center in Frankfurt and another in Amsterdam in the Netherlands; the company has two more sites in Frankfurt and one in Berlin in development.

APAC data center operator AirTrunk considers AU$10 billion IPO
Reports have surfaced that APAC-focused data center firm AirTrunk is considering an initial public offering. The company is reportedly considering a float that could value the company at "well north" of AU$10 billion (US$6.4 billion). AirTrunk shareholders Macquarie Asset Management and PSP sent a request for a proposal to seven investment banks earlier this month requesting pitch ideas around a capital review, which would look at a potential listing on the Australian Securities Exchange (ASX). The capital review may also examine options around selling a minority stake in the business. APAC-focused operator AirTrunk was founded in 2016 with plans to develop hyperscale data centers in Australia. It opened its first facility, in Sydney, in 2017. Since then the company has expanded across the region, operating and developing campuses in Australia, Hong Kong, Japan, Malaysia, and Singapore. A consortium led by Macquarie's Asia Infrastructure Fund 2 (MAIF2) and including Public Sector Pension Investment Board (PSP Investments) acquired a major stake in the business in 2020, investing alongside AirTrunk's founder and CEO Robin Khuda. At the time, the business was valued at around AU$3 billion ($1.93bn). Documents suggest the company is planning to invest around $1 billion in each project, most of which will be built out in phases until around 2035.
bit.ly/AirPO

September saw investment firm KKR acquire a minority stake in Singtel's data center unit. Singtel and KKR have reached a definitive agreement that will see a fund managed by KKR commit up to S$1.1 billion (~US$800 million) for a 20 percent stake in the Singaporean telco's recently-formed regional data center business. The deal puts the enterprise value of Singtel's overall regional data center business at S$5.5 billion (US$4bn). KKR will also have the option to increase its stake to 25 percent of the business by 2027 at the pre-agreed valuation. Singtel's portfolio comprises 62MW of existing capacity and a new 58MW data center in development in Singapore, as well as developments in Batam, Indonesia, and Bangkok, Thailand.

Bain to take ChinData private in $3.16 billion deal
Chinese data center operator ChinData is to be taken private by existing investor Bain Capital. August saw ChinData Group Holdings Limited announce that it has entered into a $3.16 billion merger agreement with two wholly-owned Bain subsidiaries, BCPE Chivalry Bidco Limited, and BCPE Chivalry Merger Sub Limited, to be taken private. “The company’s board of directors, acting upon the unanimous recommendation of a committee of independent directors established by the board of directors (the Special Committee), approved the merger agreement and the merger, and resolved to recommend that the company’s shareholders vote to authorize and approve the merger agreement and the merger,” ChinData said. The merger is currently expected to close during the fourth quarter of 2023 or the first quarter of 2024 and is subject to customary closing conditions. The deal will be funded through a combination of cash contributions from the sponsors or their affiliates, debt financing provided by Shanghai Pudong Development Bank Co., Ltd. Lujiazui Sub-branch and Industrial Bank Co., Ltd. Shanghai Branch, and equity rollover. ChinData and its Bridge DC subsidiary operate more than 17 data centers across China, Malaysia, and Thailand, with a site in India under development. bit.ly/BuyBuyBain
The Global Critical Cooling Specialist
With British engineering at the heart of our products, Airedale manufactures across several continents so that our clients can apply our solutions worldwide. A digital world needs a global specialist. Our chillers, software, CRACs, CRAHs and fan walls are engineered to perform in the toughest conditions, all year round. When you partner with Airedale, you can be reassured of quality, reliability and efficiency. Start your journey with us today.
Global Locations:
Leeds, UK - Global Headquarters: Chillers, CRAHs, Telecoms, R&D, Test Labs
Consett, UK - AHUs, CRAHs, Fan Walls
Rockbridge, VA, US - Chillers, Test Lab
Grenada, MS, US - CRAHs, Fan Walls, Test Lab
Guadalajara, ES - CRAHs, Fan Walls, Test Labs
Dubai, UAE - Sales Office
India - CRAHs
www.airedale.com
Amazon plots data center campuses in Virginia, Ohio, Arizona, and Ireland Amazon is continuing its expansive data center build-out across the US and in Europe. Over the last three months the company has filed to develop campuses in Virginia, Ohio, Arizona, and expand its existing campus in Dublin, Ireland.
The developments total more than 12 million sq ft (1.1 million sqm) and around 40 buildings. In Virginia, the company is looking to develop at least four campuses across Louisa, King George, and Stafford Counties. One of the two planned campuses in Louisa will span 1.7 million sq ft (157,935 sqm) and see up to seven buildings developed. Details on the second campus in the county haven't been disclosed yet. In King George, the company was approved in September to rezone 869 acres of land from farmland to industrial for a 19-building data center campus spanning 7.25 million sq ft (673,550 sqm). In Stafford, the cloud firm is aiming to develop 510,000 sq ft (47,380 sqm) of data center space across two two-story buildings on a plot outside Stafford. In Ohio, the company is planning a new five-building campus in New Albany. Built over a five-year period between 2025 and 2030, the five buildings would total 1.25 million square feet on 439 acres. In Arizona, the company has filed for permission for two two-building campuses in the Mesa area. Each building would span 227,000 sq ft (21,090 sqm). In Dublin, Amazon recently gained planning permission from Fingal County Council for three new data center buildings. The company was seeking permission to construct three data center buildings on a 65-acre site at its existing campus: Data Centre E, Data Centre F, and Data Centre G, with a gross floor area of 15,350 sq ft (1,425 sqm), 221,520 sq ft (20,580 sqm), and 221,520 sq ft respectively, each over two levels. All three would be completed by 2026. At full build-out, the campus will include one more building and provide around 220MW of capacity. Over the summer, Amazon filed for four data center campuses in Virginia's Spotsylvania and Caroline Counties that would span more than 10 million square feet of development. It also filed to demolish and replace nine office buildings in Sterling with four data centers spanning more than 900,000 sq ft (83,600 sqm).
bit.ly/AmazonKeepsGoing
Microsoft building facilities in Wales, Ireland, and Wisconsin
The cloud company in October announced it would be applying for planning permission for a data center at the Quinn Imperial Park in Newport, Wales. Details around the scope of the development or potential timelines weren't shared. A rendering on the blog post suggests a large single-story building. The building is located virtually next door to Vantage's Newport campus - and both were factories for Korean firm LG Electronics till it pulled out of Wales some 20 years ago. September saw Microsoft break ground on a data center in Mount Pleasant, Wisconsin. The site was previously earmarked for a Foxconn manufacturing hub which controversially never came to fruition. The company is also expanding in Ireland, but outside the traditional hub of Dublin. Microsoft has confirmed it is in the early stages of developing plans for a data center campus near Jigginstown in Naas in County Kildare, to the southwest of Dublin. Details on campus size or capacity were not shared.
bit.ly/CloudsDontKillPeopleRappersDo

Meta behind $700 million Minnesota data center project
Social network firm Meta has confirmed it is behind a $700 million data center project in the Minneapolis area of Minnesota. The company in September confirmed it was behind Amber Kestral, and was seeking to acquire 280 acres of UMore Park property next to Dakota County Technical College for $40 million. The deal passed the same month. In November 2022, Xcel Energy said it was working with an unnamed Fortune 100 company for an 'enterprise data center' project. At the time, the energy firm said the 'Amber Kestral' project was expected to achieve an initial load of at least 10MW and grow to exceed 75MW by the end of its first 10 years in service. The 4,772-acre UMore Park property is a former munitions plant. The campus plan for the park was known as "Project Bigfoot" by the Rosemount Planning Commission and could see five single-story buildings developed, two of which will be "main buildings."
bit.ly/Metasota
24/7 CLEAN ENERGY
Oklo designs and operates advanced fission power plants to provide clean, reliable, affordable energy
Power Purchase Agreements at cost equal to or less than other energy sources | No transmission bottlenecks | No waiting for utilities
oklo.com
Nvidia may lease data center space for DGX Cloud service
GPU-maker Nvidia is in talks to lease space from a data center operator as it looks to expand its cloud offerings. Nvidia launched its DGX Cloud offering in March 2023 to offer GPU supercomputers-as-a-Service. To expand the service, the company has reportedly held discussions with at least one data center owner about leasing its own space for its DGX Cloud service. With the hyperscalers, the company has tried to use its position to convince them to adopt DGX Cloud - which would essentially see the companies lease Nvidia's servers, deploying them as a cloud within their cloud that Nvidia can market and sell to enterprises looking for large GPU supercomputers. While Google, Microsoft, and Oracle agreed to the proposal, Amazon Web Services has not. Nvidia is now considering cutting out the cloud providers completely and becoming a wholesale customer - a hyperscaler of its own. Discussions are believed to be still at an early stage. Details around which data center provider, and the scope and locations of deployment, have not been shared.
https://bit.ly/DGXDC

GLP launches new Ada data center platform with 850MW pipeline
APAC logistics real estate firm GLP has launched a new data center platform and detailed its development pipeline across Europe, APAC, and South America. September saw the company announce the launch of Ada Infrastructure, a new global data center business encompassing its data center projects outside China. The unit has launched with 850MW of secured IT capacity in development across Japan, the UK, and Brazil, and claims nearly 1.5GW of future capacity. In Japan, Ada said it has 900MW in power commitments and is planning five campuses totaling 600MW; four in Tokyo (TKW1, TKW2, TKE1, and TKE2) and one in Osaka. Ground has been broken on TKW1. In the UK, Ada said it is developing a 210MW campus in east London's Docklands. The site will be ready for service in 2026 and consist of three eight-story buildings. The company is planning two campuses in Brazil – Rio de Janeiro and São Paulo – totaling 100MW. Rio will offer 60MW across three single-story buildings from 2025, while São Paulo will offer 40MW across two single-story buildings from 2025. Founded in 2009, GLP is a global investment manager in logistics, digital infrastructure, and related technologies. It first began moving into data centers in China in 2018, and acquired a 60 percent stake in local data center company Cloud-Tripod the following year. In China, the company also has data centers in Changshu and Huailai, and says it could reach 1.4GW of in-market capacity in the future. The Chinese facilities are being kept separate from Ada Infrastructure.
bit.ly/AdaLoadOfThat
Peter's factoid
Data centers have contributed $2.1 trillion to the US economy, if a new study from PwC is to be believed. The generous math includes direct, indirect, and induced impacts over a five-year period, 2017-2021.
Crown Castle CEO Brown: We’re not interested in investing in data centers US tower company Crown Castle has ruled out investing in data centers as part of its infrastructure portfolio. The comments were made by Crown Castle’s president, CEO, and director, Jay Brown, during the Goldman Sachs Communacopia & Technology Conference in September. “We don’t see revenue synergies there in the same way that we see it with the other products that we’ve offered,” he said.
The company has over 40,000 towers, 120,000 small cells on air or under contract to go on air; and around 85,000 route miles of fiber. That approach is markedly different from rival tower firms such as American Tower and SBA. American Tower acquired CoreSite and has previously said it has identified more than 1,000 sites that could support 1MW Edge data center locations. SBA Communications has said the company had more than 40 Edge sites in operation or development. bit.ly/KingOfTheEdge
New York ∙ Virginia ∙ Florida
greenchiprecycling.com | 844-783-0443
The Largest Purchaser of End-of-Life HDD and SSD Drives
Also Providing: State-of-the-Art Shredding Facility Located in Northern VA | On-Site & Off-Site Hard Drive Shredding | Transparent Partnership Program, Guaranteeing Stable, Top Tier Revenue Share | Nationwide Service
The unmaking of Enron Broadband
The spectacular collapse of a Wall Street darling with dreams of Internet dominance
Sebastian Moss
Editor-in-Chief
Photography by: Moon Immisch

It was early in 2001 and Kevin Moss had a job interview at the world's most coveted corporation.
Moss (no relation to this writer) traveled up the 50 stories of Enron's imposing headquarters in Houston, Texas, awe-struck by the success that the building represented. This, he hoped, would be where he made his fortune, a company where he could spend decades rising through the ranks.
But it would mean moving across half the country and giving up a promising career. Doubt niggling at him, he asked what would prove to be a prescient question: "I'm leaving a lot of stuff behind to come down here, are things good financially?"
Ken Rice, at the time the CEO of Enron Broadband Services, smiled. "He leaned back, popped his cowboy boots up on his desk, and proceeded to feed me the biggest line of bullshit ever," Moss told us, two decades later. "And I bought every bit of it."
Within a year, the company would collapse, Enron Broadband would be sold for scrap, and Ken Rice would be fighting to stay out of prison. Six years later, he would lose that fight.
When Moss attended his interview, Enron's broadband division was part of a giant global utility company, valued at $70 billion. By 2002, its value had plummeted in what was, at the time, the largest corporate bankruptcy in US history. Enron Broadband was instrumental in causing that spectacular collapse, but it has been overshadowed by the broader corporate fraud at the energy conglomerate. The broadband business has drifted from public memory, and its role in the wider telecommunications and data center industry is little discussed. But for those who worked at EBS, during its brief moment in the sun, many
cannot forget their time there - both from the scars that the trauma of bankruptcy left on them, and from the sincere belief that they were working on something magical.
Backed by near-bottomless funding, Enron Broadband set out to dominate the nascent Internet, hiring visionaries with dreams of video-on-demand, cloud computing, and Edge networks long before the market was ready. It would fail spectacularly, but its legacy would help birth a new generation of data center companies.
We spoke to more than a dozen former Enron employees and contractors over the past four years, and they kept returning to one question: Could it have worked?

The commodification of everything
Founded by Kenneth Lay in 1985 through the merger of two smaller gas businesses, Enron began as a traditional gas and electricity enterprise supplying power across America. Over the next few years, the company sought greater returns in riskier ventures - first by expanding into unregulated markets, and then with the "Gas Bank," which hedged against the price risk of gas.
It was this concept, pushed by then-McKinsey consultant and eventual Enron CEO Jeff Skilling, that transformed Enron from a normal company into a rocket ship destined for an explosive end. The company began to shift from producing energy to trading energy futures and building complex financial schemes to profit over every part of the energy sector. It would go on to trade on all types of futures, including the weather.
At the same time, Enron embraced mark-to-market accounting - in which a company counts all the potential income from a deal as revenue as soon as it is signed. For example, if Enron signed a $100 million contract over a decade, it would immediately report $100m in revenues for the quarter, even if the contract eventually fell apart. The subjective nature of many of the deals also meant that the company would assign huge revenue numbers to contracts that could never live up to the promise.
But it meant that Enron could report higher and higher revenue figures. At least for a while.

From gas to bits
Enron fell into the broadband business. With the 1997 acquisition of utility Portland General, it also gained FirstPoint Communications, a fledgling telco with a traditional business plan. Initially, Enron expected to sell FirstPoint as soon as the acquisition closed, but Skilling saw an opportunity to remake Enron as an Internet business. That is, at least in the eyes of investors, who were in the midst of a dot-com frenzy.
"One day, I get this phone call from a guy I knew at Enron," Stan Hanks, who would go on to become EBS' CTO, recalled. "He said: 'We bought this company that's trying to build a fiber optic network from Portland to Los Angeles, and we don't know if we should let them do it, kill it, or put lots of money into it.'"
The idea of a gas company muscling into telecoms was not without precedent - the Williams Companies were instrumental in deploying America's first fiber along disused gas pipes, developing two nationwide networks. One was sold off for a healthy profit, while the other would eventually go bankrupt (but appeared successful at the time of EBS' formation).
Talking to the Enron employees about the vast sums they were able to make trading energy, Hanks had an idea. "I just woke up in the middle of the night and said 'I can create a commodity market for bandwidth,'" he remembered.
"The guys in Houston went crazy, because the potential market size on this was enormous. They could see the ascendancy of the Internet age - they weren't sure when, but they knew it was going to happen, and they thought this would be an opportunity to gain dominance," Hanks said. "It looked like a license to print money, so they gave us $2 billion and said 'Go make Enron Broadband.'"
The flaw in the machine
Every company likes to boost its stock and overpay its executives, but Enron liked to take things a little further.
Enron was already playing with fire with its mark-to-market accounting practices, using it as a way to juice its revenue numbers and puff up its stock price. It then decided to double its risk by pinning a credit line to the stock price of the company.
"That basically means that if the stock price went up, they had more money to do things with," Hanks said. "And so they now had an additional incentive to do things to pull the stock price up, which would then bring in money, which they could use to pull the stock price up a little more. But if you keep ratcheting that up over the course of time..."
The company was already, inexorably, heading for disaster, as those in charge became hooked on endlessly-growing share prices on the back of ever-greater promises. Every year, revenues needed to go up, or at least look like they were. And, every year, Enron needed to promise something bigger.
Something bigger These were heady times. The promise of the Internet was only just beginning to crystalize, and investors were eagerly assigning huge valuations to fledgling tech companies. Enron was willing to spend whatever it took to be seen as one of these and was equally eager to greenlight ambitious and far-out ideas. At its height, it would value the broadband business at $40 billion, and claim that it was on track to become the Internet’s most valuable business. Money was easy to come by. "We were
splashing it around like we were rolling in it," Daryl Dunbar, the former head of engineering at EBS, said.
Those we spoke to had different tales of excess, driven both by an overall sense of opulence and an extreme focus on speed.
"The budget was almost unlimited, we were throwing money around like it was water because we had so many orders coming in and they just wanted it built as quickly as possible," a former European-based employee in charge of the Benelux colo and data center site said. "Some of the stuff we needed from the UK we would fly it in, instead of truck in, that's how desperate they were to get it built."
Another European employee remembered being flown across the continent to do simple repairs and being given access to the best equipment. "We were building everything from scratch, the requirement level was very, very high - I'm working for a cloud company right now, and our standard level doesn't go anywhere near what was requested at that time," he said.
Despite moving fast, they were also trying new things: "We were running DC currents to feed the devices," he said. "It helps you flip faster to the battery backup system but that means that it's a lot more dangerous to connect to feed those devices. I haven't seen anyone do it since."
US-based Jim Silva agreed: "There was very little we would ever want for. Expense reports were approved without questions. Budgets to complete tasks or projects were blanket approved. It was a "get 'er done" culture and 'how' was not the focus, first to market was."
By this point, Kevin Moss had accepted the job at Enron and set about trying to rein in costs. "I thought, 'Oh my god, I could be a superhero.’ It was no problem cutting costs there."
When he first traveled between his office at the Houston headquarters and another at a nearby warehouse, the company ordered him a car for the short journey. "I got out there and asked how much it cost. The guy looked at me like I was stupid for asking. It was like 150 bucks just to run me out to the office. The spending was out of control."

Before gravity caught up
Enron Broadband began to muscle its way into becoming a major telecoms player, deploying thousands of route miles of fiber and several data centers across the United States, along with Points of Presence. Other PoPs sprung up around the world.
It funneled money into R&D, looking for ways to make the most out of the network it built. "We started a group led by Scott Yeager to create basically science fiction, developing these incredibly sexy products that ate bandwidth like crazy, that were incredibly sticky and incredibly appealing," Hanks said.
By 1998, it was able to provide 480p streaming video over the Internet, at least to a number of US locations. "My content distribution network went live about a year before the Akamai network," Hanks said. The company also tried to buy Akamai to complement its growing Edge portfolio, but the deal fell apart.
Several profitable, or at least potentially profitable, deals began to materialize. Enron would charge Hollywood studios thousands to send film rushes across North America when production could not wait for a FedEx shipment. It streamed the 1999 Country Music Awards and The Drew Carey Show.
Enron also spent untold millions on HP servers, envisioning a shared on-demand compute and storage service similar to today's cloud computing. As the new millennium passed, it signed a deal with one of the world's most important tech companies. "We were going to be the broadband network underneath Microsoft's MSN and the national network for Xbox Live," Dunbar said. "Enron introduced a bankruptcy clause in the contract, very arrogantly saying: 'Microsoft, you're a software company, you might blow up.'
"And then it turned out that that clause ended up going in the other direction."
The three tribes Already, even in the good times, cracks were showing in EBS’ shaky foundations. Fundamentally, EBS acted more like three companies than a unified whole. There were two main sides: the Portland group that was more like a traditional telecoms business, and the Houston office led by the ‘cowboys’ chasing the next big thing. A third office in Denver tried to act as an intermediary group, although it mainly sided with Portland. But it was Houston that called the shots. “The Portland office was a great group,” Silva said. “A professional yet relaxed atmosphere. Houston was filled with young graduates, project managers, and folks that sort of had the ‘we'll take it from here’ approach.” At the headquarters, they liked to throw around the phrase 'entrepreneurial spirit' to capture the bold ideas and fast pace, as well as a brutal culture that regularly laid off poor performers. Dayne Relihan, out of the Denver office, had another name for it: "Completely screwing it up." He said: "These guys would start building stuff in the network, get out there and mess it up. I'd get a call from operations and then I'd have to send my engineers out there to figure out what the problem was and try and solve it."
Photo Credit: Stu Pendousmat
One example that might symbolize the fundamental dysfunction at Enron was its flagship Nevada data center. Set to be the heart of its US operations, it was built, for
no reason at all, with a sloping floor. "The engineering people said it's got to be [Americans with Disabilities Act] compliant, but they were just talking about having a ramp in front of the facility to get wheelchairs out. They weren't talking about the main floor,” Relihan said. “The whole thing ended up being built on a slope, so when my guys began to build all the equipment it started out at one height, but then it got to the point where the relay racks were too tall. So they had to tear it all back out again. It was an amazing comedy of errors. That's about how much knowledge they had. “There were some really brilliant people, but they had no idea what they were doing." Also out of the Denver office, Ron Vokoun joined Enron to help build telco sites along its network, as he had for Qwest the year before. “All those initial facilities that were in progress were all stick built, just the dumbest thing that you can do,” he said. “And then I found out that some were different. “So we changed that and immediately started doing prefab facilities for the route from Houston to New Orleans, as we had at Qwest. We were done with that route before the one from Portland to Houston was done, even though it had been going for over a year.”
Trading places Vokoun and Relihan were able to work around the design challenges - it was just a part of the job. “Telecommunications is actually a pretty straightforward business, there's not a lot of magic to it,” Relihan said, noting that the core business of deploying fiber and data centers is a tried and tested model. The problem, Vokoun said, was that Enron was “basically saying ‘we don't want to build or own anything anymore. We want to use everyone else's infrastructure and trade stuff.’” Colleague David Leatherwood concurred: "There was a fabulous network, it was fiber all over the place. It's just when Enron got involved with it and decided they wanted to commoditize it all and just do trading that the end started." For the three Enron workers, the end had truly come. “We realized they were crazy; two weeks later we left,” Vokoun said. The idea did not seem as crazy to Hanks, who remains convinced that
it could have worked. He envisioned a world where access to capacity could be traded as futures, with companies paying more during times like the release of a blockbuster movie, and less during the night. It wasn’t ready - and even then, Enron tried to rush it. The plan had been for EBS to spin off and list on the stock market on its own, slowly developing trading while building out a network, but Enron scrapped the idea and instead moved 200 people from Enron Capital and Trade into the division to speed up the commodity market concept. “So now we're being pulled in two directions, because the commodity market bit is not ready for prime time,” Hanks said. “It wasn't there yet. And then, all of a sudden, I've got 200 guys trying to make bills and things started getting kind of out of control.” It got worse. In January 2000, at Enron’s analyst conference, Jeff Skilling and EBS executives “made presentations about what we were doing, and the science fiction aspect of what we had, where they managed to communicate it as though it was something that you could get today. “Enron overrepresented what we had by a fair amount, and I decided that I really didn't want to be around for what I knew was going to be a shit show. So I ended up leaving.”
Enron cloud services The company’s cloud efforts are one of those ideas that got overrepresented. Enron was shelling out millions for the servers, but they didn’t seem to be going anywhere.
“I was the engineering program manager, so I'm supposed to know where they went,” Relihan said. “I started inquiring about all these servers that we bought. So I fired off an email saying that I was going to do a network audit. And then the fireworks began.” Three Houston VPs replied to the email telling him to stop the audit. “I still to this day don't know who they were or what they did, but they absolutely shut me down and said ‘No, Arthur Andersen Consulting is going to do that.’ Well, Arthur Andersen doesn't know jack shit about this stuff.” When the fraud at Enron was eventually uncovered, Arthur Andersen would go from being one of the ‘Big Five’ accounting firms to collapsing amid the scandal - because it did nothing to stop it. “At Enron, it was one idea after another stacked on top of another, and they were failing miserably at it because they didn't have the right people,” Relihan said. “And they didn't want to have the right people.”
Ferraris in the fortress Within the Houston group, there was another team nestled inside the logistics division. “We used to call it the fortress,” Vokoun said. Stuff went into a warehouse, but never came out. Those HP servers Relihan had tried to find were in the fortress, slowly gathering dust next to Ferraris. “There used to be a bunch of Ken's cars over in that warehouse,” Moss said. “Two Harleys were sitting back in there, too.” While all was not well behind the scenes, the company appeared outwardly
>>CONTENTS
"She called me up and said: 'Could I ask you how much you were claiming in profit from this test?' And I replied, 'What are you talking about? There's no profit.’"
successful - both financially and technologically. This impression would help it score what could have been its largest and most impactful project. Instead, it would help mark its undoing.
A blockbuster deal Seven years before Netflix began streaming films, set-top boxes were being installed in a Salt Lake City suburb. They purported to offer to residents what we now take for granted, but that was revolutionary at the time - films on demand, beamed over the Internet. In July 2000, Enron teamed up with the largest video rental company in the world, signing a 20-year deal with Blockbuster to usher in the future of home entertainment. “We thought, ‘Enron is delivering electricity to everyone, and they say that they have the infrastructure in place to do the same thing with video content. And we have the connections with Hollywood studios,’” said a former Blockbuster executive, who requested anonymity. “So we got into this partnership with them - and you have to understand, Enron was the number one company in America. It was on the cover of every magazine, and they were the darling of Wall Street.” The exec remembers constant demands to move faster, and to sign contracts with film studios immediately. “There was this intense pressure from Enron, they were pummeling us all the time: 'Where's the content? Where's the content? Where's the content?' It was ratcheting up, and we're like, ‘We told you from the beginning that this is going to take a couple of years.’” Enron, focused more on external appearance than the actual product, would make matters worse. It announced the partnership early, against the advice of Blockbuster, before any studio deals were signed. It gave unrealistic timelines and made wild pronouncements, envisioning a billion-dollar business in a decade. Studios were particularly concerned
about digital rights management (DRM), which was still in its infancy. They were convinced that once a film was streamed over the Internet it could be copied for free.
"We had skunkworks guys, some really smart people researching how do we buffer the content into the device, keep it safe, make sure that it can't be decoded, can't be saved," Dunbar said. "This was pretty leading-edge stuff."
Questions also remained about the feasibility of the network to support it at scale - its Edge deployments were able to help, as were its advances in compression, but fiber still only went as far as the content delivery network (CDN).
Blockbuster originally believed that the partnership would move slowly, testing out the technology as the network grew to support it in the years to come. With the constant demands for immediate content, it became clear that that wasn't the case.
"And so, finally, it got to the point where we said to them, 'Look, this relationship isn't working, we need to break up,'" the exec said. "Of course, at the time, we looked like the loser company, and they were the big dog."
For Blockbuster, the strange ordeal would be the end of its early dalliance with video streaming. "I know it sounds odd at this point, but our attention shifted over to DVD.
"Videotapes are not nearly as easy to ship, so the advent of DVD dramatically changed the whole scene for us. At that point, we didn't really recognize the threat that Netflix could become."
The company tried to move on. "And then I got a phone call from a WSJ reporter, Rebecca Smith. She called me up and said: 'Could I ask you how much you were claiming in profit from this test?' And I replied, 'What are you talking about? There's no profit.'
"There was this silence on the phone. And then she told me that Enron had been booking profit off of it."
For the short period the partnership, known as Project Braveheart, lasted, it managed four pilot deployments in the outskirts of Seattle, New York, Portland, and Salt Lake City. It had roughly a thousand users, and a handful of those paid a small fee.
And yet, Enron told investors that it had made $110 million.
20 | DCD Magazine • datacenterdynamics.com
Braveheart was a product more of financial engineering than it was a technological achievement. While it was still in the earliest stages, the company entered a joint venture with nCube, owned by Oracle-founder Larry Ellison. In return for technology from the vendor, it gave three percent equity to nCube and the Enron-controlled investment group Thunderbird. This was the minimum necessary equity for Enron to be able to treat it as a separate entity for accounting purposes. The venture was then sold to Hawaii 125-0, which was created by Enron Broadband with the Canadian Imperial Bank of Commerce, with a valuation based on its projections over the 20-year Blockbuster deal. In essence, the company sold itself to itself and booked it as revenue, and then hid any losses. EBS CFO Kevin Howard and senior accounting director Michael Krautz would later be charged with fraud for the scheme, but it took a while before the full scale of the problem was uncovered. It would soon become clear that Braveheart was far from the only dodgy shell corporation.
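The mechanics of booking that $110 million are worth spelling out: mark-to-market accounting let Enron value a deal off its own long-range projections and recognize the result up front. The sketch below is a deliberately crude illustration of that single step - every input (subscriber ramp, fees, margin, discount rate) is invented for illustration, and none of it reflects Enron's, Blockbuster's, or CIBC's actual figures.

```python
# Toy illustration of how mark-to-market accounting turns a 20-year projection
# into immediate "earnings". All inputs are invented - they are NOT Enron's,
# Blockbuster's, or CIBC's actual assumptions.

def npv(cashflows, rate):
    """Present value of a list of year-end cash flows."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows, start=1))

subscribers = 1_000            # roughly the size of the real pilot
growth = 2.5                   # assumed yearly subscriber multiple
cap = 2_000_000                # assumed eventual subscriber ceiling
revenue_per_sub = 60.0         # assumed dollars per subscriber per year
margin = 0.25                  # assumed operating margin
discount_rate = 0.10           # assumed discount rate

cashflows = []
for _ in range(20):            # the 20-year term of the Blockbuster deal
    cashflows.append(subscribers * revenue_per_sub * margin)
    subscribers = min(int(subscribers * growth), cap)

print(f"'Fair value' booked up front: ${npv(cashflows, discount_rate):,.0f}")
# Prints roughly $100m - the same order of magnitude as the $110m Enron
# recognized, even though the pilot itself generated almost no revenue.
```

The point of the toy is only the shape of the move: pick an aggressive projection, discount it, and book the resulting valuation as income the moment a stake changes hands.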
The wheels come off

In February 2001, Kenneth Lay stepped down as CEO, with Jeff Skilling taking the top spot. He lasted six months in the role, before abruptly resigning, spooking investors. At the same time, executives were struggling to hide mounting disasters. Enron Broadband reported losses of $102 million. Growing cost overruns at a delayed power plant in India (that Enron had already booked profits for) were making debt harder to shift.

Journalists like Smith and Bethany McLean began asking questions, and analysts finally began updating their glowing reports with notes about growing debt piles. Enron employee Sherron Watkins wrote an anonymous memo to Lay after Skilling's departure laying out a number of possible accounting scandals that she had uncovered.

When contracts signed with mark-to-market failed, Enron had to then report it as a loss, but it didn't want to reveal how many of its deals had fallen apart. So it hid the debts in shell companies, offshore bank accounts, and complex schemes that included temporarily selling assets to co-conspirators to get through a quarter. But the jig was finally up.

The company's share price began to dip, putting pressure on a business model that only worked if the price went up. Enron then announced it was going to post a $638m loss for the third quarter and take a $1.2bn reduction in shareholder equity. Compounding matters, the SEC announced that it was opening an investigation.

As more stories on the scale of the fraud began to come out, Enron stock entered freefall. On December 2, 2001, Enron filed for bankruptcy.

World's best BOS

Unpicking the fraud from the failure is a difficult task. Some projects started with the best intentions, but failed, with the fraud only coming in later to cover up losses. Others may have been flawed from the start, with executives knowing they were destined for failure but simply not caring.

US prosecutors would spend countless hours trying to understand on which side of the dividing line to place Enron's Broadband Operating System (BOS).

"A year after I left, I got a call from the FBI," Relihan said. "They were focused on the Broadband Operating System and if it actually worked because that was the thing they were looking at as the pump and dump on the stock."

Relihan came into the Denver FBI office to meet the Assistant US Attorney. "At one point I said 'Do I need an attorney?' and he just looked at me across the table and said 'I don't know, do you?' And that's when I knew I was in trouble."

But Relihan was not there because he had done anything wrong. Far from it, they were curious about his emails criticizing BOS.

The operating system had been pitched to investors as a system that could talk to and connect to every part of the network through a single platform. "That was just never going to happen, because the only way that would work is if every service provider in the country allowed us access into their network management systems," Relihan said.

"It has to be able to speak every language that you can possibly imagine, because there's not an operating system out there that can talk machine language to transport equipment, fax, router, servers, and everything else."

He told this to the investigator, "and he goes 'so that was a fraudulent statement?' And I said: 'Oh, most definitely.'"

Vokoun concurred: "It all sounds great if you're not an engineer and you don't realize that crap can't actually be done that way."

The stink around BOS and Braveheart has led many to write off the whole broadband division as a con. "There was a government case that alleged Broadband as a whole was a fraudulent company that didn't work," Dunbar said. "I'll put my hand up, especially as the head of engineering, and say, 'No, it really did work.' It got caught in the mess and some of its assets were sold to special purpose vehicles, bought back, sold back again multiple times over, and fraud was committed on top of these assets. But Broadband worked."

I gotta get out of Houston

The rapid collapse of Enron hit people differently. The sudden change of fortune was hard to process.

"They went floor by floor and called everybody down to one end of the room and said 'You've got 10 minutes to get your stuff and get out,'" Moss said. "I took a break and went downstairs to clear my head and it looked like a rock concert there. People were coming out of that building crying, oh my god, it was horrible. And then all of a sudden, the cameras start showing up."

The implosion became the biggest story in the world, but most focused on the villains of the story, on the greed and the lack of regulatory oversight. For the thousands of workers without a job, a more pressing question was how to survive.

"4,000 people losing their job in one day, suddenly the real estate market collapses in Houston," Dunbar said. "I couldn't sell my house." He added: "It crushed my career for about four years."

Moss had a similar experience: "I ended up having to sell my house for less than what I owed just to get out of there, because I figured, man, I gotta get out of Houston."

Making matters worse, the company had encouraged employees to convert their pensions to Enron stock. Every year, as its share price soared, more bought in. When the valuation soured, they were left with nothing.

This long after the fact, most of the people we spoke to for this piece are looking back at the start or the middle of their careers. Many were able to get back on track after a few years, and slowly rebuild their lives. But the company's older employees, many of whom have since passed away, were not so fortunate. Those we spoke to knew colleagues who lost everything and were unable to recover. At least one turned to suicide.

Those left suddenly looking for jobs were forced to compete with colleagues amid a difficult job market. The bursting of the dot-com bubble a year before had investors cautious of backing Internet startups, the September 11 terrorist attacks unsettled markets, and the wider telco sector was going through its own downturn.

After a flurry of over-investment, dozens of debt-laden telecommunications companies began declaring bankruptcy, with around $2 trillion wiped off the market. About 450,000 telco jobs were lost around the world in 2001, and the industry
saw years of consolidation as it pulled itself out of a hole.
From the ashes But Enron did not disappear overnight. As investigators scoured through company documents looking to uncover the scale of the fraud, others were tasked with salvaging some value out of the carnage. “We had people lined up doing auctions for all this equipment and vehicles and all that stuff,” Moss said. “It was all going for five cents on the dollar.” PoPs around the world were picked clean, a European-based worker said. When it became clear that staff weren’t going to be paid, equipment disappeared. "We sold off what we could," Hanks recalled. "My DC and Network Operations Center in Portland wound up going to Integra Telecom, and they're still running their network off of that today.” Fiber routes were sold off piecemeal, and a fiber swap deal with Qwest was unwound (the deal was itself subject to investigation as the companies had both recorded the deal as a profit). Hanks also launched an unsuccessful bid to raise money to buy as much of the bones of the business as he could. Even now, there’s still Enron Broadband fiber out there, unlit. “There’s almost 3,000 route miles of fiber across the American West that is just sitting there that no one is using or can use because it’s still wrapped up with the various counties and operator tax authorities,” Hanks said. But perhaps the biggest asset to be sold, and the one that remains the most impactful today, is Enron’s data center in Nevada. Designed with some 27 carrier connections, it became the first data center operated by Switch. Switch declined to comment for the piece, but in a tour of the data center last year an employee told DCD the company’s foundational story: “[CEO] Rob Roy turned up to the auction for the Enron data center and put in a low bid. They called him up later saying he was the only person to bid, so he retracted the offer and resubmitted one that was even lower.” Dunbar remembers it slightly differently, without the added drama of the double bid, but confirmed that the facility was sold for a fraction of its value. “I was the guy that sold it to Rob.” Dunbar said: "He was really smart to see
the opportunity and how overbuilt that data center actually was.” A facility that cost millions and took years to build was sold for just $930,000. 'The Core' campus, as it is now known, has changed dramatically since the acquisition in 2002. The site is unrecognizable from the original facility and is expected to support 495MW of IT load after its latest upgrade. But its location owes everything to Enron. That early win allowed Rob Roy to break into the sector for a fraction of the usual price, which he then used to build a huge data center empire. In late 2022, Switch was sold to DigitalBridge and IFM for $11 billion. Over a few years, all that was left of Enron Broadband that could be sold was flogged off. “It was pathetic compared to what we thought the valuation of Broadband was,” Dunbar said. “We sold it for tens of millions.” Those we spoke to discussed another EBS legacy that is harder to quantify than buildings, fiber, and dollars: connections. Many would go on to work together, start businesses, and keep some of the ideas that they formulated at the company alive over the years to come. The bankruptcy brought those who lived through it together, and their work would help shape the digital infrastructure industry as it came out of the dark days of the early 2000s.
What could have been For all its failures and the systemic corruption at the core of Enron, most of those former employees were convinced that EBS could have worked. "Every build I did made money,” Hanks said proudly. “We deployed 144-fiber cables with one extra conduit, and we only kept 12 fibers. We sold the rest to other people to pay for the cost of everything else we were doing. We made an enormous amount of money off of the construction phase. “When I left, we were really close to some major contracts with CBS. If things had not gone sideways, I believe we would have had the contract for the 2003 Super Bowl.” He added: “All we needed was time, and that's the one thing we didn't have.” Hanks takes solace in one of the things that time has provided - proof that at least some of the ideas were correct. “This is the future we saw, this is where we were going. So my horse was shot and rendered
into glue, but we got here, by God, and so I take a lot of joy in that.” It’s impossible to know whether there was a contract, a merger, or an idea that could have helped Enron Broadband survive. Fraud was at the heart of the business, it was not just an unrelated activity that helped fund the company. Juicing the share price was the business, and it required constantly feeding the market new, ever more outlandish, ideas, and using accounting trickery to assign revenue to them. Perhaps the version of Enron Broadband that was capable of bold visions and aggressive business moves was only able to exist because the pressure was on ideation and not execution. “It seemed like they were just a big hype machine,” Vokoun said. “Someone's got an idea, let's hype it up, prop up the share price.”
Here we go again The rise and fall of Enron Broadband is a story of an industry gripped by delusions, a company with a business model based purely around enticing investors instead of making a profit, and unproven and impossible technologies sold on a dream. It is not a unique tale. Former Enron employees can’t help but see parallels between their time at the turn of the century and this age of AI. The base infrastructure of artificial intelligence has value, just like Enron’s network in its day; and the overall trend of the market may prove correct, just as telecoms and data center businesses have thrived since the fall of Enron. But the mania in the market, the hyperbole gripping every company announcement, and the speed at which everyone must move echo the chaos that led to the telco winter. Those who lived through it are left wondering not if the next Enron is being built right now, but rather when it will collapse. “I was talking to [a friend working in the AI sector], and it was very, very reminiscent of the telecoms boom that was going on in the late ‘90s and early 2000s,” Relihan said. “It’s going to settle down to three or four players at the end of the day, but there's going to be a lot of stuff going on until that point where people are going to make a lot of money, some that are doing fraudulent stuff, and some that will get caught. “It's like, ‘Okay, here we go again.’”
CyrusOne's CEO on the age of AI
Sebastian Moss Editor-in-Chief
Eric Schwartz on 300kW racks and doubling in size
As the industry doubles down to meet an unprecedented wave of artificial intelligence demand, data center executives are faced with two fundamental questions.
The first is obvious: How do I get in on the action? But the second is harder to answer: How long will it last? It's clear that we are in the midst of a frenzied hype cycle, but it's harder to predict the length of the cycle and where the market will end up.

For Eric Schwartz, CEO of hyperscaler-focused CyrusOne, a form of aggressive moderation is pivotal to success. "Even though we're investing a significant amount of capital, we're very fortunate to have deep relationships with our largest customers who are a major portion of what you were referring to as the hype cycle, but I refer to as the volume growth of the industry."

Speaking at the company's London offices, Schwartz said that the close connection to the tech giants allowed it to be "thoughtful about capital so that, however things play out, we will be well positioned to continue to drive growth without finding ourselves overextended."

He admitted that "everyone's concerned about how much is hype and how much is real," and noted the previous hype cycles he and others in the company had survived. "But I'm very comfortable that what we're building and investing is tied to the tangible part of [artificial intelligence]."

Schwartz came to CyrusOne after a 16-year stint at Equinix, hoping to bring the steady growth of the rival data center firm with him. Now a year into his tenure, he also appears set to bring executive stability to a company that went through four other CEOs in a three-year period.

"We've made a lot of progress in a year,
and I'd say it sets us up for a very ambitious trajectory in the future,” he said of the KKR and Global Infrastructure Partners-backed business. Key to that future is, of course, AI. “It has clearly brought a level of demand, ambition, and opportunity above and beyond what was being discussed and contemplated a year ago,” Schwartz said. To capture this demand, the company this summer announced ‘Intelliscale,’ a broad brand term for a suite of products that allow the company to support up to 300kW per rack. Intelliscale uses modular manufacturing and the company's zero-water design, and enables customers to utilize liquid-tochip cooling technology, rear door heat exchanger cooling systems, and immersion cooling. Schwartz admits that much of the technology and work on Intelliscale predates him, but “over the past year we have consolidated and synthesized that experience, in conjunction with the dialog we were having with customers about what their future requirements were, and moved that knowledge from a bespoke discussion to an organized and more repeatable and productized model.” He doesn’t think that most customers will need to go as high as 300kW, but that by setting the benchmark high “it almost takes the density question off of the table.” In the past, he said customers would ask if they supported a density, then a few years later ask for a little more, and a little more. “I hope what we've accomplished by putting 300kW out there is to say ‘we can support your requirement,’ and then we can get into subsequent discussions.”
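For a sense of why a 300kW rack pushes past air cooling, a simple heat balance (Q = m-dot x c x delta-T) shows how much coolant a single rack at that density needs. The sketch below is illustrative only: the temperature rise and loop assumptions are ours, not CyrusOne's Intelliscale design figures.

```python
# Rough heat-balance sketch for one 300kW rack on a water loop.
# Assumed values (delta-T, coolant properties) are illustrative, not Intelliscale specs.
rack_heat_w = 300_000          # 300kW of IT load, all rejected to the loop
cp_water = 4186                # J/(kg*K), specific heat of water
density_water = 1.0            # kg per liter

for delta_t in (10, 15, 20):   # coolant temperature rise across the rack, in K
    mass_flow = rack_heat_w / (cp_water * delta_t)        # kg/s
    liters_per_min = mass_flow / density_water * 60
    print(f"dT {delta_t:>2} K -> {liters_per_min:,.0f} liters per minute per rack")
# At a 10K rise this is roughly 430 L/min of water for a single rack - flow
# that has to be piped, pumped, and leak-managed inside the white space.
```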
Schwartz demurred on sharing what he thinks the AI average rack density will be, citing proprietary agreements and the early days of the deployments.
"But these artificial intelligence and GPU-intense applications do represent a step function in density. We used to debate 'is it four kilowatts per rack, or is it going to be five, six - maybe even eight?' The discussions have clearly progressed well beyond that."

This dramatic increase in density, which has been ongoing for years but is now accelerating, is "changing the design principles and ethos of the data center," he said. Intelliscale represents CyrusOne's bet on that future.

Denser racks also mean a change to wider facility design due to the limitations on how much power can be brought to the building. CyrusOne expects Intelliscale data centers to be a quarter of the size of traditional ones, depending on the application. "Even though these are smaller buildings on a relative basis, they're still both large and expensive," Schwartz said.

That is a problem, given the grid limitations data center companies face in much of the world. "This is really a global situation, but requires local solutions, because the solution is very specific to the local conditions."

For now, CyrusOne is still targeting the same major metros for its AI-focused facilities as for its broader data centers. "It's incorrect to assume that latency is not a factor for [AI training] data centers," Schwartz said.

"It's just a different type of latency to what we've seen in the past: The models currently indicate that their latency optimization is different than what it has been for the Internet, which is proximity to end users. Training models are far more focused on proximity to data and resource."

That view will evolve over time, as latency elements are better understood and the market matures, but for now CyrusOne continues to invest in the largest markets, "driven by where we can identify the power."

The company continues to expand in North America and Europe, and in May entered Japan in a $7 billion joint venture with local energy firm Kansai Electric Power Company.

"There's a lot of resources and presence that KEPCO brings us there that gives us a lot of confidence and enthusiasm for what we can do in Japan," Schwartz said.

Globally, the exec expects the company's IT capacity to "double over five years, which is a rough growth of 20 percent per year," he said.

"We could do more than that, but that's the trajectory that we're planning to. And we've been on that trajectory for a couple of years already - before I joined - so it's not a hockey stick. I'm very comfortable in our ability to grow at that level."

Many of the major players in the market have targeted similar double-digit growth, even as newcomers have flooded into the sector. "You have to take into account, more so for our competitors than us, but some of the capacity they're deploying is displacing existing capacity with enterprises moving data centers out. But yes, the potential scale of the data center industry will be substantial."

But, he noted, data centers only exist to support other business activities. "So, assuming the data center industry does get to that size, the technology value that's coming out of it should track right in line. I can remember what the world looked like before the iPhone, and yet now we consider it an indispensable portion of our lives.

"Whether we actually end up with AI appliances or not is sort of secondary to all of the benefits expected to be delivered on that infrastructure."

As we stand on the cusp of another period of breakneck growth for an industry that expanded manically during the worst of the pandemic, such expectations fill Schwartz with confidence.
“There's always this expectation that things will level off,” he said. “And that just hasn't happened."
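As a quick sanity check on Schwartz's "double over five years, roughly 20 percent per year" framing: adding 20 percent of the original base each year does double capacity in five years, whereas compounding at 20 percent would overshoot; the compound rate for an exact doubling is closer to 15 percent. The arithmetic below simply makes the distinction explicit.

```python
years = 5
# Compound annual rate that exactly doubles capacity in five years
cagr = 2 ** (1 / years) - 1
print(f"Compound rate to double in {years} years: {cagr:.1%}")               # ~14.9%
# Adding a flat 20% of the original base each year (simple growth)
print(f"Simple growth, 20% of base per year: {1 + 0.20 * years:.2f}x")       # 2.00x
# Compounding 20% per year instead
print(f"Compounding 20% per year: {1.20 ** years:.2f}x")                     # ~2.49x
```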
Power and Money: Thoughts from Digital Realty’s CEO
Peter Judge Executive Editor
We meet the deal-maker who runs the world's largest data center company

Are data centers a tech sector, or a real estate and finance business? The answer is both, of course, but opinions vary on which is the most important of the two essential components of a new facility.

Andy Power is not a tech specialist. He's come in through finance, but he has strong opinions on the potential for AI, and the shift to hyperscale - and he has some ideas on how to play the data center game.

Power ascended to the CEO's seat suddenly at the end of 2022, replacing Digital's long-term leader Bill Stein, who had been with the company since 2004, and led it since 2014. Stein was reportedly "fired" from his role, and COO Erich Sanchack was terminated a month later. No reason for the sudden change in leadership has been given - and Power didn't offer one when we spoke to him.

Power was appointed by the board immediately, and it's natural to read this as a move to dispel any uncertainties with a safe, immediate internal replacement. Finance leaders are generally seen as a steady hand, and Power has been involved at Digital nearly as long as Stein.

He joined Digital from Bank of America Merrill Lynch in 2015 as CFO, but his roots at Digital actually go back much further. "I joined the company about eight years ago, but I came across it almost 20 years ago, at its origins. I was a financial analyst on the IPO."

Power says he was then a comparatively junior analyst, "literally building the model in Excel and fetching the coffee." He was there at "the dawn of the first data center REIT, although at that time it was barely a data center REIT."

For the next decade or so, his path regularly crossed with Digital Realty. Working for Citigroup and Merrill Lynch, he raised around $30 billion of capital for various clients (including Paramount Group, the largest REIT IPO), and handled $19 billion of mergers and acquisitions. Pretty much every time Digital Realty raised public capital, Power was in the room, until he joined the company full time, as CFO, then president, and finally CEO.

Fast and furious
It’s an interesting time to make that step, he says, with big changes in the financial climate, and demand for capacity: “I would say 2023 has been quite the year overall for our world, and quite the year for data centers. It feels a little bit like whiplash on so many fronts. It's been fast and furious, to say the least.” He explains: “The capital market volatility, the interest rates, the capital supply chains are moving against us, and at the same time demand has accelerated again and again.” Eight months into the job, he praises his team, and sets goals that are organizational and financial: “Strategy-wise, balance sheetwise, to essentially accelerate growth for our customers and our stakeholders.” He wants to strengthen the customer value proposition, focus on parts where Digital can add value for enterprise and hyperscale customers: “Greater pricing power, and organic growth. At the end of the day, too, we need to innovate and integrate.” The integration he speaks of is tying together a diverse set of data center
properties that have been acquired and merged: “We're a product of years and years of M&A. We are building a global platform across 50-plus metropolitan areas, six continents, and 5,000 customers. We have been doing a lot of work tying our systems together, removing the internal friction for our internal customers to help our external customers.”
The company structure has evolved, with Steve Smith from CoreSite coming in to run a newly-founded Americas region. As a former chief revenue officer at CoreSite, Smith also comes from the finance side of data centers. "We had an APAC region, but we never had an Americas region," says Power. "It's to provide consistency for our customers, but also local differentiation - and that's been key on the balance sheet."

There is also a technology side. Power wants to use the company's ServiceFabric orchestration platform to create facilities with higher density and performance to meet new use cases. The most important of these is the growth in AI. To support this, the company announced an AI-ready data center in Japan. The KIX13 facility in Osaka is certified ready for Nvidia DGX H100 systems. This, he says, is just the start of a trend.

The eve of AI industrialization

"The short and sweet on AI is this: it is unlike a lot of the bingo buzzwords that have hit this industry, be it Edge, or crypto, or 5G, or augmented virtual reality. I'm not dismissing them, but artificial intelligence for data centers is real growth in demand."

And it is just beginning: "We've been supporting AI workloads in our data centers for some time, but it's been a small minority of our existing book of business. That's changing. We're seeing more of it pop up in large-scale training models."

He says AI is not a "fad that comes and goes" and hopes it is harnessed in an "environmentally friendly, societally proper way." And this "early innings" of AI is coming "at a time where the fundamentals are already really in our favor. Given demand remaining robust and supply being constrained due to power generation and transmission, I think this is going to be another wind in our sails for years to come."

It is happening while the industry is still benefiting from earlier waves of data center development.

"I visit customers and they have server closets, and their own data centers," he says. "They haven't moved to hybrid IT, they haven't moved to multi-cloud."

So the cloud is a massive piece of the business, and it's still growing, he says: "And then AI is this wave on top of it. So these waves and waves of demand keep coming. I don't see a world where we have to empty out our 300-plus data centers to put AI versions in there. I think there'll be matches of GPUs and CPUs with adjacency, and we'll have some new builds dedicated to AI."

That's happening because "the AI is using datasets that are living within our four walls today. That proximity and connectivity to those datasets is going to be important just like the models that AI is running."

In the long term, he envisages a world where new workloads go 50/50 between GPUs and CPUs. "I'm not saying that's all AI necessarily. When you combine that with the growth, that means you basically have an end state where five percent of CPUs are moving to GPUs essentially."

"We're just getting started. It's almost a toy at this stage of the game. It is really on the eve of true industrialization."

A bigger boat of money

Responding to that, he says, will take money: "On the balance sheet, we really need a bigger boat. We need to diversify and bolster our sources of capital." He says that bringing in private capital partners will increase Digital's "runway of growth."

When we spoke, Power was celebrating a "big, big July." It entered joint ventures with GI in Chicago, and with TPG in Virginia, alongside the biggest news - a partnership in India where local giant Reliance came in with Digital's regular partner Brookfield. "The Indian market is tremendously large and we really wanted someone with boots on the ground and a deep enterprise outreach in the Indian market," he says.

Power explains further: "We have a view that we're very good at delivering for our customers the design, build operations of data center, and connectivity infrastructure. But we also are very sure that this is a very localized business. And there are certain parts of the world where we just feel we're better partnering with different capital sources or strategic partners."

That's not a new idea, he says: "It's our heritage. When we went to Japan we initially went alone, and then we ultimately entered into a partnership with Mitsubishi called MC Digital Realty that we own 50/50."

In Latin America, he says, "We found a business that had a lot of heritage on the ground there called Ascenty."

In buying Ascenty, Brookfield was a natural partner, he says. Brookfield was called Brascan [Brazil-Canada] Investments until 2005, and it was founded in 1899 as the São Paulo Tramway, Light and Power Company.

"We did the same playbook in South Africa," he says, referring to Digital's 2022 purchase of a majority stake in local operator Teraco.

He describes these as "strategic partnerships," where Digital brings the data center expertise, and "our partners are not only bringing capital and ownership, but they're bringing extensions of our salesforce and supply chain, and local know-how."
Passive majority partners

There's another kind of deal, where Digital brings in what he calls "passive financial partners" to back big, fully sold or "stabilized" hyperscale developments. "There are parts of our campuses, where we have filled capacity, we have long-term contracts with our largest highest credit quality customers. And we were recycling capital out of that."

A deal with Prudential in Singapore, that set up Digital Realty CORE REIT there in 2013, followed that model, as did this year's GI and TPG deals in the US. "They are both examples of passive majority financial partners that invest alongside us so that we can recycle that capital into the future for our customers," he says. "We can buy larger land banks, have longer inventory runways, and really futureproof our customers' growth."
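To make the "recycling capital" mechanism concrete, here is a toy version of the math behind selling a passive majority stake in a stabilized campus. Every figure below (development cost, income, cap rate, stake size) is invented for illustration; none of them are Digital Realty's numbers.

```python
# Toy illustration of recycling capital out of a stabilized hyperscale campus.
# All figures are invented for illustration - not a real Digital Realty deal.
development_cost = 800e6        # assumed cost to build and lease up the campus
stabilized_noi = 80e6           # assumed annual net operating income once fully leased
cap_rate = 0.06                 # assumed valuation yield for a long-leased asset
stake_sold = 0.80               # passive majority stake sold to a financial partner

asset_value = stabilized_noi / cap_rate          # ~$1.33bn
proceeds = asset_value * stake_sold              # ~$1.07bn back to the developer
print(f"Asset value at stabilization: ${asset_value/1e9:.2f}bn")
print(f"Proceeds from selling {stake_sold:.0%}: ${proceeds/1e9:.2f}bn")
print(f"Capital freed vs original cost: {proceeds/development_cost:.2f}x")
# The operator keeps a minority stake (and the operations) while recovering
# more than its original capital to redeploy into new land banks and builds.
```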
Wholesale colo and the cloud

The industry seems to be shifting to larger units provided to cloud operators, and those operators are more likely to be built-to-suit and fully handed over to the customer. "The pendulum kind of goes back and forth on that," he says, explaining that Digital deals with the top cloud providers, alongside everyone else.

"We have 5,000 customers, and we're
adding 130 or 140 every quarter, but our top customers are the biggest cloud service providers or hyperscalers in the world, that are with us in 20-50 different locations."

For those big players, he says, "not only are they our top customers, we are their top providers. We supplement, when they do some self-builds."

He says he's "proud" of Digital's strategy: "We didn't go all hyperscale like many of the private companies (or those recently taken private) that are really catering to just a small handful of customers. And we didn't go pure colocation or connection. It's really just ourselves and Equinix in that arena on a global scale."

Catering to the full spectrum, he thinks there are benefits of colocation close to the cloud: "There's synergies for our customers, in this virtuous cycle of hybrid IT."

He describes Digital's Ashburn facilities, where "we have multiple buildings, some with colocation suites, with many, many customers and cross-connects, and then we have dedicated data halls and even dedicated buildings."

"We really tried to bring the puzzle pieces, the ingredients where they're needed for data center infrastructure," he says. "In some markets, the big customers want network-oriented deployments, in other markets they want multiple megawatts in dedicated suites and shared buildings, in other markets they want dedicated buildings with a runway for growth."

All this, he says, is "essentially just another version of the old school colocation model just at a massive, massive scale. At the end of the day, sharing a campus is no different to sharing a hall."

The biggest advantage is at the physical level, he says: "The land, the substation, the supply chain, the generators, the HVACs. There's a customer synergy too, because our enterprise customers are consuming cloud, connecting from one of our suites to another, via physical cross-connects, virtual cross-connects or ServiceFabric."
Don't do everything

There are limits to all this: "We don't believe we need to be in every major metro in the whole world," he says. "We recycle assets out of markets which we don't see as core to us. So in Germany, we're really concentrated on Frankfurt and Dusseldorf. In France, it's Paris and Marseille."

In the United States, he says: "Hybrid IT is clustered into major hubs. Philadelphia is between New York and Washington DC, and it didn't really remain an active data center market, even though there's lots of enterprises and CIOs and CTOs and IT associated with it. Those workloads went to
New York or went to Ashburn, Virginia.” He says the major metros let the company cater to small and large IT enterprise customers: “Just because we're in colo, I’m not saying we need to chase every server hugger.”
Chasing net zero

Turning to the sustainability agenda, he says: "I think net zero is obtainable for our industry and I'm proud to raise the bar, because it's so important for our customers and the broader societal impact."

The size and scale of the industry gives it opportunities and responsibilities, he says. "If we weren't in the full business model with hyperscale, we would be a fraction of our size and we wouldn't have the ability to go with some of these power purchase agreements that essentially bring more green power to the grid."

He goes on: "We were very early on in this initiative, before it really became in vogue. And honestly, I'm very proud of our accomplishments on multiple fronts. We've put out science-based targets which we've made great progress on.

"We've been really moving the needle when it comes to renewable energy which, in my opinion, is going to be the largest gating item to net zero."

Digital has a lot of power purchase agreements (PPAs) for low-carbon energy: "We take a different approach to some of our competitors - we don't really do RECs [renewable energy certificates], or financial derivatives."

A few percent of Digital's capacity is covered by RECs, where there is no option: "Usually we do them where a customer is insisting on that requirement. But I want a way that provides true additionality to the grid - and the financial derivative market can be used and abused."

PPAs are long-term contracts to establish the creation of new solar and wind farms, but it could go further. "In some less sophisticated markets, we'd invest in the actual solar farms," he says. He likes the idea that "when I'm signing a contract for a wind turbine, or a solar farm, I can go visit it myself. I know that we're doing something that's really making a difference.

"So that's a long-winded way of saying I'm proud of our accomplishments - but we're not done yet. We're not 100 percent green, but we're charting a path to get there."
Diversity and skills

Rapid expansion has brought a skills shortage, he says: "This industry has grown from the backwaters to global asset class at an aggressive pace. The size and scale this industry - and this company - has come to in 20 years is phenomenal. And it has outstripped the talent capabilities both at the top and the bottom of organizations."

More diverse hiring will improve the talent pool, he says, and "I'm proud of what we've done at both ends but I also think we've got continuing more work to do."

Mary Hogan Preusse is chairwoman of Digital's board, and remains one of a very few chairwomen in the Standard & Poor's 500, says Power.

"What keeps us up at night is how we bring more talent into our industry through the ranks all the way down to the engineers that are walking the halls of our data centers," he says.

To address that, he says, "we've tapped into numerous employee resource groups, be it veterans groups, diversity groups, to bring more talent into the pool."

He's pleased with progress, given where the industry has come from, but still the new diverse hires don't match the growth: "We're bringing on new capacity, and it's outstripping the folks that raise their hand and say I want to go work in the data center industry. I think we as an industry have a lot of growing up to do and have to solve that problem at scale."

He admits that the industry is short on diversity in front-line staff: "We need to do more. We've got to tap into technical schools and engineering programs."

Within the company, he says "we really try to organically create employee resource groups around each of the diversity categories, and spend executive time with those groups to champion the inclusion and diversity of our workforce - and making our 3,000 plus employees the ambassadors to bring more talent into the company."

He reckons he can learn from being with different kinds of people: "When I somewhat reconstituted the leadership team here, it included a mix of internal promotions, and outside talent, with a diversity of backgrounds, international and domestic.

"My roots come from finance, and I certainly want to surround myself with people that fill the holes in my background, and push me, on the technology front, on the infrastructure front, on the environmental front. Different folks bring more to the table."

So how does Power play the data center game?

"The team at Digital is not done yet," he says. "We didn't head for the locker room at the halftime of 2023 for good. We're back on the field ready to deliver some more great wins."
Sponsored by
Cooling by Design Supplement
INSIDE
Seeking efficiency and simplicity

Nature's cool enough
> Cooling data centers with natural resources
Air is always there
> Liquid cooling is coming, but air cooling will always have a role
Cleaner cooling water
> A smart way to make data centers use less water and power
Contents

32. Nature's cool enough - Can we realistically cool data centers using the resources already available to us?
36. Advertorial: Sustainability, scalability, and serviceability - The key criteria in choosing a liquid cooling solution
38. Air cooling will never go away - We need liquid cooling, but it will never completely replace air
41. Keeping your water clean - Electrolytic descaling could help cooling systems use less water
Designs on better cooling
Cooling differs from a lot of other data center disciplines. It's not about doing things bigger or faster. It's about doing things more simply and more efficiently.
Overcooling is a big trap to avoid. Specifying huge air conditioning systems, and driving the white space temperature down to an uncomfortable chill, is a dated mistake. But at the same time, other parts of the ecosystem really are pushing the boundaries, with chips driving to higher power levels, and designers packing ever-higher densities into racks.

This supplement picks a few angles to examine how cooling systems can keep pace with the rest of the data center, and still tell a better story: more efficient, and less demanding of the planet.

The best things are free
Free cooling has been a popular option for some time. In many places, you can run data centers without mechanical cooling for large parts of the year, using the temperatures nature provides.
Free cooling with outside air is well known, but many sites are using the cold water in their local environment to provide a steady low temperature, that is more reliable than the air. One data point: a barge-borne data center in California kept its PUE to 1.18 during this year's heatwave, thanks to river water.
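For readers outside the metrics weeds: PUE is total facility energy divided by IT energy, so a PUE of 1.18 means the barge spends only 0.18W on cooling, power conversion, and other overhead for every watt that reaches the IT gear. A couple of lines of arithmetic show the overhead share.

```python
pue = 1.18                          # total facility energy / IT energy
overhead_per_it_watt = pue - 1      # cooling, power conversion, etc.
overhead_share_of_total = (pue - 1) / pue
print(f"{overhead_per_it_watt:.2f} W of overhead per watt of IT load")
print(f"Overhead is {overhead_share_of_total:.1%} of everything the site draws")  # ~15%
```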
Air is always there

Liquid cooling is inevitable. There is simply no way to cool the rack densities that are coming without it. It offers better efficiencies and side-benefits like silent running, reduced floorspace, and a saleable by-product: hot water.

So why aren't we using it all the time, already? The truth is that a new technology always has to coexist with the incumbent. And air cooling is so entrenched, and so well understood, that it's going to be around for a long while - maybe forever.

Given that, what will hybrid cooling systems look like? And will they provide the best of both worlds, or fall between two stools, leaving us worse off than either?

The answers are not simple. There's still a lot of work to be done at the level of integrating it with the rest of the data center ecosystem. The fact that this work is now being done at speed is all the confirmation we need that liquid cooling really is coming.

You need clean water

Chilled water loops play a big role in traditional data centers - and face criticism for consuming and contaminating water. Limescale in heat exchangers is an annoyance, but it seems you can avoid it without chemicals, slashing energy and water use.

As we said: make things simpler and more efficient.
Air cooling will never go away
We need liquid cooling, but it won't replace air
Peter Judge Executive Editor
Liquid cooling is supposed to be driving a revolution in heat management within data centers. The old way, air cooling, is on the way out, we are told. It must go, to make way for a world where servers are cooled with water, dielectrics, and other fluids.

In real life, revolutions are rarely so neat and tidy.

There is no doubt that the densities of servers in racks are reaching the point where some of them can no longer be cooled efficiently with air. And liquid cooling has a vast set of benefits, including increased efficiency, improved exclusion of dust and dirt, and quieter operation - and it delivers waste heat in a form where it can be used elsewhere.

But still, air cooling vendors have a backlog of orders that show no sign of diminishing, and new data centers are still being designed around chillers, HVACs, and other air-cooled equipment. How do we explain this? And how will today's air cooled environments coexist with tomorrow's liquid cooled systems?

Palette of cooling

The story that air will give way to liquid cooling is wrong on two counts, says specialist cooling consultant Rolf Brink, the Open Compute Project lead for liquid cooling: "Air cooling will never disappear. And it is also incorrect to say they've always been air cooled. It's not a battle about which technology will be left at the end of the road.

"You have to look at the IT equipment and see what it needs," says Brink. "IT equipment has various requirements for cooling, and this is where the palette of cooling technologies that you should be considering is greatly enriched these days.

"Cold-plate is becoming mainstream this year or next," says Brink. "Immersion is going to take a few more years before it becomes mainstream. But not all IT equipment is suitable for immersion or cold plate or air cooling alone.

"That is the big paradigm shift," he says. "We're going to see more hybrid environments where the underlying infrastructure and facilities can cater to both air and liquid cooling. And that is what the industry needs to get prepared for."

"We're in this transition phase, where we see both extended demand for air cooling, and a lot of newer liquid cooling requirements coming in," says Stuart Lawrence, VP of product innovation and sustainability at Stream Data Centers. "So we find configurability is the most important thing right now."

As a data center operator, Lawrence has to deal with what his customers - mostly large players taking a whole building at a time - are ready for: "We're seeing some customers playing around with some liquid cooling direct to chip, either single phase fluids, or phase changing fluids or cold plates. We aren't seeing a lot of immersion."

In this world, 10 percent of the racks in a data center can move to liquid. "You have a cooling architecture that can cool 90 percent air cooled servers, and gradually convert this data center to more and more liquid cooled."

The air perspective

Air-conditioning vendors admit that things must change. "At some point, air cooling has its limitations," says Mukul Anand, global director of business development for applied HVAC products at Johnson Controls. "There's only so much amount of heat you can remove using air."

As he explains, it takes a lot of air to cool a high-energy chip: "The velocity of air becomes very high, noise in the white space becomes a challenge and the server fan power consumption increases - which does not show itself in the PUE calculation."

He sees direct-to-chip, immersion, and two-phase cooling growing, and notes that air-cooled systems often have a water circuit, as well as using water in evaporative systems. Data centers are trying to minimize water consumption while switching off compressors when possible, and water cooling inside the white space can make their job easier.

"We've seen a distinct shift of municipalities and communities away from using water for data center cooling," says Anand. "A shift from direct evaporative cooling technologies towards either air cooled chillers or water cooled chillers and dry coolers."

As liquid cooling comes inside the white space, he says: "We have to make sure we completely understand the fluids that will be used (water, glycol, etc.) and make sure that we converge on an agreed liquid cooled server technology, and use economization as much as possible.

"One of the direct consequences is to use the chilled fluid temperature as high as the IT equipment will allow. 30°C (86°F) is being looked at as a median number. That is certainly higher than the chilled water fluid used in data center air cooling systems today."

Air cooling systems will have to adapt, he says: "We must launch and use products that are as comfortable and efficient providing chilled fluid at 30°C."

With that temperature in their cooling systems, data centers can spend more time on free cooling using outside air. "That allows for a whole lot of hours in the free cooling method where the compressors do not consume any significant amount of power. In places like Loudoun County, Virginia, and in Silicon Valley, we're using as much economization as possible."

In a best-case scenario, many of the liquid cooling scenarios defined by ASHRAE rarely need chillers and mechanical cooling, and those chillers will become a backup system for emergencies, says Anand: "It is for those warm afternoons. You must have a generator for the times when you don't have power. You must have chillers for the few hours in a year that you cannot get economization to do the cooling job for you."

Those chillers could still be challenged, he says, because as well as running denser white space, "owners and operators are leaning towards multi-story data centers." These chillers will need to be built with a greater concern for the carbon embodied, both physically and in their supply chain: "If you're using less metal and lighter pieces of equipment, the carbon generated through the fabrication and shipping processes is lower."

Chillers are placed on the roof of the building, and this means they are packed together tighter, causing a "heat island" problem: "When condensing units or chillers with condensers are spread far apart on a roof, one is not influenced by the other. When you have 32 or 64 chillers close together on a rooftop space, the discharge air from one goes into the condenser of the next one, adversely impacting its efficiency and capacity."
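Anand's point about air velocity and fan power can be put into numbers with the same heat-balance formula used for any coolant, Q = m-dot x c x delta-T. The comparison below uses a hypothetical 50kW rack and a 10K temperature rise for both fluids; the figures are ours, for illustration, not any vendor's design point.

```python
# Mass and volume flow needed to carry away 50kW at a 10K temperature rise,
# comparing air and water. Illustrative figures only.
heat_w = 50_000
delta_t = 10                      # K rise across the rack

cp_air, rho_air = 1005, 1.2       # J/(kg*K), kg/m^3
cp_water, rho_water = 4186, 1000  # J/(kg*K), kg/m^3

air_kg_s = heat_w / (cp_air * delta_t)             # ~5 kg/s
water_kg_s = heat_w / (cp_water * delta_t)         # ~1.2 kg/s

air_m3_s = air_kg_s / rho_air                      # ~4.1 m^3/s
water_l_min = water_kg_s / rho_water * 1000 * 60   # ~72 L/min

print(f"Air:   {air_kg_s:.1f} kg/s = {air_m3_s:.1f} m^3/s ({air_m3_s * 2119:,.0f} CFM)")
print(f"Water: {water_kg_s:.2f} kg/s = {water_l_min:.0f} liters per minute")
# Moving thousands of CFM through a rack means high velocities, noise, and fan
# power that is metered as "IT load" - which is why it never shows up in PUE.
```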
Extending air cooling Back inside the white space, Lawrence sees a lot of liquid cooling implementations as simply extending the air cooling provided in the building: “It's direct liquid to chip, but the liquid goes to a rear door heat exchanger or a sidecar heat exchanger.” Precision cooling from companies like Iceotope, where servers remain in regular racks, and liquid gets to the specific parts which need cooling are a mid-point between direct-to-chip or cold plate, and the more extreme idea of total immersion in tanks sold by the likes of GRC and Asperitas. Direct-to-chip and precision liquid cooling products can be installed in an air cooled environment, says Lawrence: “They reject heat by means of an air-to-liquid heat exchange system within an air cooled data center.” That may be disappointing to liquid cooling revolutionaries, but there’s a reason, says Lawrence: “Most colocation facilities aren't really ready to go direct liquid.” He sees liquid cooling as additive, where it is required: “I think we will get this extension of air cooling where they will take 10kW racks and make four rack positions into 40kW racks.” Those high-density racks have an extra heat exchanger or “sidecar.” “In the last 10 years, the majority of the products that I've deployed are air cooled with an internal liquid cooling loop,” says Dustin Demetriou, IBM Systems leader for sustainability and data center innovation. “As far back as 2016 we were doing this in a financial services company because they had basically DX chiller systems with no chilled water, but they needed a high power rack.” “The great part about direct-to chip liquid cooling is that it uses the same IT architecture and the same rack form factor as air-cooled servers,” says Anand. “The cooling distribution units can be
in the white space, or sometimes in the rack themselves. Using this technology, transitioning at least a portion of the data center for the intense compute load can be done relatively quickly. When things move to immersion cooling tanks, there may be a division. Expelling the heat from an immersion tank into an air-cooled system might require the compressors to be turned on, or changes to the immersion system, says Anand. He explains: “The power that's consumed by the servers in the immersion tub gets converted to heat and that heat has to be removed. In a bath, we can probably remove that heat using warmer temperature fluid. And the lower temperatures that mandate the operation of a compressor are probably not needed.”
Losing the benefit

There's one obvious downside to this hybrid approach. One of the most vaunted benefits of liquid cooling is the provision of waste heat in the concentrated form of higher-temperature water. If the heat gets rejected to the air-cooling system, then it is lost, just as before.

Running the liquid bath at this lower temperature removes the benefit of useful waste heat. It's like a re-run of the bad practice of overcooled air-conditioned data centers.

"The sad part about it from a sustainability perspective is you are not raising any temperatures," says Lawrence. "So we're not getting the real sustainability benefits out of liquid cooling by utilizing this air extension technology."

Demetriou points out that there are still sustainability benefits: "If you look at it in terms of performance per watt, a product with 5GHz chips, if it was strictly air cooled, would have probably given half the performance. So you would need fewer servers to do the work. You're not getting
all of the benefits of liquid but I think you're getting a lot.” Demetriou also sits on the ASHRAE 9.9 technical committee, a key developer of cooling guidelines and standards: “This is an area we spend a lot of time on, because it's not all liquid or all air. There are intermediate steps.”
Funneling Another reason that all-liquid data centers are complex to imagine is the issue of “funneling,” getting enough power into the racks, says Lawrence. “If I take a 40MW, 400,000 sq ft data center, made up of 25,000 sq ft data halls, I can get all my electrical lineups to deliver 2.6MW to each data hall. If I start doubling the density to make that 400,000 sq ft data center 200,000 sq ft or 100,000 sq ft, then I have a really big challenge. “I have to make that building really long and thin to actually get all the electrical lineups to funnel correctly. If I make it small and square I end up having really big problems getting the actual electrical power into the space. The funneling becomes too much of a challenge. “Not a lot of people are talking about that right now, but I think it's going to be a pretty big problem. The challenge with liquid cooling is to design the facility in such a way that you don't run into funneling issues to get the power into the space.” Placing small quantities of high density racks within an air-cooled facility actually avoids this problem, he says: “If you're working with an air cooled space, you've got a lot of space to route your power around. When you make the building appropriately sized for liquid cooling, you run into all sorts of electrical funneling issues that you hadn't had to even think about before.”
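Lawrence's "funneling" worry is easier to see as arithmetic. Taking the figures he quotes (40MW across 400,000 sq ft in 25,000 sq ft halls) and shrinking the building for liquid-cooled densities, the same power has to land in a quarter of the area. The hall-level numbers below are derived from his example; the rest is simple division.

```python
it_load_mw = 40
hall_sqft = 25_000
air_building_sqft = 400_000       # the traditional, air-cooled footprint
dense_building_sqft = 100_000     # the same IT load in a liquid-era building

halls = air_building_sqft // hall_sqft
print(f"{halls} halls at ~{it_load_mw / halls:.1f} MW each")  # 16 halls; the article quotes ~2.6 MW
print(f"Air-cooled density:  {it_load_mw * 1e6 / air_building_sqft:.0f} W/sq ft")    # 100
print(f"Liquid-era density:  {it_load_mw * 1e6 / dense_building_sqft:.0f} W/sq ft")  # 400
# Same 40 MW and the same electrical lineups, but a quarter of the floor plate
# and perimeter to route them through - the "funneling" problem.
```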
Equipment lifecycles

One major reason why air-cooled systems will remain is because they are very rugged and enduring pieces of equipment. A chiller system placed on the roof of a data center is expected to last for 20 to 25 years, a period that could see four different generations of chip hardware, all with different cooling needs.

Johnson's Anand says this is possible: "If your HVAC architecture is designed to provide cooling required by liquid cooled servers, we will not have to change the cooling architecture through the life of the data center.

"The time period from when a data center is designed in one part of the world to when it is built and brought online in another part of the world might be several years," he says. "We do not want to wait for liquid cooling technology to be adopted all across the world for the next architectural design of the building to materialize it in construction."

It's not just the equipment, it's the building, says Lawrence: "Hyperscalers are signing leases for 15 and 20 years, and we are seeing IT refreshes in the four to five year range. That boggles my mind. If you're signing a lease today, it's going to last three IT refreshes. That IT equipment that you're putting in is either going to be air cooled for that 15 year period, or you're going to have some form of liquid-to-air system in the rack or in the white space."

Server makers like Dell and HP are producing liquid cooled versions of their hardware, and are predicting that in 10 years' time data centers will be 50 percent liquid cooled. Not every application has such high demands for cooling, and this means that half the servers can still be air cooled.

It can also get complicated because of demarcation. If the building owner provides overall cooling with air, and tenants want to add liquid cooling, Lawrence explains: "It gets complicated if you bring liquid straight to a CDU (cooling distribution unit) on rack or an in-row liquid cooler."
Forcing the issue
Rolf Brink thinks that it may take education, and even regulation, to push data center designs more quickly to liquid: “It still happens too often that new facilities are not yet designed for the future ecosystem. This is one of the core problems in the industry. And this is where regulation can really be beneficial - to require facilities to at least be prepared for liquid infrastructures in the white space.”

Brink says: “As soon as the data center is built and becomes operational, you're never going to rebuild the whitespace. You are not going to put water pipes into the whitespace
in an operational environment. It is just impossible.”

Because liquid is not included in the design phase, this creates “resistance” from the industry to adding it later, he says: “People have neglected to make the necessary investments to make sure that they are future-proofed.” This may be due to the way that facilities are financed and refinanced at various times during the build phase, he says, “or it may be lack of ambition or not believing in the evolution of liquid cooling.”

The problem is that this is creating an environment in which it's still going to be very difficult to become more sustainable. Data centers won’t take a risk and spend a bit more “just in case,” says Brink.

Some of this can be changed by education. ASHRAE has brought out papers describing different stages of using liquid cooling (see Box), and OCP has also done educational work, but in the end he says “legislation can really make a significant difference in the industry by requiring the preparation for liquid.”

At this stage, there’s no prospect of a law
to require new data centers to include pipes in the white space, although the German Energy Efficiency Act does attempt to encourage more waste heat reuse. Early in its development, the Act tried to mandate that 30 percent of the heat from new data centers should be reused elsewhere. This was pushed back, because Germany doesn’t have sufficient district heating systems in the right place to make use of that heat. But the requirement to at least consider waste heat reuse could mean that more data centers in Germany are built with heat outlets, and it is a logical step to connect those up with more efficient heat collection systems inside the white space. Across Europe, the Energy Efficiency Directive will require data centers to report data on their energy consumption and efficiency in 2024, and the European Union will consider what efficiency measures are reasonable to impose in 2025. Whatever intervention is imposed could have a big impact on the hand-over between air and liquid cooling.
ASHRAE’S CLASSES OF WATER COOLING
ASHRAE, the American Society of Heating, Refrigerating and Air-Conditioning Engineers, has been a premier source of guidance for the use of air conditioning and other cooling systems in data centers and every other sector of society. Its Technical Committee (TC) 9.9 produced definitive guidance on how air cooling should be used in facilities.

Back in 2011, it looked ahead to the arrival of water cooling with its first White Paper on “Thermal Guidelines for Liquid-Cooled Data Processing Environments,” setting out how liquid cooling could be used alongside other techniques. That paper set out broad classes W1, W2, W3, W4, and W5, based on the cooling water temperature. Originally, those classes had maximum facility water supply temperatures of 17°C, 27°C, 32°C, and 45°C respectively, with W5 covering anything above 45°C. When the work was updated in 2022, new temperature refinements were required, including a 40°C level, and ASHRAE moved to new class definitions: W17, W27, W32, W40, W45, and W+.

As engineer John Gross said on a blog at Upsite Technologies: “Honestly, we couldn’t come up with something better for ‘W+’ in the relatively short time we had to make the changes before the document had to go to publication. This new designation allows the committee to adjust the W classes as required based on industry demand, without confusing the issue of ‘which version of W3 do you mean?’”

The main thing is that equipment makers can specify the requirements of their cooling systems, and what other systems they can connect to. “We clarified the definition that compliance with a particular W-class requires ‘full, unthrottled operation of the ITE at all temperatures within the respective W-class.’ Now, a facility designed to support W32 ITE can support any ITE which is W32 compliant,” says Gross.
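As a quick reference, the class-to-temperature mapping described above can be summarized as follows. This is a simplified reading based only on the figures quoted in this article, not a substitute for the published guidance, and the helper function and its name are hypothetical:

```python
# Maximum facility water supply temperature per ASHRAE liquid cooling class,
# using only the figures quoted in this article (None = "above 45°C").

LEGACY_CLASSES_C = {"W1": 17, "W2": 27, "W3": 32, "W4": 45, "W5": None}
CLASSES_2022_C = {"W17": 17, "W27": 27, "W32": 32, "W40": 40, "W45": 45, "W+": None}

def facility_ok_for_ite(facility_supply_c: float, ite_class: str) -> bool:
    """Hypothetical check: the facility's supply water must not be warmer
    than the maximum the ITE's W-class is rated for."""
    limit = CLASSES_2022_C[ite_class]
    return limit is None or facility_supply_c <= limit

print(facility_ok_for_ite(32, "W32"))  # True: 32°C supply water, W32-rated ITE
print(facility_ok_for_ite(40, "W32"))  # False: too warm for W32-rated ITE
```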
Sustainability, scalability, and serviceability are the key to data center cooling
The most important criteria to consider when moving to liquid cooling technologies
Iceotope | Advertorial
Data centers are at the heart of an unprecedented data explosion. The rapid growth of the internet, cloud services, IoT devices, social media, and AI has led to an overwhelming surge in data generation. In addition, the traditional role of data centers as the center of data is changing. As enterprises move towards a more interactive dynamic with their customers, HPC and AI applications are driving vast amounts of data to the edge.

Enterprise organizations are also looking for ways to reduce costs, maximize revenue, and accelerate sustainability objectives. First, they want to reduce OPEX costs by reducing the amount of electricity and water they consume in the data center. Second, they want to increase the density of their current footprint so they can avoid constructing additional data centers. Finally, they want to meet or exceed their Net Zero carbon footprint goals by reducing or eliminating their energy footprint. Liquid cooling can help make this a reality.

There are three criteria enterprises, data center operators and telco providers should consider when moving to liquid cooling technologies: sustainability, serviceability, and scalability.
Sustainability
Between rising energy usage, increasing power costs, and potential government regulations, pressure is on enterprises and data center operators to reduce the energy consumption of their data center facilities. Sustainability is no longer being viewed as a cost to business, as many companies are now using sustainability as a criterion for vendor selection. Reducing energy consumption and carbon emissions is not only good for the planet, but it’s also good for business.

Serviceability
Whether in the data center or at the edge, the need for simpler and more cost-effective servicing of equipment is universal. A technician who can hot-swap a module at the data center campus should be able to just as easily make the same replacement in a remote location.

For telco service providers, for example, this is particularly important. Deploying computing resources in remote locations can be challenging and expensive to maintain. With thousands of sites in remote locations, servicing edge devices across a telco network is costly, and minimizing on-site maintenance is key.

Scalability
The data center is no longer the center of our data. From a single server at a cellular base station to a ruggedized edge solution to an enterprise-grade data center, workloads need to easily scale from the cloud to the edge. Repackaging traditional IT solutions won’t meet the environmental demands of harsh IT environments nor the sustainability demands to reduce power consumption. Purpose-built solutions are needed to address these concerns.
The different types of liquid cooling
Liquid cooling is rapidly becoming the solution of choice to efficiently and cost-effectively accommodate today’s compute requirements. However, not all liquid cooling solutions are the same. Direct-to-chip offers the highest cooling performance at chip level but still requires air cooling. It is a nice interim solution to cool the hottest chips, but it does not address the longer-term goals of sustainability. Tank immersion offers a more sustainable option but requires a complete rethink of data center design. Facility and structural requirements mean brownfield data center space is essentially eliminated. Not to mention special training is required to service the equipment.
Precision Liquid Cooling combines the best of both of these technologies. How? Precision Liquid Cooling removes nearly 100% of the heat generated by the electronic components of a server, while reducing energy use by up to 40% and water consumption by up to 100%. It does this by using a small amount of dielectric coolant to precisely target and remove heat from the hottest components of the server, ensuring maximum efficiency and reliability. This eliminates the need for traditional air-cooling systems and allows for greater flexibility in designing IT solutions. There are no hotspots to slow down performance, no wasted physical space on unnecessary cooling infrastructure, and minimal need for water consumption. Precision Liquid Cooling also reduces stress on chassis components, reducing component failures by 30% and extending server lifecycles. Servers can be hot-swapped at both the data center and at remote locations. Service calls are simplified and eliminate exposure to environmental elements on-site, de-risking service operations. The shift to liquid cooling has begun in earnest to meet the evolving demands of data centers and edge computing. With its focus on sustainability, serviceability, and scalability, Precision Liquid Cooling is emerging as the ideal choice for the future of data management and environmental responsibility.
Nature’s cool enough
Can we realistically cool data centers using the resources already available to us?
Georgia Butler Reporter
Cooling takes up a significant amount of a data center's power, with around 40 percent of a facility’s electricity bill going towards beating the heat.

Beyond cost, cooling can also be bad for the environment - in a period where grids are struggling and humanity’s emissions need to fall drastically, every watt saved helps. Free cooling has long been pitched as a way to take advantage of already available cold sources to chill a data center, rather than chillers. For low-density racks, and facilities in colder climates, outside temperatures can be enough - especially in winter months.

But as densities increase, and heatwaves get longer, researchers and operators have increasingly looked to natural bodies of water for cooling.

Data center giant Digital Realty, for example, uses river cooling at its Marseille data centers.

The cooling solution took two and a half years to deploy - after all, it uses more than three kilometers of buried pipes and 27 heat exchangers. The system diverts water from an underground channel, La Galerie de la Mer, built in the 19th century to collect and channel rainwater from old mines. The water itself stays at a natural temperature of 15°C (59°F) all year round and is pumped to the data centers to cool them via thermal exchange. This cooling solution is used by Digital’s MRS2 and MRS3 facilities, and will also be used by MRS4 and MRS5, which are under construction. It cost Digital Realty a total of €15 million ($16m).

When asked about the value of the river cooling system, Digital Realty’s CTO Lex Coors told DCD that the answer was twofold: financial and for sustainability. “First of all, you look at the current energy pricing, and the trends and projects for what will happen with that. At the time, energy was actually a low-cost item, but we were seeing some increase though it was low, and some instability in the pricing as well,” Coors explained. “We saw the availability of energy going up and down, particularly for sustainable energy from either solar or wind. We also have our corporate social responsibility program, but we do not invest in things that make no sense - it would be bad for us, and for the environment.”

In addition to the energy costs, Digital Realty saw trends across Europe to demand data centers achieve a PUE (power usage effectiveness) of 1.2 or better. “In Amsterdam, we were forced many years ago to design all of our data centers against a PUE of 1.2. Now, we are seeing the German law also coming into effect, pushing for 1.2 or better, and this actually helps us,” said Coors. “I won’t say it will be below 1.2 in Marseille, because it's pretty warm there, but we are definitely seeing more of a push down on PUE, and now there is also the Energy Efficiency Directive from the European Commission coming into place.”

The Energy Efficiency Directive was first adopted in 2012 but has since been updated both in 2018 and 2023. The latest update significantly raises the EU’s ambitions for energy efficiency. The change established a new “energy efficiency first” principle, meaning that efficiency must be the priority in all policy and investment decisions.

“The hardcore financial people would say, why would you invest €15m in a project like this? But at that time we had people in the company who said: ‘We also need to do the right thing. It will pay back.’ And yes, it's paying back on multiple angles now.”

By using what is already accessible - natural water supplies, and the unused infrastructure of history - Digital Realty reckons that its data centers in Marseille run more efficiently, and save around 18,400 MWh annually while reducing CO2 emissions by 795 tons - which Lex Coors confirmed the company monitors carefully every month.

According to Digital Realty, the solution will eventually be part of a loop. Once it has cooled Digital’s servers, the water will be hotter, and one day will be provided to the Euroméditerranée urban heating network where it can help provide warmth for 500,000 sqm of offices and homes. This is something that is still currently in “discussion,” but not yet a practical reality, says Coors.

“These programs take a long time. First, it starts with an intention, and there is an intention between both Digital Realty and the local authorities to see what we can do with this warm water. For us, it's important to get the warm water as low [in temperature] as possible, because the lower it is when the water goes back to the sea, the longer we can use the water from the river
without raising alarm bells from the center,” explained Coors.

“It's important for us, and also for the city, to say that even though there's not much of a winter in Marseille, there is always a demand for heated-up water. Sometimes it's for tap water, sometimes it's for another resource, sometimes for the heating system. The intentions are there, the conversations are there, but you can imagine that connecting to such a massive system takes time.”

Digital Realty is not alone in its pursuit of using local water supplies while not impacting the local ecosystem.

Nautilus, a company perhaps best known for its “floating data center” in Stockton, California, designs water-cooled systems for data centers located up to 1km from a free-flowing water supply. That water supply can be anything from the ocean to a lake, a river, or even a wastewater treatment plant. But, of course, it has to be customized according to that water supply. “We’re not just about energy efficiency,” Rob Pfleging, Nautilus’s CEO and president told DCD. “We’re also about aquatic wildlife preservation.”

In the case of the Stockton data center, located in the San Joaquin River, Nautilus teamed up with Intake Screens Inc (ISI) for the intake drum. ISI found that there were two endangered species in the river, and so designed the intake drum with velocities low enough that a fish can swim alongside it without getting sucked into the system. The drum itself is also designed with 100 percent inert materials; materials like copper or alloys that might leach into the water stream are not included, so ISI chose stainless steel. Similarly, the pipes are either stainless steel or high-density polyethylene with at least 3mm wall thickness. Another reason those materials are used is that they are innately “slippery,” making it hard for things to get attached to them and clog the system.

Once the water comes in through the drum, it is ionized with a negative charge, meaning that the microbiota in the water can’t attract and stick to each other and the system. The system is also attached to ultrasonic transducers which slightly vibrate the pipes enough to discourage microbiota from attaching, all of which means no chemicals need to be used, and protects the aquatic environment surrounding the data center. In total, the barge only holds onto the water for 60 seconds and it leaves the data center less than four degrees Fahrenheit warmer than it arrived. None of the water is evaporated or consumed.

Nautilus’s solution is both open and closed-loop. The loop circulating river water is open to the river, while a second closed loop takes heat from the servers and passes it to the loop of river water. According to Pfleging, those loops will never touch: “Where we're unique is because we are doing both open loop cooling and closed loop. There’s a plate frame heat exchanger. The two liquid streams never touch each other - they're just transferring energy through some metal plates,” said Pfleging.

“On that closed loop side, we move the cold water around the data hall, not with pumps under positive pressure, like 50 or 60 [pounds] per square inch positive pressure, we actually move it around the data hall with a light vacuum. We pull it around the data hall instead of pushing it.” According to Pfleging, that lower pressure means that the system does not leak. Even if the pipes were breached, instead of leaking water, they would suck in air. It's less taxing on the system itself, as continual pumping can cause valve packing, and it also prevents the growth of microbes in the system - many of which need oxygen to develop.

According to Pfleging, Nautilus’s free cooling can be used with any other cooling method in the data center. “We don't care what your particular method is. We're just about being the lowest cost, most energy-efficient final mile heat rejection. So we support all of those [in-row cooling, rear door heat exchange, immersion cooling] but more importantly, we make them all better,” explained Pfleging.

He also added that one of the benefits of their system is the lack of need for propylene glycol or anti-freeze in the system to prevent microbe growth. “Because we don't need to put that kind of stuff in our water it adds to the efficiency. When you add, say, 20 percent propylene glycol to the water it knocks its cooling efficiency down by five or six percent. So we're even more efficient than the standard direct-to-chip.”

The free-water cooling solution would not work universally, however. The water itself needs to be moving, and it needs to be cool enough to actually, well, cool.
“If you wanted to put a data center next to the Nile, we aren’t going there, because that water is almost a hot tub,” agreed Pfleging. That being said, the heat in a given location is not always the end of the road for deploying these types of solutions. As Pfleging explained, during a heat wave, water temperatures rise far slower than air temperatures. During the California heatwave, the Nautilus data center was still able to run at a PUE of 1.15.

“We're logical. We're not the answer 100 percent of the time, everywhere on the planet - I'm not going to put this in the middle of the desert, I'm going to put this reasonably close to a large body of water that meets our temperature profile.”

Nautilus’s data centers can range in size, from its Stockton facility to a project it is working on in Maine, and the Smart Campus in Portugal, expected to have up to 120MW, which signed up Nautilus for cooling in early September 2023.

All of which raises the question: Is nature cool enough? Data centers, in general, are going to be located near enough to high-population areas to serve that city or region. Similarly, as settlements have developed over the centuries, the one thing those settlements have in common is the need for access to water. Pfleging argues that around 80 percent of the world’s population lives within a kilometer of water (the longest distance the company is prepared to transport water to a water-cooled data center). “There are very few times when someone comes to us and says: ‘I’d like to be here,’ and we say no, we just can’t do that.”

For this reason, free cooling may not be a total and complete solution to the cooling issue, but it certainly has a highly relevant part to play - and one which is at our fingertips, just outdoors.
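As a rough sense-check of what those numbers imply, the flow rate needed to carry a given heat load at such a small temperature rise follows from the basic heat balance Q = ṁ × cp × ΔT. In the sketch below, the roughly 2.2°C (4°F) rise comes from the article; the 7MW heat load is an assumption chosen purely for illustration, not a Nautilus figure:

```python
# Back-of-the-envelope estimate: river water needed to reject an assumed 7MW
# load when the water is allowed to warm by only ~2.2°C (4°F).

HEAT_LOAD_W = 7e6       # assumed IT load, illustration only
CP_WATER = 4186         # specific heat of water, J/(kg*K)
DELTA_T_K = 4 / 1.8     # "less than four degrees Fahrenheit" ≈ 2.2K

flow_kg_per_s = HEAT_LOAD_W / (CP_WATER * DELTA_T_K)
print(f"~{flow_kg_per_s:,.0f} kg/s of water, roughly {flow_kg_per_s:,.0f} liters per second")
# ~750 liters per second - which is why the approach needs a free-flowing
# water source rather than a static pond.
```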
Keeping your water clean
Peter Judge Executive Editor
Cooling systems are trying to use less water. Electrolytic descaling could be the answer
Cooling towers are a grubby fact of data center life. Water is cycled through them, and they reject heat, either by evaporation or through a heat exchanger in a “dry” cooling tower.

Traditional evaporative cooling towers are not a great environmental choice. They consume millions of gallons of water. They potentially collect bacteria as the water and outside air are in direct contact, and accumulate limescale. Dry cooling towers also need cleaning and
replenishing. Cooling tower water will deposit limescale which clogs the pipes, causes corrosion, and makes the heat exchanger less efficient.

The normal ways to treat and prevent limescale also have their problems. Minerals build up in the circulating water, a portion of which is periodically drained from the cooling equipment as “blow-down” to keep concentrations in check. Traditionally, chemicals are added to prevent a build-up of calcium carbonate, but those chemicals, and the increasing concentration of calcium, mean that water can only go round a cooling circuit a certain number of times. It also means that the eventually discarded wastewater is heavily contaminated, and when water is changed, the heat exchanger has to be pressure-cleaned to remove scale.
Apply the electrodes
There’s an alternative, however. A small number of companies are using electrolysis to manage the existing chemicals in the water, rather than adding more. Players include Tiaano, Ensavior, Ball-Tech, and VST. The idea seems to be advancing most in India and the Asia Pacific region, in the wider air-conditioning and water treatment sectors. Digital Realty is one of the first data center customers for the technique, and reports that blow-down water at its SIN10 data center in Singapore can now go through the cooling system three times as often, saving more than a million liters of water per month.
The eventual wastewater is also cleaner than that left from a chemically treated water cooling system.

Digital is using DCI (DeCaIon) units, which apply a small continuous electric current, partially electrolyzing the water and making OH- ions which alter the acidity (pH) of the cooling water. Calcium and magnesium precipitate out as calcium carbonate and magnesium hydroxide, harmlessly, at the electrode.

The DCI boxes come from Singapore-based Innovative Polymers, and are already in use in sites such as hospitals, according to BK Ng, owner of Innovative and one of the developers of the product. “Data centers have been slower to pick this up,” he told DCD. But he is hopeful that the Digital project will open the door to other data centers, which are getting more careful about water use.

Electrolytic descalers are an additional cost, and they also use a small amount of power continuously. Gavin Cherrie, at Allied Polymer's New Zealand distributor 2Plus, told us that the unit uses some 700kWh per year (an average draw of less than 100W). Ng says the payback is considerably greater in direct energy savings, as the units will actually “clean” the heat exchanger in the loop. Because calcium and magnesium are being removed from the water, the concentration is lowered, and the chemical equilibrium shifts, so limescale will dissolve everywhere else in the system. Thermal transfer through the heat exchanger improves, and it needs less energy. This has a greater effect if the unit is installed in a cooling loop which is already furred up: “The irony is, if your system is already doing well, we will give you a little saving.”

Cherrie says that a catalytic unit can save 10 to 15 percent of the energy used in a cooling loop.
Ng puts this into context, explaining that energy saving is about 50 to 60 percent of the total savings the product makes. Water savings make up around 35 percent, and lower maintenance and chemicals costs make up the balance of the return on investment.
The practicalities
Cherrie says one electrolysis unit covers enough cooling water for 1.5MW, and installing it is a matter of connecting a box into each of the cooling loops. “We take water out of the tower,” he says. “Using a small pump, we pump it through the electrolysis chamber, and then back into the loop.”

In use, the catalytic chamber needs cleaning, but this is made easier by periodically reversing the polarity of the anodes: “The calcium that has been collected is rejected, and then flushed or drained,” Cherrie explains. Then the polarity is restored, and descaling continues.

“If you have a manifold, you can consolidate the number of units,” he explains. “If you had three 1MW cooling towers on a manifold, then you'd get away with having two units. If you have three 1MW cooling towers on individual loops, in parallel rather than in series, then you might need three machines. The pipework design to a degree determines how many machines you need.”
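Those rules of thumb can be read as a rough sizing sketch (a simplification of the figures quoted above, not a vendor sizing tool - as the next paragraph explains, real quotes depend on water chemistry, system volume, and local weather):

```python
import math

UNIT_CAPACITY_MW = 1.5  # one electrolysis unit per ~1.5MW of cooling, per Cherrie

def units_needed(tower_loads_mw, shared_manifold: bool) -> int:
    """Crude estimate: towers on a shared manifold can pool one set of units;
    towers on individual loops each need their own."""
    if shared_manifold:
        return math.ceil(sum(tower_loads_mw) / UNIT_CAPACITY_MW)
    return sum(math.ceil(load / UNIT_CAPACITY_MW) for load in tower_loads_mw)

print(units_needed([1, 1, 1], shared_manifold=True))   # 2 - matches the manifold example
print(units_needed([1, 1, 1], shared_manifold=False))  # 3 - matches the individual-loop example
```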
But the capacity of the unit is very approximate, as any cooling system may be operated to a different level, the amount of use it gets will depend on the local weather patterns, and the amount of dissolved calcium in the local water will also vary. “We look at the water chemistry and other factors, including the volume of the system,” he says. “All that goes into a quality model, which then generates the number of machines that you need, and I can calculate, quite accurately, the savings you can make in terms of energy and water.”

Disposing of the calcium and magnesium compounds that the system collects might seem to be an issue, but Cherrie says disposal is easy: “We encourage our customers to actually put it in the storm water drain. This is calcium and magnesium that needs to be bioavailable for the ecosystem.” In the rainwater there is dissolved carbon dioxide, and that reacts with the calcium carbonate in the wastewater to make calcium bicarbonate, which is bioavailable - it can be absorbed by plants and animals which need it. “It's almost like a circular economy,” he says. “Calcium is a key element in ecology, and it remains in the water cycle.”
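For readers who want the chemistry behind the descaling and the storm-drain disposal, the reactions involved are broadly these - a simplified sketch based on standard electrolytic water treatment chemistry, not taken from the vendors, whose exact cell reactions may differ:

$$2\,\mathrm{H_2O} + 2e^- \rightarrow \mathrm{H_2}\uparrow + 2\,\mathrm{OH^-} \quad \text{(at the cathode, raising local pH)}$$

$$\mathrm{Ca^{2+}} + \mathrm{HCO_3^-} + \mathrm{OH^-} \rightarrow \mathrm{CaCO_3}\downarrow + \mathrm{H_2O}, \qquad \mathrm{Mg^{2+}} + 2\,\mathrm{OH^-} \rightarrow \mathrm{Mg(OH)_2}\downarrow \quad \text{(precipitation at the electrode)}$$

$$\mathrm{CaCO_3} + \mathrm{CO_2} + \mathrm{H_2O} \rightarrow \mathrm{Ca(HCO_3)_2} \quad \text{(re-dissolution as bicarbonate in rainwater)}$$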
Electric reluctance
If the system is that good, why hasn’t it taken off before? The technique was developed for electrochemical manufacturing processes, which make things like caustic soda and sodium, Cherrie explains: “These are manufactured using electrolysis. The guys that do that discovered they get calcium buildup on the electrodes, and frequently had to take the electrodes out of the electrolysis chambers and hydro blast them to get the scale off.”

Electrolytic descaling was developed for that application, and then offered to other sectors. But chemical descaling was already well established in the air conditioning market. To make matters worse, the market was already being addressed by other “non-chemical” systems, which install a passive unit, often branded as an electrical or magnetic descaler. Cherrie says these are “pseudoscience” and don’t work.

“For years. It's been done with molecules. You’ve thrown the chloride into the water along with various inhibitors. That’s a discipline that has been around for a number of years. We know how it works and it's used in a number of industries,” he says.
Of course those chemicals aren’t great for the environment. “We used to throw in sulfuric acid. The trouble was, that dissolves all the metal. You don't want that to happen. “So we used to put in hexavalent chromium and they basically turned all the surfaces into stainless steel, there's no corrosion.” If hexavalent chromium sounds familiar, it was the pollutant released by Pacific Gas & Electric Company (PG&E) in Hinkley, California, which was eventually stopped after a lawsuit led by Erin Brockovich - later portrayed by Julia Roberts in an Oscar-winning movie. The worst chemicals have been restricted, and the replacements don’t do such a good job of descaling, but the alternatives were no good till now, says Cherrie. “There have been a number of non-chemical systems that have been promoted in the past. People talk about using magnets and all sorts. But the science doesn't work.” Electrolytic catalysis provided a scientifically proven alternative, he says: “But, just because it's a non-chemical system, we’ve been up against it to get people to accept that this might work. It was quite difficult when we first launched the product to get people to try it.”
Precision Liquid Cooling Iceotope is reimagining data center cooling from the cloud to the edge.
Precision Liquid Cooling removes nearly 100% of the heat generated by the electronic components of a server through a precise delivery of dielectric fluid. This reduces energy use by up to 40% and water consumption by up to 100%. It allows for greater flexibility in designing IT solutions as there are no hotspots to slow down performance and no wasted physical space on unnecessary cooling infrastructure. Most importantly, it uses the same rack-based architecture as air cooled systems and simply fits to existing deployed infrastructure.
Get in touch to arrange a demo.
+44 114 224 5500
sales@Iceotope.com
AI moves to Norway
Peter Judge Executive Editor
There won’t be room for AI in the existing data center metro hubs. So AQ Compute is standing by to plumb it into a hydro-powered facility in Norway
AI is predicted to take off quickly, with a massive demand for high-performance chips, in high-density racks. But how will all this capacity be powered - and more to the point, where will it be located?
“We can’t put it all in the current data center hubs like Frankfurt, London, Amsterdam, and Paris,” a site selection specialist at a hyperscale builder said
to us recently. “If we tried to do that, the governments would close us down.”

The existing so-called FLAP-D hubs (Frankfurt, London, Amsterdam, Paris, and Dublin) are struggling to cope with the demand for conventional data center capacity. There are well-publicized energy distribution issues in Amsterdam, London, and Dublin, and Frankfurt has issued a plan restricting the locations of data centers. Adding a massive fleet of AI facilities will require new data centers on top of the growing population of conventional facilities. It will also strain the net-zero promises of large operators. Denmark and the Netherlands have recently limited hyperscale developments, which soak up large amounts of renewable power that is needed for national decarbonization efforts.

AQ Compute says part of the answer is to build specialized AI and high-performance computing (HPC) sites in more remote areas, starting with a facility powered by green electricity from Norway’s plentiful hydroelectric dams. Built from scratch, it’s possible to put liquid cooling in from the start and connect to a district heating system, so heat does not get wasted.

“The first customer will move in, in mid-December, and we will go live with IT power capacity in January,” says Andreas Myr, the CEO of AQ Compute in Norway. “By April, we will be fully operational with 6MW.”

Launched in 2020, AQ Compute is a subsidiary of Aquila Capital, a German investment group that specializes in renewable energy. AQ is also building a data center in Barcelona, as well as the facility in Hønefoss, just outside Oslo. “Our strategy is to look into Tier 2 or Tier 3 cities,” says Myr, who was previously VP for data centers at Orange Business Services. “We are focusing a little bit outside FLAP-D, on plots where we have renewable power available, and also looking into sustainability, to reuse the heat.”

AQ-OSL-1 in Hønefoss will be first on-stream. It is on a business park with three nearby hydro dams - the nearest only 800m from the site. AQ has an anchor tenant taking a large proportion of the data center’s 1,700 sqm (18,300 sq ft) of white space. Myr won’t give much detail, except to say the client is a company that specializes in HPC, and a number of 500kW halls remain for other tenants. “It's a high-density customer, with a mix of high-density (40kW) racks and a little bit lower-density storage racks,” says Myr.

Part-way through construction, the site was customized explicitly for AI and HPC: “We have a new data center, with no legacy, so we had the possibility to build what the customer needed,” says Myr. “For this anchor customer, we are doing a closed-loop water cooling system.” That was a change from the data center AQ originally planned to open in 2021: “We saw that we needed more than air to cool the racks down. So we redid our solution and we went with rear-door heat exchangers for all of the racks. It's a mix between active rear doors and passive rear doors for the racks with low density for this specific customer.”

The anchor tenant helped choose a UK-based rear-door cooling provider, and thanks to that switch, their space will not need any mechanical air conditioning units. Any additional requirements can be covered by free cooling based on Norway’s chilly outdoor temperature. “We started with both air-to-air and direct cooling,” he tells us. “But since we now focus on high density, we have taken out the HVAC units.”

So Myr has a number of unused HVAC systems for sale, originally intended for the anchor tenant’s halls. Given the high demand for data center M&E plant, he’s hopeful of getting a good price: “We are trying to get back what we paid for them.”

The remaining halls may go beyond rear-door cooling, depending on the exact needs of tenants, says Myr, but they are more likely to need deeper liquid cooling than HVACs: “Our strategy is to have AI-ready data centers. And for this, we will also need direct cooling,” he says. “I think in the next six months there will be a move towards direct-to-the-chip cooling.”
Plugging in different cooling systems will be straightforward, says Myr: “Basically, you have to have smaller cooling devices and pipes. To cool the racks we need to tweak it a little bit, but underneath is the same solution.” There’s no decision yet on what system to use for more direct liquid cooling. It will depend on the customers, and also on recommendations from Nvidia, whose GPU chips will predominate in the AI spaces. “We are looking to what Nvidia prefer the vendors to do, because we're working on being an Nvidia-certified data center,” he tells us. Beyond direct–to-chip, AQ can support immersion cooling or two-phase cooling, but Myr doesn’t report any demand as yet: “I have seen one in Norway that asked for immersion cooling, but they went away from it. Right now there are really no requests for immersion.” If that changes, AQ will be ready: “If they want immersion cooling, of course we will set up a system to be able to handle that. It will not be a problem,” he says, explaining that the building has lifts big enough for horizontal tanks, and raised floors strong enough to support large baths of coolant. Any immersion tanks will need to be connected indirectly to the cooling loop through a heat exchanger.
Tapping heat from the loop This future-proofed water circulation loop, designed to support present and future liquid cooling, is installed under the raised floor, for reasons Myr says are obvious: “I don't like it when the water pipes go above the racks. That's a huge risk element in my opinion.” Myr plans to sell the facility’s waste heat to a district heating system which the local utility runs across the industrial park.
"Our strategy is to have AI-ready data centers. And for this, we will need direct cooling. I think in the next six months there will be a move towards direct-tothe-chip cooling" “We're finalizing agreements with several other companies in that area to be able to reuse heat from the data center,” he tells us. “And because we're using rear door cooling, we can reuse the heat more efficiently than with air cooling.” The outlet water is not the “high, high” temperature that some systems can produce, he says: “So it's low heating. We can reuse it to heat up office buildings and things like that. The utility company will do a part of the work and, if needed, increase the temperature of the return water, with heat pumps.” He’s enthusiastic about heat reuse, saying that low-density colo providers in Norway often can’t do it, because they use free cooling with outside air: “But then you let out most of the heat into the surrounding areas and you don't reuse it. Why would you do that? It's more efficient to reuse it.” Building in liquid cooling from scratch saves a lot of cost, and effort, says Myr: “Our main competitors are using both [liquid and free air cooling]. But legacy data centers in Norway are still free cooling. It costs a lot of money to change a legacy system.” For clients moving in alongside the anchor tenant, there are definite signs of AI work migrating from other hubs: “Most of our requests are from international customers, not that many Norwegian customers. We are focusing on units of approximately 500kW, so we are not talking about single-rack tenants.”
He tells us: “About 70 percent of those requests are for high-density AI solutions. In the last year there has been a real increase in demand for high-density AI.”
Is there enough power?
Myr says AQ Compute plans to announce several new locations by the end of the year, partly driven by demand for AI computing. But if this demand, and the density of the IT, really ramp up, is there a danger that data centers might use up renewable capacity and disrupt Norway’s overall net zero ambitions? “That's a good question,” Myr responds. “AQ Compute is owned by Aquila Group, and they have focused on renewable energy. If we build a data center which uses 10MW of power, then Aquila will produce 10MW of new renewable power.”

If this includes any new wind or solar power, Aquila would have to work to match the hourly demand of the data center, though that would not be such an issue with hydroelectric power. Another potential problem is if data center developments expand more rapidly than renewable capacity can be provided: “Right now, we don't have that problem,” says Myr. “But when that problem occurs, we of course need to look into it.”

Overall, Myr is hopeful that Scandinavian countries can provide enough low carbon energy to support an AI boom: “I think in the Nordics, because of our hydropower, we are in a situation where we can do this in a sustainable way - but of course, it depends on the total amount of AI that's needed. We don't know that yet.”

Liquid-cooled data centers need plumbers
But there is one issue that liquid cooling has forced AQ to plan for: “We have the normal mechanical engineers and electricians - but now, we also need plumbers.” Plumbers could add around ten percent to the staff levels at the data center, he says, though the exact figures aren’t clear yet. And, Myr predicts that plumbing could create a fresh strand to the data center skills crisis: “We are contacting local schools in Oslo and Barcelona to look into apprenticeships for electricians, plumbers, and mechanical engineers. We are putting together a program to do this now.”

We’re willing to bet that not many people would have predicted that one of the social impacts of AI would be a surge in demand for plumbers in Oslo.
How Meta redesigned its data centers for the AI era
Sebastian Moss Editor-in-Chief
A look at Meta’s next-generation data centers
As the data center industry enters a new phase, every operator has been forced to reckon with two unknowns: How big will the AI wave be, and what kind of densities will we face? Some have jumped all in and are building liquid-cooled data centers, while others hope to ride out the current moment and wait until the future is clearer.

For Meta, which has embraced AI across its business, this inflection point has meant scrapping a number of in-development data center projects around the world, as DCD exclusively reported late last year. It canceled facilities that already had construction workers on site, as it redesigned its facilities with GPUs and other accelerators in mind.
Now, with the company about to break ground on the first of its next-generation data centers in Temple, Texas, we spoke to the man behind the new design. "We saw the writing on the wall for the potential scale of this technology about two years ago,” Alan Duong, Meta’s global director of data center engineering, said. Meta previously bet on CPUs and its own in-house chips to handle both traditional workloads and AI ones. But as AI usage boomed, CPUs have been unable to keep up, while Meta's chip effort initially fizzled. It has now relaunched the project, with the 7nm Meta Training and Inference Accelerator (MTIA) expected to be deployed in the new data center alongside thousands of GPUs. Those GPUs require more power, and therefore more cooling, and also need to
be closely networked to ensure no excess latency when training giant models. That required an entirely new data center.
The cooling
The new facilities will be partially liquid-cooled, with Meta deploying direct-to-chip cooling for the GPUs, whilst sticking to air cooling for its traditional servers. “During that two-year journey, we did consider doing AI-dedicated data centers, and we decided to move towards more of a blend, because we do know there's going to be this transition,” Duong said. “95 percent of our infrastructure today supports more traditional x86, storage readers, front end services - that's not going to go away. Who knows where that will evolve, years and years from now?
And so we know that we need that.” The AI systems will also require access to data storage, “so while you could optimize data centers for high-density AI, you're still going to need to colocate these services with data, because that's how you do your training.”

Having the hybrid setup allows Meta to expand with the AI market, but not over-provision for something that is still unpredictable, Duong said. "We can't predict what's going to happen and so that flexibility in our design allows us to do that. What if AI doesn't move into the densities that we all predicted?" That flexibility comes with a tradeoff, Duong admitted. "We're going to be spending a little more capital to provide this flexibility," he said.

While the facility-level design is fully finalized, some of the rack-level technology is still being worked on, making exact density predictions hard. "Compared to my row densities today, I would say we will be anywhere from two times more dense at a minimum to between eight to nine times more dense at a maximum." Meta "hasn't quite landed yet, but we're looking at a potential maximum row capacity of 400-500 kilowatts," Duong said. "We're definitely more confident at the facility level," Duong added. "We've now gone to market with our design, and the sort of response we've gotten back has given us confidence that our projections are coming to fruition."

The company has settled on 30°C (85°F) for the water it supplies to the hardware, and hopes to get the temperature more widely adopted through the Open Compute Project (OCP).

Which medium exactly it uses in those pipes to the chip is still a matter of research, Duong revealed. "We're still sorting through what the correct medium is for us to leverage. We have years - I wouldn't say multiple years - to develop that actual solution as we start to deploy liquid-to-chip. We're still developing the hardware associated, so we haven't specifically landed on what we're going to use yet."

The company has, however, settled on the fact that it won't use immersion cooling, at least for the foreseeable future. "We have investigated it," Duong said. "Is it something that is scalable and operationalized for our usage and our scale? Not at the moment.

"When you imagine the complications of immersion cooling for operations, it's a major challenge that we would have to overcome and solve for if we were ever to deploy anything like that at scale."

Another approach that will not go forward is a cooling system briefly shown in an image earlier this year of a fluid cascading down onto a cold plate like a waterfall (pictured). "These are experiments, right? I would say that that's generally not a solution that is going to be scalable for us right now.

"And so what you're going to see a year or two from now is more traditional direct-to-chip technology without any of the fancy waterfalls."

Switching things up
Alongside cooling changes, the company simplified its power distribution design.

The company reviewed which equipment it could remove, without requiring new, more complex equipment.

"The more equipment you have, the more complicated it is," Duong said. "You have extra layers of failure, you have more equipment to maintain."

"And so we have a lot of equipment in our current distribution channel, whether it's switch gear, switchboard, multiple breakers, multiple transition schemes from A to B, etc., and I said 'can I just get rid of all that and just go directly from the source of where power is converted directly to the row?'" This new design also allowed Meta "to scale from a very low rack density to a much larger rack density without stranding or overloading the busway, breaker, or the switchgear," he said. Going directly from the transformer to the rack itself allowed the company "to not only eliminate equipment but to build a little bit faster and cheaper, as well as reduce complexity and controls, but it also allows us to increase our capacity."
Faster, cheaper
Perhaps the most startling claim Meta has made with its new design is that it will be 31 percent cheaper and take half the time to build (from groundbreaking to being live) than the previous design. "The current projections that we're seeing from our partners is that we can build it within the times that we have estimated," Duong said. "We might even show up a little bit better than what we initially anticipated." Of course, the company will first have to build the data centers to truly know if its projections are correct, but it hopes that the speed will make up for the canceled data center projects. "There is no catching up from that perspective," Duong said. "You may see us landing a capacity around the same time as we planned originally." That was critical in being able to make the drastic pivot, he said. "We bought ourselves those few extra months, that was part of the consideration."
How long will they last?
The first Meta (then Facebook) data centers launched 14 years ago. "And they're not going anywhere, it's not like we're going to scrap them," Duong said. "We're going to have to figure out a way to continue to leverage these buildings until the end of their lifetime." With the new facilities, he hopes to surpass that timeframe, without requiring major modernization or upgrades for at least the next 15 years. "But these are 20-30 plus year facilities, and we try to include retrofittability into their design," he said. "We have to create this concept where, if we need to modernize this design, we can." Looking back to when the project began two years ago, Duong remains confident that the design was the right bet for the years ahead. "As a team that is always trying to predict the future a little bit, there's a lot of misses," he said. "We have designs that are potentially more future-facing, but we're just not going to need it. We prepared ourselves for AI before this explosion, and when AI became a big push [for Meta] it just required us to insert the technologies that we've been evaluating for years into that design."
CHANGE YOUR BATTERY. CHANGE YOUR WORLD. C&D Technologies’ new Pure Lead Max battery is our longest-lasting VRLA battery for UPS systems, backed by an 8-year warranty. Proprietary pure lead technology extends life to just a single replacement cycle. One change can reduce total cost of ownership, support sustainability and safety initiatives, and reduce system footprint. LEARN MORE
Qubits come of age in the data center
What happens when you put a quantum computer alongside conventional systems?
Dan Swinhoe Senior Editor
Like the superpositioned quantum bits, or qubits, they contain, quantum computers are in two states simultaneously. They are both ready, and not ready, for the data center.
On the one hand, most quantum computer manufacturers have at least one system that customers can access through a cloud portal from something that at least resembles IT white space. Some are even selling their first systems for customers to host on-premise, while at least one company is in the process of deploying systems into retail colocation facilities. But these companies are still figuring out what putting such a machine into production space really means. Some companies claim to be able to put low-qubit systems into rack-ready form factors, but as we scale, quantum machines are likely to grow bigger, with multiple on-site systems networked together. Different technologies require disparate
supporting infrastructure and have their own foibles around sensitivity. As we tentatively approach the era of ‘quantum supremacy’ – where quantum computers can outperform traditional silicon-based computing at specific workloads – questions remain around what the physical footprint for this new age of compute will actually look like and how quantum computers will fit into data centers today and in the future.
The current quantum data center footprint
Quantum computers are here and available today. They vary in size and form factor depending on which type of quantum technology is used and how many qubits – the quantum version of binary bits – a system is able to operate with. While today’s quantum systems are currently unable to outperform “classical” silicon-based computing systems, many startups in the space are predicting the day may come soon when quantum processing units (QPUs) find use cases where they can be useful.

As with traditional IT, there are multiple ways to access and use quantum computers. Most of the major quantum computing providers are pursuing a three-pronged strategy of offering on-premise systems to customers, cloud- or dedicated-access systems hosted at quantum companies’ own facilities, or cloud access via a public cloud provider.

AWS offers access to a number of third-party quantum computers through its Braket service launched in 2019, and the likes of Microsoft and Google have similar offerings.

As far as DCD understands, all of the third-party quantum systems offered on public clouds are still hosted within the quantum computing company’s facilities, and served through the provider’s networks, via APIs. None that we know of is at a cloud provider’s data centers. The cloud companies are also developing their own systems, with Amazon’s efforts
based out of the AWS Center for Quantum Computing, opened in 2019 near the Caltech campus in Southern California.

IBM offers access to its fleet of quantum computers through a portal, with the majority of its systems housed at an IBM data center in New York. A second facility is being developed in Germany.

Big Blue has also delivered a number of on-premise quantum computers to customers, including US healthcare provider Cleveland Clinic’s HQ in Cleveland, Ohio, and the Fraunhofer Society’s facility outside Stuttgart, Germany. Several other dedicated systems are set to be hosted at IBM facilities in Japan and Canada for specific customers.

Many of the smaller quantum companies also operate data center white space in their labs as they look to build out their offerings. NYSE-listed IonQ currently operates a manufacturing site and data center in College Park, Maryland, and is developing a second location outside Seattle in Washington. It offers access to its eight operational systems via its own cloud as well as via public cloud providers, and has signed deals to deliver on-premise systems. Nasdaq-listed Rigetti operates facilities in Berkeley and Fremont, California. In Berkeley, the company operates around 4,000 sq ft of data center space, hosting 10 dilution refrigerators and racks of traditional compute; six more fridges are located at the QPU fab in Fremont for testing. The company currently has the 80-qubit Aspen M3 online, deployed in 2022 and available through public clouds. The 84-qubit Ankaa-1 was launched earlier this year but was taken temporarily offline, while Ankaa-2 is due online before the end of 2023. Rigetti fridge supplier Oxford Instruments is also hosting a system in its UK HQ in Oxfordshire.

Quantum startup QuEra operates a lab and data center near the Charles River in Boston, Massachusetts. The company offers access to its single 256-qubit Aquila system through AWS Braket as well as its own web portal. The company has two other machines in development.

UK firm Oxford Quantum Circuits (OQC) operates the 4-qubit Sophia system, hosted at its lab in Thames Valley Science Park, Reading. It offers access to its systems out of its own private cloud and access to its 8-qubit Lucy quantum computer via AWS. It is also in the process of deploying systems to two colocation facilities in the UK and Japan.

Based out of Berkeley, California, Atom Computing operates one 100-qubit prototype system, and has two additional systems in development at its facility in Boulder, Colorado. In the near term, the company is soon set to offer access to its system via a cloud portal, followed by public cloud, and finally offering systems on-premise. NREL is an early customer. Honeywell’s Quantinuum unit operates out of a facility in Denver, Colorado, where Business Insider reported the company has two systems in operation and a third in development. The 32-qubit H2 system reportedly occupies around 200 sq ft of space. The first 20-qubit H1 system launched in 2020.
Quantum computers: Cloud vs on-prem
As with any cloud vs on-premise discussion, there’s no one-size-fits-all answer to which is the right approach.
Companies not wanting the cost and potential complexity of housing a quantum system may opt to go the cloud route. This offers immediate access to systems relatively cheaply, giving governments and enterprises a way to explore quantum’s capabilities and test some algorithms. The trade-off is limited access based on timeslots and queues – a blocker for any real-world production uses – and potentially latency and ingress/egress complexities. Cloud – either via public provider or direct from the quantum companies – is currently by far the most common and popular way to access quantum systems while everyone is still in the experimental phase. There are multiple technologies and algorithms, and endless potential use cases. While end-users continue to explore their potential, the cost of buying a dedicated system is a difficult sell for most; prices to buy even single-digit qubit systems can reach more than a million dollars. “It's still very early, but the demand is there. A lot of companies are really focused on building up their internal capabilities,” says Richard Moulds, general manager - Amazon Braket at AWS. “At the moment nobody wants to get locked into a technology. The devices are very different and it’s unclear which of these technologies will prevail in the end.” “Mapping business problems and formatting them in a way that they can run a quantum computer is not trivial, and so a lot of research is going into algorithm formulation and benchmarking,” he adds. “Right now it's all about diversity, about hedging your bets and having easy access for experimentation.” For now, however, the cloud isn’t ready for
production-scale quantum workloads. "There's some structural problems at the moment in the cloud; it is not set up yet for quantum in the same way we are classically," says IonQ CEO Peter Chapman. "Today the cloud guys don't have enough quantum computers, so it's more like the old mainframe days; your job goes into a queue and when it gets executed depends on who's in front of you.

"They haven't bought 25 systems to be able to get their SLAs down," he says. On the other hand, they could offer dedicated time on a quantum computer, "which would be great for individual customers who want to run large projects, but to the detriment of everyone else because there's still only one system."

The balance of cloud versus on-premise will no doubt change in the future. If quantum computers reach 'quantum supremacy,' companies and governments will want dedicated systems to run their workloads constantly, rather than sharing access in batches. Some of these dedicated systems will be hosted by a cloud provider or via the hardware provider as a managed service, and some will be hosted on-premise in enterprise data centers, national labs, and other owned facilities.

"Increasingly as these applications come online, corporations are likely going to need to buy multiple quantum computers to be able to put it into production," suggests Chapman. "I'm sure the cloud will be the same; they might buy for example 50 quantum computers, so they can get a reasonable SLA so that people can use it for their production workloads."
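To make that access model concrete, the sketch below submits a tiny two-qubit circuit to a hosted QPU through the Amazon Braket Python SDK and then waits in the shared queue Chapman describes. The device ARN is illustrative only - the current identifiers are listed in the Braket console - and results, costs, and queue times will vary by machine.

```python
# A minimal sketch of the queue-based cloud access model described above,
# using the Amazon Braket Python SDK (pip install amazon-braket-sdk).
# The device ARN below is illustrative - check the Braket console for the
# current identifier of the QPU you want to target.
from braket.aws import AwsDevice
from braket.circuits import Circuit

# A two-qubit Bell-state circuit: Hadamard on qubit 0, then CNOT 0 -> 1.
bell = Circuit().h(0).cnot(0, 1)

# Target a hosted QPU, for example OQC's Lucy machine (ARN shown as an example).
device = AwsDevice("arn:aws:braket:eu-west-2::device/qpu/oqc/Lucy")

# Submitting returns immediately; the task is queued, not run on demand.
task = device.run(bell, shots=1000)
print("Task state:", task.state())   # e.g. "QUEUED" until a slot opens up

# result() polls until the QPU has executed the shots, which can take a while
# depending on who is ahead of you in the queue - the trade-off noted above.
counts = task.result().measurement_counts
print(counts)                        # expect mostly '00' and '11' outcomes
```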
While hosting with a third party comes with its own benefits, some systems may be used on highly classified or commercially sensitive workloads, meaning entities may feel more comfortable hosting systems on-premise. Some computing centers may want on-premise systems as a way to lure more investment and talent to an area to create or evolve technology hubs.

In the near term, most of the people DCD spoke to expect on-premise quantum systems to be the realm of supercomputing centers – whether university or government – first as a way to further research into how quantum technology and algorithms work within hybrid architectures, and later to further develop the science.

"It might make sense for a large national university or a large supercomputing center to buy a quantum computer because they're used to managing infrastructure and have the budgets," says Amazon's Moulds.

Enterprises are likely to remain cloud-based users of quantum systems even after labs and governments start bringing them in-house. But once specific and proven-out use cases emerge that might require constant access to a system – for example any continuous logistics optimizations – the likelihood of having an on-premise system in an enterprise data center (or colocation facility) will increase.

While the cloud might always be an easy way to access quantum computers for all the same reasons companies choose it today for classical compute – including being closer to where companies store much of their data – it may also be a useful hedge against supply constraints.

"These machines are intriguing physical devices that are really hard to manufacture,
and demand could certainly outstrip supply for a while," notes Moulds. "When we get to the point of advantage, as a cloud provider, our job is to make highly scalable infrastructure available to the entire customer base, and to avoid the need to go buy your own equipment.

"We think of cloud as the best place for doing that for the same reason we think that it's the best way to deliver every other HPC workload. If you've migrated to the cloud and you've selected AWS as your cloud provider, it's inconceivable that we would say go somewhere else to get your quantum compute resources."
Fridges vs lasers means different form factors
There are a number of different quantum technologies – trapped ion, superconducting, atom processors, spin qubits, photonics, quantum annealing – but they can largely be boiled down into two camps: those that use lasers and generally operate at room temperature, and those that need cooling with dilution refrigerators down to temperatures near absolute zero.

The most well-known images of quantum computers – featuring the 'golden chandelier' covered in wires – are supercooled. These systems – in development by the likes of Google, IBM, OQC, Silicon Quantum Computing, Quantum Motion, and Amazon – operate at temperatures in the low millikelvins. With the chandelier design, the actual quantum chip is at the bottom. These sit within dilution refrigerators: large cryogenic cooling units that use helium-3 in closed-loop systems to supercool the entire system.
While the cooling technology these systems rely on is relatively mature – dilution fridges date back to the 1960s – it has traditionally been used for research in fields such as materials science. The need to place these units in production data centers as part of an SLA-contracted commercial service is a new step for suppliers. Rigetti offers its QPUs both as a standalone unit and as a whole system including dilution refrigerators; the price difference can reach into the millions of dollars.

Other quantum designs – including Atom, QuEra, Pasqal, and Quantinuum – rely on optical tables, manipulating lasers and optics to control qubits. While they remove the need for supercooling, these systems often have a large footprint. The lasers can be powerful – and will grow more so as qubit counts grow – and are cooled by air or water, depending on the manufacturer. Atom's current system sits on a 5ft by 12ft optical table.

IonQ uses lasers to supercool its qubits down to millikelvin temperatures; the qubits sit in a small vacuum chamber inside the overall system, meaning most of the system can operate using traditional technology. The company's existing systems are generally quite large, but it is aiming to have rack-mounted systems available to customers in the next few years.
Rack-based quantum computers?
While the IT and data center industries have standardized around measuring servers in racks and 'U's, quantum computers are a long way off from fitting in your standard pizza box unit. Most of today's low-qubit systems would struggle to squeeze into a whole 42U rack, and future systems will likely be even larger.

First announced in 2019, IBM's System One is enclosed in a nine-foot sealed cube, made of half-inch thick borosilicate glass. Though Cleveland Clinic operates a 3MW, 263,000 sq ft data center in Brecksville, to the south of Cleveland, the IBM quantum system has been deployed at the company's main campus in the city.

Jerry Chow, IBM fellow and director of quantum infrastructure research, previously told DCD: "The main pieces of work in any installation are always setting up the dilution refrigerator and the room temperature electronics. That was the case here as well."

While rack-mounted systems are available now from some vendors, many are lower-qubit systems that are only useful as research
and testing tools. Many providers hope that one day useful quantum systems will be miniaturized enough to be rack-sized, but for now any systems targeting quantum supremacy will likely remain large.

"Some companies are building these very small systems within a rack or within a cart of sorts," says Atom's CTO Tyler Hayes. "But those are very small-scale experimental systems; I don't see that being a typical deployment model at large scale when we have millions of qubits. I don't think that's really the right deployment model."
In the current phase of development, where QPU providers are looking to scale up qubit counts quickly, challenges around footprint remain ongoing.

With supercooled systems, the current trend is to 'brute force' cooling as qubit counts increase. More qubits require more cooling and more wires, which demands larger and larger fridges.

Current technologies, with incremental improvements, could see current cooling approaches scale QPUs up to around 1,000 qubits. After that, networking multiple quantum systems will be required to continue scaling.

IBM is working with Bluefors to develop its Kide cryogenic platform; a smaller, modular system that it expects will enable it to connect multiple processors together. Bluefors describes Kide as a cryogenic measurement system designed for large-scale quantum computing, able to support the measurement infrastructure required to operate more than 1,000 qubits, with a capacity of over 4,000 RF lines and 500kg of payload. The hexagonal-shaped unit is about six times larger than the previous model, the XLD1000sl system. The vacuum chamber measures just under three meters in height and 2.5 meters in diameter, and the floor beneath it needs to be able to take about 7,000 kilograms of weight.

However, IBM has also developed its own giant dilution refrigerator to house future large-scale quantum computers. Known as Project Goldeneye, the proof-of-concept 'super fridge' can cool 1.7 cubic meters of volume down to around 25 mK – up from the 0.4-0.7 cubic meters of the fridges it was previously using.

The Goldeneye fridge is designed for much larger quantum systems than IBM has currently developed. It can hold up to six individual dilution refrigerator units and weighs 6.7 metric tons.

For laser and optical table-based designs, companies are also looking at how to make the form factors for their systems more rack-ready.
"Our original system was built on an optical table," says Yuval Boger, CMO of QuEra. "As we move towards shippable machines, then instead of just having an optical table that has sort of infinite degrees of flexibility on how you could rotate stuff, we're making modules that are sort of hardcoded. You can only put the lens a certain way, so it's much easier to maintain."

IonQ promises to deliver two rack-mounted systems in the future. The company
recently announced a rack-mounted version of its existing 32-qubit Forte system – to be released in 2024 – as well as a far more powerful Tempo system in 2025. Renderings suggest Forte comprises eight racks, while Tempo will span just three.

"Within probably five years, if you were to look at one of our quantum computers you couldn't really tell if it was any different than any other piece of electronics in your data center," IonQ's Chapman tells DCD.

AWS's Moulds notes quantum systems shouldn't be looked at as replacement devices for traditional classical supercomputers. QPUs, he says, are enablers to improve clusters in tandem. "These systems really are coprocessors or accelerators. You'll see customers running large HPC workloads and some portions of the workload will run on CPUs, GPUs, and QPUs. It's a set of computational tools where different problems will draw on different resources."

Rigetti Computing's CTO David Rivas agrees, predicting a future where QPU nodes will be just another part of large traditional HPC clusters – though perhaps not in the same row as the CPU/GPU node. "I'm pretty confident that quantum processors attached to supercomputer nodes is the way we're going to go. QPUs will be presented in an architecture of many nodes, some of which will have these quantum computers attached and relatively close."
Powering quantum now and in the future
Over the last 50 years, classical silicon-based supercomputers have pushed system performance to ever-greater heights. But this comes at a price, with more powerful systems drawing more and more energy. The latest exascale systems require tens of megawatts – the 1 exaflops Frontier is thought to use 21MW, while the 2 exaflops Aurora is expected to need a whopping 60MW.

Today's quantum computers have not yet reached quantum supremacy. But assuming the technology does eventually reach the point where it is useful for even just a small number of specific workloads and use cases, they could offer a huge opportunity for energy saving. Companies DCD spoke to suggest power densities on current quantum computers are well within tolerance compared to their classical HPC cousins – generally below 30kW – and computing power could potentially scale non-linearly relative to power needs.
QuEra's Boger says his company's 256-qubit Aquila neutral-atom quantum computer – a machine reliant on lasers on an optical table and reportedly equivalent in size to around four racks – currently consumes less than 7kW, but predicts its technology could allow future systems to scale to 10,000 qubits and still require less than 10kW. For comparison, he says rival systems from some of the cloud and publicly listed quantum companies currently require around 25kW.

Other providers agree that laser-based systems will need more power as more qubits are introduced, but that the power (and therefore cooling) requirements don't scale significantly. Atom CEO Hays says his company's current systems require 'a few tens of kilowatts' for the whole system, including the accompanying classical compute infrastructure – equivalent to a few racks. But as future generations of QPUs grow orders of magnitude more powerful, power requirements may only double or triple.

While individual systems might not present an insurmountable challenge for data center engineers, it's worth noting many experts believe quantum supremacy will likely require multiple systems networked locally. "I think there's little question that some form of quantum networking technology is going to be relevant there for the many thousands or hundreds of thousands of qubits," says Rigetti's Rivas.
Quantum cloud providers of the future
In less than 12 months, generative AI has become a common topic across the technology industry. AI cloud providers such as CoreWeave and Lambda Labs have raised huge amounts of cash offering access to GPUs on alternative platforms to the traditional cloud providers.

This is driving new business to colo providers. This year has seen CoreWeave sign leases for data centers in Virginia with Chirisa and in Texas with Lincoln Rackhouse. CoreWeave currently offers three data center regions in Weehawken, New Jersey; Las Vegas, Nevada; and Chicago, Illinois. The company has said that it expects to operate 14 data centers by the end of 2023. At the same time, Nvidia is rumored to be exploring signing its own data center leases to power its DGX Cloud service, to better offer its AI/HPC cloud offering direct to customers as well as via the public cloud providers.

A similar opportunity for colocation/wholesale providers could well pop up once quantum computers become more readily available with higher qubit counts and production-ready use cases. "The same cloud service providers that dominate classical compute today will exist in quantum," says Atom's Hays. "But we could see the emergence of new service providers with differentiated technology that gives them an edge and allows them to grow and compete.
“I could see a future where we are effectively a cloud service provider, or become a box supplier to the major cloud service providers,” he notes. “If we are a major cloud service provider on our own, then I think we would still partner with data center hosters. I don't think we would want to go build a bunch of facilities around the world; we'd rather just leverage someone else’s.”
While all of the quantum companies DCD spoke to currently operate their systems (and clouds) on-premise, most said they would most likely look to partner with data center firms in the future once the demand for cloud-based systems was large enough.

"We will probably always have space and we will probably always run some production systems," says Rigetti's Rivas. "But when we get to quantum supremacy, it will not surprise me if we start getting requests from our enterprise customers – including the public clouds – to colocate our machines in an environment like a traditional data center."

Some providers have suggested they are already in discussions with colocation/wholesale providers and exploring what dedicated space in a colo facility may look like. "About two and a half years ago, I did go down and have a conversation with them about it," he adds. "When I first told them what our requirements were, they looked at me like I was crazy. But then we said the materials and chemical piece we can work with, and I suspect you guys can handle the cooling if we work a little bit harder at this.

"It's not a perfect fit at this point in time, but for the kinds of machines we build, I don't think it's that far off."

Quantum data centers: In touching distance?
Hosting QPU systems will remain challenging for the near-to-medium term. The larger dilution fridges are tall and heavy, and operators will need to get comfortable with cryogenic cooling systems sitting in their data halls. Likewise, optical table-based systems require large footprints now and will continue to in future, along with requiring isolation to avoid vibration interference.

"You need a very clean and quiet environment in order to be able to reliably operate. I do see separate rooms or separate sections for quantum versus classical. It can be adjacent, just a simple wall between them," says Atom's Hayes. "The way we've built our facility, we have a kind of hot/cold aisle. We have what you can think of as like a hot bay where we put all the servers, control systems, electronics, and anything that generates heat in this central hall. And then on the other side of the wall, where the quantum system sits, are individual rooms around 8 feet by 15 feet. We have one quantum system per room, and that way we can control the temperature, humidity, and sound very carefully."

He notes, however, that data center operators could "pretty easily" accommodate such designs. "It is pretty straightforward and something that any general contractor can go build, but it is a different configuration than your typical data center."

Amazon's own quantum computers are currently being developed at its quantum computing center in California, and that facility will host the first AWS-made systems that will go live on its Braket service. "The way we've organized the building, we're thinking ahead to a world where there are machines exclusively used for customers and there are machines that are used exclusively by researchers," AWS' Moulds says. "We are thinking of that as a first step into the world of a production data center."

When asked about the future and how he envisions AWS rolling out quantum systems at its data centers beyond the quantum center, Moulds says some isolation is likely. "I doubt you'll see a rack of servers, and the fourth rack is the quantum device. That seems a long way away. I imagine these will be annexed off the back of traditional data centers; subject to the same physical controls but isolated."

Rigetti is also considering what a production-grade quantum data center might look like. "We haven't yet reached the stage of building out significant production-grade data centers as compared to an Equinix," says CTO Rivas. "But that is forthcoming and we have both the space and a sense of where that's going to go.

"If you came to our place, it's not quite what you would think of as a production data center. It's sort of halfway between a production data center and a proper lab space, but more and more it's used as a production data center. It has temperature and humidity and air control, generators, and UPS, as well as removed roofing for electrical and networking.
"It also requires fairly significant chilling systems for the purposes of powering the dilution refrigerators and it requires space for hazardous chemicals including helium-3."

As a production and research site, the data center "has people in more often than you would in a classical data center environment," he says.

IonQ's Chapman says his company's Maryland site is 'mundane.' "There's a room which has got battery backup for the systems; there's a backup generator sitting outside that can power the data center; there's multiple redundant vendors for getting to the Internet," he says. "The data room itself is fairly standard; AC air conditioning, relatively standard power requirements. There's nothing special about it in terms of its construction; we have an anti-static floor down but that's about it."

"Installing and using quantum hardware should not require that you build a special building. You should be able to reuse your existing data center and infrastructure to house one of our quantum computers."

UK quantum firm OQC has signed deals to deploy two low-qubit quantum computers in colocation data centers; one in Cyxtera's LHR3 facility in Reading in the UK, and another in Equinix's TY11 facility in Tokyo, Japan. OQC's systems rely on dilution refrigerators, and the deployments will mark the first time such systems have been installed in retail colocation environments. DCD understands the Cyxtera deployment is close to completion at the time of writing.

The wider industry will no doubt be watching intently. Quantum computers are generally very sensitive – manipulating single atom-sized qubits is precise work. Production data centers are electromagnetically noisy and filled with the whirring of sonically noisy fans. How quantum computers will behave in this space is still, like a superposed set of quantum states, yet to be determined.
Quantum computers face a laser challenge
Georgia Butler Reporter
Could laser-based computing deliver the best of both classical and quantum computing?
For many of us, computers have always been a series of zeros and ones. There was one type of compute - it was bits and bytes and we learned to accept that. It was pure - simple, logical.
When quantum computing entered the conversation, we were panicked. What on earth was a qubit? Could we think, instead of zeros and ones, in terms of values which are simultaneously both, and not, and either or, zeros and ones, all at the same time? Surely not? Ruti Ben Shlomi spent her academic years doing just that. Creating quantum computers from the ground up, and ultimately deciding that a different solution was worth pursuing. After meeting future CTO Chene Tradonsky at the Weizmann Institute of Science, the two realized that laser-based computing had a place in the conversation, and decided to establish a startup. Founded in 2020, LightSolver has received investment from TAL Ventures, Entree Capital, IBI Tech Fund, and Angular Ventures. The laser-based computer itself is small - around the same size as a traditional desktop computer and looks similar to a standard server - and operates with low power requirements and at room temperature. For the time being, a virtual LPU is available at an ‘alpha’ stage, while
the LPU physical units are expected to be available for order at the end of this year.

"We are developing a full-stack computer. We have three layers, so we know how to take the problem from the client side, translate it with our algorithm developers, and take it to the laser," explained Ben Shlomi.

The way she describes it - and I sense somewhat simplified for this writer - is as a mathematical model (a quadratic unconstrained binary optimization problem, also known as QUBO) with "recipes" telling it how to talk to the laser.

"Once we have this matrix with all the spins and the connections, we know how to blow them onto an electro-optical device that we have inside the system, and then we turn on the lasers. The lasers, starting with one inside the very small cavity, interfere with each other, computing and processing the problem and converging very fast on a solution, such that then we just need to detect it with a very simple camera and we can translate it back to the client side."

The solution is particularly unusual due to the lack of electronics built inside. This also means that the system can be compact, with low power requirements and the ability to operate at room temperature; notably different from the quantum computers which need to be kept close to around -273.15°C, also known as absolute zero. According to Ben Shlomi, the lack of electronics also means the computer is faster, as it isn't limited by the speed at which those electronics can function.

The solution is described as quantum-based, and certainly some "elements" of the process do recall that of a quantum computer, but a glimmering similarity is about as far as this goes. In fact, the computer is completely classical, explained Ben Shlomi when asked how it is different from a photonic quantum computer.

"We're not using single photon cells, we are using lasers and completely simple diodes - we are using these lasers which are classical wave equations, and they are all interfering. The system does have superpositions - which allow things to exist in multiple 'states' at once, but this isn't exclusive to quantum computers. In this case, it is a direct effect of the two lasers imposing on each other."

There are also some interference effects, but the laser-based processing unit (LPU) transfers these into a bit-language.
When it comes to computing you are always looking for a solution. Often, the first solution arrived at is not "optimal," and different types of computers have different ways of refining and improving that.

"So, with classical computing, you can add noise and then you can overcome the barrier and continue the search and hopefully, you will find a better solution than the one that you found before," said Ben Shlomi. "Quantum-wise, maybe you can tunnel it out. That's what the quantum computers are counting on. But we [laser-based computing] are not tunneling out because we don't have entanglement, we don't have quantum effects, but we do have superposition.

"What does that mean? It means that in a sense, all the lasers are interfering with each other and they are scanning all the possibilities at once in a single shot. Because they can interfere where electronics can't. You can't take 1,000 wires with all the connections and process a problem like that, but with lasers you can do it in a single space domain."

Because of this, the LPU is good for NP-hard (non-deterministic polynomial-time hard) problems - in other words, extremely complex problems that are tough to solve, and where it is hard to verify whether the solution found is the best one. It can also be used for more traditional or simplistic compute problems. But, as explained by Ben Shlomi, if this is all you need to do, it is far more logical to just use a normal computer. There is such a thing as using technology which is over-qualified for the problem.

An example of an NP-hard (and combinatorial optimization) problem that the LPU may be able to solve is the Travelling Salesman Problem (TSP). A long-referenced mathematical calculation, the TSP is solved by finding the quickest route for a traveling salesman who has to visit x cities of varying distances from one another, while only visiting each city once.
If x was three cities, this would be relatively simple. We could all figure out the most logical way of traveling from San Francisco to LA and then to Texas, and back. But when the number of cities starts to increase, the possibilities become far too many to calculate in a reasonable amount of time - or as Ben Shlomi says - "try and do it on a GPU and it will take you till the edge of the universe."

The LPU is ideal for dealing with a situation like this, because of the superposition effect of the lasers. It can compute several variations simultaneously. "It [the TSP] is exactly a problem that you can upload to the LPU. What's so unique about that is it will scan all the possibilities. Even if you're talking about 50 cities, it just means that we need 2,500 spins because it's 50 squared, and 2,500 is a number that we can reach and converge on a solution very fast, not the age of the Universe."

Ultimately, the future of computing is still somewhat up in the air. Progress is being made on quantum computers every day, and classical computers continue to get more powerful even as the breaking point for progress seems to creep closer and closer.

Despite being a classical computational solution, LightSolver's competition comes mainly from quantum computers - which some of the biggest names in computing are exploring, including IBM - and, more specifically, photonic quantum computers. The LightSolver solution is not a photonic quantum computer - Ben Shlomi made that very clear - but it is quantum-inspired, and there are obvious similarities between the laser-based processing unit and photonic quantum solutions such as those offered by Orca Computing, PsiQuantum, and Xanadu Quantum Technologies.

Many of those solutions require the quantum computer to be kept in an absolute zero environment to operate, but others have begun to overcome this problem. IonQ, a trapped ion quantum computer company, uses laser Doppler cooling which targets individual atoms, meaning that the computer can be operated at room temperature. Similarly, Quantum Brilliance successfully established a quantum accelerator that can be deployed in room-temperature environments, in this case by using nitrogen-vacancy centers in synthetic diamonds, effectively creating a defect in the diamond's structure and enabling the use of its photoluminescence capability to read the qubits' spin states.

As for whether laser-based computing or quantum computing will have continued success and longevity, that will depend on the ability of both to scale quickly, and affordably.
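Ben Shlomi's arithmetic - 2,500 spins for 50 cities - comes from the standard way the TSP is written as a QUBO, with one binary variable per (city, step) pair. The sketch below is a generic textbook formulation, not LightSolver's own toolchain, but it shows why an n-city tour turns into an n-squared-sized matrix of couplings.

```python
# A minimal sketch, not LightSolver's actual formulation: encoding the
# Travelling Salesman Problem as a QUBO, the matrix form described above.
# Binary variable x[(i, t)] = 1 means "city i is visited at step t", so an
# n-city tour needs n*n variables ("spins") - 2,500 for 50 cities.
import itertools
import numpy as np

def tsp_qubo(dist: np.ndarray, penalty: float) -> np.ndarray:
    """Build an (n*n)x(n*n) QUBO matrix Q so that minimising x^T Q x over
    binary vectors x encodes the TSP on the given distance matrix."""
    n = len(dist)
    idx = lambda city, step: city * n + step  # flatten (city, step) -> index
    Q = np.zeros((n * n, n * n))

    # Objective: distance between cities visited at consecutive steps.
    for i, j in itertools.permutations(range(n), 2):
        for t in range(n):
            Q[idx(i, t), idx(j, (t + 1) % n)] += dist[i, j]

    # Constraints as penalties: each city used exactly once, and each step
    # hosts exactly one city. (sum(x) - 1)^2 expands to -x_a + 2*x_a*x_b terms.
    groups = [[idx(i, t) for t in range(n)] for i in range(n)]   # per city
    groups += [[idx(i, t) for i in range(n)] for t in range(n)]  # per step
    for group in groups:
        for a in group:
            Q[a, a] -= penalty
        for a, b in itertools.combinations(group, 2):
            Q[a, b] += 2 * penalty
    return Q

# Four cities already produce a 16x16 matrix; 50 cities give 2,500x2,500.
dist = np.array([[0, 2, 9, 10],
                 [2, 0, 6, 4],
                 [9, 6, 0, 3],
                 [10, 4, 3, 0]], dtype=float)
print(tsp_qubo(dist, penalty=50.0).shape)  # (16, 16)
```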
Inventing the mobile phone
How a youthful passion for engineering led Marty Cooper to create one of the world's most important inventions
Paul Lipscombe Telco Editor
"I
f it wasn’t me, it would have been somebody else,” Martin Cooper, otherwise known as Marty, tells DCD.
Except it wasn't somebody else: it was Cooper who invented the first ever portable cell phone. His legacy is particularly poignant this year, which marks the 50th anniversary of the first call he made from the portable device he designed.
Beginnings
Born in 1928 in Chicago to Ukrainian Jewish parents, as a child Cooper only wanted to take things apart and fix them.

"From my earliest memories, I've always known I was going to be an engineer," he tells DCD. "I always loved to take things apart, and occasionally I actually put it back together again. I had an impulse to understand how everything works, so it was no surprise that I went to a technical high school."

After graduating from the Illinois Institute of Technology (IIT), he served as a submarine officer in the Korean War. Then his drive to do engineering took him into the mobile telecommunications world, working for the Teletype Corporation and joining Motorola in 1954. By 1973, he was still at Motorola, leading a team of engineers working on truly portable mobile communications.
A passion for engineering
Almost a hundred years earlier, Alexander Graham Bell invented the original landline phone, but now Motorola was competing with Bell's legacy, formally known as the American Telephone and Telegraph Company, and more commonly known today as AT&T.

At that stage, Cooper explains, the Bell System had a monopoly over telephone communications. Mobile communications were arriving, and AT&T wanted to extend that monopoly.

"If you wanted to have a telephone, you got it from the Bell System. And they announced that they had created a cellular telephone. The nature of this telephone was that instead of being connected by a wire, so that you are trapped at home, you could now have a car telephone - so you're going to be trapped in your car. I didn't really see this as an improvement."

While Bell had control of the carphone market in
the States at the time, Cooper dreamed of untethered devices.

"Tactically, Bell said they were the only ones in the world competent to do it and therefore wanted a monopoly. And, of course, Motorola objected to that mostly because the Bell System would probably take over their businesses as well as the telephone.

"It looked like they'd be the only provider, so at that point, I decided that the only way we were going to stop them is to show what the alternative was and open up the market," explains Cooper. "Their view of the current telephone was that there are very few people that wanted it and that there really wasn't enough business there. But my view was that someday when you were born, you would be assigned an individual phone number."

Taking on the competition
Cooper and Motorola were on a mission. He and his team of engineers wanted to innovate and show that phones could be portable, and the company was determined not to allow AT&T the opportunity to dominate the market.

AT&T in the 1970s had an arrogance about it, Cooper feels, as the giant dominated the telecommunications market at the time. The Federal Communications Commission (FCC) looked set to grant AT&T the spectrum it would need to continue its hold, with a hearing scheduled for 1973. Motorola and Cooper were up against the clock to prove that there were other engineers capable of making serious contributions to the telecoms market in this era.

In his mind, Motorola had to go big and deliver something "spectacular" to show the regulator an alternative.

He recalls telling Rudy Krolopp, head of Motorola's industrial design group, to design a portable cell phone. Krolopp wasn't too sure what one of those was, says Cooper, but saw the project as exciting and jumped on board.

The phone
After a frantic few months, Cooper and his team put together a mobile phone, the DynaTAC (Dynamic Adaptive Total Area Coverage). It was 23cm (9 inches) tall and weighed 1.1 kg (2.5 pounds). It allowed 35 minutes of talk before its battery ran down. That sounds paltry today, but was revolutionary for its time.

But developing a piece of technology hardware is one thing, getting people to accept it is another challenge.

So on April 3, 1973, Cooper and Motorola arranged to showcase the DynaTAC in New York City. Initially booked in to appear on CBS Morning News, the team was left disappointed when, at the last minute, the news station canceled. Instead, Cooper's PR team set him up with a local radio station.

Cooper spoke about the importance of carrying out the phone call outside to best paint the picture for his vision - a portable phone that can be used on the move, anywhere, while live on the radio. He met the reporter outside a Hilton hotel.

In the build-up to the call, all he could think about was if the phone would actually work. He weighed up his options about who to call first, before deciding to dial Dr. Joel Engel, who worked at rival AT&T.

"I never thought very much about who I was going to call, but at the last minute I was inspired to call a guy that was running the Bell System Program," says Cooper. "So I reached in my pocket and pulled up my address book, which does give you a hint of what primitive times they were, then looked up Joe's number and I called him, and remarkably he answered the phone. I said: 'Hi, Joe. This is Marty Cooper. I'm calling you from a cell phone, but a real cell phone, a personal, portable, handheld cell phone.'"

He jokes that Engel, who viewed Motorola as an "annoyance and hindrance," has since told him that he doesn't remember the call. "I guess I don't blame him. But he does not dispute that that phone call occurred," he laughs.

The following years
Despite the breakthrough of this phone call, it was only a demonstration and not the finished article.

In fact, it would take a decade more of development before Motorola introduced the DynaTAC 8000x in 1984. Despite its price of $3,995, the phone was a success.

Later that same year, Cooper left Motorola, the company he had worked at for 30 years, to set up Cellular Business Systems, Inc. (CBSI), a company that focused on billing cellular phone services. A few years later, Cooper and his partners sold CBSI to Cincinnati Bell for $23 million, before he founded Dyna with his wife, Arlene Harris. Dyna served as a central organization from which they launched other companies, such as ArrayComm in 1996, which developed software for wireless systems, and GreatCall in 2006, which provided wireless service for the Jitterbug, a mobile phone with simple features aimed at the elderly market.
Legacy? That mobile is for everyone
It's impossible to know where the mobile industry would be were it not for Cooper. He's pretty humble when it comes to talking about his legacy. Instead, he'd rather praise the work of his colleagues. He doesn't believe it's his legacy. He says the mobile phone is for everyone.

"What is my legacy? Well, people are very nice to me, but maybe that's because I'm so old," he jokes at 94. "But it took a long time for people to realize the impact of the cell phone on society. There are more mobiles in the world today than there are people.

"For most people, the mobile is an extension of their personality. For us people in developed countries we often don't realize that in certain countries, the mobile is the basis of people's existence. Their very first phones were not wired phones, they were mobile phones, so we really changed the world so much that people now can't always accept that the phone hasn't always been there."

Going back to the battle with AT&T to demonstrate the first mobile phone, he says that if he wasn't successful a very different path in the industry would have been carved.

"Oh, I have no doubt in my mind that had Bell succeeded they would have built a system that was designed specifically for
cars,” he states. “If they had built a system designed specifically for cars, it would have been at least 10 or 20 years longer before we would have had portables. Because car cell phone deployments just didn’t make any sense.” Cooper accepts that if he hadn’t invented the mobile phone, someone else would have done so. But he is understandably glad that it was him, and Motorola, that did so.
Do not fear failure
As with all creations, there are challenges, and this was something that Cooper faced a lot during his career.

The battle to prove that Motorola could provide competition to AT&T was arguably the toughest challenge of them all.

It wasn't easy to convince the FCC that there were other options, he says, adding that Motorola had its own issues, from the direction the business was headed in, to competitive threats, internal disagreements, and regulatory policies.

Cooper admits it wasn't always easy and that he had failures while at Motorola, but explains that the company, which was founded by Paul Galvin and his brother Joseph, provided a lot of encouragement and support.

"The wonderful part about Motorola was the motto that the founder of Motorola, Paul Galvin, instituted. His motto was, 'Reach out, do not fear failure,' and I took that very seriously," he says.

"If you think about it, trying to build a portable telephone at that time, I was really sticking my neck out, and of course, the company stuck their neck out too.

"Between 1969 and 1983, Motorola spent more than $100 million - and this is back then, so they really bet the company on this idea. I was really fortunate having a company that accepted failure and I took a lot of chances and had a few failures myself, and the company tolerated me for 30 years. It was one of the luckiest things I ever did in my life."

Source for good
While some say society is now too reliant on mobile phones, Cooper isn't concerned. Instead, he sees the mobile phone as a source of good, and one that can help to eliminate poverty in some parts of the world. He claims that mobile phones have helped developing countries stay connected, providing new opportunities.

"It turns out that the cell phone does a lot of things that can make us more productive and also improve our ability to distribute the wealth that we create now. So I think the most important thing short term that the mobile phone is doing is the elimination of poverty."

As with everything, with the good comes some bad, and there are definitely some negatives around mobile phones, notably scammers and a rise of cyberbullying, though this could largely be seen as a social media problem, as opposed to a mobile phone issue. There are also other security issues that can impact mobile phone users, which again can be found on desktops as well.

But that ties into Cooper's other passion: education. He told DCD that he wants students at school to be educated about the mobile phone.

"I'm currently working on a committee, and one of the things that I'm suggesting is that every student has full-time access to the Internet, and the only way to do that is with a smartphone," he says.

"But the pushback that I get is 'no these kids are at school and they're going to be distracted.' This doesn't wash with me, as there have always been distractions in the past, long before mobile phones were around."

A glimpse into the future
Although he says the industry has come a long way so far, Cooper is adamant that we've barely touched the surface of what mobile can offer us.

"I have to say that it's my view that we're just beginning the mobile phone revolution."

He understands the need for 5G, 6G, and other future developments, such as the Internet of Things (IoT), but it's actually the Internet of People he's fascinated about. This vision ties in with what he expects future smartphones to look like, where the human is very much the device, with built-in sensors. He doesn't give a timeframe for this, but says that he expects future phones to be driven by artificial intelligence.

"At some point, and it's going to be a few generations from now, everybody will have sensors built into their body. And before you contract a disease, the sensors will sense this. There is the potential to save lives and eliminate diseases, mostly because we have people connected."

Using AI, future mobile phones will examine our behavior, adds Cooper, noting that we should embrace "making the cell phone part of our personality and not just a piece of hardware." As for future form factors, it won't be a flat piece of glass, says Cooper, but likely embedded in some way into our body.
Team effort
Concluding our conversation, Cooper was insistent that he shouldn't get sole credit for the birth of the mobile phone.

"I didn't do this all by myself. It took hundreds of people to create the mobile industry as we know it today and, in fact, even when I conceived of the first mobile telephone, I knew that it couldn't be done alone.

"I studied all the technologies, but it took a team of really competent engineers to put that first mobile phone together. So I feel very good about the fact that I've made a contribution to this.

"But by no means am I the only person that created this."
Charting the history of the mobile phone
Cooper's legacy in the telecoms world is felt wherever you look. Anybody with a mobile phone is effectively using something that he helped bring to life five decades ago.

Ben Wood sees that very clearly. Alongside his day job as chief of research at analyst CCS Insight, he is a telecoms historian, and co-founded The Mobile Phone Museum, an online site that shows more than 2,700 phones from different eras, from the earliest models in the early '80s up to today's creations.

Wood started his career at UK telco Vodafone in the 90s, and has evolved into one of the industry's most recognizable analysts. But he's quick to tell DCD that none of this would be possible without Cooper.

"He is a legend," Wood says. "It's easy now to look back and think how obvious it is just to chop the cord off a fixed line
phone and take it wherever you went. But people at the time just didn't understand that concept. Even in his nineties, he’s still a fantastic ambassador and evangelist for mobile technology.” Like Cooper, Wood wants to educate: “With the Mobile Phone Museum, we go into schools and speak to students about the importance of mobile phones and connectivity. It’s been amazing so far.” Head teachers tell him it is “refreshing to have someone come in and talk about phones in a positive light,” he says, noting that “they have given people more freedom, security, and have been massively important in terms of design and engineering.” If Cooper is the creator of the cell phone, Wood is his disciple, spreading the gospel of the portable communicator.
Sebastian Moss Editor-in-Chief
Photograph by: Sebastian Moss
All of the above
Aalyria's double-pronged plan to light up the world
As the heavens fill with man-made celestial bodies, there exists an opportunity to rewire the world. Except, this time, we won't need wires.
As Google became an intrinsic part of every Internet journey, the company faced an enviable problem - it was running out of fresh Internet users to monetize. Flush with cash and ambition, the company kicked off multiple projects to connect the world. Some, like Google Fiber, have stalled. Others, like ground-based free space optics Project Taara, are still ongoing. Yet more, like a plan to connect the world by balloon, came crashing back to earth and were shuttered.
But from their ashes, a new company hopes to help finish the job. Last year, Google spun out the new startup Aalyria to build on two technologies that will help connect the world. To understand more, we spoke to the CTOs of each effort.
As we seek to connect billions more people, and increase coverage for the already connected, the telecoms industry will have to look beyond cell towers and fiber to moving assets in the skies.

"Altitude allows you to trade for population density," Spacetime CTO Brian Barritt explained. "The lower a transmitter is, the more the megabits per second, which is good for tighter population densities. You raise that transmission higher, you cover more area, but it's the same megabits per second - so now it supports a lower population density.

"Population densities are clumpy, irregular, and they span five orders of magnitude across the Earth's surface. So I think it lends itself to a world that will have five orders of magnitude of altitude. Think of towers, high-altitude platform stations (HAPS), Low Earth Orbit (LEO), Medium Earth Orbit (MEO), and Geostationary Earth Orbit (GEO).

"At some point, we will need all of the above."
A planetary SDN
Routing between all of these layers of moving objects introduces an entirely new level of complexity, especially when you add in providing connectivity to airplanes and ships.

"The whole idea behind Spacetime was: what if the network fabric didn't exclusively include just static physical links? What if it was the set of all possible links that can exist across space and time?"

Built out of the Minkowski routing system for Loon, Aalyria envisions a network of satellites and other moving objects coming in and out of connectivity with each other, ground stations, and user terminals. The network is no longer a fixed asset, but one that changes both in location and over time.

"They're all changing their position materially to the network. The possibilities can be massive, a giant web-scale graph of tens of millions of possible pairings," Barritt said.

Along with trying to work out the best route, Spacetime can configure the pointing and steering of steerable antennas as well as configure radios and channels to create alternative physical links. "So, out of the set of all the possible things that could exist in the universe of the network, which physical links should be brought into existence at what time interval?"

To work this all out is "very computationally intensive," Barritt said. The company is constantly calculating the current and potential routes of its customers,
tweaking if the weather forecast predicts rain in regions where it would otherwise rely on frequencies that are sensitive to precipitation.

"We model the orbital properties of the satellite, calculate the parameters of motion, and do wireless signal propagation modeling across different atmospheres," Barritt said.

Where possible, it tries to peer into the future, helping pre-schedule handovers and routing changes. "But when you have an unexpected thing, like someone suddenly asks for a new route between two nodes or suddenly removes a satellite from the network, the solving engine doesn't have to model everything, just the [new element]."

For satellites, where orbital routes are usually pretty predictable, the system can plan far in advance. When trying to connect to a human-piloted aircraft, it can see a couple of seconds ahead - "just enough time before the aircraft's wingtip breaks the beam, we can light up the new path."

This, Barritt believes, will help lead to "a fundamental shift... from a geostationary network to one where the infrastructure itself is moving." But to do that will also require new hardware capable of sending large quantities of data across vast distances.
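Spacetime's solver is proprietary, but the underlying idea - picking which feasible links to bring into existence in each time interval - can be illustrated with a toy time-expanded graph. In the sketch below every asset name, visibility window, and weight is invented for illustration; a real system would model orbits, weather, and antenna steering rather than a hand-written table.

```python
# A toy illustration of the idea described above - not Aalyria's Spacetime
# engine. Nodes are (asset, time-slot) pairs and an edge exists only in the
# slots where a link between two assets is physically feasible (visibility,
# weather, antenna pointing). Routing then becomes a shortest-path search
# over this time-expanded graph.
import networkx as nx

# Hypothetical feasibility windows: (asset_a, asset_b) -> time slots in which
# a beam between them could be closed.
feasible = {
    ("ground_london", "leo_sat_1"): [0, 1],
    ("leo_sat_1", "leo_sat_2"):     [1, 2],
    ("leo_sat_2", "aircraft_7"):    [2, 3],
    ("ground_london", "geo_sat"):   [0, 1, 2, 3],
    ("geo_sat", "aircraft_7"):      [3],
}
slots = range(4)

G = nx.DiGraph()
for (a, b), windows in feasible.items():
    for t in windows:
        # Crossing a link takes one slot; the weight could encode latency or rain fade.
        G.add_edge((a, t), (b, t + 1), weight=1.0)
        G.add_edge((b, t), (a, t + 1), weight=1.0)
for node in {asset for pair in feasible for asset in pair}:
    for t in slots:
        # An asset may also simply hold traffic and wait for the next slot.
        G.add_edge((node, t), (node, t + 1), weight=0.1)

# Which links should exist, and when, to get traffic from London to the plane?
path = nx.shortest_path(G, ("ground_london", 0), ("aircraft_7", 4), weight="weight")
print(path)
```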
Seeing the light
Aalyria's second grand project predates its time at Google, dating back to the turn of the century.

"As far back as 2002,
Lawrence Livermore Labs were flying ER-2s [high-altitude planes] with ground-to-air connectivity at 20 gigabits," Tightbeam CTO Nathan Wolfe said. "They spun out of Lawrence Livermore and became a group called Sierra Photonics and, ultimately, were acquired by Google [in 2014]."

The company realized that while what would become Spacetime would help connect assets like Loon intelligently, "a brain without connectivity solution isn't necessarily quite as useful," Wolfe said. "We had to move bytes from one point to the other, at Google-scale speeds. That's a very loaded word - because, until you work at Google, you really just don't understand what scale is."

What the team at Livermore, and then Google, were working on was free-space optics - essentially lasers beamed through the surrounding medium, rather than something like fiber. Companies and researchers have
long seen free-space optics as a way to dramatically improve global connectivity, but most approaches have relied on noncoherent light - including Google's ongoing Project Taara - with data encoded on the amplitude of the light source sending the signal.

This is where Aalyria thinks it could change the game, with a coherent light system where data is encoded on the phase. "With noncoherent, you're counting on a lot of amplitude for one and very little amplitude for zero, but the atmosphere likes to play with it - just look at the stars and see them twinkle," Wolfe said.

With coherent light, Aalyria's Tightbeam system is "capable of running 400 Gig channels today - if we need it to, we can run four channels at any given time. We can go four 100 Gig channels, or we can go one 400. Or, ultimately, we could go four 400 Gig channels, which would be a 1.6 terabit system full duplex - marketing would call that 3.2 terabits."

The company currently operates a test link between its facility in Livermore and Mount Diablo. "That's a 60-kilometer roundtrip link and at night we close that link on about 125 milliwatts," Wolfe said. "To put that into context, the nightlight in your hallway is typically a two-watt light, so about 1/16 of the power your nightlight puts out to close 100 gigabit, packet error rate near zero."
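A rough way to see why phase encoding tolerates a 'twinkling' atmosphere better than amplitude encoding is to simulate both under random power fading. The sketch below is a toy numerical illustration, not a model of Tightbeam's modem: it compares simple on-off keying with binary phase-shift keying under the same multiplicative scintillation and noise.

```python
# A back-of-the-envelope illustration of the point above, not a model of
# Tightbeam itself: atmospheric scintillation jitters the *amplitude* of the
# received beam, so on-off keying (data in amplitude) suffers, while a scheme
# that stores data in the *phase* (here, simple BPSK: 0 or pi, i.e. +1/-1) is
# largely indifferent to those power swings.
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=10_000)

# Random multiplicative fading, standing in for the "twinkle" of the beam.
scintillation = rng.lognormal(mean=0.0, sigma=0.6, size=bits.size)

# On-off keying: amplitude 1 for a one, 0 for a zero, then fade + noise.
ook_rx = bits * scintillation + rng.normal(0, 0.15, bits.size)
ook_decoded = (ook_rx > 0.5).astype(int)

# BPSK: symbol +1 or -1, faded in amplitude only, then the same noise.
bpsk_rx = (2 * bits - 1) * scintillation + rng.normal(0, 0.15, bits.size)
bpsk_decoded = (bpsk_rx > 0).astype(int)

print("OOK  bit errors:", np.count_nonzero(ook_decoded != bits))
print("BPSK bit errors:", np.count_nonzero(bpsk_decoded != bits))
```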
As the name Tightbeam suggests, free-space optics is also a straight-line connection. Unlike radio frequency (RF), which sends data across a wide arc, free-space is both more efficient and harder to intercept. The downside is that, at least for now, it's a one-to-one system, with one Tightbeam aperture talking to another aperture at any given moment.

"If you want to go from an airplane to an airplane, you have to have one aperture to talk to the ground and another aperture to talk to something else," Wolfe said. "There are ways to do phased array and multi-aperture approaches, but those don't tend to work with coherent light."

However, with some of the project's roots in Loon, Wolfe noted that the apertures had to be developed with the possibility of "crashing on a farm in Kazakhstan where you can't get it back, so it had to be at a cost point where we didn't care."

While at Google, the team "contemplated testing it inside the data center," Wolfe said. "For example, to connect different data halls with multi-terabit fiber takes time.

"Or inter-building connectivity or connectivity to the metro area. Maybe not always as primary, but it's certainly a reasonable backup to have in place."

The company may still explore that market, but is primarily focused on a broader goal: "We'd love to break the economics of connectivity, especially ground-to-space," Wolfe said. "If you look at the architecture of ground stations today, to get 60 gigabits to a GEO or MEO orbit requires three or four different ground stations that are located tens or hundreds of kilometers apart… Those ground stations cost about $20 million to build plus 10 percent of the cost annually to maintain.

"You could eliminate the majority of them and go to optical terminus, and increase your data rate, potentially up into the terabit range."

The company hopes to do for connecting satellites to the ground what SpaceX did for getting satellites into space, announcing a deal with satellite fleet operator Intelsat to create "subsea cables in space," capable of hundreds of gigabits per second.

With new LEO mega-constellations like Starlink, "you could put billions more people onto the Internet, but the problem is you've got to be able to feed that constellation," Wolfe said. "Right now, those constellations are limited by the RF connectivity that the ground stations can deliver to those constellations; we're talking about lifting that connectivity ceiling. Imagine if you could get a gigabit into space - what would that do to global connectivity?"
Photography: Sebastian Moss
AMD's next move
CTO Mark Papermaster on the company's CPU and GPU data center strategy
Sebastian Moss Editor-in-Chief
The battle for the heart of the data center is heating up.
Once, the story was a simple one, of a server market dominated by Intel's x86 CPUs, steadily refreshed in line with Moore's Law. But, as is the way of things, a monopoly bred complacency, slowing innovation and technological progress.

That changed in 2017, when AMD aggressively returned to the server market with the Epyc processor line. Now the company faces threats of its own - on the CPU side from a reinvigorated Intel and a slew of Arm competitors. Over on the GPU side, where it has long played
second fiddle to Nvidia, it has watched as its rival exploded in popularity, selling A100 and H100s by the ton. At the company’s data center summit this summer, we caught up with AMD’s CTO Mark Papermaster to discuss its war on multiple fronts. There, the chip designer’s big announcement was the Epyc 97X4 processor line, codenamed Bergamo. Using the new Zen 4c architecture, a 'cloud-native' version of Zen 4, Bergamo features a design that has a 35 percent smaller area, and twice as many cores, with the chips optimized for efficiency rather than just performance. "We had to knock out an incumbent that
had over 95 percent market share, you don't do that by saying 'I have a more efficient solution,' you have to knock them out by bringing in a more performant solution. But we did that. Now, we are able to add an efficient throughput computing device to our portfolio."

The company claims that hyperscalers have begun buying the new processor family at scale, drawn in by the cost savings of more energy-efficient chips.

“At the end of the day, customers make decisions based on their total cost of ownership - they look at the compute they’re getting, what power they are spending, the floor space that they have to dedicate to their server, and that’s where we believe we have a significant advantage versus competitors,” Papermaster said. “If we hadn't designed for this point, we would have left that open. But we believe with Bergamo, we have a compelling TCO story versus Arm competitors.”

Papermaster defended x86 against Arm, which is pitched as being more efficient. “People think ‘oh, Arm is inherently more efficient, Arm is always going to have a much, much smaller core,’” he said. “But the biggest thing is the design point that you optimize for - if you take Arm and you optimize for the high-performance design point that we have in [AMD CPU] Genoa, and you had simultaneous multithreading, support for instructions like 512 width vectors and neural net support, then you're gonna grow the area significantly.

“We went the other way - we had the high-performance core. And we said for cloud-native, let's optimize at a different point of the voltage and frequency curve, but add more cores.”

He added: “I think this will put a tremendous challenge in front of our Arm competitors.”

Beyond the CPU, AMD has also tried to compete in the accelerator space, operating as a distant second GPU designer. As generative artificial intelligence became the biggest story of the year, Nvidia has dominated headlines, wooed investors, and broken sales records.

“Right now, there's no competition for GPU in the data center,” Papermaster admitted. “Our mission in life is to bring competition.”

That mission begins with the hardware, with AMD announcing a generative AI-focused MI300X alongside its more general-purpose AI and HPC version, the MI300A. “Will there be more variants in the future?” Papermaster posited. “I'm sure there will.”

But hardware only gets you so far, with Nvidia’s dominance extending to a broad suite of software used by AI developers, notably the parallel computing platform CUDA.

“Our approach is open, and if you run in their stranglehold we can port you right over, because we're a GPU,” Papermaster said. “We have a portability tool that takes you right from CUDA to ROCm.”

ROCm doesn’t support the full CUDA API, and portability mileage might vary based on the workload. Developers still attest that CUDA is superior to a ROCm port, despite Papermaster’s claims.

“You have some tuning you have to do to get the best performance, but it will not be a bottleneck for us,” Papermaster said, noting that most programmers are not writing at the lowest level, and instead primarily use PyTorch.
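That framework-level portability is easy to sketch. The snippet below is not AMD's own porting tool (the tool Papermaster references is presumably AMD's HIPIFY/ROCm toolchain); it simply illustrates why code written against PyTorch tends to move across without changes - ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda namespace that CUDA builds use.

```python
# Device-agnostic PyTorch: the same script targets an Nvidia GPU on a CUDA build
# or an AMD Instinct GPU on a ROCm build, because ROCm is surfaced via torch.cuda.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"   # "cuda" is also how ROCm devices appear
backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"

model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(8, 4096, device=device)
y = model(x)
print(f"ran on {device} ({backend if device == 'cuda' else 'CPU'}), output {tuple(y.shape)}")
```

Hand-written CUDA kernels are where translation tools and per-architecture tuning still come in - the "some tuning" Papermaster concedes above.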
AMD is also in the early stages of using AI to inform the future of its own chip design.

“We have created an AI group within the company, which is identifying applications that could benefit from both predictive AI as well as generative AI. In chip design itself, we're finding that generative AI can speed our design processes in how we place and route the different elements and optimize the physical implementation.

“We're finding that it is speeding our verification on those circuits, and even our test pattern generation, because you can run a model and it'll tell you the fastest way to create accurate test patterns. We're also using it in our manufacturing, because we have analyzed all the yield data when you test our chips at our manufacturing partners, and are identifying spot areas that might not be at the most optimum productivity point.”

How deeply AMD embraces AI is yet to be seen, and it's equally unclear how long the current AI wave will last. “Our determination is AI is not a fad,” Papermaster said, knocking on wood.

To meet this computing momentum requires AMD and its competitors to fire on all cylinders. “You have to have a balanced computer,” Papermaster said. “We have to attack all the elements at once. There's not one bottleneck: Every generation, what you see us do is improve the Compute Engines, improve the bandwidth to memory, the network connectivity, and the I/O connectivity.

“We are big believers in a balanced computer. As soon as you get fixated on one bottleneck, you're screwed.”
Advertorial
The Challenges of Safeguarding Data Center Infrastructure from Cyber Attacks
As data centers grow in scale and complexity, so do the security challenges they face. Among the various security challenges data centers regularly address, the threats to Operational Technology (OT) and Industrial Control Systems (ICS) - otherwise known as the Data Center’s infrastructure - have emerged as a serious concern.

Ransomware attacks on data centers can trigger extended shutdowns, potentially impacting the operational integrity of mechanical and electrical equipment in OT. Reports and surveys of data center operators show outages caused by cyber incidents are increasing year over year.

In data centers, ICS play a crucial role in managing the Building Management Systems and Electrical Management Systems, which oversee cooling, power distribution, access control, and physical security. The convergence of OT and ICS with traditional internet-facing IT systems and cloud platforms introduces vulnerabilities that malicious actors can exploit. Additionally, any possibility of a breach via an internet-facing DCIM interface represents a very high risk for the data center, as the DCIM has direct access and control over these critical OT systems.

The interconnected and digital nature of data center systems increases the risk of a cyber attack propagating from the internet-connected Enterprise network and affecting multiple core components simultaneously. To mitigate these risks, data center operators must implement robust cybersecurity measures, such as fully segmenting OT networks from IT and regularly updating or patching the OT systems (albeit cautiously, after thorough testing).
However, managing cybersecurity for these infrastructures is different from managing cybersecurity for information systems. While problems with a new software version or security update can be “backed out” to preserve uptime, an impairment to high-voltage transformers or compromised cooling systems cannot be restored from backups and will create an immediate outage at any data center.
Engineering Grade Protection
All this forces the conclusion that the physical infrastructure of data centers is more of a network engineering domain than an information processing domain. While IT strategies generally rely on software-based solutions to deal with existing attacks, network engineering strategies use engineering-grade protections to prevent cyber attacks from entering data center OT networks in the first place.

The network engineering approach includes a number of engineering-grade tools for the prevention of cyber attacks from entering OT networks, but the most widely-applicable tool is Unidirectional Security Gateway technology. These gateways are deployed at consequential boundaries – connections between networks with physical consequences and networks with only business consequences. In data centers, the gateways are deployed most commonly at IT/OT interfaces and provide unbreachable protection of the infrastructure contained in the OT networks.

Unlike purely software-reliant firewalls, hardware-enforced unidirectional gateways provide physical engineering-grade protection – OT data is copied to IT networks in real time and
there is absolutely zero risk of a cyber attack (like ransomware) pivoting from Enterprise network through the gateways into OT networks. The gateways therefore ensure the data center’s uptime by protecting the essential infrastructure which maintains reliable operations.
Conclusion Data centers are changing the world, and the world is changing around data centers. Being at the heart of modern technological infrastructure, data centers should naturally prioritize OT and ICS security to safeguard critical operations and sensitive data, and this is driving a push towards engineering-grade protections. By understanding the unique challenges and implementing proactive security measures, data center operators can ensure the highest levels of protection against evolving cyber threats. Only a secure OT network will allow data centers to maintain the uptime goals they strive to achieve. A deterministic approach that includes risk assessment, network segmentation, access controls, and employee awareness will fortify data centers against potential infrastructure breaches, enabling a safer digital future. The increased use of unidirectional gateway technology is a reflection of this approach as it sits at the junction of engineering and cybersecurity.
Elisha Olivestone elishao@waterfall-security.com Director Business Development & Channel Partnerships https://waterfall-security.com/
Behind Omniva: The secretive GPU cloud startup that began as an attempt to build the world’s largest crypto data center DCD uncovers more about the Kuwaiti-backed business that hired hyperscaler heavy hitters
Sebastian Moss Editor-in-Chief
It started as a plan to build a 1GW data center in the desert.
Secretive startup Omniva grabbed headlines this summer when it was outed by The Information, but details about the AI cloud company remained scant. Over the past three months, DCD spoke to current and former employees and contractors at Omniva to learn about the history of the company, who is funding it, and what happened to its ambitious chip design plans. Many of those we spoke to painted a picture of a business with near-bottomless funding held back by ill-defined goals and a lack of data center expertise at the top. Representatives from the company declined to comment, and did not respond to a detailed list of questions.
The business began as Moneta United Technologies, initially registered in Kuwait back in 2014. But it was only at the turn of the current decade that the company moved from being a holding entity to an actual business with a plan: Burst into the data center space with a huge facility in Kuwait.

Initially, Moneta hoped to target wholesale customers, but the early data center designs were unimpressive, apparently delivering low uptime. Then came the first pivot - to build the world’s largest cryptomining data center.

Moneta/Omniva is funded by the wealthy family-owned Kuwaiti business Khalid Yousuf Al-Marzouq & Sons Group of Companies (KMGC), sources told DCD. The family’s oil interests were expected to provide discounted energy to fuel the cryptomining facility, former staffers told us.

Publicly available documents verify this: Kuwaiti registration documents show KMGC’s ownership of Moneta United Technologies, while documents in Delaware, New York, and elsewhere show the company’s various names - Moneta Tech, Moneta Systems, and, eventually, Omniva and Omniva Systems.

KMGC is best known for the Sabah Al-Ahmad Sea City Project, a multi-billion dollar city built with canals forming 200 kilometers of artificial shoreline, and housing up to 250,000 residents. The conglomerate also has huge real estate holdings, an oil and gas business, logistics and construction businesses, and a newspaper.

The idea to get into data centers was pushed by the youngest son of the family, people familiar with the matter told DCD. “The family wants to build an empire that will rival Google and Microsoft,” one person said, expressing doubts about the likelihood of that happening. DCD granted those interviewed anonymity, in order to speak freely and without fear of retribution.

“They have always been aggressive about secrecy - it was 'don't tell your family, don't tell your friends, don't tell anybody about what we're doing, who you're working for, how it's funded,'" one former employee recalled.

The company in 2022 planned to build two data centers in Kuwait - the first at the entrance to the Sea City Project, and the other slightly to the north and cooled by seawater.

Alongside the facilities themselves, Moneta began hiring around the world with a plan to build custom immersion-cooled racks and design their own ASIC semiconductors to mine Bitcoin.

“The engineering lab was in Seattle, they opened a Santa Clara office with some former TSMC people to be kind of close to the semiconductor space,” one source said. “They also opened in Zurich, as well as an operational team in Kuwait for the data center itself.”

Those tanks, codenamed Athena, used 3M’s Novec HFE-7100 fluid in a two-phase evaporation/condensation natural flow system.

"The most important and only fixed design parameter is that the heat of the tank coolant shall be rejected to a chilled water flow with inlet temperature of 45°C and outlet temperature of 53°C,” one document states. Another presentation seen by DCD shows a proposed tank design with an overhead gantry and a "simple mechanical arm to manipulate miners and PSUs."

3M is phasing out Novec 7100 due to environmental and health concerns over PFAS (per- and polyfluoroalkyl substances), and suggests customers shift to BestSolv Sierra, a drop-in replacement based on hydrofluoroether, the same PFAS chemistry as the Novec products. It is not known if Omniva has switched to Sierra, but recent photographs show that the company is still developing immersion tanks. “We were recruiting people from 3M,” an employee said. “Immersion [cooling] is key.”

Under the older Moneta design, which may have changed since the latest pivot, Athena tanks were capable of holding 30 GPUs, with an IT capacity of 500kW. "It's crypto-worthy, but it's not AI stuff," one former staffer said.
Under design plans seen by DCD, the Moneta data center was expected to feature 1,650 server tanks - for a total of 825MW. The facility itself is shown to have a total available power of 864MW on a 40,000 sqm (430,555 sq ft) plot, a white paper shared with contractor Araner states. Araner did not respond to requests for comment.

But behind the large numbers and ambitious plans, employees painted a picture of a company whose senior leadership had little understanding of data centers. To make matters worse, there was confusion over who exactly was in charge, with vying consultants bending the ear of the Al-Marzouq family.

"I witnessed bitter arguments," one employee said. Another added: "They are not data center professionals, they are not operations professionals, they're not supply chain professionals, they are consultants.

“What they had was the trust of the investors, and so they were allowed to drive the development of the company, even if they didn't know what the hell they were doing."

One former Moneta staffer told of initial expectations that the 1GW facility could be built in one year for $200 million. “If you've been around data centers, you know that’s not possible.”

Another said that they were told that chillers could be placed inside the building in air-locked rooms, and that staffers didn’t need the building to be kept cool. “They said we don't need space chillers, we're gonna have people with helmets that have cooling units on their head,” despite temperatures in the country reaching 50°C (122°F).

More troublingly, they claimed that leadership did not “want to use a water mist system or fire suppression gas - they just wanted to use fire extinguishers,” and designed the facility with no easy escape routes.

The culture of extreme secrecy also hindered development, with multiple staffers describing strained relationships with partners and suppliers including 3M and TSMC. Engineering services company Jacobs was originally attached to the data center project, but was kicked off the contract after an RFP to potential suppliers was not deemed secret enough.

Moneta struggled to find a replacement, ultimately turning to UAE-based power and cooling company Araner. “They have supplied data centers, but they've never managed them. It's the first one, and they're going to be the consultancy and the vendor,” a former employee said.

Finally, staffers spoke of another, less obvious problem at Moneta: There was too much money.

Staff were paid generously. Salaries shown to DCD were several times the norm, alongside guaranteed 30 percent annual bonuses. “They were paying people wild amounts of money, so that was kind of intoxicating,” one said.

But it also led to a flawed culture. “They would just throw money at a problem, instead of looking at it. But spending more money or hiring more people doesn’t always solve things,” another said.

Two said that, when they joined, their direct managers made it clear that they had little faith in the project reaching any of its goals - but that they were making good money while they could. “The owner is being made promises that they could not keep from the very beginning. I was told this to cover up,” a former employee said.

And then came the latest pivot. With Bitcoin prices crashing and the project dragging on, even the discounted energy prices the family promised staffers were not enough to keep the project viable.

That, combined with the explosive launch of ChatGPT, led to the company once again reinventing itself in January - this time targeting the biggest trend in the sector: "Omniva Technology: Harness the power of AI, Data Centers, Cloud Infrastructure, Large Language Models, High-Performance Computing, Machine Learning, and Energy Efficiency all in one place,” a company description explains.

The Information earlier revealed that the company hired Sean Boyle (the CFO of AWS until 2020), Kushagra Vaid (a former Microsoft VP and distinguished engineer until 2021), and T.S. Khurana (Meta’s VP of infrastructure until June), bringing much-needed data center expertise to senior management.

Joining the trio at Omniva are Tyson Lamoreaux (former VP of Amazon's Project Kuiper, software, networking & infrastructure services), Matthew Taylor (previously SambaNova and Ampere), and Somnuk Ratanaphanyarat (TSMC). It is understood that Vaid also took a number of Microsoft employees with him, while AWS staffers have also taken roles in HR and legal.

In tandem with the new hires and the rebrand from Moneta to Omniva, the company underwent mass layoffs - primarily in its ASIC chip team, although it still retains a small operation. Those that left expressed no ill will, pointing to generous severance packages.

But they remain unconvinced whether the new hires will be able to fix the fundamental issues at the business, unless the investors can be made to focus on the technical fundamentals of the data center sector.

Until then, Omniva remains little more than what it was two years ago - a dream in a desert.
Germany: The first regulated data center market
Peter Judge Executive Editor
The largest economy in Europe is the first to experience serious legislation relating to data centers
Germany is the largest country in the European Union by population and has the highest GDP. It’s also got a thriving tech sector, closely linked to the financial hub in Frankfurt, which is one of the top data center hubs in Europe. Frankfurt, London, Amsterdam, Paris, and Dublin make up the FLAP-D set of data center hubs, with Frankfurt currently coming second to London. But all these zones face increased pressure due to space and power considerations.
More importantly, the growth of the sector has brought it to the attention of the government - and brought regulations down on its head.

Frankfurt for the win
The annual conference of the German Datacenter Association (GDA) in September heard a lot about the fast growth of the sector, including speakers who suggested that Frankfurt might be about to nudge London out of its leading
position in Europe. CBRE’s Michael Dada presented figures that suggested that Frankfurt will push through the 1GW barrier in 2024, describing that as a “breakthrough moment.” But he thought London would stay ahead, predicting that the UK’s capital would reach 1.2GW at that same point. CBRE has tracked London’s lead in data centers for a long while. But some dispute that. Adam Tamburini, chief hyperscale officer at Stack Infrastructure, said: “[CBRE’s figures] say London is the biggest market - but I never feel that out in
the industry,” he told the GDA conference (which was of course in Frankfurt). “When I look at the client demand, Frankfurt is always the city I'm asked about.”

Of course, Tamburini might be biased. His company is building an 80MW project for hyperscalers in a repurposed Coca-Cola factory in Frankfurt, and has not yet attacked any of the other FLAP-D sites.

But others echoed his thoughts, and suggested that real estate analysts and brokers like CBRE might be underestimating the huge volume of capacity that is being quietly bought by hyperscalers. “From my perspective, London is not bigger than Frankfurt. In fact, Frankfurt is much larger,” said Rhea Williams, formerly in charge of European site selection for Oracle, and now building hyperscale facilities in the US with a new startup called E3 Platforms. She was speaking at DCD’s Connect event in London the following month, in a session about hyperscaler site selection in Europe.

Whether or not Frankfurt is poised to overtake London yet, there are plenty of people saying that it is on a faster growth curve, and will inevitably overtake it in the fullness of time.

Berlin - a two-hub nation?
Beyond the role of Frankfurt, others in Germany are predicting that the country will score another first - becoming the first European nation with two true data center hubs, thanks to a surge in developments in and around Berlin - in particular including a 300MW site being developed by Virtus.

“Frankfurt would have been a natural choice. So why did we choose Berlin?” Christina Mertens, VP of business development at Virtus, told the GDA conference. Virtus is a big UK player, and aims to grow more of a presence in Europe. It seems the company saw more of an opportunity to be a big player in a hub that is newly emerging, and possibly growing faster because capacity in Frankfurt was becoming harder to find.

“Berlin is the second cloud region that is emerging - and Germany is the only European data center market that is actually having two mega-regions,” Mertens said. In the planning process, Virtus noted that Frankfurt was projected to double between 2018 and 2025 - but Berlin was predicted to have a compound growth rate of 30 percent per year: “That caught our interest.”

Until 2019, Berlin only had a few megawatts of capacity, but the plans have expanded since then, Mertens said: “Today we have roughly 100MW that is being planned, but our project is not included in that statistic.”

Some people have viewed Berlin as merely “overspill” from Frankfurt, but Mertens thinks there’s more to it. She is planning to have 200MW of her site online in 2026.

“Berlin is a market in itself, and what makes it so interesting is there is a lot of foreign investment going on in Berlin,” she said. “Look at the Tesla Gigafactory, and all the major hyperscalers have announced a presence in Berlin in the last couple of years.”

Google launched a cloud region there this summer, and Microsoft is also serving its Azure Germany North Europe region from Berlin.

The German government has said that data centers are “the lifeline of the digitized world,” Mertens pointed out. The local Berlin government, meanwhile, “understands the importance of data centers in that region,” and it understands that it must make the space they need available in the city.

Many businesses located in Berlin are a logistics gateway to Eastern Europe, which needs digital support, said Mertens. Beyond that, it is also “a hotspot for AI and machine learning development,” and therefore a logical place for more capacity: “There’s a very vibrant tech scene, with numerous startups and research institutes in the area. It has all the potential to become one of the central hubs for AI.” Along with this, there is Potsdam University and the Hasso Plattner Institute on the outskirts of Berlin, and software giant SAP has a research center in the area - all of which boost the availability of technical staff in Berlin.

Despite this, some have poured cold water on her optimism over further major developments in Berlin, saying that the infrastructure isn’t sufficiently developed, creating a situation where more large projects would be politically unacceptable. “The German government wanted to make Berlin a major IT hub,” one hyperscaler told us in private, ”but it doesn’t have the ecosystem. Berlin will be successful, but I don’t think it can be as big as Frankfurt. I don’t think the government will allow it to be.”

Berlin has a much larger population than Frankfurt, and the supply and distribution of electricity is limited. That Gigafactory is among the things causing demand on the infrastructure of the German capital, and more data centers in the Berlin metropolitan area could start to squeeze the availability of power for residential projects. Data center developments in a major population center can be a political hot potato, as the west of London found in 2022, when data centers were reported to have swallowed up available connections, effectively preventing large housing developments.

The Brandenburg state
Even if the central government has problems following through with support for data centers, Mertens thinks support is strong at the local government level, and points out that the Virtus facility’s location is actually Wustermark, 20 miles to the west of Berlin, within the Brandenburg state.

The Brandenburg state has a dedicated team looking after data centers, she said, “and that shows they really see the potential for investment in the area.”

In Berlin and Brandenburg, land and power may be difficult to get hold of, but Mertens said: “It is difficult to find the right plot in the city of Berlin, but there are a lot of brownfield sites that need redevelopment, that are more than happy for anyone who wants to come and invest in that area and make it buildable and social.”

Mertens said there’s good connectivity across Brandenburg, and the land is flat. “It’s much more difficult to build a data center in Munich from a landscape point of view, compared to the very flat north of Germany.” Across Brandenburg, there is also some 7.2GW of wind turbine power, and Mertens said: “The surprising thing is that not all this power is being used because of the difficulty of transmitting it across Germany.”

Wooing the mayors
German local government officials are keen to have data centers, if Mertens is correct - and a panel of mayors and legislators at the GDA conference seemed to agree. “We were talking to local municipalities, and they were really, really excited at the benefit that data centers bring to the community,” said Mertens. “It’s not just the monetary perspective, or an investment perspective, but also developing talents, bringing more infrastructure, and bringing more entities into the market.”

Erika Schulte of the Hanau Business Development Agency said data centers would “make our business location more resilient,” and help reuse land and buildings in the city of Hanau, Hesse, that could not be used economically any other way.

“Municipalities need to pay money to keep up the infrastructure, the public swimming pools, the sporting places, etc,” said Klaus Schindling, Bürgermeister of the town of Hattersheim, a long-term supporter of data centers. “You can only sell a plot of land once every 50 to 100 years - and data centers offer advantages for the city. They are not noisy, and they don't have lorries that use the streets of the municipality 24 hours a day, seven days a week.”
Data centers pay taxes, and give revenue to the municipality, he said.
Encroaching regulations Not everyone is so positive about the growth of data centers. Germany has not had a large-scale public movement against data centers, like the ones in Ireland or the Netherlands, but the authorities in Frankfurt decided to regulate data center developments in 2021. The city warned that there was limited land available, that data centers’ power demands might derail the city’s decarbonization plans, and suggested that waste heat should be reused. A year later, Frankfurt came up with a zoning plan and promised to bring in requirements to reuse waste heat - once it has built a city-wide district heating system to make use of it. Since then, government activity across the country - and across Europe - has increased. Europe has passed an Energy Efficiency Directive, which is part of the bloc’s move to decarbonize the continent. The Directive explicitly includes data centers alongside other sectors, and will require them to report on energy usage and efficiency, before eventually requiring efficiency measures that have yet to be defined. All member states will have to implement the Directive in their national laws, and Germany is well ahead of the pack. The day before the finalized version of the EED was published in September 2023, the German federal government passed a nationwide Energy Efficiency Act, which is in part an enactment of the EED. The German “Energieeffizienzgesetz” Act imposes restrictions on data centers, alongside all other sectors of industry. And, throughout its passage, the industry argued about the stringency of those restrictions. The GDA, in particular, fiercely opposed some early demands in the act. Anna Klaft, head of the GDA described it as effectively a “data center prevention act,” saying its proposed demand for a PUE efficiency figure of 1.2, and its steep suggestions for heat reuse, were not feasible. A 2022 draft of the bill would have applied to all data centers over 100kW, and required new facilities built from 2025 to reuse 30 percent of their heat. The GDA pointed out that data centers could only reuse heat if they were built near district heating systems. There are very few such systems, and they are not located near fiber hubs which data centers need. The eventual Act was presented to the conference by Benjamin Brake, who is head
of the government’s Digital and Data Policy department at the Federal Ministry of Digital Affairs and Transport (BMDV), and took part in the negotiations that finalized it. Brake is effectively part of the data center industry. Before taking his role in the civil service, he was head of IBM’s Berlin office, and he shared the final Act with the GDA conference, showing how requirements had been relaxed.
Efficiency Act - a compromise
The eventual Act only applies to data centers over 200kW, and the heat reuse requirements have been put off until 2028, by which time it is hoped there will be more district heating systems around, especially the “4.0” version of district heating, which is efficient enough to make use of the low-grade heat offered by data centers. Data centers will also only be expected to reuse 20 percent of their heat - and there will be other clauses to exempt certain facilities.

Brake assured the conference that regulations were unavoidable, and sold the Act as a practical deal: “No industry can ignore the goals of the digital strategy, and the agreements of coalition [government] partners cannot be questioned, but we have to find a balance of what is reasonable.”

He agreed that data center sites cannot be chosen just on the basis of access to a district heating system: “The use of waste heat as the sole criteria for choosing a location is problematic, especially when there are no requirements and obligation for waste heat users.” In practice, data centers will be required to offer their heat, which could simply mean they are built ready to share, but have no obligation to actually sign a deal to share their heat.

The GDA still doesn’t like the Act’s imposition of a PUE of 1.2 for all new data centers, although this doesn’t come into force till 2026. As Klaft told DCD, this is a problem for colocation providers because PUE requires a facility to be full and for racks to be operated efficiently, both of which are controlled by colo customers, and not by providers.
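Her point is easiest to see with the arithmetic itself. PUE is total facility power divided by IT power, so the same building scores differently depending on how heavily tenants load it. The figures below are hypothetical, purely to illustrate the mechanism:

```python
# Hypothetical example of why a hard PUE cap is awkward for colocation providers.
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_kw

full_hall = pue(total_facility_kw=6_000, it_kw=5_000)   # 1.20 - hits the Act's 2026 target
half_full = pue(total_facility_kw=3_800, it_kw=3_000)   # ~1.27 - same plant, tenants' racks half empty
print(round(full_hall, 2), round(half_full, 2))
```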
The changes in the Act came about because of strong lobbying by the data center sector, which argued that environmental measures should be balanced against the needs of the industry to remain competitive with those in other countries. If operators were required to make expensive changes to their operations, the government was told, Germany might lose data center business to other European states.

Sources involved in creating the Act told DCD that the process of creating the eventual Act was even more complicated than the public documents revealed. We understand that early proposals made no specific requirements to share a percentage of heat, but only required data center operators to publish information about the costs involved in producing the heat.

This proposal would have formed a useful basis for commercial contracts between data centers and the heating industry, something regarded as essential for heat sharing on a solid business footing. It was rejected by the industry, DCD was told, because it would have required data centers to share information that they regarded as commercially sensitive.

German data center operators are still fighting a rearguard action against the measures, and no one seemed enthused by them. Stack is a possible exception, with Adam Tamburini describing the heat reuse requirements as “an opportunity.” Many international providers, Stack included, already have facilities in the Nordic region that are connected to district heating systems.
Leading the way Germany is a nation where data centers have developed to the extent that they impinge on daily life, sometimes offering useful income, jobs, and services, at other times seen as a consumer of land and power, as well as water. The industry has ambitions to build much more capacity within Germany, but it is clear that this can only happen with the agreement of people, along with federal and local government. “Regulation is going to be a big issue because we are growing so quickly,” said CBRE’s Dada. “Politicians have become aware of the opportunities, but also the threats with data centers, so we need to face new regulations.” One response to the Act has been an Open Day. On September 29, colocation facilities in Germany opened their doors to the public, hoping to establish better relations with the societies where they are located. Brake and Klaft agreed that the Act has begun a process of negotiation over data centers’ role in German society. But both seemed to feel that the dialog still has some way to go. Data center people in Europe and around the world should keep a close eye on future developments. If a social contract is established between data centers and society, it is likely to happen first in Germany.
Immersion Cooling: Your Pathway to an Efficient, High-Performance Data Center
GRC and ENEOS have forged a partnership to navigate a course towards unparalleled efficiency and sustainability in data center operations. Powered by advanced immersion cooling technology and precision-engineered fluids, we’re charting the way for operators worldwide to attain optimal data center performance, embrace environmental responsibility, and confidently prepare for the thermal challenges that lie ahead.
ICEraQ® Immersion Cooling: Energy Efficient | Future Proof | Location Flexible
ENEOS Fluid Advantage: High Performance | Purpose Formulated | Earth Conscious
Explore Our Transformative Journey: Enhancing Data Center Performance with Immersion Cooling and Precision-Engineered Fluids
Meet Us at the Intersection of Innovation & Efficiency
> Connect | Virginia November 6-7, Stand 17
www.mcim24x7.com
Paul Lipscombe Telco Editor
Will the real 5G please stand up? 5G Standalone Core is tipped to be the real game-changer
"I
t’s fair to say that 5G Standalone is the real 5G,” CCS Insight director for consumer and connectivity, Kester Mann, said.
When 5G launched commercially in 2019, there were promises of faster download and upload speeds, as low latency and greater capacity were heralded. A frantic race to launch 5G networks across the world took place between network operators. But those early networks that were launched were 5G Non-Standalone networks. 5G Non-Standalone (5G-NSA) was the first implementation of 5G’s network architecture, and designed to be deployed on top of an existing 4G LTE network. Now, the industry has begun to transition to 5G Standalone (5G SA).
Not reliant on 4G - it stands alone Put simply, 5G Standalone is not reliant on older mobile generations and solely uses a 5G core network. It’s been labeled by many as the “real 5G” and is lauded for what it’s expected to bring to the mobile industry. Operators big up its potential capabilities,
which include things such as network slicing, ultra-low latency, and a simplified RAN and device architecture. Network slicing in particular is spoken about at length by mobile operators when discussing 5G SA. Slicing allows telecom operators to create separate and isolated networks for different use cases, while the slice can be configured differently. It’s for this reason that telcos and industry analysts call 5G Standalone the “real” iteration of 5G technology. “The Non-Standalone version had the anchor of 4G and was an improvement in terms of the capabilities of what could previously be offered,” Mann said. “But the real services and applications over 5G - thinking kind of things such as private mobile networks, and network slicing, and enterprise solutions - were part of the main vision of 5G at the beginning.” He says it’s a “significant milestone” for the evolution of 5G.
Network slicing
Although network slicing can be achieved on older standards such as 4G, it’s widely seen as being pivotal to 5G Standalone, and getting the most out of the network.
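As a rough sketch of what a slice looks like to the core network: each slice is identified by an S-NSSAI, a standardized Slice/Service Type plus an operator-chosen Slice Differentiator. The SST categories below are the standardized ones from 3GPP; the dictionary layout, SD values, and use cases are invented for illustration and are not any operator's or vendor's configuration.

```python
# Illustrative slice catalog keyed by S-NSSAI (Slice/Service Type + Slice Differentiator).
# SST 1/2/3 are the standardized eMBB / URLLC / massive-IoT categories; SD values are made up.
SLICES = {
    "broadcast_uplink": {"sst": 1, "sd": 0x0000A1, "use": "eMBB slice for live video contribution"},
    "factory_control":  {"sst": 2, "sd": 0x0000B2, "use": "URLLC slice for low-latency control"},
    "smart_metering":   {"sst": 3, "sd": 0x0000C3, "use": "massive IoT slice"},
}

for name, s in SLICES.items():
    print(f"{name}: S-NSSAI(sst={s['sst']}, sd={s['sd']:#08x}) - {s['use']}")
```

Each slice can then be given its own capacity, latency, and isolation guarantees - which is what the examples that follow put into practice.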
In the UK, Vodafone used network slicing during the King’s Coronation earlier this year, in partnership with TV company ITN. The operator dedicated an exclusive slice of Vodafone’s public 5G SA network to enable the 'swift and secure' transfer of the live Coronation broadcast coverage from Westminster to ITN’s HQ newsroom in Gray’s Inn Road, London. Over in the US, telco giant T-Mobile used network slicing for remote video production on a commercial network, as part of what it claimed to be a country-first, during Red Bull’s Cliff Diving event in June. The customized slice provided the broadcast team with faster data rates in wireless uplink speeds, meaning they could rapidly transfer high-resolution content from cameras and a video drone to the Red Bull production team in near real-time over 5G. “With an increase in demand straining limited spectrum resources, network slicing allows us to ensure that critical communication needs are met without having to build excessive capacity scaled to meet extreme loads,” T-Mobile US CTO John Saw said at the time.
The 5G Standalone battle
Stateside, T-Mobile was the first mobile operator to launch a 5G Standalone commercial network, doing so back in August 2020. “5G SA is already playing a role in T-Mobile’s 5G strategy,” a T-Mobile spokesperson told DCD. “This has enabled the company to innovate on a whole new level by enabling technologies like network slicing which requires a 5G SA network.

“On top of being a key driver in innovation, T-Mobile’s 5G SA network enables the company to offer its wireless users the most advanced, fastest, and largest 5G network. 5G SA will continue to play a critical role in T-Mobile’s 5G strategy on both the creation of new technologies and offering customers the best possible experience on our network.”

Rival AT&T is also intent on showcasing 5G SA’s potential, launching its own commercial network at the end of last year. “5G Standalone is the future of our network and is where we're putting all of our energy around our innovation currently,” Sherry McCaughan, vice president of mobility core and network services at AT&T, told DCD.

“At AT&T we have been very thoughtful on how we've deployed 5G Standalone in making sure that across what we call the six-way match, which is from the device to the RAN, to the core, to the transport to the data center, Edge devices, that all of that needs to be in place to have the best experience for
the customer.” McCaughan touted the benefits of the technology, noting faster upload speeds, ultra-low latency, and a more reliable network service. AT&T is a member of the Open RAN Alliance, which seeks to promote a more flexible, open Radio Access Network (RAN).
A global push French operator Orange has been busy in this space, launching the service in Spain earlier this year. Bernard Despres, Orange's vice president of core network, automation, security, and E2E services explained to DCD that it will play a key role in its future 5G build-out. “It’s clearly in the heart of Orange’s 5G strategy,” he said. “I would say it's a natural prolongation of the launch of our 5G NSA, and now that network is really mature and working well.” Back in the UK, Vodafone again was first to launch a Standalone 5G commercial network in the country, doing so in June. The telco switched the service on in London, Manchester, Glasgow, and Cardiff, and has plans to expand to further locations. Vodafone’s appetite for 5G SA is shared by the UK government which in April set a target to deliver 5G SA to all populated areas of the country by the end of the decade. "Our wireless infrastructure strategy sets out our plan to ensure everyone, no matter where they live, can reap the benefits of improved connectivity. We are doing this by ensuring all populated areas in the UK will be served by what I call ‘5G-plus’ technology by 2030," said Michelle Donelan MP, Secretary of State for Science, Innovation, and Technology. According to the government, 5G SA will “unlock new technologies that will change our lives and the way businesses operate,” boasting a range of opportunities including driverless vehicles, robots, and drones, all of which will require technologies such as network slicing to provide dedicated private networks. Meanwhile, in South Korea, vendor Samsung paired with Korea Telecom (KT) to launch the country’s first 5G SA network in 2021.
Driving the business use case When 5G was first launched, it promoted faster download speeds and promised to be a game changer. Arguably though, it was more for the consumer. Now, with its Standalone core, it is seen as ideal for business. “For me, it’s a very clear driver in the B2B market where there are many differentiators, with the main one being network slicing
to offer ready customized service area connectivity for enterprise so you can offer a full private solution like it was in 4G, where it is totally isolated from the rest of the network,” Orange’s Despres said. “But thanks to a new feature such as local breakout, you can place a part of the core network in the client premises, which provides low latency that can be managed locally. So for example, when people from the company go outside they can get connection from the main public network 5G network.”
Challenges of going solo As with all technologies, there are challenges. Despres pointed out that the transition from legacy technologies to 4G and 5G itself requires a level of upskilling for Orange’s engineers, and 5G SA has been no different. Meanwhile, McCaughan acknowledged that AT&T wishes the implementation of a 5G Standalone network could have been quicker in general. “We wish it was a bit faster, but I don’t think we’re surprised at how it’s been. That said, we believe it's actually one of the fastest technologies we've seen in terms of the evolution of the network,” she said. “If you start to think about the evolution of the network, we were not talking about Open API-based digital platforms in front of a network much just a few short years ago.” McCaughan added that the pace of innovation has been really quick and that it does take time to get the right infrastructure in place to keep up with the advancements in technology.
Delivering on its hype Not everyone agrees that 5G has been a success. South Korean telco SK Telecom said in August that the technology has so far “underdelivered,” and has been “overhyped.” That’s where 5G Standalone hopes to prove the doubters wrong. We’ll hear more mobile operators announce their respective launches of 5G Standalone, likely dubbed 5G Ultra, Advanced 5G, or 5G Plus, or “true 5G,” as Vodafone UK chief network officer Andrea Dona calls it. That said, everyone DCD spoke to seems to agree on the same thing, and that is that 5G Standalone is the “real 5G.” Time will tell if the use cases that are touted can be underpinned by the technology on a wider scale to make a material difference to the telecommunications industry.
#1 Cx PLATFORM FOR DATA CENTERS
Visit cxalloy.com/dcdtrial for More Information
SAY HELLO TO TOMORROW: ALL-NEW RARITAN AND SERVER TECHNOLOGY RACK PDUS
MAIN BENEFITS OF PX4 & PRO4X: • Real-time visibility, reporting, and alerting of power metrics and events • Best-in-class flexibility with C13 and C19 all-in-one outlets • Engineered for mission-critical uptime with mechanical locking outlets • Easy data collection and export to manage energy utilization • Secure encrypted communication, by default, for all PDU data
EXPLORE THE NEXT GENERATION OF INTELLIGENT PDUS TODAY LEARN MORE about Raritan LEARN MORE about Server Technology
DATA CENTER SOLUTIONS
The tide comes in for subsea cable networks
Georgia Butler Reporter
Sea levels are rising and our coasts are wasting away, the question remains whether it will take our subsea networks with it
There is a place in Colorado, overlooked by the Boulder County Mountains, where scientists gather to study the disastrous reality of our melting ice caps and glaciers.
The National Snow and Ice Data Center (NSIDC) monitors all the world’s natural ice and observes as climate change slowly eats away at that supply. Colorado is one of the most central states in the US, but even from there the impact that climate change is having on our oceans and our coasts is obvious.

This is captured through satellites - in the obvious sense that we can see the square footage of our ice caps reducing through images, but also via altimeters, which send down a pulse of light or microwave which is
then returned back, and through the speed and distance, the size of the ice mass can be quantified.
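A simplified version of that calculation (the numbers are invented to illustrate the principle, not taken from any particular mission): the altimeter converts a two-way echo time into a range, so nanosecond-scale timing changes resolve centimeter-scale changes in the ice surface.

```python
# Simplified satellite altimetry: surface range from the two-way travel time of a pulse.
C = 299_792_458.0                      # speed of light in m/s

def range_from_echo(two_way_time_s: float) -> float:
    return C * two_way_time_s / 2.0    # one-way distance from satellite to surface

r1 = range_from_echo(5.336e-3)         # ~800 km to the surface
r2 = range_from_echo(5.336e-3 - 1e-9)  # the echo arrives one nanosecond earlier...
print(round(r1 - r2, 2), "m")          # ...so the surface is ~0.15 m closer (higher)
```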
Who forgot the ice?
“Temperatures are rising, and that has two primary effects in terms of sea level,” Walt Meier, senior research scientist at NSIDC, told DCD.

“One is pretty huge, and that is simply the direct effect in which, as water warms, it expands. That alone accounts for around a third of sea level rise so far,” Meier explained.

“The other aspect is melting land ice. It runs off into the ocean, which is about two-thirds of the sea rise that we've seen.”

“The biggest effect is in coastal areas. The rule of thumb that I have heard is that for every foot of sea level rise, we lose about 100 feet of coastline - of beach. That depends on the type of coast - if it's cliffs then it won’t be as huge, of course.”
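Taken at face value, the rule of thumb is easy to apply - illustrative only, since, as Meier says, the multiplier depends heavily on the type of coast:

```python
# Meier's heuristic: roughly 100 feet of beach lost per foot of sea level rise.
def beach_retreat_ft(sea_level_rise_ft: float, multiplier: float = 100.0) -> float:
    return sea_level_rise_ft * multiplier

print(beach_retreat_ft(1.0))   # 1 ft of rise -> ~100 ft of shoreline retreat
print(beach_retreat_ft(0.3))   # ~0.3 ft (about 9 cm) -> ~30 ft
```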
As the ice reduces, we are slowly submerging or destroying the coastline. And there’s a less obvious effect: the digital infrastructure that lives there is at risk.

The implications of this rise in sea level are exacerbated and complicated by other factors, and there’s a specific part of digital infrastructure which is affected severely.

Most data centers are located further inland, but subsea cables and their landing stations, by their very nature, have no choice but to reside on the water’s edge.
Coastal erosion
The longevity of subsea cables
“It should be fairly simple to know if [cable landing sites are] at risk or not, purely based on the sea level rise projection,” Dr. Yves Plancherel, a lecturer in climate change and the environment at Imperial College London told DCD during a particularly wet and dreary summer’s day in London, England.
Earlier this year the International Cable Protection Committee published Submarine Cable Protection and the Environment, a paper written by marine environmental adviser Dr. Mike Clare. Through his research, Clare identified a clear pattern of climate change wreaking
“We just have to figure out the maps of where these installations are, and then we can apply estimates of sea level rise for the next 50 or 100 years and quite easily see if they are within predicted flooding zones or not,” explained Plancherel. “But of course, what's difficult is that the coastline can change quite dramatically if you increase the [sea] level depending on whether the coastline is rocky, or sandy. Somewhere sandy is going to be very dynamic, whereas if it's a sheer cliff or rock, it will probably be less sensitive.” In terms of protecting our coasts, there are steps that can be taken. The UK Department of Environment, Food & Rural Affairs announced in 2020 that it would be investing £5.2 billion ($6.33bn) in flood and coastal defenses between 2021 and 2027 including nature-based solutions and sea walls. The US government in April recommended $562 million in funding for nearly 150 flooding resilience projects across 30 coastal states and territories, while the Climate Conservation Corps is expected to help protect the coast. But when it actually comes to reinforcing our coasts, those developments have unintended ripple effects.
there are consequences. If you protect one part of the coastline, you can severely affect the erosion next to where you protected that asset.” It is impractical to protect the entire coastline - thus decisions have to be made, and money invested in the problem.
A sandy problem In South America, there have been cases where beach erosion has been so bad that the subsea cables buried below have actually been exposed. Elena Badiola, product manager for dark fiber, colocation, and subsea services at Exa Infrastructure told DCD that in Argentina she had had to move a subsea cable manhole inland as it was close to exposure due to the amount of erosion. “You have to go and re-bury them deeper in the beach,” Badiola said. “Because, once the cable is exposed, it's much easier for it to have an outage.” Subsea cables are expensive and longterm projects - Google’s Equiano cable cost $1bn and took around three years to deploy so operators have to think about climate risk in the years ahead. “I think that environmental permits are getting more restrictive and more thorough, mainly because of this,” Badiola said.
“Coastal erosion is something that's much more risky. You can have coastal engineering, to some degree, but you can end up causing damage as we're talking about changing global coastlines,” said Plancherel. “There's going to be a competition for what you protect and the winning side gets to protect its assets. But at the same time,
“There are many circumstances that are taken into account now that weren’t 15 or 20 years ago. The stakeholders have to think how is climate change or erosion going to affect these beaches in the next 25 years? It’s a very long time, and even scientists don't really know how climate change is going to affect them because it has been happening faster and more severely than their forecasts were suggesting.” Those permits have to come from the local authorities - which is in itself a complicated issue as much of a subsea cable will be deployed in the “high seas,” space beyond territorial waters.
86 | DCD Magazine • datacenterdynamics.com
havoc on our subsea cable networks - not just predicting damage in the future, but giving concrete evidence from the past. A sediment flow in the deep-sea Congo Canyon traveled over 1,000km, breaking cables connecting Africa. In 2017, Hurricane Irma’s floodwaters cut off power, submerged terrestrial data cables, and damaged landing stations across the Caribbean. Cyclones in Taiwan severed subsea cables in 2009, and storm surges in New York knocked out Internet connectivity in 2012. These are perhaps the more extreme incidents that come to mind when we think of climate change, but Clare also saw this pattern extend to more gradual and granular impacts that we would usually disregard.

Human activities such as fishing and ship anchoring remain the primary cause of subsea cable damage and outages - which happen somewhere between 200 and 300 times each year. Planning cable routes to avoid fishing areas is part of the surveying process before cables are deployed, but even with this consideration it is still a notable problem, and perhaps one that we can expect to get progressively worse. Ocean acidification (caused by the ocean absorbing carbon from the atmosphere), warming waters, unpredictable weather, and overfishing have been gradually driving fish into deeper waters - waters where existing cables were not planned for that disruption, are less protected, and will be more at risk.

Then there are the terrestrial cables that are close to the coast but definitely not designed for subsea conditions. A 2018 paper projected that by 2030 - just seven years in the future - rising sea levels would submerge thousands of kilometers of onshore cable not designed to get wet. Similarly, cable landing stations also face flooding they were not designed for.

A diversity of geography
Sea level rise is not anticipated to affect the planet universally. Some areas are going to be more impacted than others, and some locations may even see the sea level fall, particularly those at higher latitudes, such as Alaska or Norway, where land masses can undergo "continental rebound" as the ice sheets break down. The regions most at risk include the Gulf of Mexico, Northwest Australia, the Pacific Islands, Southeast Asia, Japan, and the Western Caribbean. The Under-Secretary-General for Legal Affairs and United Nations Legal Counsel, Miguel de Serpa Soares, has also noted that "low-lying communities, including those in coral reef environments, urban atoll islands and deltas, and Arctic communities, as well as small island developing States and the least developed countries, are particularly vulnerable."

A location that falls at the perfect epicenter of these risk factors is the Maldives. Sitting in the Indian Ocean, the Maldives comprises around 26 atolls, and its ground is an average of just 1.5m above sea level. Imperial College London’s Shuaib Rasheed used satellite imagery to map the islands and found that the geography of the nation was undulating, like a living and breathing being, even on a relatively short-term basis.

"The effect of increasing sea level on the sediment balance in the Maldives is that you can actually see some islands grow, and some shrink. It's a very specific case, being that it is made up of coral atolls. But the complexity of the island is that it depends on a sand budget for existence - on how much sand is imported by the ocean or is exported," Plancherel explained of his colleague’s research.

It seems an impossible feat to engineer a coastal digital infrastructure that can bend and sway with the rhythm of the islands. Yet, earlier in 2023, a subsea cable landed on the island of Hulhumalé in the Maldives. DCD reached out to Ocean Connect Maldives (OCM) to find out what measures were taken to ensure the longevity of its subsea cable and landing station.

"Coastal erosion and rising sea levels are very big concerns. That is why we had chosen Hulhumalé, an island entirely built on artificial land," an OCM spokesperson said.

The island of Hulhumalé was designed. According to OCM, Hulhumalé was created with "eco-friendly initiatives" such as building orientations that reduce heat gain, streets optimized for wind penetration, and amenities within walking distance to reduce the need for cars. But, perhaps most notably, it has an average height of 2m above sea level - just that little bit more than other islands in the Maldives, reducing flood risks.

OCM has put all of its equipment at the landing station on the first floor - just in case there is a flood. The landing station itself is located around 200m away from the beach manhole. It is here where the data center and operational offices also reside.

"OCM’s Network Evolution Plan (NEP) is done annually for the next five years," the company said. "Climate change is a slow-moving and long-term global issue with far-reaching impacts. Therefore, we incorporate climate change forecasts and considerations into the NEP processes."

Beyond that, the company and the government conduct ongoing analyses of the sea and coastal geography of the island, with OCM noting: "The Maldives, including Hulhumalé, is particularly susceptible to the impacts of climate change, including rising sea levels, coastal erosion, and changing weather patterns."

Part of the problem, or the solution?
Our subsea networks are a victim of the problem, but they are also a contributor - as is every industrialized sector. Nicole Starosielski, author of The Undersea Network and subsea cable lead principal investigator for Sustainable Subsea Networks, evaluated the sustainability of subsea cables, while acknowledging the difficulty that Sustainable Subsea Networks has had in actually quantifying the sector.

"It’s a difficult process, generating a carbon footprint of the [subsea cable networks] industry. Unlike a data center, which has four walls where you can draw your boundary, the cable industry is comprised of so many elements - from the landing station to the cable itself," said Starosielski.

"There are all these other pieces that the industry is trying to account for. One is obviously a marine aspect. You have a fleet of ships that are older, and there's not a lot of overhead and margin in the supply side of the marine sector. Google has money to build cables, but you don't see SubCom, ASN, and NEC running around with a lot of extra cash to build new ships."
The lack of cable ships is a notable issue for the subsea network - in part due to the long wait times for deployment, but also because these older ships tend to be worse for the environment. As of 2022, of the roughly 60 cable ships in working order, most were between 20 and 30 years old, 19 were over 30, and one was over 50.

"They're [cable ship companies] trying to modernize and upgrade their fleet, but they don't have the money funneling in like it is in the data center world. Cable ships have not transitioned to net zero but, then again, the whole shipping industry hasn’t. They're working on it - the International Maritime Organisation has a lot of efforts underway - but there remains this problem that the infrastructure needs to be updated."

Beyond the ships themselves, the materials used in the cables should be considered. While, once decommissioned, materials like copper can be recycled, the process of acquiring enough high-quality copper or steel can have a knock-on effect, depending on the supply chain in various locations - the Nordics versus China, for example. The quality of the materials and the cables themselves can have a long-term impact on the cable's ongoing footprint. If a cable needs regular repairs, each repair is just another hit to the planet - and to the cable owner’s wallet. Heavier armoring reduces the risk of damage, but, again, more armoring means more steel. Similarly, avoiding fishing locations and attempting to reduce the risk factors for damage increases the length of the cable, and thus the need for materials.

The sustainability of our subsea networks - both in terms of their impact on the environment and their longevity in the face of climate change - is a long-winded conversation of compromise and adaptation. But, as with any issue, the more data we have, the better we will be able to tackle it. That’s where we come back to initiatives like the NSIDC: the more accurate the visualizations they provide of the state of our planet over the next 10 to 30 years, the better the basis for ensuring our subsea network will be sustainable and resilient.
Space: The final data center frontier
Project Ascend and the EU’s dreams of data centers in space
Graeme Burton Partner Content Editor

On the surface, the idea of putting data centers in space, partly in a bid to reduce carbon emissions, among other objectives, sounds somewhat far-fetched.
After all, blasting a rocket into space isn’t exactly an environmentally friendly process. It takes more than 400,000 liters of rocket fuel to get SpaceX’s Falcon 9 into space, for example, and getting an engineer out to fix something isn’t quite as straightforward as sending them to Slough or downtown Dallas. Then there’s the issue of power: the entire facility would need to run on solar. Moreover, with geosynchronous orbit some 22,200 miles away, latency will be palpable, and all communications will be subject to atmospheric interference. That’s not all: there’s also a greater risk of electromagnetic interference from solar flares, not to mention a high-speed collision with the increasing amount of space junk cluttering up Earth’s orbit. And, finally, there’s the risk that at some point it will fall back to Earth.

None of these issues are news to Yves Durand, director of technology at Thales Alenia Space, one of the world’s biggest manufacturers of satellites. But as part of a 16-month European Union-funded project, called Ascend [Advanced Space Cloud for European Net zero emission and Data sovereignty], the task for Durand and his team is to work out the feasibility of running data centers
in space, and whether they could be consistent with the EU’s Green Deal plan to make the continent carbon neutral by 2050.

“In essence, it’s a feasibility study to try to see whether putting some data center capacity in space might help reduce the carbon footprint of data centers. [We’re asking] whether putting them in space and capturing the energy from space, and using the natural cooling of space, could help?” Durand said during a recent broadcast on DCD’s Edge Computing Channel.

So, while it might take a gargantuan quantity of rocket fuel to get it into orbit, there could well be plenty of power and cooling savings that might make it a worthwhile consideration for particular data center applications – and it’s Durand’s job to find out with the €2 million ($2.1m) project, part of the EU’s Horizon Europe science research program.

“Space is becoming increasingly important. We capture lots of data, for example, of the environment, satellites taking pictures of the planet, and we also have to observe the Earth for potential environmental problems, such as fires or earthquakes, and have to react quickly.

“So to have processing and data storage of all that in space makes a lot of sense. It’s an ambitious project, but will give us a perspective on what we should do in terms of manufacturing servers and electronics that can withstand the environment in space. And having data centers in space should allow us to integrate space assets in a more global cloud environment.”

We have already gone from standalone satellites to constellations, which “can now
communicate with each other with intersatellite links and laser communication – it’s an information network in the sky,” says Durand.

The next step, coming soon, is the installation of a small server on the International Space Station (ISS) as a joint experiment that also brings Microsoft into the fold. Microsoft will implement it, integrate it into the cloud, and will have cameras set up so specialists on the ground can see what they can do with this capacity in space – without having to go up to the ISS in person. That follows previous tests by HPE and Skycorp, which both sent hardware to the ISS to test the impact of radiation on hardware. Companies like JSAT and NTT are also evaluating putting data centers in space.

But what about setting it up and maintenance? According to Durand, this is where robotics will come in. In addition to having Ariane Group, the biggest satellite launch company in Europe, and Airbus on board, Germany’s Institute of Robotics and Mechatronics (DLR) is also part of the team, examining the feasibility of using robots to set up the data center once it has been parked in orbit, and to provide ongoing maintenance.
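Before turning to the power question, it is worth putting the latency concern from the top of the piece into numbers. The back-of-the-envelope sketch below uses only the 22,200-mile figure quoted above and the speed of light; it is pure propagation delay, ignoring processing, routing, and atmospheric effects, so real-world figures would be higher.

```python
# Back-of-the-envelope latency to geosynchronous orbit, based on the
# 22,200-mile distance quoted in the article. Propagation delay only.
SPEED_OF_LIGHT_KM_S = 299_792
GEO_DISTANCE_KM = 22_200 * 1.609  # roughly 35,700 km

one_way_ms = GEO_DISTANCE_KM / SPEED_OF_LIGHT_KM_S * 1_000
print(f"One-way, ground to orbit: {one_way_ms:.0f} ms")                 # ~119 ms
print(f"Minimum request/response round trip: {2 * one_way_ms:.0f} ms")  # ~238 ms
```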
So what about the issue of power – both the 400,000-liter question of getting there in the first place, as well as whether it’s truly possible to run a data center of any significant capacity on solar power alone?

Durand admits that the biggest issue is simply working out whether it makes sense. It may be technically feasible and the research may be valuable, but is it possible to do in terms of the EU’s net-zero ambitions?

“By 2050, it’s likely that we will not have unlimited renewable energy capacity. So the first challenge is to analyze how data center needs will evolve because there’s an explosion in the use of data centers, and with people starting to use ChatGPT instead of Google as a search engine for example, the need for data center processing will be huge,” says Durand.

So data center energy demands will continue to increase and renewable power sources on Earth may struggle to keep up.

“When I started the project, I thought it would be a simple lifecycle analysis, but it’s become very complex and involved several types of analysis. It also involves energy prediction, which is huge. But we’re talking to more and more experts in the field who are interested in the project to make sure that it makes sense.

“This might even come as a great push for both the space industry and the technology industry in Europe,” says Durand.

In terms of sustainability, a large launch rocket like the SpaceX Falcon 9 – or the 2050 equivalent – might be able to bring economies of scale to bear, not just in terms of price, but also usage of rocket fuel.

Moreover, by 2050, there may well be a few more space stations orbiting Earth, not to mention Moon bases and possibly even a base on Mars, if SpaceX CEO Elon Musk gets his way.

And each and every one of them will require data center facilities to support them as we expand beyond our planet.
Criticality and redundancy in the data center
Vlad-Gabriel Anghel Head of Product at DCD>Academy
How data centers keep running when there is a failure in the power train
Data centers ensure continuous uptime and availability by making sure the power never goes down. But how does a facility maintain the IT load if there is a grid outage or a failure within its power train? Most data centers worldwide make use of an uninterruptible power supply coupled with a standby diesel generator that kicks in when a grid outage is detected. Let's dive deeper into uninterruptible power supply technology and how it ensures the availability of the load.

Uninterruptible power supplies (UPSs) sit between the utility grid and the data center and provide power to the facility, either from the grid itself or - in the event of a grid disruption - from a local energy store that can power the facility for a short time. UPSs are split into two categories - static and rotary - according to their energy source. As the name implies, rotary UPS systems store energy in a rotating flywheel, which acts as a dynamo to deliver power back to the data center when required. Static UPS systems use batteries. We will leave the comparative properties of static and rotary UPS systems to another article.
The IEC UPS Standard
The IEC 62040-3: UPS - Method of specifying the performance and test requirements is a standard that ensures vendors use the same naming conventions, helping users choose the best product for their requirements, as well as providing test requirements for performance validation.

Since UPSs provide power to the facility, their output performance is specified under the standard in three ways:

Input Dependency (AAA) - the degree to which the output voltage depends on the quality of the input voltage. There are three classifications:
a. Voltage and Frequency Dependent - VFD
b. Voltage Independent - VI
c. Voltage and Frequency Independent - VFI

Output Voltage Waveform (BB) - given as two characters. The first represents the output voltage waveform when operating in normal mode (using utility power) and the second represents the output voltage waveform when operating in stored energy mode (from the batteries):
a. Low total harmonic distortion (THD) sine wave - denoted by S
b. Medium total harmonic distortion (THD) sine wave - denoted by X
c. Non-sinusoidal wave - denoted by Y
There are nine possible combinations of output voltage waveforms, but the two most common types are sinusoidal (S) and non-sinusoidal (Y). Almost all UPSs rated above 2kVA will be denoted as SS, whereas those rated lower than 2kVA will normally be denoted as SY.

Dynamic Output Performance (CC) - the output voltage variations that occur when the operating mode changes - i.e. switching from the grid to the batteries (first character) - and when the load is increased or decreased (second character). These variations fall into three categories:
a. No interruption, with ±20 percent voltage regulation in under 10ms.
b. Up to 1ms of interruption, with ±20 percent voltage regulation in under 10ms.
c. Up to 10ms of interruption, with +10 percent and -20 percent voltage regulation in under 100ms.

For now, it's important to understand how manufacturers of UPS systems represent and name their products.
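To see how those three fields combine into the product names the article refers to, here is a minimal sketch of a decoder. The "VFI-SS-11"-style string format and the use of digits 1-3 for the dynamic performance classes are illustrative assumptions for this example, not quotations from IEC 62040-3.

```python
# Illustrative decoder for an IEC 62040-3-style UPS rating string.
# Assumption: ratings are written "<AAA>-<BB>-<CC>", e.g. "VFI-SS-11",
# with digits 1/2/3 standing in for the dynamic-performance classes a/b/c above.

INPUT_DEPENDENCY = {
    "VFD": "Voltage and Frequency Dependent",
    "VI": "Voltage Independent",
    "VFI": "Voltage and Frequency Independent",
}

WAVEFORM = {
    "S": "low-THD sine wave",
    "X": "medium-THD sine wave",
    "Y": "non-sinusoidal wave",
}

DYNAMIC = {
    "1": "no interruption, +/-20% regulation in under 10ms",
    "2": "up to 1ms interruption, +/-20% regulation in under 10ms",
    "3": "up to 10ms interruption, +10/-20% regulation in under 100ms",
}

def decode_ups_rating(rating: str) -> dict:
    """Split a rating like 'VFI-SS-11' into its three performance fields."""
    dependency, waveform, dynamic = rating.upper().split("-")
    return {
        "input_dependency": INPUT_DEPENDENCY[dependency],
        "waveform_normal_mode": WAVEFORM[waveform[0]],
        "waveform_stored_energy_mode": WAVEFORM[waveform[1]],
        "dynamic_performance": [DYNAMIC[d] for d in dynamic],
    }

if __name__ == "__main__":
    for field, value in decode_ups_rating("VFI-SS-11").items():
        print(f"{field}: {value}")
```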
Modes of Operation
A lot of uninterruptible power supplies can operate in several normal modes that have different output performance characteristics as per the IEC standard. A double conversion online UPS will have a VFI output performance when operating in normal mode and a VFD output when operating in high-efficiency normal mode. Essentially, a trade-off has been made between the independence of the output and the energy consumption of the UPS. There are three main modes of operation:

• Normal Mode - The UPS pushes power to the load using the AC input power source (utility), and the energy storage device (batteries, flywheels, etc.) is connected and charging, or already fully charged.

• High-Efficiency Normal Mode - The UPS pushes the power directly to the load from the AC input power source with the aim of increased efficiency.

• Stored Energy Mode - The UPS feeds the load from its energy storage device (battery/flywheel) because the AC input power source is either experiencing an outage or is outside the allowable voltage or frequency ranges.
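The decision logic implied by these three modes can be sketched in a few lines. Everything below - the voltage and frequency acceptance window, the function names, the eco-mode flag - is an illustrative assumption, not taken from the standard or from any vendor's firmware.

```python
# Sketch of the mode selection implied above; thresholds are made-up
# placeholders, not values from IEC 62040-3 or any real UPS controller.
from enum import Enum

class Mode(Enum):
    NORMAL = "normal"                    # load conditioned via the UPS, storage charging
    HIGH_EFFICIENCY = "high-efficiency"  # load fed directly from the utility
    STORED_ENERGY = "stored energy"      # load fed from batteries/flywheel

def select_mode(utility_ok: bool, voltage_pct: float, freq_hz: float,
                eco_mode_enabled: bool) -> Mode:
    """Pick an operating mode from the state of the AC input."""
    # Assumed acceptance window: +/-10% voltage, 59.5-60.5Hz (illustrative only).
    in_window = utility_ok and 90.0 <= voltage_pct <= 110.0 and 59.5 <= freq_hz <= 60.5
    if not in_window:
        return Mode.STORED_ENERGY        # outage, or input outside the allowable ranges
    if eco_mode_enabled:
        return Mode.HIGH_EFFICIENCY      # trade output independence for efficiency
    return Mode.NORMAL

print(select_mode(True, 100.0, 60.0, eco_mode_enabled=False))  # Mode.NORMAL
print(select_mode(False, 0.0, 0.0, eco_mode_enabled=False))    # Mode.STORED_ENERGY
```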
The double action of the UPS
The role of the UPS in the facility is not just to keep the IT load up and running in the event of a power outage, but also to provide high-quality, clean power from its energy storage device. The grid and the utility are subject to multiple power quality issues stemming from the fuel mix, age, and geographical location. The UPS can "clean up" the power source to remove those issues.

The IEEE Standard 1100-2005 provides a common nomenclature for the business community and electrical industry when referring to power quality problems, and categorizes these into seven categories based on the shape of the sine wave:
1. Transients
2. Interruptions
3. Sag / Undervoltage
4. Swell / Overvoltage
5. Waveform Distortion
6. Voltage Fluctuations
7. Frequency Variations

1. Transients
Transients are the most damaging type of power quality problem and, depending on how they occur, they are split into two categories - Impulsive and Oscillatory.
An impulsive transient is a sudden high-peak event that raises the voltage and/or current. Examples of impulsive transients are lightning (and the electromagnetic fields that follow it), electrostatic discharge (ESD), and poor grounding. An oscillatory transient is, in simple terms, a transient that causes the power signal to swell up and then shrink down very rapidly. This usually occurs when turning off an inductive or capacitive load - think of a spinning motor that has just been switched off: as the motor's spin slows down, it acts as a generator, producing and pushing current through the wider circuit until it comes to a stop.
Figure 1 - Impulsive and Oscillatory Transients - Source: Vlad-Gabriel Anghel, DCD>Academy

2. Interruptions
An interruption is defined as the total loss of the supply voltage or current. Depending on its length, an interruption can be categorized as instantaneous (0.5 to 30 cycles), momentary (30 cycles to 2 seconds), temporary (2 seconds to 2 minutes), or sustained (longer than 2 minutes).

Figure 2 - Interruption - Source: Vlad-Gabriel Anghel, DCD>Academy

3. Sag and Undervoltage
A sag is essentially a reduction of the AC voltage at a given frequency for a duration of 0.5 cycles to 1 minute. Sags are usually caused by system faults or by the starting of equipment with heavy startup currents - like a large air conditioning unit. An undervoltage is the result of long-term problems that create sags. The term brownout is commonly used to describe this, and undervoltages will lead to the failure of non-linear loads (like a computer power supply) and can overheat motors.

4. Swells and Overvoltages
A swell is the opposite of a sag - an uptick in AC voltage for a duration of 0.5 cycles to 1 minute. Common causes include sudden, large load reductions and single-phase faults in a three-phase system. As with undervoltage, an overvoltage is the result of long-term problems that create swells. Continuous overvoltage conditions can increase heat output from devices because of the stress of the additional voltage - something highly detrimental in the data center space.

Figure 3 - Overvoltage and Undervoltage - Source: Vlad-Gabriel Anghel, DCD>Academy

5. Waveform Distortion
This category includes all distortions present in the sine wave, and is further categorized as follows:
a. DC Offset - this occurs when DC current is induced into an AC distribution system, most often caused by a faulty rectifier within the AC-to-DC conversion systems most data centers use.
b. Harmonics - the distortion of the fundamental sine wave at frequencies that are multiples of the fundamental one (120Hz is the 2nd harmonic of a 60Hz fundamental). Effects include overheating transformers, tripping of circuit breakers, and loss of synchronization.
c. Interharmonics - a type of waveform distortion usually caused by a signal imposed on the supply voltage by electrical equipment such as induction motors and arcing devices. The most noticeable effects of interharmonics are flickering displays and incandescent lights, as well as excess heat.
d. Notching - a regular voltage disturbance caused by the normal operation of variable speed drives, arc welders, and light dimmers.
e. Noise - unwanted voltage superimposed on the power system, usually caused by poor grounding. Noise can cause long-term component failure, distorted video displays, and even hard drive failures.

6. Voltage Fluctuations
A voltage fluctuation is essentially a systematic change of the voltage waveform, of small magnitude - around 95 to 105 percent of nominal voltage - at a low frequency, usually below 25Hz.

Figure 4 - Voltage Fluctuations - Source: Vlad-Gabriel Anghel, DCD>Academy

7. Frequency Variations
Frequency variations are the rarest power quality problem in stable utility power systems, but they do pose a problem for data center sites with poor power infrastructure and a heavily loaded generator. That being said, IT equipment is frequency tolerant and not affected by small changes in the frequency of the power train it is connected to. However, frequency variations can cause a motor to spin faster or slower depending on the input power, affecting its lifespan and performance.

Figure 5 - Frequency Fluctuations - Source: Vlad-Gabriel Anghel, DCD>Academy

The solution to all the above-detailed problems is the uninterruptible power supply, which takes in AC power from the utility, uses it to charge its energy storage device, and then provides a clean sine wave and high-quality power to the IT hall. Through a switching mechanism, the UPS keeps the batteries charged and ready to operate even in the event of a prolonged grid outage, with no impact to the IT load or the IT equipment.

Redundancy, Criticality, and "N"
We now understand how a facility rides through a utility outage, but what if the equipment in the facility itself fails? This is where the redundancy of the UPS system comes into play. A UPS system that can provide the full load that the IT load needs, and nothing more, is represented as "N," in which the energy is provided by N subsystems or components. An "N+1" system has an additional component designed to support a single failure or required maintenance of any other component. 2N represents a fully redundant, mirrored system with two complete, independent distribution systems. These are fully separated and not dependent on each other, meaning that even if one whole system fails, the other can safely handle the load.

Figure 6 - 2N UPS Configuration - Source: Vlad-Gabriel Anghel, DCD>Academy

The UPS represents the first line of defense against outages and downtime. Firstly, it regulates, converts, and cleans the utility power, serving a clean sine wave to the IT equipment; secondly, it corrects the swathe of quality problems that plague any electrical grid. There is no one-size-fits-all UPS design - owners and operators need to carefully consider their requirements by measuring factors such as IT load and environment, business aims, and budgets.
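To illustrate the N, N+1, and 2N arithmetic described above, here is a minimal sketch. The 500kW module size and 1,200kW IT load are hypothetical figures chosen for the example.

```python
import math

def ups_modules_required(it_load_kw: float, module_kw: float, topology: str) -> int:
    """Count UPS modules for a given IT load under the topologies described above."""
    n = math.ceil(it_load_kw / module_kw)  # "N": just enough capacity for the load
    if topology == "N":
        return n
    if topology == "N+1":
        return n + 1                        # one spare module for a failure or maintenance
    if topology == "2N":
        return 2 * n                        # two complete, independent systems
    raise ValueError(f"unknown topology: {topology}")

# Hypothetical example: a 1,200kW IT load served by 500kW UPS modules.
for topology in ("N", "N+1", "2N"):
    print(topology, ups_modules_required(1200, 500, topology))
# N -> 3, N+1 -> 4, 2N -> 6
```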
Going metric: Beyond PUE
Many companies have pledged to go net zero by 2030 or 2035. But how will they measure their progress and eventual success?
Graeme Burton Partner Content Editor
You cannot manage what you can’t measure, the old saying goes. But when it comes to sustainability, nothing can be done without adequate measurement – of power, water, and carbon usage; measurements that ultimately tap into almost every aspect of data center operations.

Power usage effectiveness (PUE) proves how effective a good metric can be. When it was first proposed by Christian Belady and Chris Malone in 2006, the average data center PUE stood at around 2.5, meaning that for every 1kW delivered to the IT equipment, the facility drew around 2.5kW from the grid. That PUE of 2.5 probably didn’t even account for the many inefficient legacy enterprise data centers that were still chugging away in the stuffy backrooms of poorly run SMEs at the time. Since then, though, even those SMEs have shifted their IT into either colocation facilities or the cloud, and average PUEs now stand at 1.6, while the large investments from the big boys – Google, Meta (formerly Facebook), Microsoft, and others – enable them to achieve PUEs of around 1.1 in their marquee facilities.
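For readers who want the ratio spelled out, here is a minimal sketch of the PUE calculation; the kW figures simply mirror the 2.5 and 1.1 values quoted above and are otherwise hypothetical.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_load_kw

# Hypothetical snapshots: the 2006-era average vs. a modern marquee facility.
print(pue(total_facility_kw=2500, it_load_kw=1000))  # 2.5
print(pue(total_facility_kw=1100, it_load_kw=1000))  # 1.1
```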
The efficiencies that have been squeezed out by this very simple, straightforward measurement have therefore helped to drive huge savings in terms of power, money, and carbon emissions, but industry figures increasingly warn that PUE is reaching the end of the road and that other metrics are required to continue pushing the industry in the right direction.

“PUE was great ten years ago. It was a simple metric we could all use and it really drove energy efficiency and reduced costs,” Carsten Baumann, director of strategic initiatives and a solution architect at Schneider Electric, told a panel discussion on DCD’s Energy & Sustainability Channel.

“But PUE has been around for some time and most of the ‘low-hanging fruits’ in our data centers have been captured. There’s still more work that could be done, but we’re reaching the point of fast diminishing returns,” he adds.

Part of the issue is the greater awareness of broader environmental issues, such as water usage, as well as the increased focus over the past decade on carbon reduction. Moreover, the sheer volume of data center capacity that is being added makes the issue even more acute, and this will only increase as generative AI applications go mainstream over the next decade – just as all those big data center net-zero deadlines come into view.

The industry isn’t short of ideas over the next steps that could potentially be taken, and the kind of data that needs to be incorporated into new metrics. Nor is it solely being pushed by data center operators who want to be seen as sustainable, but by their increasingly demanding customers; and not just the biggest customers but, increasingly, rank-and-file customers, too.

“We’ve always been very good at reporting energy use because that’s our bread-and-butter. But now our customers are becoming more sophisticated in terms of the type of data they are looking for in order to measure their own overall sustainability performance in our facilities,” Amanda Abell, senior director of sustainability at Vantage Data Centers, tells DCD.

The pressure has been ratcheted up by the well-publicized decarbonization projects of aforementioned big-name companies, such as Microsoft and Google, in their attempts to find something more sustainable than diesel power for backup on the one hand, while also attempting to achieve genuine, 24/7 renewable power for their facilities.
“Google is trying to achieve 24/7 carbon-free energy. That’s the next level of granularity that we need to be able to provide to customers within our facilities, because Google certainly isn’t the only one asking that question, trying to tie energy use with real-time renewables from the grid. “To do that you need almost minuteby-minute – at a minimum hourly – data on operations so that you can assign each electron to a customer for their operations. They haven’t asked for it yet, but we can see that trend coming,” says Abell.
Even if a data center operator is able to provide 100 percent renewable power in a facility with a low PUE, the conversation will then turn to water usage, says Adam Witkop, chief technology officer at Crane Data Centers, which hopes to build sustainability-focused data centers in the US. And those questions aren’t just coming from customers, but from the municipalities Crane is looking to expand into.

“It even goes beyond energy and water use. We’re also getting questions around the carbon impact of how we build and how we operate,” says Crane’s Witkop.

In other words, while PUE was straightforward (even if it could be ‘gamed’ by less scrupulous organizations, the scope for doing so was limited), tackling this range of environmental issues with new metrics introduces a plethora of complexities. “It’s going to require digitization; a ‘single source of truth’ of data from companies and driving transparency industry-wide,” he says. Perhaps, he adds, it requires something akin to food labeling to enable different offerings to be fairly compared.

“By far the largest contributor to carbon emissions is the embodied carbon in the supply chains,” says Baumann. So the first step is to ensure that any product or service comes with an environmental product declaration, as specified by ISO 14025. This will provide an environmental lifecycle assessment that can help organizations capture their Scope 3 carbon emissions, enabling them to choose lower carbon options. However, ISO 14025 is still in its infancy and the level of information required means that it is somewhat bureaucratic – it even requires third-party certification, so is not going to be suitable for a wide range of goods and services.

“I wouldn’t say that our own customers are asking for that level of granularity [on carbon emissions] at this point,” says Witkop. “Our customers are still really focused on energy and water at this point.”

Crane is looking to suppliers like Schneider to provide that data so the company is prepared and ready when customers start to ask questions about Scope 3 emissions. “The challenge is that not every supplier is like Schneider and they’re not all doing this and, if they are, they’re not all doing it the same way,” he adds. Even something as mundane as the concrete used to construct Crane’s data centers can have different levels of carbon embodied in the mix depending on where it is made.

Witkop adds: “The challenge for us as a developer is how do we get to the point where some of these metrics and reporting standards become as pervasive as PUE?”

But the question is, will all these efforts yield genuine, sustainable environmental benefits? Some remain skeptical.

In September, Princeton University’s ZERO lab released a working paper on the most common net-zero energy-purchasing policies pursued by some of the world’s biggest companies. It concluded that they will have little or no long-term effects on reducing carbon emissions.

The research paper, ‘System-level impacts of voluntary carbon-free electricity procurement strategies,’ suggests that the large-scale purchases of renewable power by organizations that want to be seen as green don’t take into account the bigger picture. One power-hungry organization tying up long-term contracts for limited renewable energy capacity often just means less for other organizations, at a higher price. Therefore, it can simply lead to the displacement of carbon emissions.

Even shifting from renewable energy certificates (RECs) to power purchase agreements (PPAs) - in which the customer pays for a certain quantity of renewable power to offset its own use - only goes so far, the report suggests. Nevertheless, it does highlight the need for measurements and metrics that everyone can rely upon, as well as action on sustainability which is about more than just one organization serving itself.
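Returning to the hourly, electron-by-electron accounting Abell described earlier: it boils down to comparing each hour's consumption with the carbon-free energy delivered in that same hour. The sketch below shows one common way to score that; the hourly figures are invented for illustration.

```python
# Minimal sketch of hourly 24/7 carbon-free energy (CFE) matching: for each
# hour, only the clean supply up to that hour's load counts toward the score.
# The figures below are invented for illustration.

hourly_load_kwh = [100, 100, 120, 150, 160, 140]  # facility consumption per hour
hourly_cfe_kwh  = [ 40,  60, 130, 180,  90,  50]  # carbon-free supply per hour

matched = sum(min(load, cfe) for load, cfe in zip(hourly_load_kwh, hourly_cfe_kwh))
total = sum(hourly_load_kwh)
print(f"Hourly-matched CFE score: {matched / total:.0%}")  # share of load covered hour by hour
```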
Delivering Increased Capacity and Energy Savings in Data Centers with Maxwell™
Maxwell™ is a patented, innovative technology that increases heat transfer system-wide with low risk and high sustainable returns for end users.

In the fast-paced world of data centers, where efficiency, capacity, and sustainability are paramount, a breakthrough innovation has emerged to address these critical needs. Maxwell™, a heat transfer fluid additive developed by HT Materials Science, is revolutionizing the way data centers operate by significantly increasing delivered capacity and delivering substantial energy savings.

Data centers are the backbone of our digital world, powering everything from cloud services to e-commerce platforms and the healthcare industry. However, they face a dual challenge: the ever-increasing demand for computing power and the need to reduce energy consumption and carbon emissions. The data center market continues to be a titan of ever-expanding size and necessity in today’s world, and as data centers continue to grow in size and complexity, new solutions are needed to overcome these challenges.
DECREASE HVAC ENERGY 10-15%
Maxwell™ is a game-changer for data centers and other high-usage environments. This innovative heat transfer fluid is engineered to enhance thermal energy transfer in closed-loop hydronic cooling systems. What sets Maxwell™ apart from other technologies is its ability to increase thermal energy transfer by approximately 15% or more in water or glycol-based systems. This boost in efficiency translates into several significant benefits for data centers.

One of the standout advantages of Maxwell™ is its capability to increase cooling capacity. In data centers, maintaining the right temperature is crucial to prevent equipment overheating and ensure optimal performance. Maxwell™ achieves this by improving heat transfer within the cooling system. For instance, in the evaporator of chillers and heat pumps, Maxwell™ enhances heat transfer between the fluid and the refrigerant. This enhanced heat transfer results in the compressor doing less “work” by decreasing lift, and lowers energy consumption. As a result, data centers can achieve higher cooling capacities without a corresponding increase in energy usage. In some applications, the Coefficient of Performance (COP) can be boosted by 15% or more, demonstrating Maxwell’s effectiveness in increasing cooling efficiency.

Data centers are notorious energy consumers, and their environmental footprint is a growing concern. Maxwell™ steps in as an eco-friendly solution that not only increases capacity but can also contribute to substantial energy savings, if increased capacity is not required. By reducing the workload on chillers, compressors, heat exchangers, and other cooling components, Maxwell™ leads to a decrease in overall system energy consumption.

The annual energy savings achieved with Maxwell™ translate directly into cost reductions for data center operators. Lower energy bills and decreased carbon emissions not only make financial sense but also align with global, governmental, and corporate sustainability goals. The adoption of Maxwell™ supports data centers in becoming more environmentally responsible while maintaining high-performance standards.

Perhaps one of the most compelling aspects of Maxwell™ is its rapid return on investment (ROI). Data center operators can expect to recoup their investment in Maxwell™ within a remarkably short timeframe, typically ranging from one to three years. The specific payback period depends on factors such as utilization rates and energy costs, but the financial benefits are tangible, substantial, and sustainable.

In the dynamic landscape of data centers, staying ahead of the curve requires innovative solutions that can boost capacity, increase efficiency, and contribute to sustainability. Maxwell™ from HT Materials Science is a solution that checks all of these boxes. By enhancing thermal energy transfer, it not only expands cooling capacity but can also drive down energy consumption and operating costs. With a rapid ROI and significant environmental benefits, Maxwell™ is poised to revolutionize the way data centers operate and grow, ensuring they remain the backbone of our digital world for years to come. Data center operators looking to maximize efficiency and sustainability should consider Maxwell™ as a powerful ally in their journey toward a more efficient and eco-conscious future.
Contact HT Materials Science for more information: 120 W 45th St, 2nd Floor New York, NY 10036 info@htmaterialsscience.com (716)446-4171
The AI Century
Lessons from Montreal for the AI century
There’s something unsettling about the world’s smartest people and the world’s richest people teaming up to use vast quantities of compute all for one task: To get me fired.

Despite what we are often led to believe, technological advancement is not inevitable. We’re not locked to one pathway of progress, unable to save ourselves from our inventions. Public input, government policy, and careful thinking at the corporate and individual level can lead to drastically different outcomes.

When it became clear that chlorofluorocarbons depleted the ozone, companies like DuPont tried to lobby against any action. The company called the ozone depletion theory a "science fiction tale [and] utter nonsense," arguing that the progress and lifestyle improvement offered by CFCs were worth any risk. But concerted efforts by activists, scientists, governments, and some of DuPont’s competitors brought about the Montreal Protocol, a comprehensive ban on CFCs that reversed a growing ozone hole, and saved us from disaster.

Now, as we race into the age of AI, we must once again consider where we draw the line. Those pushing generative tools intentionally put the focus on the far-away risk of artificial general intelligence, and not on the much more apparent risk of deep, lasting job losses.
Beyond the initial layoffs, it's also not clear what a post-human creator world would actually look like. Good journalism, already a rarity, requires not just rewriting press releases, but adding to the world's collective knowledge through investigation, interviews, and imagination. AI won't be able to do that, but it may be able to kneecap the business models that support good journalism with workaday information delivery that involves a certain amount of press release rewriting. The same is likely in other sectors. AI will be able to work across industries, cheaper and faster than people, and does not need sleep or threaten unionization. For managers, it will be a simple balm to paper over any issues. But, in doing so, certain skills will be lost that the AI won't be able to handle. We don't know what those skills are, so we're not prepared for what might disappear. Already the gamification of some roles, with human workers trained to work in less intelligent ways to meet performance metrics, hints at what is to come. As we rush headlong into this eventuality, it’s worth pausing and thinking about what we want to save, and how to realign technology to build a future that we can actually live in.
- Sebastian Moss, Editor-in-Chief