DCD Mag Issue 49: The Carrier Hotel Renaissance


Also Inside: Race cars • Big chips • Telco towers | Issue 49 • July 2023 | datacenterdynamics.com

Connect

Data Center Solutions

Delivering global value at every phase of data center development.

Whether you operate a multi-tenant data center, work exclusively within edge environments, require a strategic point of distribution for your hyperscale, or have new enterprise facility needs around the world, Wesco can help.

We build, connect, power and protect the world.
Wesco.com/datacenters

Contents

6 News

Singapore’s tentative return, Twitter’s unpaid X-penses, Compass’ sale

12 The rise and rebirth of carrier hotels

Legacy interconnection hubs find new meaning in the age of cloud & Edge

18 New York’s carrier hotel microcosm

The future of the Big Apple’s big data center push

24 Pure DC’s CEO - Dame Dawn Childs

On building hyperscale data centers and becoming a Dame

31 The Network Edge supplement

AtlasEdge, telco towers, the Edge in review, and satellite networking in this extra-large supplement

47 The driving force behind F1

How data centers can help top racers shave precious time off of their laps

52 Big plans for the big chip company’s big supercomputer

Cerebras and Colovore want to take advantage of the AI revolution and expand across the US

56 The many lives of Evoque’s data centers

Former telco assets find a new life as Evoque targets hyperscalers

60 What is green software?

It’s time for more efficient code to cut our carbon footprint

66 Evaluating carbon capture’s promise

Taking carbon out of the atmosphere sounds like a brilliant idea. But...

70 The broadband industry’s collective governing body

WBBA hopes to copy the GSMA and create a standards body for the sector

72 The great telecom tower sell-off

Why operators are spinning off their assets

76 Op-ed: Don’t get hooked on carbon capture

Why the easy promise of buying carbon salvation can never beat simply creating fewer emissions


From the Editor

You could be forgiven for thinking that carrier hotels are old-fashioned - that their days are numbered. But if you think this, you're wrong.

In big cities like New York, the carrier hotels are in vast old buildings. 60 Hudson was built for Western Union nearly 100 years ago, and has housed data centers for around 25 years.

Carrier hotels appeared during the 1990s colocation boom, and are based around a meet-me room where individual carriers connect. Is that relevant in a cloud-and-Edge era?

Meet-me rooms are just as important as when they first appeared in the 1990s

Very much so. Not all of our capacity can be centralized in the cloud, or pushed to a street corner.

The need for carrier-neutral space is greater than ever, even while enterprise capacity is migrating into centralized hyperscale barns.

But carrier hotels face challenges. Some are in old buildings that have already been refitted several times.

In other locations, there are no meet-me rooms at all. Some local governments welcome cloud data centers, while all the carrier-neutral traffic has to be sent via a meet-me room in another, far-off location.

This issue covers carrier hotels, and we also present a deeper look at the situation in New York.

Green code and carbon capture

Data center operators are trying to reduce their emissions by improving cooling systems and servers, but the most effective way to reduce emissions might be green software.

If code ran twice as efficiently, you could do without half your data centers.

And it's no pipe dream. In the early days, software had to be compact, but for years now processors have been powerful enough to perform almost any task, and software development aims for delivery (beating the competition), not efficiency (using fewer cycles).

We've been talking to people who want green software, and their pitch is better than that of the carbon capture proponents, who want to pull CO2 back out of the air after we've emitted it.

The Dame of hyperscale

Uniquely among data center CEOs, Dawn Childs is a Dame of the British Empire. We spoke to her about Pure DC, and what the industry needs to do to get more women involved.

Elsewhere in this issue, we find out what's changed at AT&T spin-off Evoque, and look at the prospects for Cerebras's giant AI chips, and its specialized water-cooled colocation partner.

Live fast, sell towers

Formula 1 is a data-driven sport, where narrow margins are created by complex calculations. We kick the tires on some of the world's most tightly regulated supercomputers.

And for our telecoms focus, we have a supplement on the telco Edge - and a feature asking why the mobile industry no longer wants to own its towers.

1928: The year work started on building 60 Hudson, now a major New York carrier hotel.

Meet the team

Publisher & Editor-in-Chief

Sebastian Moss @SebMoss

Executive Editor

Peter Judge @Judgecorp

News Editor

Dan Swinhoe @DanSwinhoe

Telecoms Editor

Paul Lipscombe

Reporter

Georgia Butler

Head of Partner Content

Claire Fletcher

Partner Content Editor

Graeme Burton @graemeburton

Partner Content Editor

Chris Merriman @ChrisTheDJ

Brazil Correspondent

Tatiane Aquim @DCDFocuspt

Designer

Eleni Zevgaridou

Head of Sales

Erica Baeta

Conference Director, Global

Rebecca Davison

Channel Management Team Lead

Alex Dickins

Channel Manager

Kat Sullivan

Channel Manager

Emma Brooks

Channel Manager

Gabriella Gillett-Perez

Chief Marketing Officer

Dan Loosemore

Head Office

DatacenterDynamics

22 York Buildings, John Adam Street, London, WC2N 6JU

Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.

Dive even deeper: Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color: Events, Intelligence, Debates, Training, Awards, CEEDA.

© 2023 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@datacenterdynamics.com. Any
views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates.
Peter Judge Executive Editor

News

The biggest data center news stories of the last three months

Singapore selects Equinix, Microsoft, AirTrunk, and GDS for 80MW data center trial

Four data center operators have been allocated 80MW of capacity for new facilities in Singapore as the city-state looks to end a years-long moratorium on new developments.

Singapore has had a moratorium on new data center developments since 2019 (although already authorized facilities were allowed to be built after this point). However, this ban was relaxed slightly in July 2022 when the Singapore Economic Development Board (EDB) and the Infocomm Media Development Authority (IMDA) announced a pilot scheme allowing companies to bid for permission to develop new facilities.

IMDA and EDB said this week they have provisionally awarded around 80MW of new capacity to four data center operators.

The four winning operators were listed as Equinix, GDS, Microsoft, and a consortium of AirTrunk and TikTok-owner ByteDance.

The EDB and IMDA said the four proposals were best able to meet the state’s “desired outcomes to strengthen Singapore’s position as a regional hub and contribute to broader economic objectives.”

EDB and IMDA said there was “significant interest,” receiving more than 20 proposals.

“We aim to allocate more capacity in the next 12 to 18 months to advance our interest as an innovative, sustainable, global digital hub,” the EDB and IMDA said.

“We remain committed to the sustainable growth of the DC sector and will develop a roadmap together with the industry towards the development of Green DCs with lower carbon emissions in support of Singapore’s net-zero target.”

When the pilot was launched last year, it was reported authorities would allocate around 60MW of capacity for data center development on the island.

Digital Realty, which already has three data centers in Singapore, was one of the companies rumored to be interested but missed out. The company reportedly applied to build a 60MW data center under the pilot.

Equinix operates five facilities in Singapore. Its SG5 data center was one of the last to open after the moratorium was introduced.

AirTrunk opened its Singapore data center in 2020. Microsoft opened its Singapore Azure cloud region in 2010; the region has three availability zones. This would be GDS’ first facility in Singapore.

bit.ly/SingaporeThaw

DataBank sells French data centers to Etix Everywhere

US operator DataBank is selling its French data center portfolio to Etix Everywhere and exiting the country. This includes five data centers in Paris, Toulouse, and Montpellier, totaling 3.7MW.

Microsoft signs 24/7 nuclear power deal with Constellation

The cloud company signed a PPA with Constellation to power a new data center in Boydton, Virginia, with nuclear energy in a 24/7 matching deal.

Infratil acquires majority stake in Console Connect for $160 million

Investment firm Infratil has bought a stake in the network provider from PCCW’s HKT. The two companies are to jointly invest up to $295m in Console Connect to speed growth.

D-Wave avoids delisting from the NYSE

Quantum computing company D-Wave has regained compliance with the New York Stock Exchange (NYSE), after being faced with potential delisting. Its stock traded below the required $1 minimum for months but has since rallied.

US gov’t considering restricting Chinese access to US cloud providers

The WSJ reports the Biden admin is considering new rules that would require US providers such as AWS and Microsoft to seek permission before providing certain services to Chinese customers. The idea is to close loopholes allowing blocked customers access to AI hardware.

AWS’ head of data centers joins Google Cloud, Google’s Urs Hölzle steps back

Chris Vonderhaar left AWS in May after 13 years at the company and is now heading up Google’s cloud supply chain and operations. At the same time, Urs Hölzle, one of Google’s first employees and former data center lead, is stepping away from many day-to-day activities.


Twitter plays chicken with Oracle, Google, and AWS over unpaid bills

Twitter has been playing chicken with its cloud providers over unpaid bills.

As well as not paying rent on offices, the social media company in recent months has been accused by numerous publications of not paying bills to Amazon Web Services, Google, and Oracle in the wake of new owner Elon Musk’s ongoing efforts to cut costs.

In a number of instances the company was reportedly close to having its services cut off before relenting.

Oracle reportedly went months without being paid for services rendered. Oracle representatives have reportedly been calling current and former Twitter employees in an attempt to collect on overdue invoices “well into the six-figure range.”

Twitter CEO Elon Musk and Oracle founder Larry Ellison are long-time friends, with the latter investing $1 billion into Musk’s $44bn takeover of Twitter.

At the same time, Twitter has reportedly been withholding payments due to Google as its contract comes up for renewal, potentially risking some services being shut off.

Reports said Google could not get through to Musk to discuss the unpaid bills, and went as far as attempting to reach him through his other firm, SpaceX, also a GCP customer.

The two companies seemingly ended the feud after high-level talks between Twitter’s new CEO Linda Yaccarino and Google Cloud CEO Thomas Kurian. The two companies were said to be now exploring a ‘broader partnership’ that could include advertising and Google’s use of Twitter’s API.

Twitter signed a contract with Google in 2018, and announced an expansion of its GCP footprint in 2021. The multi-year deal up for renewal reportedly spanned five years and was valued at more than $1 billion, with some $300 million due in 2023.

In March, Twitter began a similar game of cat and mouse with Amazon Web Services (AWS), reportedly refusing to pay its AWS bills for months.

In retaliation, Amazon then refused to pay for the advertising it runs on the social platform, leading Twitter to pay at least some of what was owed. Twitter had been an AWS customer since at least 2020.

Since taking over Twitter in a $44 billion acquisition last year, Elon Musk has been looking to cut company costs, including its IT footprint, by reducing spend on both cloud and on-premises IT resources.

The company has closed one of its three US data centers and reportedly exited another – with Musk’s other company Tesla taking vacated space in at least one of the sites. It has also cut back on server capacity, and abruptly fired IT and software workers who kept the service online.

Twitter has experienced a number of major outages since Musk took over.

The company has also faced an exodus of users to Twitter clones - including Threads, from Mark Zuckerberg’s Meta - which have launched hoping to lure disaffected customers.

bit.ly/BlueBirdPlaysChicken

Meta growth causing more emissions than it can cover

Meta has been growing too fast to achieve its goal of being net zero in 2030, according to a report it has issued.

The social media firm’s 2023 Responsible Business Practices report shows the company’s supply chains and ecosystem produced 8.5 million tonnes of CO2 equivalent in 2022, while its own emissions were only 67,000 tonnes.

Meta admits its growth over the last five years has vastly exceeded its ability to cancel the emissions: “Our business growth accelerated at a faster pace than we can scale decarbonization.”

The company says: “Early in this decade, we do not expect decarbonization and business growth to be in harmony.” With factors such as staff returning to offices after the pandemic, Meta’s emissions increased 46 percent in 2022.

Most of Meta’s emissions come from the goods and services it buys, which are very hard to decarbonize. It offers some promises to “engage with suppliers” to get them to use renewable energy, but falls back on a promise to use carbon removal to compensate for the shortfall.

The data center with the biggest carbon footprint is Meta’s largest, in Prineville, Oregon, with 4,500 tonnes of CO2 equivalent.

bit.ly/WhatsTheMetaWithEmissions


Residents in Virginia’s Prince William County look to sell land to developer

People living in Virginia’s Prince William County are seeking to sell their land to an unnamed data center developer.

Residents of Sanders Lane outside Gainesville are seeking to assemble and sell their properties to a data center developer in a bid to copy the efforts of landowners on nearby Pageland Lane, who clubbed together to sell more than 2,000 acres to QTS and Compass for two massive new data center developments outside Manassas.

Sanders Lane is located directly north of Pageland Lane, on the opposite side of Sudley Road, and on State Route 234.

Like Pageland, it is largely rural land adjacent to a power transmission line. It falls under the Catharpin unincorporated community.

Gainesville Supervisor Bob Weir has confirmed that an assemblage effort is proceeding on Sanders Lane.

The identity of the potential buyer has not been disclosed. Likewise, details of which parcels and how much total acreage are on offer aren’t known at this point.

Meanwhile, Amazon has continued its mammoth efforts to cover every inch of Virginia in data centers.

The company recently filed for four data center campuses in Virginia’s Spotsylvania and Caroline Counties, which would amount to more than 10 million square feet.

Most of the campuses are close to the town of Carter’s Store, located south of Fredericksburg. Documents suggest the company is planning to invest around $1 billion in each project, most of which will be built out in phases until around 2035.

Amazon has also filed plans with Loudoun County to demolish nine office buildings across three sites in Sterling and replace them with four data centers spanning more than 900,000 square feet (83,600 sqm) between them.

The trio of site plans would see Amazon knock down the three Loudoun Commerce Center buildings at 45965, 45969, and 45975 Nokes Blvd.; the three buildings at 46000, 46010, and 46020 Manekin Plaza; and the three buildings at 21660, 21670, and 21680 Ridgetop Circle. All three sites are within a mile of one another and located close to a number of other data centers.

Amazon has also recently acquired a 25-acre plot of land at 44150 Wade Drive in Chantilly in Fairfax County. Currently a composting site, the property is already zoned for data centers and is located just north of a trio of data centers operated by AWS, and west of a site set to house a four-building data center campus from the cloud giant in the future.

Elsewhere, in Virginia’s Culpeper, two developers have filed to build two adjacent campuses which would add up to 4.46 million sq ft.

Peterson Companies is looking to construct a nine-building development known as the McDevit Technology Campus. Directly north, CR1/Culpeper LLC and CR2/Culpeper LLC are planning an eight-facility development known as the Copper Ridge Campus.

bit.ly/PWGatewayII

Bidding war erupts for Chinese data center firm ChinData

Chinese data center firm ChinData is in the middle of a bidding war between its largest investor and a Chinese state-owned holding firm.

In June, ChinData’s existing investor Bain Capital submitted a non-binding proposal to acquire all of the outstanding ordinary shares of the company and take the company private.

In July, China Merchants Capital Holdings submitted a larger counter-bid to acquire the company.

Bain, however, has told ChinData it will not sell its shares to any third party and remains “fully committed” to its own bid.

ChinData operates more than 17 data centers across China, Malaysia, and Thailand, with its non-Chinese facilities operated by subsidiary Bridge Data Centres, which is also building a site in India. TikTok owner ByteDance is a major ChinData customer.

Bain bought ChinData in 2019 from Wangsu Science & Technology Co. and merged it with its portfolio firm Bridge Data Centres. The combined company went public in 2020. Bain still owns around 42 percent of ChinData, with SK Holdings taking a stake in the company in 2020.

ChinData was reportedly fielding acquisition offers last year, with China Merchants reportedly considering a bid.

bit.ly/BainToTheBone


The Global Critical Cooling Specialist

With British engineering at the heart of our products, Airedale manufactures across several continents so that our clients can apply our solutions worldwide.

A digital world needs a global specialist. Our chillers, software, CRACs, CRAHs and fan walls are engineered to perform in the toughest conditions, all year round. When you partner with Airedale, you can be reassured of quality, reliability and efficiency.

Start your journey with us today.

Leeds, UK (Global Headquarters): Chillers, CRAHs, Telecoms, R&D, Test Labs

Consett, UK: AHUs, CRAHs, Fan Walls

Guadalajara, Spain: CRAHs, Fan Walls, Test Labs

India: CRAHs

Rockbridge, Virginia, US: Chillers, Test Lab

Grenada, Mississippi, US: CRAHs, Fan Walls, Test Lab

Dubai, UAE: Sales Office

Global locations: www.airedale.com

ViaSat-3 satellite suffers “unexpected event” during reflector deployment

Viasat’s latest communications satellite is experiencing deployment issues that will impact its performance.

In July the company said an “unexpected event” occurred during reflector deployment that may materially impact the performance of the ViaSat-3 Americas satellite.

Viasat and its reflector provider are conducting a review to determine the impact and potential remedial measures, and contingency plans are being “refined” to minimize the economic effect on the company.

“We’re disappointed by the recent developments,” said Mark Dankberg, Chairman and CEO, Viasat. “We’re working closely with the reflector’s manufacturer to try to resolve the issue. We sincerely appreciate their focused efforts and commitment.”

Viasat said potential options include redeploying satellites from Viasat’s existing fleet to optimize global coverage, and/or reallocating a subsequent ViaSat-3 class satellite to provide additional bandwidth over the Americas.

The company’s share value has dropped almost 35 percent since it announced the news.

First announced in 2015, ViaSat-3 is a constellation of three geostationary Ka-band communications satellites: Americas, EMEA, and APAC. The satellites are each expected to have a throughput of more than 1Tbps and download speeds of 100+ Mbps. The company itself has previously noted the satellites’ “enormous” reflector.

The Americas satellite was the first of the three in orbit, launched in May 2023 aboard a SpaceX rocket. The EMEA satellite is due to launch in September. The company didn’t say if this reflector issue will impact the deployment of the next satellite.

SpaceIntelReport noted that, if the satellite is lost, Viasat may trigger a $420 million claim. A space insurance underwriter described the situation to CNBC as a “market-changing event” for the sector.

CNBC noted the design of the reflector on the Viasat-3 Americas satellite appears to match the “AstroMesh” line of reflectors that Northrop Grumman advertises. Northrop Grumman did not immediately respond to CNBC’s request for comment. Boeing provided the satellite bus, a 702MP+ platform.

In June, US-based Viasat closed its $7.3 billion merger with British satellite operator Inmarsat.

bit.ly/ViaSad-3

Vodafone and Three agree to £15bn UK merger

Vodafone and Three have agreed to a £15 billion ($19bn) merger, to create the UK’s biggest mobile operator with 27 million combined subscribers.

First announced in October, the merger will give Vodafone a 51 percent majority stake in the combined entity, currently labelled as “MergeCo,” with CK Hutchison’s Three holding the remaining 49 percent.

The move will consolidate the UK’s mobile market from four operators to three, with BT/EE losing its place as the UK’s biggest mobile network provider by customer numbers. Virgin Media O2, created by merger in 2021, will also be overtaken.

Vodafone said it expects the deal to close by the end of 2024, pending regulatory approval.

No cash is expected to change hands. The deal will be completed through a debt adjustment, with Three transferring £1.7bn to the new company.

bit.ly/Vodafon3

Dish Network linked with possible EchoStar merger

Dish Network is eyeing a merger with satellite communications provider EchoStar.

Dish co-founder and chairman Charlie Ergen is reportedly looking to merge the two halves of his telecom empire, with both companies engaging advisors to work on a deal.

EchoStar Communications Corporation was founded back in 1980 as a distributor of C-band TV systems. Dish was spun out of the company to offer home satellite services in 1996.

Dish has been investing heavily in its 5G network rollout, and last month met its commitment to cover 70 percent of the US with a 5G wireless network.

The company’s next deadline is set for 2025, and while it only requires that Dish cover 75 percent of the US, this will cost billions as Dish will have to cover rural and hard-to-reach parts of the country. The telco has around eight million retail wireless subscribers.

bit.ly/ADishBestServedMerged


Sabey president says hyperscalers are blocking colo efficiency improvements

Despite their own sustainability claims, big cloud players are forcing their colocation landlords to waste energy on their behalf.

Companies such as Amazon, Google, Meta, and Microsoft are insisting on contract terms that force third-party operators to keep data centers cooler than necessary, according to Sabey Data Centers president Rob Rockwood.

Despite raising temperatures in their own facilities to as high as 29.4°C, hyperscalers are insisting on contract terms that keep hall temperatures lower than necessary and demand that facility owners waste energy cooling them down.

Colo providers want to change cooling standards, but the hyperscalers are apparently refusing to cooperate.

“If we as an industry could get more reasonable [contracts] for what goes on inside a data center, it would decrease the cost to deliver it, it would decrease the cost of the lease, and it would decrease the amount of power that gets used,” Rockwood said. “It would very much be a more efficient operation overall.”

bit.ly/TooCoolForColos

Brookfield and Ontario Teachers’ Plan acquire Compass Datacenters for $5.5bn

Investment firm Brookfield Infrastructure Partners has agreed to acquire Compass Datacenters alongside the latter’s existing investor Ontario Teachers’ Pension Plan.

In June the companies said they had entered into a definitive agreement with RedBird Capital Partners and the Azrieli Group to acquire Compass. The transaction is expected to close by year-end, subject to customary regulatory approvals. Terms of the deal were not disclosed, but previous reports suggest the deal could be valued at more than $5.5 billion, including debt.

Brookfield said Compass founder and CEO Chris Crosby, as well as the current Compass management team, will continue to lead the company after the transaction closes.

The companies said the new ownership structure will provide Compass with “continued strong operational and financial support” to execute its data center campuses across multiple regions.

Founded in 2011, Compass currently operates or is developing around 16 data center sites across the US, Europe, and Israel. The company also has a modular offering, and plans to develop an 11 million sq ft data center campus in Virginia.

Ontario first invested in Compass alongside Redbird in 2017. Israeli real estate developer Azrieli Group bought a 20 percent stake in the company for £135 million ($172m) in 2019.

Infrastructure investor Brookfield has significant data center holdings and has made billion-dollar deals in the space before. The firm recently acquired European data center firm Data4 in a deal that reportedly valued the company at around €3.5 billion ($3.77bn).

This is the largest data center acquisition announced so far in 2023.

bit.ly/BrookfieldStrikesAgain

Peter’s factoid: Telecoms firm Veon - which was originally founded in Russia - is to invest $600 million into its Ukrainian subsidiary Kyivstar in order to deploy 4G services across the war-torn country.

KDDI’s Telehouse acquires Allied’s data center portfolio for $1 billion

Telehouse has acquired three data centers in Canada, marking its entry into the country.

The data center firm’s parent company KDDI this week announced it had acquired three data centers and the accompanying assets in Toronto, Canada from Allied Properties REIT. KDDI paid CA$1.35 billion (US$1.02bn) for the portfolio, and will be establishing a new local entity, KDDI Canada, Inc.

The Allied data center portfolio comprises freehold interests in 151 Front Street West and 905 King Street West, and a leasehold interest in 250 Front Street West. It does not include 20 York Street, the site for Union Centre.

KDDI also recently acquired a stake in Japanese provider Internet Initiative Japan (IIJ). Around 18 million shares in IIJ owned by NTT were acquired by KDDI for around 51.2 billion yen ($370.1m).

May saw another large acquisition, when Manulife Investment Management bought a controlling stake in Serverfarm, which operates nine data centers globally. Terms of the deal weren’t shared.

bit.ly/NewKDDIsOnTheBlock


The rise and rebirth of carrier hotels

What place do legacy interconnection hubs have in a cloud & Edge ecosystem?

Aside from a few modernizations such as LED bulbs, the lobby of 60 Hudson Street in New York City looks largely the same as it did in the 1930s, down even to a row of original phone booths preserved by the building manager.

But little else has remained unchanged about this landmark of data center history. While it has always been a telecoms hub, today the building is a major data center interconnection point, ready to undergo its latest phase as old tenants move out and new ones move in.

Today the ducts that run throughout the building and surrounding area carry packets of data along fiber instead of packets of mail.

Since the mid-1990s, 60 Hudson and a number of peer facilities have kept the Internet’s lights on, providing interconnection capabilities unrivaled anywhere else in the world.

But as the data center market increasingly evolves towards purpose-built, high-density cloud and Edge facilities, can these historic legacy facilities keep up with modern demands?

Carrier hotels: A slice of data center history

There is no industry standard definition of a carrier hotel versus merely a data center with a meet-me room. But, generally, the term is reserved for facilities where metro fiber carriers meet long haul carriers – and where the number of network providers runs into the dozens.

That means these facilities are often located centrally in major metros, in large multi-story buildings. The compact and built-up nature of inner-city development means it's not unusual to retrofit existing real estate for digital infrastructure – it's common to see carrier hotels housed in buildings dating back more than 100 years.

“Carrier hotels, they really start with the meet-me rooms,” says Gerald Marshall, CEO of Netrality Data Centers. “They have long been the connectivity hub, forming a critical mass of carriers which form the foundation of an ecosystem.”

“If you look at a fiber map where carrier hotels are located, all the lines run into each other [at these locations],” he adds. “Whereas if you look at just a commodity data center, you might see one or two lines of fiber optic cables going in. So it's a staggering difference between a carrier hotel and just a normal data center.”


These buildings will often host multiple colocation providers and offer interconnections via one or more meet-me rooms (MMRs). The presence of a large number of carriers is the major draw of these facilities, but it takes time to reach a critical mass.

Many of the most interconnected facilities in the US date back to the mid-1990s and the deregulation of the telecoms sector, and have spent decades building up their connectivity amid shifting technology waves.

Amid ever-growing hyperscale grey box builds in the countryside and containerized Edge facilities built in various nooks and crannies, many carrier hotels retain some old-world character, humming away quietly in defiance of a market constantly changing around them.

“These carrier hotel locations, they're artifacts of how the Internet evolved,” says DataBank CEO Raul K. Martynek. “But they are still the connectivity bedrock of the modern Internet. And these buildings are irreplaceable.”

The key differentiator, he says, is the number of fiber networks. Carrier hotel locations serve as the termination point for fiber connections and the exchange of IP and voice traffic.

“They appeal to that carrier and content ecosystem because the Internet is a network of networks, and at some point, these networks have to physically touch. And where they physically touch is in these carrier hotel locations.”

Mike Hollands, VP of market development at Digital Realty, adds: “The role of a carrier hotel has never been more important. Carrier hotels act as pulsating hubs, enabling diverse networks to interconnect, creating a robust and resilient infrastructure for global connectivity. As data demands surge exponentially, carrier hotels continue to play an increasingly vital role.”

Legacy benefits, legacy challenges

Some of the earliest interconnection facilities remain in operation today. As well as 60 Hudson Street, the likes of 111 8th and 32 Avenue of the Americas in New York City are still humming away, many having launched in the late 1990s or early 2000s.

In California, for example, MAE-West - housed in the Market Post Tower at 55 South Market in San Jose - was one of the earliest Internet exchanges. Launched in 1994 and previously operated by MCI Worldcom, CoreSite still operates the facility as its 80,000 sq ft SV1 data center with access to some 65 networks.

529 Bryant Street in Palo Alto, originally a local telephone company building, was developed as an independent carrier-neutral exchange point in 1996 and became home to the Palo Alto Internet Exchange (PAIX). PAIX was owned and operated by Digital Equipment Corporation; AboveNet acquired PAIX in 1999 and sold it to Switch and Data in 2003. Today the facility is operated by Equinix as its SV8 data center.

MAE-East, the first Internet exchange, was launched at 1919 Gallows Road in Vienna, Virginia, in a cinder-block room in an underground parking garage. The exchange moved to Ashburn in Loudoun County in the late 1990s, and the site was closed down in 2009.

Equinix’s nearby DC2 facility at 21715 Filigree Court – built in 2000 and part of a 12-building campus – took the mantle and today hosts more than 200 networks. Today 1919 Gallows is a Cogent office.

Many of the other major interconnection hubs in the US – One Wilshire in Los Angeles, California; 350 East Cermak in Chicago, Illinois; and NAP of the Americas in Miami, Florida – also date back to the early 2000s.

“The challenge with the carrier hotels is the fact that they're not purpose-built to be data centers, generally speaking,” says Netrality’s Marshall. “They're there because they happen to be on the intersection of Main and Main for Metro and long haul fiber, and these carriers knocked on the door and said, ‘Can we come into your building and put some gear in there.’ And that's often how they evolved.

“These buildings weren't custom-built to be data centers, and therefore they are limited in their ability to accommodate some of these high-density uses, especially when they're on a larger scale.”

Sander Gjoka, VP of construction at HudsonIX, notes buildings like 60 Hudson – where his company operates several floors – present regular challenges.

“There's some great things about the building, but there's also some challenging things about it, one of them being ceiling heights,” he says. “From slab to slab is about 14ft. Once you put the raised floor in and you’ve got beams and all that stuff in the way, heights get very tight.

“And also just because it's a brick building, the floor loading can be challenging from a structural perspective. We invest a lot of money into steel and spent millions of dollars reinforcing it.”

Demand for fiber conduit is so high in 60 Hudson that the building has decommissioned two elevator shafts in order to feed in connectivity capabilities.

The fact that these facilities are usually located in inner city metros also means a lot of red tape and bureaucratic processes to get things done.

“What I envy [with new builds] is that from shovel to turning up a server is six months,” says HudsonIX CEO Tom Brown. “I wish we would operate like that in the city.”

The ongoing renaissance of the carrier hotel

Because of their age, it is rare for new space and power to become available at some of the most sought-after carrier hotels. When it does, it is often snapped up quickly. But where most colocation data centers will be full of appliances of every variety, carrier hotel customers are increasingly focused on critical connectivity hardware.

“You don't see a lot of racks of storage systems or compute servers in these carrier hotel locations; there are predominantly Ciena gear and Infinera gear,” says DataBank’s Martynek, whose company operates out of some of the biggest carrier hotels in the country and operates several in tier-two markets.

“In the 1990s and early 2000s, you had the networks in these locations, but you also had customers putting in regular compute and storage systems. But over time all those non-critical workloads have migrated out of these buildings, and more and more network workloads have migrated into these buildings.”


And whether that compute and storage is going to the cloud, on-premise, or colo, data still needs to travel from A to B (and C and beyond), almost always traveling through these carrier hotels.

That importance means space remains at a premium in these sites. HudsonIX – founded in 2011 as DataGryd and recently renamed after being acquired by investment firm Cordiant – is doubling down on its presence in the building.

The company currently occupies around 170,000 sq ft (15,800 sqm) in the building –including one suite leased to Digital Realty – and recently took up an option on an additional 120,000 sq ft of data center space and 15MW of capacity that will bring its presence at 60 Hudson to four floors.

“A little over 70 percent of all traffic in the Northeast Corridor – going from Boston all the way down to Washington, DC – comes through this building in one shape or another,” says HudsonIX’s Gjoka. “90 percent of this building is communications. This is the center of the wheel in New York; the 111 8th Avenues, the 85 10ths, the 32 Avenue of the Americas, they are all spokes off of this building.”

The company is currently building out five 1MW data halls – known as MegaSuites – on the sixth floor. The first hall launched in 2021 and work on the next hall is due to begin in the summer for a Q4 completion date. The vacated floors were previously office space occupied by the New York Departments of Buildings and Correction; HudsonIX’s Brown describes the floors as a “clean slate” that gives the company a rare chance to build new data center space without the need for a retrofit of existing infrastructure.

2022 saw Equinix surprisingly announce its NY8 data center in 60 Hudson was to be closed, its customers migrated, and the company exit the iconic building. Equinix’s lease at the site – where it offered around 10,073 sq ft (936 sqm) of colocation space – ended around September 2022.

Equinix’s stated reason for the exit – alongside several other facilities including the 56 Marietta carrier hotel in Atlanta, Georgia – was: “In some cases, the leased properties are in facilities that may not meet the future operational, expansion, or sustainability needs of our customers or our corporate standards.”

The company retains a presence in NYC close to 60 Hudson at 111 8th Avenue as its NY9 facility. In response to DCD’s requests for comment, Equinix described carrier hotels as “strategic assets” and an “important part of data center strategies” to support connectivity needs.

When asked if leaving 60 Hudson and 56 Marietta negatively impacts its customers, an Equinix company spokesperson said: “On the contrary, this decision shows we are very in tune with our business, our customers’ needs, and how the marketplace is evolving.

“We believe our customers at these sites will be better served in other IBXs in each metro where they can take full advantage of our rich ecosystems. This also will allow us to reallocate and sharpen our investments to other facilities.”

Beyond its given reason, many of the people DCD spoke to said the company likely thought it could offer better differentiation by using its large New Jersey campus in Secaucus, rather than playing second fiddle as a smaller tenant in 60 Hudson.

The space was quickly snapped up by local colo provider NYI – which already has a presence in the building – in partnership with QTD Systems, led by former Telx CTO and DataGryd/HudsonIX CEO, Peter Feldman.

“There are people that will show you gorgeous, perfect meet-me rooms that were built to a high spec fairly recently,” NYI COO Phillip Koblence tells DCD. “But where the actual traffic is going, most of it is happening in places that exist because they have always existed, in facilities people have used over the course of the last 20 years and still continue to use.”

NYI already operated around 21,000 sq ft (1,950 sqm) of space in 60 Hudson after acquiring Sirius Telecommunications’ assets at the iconic building in 2018. The company is also looking to bring the 75 Broad Street carrier hotel back into the fold after the facility came close to closing down entirely [see page 18].

“We're trying to make the legacy sites cool again,” Koblence jokes.

The role of carrier hotels in 2023: The broker between cloud and Edge

In the realm of technology, legacy is often a dirty word. Instead of its common use to suggest lasting impact, it implies old, inefficient, outdated, verging on obsolete.

In some ways, carrier hotels represent both; they are often filled with old equipment and operate at low densities, with PUEs that pale in comparison to today’s most-efficient facilities. They are often difficult to update due to being in heavily-regulated inner-city metros.

Given these limitations, one could question the value of these historic oddities in the era of Edge and cloud. Admittedly, most of these issues can be fixed with enough time, money, and determination – HudsonIX has spent large amounts of money reinforcing floors and upgrading the building’s power infrastructure in 60 Hudson – but is it worth spending so much on a relic?

“Carrier hotels are absolutely critical,” says HudsonIX’s Brown. “I have a simple definition for the Edge, and that's where the application meets the network. And you're gonna see more and more applications being driven to the aggregation points like legacy carrier hotels.”

Many others buy into the same theory. These are some of the most in-demand facilities in the industry, offering connectivity benefits near-impossible to replicate elsewhere because of their years in operation.

Carriers continue to use carrier hotels as a way to connect to each others’ networks, and will likely continue to do so for years. But as the rest of the data center and cloud sector continues to evolve, putting many static, latency-tolerant workloads in these buildings doesn’t make sense. And as these workloads move out, a new breed of low-latency workloads is moving in.

“Carrier hotels are not just limited to the present, they are at the forefront of shaping the future data center landscape,” says Digital Realty’s Hollands. “With the rise of emerging technologies such as Edge computing and 5G, carrier hotels will become even more instrumental, acting as pivotal points for Edge deployments, enabling low-latency processing and supporting the massive data influx generated by Internet of Things (IoT) devices, as well as artificial intelligence (AI).”

The likes of gaming, 5G, financial services, voice and video communications, CDNs and live streaming services, IoT/machine-to-machine, and other low-latency workloads are increasingly coming into carrier hotels for latency and load balancing.

As well as acting as established Edge data centers themselves in densely populated locations with little room for new facilities, carrier hotels can become an aggregation point for data en route from more remote facilities to the cloud.

For example, Dish – which has been rushing to reach a 70 percent 5G coverage milestone in 2023 - was present across a number of the carrier hotel facilities DCD visited in New York, with newly established cages and IT hardware. Dish is a major AWS customer, so much of that data aggregated in NYC is likely directed back to Amazon’s cloud facilities in Virginia.

The new availability of space in 60 Hudson is allowing some customers to overcome traditional power limitations in long-toothed carrier hotels. HudsonIX is building out space for a gaming client. Unusually for the building, the data hall in question – built on one of the floors vacated by the Department of Corrections – won’t be using raised floors in order to accommodate higher-density workloads.

“That original use case as an interconnected and centralized aggregation point for metro fiber routing data to long haul fiber to send to faraway places is still critical today,” says Netrality’s Marshall. “While this need continues to drive robust demand, more and more latency-sensitive information is being processed locally and sent right back to the region.

“We're not seeing the workloads of the past go away, but we're seeing the new workloads join. Before, the carrier hotel was aggregating data and sending it off to where it needed to go. Now, with these latency-sensitive use cases, more data is being stored and processed in the carrier hotel, and then sent right back to the local end users that are sometimes just machines needing instructions.”

Is the market hospitable to new carrier hotels?

Demand for these kinds of interconnect-rich facilities remains high, but availability is often low, and finding the right locations to establish new facilities with the same offerings can be difficult. The fiber overbuilds of the dotcom boom are long over, and in today’s capex-cautious world, finding new sites with the kind of fiber-rich ecosystems carrier hotels have just isn’t as feasible as it once was.

Difficult though it may be to create that network gravity, companies do still regularly pop up hoping to create new carrier hotels, often in secondary markets where there is potentially more opportunity. But efforts like these can take years to come to fruition, even in a sector used to long timelines.

“It's a double-edged sword,” says Ray Sidler, CEO and co-founder of DataVerge, a data center operator in Brooklyn that has built up a network of 30 carriers over the course of 20 years. “How do you get the clients, and how do you get the carriers? The clients don't come if the carriers aren't there, and the carriers don't come if the clients aren't there. It takes forever.”

In Kansas City, Netrality has built up its 1102 Grand facility from 30 networks when it took over the site around a decade ago to more than 140 today. Known as the Bryant Building and built in the early 1930s, the 26-story property offers 7MW across 156,120 sq ft.

Netrality’s Marshall adds: “I think that it's nearly impossible to create a carrier hotel from scratch. It’s really difficult to get a critical mass of customers to pick up and move to a new place.”

The company is now aiming to supplement its existing carrier hotel in Kansas City with a directly-connected facility that can offer a more modern home to customers.

The goal, he says, is to meet the high-density demands of customers, but also their sustainability requirements – including waterless cooling. As the new facility is located 10 miles away from 1102 Grand, it allows for active replication between the two facilities and offers a new location for customers that want the connectivity but can tolerate an extra millisecond of latency.

“We took a building that was partially office and partially warehouse and we're converting it into a data center that we are going to connect via a fiber optic umbilical cord to our 1102 Grand in Downtown Kansas City.”

“This new facility is capable of much higher density power, it’s got higher ceilings, and we can put in the most modern power distribution and cooling systems, as well as waterless cooling.”

While existing companies are always eyeing new opportunities, new entrants are also trying to make a name for themselves, both in established markets and new ones. Last year Polaris Group-owned Spencer Building Carrier Hotel announced plans to develop a whole new carrier hotel in Vancouver, Canada adjacent to the city’s existing 555 West Hastings Street carrier hotel.


In November 2022, Detroit real estate firm Bedrock announced plans for a new carrier hotel in the Michigan city, at 615 West Lafayette, in partnership with digital infrastructure platform provider Raeden. Formerly the Detroit News Building, the six-story, 311,800 sq ft (29,000 sqm) 615 West Lafayette was built around 1915-1916. Detroit News left the building around 2014 and sold the site to Bedrock for an undisclosed sum.

Hunter Newby, an interconnection pioneer who co-founded Telx and Netrality, aims to develop a large number of new, small interconnection hubs across parts of America where connectivity is currently lacking. His Newby Ventures business is partnering with non-profit organization Connected Nation to build 125 Internet Exchange Points in 43 states and four US territories, including in 14 states which currently have no carrier-neutral connectivity point at all.

“The markets have coalesced around a small number of assets in these markets,” says DataBank’s Martynek. “The huge markets like New York, Chicago, and Dallas, tend to have three carrier hotels. In smaller markets like Pittsburgh or Cleveland, there might only be one or two.

“The Internet continues to grow, bandwidth continues to grow, and you still need more fiber terminations,” he adds. “But these buildings are old, and it's hard to bring new power and cooling into them.”

“So what you're seeing now is that interconnect fabric is starting to decentralize. You see it happening all over the country, where newer interconnect homes are emerging. If we want to serve customers in Minneapolis, we need to serve them out of Minneapolis, and not out of Chicago or out of Ashburn or out of California, which is what happens today.”

Martynek says things are changing.

Up to 90 percent of data in Minneapolis is currently being peered in Chicago, he says. In a decade, he suggests that most of that will be peered in Minneapolis: “It’s a change in the shape and gravity of the Internet.”

But while some traffic might drain from the major hubs to these secondary and tertiary markets, the continued global growth in data means it will be replaced ten-fold by even more data created in those existing carrier hotels’ local markets.

While many of these new carrier hotels might lack the history and legacy of a 60 Hudson or a One Wilshire, they will be no less important in their local areas than some of those historic buildings.

“Over the next 10 years, I think you'll see the incremental capacity that gets consumed on the Internet will be more pronounced in places like San Diego, in Minneapolis, in Kansas City.

“But it's additive; it's not at the expense of the existing carrier hotels, because ultimately these are the most efficient places to exchange traffic due to the unparalleled number of networks available in these buildings.” 

A competitive market that’s slow to grow

Though the likes of CoreSite, Equinix, and Digital Realty are the biggest players in the space, the data center market is highly competitive and constantly seeing new players come and go.

Likewise, the carrier hotel market is equally dispersed. While Equinix and Digital might own some of the most well-known facilities – and have a presence at many more – it’s not unusual to see highly interconnected buildings owned by property companies or other private owners.

As well as 21715 Filigree Court in Ashburn, Equinix owns the Dallas Infomart – acquiring it from ABS Real Estate for $800 million in 2018 – as well as NAP of the Americas in Florida, which it bought from Verizon in 2017.

Digital Realty owns 350 East Cermak, as well as 56 Marietta in Atlanta, Georgia (acquired in 2015 as part of Telx), and the Westin Building in Seattle (acquired in 2020).

While it operates multiple facilities across the country, CoreSite owns 12100 Sunrise Valley Drive in Reston, Virginia, which it acquired in 2008.

60 Hudson was bought by Stahl and Williams Equities in the 1980s. Williams Real Estate, which managed the building, was acquired by FirstService, which then later merged with Colliers. Today 60 Hudson is officially owned by 60 Hudson Street LLC and managed by Colliers.

Netrality owns a number of carrier hotels across the US, including 1301 Fannin Street in Houston, Texas; the Indy Telcom Center campus in Indianapolis, Indiana; and 1102 Grand in Kansas City, Missouri, which has access to some 140 networks.

Though it doesn’t own large carrier hotels, DataBank is present at most of the major sites across the US. The company claims to actually be present in more US carrier hotels than Equinix; 12 markets to nine.

Many of the notable carrier hotels in the US aren’t owned by large data center firms. GI Partners – which founded Digital Realty in the early 2000s – has owned One Wilshire since 2013, after acquiring it from Hines. 910 Telecom owns the Denver Gas & Electric Building at 910 15th in Denver, Colorado. Tower 55 in San Jose, California, has been owned by the Carlyle Group since 2000. The Globe Building carrier hotel in St. Louis, Missouri is privately owned by Steven Stone. The Markley Group owns One Summer Street in Boston, Massachusetts.

Competition is fierce when these buildings come onto the market, with carrier hotels sometimes commanding high premiums for what are often older and lower-density facilities.

“We are constantly looking to buy carrier hotels in new-to-us North American markets,” says Netrality’s Marshall. “We look for those facilities that are sitting on the intersection of Main and Main for long-haul and metro fiber and have the richest customer ecosystems.

“When we see one of those become available, we really focus very heavily to put our best foot forward so we can try to purchase those properties.”

In recent years, 1547 CSR & Harrison Street have acquired the Chase Tower carrier hotel in McAllen, Texas; the Pittock Block carrier hotel in Portland, Oregon; and the Wells Building carrier hotel in Milwaukee, Wisconsin.

IPI acquired the 1500 Champa carrier hotel in Denver, Colorado from the Morgan Reed Group in 2021 and launched an Edge-focused data center firm, RadiusDC, to offer ‘metro Edge’-focused services. DataBank is also present in the building.

2022 saw H5 acquire the 505 Marquette carrier hotel in Albuquerque, New Mexico.

Telehouse this year acquired three carrier hotels in Toronto, Canada from Allied Properties for $1 billion.


New York, New York! The NYC carrier hotel microcosm

The Big Apple looks to rebrand as an interconnection city

Most major metropolises in North America will have two, perhaps three major carrier hotels. Some tier two cities will have one such facility, while many more will have none at all.

New York, however, boasts more densely-networked carrier hotels than the total number of data centers in some other cities. Thanks to New York’s population density, the presence of a major global financial district, its location as an East Coast cable landing hub, and strong historical connections to AT&T and its Bell predecessors, the area is rife with communication infrastructure.

The NYC Trifecta

Today 60 Hudson Street and 32 Avenue of the Americas (32 AotA) are the twin centers of the connectivity ecosystem in NYC, though most would likely say 60 Hudson is the heart of the Internet in the city - and perhaps even the world.

Both located in Tribeca, the two Art Deco buildings were designed by Ralph Thomas Walker of Voorhees, Gmelin & Walker. Other notable buildings the firm designed for communications companies include the Barclay-Vesey Building at 140 West Street (originally for New York Telephone, and today home to its successor Verizon Communications); 101 Willoughby Street, also for New York Telephone; and the New Jersey Bell Headquarters Building at 540 Broad Street in Newark, New Jersey.


The 24-story 60 Hudson was built between 1928 and 1930 for use as telegraph company Western Union’s headquarters and designed to be ‘the largest telegraph building in the world’ and the ‘Telegraph Capitol of America.’ The company sold the building in the 1940s but continued to occupy the site until the 1980s, when it relocated to New Jersey, with the building becoming an AT&T telecoms hub.

Pneumatic tubes ran the length of the building to carry telegrams and other mail. They eventually became filled with copper, and later, fiber. Some joke that if you remove the bricks, the copper and fiber conduits could support the building. Today the facility has more than 100 carriers present.

The facility is now owned by Stahl-affiliated 60 Hudson Street LLC and managed by Colliers. HudsonIX, Digital Realty, NYI, and DataBank are among the building’s tenants.

Five hundred yards away, 32 Avenue of the Americas was built in phases from 1911 with the final construction work between 1929 and 1932. 32 AotA was built for AT&T and known as the AT&T Long Lines Building. The property spans more than one million sq ft; at one point, every Bell System trunk line in the Northeastern United States converged within the building.

The 27-story building remained under the telco’s ownership until the eve of the new millennium, with the privately held Rudin Management Company buying it in 1999 and still owning it today. The site has more than 50 carriers; colo tenants include Digital Realty and CoreSite.

A mile and a half north, 111 8th Avenue was for many years the third part of what was known as the NYC Trifecta. The 15-story structure, previously known as the Port Authority Building, spans 2.9 million square feet of floor space. However, today its status as a core data center connectivity hub is weakening.

Another Art Deco building completed in 1932, it was originally used as a terminal to transport goods by truck to and from railroad lines and shipping piers on the Hudson River. The Port Authority sold the building to Sylvan Lawrence Company-affiliated Realopco Inc. for $24 million in July 1973.

It was sold again in 1997 to Blackacre Capital Group and Taconic Investment Partners for $387 million and marketed to telecoms and Internet firms. After building up a sizeable roster of carriers, Google acquired the site in 2010 in a deal reported to be worth around $1.8 billion.

Today Digital Realty has space in the facility, gained through its 2015 acquisition of Telx. Equinix, DataBank, and Colocation America are also present in the building.

However, a number of firms have left the facility in the intervening years; at the time of its acquisition Google reportedly planned to “gobble up” other tenants’ space in the building.

“It’s not about the ‘carrier hotel’ space,” Google's SVP of Product Management, Jonathan Rosenberg, said of the acquisition in 2011. “We have 2,000 employees on-site... [and] it’s very difficult to find space in New York.”

One local industry insider told DCD the NYC market was “shrinking day by day as Google continues to add food trucks instead of data center space” in the building. Now-bankrupt Internap left the building in 2013 to relocate to New Jersey.

On its website, H5 DC says: “Google has shown more interest in constructing office and commercial space at 111 8th Avenue than in growing the interconnection community. Over time, 111 8th Avenue is expected by many experts to not serve any carrier-neutral data center requirements.

“We recommend that all prospective customers steer clear of 111 8th Avenue due to the lack of surety as a long-term data center location. Existing customers would be wise to have a contingency plan when/if Google management would like to convert data center space to an alternative commercial use.”

NYC’s ring of carrier hotels

In the years since Google's acquisition of 111 8th Avenue, a number of new facilities have popped up or expanded, hoping to take the building’s crown as the third major hub in New York.

Telehouse America acquired a 60,000-square-foot facility in 2011 that was previously a Lehman Brothers data center at 85 10th Avenue. The building itself, built on a former landfill in 1939 for Nabisco, was reportedly where the first Oreos were made. Level 3 had bought the building in 1998 and sold it to Somerset Partners in 2005, and it was sold again in 2007 to current owners Related Companies & Vornado Realty Trust. Like 111 8th Avenue, however, Google is a major office tenant of the property.

365 Data Centers – then 365 Main – announced in 2013 that it was to double its space at its 65 Broadway facility. Owned by the Chetrit Group and formerly the American Express Building, the site offers a 16,000 sq ft data center with ten carriers present. The building, constructed around 1917 and spanning 21 floors, housed a Switch & Data facility that 365 took over from Equinix in 2012.

Sabey has been operating 375 Pearl Street – also known as Intergate.Manhattan – since 2013. The building was completed in 1975 for the New York Telephone Company; Verizon sold it in 2007 to Taconic Partners, but its logo is still visible on the tower. Sabey acquired the property in 2011 from M&T Bank in lieu of foreclosure. Offering access to more than 15 network providers, around seven of the 32 floors are dedicated to data center space, with another four for generators, chillers, and other infrastructure.

H5 also operates a 240,000-square-foot data center in the city at 325 Hudson. Atlantic Metro, which was acquired by 365 DC in 2020, had operated a data center in the building since 2007. Global Cloud Xchange was also present in the building. The ten-story building, built in the 1950s as a manufacturing facility and converted to data center use around the year 2000, was acquired by DivcoWest in 2021 for $135 million, after which H5 moved in. The building has access to more than 40 fiber networks.

Many others have come and gone over the years. Atlantic Metro opened a 5,000 sq ft facility at 121 Varick Street in 2010 that is no longer in operation. Atlantic, alongside Data Center NYC Group, aimed to convert at least seven floors of the 12-story SoHo building into a data center hub in a $100 million phased build-out. ColoHouse-owned Steadfast Networks was also a tenant.

Telehouse opened a data center at 25 Broadway in 1997 that it has since exited. The site was built in 1919 for the Cunard White-Star Line shipping company; Telehouse offered 85,000 square feet across two floors.

Cogent still lists a data center at 25 Broadway on its website, but has exited 33 Whitehall. Datagram – acquired by Singlehop, which was then bought by Internap – had also been previously present at 33 Whitehall since 2004.

The Starrett–Lehigh Building, a 19-story building at 601 West 26th Street, was another property designed to be a port terminal when it opened in 1931. Broadview (acquired by Windstream), Lexent Metro Connect (acquired by Crown Castle’s Lightower), Level 3 and others previously operated data centers in the building.

Built in 1960 by Rudin, 80 Pine once housed a Global Crossing data center; the company was acquired by Level 3 in 2011. Global Crossing also had a facility at 110 East 59th Street – a modern glass structure from the late ‘60s.

Looming in the background, there is also 33 Thomas Street – a giant, windowless, 1970s brutalist AT&T building known by the NSA as Titanpointe. While it is nominally a telephone exchange building, it is known to be a US government surveillance hub. 811 Tenth Avenue is another ‘60s brutalist windowless AT&T switching building in the city.

75 Broad is reborn

For many years, 75 Broad Street was one of the more notable carrier hotel data centers in NYC. But like many others in the city, the building came close to losing all its data center tenants.

As with a number of the buildings mentioned in this piece, 75 Broad has telecoms pedigree. Built in 1929, it was the former headquarters of the International Telephone and Telegraph Company. During World War II, it served as a hub for communications with American submarines operating in the Atlantic Ocean. Like many other buildings in the city, it was converted to telecoms use in the early 2000s. At its peak, it boasted a healthy number of data center providers including Internap, FiberMedia/VxChnge, and Peer 1.

“This building has a long storied history as a carrier hotel that fell apart during Sandy because they really got clobbered,” says Phillip Koblence, COO at local data center firm NYI.

In 2012, Hurricane Sandy caused huge amounts of damage across the eastern seaboard. The Atlantic hurricane was one of the largest on record, and flooded a large part of downtown Manhattan, knocking out power across parts of the city for more than a week.

In terms of digital infrastructure, 75 Broad was one of the hardest hit sites in the city; the basement was flooded under more than 15 feet of water. Due to regulations enacted after 9/11, New York data centers keep their diesel fuel in the basement, and their generators on the roof. Floodwater damaged the diesel fuel pumps, leaving providers with no way to refuel generators on mezzanine floors.

Internap's data center at the building (known as LGA11) was knocked offline by the incident, and was only brought back online once a fuel truck arrived.

Peer 1 managed to stay online, but only through its now-legendary bucket brigade. Staff, along with customers including Squarespace, manually carried diesel fuel up 17 stories from the street to the rooftop generator. At first, they tried to carry 55-gallon diesel drums on hand trucks one flight at a time. Eventually, the diesel was transferred into smaller five-gallon barrels and carried upstairs in a chain for some sixty hours.

Other facilities impacted and unable to stay online included Atlantic Metro’s 325 Hudson Street and 121 Varick Street, Datagram at 33 Whitehall, Verizon at 140 West Street, and Internap and Equinix at 111 8th Avenue, which suffered generator failures.

In the wake of Sandy, 75 Broad’s decline wasn’t instant, but happened slowly over the years as customers and colo providers left the facility. The now-shuttered VxChnge – previously FiberMedia Group – was the last to leave. Many moved to New Jersey.

“New York's infrastructure, from the standpoint of people's feelings about how resilient it was, it took a significant hit,” says Koblence. “Anybody that was on the fence about moving to the cloud or whether this city was where you should deploy a significant amount of your production environment, their case was made to migrate out to other places.”

Real estate firm JEMB acquired the 34-story, 720,000-square-foot property in 1999. After all the operators left, the last remaining data center facility at 75 Broad ended up being taken over by building ownership.

In 2021, BSC announced it was selected to provide critical facility management for a data center formerly operated by FiberMedia Group LLC at 75 Broad.

According to Koblence, building ownership came to NYI and said it had a data center asset at 75 Broad it was running for some legacy customers.

“They said we have some interconnection that is incredibly difficult for those companies to reprovision or move because it's just been here for so long, is there a way for us to take that infrastructure and make it relevant again?”

As a result, NYI is now JEMB’s operating partner for 75 Broad in a deal focused on “re-establishing 75 Broad Street as an interconnection hub.” JEMB has reportedly invested more than $15 million in the building since 2012.

NYI will act as a strategic operating partner, managing the sales, business development, marketing, and customer success functions for digital infrastructure and interconnection activities at 75 Broad Street. As well as attracting enterprises, the goal is to lure more carriers to the building.

“A lot of the investment that JEMB made in this building – flood gates, hardened power infrastructure, etc – they made because of the acute impact of all of what happened during Sandy,” he says. “Those investments haven't been made in other buildings that weren't as impacted.”

“What never changed and was never decommissioned though is that fiber ecosystem that existed here. That's something that's really really difficult to recreate.”

Koblence says his aim is to make 75 Broad relevant again and lend his company’s data center credentials to the facility's sizeable interconnectivity assets.

“I don't know this building will ever be the place where you're going to build large data centers again, and I don't know that New York City is necessarily a place where you're going to build large, multi-megawatt data centers.

“But the fiber’s here, and it's inherently relevant because - for all of the compute infrastructure that has moved to the Midwest and Dallas and Ashburn - at the end of the day, the NFL cities are still the places where the eyeballs are. All that data has to exist somewhere and it's just not efficient for it to all happen at some aggregated site in Ashburn, which is why New York has evolved into an interconnection-focused city.”

NYI was founded in the mid-1990s, and co-founder Koblence has seen the NYC data center market change over the years as the company has moved through various facilities in the city.

The company started offering hosting, web development, and DSL services. It originally offered services out of 20 Exchange Place, another 1930s Art Deco building formerly known as the City Bank–Farmers Trust Building.

“We originally started just next to Penn Station on 29th and 7th and we moved downtown, put in some racks and ripped up the carpet,” says Koblence.

NYI then migrated to a 2MW data center at 100 Williams Street, a site Level 3 had been building out but decided not to use in the wake of 9/11. It remained there until 2020.

After the 2008 crash, the company launched another site, 999 Frontier in Bridgewater, New Jersey, which had belonged to payroll company ADP. The company sold the site in 2019 to 365 DC, which still operates it today.

NYI moved into 60 Hudson in 2018 after acquiring Sirius Telecommunications’ assets at the iconic building.

“When we acquired the former Sirius Telecom site at 60 Hudson Street, it really was our introduction into the carrier hotel space and interconnection.”

Today the company is also building out further in 60 Hudson after taking over space recently exited by Equinix.

Brooklyn and Long Island want to muscle in

Outside of the city, the wider New York area is still blessed with more carrier hotels than most.

West of the Hudson River in New Jersey, Equinix operates its campus in Secaucus, while the Tishman-owned 165 Halsey Street offers 80MW across 1.2 million sq ft and access to some 60 carriers.

South of the East River, Brooklyn is one of the most populous and densely populated areas in the country. Located at 882 3rd Avenue in Brooklyn’s Industry City, DataVerge operates what it says is the borough’s only carrier-neutral meet-me room and carrier hotel.

The facility currently occupies 50,000 sq ft across two floors of the building, with another 35,000 sq ft immediately available as well as multiple currently-empty floors potentially offering hundreds of thousands of square feet. The facility offers connections to around 30 networks.

“We have this fiber here and now all those carriers aren't backhauling back to the city from Brooklyn. They're backhauling it here,” says Ray Sidler, DataVerge CEO and co-founder.


Ruben Magurdumov, COO and another co-founder, adds: “Our goal is to grow the carrier presence as much as possible to really target all the carriers that are out there.”

DataVerge can trace its roots back to 2003 and the formation of Galaxy Visions Inc and later ColoGuard. The company briefly operated a data center at 470 Vanderbilt in Brooklyn. Around 2000, the Carlyle Group and Chase Capital had hoped to turn the 10-story former manufacturing building into a telecom hub known as the Atlantic Telecom Center, but by 2007 had given up on the plan.

The company opened the 882 3rd Avenue location around 2003. Industry City, formerly a port shipping and warehousing terminal owned by Bush Terminal Company, today is a 35-acre business hub comprising more than 400 large and small companies.

Owned by Belvedere Capital, Jamestown, and Angelo Gordon Co., Industry City comprises 16 buildings with rooftop and underground fiber all connecting to the DataVerge building. This gives DataVerge a healthy number of potential customers and a reason to lure in carriers.

“You've got media companies, you’ve got production, post-production, editing. This is the creative hub now,” says Sidler. “And they need connectivity.”

“We're getting a lot of the 5G providers, a lot of the Edge providers like the ZenFis. LinkNYC, all those little WiFi hotspots across New York – they backhaul it here, as well as to other data centers.”

While the company isn’t targeting hyperscalers, Industry City has a 50MW on-site substation from ConEd. DataVerge is the substation's biggest user, and reportedly has plenty of spare capacity – Sidler says some 45MW is currently available.

“Honestly, there's two things that we're not worried about - space and power,” says Jay Bedovoy, DataVerge’s chief technology officer.

“We always had a vision for this. It took 20 years to do, but we're up there with the big boys now,” adds Sidler.

Further east, on Long Island, Long Island Interconnect (LII) aims to be a local carrier hotel and offer companies a chance to connect to and from the nearby AT&T-owned Shirley cable landing station. Its location offers companies direct access to, or the option to bypass, the busy NYC market en route to Virginia.

The facility, located at 1025 Old Country Road in Westbury, currently spans 18,500 sq ft and offers around 800kW. Another 24,000 sq ft is available for the future. Some 16 network providers are present at the facility, along with access to five subsea cables.

LII was founded around 2007 as the Long Island Data and Recovery Center before being acquired by Frankfurt-based Ancotel in 2010. Ancotel was then acquired by Equinix in 2012, but the Long Island facility was separately spun out and renamed 1025Connect (and then LII in 2022).

Today LII is owned by Ancotel USA, LLC and 1025 Old Country Road is owned by 1025 II, LLC and managed by Steele Harbour PM.

“Nowadays there's more and more investment going to be going into carrier hotels,” says Dash Dalipi, operations manager at Long Island Interconnect. “The idea [here] is to grow and create a link between the little facilities to cover a broader span of the marketplace.”

Dalipi says that in comparison to the larger providers, LII is keen to build on its local expertise and customer service.

“A lot of the small businesses on Long Island want to be able to come close to their servers,” he says. “We provide that personal attention. When you go into some of these other larger facilities you get that 1-800 number - but they don't even know who you are, don't know your name.”

Like JEMB at 75 Broad, LII has partnered with NYI to broaden its offerings. NYI’s Koblence says Long Island gives customers the ability to bypass New York City if desired, but be close enough to benefit from the NYC ecosystem:

“Some people don't want to be in New York City in order to get down to Ashburn. So if you are a European company that's landing in Westbury, having that ability, plus giving them the option to interconnect with the ecosystem at 60 Hudson Street, makes sense.”

NYI and Koblence’s play is to find underutilized but fiber-rich assets and offer interconnection services that provide established alternatives to some of the larger facilities.

“My thesis is that there's a strategy where, just as a consequence of them having existed for as long as they have, you can find the smaller, very well-interconnected, very fiber-rich sites where you can help people mitigate some of their exposure to interconnection costs.”

His play with the likes of LII is to create a centralized function for some of the back-end administrative requirements, such as accounting, while allowing those companies to operate in an independent, bespoke fashion.

As well as NYC and Long Island, the company offers services out of a data center in Seattle, Washington; a former Navigate facility in Chicago, Illinois, acquired in 2019; and out of the Digital Realty-owned 36 NE Second Street carrier hotel in Miami, Florida, in partnership with South Reach Networks.

“Local players are struggling to find a way to scale. But there is something special about working with like a local provider; that institutional knowledge in a particular market alongside the ability to be nimble and flexible is something that is difficult to achieve when you're thinking of a large REIT.

“The opportunity of finding these nimble sites, benefiting from the institutional knowledge that they have, and interconnecting them in a way that allows companies to start looking at that as an effective alternative bespoke interconnection solution - that is potentially disruptive.” 


The CEO who’s an engineer - and a Dame

Pure DC’s CEO Dawn Childs is not like the rest

You’d be excused for thinking that the world has enough hyperscale builder/operators.

Companies like Vantage, Stack, PDG, and the wholesale arms of Equinix and Digital Realty have been doing this for some time, and there’s a very small list of customers wanting multiple megawatts of capacity built to order.

Pure DC, whose website says it specializes in “designing, building and operating hyperscale data centers, anywhere in the world, to the highest industry standards,” runs the risk of sounding like more of the same. After all, there are only so many ways you can describe the job of building, powering, and cooling a large secure warehouse full of IT.

But one thing about Pure stands out: its CEO, Dame Dawn Childs.


Engineer and a Dame

She may well be the only female CEO in the wholesale data center niche. More importantly, she's one of the only serious engineers in a set of CEOs that mostly arrived in data centers from real estate - she has served in various settings including the Royal Air Force (RAF).

She is also surely unique among data center CEOs, in being a Dame of the British Empire.

For those not familiar with the British honors system, a Dame is the female equivalent to a male Knight, awarded for services to society in one form or another.

Many famous Dames are actors like Judi Dench. The small number of previous tech Dames includes Dame Wendy Hall, professor of computing at Southampton University, who co-founded the Web Science Institute alongside Sir Tim Berners-Lee, and Dame Stephanie “Steve” Shirley, who founded F International to create opportunities for female programmers.

Dame Dawn’s engineering strengths are clear. In the RAF she gained a mechanical engineering degree and Master's degrees in defense studies and defense administration. She was the RAF’s first Senior Engineering Officer.

After 23 years’ service, she went into civilian airspace, to become the first female head of engineering at a major international airport (Gatwick) in 2012.

From there she went to the entertainment company Merlin in 2016, establishing a central engineering function for rides and attractions at its theme parks, and then in 2019 went to the National Grid to be UK Change Director.

In 2021, she joined Pure DC, becoming CEO in August 2021.

Military and grid experience

“I was moving my way up through my career, and I was wanting to get a more board-level, strategic role,” she remembers. “And I had some great advice which was that, as an engineer, to be worthy of being on a board I would need to be in a technical industry.”

At the time at Merlin: “With 130 theme parks around the world it was a big crunchy job, but I was deemed to be able to be on the board because it was an entertainment industry. Engineers were more of a commodity.”

At the National Grid, engineers got more status, and even more so at Pure, she says.

Working on mission-critical systems in the RAF and Gatwick, and then on changing the UK’s National Grid, added up to exactly the right set of skills to bring to the data center industry, but that was a coincidence.

“If I was preparing myself for a career in the data industry, in a bizarre way, I have done it without knowing it,” she says. “When I joined the data center industry, I didn't even know what it was, to be candid. But upon discovering the data center industry, I realized that the journey that I've taken in my career has actually set me up for it by happenstance.

“I consider myself to be very lucky,” she says. “I have critical national infrastructure experience, I have 24/7 operational experience, and I have good technical experience in operations that are akin to data centers, so it has really set me up well.”

Aiming for hyperscalers

Like other players, Pure is backed by private equity. In Pure’s case, it’s from Oaktree Capital Management.

“We're very fortunate to have really great investors who are supportive,” says Childs.

Pure is based in London, and currently builds data centers in the UK and APAC: “We're still a growing business, so whilst we may not compare to some of the other businesses in the sector who've already grown, I think that our growth trajectory is exciting,” she says.

“I think what stands out from the rest is we're racing up the outside of the field,” she says. “Pure is a small startup, going back a few years, but it has scaled up pretty quickly.”

When she started in June 2021, there were a handful of people: “They had a couple of hyperscale customers and a couple of locations.

“I was brought in to set up the operational arm of the business, ahead of the first operational go-live at our data center in London. Then after probably eight months or so, I took over construction as well.”

Pure DC’s origins go back to Martin Lynch, founder of the London data center operator Infinity SDC, which was recently acquired by Green Mountain. In 2013, Lynch left Infinity to start Pure, remaining as a director until April 2023, having passed on the leadership.

Since 2016, Pure has been chaired by Simon Berrill, an investment banker with ten years at Macquarie Capital.

Pure’s first project was developing and managing a small Birmingham data center owned by GTP3, which opened with a 1.5MW data hall. After the project was completed, Pure handed it over and focused on bigger hyperscale projects.

“We do have other customers,” says Childs, “but 5MW is the smallest parcel of capacity that we would prefer to deal with. We build, and run the operations including power and cooling. We're effectively the landlords of the facility, giving our customers the service they require to operate their facilities within the data halls.”

Pure set out with Panattoni as a partner to build a three-story, 50MW data center on the site of MGM’s Elstree Studio in Borehamwood. Pure DC dropped that when it hit planning delays, shifting its attention (and the electricity supply it had ordered) to a second development it was working on, in Cricklewood, near Staples Corner.

Borehamwood became a logistics site, and Cricklewood opened as LON1 in 2022. Its first phases are operational and there are plans for 150MW of power there.

Meanwhile, Pure has been developing a Dublin site with the first building due to go live this year. Partners have bought adjacent land, so it could grow beyond the current plan for three buildings, Pure says.

“In May this year I stepped up as their CEO to take over all of the group's business,” says Childs.

Since then the company has opened its first international site, a 20MW hyperscale facility in Jakarta, Indonesia - its first in APAC.

That was a learning experience, she says: “Building out in Indonesia through Covid was challenging, it had a worse lockdown than we did here in the UK. Construction actually stopped four times over there. That was problematic.”

Building abroad with a UK headquarters “requires close focus, and it requires ensuring that you maintain the standards that are right for you as a business operating anywhere in the world. It can sometimes be quite challenging to enforce and maintain those standards in other markets. That made it slightly more difficult and less straightforward than building a data center in the UK.

“But there are other aspects that make it easier. Getting a good workforce out there is not easy, but once you've got them they're far more compliant than the UK might be. You must get some trusted partners out there. There's pros and cons of doing business anywhere - it's just being aware of what they are, making the most of the great things about doing business in those geographies, and guarding against the slightly more challenging aspects.”

The company has ambitions in the Gulf states, says Childs. “We have a team out in Abu Dhabi and a data center in construction there.”

Pure has some financial backing from the Emirate of Ras al Khaimah, and has built on spec in Abu Dhabi - where a 50MW building has reached weathertight status.

As to the future, she won’t make any concrete promises, but Pure is looking at expanding into the European mainland, as “getting some foothold there would be one of our next key steps as well as growing at scale where we already have footholds.”

Despite the number of players there, now could be a good time to enter mainland European data centers, as new efficiency regulations are being launched: “That gives us a slight advantage because we don't have entrenched ways of working - processes, methodologies, and design sets. We can very agilely just lean into whatever is required or demanded and hopefully get ahead of that game.”

But it won’t be done in a rush: “I think the important piece is that we will be very much customer-led. We don't intend to go off to somewhere completely new and make a stand and go ‘You should come here!’.”

Distinctions

Wholesale data centers are a big market, but it’s one where a small number of players chase an even smaller number of giant clients to offer a product which seems pretty standard - large powered warehouse buildings to hold IT equipment.

In that situation, competition over margins can be tight.

“In any industry, margins are important, particularly when there are multiple providers for a select few customers,” she agrees.

“And yet we're definitely not just running around trying to be the cheapest for everything because that's not going to be a good outcome for anybody. I saw it in the aviation sector: ground handlers were always cheapest, cheapest, cheapest. Then there was an issue, and they'd start all over again and go back up to the top.

“That's the sort of race to the bottom, which we are not participating in. We're participating in being an industry player because we're good at solving problems and good at working with our customers to get them where they need to be.”

On top of price, there’s efficiency, and delivering the required quality, she says. “These days the industry is constrained on many levels,” she says. “It isn't just about price or service. It's about the ability to be able to go somewhere where resources are constrained - being able to build data centers in locations that other people aren't able to.”

She takes the example of supply chains. Many complained about the difficulty of getting equipment like generators and chillers during Covid. But there are two sides to those complaints, she says: “It was an issue for everybody - but those talking about bottlenecks had their own supply chains and issues. I think we're getting to the other end of the supply chain issues now - and some of our competitors and suppliers have done a better job than others.”

She thinks a well-organized company is less likely to hit such an issue: “It's important to work better with your supply chain so they understand the demand. So you make sure we give them steady demand rather than randomly going, ‘Hey, I need another 20 generators tomorrow.’ That's never gonna work.”

The sector is unusual, she says, because: “It is critical national infrastructure, hugely important to every aspect of modernday life, and yet it's not regulated. It's not currently considered in many countries as critical national infrastructure, but actually, it completely is.”

The challenges include power and land available for large-scale building: “To achieve planning, or even to find power in this constrained environment is very important. Our challenges are multifaceted, and yet we have to be a competitive business.”

She looks at those facets like a dragonfly, she says, with “multiple lenses.”

One of those lenses is the staff: “I think the power of any business is people. We have gathered a fantastic group, and we have a beautiful blend of thoroughbred data center expertise, and brilliant expertise from other industries, such as critical national infrastructure industries, aviation, and the power sector.

“We've gathered a great team of experts, and that places us really well to solve some of these complex challenges. I think that's what gives us our edge.”

Sustainability is essential

Another lens to look through is sustainability: “In ensuring that the data centers you're building are sustainable - and becoming more sustainable - your designs have to move ahead of the times.”

Pure is “really thinking about what we can do to reduce the resources needed - not just building the data center to meet the capacity demand, but doing it in a way that is sustainable, from a delivery perspective and more importantly from the planet’s perspective, and from the consumer’s perspective.”

Customers “don't want to look at a data center and worry about the power it's using. They want to feel that it's being done in a sustainable way. You have renewable power, and you've minimized the amount of materials being used in the construction process.”

She goes on: “We're having to find better solutions, better ways of optimizing data centers. Not only have you got the heat load in the data center itself increasing, you've also got climate change, affecting the climate around the data center.”

How strong is that commitment to sustainability, we ask: Would Pure build a data center in a geography where no renewable energy is available?

“It's a bit of a hypothetical question, but it depends on the full suite of considerations, if we were able to find a renewable power source or build our own renewable power source.

“There's ways to get into any market and to ensure that you are sustainable, but it all depends on the nature of the customer requirement, the nature of the land parcel you find, and also the longevity of what you need.”

There could be ways to get low-carbon power: “You can always build your own power center, you can create your own microgrid. That would depend on the customer, the land you managed to acquire, and whether you could create your own renewable energy source,” she says.

“Small, modular nuclear is becoming a thing, isn't it? And that's potentially a low-carbon source of energy if contained in the right way with the right risk solutions around it.”

She’s not expecting nuclear-powered data centers any time soon, but she will be ready: “It depends on how far we get with the availability of product on the market, and figuring out the risk and operational analysis around it. But if you're anyone that's ex-military, we’ve had nuclear power in submarines forever. It's not like it's demanding or challenging. You just need to have the right operational frameworks around it.”

She also points out that decarbonizing is more than just using renewable energy: “By far the most significant thing in terms of making data centers carbon neutral is the embodied carbon.”

There are limits to how much you can offset all of the carbon and resources used in building a facility, “so minimizing input is going to be important.”

She’s skeptical about the high profile some providers give to decarbonizing their backup generation: “If you're in a very steady supply market for power, then your backup shouldn't be coming into play that frequently. It's not the most important thing to fixate on. There are other clever things that you could do to make things far less carbon intense.”

Pure already operates its backup with HVO (hydrogenated vegetable oil) instead of diesel, but didn’t trumpet it with a press release: “We're not on diesel generators, we're using far more sustainable fuels already for any backup power that we have. Those are the small steps to make - but figuring out that front end and getting the build more sustainable has got to be one of the biggest things that we need to focus on.”

Having said that, don’t expect a wooden data center from Pure immediately: “We have a team within Pure who focused on R&D, and they are very focused on figuring out the best, most sustainable types of materials that can be used, and there are loads of exciting advances. But oftentimes, they're not scalable yet. They might look enticing on their own and have the same structural qualities as steel, but they can't yet be manufactured on the scale that's needed.

“We have lots of materials under investigation and when the appropriate project comes along, where we can deploy these materials, then we absolutely will,” she says. “Ensuring that it has the right level of industrialization or availability to make it a meaningful substance to use in a data center is quite important. Oftentimes, it's not the lack of potential alternative choices out there. It's the lack of credible ones.”

Engineer-led

Being led by an engineer is another distinction: “Having an engineer as a CEO is completely acceptable because a lot of the problems that we now face are not just commercial problems, but technical problems.”

At a senior level, both commercial and technical ability is necessary, she says: “Can an engineer learn sufficient commercial things to make them a compelling leader in this sort of sector? Or can a commercial person learn sufficient technical things to enable them to navigate it? I think either works.”

On one level, she covers both bases, with a lot of commercial experience: “Some of the big pieces that slot into my current role are all around strategic decision making,” she says, and she picked that up at Gatwick Airport, National Grid, and Merlin.

“At the National Grid, I led the transformation to get the organization leaner, get it onto a better commercial footing for new regulations, and made a couple of billion pounds worth of savings. I think that sets you up to ask the right commercial questions.”

But commercial challenges aren’t the fundamental issue, she says: “The whole global financial landscape is changing as we speak, with interest rates and inflation through the roof, and margins squeezed everywhere. But actually, a lot of the bigger problems to solve are technical problems.”

One of those technical problems is cooling: “If you're in the right climate you can have some sort of free cooling, but sometimes it might be too hot outside to enable that. All of these problems need technical solutions. Literally, every single problem that you can think of that's constraining our sector at the moment, comes back to a technical solution of some kind. It is very rarely a commercial solution.”

That’s the place to start, she says: “Knowing how to ask the right questions of the technical experts that are gathered around you is a great skillset to have.”

Does she ever step back and ask whether the industry should be meeting this demand in the first place? Are all these data centers needed?

“That's a far, far bigger question,” she says. “And as an industry, I think it's probably up to us to offer some solutions for that too. Supporting hospitals in real-time, rather than just spreading spam videos, and reply-all emails that really don't need to be kept for 1,000 years.

“There are lots of ways that data demand could be constrained to allow supported growth,” she says. “At some point, regulators or industry will have to face the data junk element, and there should be a solution for that.

“However, as a data center provider, we can't solve the data requirements piece. What we can do is make building data centers far more sustainable. That's a stepping stone ahead of any regulation on good data versus bad data. We need to make data centers as sustainable as they possibly can be.”

A business for women

“Bizarrely, being a woman in the data center industry, I've had two sides to this experience,” she says. “When I first joined the sector, I found it hugely welcoming. As a female engineer, in the majority of other roles that I've been in, it hasn't really felt that way.”

Part of that has been down to groups like the iMasons IM Women, she says: “I think a couple of key groups like IM Women are doing a great job, in terms of connecting women together and making sure that we have a soft landing in the industry.

“However, as an industry, we're not really advanced enough in our diversity and inclusion. So from that perspective, it almost felt like we were quite a way behind places like the aviation sector, and other areas where I've worked. It just felt like we weren't advanced enough in our thinking around diversity and inclusion.”

One issue that affects women is “Impostor Syndrome,” where a person is held back by doubts about their abilities and the feeling that they are a fraud.

Childs has written widely about overcoming that issue: “Over my career, it's taken quite a long time to feel that I have the credibility that should make people listen to what I've got to say. That's been quite a long fight as a female in an engineering position, but I think I solved that about a decade ago. I felt I made that transition where I had sufficient experience and confidence to be happy with what I was saying, and feel that people should listen to me.”

She has been President of the UK’s Women's Engineering Society for five years, and says: “If you're a young woman coming into any industry, and particularly as an engineer, you look at people who are in more senior positions and you think, ‘Oh, my goodness, do they know what they're talking about? And I don't!’”

Even now, she says: “I feel like I have good experience now, good credibility, and good knowledge. But that doesn't mean that I come into work every day and think, ‘oh, yeah, I've got this. I know exactly what I'm doing.’ If ever you get to that stage, then you're probably not trying hard enough.”

To an extent, an awareness of your limitations is important: “You should never be too comfortable and complacent in a role. You should always be endeavoring, questioning, listening to other people. It's pointless surrounding yourself with teams of fantastic people and experts if the only voice you want to listen to is your own.”

That’s important, she says: “If you think you have the answers to everything, you're probably slightly delusional. You need everybody's mind in the room. Because if you don't have everybody's mind in the room, then you're not going to be solving the most complex problems. There's always going to be somebody else who's got a slightly different viewpoint. You should never ever be so opinionated and so confident that you're shutting yourself off to having that input from everybody.”

Making the industry visible

That is the reason why she sees broadening the data center workforce as a mission: “The first premise of inclusion is you need diversity in your business. It's to get all of those different experiences and minds in the room, to help you to solve the most complex problems, to help you to be innovative.

“The biggest thing for me around having an open and inclusive culture in your business is to make sure that you get all the minds in the room and to get all of those voices out there.”

Recruitment needs to change, she says: “I think there's something that happens in this industry, where the default position for going out to market to find new talent is to look for extra years of experience in data centers.

“If you're forever going out to market saying ‘you need to have been in data centers to come and work in data centers,’ then you're never going to get a more diverse workforce. You're only ever going to get an older version of the workforce you've already got. They might be more experienced, but they won't have any new insights. You're closing yourself out, you're being incredibly non-inclusive, and you will never be more diverse than you are now.”

Experienced data center professionals are still needed, of course, she says: “But you need to have some new ideas coming in as well, to get to those best outcomes. I'm always trying to find that nice balance in our workforce where we have lots of great data-centric expertise, but also some of that fresh thinking.”

As chair of the iMasons diversity and inclusion committee, she's looking for new ideas: “I'm working with colleagues from across the industry and globally, to think about how we improve diversity and inclusion across our industry, across the entire sector, to get that new talent in. There's the ever-burgeoning demand for data centers, so we need to have more talent coming into this industry. And if you're only going to use the talent that's already here, you're not going to meet that skills gap.”

She wants advertising that gets the language right, and increases knowledge and awareness of the data center sector. If an engineer as well-qualified as her was not aware of the data center industry, then there’s clearly some work to be done: “As an industry, we need to get better at that - and I'm going to help in that regard.”

If she asked school and college students what a data center is, she says, “more than half wouldn't be able to tell me.”

Pure has been holding open days at its LON1 facility: “We just had our first open day for schools to come to our site in London. We also had an armed forces open day. We had the veterans community in the UK come in and we did a learning session about what the data center sector is and how you get into it. Getting that educational piece out there so that the industry as a whole can benefit is so important.”

Of course, to get different people in, the sector will have to talk differently. “There was a study done by the Institute of Mechanical Engineering in the UK, called the Five Tribes Study, and it found that a lot of outreach activity to engage school kids in engineering was all landing with the people who were already interested in it - the kid who was taking the Hoover apart or building a Meccano car.

“To get more girls interested, you had to talk about the purpose. During the pandemic, girls have really seen how being a scientist, being technical, can solve problems like getting the ventilators designed.

"Those are life-changing pieces of engineering and it’s a life-changing technical skill set. All of a sudden, the purpose is far more evident, particularly for girls.”

A Data Dame

Being a Dame hasn’t changed things for her, she says: “I don't think being a Dame changes how people respond to you, but it's definitely welcome.”

She featured in King Charles’ first Honours List, in January 2023: “It was a complete surprise, and I still have no idea who put the nomination together, which I think is quite nice. I assumed it was my colleagues in the Women's Engineering Society, or maybe one of the other charities that I support, but it was none of those.”

She expands: “I think the bit that I was most pleased about was that it was for services to engineering, rather than services to diversity in engineering. It looks at my engineering career, rather than just my charity work. That was particularly pleasing.”

She collected the award from Anne the Princess Royal. Like many, she sees the Princess Royal as “hardworking, and a bit of a role model herself.”

They also had sport in common: Princess Anne was an Olympic horsewoman, while Dame Dawn once led the RAF team to victory over the Navy and the Army, in the Inter-Services Challenge Cup.

She’s amused by one thing about the award which many would see as sexist: “There is a bit of a quirk in the honor system. If I was a Sir, then my wife would be a Lady. But as I'm a dame, my husband doesn't get any title. I like to think that that's because if you're a guy and you get a knighthood, your partner has clearly had to support you in getting that knighthood.

“Whereas if you're a lady and you become a Dame, you've probably had to do it on your own.” 


The Network Edge Supplement

Where telcos meet data centers

AtlasEdge goes all in

> Why the company is betting everything on the Edge

Towers and the Edge

> Telco tower real estate could be reused for the Edge

Space comes for fiber

> Why SES hopes to use satellites to replace fiber


34. Why AtlasEdge is all in on the Edge And how it hopes to become the McDonald’s of the Edge data center world

38. Where is the Edge in 2023? A lot of talk, some action, but still some way to go

40. Towers and the Edge TowerCos have identified their role in pushing the Edge

42. Space comes for fiber: Can satellites offer data centers a new resiliency option?

SES pitches its new mPower fleet as a viable alternative to fiber for telcos and data center operators

Assessing the Network Edge

We're still hearing a lot of promises about the Edge and what it will bring to the data center industry as a whole.

Of course, there are potential use cases like those of AI, IoT, AR, and VR. But we have been hearing about them for years.

While there is still optimism in the industry, there's also an acceptance that the pace at which these Edge deployments have arrived has been slower than anticipated.

The stakes are high, though, with IDC tipping Edge computing investment to hit a staggering $208 billion this year alone.

Of those countries leading the Edge spend race, the US is unsurprisingly top of the list, accounting for 40 percent of the worldwide total.

Leveraging the Tower for Edge

Telecom towers form the basis of mobile infrastructure, enabling critical connectivity in cities, towns, and remote areas.

Without them we'd be up a certain creek without a paddle - and we wouldn't be able to call anyone for help.

But these sites aren't just for mobile towers.

These incredibly sought-after assets are providing TowerCos with additional revenue streams beyond just hosting mobile network operators.

Some of the biggest tower companies in the world are bringing mini data centers to these sites, as they eye an opportunity to corner the Edge.

But power constraints and business challenges remain.

Serving secondary markets

In the massive data center market of Europe, FLAP-D regions are well served.

There are other towns and cities in Europe that process vast amounts of data and require somewhere to store it.

That's the view of AtlasEdge, a company that was set up as a JV between Liberty Global and DigitalBridge.

It has identified the secondary and tertiary markets as perfect for its Edge data center solution. With a recent €725 million ($800m) credit facility, it's looking to serve even more markets.

We speak to the company about why it's gone all in on the Edge.

Satellite to rival fiber?

While the headlines focus on the hype of Low Earth Orbit (LEO) satellites, incumbent operators continue to improve and iterate on the larger satellites at higher orbits, increasingly offering gigabit and terabit capacity connections.

With its latest O3b constellation, European operator SES aims to combine the high throughput and resiliency of geostationary satellites with the low latency of LEO.

Can this new breed of satellite offer viable competition to fiber for telcos and data center operators?

Patience is key

After its slow start, the Edge appears to be finding its feet.

We hear from those that remain confident that the building blocks are in place to deliver on the promise of Edge soon.


Why AtlasEdge is all in on the Edge

And how it hopes to become the McDonald’s of the Edge data center world


The Edge remains an ill-defined and marketing-focused term used by companies that are often looking to repackage their existing products for a new audience.

But while many see it as a promising opportunity in the future, one company has staked its business on the Edge not just being the wave of the future, but on it being a very tangible reality today.

AtlasEdge was formed in 2021 by Liberty Global and Digital Bridge (then Digital Colony) to combine the two giants’ assets for a European Edge play.

The company now operates more than 100 Edge data centers across Europe in cities like Amsterdam, Barcelona, Berlin, Brussels, Copenhagen, Hamburg, London, Leeds, Madrid, Manchester, Milan, Paris, and Zurich.

Serving the underserved

To gain a better understanding of AtlasEdge’s approach to the Edge, DCD spoke to the company’s SVP for colocation, David Hall.

A veteran in this industry, Hall boasts over 15 years’ worth of experience in the carrier-neutral data center industry, most recently at Equinix.

A year on from that move, he explains that AtlasEdge is on a mission to serve the areas that are often underserved.

“We’re not particularly interested in the FLAP [Frankfurt, London, Amsterdam, and Paris] markets as these are covered by the legacy colos already. Instead, we're very much interested in serving those more underserved markets, such as Berlin, Hamburg, Stuttgart, Birmingham, and Leeds.”

The company provides an array of services to its customers, including traditional colocation, Edge colo, interconnection, cloud access, IXP access, and remote hands.

“Fundamentally, the Internet relies on data centers that are the equivalent of two-Michelin-star restaurants, and many of them are beautiful, magnificent temples to data centers,” he explains.

“So I guess that is where we would think of ourselves as similar to McDonald's. You can go anywhere in the world and you can get a Big Mac and you know that it is consistent and it's low cost.

“It's super reliable at two o'clock in the morning when you've just stepped off an airplane in Singapore, and you're hungry. There's always going to be a McDonald's there. And I think that speaks to the kind of similarity in our model. We want to provide a consistent, reliable product available at any time to our customers.”

A varied menu for its clients

Like fast food joints, AtlasEdge’s facilities can vary greatly in size, ranging from 2MW all the way up to 20MW and anything in between.

“Cities such as Liverpool and Bristol - and Brighton, where I live - there’s not a need for these huge 100MW data centers, but a smaller class of data center that we call aggregation hubs,” says Hall.

Hall adds that AtlasEdge has the ability to mix it up, depending on what their clients request of them.

“We assumed that these secondary and tertiary markets would require data centers on the order of two to six megawatts, or around that. But we're really seeing them exceeding that, particularly from the platform demand. So, an individual platform requirement in a secondary market such as Hamburg or Berlin is way in excess of two or three megawatts just for one platform.”

He explains that the company categorizes data centers like you would when you’re out shopping for a T-shirt. The smallest AtlasEdge data centers will offer 500kW, while 2MW would be medium, and 4-6MW would be large.

Since this initial approach, he says that AtlasEdge has added an XL size, noting that some projects the company has under development are for 20MW.

When it comes to who is using AtlasEdge’s data centers, Hall estimates that hyperscalers account for the highest usage at around 60 percent, followed by telcos and enterprises with 20 percent apiece.

“A lot of the demand has been from hyperscalers. When we deliver capacity to them it tends to create an ecosystem around that. Each of the hyperscalers has an on-ramp to their public cloud product.

“When we’re building out capacity for the hyperscalers, they’re very interested in building these out in Edge locations so they can get traffic onto their networks as quickly as possible.”

Leveraging its assets - Virgin Media

When the creation of AtlasEdge was first announced, DigitalBridge CEO Marc Ganzi said that the new company was “an opportunity for us to apply the entire Digital Colony value-add playbook, leveraging our operating expertise, strategic M&A capabilities, and access to institutional capital in partnership with a world-class organization like Liberty Global.”

The M&A chops soon came to the fore, after AtlasEdge acquired German data center firm Datacenter One (DC1) from Star Capital.

Although financial details of the deal were not revealed, it saw AtlasEdge expand its footprint in Germany, scooping up two data centers in Stuttgart, and one apiece in Dusseldorf and Leverkusen.

Meanwhile, in the UK, AtlasEdge is planning to develop two data centers in Manchester.

Each building will have four data halls over two floors, totaling 3,426 sqm (36,900 sq ft).

Significantly, the company will be constructing these data centers next to a Virgin Media-O2 facility, which accommodates a data hub for the telco and some colocation operations. Virgin Media-O2 is owned by Liberty.

These developments show just how significant the backing is for AtlasEdge, says Hall, who calls the company the lovechild of its two giant parents.

“This JV means we have fantastic financial backing, particularly from DigitalBridge, which is obviously a big investor in this space. But really importantly - and this is how I differentiate us from other data center operators who have suddenly become Edge data center operators in the last couple of years - we have real access to these networks, which really is the crux of what we mean by Edge.”

View on the Edge

During a DCD panel on the Edge earlier this year, AtlasEdge VP of Edge strategy Mark Cooper defined Edge as “an ever-evolving definition” depending on the day of the week, before noting the metaverse and what it can bring.

Hall is a bit more direct with his views on Edge. “Edge means where you can connect into the last mile networks. So wherever that place is, that is the Edge of the network. It's not the Edge of the map.”

He adds: “If you look at the landscape of so-called Edge providers, there’s been a lot that has popped up over the years, the last couple of years, certainly. And frankly, they're all businesses that existed before and have now put Edge in the name because it's the next cool, trendy thing,” joking that these people probably had AI in their name five years ago.

“The challenge for many of those providers is that they don't have the relationships with the networks, which AtlasEdge does through Liberty Global.”


Recent investment will bring further acquisitions

AtlasEdge isn’t showing signs of slowing down with its ambition to acquire and build more data centers.

In April, the business announced that it had secured a €725 million ($800m) credit facility, consisting of €525 million ($579m) in committed debt financing and a further €200 million ($221m) uncommitted accordion.

The financing also includes sustainability-linked targets focused on efficiency and renewable energy usage.

At the time, AtlasEdge said the facility provides it with ‘considerable firepower’ to execute further strategic M&A and build new sites throughout Europe’s key markets.

“They are not free to build, right?” adds Hall, who says the money will be invested very quickly. He notes that the investment shows that AtlasEdge is able to raise significant capital for future M&A opportunities and data center builds.

“Put simply we’ve had to finance the DC One deal and so part of the facility loan has gone towards that.

“On top of this, we’ve been investing heavily in data centers and have got some data centers in some great locations, but frankly, some have been unloved for at least 10 years, and to love data centers tends to mean money, so we’re investing significantly in improving these assets.”

Hall adds that the company is also investing in improving its PUE (Power Usage Effectiveness), with liquid cooling a possible avenue for efficiency gains.
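(As a reminder, PUE is total facility power divided by the power that actually reaches the IT equipment, so a perfectly efficient site would score 1.0; the closer to 1.0, the less energy is lost to cooling and power distribution.)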

He explains that this will help with cost and sustainability, and says AtlasEdge is working with three preferred liquid cooling vendors: ChillDyne, ZutaCore, and Submer, the last of which provides immersion cooling.

Identifying the potential of liquid cooling, Hall says he expects it to account for a third of the workload in the coming years.

“We very much want to give customers the chance to be able to make their own choices with respect to liquid cooling, because I think the markets are still in flux. But operationally, we also want to understand this stuff.

“We need to start from somewhere. So we chose three options, which I think gives customers good flexibility. But if your customers want to bring along their own cooling solution, that's fine, too. In terms of our planning, we're expecting about 30 percent of the workload to be liquid cooled in three years’ time.”

European push for now

“Europe is huge,” laughed Hall, when asked about expansion plans beyond the continent, noting that its work in spreading out will keep it occupied for some time.

“We’ve got so many projects ongoing, so I think realistically over the next three to four years, you'll see us build around half a gigawatt, maybe even a gigawatt of capacity.

“We could deliver that in Europe, right? So, for us, we're super focused on growing that European market for now.”

As for other markets such as Asia or North America, the company isn’t immediately looking to venture into these regions, but as Hall says, “never say never.”

“Wherever we are, it’s about being consistent with the solutions we’re providing. We’re really focused on making sure that customers can buy the Big Mac, wherever they are on the AtlasEdge platform.” 


Where is the Edge in 2023?

A lot of talk, some action, but still some way to go

“Edge computing has gone mainstream,” Dave McCarthy, research vice president of cloud and Edge infrastructure services at IDC, notes.

The analyst firm forecasts that Edge computing spending will hit $208 billion this year, a 13.1 percent jump over the previous year.

"The ability to distribute applications and data to field locations is a key element of most digital transformation initiatives,” he says. “As vendors extend existing feature sets and create new Edgespecific offerings, customers are accelerating their adoption plans."

The expectation is there for Edge to take off, but is the infrastructure in place to support this?

Where does Edge infrastructure stand currently?

Europe is a strong market for Edge, according to nLighten CTO Chad McCarthy, who points to the continent's well-established data center industry, notably the FLAP (Frankfurt, London, Amsterdam, and Paris) markets, plus Dublin in recent years.

“In Europe, we have a highly developed data center industry, and historically the FLAP locations were the main data center content depots,” he says in the DCD panel Infrastructure and the Edge. “That’s where a lot of compute still takes place now.

“The network infrastructure is evolving rapidly, so it’s possible to have a far better network service at the Edge, which used to be the big bottleneck, and that opportunity means that industrial companies, commercial companies, and also private users are able to use data more in remote locations.”

Capgemini chief technology & innovation officer Gunnar Menzel adds that the concept of Edge is nothing new.

“We always talk about Edge like it’s a new development, but it isn’t,” explained Menzel.

“We’re always thinking about where you could locate data, centralized or decentralized. Take retail outlets where you have many stores, you could also argue back in the ‘90s that you had Edge computing in those stores because we couldn't centralize. So it's not a new technology, per se, or new approach.”

He adds that the ability to connect data better is what has changed, noting faster fiber broadband, and the advancements around satellite communication.

As with many though, he wants Edge to be built out even further to help applications such as the metaverse come to life.

Edge taking longer than hoped

With the Edge slow to find its feet, Dataqube CEO Steve Pass argues that the network side is in a strong position to drive Edge opportunities.

“It seems to be definitely leading and is way ahead when it comes to Edge, bringing that capability closer to users through last-mile, high bandwidth,” said Pass, who says compute is lagging generally.

“So now we've got the capability for the Edge high-speed networks, the use cases are going to drive the need for compute services. I think that will drive a lot of innovation and adoption of new technology over the next few years. People who are going to deploy infrastructure at the Edge are going to have to be able to be very flexible as users start consuming at scale.”


5G’s role in developing the Edge

5G may not be the most common form of mobile connectivity at present, but it has grown rapidly since its 2019 launch.

Fast forward to 2023 and the technology is tipped to play a pivotal role in driving Edge applications.

“I think one of the key drivers for Edge is going to be the new private 5G spectrum that is coming across Europe, which is going to make it possible for a lot of enterprises and commercial centers to have their own private 5G networks,” says Open Nebula senior Edge solutions architect Alfonso Carrillo.

One such company that specializes in private 5G services is Kyndryl, which was created as a spin-off of IBM's infrastructure services business.

5G is critical to tapping into Edge applications, says Gretchen Tinnerman, Kyndryl SVP of US network and Edge.

“I think it's critically important because as our clients are investing more in the convergence of IT and OT (operational technology) at the Edge, they're looking for a private network, not a public one,” Tinnerman told DCD.

“Our clients know that these applications and these Edge devices are extremely critical to the infrastructure and require private network connectivity.”

The company has identified the industrial sector as one that can benefit massively from private networks, and recently worked with Dow Chemical at a lab plant in Freeport, Texas.

This was driven by Kyndryl’s partnership with Nokia, with the duo combining to deploy a private network with Edge computing.

Implementing a private network at the site has aided the modernization of the Dow plant, as the advanced connectivity has allegedly increased worker safety and enabled remote audio and video collaboration and real-time smart procedures.

Tinnerman says that Kyndryl’s been able to support the connected worker at Dow, thanks to the Edge.

“With Dow, it’s really about the connected worker and taking their challenges which may be a lack of connection and trying to increase connected worker solutions to improve use cases such as safety.

“Our private network has meant that the workforce has been able to access applications right in front of them at the Edge, helping them to improve stability, efficiencies, and safety.”

Dow has stated that within the first four months of its digital transformation, its Freeport plant has been able to reduce the time it takes workers to complete operational tasks, and has completed over 28,000 digital procedures.

Challenges around the Edge

Tinnerman thinks that the biggest challenge for the Edge right now is the slow adoption rate, which she blames on the mindset around change.

“We’ve not seen the acceleration of the adoption rate in this industry with our clients, that many of us integrators and providers thought we would see,” she said.

She adds that there’s a lack of skills, technology, and resources for traditional networks, meaning that some can’t support the low-latency applications that would thrive at the Edge.

Others point to the pace at which innovation around technology is evolving, noting that it can be difficult to keep up with the trends.

Positive Pass for Edge

For Dataqube’s Pass, this innovation is a good thing and will help the Edge sector to kick on, and will push industries to be more cost-efficient, energy-efficient, and sustainability-focused.

“The pace of innovation is going to drive the companies and individuals to move towards Edge,” says Pass.

“Ok, the pace has been slower than expected with Edge, but I think that's about to change at some point in the near future and it's just going to accelerate from there.”

He warns that businesses run the risk of being left behind if they don’t adopt Edge services, networks, and capabilities.

“Businesses are going to have to adopt Edge, otherwise they're going to be left behind. That could be as simple as managing a farm or a mine or something like that and doing it more cost-effectively to gain a competitive advantage over the competition. It’s an exciting time for the industry.”

AtlasEdge  The Network Edge Supplement | 39
 The Edge in 2023
>>CONTENTS

Towers and the Edge

TowerCos have identified their role in pushing the Edge

“Tower companies have been looking at Edge computing as a new market opportunity for several years now,” said Chris Antlitz, Technology Business Research principal analyst.

“There's actually been some startups that have some very interesting technology like Vapor IO that makes technology that's optimized for far Edge deployments, especially locations like a tower site. The towers are very interesting because they have access to power, fiber, and access to direct integration within the access network, especially on the mobile side.”

Vapor IO recently expanded its Kinetic Grid Edge computing platform to Europe through a partnership with Spanish tower company Cellnex Telecom.

Cellnex hosts the Kinetic Grid platform on its fiber optic network in Barcelona, connecting to Vapor's existing grid in the US. Other European cities are expected to follow, using Cellnex's network of data centers and tower ground space.

Vapor IO's Kinetic Grid platform, which has been rolled out into more than 30 US markets, links data centers to a platform that supports applications close to users and data sources, making distributed Edge applications faster and more reliable, and minimizing traffic toward the core of the network.

Linking Towers and the Edge

So what’s the link between Towers and Edge? Antlitz explained to DCD that this is where data center companies are playing a key role.

He notes that TowerCos have identified the role which data center companies can play in helping them take advantage of Edge computing opportunities.

“Some tower companies have made acquisitions of data center type companies, and part of the reason is that they need to understand this, as they’re not in the data center business,” said Antlitz.

“By acquiring assets, they can familiarize themselves with that, and they can prepare for the proliferation of Edge compute sites.”


Edge data centers located at the tower

With nearly 226,000 sites, American Tower Corporation (ATC) is one of the world’s largest TowerCos. But the company has begun to venture beyond just towers, and into the Edge.

In the spring of 2019, ATC acquired Colo Atl, marking its entry into the industry.

The business has six Edge data centers located strategically across the US, with two in Colorado, and one each in Florida, Georgia, Pennsylvania, and Texas.

Two years later, ATC snapped up carrier-neutral data center company CoreSite for $10.1 billion. CoreSite operates around 25 data centers across the US.

At the time, ATC said the deal would be “transformative” for its mobile Edge compute business, allowing it to establish a “converged communications and computing infrastructure offering with distributed points of presence across multiple Edge layers.”

“The combined company will be ideally positioned to address the growing need for convergence between mobile network providers, cloud service providers, and other digital platforms as 5G deployments emerge and evolve,” added CoreSite CEO Paul Szurek.

ATC isn’t the only tower company doing this either, with rivals SBA Communications and Crown Castle eyeing the market.

SBA recently launched an Edge data center at a tower location in the Dallas Fort Worth area of Texas.

Jeff Stoops, CEO at SBA, recently revealed that the company has “somewhere between 40 and 50” Edge sites in operation or development.

The company began exploring Edge computing modules at tower sites in 2018 in partnership with Packet before its acquisition by Equinix.

SBA’s portfolio of Edge infrastructure includes mini data centers with multiple cabinets for colocation and redundant power and systems for critical IT uses, with these modules varying in size.

Why are they doing this?

Telecom towers could be the ideal location for small-scale Edge nodes.

“Towers are the perfect location for micro data centers as real estate in cities is expensive and limited, hence it is very hard to build larger data centers here,” STL Partners' senior consultant Matt Bamforth says. “Plus, they are already equipped with connectivity and power – two critical factors to enable data centers.

“Tower companies (and mobile operators who still own their towers) can capitalize on this opportunity. Many next-generation applications will need compute to be brought to the last mile to achieve the necessary latency. Meanwhile, carriers are virtualizing their RAN, which will accelerate investment in data center-like facilities at towers and consolidate network equipment at these premises.”

He adds that IT infrastructure for Edge applications can be colocated at these premises, further optimizing investments and opening up additional revenue streams.

Effectively, the closer data centers and servers are to the point of transmission, the easier it is to accommodate technologies such as IoT, VR, and artificial intelligence without increasing network congestion and latency.

However, the Edge sites are usually limited in size due to the power constraints of a tower deployment - with the towers designed without an Edge data center in mind.

Carriers’ view on Edge

Verizon is also keen to get in on the Edge, inking preliminary 5G and Edge deals with Amazon Web Services, Google, and Microsoft.

Verizon has identified virtual RAN (vRAN) as a key part of its Edge strategy, with the operator aiming to deploy 20,000 vRAN sites by the end of 2025.

Virtual RAN effectively virtualizes the functions of a traditional RAN (Radio Access Network) and runs them on flexible and scalable cloud platforms instead.

The telco noted that the deployment is in response to customers’ varied latency and computing needs, while it also provides greater flexibility and agility in the introduction of new products and services. Verizon says that 5G use cases will rely heavily on the programmability of virtualized networks.

Towers are also where virtual RAN lives, notes Antlitz, who says this evolution of the radio network will be one of the key use cases for Edge computing.

Reality check on Edge

Whilst Antlitz does acknowledge the potential of Edge computing, he’s also quick to state that the sector remains in an “experimentation” phase that hasn’t taken off as the industry had expected.

“The Edge has not proliferated to the extent the industry thought it would maybe just two years ago, as the market has slowed significantly from original projections.”

There are a few reasons for this, he says, including the silicon that is used.

“Silicon needs to evolve to process workloads at the far Edge more efficiently. It's too power-hungry and there's not enough power. Power is a big limitation at the far Edge, and although it's available, it’s limited. The other thing is you need to build sites with form factors that are suited for far Edge deployments.”

The cost of these Edge deployments is also a factor, plus the uncertainty around the return on investment it brings. But he’s confident that Edge compute will be deployed at tens of thousands of sites in the next decade.

He expects the tower companies to leverage their assets to tap into an industry he says is worth tens of billions of dollars.

“The tower companies have identified Edge computing as a growth opportunity for them over many years. Edge will help these companies to add a new form of revenue, if they can leverage what assets they already have, along with their existing customers.” 


Space comes for fiber: Can satellites offer data centers a new resiliency option?

SES pitches its new mPower fleet as a viable alternative to fiber for telcos and data center operators


Satellites are but a small part of the connectivity landscape.

EuroConsult estimates that total global satellite capacity will reach 50Tbps by 2026 (currently a little less than 25Tbps), while total subsea cable capacity for 2026 is predicted to reach 8,750Tbps.
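On those projections, satellites would still carry well under one percent of the capacity of subsea fiber.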

Fiber will likely always rule the roost, but as satellite throughput capacity continues to increase, in-orbit connectivity is becoming an increasingly viable part of the real-time connectivity strategy for large companies and not just a limited fallback.

During a visit to SES’ HQ in Betzdorf, Luxembourg, the satellite company put on several demonstrations to show DCD that modern constellations can offer viable competition to fiber for telcos and data center operators.

MEO – offering the best of GEO and LEO

Founded as Société Européenne des Satellites by the Luxembourgish government in 1985, for most of its history SES has been a GEO (geostationary orbit) satellite player.

The company was Europe’s first private satellite operator, launching the Astra 1A satellite in 1988 to provide satellite TV for the likes of Sky. Today the company operates more than 50 GEO satellites providing TV and connectivity services.

It also became the first company to operate in multiple orbits after acquiring O3b Networks and taking over its Medium Earth Orbit (MEO) fleet in 2016.

Founded by Greg Wyler in 2007, O3b Networks aimed to launch a new constellation of MEO communication satellites around 8,000km above Earth.

O3b stood for "other three billion" - the three billion people that, at the time, did not have regular access to the Internet. Originally planned to launch in 2010, the first O3b satellite went up in 2013. The last batch went up in 2019 and today 20 of the 700kg Thales Alenia Space-made Ka-band machines are in orbit.

SES took a 30 percent stake in O3b in 2009 for $75 million, joining the likes of Google as an investor. It subsequently raised its stake in the company to 49.1 percent before acquiring the remainder of the firm for $730m in 2016.

Traditionally, GEO’s 36,000km orbit has offered reliably constant coverage – satellites follow the Earth from a fixed perspective – but at low bandwidths and high latency. Newer machines are now offering higher throughput into the terabits, but the latency is an unavoidable byproduct of the distance from Terra Firma.

LEO, at much lower altitudes of under 1,000km, has generally offered much lower latency, albeit with lower coverage of the planet. While their reduced size results in lower costs and rapid development, they aren’t built with the same redundancy and resiliency in mind while flying in a more crowded and dangerous orbit.

SES pitches MEO and its two O3b networks as offering high throughput with latency comparable to LEO, but combined with a field of view and level of resiliency and redundancy that has traditionally been associated with GEO orbits.

Existing connectivity customers for the first generation O3b fleet include AWS, Microsoft, BT, Orange, AT&T, Lumen, TIM, Vodacom, Telefonica, and Reliance Jio.

Deployed use cases include backhaul from remote cell sites in Brazil, and equipping telecoms disaster recovery trucks with antennae to provide communications and connectivity coverage in the wake of a major event that has damaged existing permanent infrastructure.

“You show up with two antennas and two small racks on a van, and within three hours you have a 5G network live helping first responders,” says Saba Wehbe, VP of service engineering & delivery at SES, who joined the company as part of the O3b acquisition.

O3b has the power

As satellite technology has improved, the next batch of MEO satellites is set to offer more bandwidth to more customers. SES’ second generation MEO fleet, O3b mPower, was first announced in 2017, with the first Boeing BSS-702X-built satellites launched by SpaceX in December 2022.

Each 1,700kg Ka-band satellite makes five orbits a day around the Earth. The main difference between the original O3b fleet and the mPower fleet, aside from increased throughput, is the number of beams each satellite can create and therefore the number of sites that can be connected at once.

Where the original O3b satellites featured just 12 steerable parabolic antennae offering 1.6Gbps per beam (800Mbps per direction), the mPower units are equipped with phased array antennas that can provide up to 5,000 beams per satellite, each delivering from 50Mbps up to 10Gbps.

Phased arrays are computer-controlled antennas that create a beam of radio waves that can be electronically steered without moving the antennas. According to the company, this means multiple companies can have real-time, uncontested high bandwidth, point-to-point connections between multiple sites.
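How that electronic steering works can be sketched in a few lines of code. This is purely illustrative - the frequency, element spacing, and function name below are assumptions for the example, not SES or Boeing flight software - but it shows how re-pointing a beam becomes a calculation rather than a mechanical movement.

```python
import numpy as np

# Minimal uniform-linear-array beam-steering sketch. Illustrative only:
# the frequency, spacing, and function name are assumptions for this
# example, not SES or Boeing flight software.

C = 3e8                   # speed of light, m/s
FREQ = 20e9               # assume a ~20GHz Ka-band downlink for illustration
WAVELENGTH = C / FREQ
SPACING = WAVELENGTH / 2  # half-wavelength element spacing, a common choice

def steering_phases(num_elements: int, theta_deg: float) -> np.ndarray:
    """Per-element phase offsets (radians) that point the beam at theta_deg."""
    n = np.arange(num_elements)
    return 2 * np.pi * n * SPACING * np.sin(np.radians(theta_deg)) / WAVELENGTH

# Re-pointing the beam is just recomputing these offsets - no moving parts.
print(np.degrees(steering_phases(8, 30.0)) % 360)
```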

“We can now cater the beam location, the beam configuration, and the amount of bandwidth in that beam to the actual requirements of the application and that specific customer,” says Wehbe, “which is a significant level of flexibility when compared to [the first generation] O3b.”

This means telecoms companies can backhaul multiple sites to a single aggregation point; enterprises can directly connect remote mining, manufacturing, or oil rig sites to the cloud; and data center operators can link multiple facilities as a backup to fiber routes.

“A telco could have 100 cell sites where they want to extend their 5G Network. They could connect all the various sites, land it at their own gateway at the central data center, service every single remote site out there,” says Wehbe. “They would control a fully private satellite-based network end-to-end, no public Internet involved. And that kind of thing is a very powerful capability for cellular operators, large enterprises, and governments.”

SES says MEO's high orbit provides a wide field of view, which means data can be transferred in a single hop via satellite to any other point on the continent at lower latencies than GEO, and in some cases even LEO.

The company claims mPower’s latency will be around 150 milliseconds, much lower than GEO’s (700 milliseconds). SES claims that while LEO can in theory offer as little as 50 milliseconds of latency, each satellite’s field of view means latency can build up quickly as signals are relayed over multiple satellites and terrestrial points to reach the final intended destination.
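Those figures line up with simple physics. Treating each link as a straight up-and-down hop at the speed of light - an idealized best case that ignores slant angles, gateway hops, and processing - the minimum round-trip times fall out of the altitudes alone:

```python
# Best-case round-trip propagation delay: ground -> satellite -> ground and
# back, i.e. four traversals of the orbital altitude at the speed of light.
# Real services add slant angles, gateway hops, and processing on top.

C_KM_PER_S = 299_792

ORBITS_KM = {
    "LEO (~550km)": 550,
    "MEO (O3b, ~8,000km)": 8_000,
    "GEO (~35,786km)": 35_786,
}

for name, altitude_km in ORBITS_KM.items():
    rtt_ms = 4 * altitude_km / C_KM_PER_S * 1_000
    print(f"{name}: ~{rtt_ms:.0f}ms minimum round trip")

# Prints roughly 7ms, 107ms, and 477ms respectively - which is why MEO can
# quote ~150ms in practice while GEO services end up in the hundreds.
```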

Point-to-point private 5G

So far, four of the 11 planned mPower satellites have launched. Four more are due to launch this year, with six satellites providing enough coverage to start global services in Q3 2023. Customers already signed up to mPower include Orange, Microsoft, and Reliance Jio.

During a demo at SES’ Betzdorf labs, the company showed a number of live use cases.

One, involving NTT, saw the companies create a live 5G network connected via satellite to the cloud. An NTT private 5G 'network in a box' provides coverage to an area (an oil rig, mining site, port, etc.); the network then communicates with an SES satellite, which can then connect that site either to another 5G site for a fully private connection that doesn’t touch the public Internet, or directly to an on-premise or cloud data center.

Part of the demo saw a real-time digital twin of a simulated oil rig, created with an on-premise Azure Stack deployment, beamed directly from Betzdorf via satellite to an Azure cloud region.

Announced in 2022, NTT’s network in a box is still currently in the proof of concept stage. The company has previously said the box can provide coverage across a 5,000-10,000 square foot area. Azure Stack is Microsoft’s on-premise appliance, offering a limited number of cloud services at the Edge that connect back to the parent cloud region.

“MEO is close enough to the Earth and we can push and pull significant bandwidth,” says Karl Horne, VP of telco/MNO data solutions at SES, “So it has a bandwidth symmetry capability that's important in cloud networking, Edge-to-core, and East-West networking.”

MEO is still high up enough that it has a reasonable footprint of coverage, so it has an advantage in that it can bypass an awful lot of terrestrial networking in a single hop.

Satellite terminal options vary, but even the smaller units can offer 500Mbps down and 200Mbps up, with either a built-in modem or a rack-based modem. Sites serving as an aggregation point for multiple satellite-connected locations might require larger gateways capable of gigabit-capacity throughput.

Satellites come for fiber to connect to the data center

While the company is clearly enthused by the possibilities of being a bridge between private 5G sites – Horne says there are maybe 1,000 such sites globally today but there could be 10,000 by 2026 – there’s a significant opportunity in the data center space.

A number of cloud providers including Amazon and Microsoft have struck deals to place ground stations at data centers around the world. To date, these have been more focused on aggregating data collected from satellites – for example Earth observation information – and delivering it straight to the cloud for processing. But Horne notes that the higher bandwidth of modern satellites offers a way for data centers to have point-to-point connectivity that doesn’t rely on fiber.

“When we talk about data center networking, we can cover so much of the Earth, the true end-to-end performance is not that different from what you would do with a fiber connection because you're not having to traverse a bunch of router networks.”

“We've had some of the cloud solution engineers say, ‘I can’t tell the difference when I'm running over your network versus fiber.’ It is very much like fiber in the sky.”

SES has deals in place with Microsoft and AWS; the latter is offering containerized data centers to the US government that are equipped with terminals to link back to a primary cloud region via satellite. But SES is also offering services directly to data center firms as a resiliency play.

“Nothing will ever replace fiber as the thing that delivers that capacity,” says Horne, “but fiber still can be vulnerable. We've done some projects with some of the cloud service providers on their own infrastructure and how we can harden that.

“As they start building out more regional or Edge data centers, it helps bring them online quicker, but be much more survivable, too.”

Whereas the first wave of data center customers are the cloud providers focused on ‘survivability of cloud services,’ Horne says the next wave of data center customers could be the colocation firms looking to ensure ‘more survivable networking services,’ but the company isn’t seeing the demand for that yet.

“The first actors are the full-service cloud service operators, because their underlying objective is continuity with the cloud services,” he says. “It’s very important for them to keep that highly available for their customers.”

SES has previously said that the O3b orbit and Ka-band frequencies have room for more than 100 satellites if business prospects justified the expansion. In terms of gateways for data centers – especially at cloud facilities – Horne says he would like to see more colocation of satellite infrastructure in densely-populated developing markets such as India.

MEO a carrier-grade LEO

SES claims its MEO services offer ‘carrier-grade’ resiliency, five-nines SLAs, and bandwidth symmetry, compared to ‘best effort’ services from LEO providers.

Some OneWeb resellers do actually offer SLAs to customers on the basis of a guaranteed GB delivery per day, with up to 99 percent availability measured at five-minute intervals. Starlink does not offer SLAs at this point.

The likes of Starlink, OneWeb, AST SpaceMobile, and Lynk are beginning to offer satellite direct-to-cell services to telcos. But rather than consumer-serving offerings, SES is focused on serving telcos for backhaul, enterprises for large remote deployments, and cloud providers in need of resiliency.

“There is a place for everybody and LEO has a role to play, but LEO and MEO are different niches,” says SES’ Wehbe. “We are not really interested in the end customers because we are working with a B2B type of commercial model.”

He says that while the 150Mbps down, 20Mbps up via LEO might be fine for consumers and some enterprise use cases, it isn’t what the telcos, governments, and cloud providers of the world demand.

“Carrier-grade is very high throughput, symmetric capabilities, guaranteed services 99.99999 percent of the time. That's not what OneWeb's selling and that's not what Starlink is selling.”

“We empower the Vodafones and the Claros of the world so, in turn, they can go and deploy connectivity.

“We allow our customers to have their own private virtual networks within their country. The traffic never leaves the country, and they have full control of the network and what they're doing with it, how they assign bandwidth. And this is the level of control you don't get from the LEOs.”

When asked if SES might invest in LEO satellites in the future, Wehbe said the company “might,” but reinforced that they are “different fields of connectivity with different requirements.”


Loughborough University drives IT with EcoStruxure™ Data Centre solutions

Modernised using the latest in resilient and energy-efficient technologies and harnessing the power of data analytics and predictive maintenance, Loughborough, a leading UK university, has futureproofed its data centres and distributed IT installations for assured operational continuity, bolstering its reputation for research and academic excellence.

Discover how Loughborough University partnered with Schneider Electric and its Elite Partner On365 for increased Reliability, Resiliency and more Sustainable operations. se.com/datacentres

The driving force behind F1

Why IT is essential to the need for speed and success in Formula 1

Formula 1 is perfectly named - because its success is entirely dependent on a series of formulas and the computers that solve them.

Many sports are now increasingly data-informed. Football teams do post-match analysis, shot-by-shot data is collected at Wimbledon, and stadiums themselves are kitted out with sensors. Formula 1 differentiates itself by being data-driven at every step of the process: in the factory where the cars are made, before the race, and on-track mid-lap.

Formula 1 hasn’t always been data-driven. The sport has been around since 1950, when the ‘formula’ was simply a set of rules that the cars and drivers had to follow. By the 1990s, teams began putting sensors on the cars to collect data and improve their race strategies. Today, the cars can have as many as 400 sensors on board, collecting vast quantities of data for analysis.

In as much as the race is a feat of athleticism and concentration by the drivers, it is also a feat of engineering, computational fluid dynamics, calculation, and scientific thinking. It is a demonstration of what we can achieve when we invest staggering amounts of money into a set goal.

However, in recent years, the Fédération Internationale de l’Automobile (FIA) has introduced new regulations at a rapid pace, including a cost cap which limits the amount of money teams can spend. Since then, extremely well-funded teams have had to pull back on investment, and IT choices have never been more important.

Enter the Cost Cap

Prior to the cost cap, F1 teams were limited only by the depth of their owners' pockets, as well as various other stringent requirements for the cars and testing facilities.

As a result, the leaderboard has remained mostly consistent throughout the years, with Mercedes, Ferrari, and Red Bull Racing battling it out for the top three spots. In 2019, Mercedes spent $484 million, Ferrari $463m, and Red Bull Racing $445m. The next closest was Renault, at $272m.

The cost cap limited all teams to $145m for 2021, $140m for 2022, and $135m for 2023-2025, and applies to anything that improves the performance of the car – from engineers' wages to materials used, to the IT setup powering the simulations and monitoring the cars on the track.

While still a staggering amount of money, this effectively cut the top teams’ allowance by two-thirds, and forced them to re-budget.

The cost cap is intended to level the playing field (or smooth out the race track), but so far it has not had that effect. Dominic Harlow, FIA’s Head of Technical Audit, agreed that this was a “valid observation,” but argued that financing isn’t necessarily on par with the value of expertise.

“In truth, the engineering of the F1 car is a process. It's something that is built up over time across the board, in terms of the cars, the teams, and the knowledge. It doesn't necessarily follow that if you change the amount of spending in one area then performance is going to be impacted straight away,” explained Harlow.

In other words, it will take time for the results to be reflected on the scoreboard.

While the financial limitations obviously reduce the computational power F1 companies can invest in, we have also seen that the teams who violate the spending limits are made to pay a cost in terms of technology.

In 2021, the new top dog and universal nemesis Red Bull Racing violated its cost cap by 1.6 percent. It was ultimately fined $7 million and had its aerodynamic testing allowance cut by 10 percent, a significant loss for the most compute-heavy element of the sport.

Aerodynamic testing and CFD (computational fluid dynamics) simulations are considered so valuable in the sport that allowances are scaled according to teams' ranking on the final scoreboard: first place is allowed 1,400 simulations during its eight-week testing period, while 10th place gets 2,300.

For Red Bull, who in 2021 finished in 2nd place, the 10 percent cut saw their simulations dip from 1,500 to 1,350. Still a significant amount of testing, but drastically lower than some of the other teams. Regardless, Red Bull still managed to claim the title in 2022 and remains at the top of the leaderboard for 2023 so far.

Home and away

IT is essential at every stage of the racing process. For McLaren Racing, this is split into three stages: design, build, and race.

The team starts designing its new cars a year before the season starts – and given the lengthy seasons this means that new designs are being worked on when the current car has just hit the track.

“A small group of people will start looking at next year's car,” said Ed Green, commercial head of technology at McLaren Racing. “A lot of our work is done inside computer-aided design (CAD). Our engineers will design parts for the new cars, and the design is run through a virtual wind tunnel which is based on computational fluid dynamics (CFD).”

According to Green, that process alone generates upward of 99 petabytes of data, all of which is processed on-premise due to CFD-related regulations.

The amount of time the wind tunnel simulations can be used is also limited by the FIA, meaning that the precision and efficiency of the team's computers are a key advantage.

From this point, the digitally-tested parts will be 3D printed at a small scale, and tested in a physical wind tunnel where performance is monitored by sensors, and the data on geometries and air pressures are measured. Provided these results are consistent with the CFD findings, manufacturing begins.

McLaren uses a hybrid approach to its IT, some of which is processed on Google Cloud but, according to Green, the team favors on-premise compute, in part for the reduced latency.

“We have around 300 sensors on the car, and over the course of a race weekend where we are driving for around five hours, that will generate a terabyte and a half of data. We analyze that at the track but we are also connected all the way back to HQ (or what we call ‘mission control’).”

Those sensors are gathering data on every element that could affect the outcome of the race - from tire pressure, race telemetry, speed, location on the track, fuel availability and flow, wind speed, heat, and much more.

“There’s a NASA-style environment where the team back home will also analyze the data and support with decision-making for the racer,” explained Green.

With so much of the infrastructure on-premise, and the value it has for the performance of the race teams, Green was unwilling to share any more information about McLaren’s compute capacity beyond saying the team has around 54 terabytes of memory.

Track-side computing

For the team to analyze and process race data on the trackside, a portable data center must be transported alongside the cars to every race.

“We have a portable data center that’s two racks inside a shock-mounted flight case, and it is one of the only things that travel alongside our cars via plane – everything else is done via sea freight,” said Green.

Those portable data centers have to be extremely flexible, as they face wildly different environments week by week. It could be the usually mild climate of Silverstone in the UK, the 40°C steam room of Singapore, or an abnormally dusty track in India, and McLaren has to set up its data center 23 times over the course of a season.

Green recalls one of his first race weekends, when he entered the data center to find a colleague hoovering the servers.

On the trackside, McLaren is using Dell servers and storage, with Cisco switching gear. In total, the team lugs around 140 terabytes of solid-state storage to every race for on-site analysis which is also relayed to the factories. Should the connection to “mission control” fail, the compute at the Edge can make or break the performance.


Aston Martin’s data shifted the scoreboard

One of the most notable shifts in recent F1 seasons is the sudden and drastic rise of Aston Martin’s team.

In both 2021 and 2022, Aston Martin’s drivers finished 7th on the leaderboard, with several races seeing them placed 10th or lower (even placing 20th on one occasion in 2021). But this year has been a true comeback – led by veteran racer and former world champion Fernando Alonso who has placed on the podium six times this year, bringing the team up to an overall 3rd place – and who doesn’t love an underdog?

The change started in 2020, when Canadian businessman Lawrence Stroll took ownership of Aston Martin’s race team. In 2021, the race team partnered with NetApp, and in 2022 hired a new CIO, Clare Lansley, who was previously the director of digital transformation at Jaguar Land Rover.

“When I joined the team, it was very clear that IT had been somewhat underinvested in, given the heritage of the team,” said Lansley. “Since Stroll bought it and obviously provided some serious investment, we are now in a position to transform the IT, and the very first start was to ensure that the infrastructure was performant, reliable, and secure. So the concept of implementing a data fabric was absolutely fundamental.”

But while this new investment brought with it new opportunities, the team still had to remain within the bounds of the sport’s budget. Freight costs around $500 per kilo transported, and given the near-weekly travel involved, this adds up quickly. According to Lansley, it was partially this that clinched NetApp the job.

Everywhere that the cars and the drivers go, a NetApp FlexPod follows.

“For these devices, the fact that they were going to reduce the freight weight and the actual footprint, that they were just smaller than the previous kit, was a massive boost. But they were also simpler to set up. When we arrive at the track, my team is given a concrete shell that is completely bare, so I don't want to run numerous cables. I want something that can effectively plug and play at speed,” explained Lansley.

The FlexPod solution reduced Aston Martin’s trackside compute from multiple racks and 10 to 15 individual pieces of equipment to just one pair of servers: one for processing and storage, and another for redundancy purposes.

During the race, sensors from the cars transmit data to the FlexPod via radio frequency. This then uses SnapMirror to take snapshots of the data, saving only the differences between each snapshot, which is then transmitted to the FlexPod at the Silverstone factory where the 50-odd engineers start testing and simulating different options for the rest of the race.
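NetApp has not detailed the trackside pipeline publicly, but the general idea of shipping only the blocks that changed between snapshots can be sketched as follows - a conceptual illustration with hypothetical block sizes and function names, not SnapMirror's actual implementation:

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical block size, for illustration only

def fingerprint(snapshot: bytes) -> list[str]:
    """Hash each fixed-size block so two snapshots can be compared cheaply."""
    return [
        hashlib.sha256(snapshot[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(snapshot), BLOCK_SIZE)
    ]

def delta(previous: bytes, current: bytes) -> dict[int, bytes]:
    """Return only the blocks that changed since the previous snapshot."""
    prev_fp, curr_fp = fingerprint(previous), fingerprint(current)
    changed = {}
    for idx, digest in enumerate(curr_fp):
        if idx >= len(prev_fp) or digest != prev_fp[idx]:
            start = idx * BLOCK_SIZE
            changed[idx] = current[start:start + BLOCK_SIZE]
    return changed

# Only the output of delta() needs to cross the link back to the factory,
# which is why mid-race transfers stay small.
```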

Once that data reaches mission control, simulations, real-time CFD (rCFD) and testing begin. But one notable limitation placed on this process by the FIA is that “the solver part or parts of all rCFDs must only be carried out using a compute resource that contains a set of homogeneous processing units,” and those homogeneous processing units must be CPU cores.

GPUs versus CPUs

FIA’s Dominic Harlow explained the decision by the FIA to solely allow CPU-based CFD.

“The decision to use CPUs was based on the discussions we had quite a while back with the teams, independent industry experts, and our own specialists on how to quantify the amount of compute used for a CFD simulation. We came up with a metric that is effectively based around a core hour,” said Harlow.


“For GPUs particularly, it's obviously an enormous number of cores potentially and quite difficult to define a core, similarly for Field Programmable Gate Arrays, or other types of processors that you might use for CFD.

"CPUs are by far the most common and it was the most practical implementation to regulate.”

While you can have CPUs running in tandem, the nature of CFD, like AI, makes it well suited to GPU-based processing. To understand this we need to dive deeper into the specific use cases of CPUs and GPUs. CPUs are designed for task parallelism, whereas a GPU is designed for data-parallelism, applying the same instruction or set of instructions to multiple data items.

This is why GPUs are central to video games - where the instruction set is the same for the character model, the virtual world elements and all the assets that the gamer will see on their screen.

This data-parallelism is also why GPUs are great for artificial intelligence models - after all, the same instruction set is applied to huge data sets.

CFD involves breaking data down into small blocks. In the case of an F1 car simulation, the air around the car, the ground beneath, and the car itself are converted into tiny polygons, each of which needs to be processed in parallel.

In a paper presented at the 2017 International Conference on Computational Science in Zurich, Switzerland, researchers found that GPUs could speed up a 2D simulation using the HSMAC and SIMPLE algorithms by 58x and 21x respectively with double precision, and 78x and 32x with single precision, compared to the sequential CPU version.
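To make the distinction concrete, the sketch below shows the kind of cell-by-cell update a CFD solver repeats millions of times - a toy Jacobi-style relaxation, not the HSMAC or SIMPLE algorithms from that paper. Every cell gets the same instruction, using only its neighbors' values, which is exactly the data-parallel shape of work GPUs are built for:

```python
import numpy as np

# Toy Jacobi-style relaxation: every interior cell becomes the average of
# its four neighbors. Not the HSMAC/SIMPLE solvers from the paper - just an
# illustration of why CFD is data-parallel: one instruction, applied
# identically to millions of cells at once.

def jacobi_step(grid: np.ndarray) -> np.ndarray:
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (
        grid[:-2, 1:-1] + grid[2:, 1:-1] +   # neighbors above and below
        grid[1:-1, :-2] + grid[1:-1, 2:]     # neighbors left and right
    )
    return new

# A (very small) field with a fixed boundary condition, relaxed repeatedly.
# On a GPU, each cell's update maps naturally onto its own thread.
field = np.zeros((256, 256))
field[0, :] = 1.0
for _ in range(100):
    field = jacobi_step(field)
```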

Harlow agreed that, as GPUs are steadily improving, the anti-GPU ruling could change in the future.

“Where the industry is going now we obviously need to watch very, very carefully because it seems that GPUs are reaching a greater level of maturity, particularly for the applications, and not just as accelerators, but actually as the main processor for the simulation. So watch this space.”

It is also this need for homogenization that prevents some F1 teams from processing on the cloud, due to the difficulty of quantifying the core hours used, and the stringent reporting requirements placed upon them.

Racing to the cloud

This has not prevented some F1 teams from relying heavily on cloud computing, however. Oracle Cloud Infrastructure (OCI) became Red Bull Racing’s title sponsor as part of a $500m deal in 2022, and the top-of-the-charts team has publicly stated that it uses Oracle for running Monte Carlo simulations as part of its race prep.

At the end of 2022, winner and Oracle Red Bull Racing Driver Max Verstappen said: “Due to all the simulations before the race even starts, it's very easy to adopt a different strategy during the race because everything is there, everything is prepared. I think we definitely had a strategy edge over other teams.”

Monte Carlo simulations use computer algorithms reliant on repeated random sampling. By exploring a vast variety of possibilities, that randomness can be used to solve deterministic problems. For Red Bull, this means applying a variety of surface variables, wind and weather speeds, possible car issues or choices – any factor that could impact the outcome of a race, and testing them all.
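As a rough illustration of the technique - a deliberately simplified sketch with made-up numbers, not Red Bull's or Oracle's actual models - a race-strategy Monte Carlo might randomize a handful of variables such as tire degradation, pit-stop loss, and the chance of a safety car, then count which strategy wins most often:

```python
import random

# Deliberately simplified race-strategy Monte Carlo with made-up numbers -
# not Red Bull's or Oracle's models. Compare a one-stop and a two-stop
# strategy over many randomized races and see which wins more often.

LAPS = 55
BASE_LAP = 92.0  # assumed baseline lap time in seconds

def race_time(stops: int) -> float:
    deg_per_lap = random.uniform(0.03, 0.08)  # tire degradation, s per lap
    pit_loss = random.gauss(21.0, 1.0)        # time lost per pit stop, s
    stint = LAPS / (stops + 1)
    # Degradation builds over each stint and resets at every stop.
    deg_time = (stops + 1) * deg_per_lap * stint * (stint - 1) / 2
    total = LAPS * BASE_LAP + deg_time + stops * pit_loss
    if random.random() < 0.3:                 # 30% chance of a safety car
        total -= random.uniform(5, 15)        # cheaper stop under safety car
    return total

def one_stop_win_rate(runs: int = 20_000) -> float:
    wins = sum(race_time(1) < race_time(2) for _ in range(runs))
    return wins / runs

print(f"One-stop wins {one_stop_win_rate():.0%} of simulated races")
```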

This was all done with cloud computing – the team used Oracle Container Engine for Kubernetes to containerize those simulation applications, and then run those models using high-performing Ampere Arm-based CPUs on OCI.

“The first workloads we moved to OCI were our Monte Carlo simulations for race strategy,” said Matt Cadieux, Red Bull Racing’s CIO, at Oracle CloudWorld in London. “The reason is this: our race strategy is mission-critical. It has a big influence on our race results. We were running on an obsolete on-prem cluster, and our models were growing, so we needed more compute capacity, and we also needed to do something that was very affordable in the era of cost caps.”

In this case, the flexibility of cloud computing and the ability of Red Bull to spin up CPUs for short periods of time to conduct these simulations may have won them the title.

Regardless of whether teams are taking an on-prem or cloud-first approach, the cost cap has proven to be an obstacle that has forced innovation on all fronts. With a sport so reliant upon technological innovation, scientific prowess and out-of-the-box thinking, there is an argument that this limitation only serves to strengthen the sport.

Forcing the teams to find more financially sustainable methods of achieving this will not only bolster competition in the long term, but it could also change the way the industry, and others, approach technological issues in constrained environments and situations. 

Speed racer  Issue 49 • July 2023 | 51
" When we arrive at the track, my team is given a concrete shell that is completely bare, so I don't want to run numerous cables"
>>CONTENTS

Big plans for the big chip company’s big supercomputer

Both Cerebras and Colovore have aggressive plans to expand

The first thing everyone mentions about Cerebras is its size.

Throughout multiple interviews with DCD, the company’s CEO tried to draw our focus to other potential benefits of the chip architecture and how the startup plans to build a sustainable artificial intelligence business.

And yet, inexorably, try as we might, we kept coming back to the size of its chips.

The world's largest chip, the Wafer Scale Engine 2, has 2.6 trillion transistors - significantly more than Nvidia's top-of-the-line H100 GPU, which clocks in at 80bn. Built on TSMC 7nm, the WSE-2 has 850,000 'AI optimized' cores, 40GB of on-chip SRAM memory, 20 petabytes per second of memory bandwidth, and 220 petabits per second of aggregate fabric bandwidth.


For those that can afford it, it can be bought as the Cerebras CS-2, a 15U box that also includes HPE’s SuperDome Flex and AMD CPUs for a peak sustained system power of 23kW. “It’s a million-and-a-half dollars, plus or minus,” CEO Andrew Feldman said.

But we didn’t fly to Santa Clara to see something priced in the paltry single-digit millions.

We’re here to see the super-powerful computer Cerebras has constructed from multiple CS-2 systems. We are here to see Andromeda.

With 16 CS-2s, 18,176 AMD Epyc Gen 3 cores, and some 13.5 million 'AI optimized' cores, Andromeda is one of the world's most powerful supercomputers - at least on single precision AI benchmarks, where it posts more than one exaflop of compute.

Cerebras offers Andromeda as a cloud service. Feldman says some customers will use the service to test out the unique architecture, before going ahead with a larger purchase (more on that later). Others will use it as an ongoing cloud platform in lieu of buying their own gear.

Andromeda is the beginning of an audacious plan to grab a slice of the exploding AI market, and its data center host also sees this as an opportunity.

The supercomputer sits in a colocation facility run by Colovore. After years as the niche operator of a single facility, Colovore sees the Cerebras system as a critical inflection point, where the high-density requirements of AI workloads will finally shift the data center industry over to liquid cooling.

Colovore hopes to spread across the US, and build the next generation of data centers.

Using a whole wafer

Before that, though, we must come back to size. Cerebras set out to build a product the size of an entire semiconductor wafer - theoretically big enough for today's challenges.

"When we began playing with the idea and when we started the company in 2016, we saw the need for a huge amount of compute," Feldman explained.

Semiconductor chips are made on circular wafers, 300mm (1ft) across. A complex chip can take up 800 square millimeters, and typically chipmakers get around 60 of these from a single wafer. Cerebras needed more than this.

"We thought it was going to be vastly more than what a single 800 square millimeter traditional chip could bring. And that meant you need to tie chips together. There were two ways to do that: An innovative approach, which was our approach, or you could go out and buy a fabric company and think about how to tie them together in very traditional ways."

Nvidia took the traditional route, buying Mellanox, and using its fabric switch to offer virtual mega chips by tying chips together: "These chips essentially all start on the wafer, and then they're cut up. And then you buy more equipment to tie them back together again. That's the elegance, if you keep Humpty Dumpty whole, you don't have to use glue and all this stuff to tie it back together again."

Cerebras hopes that its Humpty Dumpty chip is ready for a unique moment in IT hardware. The release of ChatGPT and the resulting generative AI hype wave represents a unique opportunity to define a new generation of hardware, beyond the traditional CPU and GPU markets.

That boom highlighted two things, however: First, that the new market is led by a closed-source company, OpenAI. And second, that even Cerebras' mega chip isn't big enough for what is to come.

On the first point, Feldman noted that "it's bad for other hardware vendors if there are a very small number of very powerful software vendors, bad for the ecosystem, bad for innovation, and bad for society."

Seeing opportunity, Cerebras offered Andromeda to the AI community and was able to quickly release its own generative models - with seven models ranging from 111 million parameters up to 13 billion (GPT-4 is rumored to have more than one trillion).

While the models aren't able to compete with those of OpenAI, they served a purpose - to show the community that Cerebras' hardware can be easy to work with, and to show that it can scale.

That’s the other size argument Cerebras makes. The company claims near-perfect linear scaling across multiple CS-2s.

Feldman argues that the large architecture means that it can fit all the parameters of a model in off-chip memory, and split the compute equally among various chips. "As a result, when we put down 16, or 32, or 64, for a customer, we divide the data by that number, send a fraction of the data to this chip, and each of its friends, average the results, and it takes about a 16th or a 32nd of the time.

“That's a characteristic of being able to do all the compute work on one chip - it’s one of the huge benefits of being big."
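A minimal sketch of that data-parallel idea, with an illustrative NumPy "gradient" function standing in for the real training step (the device count, batch, and model here are placeholders, not Cerebras APIs): split a batch across N accelerators, let each compute on its shard, then average the results.

```python
import numpy as np

def data_parallel_step(batch, n_devices, grad_fn):
    """Split one batch across n_devices, compute per-shard results,
    and average them - the scheme Feldman describes for multiple CS-2s."""
    shards = np.array_split(batch, n_devices)      # a fraction of the data per chip
    grads = [grad_fn(shard) for shard in shards]   # each device works independently
    return np.mean(grads, axis=0)                  # average the results

# Toy example: the "gradient" is just the mean of each shard's values.
batch = np.arange(32, dtype=float)
avg_grad = data_parallel_step(batch, n_devices=16, grad_fn=lambda s: s.mean())
print(avg_grad)
```

Because each device sees only 1/N of the data, the wall-clock time per step falls roughly in proportion to N, which is the near-linear scaling Cerebras claims.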

Benefits for the host

While the company has focused on being big, its data center host has always benefited from being small.

Colovore is a small operator, with its single facility barely able to fit Cerebras, Lambda Cloud, and others on the site. Launched in 2013, it carved out an equally small market in liquid cooled racks capable of up to 35kW.

"We don't really think liquid cooling is a niche anymore," CFO and co-founder Ben Coughlin said. "I think with the adoption of AI, this is becoming a lot more mainstream. And so I think we're pretty quickly moving out of sort of a small, defined market into something that's much bigger. We think there's a huge opportunity."

While others are still trying to define their liquid cooling strategy, and are getting used to new designs and processes, Colovore has a decade of experience.

"If we look at our fellow data center operators, it's going to be a little bit of a

Issue 49 • July 2023 | 53
The big chip 
>>CONTENTS

challenge for them to have to pivot or adapt," Coughlin said. "They have very standard designs that they use, and they've used quite successfully, but these are fundamentally different. It's not so easy to pivot from air to liquid."

concurred: "[The major colos] perceive the AI revolution, but they feel that this is not really the time to make that investment.

“Part of it is because they have all of this cost in older facilities, and if they admit to the fact that this niche has now become more and more mainstream they run the risk of the markets punishing them by saying that their data centers are obsolete.”

Harrison believes that the hyperscalers aren’t waiting for wholesalers to catch up, and are retrofitting their own facilities and skipping the middlemen.

"And so when the major players say that they don't see AI, they may not really be seeing it at all. In reality, because they're just being ignored."

A lot of larger colos also target customers with proven revenues, something the new crop of AI startups lack. "Therefore, many startups have difficulties trying to get involved in many of these facilities," Harrison said.

"The facilities require a long-term contract, large amounts of upfront commitment for capacity," he added. "A startup may not necessarily know exactly what that may be because they don't know their demand. So we allow companies to start with one cabinet, and they can ramp up one cabinet at a time."

This approach, alongside its cooling chops, has got Colovore in the mood for expanding as fast as one of the startups it hosts. The company is starting close to home, recently buying an adjacent building to convert into a 9MW data center. Then it will look further afield.

Coughlin explained: "We have plans to expand and add more capacity both in market and out of market. We’re doing a lot of planning with our customers to figure out where to go.

"It's our belief, fundamentally, that this high density, high transaction processing capacity needs to be in major metros, because that's where the data is being generated, managed, stored, and analyzed."

The company claims to have a standardized data center design for both relatively dry and very humid environments, making most US metros potential sites. "There are a number of underserved markets around the US that we think would need to have these facilities as well," Harrison added.

"Markets that come to mind would be like Detroit."

While other companies are still working out their liquid strategy, Coughlin believes that "in the near term we have an opportunity to grow rapidly and broaden our business out."

It also hopes to be able to stay ahead with the level of its cooling. "When we go direct liquid, you can get designs of up to 300 kilowatts in a single cabinet," Coughlin said.

For its base configuration, the company offers liquid cooling via rear door heat exchanger, which can support up to 50kW in a cabinet.

"We size the pipes on the front-end to be able to deliver the highest densities, but if a customer comes in and says I only need 10kW in a cabinet, we just don't provide as much water into that one cabinet. We can control the flow rate to every single cabinet," Coughlin said.

But for all the company's experience with liquid cooling, moving beyond its single building would be a huge leap. Perhaps, DCD suggested, the company could work with its investor Digital Realty?

“Moving to this next phase, we're very, very much open to partnering with Digital, they have footprints in all the markets that we would want to address.

"And we've talked to them informally about rolling out Colovore as their high density offering,” Coughlin admitted.


Andromeda meets Galaxy

As we talked, another informal discussion was nearing completion. Just a few weeks after the visit, Cerebras cloud customer G42 signed a major deal with the chip company, initially to build a huge new supercomputer at the Colovore facility.

The UAE-based AI business - which is controlled by the son of the founder of the state, and has been accused of spying on UAE citizens, dissidents, and foreign nationals - turned to Cerebras to build the Condor Galaxy supercomputer. Already deployed, it has 27 million AI compute cores, with two exaflops of single precision AI performance.

Within a few months that supercomputer will double in size. In the first half of 2024, two more will come online in different data centers - one in Austin, Texas, and another in Asheville, North Carolina - and then a further six are planned later in the year. In total, that's 36 exaflops of single precision performance and 489 million AI cores.

"These guys were looking to build a partnership with a company that could build, manage and operate supercomputers and could implement very large generative AI models, and had expertise in manipulating, cleaning, and managing huge datasets," Cerebras' Feldman said of the deal thought to be worth more than $100 million per system.

"There's a misconception that the only people that could build clusters this size are hyperscalers in the US. That's clearly wrong. They're being built all over the world. And there are companies that many people in the US haven't heard of that have a demand for hundreds of millions of dollars' worth of AI."

He added: "It's extraordinarily exciting, it's a new phase for the company. As a startup, why you dream big is for customers like this."

Whether it’s chips or dreams, any talk about Cerebras keeps coming back to size. 


The many lives of Evoque's data centers

After exiting markets and selling legacy telco sites, Evoque now says it’s ready to expand

Evoque's data centers have gone through changes before.

Most began at telco AT&T, while others came from SBC Communications before it merged with the telco giant. Then, in 2019, the company offloaded its portfolio of 31 data centers to Brookfield Infrastructure for $1.1 billion. Those sites were then repurposed for a broader retail data center business. Now they could be set to be converted and modernized yet again.

A new vision

When we spoke to Evoque's then-CEO Andy Stewart in 2020, he was keen to promote two selling points of Evoque - that it wouldn't compete for hyperscalers, and that it had a global presence.

Both of those points no longer hold true. So, to find out more, we headed to the company's data center in Ashburn, Virginia, to see what had changed.

The 164,500 square foot (15,300 sqm) data center is spread across two buildings. The oldest, A, dates back to 2000, while B came online in 2007, '08, and '09.

Built for a single telco, "the first challenge was to make these sites carrier diverse," the company's VP of strategy, Drew Leonard, explained. "I think we have an average of about 12 carriers, and we needed to build a little meet-me area for them to go in."

Another challenge was the UPS systems, which were being upgraded during our visit. "These facilities were overbuilt at the time," Leonard said. "But it's about getting them up to date. As we do the reconfiguration on the UPS we're getting more efficiencies, and we're able to eventually deliver more power whilst also saving tons of room."

During its modernization of the facility, the company moved from potable water as its primary cooling source to reclaimed water. To do this, it turned a disused storage space into a water treatment room to ensure it doesn't damage pipes. "Don't drink the water," Leonard urged.

The 10MW data center is by no means one of the largest in Loudoun County. Even with potential plans to convert adjacent parking lots into additional footprint - which Leonard reckons could boost the footprint by some 60 percent - it is still a relatively small presence in the world's data center capital.

"Our model is retail data center space," he said. "So lots of customers with smaller footprints - we're not doing single tenancy. [In the unique market of Virginia], we're really benefiting from the fact that we have available space. If you have space, you win."

But Loudoun does not follow the same laws of reality as elsewhere, and Evoque has begun to broaden its ambitions from solely retail colocation.

Hello hyperscale

"I think you're always gonna have the midsized enterprises and large enterprises that are going to need this type of arrangement," Leonard said. "But we understand the importance of the hyperscale businessand we've built a team to do that."

That marks a distinct pivot from the strategy first laid out when the company was formed, but was perhaps a decision signaled back in 2021, when the company acquired cloud services company Foghorn Consulting.

"We knew that customers are migrating to the cloud, and that represents a potential risk to our business of smaller footprints," Leonard said. "But being in the interconnectivity business, we can enable those connections to cloud providers and to local zones, and things like that."

That's the first prong of its cloud strategy. "They work with clients on that transition from data centers," Leonard continued. "They containerize everything and allow customers to put the application where it makes the most sense - whether it's in an Evoque data center, the cloud, or on-prem."

The second prong is to get involved in building those cloud data centers, putting it in competition with Equinix's xScale, Digital Realty, and a host of hyperscale developers.

"We've built a team to go out and do greenfield builds and expand," he said. "That means going into new markets, shoring up the ability to meet demand in existing markets, and being able to help our customers go through that migration from data center to cloud back to data center as well."

That requires "two different paths, one for the retail side, and one for the hyperscale side," with each given enough resources and focus. Evoque will have to carefully balance its desires for expansion without forgetting the needs of its retail business.

"Retail is our core business right now," Leonard contended. "The hyperscale business gives us the opportunity to get into new markets and support it with the retail side as well."

But for all the talk of expansion, it's worth noting that Evoque is no longer the global business its previous CEO was eager to highlight. In 2022, the company quietly removed references to overseas data centers from its website, later confirming to DCD that it was exiting some European data centers on July 31 this year.

"We purchased the portfolio with a global footprint, and what we're doing is optimizing our footprint," Leonard said. "We took the data centers that didn't make sense from a business perspective - they didn't have the ability for us to grow and were not as financially profitable for us - and we exited that market."

He noted that the company still has one facility in Redditch, UK, and another in Asia. "And then here in the United States, we've gone through a similar exercise of exiting, where they were more like telco PoPs [points of presence] than a data center, so it just didn't make sense for the brand going forward."

At the same time, Leonard contended, "while we sort of stepped back, that allowed us to focus on the facilities that really aligned to where we want to go with the business in the markets where we need to be." That has meant new investments, he said, "including Gallatin, just outside of Nashville. That comes with a lot of opportunity for growth there. And then we're also looking at a few other opportunities internationally and domestically, for expansions and new builds."

He added that the Evoque data center in Secaucus is due to expand, alongside discussions for the Virginia facility: "So we've got a good footprint."

With a number of new players entering the space, Leonard pointed to the deep wallet of Evoque's owner, Brookfield. "They're continuing to invest with us, and they don't need to raise money if we want to build a data center," he said. "Having that access to capital is much easier."

He demurred, however, on questions about Brookfield combining Evoque with other data center assets it owns - including Compass and Data4.

In the 2020 chat with Andy Stewart, Brookfield's large investments in renewable energy projects were also noted as a potential synergy point. But the data center company has yet to aggressively pursue a renewable strategy.

"We're not doing any PPAs at this point in time," Leonard said. "Everything's just straight off the utility - although in California we use Bloom fuel cell technology, and in Secaucus we have about 750 kilowatts that we're using there from solar."

Another way data center companies can be greener is simply by running hotter. Despite more than a decade of evidence that data centers could be operated at warmer temperatures, much of the industry over-cools. This year Equinix announced that it would begin to run its facilities warmer, but gave no specifics - because it requires the customers in its data centers to buy into the concept.

It's no different at Evoque. "We're trying to keep [the data halls] at temperatures that allow our customers to feel comfortable that their equipment's going to be safe," Leonard said. "We are slowly ramping that up, but it's about working with our customers and setting proper expectations, as well as educating."

Helping the company's discussions is a partnership with Vigilent, which uses sensors to dynamically match cooling output to heat load. "You can see temperature sensors on some of the racks," Leonard said, shouting over noisy fans.

"We put them at the air intake, which gives us a much more accurate temperature reading on what the customers' equipment is experiencing," he added. "All these are tied into controllers, which aggregate all of the temperature information and feed it into the Vigilent AI system, which then turns on/off or changes the fan speeds based on the temperature."

The system learns the facility's quirks, and can realize that "if that rack over there is showing higher temperatures it may make sense to turn on the CRAC unit way over in the corner, because it's learned that that’ll impact the airflow there better," Leonard claimed.
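A heavily simplified sketch of that kind of sensor-driven loop is below. The CRAC class and thresholds are hypothetical placeholders, not Vigilent's API, and a real system also learns which units influence which racks rather than reacting only to the hottest reading.

```python
class CracUnit:
    """Stand-in for a computer room air conditioner with a variable-speed fan."""
    def __init__(self, fan_speed=0.5):
        self.fan_speed = fan_speed   # 0.0 (off) to 1.0 (full speed)

def cooling_control_step(intake_temps_c, crac_units, setpoint_c=27.0, band_c=2.0):
    """One pass of a naive feedback loop: read rack intake temperatures
    and nudge every CRAC fan up or down around a setpoint."""
    hottest = max(intake_temps_c)
    for crac in crac_units:
        if hottest > setpoint_c + band_c:
            crac.fan_speed = min(1.0, crac.fan_speed + 0.1)   # too warm: ramp up
        elif hottest < setpoint_c - band_c:
            crac.fan_speed = max(0.0, crac.fan_speed - 0.1)   # headroom: save energy

cracs = [CracUnit(), CracUnit()]
cooling_control_step([24.5, 26.0, 31.2], cracs)
print([c.fan_speed for c in cracs])   # both units ramp up to 0.6
```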

As we visited, the Vigilent rollout across all of Evoque's facilities was just finishing, but "what we've seen is that we're running much fewer of the units on the floor all the time, sometimes by as much as two-thirds in some data centers.

“And, with the variable fan speeds, we can then modulate the speeds as opposed to turning on units at 100 percent. The overall savings that we're getting from the power reduction have been very good so far."

Those efficiency improvements, the company hopes, will be able to breathe yet more life into the facilities.

"We think that they're going to be around for a while," Leonard said, affectionately touching the wall. "That's why we're making the changes." 



What is green software?

How do you make a data center more efficient? When we asked Paul Calleja, director of research computing services at the University of Cambridge, we got a surprising answer:

“It’s all about the software,” he said. “The gains we get from hardware, in terms of output per megawatt, will be dwarfed by the gains we get from software.”

That might come as a shock to data center designers. When they consider the environmental footprint of a facility, they start with the energy used at the site.

They focus on the waste heat produced, and the cooling systems which remove it. In the IT equipment itself, they look for servers that perform more calculations per Watt, or storage that has a lower idle power consumption.

They sometimes (but too rarely) look at the embodied energy in the data center - the emissions created by the concrete it is built from, and the raw materials in the electrical components.

But the thing that is almost always ignored is the factor that caused the creation of the building in the first place, and drives every single action performed by the hardware.

All data center emissions are caused by software. Without software, there would be no data centers. If we want more efficient data centers, why do we start with the hardware when, as Calleja says, it is all about the software?

We have spent years looking at cooling systems. Now it’s time to make more efficient code

Why don’t we have green software?

“Going back to fundamentals, software doesn't emit any carbon by itself,” says entrepreneur and academic David Mytton.

“It's not like it's combusting or generating anything physical. The question is, what is the impact of the infrastructure the software is running on? And the first step is to try and improve how the infrastructure is behaving in terms of electricity, and the energy that's going into the data center.”

Regulators and corporate green leaders can specify the amount of power that a building can use, and the type of materials it is made of, and data center builders can demonstrate that they are working towards the best practice in these areas.

But beyond that, it’s down to software - and, as Mytton says, “there's been less focus on the characteristics and behavior of the software itself.”

“Operational engineers and hardware engineers have really been doing all the heavy lifting up until now,” says Anne Currie, an entrepreneur and developer who is writing a book on building green software, for the publisher O’Reilly. “Software engineers have all been ‘Lalala, that's their problem’.”

Efficiency is not a hardware problem, she says: “The steps that we have left to take in data centers are software related. And a lot of the steps that we have to take in people's homes are software related as well.”

To be fair, the sector has already effectively cut data center emissions. In the early years of this century, researchers noted that data center energy use in the US was growing rapidly. In 2011, it was predicted to continue growing out of control but, in fact, it stayed at around 2010 levels.

This was because software became more efficient. Virtualization made it possible to consolidate applications within a data center, and cloud services offered these benefits automatically. Virtual servers, in centralized cloud data centers, started to replace standalone servers.

Virtualization software literally reduced the need to build new data centers, by utilizing the hardware better.

”The advantage of the cloud over traditional data centers is the use of software to get much much higher server density within data centers,” says Currie.

Use fewer cycles

That is the infrastructure. But when you are making new applications, how do you make them more efficient?

Green software should be engineered to be carbon-efficient. As Currie, Hsu, and Bergman put it: “Green software is designed to require less power and hardware per unit of work.”

Programmers think there is a simple answer to this, but Currie says it is almost always wrong.

“My co-authors and I speak at conferences, and every time we speak, someone gets up at the end and says ‘Should I just be rewriting my applications in C?’”

Everyone knows that C is efficient because it handles the server hardware more directly than a higher-level language like Java or Python. So programmers expect Currie and her co-authors to tell them to go use it. But it’s not that simple.

“It's hard to say no, that’s not the answer, because if you rewrite your applications in C, you might well get 100-fold improvements in efficiency,” she says. “But it will kill your business.”

She explains: “In the old days, I used to write big servers and networking systems in C, where performance was really critical. This was in the ‘90s. All the machines were absolutely terrible. The Internet was terrible. Everything was 1,000 times worse than it is today. You had to work that way, or it wouldn't work at all.”

There have been improvements since then, but “that 1,000-fold increase in the quality of machines has not been used to deliver machine productivity. We've used it to deliver developer productivity.”

Higher-level languages make programs easier to construct, but there are layers of interpretation between the program and the hardware. So less efficient software has soaked up the added hardware power. As the industry adage puts it: “What Intel giveth, Microsoft taketh away.”

It is inefficient, but it’s been necessary, says Currie. Writing in higher-level languages is easier, and that is what has enabled the speed and volume of today’s software development. We couldn’t go back to lower-level languages, even if we tried.

“We just don't have the number of engineers,” she says. “If we were writing everything in C it wouldn't touch the sides. It just takes ages to write everything in C or Rust. It’s slow. Most businesses would be killed by doing it this way - and I don't want everybody's business to be killed.”

Code better where it counts

Improving code is not straightforward, says Mytton: “It's always more complicated. We'll leave the simple answers to politicians, the real answers really come down to what you are trying to improve.”

Physics tells us that energy is power multiplied by time, so if you want to reduce the carbon caused by your energy use, you can reduce the power used, or improve the characteristics of the power by moving to clean energy.

“That reduces one part of your equation,” says Mytton. “But the time variable is often what software engineers and programmers will think about. If I reduce the time, by making my code faster, then the amount of energy consumed will be reduced.”

Of course, this assumes there are no other variables - but there usually are more variables, he says: “To give two examples, memory and CPU are two separate resources. Optimizing those two can be difficult, as you get different trade-offs between them.

“A second example is where you could reduce the time by running the code across 10,000 different servers, but you may have increased the power used. You need to figure out what you're trying to optimize for.”
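As a toy illustration of that trade-off (the power draw and run times are invented for the example): halving runtime only halves energy if power stays the same, and fanning a job out across many servers can cut time while increasing the total.

```python
def energy_kwh(power_kw, hours):
    """Energy = power x time."""
    return power_kw * hours

baseline   = energy_kwh(power_kw=0.4, hours=10)             # one server, 10 hours
optimized  = energy_kwh(power_kw=0.4, hours=5)              # same server, faster code
fanned_out = energy_kwh(power_kw=0.4, hours=0.012) * 1000   # 1,000 servers, ~43 seconds each

print(baseline, optimized, fanned_out)   # 4.0, 2.0, 4.8 kWh - faster is not always less
```

The per-server overhead in the fanned-out case is why reducing time alone is not a reliable proxy for reducing energy.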

For most of us, Currie says it’s about optimizing software “where it matters. Where it matters is things that are going to be used by loads and loads and loads of people, because it takes ages to do this stuff.”

As the book puts it: “Don’t waste your time optimizing software that hardly anyone is running. Before you begin, consider how much hardware (servers or devices) and energy (data and CPU) in aggregate an application is likely to cause to be used everywhere it is run. For now, target only what’s operating at scale.”

They go on: “The best application of your effort is always context-specific, and when it comes to going green, pain does not equal gain.”

The things that need to be optimized are generally the shared tools that underlie everything we do. So most IT departments should be operating as enlightened consumers, demanding that these tools are efficient.

“For most people in an enterprise, it is not about your code, it’s about your platform,” says Currie. “This is really about managing your supply chain. It's about putting pressure on suppliers. Every platform that you're running on needs to be green.

“Take the standard libraries that come with the very common, popular languages. Are those standard libraries optimized? Those standard libraries are used all the time, so it's really worth getting the people who are writing those languages to tune those libraries, rather than just tuning your code that runs on top of those libraries.”

Customer pressure will make platforms “actively great,” she says. “If the platform is rewritten in Rust, measures and checks itself, and has made itself as efficient as possible, that's much, much more effective than you doing it just for your own stuff.”

Mytton says: “I think the goal of sustainable computing is to make it so that consumers don't have to think about this at all. They can just continue to use all the services that they like, and they are automatically sustainable and have minimal or no impact on the environment.”

And that is already happening. “If you're using AWS today, you're benefiting from all of their renewable energy purchases, and improvements in their data center efficiency, and you've not had to do anything as a customer. And I think that should be the goal.

“Software developers should hope for something similar,” he continues. “They may have to make a few more decisions and build a few more things into their code. But the underlying infrastructure should hopefully do a lot of the work for them, and they've just got to make a few decisions.”

The start of the movement

The green software movement began as an informal effort, but in the last couple of years it has increased its profile. Asim Hussain, a cloud advocate at Microsoft, formed a focus group which in 2021 was launched as the Green Software Foundation, at Microsoft’s Build conference.

“As sustainable software engineers, we believe that everyone has a part to play in the climate solution,” says Hussain. “Sustainable software engineering is inclusive. Whatever sector, industry, role, technology – there is always something you can do to have an impact.”

Hussain is now Intel’s director of green software, and part-time chair of the Foundation, which operates under the Linux Foundation. It has backing from organizations including Accenture, Avanade, GitHub, UBS, Intel, and Microsoft, and even apparently got the blessing of former Microsoft CEO Bill Gates.

“The idea was to answer the question: is this a software problem or is it a hardware problem? And the answer is it's both,” says Currie. “But while data centers were addressing it, the software industry really wasn't - because it just wasn't something that was occurring to them.”

The Foundation wants to be a grassroots organization, rather than trying to get top-down regulations: “We did talk about whether we should be lobbying governments to put rules in place, but it's not really our skill set. At the moment we are completely focused on just pushing people to be aware, and to measure, rather than getting the law involved.”

The Foundation has produced a report on the state of green software, and the three O’Reilly authors are all members.

Making measurements

“A lot of the focus of the Green Software Foundation has been about measurements,” says Currie, “because if you can measure then you can put pressure on your providers and your suppliers.”

The idea is to create a measure that will be called Software Carbon Intensity (SCI), which measures how much energy is used (or how much GHG is produced) for a given amount of work.

But it’s difficult. “Watts per byte is a key measurement criterion in the networking industry, but it isn't in the software industry,” says Currie. “Because in networking, watts per byte is quite clear - but what is the unit of work when it comes to software?”

The basis of the SCI is a “relatively simple equation, which looks at things like the energy consumed by the software, the embodied emissions of that software, and where that software is actually running,” says Mytton.

The unit of work is a bit less clear: “The functional unit can be something like a user call or an API call, or running a machine learning job.”

Combining these components gives a score, to help understand the overall carbon intensity of software, says Mytton, who is not directly involved in the SCI work: “I believe the goal of that is to be able to look at improvements over time. So you can see how changes to your software architecture or the components, reduce or potentially increase your score. And the long-term goal is to be able to compare different pieces of software - so you can make choices based on the carbon intensity.”

The goal of comparing software with SCI is still a way off, as different vendors define their system boundaries differently - and the SCI measure is still emerging.

The Foundation explains it "is not a total carbon footprint; it's a rate of carbon emissions for software, such as per minute or per user device, that can serve as an important benchmark to compare the carbon intensity of a software system as it is updated over time; or between similar types of software systems, such as messaging apps or video conferencing solutions."
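Broadly, the Foundation's specification expresses this as energy times grid carbon intensity, plus embodied emissions, divided by the functional unit. A sketch of the calculation, with illustrative numbers rather than figures from any real system:

```python
def software_carbon_intensity(energy_kwh, grid_gco2_per_kwh, embodied_gco2, functional_units):
    """SCI = (E * I + M) per R: energy E, grid intensity I, embodied emissions M,
    divided by the number of functional units R (API calls, users, jobs...)."""
    return (energy_kwh * grid_gco2_per_kwh + embodied_gco2) / functional_units

# Illustrative only: 2 kWh consumed on a 400 gCO2/kWh grid, 100 g of amortized
# embodied emissions, serving 10,000 API calls in the measurement window.
print(software_carbon_intensity(2, 400, 100, 10_000))   # 0.09 gCO2e per call
```

Because the result is a rate rather than a total, it can fall as the software improves even while overall usage, and therefore total emissions, grows.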

Importantly, for SCI to work, software must be aware of where it is running, what electricity it is causing to be consumed, and the local carbon intensity of that electricity.

The good news is that modern processors from suppliers like Intel and AMD now routinely include tools to report on their energy consumption, but these are still evolving.

Mytton again: “Intel’s tool only works on Intel CPUs, and it's different if you want to get data from AMD. And so far as I'm aware, Arm chips don't have anything available. Given that more systems are moving to Arm [for reasons of energy efficiency] on mobile, some laptops, and in the data center as well, that's a problem.”
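On Linux, Intel's counters are typically exposed through the kernel's powercap interface; a minimal sketch of reading package energy that way follows. The path varies by machine, reading it may require elevated privileges, and, as Mytton notes, it only covers Intel parts.

```python
import time

RAPL_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"   # package 0 energy counter, microjoules

def read_energy_uj():
    with open(RAPL_FILE) as f:
        return int(f.read())

def measure_joules(fn, *args):
    """Rough energy cost of running fn(*args) on an Intel/Linux box.
    Ignores counter wraparound and everything else running on the machine."""
    before = read_energy_uj()
    fn(*args)
    return (read_energy_uj() - before) / 1e6   # microjoules -> joules

print(measure_joules(time.sleep, 1.0))
```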

These measurements are going to be important, because organizations may need to balance efficiency improvements against performance. However, Greg Rivera, VP of product at software intelligence firm Cast, says there won’t be many such cases.

“Research from the Green Software Foundation is finding that making your software greener, typically makes it perform better, and cost less, and it makes it more resilient,” he says.

Coders making systems work well might sometimes hit on methods that trade performance against efficiency, however - and that might have an effect on efforts to give users the best experience.

“It can increase the amount of energy used, if you deploy your code very close to your user on a less efficient set of equipment, versus putting it in the middle of nowhere on the very highest efficiency equipment in the highest efficiency data center,” says Mytton.

The cloud might reduce the power, but increase the delay: “You need to figure out these trade-offs: you could increase CPU processing, because you've got a faster processor that can reduce the time. And memory has an energy cost as well.”

There’s a whole piece of work to do there on the software efficiency of mobile applications, he says. Phones are built to run efficiently, because the customer needs them to keep operating for a long while between charges, but they don’t always divide software in the same way.

“There is almost a philosophical difference between Android and iOS,” says Mytton. “Android typically wants to do more in the cloud and offloads a lot of things to it, whereas iOS devices are doing more and more on device. That's partly to do with how Google and Apple think about their infrastructure and where their core competencies lie - and there's a privacy element behind that.”

This observation leads to another major plank of green software. Software can be made which reduces its power consumption and the emissions it causes, by changing where and when it operates.

“Green software also attempts to shift its operations, and therefore its power draw, to times and places where the available electricity is from low carbon sources like wind, solar, geothermal, hydro, or nuclear,” say the book’s authors.

“Alternatively, it aims to do less at times when the available grid electricity is carbon intensive. For example, it might reduce its quality of service in the middle of a windless night when the only available power is being generated from coal. This is called carbon awareness.”

Just run it in the right place

There’s another very significant added complexity. The same software can have a different carbon footprint at a different time or place, because it is being run on more efficient hardware, or on a server powered by low-carbon electricity.

“I'm a huge fan of this,” says Currie. “Google have been working on this for years, and they have done some really interesting stuff. They're really, really looking at time shifting. There are jobs that really aren't all that time-sensitive; things like training a machine learning system. They can be delayed a couple of hours to when the wind’s blowing, or when the sun's about to come out.”

The practical example Google talks about is the encoding of YouTube videos. “When you upload a video, you want it to be available relatively soon,” says Mytton. “But it doesn't matter whether it's half an hour or an hour after you upload it. Processing and encoding is a one-time job, and they are able to move it to a region with more clean energy.”

Google can do this because it owns a lot of data centers, in a lot of regions, and has the data. At present, that data is not yet fully available to customers. “It's only very recently that all three of the big cloud providers have started allowing customers to get carbon intensity information from the workloads that they're running,” says Mytton.

Once users have that information they could re-architect their distributed applications on the fly, to send workloads where the carbon intensity is lowest. But they need accurate and comparable data, and early implementations from the major players tend to be dashboards designed to show their own service in the best light.

“The data just hasn't been available and generally still isn't available for the most part,” says Mytton. “This is a particular challenge - and the Green Software Foundation has a working group that is looking into access to data and making that open source.”

If that’s solved, then users can start to do environmental load shifting for real: “By that, I mean things like moving workloads to regions where there is more clean energy available at a particular time, or choosing how you're going to schedule jobs based on the availability of clean energy.”
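A minimal sketch of that kind of carbon-aware deferral is below. The get_grid_intensity function is a hypothetical placeholder for exactly the data Mytton says is only now becoming available, and the threshold and wait limits are invented for illustration.

```python
import time

def get_grid_intensity(region):
    """Hypothetical placeholder: in practice, query a carbon-intensity API
    or the cloud provider's own carbon reporting for this region."""
    return 180.0   # gCO2/kWh, illustrative value

def run_when_clean(job, region, threshold=200, check_every_s=1800, max_wait_s=6 * 3600):
    """Delay a non-urgent job (video encoding, ML training) until the local grid
    is cleaner than the threshold, or until the maximum wait expires."""
    waited = 0
    while get_grid_intensity(region) > threshold and waited < max_wait_s:
        time.sleep(check_every_s)
        waited += check_every_s
    job()

run_when_clean(lambda: print("encoding video..."), region="us-central1")
```

The same decision could equally be made across space rather than time, picking whichever region currently reports the lowest intensity.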

And beyond that, load shifting could also address other metrics such as water consumption, and the embodied energy in the manufacturing of the hardware.

Of course, moving workloads to follow clean energy supplies would have an effect on the footprint of data centers, because it could leave machines idle, and this would make their embodied carbon more significant in the overall footprint of the system.

“If the servers are not being used all the time, then they need to last longer,” says Currie. “So if we solve our problem, it's gonna give you [hardware] guys a problem. Delaying work is not a no-brainer, it means we have to balance these things.”

In the past, idle servers were a heretical suggestion, as the wisdom of a Moore’s Law world was that assets needed to be used continuously and replaced quickly with more performant versions.

That world is over now. New generations of hardware won’t have the same performance boosts, and older generations will be kept in use for longer - especially as organizations move towards reducing embedded emissions and a circular economy.

Can we make it happen?

Green software creators are serious about changing things, but they will have to work against the industry's instinct that more is always better.

Big data center operators believe they can carry on with untrammelled growth, if they are carbon neutral.

Those managing software - even underlying infrastructure software that is heavily used all the time - very often don’t rate efficiency highly enough.

Cloud vendors “want to make sure that the usage of their resources has a minimal or zero environmental impact, which is why they're putting all their efforts into trying to be carbon neutral and get to net zero,” says Mytton. “They tell you all of the good things they're doing around buying renewable energy and all those kinds of things, rather than necessarily focusing on how to use fewer resources - because those are things that they're charging you for.”

Mytton notes the Green Software Foundation has a lot of backing from Microsoft, but praises its work: “The GSF has done a good job at being independent from Microsoft cloud products or anything like that, although a lot of Microsoft people are involved. But we're seeing a lot of competition between the cloud providers now about who can be the greenest.

“Google has been leading on this for quite some time - but I think Microsoft is doing a very good job as well. They're just looking at different things and they're on different timelines,” he says, noting that Amazon Web Services is lagging on transparency and environmentalism.

The Green Software movement’s answer to these questions boils down to a simple guideline.

“Turn off stuff that isn't being used, be lean,” says Currie. “Don't do things just in case, turn off stuff while it's not being used, and turn off test systems overnight.”

“Are you storing more data than you need?” she asks. “Get rid of it all, or move it to long-term storage like tape.

“The biggest win is always do less.” 

Grammarly for efficient software?

One Green Software Foundation member working towards measuring the carbon footprint of code is Cast, a firm whose products analyze the attributes of software.

“Our application gives insights into the workings of software by understanding how it is engineered and written,” explains Greg Rivera of Cast. “Those insights include things like how resilient is the software, how secure is it? What risks are in it, and what's the technical debt?”

In 2023, Cast added a module called Green Software Insights. Rivera explains that it looks into the source code of applications and “provides insights and recommendations on how to optimize the coding so that it becomes more environmentally friendly.”

Static source code analysis has been used for some time to check on code quality, but it’s possible to also use it to spot where software is wasting resources, he explains: “Our technology will look for things like a SQL query inside of a loop statement. That's something that's been done many, many times by developers when applications are being developed. But it's not the optimal way to do it.”

The query inside the loop will get run many times. Placing it outside the loop means it only runs once. “When you write it and implement that type of functionality in a more efficient manner, then it actually uses less compute resources, which means it uses less energy, which ultimately means it's going to emit less carbon into the atmosphere.”
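The pattern Rivera describes, shown with a small sqlite3 example (the schema and data are invented for illustration): the first version runs one query per customer inside the loop; the second fetches everything in a single grouped query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, total REAL);
    INSERT INTO orders VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")
customer_ids = [1, 2]

# Inefficient: a query inside the loop - one round trip per customer.
totals = {}
for cid in customer_ids:
    row = conn.execute("SELECT SUM(total) FROM orders WHERE customer_id = ?", (cid,)).fetchone()
    totals[cid] = row[0]

# Greener: one query, grouped - the database does the work once.
totals = dict(conn.execute("SELECT customer_id, SUM(total) FROM orders GROUP BY customer_id"))
print(totals)   # {1: 15.0, 2: 7.5}
```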

The engine checks for dozens of inefficient patterns, gives the software a grade and displays that on a dashboard, providing automated recommendations.

“The companies using our product are very large enterprises. They have hundreds, maybe even thousands of applications,” says Rivera. “It's very complex, it's becoming overwhelming. They need automated insight to identify quick wins, across all of your applications.”

If organizations need further incentive to pick up on those quick wins, Cast includes an anonymized benchmark, which compares customers’ efficiency levels with their peers.

And the company is also looking to answer the big question: how much carbon does all this work actually save? Cast is combining the benchmark internally with real time analysis of the carbon intensity of the electricity sources powering its servers, to show the effect on its own footprint.

“We have a model under development, where we'll be able to translate the insights coming out of our product into actual impact on energy consumption and carbon emissions.”



Evaluating carbon capture’s promise

The human race needs to have less carbon dioxide in the Earth’s atmosphere. We are seeing the hottest temperatures ever and the world is expected to break the 1.5°C temperature threshold within five years.

We have a limited budget of greenhouse gases we can emit - 380 billion tonnes - and we are expected to burn through that in the next nine years.

Against this background, tech firms aiming to reach net-zero have found some emissions are stubborn and irreducible. For Microsoft, the problem is “Scope 3” emissions produced by its Xbox sales, among others.

Taking carbon out of the atmosphere sounds like a brilliant idea. Why is it controversial?

Given all this, it’s not surprising that tech players like Microsoft, governments, and agencies all the way up to the International Energy Agency (IEA) are increasingly saying that carbon capture technologies will be needed to pull CO2 back out of the air and eventually get the planet back on course.

That sounds logical, but respectable figures are dismissing carbon capture as a “hoax” or a “mirage.”

UN secretary-general António Guterres has said that when the oil industry touts carbon capture, it is providing a false justification for continuing to use oil, describing its ideas as “proposals to become more efficient planet wreckers.”

Greenpeace International founder Rex Weyler has described carbon capture as a "scam" by oil companies, designed to divert billions of pounds of money to continue a deception in which they posed as environmentally responsible organizations, while distracting public attention from the need to stop burning fossil fuels in the first place.

What’s the truth in all this?

Carbon capture in brief

There are two distinct strands to the carbon removal story, usefully explained in a primer from EnergyWorld. Carbon capture and storage (CCS) has been developing for some time in the energy and heavy industry sectors. It siphons carbon dioxide out of the flues of power stations, so it can be captured and stored permanently.

Direct air capture (DAC) takes a different approach, drawing CO2 out of the atmosphere, anywhere in the world, away from the sites where it is produced. Both use similar storage methods, essentially making the CO2 inert and injecting it underground.

These methods might sound similar, but CCS is a lot easier because smokestacks are very rich in CO2. About 12 percent of the exhaust gas from a power plant is carbon dioxide. That can be captured cheaply.

By contrast, DAC systems such as ClimeWorks in Iceland have to work with the normal atmospheric concentration of only 420 parts per million (0.042 percent), and that puts the price up.

CCS can produce carbon dioxide at around $20 per tonne, while DAC currently costs around $1,000 per tonne.

On top of this, the energy demands and cost of both techniques vary according to the purity of the CO2 they produce.

Jonas Lee, chief commercial officer at US firm CarbonCapture, explains it as follows: “The cost of capturing CO2 changes based on the purity of CO2 output. To drive to 50 percent purity is one figure. To get to 99 percent is a much higher number.”

That’s significant, because some ways to dispose of carbon dioxide, like mineralization, don’t need very pure CO2. “If you combine it with water, you can feed in 50 percent carbon dioxide, and the mineralization process will work.”

If your carbon capture system aims to produce pure carbon dioxide for use in industrial processes, instead of storage, then it will be more expensive.

Backing for CCS

Looking at the location of carbon capture, it is tempting to think that it should all be done by CCS in smokestacks, where it is cheaper.

The US is subsidizing both CCS and DAC. The Bipartisan Infrastructure Law has allocated $3.5 billion for regional direct air capture hubs. The Inflation Reduction Act allocates a subsidy of $85 per tonne of CO2 permanently stored, or $65 per tonne taken out of the atmosphere for use.

These subsidies could seed a major carbon capture industry in the US, says Lee: “Those two things have really made it so that it doesn't really make much sense to go anywhere else.”

One sign of the burgeoning interest, says Lee, is a bottleneck in applications for "Class 6" certificates on injection wells, to satisfy the demands of companies like CarbonCapture. These wells take a lot of certification - which is a good thing because we want to ensure that the storage is permanent.

However, if the world is aiming to decarbonize, environmental advocates want some distinctions made. Instead of using CCS to clean up smokestacks, we should be finding ways to avoid burning the fossil fuels in the first place.

CCS just reduces the new CO2 being pumped into the atmosphere, while DAC is a "negative emissions" technology that removes existing CO2.

It also has a negative history. CCS has been touted by the oil industry as a way to reduce CO2 emissions, but it has actually acted more as a way for energy companies to keep burning oil.

The CO2 they captured has very often been pumped into boreholes to increase extraction, by pushing more oil and gas to the surface. Somewhat surprisingly, the Biden Administration has included this use ("enhanced oil recovery") in the $65 CCS subsidy, even though it actually results in more emissions overall.

It is schemes like this that have led to opposition from people like Guterres and Weyler.


However, the energy industry has produced other schemes which might potentially be more beneficial. The UK’s Drax power station burns biomass, and plans to remove carbon from the smoke stack. Since it is burning recently grown material rather than fossil fuel, this bioenergy carbon capture and storage (BECCS) scheme could be potentially carbon negative, producing energy and reducing the carbon in the atmosphere as new material is grown.

Similar schemes are being put in place within Denmark, sometimes referred to as BECCUS, because they also propose to produce CO2 for use as well as storage.

Denmark aims to become Europe’s premier carbon storage site. Multiple projects have started, backed by Total, Ineos, and Wintershall DEA to store millions of tonnes of CO2 in the sandstone of used North Sea oil and gas reservoirs. Other countries in Europe are expected to pump their captured carbon to Denmark for disposal.

DAC backing

If the planet is to rebalance the atmosphere in the long term, there will need to be some method to take CO2 out of the air. It is not possible to plant enough trees to do this, because of the sheer volume of fossil fuels which will have been burnt in the last two centuries.

The world needs to remove 10 billion tonnes (10Gtonne) of carbon dioxide per year by 2050 in order to remain on a path to limiting global warming to 1.5°C. So the world is turning to DAC, along with some possible biologically-based methods (see box).

“We should do as much as we can with nature-based solutions,” says Lee. “But land is a competitive asset.”

Solutions around trees have proven to be unreliable and easy to falsify. There have been a large number of carbon offset schemes based around forest owners promising not to exploit them. In many cases, they have taken money for old forests which were never going to be cut down - or worse, having the trees cut down anyway.

Solutions based around trees suffer from a lack of “measurability, verifiability, and permanence,” says Lee. “With technical solutions, we actually can weigh the CO2 that we capture, and then pass it to our partner who buries it in Class 6 injection wells 12,000 feet down. They bill us based on how much it weighs. So there's verifiability, as well as third parties that we engage to monitor the whole system.”

Those certified injection wells have to be permanent, he says, “so the cap rock is strong, and there’s no seismic risk, etc.”

Each injection well can handle “somewhere between 250,000 and a million tonnes of CO2 per year, so once a sizeable number are certified, the industry will have a good prospect of storing plenty of CO2 .”

The economics of DAC are still emerging, and current technologies also need a lot of power. Currently, it takes around 1,200kWh (1.2MWh) to remove a tonne of CO2, and this is not expected to fall below a minimum of 250kWh to create pure CO2.

Removing the total requirement - 10Gtonne - at a cost of 1MWh per tonne would need a total of 10 billion MWh, a staggering amount beyond what is currently used by domestic electrical systems.
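The arithmetic behind that figure, as a quick check (the comparison with global generation is approximate):

```python
tonnes_per_year = 10e9    # 10 Gtonne CO2 removal target
kwh_per_tonne   = 1200    # today's DAC energy cost; ~250 kWh is the hoped-for floor

twh_needed = tonnes_per_year * kwh_per_tonne / 1e9   # kWh -> TWh
print(f"{twh_needed:,.0f} TWh per year")             # 12,000 TWh

# For scale, global electricity generation is currently on the order of 30,000 TWh
# a year, so DAC at today's efficiency would swallow a large share of world output.
```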

It also requires renewable energy: using fossil power could push more carbon into the atmosphere than the DAC system removes. That renewable energy ideally also needs to be new, otherwise you are forcing other industries to rely on fossil fuels, defeating the point.

Go where the power is green

For this reason, DAC schemes are starting relatively small, and the leader, ClimeWorks, is located in Iceland, where it has had backing from Microsoft, and also former Microsoft CEO Bill Gates, both of whom have paid a reputed $1,000 per tonne of CO2 removal to get the project started.

Iceland is a logical place to start such a project, because it has far more renewable energy than its population of under 400,000 can use.

However, the total energy available there is small in global terms, and so far ClimeWorks has only been able to remove CO2 roughly equivalent to the emissions of fewer than 1,000 cars.

For this reason, and for the subsidies, CarbonCapture is located in the US, says Lee, with a pilot plant in Wyoming.

“For direct air capture, we could be anywhere,” he explains. “We can go where the land is cheap, where the energy is the cleanest and cheapest we can get, and where the community wants us.”

DAC can be sited literally anywhere, says Lee, because “CO2 mixes in the atmosphere incredibly quickly.” The baseline concentration of 420 parts per million changes very little from place to place.

With funding from sources including Microsoft, CarbonCapture is starting with a footprint of 10,000 tonnes per year, “which should grow to five million tonnes per year by 2030.”

It’s not the whole answer

As with all climate technology proposals, it is important to keep carbon capture in context.

It cannot solve the problem on its own - and will take some decades to become a really significant player in removing carbon from the atmosphere.

However, if the planet is to reach a steady state, natural carbon removal is likely to need a helping hand, so we shouldn’t reject it. 


Many methods

Microsoft is hedging its bets by supporting every possible method of carbon removal.

The company is paying for direct carbon capture from ClimeWorks in Iceland, as well as Heirloom and CarbonCapture in the US.

It is also supporting BECCS (bioenergy with carbon capture and storage) in a deal with Ørsted and Aker Carbon Capture in Denmark.

But Microsoft is also investing in biological carbon removal, which has the potential for massive amounts of carbon sequestration without the need for massive amounts of renewable electricity.

Microsoft has paid "ocean health" company Running Tide an unknown sum in the millions, to remove 12,000 tonnes of CO2 using a method which grows biomass and sinks it in the deep ocean.

Running Tide's website says its technology "accelerates the ocean's ability to naturally remove carbon dioxide, sinking it to the deep ocean in a safe and permanent form.”

The company deploys floating buoys which contain limestone, boosting the ocean's alkalinity, along with algae which grows and captures more carbon. When the algae reaches a certain size, it sinks to the ocean floor, where gravity and water pressure hold the biomass in place.

Running Tide says its method provides a very slow cycle which will keep CO2 locked in the ocean depths for thousands of years. It has promised to submit to measurement, reporting, and verification, and Microsoft says it will step up to help establish third-party certifications.

From all indications, Microsoft is already paying a price lower than the $1,000 per tonne charged by electrical DAC systems, but the price will apparently fall, as the rollout passes certain "gates" which will unlock larger purchases.

Physically moving large numbers of seeded buoys to the open ocean requires ships, so there is some energy input, but once the buoys are launched, the process uses gravity and ocean currents which require no further energy input.

Running Tide isn’t the only company aiming to capture CO2 in the ocean. Additional Ventures, founded by former Meta CTO Mike Schroepfer, is supporting ocean alkalinity enhancement (OAE), which encourages more CO2 to dissolve in seas.

The oceans already absorb more carbon than the world's forests by processes including the erosion of alkaline rocks such as carbonates. Once these rocks enter the sea, they neutralize harmful acid and pull still more CO2 from the air.

This process already adds up to around a gigaton of carbon removed each year. Additional Ventures has set up a $50 million Carbon to Sea research program which has so far committed $23 million of research grants to scientists exploring ways to increase the amount of CO2 removed by the ocean, without unbalancing ecosystems.

OAE proponents hope to double the gigaton of carbon already removed each year by natural ocean alkalinity processes.


WBBA hopes to be the broadband industry’s governing body

Can the mobile model fronted by GSMA provide a blueprint for the broadband industry?

In the business of connectivity, collaboration is key.

Since the formation of the GSM (Global System for Mobile Communications) Association in the mid-1990s, the mobile industry has had a governing body that represents mobile operators worldwide.

The GSMA, a not-for-profit organization, was established in 1995 to provide a standard for cellular networks to adopt.

It counts 750 mobile operators as members, plus more than 400 companies in the broader mobile ecosystem as associate members.

If you’re a keen telecoms enthusiast, you’ll know who the GSMA are and what they do.

But this body predominantly covers the mobile industry. Is there something similar for the broadband industry?

A gap in the market

At present, there is no equivalent body that represents the broadband industry in the same way the GSMA represents mobile.

This is something that has long frustrated Martin Creaner, a telecoms veteran who has held positions at BT and Motorola Solutions during a lengthy career stretching back to the eighties. His career has taken him to many different parts of the tech industry, but he has landed in broadband, specifically at the World Broadband Association (WBBA).

He has experience leading a trade association, having served for 12-and-a-half years as president and CEO of TeleManagement Forum, a global industry association for service providers and their suppliers in the telecommunications industry.

But his project at the WBBA is different, as this is focused specifically on broadband.

Creaner played a key part in launching the WBBA last year, in an effort to provide leadership for digital broadband and to help overcome the challenges the industry faces.

“It’s an organization that has been set up relatively recently - the first discussions about it were in mid-2021,” Creaner told DCD.

“The WBBA is about creating an open member-led organization that creates a platform to drive broadband cooperation and partnership across the whole industry, and to accelerate broadband adoption everywhere in the world.”

Broadband is a global good and an absolute necessity for everyone, says Creaner.

“If you believe the argument that broadband is a global good, it doesn't matter where you are, whether you're in Birmingham or Botswana. Having broadband improves your life and improves your ability to do business, it improves your access to information, your access to education, your access to health care, it improves the quality of life.”

He wants broadband for everybody, regardless of geographical location, to bridge the digital divide in the world’s developing countries.

Driving conversations to improve broadband

It might sound pretty obvious, but broadband is every bit as important as the use of mobile phones.

While the GSMA has the latter covered, it’s the work that the association has done to build out its message that inspired Creaner to create the WBBA.

The GSMA’s place in the mobile market made it clear something was missing in the broadband sector.

“There's nothing that really matches the GSMA for the broadband industry and this is what kicked off a number of discussions. For us, it provided us with soul-searching from a whole load of companies and industry bodies. It was essentially concluded that we need to create a business association and an advocacy association for broadband.”

He points out that people - not just those with much knowledge of the telecoms industry - understand the concepts of 2G, 3G, 4G, and 5G, largely because of the GSMA, even if they don’t know who the GSMA is.

Creaner wants the WBBA to drive the conversation around fiber services in the same way, setting industry standards that put telcos and broadband firms on the same page, while also educating the mass market.

Open for everyone

So far, some of the telecoms industry’s biggest names have signed up to the association, including network vendors Nokia and Huawei, plus telcos such as Swisscom, China Telecom, and China Unicom.

Membership is open to everyone, says Creaner, though he didn’t disclose a target on how many members the association is looking to onboard.

“The process for membership is that it’s open to any company in the broadband industry, by which we mean that group of stakeholders, ranging from the supply side operators and our traditional suppliers to the demand side, plus those investing in the industry.”

It’s not about the quantity, but instead about representation across the industry, insists Creaner.

“It's not just about the volume of membership during the next few years, it is really about making sure we have got good representation, representing all the different stakeholder groups from all the different parts of the world. Once this begins to fill up we’re going to start seeing a fabulous dynamic start to emerge.”

A membership fee is being structured, dependent on the size of the company, he adds, though there is no fee for academic institutions and not-for-profits.

Collaboration is key for the industry

While Creaner and those involved at the WBBA will naturally talk up the need for an organization to represent the industry, is there really a need for such an association?

One such company that has joined as a member is Internet performance analytics firm Ookla.

Speaking to DCD, Sylwia Kechiche, the principal analyst for enterprise at the company, says there is.

“In every industry, there is always a trade association or organization that represents it. The goals of GSMA and WBBA are quite similar,” she said.

“GSMA aims to unlock the full potential of connectivity for the betterment of people, industry, and society. Meanwhile, WBBA’s mission is to connect the world by providing broadband access to everyone.”

“It’s worth noting that other organizations have similar missions to WBBA, so collaboration is key to preventing duplication of efforts.”

It’s for this reason that Ookla has joined the WBBA, says Kechiche, acknowledging that without dialog and collaboration, technological advancements in this space won’t happen.

“It’s important to note that simply being connected to the Internet isn’t enough. The quality of the connection is also crucial. That’s why Ookla has joined WBBA to fulfill our mission to measure, understand, and help improve connected experiences.”

Attacking issues head-on

Given that the association has been set up to deal with challenges that broadband companies face, what are the biggest challenges that will need to be addressed?

Some of the biggest issues include infrastructure limitations, which is why satellite connectivity has become more of an option for remote areas where it’s not possible to place telecom towers or build base stations.

Kechiche says that on top of this, affordability, language barriers, and regulatory challenges are some of the biggest challenges facing the broadband industry.

“To address these challenges, a multi-faceted approach involving multiple stakeholders, including government bodies and private and public sectors, is necessary,” she said.

“Although it may be difficult to measure the direct impact of a governing body, there are other ways in which it can play a role. These include advocating for the industry, providing a forum for sharing knowledge and ideas, stimulating discussions, and bringing together various players. It is important to ensure interoperability, the ability to exchange ideas, and international representation through existing alliances.”

The industry, like others, has had to contend with shortages, in particular around fiber rollouts.

Last summer, reports suggested that fiber providers were struggling to get the materials necessary to run their networks.

A report from business intelligence firm Cru Group noted that the global shortage of fiber cables led to delays and price hikes for the sought-after kit, though DCD spoke to several telcos, who explained that such challenges have since eased.

But such challenges can differ from country to country, says Creaner.

“So in some places, one of the biggest challenges is overcoming local statutes to rapidly roll out fiber into our rural environments,” he adds.

“In other cases, it's about trying to roll out fiber into an advanced city, while for others it's trying to roll out fiber into impoverished villages in parts of Africa, North America, or Asia. So, there are lots of different challenges in different parts of the world, and we're very focused on trying to understand the nature of those challenges.”

Fiber is mentioned a fair bit in our discussion, with Creaner stating its importance in developing a more energy-efficient industry as a whole, as opposed to the legacy copper network, which is widely being switched off.

“We need to reduce carbon emissions and the telecoms industry is very, very focused at the moment on how do we do our part in reducing our carbon footprint. One of the great ways to do this is to roll out fixed broadband on a large scale, right across the world because broadband, particularly fiber broadband, is much more energy efficient than any of the other technologies by which we can connect.”

Leveraging his experience

With a lengthy career in the telecoms space, Creaner is keen to leverage his experience in building up the WBBA.

So what can we expect from the WBBA as the association aims to grow its presence? Creaner has identified the importance of industry-wide events, not too dissimilar to the massive Mobile World Congress (MWC).

Referring to the GSMA’s MWC event as a “must-attend” event for the telecoms industry, he aspires to launch something similar for the broadband community.

“It’s an event that everybody goes to, has the best speakers, and is a great place for people and businesses to do business. It’s a hugely valuable thing that the GSMA has created for the mobile industry.

“We’d love to create something similar for the broadband industry and are in the early stages of creating events and have a big event scheduled in Paris in October. Our ambition is that we hope we can create an event that becomes a magnet for the broadband industry to come together.” 


The great telecom tower sell-off

Why are operators spinning these assets from their portfolios?

Telecommunications infrastructure is a busy sector, with more mergers and acquisitions than ever. At the same time, mobile operators are selling off assets, and investment firms are keen to invest heavily in those assets.

It’s something that has fascinated us at DCD, and something we have covered at great length.

Big-name telecom companies such as Deutsche Telekom and Vodafone have completed deals to sell tower assets in the past 12 months, generating billions of dollars in capital as a result.

We’ve seen a big data center sell-off in years past, as telcos got out of owning facilities (see Evoque, page 56). Now, we’re seeing a great telecom tower sell-off. But why?


The significance of telecom towers

It’s worth first understanding the importance of telecom towers or masts. They have electronic equipment and antennas that communicate with phones and other devices.

Without them, we’d struggle to keep connected.

Typically they are tall structures, either guyed or self-supporting, and may also be found on rooftops. The size of each site varies depending on the generation of connectivity.

Operators have traditionally built and owned these assets, seeing them as strategically important to their network coverage.

But many have begun spinning off their “TowerCos” into separate units or companies. Deutsche Telekom set up its GD Towers unit, then sold a majority share to Brookfield and DigitalBridge in a deal valuing the unit at a cool €17.5 billion ($19bn).

Other operators have sold these tower assets outright, with leaseback deals in place that still see them lease at least part of the towers for their networks.

Industry veteran Ineke Botter, a former CEO of telcos in countries such as Azerbaijan, Haiti, Kosovo, and Lebanon, explained the importance of these towers to mobile network operators (MNOs).

“I’ve run operators in the past that were owned privately. At the time, it was not an option to sell [towers], because the competition was based on coverage,” said Botter.

“When the first mobile operators started, the name of the game was to get as much coverage as possible, and the best quality of course, with the least disruption.”

So why are operators selling them now? According to PP Foresight telco analyst Paolo Pescatore, it’s all about efficiencies and pushing new revenue streams.

“All telcos are struggling to generate new forms of revenue. Margins continue to be squeezed due to the rollout of next-generation networks and people are reluctant to spend more on connectivity. We are in a golden era of connectivity. Therefore it represents a good time, better than any, to sell off what was once a prized asset.”

Other analysts that DCD spoke to echoed these sentiments, saying shareholders are pushing operators to cash in their towers.

Short-term gains

James Gray, formerly of Vodafone and Three in the UK, says that operators can make a lot of money in the short term.

Mobile coverage among operators now, in a country like the UK, is relatively evenly distributed, so tower infrastructure isn’t the big differentiator that it once was, back when coverage was king.

“This infrastructure is a big cost for the operators, but it's not a big differentiator anymore. So you could see why it's appealing to maybe try and outsource that cost,” explained Gray.

“The costs continue to grow as we deploy new standards, such as 5G and from there, you’ve got to keep on investing in the network.”

It takes work and resources to maintain such infrastructure, and Gray says it makes sense for operators to sell this infrastructure to tower companies who lease it back to them as a managed service, enabling them to still make money.

“It’s easy to see why it’s appealing for operators to do this as they can make a lot of money selling all that infrastructure to companies or whoever it might be in the short term. It will enable the operators to move away from thinking like they are infrastructure organizations and start thinking about being customer-centric organizations that are focused on driving the brand and driving ARPU [average revenue per user].”

Christopher Greaves, a researcher for TowerXchange, explains that the networks don’t make telcos a huge amount of money. He says the move for telecom companies is a “win-win.”

“With the arrival of 5G and with the digitization of our economies towards cloud and IoT, there's a lot of capital that needs to be deployed to advance our economies to provide more services.”

Operators want to deploy capital where they can make money, and the network is not such an area: “So for operators, it's sort of a win-win for them because they can free up capital to invest in new verticals, whilst at the same time they can reduce their existing opex costs by stepping away from their tower infrastructure.”

Christopher Antlitz, principal analyst at Technology Business Research, describes MNOs’ tower assets as real estate, which they must fully utilize in order to provide the best service.

Rolling out 5G has become so expensive, Antlitz says, and tower sales can fund spectrum acquisition.

“We need to remember the context here, which is that the telcos are heavily levered,” he said. “Most telcos are in a lot of debt. They have had to buy spectrum, they have to fund capex, and they have to fund dividend payouts. They have a lot of expenses. So the towers are an asset that they have had for decades that they can repurpose and free up capital and that’s what they are doing right now.”

Repurposing assets can help operators reinvest

Vodafone sold off some of its Vantage Towers business in November 2022 to a new joint venture with KKR and Global Infrastructure Partners (GIP). The transaction valued the Towers unit at €16.1 billion ($17.65bn), and Vodafone got €3.2 billion ($3.5bn) in cash, which it used to pay down its debt.

Andrea Dona, chief network officer for Vodafone UK, says it’s a necessity for the operator to offload these assets, in order to keep pushing forward with network upgrades and advancements.

He says towers are high-value assets and form the backbone of connectivity, but it doesn’t make sense to spend “a lot of money on the passive asset network infrastructure in an environment where operators are struggling with revenue growth and margins.”


Like everyone else, for Dona, it’s about realizing money, so Vodafone can invest in its “core business which is customer experience, innovation, the rollout of technology, network quality, digital skills and services beyond connectivity.”

In the Philippines, PLDT said offloading tower sites could fund expansion and ease maturing debts, while Globe Telecom sold off over 7,000 towers to raise over $1.2 billion to fund its own network expansion.

Not all operators though

In France, however, things seem to be different. Orange’s CEO Christel Heydemann has called MNOs spinning off their assets “weird.”

“When you see companies selling their towers [or] using financial vehicles to continue to invest in infrastructure there is something that is, maybe not wrong, but something weird going on in the market,” Heydemann told the Financial Times in February.

For context, Orange’s Totem tower subsidiary is fully owned by Orange, and operates around 27,000 towers across France and Spain, two of its biggest markets.

An Orange spokesperson told DCD that Europe remains a key market for the operator.

“The European tower market remains attractive due to long-term contracts indexed to inflation and the coverage densification and the roll-out of 5G on the continent, which remains a key growth opportunity,” said the Orange spokesperson, stating that Totem is vital to its ambition to become a trustworthy European TowerCo.

The growth of the TowerCo

What about the TowerCos themselves? While mobile operators have traditionally built these assets, many independent tower companies have been established in recent years in regions such as Europe and Africa.

In recent months we’ve seen a flurry of acquisitions completed. Spanish tower firm Cellnex recently finalized the purchase of CK Hutchison’s tower business in the UK.

This was part of a deal that also saw Cellnex snap up tower assets from CK Hutchison across six European countries, including Austria, Denmark, Ireland, Italy, and Sweden for a combined €10 billion ($11bn).

"This series of agreements with CK Hutchison not only strengthens our position as the leading pan-European operator but also bolsters our relationships with our customers and opens us up to new opportunities and perspectives for collaboration," said then-Cellnex CEO Tobias Martínez in November 2022.

"In essence, this rationalizing of the infrastructure managed by a neutral operator like Cellnex will create the necessary incentives to accelerate, improve and expand mobile coverage, including 5G, in these key markets.”

The company, which operates around 53,000 tower sites in Europe, continues to do deals.

It most recently took full control of OnTower Poland, paying €510 million ($556m) to buy the remaining 30 percent stake from Iliad Group.

A booming market - Africa’s TowerCo push

London-based Helios Towers operates over 13,600 towers in nine countries across Africa and the Middle East.

Group Commercial Director and Regional CEO for Southern Africa Sainesh Vallabh says TowerCos were created by the market’s movement towards shared infrastructure: “Network as a competitive advantage has become less of a feature for mobile operators around the world.

“That coupled with the regulatory drive to have an efficient and more sustainable investment in the market so we don't duplicate infrastructure and waste resources has driven the advent of TowerCos looking to acquire these assets and operationalize them much better, in order to create the efficiency and sustainability to benefit both the telco and the mobile operator.”

Africa has seen a flood of investment in tower infrastructure. In 2021, Africa Infrastructure Investment Managers (AIIM) was one of three key investors in Eastcastle Infrastructure, developing new towers in the Democratic Republic of Congo, Nigeria, and Côte d’Ivoire.

Patrick Kouamé, an investment director at AIIM, repeats that infrastructure is no longer such a strategic asset: “Telcos recognize that they can’t do it all. Their main value is to manage the customers.”

Also, he points out that regulators are increasingly making it mandatory for telcos to share infrastructure.

Tempting investors

There are operators that own the towers, and there are TowerCos that specialize in the infrastructure. But there are also investment firms that have a strong interest in this industry. Why is this?

Carl Gandeborn, a former executive at Ericsson and Nokia who also worked at Helios, says it is because tower assets come with very little risk.

“They offer very steady cash flows for investors. They are no risk, because the companies that you would have on your books are a certainty. So if you're a pension fund, you can go and you put capital and you invest into our company and get steady returns. It's like a steady rental business and it's long-term contracts.”

Alessandro Ravagnolo, a partner at Analysys Mason, agrees: “TowerCos have a business model that is very attractive for infrastructure funds because the technology risk is extremely low, if not absent.”

“There are high barriers to enter and exit and you have long-term contracts which all contribute to a predictable cash flow, so this makes them very attractive for investors. There's a continuous need for investment to expand the networks, plus there’s obligations attached to these contracts too.”

Tenancy sharing

It is more sustainable for operators to share tower infrastructure, says TowerXchange’s Greaves: “TowerCos are by design sustainable because they encourage tower sharing. By sharing tower infrastructure, you are reducing the energy consumption and CO2 emissions of your network, so it's more efficient.”

It’s also lucrative, says Antlitz: “You can increase the tenants on the tower. Historically, you would have one tower owned by one telco. Over the years those telcos would sub-lease or rent out space on their tower if they have extra space to another telco or to some other entity,” he explained.

“As for investors buying the towers, there is much less of a competitive angle and there's more of an emphasis on the sharing model. With multi-tenancy, every tenant you add to an existing tower site, your cash flow and your margin structure gets very lucrative.”

This can give investors a profitable long-term revenue stream.

Challenges for the industry

There are questions. Gray thinks operators may be selling too many of these assets for an initial quick cash bonanza, without thinking long-term. “After the initial money made from selling the infrastructure, operators will have to pay to use it.”

By selling these assets or spinning them off as TowerCos, mobile operators have also “lost control over an important part of their value chain,” he says.

“In some cases in Europe, operators have kept a 51 percent share and so they've got a bit more control,” he says. “They’ve probably pleased some investors with the initial sales, but longer term they've got this ongoing cost, and are no longer in control of that cost. So that's going to need to be watched.”

But challenges differ from market to market. Africa has infrastructure issues, such as the problem of aging grid systems in South Africa. For 12 months the country has been hit with regular rolling power cuts, with state power utility Eskom warning that outages could last for as long as 16 hours this winter.

Vallabh says that the lack of infrastructure in Africa is the biggest challenge for the continent right now.

“None of our markets in Africa actually have 24-hour grid availability. In the past, South Africa used to have 24 hours, but with the load-shedding program, it’s reduced substantially. So alternative forms of energy are critical for ensuring that networks stay up and running when the power goes down,” he says. “Mobile networks are critical infrastructure, and it will cripple the economies of these markets if they don't remain up.”

He’s not wrong. It’s estimated that load-shedding has cost South Africa five percent of its GDP.

When will it end?

Is it sustainable for mobile operators to keep selling their assets?

Ravagnolo thinks some regions, such as Central and Eastern Europe and perhaps the Nordics, which have seen fewer tower acquisitions, might follow in Western Europe’s footsteps.

But overall, there is a limit, he says: “There is a natural ceiling to the market - if all the operators spin off their towers, then there is no more to spin off, right?”

So, when 6G is being rolled out, will telcos still be spinning off assets? Maybe not. Because by then, they might not have any left. 


Don't get hooked on carbon capture

When I wake up in the morning, I take a handful of supplements.

Various vitamins of dubious origin are swallowed in an attempt to improve my health via the use of money in lieu of simply looking after myself. Most of the vitamins are not even absorbed by my body, making their way out the other end.

Carbon capture feels no different. While there are a few benefits, they are small in comparison with simply using existing technologies to deliver drastic emissions reductions. Just as I would get far greater health benefits from exercise and a better diet, the planet would benefit more from fewer emissions and cleaner supply chains.

But we’re not good at change, especially if it requires sacrifice. I don’t feel like I have time or energy to improve my ways, just as we all feel like we don’t have the resources to overhaul society’s relationship with fossil fuels.

That’s what makes the promise of carbon capture alluring - we don’t have to change anything about how we operate, we only have to spend a little more to make up for the damage we cause. It’s a cheat code for climate change that requires no change on our end.

But sadly the cheat code doesn’t work. Even if it scales rapidly, it is only a small part of the much greater effort we have to undertake to avoid the worst of climate change.

Like supplements, the industry lacks safeguards and is riddled with companies promising a lot and delivering a little. Like supplements, they are not enough to keep us alive.

If we bet everything on them, then my vitamins will not be the only thing flushed down the drain.
