DCD Magazine Issue 54: Chasing the last nanosecond


From magnetic-bearing chillers to purpose-built air handlers, the full line of proven data center solutions from YORK® delivers performance optimized to meet the uptime requirements of today and the sustainability goals of tomorrow. After all, we’re not waiting for the future: we’re engineering it.

Learn more about the full line of YORK® data center solutions: YORK.com/data-centers

CRAH Computer Room Air Handler
YVAM Air-Cooled Magnetic Bearing Centrifugal Chiller

6 News

Big deals, gassy growth, data center opposition

15 Chasing the last nanosecond

Every fraction of a second counts in this profile of the wild world of high-frequency trading

22 De-risking AI growth

Kao Data CEO Doug Loewe on making hay while the sun shines, while getting prepared for the coming darkness

25 The Cooling supplement

F-gas bans, vortexes, and AI cooling

41 London calling

Tories out, data centers in?

47 Grid capacity challenges vs renewable targets

Why data centers are key to solving both

53 Colocation and Helios

How an African telco hopes to build out the Edge at its towers

58 Here, Vaire, and everywhere

One chip startup’s reversible computing dream

61 Nuclear solutions

The promise and peril of SMRs

64 Nuclear missions

The DOE’s CIO on managing a giant, weaponized infrastructure

68 To Slough we go

Touring Virgin Media O2’s data center

72 Beyond Loudoun

When the heartland fills up, here’s where data centers are going

78 Did your country win?

Behind the Olympics networking challenge

81 Sustainable teleports

Greening ground stations

86 Lambda’s AI cloud dreams

The race to build a GPU cloud competitor to hyperscalers

90 Op-ed: When the music stops

Living life in the fast lane

From the Editor

We all know how it feels to be expected to work to ever-tighter deadlines. Projects are getting faster, expectations are getting higher. Speed is the name of the game.

Trading places

For the cover, we profile the wild world of high-frequency trading, where success is measured in nanoseconds. The flash boys have been replaced by flash toys, all focused on doing one thing: Making money faster than anyone else.

Wall Street meets Data Center Alley

Direct lines of high-speed cables, microwave towers, and specialized FPGAs are all being built to squeeze out precious fractions of a second as fortunes are made and lost in the time it takes to read this sentence.

The big question

Traders aren't the only ones making money. The AI boom has been undeniably good for data centers - but how long can it last?

As Kao Data gears up for the largest expansion in the company's history, we chat to CEO Doug Loewe about how to avoid over-extending and ending up in trouble when the AI hype train starts to slow. It's impossible to predict how dramatic any fall will be, but it is possible - and prudent - to insulate yourself from any collapse.

Dive even deeper

The UK's comeback

There's a new government in charge, boldly promising to slightly reverse a decade of austerity.

Key to their plan to get the economy chugging is a concerted pitch to data center companies. But can they overcome grid and NIMBY challenges?

The nuclear family

The Department of Energy is best known in this industry for its giant supercomputers.

But the department also has a vast scientific portfolio, significant regulatory demands, and a terrifying nuclear arsenal. We chat to the DOE's CIO about managing all of this, and more.

Talking telco

Helios has been on a tower buildout binge. Now, the CEO tells us, the company is ready to slow down.

We chat about how the Africa-focused telco is looking to monetize what it has, including with Edge and colocation sites.

Elsewhere, we visit Virgin's Slough DC.

Beyond Loudoun

Virginia is filling up.

The data center capital was already struggling to keep up with demands, but now the rise of the megacampus is putting new pressures on the state. We profile different counties in Virginia to see when and where data centers will live.

Plus more

A cooling supplement, novel chips, ground stations & more!

38 miles: The length of IEX's coil of optical fiber, which intentionally adds a 350-microsecond delay.

Meet the team

Publisher & Editor-in-Chief

Sebastian Moss

Senior Editor

Dan Swinhoe

Telecoms Editor

Paul Lipscombe

Compute, Storage & Networking Editor

Charlotte Trueman

Reporter

Georgia Butler

Features Editor

Matthew Gooding

Junior Reporter

Niva Yadav

Head of Partner Content

Claire Fletcher

Partner Content Editor

Chris Merriman

Copywriter

Farah Johnson-May

Designer

Eleni Zevgaridou

Media Marketing

Stephen Scott

Head of Sales

Erica Baeta

Conference Director, Global

Rebecca Davison

Content & Project Manager - Live Events

Gabriella Gillett-Perez

Matthew Welch

Audrey Pascual

Channel Management Team Lead

Alex Dickins

Channel Manager

Kat Sullivan

Emma Brooks

Zoe Turner

Director of Marketing Services

Nina Bernard

CEO

Dan Loosemore

Head Office

DatacenterDynamics

32-38 Saffron Hill, London, EC1N 8FH

Sebastian Moss Editor-in-Chief

The biggest data center news stories of the last three months

News

Blackstone and CPP acquire AirTrunk for AU$24bn

Blackstone has acquired APAC data center firm AirTrunk for US$16.1 billion, making it the largest-ever deal in the space.

Funds managed by Blackstone Real Estate Partners, Blackstone Infrastructure Partners, Blackstone Tactical Opportunities, and Blackstone’s private equity strategy for individual investors, along with the Canada Pension Plan Investment Board (CPP), have entered into a definitive agreement to acquire AirTrunk from Macquarie Asset Management and the Public Sector Pension Investment Board.

The deal has an implied enterprise value of over AU$24 billion (US$16.11bn) - making it the largest ever deal for a data center company. The transaction is subject to approval from the Australian Foreign Investment Review Board.

APAC-focused operator AirTrunk was founded in 2016 with plans to develop hyperscale data centers in Australia. The company opened its first facility in Sydney in 2017, and has since expanded across the region, operating and developing campuses in Australia, Hong Kong, Japan, Malaysia, and Singapore.

Jon Gray, president and COO of Blackstone, said: “This is Blackstone at its best – leveraging our global platform to capitalize on our highest conviction theme.

AirTrunk is another vital step as Blackstone seeks to be the leading digital infrastructure investor in the world across the ecosystem, including data centers, power, and related services.”

Robin Khuda, founder and CEO of AirTrunk, said: “This transaction evidences the strength of the AirTrunk platform in a strong-performing sector as we capture the next wave of growth from cloud services and AI and support the energy transition in Asia Pacific. We look forward to working with Blackstone and CPP Investments and benefitting from their scale capital, sector expertise, and valuable network across the various local markets, which will help support the continued expansion of AirTrunk.”

Macquarie and Public Sector Pension Investment Board (PSP) collectively owned 88 percent of AirTrunk - with the former the majority owner. A group led by Macquarie took control of the company in 2020 in a deal that valued it at about AU$3 billion ($1.96bn). Macquarie’s own announcement of the deal noted that CEO Khuda will also realize part of his stake.

DCD reported in January that AirTrunk was up for sale, with Macquarie and PSP having decided to dispose of their asset after shelving a previous plan for an IPO. Blackstone was first linked with buying the company a month later.

NEWS IN BRIEF

Swiss startup raises $1.85m for metal foam cooling

Apheros has announced a $1.85 million funding round to help commercialize its metal foam product. The foam is designed to cover the chip, helping it dissipate heat more effectively.

STT GDC to pilot Phaidra’s AI data center cooling control system

STT GDC plans to pilot an artificial intelligence-based autonomous control system for cooling. The company will test the system from US startup Phaidra, founded by former DeepMind developers.

King Street acquires liquid-cooling pioneer Colovore

Investment firm King Street Capital Management has acquired a majority ownership stake in Colovore, the Silicon Valley-based liquid-cooled data center specialist. The company has two all-liquid-cooled data centers in California.

Paul Allen's estate to auction off Cray supercomputers

Auction house Christie’s is auctioning off a plethora of notable early computers from the collection of Microsoft co-founder Paul Allen. The collection includes Cray-1 and Cray-2 supercomputers, as well as several mainframes, and even two early Apple computers.

Nokia to equip astronauts with 4G spacesuits

Nokia has paired with Axiom Space to equip the company's next-generation spacesuits with advanced 4G/LTE communications. The spacesuits will be used by astronauts on the Artemis III lunar mission, due to take place in 2026. The suits will capture real-time video and communicate with mission controllers.

Open Compute trials “green concrete” with hyperscalers

The Open Compute Project Foundation has announced a new collaboration with several hyperscale companies to test developing and deploying low-embodied carbon concrete, or “green concrete.” Google, AWS, Meta, and Microsoft are some of the companies participating in the project.

US gas companies in talks to power data centers

US Energy Transfer and Williams Cos. are in discussions with data center operators about the possibility of building pipelines directly to their facilities.

Within the context of the artificial intelligence (AI) boom, data center power demands are continuing to grow, and many are looking at onsite power generation as a solution, reports Bloomberg. Energy Transfer's co-CEO Marshall McCrea said in a call with analysts that the pipeline giant was in discussions with data centers of a variety of sizes, and that many of them want to generate power on-site.

The company is reportedly looking to expand currently connected plants and explore connecting with new power plants.

Williams Cos., a competitor of Energy Transfer, is seeing a similar growth in demand.

“We, frankly, are kind of overwhelmed with the number of requests that we’re dealing with, and we are trying to make sense of those projects,” CEO Alan Armstrong said, adding that this is particularly in the southeastern part of the US and mid-Atlantic.

Unsurprisingly, included within these regions is data center hotspot Virginia, which has been struggling with data center power demand.

Dominion Energy, Virginia's local power provider, said in its 2023 annual report that data centers represented 24 percent of Virginia Power's electricity sales for the year ending December 2023, up from 21 percent in 2022.

The utility said individual facility demand is growing from around 30MW to 60-90MW, and campus requests are now ranging from 300MW to “several GW.”

In January 2024, PJM Interconnection revealed that it is expecting energy demand in its transmission zones to jump almost 40 percent over the next 15 years, driven largely by data center growth.

With increasing grid constraints, exploring onsite power is often the only way a data center development will be able to get approval for projects.

Canadian energy firm TC Energy recently said it was excited by the opportunities around providing gas to data centers near its natural gas pipelines.

Natural gas provider New Fortress Energy last month launched a new data center unit, with plans to develop facilities near its pipelines and gas plants.

Amid a block on new grid connections, several data centers in Ireland looked to connect to the national gas pipelines instead. Companies also filed to build on-site gas power plants.

Amazon recently shelved plans to pump natural gas into Oregon to power its data centers, a move that would have increased the carbon footprint of the facilities, which currently run mainly on hydropower drawn from the state’s grid.

New UK government scraps exascale supercomputer plans

The UK government has shelved £1.3bn ($1.66bn) in funding for tech and AI projects that had been announced by the previous administration.

Projects impacted include the exascale supercomputer that was set to be built at the University of Edinburgh and the AI Research Resource (AIRR).

Chancellor Rachel Reeves said the previous Conservative government had left a £22bn ($28.04bn) “black hole” in public finances and that she had asked government departments to find £3.1bn ($3.95bn) in “efficiency savings.”

A spokesperson for the Department for Science, Innovation, and Technology told DCD: “We are absolutely committed to building technology infrastructure that delivers growth and opportunity for people across the UK. The government is taking difficult and necessary spending decisions across all departments in the face of billions of pounds of unfunded commitments. This is essential to restore economic stability and deliver our national mission for growth.”

The University of Edinburgh has said it will fight to secure funding for the project. Sir Peter Mathieson – Vice-Chancellor at the University of Edinburgh – has reportedly been personally lobbying ministers in an attempt to get the funding restored.

Should the system be funded and brought online, it would be 50 times more powerful than the UK’s current top-end system.

Supermicro leases 21MW at Prime facility, sublets to Lambda Cloud in $600m deal

Server maker Supermicro in June agreed to lease 21MW from a Prime Data Centers facility in Vernon, California.

At the same time, the company said that it would sublicense the space and power to startup Lambda Cloud.

In an 8-K filing, previously unreported, Supermicro said that it had entered into a Master Colocation Services Agreement at the 33MW 4701 Santa Fe data center.

That ten-year deal “is estimated to be $600m, which amount includes the monthly recurring charges, power charges, and other anticipated costs,” the filing states.

That cost will be covered by Lambda through the sublicense, along with “an additional monthly charge.”

This is the first time Supermicro has made such a deal, at least publicly. The company declined to comment to DCD, while CFO David Weigand dodged the question in its latest earnings call: "We consider ourselves experts in data center solutions. And so this is really just one more facet of being a total provider."

The company has, however, moved further down the data center supply chain as AI server sales boomed. In its earnings call, Supermicro announced that it would launch a data center ‘building blocks solution’ to help speed up data center deployments later this year.

Lambda did not respond to requests for comment.

It is not clear if, as part of the deal, Lambda agreed to buy servers from Supermicro.

The company also offers its own Lambda Scalar servers - which are based on Supermicro or Gigabyte hardware.

“Lambda focuses only on GPU use cases for AI, initially for training and now going into inference,” the company’s head of cloud, Mitesh Agrawal, told DCD for our latest AI supplement.

Lambda is believed to be courting investors for $800m to help fuel its data center expansion. It currently operates out of colocation data centers in San Francisco, California, and Allen, Texas.

Rival GPU cloud firm CoreWeave recently announced plans to lease data center space in Sweden with local operator EcoDataCenter.

CoreWeave said it has completed nine new data center builds since the beginning of 2024, with 11 more in progress.

The company expects to end the year with 28 data centers globally, with an additional 10 planned in 2025.

Cogent converts 30 Sprint switch sites to colo data centers; 18 more in works

Fiber firm Cogent Communications has completed the conversion of more than 30 former Sprint switching sites in the US into colocation data centers.

T-Mobile sold its Wireline business to Cogent Communications for just $1 in September 2022. As well as thousands of miles of fiber, the deal included more than 40 data centers totaling some 400,000 sq ft (37,160 sqm) of space and a significant real estate footprint totaling 482 technical spaces and switch sites.

Cogent said it aimed to convert the largest 45 sites into colocation data centers. That footprint totaled around 1.3 million sq ft (120,775 sqm) and 160MW, adding to the 55 facilities the company previously operated, totaling 77MW.

In its Q2 2024 quarterly results, the company said it added 31 data centers to its colocation portfolio, taking its total from 55 to 86. In total, the company said 34 of the 482 technical buildings as part of the Wireline business acquisition have been converted to Cogent data centers to date.

Cogent CEO Dave Schaeffer said the company is working on converting an additional 18 facilities.

Tract announces 1.8GW data center campus in Buckeye, Arizona

Data center park developer Tract has plans for a new campus in Phoenix, Arizona.

In August the company announced the acquisition of a 2,069-acre land parcel in the Buckeye area of Maricopa County.

The park could see up to 20 million square feet of data center space developed across as many as 40 individual data centers at full build-out.

Tract said it’s currently working with the local utility on plans to support up to 1.8GW of capacity.

Tract reportedly paid $135 million for the land, which was previously earmarked for a large residential development that never materialized.

The company previously pulled plans for a 30-building campus nearby after concerns were raised by city and county officials.

Tract is planning other large parks in Reno, Nevada; Eagle Mountain, Utah; and Richmond, Virginia totaling around 5GW and thousands of acres.

Google denied data center planning permission in Dublin

Google has been denied planning permission for another data center in Dublin, Ireland.

August saw South Dublin County Council refuse Google's request for planning permission for a new building at the Grange Castle Business Park in the south of the Irish capital.

The council cited what it called “the existing insufficient capacity in the electricity network (grid) and the lack of significant on-site renewable energy to power the data center” as reasons for refusal.

It added that the lack of clarity around Google’s Power Purchase Agreements in Ireland and the lack of a connection to the surrounding district heating network were also factors in its decision to deny the company.

Google filed to build a 72,400 sqm (779,310 sq ft) facility on a 50-acre site, adjacent to the two facilities it already has there, in June 2024. The company said the new facility, set to go live around 2027, would be powered through an existing grid connection authorization from local grid operator EirGrid.

Google first announced plans to convert a warehouse in Dublin back in 2011, with the data center going live in 2012. The company was granted permission to build a second two-story, 30,360 sqm (326,790 sq ft) facility in 2014.

Elon Musk's xAI data center in Memphis faces opposition

Health officials in Memphis, Tennessee, are being urged to investigate turbines at xAI’s data center over claims that the devices installed by Elon Musk’s company could leave residents breathing poorer quality air.

Campaign group the Southern Environmental Law Center (SELC) has written to the Shelby County Health Department stating that it believes 18 generators on the site, in the Boxtown area of South Memphis, require permits.

xAI’s data center is housed in a 750,000 sq ft (69,677 sqm) former Electrolux plant, which closed in 2022.

Musk has dubbed the site ‘the gigafactory of compute,’ and says it will eventually be home to up to 100,000 Nvidia H100 GPUs for training and running the next generation of xAI’s large language model, Grok, with a second cluster of 300,000 B100 GPUs scheduled to come online next year.

However, getting sufficient power to the site for such a powerful set-up requires significant investment.


While xAI has pledged to spend $24 million building a substation that would give it access to up to 150MW (if approved by the state grid operator), the site currently only has 7MW available from the grid.

To solve this problem, Musk has drafted in 14 mobile natural gas generators from Voltagrid, each capable of supplying 2.5MW.

The SELC letter says that four 16MW SMT-130 turbines from a company called Solar Turbines have also been brought to the site, adding to the problem of poor air quality in Shelby County.

In July, the SELC warned of “harmful consequences” to residents as a result of the data center development due to the strain it is likely to put on the Memphis power grid.

"The xAI facility is demanding a jaw-dropping 150MW of firm power by the end of 2024. To put that demand in perspective, 150MW is enough electricity to power 100,000 homes," it said in a letter to grid operator Tennessee Valley Authority requesting a review of the data center's power arrangements.

Overcommitting to industrial load “could have serious and even life-threatening consequences for residential customers in Memphis, contrary to the purpose of the TVA Act and the board policy,” the letter said. “When TVA cannot meet peak demand, families go without power during increasingly severe hot and cold weather.”

Musk’s company has also aggravated local councilors, who said they had been left in the dark about the plan for the data center, only learning about it via media reports.

Dan's Data Point

QTS has filed to build a 1.1GW campus in Blyth, Northumberland, in the northeast of England. Up to 10 data center buildings are planned on the 100-hectare campus, totaling up to 540,000 sqm (5.8 million sq ft), potentially making it the largest in the UK.

Blue Owl, Chirisa, and PowerHouse announce $5bn data center JV

Data center firms Chirisa and PowerHouse are partnering with asset manager Blue Owl for a multi-billion-dollar joint venture to build data centers for AI cloud firm CoreWeave.

Funds managed by Blue Owl Capital Inc., Chirisa Technology Parks (CTP), and PowerHouse Data Centers signed a joint venture development agreement focused on the development of large-scale AI/HPC data centers for the GPU cloud company.

The agreement is intended as the first stage of a partnership with capacity to deploy up to $5 billion

of capital for turnkey build-to-suit AI/HPC data center developments supporting CoreWeave and other hyperscale and enterprise data center customers.

The initial 120MW of capacity under the JV will be delivered for CoreWeave in 2025 and 2026 at CTP’s 350-acre campus near Richmond, Virginia.

Further deployments in the pipeline include brownfield and greenfield campuses in New Jersey, Pennsylvania, Texas, Kentucky, and Nevada.

CTP’s pipeline totals 400MW.

Eutelsat in talks with EQT over ground station network

Investment firm EQT is in talks with European satellite firm Eutelsat over a deal to acquire the latter company’s ground station infrastructure in a sale-leaseback deal.

The EQT Infrastructure VI fund has entered into exclusive negotiations to acquire a majority stake in the business.

The transaction would carve out passive assets, including land, buildings, support infrastructure, antennas, and connectivity circuits, to form a new standalone legal company.

EQT would own 80 percent of the business, while Eutelsat would remain a long-term shareholder with 20 percent and become the unit's anchor tenant. The contemplated transaction values the new entity at €790 million ($863.8m).

The ground station business consists of approximately 1,400 antennas across more than 100 locations globally, enabling satellite communications for Eutelsat Group, OneWeb, and other third-party customers.

The new unit would be rebranded after the closing of the transaction, with its headquarters remaining in France.

EQT said it would "support the continued development" of the acquired ground station business in its "journey to becoming a premier independent ground station operator globally," including through investments in new and existing antenna infrastructure and M&A-driven growth.

Carl Sjölund, partner within the EQT value-add infrastructure advisory team, said: "At EQT, we identified satellite ground stations as an attractive digital infrastructure vertical several years ago. They play an important role in ensuring global connectivity, especially for those not covered by fixed and mobile connectivity solutions, and require deep global expertise in developing and operating telecommunications infrastructure businesses."

Reports Eutelsat was mulling the sale of its ground station network surfaced earlier this year. Paris-based Eutelsat has reportedly been reviewing its strategy since its merger with rival provider OneWeb, first announced in 2022, closed last year.

Eva Berneke, Eutelsat group CEO, added: “We are proud to become the first satellite operator to embark on this innovative transaction which would allow us to build on the model adopted in other industries, and to optimize the value of our extensive ground network.

"This transaction would represent a win-win situation for all parties, and would enable Eutelsat to strengthen its financial profile, whilst continuing to rely on the unparalleled quality and reliability of its ground infrastructure. Moreover, we are confident that with the backing of EQT, the business would be in a position to fully embrace the opportunities opening up to it as the new global leader in this dynamic sector."

Though Eutelsat has previously admitted OneWeb’s Low Earth Orbit (LEO) network was “running behind schedule” due to ground station delays, OneWeb has seen a number of sites go live this year.

Wyoming Hyperscale becomes Prometheus Hyperscale, expands

A 120MW data center planned for farmland in Wyoming will now have an IT capacity of 1GW after the company developing the site, Wyoming Hyperscale, was merged into a new entity, Prometheus Hyperscale.

Prometheus has also revealed it plans to construct four other data centers across Arizona and Colorado.

The new company will be led by Trenton Thornock, the founder and CEO of Wyoming Hyperscale Whitebox, with Trevor Neilson, an experienced climate tech executive and investor, joining as president.

"Prometheus Hyperscale is redefining data center infrastructure," said Thornock. "Our innovative approach, which combines cutting-edge technology with a commitment to sustainability, positions us at the forefront of the industry. I am excited to work alongside Trevor, whose leadership and vision will be instrumental as we scale our operations globally."

Wyoming Hyperscale has been building a data center campus on 58 acres of land on Aspen Mountain, a remote site southeast of Evanston in Wyoming.

The company has claimed the facility will be “the most advanced sustainable data center in the United States” once up and running, and has already pledged to utilize liquid cooling, with waste heat put to use on a nearby farm.

Capacity at the data center had been expected to be 120MW, but this has now been revised up to 1GW. In May it agreed to a deal to buy 100MW of energy from small nuclear reactor startup Oklo.


Microsoft adopting DTC liquid cooling, exploring microfluidics

Microsoft is adopting direct-to-chip liquid cooling and exploring the potential of microfluidics.

In a recent piece detailing the company’s water use and cooling ambitions, Microsoft outlined some of its latest moves to adopt liquid cooling.

"To harness the increased efficiency cold plates offer, we're developing a new generation of data center designs optimized for direct-to-chip cooling, which requires reinventing the layout of servers and racks to accommodate new methods of thermal management as well as power management," the company said.

As well as the new design, Microsoft noted it is using its sidekick liquid cooling system in its existing data centers.

The company first showed off the sidekick last year when it announced its Cobalt CPU and Maia AI Accelerator chips.

Maia chips will be deployed in a custom-designed rack and cluster known as Ares, SemiAnalysis reports. The servers are not standard 19” or OCP and are reportedly “much wider.”

Ares will only be available as a liquid-cooled configuration, requiring some data centers to deploy water-to-air CDUs. It will only be deployed in Microsoft-owned or -leased data center space, and not for external sale.

Each server features four Maia accelerators, with eight servers per rack. In the sidekicks, cooling infrastructure is located on the side of the system, circulating liquid to cold plates.

The company also noted it is working on microfluidics cooling technologies. Still a nascent technology, microfluidics brings cooling inside the silicon by integrating tiny fluid channels into chip designs, embedding the liquid cooling inside the chip and bringing the coolant right next to the processors.

Further details about Microsoft’s microfluidics research weren’t shared.

“Our newest data center designs are optimized to support AI workloads and consume zero water for cooling,” the company said.

“To achieve this, we’re transitioning to chip-level cooling solutions, providing precise temperature cooling only where it’s needed and without requiring evaporation,” Microsoft added.

“With these innovations, we can significantly reduce water consumption while supporting higher rack capacity, enabling more compute power per square foot within our data centers.”

The thermal design point (TDP) of semiconductors is increasing rapidly, particularly with AI accelerators. Maia has a TDP of up to 700W, but Nvidia and others are working on processors that break the 1kW level.

As that continues to grow, Microsoft and its rivals are looking to find ways to keep cooling ahead of the curve.

We profiled microfluidics in a cooling supplement earlier this year. Read it today: (bit.ly/Chipmicrofluidics).

Verizon to acquire Frontier Communications for $20bn

Verizon has confirmed it will acquire Frontier Communications for $20 billion in an all-cash transaction.

The deal is set to bolster Verizon’s fiber network, allowing it to compete with rival telco AT&T.

Under the agreement, Verizon is set to buy Frontier for $38.50 per share in cash.

Verizon stated the deal would also expand its intelligent Edge network to include digital innovations like AI and IoT.

Frontier provides broadband connection to around 7.2 million locations across 25 states and has 2.2 million fiber customers.

At present, Verizon provides fiber services via its Fios offering. The company has around 18 million fiber locations across the country.

Combined, Verizon and Frontier have approximately 10 million fiber customers across 31 states and Washington D.C. with fiber networks passing more than 25 million premises.

Frontier had previously acquired Verizon’s rural fixed-line assets for $6.8bn in 2010, covering 14 states, before snapping up operations in California, Florida, and Texas for $10.5bn in 2015.

WE ARE AI READY, ARE YOU?

Legrand, Your AI Infrastructure Ally. POWERED BY EXPERT BRANDS. www.legrand.com/datacenter

Chasing the last nanosecond

The millions being invested and made in the pursuit of speed in high-frequency trading

Georgia Butler, Reporter

Speed is relative.

Depending on the context, ‘fast’ can mean different things. Usain Bolt can run 100 meters in 9.58 seconds - high-frequency traders can trade billions in a fraction of a blink.

“You get used to really big numbers at really crazy speeds,” says Dave Lauer, a high-frequency trading veteran.

Previously a quantitative research analyst at Citadel Investment Group, and responsible for developing their trading models, Lauer has also worked for Allston Trading, and went on to work at Better Markets researching high-frequency trading and its impact on the market, before going to the IEX stock exchange.

High-frequency trading is a method of algorithmic trading in which large volumes of shares are bought and sold automatically, within a fraction of a second.

“These are speeds that your brain can’t really comprehend. We had strategies that would be millions of dollars in milliseconds that would go in and out. And sometimes, it could go wrong, and you would have to dissect things to try and figure out, what caused us to lose millions in that moment?”

For many, stock exchanges bring to mind images of cramped rooms full of shouting and hand signals used to conduct trades. But these images are, at this point, archaic.

By the 80s and 90s, phone and electronic trading became the norm, and those traders were able to finally put their hands back down. Today, trading floors are somewhat symbolic. Stock markets are instead a network of data centers and databases, and trades are all made via the Internet.

With the phasing out of open outcry trading, and the introduction of new technology - notably when NASDAQ introduced a purely electronic form of trading in 1983 - High Frequency Trading (HFT) gradually developed, and has continued to get quicker.

“HFT isn’t really a trading strategy, it's a technology and an approach,” explains Lauer. “It powers lots of trading strategies.

“HFT is using the fastest computers and networking equipment in a distributed manner to buy and sell anything really fast. It doesn't just have to be equity markets or futures markets like crypto. Generally speaking, it's a way to arbitrage markets,” he says.

Arbitrage is the simultaneous purchase and sale of the same asset in different markets in order to profit from small differences and fluctuations in the asset's listed price.

An example Lauer offers is slow market arbitrage - which is a form of what he calls 'structural trading.'

"A lot of exchanges have order types called midpoint peg orders. They fill the midpoint of the National Best Bid and Offer (NBBO). If you, as an HFT firm, know that the NBBO has changed before the exchange does, then you can pick off all the stale midpoint orders and when the NBBO changes, you make money."

HFT firms aren't just trying to be faster than the exchange, however. They are also in constant competition with one another to be first in line.

It is this competition that has led to traders around the world constantly innovating their technology, scrounging to save a microsecond or nanosecond at every level of their networking equipment, computing hardware, and software.
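To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of the logic Lauer describes. The prices and the idea of a separate "fast feed" are assumptions for illustration only; real systems work on raw exchange feeds, often in hardware, at nanosecond scales.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Quote:
    bid: float
    ask: float

    @property
    def midpoint(self) -> float:
        return (self.bid + self.ask) / 2

def stale_midpoint_opportunity(fast_view: Quote, exchange_view: Quote) -> Optional[str]:
    """Illustrative slow-market arbitrage check: if our faster view of the NBBO
    has already moved, but the exchange is still pegging midpoint orders to its
    stale view, those resting orders can be picked off."""
    stale_mid, fresh_mid = exchange_view.midpoint, fast_view.midpoint
    if fresh_mid > stale_mid:
        return f"buy the stale midpoint order at {stale_mid:.3f}; fair value is now {fresh_mid:.3f}"
    if fresh_mid < stale_mid:
        return f"sell to the stale midpoint order at {stale_mid:.3f}; fair value is now {fresh_mid:.3f}"
    return None

# The NBBO ticks up on the fast feed before the exchange's peg catches up.
print(stale_midpoint_opportunity(Quote(10.02, 10.04), Quote(10.00, 10.02)))
```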

There are stock exchanges all over the world, but much of the trading that occurs is concentrated in what is known as the “New Jersey Triangle,” and Chicago.

“The New Jersey Triangle is referring to an area where several exchanges and their data centers are concentrated. You’ve got NYSE which is located in Mahwah, and then you’ve got the Carteret data center which houses NASDAQ, and then you’ve got Secaucus where all the other exchanges - anything that isn’t NYSE or NASDAQ - are located,”

Cliff Maddox, director of business development at FPGA (specialized hardware that can be optimized for HFT) company NovaSparks, tells DCD.

"Something like 60 percent of global trading volume happens in those three data centers (and Chicago), and it's gone up substantially over the last 15 years, from something like 33 percent to 60 percent."

Those data centers all host colocated trading firms - at a premium - and are connected to one another via cables or, in other cases, microwave or laser-based communications.

A look to the past

Throughout DCD’s investigation into HFT, one thing was consistently mentioned by all interviewees: Flash Boys.

Flash Boys, written by Michael Lewis and published in 2014, in many ways exposed HFT and brought it into public consciousness. Its reception was, to say the least, mixed.

The book's overarching message was that technological changes to trading, along with unethical trading practices, had led to the US stock exchange becoming a rigged market. Naturally, many financial institutions - and those engaging in high-frequency trading - criticized the book and its claims.

Notably, Manoj Narang, CEO of high-frequency trading firm Tradeworx, argued that Lewis' book was more "fiction than fact."

Lauer, when discussing the book with DCD, surmises "he got a lot right, and a lot wrong."

"He made some basic factual errors. But those who are nitpicking what is and isn't true are missing the point of the book. The book is about conflicts of interest and the fullness of this activity," says Lauer. "You talk to people in HFT, and they'll tell you that book is pure fiction. But I lived that. The book is not a fiction like that."

Towards the beginning of Flash Boys, Lewis writes about the indisputably real 'Spread Networks' line, a cable that connects the Chicago Mercantile Exchange and a data center in Carteret, New Jersey, beside the NASDAQ exchange. It has since been expanded to the Equinix NY4 IBX data center in Secaucus, and another in Newark.

The line was created in near total secrecy in 2010 (which has since been fictionalized in the movie The Hummingbird Project) and was developed under the premise that a straight cable would be quicker than the current connections, giving firms who leased bandwidth upon it an inherent advantage. That advantage was in the realm of a single millisecond.

The first cable cost around $300 million (at the time) to develop, and involved carving out a straight line between the two data centers, cutting through mountains and other complicated terrain. When it was developed, the first 200 stock market players who were willing to pay in advance for a five-year contract received a discount: $10.6 million, instead of $20m.

"He made some basic factual errors. But those who are nitpicking what is and isn't true are missing the point of the book. The book is about conflicts of interest and the fullness of this activity. You talk to people in HFT, and they'll tell you that book is pure fiction. But I lived that. The book is not a fiction like that"

>>Dave Lauer

All in all, this was around 10 times the price of standard telecoms routes. But with millions being traded in moments, and the threat that hundreds of traders might now be ahead of you in the line, that expense became worthwhile.

This was, however, 14 years ago, and in an industry where change happens at a much more rapid pace than usual technology upgrades might be elsewhere.

With the entire sector chasing that millisecond or even nanosecond reduction, new solutions are always being sought out.

At some point, it was realized that fiber optics do not actually carry signals at "the speed of light" over a straight distance - light travels roughly a third slower through glass than through air, and the signal also bounces around inside the cable itself. This led to the use of microwaves.

Usman Ahmad, chief data scientist and quantitative developer at Acuity Knowledge Partners, a research and business intelligence firm in the financial sectors, tells DCD that microwaves were really “phase two” of data transmission in the HFT industry.

“Microwaves travel through the air, and you can get a straight line directly from tower to tower. So you have a microwave emitter that transmits data, and a detector on another tower high up and it's a nice straight line and can transmit data faster than fiber optics,” explains Ahmad. “But they are unstable by themselves. Adverse weather conditions can affect them, for example.”
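A rough back-of-the-envelope comparison shows why that matters. The route length below is an assumption for illustration, but the physics is standard: light in silica fiber travels at roughly two-thirds of its vacuum speed, while microwaves through air travel at very nearly the vacuum speed.

```python
C = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_ms(distance_km: float, speed_km_s: float) -> float:
    return distance_km / speed_km_s * 1_000

route_km = 1_200              # assumed straight-line Chicago-to-New Jersey distance
fiber_speed = C / 1.47        # typical refractive index of silica fiber (~1.47)
air_speed = C * 0.9997        # microwaves in air travel at almost exactly c

print(f"fiber: {one_way_delay_ms(route_km, fiber_speed):.2f} ms one way")  # ~5.9 ms
print(f"air:   {one_way_delay_ms(route_km, air_speed):.2f} ms one way")    # ~4.0 ms
# A gap of well over a millisecond - an eternity by HFT standards.
```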

Microwave radio services started being offered by McKay Brothers and Tradeworx in 2012.

It is highly impractical for traders to be knocked offline simply because it is raining; thus, while microwave links were phased in, fiber optics were still used for redundancy purposes.

Microwaves were phase 2, but it is when we talk about phase 3 that Ahmad gets truly excited.

“What they are doing now - and I’m honestly really wowed by the technology - is they are using something a bit better and with a shorter frequency, mmWave or millimeter wave.”

MmWave refers to electromagnetic waves that have a wavelength between 10mm and 1mm, a frequency between 30GHz and 300GHz, and lie between the super high-frequency band and the far infrared band.
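Those band limits follow directly from the relation wavelength = c / frequency, as a quick check shows:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_mm(frequency_ghz: float) -> float:
    return C / (frequency_ghz * 1e9) * 1_000  # meters to millimeters

print(f"{wavelength_mm(30):.1f} mm at 30GHz")    # ~10.0 mm
print(f"{wavelength_mm(300):.1f} mm at 300GHz")  # ~1.0 mm
```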

But one of the biggest obstacles with the mmWave method is that, while it has a higher bandwidth, it travels shorter distances.

Joe Hilt, CCO at Anova Financial Networks, a major player in this newer method of transmitting trading data, explains the predicament to DCD: “Microwave technology goes a longer distance but has less bandwidth - say 550 megabits. Then, on the other hand, mmWave can give a customer, say, 1 gigabit of bandwidth, but it needs more ‘hops’ along the network.”
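A simple way to picture the trade-off Hilt describes: longer-reach links need fewer hops, and every hop adds a little regeneration delay. The hop reaches and per-hop delay below are assumptions for illustration, not Anova's figures.

```python
import math

def hops_needed(route_km: float, hop_reach_km: float) -> int:
    # Number of relay sites needed to carry the signal end to end.
    return math.ceil(route_km / hop_reach_km)

route_km = 60        # assumed metro route length
per_hop_us = 1.0     # assumed regeneration delay per hop, in microseconds

for name, reach_km in [("microwave (longer reach)", 30), ("mmWave / free space optics", 10)]:
    hops = hops_needed(route_km, reach_km)
    print(f"{name}: {hops} hops, ~{hops * per_hop_us:.0f} us of added hop delay")
```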

“Anova is a little different in that we do use microwave, we do use millimeter wave, and then we do something called free space optics,” explains Hilt.

Anova began using the free space optics technology “about eight or nine years ago.” The solution has previously been used by the Department of Defense, for example for fighter jets to communicate with one another.

Free space optics - where data is transmitted through ‘free space’ as opposed to via a solid material like optical fiber cables using laser beams - are also being used by Google's parent company Alphabet in its Taara project. Taara is using the technology to bring high-speed Internet access to rural and remote areas including in India, Australia, Kenya, and Fiji, and can transmit information between terminals at speeds of up to 20 gigabits.

“It’s a laser technology, it's not different almost than what you use to turn your TV on and off, except we are pushing it to 10 kilometers,” he says.

“Something like 60 percent of global trading volume happens in those three data centers (and Chicago), and it's gone up substantially over the last 15 years, from something like 33 percent to 60 percent”
>>Cliff Maddox NovaSparks

The three technologies are all used, and when it comes down to it, the measure of their advantage lies in how close they get to the data centers. HFTs are trying, according to Hilt, to use as much radio and as little fiber as possible, so those receivers need to be on the roof of a data center, or a pole next to it.

Conceptualizing these transmission methods is relatively simple. Deploying them is anything but.

Free space optic lines need to be deployed within the line of sight. In other words, while each hop might be 10km apart, there cannot be any obstacle in the way between them.

Before setting up, Anova will look over the route and see what could get in the way, be it trees or a building. "If a building's in the way, our first point of call would be to see if we can lease space on the roof," says Hilt. "We'd put the mast right there and take the signal up over the site and down the other side. That's the fastest way around it."

Every 10 kilometers, there is a 'hop,' or a transmitter and receiver which takes the message and passes it on to the next hop until eventually the data meets its end location.

Anova takes charge of every element of these networks. "We make the radios, we install them, and we maintain them. Because of this, we've been able to get to a place where our radios can now do 10 gigabits of bandwidth instead of just one. This is major because we are in a market where customers previously would buy 10, 20, or 50 megabits, and they can now buy one gigabit of bandwidth, which is almost as if they had built their own network."

As part of its maintenance operation, Anova will "walk" the route regularly and check if there are any new builds coming - anything from a real estate perspective that may mean they need to adjust.

HFT firms will typically lease bandwidth on Anova's network, though on occasion the company is hired to build a private network for a client. According to Hilt, this only ever occurs on a "metro basis" - for example connecting NASDAQ to NYSE - not for long-haul routes.

Regardless of whether microwave, mmWave, or free space optic technology is used, once the data arrives at the data center, it travels through fiber optic cabling.

At this point, there is an advantage in where the servers doing the trades are actually located in the data center - the shorter the cable, the lower the latency, after all. This led to HFT firms battling over the prime real estate that would bring their servers closer to the matching engine.

The flash crash

Following the 2010 Flash Crash, regulations that limited HFT latency-reducing techniques crept in.

HFT’s role in the Flash Crash is indisputable - though it was not the initial trigger.

May 6, 2010, was a clear and warm day in New York. For those not on the trading floor, it would have seemed like any other Thursday - the weekend was approaching and the weather was optimistic. By 1:00pm, things had taken a turn for the worse.

The crash has to be remembered in context. May 6 began with some nervousness in the market. Traders would have awoken to news of a troubling political and economic market across the pond in Europe, leading to widespread negative market sentiment.

In addition to this, a mutual fund trader had initiated an unusually large, and flawed, program to sell 75,000 E-Mini S&P 500 futures contracts, valued then at $4.1 billion. The algorithm used to manage the sale had mistakenly only accounted for volume, but not time or price. The sale, as a result, was executed in just twenty minutes, as opposed to the five or so hours that would normally be expected.

HFT algorithms, by their nature focused on being as fast as possible, were naturally the first buyers of the contracts, only to sell them again simultaneously, creating what the Commodity Futures Trading Commission (CFTC) and the Securities and Exchange Commission (SEC) official report called the "hot-potato" effect.

Ultimately, this pushed prices down at a rapid rate, and buyers were then either unable or unwilling to provide buy-side liquidity.

Other market participants began withdrawing from the market and pausing trading. Official market makers used “stub quotes” in which they reduce bids in shares to $0.01, and increase asks to the maximum of $99,999.99, which prevents trades.

Share values became entirely unpredictable, with some running down into practically nothing, and others skyrocketing only to spin back to earth without clear reason.

Stability was achieved by 3:00pm. Later, after the markets closed, the Financial Industry Regulatory Authority (FINRA) met with the exchanges, and they agreed to cancel the trades that had been executed at the most extreme prices during the Flash Crash.

While this can be seen as a “no harm, no foul” situation, what it demonstrated was the potential for mass instability in the market directly related to the use of technology for trading.

The process of regulating HFT has been slow, to say the least. There was a call for greater transparency immediately following the 2010 crash, but HFT firms are naturally unwilling to give away what is effectively intellectual property. The details of their strategies are the entire basis of their success and to publicize that is to lose their edge.

In 2012, a new exchange was founded - the Investors Exchange (IEX) - which was designed to directly mitigate the negative effects of high-frequency trading.

The IEX matching engine is located across the Hudson River in Weehawken, New Jersey, while its initial point of presence is in a data center in Secaucus, New Jersey. Notably, the exchange has a 38-mile coil of optical fiber in front of its trading engine, which results in a 350-microsecond delay, designed to negate some of the speed advantages. The NYSE has subsequently adopted a similar “speed bump.”
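A quick sanity check of that figure using the standard propagation formula, delay = length × n / c, with a typical fiber refractive index (an assumption, not IEX's published spec):

```python
C_KM_S = 299_792.458           # speed of light in vacuum, km/s
coil_km = 38 * 1.609344        # IEX's 38-mile coil, in kilometers
n_fiber = 1.47                 # typical refractive index of single-mode fiber

delay_us = coil_km * n_fiber / C_KM_S * 1e6
print(f"~{delay_us:.0f} microseconds")  # ~300 us - the same order as the quoted 350 us
```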

Speaking on IEX, Lauer explains: “Everyone, regardless of where you're located, gets the same length of a cable, so now everyone's on an even playing field from a latency perspective within the data center.

“The idea with IEX, which is an important one, is that the exchange is always going to be faster than its fastest participants. And that's what really differentiates it from the other exchanges which say that HFT is going to be faster than they are.”

The current market structure is made up of self-regulatory organizations and stock exchanges that are also for-profit and publicly traded. But the way exchanges make money has changed, explains Lauer.

“Exchanges don’t really make much money from trading anymore, most of it comes from data and connectivity, and from selling private feeds to HFT firms. Then there is a massive public subsidy that is paid to exchanges because of the SIP public data feed.”

According to Lauer, a big part of this shift was caused by exchanges paying rebates for passive orders and charging fees to aggressive orders, which resulted in significant spread compression (a narrowing of the gap between the best bid and the best offer).

IEX as an exchange takes a different approach, charging much more than other exchanges to trade, but close to nothing for data and connectivity, more or less just what their costs are.

Regardless, with each exchange to an extent creating its own regulations, the real impact on “leveling the playing field” for HFT has been limited.

Lauer says of this: “[The exchanges] are beholden to their biggest customers because they have to make quarterly earnings, and they make choices that are not necessarily in the interests of fair and efficient markets, which is what their self-regulatory responsibility is supposed to be.”

Inside the data center

Regardless of the exchange's stance on latency standardization, there are still gains to be made inside of the data center, both through the compute and the software.

In the cloud computing sector, major hyperscalers such as Amazon Web Services, Google, and Microsoft have increased their server life span to six years in order to reduce operational costs and reach Net Zero goals.

In comparison, Lauer tells DCD: "We were getting new servers every six to 12 months. That was the refresh cycle. Some of those would be repurposed as research clusters, but sometimes it would be like, 'what are we going to do with all this?' and we'd sell them. At one point, I had in my house six rack-mounted top-of-the-line servers where I was running a research cluster just for fun."

"With HFT currently, the majority of that behavior and activity in the markets is from the top maybe 10 to 15 firms in the US that are doing that. They dominate the markets these days; you're not going to see a tiny startup come in"

>>Christina Qi

The simple reason for these rapid refreshes is that should a piece of equipment come out that can trade faster than the one you are currently using, then it is time for an upgrade.

This constant refresh cycle and high initial capital investment can make it challenging for new entrants to get into HFT.

Christina Qi was an undergraduate student at the Massachusetts Institute of Technology when she began trading out of her dorm room.

“I was trading from my dorm room, doing quantitative trading which was using fundamental analytics to guess whether stocks would go up or down each day,” she says.

“We were using a mathematical basis, looking at the history of different prices or the relationship between different stocks, and whether they would go up or down, and that’s when we realized that the faster you are in that space, the more advantage you would have.”

Qi went on to found Domeyard LP, a high-frequency trading firm which, at its peak, traded billions in volume daily. The firm operated between 2012 and 2022 but, as Qi notes, increasingly found it hard to compete and scale.

"With HFT currently, the majority of that behavior and activity in the markets is from the top maybe 10 to 15 firms in the US that are doing that. They dominate the markets these days; you're not going to see a tiny startup come in. Back then, we were rare in that we ended up trading for many years, but it took us a long time to get started, and it was only after seven or eight years that we started trading billions."

Eventually, Domeyard began to face constraints. With higher frequency, the capacity the firm could trade was lower. “We ended up with a waitlist of investors that wanted to come in, and we couldn’t take any of them on. It was really frustrating. We reached a point where we would have had to branch out - for example doing longer-term trading, private equities, or real estate. But we had hyperfocused and put all our money into the basket of HFT.”

When the pandemic hit, Domeyard struggled, before eventually winding down its operations in 2022.

When asked if the firm ever struggled in the face of regulations, Qi says "not really."

"There were genuine regulations, but most of them were just reporting requirements, or sometimes it was fundraising requirements, like 'do we need to have another disclaimer in our pitch deck for example.' But there was nothing that would shut down high-frequency trading."

Qi experienced a similar level of hardware refresh cycles and excess that Lauer spoke of, noting that Domeyard would at times have FPGAs lying around the office - she jokingly adds that they would have been easy to steal, and are expensive pieces of hardware.

Servers, FPGAs, and more

The hardware primarily used by HFT firms is known as the Field Programmable Gate Array (FPGA). These are integrated circuits that provide customers with the ability to reconfigure the hardware to meet specific use case requirements.

Cliff Maddox, from NovaSparks, a major provider of FPGAs to the HFT sector, tells DCD that the benefit of an FPGA is that it can get "dramatically better latency than you can by writing it in software, and a better power footprint."

An FPGA differs from a general purpose chip like a CPU or GPU in that it has one specific purpose. Maddox explains with a comparison to the Intel 8088 processor from the late 1970s.

“Fundamentally, you could give it [Intel 8088] any instruction you wanted. It might have been slower than those today, but it could do lots of different things. Computers, in general, continue to follow that instruction set - they’ve added new features, the clock speed has gone up massively, and there are more cores. But we are approaching a limit of what's possible with silicon,” says Maddox.

“Moore's Law hasn't changed much, and Moore's Law has kind of died. You're not getting double the power performance ratio every year that you used to, that's gone away. So now the only way to really expand your scale is going to specialized physical architecture.”

The FPGA is a chip that you can flash a design onto, meaning the chip does just the specific thing you want it to. It is optimized at a hardware level.
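For a sense of the kind of narrow, fixed-function work that gets moved into FPGA logic, here is a purely conceptual Python sketch of a feed-handler decode step. The message layout is hypothetical - real exchange formats differ - and on an FPGA this would be a wired pipeline rather than code.

```python
import struct

# Hypothetical fixed-layout market data message: symbol (8 bytes),
# price in ten-thousandths (uint64), size (uint32), side (1 byte).
MSG = struct.Struct("<8sQIc")

def decode_update(raw: bytes):
    """Software stand-in for the fixed-function decode an FPGA does in hardware:
    no branching on message variety, just fixed offsets."""
    symbol, price_e4, size, side = MSG.unpack(raw)
    return symbol.rstrip(b"\x00"), price_e4 / 10_000, size, side == b"B"

# In hardware this completes in a handful of clock cycles, every time;
# in software it competes with caches, interrupts, and scheduler jitter.
raw = struct.pack("<8sQIc", b"ACME", 1_234_500, 200, b"B")
print(decode_update(raw))  # (b'ACME', 123.45, 200, True)
```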

NovaSparks takes on the responsibility of finessing the FPGA for trading firms: “We sit there and optimize the hell out of it every year. We just keep making it faster, adding little tweaks to whatever it is people want and ultimately try and convince them to not build it themselves but get us to do it for them.”

Maddox similarly notes that there are around 10 firms that have the resources available to throw “everything” at optimizing this (citing Citadel as an example), but it would cost them several times more to do it themselves rather than just contracting NovaSparks to handle this.

“The idea with IEX, which is an important one, is that the exchange is always going to be faster than its fastest participants. And that's what really differentiates it from the other exchanges which say that HFT is going to be faster than they are”
>>Dave Lauer

The company offers 50 chips with different parameters to them - so HFTs can select those that will match their trading strategies - and do regular tweaks to improve them. Major overhauls are less frequent.

“We did a physical upgrade back in 2019 to a chip that had been around for a year and a half at that point. We look at every chip that comes out, but there hasn't been one that is a significant improvement to make a full upgrade worthwhile,” explains Maddox.

Regardless, through those iterative tweaks and adjustments, NovaSparks has improved latency to that chip by around 25 percent. The power consumption is also drastically reduced, because the FPGA is only ever doing the one thing it is optimized for.

The company is open to new solutions. “We would like to find a new chip. There are a lot of ways they are improving. For example, they’re improving the interfaces, so the 10-gig interface for Ethernet has gotten better. But when you add up all those adjustments and bring it to our scale, we haven't found something that is better enough overall to make that leap yet, but I think it's coming close.”

Maddox half-jokingly notes that NovaSparks isn’t too keen to overshare future plans - referencing the “Osborne effect.”

“The Osborne computer was something from the early 80s. When Osborne put out the computer, everybody thought it was great, and then Osborne started talking about the next computer, so no one bought the current Osborne because they were waiting for the next.”

Beyond the FPGA, there is an even more specialized version known as an ASIC - application-specific integrated circuit.

Maddox describes the difference like this: an FPGA is an Etch A Sketch of a chip, while an ASIC is one you get made from scratch. “That could cost something like $30 million to spit out,” says Maddox. While more specialized, making an upgrade or adjustment would take months, something that isn’t necessarily in line with HFT’s faster-is-better philosophy.

While the FPGA seems to dominate with those companies at the front of the HFT sprint, more general chips still have a role to play.

Blackcore Technologies specializes in servers optimized for HFT - their speed increased by “overclocking.”

“We take the relatively standard hardware, and push it beyond its standard manufacturing specifications,” James Lupton, CTO of Blackcore, tells DCD.

“Overclocking is quite a long-standing thing in the IT industry. I think it's human nature to take something and try to push it faster. Ever since we first had computing, there were stories of people taking a 100 megahertz processor and trying to push it to 200 megahertz.”

To cater to HFTs, as well as gamers and other use cases, some CPUs are released with overclocking enabled, with CPU makers working in tandem with motherboard vendors. “It enables you to go in at a low level and adjust certain variables within a motherboard and CPU to increase the clock speed and also other technical things like the cache speed or memory speed.”

Blackcore has learned, over the eight years it has been in the market, how to walk the line between making the server faster and not pushing it too far.

“A CPU sold is intended to work to a specific specification - the thermal and power and everything is designed around the specification that it shipped with. So when we start to play with them, you're putting more power in than they were maybe necessarily designed for.”

Lupton adds: “They're also going to be generating more heat than they were before. So we have to be very selective with making sure we've got ancillary components with a motherboard that can provide that power to the CPU safely, and that we've got the right cooling systems in place to be able to cool the extra heat that's generated from overclocking.”

As is standard with high-density racks, this is liquid-cooled. Blackcore offers a complete rackmount server with a self-contained liquid cooling loop inside the chassis itself, meaning customers don’t need any additional data center infrastructure and the servers can be deployed in a standard air-cooled rack.

Blackcore is angling for customers who are relying on a software-based trading algorithm. Lupton acknowledges that FPGAs provide a lower latency solution, but notes the higher barrier to entry.

“It’s very, very fast, but it reduces the complexity you have - you can’t cover every single strategy in that one scenario, so a lot of clients end up needing the fast CPU as well for other strategies or for reprogramming the FPGA in real-time.”

ASICs, notably, cannot be changed once they are manufactured and are more expensive, even if they do offer lower latency still.

HFT: A valuable contribution to society, or a drain on resources?

The sheer quantity of money and brain power that is put into HFT and shaving off that last nanosecond is monumental, and those who stand to benefit from the technique will maintain that it has a positive impact on the economy and wider society.

They will also fight, tooth and nail, against any move that seems to damage their efforts with latency arbitrage.

In fact, the firms engaging in these practices go so far as to deny their existence.

Share values became entirely unpredictable, with some running down into practically nothing, and others skyrocketing only to spin back to earth without clear reason

According to Lauer, following the publication of Flash Boys, many firms would argue “latency arbitrage used to happen, but it's not happening anymore.” But when IEX attempted to bring in a new stock order type that would mitigate some of those latency-based advantages, Citadel actually sued the SEC in 2020.

For context, it is worth noting that Citadel was fined $22.6 million by the SEC in 2017 over charges that its business unit handling retail customer orders from other brokerage firms made misleading statements to them about the way it priced trades.

The stock order type was known as the “D-Limit order.” In simplistic terms, the D-Limit uses the same math that underlies many high-frequency trading models to predict when a price change is imminent and, when that signal is triggered, prevents trades from occurring until the resting orders have been repriced, removing the opportunity for HFTs to take advantage.
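The mechanics are easier to see in miniature. Below is a purely illustrative sketch, in Python, of the general idea: a signal that guesses when the quoted price is about to move, and a resting buy order that is repriced one tick less aggressively while that signal is firing. The signal, thresholds, and prices are invented for explanation - this is not IEX's actual model.

```python
# Illustrative toy only: the general idea behind a D-Limit-style order,
# not IEX's real signal or repricing logic.
from collections import deque

class ToyInstabilitySignal:
    """Fires when the best bid has ticked down repeatedly in a short window,
    suggesting the quoted price is about to move."""
    def __init__(self, window=5, min_downticks=3):
        self.bids = deque(maxlen=window)
        self.min_downticks = min_downticks

    def update(self, best_bid):
        self.bids.append(best_bid)
        pairs = zip(list(self.bids), list(self.bids)[1:])
        downticks = sum(1 for earlier, later in pairs if later < earlier)
        return downticks >= self.min_downticks  # True = quote looks unstable

def reprice_buy_order(limit_price, tick=0.01, unstable=False):
    """While the signal fires, rest the buy order one tick lower, so a faster
    trader cannot pick it off at a price that is about to become stale."""
    return round(limit_price - tick, 2) if unstable else limit_price

signal = ToyInstabilitySignal()
unstable = False
for best_bid in [10.05, 10.04, 10.03, 10.02, 10.02]:
    unstable = signal.update(best_bid)

print(reprice_buy_order(10.02, unstable=unstable))  # 10.01 while the quote looks unstable
```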

If latency arbitrage were ‘no longer used,’ this would not be a problem. But to HFT firms it clearly was - hence the Citadel lawsuit, in which the firm claimed the change would harm retail investors by delaying trades.

It should be added that the SEC and firms representing retail investors and their interests were unanimously in favor of the proposal. Citadel lost the suit in 2022.

Citadel also led a suit seeking to have the Consolidated Audit Trail - a database that in May 2024 began collecting almost all US trading data, giving the SEC insight into activity across the market - made illegal.

Citadel, while seemingly a leading force in the fight against regulating these financial markets, is not alone in professing its belief that these trading methods have some sort of benefit.

In an opinion piece, Max Dama, a quantitative researcher at Headlands Tech, kicks off with: “There is a common misunderstanding, even among practitioners, that low-latency trading is a waste of human talent and resources that could instead go to advancing physics or curing cancer.”

Dama goes on to claim that HFT benefits markets by reducing bid-ask spreads, thus reducing the cost of trading, and by speeding up the handling of supply and demand negotiations to uncover “true” prices.

Dama notes the disillusionment within the industry itself, stating that those who feel that way typically fall into two groups: “People working at smaller, less-profitable firms that worry their lack of traction is due to latency being an ‘all or nothing game;’ and second, people at big firms that feel their micro-optimization work maintains a moat for another year or two but will be replaced shortly and not have any lasting value to humanity.

“They both think, isn’t it wasteful to invest so much in ASICs (he goes on to note that few did opt for the ASIC approach) in a race to the bottom to shave off five nanoseconds to capture a winner-takes-all profit in a zero-sum game? Isn’t it a misallocation of resources for so many companies to build something, for only one to win, and the rest to have squandered resources and time?”

While Dama goes on to dispute these claims, well, aren’t these valid questions?

“It’s a set of really brilliant people that could be doing a lot of very different things in this world,” Lauer says. “It's a damn shame.

“These are geniuses - some of the smartest people in the world. And instead of solving real problems, this is what they're doing.

“They're trying to eke out a couple of nanoseconds so that they can make rich people a little richer. It’s a really poor use of our collective intellect.”

De-risking growth in the AI data center boom

Kao Data’s CEO talks about how to avoid becoming roadkill when the bubble bursts

It’s a good time to be in the data center business.

Every week brings a story of a new billion-dollar development, as the industry races to keep up with an AI explosion. Existing operators and suppliers have seen valuations soar, while GPU maker Nvidia’s share price has rocketed to astronomical levels.

UK-based Kao Data, which made an early bet on supporting high-density racks, is no different. Backed by Legal & General, Goldacre, and Infratil, it has ambitious plans to expand across Europe and upgrade its existing sites.

When DCD speaks to CEO Doug Loewe, on the eve of his six-month anniversary at the company, he is confident that he can build upon the foundation laid by the company over the past decade.

Kao Data center - London
Sebastian Moss Editor-in-Chief

“I'm now driving projects towards completion,” he says. “There were some seeds that were planted five years ago, we just need to make sure they germinate and are brought to fruition. If you start from a cold start, you're at a real disadvantage at this stage of the evolution of our industry, because you need to have been in there getting power, for example.

“You're going to hear some new startups that are coming forward, but unless they have picked up some kind of historical opportunities or contracts that are already in motion, they're going to have some real challenges, because the industry is moving at an accelerated pace, and human capital is just as scarce as power or permitted land.”

But behind all the excitement about that acceleration, a growing sense of unease is seeping through the industry. The extreme spikes in valuations, the lack of clear AI business models, and the echoes of the dot-com era of excess all have operators nervous about a fall coming.

One bad Nvidia quarter could start a domino effect, upending the market’s faith in AI’s potential, and felling those that weren’t ready.

In a wide-ranging discussion at the company’s London offices, we sat down with Loewe to understand how he’s insulating his business from the risk of AI.

Riding the wave

Supporting 40kW racks air-cooled and 100kW liquid-cooled, the company has long managed to attract AI and high-performance computing customers.

Kao’s Harlow campus scored Nvidia as a customer in 2021 and also hosts the Wellcome Sanger Institute and Arm, among others.

“The key is, even though AI is our heritage, we did not just become a pure-play AI business,” Loewe says. “Although it's incredibly lucrative, the fill rate isn't as fast as most people think. There are some companies right now that are still expanding AI only in their respective data centers. They haven't begun to distribute it, either by using third-party colo or even third-party network locations.”

Instead of going all in for AI, Loewe says that Kao sticks to an ‘ACE’ strategy - that is, AI, cloud, and enterprise. “Having that diversified customer base is our differentiator,” he says.

Loewe continues: “We're not pure enterprise, which can be very profitable, but the fill rate is very slow. If you're looking for growth, that alone won't solve the equation.”

Then there’s cloud: “If you do pure cloud, I often refer to it as the heroin habit,” he says. “It’s so consuming, it's so absorbing of capacity that you're like, ‘let's not do enterprise, let's just do cloud.’” That leaves you at the mercy of the hyperscalers, with tough margins and little to differentiate from others.

The three portfolio types are, of course, not fully distinct. “You potentially have embedded AI within the cloud,” says Loewe. “That's a situation you can solve by having a Microsoft build within our data centers to deliver workloads that support both their traditional cloud business and their AI.

“What you do in that situation is you give them 100MW, and they can start filling the data center from one end with their cloud, and you can fill the other end with AI and, just by definition, you de-risk their whole model, because it's blended. You don't have to make that decision at the beginning, because no one could forecast how quickly those respective segments are going to grow.”

Similarly, enterprises are looking into doing their own AI, with the high-density deployments too large for their own data center footprint.

Finally, there are the GPU-as-a-Service and AI cloud companies like CoreWeave, Lambda, Paperspace, Taiga, and others.

"Some of the [AI companies] are going to be wildly successful. But with that said, there are going to be a number that are going to be roadkill. It's very much like when the dot-com bubble burst"

“Some of them are going to be wildly successful,” Loewe says. “I tend to see one or two of those potentially having a market cap that rivals Microsoft or even Nvidia itself over the long term. But with that said, there are going to be x number, 20 or 30, that are going to be roadkill.

“It's very much like when the dot-com bubble burst.”

It’s impossible to know which of the companies will survive any coming collapse. But there are indicators a data center operator can look to - “it all depends on how they are being financed, what their investors’ time frames are,” Loewe says. “We learned the cost of capital for some of these GPU-as-a-Service companies is anywhere from 13 plus percent. That's like putting it on your credit card.

“And so that's not sustainable for some of the companies. Others are backed very, very well. But we can't say with certainty which ones are going to be the winners or which ones are the losers.”

With that in mind, it might be easier to keep away from AI companies altogether. “You avoid at least being participatory in that sector at your peril,” Loewe cautions. “You're just going to completely miss the boat.

“But if you've bet on the wrong one, you potentially could be in big trouble.”

For Kao, avoiding that trouble comes in several forms. “Depending on who that company is and how they're backed, you can ask for first year and last year's rent upfront, and that de-risks it fairly well.

“You make sure the term of the agreement is ideally 10 years. If they went five plus five, and there's a break, you potentially have some kind of recovery mechanism for some of the other upfront costs.”

Another step one can take is to find the ultimate end customer that will be using the GPUs - if it is one large client. That customer could potentially underwrite the asset and have step-in rights to take over the lease should the GPU company fail.

The other part is to not over-rely on any of these companies. “We have a portfolio of locations, a portfolio of customers and, if one hits the jackpot, we’re great,” Loewe says. “But if one of them potentially goes out of business, it doesn't impact the overall integrity of Kao Data.”

Across its ‘ACE’ portfolio, “you want to make sure you have a statistically valid sample size of customers,” he says. “Don't bet the farm on one.”

Moving beyond Harlow

Kao isn’t betting the farm on one site, either. Its Harlow campus, which will eventually support 80MW, was joined in 2021 by two other UK facilities, in West London and Slough.

The West London site, a diminutive 4MW facility, was acquired from Barclays and partially leased back to the bank. The rest was leased to a cloud provider.

The 16MW Slough site was redeveloped in 2023, and targets companies across the ACE spectrum.

Now, the company is looking to build the north’s largest data center with a foray into Manchester.

“Our initiative in building a £350 million ($461m), 40MW data center in Manchester is to distribute the compute regardless of the initiatives or investments [from the new UK government],” Loewe says.

“I don't think we need a half-gigawatt data center in the UK. Instead of killing everybody with these short-term enormous capital investments that are going to be required and take time, the industry can help society by embracing a more distributed approach.”

Kao also plans to distribute its compute across the continent. “There's no space and power in the United States,” Loewe says, so “companies that want to have their business plans be successful are just going to have to do it on a distributed basis.

"You avoid at least being participatory in that sector at your peril. You're just going to completely miss the boat. But if you've bet on the wrong one, you potentially could be in big trouble"

“London is the natural landing point, because Dublin is saturated, but then they come to Europe.”

The company is in due diligence now for “an inorganic play in both Frankfurt and Berlin.” At the same time, “the team was in the front of the queue for a nontrivial amount of power in Amsterdam,” despite the region’s historical friction with the data center sector.

Kao is also looking to Barcelona to act as an alternative to Marseille. “It's really a single vendor in Marseille,” he says, referring to Digital Realty.

“It's in an incredibly wonderful location, it's the seventh most interconnected Internet point in the world because of the subsea cables coming in, but the industry is looking for an alternative to that.

“We're putting our hat in the ring to be able to be participatory in that. We believe that Barcelona is a great example where you can have a second cable landing location on the Mediterranean solve for large AI workloads, because it supports and mirrors the Nordics with cost-effective electricity.”

An undisclosed cloud provider is supporting Kao’s Barcelona push, alongside another project in Madrid. About a third of the capacity of its European sites will go to anchor tenants that are existing clientele.

“We're not doing the Nordics, we're not doing Eastern Europe, we're not doing France, we're not doing Italy, not doing Ireland. So people are like, wow, that's a lot you're leaving,” he says.

“We want to be able to really focus in on those seven locations, those four countries, and be just as good as we were with the three incremental ones in the UK for the last 10 years.”

Seven points

Having seven sites acts as another de-risk. The eggs are spread across seven baskets, with the company able to begrudgingly swallow any single customer failure.

That said, should the bubble burst hard and fast, the impact could be much wider. It may not just take down one customer - it could also decimate a number of users of rival data centers, just after the largest data center buildout phase in history.

For those still looking for data center space, prices could crash as it becomes a buyer’s market.

“When, all of a sudden, it starts to be a feeding frenzy - this is three to five years out - and we [as an industry] have overbuilt, you'll see a flight to quality,” Loewe says.

“One of the reasons the cloud provider was thrilled to move in with us is not just the space available today, but a history of consistent on-time delivery.

“More often than not, what makes or breaks your organization, more than the timely delivery of new capacity, is: how are you operating successfully with your installed customer base?”

Another risk - or opportunity - is that struggling data center companies will be picked up en masse by savvy investors, willing to wait for the market to inevitably return. “I think there's definitely a situation where a roll-up play could occur, real estate money coming into the sector that might go, ‘Hey, these platforms are trading at a discount.’ That's very much a possibility.”

That could create a powerful new competitor, but again Loewe points to quality as a defensive moat.

For now, with talks of a bubble still hypothetical and hyperscaler capex still rocketing, Loewe and the wider data center market are still confident in growth and prosperity.

Where possible, he says he has tried to de-risk the expansion with caution and diversification.

But, he admits: “You can never ever, by definition, shield yourself from systemic risk.”

The Cooling Supplement

Altered carbon

Cooler minds prevail

> With F-gas based cooling systems set to be banned, could carbon dioxide emerge as a natural alternative?

Into the vortex

> A startup thinks it has a solution that can help data centers save water and energy by mimicking a natural phenomenon

Finding the move 37

> It’s not just fun and games for the cooling startup getting serious about AI after leaving Google DeepMind

Expertly tailored for your success.

All in one solution designed for your needs.

The Vertiv™ MegaMod™ CoolChip is an efficient and flexible alternative to traditional construction methods.

Designed by dedicated experts with over 20 years of experience, Vertiv™ MegaMod™ CoolChip is your best choice for your modular liquid cooling needs.

Contents

28. Altered carbon With F-gas based cooling systems set to be banned, could carbon dioxide emerge as a natural alternative?

34. Into the vortex Startup H2OVortex thinks it has a solution that can help data centers save water and energy by mimicking a natural phenomenon

37. Finding the move 37 It’s not just fun and games for the cooling startup getting serious about AI

Cooler thinking

Data center cooling is big business.

But as data center operators invest in new cooling set-ups, there is a recognition that the old HVAC systems are not going to cut it in the era of AI and high-density racks, which generate more heat than ever before.

This is why many companies are turning to liquid cooling to keep their most valuable servers running, but there is also a need to look at new ways of optimizing existing cooling systems.

In this supplement, we hear from H2OVortex, a company that is attempting to harness the power of fluid-dynamic vortices in a device that can be attached to data center cooling systems, and which has already been installed in several facilities in the Netherlands. It could be a boon for data centers located in warm climates where there is already considerable water stress.

By industrializing the kind of vortex often seen in the wake of large boats or when you pull the plug out in your bath, the startup believes it can help data center companies cut energy costs and their water use, as well as doing away with the need for harmful chemicals used to treat water.

Speaking of harmful chemicals, time is running out for F-gases, which are commonly used in cooling systems for data centers. The gases exhibit high global warming potential, and rules are being drawn up in markets around the world, led by the EU, that could outlaw their use entirely.

This will leave providers of cooling systems looking for alternatives, and one natural replacement could be carbon dioxide. CO2-based cooling systems have been in development for some time, and advances in the technology over recent years mean they are now a viable option for data center cooling.

In this issue, we speak to Advansor, one of the vendors convinced of the potential of CO2 cooling systems, as well as getting insights from two operators, Telus and Kio Networks, which have plumped for carbon dioxide in a bid to make their data centers futureproof and reduce emissions.

Meanwhile, another startup, Phaidra, is using AI systems to optimize data center cooling.

The company’s technology was developed at Google DeepMind, where it apparently helped Google reduce energy usage in its data center cooling systems by 40 percent.

A group of former Googlers, led by Jim Gao, are now taking the technology to market and promising big savings for data center operators.

“It's much bigger than Google data centers, it’s much bigger than commercial buildings or HVAC,” Gao says. “Fundamentally, what we're doing here is the future of industrial automation.”

Altered carbon

With F-gas based cooling systems set to be banned, could carbon dioxide emerge as a natural alternative?

Matthew Gooding Features Editor

The clock is ticking for fluorinated greenhouse gases, more commonly known as F-gases.

Often deployed as part of data centers’ heating, ventilation, and air conditioning (HVAC) systems, as well as in other areas of the data hall, the use of F-gases is being phased out in markets around the world due to the impact they have on the environment.

In March 2024, the European Union introduced updated F-Gas legislation as part of its goal to move to entirely fossil fuel-free heating and cooling across Europe by 2040.

With that deadline fast approaching, data center companies are looking at alternative gases, with several considering carbon dioxide (CO2) based cooling systems. These have already been successfully installed in a handful of data centers around the world.

F off

The term F-gas refers to a family of fluorine-based gases: hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), sulfur hexafluoride (SF6), and nitrogen trifluoride (NF3).

HFCs are the F-gases normally found in HVAC systems. They were initially introduced in the 1990s as an alternative to the ozone-depleting chlorofluorocarbons (CFCs) used in older chilling units, but it was soon discovered that HFCs were not doing the planet any favors, either. The gases exhibit high GWP, or global warming potential (a measure developed by the United Nations’ Intergovernmental Panel on Climate Change), due to their propensity to linger in the atmosphere and trap heat.

Internationally, there is no agreed date for when the use of F-Gas might end. The Montreal Protocol, drawn up in 1987, is an international treaty that aims to regulate the phasing out of these gases.

This was updated in 2016 with the Kigali amendment, signed by 160 countries plus the European Union, which imposes several different timelines for different countries. Established economies, such as the US and Europe, have agreed to reduce HFC use by 85 percent by 2036 (compared to 2013 levels), while another group of countries, which includes China and Brazil, has until 2045 to cut its HFC consumption by 80 percent. A third batch, comprising India and states in the Middle East, where air conditioning is widely used, has been given a 2047 deadline.

The EU’s F-gas legislation is perhaps the most detailed policy to emerge so far. First proposed in 2006, the version brought into force in March stipulates that the use of HFCs in new small air conditioning units (those drawing 12kW of power or less) must end by 2032, with a 2035 deadline set for new, larger, units such as those used in data centers.

In response to the new regulations, trade body the Institute of Refrigeration Ireland (IRI) said the industry is facing “very significant changes” in the coming years. When manufacturers look for alternatives to HFCs, it argued that “CO2 will undoubtedly feature strongly in mid-size commercial equipment and in industrial systems.”

Great Danes

For Danish native Anders Moensted, this now seemingly inevitable move towards CO2 and other natural cooling products has been 30 years in the making.

“I’ve been working with natural refrigerants since the 1990s,” he says. “Denmark has been a leading country in this area because we had a very progressive environmental minister back then who could already see that F-gas was not the future and put some very tough restrictions in place.”

Indeed, Denmark has had a ban on certain types of F-gases since the early 2000s, way ahead of most of its European counterparts. Moensted is now business development manager for industrial at Advansor, a company that is marketing CO2-based cooling solutions for data centers and other large clients.

Advansor’s core business is in retail, providing cooling systems for supermarkets, but Moensted was hired four years ago to help push the company into the industrial sector, where data centers are a prime target alongside distribution hubs for large retailers and e-commerce companies. He says businesses are not only looking to move away from F-gas, but also from existing natural alternatives. “Ammonia, for example, is a good natural refrigerant, but it’s toxic and slightly flammable,” he says. “In some use cases, it’s not allowed, so CO2 is a viable alternative.”

For Advansor, pursuing CO2-based systems makes sense because of its versatility, Moensted says. “There are so many ways of configuring a CO2 rack, because we can do freezing, cooling, air conditioning, and heating in the same system, with different temperatures at the same time,” he says.

CO2 in the data center

For data centers, a CO2-based cooling system shares some characteristics with F-gas or ammonia-based systems, Moensted says. “It generates a vapor-compression cycle, and we have a compressor that is compressing the gas, liquefying it, and then evaporating it,” he says. “So that's exactly the same process, but the system is built in a slightly different way with some additional vessels.”

As for its efficiency as a coolant, Moensted says CO2 matches up to comparable F-gas systems, providing the cooling loop is optimized correctly. But he says CO2 is more susceptible to fluctuations in temperature. “When CO2 was new, it was limited in where you could use it, and in areas like Southern Europe it didn’t really work because of the warm climate,” he explains.

“But now we’ve built up a lot more systems around it, and we’re not really limited by climate as we were before. You still have to design for the hottest day of the year, and accept you get the worst efficiencies on those days, as you would with any other refrigerant. So CO2 is a little more punished by warm climates, but then we can enhance the systems with injectors and things which allow us to match the other refrigerants.”

CO2 is readily available in large quantities. In Europe alone, 2.5 billion tons of CO2 were generated in 2023 as a byproduct of the continent’s various energy systems. “You can liquify this CO2 and use it in our refrigeration systems,” Moensted says. “It needs an industrial process to clean it up, but then you can put it in bottles and use it for refrigeration, or for other things such as soft drinks.”

Certainly, there’s no comparison when it comes to the environmental impact of CO2 versus F-gas. While HFCs, the F-gases most commonly used in data center cooling systems, typically have a GWP of somewhere between 1,600 and 4,000, putting them at the high end of the GWP scale, CO2 scores one, the reference value at the bottom of the scale.
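To put those numbers in context, GWP works as a simple multiplier: the CO2-equivalent impact of a refrigerant leak is the mass released multiplied by its GWP. A back-of-the-envelope comparison - the leak size and the mid-range GWP figure below are illustrative assumptions, not data from any specific system - looks like this:

```python
# Back-of-the-envelope GWP comparison. The 10 kg leak size and the GWP of 2,000
# are illustrative assumptions, not figures from any specific system.

def co2_equivalent_kg(leak_kg, gwp):
    """CO2-equivalent mass = refrigerant mass released x global warming potential."""
    return leak_kg * gwp

leak_kg = 10                                  # hypothetical refrigerant loss from a chiller circuit
hfc = co2_equivalent_kg(leak_kg, gwp=2000)    # mid-range HFC from the 1,600-4,000 band above
co2 = co2_equivalent_kg(leak_kg, gwp=1)       # CO2 used as the refrigerant itself

print(f"HFC leak: {hfc:,} kg CO2e")   # 20,000 kg CO2e - roughly 20 tonnes
print(f"CO2 leak: {co2:,} kg CO2e")   # 10 kg CO2e
```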

Bringing CO2 systems to life

So far, CO2-based cooling systems in data centers are few and far between, with the shift to liquid cooling, necessitated by the increasingly powerful servers required for AI and other advanced workloads, meaning operators are more likely to invest in direct-to-chip cooling systems rather than next-generation HVAC.

Canadian operator Telus has installed a CO2-based computer room air conditioning (CRAC) system at its data center in Quebec. US-based vendor M&M Carnot provided the CRAC set-up for the data center, and it was installed as part of a ‘green’ update to the facility.

The company told DCD that it had seen “significant annual electrical operating savings” since installation, in addition to being able to effectively cool other parts of its building using the same system.

“By using CO2 as a refrigerant, our energy efficiency ratio will improve at least three times”
>>David de Diego Villarrubia Kio Networks

In Europe, Advansor has been working with Kio Networks on its new data center in Valencia, Spain, which the operator says is the first in the country to run entirely on natural refrigerants. Work began on the 1,000 sqm (10,763 sq ft), €50 million ($55m) facility in February 2023.

The cooling set-up is based on two of Advansor’s medium-temperature SteelXL refrigeration units connected to three of the company’s MiniBooster systems. This combination can provide 1.3MW of cooling capacity for the servers, and 150kW of cooling capacity for the data center’s air-conditioning. Waste heat from the cooling system is set to be siphoned off and sold to district heating systems in the region.

Advansor claims to have enabled additional energy savings of 7-8 percent by adding permanent magnet motors to all 12 of the system’s compressors.

Like Telus, Kio expects to see significant energy savings according to David de Diego Villarrubia, the company’s infrastructure director. “By using CO2 as a refrigerant, our energy efficiency ratio will improve at least three times compared to a system with a non-natural refrigerant, resulting in significant energy savings,” he says. “This will yield an annualized PUE that will be spectacular even at low load levels, ultimately leading to a facility that drastically reduces energy consumption.”

Build your own?

CO2 was a natural choice to cool Kio’s Valencia facility, De Diego Villarrubia says, because it offers the best way to assure the longevity of the data center. “Choosing [CO2] protects our investment by ensuring we do not have to undertake any retrofitting in our facility before the expected life cycles of the equipment due to regulatory obligations related to the decommissioning of refrigerant gases,” he says.

“From a purely regulatory standpoint, we believe that important aspects such as the EU F-gas regulation will eventually impose restrictions on all refrigerant gases that are not natural. The final application of this regulation will inevitably lead to the use of only natural gases.”

De Diego Villarrubia is confident his company has got “ahead of the curve” when it comes to adopting this kind of technology and adds: “We believe that switching to ecological gases was the natural step to take, demonstrating that our industry can be genuinely clean on all levels.”

He says Kio is keen to work with other operators wanting to explore the benefits of CO2 cooling, and hopes his company can provide an example that will “help transform our sector for the better, moving it away from harmful greenwashing.”

Despite the environmental benefits of CO2 cooling, data center companies wishing to carry out an installation should bear in mind a number of complicating factors. Telus has noted its new system experiences greater climate-related variation in performance than a set-up based on synthetic coolants, and says great care needs to be taken during installation as the interconnecting pipes are more sensitive to slope gradient than traditional systems.

“Some data center companies are already demanding 100 percent natural refrigerants”
>>Anders Moensted Advansor

CO2 cooling systems also operate at much higher pressure than their F-gas equivalent, meaning the potential for a costly leak is higher, and that leak detection equipment is much more likely to be required.

But such factors are unlikely to hold the shift to natural refrigerants like CO2 back for too long, says Advansor’s Moensted. “Some data center companies are already demanding 100 percent natural refrigerants,” he says.

“Data centers have not been a focus segment for us until now, so we are still developing the components we need because the temperature profile in a data center is a bit higher than the food processing environments we have usually worked in.

“The Kio Networks project was a big system when we built it, but today we could already make something double that size. So we’re developing our offering all the time and the CO2 technology itself is coming on rapidly.”

He agrees with De Diego Villarrubia’s assessment that using natural refrigerants makes sense as a long-term investment ahead of the inevitable phasing out of F-gas.

“The new [EU] rules are going to make it difficult and expensive to get F-gases,” he says. “When you invest in a cooling system, you want to keep it for 15-20 years, but the reality is those gases might not be available at all in ten years, and then you have to replace your whole system.

“So natural refrigerants are a safer choice, and we’re seeing that more and more from end users, because they don’t want a system that immediately becomes outdated.

Moensted adds: “Big changes like this take time of course, and it will not happen overnight, but we have shown over the last few years in Denmark that it is possible to phase out these gases.” 

Who to trust when things are getting heated

Vertiv offers cooling solutions as individual as your data center

Not a day goes by without further evidence that the world’s power infrastructure is under severe strain. While the storm of uneven AI workloads grabs much of the attention, it has simply compounded an inconvenient truth – we need more data, which means more data centers, which require more power. This has led to restrictions from energy providers in some territories, which are only likely to increase as we come to terms with the twin pressures of increased demand and the need to embrace more sustainable ways of producing it.

Global demand for data grows daily at rates that would have been unthinkable even a few years ago. While great for consumers and businesses alike, this surge presents a new challenge for the data center market: Operators must find ways to manage up to five times the amount of heat generated by intense east-west loads such as artificial intelligence (AI).

Each facility has unique needs, so solutions must be tailored individually. However, a holistic approach to heat dissipation across the entire thermal chain can yield impressive results, even in older facilities. While exploring greenfield or brownfield sites for new facilities provides more options than retrofitting, John Niemann, SVP of global thermal management at Vertiv, explains to DCD how the company collaborates with organizations to address cooling challenges across various scenarios by combining air and liquid cooling solutions.

There are a lot of factors to consider when coming up with your thermal strategy, as he explains: “There are multiple steps to providing thermal management. We have liquid-to-chip and air cooling supporting these AI workloads. Some of today’s high-density AI workloads are cooled with air up to 75 kilowatts (kW) per rack. As you get above that, the racks utilize liquid cooling. With liquid cooling, about 20 to 30 percent of the heat still goes into the air from liquid-cooled servers. On top of that, you have other applications in the data center that still rely on air cooling.”
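Those percentages translate into a concrete design constraint: even a fully liquid-cooled rack still needs meaningful airside cooling capacity. A rough illustration - the rack power figure is a hypothetical, not a Vertiv specification:

```python
# Rough airside-load estimate for a liquid-cooled rack, using the 20-30 percent
# figure quoted above. The 120 kW rack power is a hypothetical example.

def residual_air_load_kw(rack_kw, air_fraction):
    """Heat that still ends up in the room air despite direct-to-chip liquid cooling."""
    return rack_kw * air_fraction

rack_kw = 120
low = residual_air_load_kw(rack_kw, 0.20)   # 24 kW
high = residual_air_load_kw(rack_kw, 0.30)  # 36 kW

print(f"A {rack_kw} kW liquid-cooled rack still rejects {low:.0f}-{high:.0f} kW to air")
```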

He adds that currently, the greatest traction in solving high-density AI workloads is coming from direct-to-chip liquid cooling, a necessity for GPUs running at high temperatures to service AI workloads. However, this approach isn’t without its challenges. Liquid cooling provides closer connections to IT equipment, which introduces a level of complexity. Direct liquid connections require a high level of quality and reliability to protect the servers, necessitating comprehensive system control for efficient and reliable operation.

Nonetheless, liquid cooling represents more than a “nice to have” for operators who are looking to accommodate the workloads of 2024 and beyond. The advantages far outweigh the challenges, especially when you have the right partner, such as Vertiv, to come up with a solution to suit your needs.

Niemann explains: “There are a lot of pros. From a compute perspective, the infrastructure for liquid and air-cooled data centers is unlocking new performance for AI applications that didn't exist before. Meanwhile, the overall TCO benefit is not only limited to improvements in the infrastructure of the data center but also relates to the compute that the infrastructure is unlocking.”

In other words, the keyword is ‘opportunity’. Those opportunities can sometimes be enhanced by external decisions such as development type – do you choose a new build over a retrofit? – and power availability in an increasingly strained electrical grid system. The future, Niemann believes, lies in a modular approach:

“Modular solutions scale. A modular deployment can take the form of a fully prefabricated data center or prefabrication of parts of the critical infrastructure like the thermal system. We offer prefabricated packaged thermal units that can be placed on the roof or along the perimeter of the building to speed up the deployment. This approach simplifies the overall installation and startup by combining the air and liquid cooling with heat rejection into a single unit assembled and tested in a factory. The unit is then easily installed as a single module onsite, minimizing the work onsite while drastically reducing the time for commissioning. There are many advantages, and the higher density of AI improves the overall value proposition for modular prefabricated solutions.”

As heat reuse becomes a hot-button issue for data centers, particularly with new regulations in some European regions mandating compliance for new buildings, customers must explore methods to repurpose waste heat. Higher densities from AI workloads lead to increased air and liquid temperatures, creating better opportunities for heat reuse in district heating and other applications. Niemann looks at Vertiv’s approach to heat reuse:

“We have multiple offerings, such as using heat directly from a CDU in a liquid cooling loop or from the heat rejection of a water-cooled or air-cooled chiller. We also offer heat reuse options with our direct expansion product portfolio.”

The combination of a wide-reaching portfolio and the ability to customize - from tailored solutions to complete prefabricated systems - gives Vertiv the capability required to meet the challenges of today’s complex AI compute environments. Niemann tells us:

“We can provide tailored solutions to meet specific requirements. The choice between a standard off-the-shelf product, configured technology, or a fully engineered order depends on the level of sophistication and how cutting-edge the customers need the solution to be. We can cover the gamut and deliver any level of those offerings at scale globally, which not many companies in the data center infrastructure space can do across the entire thermal chain.”

We ask Niemann if he sees AI use for control and optimization of the thermal chain as something the industry is looking toward. He explains that tight connectivity between IT and the facility, through intelligent systems, is one example of how the very technology that has created challenges could be part of the solution: “Having control of the entire thermal chain from end-to-end is crucial. It’s important to manage this across the system, from the chip to heat reuse. Making these systems more intelligent and responsive to IT-level changes will be key to ensuring reliability and optimizing performance. So we do see an opportunity to leverage machine learning and AI to boost overall performance and system operation.”

Vertiv is, therefore, able to offer a wide range of solutions to the cooling conundrum. They partner with a wide range of clients to create bespoke solutions. The sooner you involve them in your project, the more helpful they can be, and they are ready for action, as Niemann stresses: “From a go-to-market perspective, we've got technical solution architects and applications engineers globally who work with our customers on the design of the data center from end to end across the entire thermal chain. Our solution architects and sales teams can provide configured to fully customized solutions based on customer requirements.”

So if you’re an operator who is all fired up about AI, but whose cooling needs are bringing on cold sweats, Vertiv may well be able to take the heat out of the situation. 

For more information about high-density cooling solutions, visit the Vertiv™ AI Hub

Into the vortex

Startup H2OVortex thinks it has a solution that can help data centers save water and energy by mimicking a natural phenomenon

Critics of the data center industry often describe it as something akin to a vortex, sucking power and natural resources into the void as part of the seemingly neverending quest for denser racks and more powerful servers.

It is somewhat ironic, then, that startup H2OVortex is looking to harness the power of vortices to reduce water use in data centers and enable more efficient cooling systems.

The company has created an industrial vortex generator for cooling towers (IVG-CT) which it says can be applied to new and existing heating, ventilation, and air conditioning (HVAC) cooling systems in data centers and cut water and energy use, as well as eliminating the need for some chemicals.

H2OVortex has now fitted its technology in several data centers, and has also been given the thumbs up by industry body the Dutch Data Center Association, which recently co-authored a white paper recommending that its members investigate how the technology can be applied in their data halls.

H2OVortex: the face-off

The H2OVortex technology has its origins on the ice rink. The traditional method for building and maintaining the type of surface required for recreational skating or for a professional ice hockey rink is to pour hot water onto an already-frozen rink, a process known as flooding. Hot water contains fewer air bubbles than cold, freezes quickly, and helps create a smooth surface that is perfect for pursuing the puck.

Given that a single professional ice rink can, according to figures from the International Ice Hockey Federation, use up to one million liters of water a year for resurfacing alone, the associated costs and environmental impact are significant. “It’s a ridiculous amount of energy,” says Alain Mestat, managing partner at H2OVortex.

Håkan Grönlund has experienced this issue firsthand. The H2OVortex co-founder is a former semi-professional ice hockey player in his native Sweden, and set about coming up with a more sustainable solution. The resulting vortex-based technology, known as REALice, is a device that is able to remove air bubbles from cold water so that it can be poured onto the ice as part of the flooding process, eliminating the need for heating.

“We’ve reduced the operating costs of an ice rink arena by 40 percent, and have about 1,500 of those out in the market,” Mestat says. Happy customers include US NHL team the San Jose Sharks, which has installed the system on five rinks at its Sharks Ice headquarters, and expects to save 250,000 kWh of electricity, over 11,000 Therms of natural gas, and 230 metric tons of CO2 emissions each year as a result.

From the rink to the data hall

But what’s all this got to do with data centers? Mestat explains that his company has taken the core technology that sits at the heart of REALice and applied it to industrial cooling systems, such as those used for servers.

Explaining how the IVG-CT system works, Mestat says: “We specialize in biomimicry, so our system is really copying nature. It’s a nature-based solution, and we’ve looked at the intrinsic capabilities of a vortex to change water and industrialized that. By creating an artificial vortex, we are able to offer fantastic added value from a sustainability perspective.”

In fluid dynamics, a vortex is created when liquid revolves around a central axis. These are often seen in whirlpools, in the wake of boats, or in the bottom of the bath when you pull out the plug. The crucial characteristic of such vortices, from H2OVortex’s point of view, is that the action of spinning the water round and round changes its properties.

“Through the vortex, we can change the characteristics of the water,” Mestat explains. “We increase its density by five percent, and that makes it much more efficient as a cooling liquid. We also reduce the viscosity by 20 percent.”

"We’ve looked at the intrinsic capabilities of a vortex to change water and industrialized that"
Alain Mestat

This second point is important because, as the water becomes more liquid, it “requires less energy to travel from A to B,” says Mestat. And “if you spray it onto a surface, it will cover a much broader area,” he adds. “That’s very important in a cooling tower because if you are able to cover 85-90 percent of the surface you have a much more efficient system.”
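Why would a 20 percent drop in viscosity matter? Under the simplest possible assumption - smooth, laminar flow in a pipe, where the Hagen-Poiseuille relation applies and pressure drop scales linearly with viscosity - pumping power falls roughly in proportion. Real cooling loops are usually turbulent, so the real-world saving would be smaller; the sketch below, with made-up pipe dimensions, only shows the direction of the effect.

```python
import math

# Laminar-flow illustration only: Hagen-Poiseuille says pressure drop is proportional
# to dynamic viscosity, so pumping power (pressure drop x flow rate) falls in step with
# viscosity. Real cooling-tower loops are largely turbulent, where the effect is weaker.
# All pipe dimensions and flow figures below are made-up examples.

def pressure_drop_pa(viscosity_pa_s, length_m, flow_m3_s, diameter_m):
    """Hagen-Poiseuille pressure drop for laminar pipe flow."""
    return 128 * viscosity_pa_s * length_m * flow_m3_s / (math.pi * diameter_m**4)

length_m, diameter_m, flow_m3_s = 50.0, 0.1, 0.003                    # hypothetical loop segment
baseline = pressure_drop_pa(1.0e-3, length_m, flow_m3_s, diameter_m)  # roughly water at 20C
treated = pressure_drop_pa(0.8e-3, length_m, flow_m3_s, diameter_m)   # 20 percent lower viscosity

print(f"Pumping-power reduction: {(1 - treated / baseline) * 100:.0f}%")  # 20%
```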

The vortex is created by pushing pressurized water (Mestat says the system requires water pressure of three bar, the same as a regular feed from the mains) into the wide end of a cone-shaped, 3D-printed device which reminds DCD of a terracotta tagine. It doesn’t contain any tasty Moroccan stew, but is able to create vortices in addition to what Mestat describes as a “quasi-perfect vacuum” through a process known as cavitation.

He says: “We create a vacuum at 0.97 bar and that crushes any living bacteriological or organic elements in the water - the system kills anything that is alive in the water on a continuous basis”.

Mestat claims this means the system can use any type of water without the need for any chemical intervention. While the use of non-drinkable, or “grey,” water in data center cooling is already being explored by many operators - Amazon announced last year that 20 of its AWS data centers were using purified wastewater in their cooling systems - this usually has to be treated first to ensure it is ready to be circulated through an HVAC system. Not so for H2OVortex, Mestat says. The system can use water of any quality, and at the end of the cooling process, discharge water can easily be collected and recirculated.

“We run all of our systems 100 percent chemical free,” Mestat explains. “The net benefit to a cooling tower at a data center is that we can reduce water consumption by up to 50 percent, though if it is using potable [drinkable] water that figure might be even higher. Our container will adapt to any type of water, so there’s no need to use potable water at all.”

The system is delivered in a shipping container which can be connected up to an existing HVAC system. The number of vortices required to cool a data center varies depending on the size of the facility, Mestat says.

“It’s a fully scalable system, so we ask our clients to fill out a data sheet and will run analysis on that to work out what is needed,” he continues. “The containers are stackable, so if it’s a huge HVAC we simply stack another one on top.” Clients can apparently expect a return on their investment in less than three years.

Going Dutch

H2OVortex is headquartered in Luxembourg, with its R&D work carried out by a team in the Swedish city of Malmö, and its engineering done in Tilburg, Netherlands. Mestat, a former investment banker who spent years backing clean tech companies, initially became involved with the company as an investor, before joining the team full-time six years ago.

With an established presence in the Netherlands, it’s no surprise that the company has found an enthusiastic user base there, installing the system in a number of data centers, which Mestat says are “mostly Edge” facilities.

"Our added value is going to be in areas where you already have water stress"

In July, the company published a white paper with the Dutch Data Center Association (DDA) which demonstrates how the technology can be used by the industry. In an example cited in the white paper, an unnamed Dutch data center that installed the IVG-CT system was able to reduce its “potable water consumption by over 35 percent (from 95 m3/h to 62 m3/h).”

It continues: “Additionally, the data center achieved a CO2 emission reduction of over 83 percent. Even during an extended heatwave, operations could continue as usual, while the plant remained operational. This made water management much more controllable and resulted in significant savings.”

If the data center had also used alternative water sources, such as “surface water and wastewater,” the white paper says that potable water consumption “could have been reduced by 95 percent.”

The DDA said H2OVortex could help “mitigate water stress and contribute positively to local ecosystems.”

Stijn Grove, DDA managing director, says: “Engaging with stakeholders early, adapting systems used in other industries, and ensuring continuous monitoring and maintenance are essential to maintaining water-efficient operations. These combined efforts can help data centers not only avoid contributing to water stress but also become part of the solution.”

Backing from the industry will be critical if H2OVortex is to achieve wider adoption, Mestat says: “When you bring any new widget to market that is a disruptive and easy technology - pretty much what we’re saying is ‘put the water through here and all your problems are solved’ - it is going to take time for people to get that.”

The company is part of the accelerator program run by US water technology company Xylem, and Mestat expects it will pursue a joint venture with a larger organization to help market its technology.

“We need a bigger brand name to achieve penetration,” he says. “In the Netherlands it’s easier for us because we have an engineering team on the ground, working and speaking Dutch. They’ve been doing this for ten years and can get referrals for new business.”

In the meantime, the company is pursuing a high-profile client and is in discussions with a hyperscaler looking to build a large data center in one of Europe’s warmest climates, which could deploy artificial vortices as part of its cooling solution.

Mestat says that warm environments, where water is already scarce, are likely to be the most fertile markets for H2OVortex. The US and the Middle East are obvious targets, but he says Southern Europe could prove lucrative, too.

“Our added value is going to be in areas where you already have water stress,” he says. “You have a bunch of companies at the moment setting up data centers around Barcelona in Spain, and Lisbon in Portugal. There’s already not a lot of water in these areas, and the idea is that we can complement developments in these areas.

“The data center industry is a powerhouse in terms of creating jobs and providing economic prosperity, but it has to be able to mitigate the issues it faces so that it can continue developing sustainably. We think we have a solution that can help.” 

What is the “move 37” of data centers? Phaidra thinks it’s happened already

It’s not just fun and games for the cooling startup getting serious about AI

Charlee Gee Contributor

Jim Gao and his Phaidra cofounders left Google DeepMind in 2019 after creating what they say is the world’s first fully AI-driven data center environment.

A few months before raising $12 million in a funding round led by Index Ventures, Gao, the company’s CEO, sat down with DCD to explain more about how its supervisory layer hopes to use AI to optimize cooling systems.

“It was the very first time, we believe, that anyone had actually created a fully self-driving industrial facility, where a fully autonomous AI is directly controlling things from the cloud,” Gao says.

The Phaidra team wanted to take these technological breakthroughs “beyond Google, beyond DeepMind” and the only way to do this, Gao claims, was through a product - by turning their work into enterprise software to sit on top of a BMS (building management system) and intelligently manage cooling.

“It means you can work with all kinds of different designs, it means that it's far more plug and play, it means 99.99 percent availability, 24/7 customer support, all of this stuff,” he says.

The future of industrial automation

After four years in development, 2024 is the year Phaidra has come out of stealth, but what exactly does this mean for the data center industry? Essentially, the company is promising compelling energy savings for any facility running its software.

“It's much bigger than Google data centers, it’s much bigger than commercial buildings or HVAC (heating, ventilation, and air conditioning),” Gao says. “Fundamentally, what we're doing here is the future of industrial automation.

“The true nature of industrial automation is self-learning, autonomous control systems. We believe that in the future, control systems will learn from their own experience and fundamentally improve over time. So I'm talking about truly intelligent infrastructure that will actually learn from operating itself and get better over time.”

Gao cites a blog post he co-authored in July 2016 which revealed that DeepMind’s AI had identified efficiencies that helped Google reduce the amount of energy it used for data center cooling by 40 percent.

“The most important thing is that we didn't do anything during this time,” says Gao.

“It's not like we were going in and doing a bunch of testing and balancing and rewriting the SOO (sequence of operation) and tuning PID (proportional integral derivative) loops, or whatever. This is the AI, learning from its own actions, taking actions, learning from the actions, doing what reinforcement learning is supposed to do.”
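For readers unfamiliar with the term, the core reinforcement learning loop is simple to sketch. The toy below is purely illustrative and has nothing to do with Phaidra's actual software: an epsilon-greedy learner picks a chilled-water setpoint for a simulated plant, is rewarded for low energy use and penalized for overheating, and settles on the best setpoint purely from the outcomes of its own actions.

```python
import random

# Toy reinforcement-learning loop, for illustration only: an epsilon-greedy agent
# learns which chilled-water setpoint minimizes energy without letting a simulated
# data hall overheat. The plant model, setpoints, and reward are all invented.

random.seed(42)
SETPOINTS_C = [16, 18, 20, 22, 24]          # candidate chilled-water setpoints

def simulate_plant(setpoint_c):
    """Fake plant: warmer water saves chiller energy but raises the risk of hot spots."""
    energy_kw = 300 - 8 * (setpoint_c - 16) + random.gauss(0, 5)
    overheated = random.random() < max(0.0, (setpoint_c - 20) * 0.15)
    return energy_kw, overheated

def reward(energy_kw, overheated):
    return -energy_kw - (500 if overheated else 0)   # big penalty for overheating

q_values = {s: 0.0 for s in SETPOINTS_C}             # running reward estimate per action
counts = {s: 0 for s in SETPOINTS_C}
epsilon = 0.1                                        # exploration rate

for step in range(5000):
    if random.random() < epsilon:
        action = random.choice(SETPOINTS_C)          # explore
    else:
        action = max(q_values, key=q_values.get)     # exploit best-known setpoint
    energy, hot = simulate_plant(action)
    counts[action] += 1
    q_values[action] += (reward(energy, hot) - q_values[action]) / counts[action]

print("Learned preference:", max(q_values, key=q_values.get), "C")
```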

AlphaGo and ‘move 37’

But as ex-DeepMind staffers who are still clearly keen on reinforcement learning, how are Phaidra’s founders reacting to the industry mania around the “other” AI - the generative kind? Gao extols the virtues of reinforcement learning as a creative AI that trains itself to, for example, win games.

“There was something very, very important that happened in 2018,” Gao reflects.

“AlphaGo, or rather, the successor to AlphaGo, which is called AlphaGo Zero, became the best in the world at [abstract strategy board game] Go, but also the best in the world at chess, and also the best in the world at shogi. So, one single intelligence system, one single reinforcement learning agent, taught itself to become the best in the world at three completely different games. This was one of the very first signs of generalizable intelligence.

“In the industrial world that we come from, there are thousands of unique ‘games’ to be played. Every industrial facility is its own snowflake, everyone has their own unique ESCO, or different way of operating. Everyone has their own unique design. So the only way that you can scale this sort of self-learning system is through generalizable intelligence. Like AlphaGo, like Alpha Zero.”

But is AI this creative really needed to cool a data center? Wouldn’t we prefer an un-creative AI to carry out this kind of work - something that uses less capacity and resources, and costs less?

“I disagree with that,” Gao fires back. “[That is] one of the biggest misconceptions in AI today. I think people conflate AI with automation. Our customers do it all the time, right? They think, ‘Oh, no, the AI is here to take my job, or the AI is here to do what I already do today.’ Nothing could be further from the truth. Yes, there are areas where, if something is like a rote routine automation that you don't really want to do anyway, AI can do that.

“But honestly, where AI shines is in creativity. It's in challenging your existing ways of thinking and discovering knowledge that did not exist before. The pinnacle of this is actually, if you go back to AlphaGo, there was a very specific moment in time called ‘game two, move 37’.”

Gao is referring to the 37th move in AlphaGo’s second game, when expert commentators decided the AI had made a mistake, and taken a move an accomplished Go player would have eschewed. However, 100 moves later “people realized that move 37 was a genius move,” Gao explains. “And now, today, millions of people around the world learn about move 37 as a way of playing Go, just like in chess you have the English opening and the French defense and others. Move 37 is one of those classic moves now.”

The same kind of creative AI surprises, says Gao, are seen by Phaidra’s customers “all the time.”

“Whether you're looking at our customers in the data center space, or district cooling or pharmaceutical, there will be a lot of instances where the AI operates a plant in a very surprising way, that the plant operators and even the people who designed these data centers did not know about before. But then the question becomes, well, how is that possible? How can the AI discover new knowledge in these systems?”

Trillions of ways to run a data center

Gao says even he doesn’t know. Despite having designed many of the mechanical systems for Google that have enabled this current generation of reinforcement learning AI to develop, he’s still constantly surprised.

“The very AI system that I built is now telling me that there are much better ways of operating the system that I did not know about before. And the reason why that can happen is because of the complexity of the systems,” says Gao.

“So if you look at any typical hyperscale data center, you've got dozens of condenser water pumps across water ponds and cooling towers and chillers and fan walls and CRAC units all over the place. Even if you took a very simple example, say you have a pump bank of 10 pumps, and each pump just had 10 set point values associated with it, that alone gives you 10 billion different permutations for how to operate your very simple data center.”

In the real world, Gao continues, the average data center has a lot more than 10 pumps, and a lot more than 10 set points per piece of equipment generally, resulting in hundreds of trillions of ways to run a data center.

“And I guarantee you that, because of the way that we've hard-coded these systems, with the sequence of operations - that ladder logic - we've only ever explored 0.0001 percent of all the different ways that you could operate your data center,” he says. “So the question then becomes, what is in the 99.999 percent of the space that has never been explored before?”
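Gao’s arithmetic is easy to reproduce. The short Python sketch below is purely illustrative - the device counts are hypothetical, not Phaidra’s or Google’s - but it shows how quickly the control state space explodes when every device can sit at one of several set points:

# Back-of-envelope sketch (illustrative only): counting the control states
# of a simplified cooling plant, following Gao's framing.

def control_states(num_devices: int, setpoints_per_device: int) -> int:
    """Each device can independently sit at any of its set points,
    so the state space is setpoints_per_device ** num_devices."""
    return setpoints_per_device ** num_devices

print(control_states(10, 10))   # Gao's toy example: 10,000,000,000 combinations
print(control_states(15, 10))   # a hypothetical fuller plant: 10^15 states

explored = 0.000001             # Gao's "0.0001 percent", expressed as a fraction
print(f"Unexplored share: {1 - explored:.4%}")

Even this toy version makes the point: exhaustive exploration is impossible, so a hard-coded sequence of operations only ever samples a sliver of the space.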

All of this prompts the final question - DCD’s final gambit in this conversation, perhaps. What would be the move 37 for data center optimization?

Gao says it’s already happened - several times - because it happens to his customers “daily.”

“[But] one of the move 37 moments was this: At some of Google's data centers, we used to operate under the assumption, as did pretty much everyone in the industry, that in order to minimize a PUE, you need to minimize the cooling equipment running. Especially chillers.”

But, says Gao, while common logic is to follow chiller efficiency curves to find “a sweet spot of 80 to 90 percent,” the AI did something quite different.

“The AI would actually bring on more equipment running - a lot more chillers running. Why is it keeping the chillers at 40 to 50 percent load instead of 80 to 90 percent? This is horrible! From an energy efficiency perspective, the AI sucks and has learned the wrong thing.”

But then, the penny dropped. The AI had learned this behavior because of the specific way Google had built its data centers. Every time it turned on a chiller, it also turned on the associated process water pump and the condenser water pump associated with that chiller.

“By turning on more chillers, the AI was also turning on more condenser water pumps and process water pumps, and it was riding that pump efficiency curve down,” explains Gao.

“The AI had also learned that because we were now rejecting the same amount of heat through more cooling towers, the air-to-water ratio through the cooling towers was much higher. So the cooling tower efficiency was shooting through the roof. And this is especially important in times of high wet-bulb temperatures.

“Could a human do this? Could a human make these sorts of global-level tradeoffs across dozens of pieces of equipment that are interacting with each other in very nonlinear ways? I certainly couldn't,” Gao concludes.
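The tradeoff Gao describes can be sketched with a toy model. The numbers below are illustrative assumptions rather than Google’s or Phaidra’s data: a fixed cooling load is spread across more chillers, variable-speed pump and tower-fan power falls roughly with the cube of per-unit duty, and chiller efficiency degrades only gently at part load.

# Illustrative toy model (not real plant data): spreading a fixed cooling load
# across more chillers can cut total power, because cube-law pump and fan
# power falls faster than part-load chiller efficiency degrades.

LOAD_KW = 4_000          # cooling load to serve (hypothetical)
CHILLER_CAP_KW = 1_000   # capacity of each identical chiller (hypothetical)
AUX_RATED_KW = 60        # rated pump plus tower-fan power per chiller (hypothetical)
AUX_FIXED_KW = 5         # fixed overhead per running chiller (hypothetical)

def cop(part_load: float) -> float:
    """Simple part-load efficiency curve peaking near 75 percent load."""
    return 6.5 - 3.5 * (part_load - 0.75) ** 2

def total_power_kw(n_chillers: int) -> float:
    x = LOAD_KW / (n_chillers * CHILLER_CAP_KW)                   # per-chiller load fraction
    chiller_kw = LOAD_KW / cop(x)                                 # compressor power
    aux_kw = n_chillers * (AUX_RATED_KW * x ** 3 + AUX_FIXED_KW)  # cube-law pumps and fans
    return chiller_kw + aux_kw

for n in range(4, 11):
    x = LOAD_KW / (n * CHILLER_CAP_KW)
    print(f"{n} chillers at {x:.0%} load -> {total_power_kw(n):.0f} kW total")

# In this toy model the optimum lands near 50 percent chiller load, echoing the
# behavior Gao describes, rather than the 80 to 90 percent "sweet spot".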

Complete solution for the AI revolution.

With tailor-made chilled water solutions.

Enjoy highly flexible and customized solutions with the Vertiv™ Liebert® AFC chiller range, providing exactly what your business needs to thrive.

London calling

The UK’s new government is keen to encourage more data center developments around London, but the nation’s capital is already filling up fast

Fresh from a decisive victory in July 2024’s general election, the UK’s new Labour government will host an investment summit in October, which it hopes will bring together some of the biggest names in business.

Chancellor Rachel Reeves plans to use the event to show that, in her words, “Britain is open for business.” The finance minister is banking on the summit to help secure billions of pounds in investment to boost the UK economy, which has been stagnant for a number of years.

Such conferences are a common tool used by governments to try and attract inward investment, but what is notable about the upcoming summit is that data center companies are among the first confirmed attendees. During a recent trip to the US, Reeves met with CyrusOne and CoreWeave, as well as investment fund Blackstone, which owns data center operators including QTS and has a digital infrastructure pipeline worth £52 billion ($70bn). All three companies will be at the conference on October 14.

New government, new data centers?

This very public embrace of digital infrastructure providers is in marked contrast to the previous Conservative government, which spent much of its time focused on attracting software companies, with former Prime Minister Rishi Sunak publicly courting artificial intelligence developers such as OpenAI and Anthropic and billing the UK as a global AI hub.

Meanwhile, building the infrastructure in the UK that helps the likes of OpenAI train and run its large language models has proved difficult, with many large proposed data center developments held up by local planning concerns about building on Green Belt land around London. The new government has pledged to reform planning laws to make it easier to build data centers, but will also need to address power concerns if it is to help the London data center market, already one of the largest in Europe, grow beyond its traditional heartlands in West London and East London’s Docklands.

Reeves certainly believes her government can make a difference to the industry. Speaking on her trip to the US, she said: “By rebuilding Britain we can make every part of the country better off. Aiming to have data centers across the country is part of that.” Turning this rhetoric into reality, however, is likely to be a sizeable challenge.

Digital Reef Havering

The freest of the FLAPD markets

The London data center market is already in decent shape. As the largest of Europe’s tier-one FLAPD markets (the others being Frankfurt, Amsterdam, Paris, and Dublin), it continues to attract new business because, according to research firm BMI, “providers there are able to service a large range of domestic and regional customers from Western to Eastern Europe as well as, in extreme cases, New York.”

In a research note published in June, BMI said London and the surrounding area is home to 1.3GW of IT capacity, with 170MW in development.

“There is less regulation in London which means it is much freer than some of the other FLAPD markets, particularly Frankfurt and Dublin,” says Niccolò Lombatti, a TMT digital infrastructure analyst at BMI. “Across the FLAPD markets, we see similar challenges around access to power - everyone is aware of the power crunch - and the industry model right now is to bring your own power, which is much easier to do in London than elsewhere.”

However, developers wishing to set up data centers in London face challenges too, Lombatti says.

“You’re constrained by three different factors,” he argues. “If you want to build in the metro area and connect to the local grid you have to deal with the Greater London Authority, which has a queue system that involves every application, from residential customers to industrial businesses and data centers. That’s a lengthy process.”

The second challenge, according to Lombatti, is around land availability. “London has a housing crisis,” he says. “So when you have commercial real estate companies bidding against data center operators, regulators are more inclined to use the space for building housing. This is an issue that impacts London more than any other FLAPD market.”

As for the third factor, Lombatti says lead times on new developments are longer in London than elsewhere. “People want to digitally transform fast, but if you want to build in Docklands, for example, you face long delivery times in terms of getting equipment in and getting power supply up and running. It’s not just about getting the power, it’s about how quickly you can deliver for the customer.”

Goodbye green belt, hello data center belt?

The area around Slough in West London remains the UK’s most popular data center cluster, and this year alone has seen companies including Yondr, Segro, and Equinix granted permission for new facilities in or adjacent to the town.

But it is not without its issues, most notably around power supply. In 2022, it was reported that house-building projects in three West London boroughs could be put on hold until 2035 because all upcoming power supply was being sucked up by the data center cluster. Though no ban has materialized, it highlights the fragile nature of the grid in the area, and as a result data center operators are looking to other parts of London.

“We took the view that power was key, and that we needed a site where we could get a grid connection point”
>> Eleanor Alexander, Digital Reef

The borough of Havering, on the city’s eastern fringe, is not currently a data center hotspot, but could soon be home to one of the largest digital infrastructure facilities in Europe. A 600MW data center is being planned for 175 acres of Green Belt land on Dunnings Lane and Fen Road in Havering, near the M25 motorway. The 12-building campus is set to cost £5.3 billion ($5.97bn) to build and is being developed by Digital Reef, a property company also planning a 250MW data center in Didcot, Oxfordshire.

“When we started work on the Havering site five years ago the market was quite unsure about it because of its location,” says Eleanor Alexander, managing director of Digital Reef. “It’s on Green Belt land and was considered a bit ‘off-pitch.’ But we took the view that power was key, and that we needed a site where we could get a grid connection point and use that grid connection point to deliver a proper ecosystem around the data center.”

The company’s stance has proved prescient, as demand for artificial intelligence and other powerful workloads means available power is now a key factor in determining data center locations. To get the required 600MW to the Havering site, Alexander says Digital Reef is investing £116 million ($153m) to upgrade the Warley substation in Upminster so it can take on an additional 445MW of renewable energy from offshore wind farms in the North Sea, off the coast of Norfolk. Further upgrades to the substation could enable it to take on another 1.2GW of clean energy.

Rachel Reeves meeting with CoreWeave in the US

Digital Reef is not the only company looking to new locations around the M25. Google announced in January that it is spending £757m ($1bn) on a 33-acre site in Waltham Cross, Hertfordshire. The search giant has also acquired land at North Weald, Essex, for an as yet undisclosed data center project. Nearby, in Harlow, Kao Data Centres runs a campus that will eventually offer capacity of 40MW across four buildings.

NIMBYs and YIMBYs

But getting data centers built around London’s perimeter isn’t easy. “There is less community support for data centers in London than the other FLAPD markets,” BMI’s Lombatti says. “Do residents want them? Do local politicians want them? We’ve seen a lot of applications rejected because people are becoming more aware of issues around power, or data centers not creating many jobs, or taking up space that could be used for housing.”

Digital Reef is facing opposition in Havering from those who are not keen on a massive data campus springing up in their backyard. Though the scheme is backed by the local authority, it has drawn complaints from residents and campaigners. Havering Friends of the Earth coordinator Ian Pirie told the Local Democracy Reporting Service in June that allowing the data center to be built on Green Belt land would contribute to the “creeping industrialization” of the countryside.

Pirie said: “The data center will take between 10 and 12 years to build, and the impact of lorries during construction will be intolerable in these quiet country lanes.

“The impact on the site, if it is built, would also be unacceptable. Instead of farmland, there will be a large number of warehouse-sized buildings, containing banks of computers, batteries, cooling systems, backup power sources, and more equipment.”

Labour has unveiled plans to allow more building on so-called “grey belt” land - parts of the Green Belt that are close to major roads or on the edge of settlements. While this is primarily designed to spur housebuilding, data centers could benefit too.

Changes to the National Planning Policy Framework (NPPF), which at the time of writing are out for consultation, would ask councils to identify potential sites for data centers as part of their local plans, as well as changing wording around the importance of such facilities. Data centers would also be included in the Nationally Significant Infrastructure Projects (NSIP) regime, which means they could be fast-tracked for planning approval, bypassing some steps in the process.

Currently, NSIP is reserved for transport, energy, and water projects.

“There is less regulation in London which means it is much more free than some of the other FLAPD markets”
>> Niccolò Lombatti, BMI

Whether these changes will make any difference in the short term is questionable. Nick Finney, from planning consultancy Arup, wrote in a recent town planning note that the alterations to the NPPF “will take a number of years to influence the supply of suitable sites but will provide additional weight to need in the meantime.”

Finney added: “The NSIP route provides the potential for greater certainty in decision-making time periods and a means to navigate local planning authority objections. However, it can be an expensive and time-consuming route which may only be suitable for the largest campus proposals.”

If nothing else, the planning changes signal a vibe shift compared to the previous government. Labour has shown it is prepared to come down on the side of the YIMBYs (yes, in my backyard), rather than the NIMBYs, by re-opening two applications for large data centers which had previously been rejected by local planners. In her first speech after becoming Chancellor, Reeves said the government would reopen appeals into the schemes in Iver, Buckinghamshire, and Abbots Langley, Hertfordshire.

The Iver campus would see a 150MW data center complex built on a former landfill site. The plan was rejected by local councilors in 2022, while the former government’s secretary of state for communities and housing, Michael Gove, turned down developer Greystoke Land’s appeal of the decision last year.

Greystoke is also behind the plan for Abbots Langley, which would see a two-building campus built on land near the M25. Three Rivers District Council denied this application in January because it said it did not meet the criteria for building on Green Belt land.

The future of London

BMI’s Lombatti expects the London market to continue to grow outside Slough and the Docklands, primarily due to space constraints.

Rachel Reeves (L) with Angela Rayner, Deputy PM

“The big cloud vendors want to expand their availability zones and they don’t want their data centers in other parts of the UK because the addressable market is in London,” he says.

“You’ve got the financial services industry and you’ve got a lot of AI startups who need capacity to train their models. Where these facilities are built will depend on the use case, but a lot of them are going to be outside the M25 because of space and power constraints.”

Lombatti says he expects core data centers with lower power density, for tasks such as AI inferencing, will continue to be built within London, but larger facilities will inevitably move further out. Microsoft is known to be developing at least two campuses up north in Yorkshire, and a hyperscale campus in Lincolnshire was recently given the go-ahead.

“The average financial services client is fine with building outside the M25,” he says. “The latency issue is a direct consequence, but there is very little space available to build in London. The high-density facilities for workloads like AI training will have to be outside the M25 - Hertfordshire is one location and we think Surrey is another which could be explored.”

Will the new government make a difference? Data center operator Colt DCS is invested in the London market, with several facilities around the capital including one in Welwyn, Hertfordshire. Earlier this year, it announced plans to double the size of its planned flagship facility in Hayes, West London.

Matthew Cantwell, the company’s director of product and propositions, says that, regardless of Downing Street proclamations, an open dialogue with local authorities remains vital. “It’s good that the government recognizes the importance of digital infrastructure, but our view is that the key is to have good engagement with local government, to choose an appropriate site and show you’ve got a good story for the local economy,” he says. “That’s what we’ve been doing in Hayes, and we have a great relationship with Hillingdon Borough Council.”

Cantwell would like to see the new government address the power situation. It has already reversed a ban on new on-shore wind projects to try and add additional renewables to the grid, but Cantwell says the more pressing issue is allowing companies like Colt access to existing clean power.

“Investment in the grid, and the distribution of renewable energy to ensure it can get to the right places, is critical”
>> Matt Cantwell, Colt DCS

“The Iver substation [which serves West London] is being expanded, but all the power is allocated up until 2029,” Cantwell says. “We already have our allocation, but if you don’t that’s potentially going to be a constraint. Investment in the grid, and the distribution of renewable energy to ensure it can get to the right places, is critical.”

Digital Reef is hoping to get permission for its Havering site, via a planning instrument known as a local development order, later this year or early in 2025. DCD understands it already has an unnamed hyperscaler lined up to be the anchor tenant at the site if it is given the go-ahead.

There is optimism that the new government will boost the UK’s data center industry. “The UK’s digital economy is a huge opportunity, but you need the infrastructure that goes with that,” Digital Reef’s Alexander says. “What the government does now is not going to help with our [Havering] project because that’s already in the process, but it could help with future projects, which is great.”

She adds that some of the proposed changes could help make the UK more attractive to international investors looking to back digital infrastructure projects. “The number one thing which puts investors off the UK is the planning system, so I think some of the changes being proposed to make it easier to build are really positive, and we have to hope those filter down through the layers of government to the local level.”

LABOUR KILLS SUPERCOMPUTING DREAMS

While the new government is seemingly keen on some new digital infrastructure projects, it has dealt a blow to plans to increase the UK’s national compute power.

In August, the Department for Science, Innovation and Technology (DSIT) confirmed it had shelved £1.3bn ($1.66bn) in funding for tech and AI projects that had been announced by the previous administration.

Canceled schemes included the exascale supercomputer that was set to be built at the University of Edinburgh and the AI Research Resource (AIRR), a shared supercomputing resource for scientific research linking high-performance machines at the universities of Bristol and Cambridge.

The news means it is unlikely the second phase of development of the University of Cambridge’s Dawn supercomputer, which would have made the machine the UK’s most powerful AI supercomputer, will go ahead unless external funding can be found.

DSIT said it had taken the decision because the outgoing government had not allocated funding for the projects.

Power and cool AI with one complete solution.

AI has arrived and it has come with unprecedented demand for power and cooling. Untangle the complexities with Vertiv™ 360AI, including complete solutions to seamlessly power and cool AI workloads.

Grid capacity demands vs renewable targets: uneasy bedfellows?

How renewable developers are rushing to meet data center energy needs

Recent years have seen data center priorities switch. Where power was once one of the more straightforward elements of a data center project, it has arguably become the most critical consideration before a project can get off the ground.

A rush to grab capacity amid an AI boom, combined with net-zero goals creeping closer, is forcing data center firms to look seriously not only at when and where they can get power, but also at the source of those electrons.

In the short term, most operators will likely take any power they can get, but amid an ongoing grid transition to renewable power, any company serious about its net-zero goals is going to have to think long and hard about power in the near future.

A changing grid mix, increased data center capacity

Energy demands are up. Electricity consumption of data centers (excluding cryptocurrencies) is estimated to have accounted for about 1-1.3 percent of global electricity demand in 2022, a share that could rise to between 1.5 and 3 percent by 2026, according to recent International Energy Agency (IEA) projections.

In the US, the IEA says data center energy use in 2022 is thought to have ranged between 1.3 percent and 4.5 percent of the country’s total consumption – though the agency notes reliable data in aggregate is hard to come by. Data center capacities since then have only ballooned as more AI-focused facilities have come online.

Data center developments in the US drove a one percent increase in commercial electricity usage across America over the last four years, the US Energy Information Administration (EIA) said in its own recent report, with electricity demand up by 14 billion kilowatt hours (BkWh) in the US between 2019-2023. In areas without major data center development, energy demands largely decreased.

“If I wanted to build a renewable project in California today, it might be eight or more years before that project is actually delivering any power to the grid”
>> Oliver Kerr, Aurora Energy Research

Utilities are seeing huge spikes in energy demands from data center customers.

Xcel Energy revealed during its Q2 2024 earnings call it has a pipeline of 6.7GW of new data center projects in the works; another provider, Oncor, said it had seen 59GW of data center connection requests.

AEP said it has 15GW of data center loads coming online by 2040, while NextEra has 4GW of data center load in its pipeline. PG&E has said more than 3.5GW of data center capacity is due to come online in California over the next five years.

Alongside this increased demand, the grid is changing. More renewables are coming online, and older fossil fuel power plants are being retired. According to the IEA’s 2024 mid-year report, solar power alone is expected to meet roughly half of the growth in global electricity demand to 2025. Together with wind power generation, it will make up almost 75 percent of the increase.

But the transition isn’t happening evenly, smoothly, or at the same rate that data centers are being developed. The North American Electric Reliability Corporation (NERC) published its annual Long-Term Reliability Assessment in December. In it, the nonprofit warned of the potential capacity shortfalls that rising peak demand and the planned retirement of 83GW of fossil fuel and nuclear generation over the next 10 years could create on the grid.

NERC cautioned that the Midcontinent Independent System Operator (MISO) area – which includes data center markets including Illinois, Indiana, Iowa, and parts of Texas – is projected to have a 4.7GW shortfall if expected generator retirements occur, despite the addition of new resources that total more than 12GW.

Marc Ganzi, CEO of DigitalBridge, recently warned that data centers will run out of power in the next two years. During his company’s Q1 earnings call, he said: “We’re kind of running out of power in the next 18 to 24 months. If you think about how much power remains on the US grid, we’re down to less than 7GW on the US grid.”

“It’s power transmission and distribution that are constrained. Transmission grids are capacity-challenged. And imagine, if you think it’s hard to get a new cell tower permitted, think about building new transmission towers or substations.”

Good times for renewables amid data center demand

Despite the warnings of shortfalls, times are good in the renewable sector.

“It's absolutely booming,” Oliver Kerr, managing director for North America at Aurora Energy Research, tells DCD. “Things were going well before, but the Inflation Reduction Act (IRA) has really supercharged the sector. The number of projects that are looking to connect to the grid in the next few years are astronomical.”

Silicon Ranch is one of the companies taking advantage. It operates more than 150 solar projects across 15 states, with 5GW in operation and development. Data center customers include Meta, Microsoft, and Tract. “We're seeing continued significant interest; as the sustainability commitments are there from data center companies, the ability to meet those goals is extremely important,” Silicon Ranch co-founder and chairman of the board, Matt Kisber, tells DCD.

John Wieland, chief development officer at Leeward Renewable Energy, adds that his company has seen “explosive” growth in demand. It has 3GW of renewable power in operation across 26 projects, with another 2GW in development, and supplies the likes of Microsoft and Digital Realty. “We're not seeing any slowing indications from these hyperscalers and colo providers,” Wieland says.

Too much of a good thing

But the renewable energy industry is becoming a victim of its own success. Wait times for new solar and wind projects to be connected to the grid have risen sharply.

“Historically speaking, there have been very little in the way of financial commitments that developers have to make to put a project in the interconnection queue,” says Aurora’s Kerr. “They have the option to build rather than the obligation to build. And that became a vicious cycle.

“The more people putting projects in the queue, the more that incentivized other people to start trying to put their projects in the queue to secure a spot. That dynamic feeds on itself, and you get a lot of speculative projects.”

Wait times range from around two years in some locations to eight in others – driven largely by a combination of the complicated studies grid operators have to undertake to assess the impact developments will have locally and the time it takes to physically connect so many new projects to the grid.

Power projects seeking to connect to the US grid increased by 27 percent in 2023, according to a report from the Department of Energy's (DOE) Lawrence Berkeley National Laboratory (LBNL). The report suggests around 2.6TW of projects have joined the interconnect queue – around twice as much as the US's existing generating capacity. The majority of that was solar or battery storage – around 1TW each.

The number of new projects joining the queue has continued to ramp up, from 561GW in 2021 to 908GW in 2023. But the average length of time projects take to deploy is also increasing; going from interconnect study to commercial operations is now estimated to take five years, compared to two back in 2008.

In California, total energy generation capacity is about 200GW, half of which comes from renewable or low-carbon sources such as nuclear. As of last year, the California Independent System Operator (CAISO) network has an interconnection queue totaling more than 500GW of projects waiting to come online. In Texas, grid operator ERCOT has seen the interconnection queue for grid-scale solar and battery storage projects swell to 355.4GW.

“If I wanted to build a project, starting in California today, it might be eight or more years before that project is actually delivering any power to the grid,” says Kerr.

Transmission – a problem for renewables and data centers

In April 2024 the US government set out a goal to upgrade 100,000 miles of transmission lines over the next five years in order to ensure the country’s infrastructure is ready to meet the needs of a grid that is heavily reliant on renewables and that is facing increased demand from data centers and electric vehicles.

Just 55 miles (88.5km) of high-voltage transmission were built in 2023, according to Grid Strategies, despite spending hitting an all-time high of $25bn. Some 90 percent of that investment was driven by reliability upgrades and equipment replacement. Only 125 miles of new transmission lines were built in the first five months of 2024 – all between Arizona and California. Another $92 billion is set to be invested in transmission over the next three years.

“It's a little bit of looking into the future and also hoping that we did our business development and siting right a couple of years back”
>> Jesse Tippett, Adapture Renewables

“Transmission is a challenge not just for solar, but all forms of new energy generation,” Silicon Ranch’s Kisber says. “Our transmission system is vastly undersized to serve the growth that is going to take place. If we're going to meet the energy needs to serve data centers’ and others' needs, we're going to need more transmission.”

Kisber, a former Tennessee Representative, likens the transmission network to the interstate highway system: “It used to be easy to find an on-ramp or real estate around an on-ramp or off-ramp to the interstate. Today, it is a much more difficult challenge because it is built out, there's congestion in a lot of places, and fixing that congestion takes time. We have the same issues with our transmission system.”

Amid grid-wide congestion, renewable operators say digging into transmission maps and project queues is key to finding an advantage. These companies have whole teams looking for opportunities where queues are shortest and might have projects likely to drop out.

Leeward’s Wieland is hopeful change is coming. “It’s going to take a little time to work its way through, but I do believe the amount of projects in the queue in two years is going to be dramatically less,” he says. “And it's because of the procedures that have been implemented; the readiness milestones and the withdrawal penalties are getting to a point where customers are not going to be able to speculate on positions as they once were.

“There's so much economic development tied to the demand, and I think that's going to be the strong catalyst for the change.”

Can data centers be better at working with renewable companies?

Many of the grid issues are out of the hands of data center developers and operators. But one of the main ways to help ensure the electrons powering their data centers are green and continually flowing is to keep investing in renewable energy projects.

Cloud, data center, and telecoms firms remain major buyers of renewable energy.

Amazon is the world’s largest corporate buyer of renewable energy, while Microsoft has said it has more than 20GW of renewable energy under contract.

Google and Meta have invested in gigawatts worth of renewable projects worldwide, and the likes of Equinix and Digital Realty have also purchased a significant number of renewable energy power purchase agreements.

Despite these efforts, Google has seen its emissions jump 48 percent in five years due to the company’s data center build-out and focus on AI. Likewise, Amazon’s 2023 ESG report showed its annual Scope 1 emissions rising from 13.32 million metric tons of carbon dioxide equivalent (MT CO2e) to 14.27 million last year.

It is a similar story for utilities, with coal-fired and natural gas generation in the US expected to grow by around two percent and 1.5 percent respectively, leading to an increase in emissions, before dropping next year. After declining 8 percent in 2023, the US is expected to be one of the few advanced economies in 2024 where power sector emissions increase year-on-year, rising by slightly below 2 percent.

Ensuring new data centers’ energy demands are matched with renewables is key to net zero goals, but DCD was surprised to hear about the general lack of coordination with data center operators about where projects are happening.

At a time when the technology industry is grasping at any capacity it can get, it seems renewable operators are playing whack-a-mole and making informed guesses about where cloud and colo firms are developing projects.

“Access to clean energy is only one component of their site selection criteria,” says Leeward’s Wieland. “We're not being told specifically where they are going before they conduct their site selection activities. It's more us figuring out where they're going, and being clever in how we site our assets to serve their needs.”

Jesse Tippett, VP of power marketing origination at Adapture Renewables, said data center companies are “not always forthright with where they will be developing.”

He says: “If we do our homework right and happen to have projects in areas that they are looking at, then we're well matched and can do a deal.

“It's a little bit of looking into the future and also hoping that we did our business development and siting right a couple of years back.”

Adapture runs 36 solar projects totaling 262MW across the US, with another 4GW in development, and counts Meta among its customers.

Tippett says more trust between data center and renewable developers is needed to coordinate projects and ensure demand and capacity are well-matched.

Both sides need to be “very transparent about where they have projects,” he says. “By working together earlier, being mutually committed to having each other's best interests in mind, you can start to early on identify what could be an opportunity to work together.”

While Tippett acknowledges such hand-in-glove thinking might see companies give up opportunities to look for the highest bidder on projects, the assurance of a customer means developers can focus on finishing development and moving to the next project in parallel. Data center firms, meanwhile, get the reassurance of available renewable power, perhaps before a facility is even built.

Soaring demand for renewable power means data center firms are working with multiple providers across different markets to secure green electrons.

“I think there is a preference to do repeat deals with businesses where they have established relationships,” says Wieland. “However, given the demands, I also see there being a lot of opportunity across the market.”

When asked what data center operators are looking for in renewable partners, DCD is repeatedly told that certainty is a key factor amid so many speculative projects.

Kerr notes the industry measures many of these speculative projects in what’s referred to as “braggawatts” - projects that promise big numbers that they don’t deliver.

“They’re looking for companies with a track record and reputation for delivering on what we say we're going to do,” adds Silicon Ranch’s Kisber. “Silicon Ranch has delivered on every project we've contracted for. That element of certainty is extremely important.”

On-site generation & behind-the-meter projects a solution?

The IEA report notes some data center providers are looking into on-site generation to circumvent grid connection challenges.

At the Aurora Renewables Summit in London in July, Bruce Huber, CEO of Alexa Capital, said that while the industry waits for interconnection reform, behind-the-meter (BtM) projects – where high customer loads are located at the same site as renewable projects – may be a good option for de-risking investments and avoiding connection woes.

Huber called BtM projects a “huge aspect” of his company’s funding: “It is real, and it is growing,” he said. He used an example of a project in the MISO connection area around Ohio that is developing megawatt parks for industrial customers.

“They're building these megawatt industrial parks, and then just adding and inviting more and more industrial customers to what are really behind-the-meter microgrids,” he said.

However, BtM or large-scale on-site renewable projects involving data centers are few and far between. Tencent has deployed a large-scale 10MW solar plant at one of its data centers in Tianjin; Amazon recently acquired a data center behind the meter at a nuclear power station in Pennsylvania; and several projects in Dublin, Ireland, have filed to place gas power generators on-site to circumvent a moratorium on new data center grid connections.

“BtM projects are something we're heavily focused on,” says Adapture’s Tippett. “If there's a real win-win, the data center and renewable asset can come online sooner; being colocated can shrink that interconnection timeframe.”

He notes that data center companies could look at building their own on-site renewable projects.

“There will be some BtM facilities, and maybe some of those will be built by the data center companies themselves,” he says. “It’s a great model, but it's a very challenging business.”

How (non-nuclear) BtM colocated projects might look for data centers is still unclear. Given the need for uptime and redundancy, it’s unlikely many operators would be willing to be entirely off-grid and reliant on wind or solar – even if longer battery deployments were available.

But some operators might be brave enough to try such a deployment if demand continues to outstrip supply. 2024 saw startup Aston partner with JLL to market self-contained, grid-independent ‘clean energy campuses’ to data center customers.

The first campuses will apparently be located in Colorado, New Mexico, and Texas, with its existing development pipeline representing 2GW of power. Projects started in 2024 will go live in 2026.

“[BtM] projects are going to have to become more common,” says Aurora’s Kerr. “For things like a data center, your biggest cost is power. If you're thinking about putting generation on-site versus buying from the grid, building your own power generation might be a fairly small part of the overall costs.”

“It makes sense to think about building generation on-site, and it just removes one of those worries that people have of not having sufficient power.”

Energy Storage for Resiliency

UtilityInnovation’s Battery Energy Storage Solution for resiliency is a fully integrated power system born from Volvo Group’s industrial electric drivetrains, providing ultimate modularity and performance to meet the needs of the toughest data center applications while maximizing capital and available space.

Powering AI innovation

Vertiv™ Trinergy™ is not just a power solution; it is the reliable driving force behind AI workloads. Trinergy™ sets itself apart with its innovative 3-level IGBT technology to meet the volatile demands of next-generation AI systems. Supercharge your AI capabilities.

Helios: Colocation, colocation, colocation

After an M&A spree in the last few years, Africa-focused towerco Helios is looking to add more tenants to its existing towers

Paul Lipscombe Telecoms Editor

"

We created a new five-year strategy last year [2022], which we dubbed ‘22 by 26,’ which is to grow our operations to 22,000 towers by 2026,” Sainesh Vallabh, group commercial director and regional CEO for Southern Africa, Helios Towers, told DCD in May 2023.

Fifteen months on from our last conversation, Helios Towers’ strategy has been altered somewhat, with its new approach favoring that of a colocation model instead.

The London-based company is a subsidiary of Helios Investment Partners, founded in 2004 to focus on private investment in Africa.

Helios owns and operates more than 14,000 towers across nine markets, including Tanzania, the Democratic Republic of Congo, Congo Brazzaville, Ghana, South Africa, Senegal, Madagascar, and Malawi.


However, the total number of towers isn’t crucial, as Vallabh points out; what matters is the number of tenants that Helios supports. At present, that number is upwards of 28,500, per its 2024 first-half figures.

Despite the changes, the balance sheet seems to suggest the new strategy has been a success. In 2023, total revenue grew by 29 percent to $721 million, as operating profit jumped up by 82 percent.

“Since our launch of the strategy, we have shifted it to 2.2 tenants per tower by 2026. So, that's a shift from the acquisition of assets to a focus on colocations and driving a second tenant on our infrastructure, and in some cases, a third or even fourth tenant on our infrastructure, which can benefit all stakeholders,” explains Vallabh.
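The revised target is easier to parse against the company’s own rounded figures. A quick back-of-envelope check (approximate, using only the numbers quoted in this piece) shows the gap between today’s estate and 2.2 tenants per tower:

# Rough check using the rounded figures quoted above (H1 2024).
towers = 14_000      # "more than 14,000 towers"
tenancies = 28_500   # "upwards of 28,500" tenancies
print(f"Current ratio: {tenancies / towers:.2f} tenants per tower")
print(f"Tenancies implied by 2.2x on the same estate: {2.2 * towers:,.0f}")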

Small cells

Helios’ tower offerings vary, notes Vallabh, from 50-meter-tall macro towers to rooftop sites, along with others attached to street furniture such as lampposts.

Like many tower companies and mobile carriers, Helios also sees the potential in deploying small cells that take advantage of existing street assets.

Small cells are seen as a simpler alternative to bulkier telecom infrastructure, which might look out of place or draw criticism from residents.

“Availability of space, especially in urban areas, can be a challenge,” Vallabh says. “Many are overpopulated, and as a result, there's just no space to put up these towers.”

“The densification of networks is going to be focused on either small cells or in-building solutions. These buildings exist already, so amplifying networks and creating densification of networks through in-building solutions is going to be key.”

Vallabh also highlights the company’s success with distributed antenna systems. While small cells function as a network of individual cells, DAS can use multiple nodes to function as a single large cell.

He points to a case study in Tanzania, where a fiber cable was deployed from the existing telecoms tower to the lampposts sitting on either side of a market. An indoor sector was additionally planned to improve basement coverage.

“This provided connectivity to 8,000 shoppers in an intense market area that wasn’t previously connected,” he says. But it’s not just towers and small cells that Helios counts amongst its assets. The company also owns data centers.

“Mobile operators' core business is not to find tenants on a site, their core business is to sell airtime, so that’s where the focus, energy, and capital goes into for them. So, while underutilized, it's also under-focused by the mobile operator”

In 2022, Helios’ parent company, Helios Investment Partners, acquired two data center companies: Morocco's Maroc Datacenter (MDC) and Kenya’s IXAfrica.

Discussing opportunities around Edge data centers across its sites, Vallabh is positive about what Edge computing can deliver and notes that the business is always looking to expand beyond its core tower and power offering into other segments such as data centers and fiber.

However, he says it will take time for Edge data centers to materialize on a wider scale across its market, due to demand.

“Looking at how the evolution of digital networks is going to play out over the next 10 years, and looking at what products need to be developed today to cater for that growth into the future. So yes, part of that is absolutely Edge data center capabilities.”

But, he says, the company has not yet “seen the level of interest from mobile operators” to have Edge infrastructure as a central part of its offering for the moment.

That's not surprising, Vallabh adds, because “the Edge requirement is only really relevant when there's extremely low latency that is needed.”

He explains: “Demand [for Edge] is created in Asia or in the US and parts of Europe, it’s very necessary there because gaming, video on demand and streaming are very much the uses of connectivity in these markets. Whereas in Africa, it's not so much so.”

He blames this on the slower adoption of 5G, noting that 4G is still the dominant technology across the continent. Vallabh expects the emergence of a 5G ecosystem in African markets to drive these opportunities in the next decade or so.

Driving growth in Africa

Helios is intent on snapping up telecom infrastructure, which Vallabh says is often “undervalued” by mobile network carriers.

Carriers across the globe have spun off these assets and penned sale and leaseback deals with companies such as Helios, to make the quick gains necessary to fund their respective 4G and 5G network rollouts, which are often capital-intensive affairs.

"These things are cyclical. So there will be a time where we will start looking at new markets, and we are currently keeping an eye on where opportunities are"

“The core business of mobile operators is not to find tenants on a site, their core business is to sell airtime, so that’s where the focus, energy, and capital goes for them,” Vallabh says. “So, while underutilized, it's also under-focused by the mobile operator.

“The assets need care and they need capital and better management. As a result of that, we always see a massive improvement in network availability post the acquisition of assets from a mobile operator.”

However, in Africa the market is somewhat different, explains Vallabh, noting that the continent is behind markets such as Europe and Asia.

That much is evident when assessing the level of mobile connectivity. In more mature markets, 5G is the focus, while in Africa, telcos are focusing mainly on 4G.

A report from the GSMA estimates that mobile subscriptions in Africa will exceed 1.3 billion by 2030, as smartphone usage on the continent soars.

Helios doesn’t only operate in Africa, but also in the Middle East. The fourth market the company has entered in the last couple of years was Oman. It acquired 2,519 passive infrastructure sites from Omani carrier Omantel for $495 million in December 2022.

“I suppose the region in itself is probably one of the last regions where mobile network operators are looking to dispose of infrastructure,” Vallabh says.

“The US led with independent Towercos some years back, followed by Asia, while Africa was a bit later to this trend around about 10 to 12 years ago.

“In the Middle East or in GCC markets that trend is still starting, so we've seen a few potential statements or some rumors around infrastructure being disposed of, etc. I think the trend will continue in the region.”

“Cyclical” M&A opportunities

When DCD spoke with Vallabh last year, the conversation focused on the growth of the telecom tower M&A market.

It’s an industry that is constantly driving headlines, as mobile network carriers sell their tower assets to towers-focused companies such as Helios, or global investment funds.

Several African telcos, including the likes of MTN and Telkom have sold assets to tower companies.

Pushed on the possibility of expanding its portfolio of telecom towers, Vallabh says the company has slowed down on its M&A drive, which has seen it enter Madagascar, Malawi, and Senegal in the last few years.

“We're not really looking to enter into new markets at this moment in time,” he says. “We've just entered and successfully integrated the four new markets that we entered between 2020 and 2022. Given the macroeconomic environment, the cost of borrowing, etc, we don’t want to jump into new markets at this particular time.”

That’s not to say that Helios isn’t keeping tabs on additional potential new markets, adds Vallabh.

“These things are cyclical. So there will be a time where we will start looking at new markets, and we are currently keeping an eye on where opportunities are.”

Adapting to challenges

Vallabh explains that Helios’ approach to each territory can vary, acknowledging that operating in an emerging market can present different challenges to those in more developed markets.

Providing an example of one such challenge, he notes that a lack of good roads in The Democratic Republic of the Congo (DRC) means that it can be difficult to carry out logistics at times.

“In DRC, there's very little road infrastructure, so logistically moving our infrastructure from the port or the warehouse to where it needs to be built is quite an operational focus,” Vallabh says. “If you don't get that right, then it becomes extremely difficult.”

Another challenge that the market faces is that of power supply. In South Africa, there are well-documented power infrastructure struggles, notably load shedding, an issue the country has grappled with for years.

Vallabh says that load shedding, which aims to reduce the load on a nation’s grid supply, doesn’t have too significant an impact on Helios’ operations.

“Load shedding is quite a specific element to South Africa,” he says. “There are a few markets, such as Ghana, for example, that have periodic elements of load not being available due to generation capabilities, but it's not significant.”

According to Vallabh the issue of load shedding enables Helios to engage with its customers to develop alternative solutions to work around the limitations.

The cost of delivering power to its sites can be quite expensive too, Vallabh adds, noting the rising cost of fuel for diesel generators at its sites.

Sustainability is also a challenge, particularly when trying to match the power demand needed from network carriers.

In 2021, Helios outlined plans to reduce its carbon emissions by 46 percent by 2030, and become a net-zero carbon emissions business by 2040.

“We provide power where there's virtually no power and have solutions for that, but the challenge comes where the cost of providing that solution becomes uneconomical or against sustainability objectives,” he says.

“At this point we have not seen the level of interest from mobile operators to have that in our infrastructure. And that's not surprising. Edge requirement is only really relevant when there's extremely low latency that is needed”

“What I mean by that is, if you are providing power to a site where there isn't a grid infrastructure then it’s most likely going to be a diesel generator. Now, of course, you can reduce the reliance on diesel generation by introducing renewable sources of energy.”

One of those types of renewable energy is solar power, he points out, but it is not always an option.

“Given the loads that are required by mobile operators on the sites, it's not feasible to always rely on solar only,” he says. “The sun is only up for a certain amount of hours a day, and you have other weather considerations, so it's not a fully reliable source.”

Sometimes, having diesel generators is the easier option, he says, though these create carbon emissions.

Tower consolidation inevitable

The opportunity for expansion in the market appears strong in Africa, with Helios claiming on its website that there are around 260,000 cell towers still owned by mobile operators across the continent.

Should the telcos in Africa continue following the trend in other markets, such as Europe, Asia, and North America, and sell or lease off more of those assets, then a large chunk of those could end up being snapped up by Towercos such as Helios.

But amid the growth, Vallabh is cautious not to get carried away.

In the more immediate future, he expects market consolidation to take shape across Africa’s telecom tower market, though doesn’t comment specifically on whether Helios will engage in such opportunities.

On consolidation, Vallabh says: “Some may be forced, while some might be part of a global strategy refocus and re-shifting in terms of investment in Africa.

“The Towerco industry is becoming more competitive, as customers are becoming smarter, and understanding what specific requirements they need. There’s going to be a shift in how the Towerco model is deployed in Africa in the future.”

> Specializing in Data Centers, Mission-Critical Systems, and Specialty Applications.

Our engineers innovate power distribution solutions that meet complex space requirements, minimize safety risks, improve system reliability, and enable higher power densities.

Our solutions deliver reduced energy waste, scalable configurations, and excellent lifecycle value.

Every customer presents an ever-evolving set of requirements and challenges. Our goal is to develop the most effective, class-leading power distribution and integrated product solutions that support even the most complex applications.

Contact Us:

support@powersmiths.com

> Power Distribution Solutions

Static Transfer Switch

from 400 to 1200A Redundant STS design for power availability and site maintainability

Power Distribution Unit

from 30 to 1350kVA Versatile PDU optimized for safe maintainability

Remote Power Panels

from 42 to 168 circuits Free-standing, Assembled RPP

Here, Vaire, and everywhere: A bet on billions of reversible computing chips

Meet Vaire Computing, the chip startup hoping to reverse the trend for hotter chips

the only way to increase compute power will be by increasing water and energy usage. Vaire wants to decouple energy and water resources from the compute growth, allowing for the continuous exponential growth of computing without an equivalent depletion of resources.

e want to build 60 billion chips in the ten to 15-year range,”

To put it another way, whatever Arm is currently shipping, Rosini wants Vaire to do better.

WRodolfo Rosini, CEO of Vaire Computing, says with complete sincerity. “The question is whether they are all going to be made by us or licensed out.”

"Chips are designed and built to be very fast, but they're very wasteful in terms of energy," Rosini explains. "If you were to design a chip today from first principles, you would probably do it the way we're doing. But the chip industry built this architecture 50 years ago and continued optimizing it over, and over, and over."

At its most basic level, reversible computing aims to reduce the waste heat generated by traditional processors.


Founded in 2021 and based in London and Cambridge in the UK and Seattle, Washington in the US, Vaire Computing is a reversible computing startup developing what it calls “near-zero energy chips,” a term that Rosini says was coined by the company and which it prefers to the industry accepted term of “resonant adiabatic reversible computing.”

When chips execute operations, the compute power needed generates waste heat – because energy cannot be created or destroyed, an input that receives two units of bit-energy uses one unit of bit-energy to perform the output and then loses the second bit-unit in the form of heat.
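To put rough numbers on that floor (this worked example is ours, not Vaire's): Landauer's principle says that irreversibly erasing a single bit must dissipate at least kT ln 2 of heat, and today's logic gates sit several orders of magnitude above that minimum, which is the gap reversible designs hope to exploit. The switching-energy figure below is an illustrative assumption.

```python
# Back-of-envelope sketch, not Vaire's figures: Landauer's principle puts a floor
# of k*T*ln(2) on the heat released when one bit is irreversibly erased.
# Reversible logic avoids erasing bits, so in principle it can dip below this floor.
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # room temperature, K

landauer_limit = k_B * T * math.log(2)   # ~2.9e-21 J per erased bit
typical_switch = 1e-15                   # assumed ~1 fJ per conventional logic operation

print(f"Landauer limit at 300 K: {landauer_limit:.2e} J per bit")
print(f"Assumed conventional switching energy: {typical_switch:.0e} J")
print(f"Gap between the two: ~{typical_switch / landauer_limit:,.0f}x")
```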

Where traditional semiconductors have a small number of cores that run super hot and fast, Vaire plans to build chips with a large number of ultra-efficient cores.

In about five years' time, Rosini believes, Moore's Law will hit a wall, at which point the only way to increase compute power will be by increasing water and energy usage. Vaire wants to decouple energy and water resources from compute growth, allowing for the continuous exponential growth of computing without an equivalent depletion of resources.

The difference will be so extreme, the company claims, that while in a classical chip almost 100 percent of the energy is wasted, Vaire's semiconductors will use almost 100 percent of the energy for compute, wasting almost nothing.

“Instead of dissipating the charge into the packaging and losing the energy through heat, it is recycled internally,” Rosini says.

This has two effects. Firstly, the chip runs cold, meaning you don't need any water to cool it, and secondly, it needs almost no energy to make it work – assuming they can pull it off.

Cornering the market

Vaire Computing is the brainchild of Rosini, a serial tech entrepreneur from Italy, and the company’s CTO Hannah Earley, who completed a PhD at the University of Cambridge in Applied Maths and Theoretical Physics. Earley has previously said that it was while studying that she became interested in “unconventional computing.”

The concept of reversible computing is not new, having been first proposed by scientists at MIT in the 1990s. However, it never took off because, at the time, there was no viable market for parallel computing. Even in more recent history, when use cases emerged, Rosini says that the accepted view on reversible computing had largely been that "it doesn't work, and even if it does, it's slow."

However, Rosini and Earley didn't share this stance, and once the founders had become convinced of the technology's potential, they spent three years immersed in reversible computing to make sure that before the company brought anything into the public domain, it was ready to debunk all previous criticisms.

Unlike other semiconductor technologies such as photonics, where there are hundreds or thousands of experts that startups can approach, reversible computing was such an under-researched space that, according to Rosini, there were only a handful of people globally with any knowledge or expertise.

“We all knew each other, so we just hired them,” he says. Vaire Computing now has 11 employees located in the UK and US.

Rosini says Earley is one of the world's foremost experts in reversible computing, and her PhD supervisor at Cambridge was Mike Frank, who built one of the original reversible computing chips at MIT. Shortly before Rosini spoke with DCD, Frank joined the company as a senior scientist.

Unlike in a classical chip where almost 100 percent of the energy is wasted, Vaire's semiconductors will use almost 100 percent of the energy for compute, wasting almost nothing

In July 2024, Vaire announced it had raised $4 million in a seed round, bringing the total raised by the business since its inception to $4.5m. This pales in comparison to the daily spend of its well-funded competitors.

The company was also part of the original cohort of semiconductor startups that were selected for the UK government-backed ChipStart incubator, which helped provide organizations with access to the full Silicon Catalyst ecosystem, including design tools, IP, and prototyping capabilities. The company was also one of 10 UK-based companies selected for the Spring Cohort of Intel's Ignite startup accelerator program.

Reversible computing is the future, maybe

Rosini says Vaire Computing was founded out of necessity. He and Earley were so completely convinced by the technology that they couldn’t believe that companies such as Intel and Nvidia were not sold on its potential.

“It was a completely open field and we just could not believe that no one else had reached the same conclusion,” Rosini says.

Rising demand for compute underpins a significant percentage of economic growth in the Western hemisphere and the AI explosion that has been witnessed over the last couple of years is only fueling that need for more processing power. However, it appears that many of the big players in the chip industry are more focused on developing hardware that can support the increasingly dense AI workloads that customers are demanding of them, rather than considering the long-term implications it could have for the world's resources.

In the beginning, Rosini said the company did consider other architectures, studying photonics and thermodynamics, before concluding that, while they might have their benefits, reversible computing has greater potential to scale.

The decision to pursue this technology in earnest was also fueled by the company’s belief that the chip architecture that will ultimately win out is not the one that is necessarily the fastest, but the one that operates at the lowest energy point.

“We think that in about four or five years, there will have to be [an industrywide] switch, and in ten or 15 years, every new chip that will come out of a foundry will be reversible,” argues Rosini.

He also notes that unlike in the 1990s, when Intel was able to improve single-core processor performance by about 50 percent each year, making it hard to make the case for alternative chip architectures, the industry has now reached a point where those performance gains are under two percent. Rosini believes Vaire will be able to exceed this by ten times, if not more.

Clearly not short of ambition, the company already has the next 20 years mapped out. In the short term, it plans to have its first tape out in Q1 2025, with the aim of having full-scale production up and running by 2027. Between now and then, the company intends to engage in some more aggressive fundraising to allow it to achieve the milestones it has set out for itself.

“We hold all the key patents for the technology and we have the top talent, so we're trying to get to a product before everyone else,” Rosini says. “I expect in about four or five years we’ll face a lot of competition but I think probably we might want to collaborate rather than trying to fight Nvidia and Intel.

“I mean, I'm bullish about building 60 billion chips - but going after Nvidia, I’m not so sure.” 


Is there a nuclear solution to data center power concerns?

SMRs gain steam, but remain unproven

The energy demand for data centers is expected to surge over the next five years, with US consumption alone likely to rise from 17GW in 2022 to 35GW by 2030. AI-specific data centers are expected to be the primary driver of this growth, each requiring upwards of 80MW of power capacity, compared to the 32MW needed for a standard facility.

However, with most of the major operators setting lofty net zero targets for their operations, balancing the need to secure more power with those emissions reduction commitments has become a significant concern.

As a result, data center operators are increasingly exploring the possibility of new energy sources, with nuclear energy emerging as a potential solution.

Why nuclear power?

Data center firms have continuously invested in low-carbon energy over recent years. For example, Amazon Web Services (AWS) has been the largest corporate purchaser of renewable energy globally since 2020, while the likes of Microsoft, Google, and Meta have all invested in gigawatts of renewable power.

Zachary Skidmore Contributor

However, renewable energy is often intermittent and dependent on location, making its viability as the primary power source for data centers questionable. Nuclear power has emerged as a potential solution to these challenges, offering a consistent, reliable power source that is not reliant on specific geographical placement to secure power.

James Walker, CEO of microreactor company Nano Nuclear, says data centers "are becoming increasingly energy-hungry and can't keep relying solely on the grid."

Walker says they need to generate their own power. “However, many zero-carbon energy systems are location-dependent, leaving nuclear energy a viable option,” he adds. “Even if they were to use coal or gas plants, those are also location-dependent. Nuclear power is the best solution because it isn't tied to specific locations and provides the most consistent energy output."

This reliability has already pushed the hyperscalers towards nuclear solutions. In March 2024, AWS signed an agreement with Talen Energy to acquire a 960MW data center powered by the Susquehanna nuclear power plant in Pennsylvania, demonstrating the industry's growing confidence in nuclear power.

Digital Realty's Binkley sees a clear synergy between nuclear power and the energy consumption profile of data centers. "The hope and promise of nuclear energy is abundant carbon-free 24/7 generation that matches pretty closely the load profile of a data center," Binkley says.

An Amazon spokesperson commenting on the Talen deal said that its intention was "to supplement our wind and solar energy projects, which depend on weather conditions to generate energy. Our agreement with Talen Energy for carbon-free energy is one project in that effort."

This is demonstrative of an approach that believes there isn't a one-size-fits-all solution when transitioning to carbon-free energy. All viable means are explored and tested to determine what is most applicable for each data center site.

Is large-scale nuclear a solution?

Nuclear power is well-suited to the needs of data centers because it provides the consistent power they require to stay online.

Additionally, nuclear energy addresses three critical concerns for data centers: cost efficiency, emissions reduction, and energy security. Though building nuclear power plants is expensive and time-consuming, large-scale sites become cost-effective once operational.

Extending the life of existing nuclear plants can help maintain the energy supply, especially in Western countries.

The International Energy Agency (IEA) estimates that extending these plants will cost between $500 and $1,100 per kW by 2030, resulting in electricity costs below $40 per MWh, making nuclear energy competitive with solar and wind in many areas.

"Providing a direct, always on, clean power supply makes sense for these growing needs," says Patrick O'Brien, director of government affairs and communications at Holtec International, which provides nuclear energy equipment and services.

"The problem would come if they don't look to deploy small modular reactors (SMRs) and try to acquire power from the grid, further stretching that system. From a developer perspective, the ability to procure clean power at a fixed cost could benefit both sides' needs for construction/operation cost stability and ensuring the power needs for the significant investment made on both sides from a cost perspective."

As a result, especially within Europe and the US, where there has been limited investment in large-scale nuclear over recent years, SMRs have emerged as a promising business case for data centers' growing energy demand.

The rise of SMRs

While traditional nuclear plants are a good fit for large-scale energy needs, SMRs are emerging as an enticing alternative, particularly for data centers. SMRs offer a more flexible and scalable solution, with an average capacity of 300MW, ideal for single data centers or small clusters.

James Walker of Nano Nuclear points out: "SMRs can be placed almost anywhere and offer an exceptional capacity factor, a key measure of energy consistency, which surpasses even that of gas or coal. Given these advantages, it's no surprise that tech companies have identified nuclear power as the preferred solution."

One key advantage of some SMR designs is that they don't rely on regular water for neutron moderation and cooling, instead using high-temperature gas or molten salt, enhancing safety and efficiency.

Moreover, SMRs require much less space and do not need the 10-mile emergency planning zone that traditional nuclear plants do. This makes them theoretically more acceptable to communities, as they raise fewer safety concerns and may require less regulatory oversight, though as the technology is still in its infancy this has yet to be put to the test.

Clayton Scott, chief commercial officer at SMR company NuScale, says: "SMR technology offers a cost-competitive, safe, and scalable solution compared to renewable energy sources and traditional nuclear power.”

The development of SMRs hinges on robust partnerships and supply chain capabilities. Scott notes: "When manufacturing our modules, our relationships with our long-term supply chain partners, many of which are strategic investors, are a significant source of strength. These partnerships are crucial for delivering high-quality, cost-competitive components, and we remain committed to developing a global supply chain to meet the growing demand for NuScale's technology."

This approach ensures the reliability and cost-effectiveness of SMR technology and enables data center operators to deploy SMRs rapidly and efficiently. Holtec’s O'Brien highlights the potential for SMRs to meet diverse energy needs: "I think SMRs will play a key role,” he says.

“The small footprint of a constant clean power generator will allow many industries to consider a new power source for their needs that can provide them with a growth opportunity. Additionally, the ability to deploy in any region and condition will allow developing areas of the world to have stable, clean power like never before."

Challenges and opportunities

Despite the promise of SMRs, several challenges remain. Like ‘traditional’ nuclear, the sector faces potential delays and cost overruns, which could undermine its competitiveness with renewable energy sources. Additionally, the diversity of SMR designs currently under development creates uncertainty over which technologies will succeed.

Digital Realty’s Binkley highlights this issue, and says: "On the SMR side, different reactor technologies are being promoted. I think there's a bit of a market acceptance curve there, too, to say, ‘hey, is this as good as it's being promised?’"

The sector also suffers from an eroded nuclear supply chain after a hiatus in nuclear construction during the 1980s and 1990s, necessitating the rebuilding of capabilities and the formation of strategic partnerships for early SMR projects.

Public acceptance is another critical factor. Nuclear power's association with disasters like Chernobyl and Fukushima fuels fears over the technology. However, companies like Nano Nuclear claim their technology is safe, and say they are addressing these concerns by forming closer partnerships with stakeholders.

"Our approach is not just about providing solutions but also about forming closer partnerships to understand the specific needs of our clients. This enables us to tailor our technology to meet the unique requirements of various industries," says Walker.

Governments also play a role in the success of nuclear solutions for data centers. Policies that streamline the regulatory environment, supporting the construction of new nuclear plants and the extension of existing ones, are crucial. However, concerns remain over the lead time of the regulatory process, with Binkley believing that "some of the timelines that are out there probably are a bit aggressive in terms of how long that will take."

Both Nano Nuclear and NuScale are facing allegations from short seller Hunterbrook Media that their timelines are unrealistic and that their products may not be able to live up to lofty claims. NuScale canceled a project in 2023 over a lack of demand.

Neither company, nor competitors like Oklo, have deployed a working SMR.

A look to the future

As data centers' energy needs continue to rise, nuclear power, especially SMRs, presents a compelling option - if the reactors can be proven to work.

While traditional nuclear plants provide cost-effective and low-carbon energy, they face challenges like high initial costs, strict regulations, and public concerns.

SMRs offer a potential alternative with lower costs, quicker development, and improved safety features, making them - if the technology delivers on its promises - well-suited to data centers that need reliable and scalable power.

"Nuclear power can be deployed anywhere,” Walker summarizes. “It isn't dependent on location and has the most consistent energy output. This reliability is why tech companies have gravitated towards nuclear solutions."

For small nuclear energy to become a key power source for data centers, a concerted effort is needed from policymakers, industry stakeholders, and the public to overcome the economic, regulatory, and social challenges.

First, however, it needs to overcome the doubts about whether SMR companies can actually deliver projects on time and on budget - something that would be a first in the nuclear sector. 

A mission with nuclear-scale challenges

The DOE's CIO Ann Dunkin talks to DCD about the technology needed to green the world's power

Georgia Butler Reporter

The US Department of Energy (DOE) wears many hats.

Finding its origins in the Manhattan Project, before officially being established in 1977 under President Jimmy Carter, the DOE now handles everything from nuclear weapons development and cutting-edge scientific research, to the decarbonization of the grid. And that is only the tip of the iceberg.

But with that diverse portfolio of responsibilities comes a variety of challenges, says DOE CIO Ann Dunkin.

The question of what is the biggest challenge is a “huge” one, Dunkin says. One of the more obviously complex areas is the nuclear mission, not least because the consequences of something going wrong are significant.

“Obviously, we have a nuclear mission, and that involves trying to ensure that nuclear secrets and capabilities don’t get into other countries where they don’t already exist,” Dunkin explains.

“We do a lot of things around developing technologies to help identify radiological components and protecting the intellectual assets to ensure that those are not transferred to other countries. We also maintain the nuclear stockpile.

"We thought we were done at some point, and then we realized that we need to replace those weapons with some regularity. I think we all hoped that post-Cold War, the world would become a safer place where there was less nuclear threat. But with the collapse of the Soviet Union, the nuclear threat changed dramatically, and now we have the rise of other nuclear powers. So we need to maintain that deterrent."

"All science is computational now, and the vast majority of our programs are better because we are able to deliver more computing capacity"
>> Ann Dunkin

From a cybersecurity angle, there is a need to protect those assets from hackers and, as a result, many of them sit on classified networks. Understandably, the ins and outs of those networks were not shared by Dunkin.

The need for resilience and cybersecure systems is also extremely pertinent to the DOE’s role in managing the power grid, which it does across 36 states.

Dunkin draws a comparison to the recent - and rather dramatic - CrowdStrike outage that took down hospitals, planes, emergency services, and other critical services globally.

“While the CrowdStrike outage wasn’t a cybersecurity event, you could see a cybersecurity event that looks a lot like that,” explains Dunkin, noting, however, that the US power grid was not impacted by the CrowdStrike incident due to a policy in which “nobody deploys untested patches.”

“The power grid was up and running, which obviously made it a lot easier for everyone else to solve all the other problems,” she adds.

Dunkin does reiterate, however, that the DOE is “not 100 percent confident,” in its defenses, and adds that “no one wants to challenge the bad guys to come together and have a go.”

The DOE, besides its obvious role of being in charge of energy policy and the like, is well known for sponsoring more physical science research than any other federal agency, and much of this is conducted through its system of 17 national laboratories.

Being at the forefront[ier] of supercomputing

Dunkin is a pretty firm believer in the importance of on-premise high-performance computing (HPC), though she notes that she is not in charge of procurement decisions.

"I am convinced that to maintain leadership in capability computing - where we actually advance HPC - we need to do it on-premise," she says.

“Not everyone agrees with me, but I really believe that's a capability we need to own, grow, and develop in the DOE.”

The DOE undeniably does take that mission seriously. Frontier, housed at the Oak Ridge National Laboratory in Tennessee, has held the top spot in the Top500 list since June 2022, when it debuted with an Rmax of 1.102 exaflops.

The DOE also has El Capitan, set to be housed at the Lawrence Livermore National Laboratory in California, and Aurora, hosted at the Argonne Leadership Computing Facility in Illinois, currently in development and set to overtake Frontier.

The DOE also uses cloud-based HPC for what Dunkin refers to as capacity computing, adding that in some cases this will be done via a hybrid model, starting with the on-premise HPC and then "bursting" into the cloud.

Regardless of whether it is cloud-based or on-prem, Dunkin argues that “what we do know is that supercomputing is critical to our mission.”

She explains: “All science is computational now, and the vast majority of our programs are better because we are able to deliver more computing capacity. Even if it's a physical experiment, we're modeling that system to try and understand the best experiments to do, and then we're analyzing data from those experiments to better understand our results.”

An example of such research Dunkin offers is nuclear fusion. Often viewed as the ultimate power source, fusion has the potential to provide limitless sustainable power by mimicking conditions in the sun, fusing light atoms into heavier ones.

"One of the great things about what we do is the diversity of the labs and the diversity of the bets. We have at least five labs working on quantum with different approaches. I'm not smart enough to know which approach is going to be the best - and that's kind of the whole point - but we get to make lots of bets, and we will get to see which bets pay off"
>> Ann Dunkin

Though several companies are working on fusion devices and promising to turn the theory into reality, current working assumptions suggest that it remains years, if not decades, away from fruition.

Still being worked on by scientists around the world, Dunkin states that there is a "direct correlation" between the success of fusion experiments and the computing capacity available.

Beyond traditional HPC, the DOE is also hedging its bets on quantum aiding in the clean energy mission.

The near future of research

Quantum computing is another technology that is still a work in progress, with many different approaches being explored. For Dunkin, this is part of what makes the DOE's work so exciting.

"One of the great things about what we do is the diversity of the labs and the diversity of the bets," she says. "We have at least five labs working on quantum with different approaches. I'm not smart enough to know which approach is going to be the best - and that's kind of the whole point - but we get to make lots of bets, and we will get to see which bets pay off."

Among those bets, the DOE is looking at different materials to build qubits - or quantum bits, the basic unit of quantum information - and other projects looking at how to build larger quantum systems and scale them. "I don't know who's going to win in terms of getting there, but we've got lots of smart people working on it and how we are going to commercialize that work," Dunkin says.

Artificial intelligence (AI) is also central to the DOE's work. In January 2024, the department's Pacific Northwest National Laboratory in Washington teamed up with Microsoft to find candidate battery materials. This work, using AI and traditional HPC via the Azure Quantum Elements (not related to quantum computing) service, reduced the possibilities of materials from millions to only a few options.

Starting with 32 million inorganic materials, AI models cut that down to 500,000, and HPC shaved it even further to just 18 options.

"There's a lot of power in artificial intelligence," says Dunkin.
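The pattern here is a generic two-stage screening funnel: a cheap machine-learning surrogate prunes the vast candidate pool, and expensive physics simulation on HPC is reserved for the survivors. The sketch below illustrates that shape only - the function names and thresholds are hypothetical, not PNNL's or Microsoft's actual pipeline.

```python
# Illustrative sketch only - not the actual PNNL/Microsoft pipeline. It shows the
# general two-stage screening pattern described above: a fast AI surrogate model
# prunes a huge candidate pool, then costly physics-based simulation (the HPC
# stage) runs only on the survivors. All names and cutoffs here are hypothetical.
from typing import Callable, Iterable

def screen(candidates: Iterable[str],
           cheap_score: Callable[[str], float], cheap_cutoff: float,
           costly_score: Callable[[str], float], costly_cutoff: float) -> list[str]:
    # Stage 1: fast, approximate ML screening over the full pool
    shortlist = [c for c in candidates if cheap_score(c) >= cheap_cutoff]
    # Stage 2: expensive simulation (e.g. run on an HPC cluster) on the shortlist only
    return [c for c in shortlist if costly_score(c) >= costly_cutoff]

# Toy usage with stand-in scoring functions
pool = [f"material-{i}" for i in range(1000)]
finalists = screen(pool,
                   cheap_score=lambda c: hash(c) % 100 / 100, cheap_cutoff=0.98,
                   costly_score=lambda c: hash(c[::-1]) % 100 / 100, costly_cutoff=0.9)
print(f"{len(pool)} candidates -> {len(finalists)} finalists")
```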

The irony, however, of the resource-draining nature of these powerful machines, is not lost on Dunkin.

“AI is a huge power sink, and training generative AI models in particular has shown itself to be hugely costly and energy intensive,” she says. “It’s a bit of a paradox, in that doing all this great research that can ultimately reduce our climate impact requires energy. We believe that all the energy we use has a huge payback in the end, and we are doing the best we can to use renewables where possible in the meantime.”

The DOE is certainly not 100 percent renewable, but efforts are made to improve the efficiency of its labs, plant sites, and headquarters. An example is that the agency generates wind power on-site at its Amarillo site, and often deploys solar panels at locations. Dunkin jokingly adds: “We also don’t mine Bitcoin.”

AI is also being used to help bring more renewable energy onto the grid, with the DOE deploying the technology to deal with permitting complexities that have caused projects to stall.

Currently, there is a massive queue of grid interconnection applications, which grew some 27 percent in 2023. As of April 2024, around 2.6TW of planned power projects had joined the interconnection queue, which is around twice the US' existing generator capacity. Of those requests, 95 percent are for renewable power sources.

“You've got local, state, and federal agencies that, depending upon where you are, may have an opinion about your power, where they're running that power or providing additional capacity on the grid to get that power online,” she says.

“We have a project we're working on right now to help folks speed up permitting with AI solutions to help people understand better what they need to do, and how to get that permit.”

In addition to permitting, one unique area Dunkin oversees in her role as CIO at the DOE is the handling of wireless spectrum necessary for “grid timing.”

The spectrum is kept isolated to avoid interference and is essential to ensuring connectivity to keep the grid up and running and the power moving in the right places.

"If you look at the grid, it's incredibly complicated now compared to what it was when we started bringing power into the country 100-something years ago. We have to keep all the power moving in the right direction, to not accidentally energize a line that's not supposed to be energized, all those things, and we need to be able to time the grid, and that requires wireless to do that.

"We have some dedicated spectrum we use for that, which is owned by the government and that's the primary need. We also use it for general wireless communications between those sites, for example, if we need to shut down a transformer or power to a line. Those signals need to get through or someone could get hurt or we could take down parts of the grid."

A problem of distribution

The undeniable fact is that the DOE has a massively distributed footprint, and this creates a logistical and operational challenge, both in terms of reaching 100 percent renewable energy without the national grid doing so, and in its IT infrastructure.

At this point, the vast majority of the department’s everyday enterprise computing is in the cloud, with Dunkin noting that the vast majority of its data centers that remain open are focused on mission computing.

The agency is also not tied to one cloud provider - in fact, Dunkin suggests the DOE uses "probably every cloud you can think of," adding that it is making a big investment in the "Big Three" - Amazon Web Services, Microsoft Azure, and Google Cloud.

There isn’t a plan for “centralization,” simply because the DOE is so distributed, and in many cases, it simply wouldn’t be appropriate. It makes sense for plants and sites to have their own cloud.

With the department's origins stretching as far back as the Manhattan Project and the Atomic Energy Commission, there remain some legacy applications handled by the DOE, but because the agency does not have many public-facing applications, modernization isn't quite as large a challenge as it might be for other federal agencies.

“The biggest public-facing application that we have is the Energy Information Administration (EIA), which puts out statistical data,” says Dunkin. “They post reports, and what happens is everyone wants to look at the EIA’s data one day, but the traffic dips pretty quickly after that.

"We are working with the EIA to move to the cloud and modernize, and the CIO over there is doing really good work to make that transition from what used to be on-premise data centers, to commercial data centers, and then to the cloud."

The DOE is also working on moving its HR systems to the cloud, which should be completed in the next 12 to 18 months.

Dunkin also notes that the agency is helped by the fact that the DOE's labs do research for other public sector entities and private companies, even other governments, all of which help fund the labs' overheads. "For the most part, they have pretty modern IT systems," Dunkin explains.

"This is a little bit less at some of our nuclear manufacturing sites, but definitely our modernization challenges are much smaller than another similar-sized government organization."

Overall, the sheer complexity and quantity of work handled by the office of the CIO at the DOE is not lost on Dunkin, but she seems mostly excited by the fact.

“I get to see lots of amazing stuff and support it. It is really complex, and it's bigger than any place I’ve been [CIO of] before. We have a combined IT and supercomputing budget of almost $6 billion, and we are tackling really complicated problems - building nuclear weapons, trying to solve nuclear fusion, and so on.

“An organization managing just one of those problems would be significant, and we have them all.”

Dunkin’s previous CIO experience includes at the County of Santa Clara, the US Environmental Protection Agency, the Palo Alto Unified School District, and other related roles.

Despite this, Dunkin recalls someone telling her she has the coolest CIO job across the federal government.

“I’m not sure I can disagree with that,” she says. 

How Virgin Media O2's heartbeat drives its mobile network

With close to 50 million customers to serve, a data center just west of London plays a vital role in allowing one of the UK’s largest telcos to stay connected

Last year, data use on UK mobile network Virgin Media O2 (VMO2) grew by more than a quarter, CTO Jeanie York revealed earlier this year.

To support this data demand surge as its 5G network rollout ramps up, the telco says it invests a staggering £2 million ($2.64m) each day in its 4G and 5G mobile networks.

This commitment is part of its wider strategy to invest £10 billion ($13.18bn) to upgrade its network.

The carrier, which was formed from a merger between Virgin Media and O2 three years ago, serves more than 49 million customers across broadband, mobile, TV, and home phones in the UK, including 35 million mobile subscribers. This is a lot of data for any company to process.

Paul Lipscombe Telecoms Editor

DCD was invited to visit the company’s Network Management Center in Slough, UK, to understand how its data centers support its mobile network.

VMO2 claims the center is the company’s “operational nerve center.” It’s also home to Virgin Media O2’s Engineering Training Center, where the carrier has its largest mobile data center in the country, from which it manages traffic for its 3G, 4G, and 5G customers.

Dan Goodenough, VMO2’s technical site operations manager, says the data center is the “heartbeat” of the business.

“These are the sites where if something were to go wrong, you would feel it,” he says. “So, it’s one of the vital organs that we've got at Virgin Media O2.”

Across the country, VMO2 operates 550 technical sites. These include its Class A sites - core sites that carry the largest amounts of traffic, such as the one in Slough.

The telco also has Class B sites, which are transit sites, and Class C, which are its smallest sites, otherwise known as network repeater sites.

Heritage

Its Slough data center has been in operation since the noughties, according to Goodenough.

Before the 2021 merger between parent companies Telefónica and Liberty Global, the data center was managed by Telefónica.

Europe's largest trading estate, the Mars Bar, and David Brent are some of the first things that spring to mind when Slough is mentioned, but the growing number of data centers in the town is hard to ignore.

"These are the sites where if something were to go wrong you would feel it. So, it's one of the vital organs that we've got at Virgin Media O2"

According to Goodenough, Slough is the perfect location to have a data center, due to its proximity to London and the South West.

"Slough is like a data highway here for us in the UK. It's got an abundance of fiber routings, plus great third-party facilities that you can utilize.

"As a company, we've got a great presence here, and there's Equinix just down the road, which has a couple of data centers here too. The M4 corridor in general from London right to the South West with capacity is huge. It's a good input-output of our network."

Virgin Media's network management center consists of a live switch room, a test switch room and, of course, its data center.

The carrier also has another 12 mobile data centers across the UK, seven of which are located in the south, notes Goodenough.

Monitoring the company's network

The data center itself is where Virgin Media can monitor its overall network performance, as well as its broadband and phone mast network across the country.

Goodenough explains that the carrier can detect surges in traffic: notably, if a video game such as Fortnite or Call of Duty releases an update or a new title, there will often be a noticeable spike in data traffic.

He explains that the company implements a service protection period during times when there’s likely to be a lot of demand for data traffic, for example, sporting events such as the Olympics.

“We had a service protection period during the Euros and also the Olympics, where you don't want to disrupt service and don't want to take customers down during that period,” says Goodenough, noting that any non-urgent upgrades were put on hold while the sporting events took place.

The service protection period could last a matter of days or for as long as a couple of weeks, he explains.

"If you had potentially a piece of equipment that would look like it was on its way out, then we would try and get those through, or if there were damage to our fiber which would impact our resiliency then we'd upgrade those, but we wouldn't do any kind of filter changes or maintenance type stuff during the service protection period," Goodenough says.

He highlights that, should an outage occur for a Virgin Media O2 customer, the company can quickly identify where within its data center the fault lies, using a grid reference-style system designed to speed up the process.

Back in 2021, VMO2 deployed management software from EkkoSense across its entire estate.

This software uses smart sensors fitted to data center equipment to monitor how much cooling each site needs at any one time, and can then report back on how to optimize cooling as demand changes. VMO2 says it expects the software to deliver energy savings equivalent to one million kilograms of CO2 year-on-year.
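For a sense of scale (our rough conversion, not VMO2's or EkkoSense's methodology): at an assumed grid carbon intensity of around 0.2kg of CO2e per kWh - a ballpark figure for the recent UK grid mix - a million kilograms of CO2 corresponds to roughly 5GWh of electricity a year.

```python
# Rough, illustrative conversion only - not VMO2's or EkkoSense's own methodology.
co2_saved_kg = 1_000_000      # quoted annual saving, kg of CO2
grid_intensity = 0.2          # assumed kg CO2e per kWh for the UK grid mix

energy_saved_gwh = co2_saved_kg / grid_intensity / 1e6
print(f"Equivalent electricity saving: ~{energy_saved_gwh:.0f} GWh per year")
```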

Ambitions to drive 5G and full fiber

The company has big plans to expand its 5G network in the UK, which currently provides coverage to more than 65 percent of the population.

Earlier this year, Virgin launched its 5G Standalone (5G SA) network in the UK. 5G SA is not reliant on older mobile generations, solely uses a 5G core network, and is cloud-based.

The service increases network capacity, cuts latency, and can handle a larger number of connected devices. The site's data halls are where its 5G network traffic passes through, and they also house cloud storage.

“The data centers support the 5G Standalone backhaul and backbone networks, which provide the bandwidth and services customers receive,” says Goodenough.

"We invest in upgrading the technology within our data centers to handle the ever-growing demands of our customers, with data usage on an upward trend. This includes services we can provide the end user, quality of service, and capacity on the network."

Several Faraday cages are also found on-site, allowing Virgin Media O2 to regularly test 3G, 4G, and 5G signals.

A Faraday cage is an enclosure designed to block some electromagnetic fields, making it ideal for testing mobile signals to ensure no interference. Virgin Media O2 says the cages are used daily to put its technology through its paces.

"As we start to bring out some of the legacy stuff on some of our data centers, it gives us so much more rack space that we can utilize here"

Fiber goals

Aside from mobile, VMO2 has big plans for its full fiber broadband offering. At present, its fiber rollout has passed five million premises.

Earlier this year, the carrier, along with its backers, outlined plans to create a national fixed network company to rival BT's Openreach in the UK.

In February, the telco said that the NetCo will "underpin full fiber take-up and roll-out," and provide new financing optionality and a platform for potential altnet consolidation opportunities.

Along with Nexfibre, which is an independent fiber joint venture between Liberty Global, Telefónica, and Infravia, the separate networks will reach a combined total of up to 23 million homes, placing the company in a stronger position to compete with Openreach, which is aiming to deliver FTTP services to 25 million premises by 2026.

However, it's not only full fiber that the carrier is serving up: it also has 5.8 million fixed subscribers, while its fixed-line network passes more than 17 million premises.

“From a fixed perspective, we've got nearly six million customers on that network, so we are heavily still investing and maintaining that network,” says Goodenough, stating that this will continue to be a big focus for the carrier, despite the company’s full fiber goals.

Out with the old

While VMO2 will continue to push ahead with its fixed network, the telco has outlined plans to get rid of older networks.

The telco will retire its 3G network next year, at which point it will also begin to phase out its 2G network, ahead of the country's planned switch-off by 2033. Its 2G network accounts for just 0.1 percent of the total data traffic on its mobile network.

This transition will necessitate some changes in the data center, Goodenough says.

“As we are moving towards 5G Standalone and as we start to switch off 3G, we’re beginning to see a lot of the kit come out of our data center as we transition customers across to 5G,” he explains.

VMO2 has publicly said the retirement of these legacy networks will improve the company’s overall sustainability efforts and its wider goal of going net zero by 2040.

According to the carrier, its 3G mobile network carried less than four percent of all data used on its network last year, but accounted for 11 percent of the company's total energy consumption.
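Taken together, those two shares imply that 3G is roughly three times as energy-hungry per unit of data as the rest of the network - a quick back-of-envelope calculation of our own, sketched below, not a figure from VMO2.

```python
# Back-of-envelope arithmetic using the shares quoted above (not VMO2's analysis):
# 3G carries under 4 percent of data but accounts for 11 percent of energy use.
data_share, energy_share = 0.04, 0.11

relative_intensity = (energy_share / data_share) / ((1 - energy_share) / (1 - data_share))
print(f"3G uses ~{relative_intensity:.1f}x more energy per unit of data than the rest of the network")
```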

“What we tend to find with our data centers and technology, typically legacy equipment is much larger,” adds Goodenough. “Whereas newer equipment is much more condensed. As we start to bring out some of the legacy stuff on some of our data centers, it gives us so much more rack space that we can utilize here.”

Taking data to the far Edge

As for where data is being processed, Goodenough sees more of it being processed at the far Edge of the network in the future.

He notes that the company has plans to deploy more of these macro sites across the country to match demand for data-hungry services, such as streaming platforms.

“Over time we will try and reduce that and start to use more far Edge sites, so you end up with smaller sites that are effectively just repeaters, made up of lots of switches.

“With far Edge, there are fewer points of failure there as well. If you can condense it down, then it’s also less energy consumption and then you can focus your investment and your engineering teams around those areas instead.”

The future of Virginia post-Loudoun

As companies look beyond Northern Virginia, new counties across the state look to become major markets in their own right

Source: PowerHouse Data Centers

MAE-East, the first Internet exchange, was launched in 1992 in an office building at 8100 Boone Boulevard in Tyson’s Corner, Fairfax County, Virginia.

It's said around half of all Internet traffic in the early 1990s passed through this small point in Old Dominion. Set up by Metropolitan Fiber Systems and UUNET, MAE-East was then expanded into a cinder-block room in an underground parking garage across the street at 1919 Gallows Road around 1996.

Once MAE-East moved to Equinix's DC2 facility at 21715 Filigree Court in Ashburn, AboveNet continued to operate a data center at 8100 Boone, which was taken over by Digital Realty in 2006, but the colo firm has since exited the site. Loudoun County, meanwhile, has become the data center capital of the world.

Over the past 15 years, Loudoun has built more than 30 million sq ft (2.8m sqm) of data centers, with another five million sq ft (465,000 sqm) currently in development. Neighboring Fairfax County reportedly has around 29 facilities in operation with a pipeline of 4.4 million sq ft (410,000 sqm) of space under construction, more than doubling the existing inventory.

Nearby Prince William County, already home to nearly seven million sq ft (650,000 sqm) of capacity, has an additional 30 million sq ft (2.78m sqm) under development.

But times change, and as Loudoun reaches saturation point, data center developers are looking south for new opportunities.

Data Center Alley is full – just about

The Northern Virginia Technology Council (NVTC) reports that Virginia has seen more than $200 billion in data center investment. In 2023 alone, the industry provided 12,140 operational jobs and 14,240 construction jobs in the state. In 2022, data centers paid $640m in taxes to the Commonwealth of Virginia and $1 billion to local governments in Virginia.

According to the NVTC, it was only in 2016 that Northern Virginia (NoVA) finally supplanted the New York market as the largest data center market in the US. Today, Virginia has more capacity than most of the other major US markets combined, with gigawatts of capacity in operation and gigawatts more in development.

Vacancy rates in NoVA remain at historic lows, at one percent or lower. But land and power are hard to come by amid a transmission crunch grid operator Dominion is working to rectify.

Northern Virginia continues to grow, but the pace is slowing. According to H1 2024 data from CBRE, NoVA remained the largest data center market, with 2,611.1MW of inventory and 1,157MW under construction.

The area saw 108.1MW of total absorption during those six months, down 84.7MW, or 43.9 percent, year-over-year. H2 2023's 424.4MW absorption was down 12.5MW year-on-year; H1 2023's 192.8MW absorption was down 76.5MW after increases in 2022 and 2021. Its inventory growth of 'only' 113 percent since 2020 puts NoVA in fifth place nationally in terms of relative growth.

Data centers have been good for Loudoun's tax base over the years. It's a revenue stream so large for the county that it sometimes creates hiccups, such as when building slowed in 2021, leading to a $60m shortfall in tax revenue. Because of this, the county is looking to diversify, targeting more life sciences and cybersecurity investment.

But the good times aren’t quite over, and 2024 has seen plenty of activity in NoVA. DataBank is building a 20MW facility in Ashburn due to come online in 2026, and Yondr also broke ground on a new 48MW building in Loudoun’s Arcola, to name but two.

“Loudoun County still has enough projects in the pipeline to potentially triple its current capacity,” says Lilli Flynn, senior analyst at data center market analyst firm DC Byte. “Ashburn still has a huge amount of demand despite longer project timelines.”

Even county rule changes might not prevent future projects coming into play. Companies including Prologis and the Washington Commanders have recently sought permission to build data centers on land around Sterling and Ashburn in the future, despite having no immediate plans to do so.

"Loudoun can slow down but can't be stopped – I don't see the pipeline emptying anytime soon"
>> Lilli Flynn DC Byte

"I think the Dominion announcement initially scared people away from Loudoun – there was a dip in the county's share of the market in 2022 as operators started looking elsewhere, but it bounced back in 2023," says Flynn. "I think the county will grow a lot slower than we're used to seeing, but I often say that Loudoun can slow down but can't be stopped – I think the Ashburn name is still a draw in the industry and I don't see the pipeline emptying out anytime soon."

But the days of unbridled data center growth in the traditional Northern Virginia markets look to be over. Instead, companies are looking further south to new counties within the commonwealth.

New Counties, new opportunities

The NVTC report notes that even if you removed Northern Virginia - Loudoun, Fairfax, and Prince William Counties - the rest of Virginia would still rank ninth among the top 10 data center states in the nation.

Many more developments are planned in the state. Jeff Groh, executive managing director, brokerage at JLL, says Virginia’s “emerging markets” are experiencing dynamic growth. “The big story is the I-95 Corridor from Stafford to Richmond,” he says.

Spotsylvania, Pittsylvania, Fauquier, Culpeper, King George, Surry, Stafford, Caroline, Louisa, and Mecklenburg Counties have all seen new large-scale data center developments announced or applied for since 2022 as developers look to cash in.

"It's been very interesting to see the spread away from Loudoun County recently," says DC Byte's Flynn. "Prince William County seemed like the obvious successor to Loudoun and saw an increase in planned projects after Dominion announced its power crunch in Loudoun.

“However, the area’s largest data center projects have been hit with resident backlash and litigation. Culpeper and Fauquier Counties have seen a lot of activity recently.”

Culpeper County, some 55 miles south of Ashburn, has seen one of the biggest transformations. For decades, it had only been home to a Swift data center and a former Terremark campus now operated by Equinix.

Now, after AWS secured permission for a two-building data center campus totaling 430,000 sq ft (40,000 sqm) in 2022, a bevy of other companies have followed suit into the county. CloudHQ, DataBank, Peterson Companies, EdgeCore, and others have all been granted permission for data center campuses totaling millions of square feet and gigawatts of capacity.

“Virginia is open for business outside of Loudoun and the big operators are appreciating the ability to build bigger campuses”
>>Adam Cook Peterson Companies

As well as Culpeper and some existing projects in Prince William, Fairfax-based developer Peterson Companies is building a 525-acre campus in Stafford County that could total more than 25 buildings and 5.5 million sq ft (511,000 sqm) once fully built out.

"That is a 1.8GW campus, and we have another property in Stafford County kind of nearby that's 83 acres and another 300MW, so just in Stafford County alone today, we're over 2GW and continuing to grow," Adam Cook, Peterson's managing director for development, tells DCD. Stafford County appealed, he says, because the company had an existing landbank there and the county had the right mix of utility and technology infrastructure to support its plans. Amazon also has a campus planned in Stafford.

Cook says there's "still some juice" left in Loudoun, and adds: "Virginia is open for business outside of Loudoun. I think that the big operators are appreciating the ability to build bigger campuses, instead of having to focus on one building at a time."

PowerHouse is developing an 800MW campus in Spotsylvania County. The 145-acre site could see up to eight three-story buildings developed, totaling 3.5 million sq ft (325,160 sqm).

"We're seeing tiering of the architecture," says Matt Monaco, senior vice president of asset management and development at PowerHouse. "AI workloads, especially the training, don't have the latency sensitivity. You're seeing the circle from Loudoun expanding to include Culpeper, Spotsylvania, and Richmond.

"I think that path of development down through Richmond is all interesting at this point. 300MW and above is the cost of entry for a lot of these big players, and the gigawatt campus is a real sweet spot," he adds. "And the time to market matters a lot right now."

Further south around Richmond, numerous companies are planning new data centers in the likes of Henrico, Chesterfield, Hanover, and Powhatan counties.

QTS has filed to expand its existing footprint and add a new campus, while newer operators like DC Blox and local developers such as WestDulles Properties and Province Group have sought permission for new campuses. Chirisa has filed to expand its existing site and Tract has a massive gigawatt campus in the works that other operators can develop on.

NTT | Source: Clark Construction

While it might seem like a sudden gold rush, things have been moving behind the scenes for a while. DCD understands a number of these projects have been in discussion between companies and counties for years ahead of them being officially filed or announced.

Backlash and Outreach

The benefits of data centers are well-established at this point. They can bring in huge tax revenues - unless they are given overly generous subsidies - and generate lots of short-term construction jobs, alongside a handful of long-term, well-paid ones. The clustering effect often leads to jobs and investment from other facilities and the wider supply chain.

But it’s also a fact they can blight a rural landscape, and use a lot of power and, often, large amounts of water. These facilities also generate plenty of noise, and diesel generators can impact air quality.

It's not unusual to see significant opposition to new developments, especially in areas without major industrial development.

Opposition groups including American Battlefield Trust, Sierra Club, the Coalition to Protect Prince William, Citizens for Fauquier County, and the Piedmont Environmental Council are mobilizing organized opposition to data center developments across the state. Several groups - many part of the Data Center Reform Coalition - have filed lawsuits against proposed developments.

Peterson's Cook claims his firm "welcomes" reasonable opposition. "We're happy to have those conversations, where others sometimes don't or hide behind larger corporate facades," he says.

"Those conversations are not always easy," he adds. "But they're important for us and they're important for communities. And having those conversations is better than not having them, and it's not an unfair ask that local communities get something out of this."

Local officials are wary of stoking the ire of their residents, and data center operators need to be equally careful, lest they mobilize the kind of opposition levied against the Digital Gateway project in Prince William – which saw hundreds of people speak against the project in marathon 24-hour council meetings.

DCD has received more than a few angry emails from local residents irate about new data center developments coming to their town or county.

"We're not anti data center," says Julie Bolthouse, director of land use with Piedmont Environmental Council (PEC). "We're quite aware of the role that data centers play in modern society."

Bolthouse says she'd like to see the industry recognize her organization's concerns. "They give us lip service, but we'd like to see us come to an understanding that there is more of a need for transparency and start to recognize that we're headed towards a tragedy of the commons," she says.

When asked if the PEC is in favor of more data centers in the traditional NoVA markets or more of a focus in newer markets instead, Bolthouse is conflicted.

"Initially, I would have said that it's better for us to see these projects happening in a more dispersed fashion," she says. "But at this point, the grid is so overcapacity across the entire state. And I don't want to say putting them out in rural areas is a good idea either – putting them out in farm fields doesn't make sense."

She notes that, just like in Loudoun, questions remain around how much transmission infrastructure these localities will see built out to support all the new capacity, and believes a pause on developments is necessary.

According to Bolthouse, there's a need for "some real planning and transparency about what would actually make sense, to make sure that we're not doing things in multiple places that are pulling against each other and causing massive problems."

"We're seeing tiering of the architecture with AI workloads and the circle from Loudoun expanding"
>> Matt Monaco, PowerHouse Data Centers

Working with the government

Officials across Virginia have long eyed the tax revenues Loudoun generates from its data centers while being wary of the impact too many facilities could have on an area. Many counties are actively trying to lure operators to their counties, resulting in many changes to local regulations and major overhauls to how counties operate.

"We're trying to get our arms fully around what it means to go from a data center community of about 500,000 square feet to now 10 million square feet," Bryan Rothamel, director of Culpeper County Economic Development (CCED), tells DCD. "What do we need to do and how do we need to respond?"

"We really need elected staff, planning staff, and executives all to be aligned, and then go to the economic development folk," says Cook. "In the communities outside Loudoun where you see them benefiting from the future data center growth, you've seen those groups come together really well."

Peterson’s Cook says his company has been helping multiple counties write their zoning ordinances, looking at locations for tech overlay or special exception zone, as well as potential tax rates, in order to avoid falling into what he calls the “Loudoun trap.”

When it comes to ordinance, Cook says the magic formula is pretty simple: identify where you want data centers, keep them away from schools and residences, make sure you incentivize correctly and understand the tax base, and have the right processes in place.

Defining requirements around setbacks, buffers, height, and architectural factors is also key. Including ways to enforce ordinance is also important to give the regulations teeth.

CorScale | Source: Sebastian Moss

“If you can define your overlay, you're in a better position,” he says. “I think Culpeper did that better than anyone in really defining the territories in the areas in which they would allow data centers.”

Not all local governments are equally welcoming. Despite the lure of new tax revenue, many counties are introducing stricter regulations to prevent uncontrolled data center sprawl.

Fauquier County has passed several zoning ordinances that restrict the development of data centers. After a change in make-up following local elections, King George County supervisors backtracked on giving approval to a massive Amazon data center campus.

“Fauquier approved new regulations for data centers which have been called some of the strictest in the state,” says DC Byte’s Flynn. “This could prove to be a serious deterrent, though a few projects have been made exempt from the new permitting process.”

“Not every county wants data centers,” Cook adds. “Some of the counties want to really maintain their rural identities or they just don't have the infrastructure or talent base to support a data center.”

Is the rest of Virginia ready?

Data centers need power, water, fiber, and people. Loudoun has traditionally been able to provide all four, but whether the rest of Virginia can match up - particularly when it comes to green power - is questionable.

As well as the common concerns around aesthetics and noise, PEC’s Bolthouse notes serious questions around water use will have to be addressed.

Data centers are notoriously thirsty, and different counties are approaching water use differently, with some allowing the use of groundwater. The PEC is also concerned about the impact more generators could have on air quality around Virginia.

And though Virginia is adding renewable energy capacity quickly, Dominion recently said it had connected 94 data centers with more than 4GW of capacity in Northern Virginia since 2019, and expects to connect more than a dozen new facilities in 2024.

According to the Solar Energy Industries Association (SEIA), Virginia's installed solar capacity topped 5.4GW in Q1 2024. US Energy Information Administration (EIA) figures suggest renewables in total make up around nine percent of Virginia's grid mix. Natural gas makes up more than half (54 percent), with just under a third coming from nuclear. Coal makes up less than five percent.

“I think power infrastructure will continue to be an issue going forward,” says DC Byte’s Flynn. “It seems like slower development timelines across the board in Virginia will be the new normal.”

Peterson’s Cook agrees that access to power will become “increasingly difficult” and arrive on longer timescales.

“I think we'll see more secondary and tertiary markets open where there are pockets of power and the intersect of those other utilities and favorable economic and government positions,” he says.

Cook notes the current boom outside NoVA is leading to rampant speculation that is adding to the issues, as the requirement to serve all customers equally is “stymieing” utilities that don't have the resources to support all the inquiries that they're getting.

“Uninformed and irresponsible land speculators get the same attention from Dominion as the legitimate recognized operators,” he says. “Everyone who has an acre thinks it's worth millions of dollars as data center land, and that's creating chaos in the market. The saturation of requests coming into the utilities is nearly crippling them.”

Culpeper’s Rothamel says his county knows there is probably a “ceiling” on the number and scale of developments it wants to host. Once the approved projects have been built out over the remainder of the decade and beyond, the county will assess the impact and go from there.

“We envision us as a piece of the puzzle,” he says. “This is a massive change for a community of our size. We can't and we're not interested in competing with our neighbors to the north.”

While each county will have its own ceiling and perceived saturation point, it’s clear Virginia will continue to lure new projects to the Commonwealth for years to come.

“There's a lot of opportunity and a lot of growth still in the pipeline for Virginia,” says Peterson’s Cook. “We haven't truly seen the economic boom yet, it's still yet to come.” 

SAFEGUARD YOUR SITE

Maintain uptime and avoid data loss – and the potential monetary consequences created by unavailability – by investing in reliable batteries that are engineered with advanced Thin Plate Pure Lead (TPPL) technology. Our batteries are specifically designed to meet the evolving needs of today’s data centers and deliver peace of mind for the availability of essential equipment. We have the energy storage solutions to ensure the resilience of your mission-critical systems.

Discover more about our data center solutions at: www.enersys.com

The Olympic networking game

How technology triumphed in telling the Olympic story around the world

More than just a glorified sports day, the Paris 2024 Olympic Games saw the transformation of an entire city to host more than 10,000 athletes from 196 nations and 15.3 million spectators.

But the audience extends beyond just the spectators in the stadium. Globally, more than three billion people tuned in to watch Noah Lyles win gold and Duplantis break his own world record. The Olympics generated more than 300 million hours worth of video footage, with the first three days alone seeing a 79 percent surge in viewership compared to the Tokyo edition in 2020.

To prepare the city for audiences watching around the globe, the International Olympic Committee (IOC), Olympic Broadcasting Service (OBS), Intel, and Orange digitally transformed the Paris region.

Size does matter

Digitally transforming the city of Paris for the Olympics is made challenging by the sheer size and scale of the event.

Sotiris Salmouris, CTO at OBS, explains: “In the case of the Olympics, we are talking about more than 30 different sports, all requiring a different setup, different competition rules, different organizations, and different types of venues.”

Unlike the Euros and the World Cup, he says, the infrastructure deployed must be able to be scaled both up and down depending on the event.

Bertrand Rojat, chief marketing and innovation officer at Orange for the Olympics, adds that infrastructure for the games was deployed across temporary venues, rather than the permanent venues seen at the Rugby World Cup the previous year, presenting a whole new challenge.

Rojat provides the example of the River Seine, now ‘swimmable’ thanks to a $1.55 billion investment from the French government. The Seine had to have access to all the digital infrastructure available in a standard stadium or arena. During the Olympic opening ceremony, more than 200 Samsung S24 cameras were set up on boats to capture all the footage.

Salmouris tells DCD all of the venues for more than 32 sports are kitted out with “hundreds of cameras” that capture the action in 8K HDR with 9.1 immersive audio. This forms just part of the OBS’ on-premises technical setup.

“The volume of data we are generating in a single moment is huge. We’re talking about several hundreds of petabytes per second,” he says. And herein lies the second challenge.


The OBS, Salamouris says, “is both a production company and a technology company,” meaning its responsibility is to both create and distribute data from the Olympics.

But how did such large volumes of data traverse the globe, from trackside to local screens and televisions?

From track to telly

In addition to trackside cameras, Salmouris says each venue has “a technical IT setup” comprised of Intel hardware.

Intel has deployed its Xeon processors directly on-premises, allowing the data to be simultaneously encrypted and compressed as it happens.

Jean-Laurent Phillippe, CTO at Intel, says: “The closer to the data that processing can happen, the better for latency and for avoiding communicating data over the Internet.”

Where it would normally require 48 gigabits per second of bandwidth, Phillippe says Intel has been able to reduce this to only 40 to 80 megabits per second thanks to compression at the Edge. As a result, a replay can be generated and broadcast in five to 15 seconds.

Phillippe explains low latency is crucial and that there is only a small window of time between, for example, a Mexican football team’s goal and the next moment of play for the OBS to broadcast a replay of the action. In moments where spectators are waiting to see whether a goal is offside, Intel says its VVC encoding allows the data to reach its audience without losing any of its 8K quality.

The Xeon server CPUs also feature integrated AI capabilities. Philippe says, as a result, the servers do not “necessarily require additional discrete accelerators like GPUs to do part of the AI workloads.”

From the venue, the data is transported via Orange’s Private 5G and fiber links to the local International Broadcast Center (IBC).

Salmouris says the IBC is “where the heart of the broadcast actually beats” and is a temporary hub that follows the Olympic Games wherever they go.

The IBC this year is located in the Le Bourget Exhibition and Media Center, spanning 80,000 sqm on a 25-hectare plot in Northern Paris. Inside, there are seven smaller data centers, which Salmouris says are not too dissimilar from Edge deployments.

Phillippe says “Intel provides the ingredients” for the OBS to perform the majority of workloads in the IBC. These include customizing broadcasts depending on their final destination, generating complementary graphics, creating short-form footage for social media, colorizing footage, and distributing archived content. A lot of these workloads rely on Intel’s Advance Volumetric Library Capture.

From the IBC, the data uses fiber networks to reach larger data centers in the Paris region, in which the OBS colocates. The specifics and locations of these data centers are a secret, says the OBS.

Distributors and rights holders can access the data using Alibaba’s LiveCloud solution. The OBS LiveCloud is the main method of remote distribution for the Olympic Games.

The cloud provider said it distributed live broadcast signals to more than 200 countries and regions across the world, with cloud computing providing lower latency and higher resilience than previously used satellite methods.

The OBS LiveCloud was first used in the Tokyo 2020 games and is AI-enhanced, meaning fans will have access to content with AI-driven features for live spatial reconstruction and 3D rendering.

Salmouris describes the relationship between Alibaba Cloud, Intel, and the OBS as a “three-way discussion,” but adds that, beyond OBS broadcasts, broadcasters work with their own cloud providers.

Finally, the data makes its way out of Paris and onto our screens, at speeds worthy of a gold medal.

Private 5G

In the Tokyo 2020 edition of the games, multiple telcos and operators were responsible for deploying 5G across the venues. This year, it all boils down to France’s homegrown telco, Orange.

Rojat says Orange is providing “a full IP, very high throughput, 100 gigabit per second IP network, combined with a fiber and mobile network” to connect not only the IBC to on-premises IT infrastructure, but also to serve “the displays, the TVs, the catering, security services, payments, and ticketing.”

Essentially, everything is hosted on the same network, which Rojat explains is “fully centralized.” In other words, everything is managed remotely, configured remotely, and can be dynamically changed.

Rojat says this one big private 5G network is preferable to a WiFi deployment, and as a result Orange has enhanced the coverage of 5G services at all Olympic competition venues, maximizing the capacity of the network. In some of the temporary locations, Orange adopted its Cells on Wheels solution, which used temporary systems to provide high-capacity coverage.

During the games, French fiber cables were sabotaged causing widespread network outages. Orange said its network was not impacted by the outages.

For staff and security at the Olympic Games, Rojat says: “There will be 13,000 Push-To-Talk terminals, and for the first time they will be operating using a prioritized 4G network,” essentially transforming smartphones into walkie-talkies.

DCD visited the Orange Velodrome, otherwise known as the Stade Velodrome, earlier this year to see how Orange deployed the latest 5G and Edge technology. Rojat says a great deal of Orange’s capabilities, like Push-To-Talk, were tested during the 2023 Rugby World Cup and scaled up for the Olympic Games.

Many of Intel’s AI capabilities are also made possible by Orange’s 5G network. Intel also deployed AI around the venues to create an application for the visually impaired. The application uses AI to provide people with live navigation around Paris and the Olympic venues.

“It uses the cell phone and the camera of the cell phone, meaning that most, if not all, of the inference will be done directly on the cell phone, not relying much on any connection to the data center. That is to me, the extreme case of being closer to the Edge,” says Phillippe.

Intel also deployed an AI chatbot onsite for the athletes. The chatbot served to improve the Olympic experience from an operational perspective for the athletes and the staff.

Atos currently has three sites in operation for the Olympic Games. Two of these, the CTOC (Central Technology Operations Center) and the ITL (Integration Testing Lab) are permanent locations in Barcelona and Madrid, respectively. The newly built TOC (Technology Operations Center) in Paris is connected to both these existing facilities. The facilities look after the IT systems at the Olympics from an operational perspective.

Atos did not respond to requests for comment.

Telling the Olympic story

Digitalizing an entire city and deploying an entire “ecosystem” of digital infrastructure is about telling the Olympic story, says Salamouris, and at the heart of everything the OBS does is carrying an age-old legacy for generations to come. Beyond all the national teams that participated in the 2024 edition, there is a team comprised of the OBS, Intel, Alibaba Cloud, Orange, and Atos that told the Olympic story to audiences around the world. 

GF Piping Systems’ experience of more than 30 years in supporting the semiconductor industry’s efforts to build the most sustainably managed fabrication factories is the foundation of our offering to Data Centers. Our global teams help the industry manufacture some of the world’s most advanced technologies while still supporting their mission to use water resources more sustainably, reduce their carbon footprint, and lessen their impact on the environment.

www.gfps.com/datacenters

Can sustainable satellite teleports find an edge in a grueling market?

As markets remain challenging, where can operators save on energy by going green?

Satellite teleports, sometimes referred to as Earth stations and the ground segment, have been connecting satellites to terrestrial networks since the mid-1970s. Since then, the market has gone through several major changes.

Players like Eutelsat, Arqiva, and Speedcast have been grappling with the shifting reality of the satellite business, aiming to remain agile while more satellites come online capable of transmitting directly to devices instead of the gateway.

“The past years have been a time of tremendous change in technology, assets in orbit, and market needs,” Robert Bell, World Teleport Association (WTA) executive director, said in a 2023 report. “Agility and the intelligent management of opportunities and risks have become key competitive advantages. Our top operators have excelled at all three.”

In 2022, Spherical Insights and Consulting estimated the global satellite ground station market to be worth $53.98 billion, which they expected to grow to $109.77bn by 2032.

Power draw is no insignificant concern with large mechanical antennas, onsite data centers, and complex communications equipment. Teleports can house dozens of antennas, some 16 meters across or larger with motorized tracking to follow low and medium Earth orbit satellites across the sky.

Orange, France | Source: Orange

Laurence Russell, Contributor

“Driven primarily by these higher costs, but also by growing awareness of the consequences of fossil fuel consumption, many teleports are taking steps to decrease energy use and greenhouse gas emissions,” explains a January 2023 WTA report entitled How Green is My Teleport? With onsite renewables, these costs can be significantly limited.

Many cloud, data center, and telecommunications companies have stated sustainability goals, investing in renewable Power Purchase Agreements (PPAs) and on-site microgeneration at data center and cell tower sites to meet their ESG goals. Satellite companies are slowly following suit, though recent uncertainties around the optics of ESG have seeded hesitancy in surrounding markets.

Sustainability drivers

According to the WTA, the primary motivator for new green investments was reduced energy consumption, particularly in Europe where energy prices have spiked as a result of geopolitical ructions.

The WTA’s 2023 report quoted one participant suggesting most teleports are being forced to pass on these energy costs. Another spoke of their customers requesting a more environmentally friendly standard from their teleport.

“Customers of the WTA board were asking about sustainable solutions and energy cost savings from renewables,” the WTA’s Bell tells DCD. “If you think about the power draw of a teleport, with its own small data center and massive traveling wave tube amplifiers or solid-state amplifiers, microgeneration investments can be a drop in the bucket to the energy requirements of a site.”

Bell was keen to emphasize the sharp difference between the two breeds of teleport when it came to making big investments. Some ground stations are owned by major satellite operators to verticalize their networks. Others are independent, and represent a link in the chain.

In 2019, Goonhilly put the power consumption of its own data center at 500kW, 350kW of which was supplied by onsite solar. Some of its older dishes reportedly used up to 800 watts, but smaller, newer models were stated to use just 300 watts when stationary.

Leuk Teleport, Switzerland | Source: Leuk Teleport and Data Centre

“When resources are lean, of course the first thing [teleports will] do to save energy is trim all the waste they can with new investments in technologies with lower draw and higher output,” Bell says.

Technology efficiency to reduce power needs

The WTA’s 2023 report cited widespread upscaling in high power amplifiers (HPAs), uninterruptible power supply (UPS) systems, and heating, ventilation, and air conditioning (HVAC), but noted that since many operators were “barely profitable,” it was important to weigh investments against cost reduction timetables. An interviewee spoke of power monitoring advances granting greater insights into inefficient hardware in server racks which informed consolidation and replacement decisions.

Heating and cooling were understood to be the primary power demand, followed by transmission equipment such as amplifiers.

Many operators aspire to update from Klystron amplifiers to Gallium Nitride (GaN) amplifiers, which have better output, meaning fewer amplifiers can do the same job.

One teleport the WTA surveyed in Germany found it could replace all of its UPS and see the investment pay for itself within 2.5 years. Others noted how the heat given off by transmission infrastructure - such as rechargeable batteries kept on charge while already at 100 percent capacity - has a knock-on effect on ventilation expense.

Processing virtualization was also highlighted as a solid method of energy mitigation, with one participant claiming replacing five to 15 racks with cloud support would save $17,000 a year in energy costs, though these gains have little to do with carbon footprint. “We’re just moving the footprint,” states an executive that makes heavy use of equipment virtualization. Another interviewee preferred to think of cloud use as redundancy. “It’s cheaper to have that insurance in the cloud and only pay when you use it.”
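As a rough order-of-magnitude check on that $17,000 figure, the sketch below estimates the annual energy bill for a handful of racks; the per-rack draw and electricity tariff are illustrative assumptions, not numbers from the WTA report.

```python
# Order-of-magnitude check on the quoted $17,000/year saving.
# The five-to-15 rack range comes from the report; the 1.5kW per-rack draw
# (including cooling overhead) and $0.15/kWh tariff are assumptions.

RACK_KW = 1.5
TARIFF_USD_PER_KWH = 0.15
HOURS_PER_YEAR = 8760

per_rack_cost = RACK_KW * TARIFF_USD_PER_KWH * HOURS_PER_YEAR  # ~$1,970 per rack-year

for racks in (5, 15):
    print(f"{racks} racks -> ~${racks * per_rack_cost:,.0f} per year")

# 5 racks -> ~$9,855 and 15 racks -> ~$29,565, so the quoted $17,000
# saving sits comfortably within that range.
```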

Setting the bar

Paris-based Eutelsat, which merged with OneWeb last year, is one of the WTA’s top members and has made significant commitments to hit sustainability targets. The company intends to halve its energy-related emissions by 2030, compared to 2021.

The company intends to see 2,000,000 kWh per year produced by onsite solar panels by 2025 across its sites in France, Italy, Portugal, and Mexico.

The work is backed up by Eutelsat’s cooperation with the Science Based Targets initiative (SBTi) to observe and track its targets, lending credibility and transparency to its work. James Matthews, director for corporate social responsibility at Eutelsat, described it as a “non-negotiable element” of the company’s commitments.

“Eutelsat is a very large organization with deep roots in government, which makes it a prime candidate for leading the standard,” WTA’s Bell explains. “Big companies can address these concerns on their own, but independent teleports have less to work with. They could stand to see how projects like these are done and what the payoff is… There’s a need to provide some visible leadership here.”

Eutelsat declined to comment, citing its position navigating a carve-out and partial sale project.

Crawley-based Arqiva doesn’t plan on being left behind. The company supplies broadcasting services for digital TV and radio in the UK and more than 1,000 channels internationally via satellite, and connects 50 million smart metering data points every day.

“Arqiva has committed to be net zero by 2040 with an interim target to be net zero for Scope 1 and 2 emissions by 2031,” Caroline Morris, Arqiva’s head of sustainability, tells DCD. The company also has Scope 3 plans to collaborate with suppliers to support more carbon-efficient products.

The company states it has solar panels installed at some of its operational locations and is currently taking a “practical approach” to considering other microgeneration options in the future.

“We use a rationale that looks at avoiding use of energy through power reductions or replacing equipment with more efficient alternatives across all our operations, where absolute reductions cannot be made we purchase renewable energy,” Morris says.

Arqiva is also cooperating with the SBTi, and aims to have a set of carbon reduction targets validated by the initiative by June 2025.

“We participate in the Carbon Disclosure Project (CDP) gaining a C rating for our last submission as well as external ESG benchmarking, gaining a silver medal from EcoVadis and a score of 82 for our last GRESB submission,” Morris says.

In January 2023, Orange announced plans to deploy 50,000 square meters of solar panels, with an installed capacity of 5MW, at its satellite communications site at Bercenay-en-Othe in France. These will supply 20 percent of the site’s energy needs. In September 2022, Leuk Teleport & Data Centre, previously Signalhorn, sought to become Europe’s first 100 percent green teleport, deploying panels on the roof of its main building and across 3,215 sqm (34,605 sq ft) of a defunct antenna at its site in Leuk, Switzerland.
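A quick plausibility check on the Bercenay-en-Othe figures, assuming a solar capacity factor of around 15 percent for northern France (our assumption, not Orange’s):

```python
# Rough plausibility check on Orange's Bercenay-en-Othe solar plans.
# The 50,000 sqm, 5MW, and 20 percent figures come from the article;
# the 15 percent capacity factor for northern France is an assumption.

AREA_SQM = 50_000
CAPACITY_MW = 5.0
SHARE_OF_SITE_ENERGY = 0.20
CAPACITY_FACTOR = 0.15

power_density = CAPACITY_MW * 1e6 / AREA_SQM             # ~100 W/sqm, typical for ground-mount
avg_output_mw = CAPACITY_MW * CAPACITY_FACTOR            # ~0.75 MW averaged over the year
implied_site_mw = avg_output_mw / SHARE_OF_SITE_ENERGY   # ~3.75 MW average site demand

print(f"~{power_density:.0f} W/sqm, ~{avg_output_mw:.2f} MW average output, "
      f"implying ~{implied_site_mw:.2f} MW average site draw")
```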

Leading the way

With a remit to communicate what teleports can do to mitigate their energy costs, and with green activist investors voicing their concerns, the WTA began running the Green Teleport program, a global design competition connecting Earth stations with universities and technical schools to create proposals that enable sustainability gains for the sites.

The project called upon universities to equip their students to submit reports on engineering, business, and operations solutions that reduce greenhouse gas emissions through energy efficiency.

The work would not only create useful catalysts for change, but aid the development of relationships between teleports and educational institutions to build talent pipelines for them.

“Some are starting from scratch on that, others are building existing relations,” explains Bell. “Our partners are also in competition with big tech’s grasp of new talent. They need the brightest minds they can find.”

The 2023 competition was won by the University of Ljubljana in collaboration with STN and its teleport in Slovenia. Their star innovation was a self-sufficient water-cooling system for the teleport’s server room, fed by solar panels. The main building’s 1,200 sqm (12,900 sq ft) of roof surface could accommodate 312 solar panels generating 124kW of DC power, backed by batteries and capacitors capable of serving electrical needs through the night and recharging during the day.
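As a sanity check on those numbers, the sketch below works out the implied per-panel rating, then estimates the array’s daily output and a night-time battery under assumed figures; the five peak sun hours and 2kW night load are our assumptions, not values from the winning report.

```python
# Back-of-the-envelope check on the Ljubljana proposal's solar figures.
# The 312 panels and 124kW array rating come from the article; the peak
# sun hours and night-time cooling load below are illustrative assumptions.

PANELS = 312
ARRAY_KW = 124.0
PEAK_SUN_HOURS = 5.0      # assumed daily average for Slovenia
NIGHT_LOAD_KW = 2.0       # hypothetical server-room cooling draw overnight
NIGHT_HOURS = 12

watts_per_panel = ARRAY_KW * 1000 / PANELS    # ~397W - a plausible modern panel
daily_kwh = ARRAY_KW * PEAK_SUN_HOURS         # ~620 kWh on an average day
battery_kwh = NIGHT_LOAD_KW * NIGHT_HOURS     # ~24 kWh to ride through the night

print(f"~{watts_per_panel:.0f}W per panel, ~{daily_kwh:.0f} kWh/day, "
      f"~{battery_kwh:.0f} kWh of storage for the assumed night load")
```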

Their proposed solar array provided double the energy required for the conventional AC cooling solution already used at the site.

SES Luxembourg | Source: SES

Another key solution was a proposed change of antenna design, favoring a low-directivity design with a wide main lobe and a deep minimum in the middle, which would need to be mechanically redirected less often than a high-directivity, narrow-lobe design. They proposed the introduction of patch-monopole antenna feeds.

“The introduction of the green technologies presented contributes to greater teleport autonomy, reducing the likelihood of service outages and increasing the reliability of operations,” the students explained in their winning report.

Second place went to the National Technological Institute of Mexico collaborating with Eutelsat’s Hermosillo teleport. Students calculated the site’s highest expense (excluding payroll and rent) was utility power, taking up 47 percent of the location’s costs according to billing information from 2018 to 2023.

They determined that the purchase of 144 solar panels, estimated to produce 446kWh per day in the sunny Mexican climate, would result in annual energy cost savings that would recoup the capital expenditure in as little as two years with one of the solar providers analyzed.
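The per-day reading of that output figure is an inference on our part, but it checks out arithmetically: 144 panels at an assumed 400W each, with the roughly 7.5 peak sun hours Hermosillo enjoys, lands very close to the cited number.

```python
# Rough check of the Hermosillo proposal's daily solar output.
# The 144-panel count comes from the article; the 400W panel rating and
# ~7.5 peak sun hours per day for Sonora are illustrative assumptions.

PANELS = 144
PANEL_W = 400
PEAK_SUN_HOURS = 7.5

array_kw = PANELS * PANEL_W / 1000        # ~57.6 kW peak
daily_kwh = array_kw * PEAK_SUN_HOURS     # ~432 kWh/day, close to the ~446kWh cited

print(f"~{array_kw:.1f} kW array producing ~{daily_kwh:.0f} kWh per day")
```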

The report recognizes the challenge of overheating and evaluates various cooling solutions to better address panel efficiency and longevity. While the region made solar an easy recommendation, the students took pains to outline the effectiveness of an Aeromine system, a proprietary bladeless wind energy system that claims to outperform solar arrays of the same size.

These students brought a rare perspective to the hardened industry veterans they were collaborating with, breathing new (green) energy into discussions. Having worked with a lot of students himself, Bell was well-apprised of what they bring to the table.

“It leads to productive conversations about which new directions are practical and which aren’t,” he said. “Our goal [for the project] is to have 10-12 companies in the program every year who take real pride in the work to make for strong competition.”

Sustainability misgivings amid slim margins

As has been the case at the macroeconomic level, not everyone is convinced of the great green revolution of energy transition. Not all green interventions are equal on the balance sheet, and projects with weaker financial returns can come across as vanity exercises, much like corporate art installations.

“Sometimes [expensive green projects] are more of a statement than a raw economic advantage. People see these very visible investments and think ‘oh, they’re good people,’” Bell speculates.

In its report, Eutelsat explained that 85 of the 96 hectares of land it owned at its Paris-Rambouillet teleport were used for organic agriculture, while Telesat mentioned its corporate headquarters’ use of rooftop beehives. Measures like these are vital to addressing climate change, but they also put companies in the crosshairs of ideological reactionaries.

“[Trepidation around ESG] is very cultural. In the business community here in the United States, ESG really isn’t a topic. It doesn’t come up as a business driver on the level I see in Europe and Asia,” Bell explains. “It isn’t trepidation so much as concern about its relevance to the business. If it’s relevant, you can be sure business leaders will make sure people know they’re doing something about it.”

Participants in the WTA’s research were unanimous that renewable energy was less available in the United States than in Europe, despite sites in California using 90 percent renewables, and an Electrodynamics facility in Brewster, Washington, which relied solely on local hydroelectric power.

Most companies the WTA spoke to had yet to establish specific sustainability goals, with most citing capex commitments they could not afford to make. In an unforgiving race against big tech entrants like SpaceX, the priority for new investment was on services that could keep up with the competition.

Cloud providers with ground station deployments, such as Amazon and Microsoft, claimed all their operations run on 100 percent renewable energy, having signed gigawatts’ worth of PPAs. They have never, however, broken out the energy use of their ground station operations within these deals, or given an idea of how much power those sites draw.

“I think we are going to see players with intense power consumption doing what the hyperscalers are doing,” Bell predicts. “[They’ll be] working out how to layer their power delivery to enable greater resilience. That means solar and better batteries, and it comes from finding where the cost-efficiencies of what’s cheaper and better are. It’s inevitable, but the timescale is almost impossible to predict.” 

Goonhilly, UK

Eaton 9395X – the next generation UPS

Ease of deployment

• Faster manufacturing due to ready-built sub-assemblies

• Simplified installation with inter-cabinet busbar design

• Plug-in self-configuring power modules

• One-click configuration of large systems

Compact footprint

• Occupies up to 30% less floorspace, leaving more room for revenue-generating IT equipment

• Installation can be against a wall or back-to-back

• Integration with switchgear saves space, as well as the cost of installation and cabling

The Eaton 9395X is a new addition to our large UPS range. It builds on a legacy of proven power protection of Eaton’s 9395 family, providing a market-leading footprint with the best power density, leaving more space for your revenue generating IT equipment.

This next generation Eaton 9395X UPS offers more power, with ratings from 1.0 to 1.7 MVA, in the same compact footprint and brings you even more reliable, cost-effective power, which is cleaner thanks to our grid-interactive technology. With world-class manufacturing processes and a design optimized for easy commissioning, the 9395X offers the shortest lead-time from order entry to activation of your critical load protection.

Cost efficient & flexible

• Save on your energy bill with improved efficiency of 97.5% and reduced need for cooling due to up to 30% less heat loss

• Choose the correct size capacity for your immediate needs, and easily scale up later in 340 kW steps

• Optimized battery sizing with wide battery DC voltage range

Easy maintenance

• More reliable self-monitoring system

• Less need for scheduled maintenance checks

• Safe maintenance while keeping loads protected

• System status information provided

More resilient

• Builds on the capabilities of the proven Power Xpert 9395P UPS

• Improved environmental tolerance for modern datacenters

• Component condition monitoring

• HotSync patented load-sharing technology

• Native grid-interactive capabilities

• Reduce facility operating costs or earn revenue through energy market participation

True believers: Lambda Labs’ AI cloud dreams

How a niche AI player hopes to compete with hyperscalers, and fend off newcomers

The race to build out the world's artificial intelligence infrastructure has seen many newcomers emerge onto the scene, hoping to chip away at the hyperscalers' market dominance.

For Lambda Labs, the moment is less about chasing a trend, and more about making sure it can keep up with a boom it predicted more than a decade ago.

The company was set up in 2012 with an eye to driving down the costs of running AI models after its founders struggled with the costs of Amazon Web Services in a previous venture. Lambda started by selling GPU-powered workstations and servers, and soon launched its own cloud service.

The Lambda GPU Cloud now operates out of colocation data centers in San Francisco, California, and Allen, Texas, and is backed by more than $820 million in funds raised just this year. As we go to press, another $800m is close to being finalized.

"20-25 years ago with the advent of the Internet, the software developer as a class came up and there are now millions of different software developers in the world," Mitesh Agrawal, Lambda's head of cloud, explains of the company's philosophy. "There are going to be millions of AI developers as well and we really want to work closely with them, we want to grow with them."

To catch what may prove to be a huge AI wave, Lambda is focusing on those developers early on. "Today, for example, Lambda has our on-demand cloud, where you can go from entering your credit card to actually running a job from your GitHub or HuggingFace transformer library in two minutes," Agrawal says.

"If you go to AWS, because they have to think about so many types of customers, first you have to install SageMaker, then you have to install your environment and so on.

"Lambda focuses only on GPU use cases for AI, initially for training and now going into inference. There is something to be said about a stack that allows you to spin up your application with a lot more ease."

Beyond simplicity and speed, another factor that works in the company's favor is cost. AWS, with huge historical margins to defend, can charge more than $12 per hour for an on-demand H100 GPU. Lambda currently charges $2.49 - although a price-only comparison leaves it behind CoreWeave, which manages $2.23.
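Scaled to a single eight-GPU node running around the clock, the gap between those list prices becomes stark. The hourly rates below are the ones quoted in this article; the 730-hour month is an assumption, and real-world committed or reserved pricing will differ.

```python
# Comparison of the on-demand H100 hourly rates quoted above, scaled to
# one 8-GPU node running continuously for a ~730-hour month.
# Rates are the article's figures; actual committed pricing will differ.

HOURLY_RATES = {"AWS": 12.00, "Lambda": 2.49, "CoreWeave": 2.23}  # $ per GPU-hour
GPUS_PER_NODE = 8
HOURS_PER_MONTH = 730

for provider, rate in HOURLY_RATES.items():
    monthly = rate * GPUS_PER_NODE * HOURS_PER_MONTH
    print(f"{provider}: ~${monthly:,.0f} per node-month")

# AWS: ~$70,080; Lambda: ~$14,542; CoreWeave: ~$13,023
```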

The challenge for Lambda, and others in its situation, is less about attracting developers with its ease and price. It's about keeping them if they grow. Cloud provider DigitalOcean focused on software developers in the pre-AI era, but saw customers 'graduate' to other cloud providers as they reached scale - leaving DO's growth stagnant.

"If you want to talk about longevity, like Lambda being around for 20, 30 years as a company, you do need to keep those startups that are growing," Agrawal says.

"That comes from more software features, as well as being aggressive on pricing. We're not public - we are very financially savvy and want to maintain a level of [financial] sustainability in the company - but we are not under pressure from financial or public markets where they have to make certain margins. Pricing does become a knob for us."

"We can't do $100bn right out of the gate today. But there is a number there that we can do, and then we have an ambition that someday, maybe five years, maybe 50 years, Microsoft and Lambda will be similar size deployments"

The company may "not be even there in features for graduations," Agrawal admits. "But we sacrifice some of the pricing margin, and then pick some of our major breadwinners and focus on the features they need to make sure they graduate with you," he says. "And then once they graduate with you, people see that, and other companies come in and do that."

That's the plan, at least. "Right now, the strategy is market capture,” Agrawal explains. “Deploy as much compute as possible and then, as these companies grow, we make sure that we're keeping in touch and following them and making sure that at least some of them are graduating with us."

It also hopes to collect more established businesses along the way, and boasts Sony, Samsung, and Intuitive Surgical as customers. “We really do think the world is going to have major AI companies, but there's also going to be a big, fat, long tail of existing enterprises that will adopt AI,” Agrawal says.

He continues: “There's so much utilization of older models and layers. Technology companies are first to adopt, financial services companies, pharmaceutical companies and media and entertainment are getting into it. But insurance companies may just be starting. There's just so many layers of this that you can carve out niches here.”

The ambition, Agrawal says, is “to be the number one cloud for things like this,” and he believes there are big businesses in each of these sectors that could be within reach for his company. “You keep on adding [them to your service] as you grow, and hopefully you get to a certain level of ‘too big to fail’ kind of setup,” he says.

For Lambda, and every other cloud provider, differentiation beyond price and some software features has become increasingly difficult in a hardware market wholly captured by Nvidia.

"If you think about it from a 40 foot view, you're getting the same Nvidia GPUs from AWS as CoreWeave, OCI, and us,” he says. “But AWS is a commodity market, the CPU market is commodity, and you can build an $80 billion ARR business out of that.”

For some, the key differentiator is scale. Microsoft has deployed exaflops of compute over the past couple of years, with a huge portion dedicated to its favored son OpenAI. Now, rumors swirl of a $100 billion, 5GW 'Stargate' mega data center project for the ChatGPT creator.

"I know that they have not commented on it, but they're going to do it," Agrawal says. "We heard about it before the media started reporting on it."

This has led to fears that, as models grow unfathomably large, only a few companies with near-bottomless resources will be able to keep up.

Agrawal takes a different view. "It's actually great that some company is going to spend $100bn for AI,” he says. “Especially given that AI is [likely] going to be great for humanity.”

For Lambda more specifically, Agrawal says that the potential project simply proves the immense value of building in the AI space. "It's a great market indicator that one of the smartest companies on the planet is willing to put in $100 billion," he says.

"Of course, we can't do $100bn right out of the gate today. But there is a number there that we can do, and then we have an ambition that someday, maybe five years, maybe 50 years, Microsoft and Lambda will be similar size deployments."

Just as Microsoft is stretching itself to fund such an enormous deployment, "Lambda is deploying at a massive scale, because we do believe that the training runs are going to get larger."

While OpenAI and some of the other large AI teams will turn to these super-supercomputers, Agrawal believes there is a market for others looking for smaller systems. "And, once you deploy a big model, it doesn't mean you can't break it down to smaller ones," he says.

For now, despite some concerns raised by Goldman Sachs and others about the long-term costs of AI, the market appears to be willing to support both large and small AI deployments. The demand is just for as many GPUs as possible, as soon as possible.

This has led to an imperfect allocation of customer funds to lesser cloud providers. “If your cloud products suck, but the customer has exhausted all avenues to find a provider - so, say Lambda doesn't have capacity right now - they will go to the shittier cloud,” Agrawal says.

“That's the market in which we are in, the demand curve is higher than supply. We anticipate that not just for six months, 18 months, but for as far as we can see: Three years, four years, five years in terms of both training and inference demand.”

Agrawal foresees breakneck growth for some time yet, even as the US grid struggles to keep up. “Look, it's easy to get swept up in it all,” he says. “You hear Elon [Musk] saying the next model requires 100,000 GPUs, or you hear about [OpenAI video generator] Sora and how many hours of GPU time it uses.”

But, he argues, “when you think about the compute demand and the unfathomable amounts of GPUs and power, we believe it is going to explode. We are so confident about the space and about extracting value in the space.”

The Silicon Valley company has built its business on predicting this boom, and believing that it will last.

“We, as AI engineers, believe in it. We are here for the long term,” Agrawal says. “We want to contribute and make an impact by accelerating human progress through AI.” 

This feature appeared in the AI supplement. Read it for free on the DCD website: bit.ly/AISupp

When the music stops

Things that go up must go down.

The soaring market cap of Nvidia has meant that the entire artificial intelligence industry is reliant on it posting ever-growing quarterly returns to maintain investor optimism - despite the fact that almost the entire AI industry is also trying to compete with Nvidia.

This has created a dangerous predicament, where the success of one company acts as a bellwether for an entire sector, and the expectations grow ever higher.

Despite posting earnings above expectations, shares in Nvidia dropped this last quarter over growing fears that the companies buying billions of dollars’ worth of its GPUs still haven’t worked out how to make money off of them.

CEO Jensen Huang, one of the best corporate orators of his generation, was unable to provide a clear answer to those concerns, instead relying on platitudes, promises, and visions of a gilded AI age.

At the other end of the chain, OpenAI has continued to pitch dreams of AGI - artificial general intelligence smarter than a human brain - but has no clear pathway to it. Despite being on track to burn billions of dollars every year, and despite a convoluted and questionable corporate structure, it looks set to raise funds from Apple and Nvidia at a $100bn valuation.

All the while, the actual value of generative AI to society and business remains unclear. Memes may have benefited, and spammers may be celebrating, but bottom lines have yet to be dramatically altered.

This could, of course, change - but even then, there is no guarantee that capex spend on data centers and GPUs will continue at this breakneck pace. Equally, should one AI player succeed, others might collapse as the market contracts to its actual size.

All of this, as many have noted, has echoes of the dot-com bubble. When it popped, businesses shuttered, thousands lost their jobs, and there was a digital infrastructure winter: All despite the fact that investors were broadly correct that the Internet would transform the world and business. Many of those failed ventures were rolled up into today’s titans.

But nothing is ever identical. This time, one company dominates a hype wave in ways never before seen. Its biggest customers are incredibly cash-rich, and should weather any storm.

Further down the chain, however, things could get grim. Second-tier AI software companies could be eviscerated, debt-laden GPU-as-a-Service companies could close up shop, and over-extended data center firms could be forced to hold fire sales.

Even those data centers that have carefully navigated the storm will have to reckon with a broader slowdown in capex, crashing leasing prices, and an evaporating investor appetite.

This could be some time away, or it could not come to pass if the hype treads the narrow path to reality. But, in this period of exuberance, where anything seems possible, remember that there is another real possibility. And it sounds like a pop. 

ABB Data Centre Solutions.

The Business of Data Centers

A new training experience created by
