DCD Magazine #53: This Feature Will Last 10,000 Years


6 News - Oklo deals, CoreWeave growth, machine guns
15 This feature will last 10,000 years - An exclusive tour of Microsoft’s Silica labs, itself stored on the long-lasting glass
25 Connecting Indonesia - Indosat CEO Vikram Sinha on 5G, AI, and the nation’s new capital
29 30 seconds from a server - CoreSite’s CEO on keeping data close
33 The Cooling supplement - Cryogenics, hot water, and dense racks
49 RISC taking - Is RISC-V ready for the data center?
54 375 Pearl Street - Inside a New York data center accused of being ugly
58 Running Microsoft Cloud during the AI boom - Noelle Walsh on the madness of our times
62 A Tract of land - Tract’s plans for giant data center campuses
68 The transformation fallacy - TUI survived Covid. It can survive a cloud migration
71 Biting off the right amount to chew - Green Mountain on its cautious European expansion
74 Norway takes the register - New rules hope to squash crypto
78 Amsterdam’s miss - After the moratorium, scars remain
82 A bad year for space insurance - A couple of bad launches and the industry is teetering
87 Return of the GEO - How large sats survive LEO
90 Is your data center earthquake proof? Don’t wait too long to find out
94 Op-ed: Silicon Valley & data centers

The pressure—to deploy faster, manage smarter, improve sustainability—is intensifying. Relax and focus with infrastructure fiber solutions from CommScope. Streamlined and modular, they accelerate deployment and improve change management. High-speed, low-loss performance supports AI, ML, and 800G. Unmatched service means complete peace of mind.

That’s grace under pressure. That’s CommScope.

Scan below to learn more or visit commscope.com/think-fiber

From the Editor

The immortal data center publication

Ozymandias may be forgotten, but DCD will live on.

For the next 10,000 years, our coverage of the data center sector will be encased in glass.

The data center for tomorrow

For the cover, we investigate Microsoft's Project Silica, an ambitious effort to overhaul long-term storage by using lasers to inscribe data within quartz silica.

As part of the piece, we also have the feature - and all of DCD's historical output - stored on our own platter of glass, which is expected to survive a hundred centuries without decaying.

Glass cold storage could save energy and reduce waste

The CEOs

To celebrate our magazine's special print run for DCD>Bali, we profile Indosat's CEO to understand what it takes to connect Indonesia. We'll be at the event, so if you see us - come say hi!

Over in the US, we also chatted to the CEO of CoreSite about working with American Tower to build a competitive data center platform in a time of exceptional competition.

And then in Europe, we hear from Green Mountain's CEO about the company's plans in and beyond Norway.

Europe in review

Sticking to Norway, we take a look at the country's new data center rules, and what they mean for operators.

Plus, Amsterdam opens up.

The boom

Have you heard about AI?

We don't need to tell you what AI is doing to data center build out expectations, so to catch up on the chaos we chat with Microsoft's Noelle Walsh about hype and scale.

Also on the topic of extreme scale, we talk to Tract's boss about the mega campuses of the future.

In the eye of the beholder

A lot of people think New York's 375 Pearl Street is ugly.

But to us, it's a great towering beacon of connectivity. Verizon no longer owns the 'Verizon Building,' with the site now a data center.

We visit the facility to learn more.

Costly space

A relatively mundane matter may end up hampering the future of satellite launches: Insurance.

When rockets go boom, so do the balance sheets of boutique space insurance agencies - and ever more of them are dropping out of the sector. What does this mean for connectivity?

Elsewhere in the mag, our space correspondent looks at what's next for GEO in a skyline increasingly dominated by well-funded LEO players.

Plus more

A cooling supplement, TUI's CIO, seismic proofing data centers, & more!

The length of time data can last on tape - but only if it is stored under precise conditions

Meet the team

Sebastian Moss
Georgia Butler
Matthew Gooding
Junior Reporter: Niva Yadav
Head: Claire Fletcher
Partner Content Editor: Chris Merriman
Copywriter: Farah Johnson-May
Designer: Eleni Zevgaridou
Media Marketing: Stephen Scott
Head of Sales: Erica Baeta
Conference: Rebecca Davison
Content: Gabriella Gillett-Perez
Content: Matthew Welch
Channel Management Team Lead: Alex Dickins
Channel Managers: Kat Sullivan, Emma Brooks
Director of Marketing Services: Nina Bernard
CEO: Dan Loosemore

Head Office
DatacenterDynamics
32-38 Saffron Hill, London, EC1N 8FH

Whitespace

The biggest data center news stories of the last three months

NEWS IN BRIEF

Subsea cable to run through geographic North Pole

Nordic regional research and education network NORDUnet has proposed a new subsea cable connecting Europe to Asia through the middle of the Arctic ice sheet, close to the North Pole.

TACC data center swaps to hydrogen power

The Texas Advanced Computing Center (TACC) in Austin, Texas, is now powering its data center via hydrogen, after a US Department of Energy hydrogen test site launched in April.

Oklo signs nuclear deals with Equinix and Wyoming Hyperscale

Equinix has signed a pre-agreement with small nuclear reactor firm Oklo to procure up to 500MW of nuclear energy.

Oklo’s small fast fission reactors are capable of producing up to 15MW of power and can operate for 10 years or longer before refueling.

Backed by OpenAI CEO Sam Altman, the company is in the midst of a reverse merger with SPAC AltC Acquisition, which is expected to take the company public in July.

In an S-4 filing with the SEC by AltC Acquisition Corp ahead of the merger, the company revealed a deal with data center colocation firm Equinix.

Following a letter of intent signed in February, Equinix has made a $25 million prepayment to Oklo for the supply of power by the small modular reactor (SMR) company.

The letter of intent is for Equinix to purchase power from Oklo’s planned ‘powerhouses’ to serve Equinix’s data centers in the US on a 20-year timeline - at rates decided in future Power Purchase Agreements (PPAs). Equinix will have the right to renew and extend PPAs for additional 20-year terms.

Details on where Oklo may be building these powerhouses weren’t shared, though the company has an agreement in place with Centrus to deploy Oklo SMRs in southern Ohio.

The letter provides Equinix the right of first refusal for 36 months for the output of certain powerhouses for power capacity of no less than 100MWe and up to a cumulative maximum of 500MWe.

Oklo has also signed a non-binding letter of intent with data center company Wyoming Hyperscale for 100MW of nuclear power.

Wyoming Hyperscale is building a data center campus on 58 acres of land on Aspen Mountain, a remote site southeast of Evanston in Wyoming.

SMRs, which are also under development at companies including NuScale and the UK’s Rolls-Royce, have been proposed as a potential source of low-carbon energy for data centers which could effectively allow facilities to operate independently from the local grid - or as green PPAs feeding into the grid.

Last year, blockchain firm Standard Power announced plans to procure 24 SMRs from NuScale for two US data center sites.

Microsoft has previously signed a nuclear deal with Ontario Power Generation for its operations in Canada.

King Street acquires liquid cooling pioneer Colovore

Investment firm King Street Capital Management has acquired a majority ownership stake in Colovore, the Silicon Valley-based liquid-cooled data center specialist. The company has two all-liquid-cooled data centers in California.

Sloth caught in server rack in Brazilian data center

A sloth was discovered tangled in the wires of a server rack at a university data center in Paraíba, Brazil. The animal may have been trying to warm itself using heat from the IT hardware. A member of university security returned it, unharmed, to the surrounding forest.

Satellite firm SES to acquire Intelsat for $3.1 billion

European satellite company SES is acquiring rival operator Intelsat. The combined company will have a fleet of 126 satellites across Geostationary Earth Orbit (GEO) and Medium Earth Orbit (MEO), with a further 15 set to launch by 2026. The companies had previously held merger talks last year.

AWS retires Snowmobile

Amazon Web Services (AWS) has ceased offering to transport data via an 18-wheeler after eight years.

Announced in 2016, when AWS brought a semi-truck on stage, Snowmobile was a 45 ft, 100 petabyte (100PB) mobile shipping container designed to physically transfer data offline from customer premises to AWS facilities en masse.

CoreWeave becomes the new AI cloud hyperscaler

GPU cloud provider CoreWeave has had a busy quarter, raising billions of dollars and signing major leasing deals across the US and Europe.

May saw the company close a $1.1 billion Series C funding round and raise $7.5bn in debt financing. The deals mean CoreWeave has a valuation of close to $20bn and more than $10bn in cash available.

The funding and financing saw participation from the likes of Coatue, Magnetar, Altimeter Capital, Fidelity Management & Research Company, Lykos Global Management, Blackstone, Carlyle Group, CDPQ, DigitalBridge, BlackRock, Eldridge Industries, and Great Elm Capital.

Founded in 2017 and originally focused on crypto and blockchain applications, CoreWeave has been investing heavily in its cloud platform, offering access to GPUs for AI applications.

Late last year, the company announced that it had raised $642m in a round that valued it at $7bn. That came after a $221m and a $200m raise earlier that year.

The company currently lists three data center regions on its website: US East in Weehawken, New Jersey; US West in Las Vegas, Nevada; and US Central in Chicago, Illinois. Its status page also lists a region in Reno, Nevada.

The company has been on a major leasing spree in the last 18 months. CoreWeave previously said that it expects to operate 14 data centers by the end of 2023 and 28 by the end of 2024.

The company has signed leasing deals with the likes of Lincoln Rackhouse, Chirisa, Flexential, TierPoint, Digital Realty, and Core Scientific for data centers across the US, including in Texas, Virginia, Georgia, and Oregon.

Crypto-hosting firm Core Scientific recently announced a 200MW deal with CoreWeave that will see the cloud firm place thousands of GPUs at Core Scientific’s sites.

Core Scientific operates cryptomining data center campuses in Texas, North Dakota, Kentucky, Georgia, and North Carolina. The crypto-company intends to redeploy Bitcoin mining capacity from designated HPC sites to its other dedicated mining sites.

CoreWeave also recently announced plans to expand into Europe. The company will invest $2.2bn in expanding and opening three new data centers on the continent before the end of 2025. It will also invest $1.3bn in opening two data centers in the UK.

Exact details weren’t shared, but CoreWeave will be investing in locations in Sweden, Norway, and Spain. On its jobs board, the company is hiring for a data center technician in Barcelona, Spain, within an EdgeConneX data center. The company hasn’t said where its UK data centers will be located, but it is hiring data center technicians based at East London’s Docklands.

Core Scientific also confirmed it had received and declined a $1bn acquisition offer from CoreWeave, saying the offer “significantly undervalued” the company.

Singapore announces plan to unlock 300MW of capacity

Singapore aims to unlock 300MW of additional data center capacity by driving greater energy efficiency at existing data centers.

In May, Singapore’s Infocomm Media Development Authority (IMDA) launched a “Green Data Centre Roadmap” designed to help support data centers in the city-state.

To achieve this, the IMDA aims to partner with local players to reduce the current energy usage of data center equipment and hardware in existing facilities to unlock more capacity.

“Our goal is to both meet our climate commitments and provide at least 300MW of additional capacity in the near term, or more with green energy deployments,” Deputy Prime Minister Heng Swee Keat said this week at the Asia Tech x Singapore (ATxSG) conference.

“This will require data center operators to work with enterprise users to enhance the energy efficiency of hardware and software deployed, and with energy suppliers to scale up the use of green energy.”

However, a report from financial analysts BMI says the announcement of the roadmap is unlikely to sway investors. It said the promised capacity would not be sufficient to meet the requirements of end users.

Digital Edge develops tech to replace lithium-ion batteries

APAC data center operator Digital Edge has developed a new energy storage system to replace lithium-ion batteries at its data centers.

First revealed in the company’s 2024 ESG report and since officially announced, the system was developed in partnership with South Korean energy storage firm Donghwa ES. Digital Edge calls it a Hybrid Super Capacitor (HSC), a new type of power supply for its UPS systems.

The company says HSC can replace lithium-ion batteries traditionally used in data centers.

HSC technology uses a hybrid energy storage method combining activated carbon, from an electric double layer capacitor, with carbon from a lithium-ion battery to produce a solution that the company says reduces the deterioration of the negative electrode in comparison to other technologies.

Jay Park, chief construction and development officer at Digital Edge, said: “At Digital Edge we seek to be more than simply a data center operator, but to be a leader that continues to innovate and set new standards that will elevate the entire industry.

“Partnering with Donghwa, we are proud to have developed the HSC Energy Storage System, which we hope will serve to enhance the safety and reliability of the data center industry, while also supporting our environmental commitments,” he added.

The capacitors are designed to withstand higher temperatures than traditional batteries, potentially up to 65°C (149°F), meaning the equipment does not need to be cooled. Digital Edge said this means HSCs are well-placed to support energy-intensive AI and high power density deployments that require complex liquid cooling.

“The energy density of the HSC is lower than a lithium-ion battery, however, due to its high power density (or C-rate), the HSC system has sufficient capacity to cover the same UPS load as other battery systems,” Digital Edge told DCD.

“In addition, this high power density means that the HSC system can be recharged in a much shorter time frame, thereby making it more suitable in consecutive power outage situations.”

Digital Edge said the HSC can be recharged “in a matter of minutes,” enabling it to handle multiple consecutive power outages within a data center. The fact it does not utilize metal oxide reduces the risk of fire due to thermal runaway. Digital Edge said the capacitors have a 15-year lifespan, more than twice that of traditional battery products.

“With its significant decrease in fire risk, 100,000+ discharge/charge cycles capability, minimal maintenance, wider operating temperature range, ability to recharge in minutes vs. hours, and 2.5× longer lifespan, HSC is a viable and safer option to traditional lithium-ion batteries,” the company said in its ESG report.

Digital Edge said initial tests have “proved successful,” and the company is beginning to deploy HSC “on a small scale” across its new data centers.

Donghwa ES aims to commercialize the HSC for wider industry use within 2024.

Google Cloud causes outage after deleting environment of Australian pension fund

An Australian pension fund suffered a week-long outage after Google accidentally deleted UniSuper’s Private Cloud subscription.

Initially described as a “combination of rare issues at Google Cloud” which resulted in an “inadvertent misconfiguration,” a later update said that the misconfiguration “ultimately resulted in the deletion of UniSuper’s Private Cloud subscription.”

The company said of the issue: “During the initial deployment of a Google Cloud VMware Engine (GCVE) Private Cloud for the customer using an internal tool, there was an inadvertent misconfiguration of the GCVE service by Google operators due to leaving a parameter blank.

“This had the unintended and then unknown consequence of defaulting the customer’s GCVE Private Cloud to a fixed term, with automatic deletion at the end of that period.”

Thomas Kurian, CEO of Google Cloud, described this as a “one-of-a-kind occurrence” and gave assurances that it would not happen again.

The exact sequence of events is unclear, as is how much blame lies with Google and how much is down to decisions by UniSuper.

The outage lasted for an extended period of time as the subscription was deleted, taking with it UniSuper’s two geographies that were intended to provide protection against outages and loss.

Fortunately, UniSuper had backups in place with a different service provider, and was able to recover its data.

Aligned to “move forward” with campus at Quantum Loophole park in Maryland

Aligned Data Centers has told DCD it plans to “move forward” with its previously stalled data center campus in Maryland after recent regulatory changes.

Maryland recently amended its regulations around backup generators, easing restrictions for data center firms. With this move, Aligned said it would again be moving ahead with its plans for a campus in the master-planned Quantum Frederick park.

Maryland isn’t currently a major data center market, but the Quantum Loophole campus, near Adamstown just north of Virginia’s Loudoun County, is seeking to create a massive data center park that could reach up to 2GW.

Led by former Terremark and CyrusOne executive Josh Snowhorn, the company has partnered with TPG Real Estate Partners (TREP) and is developing a 2,100-acre park on the former Alcoa Eastalco Works aluminum smelting plant site in Frederick County.

As Quantum’s first tenant, Aligned planned to build 3.3 million sq ft (306,580 sqm) of data center capacity at the park. A spokesperson for Quantum said the company had sold the land to Aligned.

Aligned wanted 168 diesel generators capable of delivering 504MW for its full build on the site, but the company pulled out last year after only being granted a provisional exemption for up to 70MW of diesel generators.

This month saw Maryland Governor Wes Moore sign SB0474 – also known as the Critical Infrastructure Streamlining Act of 2024 – into law. The bill alters the definition of a “generating station,” with the aim of exempting generating facilities used to produce electricity for the purpose of on-site emergency backup from certain permitting requirements.

Previously, backup generators below 2MW were generally allowed, while any installation bigger than 2MW was classified as a “generating station” and had to be granted a Certificate of Public Convenience and Necessity (CPCN), a process that requires a lengthy public consultation.

The company told DCD: “Governor Wes Moore’s vision for a thriving tech sector is a catalyst for Maryland’s economic growth. Aligned is excited to move forward with our plans in Maryland.”

Quinbrook-owned data center firm Rowan is also planning on developing at the Quantum site, aiming to construct 11 buildings across three individual sites.

Ex-Google CEO Schmidt predicts AI supercomputers guarded by machine guns

The former CEO of Google has predicted that the US and China will operate huge supercomputers running advanced artificial intelligence workloads at military bases.

In a recent interview with Noema, Eric Schmidt pontificated at length about how governments will regulate AI and seek to control the data centers that run them. Since leaving Google, Schmidt has been heavily involved with the US military-industrial complex.

“Eventually, in both the US and China, I suspect there will be a small number of extremely powerful computers with the capability for autonomous invention that will exceed what we want to give either to our own citizens without permission or to our competitors,” Schmidt said.

“They will be housed in an army base, powered by some nuclear power source and surrounded by barbed wire and machine guns. It makes sense to me that there will be a few of those amid lots of other systems that are far less powerful and more broadly available.”

Schmidt was chair of the US National Security Commission on Artificial Intelligence and has served on the Defense Innovation Board. He has invested significantly in defense startups.

Tract withdraws plans for 30-building campus in Buckeye, Arizona

Data center park developer Tract has pulled out of plans for a new campus in Phoenix, Arizona.

The company was plotting a $14 billion master-planned data center complex across 1,000 acres in the Buckeye area of Maricopa County.

Known as Project Range, the development was set to span nearly 30 buildings totaling 5.6 million square feet on land north and south of Yuma Road between Jackrabbit Trail and Perryville Road.

However, YourValley reports Tract withdrew the application from Maricopa County’s planning and development queue after opposition from local residents and other stakeholders.

Founder and CEO Grant van Rooyen, who previously founded and led Cologix, told DCD the company has other sites in the Phoenix area that it is moving forward with instead.

Tract is planning other large parks in Reno, Nevada; Eagle Mountain, Utah; and Richmond, Virginia totaling around 5GW and thousands of acres. For more, see p62.

Georgia governor vetoes bill to pause data center tax breaks

A bill that would pause tax breaks for data centers in Georgia has been vetoed by the state governor, Brian Kemp.

May saw Gov. Kemp decide to veto HB 1192 after it had passed both houses.

Introduced in February, the bill would have suspended the issuance of any new sales and use tax certificates of exemption for data center projects in the state from July 2024 to June 2026. The tax exemptions would still have applied to data center contracts entered into before July 2024.

The bill would have also created a Special Commission on Data Center Energy Planning. The commission would review the existing grid and energy supply, and make recommendations around expanding grid capacity and transmission infrastructure and siting data centers.

In a statement explaining the veto, Governor Kemp said the state extended data center exemptions to 2031 just two years ago, and that projects started in the interim on the basis of that extension are yet to be completed and qualify.

Georgia has given data center operators an exemption from the state’s sales tax since 2018.

The Data Center Coalition welcomed the decision.

Man dies at TSMC Arizona plant after explosion

A man has died after an explosion at the construction site of TSMC’s Arizona plant in May.

The worker, 41-year-old Cesar Anguiano-Guitron, was a waste disposal truck driver.

Firefighters from the Phoenix, Glendale, and Daisy Mountain fire departments responded to the incident, which occurred at 43rd Avenue and Dove Valley Road in north Phoenix.

Local authorities are reportedly still investigating the cause of the explosion. However, according to a report from ABC15 Arizona, Anguiano-Guitron was transporting pressurized waste materials away from the construction site when he was made aware of an issue with a container.

While inspecting the equipment, “an uncontrolled pressure release occurred,” causing Anguiano-Guitron to be hit by a blunt object and thrown 20 feet from the vehicle he was in.

He was taken to hospital with serious injuries but later died.

In a statement to ABC15, seemingly provided before the news of Anguiano-Guitron’s death was announced, TSMC said: “We are aware of an incident that occurred at our Arizona construction site whereby a waste disposal truck driver was transported to a local hospital.

“No TSMC employees and onsite construction workers have reported any related injuries. There was no damage to our facilities. We are working closely with local authorities. This is an active investigation with no additional details that can be shared at this time.”

The week prior to the incident, the company posted on LinkedIn that it was recognizing ‘Construction Safety Week 2024’ at the site.

TSMC is building three fabs in Arizona as part of a $65 billion investment.

Dan’s Data Point

Power projects seeking to connect to the US grid increased by 27 percent in 2023. In total, 2.6TW of planned power projects are in the interconnect queue - around twice as much as the US’s entire existing generating capacity.

Microsoft and G42 to build geothermal data center in Kenya

A data center run entirely on geothermal energy will be built in Kenya as part of a $1 billion investment by Microsoft and Dubai-based AI firm G42.

A new campus that will be Microsoft’s East Africa cloud region is to be built in Olkaria, south west Kenya. It is hoped this will be up and running in the next two years and have an initial capacity of 100MW. This could eventually rise to 1GW, the companies said.

The data center will be powered by geothermal energy using heat naturally stored under the earth’s crust. The Olkaria region has abundant geothermal resources, and Kenya’s state-owned energy provider, Kenya Electricity Generating Company, operates four geothermal power plants in the region, with installed capacity of over 700MW.

G42 announced plans for a 100MW geothermal-powered data center campus in Kenya - expandable to 1GW - in partnership with local operator EcoCloud in March. EcoCloud broke ground on its own 24MW ‘Project Eagle’ geothermal-powered data center at the KenGen Green Energy Park in Olkaria last year.

SAFEGUARD YOUR SITE

Maintain uptime and avoid data loss – and the potential monetary consequences created by unavailability – by investing in reliable batteries that are engineered with advanced Thin Plate Pure Lead (TPPL) technology. Our batteries are specifically designed to meet the evolving needs of today’s data centers and deliver peace of mind for the availability of essential equipment. We have the energy storage solutions to ensure the resilience of your mission-critical systems.

Discover more about our data center solutions at: www.enersys.com

Archiving 10,000 years

This feature will last 10,000 years

Lasers, robots, and the archive of tomorrow

This feature will last 10,000 years. To the people of the year 12024, we hope more has remained of our time than a single article on data centers.

The future of the past starts in a basement in Cambridge, UK. Crab-like robots scoot along rails at high speeds, stopping suddenly to carefully pick up small platters of laser-inscribed knowledge, ready to ferry them back to AI-assisted cameras.

But, for all the modern machinery, at the core of this library is a technology first discovered some 3,500 years ago when craftsmen on the banks of the Tigris began mixing sand, soda, and lime: glass.

Over the centuries, glass has evolved to be used as an expression of artistic creativity, has brought light into people's homes through windows and bulbs and, most recently, has formed the backbone of the Internet as fiber.

Now, it could have another use: To store the world's knowledge.

At a time when we are producing more data than ever, the planet’s data centers are struggling to keep up. Even if we could manufacture enough hard drives, flash drives, and tape to store everything, we’d soon need to move the data once again as the equipment ages and begins to fail.

HDDs generally live three to five years, SSDs are lucky to reach 10, and tape is sold with promises of 15-30 years - but only as long as temperature and humidity are carefully controlled.

In this, our most recorded age, data is set to be lost as companies and governments either choose not to store it due to cost, or simply fail to transition when their devices age.

In search of captured time

Over a decade ago, and about a hundred miles southwest of the glass basement, researchers at Southampton University discovered a previously unknown property of glass: Using femtosecond-length laser pulses, they could leave precise deformations within it, and changing the polarization of the laser would change the orientation of these imprints.

This could then be ‘read’ by a microscope, interpreting the scars in the glass as data. The university first demonstrated a 300KB glass storage system in 2013, and has since postulated a potential 360TB disc that would last billions of years.

When the glass deformation research first came out, Richard Black was trying to improve hard drive storage. "We were working on this system called Pelican, basically trying to get the lowest cost hard drive storage you could possibly imagine," he recalls over lunch in Microsoft’s Cambridge cafeteria. "At the time, there were about 24 HDDs in a 4U deployment, and we managed to pack 1,152 3.5” HDDs in 52U."

Black's team focused on keeping costs low, including energy usage - "We'd keep the drives spun down as much as possible, and we only had one fan to cool the whole system,” he says.

“It was a fun project and had some impact, but we realized that the medium was actually where the problem was.”

HDDs cost too much, their lifetime was too short, “and an annual failure rate of 3-5 percent means that your hard drives are failing at the rate of one a week just in that single rack,” he says.

“We just realized that, in the archival space, we needed a better medium. And then, pretty much simultaneously, Southampton published this paper going ‘modified glass crystal is immutable.’”

Microsoft partnered with Southampton

for the first iteration of what would come to be known as Project Silica, but has since forged ahead on its own.

After years in the lab, the company is in the early stages of thinking about rolling out the product through its Azure cloud, in a move that could have a profound impact on archival storage, cold storage data center design, and how we choose what to keep for the future.

The librarian

Black won’t stop talking. We’re behind schedule for our tour of the Project Silica research lab as he jumps from discussing optics and Rayleigh scattering to RAID storage to the costs of femtosecond lasers.

"It's always exciting to talk about the tech," Black says, gently holding an older iteration of the quartz glass Silica in his hands, which was itself holding a complete copy of Microsoft Flight Simulator

But, for all the transparency of the medium and the tour, Black was coy on some crucial details about the latest version of Silica - including its density, current read and write speeds, or even its final exact size. "We've had a number of breakthroughs in the last few years that we don't want people to know about just yet," he teases.

Microsoft agreed to store this article on Silica (along with the rest of DCD's historical text and magazine output), but again has done so on an older version of the technology. The company does not have copy approval over this feature.

Black gave us about 100GB to play with - Microsoft has since confirmed 4TB and 7TB versions. But asking how high they can go is the wrong question, he says.

"We pushed the density a lot, eventually the Azure business asked us to stop," he says.

Everybody focuses on density due to three factors, he believes - 1) When people have to carry a device around, they think about how much they can fit in their pockets, which isn't a concern for archives; 2) As media eventually dies, when people replace it they want a bigger one; and 3) Cost.

That third point is a critical part of the sales pitch behind the Silica research effort. With HDDs, flash, or even tape, the medium is expensive - and even more costly over a long time period where it has to be replaced repeatedly.

Glass, on the other hand, is incredibly cheap and durable. Silica can happily survive being baked in an oven, microwaved, flooded, scoured, demagnetized, or exposed to moisture, and will do so for at least 10,000 years. Over those longer timescales, this means it can outlast many possible threats.

More immediately, it also means that it doesn't need any energy-intensive and costly air conditioning or dehumidifiers to keep it from rotting away. Once the data has been inscribed, Silica's cost is "basically warehouse space," Black says.

"We're competitive with Linear TapeOpen (LTO) on density, and that's why Azure said to us: 'stop pushing density, push other aspects of the project.'"

Like tape, but unlike HDDs and SSDs, Silica is also true cold storage. It requires no power to maintain it in its rest state. “Eventually, when the time comes, we know how to recycle glass,” Black says.

The main cost remains femtosecond lasers, which need to be capable of sending out pulses just 10⁻¹⁵ seconds long, making plasma-induced nano-explosions that leave microscopic bubbles in the glass. Microsoft is hopeful that the technology will follow the trajectory of nanosecond and picosecond lasers and drop in price as it matures and grows in use.

Even if cost doesn’t drop dramatically, the cloud’s economics favor Silica. Black notes that Azure currently does not sell storage by technology type but by ‘tier.’

The company could feasibly simply sell the archival tier, and Microsoft would have “complete control over when we actually move it from [more traditional storage] to glass,” Black says, which would allow the company “to schedule that in a way that lets us run our writers flat out,” maximizing the use of the expensive lasers.

Black takes us to see one of the laser-based data-writing stations. It sprawls across a large table; lenses and mirrors protrude at different angles, while cameras and sensors abound. For safety, the laser is not operating as Black prods at different components (it is also why the technology will never be available for consumers).

The production version will be much smaller. Many of the sensors are only for research, and the system has been designed to be quickly upgraded and changed on the fly.

“When I first saw HoloLens, it was much larger than this,” Black says, referencing the company’s mixed reality headset effort. “Microsoft has a business unit that is comfortable with optics,” he adds, suggesting that the HoloLens team may help with the commercialization of Silica’s equipment.

Were Black to switch the laser on, the beam would be split into seven parts – this number differs on the version we were not able to see – that would simultaneously imprint data on the glass.

“We've got this thing here called a polygon,” he says. “Rather than trying to spin a piece of glass, what we actually do is spin the light across it, a bit like a barcode scanner at a supermarket checkout.”

The process starts from the base of the platter “so that you're always writing through pristine glass, you're not picking up any noise,” Black says. “It’s like pouring layers of cement that fill up layer by layer on the way to the top.”

Each little bubble is a voxel that represents data, with the laser having 180 degrees of freedom to develop voids at different orientations. “If you can differentiate between four symbols, then you can store two bits in one symbol,” Black says. “If you can differentiate between eight different symbols, you can store three bits in one symbol.”

Southampton got all the way up to seven bits in a single voxel, “but that takes a huge number of pulses from the laser and leaves a big flipping shiner in the glass,” which slows down writing speeds and limits how many voids you can fit on the platter.

“Around two to three bits is where you want to be, where we've got these nice little gentle bruises,” he says. “It’s only a small amount of energy to make each one, and you can use all that other energy for doing hundreds of them simultaneously, packing them in.”
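The arithmetic behind those numbers is simply powers of two: a voxel that can be read as one of n distinguishable symbols carries log2(n) bits. A minimal sketch in Python, using only the figures quoted above (this is illustrative, not Microsoft's actual encoder):

```python
import math

def bits_per_voxel(distinguishable_symbols: int) -> float:
    """Bits stored in one voxel when the reader can tell apart
    `distinguishable_symbols` different void orientations/sizes."""
    return math.log2(distinguishable_symbols)

# The examples Black gives: 4 symbols -> 2 bits, 8 symbols -> 3 bits.
print(bits_per_voxel(4))    # 2.0
print(bits_per_voxel(8))    # 3.0

# Southampton's seven bits per voxel implies telling apart 2**7 = 128
# distinct marks - which is why it takes far more laser pulses.
print(bits_per_voxel(128))  # 7.0
```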

In an adjacent room, we get to see those voxels up close. It’s another large table brimming with lenses, wires, and exotic apparatuses. In the middle sits a square of glass, watched closely by a camera at the end of a series of lenses and mirrors.

“You have hundreds of layers of data in that two millimeters of glass,” Black says. “What we do is we sweep the glass through the focal plane of the microscope. So, as it comes into focus, you snap the data.”

The demonstration is slowed down so that our human eyes can keep up. The platter is moved so that the right sector of data is in the camera’s view, and then it takes photo after photo of the different layers of voxels.

Understanding Silica

Those images are the first step in reconverting the analog data for digital systems.

“Since the beginning of the project, we saw an opportunity to leverage deep learning in image understanding and signal processing,” Ioan Stefanovici, principal research manager in the cloud infrastructure group, tells us.

“We found that machine learning consistently outperformed all of the traditional signal processing approaches, and it was quicker to get results.”

Stefanovici adds that using artificial intelligence will speed up the final system and allow for faster iteration during research. Most crucially, it will help the company fit more on a single platter “because you need to write less error correction as your error rate will be lower.”

Training the AI was, to some extent, easy. While larger models today are pushing the limits of recorded knowledge to build their systems, the Silica team had the enviable position of being able to create data whenever they needed it.

If they wanted more images of voxels in glass, they’d simply laser another platter and put it in the microscope. “We're in the unique position that we can create as much training data as we want,” Stefanovici says.

It has yet to be decided where that AI will run and whether it’ll be on a custom ASIC or something else. But the team wants to ensure that any future AI can survive as long as the glass itself.

“With every piece of glass we write, we put enough training data in to be able to rebuild the model if needed,” Black says. “It takes a tiny amount of extra data. And it means that, just from that piece of glass, you could rebuild your model and do a decode.”

Stefanovici adds: “Every piece of glass is completely self-describing, so you can bootstrap from scratch and recover your data regardless.”

As Stefanovici leaves to continue iterating on the AI, Black is keen to note that they are not the only creators of this technology. The company, he says, has leaned on a number of different divisions and specialisms to bring Silica to this point.

Perhaps nothing is more testament to the different disciplines at work than the robots in the basement.

The first thing you notice about them is their speed. The second thing you notice is their shape.

There is a joke among evolutionary biologists that, given enough time, everything turns into crabs. English zoologist Lancelot Alexander Borradaile was the first to notice the process of carcinization, coining the term in 1916.

In at least five different instances, distinct species have separately evolved into crab-like creatures. With these robots, we have a sixth case.

Bots for bits

There are several of them.

They race along rails, hooked legs hanging onto a rail above and below. One stops, a grabber on its right side carefully extending to softly retrieve a Silica platter for carrying to a microscope-based read drive.

Then it lets go from the rail above it, its wheeled feet reaching out into thin air. The whole system flips around. It has dropped a rung, with what were its bottom feet now at its top.

This allows for a small number of robots to serve rows and rows of Silica platters, clambering up and down racks in search of the right piece of glass. Should one robot fail, another could go around it with relative ease.

As with traditional media, Microsoft uses erasure codes including RAID (redundant array of independent disks) to store duplicate data around the library, so data isn’t stranded behind any broken robots. This differs from a tape library, which is often serviced by one large robotic arm that can disrupt the whole system if it breaks.
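Microsoft hasn't detailed its exact scheme, but the principle can be sketched in a few lines. The Python example below uses a RAID 5-style XOR parity shard, purely as an illustration of how redundancy lets data survive one unreachable shard:

```python
from functools import reduce

def xor_parity(shards: list[bytes]) -> bytes:
    """XOR equal-length shards together; the result is a parity shard
    from which any single missing shard can be rebuilt."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*shards))

data = [b"plat", b"ter1", b"dat!"]   # three equal-length data shards
parity = xor_parity(data)            # stored on a different shelf

# If one shard (say data[1]) is stranded behind a broken robot,
# rebuild it from the surviving shards plus the parity shard.
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```

Production systems spread many more data and parity shards across shelves so that several simultaneous failures - robotic or otherwise - can be tolerated.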

Each robot is battery-powered and standalone, a critical design choice, Black says. “We absolutely wanted the shuttle robots to be untethered, and wanted the storage racks to be completely unpowered. If you attach power, then you have all these electronics that have a lifetime and need monitoring.”

All of that equipment has a finite lifespan that could render the project obsolete before the death of the media itself. In the case of Silica, the shelf life of the system is the life of the shelf itself - or of the building it is stored in.

“So basically, the shelving, the glass, the building - which wears out first?” Black says. “Probably the building - once you're at that point, whether it's 1,000 years or 10,000 years, it's a bit moot.”

The robots may also have a more immediate impact on the data center sector, with researcher Andromachi Chatzieleftheriou confirming that Microsoft Research is “starting to think about how we can use robots for data center operations,” as she checked a cloth covering non-Silica robot prototypes that we weren’t allowed to see.

Last October, DCD exclusively reported that the company was hiring a team to research data center automation and robotics. On LinkedIn, those hired say that they are working on "zero-touch data centers."

Microsoft also has a broader robotics effort, one led by former DARPA program manager Dr. Timothy Chung, who previously led the OFFensive Swarm-Enabled Tactics program and the DARPA Subterranean (SubT) Challenge.

For these crab robots, their speed is an important part of reducing the time it takes to access Silica data. The sooner they can shuttle a platter back to a camera, the faster it can start to process. This is where things slow down, although Microsoft won’t say just how long it expects reading to take.

“It's definitely still targeting the archival space,” Black says. “In that space today, the standard is 15 hours.” Even though reading off of a tape is faster, that’s only once you’ve reached the right part of the tape - in reality, the system has to spool through a kilometer of tape to find the data, and then wind it back up.

“If you want milliseconds, use a hard drive,” Black says. “You’re definitely not online.”

For archives, that is part of the selling point. The tech is ‘write once read many’ (WORM), so cannot be altered after the fact by those looking to rewrite history, nor is it at risk of ransomware attacks.

Back up a bit

To understand what a technology like Silica could mean for archivists, we turned to John Sheridan at the National Archives, the UK government’s official archive.

He wants to talk about sheep.

"Medieval England had an economy based on sheep - just look at the wool churches in the east," he says, referencing the vast churches funded by pious wool

magnates that dot the nation.

"Because there were a lot of sheep, it was easy to lay your hands on sheepskin parchment," Sheridan continues, growing animated. "And, consequently, medieval England is particularly well documented because parchment has amazing preservation properties."

In the 17th century, sheepskin had another revival. Lawyers discovered that the layers of flesh acted as an immutable record, "so if you try and alter the record, it shows on the skin."

Sheridan adds: "There's a longstanding relationship around the transaction cost of recording information and the cost of keeping information that has a really big bearing on what gets recorded and what gets kept. Long term storage is not new; we have a building full of parchment."

What is new is the level of density that a modern technology like Silica could provide. All the world’s 1.2 billion sheep could record but a fraction of what could fit in a single Silica warehouse.

Our understanding of the past is intrinsically linked to the choices of contemporary individuals and the medium they used. “Most civilizations wrote on perishable degradable organic materials: wood, bamboo slips, textiles, paper, parchment,” Curtis Runnels, professor of archaeology, anthropology, and classical studies at Boston University, tells us.

“This means that the lion's share of all written texts are gone forever.”

We don’t know how much we’ve lost - the best we can do is make guesses from what remains. “The ancient Mesopotamians and the Hittites wrote on clay tablets which they fired to preserve; the Hittite archive runs to more than 25,000 such tablets,” says Prof. Runnels.

“That is a very rare exception. When you consider that the Library of Congress or the British Library each have in excess of 100 million texts, you get an idea of the scale of the loss from ancient empires.

“Almost all the thoughts of humans who have ever existed are lost. A tiny bit of human thought and experience has slipped through the memory hole. How would we measure the loss if the ancient scriptures (like the Torah or New Testament or Bhagavat Gita), poems (Iliad, Odyssey, Aeneid, Mahabharata), philosophy (Lao Tzu, Buddha, Plato, Aristotle), and science (Archimedes, Euclid) [were lost]? Yet they are only 0.001 percent of the accumulated store of human knowledge.”

He adds: “But what we have is priceless.”

We ask Runnels, who was not aware of Silica, about what we could do to ensure that the texts of today would not be lost.

“Inscribe the text in multiple scripts and a key for decipherment (like a chart/sound correspondence code/microchip) inside of some material like glass, say in a glass block,” he says.

“Then make many copies and distribute them around the world (several hundred thousand per continent) each block marked with huge structures or monuments like towers. One might get through to the future.”

A message through time

Time is on Jonathon Keats' mind. The conceptual artist and experimental philosopher has long created projects focused on exploring our epoch and what comes next.

His latest work centers around two cameras: One that lasts 100 years, and another that lasts 1,000. The first, the Century Camera, is an inexpensive pinhole system developed with UNESCO where individuals are encouraged to hide hundreds of them across the world.

"Most of them are going to fail, but it's cheap enough that we can have redundancy,” he says.

The Millennium Camera is more expensive, and is only being installed in a few specific locations, including upcoming deployments in Los Angeles and the Swiss Alps.

As we speak, the sun is shining on Tumamoc Hill, overlooking Tucson. A few photons of sunlight make their way through a tiny pinhole in a thin sheet of 24-karat gold housed in a small copper cylinder perched on top of a steel pole.

True to its temporal nature, the technology behind this camera is old. “It’s a concept from about 500 years ago,” Keats says. “Basically, you rub the copper with pumice, and then you rub it with garlic - nobody even knows why you rub it with garlic, but it helps to bind the oil.”

That oil, itself a technology traced back to at least the ancient Egyptians, is then glazed to leave a millennia-long exposure on the back of the copper cylinder. “There are myriad reasons why this is likely to fail,” Keats says.

“First and foremost, we're in beta, nobody's ever done it before, I have no real way in which to be able to iterate, given that I don't have 1,000 years to live to be able to create my first prototype.”

Therein lies one of the fundamental challenges of any signal to the future. With the Internet’s Transmission Control Protocol, or TCP, an acknowledgment - or return signal - is a core part of any communication that is sent from one system to another.

“Here's the rub,” archivist Sheridan says. “If you're sending messages through time, the future can't send you the acknowledgment 'message received.' You’ve got to spread your bets, you’ve got to have redundancy.”

Symbols without meaning

Should our message survive through unknown millennia, it is also not clear that our glass-encased words will even be understood.

"It is not a possibility but rather a certainty that any given language will have considerably evolved 10,000 years from now," Filippo Batisti, head of the Cognition, Language, Action, and Sensibility – Venetian Seminar (CLAVeS) in Italy, says.

"Small changes happen all the time in front of our eyes and their sum over decades amounts to grandchildren talking and writing somewhat differently than their grandparents."

Over a couple of generations, mutual intelligibility can be mostly preserved - but as time stretches on, that link begins to fray. "Linguistic intercomprehension will become the first concern," warns Batisti.

"Gutenberg and computer writing are separated by a mere five-and-a-half century interval: Here, we're talking seventeen times this difference in a world where technological progress is much faster!"

It is not inconceivable that a future civilization would turn to AI to help decipher any discovered texts - or that the civilization could simply be an AI.

Already, we can see how our primitive technology can resurrect dead and forgotten languages. This past year alone, there have been “two marvelous breakthroughs, one with the Kushan Script and the other the Herculaneum papyrus rolls,” archaeologist Runnels says, with AI helping unlock new understandings of life in Central Asia and insight into a city destroyed by Mount Vesuvius’ eruption.

AI “has proved to be very good at deciphering unknown scripts and, when combined with CT-scanning and other technologies, to be able to read charred or otherwise ‘unreadable’ fragments of ancient books. I think that we will see all the undeciphered scripts (and there are many) broken in the next five years at the most.”

Of course, we still suffer from the lack of contextual background that would help us fully understand the texts. “I can have all the AI translations of my words but our lack of comprehension is not only linguistic, they are not understanding the conceptual/cultural framing that makes me have that particular need,” Batisti says.

“This is more of an interpretative problem, rather than one of mere linguistic translation. AI-assisted translation will hardly be of help alone. This, incidentally, is also a very good reason to defend the value and the usefulness of the humanities against the ever-trending hyper-scientistic utopias.”

Lost in translation

Even if a future civilization relies on a language somewhat similar to our own, and is able to translate texts for their time, "the problem might be that entire sets of single words would turn out to have empty referents: entire concepts, taken in isolation, would then be lost," Batisti says.

"The longer the distance, the more the pieces of life (like material culture,

or social norms, or modes of knowledge and beliefs) attached to words change. By then, our comprehension of physics or even medicine will be different and its social significance as well. Even words or concepts referring to our own body and biological features could be construed quite differently. That already happens today in different cultures around the world."

The archaeologists of today do not just rely on texts - they look for dwellings, statues, lost cities, and other civilizational detritus to help paint a picture of the past. Luckily for future historians, but unfortunately for everyone else, we are leaving a far greater message to our descendants.

Climate change, biodiversity loss, and plastic waste are but a few of the anthropogenic scars we will leave on our timeline. With or without Silica’s recordings, a successor civilization would be able to decipher our values and priorities from our actions.

Artist Keats hopes that his work, and those of others, will help people think at longer time scales to understand the compounding effects of loss and change.

“It is essential that we be able to situate ourselves in relation to the deep past and to the future,” Keats says. “By virtue of the fact that what we do will persist for a long time, we are responsible to the future.”

That said, he warns that such a view can also be coopted to justify anything in the now. “There are many more generations that will live after us than that have lived before us,” he says. “And if we think about suffering, and we take a utilitarian way of considering suffering, then the far future as a whole is far more important than the present,” a concept that can allow for dangerous moral assertions.

“We are in the present and we need to be living fully in the present in order to be able to make the kinds of decisions that are actually going to have a positive impact on the future. It’s a delicate balance.”

That present is one increasingly dominated by AI.

Generative AI, in particular, has become the technology of our age - or, at least, has been hyped up to be so. The largest hyperscale cloud companies, including Microsoft and its competitors, have announced record investments in data centers and servers as they gear up for a profound jump in the computational capabilities of our species.

This moment also represents another delicate balance for archivists and others interested in recording the world, one of great opportunity and even greater risk.

An AI audience

Generative models are hungry. Current approaches have seen companies circumvent copyright laws to ingest most of the Internet and a huge number of the world’s texts to help the models grow dramatically smarter over the past year.

This feature, once published, will soon be scraped and added to the great slop of data that is being pumped into the next wave of models. But it’s not enough. To keep growing, the models need ever more data, and they are running out.

One way to circumvent this has been for companies to feed their model synthetic data (knowingly or unknowingly), allowing them to create the data they need. But this can lead to model collapse, as the system errs ever further from baseline reality.

Another approach could be to digitize the treasure trove of data we have from the past - something made more possible by lowering the cost of storage through Silica or other technologies.

"I wonder whether it opens up new opportunities and new models for digitizing analog collections, because we've got hundreds of kilometers of records of humanity and less than 10 percent of that has been digitized," Sheridan, head of the National Archives' digitization efforts, says.

"I've been working at digitizing stuff really hard for a long time. But maybe the economics of digitizing the analog records of humanity - which are pretty extensive - shifts in a really profound and interesting way."

Signal through the noise

The models that we are building by mining our archives also risk polluting them.

“What we have now is a past that never existed,” says Andrew Hoskins, interdisciplinary research professor at the University of Glasgow and founding Editor-in-Chief of the journal Memory Studies.

“Large language models are regurgitating something that never was.”

There is no way to prove that this feature was written by a human. Its length and hopefully its clarity offer some hints, while - were one to invest the time to fact-check it - the lack of hallucinations and fabricated quotes offer another clue.

But that is hard enough today. For a future historian, perhaps sifting through countless records of multimedia content generated by AI models from the next decade, unable to call up sources, and lacking the contextual clues that might speak of human origin, what could they make of this text?

We are creating a great deal of noise that could deafen recorded reality, leaving mirages and illusions of ourselves alongside real videos and text.

The recorded self

Even without generative AI, the amount of data we produce is rising at a dramatic rate.

At the turn of the century, as the power of digital technology to record our lives became clear, "there was this obsession over total memory," Hoskins recalls.

Companies at the time pitched products that could record your life in full: “It was this bizarre advertising like, ‘you'll never miss your first kiss, you can go back and see it at any time.’” The technology was expensive, impractical, and clunky, and it never really took off.

“And then in the past two years, it's started to become a reality.”

As a culture, recording and sharing more and more of our lives has become the norm. Even if you try to limit your own sharing, interacting with modern society means your data will inevitably be stored.

Earlier this year, a report commissioned by the US Director of National Intelligence said that intelligence community (IC) member agencies "expect to maintain amounts of data at a scale comparable to that of a large corporation like Meta or Amazon," and raised concerns about their ability to hold all of this surveillance data.

The "IC has the potential to be one of the largest customers for cold data storage because of its wide-ranging need for information," the report states, laying out the problems of short-lived storage platforms - HDD density growth is slowing, SSDs don't last long enough, and tape will likely hit super-paramagnetic effect limits by the end of the decade, capping density improvements.

The report found that Microsoft’s Silica and rival Cerabyte’s ceramic storage were the only two technologies expected to be capable of storing the coming wave of IC data in the near term.

What we should forget

While the intelligence community will argue that widespread surveillance is key to national security, it continues to be an ethical morass that democracies have failed to fully debate or address.

The scale of the records we provide of ourselves, and that corporations and governments keep, is unlike anything we have ever maintained. An entry-level employee in a quiet backwater town will have more records kept about them than kings and emperors of the distant past.

Sometimes there is a value in losing data, argues Hoskins.

“Forgetting is not always a bad thing, societies need to forget enough to be able to move on," he says. "My teenage years were not recorded, all the crappy things I did no one knows about."

Beyond his own youthful foibles, Hoskins wonders what else should be relegated to the dustbin of history. “Traditionally, the media that carries memory - paper, photographs, etc. - they yellow and fade and decompose in a natural way. That’s how societies forget, it’s a decay time.

“It's a natural thing for memories to disappear. The digital era, of course, just totally messes that up.”

What we might accidentally forget

At the same time, modern recordings suffer from a troubling flaw - they often require complex technologies to understand them, including layers of proprietary software, or always-online servers.

As an example, in “a pervasive software like Microsoft Word - what is encoded in the file and what Word computes when you open the file is not obvious, including to most users of Word,” archivist Sheridan says.

"Our systems have become so complex - we have no idea how anything works, because it has many, many layers of interconnected software and complexity.”

This presents a real “challenge for long term preservation as the systems that we need to use to render information over time become more complex, and the ability to preserve whole infrastructures doesn't look economic.”

Even if the economics are solved, closed source software and intellectual property issues can limit what archivists are able to keep.

“We think that having institutions whose job this is to solve is a really important thing,” Sheridan says. “It’s all the more important, because digital stuff doesn't keep itself, unlike parchment.”

The world’s library

With Silica, Microsoft could be set to store all these disparate worlds within its halls - the representations of real ones, the digital ones, the intentionally fake ones, and the hallucinated ones.

Beyond copyright laws and the broad guidelines of Microsoft Azure’s terms of service, it will not be Microsoft’s job to police and maintain what goes into the glass. Nor would we necessarily want that responsibility to lie with the world’s most valuable public company.

The company will own the technology, although others are working on alternative approaches (see box), and will probably offer Silica solely as a cloud service. But it is unlikely to be too aggressive a gatekeeper on what gets stored on the glass, beyond requiring that customers pay and stay within the law.

Instead, that responsibility will fall to all of us, through what we choose to create and record. Archivists will be at the frontline of the fight to pass on knowledge to future generations.

Lowering the cost of storage, and the risk of loss, is just the beginning of what it means to store the world’s data. “Commoditized, low-cost long term storage gives digital archives the opportunity to put more of their effort into the stuff that is less well solved,” Sheridan says.

“If this is what this is, that's fantastic, because it means we can put more of our effort and energy into all the other parts of the jigsaw.”

Cost savings over the long term will be the thing “that will drive societal change,” Microsoft’s Black believes. “Whether it be hospitals keeping medical data or extractive industries keeping accurate details on what they did to the ground, when you move it to Silica that incremental cost just goes away.”

He expects to see a shift in how people treat data, and that regulations will also change to increase how long different sectors have to hold onto information now that there are no technological or temporal limitations to indefinite storage.

“If people internalize what it actually means, I think it's going to be a complete step change in how people think about data preservation,” colleague Stefanovici adds, mentioning how much scientific data from experiments is currently not stored, and how historical data simply no longer exists.

“We don’t need to have so much loss." 

OTHER APPROACHES

Microsoft is not alone in seeking to upend long term storage.

In 2022, we profiled a number of competing approaches to keeping data for hundreds or thousands of years.

European startup Cerabyte also hopes to use femtosecond lasers, but instead is focusing on ceramic nano-coatings. Piql has developed a version of film that can last 1,000 years, and has put reels and reels of data under a mountain in Svalbard.

The US Intelligence Advanced Research Projects Activity is funding a Molecular Information Storage Technologies (MIST) effort to develop DNA storage. Companies like Catalog and Biomemory are also developing early DNA prototypes.

“Microsoft was heavily involved in DNA stuff,” Richard Black tells DCD. “We stopped - they're just not meeting the orders of magnitude gain they thought they were gonna get. Proponents touted the extreme density, but it's not clear that that's in any way relevant."

The company co-founded the DNA Storage Alliance, but is no longer involved.

“Way down the line, before the supernova, there’ll be a point where humanity needs to leave planet Earth and find somewhere else. When that happens, the megabytes per gram metric is going to matter, and we'll probably use DNA storage," Black says.

“That's a few billion years off. It's not clear to me that between now and then there's a use case for DNA storage.” 

Indosat CEO Vikram Sinha: Plugging the gaps and helping to connect Indonesia’s new capital

Fresh off a merger two years ago, Indosat’s CEO talks 5G, the country’s new capital network build, AI, and more

“The biggest opportunity from the merger has been to serve more rural customers,” Vikram Sinha, CEO of Indosat, tells DCD.

In 2022, two Indonesian telcos, Ooredoo Indosat and Hutchison Asia Telecom Group, merged to create Indosat Ooredoo Hutchison (IOH), a combined company worth $6 billion (IDR97.7tn).

Sinha, who was installed as the CEO of the new business following the merger, insists it has been a success for the company, which serves more than 100 million mobile subscribers and is Indonesia’s second-largest telco.

He explains that the merger has enabled Indosat to plug gaps in its network coverage, in particular in rural parts of the country.

“Currently, we are strong in Java, but the biggest opportunity for growth comes from rural Indonesia,” says Sinha.

Paul Lipscombe Telecoms Editor

“We are really focused on improving the customer experience in these areas. In 2023, Indosat invested around $800 million (IDR13tn) in capital expenditure, around 60 percent of which was spent to strengthen our network in rural Indonesia. We are continuing this investment in 2024.”

He says the merger has enabled the telco to vastly increase its network coverage, plugging gaps where coverage hadn’t previously been. At present, Indosat’s mobile network covers 94 percent of the country’s population.

By 2027, Indosat plans to connect 21 million rural residents to Internet and mobile services, as part of this investment.

Proving mergers can succeed

Mergers are often controversial as they can be perceived to hinder competitiveness in the market.

It was no different for Indosat at first, explains Sinha, who notes that credit rating agency Fitch Ratings downgraded the company to a negative BBB rating after the deal closed in 2022.

However, two years later, the same agency upgraded Indosat to a positive AA+ rating.

In the case of Indonesia, a country that boasts a population of around 280 million people, there’s plenty of competition, with networks including Telkom Indonesia, the country’s biggest telco, Tri Indonesia, XL Axiata, and Smartfren Telecom all fighting for customers. The latter two are currently in talks over a merger of their own.

But Sinha, who previously led Ooredoo’s operations in Myanmar and the Maldives, argues that the merger between Ooredoo Indosat and Hutchison Asia Telecom Group was necessary for the Indonesian market.

“We have achieved around $400m (IDR6.5tn) in annualized synergies, or cost savings, due to the merger. We have already completed most of the big-ticket integration initiatives, mainly around network integration, but there is still more we can do,” says Sinha.

He adds that the main focus of the company since merging has been around optimizing its business.

“This is reflected in our network integration, which we completed in record time in around 12 months. Where there was overlap in the two networks and duplicate transmission sites, we could remove some sites to achieve cost savings.”

Sinha says Indosat has been able to do this by strengthening its network in parts of rural eastern Indonesia where there were gaps.

“By removing duplicate sites and rolling out new sites in new areas we have managed to reduce our total number of sites delivering cost savings while improving the experience for our customers,” he explains.

4G push

In its most recent earnings for the first quarter of this year, Indosat reported revenue of $873m (IDR13.8tn), up 15.8 percent, while net profits jumped nearly 40 percent year-on-year (YoY).

Significantly, it grew its total subscriber base beyond 100 million customers during the period, which Sinha attributed to the company’s increased focus on rural areas.

In total, Indosat increased its customer base by 2.3 percent YoY during the first quarter.

“We have been investing significantly in building a very high-quality network in the rural parts of Indonesia. Indosat must have a competitive network not only in Java, but it has a very reliable and competitive network across Indonesia,” Sinha says.

"In rural areas, 4G device penetration is still much higher than for 5G, and so we are focusing on expanding and strengthening our 4G network.”

Indeed, since last year Indosat has pledged to invest around 60 percent of the money it puts into its CapEx programs into rural areas.

However, because Indonesia is made up of around 17,500 islands, of which only around a third (6,000) are inhabited, reaching every inch of the country is impossible - the terrain in many locations is challenging, and the majority of islands are uninhabited and so do not need network coverage anyway.

In addition, the vast number of islands means it costs Indosat more to expand its networks.

“The cost of network expansion and enhancement is high in Indonesia because the country is made up of so many islands and the population density is quite low in areas,” explains Sinha. “This means the number of people covered by base stations is lower than in urban Europe, for example.

“Another challenge is transport and making sure we have a resilient transport network at the right price,” Sinha adds. “We need to build our network and sales channel capabilities in rural areas, and we need to be able to travel at a reasonable cost. We always need to be mindful that we operate in a market with a monthly average revenue per user of around $3.”

To date, the network build-out has focused heavily on 4G connectivity too, according to Sinha. This has been reflected in the company’s recent earnings report, which revealed that its 4G base station footprint grew by more than 20 percent to 184,000 across the country. Data traffic across its 2G, 4G, and 5G networks rose by 14.3 percent year-on-year, to 3,858PB.

Like many operators across the globe, Indosat has phased out its 3G network, doing so back in 2021, well before many of the world’s markets.

“3G was a waste of allocation of spectrum, so we were very quick to do that,” he explains, with the move freeing up spectrum in the 1,800 to 2,100 MHz bands.

He notes that the company still operates its 2G network and says it has between five and six million 2G users. There is no timeline for when it will retire this network.

No rush with 5G

Many markets across the world, in particular in Western Europe, the US, South Korea, and Japan, battled for supremacy in the race to launch 5G networks.

While South Korea and Switzerland launched their first 5G commercial networks in early 2019, the same wasn’t happening in Indonesia.

Indosat launched 5G in 2021. Its 5G network is available in eight cities, including Jakarta, Surakarta, and Denpasar, Bali.

Even now, Sinha says it’s taking its time with expanding its 5G rollout, insisting it’s never been a race to launch the latest G.

“I think for Indonesia, it was a wise decision not to rush on 5G, as we have seen from all over the world that sometimes being late is good,” he says, adding that 5G is not about speed, but about the whole ecosystem being ready.

“In the case of 5G, it’s not about being first. It’s about ensuring that the complete ecosystem of handsets, applications, and use cases is ready so that customers see the benefits of the technology and we can see a return on our investment. The key thing is when we put money on 5G, we should be equally ready to monetize our investment otherwise it will have no meaning. If the use cases for 5G aren’t there, then you don’t have that.”

Sinha adds that Indosat’s network is “fully modernized,” and ready for upgrades as soon as certain 5G spectrum comes into play.

He touts benefits such as faster connection speeds and the potential for the technology to drive Industry 4.0, while Fixed Wireless Access (FWA) also excites Sinha.

Building Indonesia’s new capital

One of the company’s biggest endeavors at present is its role in the build-out of Indonesia’s new capital city, Nusantara, which is set to begin this summer.

Nusantara is being built on the island of Borneo, 800 miles away from the current capital Jakarta on the island of Java. With a metropolitan population of around 34 million, Jakarta is already overcrowded, and is threatened by flooding and land subsidence that could leave parts of it underwater by 2050.

Indonesia's President Joko Widodo plans to build an entirely new "green" capital by clearing virgin rainforest on Borneo, and has decreed that, on 17 August 2024, the role of the capital city will shift to what is currently a vast building site.

This is no small task, and to support the move, Indosat is developing a 4G LTE network for the city, says Sinha.

He explains that around $10m (IDR162.9bn) has so far been invested to deploy between 30 and 40 base transceiver stations in Nusantara, on top of the 30 4G sites it already operates in the area.

“[In Nusantara] we will be working very closely with the authorities to ensure that there is world-class digital infrastructure," he said in August of last year.

“Indosat’s wider purpose is to empower Indonesia and so we are working very closely with the authorities to help develop world-class digital infrastructure in the Nusantara area. The work has already started, and we are doing it in a very collaborative manner. We already have around 30 transmission sites in the area, and we recently invested another $10m to add another 30 to 40 new sites.”

The move to switch capitals is estimated to cost $30bn (IDR466tn).

Spin it to win it

As is the case with many telcos, Indosat doesn’t just own traditional telecom assets - it also owns a data center business.

In 2022, it teamed up with BDx Data Center and Lintasarta to form a data center joint venture (JV). The agreement, valued at the time at around $300m (IDR4.8tn), marked BDx’s entry into the Indonesian data center market. The companies said the JV was formed to “meet Indonesia’s growing need for a higher level of global data center facilities.”

Earlier this year, Indosat agreed to sell a portfolio of data centers to BDx, consisting of carrier-neutral colocation and Edge sites in cities such as Jakarta, Surabaya, Batam, Medan, Makassar, Bandung, and Semarang, including ten sites connected to six domestic and five international subsea cables.

“We're honored to play a role in shaping Indonesia's digital future through this impactful collaboration,” Sinha commented in January. “This transaction underscores our dedication to building a sustainable business and propelling Indosat's evolution from telco to TechCo.

Collaborating with BDx Indonesia not only enhances our customer service but also reinforces our commitment to connecting and empowering every Indonesian.”

Providing an update on the company’s approach to its data center assets, Sinha told DCD that it made more sense to focus on areas where it had more expertise.

“We decided to carve out data centers. So it was not only about money, it was more about getting the right expertise, supported by the right funds needed, and that's why we created this JV with BDx.

“We wanted to create a platform which had all the right expertise and that can help fulfill the needs of Indonesia when it comes to data centers.”

However, it’s not just data center assets that Indosat has sold; the company has also turned its attention to monetizing its telecom towers, something that has been seen frequently in the telecom industry in the past few years.

The company completed the sale of 4,200 telecom towers to DigitalBridge-owned EdgePoint Infrastructure's Indonesian unit in 2021 for $750m (IDR12.2tn). Last year, Indosat scooped an additional $109m (IDR1.75tn) following the sale of 997 towers to Mitratel.

Sinha has not ruled out any further tower sales, noting that any proceeds contribute towards the modernization of its mobile network.

“We want to be asset-right and asset-light,” he says. “With towers, we did it for the right reason, so that our focus is we build competencies that are focused on our strengths. When it comes to infrastructure, we want to make sure that it is carved out, so that we can unlock the full potential of its value.”

Sinha also revealed that Indosat is set to add between 2,500 and 3,000 new towers this year, as it aims to plug coverage gaps across the country.

Indosat has been heavily rumored to be considering selling its fiber assets, which could fetch as much as $1bn (IDR16.2tn). Sinha didn’t comment on the speculation, simply noting that Indosat is open to the right deal.

In November, the company confirmed the acquisition of fiber-based service provider MNC Kabel Mediacom (MNC Play) in Indonesia as part of a collaboration with Asianet.

“In the area of fiber-to-the-home (FTTH), I expect our home broadband business, which we call Indosat HiFi, to become an increasingly important contributor to Indosat’s growth in the future,” he adds.

All eyes on an AI future

As with every other business, the company is still trying to work out how AI will impact its growth.

“Indosat has a larger purpose, to connect and digitally empower every Indonesian,” he explains. “AI can play a pivotal role in advancing Indonesia by unleashing the innovation and creativity of Indonesian businesses.”

Such is the company’s optimism around AI that it announced a partnership with GPU maker Nvidia to build an AI center in Surakarta, Indonesia.

The center will be worth about $200m (IDR3.2tn) and could also include telecommunications infrastructure and a human resource center, while Sinha says it’s part of Indosat’s strategy to become an AI pioneer in the country.

“Through our landmark partnership with Nvidia to become their cloud provider partner in Indonesia, we will democratize access to AI-cloud services, making them accessible to businesses across Indonesia and the region, and accelerating the growth of Indonesia’s digital economy,” he adds.

The CEO believes AI will be “transformative in helping productivity,” and adds: “There’s been a lot of fear that AI will replace jobs, but this isn’t the case. It will replace people who don’t embrace the technology.”

His enthusiasm for the technology mirrors that of many in the telecoms industry. Only time will tell how pivotal AI will be in the future for telcos such as Indosat.

But, for now, the main focus is on bridging the gap between Jakarta and some of the country’s more rural communities.

“Whether someone is in Jakarta or in places like Lombok, or Nusantara, we want them to have the same level of experience. That is our goal, that is the journey we are now on by investing in our network.” 

The CEO who is 30 seconds from a server

Can any data center boss get from his desk to a data hall faster than CoreSite’s Juan Font?

Juan Font can get hands-on with his company’s equipment any time he wants. His office is in CoreSite’s VA 3 data center, on the company’s Reston, Virginia, campus.

“I love to listen to the servers every day,” he tells me over the phone. “From where I sit, it would take me probably 30 seconds to get to the computers.”

This keeps him close to the business, he says: “I do a walkabout each day, to understand how our customers are utilizing the platform. I help them to work and I just love to get dirty, to get into the nitty-gritty.”

Starting in an icon

US colocation specialist CoreSite began life in 2001 at One Wilshire, an iconic carrier hotel in Los Angeles. The building’s owner, Carlyle Group, set up a subsidiary, CRG West, to manage interconnectivity at One Wilshire, as well as the Market Post Tower (now Tower 55) in San Jose, which at that time hosted one of the Internet’s oldest exchange points, MAE West.

Through the first years of the 21st century, CRG expanded into Washington, Boston, and Chicago, before changing its name to CoreSite and floating in a 2010 IPO. The Denver office came later when CoreSite bought ComFluent in 2012.

“I joined here in September 2010, and that was about a month and a half before we went public,” says Font. “I’ve been here through that journey - becoming a public company and being independently operated from 2010 through 2021, before being acquired by American Tower.”

Font was previously at data center giant Equinix: “So I’ve been in the data center industry since 2005,” he says.

Before that, he was in telecoms, at Teleglobe and Aleron Broadband. “I was negotiating with telecom carriers and PTTs,” he recalls. “But since that market became more liberalized, there's been more competition. I don't know if you recall the days when calling international would be like $2 a minute - but those rates started to drop dramatically.”

With the Internet and IP supplanting the core of voice networks, he says: “There were a lot of bankruptcies, and returns were always declining over time. It became very difficult to make a living in that space.”

Peter Judge Contributor

When he joined data centers, he immediately felt the contrast: “What a wonderful business,” he says. “We were at the dawn of the digital age with unending growth. I thought it was a great career move. I don't think you had to be particularly brilliant to succeed in this business.”

Is he being modest? “It's a demand-driven business. You don't need to generate demand. The amount of data that gets created, processed, transmitted, and stored, continues to grow at an unbelievable rate - and every X number of years, you have this next, paradigm-shifting phenomenon that upends everything and accelerates it. From that perspective, it seems like it's very hard to fail.”

First impressions

When he first saw inside a data center, he noticed several things: “It wasn’t like today,” Font says. “They were very small assets. The rooms were hot and noisy.”

More importantly, he saw a “sticky” business model: “Once you see a cabinet with all these cables running around, you understand that each cable is going to a separate customer, and you know, this business is pretty sticky. Because for that network or enterprise to move somewhere else, you have to migrate all those connections.”

He also saw an industry that was capital-intensive: “Particularly for the smaller companies, to keep adding inventory and power distribution, the capital intensity is very high compared to other models. On the network side, fiber is expensive, but it's nothing like building these buildings and adding all this electrical and mechanical infrastructure.”

Finally, he understood location: “Once you realize how sticky the business was, you know locations are not created equal,” Font explains. “You can be in one building, but being in this suite is so much better than the other suite because the peering is changed, for example. I learned the value of interconnection, fairly early on.”

He saw all this at Equinix, but wasn’t quite comfortable there: “Equinix is the incumbent, the largest, most dominant interconnection-driven platform out there,” he says. “But I love the underdog.”

He says Equinix became “almost like a monopoly telecom operator” that could “treat customers like subscribers, and I like to create bonds and relationships with customers.”

He also observed that Equinix’s retail focus meant that it missed out on the opportunities for wholesale colocation provision. These were being picked up by other players, he says.

“Equinix was very retail oriented, and there were emerging platforms out there like DuPont Fabros, Digital Realty Trust, and CoreSite, that could also sell wholesale,” he says. “That felt attractive, being able to do different types of deals, not just working with networks and other digital platforms, but also with enterprises.”

Combining connection and capacity

So he looked elsewhere: “What attracted me about CoreSite was that it was a much smaller company, and still is - so as an individual contributor, the ability to be more impactful was very meaningful to me. Also they owned One Wilshire, the preeminent carrier hotel on the West Coast, the most integrated building, and the second most important peering platform in the US.”

The company could operate in different markets: “This is a company that has cut its teeth with interconnection, which is where I come from, but at the same time, it is investing in these very large buildings and is going toe to toe against competitors in the wholesale space.”

This was before there was a hyperscale market, he says: “We called it wholesale.” And he wanted to do that as well as interconnection.

“We were very early in wholesale, and other younger companies never really adapted,” he explains. “When I joined, our business model was around owning and operating carrier hotels. We quickly realized that the advent of the digital age, social media, and search required heavier racks and more power.”

That was different from the carrier hotels business: “Carrier hotels were well suited for enabling interconnection between carriers,” he says. “You would go to a central location, like One Wilshire, and all the carriers from Asia Pacific would converge and exchange traffic with each other, whether it was voice traffic or data traffic. They were very well suited for interconnection with a very large meet-me room.”

Companies like Google, Facebook, and (at the time) MySpace were different: “They required larger spaces, and much more power.”

CoreSite developed a campus model: “LA2 was probably the first manifestation. It was the old Post Office annex in Los Angeles, at 900 N Alameda, and we tethered it to One Wilshire with high-count dark fiber that we owned.”

Developments like that were able to offer larger footprints to customers, but also access strong interconnection: “That was something that CoreSite had really thought about. Equinix was still fairly focused on smaller retail transactions expanding globally, while competitors like Digital and DuPont were more focused on just the larger stuff. But we were able to do both. We were able to provide large footprints that were tethered to an extremely rich network ecosystem.”

After that, a new kind of player emerged: the cloud provider. “It all started with Amazon, here in Northern Virginia,” Font says. “But that was a new infrastructure, a new utility that was being built from scratch, which is the cloud. That just took off, and pretty quickly those cloud providers realized that, just like network carrier peering, you have to establish on-ramps for enterprises and other digital businesses to connect to you.”

This is best provided by providers like Equinix or CoreSite, he says: “Those on-ramps became the third leg of the stool, and we pivoted to being a company that could support network to enterprise, enterprise to cloud, and network to cloud.”

He adds: “With networks and cloud providers, and capacity that was adjacent to it, you can address a fairly wide spectrum of use cases that are sensitive to latency and have to have certain adjacency to the client on-ramps - but at the same time, with the operating characteristics of the wholesale model.”

Time to expand with American Tower?

CoreSite now has around 28 data centers covering most of the US, but it is not rushing into expansion abroad.

“We have digital hub status in the best markets,” Font explains. “We’re missing one or two, but in LA, Silicon Valley, Northern Virginia, New York, and Chicago, we are the number one or number two interconnection fabric. And we have a lot of capacity that is adjacent.”

That determines where the best place to spend money is: “From a capital allocation perspective, what provides the highest return on invested capital is to continue growing in those markets, because our customers continue to demand more capacity where we are,” Font says.

Now that CoreSite has the backing of American Tower, it has added a couple of markets, like Orlando, but its focus is on getting a return on investment: “It's very hard to say, ‘let's be international,’” Font explains. “There is every intention at some point to go international, but I think there's still plenty of work to do here in the US. Our focus is to continue adding capacity in these very harsh and vibrant markets.”

After all those years of being an independent provider, what is American Tower like as an owner?

“Frankly, being perfectly honest, it's been a godsend,” he responds. “They allow us to operate independently. And they are also very enthusiastic about our business model, and the possibilities of convergence between our disparate real estate communication assets of data centers and towers.”

With American Tower’s funding, CoreSite’s growth has accelerated, Font says: “It is very net positive for CoreSite to be part of a Fortune 350 company, this gives us more credibility with enterprise users.”

As he explains, enterprise customers moving their IT load out of on-premise data centers will want a solid and reputable partner. “You're going to ask, what is the financial wherewithal of this company, because you recognize this is a very capital-intensive industry, and you want to make sure that your operator is not cutting corners or maintenance, and has the ability to continue expanding as you grow.”

CoreSite is a multi-tenant operator: “We have to cater to all walks of life - it can be an enterprise that has very low density, a network, or a high-performance computing environment. Even before AI, over the last couple of decades, we’ve seen a progressive, unrelenting increase in power density.”

“We have a catalog of solutions already in place for enterprises and other digital platforms. They get a cage or a full suite from CoreSite to host their private cloud. Then they use our interconnection to go to the cloud on-ramps, to move some of those workloads to the cloud.”

Enterprises have a continuum where loads can move back and forth as companies evolve, but AI will add a class of workloads beyond enterprise capabilities.

Combining towers and facilities

What about the towers, though? Is there a synergy between CoreSite’s data centers and its owner’s mobile infrastructure?

“It's a matter of when and not if,” he replies. “There will be a convergence between wireless and wireline infrastructure. American is the largest operator in the US with something like 43,000 towers. Right now, we're embarking on the rollout of 5G, but after that, there'll be a 6G.”

That will increase bandwidth, he says: “Imagine your phone has 100 times the amount of throughput that you have today. There will be more and more applications that are very intensive in data consumption and compute, that will perform close to the end user.”

Right now, cloud regions are centralized but he sees that changing: ”The content and compute utility that you can tap on has to be closer to the end user and that is the Edge.”

Infrastructure closer to users will improve the user experience, “but it's going to take some time for these killer apps to be realized,” Font argues.

“You don't have to have a containerized data center in every tower, but you can have a cluster of towers where you have a mobile Edge compute site with the capability to operate a more Edge-type exchange, where firms can exchange data in closer proximity to users. It's not just the latency, it is the cost of transporting that amount of data across the country.”

AI will also persuade users who have given up their on-prem data centers to adopt “near-prem” resources, he says. “The AI technology is going to enable a lot more applications for near-prem data centers. Where you have points of sale in stores and restaurants, you can supplant individuals with technology, but for it to work properly, it has to be co-located in close proximity to where the points of sale are.”

AI accelerates outsourcing

He thinks AI is a “paradigm-shifting technology” on par with the Internet or cellphones, but it is coming faster: “What has caught me by surprise is how quickly enterprises are jumping onto the bandwagon and utilizing it.”

The surprise is that enterprises are latching on to something that they cannot build for themselves, and that will change how they consume computing.

Of course, he knows forms of AI have been around for many years, with sites like Netflix and TikTok using algorithms to serve content. But the new wave, Font says, is different: “There were a lot of things happening behind the scenes, but now it has become more ubiquitous and easier to use, for a broader base of use cases,” he says. “And that is only going to accelerate AI massively.”

That requires “massive amounts of infrastructure” that are “orders of magnitude higher than anything we've seen before.” Font says: “It's been a phenomenal driver of data center demand, and we are beneficiaries of that emerging trend.”

That trend also changes how data centers are built: “These GPUs consume a lot of power, and most of this capacity is going to be absorbed by hyperscalers, companies that are building massive facilities catering to a single user, like a Microsoft or someone like that.”

Font remembers that “it used to be 2kW per rack. Nowadays, the average in our portfolio is around 6.5kW. That's a fourfold increase in 15 years. With AI that density goes to 20, 30, 40, 50. It's an order of magnitude change.”

He thinks that new hotspots of AI capacity demanded by enterprises will “turbocharge” the existing model of hybrid cloud. The new hardware demands are tough for data center operators like CoreSite, but completely beyond regular IT managers working in 20-year-old data centers: “It is going to be impossible on their local infrastructure. It will require massive amounts of capital to retrofit their infrastructure - for something that is not their business. So I think AI will accelerate outsourcing.”

Enterprises will have to place demanding AI-related workloads into shared spaces, where they can use on-ramps to reach cloud providers for the actual inferencing. “That's what we're trying to prepare our platform for - and we're seeing early signs that that's how it works.”

So CoreSite will be doing some “retooling” he says: “It does require some custom fitting and some planning for those sort of use cases, as we build new data centers, like we are in Denver or New York.” He’s talking about adding flexibility to provide new cooling quicker than he can right now.

AI changes cooling

CoreSite has supported advanced use cases in the past: “We have done liquid cooling for, say, a university deploying high-performance computing or a government agency, but it's been on a one-off basis.”

Most of CoreSite’s data centers have water circulation, he says. “Even if it's air-cooled, those air-handling units get chilled by water anyway. Air is the mechanism that rejects heat from the server. We can tap those pipes and create a separate environment for a customer.”

That would be a one-off offering, though. “We design our facilities to cater to a broad range of use cases. If you design with that use case in mind only, you will spend an inordinate amount of capital.”

Higher density makes cooling more crucial, he says, because “losing cooling for a smaller period of time can have catastrophic consequences.” To avoid this, CoreSite’s cooling has redundancy: “You have N+1 or N+2. You have more pumps than you need, more chillers than you need, and more CRAHs, in a sealed water loop. The one thing you cannot afford to do at a data center is to lose your cooling.”

Overall, Font says CoreSite achieves PUE (power usage effectiveness) figures between 1.2 and 1.5, averaging at about 1.3. The figure depends partly on customers, so CoreSite runs a YouTube channel explaining issues like blanking panels to them.
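
For context, PUE is simply the ratio of the total energy a facility draws to the energy that reaches the IT equipment; the worked figures below are a generic illustration rather than CoreSite's own numbers:

$$\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}} \quad\Rightarrow\quad \text{at } \mathrm{PUE}=1.3,\ 10\,\mathrm{MW}\ \text{of IT load implies} \approx 13\,\mathrm{MW}\ \text{drawn overall}$$

In other words, roughly 3MW of that illustrative 13MW goes to cooling, power conversion, and other overheads rather than to servers.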

It also improves as CoreSite updates equipment in older sites like One Wilshire, replacing fixed-speed air handlers with units that have variable fan speeds. “That enables just in time cooling,” he says. “We are replacing old technology with new, and reducing the PUE in our older sites.”

CoreSite aims to procure renewable energy but again has to work with customers. “In Virginia, we have 100 percent renewable energy, but it's harder to do if you have multiple customers in the facility,” the CEO says. “At the end of the day, the utilities that we work with have their own mandates to keep becoming greener, and we work with them to ensure that they continue marching on that journey.”

And customers that have moved into CoreSite will have a lower footprint than in their old on-premise facilities. “In the early days, One Wilshire had a PUE of two-plus, but enterprises operate at 2.5 or 3.0.”

Happy company

On diversity and hiring, he says his senior leadership team is 45 percent female, and 25 percent from under-represented groups, and “more than half of our staff is on data center operations.

"About 37 or 40 percent of our data center operators are veterans, and something like 30 percent or 40 percent are under-represented minorities. I think our workforce looks more diverse than the country at large.”

He prefers to make change organically rather than mandating it and is pleased with the results: “You can come in as a tier one security guard, with very limited qualifications and just rise through the ranks. It is very encouraging to see that we can provide an environment for underrepresented minorities to come in at a lower level and then just grow within the company.”

Font writes a letter to every employee who reaches ten years with the company: “I can't tell you how many of those folks started as a security guard. And they still just love being at CoreSite.”

He likes to listen to the servers, but it seems he pays attention to the people as well. 

The Cooling Supplement

Keeping IT cool

Cryogenically-cooled chips

> Is cryogenic cooling the route to more efficient data center chips?

Hot water, cold water

> What’s the right temperature for water in liquid cooled systems?

Density dilemmas

> AI is making data center racks denser, presenting new cooling challenges for operators

Precision Liquid Cooling

Iceotope is reimagining data center cooling from the cloud to the edge.

Precision Liquid Cooling removes nearly 100% of the heat generated by the electronic components of a server through a precise delivery of dielectric fluid. This reduces energy use by up to 40% and water consumption by up to 100%. It allows for greater flexibility in designing IT solutions as there are no hotspots to slow down performance and no wasted physical space on unnecessary cooling infrastructure. Most importantly, it uses the same rack-based architecture as air cooled systems and simply fits to existing deployed infrastructure.

Get in touch to arrange a demo. +44

Contents

36. Cryogenically-cooled chips: a chilling proposition Is cryogenic cooling the route to more efficient data center chips?

42. Hot water, cold water What’s the right temperature for water in liquid cooled systems?

45. Density dilemmas

AI is making data center racks denser, presenting new cooling challenges for operators

Too cool for the data hall

The impact of AI on the data center over the last two years has been profound, and nowhere is this more apparent than in the realm of cooling.

Where once data center operators could rely on fairly basic air systems to chill their racks, demand for increasingly powerful servers packed with hardware ready to run AI workloads means more advanced cooling techniques are required.

This means liquid cooling, so long an industry buzzword, is finally starting to make an impact, and is likely to play a key role in the data centers of the future.

A question of density

How to meet the cooling needs of AI chips which are developing at breakneck speed is a question being grappled with by the industry’s biggest names.

The last 12 months have seen many data center operators release cooling systems specifically designed for high-density environments, with some catering for power densities of up to 300kW.
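
To give a rough sense of what such densities imply for liquid systems (the numbers below are generic assumptions for illustration, not figures from any operator quoted here), removing 300kW of heat with water and a 10°C temperature rise across the rack calls for a coolant flow of roughly:

$$\dot{m} = \frac{Q}{c_p\,\Delta T} = \frac{300\,\mathrm{kW}}{4.2\,\mathrm{kJ/(kg\cdot K)} \times 10\,\mathrm{K}} \approx 7\,\mathrm{kg/s} \approx 430\,\mathrm{L/min}$$

Halve the allowable temperature rise and the required flow doubles, which is why rack-level liquid loops are specified around tightly controlled supply and return temperatures.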

Data center companies are having to make educated guesses on what the market will look like in the coming years, so that their facilities are prepared to cope with future demand.

“We need to have the flexibility to know we can deal with a client requirement now and be able to deal with it in three years’ time,” CyrusOne tells us in our feature on high-density cooling systems.

Hot hot heat?

With water now playing a key role in many data center cooling systems, the question arises as to what temperature this liquid should flow at.

Traditionally, data centers have set water temperatures low, at around 42-45°F (6-7°C). But upping that temperature can have big benefits - every 1°C (1.8°F) increase in the temperature of chilled water can lead to a 2-3 percent savings in power consumption for a chiller.
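
As a rough, illustrative calculation, and assuming that 2-3 percent per degree figure holds approximately linearly (a simplification), raising the chilled water supply from around 7°C to 18°C would give:

$$\text{chiller saving} \approx 2.5\,\%/^{\circ}\mathrm{C} \times (18 - 7)\,^{\circ}\mathrm{C} \approx 27\,\%$$

That is, a warmer loop could plausibly cut chiller energy by roughly a quarter, before counting the additional free-cooling hours that higher water temperatures also unlock.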

Data center operators are often accused of being over-cautious when it comes to cooling, and as more liquid-based systems are introduced, it might be time to rethink temperatures.

Going sub-zero

For those who prefer their temperatures to be more extreme, cryogenic cooling is emerging as a radical solution.

Research has found that cooling CMOS chips - the transistor technology at the heart of most processors - to ultra-low temperatures can yield better performance and greater efficiency, as well as cutting leakage - the amount of electricity wasted during the operation of a device.

But issues remain around the cost and practicality of implementing cryogenic cooling in the average data hall.

In this issue, DCD tries to make sense of the technology’s fascinating potential.

Cryogenically-cooled chips: a chilling proposition

Is cryogenic cooling the route to more efficient data center chips?

Matthew Gooding Features Editor

Fans of science fiction will know that cryogenic freezing is a commonly used mode of transport for astronauts wishing to traverse the galaxy. From classic movies like 2001: A Space Odyssey and Alien to more modern tales such as Interstellar, sci-fi writers love nothing better than plunging their protagonists into deep freeze to allow them to travel millions of miles unscathed.

In the real world, you can’t freeze and unfreeze live humans (yet), but server chips are a different matter, and for data center operators, cryogenics could be moving out of the realm of science fiction and into that of science fact.

Research is emerging that suggests running complementary metal-oxide-semiconductor, or CMOS, chips at very low temperatures (the cryogenic temperature range is considered to be anything below 120 Kelvin, or -153°C) using liquid nitrogen cooling can lead to increased performance and power efficiency.

Bringing the technology out of the laboratory and into commercial environments will be a challenge, but as vendors seek new efficient ways to cool their increasingly powerful components, this novel approach could reap rewards.

Colder is quicker

CMOS technology plays a vital role in integrated circuits (ICs), such as processors, memory chips, and microcontrollers, as part of switching devices that help regulate the flow of current through the IC, thus controlling the state of its transistors.

“Chips are made up of transistors which are either switched on or off,” explains Rakshith Saligram, a graduate research assistant at the Georgia Institute of Technology’s School of Electrical and Computer Engineering. “Switching devices are used to apply the minimum voltage required by these transistors to change from on to off. The amount of voltage you need to apply as part of that switching action determines how efficient the device is.”

Saligram is an electrical engineer who formerly worked for Intel (“I would describe myself as a circuit designer,” he says) and is currently conducting research “exploring different devices and looking at ways to make circuits better.” While evaluating different technologies, he came across cryogenic CMOS.

Most commercially available silicon chips are graded to run from a minimum temperature of 233 Kelvin (-40°C), right up to a maximum 373 Kelvin (100°C). Transistors will switch at what Saligram describes as a “reasonable speed” while operating at room temperature, but performance picks up considerably as temperatures get lower. In a paper published in March 2024, Saligram and his two co-authors, Georgia Tech colleagues Arijit Raychowdhury and Suman Datta, took a 14-nanometer FinFET CMOS device, and optimized and tested it using a cryogenic probe station, focusing on how the transistors performed at temperatures ranging from 300 Kelvin (26.85°C) to four Kelvin (-269°C).

“The minimum voltage difference you need to apply in order to take a transistor from on to off at room temperature is around 60 to 70 millivolts (0.06V-0.07V) in the bulk of the devices,” Saligram says. “But when you go to cryogenic temperature, this voltage difference can be as low as 15-20 millivolts. That’s a 4× reduction in the voltage you need to apply, which is a big difference.”
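
Those figures track the textbook limit on a conventional transistor's subthreshold swing, which scales with absolute temperature (an idealized relation; real devices saturate above this floor at very low temperatures):

$$SS = n\,\frac{k_B T}{q}\ln 10 \;\approx\; 60\,\mathrm{mV/decade}\ \text{at}\ 300\,\mathrm{K}, \quad \text{falling to} \approx 15\,\mathrm{mV/decade}\ \text{at}\ 77\,\mathrm{K}\ (n \approx 1)$$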

While these are small values in absolute terms, the number of transistors on a single chip can run into the billions, so the power savings soon add up, something which is likely to be welcomed by operators at a time when many data centers are becoming constrained by the amount of available energy from the grid.

Saligram says the research also shows that power leakage drops at lower temperatures. “When you're running a workload on a data center, not all devices need to be on at the same time,” he explains. “There’s always switching activity going on, and when a component is not performing any action, it is generally switched off. But during that period a small amount of electricity is still being used.

“That’s wasted power and we want to minimize that waste. And if we take these devices down to cryogenic temperatures we see a 4× reduction in those kinds of currents.”

Lessons from quantum

While Saligram and his colleagues have been looking at how standard components can be optimized to perform at low temperatures, over in the UK work is ongoing on semiconductor IP specifically designed to operate in cryogenic conditions.

The snappily titled “Development of CryoCMOS to Enable the Next Generation of Scalable Quantum Computers” is backed by UK government innovation agency Innovate UK and led by low-power chip specialist sureCore, with support of a host of other organizations including chip design specialists AgileAnalog, SemiWise, and Synopsys, as well as Oxford Instruments and quantum computing companies Universal Quantum and SEEQC.

Quantum computing is a field where cryogenic temperatures are, by necessity, already in widespread use, with many types of early quantum machine requiring ultra-low temperatures to operate effectively.

SureCore and its partners are aiming to push further into this cold environment by incorporating more parts of the quantum computer inside the cryostat, the coldest part of the machine. “The big problem for quantum computers at the moment is that most of the control electronics get housed outside the cryostat,” says sureCore CEO Paul Wells. “You have a considerable amount of cabling coming out of the cryostat, and that not only introduces latency, but you’ve also potentially got thermal paths back into the cryostat. This has the effect of limiting the number of qubits.”

Qubits are the basic units of quantum information, and a rough measure of a machine’s power; the most advanced computer currently in operation has around 1,000. It is thought that quantum machines with hundreds of thousands, or even millions, of qubits will be required if the technology is to fulfill its potential and outperform classical computers, so more efficient hardware will be needed.

To help solve this issue, the consortium has come up with new timing and power models for chips designed to operate at cryogenic temperatures. “People who are developing quantum control chips can just pick up the new models as part of their work and drop them into existing processes - the rest of the chip design flow is unchanged,” says Wells, who hopes the designs can form the basis of new quantum control and measurement ASICs.

In May, the project taped out its first chip, which will be used to validate this IP. “Assuming that works ok, our end goal is to offer a portfolio of cryogenic chips that can operate down to four Kelvin,” Wells says.

Cryogenics in the data center

Despite the progress made on the project, sureCore’s Wells is skeptical that widespread use of cryogenic cooling technology will be seen in the data center outside of specialized quantum environments.

This is because, he says, the processes needed to develop dedicated cryogenic hardware will be costly and complex to set up for chip manufacturers. “They’ve got thousands of very clever engineers, so I’m sure if they wanted to do it, they could,” he says. “But it will come down to economics, and if they were to do this it would not be cheap or straightforward.”

Victor Moroz also has reservations about the technology’s commercial viability, but for different reasons. Moroz is a fellow at semiconductor design software company Synopsys, and has published several papers on the potential of cryogenic cooling of CMOS chips, the most recent of which he presented at last year’s VLSI Symposium. This event, run by the Institute of Electrical and Electronics Engineers, is one of the most prestigious and long-running conferences on electronics and circuit design, and Moroz says this reflects the high level of interest in the technology.

The findings of Moroz’s research are broadly in line with those of Saligram and his team at Georgia Tech, but he says that while there is plenty of enthusiasm for the potential of cryogenic CMOS within the academic community, restrictions being placed on the technology by the US government are likely to hold its adoption back.

“I would say within the research community there is a lot of excitement [about cryogenic CMOS],” he says. “But once you get into industry there’s a huge pushback because once you associate your technology with cryogenics you can get put on the US export control list. All the foundries are ‘allergic’ to this technology because of that.”

Indeed, cryogenic cooling is one of many technologies to have been put in the spotlight by the US trade war with China, which has seen exports of a host of semiconductor-related products to Beijing either banned or heavily restricted.

Cryogenic equipment is covered by restrictions on quantum technology, and though these are not as prescriptive as some of the controls on artificial intelligence chips, they still present a barrier to development, Moroz says.

“From a power perspective, cryogenic cooling totally makes sense, but this export control thing is a big problem,” he adds. “There is also the issue of cost, because in my research I did not do any cost analysis. But if the cost is ok then it simply becomes a matter of infrastructure and creating enough hardware.”

Saligram strikes a more optimistic note when it comes to adoption, pointing to an announcement from IBM last December that it had developed a CMOS transistor optimized to work at extremely low temperatures. Big Blue used nanosheets, a new generation of technology which is set to replace FinFET and enable greater miniaturization of transistors (“Nanosheet device architecture enables us to fit 50 billion transistors in a space roughly the size of a fingernail,” Ruqiang Bao, a researcher at IBM, said at the time). The device performed twice as efficiently at 77 Kelvin (-196°C) as it did at room temperature, according to the IBM team.

US defense research agency DARPA, which has previously funded programs that led to the development of many of the foundational technologies used by businesses and consumers today, has also taken an interest, starting a research program called Low Temperature Logic Technology, through which some of Saligram’s research was conducted.

Data center operators themselves are also looking at how cryogenic temperatures can be utilized at their facilities, Saligram says. “At Georgia Tech we’re working with one of the leading data center companies, which is interested in pursuing this technology as part of their applications,” he says, declining to name the business involved. “They are really interested, and there have been multiple other instances where companies have experimented with low temperatures - Microsoft famously dunked an entire server onto the ocean bed to see how that would play out and saw some performance improvements.

“This DARPA project includes several industry players, including IBM, so there is definitely interest there and a desire to take this technology to the next level.

“We need to get more traction from the guys who build the chips, like Intel and AMD, who need to take this up so that they can take advantage of all the benefits it brings on a circuit and system level. The final stage is to work with mechanical engineers on the deployability of this technology, to ensure data centers are able to handle this. There is some work to do there.”

Indeed, installing a cryogenic cooling system is a costly business, particularly at a time when many data center companies are spending considerable sums switching from traditional air cooling systems to new and more efficient liquid cooling setups.

“Bringing the temperature down [to cryogenic levels] does involve a lot of cooling costs,” Saligram says. “But our argument is that data centers currently invest a lot in power and cooling, but don’t get anything back in terms of improved performance - the money just goes on keeping things running.

“If operators invest a little bit more [to move to lower temperatures] they may be able to get some performance back.”

He adds that it is not even necessary to aim for some of the more extreme temperatures investigated as part of the research project. “You don’t need to go all the way to that temperature, even going to 173 Kelvin (-100°C) enables you to get better performance, depending on the type of hardware you’re using and the workloads it is running,” Saligram says.

Elsewhere, there is work to do, he says, on handling the large amounts of liquid nitrogen which is used as coolant in cryogenic systems. “Liquid nitrogen production itself is energy-intensive,” he says. “And we need to look at ways we can effectively recycle the liquid nitrogen if it is going to be used, and how we can build infrastructure that is leak-proof and achieves the connectivity at node and rack level, as well as switch level.

“These are big questions that need to be answered if this technology is going to be deployed, but there are a lot of opportunities to make things happen, and as an engineer, it’s a very interesting area to be involved in.” 

Cooling the AI Revolution in Data Centers

Artificial Intelligence (AI) is transforming the digital infrastructure industry. With unprecedented compute demands, data center operators face the challenge of balancing efficiency, sustainability, and total cost of ownership (TCO). As the tension between power, sustainability, and data center growth escalates globally, a strategic shift towards innovative solutions is necessary to address these challenges effectively.

Liquid cooling is rapidly emerging as a key enabling technology for AI workloads. It offers a revolutionary approach to dissipating heat from high compute power and denser hardware configurations.

This makes the technology essential for optimizing performance, energy efficiency, and hardware reliability in AI-driven data center environments. From its ability to address the thermal challenges of AI to its potential to enhance overall data center efficiency and sustainability, liquid cooling’s time has come as a cornerstone technology for data centers.

By efficiently dissipating heat through circulating a dielectric coolant directly over the hottest components, liquid cooling ensures optimal operating temperatures for AI systems while reducing carbon emissions and overall energy consumption. Furthermore, as next-generation CPUs and GPUs with TDP requirements of up to 1500W and beyond become commonplace, data center operators will be looking more urgently to future-proof their infrastructure investments.

As data center operators begin to embrace liquid cooling, it’s important to note that not all liquid cooling technology is the same. Cold plate and immersion technologies – each available in single-phase and two-phase processes – are well known within the industry.

Cold plate cooling, also known as direct-to-chip cooling, entails transferring fluid directly to specific IT components requiring cooling. While this approach excels in delivering peak cooling performance at the chip level, it still relies on auxiliary air cooling and may not fully address long-term sustainability objectives.

Conversely, tank immersion presents a more sustainable alternative, enabling the recapture and reuse of nearly 100 percent of the heat and potentially eliminating the need for fans in data centers. However, its implementation often necessitates new facility designs and structural requirements, making utilization in existing brownfield data center spaces challenging.

Of all the cooling technologies available, Precision Liquid Cooling is the simplest and most efficient cooling technology on the market today. Offering the best of both direct-to-chip and tank immersion, Precision Liquid Cooling delivers a small amount of dielectric coolant precisely targeted to remove heat from the hottest components of the server, ensuring maximum efficiency and reliability.

This removes nearly 100 percent of the heat generated across the entire IT stack, reduces energy use by up to 40 percent, and cuts water consumption by up to 100 percent. Precision Liquid Cooling offers unparalleled sustainability with significant cost savings and zero compromise on your data center performance.

Data center operators are navigating in real-time the intricacies of AI and its impact on data center infrastructure. But one thing is clear: liquid cooling – and Precision Liquid Cooling in particular – is the clear choice for meeting these demands. This will require collaborative efforts between data center design and build teams, IT specialists, and executive leadership to seamlessly integrate liquid cooling into data center environments.

By doing so and aligning liquid cooling strategies to broader business objectives, organizations can accelerate innovation, improve cost-effectiveness, and gain a competitive edge in the AI-driven era.

Hot water, cold water

What’s the right temperature for water in liquid cooled systems?

Dan Swinhoe Senior Editor

Water is a core part of many data center cooling systems. But as densities - and therefore temperatures - increase, questions need to be asked about the right temperatures of the water cooling these systems.

As the chips running servers become denser and more powerful, operators face questions about whether to lower the temperature of the water going to those chips - and how much extra cooling the water systems themselves will need as a result.

The move to liquid

Historically, data centers have been kept at around 20°C to 22°C, but groups such as the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) have been advising for years that organizations set thermostats higher, and data center temperatures have been creeping up. Facebook parent company Meta raised its temperatures to 29.4°C, Google went up to 26.6°C, and Microsoft has published guidelines suggesting temperatures could go up to 27°C.

Typical legacy data centers have chilled water set points between 42-45°F (6-7°C). Facilities that have gone through optimization of their cooling systems have successfully raised their chilled water temperatures to 50°F (10°C) or higher. According to Johnson Controls, every 1°C (1.8°F) increase in the temperature of chilled water yields a saving of approximately two to three percent in power consumption for a typical chiller.
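As a rough illustration of how that rule of thumb compounds (a sketch only; the baseline figure below is an assumption for illustration, not a number from Johnson Controls or the case study above):

```python
# Rough sketch of the "two to three percent per degree C" chiller rule of
# thumb quoted above. The baseline energy figure is an illustrative
# assumption, not measured data.

def chiller_energy_savings(baseline_kwh: float, delta_c: float,
                           savings_per_degree: float = 0.025) -> float:
    """Estimate annual chiller energy saved by raising the chilled water
    set point by delta_c degrees Celsius, assuming a constant fractional
    saving per degree (midpoint of the two to three percent range)."""
    remaining = baseline_kwh * (1 - savings_per_degree) ** delta_c
    return baseline_kwh - remaining

# Hypothetical facility: 10 GWh/year of chiller energy, set point raised
# from 7C to 10C - the same 3C step as the optimized sites described above.
baseline = 10_000_000  # kWh per year, assumed
saved = chiller_energy_savings(baseline, delta_c=3)
print(f"Estimated saving: {saved:,.0f} kWh/year "
      f"({saved / baseline:.1%} of chiller energy)")
```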

A recent DCD Broadcast analyzed a case study about how a service provider in the UK achieved a £1.5m ($1.9m) annual saving by increasing the temperature of the data hall, which only translated into a 0.3 percent increase of hardware failure risk.

“Cooling has always been the second-largest consumer of energy in the data center after the IT load, and this is mostly energy used to cool whatever the heat transfer medium is - be it air or liquid. So the less energy is spent there, the better the overall efficiency of the facility,” says Vlad-Gabriel Anghel, director of solutions engineering at DCD’s training unit DCD>Academy.

The picture is changing as the industry moves towards predominantly liquid-cooled data centers, where a liquid such as water circulates directly over the heat-producing components and removes heat. Water has a much higher thermal capacity than air, meaning data centers can support higher-density chips and use less energy to cool them.

Though air-based cooling options exist for racks drawing more than 20kW, the drawbacks start to outweigh the benefits, leading operators to switch to liquid systems. For years, 30kW was seen as the top-end of high-density deployments, and air was good enough. With the advent of generative AI and what classes as ‘high-density’ now potentially reaching more than 100kW, sticking with air cooling alone is no longer an option.

“This idea that you're going to run liquid cooling at 122-140°F (50-60°C) water is probably going to be highly unlikely, especially in the training loads”
>>Andrew Bradner, Schneider Electric

The fluid running through liquid-cooled systems is much warmer than that found in chilled water systems, but the industry is yet to standardize on the best approach. At the same time, chips are becoming increasingly dense, and the temperature of the water being supplied to these systems is coming down.

Data center operators have long been accused of being over-cautious by overcooling their air-cooled data centers to protect the IT hardware and avoid even the merest risk of overheating the data halls. Showing too much trepidation on liquid cooling risks the same issue.

Higher water temperatures mean less energy used towards cooling – great for PUE – but risk running chips closer to their thermal limit. So, how hot is too hot?

What is the right temperature for water?

ASHRAE introduced a paper on liquid cooling back in 2011. The paper set out broad classes - W1, W2, W3, W4, and W5 - based on the cooling temperature. Originally those classes corresponded to maximum water temperatures of 17°C, 27°C, 32°C, and 45°C, with W5 covering anything over 45°C. When the work was updated in 2022, new temperature refinements were required, including a temperature of 40°C, and ASHRAE moved to new class definitions: W17, W27, W32, W40, W45, and W+.
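To make the class naming concrete, here is a minimal sketch of that mapping in code, using the facility water temperatures as described above; the full 2022 ASHRAE guidance defines the classes in more detail than this:

```python
# Maximum facility water supply temperature implied by each ASHRAE liquid
# cooling class name as described above (the trailing number is the
# temperature in degrees C). W+ covers anything warmer than 45C.
ASHRAE_CLASSES = {"W17": 17, "W27": 27, "W32": 32, "W40": 40, "W45": 45}

def liquid_cooling_class(supply_temp_c: float) -> str:
    """Return the tightest class whose limit covers the given facility
    water supply temperature."""
    for name, limit in sorted(ASHRAE_CLASSES.items(), key=lambda kv: kv[1]):
        if supply_temp_c <= limit:
            return name
    return "W+"

print(liquid_cooling_class(32))  # W32
print(liquid_cooling_class(50))  # W+
```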

DCD>Academy’s Anghel says there is no optimal temperature for water in liquid-cooled systems, because the best temperature will vary depending on the set-up of the facility.

“This will depend entirely on the type of liquid cooling used as well as the environment the liquid cooling system is in, the type of chip and its TDP as well as the utilization of the chip,” he says. “A rear-door air-assisted liquid cooling solution will have different temperatures to a closed loop direct to chip cooling system.”

According to Uptime, water temperatures in liquid-cooled systems today seem to be converging around 32°C (89.6°F) for facility water – what is described as a “good balance” between facility efficiency, cooling capacity, and support for a wide range of DLC systems. The company notes, however, this often requires additional heat rejection infrastructure either in the form of water evaporation or mechanical cooling for higher-density chips.

“Many operators have already opted for conservative water temperatures as they upgrade their facilities to incorporate a blend of air and liquid-cooled IT. Others will install DLC systems that are not connected to a water supply but are air-cooled using fans and large radiators,” the company said in a recent report.

The analyst firm notes current high-end processors (up to 350W thermal design power) and accelerators (up to 700W on some GPUs) can be “effectively” cooled even at high liquid coolant temperatures, allowing the facility water supply for the Direct Liquid Cooling system to be running as high as 104°F (40°C), and even up to 113°F (45°C).

Andrew Bradner, general manager for Schneider Electric’s cooling business, tells DCD, however, that after chips reach 500W, supply water temperatures have to come down to 85°F (30°C). And, for 700W, the temperature may have to come down to as low as 80°F (27°C).

“This idea that you're going to run liquid cooling at 122-140°F (50-60°C) water is probably going to be highly unlikely, especially in the training loads,” Bradner says.

And in the same way air-cooled data centers have generally been run colder out of caution, customers using liquid cooling deployments are being equally prudent.

As part of its AI-focused redesign, Meta has settled on 85°F (30°C) for the water it supplies to the hardware, and hopes to get the temperature more widely adopted through the Open Compute Project.

Anecdotally, however, DCD has heard from operators who expected customers to go with the ASHRAE definition of W27 (27°C/80°F output water), only to see them instead opt for the W17 (17°C/62°F) option.

“There's a lot of discussion at times that the water temperatures are going to go to 104-122°F (40-50°C),” says Bradner. “But as the power densities of the GPUs start to get over 500W to 700W each, the case temperatures that they're starting to see are requiring that that water comes down lower.”

“Once you've hit 700W, the water temperature has to come down to about 80°F (27°C). And we have customers that are asking for between 68-75°F (20-24°C) supply water.”
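Bradner's figures can be read as a rough lookup from chip power to the warmest facility water an operator can comfortably supply. The sketch below follows his more conservative numbers as quoted in this piece; the sub-500W value is an assumption, and real limits depend on the cold plate design, flow rate, and the chip's case temperature specification:

```python
def max_supply_water_c(chip_power_w: float) -> float:
    """Warmest facility supply water (degrees C) for a given chip power,
    loosely following the figures Andrew Bradner cites above.
    Illustrative only."""
    if chip_power_w < 500:    # today's 300-400W parts tolerate far warmer water
        return 40.0           # assumption: upper end of the W40 class
    if chip_power_w < 700:    # around 500W, supply comes down to roughly 30C
        return 30.0
    if chip_power_w <= 1000:  # at 700W and beyond, roughly 27C
        return 27.0
    return 24.0               # some customers already ask for 20-24C supply

for watts in (400, 500, 700, 1200):
    print(f"{watts}W -> supply water up to ~{max_supply_water_c(watts):.0f}C")
```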

Free cooling vs assisted cooling

When you need 68-75°F (20-24°C) supply water, some form of assisted cooling is required in many cases – especially in hotter climates.

As an example, Bradner said Schneider recently performed an assessment with a partner around free-cooling – which relies on pulling in naturally cool air or water instead of mechanical refrigeration – at higher densities.

As long as chip power stayed at 300W, 95 percent of their sites could get away without any type of mechanical assist to provide the water temperatures needed to run a liquid system.

However, once chips went over 500W, only five percent of their sites could support free-cooling, and 95 percent of their sites needed some sort of compressor mechanical-assisted solution.

“We're seeing many of our largest customers are talking about water temperatures that are more in the 80-86°F (27-30°C) range, not 104-122°F (40-50°C)”
>>Andrew Bradner, Schneider Electric

“So I think that's the challenge,” says Bradner. “As the chips get more powerful and more power hungry, the internal dissipation that needs to happen to the chip case housing requires colder water to be able to still support reliable cooling of those chips.”

But he notes that for water temperatures around 80-86°F (27-30°C), there are still large parts of the year that you're going to get free cooling, and operators may only need assisted cooling for the hottest summer months.

“Right now, you can run the 300W-400W chips that are available with far higher water temperatures," Bradner says.

“But that's going to change dramatically once these more powerful GPUs become readily available and deployed at scale. We're seeing many of our largest customers are talking about water temperatures that are more in the 80-86°F (27-30°C) range, not 104-122°F (40-50°C).”

DCD>Academy’s Anghel warns that if water temperatures are set too low, operators risk overcooling chips and ultimately wasting energy and repeating mistakes long made with air cooling.

“Any watt spent cooling water is another watt removed from the IT load,” he says. “The same efficiency mistakes are being made regardless of the cooling medium.” 

Density dilemmas

AI is making data center racks denser, presenting new cooling challenges for operators

"Generative AI is the defining technology of our time,” declared never-knowingly understated Nvidia CEO Jensen Huang at his company’s GTC developer conference back in March. “Blackwell GPUs are the engine to power this new industrial revolution.”

Huang was speaking at the launch of Blackwell, the latest GPU architecture designed by Nvidia to train and run artificial intelligence systems. As is customary, it offers hefty performance and efficiency bumps when compared to its predecessor, the Hopper series.

But packing more and more transistors (the B100 GPU features 208 billion, compared to 80 billion on the previous generation H100) onto a single chip comes at a cost. Nvidia says the first devices in the Blackwell series require 700W-1,200W of power each, up to 40 percent more than the H100, and feature direct liquid-cooling capability for the first time.
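To put those per-device figures into rack terms, a back-of-the-envelope sketch with an entirely hypothetical rack configuration (not an Nvidia or OEM reference design):

```python
# Back-of-the-envelope rack heat load for GPUs in the 700W-1,200W range
# quoted above. The rack configuration and overhead factor are assumptions
# for illustration only.

gpus_per_rack = 32            # assumed
gpu_power_w = (700, 1_200)    # per-device range quoted for early Blackwell parts
overhead = 1.3                # assumed allowance for CPUs, NICs, fans, PSU losses

low, high = (gpus_per_rack * p * overhead / 1_000 for p in gpu_power_w)
print(f"Rack load: roughly {low:.0f} kW to {high:.0f} kW")
# -> roughly 29 kW to 50 kW, before any storage or networking elsewhere in the hall
```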

For data center operators, growing demand for high-density servers featuring AI-focused processors from Nvidia and other vendors presents a challenge: how to keep cabinets full of GPUs cool as their power requirements increase and they generate more and more heat.

For many companies, this means redesigning their systems to cater for AI-ready racks, utilizing emerging liquid cooling techniques, which are proving more effective than traditional air cooling. Whether generative AI truly turns out to be the defining technology of the current era, or a triumph of hype over substance, remains to be seen, but it is certainly redefining how data centers approach the cooling conundrum.

Hot chips

While much talk in the industry is of racks up to 100kW in density, the reality in most data centers is somewhat lighter. JLL’s 2024 Data Center Global Outlook report, released in January, shows the average density across all data center racks to be 12.8kW, rising to 36.1kW when just taking into account hyperscale facilities, where most AI workloads run.

However, the report expects this to rise to an average of 17.2kW across all data centers by 2027, hitting 48.7kW for hyperscale facilities.
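Those projections work out to roughly 10 percent compound annual growth in both cases, assuming three years between the figures:

```python
# Implied compound annual growth rate of average rack density from the JLL
# figures quoted above, assuming the 2024 and 2027 numbers are three years apart.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

print(f"All data centers: {cagr(12.8, 17.2, 3):.1%} per year")  # ~10.3%
print(f"Hyperscale:       {cagr(36.1, 48.7, 3):.1%} per year")  # ~10.5%
```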

With technology evolving at a rapid rate, data center operators are having to grapple with the challenge of being prepared for a high-density future, while also meeting the day-to-day needs of their customers. “We have to be all things to all people,” says Tom Kingham, senior director for design at CyrusOne.

“We’ve seen the recent announcements from Nvidia around the Blackwell chipsets, which are liquid-cooled and have incredible densities. So we know that’s coming, and the challenge for us is that the data centers we’re designing now will not go into operation for another three years.

“We need to have the flexibility to know we can deal with a client requirement now and be able to deal with it in three years’ time when the thing goes live. And because we want a 10-15 year lease term, we would like it to at least be somewhat relevant by the end of that term as well.”

CyrusOne, which focuses on providing data center services to the hyperscale market and is backed by KKR and Global Infrastructure Partners, has come up with a solution it calls Intelliscale, a modular approach to building data centers. The company says it will be able to accommodate racks up to a hefty 300kW in density.

“We're moving the cooling closer to where the heat is, so there's less of a buffer and, if something were to go wrong, it's going to go much more wrong now than it ever has before”
>>Tom Kingham, CyrusOne

To cope with the cooling demands of such racks, the system can be kitted out with a mixture of liquid cooling technologies, encompassing direct-to-chip liquid cooling (DLC), immersion cooling, and rear door heat exchange, as well as traditional air cooling. This is a “natural progression” of what the company was doing already, Kingham says. “We've been using modular electrical plant rooms, packaged chillers, and cooling products that form 1.5MW blocks,” he explains. “We put as many of those blocks together as we need to get to the capacity of the data center.

“All that we've done with Intelliscale is add in some additional components that provide us the flexibility for the data center to be liquid or air-cooled, or to use things like immersion cooling.”

Kingham says CyrusOne is currently hedging its bets when it comes to which types of cooling technology will be favored by customers and equipment vendors. “At the moment, we have to offer everything,” he says. “But it feels like DLC is the preferred method at this point. With that comes an element of a hybrid set-up where the chip will be cooled by liquid but other components, like the power supply unit, still need some air.”

A liquid future

Other vendors are also turning to liquid cooling in a bid to cope with demand for high density. Equinix announced in December it was making liquid-cooled solutions available in more than 100 of its data centers around the world, without specifying the kind of densities it was able to handle.

Stack Infrastructure launched a high-density offering in January, saying it could support up to 30kW per rack with traditional air cooling, up to 50kW per rack with rear door heat exchangers, and up to 100kW per rack with DLC. The company said it intends to support 300kW or higher per rack with immersion cooling in future.

CyrusOne's Intelliscale

Aligned is promoting a similar product, known as DeltaFlow, which it says can support high-density compute requirements and supercomputers, and can cool densities up to 300kW per rack.

Digital Realty’s high-density solution hit the market last year, supporting up to 70kW. In May, the company revealed it was more than doubling this to 150kW, and adding DLC into the mix. This is now available at 170 of the company’s data centers, representing just under half of its portfolio.

“We already have really strong demand for AI and HPC workloads across a variety of business segments,” says Scott Mills, SVP for engineering and customer solutions at Digital Realty. “That has ranged from sectors like minerals and mining, to pharma and financial services. At cabinet level, we’re seeing densities going up from 30 cabinets to 70, and within those 70 cabinets you might have 500kW or you might have 10MW.”

Mills says that, for Digital’s platform to be able to cope with all these different densities effectively, it’s important to be able to “plug in” new cooling systems like DLC. He says the company intends to continue expanding its offering: “We’re going to keep doing this, and the plan is to make the platform broader as new technologies become available,” he says.

Up on the roof

Making new cooling technologies available is one thing, but fitting them into existing data centers is a different challenge entirely.

CyrusOne’s Kingham says the company is currently grappling with the challenge of retrofitting its Intelliscale units into a data center it designed only a year ago. “We're already seeing customer demand to convert that into a liquid-cooled facility,” he says. “Principally, it's not very difficult, because we were already running a closed loop water system into the building, but the densities are so much higher that our challenge is getting enough heat rejection equipment into the building.”

Heat rejection systems are typically placed on the roof of the data center, which is all well and good unless that roof space is shrinking. “Our initial hypothesis was that we could make the data centers smaller because the racks are getting denser, therefore we could make the rooms smaller,” Kingham says.

“But that means we don't have enough roof space to put all the heat rejection plant. And we don't necessarily have enough space in the building to coordinate all the power distribution for those racks, because now the power distribution is much larger. So we're actually finding that the rooms aren't necessarily getting any smaller but the focus is now on trying to efficiently get heat out of the building.”

Elsewhere, Kingham says making liquid cooling cost-effective (“there’s so much more equipment in this sort of design”) is also a challenge for his team, while new hazards are also emerging. “We're moving the cooling closer to where the heat is, so there's less of a buffer and, if something were to go wrong, it's going to go much more wrong now than it ever has before,” Kingham adds. “And on the other hand, if the water doesn't get to the chip, it's going to cook quickly.”

From an operational standpoint, the presence of liquid cooling is changing the way data centers operate, Digital Realty’s Mills says. “There are new service level agreements and new things we have to monitor and manage,” he says. “Something like leak detection has to become core to the offering, whereas before you didn't really talk about it because it was heresy to even think about introducing liquid into the data center environment.

“Our global operations team is developing new methods and procedures around monitoring, both for our teams and also so that we can supply alerts directly for our customers. We’ll get better at doing that, and also the industry as a whole will improve as these things become standardized.”

“From my point of view, all the discussions now are around liquid cooling”
>>Tom Kingham, CyrusOne

The liquid cooling journey

Mills says many of Digital Realty’s customers are also on a journey of understanding about how liquid cooling is relevant to their data center workloads. “We have a group of customers who have experience in doing this, who will come to us and know what they want and how they want it,” he explains. “Then it’s just up to us to work with them to make that happen.

“There’s a second group who says ‘we are learning and we want your expertise to help us,’ and that’s where we have to provide more support through our global programs.”

It’s a similar story at CyrusOne. “We’re all learning together,” Kingham says. “It’s easy for us, as the operator, to get frustrated because we have to set things in stone now that will go live in 2027.

"But when you look at how much change has happened in the last year, even with the products announced by Nvidia alone, what is the picture going to look like in three years? It could be totally different.

“I think the real challenge is going to come in the next tech refresh cycle, in the early 2030s. What we’re designing for now may be completely obsolete by then.”

Regardless of what the future holds, Kingham believes the advent of AI has accelerated adoption of liquid cooling, and that the change is a permanent one.

“We'll see how the market trends over time, but, from my point of view, all the discussions now are around liquid cooling,” he says. “There's no sense that this is just a buzzword anymore, and it feels like pretty much everything we do from now on is going to be down this path.”

He adds: “I wonder how long it will take us before we just pivot to accepting that, right from the very start, all our projects are going to be liquid-cooled data centers.” 

Precision Liquid Cooling

Iceotope is reimagining data center cooling from the cloud to the edge.

Precision Liquid Cooling removes nearly 100% of the heat generated by the electronic components of a server through a precise delivery of dielectric fluid. This reduces energy use by up to 40% and water consumption by up to 100%. It allows for greater flexibility in designing IT solutions as there are no hotspots to slow down performance and no wasted physical space on unnecessary cooling infrastructure. Most importantly, it uses the same rack-based architecture as air cooled systems and simply fits to existing deployed infrastructure.

Get in touch to arrange a demo. +44 114

RISC averse?

Can the RISC-V open-source chip architecture rival Arm and x86 in the data center?

YouTuber bitluni likes building strange things. Head over to his channel and you can watch footage of him constructing a multi-colored LED wall made of ping pong balls and a DIY sonar scanner.

Recently, his 250,000 followers have also seen him create his own supercomputer. To do this, he stitched together 16 “superclusters,” each containing 16 RISC-V-based microcontrollers from Chinese vendor WCH, into one “megacluster.”

The resulting 256-core computer is capable of registering a 14.7GHz combined single-core clock rate (“not amazing but not too shabby either,” according to bitluni). On the face of it, this is not particularly super - Frontier, the world’s fastest supercomputer, has more than eight million processor cores - but the machine bitluni put together in his workshop demonstrates what might be possible on a grander scale using components based on the open-source chip architecture RISC-V.
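For what that 'combined' figure means per core, the arithmetic is straightforward:

```python
# The "combined single-core clock rate" is simply the per-core clock summed
# across every core in the cluster.
cores = 16 * 16                  # 16 superclusters of 16 microcontrollers
combined_ghz = 14.7              # figure quoted by bitluni
per_core_mhz = combined_ghz * 1_000 / cores
print(f"{cores} cores at roughly {per_core_mhz:.0f} MHz each")  # ~57 MHz
```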

The open nature of RISC-V means there are no intellectual property (IP) licensing fees to be paid by vendors, so components can come cheap; the WCH microcontrollers cost less than ten cents each, a quarter of the price of comparable devices based on other architectures, meaning bitluni was able to procure the parts needed for his ‘megacluster’ for less than $30.

But beyond providing a low-cost chip option for bedroom projects, can RISC-V have an impact in the data center?

Semiconductors based on Intel’s x86 architecture still rule the roost alongside a growing number of Arm-based devices being developed in-house by the major cloud providers.

But some vendors think so, and are launching products which they hope will become integral to the servers of the future. They have their work cut out to make an impact.

RISC management

RISC-V is a product of research carried out in the Parallel Computing Lab at UC Berkeley in California. First released in 2010, it is a modular instruction set architecture (ISA), a group of rules that govern how a piece of hardware interacts with software. As an open standard, RISC-V allows developers to build whatever they desire on top of the core ISA.

Non-profit organization RISC-V International manages the ISA, and has formed a community of just under 4,000 members including Google, Intel, and Nvidia. In 2022, RISC-V International claimed there were 10 billion chips based on its ISA in circulation around the world, and predicted that number would climb to 25 billion by 2027.

Many of these chips are basic microcontrollers featuring in low-cost Edge devices and embedded systems for products such as wireless headphones, rather than advanced CPUs. But Mark Himelstein, CTO of RISC-V International, says RISC-V is thinking bigger. “Everything we’re doing in RISC-V is driven by the data center,” he says.

Last year the organization introduced profiles, packages containing a base ISA coupled with extensions that work well together when building a specific type of chip. “We can give these to the compiler folks and operating system folks to say ‘target this,’” Himelstein says. “The first profile that came out is called RVA, which is for general-purpose computing applications like HPC and big, honking AI/ML workloads. It’s not really targeted at the earbud folks, though they can use it.”

Himelstein says this is to help accelerate RISC-V chip development and reflects a growing interest in server chips based on the ISA. “In the post-ChatGPT world, people are getting much more aggressive with how they integrate AI and machine learning into every application and solution,” he says. “You need good hardware to be able to go off and do that, and we’re seeing an increasing number of RISC-V server chips that can power the next generation of ‘pizza boxes.’”

Hitting the market

Ventana Micro Systems is one of the companies targeting the server market with RISC-V hardware. Last year, the vendor announced a second version of its Veyron processor, featuring 192 cores built in a chiplet design and ready for production on TSMC’s four-nanometer process.

The company was founded in 2018 by engineers who had previously worked on developing 64-bit processors on the Arm architecture and saw an opportunity to do something similarly transformative with RISC-V high-performance semiconductors.

“If you look at RISC-V today, it’s basically a bunch of microcontrollers,” says Travis Lanier, Ventana’s head of product. “In fact, I would expect RISC-V to completely take over the microcontroller market.

“But people will look at that and say ‘it’s not a serious ISA for high performance’. So we have to prove that by moving RISC-V along, and I think all the features are now there to compete with the other ISAs, it’s about putting them into a CPU.”

In terms of performance, Ventana says the Veyron V2 can outpace AMD’s Genoa and Bergamo Epyc server processors, though given that the V2 won’t be deployed until 2025, this comparison is likely to be somewhat dated. “We’re finishing up the design and expect the first deployments in data centers to be early next year,” Lanier says. “Those will be limited deployments, and we’ll look to scale up from there.”

“Everything we’re doing in RISC-V is driven by the data center”
>>Mark Himelstein, RISC-V International

Ventana plans to take advantage of the flexibility of RISC-V’s open architecture to give it the edge over its rivals. The Veyron V2 supports domain-specific acceleration (DSA), enabling customers to add bespoke accelerators. DSA could help data center operators boost performance for specific workloads to meet customer requirements, Lanier says.


SiFive was an early proponent of RISC-V high-performance chips, and in 2022 was valued at $2.5bn following a $175 million funding round. The company’s hardware can apparently be found in Google data centers, where its chips help manage AI workloads, and it was awarded a $50m contract by NASA to provide CPUs for the US space agency’s High-Performance Spaceflight Computer. However, its progress seems to have stalled recently, and last November it was reported that it was laying off 20 percent of its workforce, including most of its high-performance processor design team.

"I think all the features are now there to compete with the other ISAs, it’s about putting them into a CPU”
>>Travis Lanier, Ventana

Another startup, Tenstorrent, is building its own RISC-V CPU, as well as an AI accelerator which it hopes will be able to compete with Nvidia’s all-conquering AI GPUs. It is headed up by Jim Keller, a former lead architect at Intel who is credited with an instrumental role in the design of Apple’s A4 and A5 processors, as well as Tesla's custom self-driving car silicon.

Big names in tech are also getting in on the act. Samsung is setting up an R&D lab in Silicon Valley dedicated to RISC-V chip development, and Alibaba, which has long held an interest in RISC-V, claimed in March that it was on track to launch a new advanced server chip based on the ISA at some point this year.

Alibaba already has a RISC-V server chip on the market in the form of the C910, which was made available on French data center company Scaleway’s cloud servers in March. Scaleway claimed that this was the first deployment of RISC-V servers in the cloud, and added that it expects the architecture to become dominant in the market as countries “seek to regain sovereignty over semiconductor production.”

Sébastien Luttringer, R&D director at Scaleway, said at the time: "The launch of RISC-V servers is a concrete and direct statement by Scaleway to boost an ecosystem where technological sovereignty is open to all, from the lowest level upwards. This bold, visionary initiative in an emerging market opens up new prospects for all players.”

Trust the process

As commercial RISC-V chips gather momentum, the architecture is also underpinning efforts to develop open-source silicon which could find its way into data centers.

Earlier this year, the OpenTitan coalition claimed a milestone when it taped out what it said was the first open-source silicon project to reach commercial availability.

The chip, which features a RISC-V processor core, is a silicon root of trust - a device that provides security at the hardware level by detecting changes made by cyberattackers to a machine's firmware and disabling the affected hardware.

OpenTitan was founded by Google in 2018 with the aim of developing an open-source root-of-trust chip, and the project has come to fruition with the help of lowRISC, a community interest company dedicated to the development of open silicon, and a host of other partners from industry and academia.

“Open source software is in every project now,” says Gavin Ferris, CEO of lowRISC. “It’s a clear success, but the challenge for us was how to do that with hardware, because there are huge advantages to sharing foundational IP.

“It makes economic sense to share and amortize the cost of foundation IP blocks so that you can focus on the stuff that is new and innovative and you can cut time to market.”

LowRISC spun out of Cambridge University’s computer lab in 2014, and was looking for a first use case to drive open-source hardware when it happened upon OpenTitan. Following years of work, a chip based on IP developed by the project was made earlier this year, and the coalition says it is the first open-source semiconductor to have been built with commercial-grade design verification and top-level testing.

The IP is now available to use, and Ferris says he is confident it will be adopted by chip makers as part of their devices. “It’s commoditizing something that isn’t the ‘secret sauce,’ and I think there’s a growing recognition that security is not an area where people want to differentiate,” he says. “There’s a lot of engineering in any SoC, and it doesn’t make sense to do it all yourself, the smart thing to do is leverage open-source and concentrate on the product features higher up the stack that you can sell.

“That doesn’t mean you don’t do proprietary things with [open-source], it just means there are a whole layer of tools you can just access and use. We’ve seen this movie before with software and we know how it ends, we just need to get to a good baseline to get the machine started, because once it starts, it doesn’t stop. That’s what we’ve got with OpenTitan.”

Challenges ahead

While enthusiasm for RISC-V within the open-source community is high, whether this is shared in the wider data center market is another matter.

Server chips have long been the domain of Intel and its x86 architecture, though recent years have seen AMD, which designs its own x86 CPUs, eat into that domination. Mercury Research’s latest report on the CPU market, released in May, shows AMD now has a 23 percent share of the server market thanks to the success of its Epyc range. This is up from 18 percent a year ago.

Intel and AMD also have to contend with the rise of Arm devices in the data center. Previous attempts to introduce the UK chip designer’s low-power architecture into data halls have not been a success, but improvements to its technology, combined with the desire of the cloud hyperscalers to develop their own hardware in-house, have driven a paradigm shift over the last three years.

Amazon offers Arm-based Graviton chips in its data centers, while Microsoft launched its Arm-based Cobalt 100 CPU and its Maia AI accelerator before Christmas. Apple also has its own Arm-based consumer silicon, having ditched Intel in 2022, while Nvidia has an Arm CPU, Grace, which can be deployed in conjunction with its GPUs or used as a standalone product.

Elsewhere, vendors such as Ampere are building dedicated Arm-based data center chips which are also gaining traction with the hyperscalers.

All this leaves limited space for RISC-V CPUs in the data center, argues chip industry analyst Dylan Patel of SemiAnalysis. “Arm is doing really well at locking in the big guys by providing them a lot of value,” he says.

“They’re not just providing CPU cores, but also the network-on-chip that connects the CPU to the memory controllers and PCIe controllers. They’re even doing things like physically laying out the transistors, so they’re doing a lot to maintain and grow their market.”

"I don’t think RISC-V as the central CPU in the data center is going to happen any time soon”
>>Dylan Patel, SemiAnalysis

Because of this, Patel says, “I don’t think RISC-V as the central CPU in the data center is going to happen any time soon.”

Patel also points out that the high-profile problems at SiFive, and the delays Ventana has experienced in getting a product to market (the original version of the Veyron chip never made it to production), have not helped adoption. “I think the RISC-V hype peaked in 2022,” he says.

“Since then SiFive has laid off its high-performance CPU core team because they weren’t getting the traction. I think they’ll claim they’re still making one, but it’s at a much slower cadence.

“Ventana never got their first chip out and have had a few issues. They're not down and out but there’s definitely been a slowdown, and if you put those two together it shows the difficulty of finding a solid commercial case for RISC-V.”

He believes there is a supporting role RISC-V chips can play in the data center through devices such as the OpenTitan root of trust chip.

“If you’re making a new accelerator, it will need to have standard instructions,” he says. “That stuff is being standardized by RISC-V, and then you can go out and attach everything else yourself. That’s something a handful of folks are doing with custom accelerators for data centers, whether they’re related to storage or AI workloads.”

Meta is the most interesting example of this, Patel says. Facebook’s parent company has designed its own AI accelerators by linking together RISC-V CPU cores built using IP from another vendor, Andes Technology, and intends to continue using the architecture in its future silicon efforts. Patel says this is “probably the biggest positive” for those hoping for greater RISC-V adoption in data center servers.

At the RISC-V summit in November, Prahlad Venkatapuram, senior director of engineering at Meta, said: “We’ve identified that RISC-V is the way to go for us moving forward for all the products we have in the roadmap. That includes not just next-generation video transcoders but also next-generation inference accelerators and training chips.”

If Meta’s engineers need any tips on what to do next, they could always put in a call to bitluni. 

Inside 375 Pearl Street

The heart of lower Manhattan

The view across the East River is serene. On a cold March day, skies blue and crisp, the Brooklyn Bridge stretches across its waters and, on the Manhattan side, comes ashore adjacent to 375 Pearl Street.

375 Pearl Street, also known as the Verizon Building, One Brooklyn Bridge Plaza or, since the building’s acquisition by Sabey Data Centers, Intergate.Manhattan, is in fact a data center.

“It’s iconic,” says Dan Meltzer, managing director of sales at Sabey Data Centers, noting how the building is often a key feature in media shots of Manhattan. “Movie and TV producers shooting downtown focus on our building, because that view is so iconic - the Brooklyn Bridge, the Gehry building, and 375 Pearl Street.”

The building itself has faced some criticism over the years, architecturally speaking. In 2012, The Daily Telegraph named 375 Pearl Street the 20th ugliest building in the world. It has since undergone significant renovations, including replacing the top 15 floors with windows instead of limestone walls. More recent sentiment analysis conducted by Buildworld found that, despite these efforts, public opinion has not particularly changed - though the tower remains undeniably striking.

Of course, how the building looks on the outside is more or less irrelevant given what lies within.

Today, Intergate.Manhattan is a 32-story data center and office building spanning a total of 1.1 million square feet (~102,195 sqm).

The data center element of the building resides on the sixth, seventh, and twelfth floors.

It has a total power capacity of 18MW, but the three floors currently dedicated to data centers only use 5.7MW of that. The thirteenth floor offers a powered shell, and the eleventh floor is currently under construction to also accommodate data center customers.

The skyscraper was originally built in 1975 for the New York Telephone Company. Servers were installed after Sabey acquired 375 Pearl Street in 2011.

“In those days, it was completely a telecommunications central office and provided all the communications for downtown New York,” says Meltzer.

New York Telephone became NYNEX, and was later bought by Bell Atlantic, which rebranded as Verizon in 2000. This was when the tower picked up its affectionate name of the Verizon Building.

Georgia Butler Reporter
Photography by: Georgia Butler

It was during this era that the 9/11 terror attacks happened at the Twin Towers, barely a 20-minute walk away from Pearl Street.

Verizon and the building itself played a significant role in recovering communications for the area during the incident. In a recollection of the event, a Verizon blog post notes the company lost three members of staff in the attacks: Donna Bowen, Derrick Washington, and Leonard White.

In the days that followed, thousands of Verizon employees risked their own safety (the air in southern Manhattan was heavily polluted) by rushing to reestablish connectivity for residents and the New York Stock Exchange - the hub of America’s economy. A year later, the building was decked out with the Verizon logo - a badge that remains emblazoned on the site to this day.

The years that followed marked a seismic shift for the telecommunications industry. “The business changed,” explains Meltzer. “This was during the move to cable and different services, and Verizon started to sell some of its assets, and decided to sell 375 Pearl Street.”

The building was purchased by Taconic in September 2007 - which also owned 111 8th Avenue, now known as the Google Building - with plans to turn it into a residential tower, says Meltzer.

“But after the 2008 financial crash, they handed the keys back to the bank. One of our guys was looking at real estate and got a lead on the building, and Sabey was in a bidding war with Carlos Slim, the Mexican billionaire, and we won. The rest is history.”

Meltzer says that he joined Sabey “expressly with the idea of leasing in that building,” adding that despite having since been promoted within the company, 375 Pearl Street is still “[his] baby.”

The original plan - or “thesis” as Meltzer puts it - was for the entire skyscraper to be a data center.

“It was going to be ‘the place’ for financial services companies. We were talking to the likes of Bloomberg, for example,” he recalls.

The thesis was, in theory, a sound one. The building’s location in Manhattan’s financial district means it is ideally placed to deliver digital services to nearby companies at low latency.

“Unfortunately, Mother Nature had other plans,” Meltzer says. “Hurricane Sandy hit in 2012 and suddenly, our phone stopped ringing.

“People were telling us they had a mandate to move everything out of New York City. Things changed, and we had to get creative. Our solution was to create that office block.”

The office space is mostly taken up by the City of New York agencies, which have 20-year leases with the building - though one floor is home to the late architect Rafael Viñoly’s firm.

Viñoly, who died last year, leased the building in 2018, a mere two years after renovations kicked off to replace the facade with glass. The architect requested yet more changes including carving out a chunk of his leased 31st floor to make a personal balcony. From afar, the balcony can just about be seen.

The process of taking the Verizon Building and making it into the data center and office complex that it is today was a massive undertaking, Meltzer says. “We put in over $300 million just on the data center side,” he explains. “If you include the office block, it was probably half a billion dollars.”

While requiring a massive amount of investment, the building was in many ways ideal for the purpose. The floor loading varies between 150 and 400 pounds per square foot, and the ceiling heights are between 14ft and 23ft. Running down the entire height of the building is a shaft where power, cooling, and cabling can be housed without taking up valuable real estate.

On the difficulties in transforming the facility, Meltzer says: “Data centers are also horizontal in nature, and at one point we were the tallest data center building on the planet - we possibly still are. We had to crane in generators and chillers. It was crazy.”

Now operational, the data center runs at a PUE of 1.3 to 1.35, and has an affordable high-tension service agreement with power company Con Ed for power delivered at 13,200V, which is then stepped down to 480V. When the local climate is cool enough, the site can use free air cooling.
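As a rough illustration of what a PUE in that range means for the utility feed, here is a sketch that treats the 5.7MW quoted earlier as the IT load; that is a simplification, and the electrical assumptions are noted in the comments:

```python
# Rough sketch relating IT load, PUE, and the incoming 13.2kV service.
# Only the PUE range, the 5.7MW figure, and the 13,200V/480V service are
# from the article; everything else is an assumption for illustration.

it_load_mw = 5.7          # load on the three data center floors, per the article
pue_low, pue_high = 1.3, 1.35

for pue in (pue_low, pue_high):
    total_mw = it_load_mw * pue
    # Current drawn from the 13.2kV high-tension feed at that load
    # (three-phase P = sqrt(3) * V * I, unity power factor assumed).
    amps = total_mw * 1e6 / (3 ** 0.5 * 13_200)
    print(f"PUE {pue}: ~{total_mw:.1f} MW total, ~{amps:,.0f} A at 13.2kV")
```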

On the connectivity side, customers have a choice of 17 carriers - more than the average data center, and a perk of being located in a major metropolis.

With the work now done, Sabey counts the City of New York agencies among its office customers, though only the Department of Sanitation for its data center services.

“One of my frustrations is that we thought the city would be great data center tenants,” Meltzer says. “The city of New York has 150-170 different agencies, and a big budget - the whole city has a budget of something like $100 billion.

“But we have just one data center client from the city, which is the Department of Sanitation. The reason for that is that during the winter months when there are snow storms, they need a very secure and reliable backup data center.”

When it comes to security, the data center benefits from being within the perimeter of the New York City Police Department, which means, as Meltzer puts it, it has around $50 million worth of security watching the building.

Overall, Meltzer calculates that the data center is currently 92 percent leased. But Sabey is hungry to find new customers, particularly from the financial services sector.

“The stock exchange itself is kind of like a symbol. All the trading actually happens in New Jersey,” he says.

“The good news for us is that the major hedge funds and some of the financial institutions still are in NYC and want to be close to their servers. So we have seen success in that vertical.”

Of course, those hedge funds and financial institutions are predictably secretive with details about their data center footprint, so no specific companies were named.

Meltzer is also hopeful that the artificial intelligence boom will benefit Sabey and its customers. In terms of density, 375 Pearl Street can currently accommodate up to around 20 or 23kW per rack, but the facility is exploring ways to expand this.

For one of its customers, Sabey is currently testing the feasibility of using liquid cooling in the facility.

“One of our clients already has a pilot for liquid cooling running in our building," he says, "and we are looking at potentially taking one of our floors and making it a mixed air-cooled and liquid-cooled data center floor.”

“We also think the infrastructure floors that are currently really dedicated to our equipment might lend themselves to liquid cooling. The second floor, third floor, and the 11th floor could all make my thesis of us being an AI hub more interesting,” adds Meltzer.

“For example, an investment banking firm could have their AI right at 375 Pearl Street and have almost zero latency right to their headquarters.”

Meltzer is confident Sabey is on track to see a return on its considerable investment in the tower.

“We are on the right path for the building, and it will generate income,” he says. “Were we to ever sell the building, we would probably get a very nice return because you probably couldn’t do this building today with all the zoning, and the fuel and everything. It's very unique.”

Whatever the future holds, 375 Pearl Street will remain an important fixture on the New York skyline, and those who dislike the building’s aesthetics will likely continue to grumble.

Perhaps they should heed the words of Viñoly, who before his death spoke at length to Metropolis about the importance of function over form. “I’m very interested in unglamorousness!” he said. “People don’t understand how important this kind of thing is.” 

Grundfos Data Center Solutions

Keep your cool

Efficient water solutions for effective data flow

Meet your efficiency and redundancy goals

Smart pumps offering up to IE5 efficiency, redundant solutions that meet up to 2N+1 redundancy – whether you’re planning a Tier 1 or Tier 4 data center, you can rely on Grundfos to keep your data center servers cool with a 75-year history of innovation and sustainability at the core of our corporate strategy.

Your benefits:

• High-efficiency cooling solutions saving water and energy

• Redundancy meeting up to Tier 4 requirements

• End-to-end partnership process

"Today is another week" - Running Microsoft Cloud during the AI boom

Nuclear power, robot dreams, and the sinking of Project Natick

In the great rush for land, power, and chips that has marked the artificial intelligence age, none have spent more or deployed more than Microsoft.

“I don't think any industry has gone through what we're going through right now,” Noelle Walsh, the head of the company’s Cloud Operations + Innovation (CO+I) division, tells DCD in a wide-ranging discussion about the company’s approach to the current moment.

The uncomfortable truth

We’re talking days after Microsoft’s own sustainability report admitted an uncomfortable truth: The rush to deploy large AI clusters has come at a cost, with emissions increasing alongside the expanded deployments.

Overall Scope 1-3 emissions were up 29 percent from 2020, with increased data center construction blamed for the majority of the jump.

The company’s climate goals, however, remain the same, Walsh counters. “They’ve not gone out of the window: Achieving 100 percent renewable PPAs by 2025, and then at the grid level 24/7 by 2030.”

As for power, Walsh believes that those renewable goals will still be met - “we were always planning to overshoot, and now I just have to plan to overshoot again.”

Traditional renewable PPAs from wind, solar, and hydro will form much of that, but Walsh hopes that another power source could help green its operations.

“The wind doesn't always blow and the sun doesn't always shine - and batteries aren't there yet,” she says. “In the US, we're pushing for more nuclear. There’s some apprehension from the utility to go there, but it would be great.”

Earlier this year, DCD exclusively reported that Microsoft had hired several nuclear experts to head a new division to investigate the technology’s potential in powering data centers.

"[We hired them] to evaluate," Walsh says. "So some of the nuclear power companies are going to shut down their nuclear power plants, and now they're going: ‘Wow, we can retrofit them and make them brand new and continue to operate.' And when we make guarantees and offtakes, it can help them get better loans, etc."

That would mean the company helping traditional nuclear power plants remain operational. Earlier this year, the 800MW Palisades plant announced that it would reopen thanks to a $1.5 billion loan from the Department of Energy - it is not known if it has any anchor tenants.

Rival Amazon Web Services spent $650m this March to acquire a data center at a nuclear power station in Pennsylvania, which it plans to expand into a 15-building campus.

"And then the [Small Modular Reactors (SMRs)] are very interesting and attractive in the US," Walsh continues. "We are working with different technology providers, and then different equipment providers and the utility grid.

"The first one will be expensive. We will invest on the first and then assume there'll be productivity gains. If we invest up front, let us ride the wave."

Walsh says that Microsoft has "not struck those deals yet," but is "working with a number of players." Likely at the front is Oklo, an SMR developer that has signed PPA deals with Equinix and Wyoming Hyperscale. It also happens to be backed by Sam Altman, of Microsoft-funded OpenAI fame.

"The first ChatGPT was trained at a supercomputer built in Ohio. ChatGPT: It's not made in California. It's made in Ohio"

Further out on the blue-sky end is fusion power, with Microsoft signing a PPA with Helion Energy (also backed by Altman) for 2028. "I've been in a fusion lab," Walsh says. "Plasma at a million degrees, traveling a million miles, and generates net more energy than you need to put into it... the physics works, so I think it'll come through some time."

The investment in power generation is partially one of climate commitments, but also one of sheer necessity - the number of places that potential gigawatt campuses can exist in the US or globally is painfully small.

"The question is, do we need to do more behind the meter? I would rather put it on the grid. But those are the discussions," says Walsh.

Where possible, the company will "move to the power and bring the fiber in, if we're going big."

The drive for AI

Training clusters are growing ever larger, consuming ever more power - but don't, necessarily, need that low latency. "With commercial cloud, we will be more city-based for the proximity to banks, customers, etc. [Training] does give me more of an opportunity to go outside the big cities."

America's rural expanses could see more enormous data centers sprout up to train ever-larger models. But just how large could they grow?

"I ask myself the same question," Walsh says. "The more compute, the more capability these models have. I'm looking at it from an infrastructure perspective - you would be stunned at what we are working on and what is possible.”

She won't comment on Stargate, a rumored $100bn, 5GW mega data center for OpenAI - although other Microsoft employees have downplayed the story.

"For me, it's great not to have to build 100 small things if I can build 20 big ones," Walsh says, sidestepping the matter. "We get economies of scale and efficiency if you go to the right places, and you set up right."

She goes back to the value of bringing (a few) jobs to rural communities through large AI clusters. "The first ChatGPT was trained at a supercomputer built in Ohio. ChatGPT: It's not made in California. It's made in Ohio."

The distinction between AI training data centers and commercial cloud is a challenging one, requiring Microsoft and the wider industry to go out on a limb, building an entirely new type of facility in non-traditional locations for a speculative and fast-changing technology that has yet to prove sustainable business models.

“It's a massive investment, billions of dollars of investment,” Walsh says. “We can't all go wild, ‘cause you're talking country GDP-type of investments. But we can't miss the wave.”

Where possible, the company is looking at whether the training data centers can be used for inferencing, or could be “converted to commercial cloud in the future,” but if it “had to always have that fungibility, we wouldn't grow fast enough.”

She adds: “But we do aim to strike that balance.”

Finding the right approach comes at a time of extreme crunch, when the pressure to deploy has never been greater. “First there was Covid and then it was the Ukraine war,” she says. “Now I've accepted there is no going back. We talk about 'Today is another week,' that's what it feels like.”

That pressure could help lead to positive changes, Walsh hopes. “I'd like to think maybe the urgency now will encourage innovation and change.

“The way I look at it, we want to evolve society through AI. So we're not going to do that by taking a step back in sustainability. I think there’s the opportunity to shift it the other way.” 

ROBOTS, WOOD, AND UNDERWATER DATA CENTERS

Despite the AI boom, one place Microsoft won't be going is underwater. Back in 2013, the company started 'Project Natick' to explore submerged data centers, ultimately deploying a test system off the coast of Scotland in 2018 (profiled on the cover of DCD Magazine #38).

"I'm not building subsea data centers anywhere in the world," Walsh confirms after the project went quiet several years ago. "My team worked on it, and it worked. We learned a lot about operations below sea level and vibration and impacts on the server. So we'll apply those learnings to other cases."

One such learning could be that - for obvious reasons - the underwater and nitrogen-filled Natick did not have human engineers fiddling with its racks. Perhaps, DCD suggests, robots will be used in land-based sites instead of humans.

“We're looking at robotics more from the perspective that some of these new servers will be very heavy,” Walsh says. “How can we automate that versus having people push things around?” Similarly, with break/fix (a method of providing IT support to customers), “how can we get better on diagnostics?”

DCD saw some of the company’s robots during our tour of Microsoft’s Project Silica lab (see cover, p15), and also understands that Microsoft is building a robot for cleaning servers that gently pulls them out of racks.

“We are learning from other industries on robotics, but we're also very cognizant that we need people. I don't want people worried about their jobs.”

On the construction site, the embodied carbon of data centers is also forcing a rethink. "I think what we can control directly, we're gonna hit it, and then indirectly working with our suppliers on the rest," she says.

Back in 2021, Microsoft funded a study that proposed using earth, algae, and hemp to build data centers, but Walsh downplayed the concept. "We're maybe looking at more practical cases," she says.

"We're applying timber in data center design" - DCD understands the company plans to announce a wooden data center in the months to come - "or recycled products in concrete, and then green steel," where steel is made with green energy. 

DATA CENTER SOLUTIONS

Your project partner – from design to delivery

A TOTAL SOLUTION FOR DATA CENTERS

From data center design to fit-out, our total solutions package brings together innovative hardware to build your data centers with software that helps you design them, and a range of services that meet your needs – all from a single source.

MODULAR SUPPORT SYSTEMS

Our modular support systems are designed to be used and re-used in as many different configurations as you need. Installation is typically quicker, easier, and safer. You can add pipe supports, cable trays, duct runs, electrical outlets, bus bar supports and more to match your original BIM models. Or adapt your supports as your needs change.

FIRESTOP SOLUTIONS

We help you maintain uptime with our pre-formed firestop products, which are designed to reduce the risk of fires spreading and limit the costs of reconstruction.

END-TO-END PROJECT SUPPORT

Our end-to-end approach enables you to build data centers faster and get them up and running quicker. Through engineering, procurement, and construction, we help you to increase your speed and productivity on-site and minimize your material and labor costs.

Building for the long-term: van Rooyen’s masterplan

DCD gets the inside Tract with Grant van Rooyen

Grant van Rooyen, best known as the founder and former CEO of Cologix, is back with a new venture, called Tract.

Rather than building data centers for hyperscalers, the new company aims to build ‘master-planned data center parks,’ providing all the required infrastructure and permits that allow wholesalers and hyperscalers to build their own data centers on shovel-ready land.

While Tract isn’t the first wholesaler-to-the-wholesalers the data center industry has seen, the company’s ambition is noteworthy: In less than 12 months, the company has officially announced plans for almost 5GW of capacity across three locations – with more in the pipeline.

Photo credit: Tract

New venture, new masterplan

Van Rooyen began his career at Level 3 Communications, leaving in 2009 after an 11-year stint. He then founded US data center operator Cologix alongside investment firm Columbia Capital (ColCap), leading the company as chief executive officer for eight years.

Van Rooyen left Cologix in 2018, a year after Stonepeak acquired a majority stake in the firm, with Bill Fathers taking the helm. That year Stonepeak also acquired Communications and Realty Investments, a van Rooyen-led company that owned data centers in Columbus, Jacksonville, and New Jersey. Though he hasn’t been in the spotlight at the helm of an operator, van Rooyen has been busy.

“We have a long history of successfully executing in the space, and we generally tend to do it behind the curtain,” van Rooyen tells DCD. “After I left Cologix, I certainly didn't sit on the beach.”

In 2022, the van Rooyen Group, a family investment group led by Grant, acquired 49 percent of Israeli data center operator MedOne alongside US private investment firm Berkshire Partners. That same year, Digital Realty acquired a majority stake in South African operator Teraco from a consortium of investors, including Berkshire Partners, Permira, van Rooyen Group, and Columbia Capital. All three still have an interest in the firm.

News of Tract first surfaced in 2022 after flyers were sent to investors. At the time, the newly formed company claimed to have identified 40,000 acres worth of potential sites and was seeking $1 billion to fund its lofty ambitions.

At the time, ColCap and the van Rooyens were said to be involved, with the latter investing $50m to get Tract going. Berkshire Partners is also a named investor.

Where Cologix, MedOne, and Teraco develop data centers, this latest venture is targeting massive parks on which hyperscalers and their wholesalers can develop their own campuses.

The Colorado-based company acquires, zones, entitles, and develops master-planned data center parks for data center end users - whether cloud providers or wholesalers.

“Our primary objective is to deliver speed and certainty for our customers,” says van Rooyen. “Regardless of who's doing the vertical development, what they value the most is the ability to move with speed and certainty; predictable developments where the roadblocks have all been removed.”

A new data center reality

“It became fairly obvious around three years ago that the feedstock for all of this infrastructure is getting very scarce,” van Rooyen says. “What was 20MW is now 100MW; 100MW is 250MW; 250MW is a gigawatt, and we'll continue to see that ratchet up.

“That has profound implications for where you can locate this infrastructure and the size of the feedstock and magnitude that's required to feed it. So we came to the conclusion that this was an area we wanted to explore.”

He says that customers have behaved over the years as though this were a spot market - that when you need capacity somewhere, you can easily procure it. Now, he warns, those days are gone.

“We're focused on making sure we own tomorrow's building block. That means we take a very long-term view; in many cases 10-plus years.”

Tract won’t be building its own data centers, instead focusing on ‘horizontal development.’ That will include acquiring land, taking it through rezoning and entitlements so that data centers are allowed by right on the land, and ensuring energy is available.

“We push dirt, we build roads, we grade pads, we deliver wet infrastructure, both water, and sewer, and will master-plan the site to the point where somebody can come in, in whole or in part, and procure these positions and begin building as quickly as they can submit a building permit and pull that permit,” van Rooyen says.

"We don't just go and buy what's for sale. These locations are unique in their magnitude"

Reno, Eagle Mountain, Richmond

Tract says it owns or is under contract on more than 20,000 acres across the United States, which are in various stages of rezoning, design, or horizontal construction. Van Rooyen says the company has around 50 people working to take its projects through the permitting and development process.

The company made a splash with its first official announcement: a 2,200-acre data center park outside Reno, Nevada. The company acquired the land inside the Tahoe-Reno Industrial Center (TRIC) in Storey County and has commitments from NV Energy to deliver more than 2GW of power, beginning in 2026.

Tract reportedly acquired the land from Blockchain LLC, which had planned a 5,000-acre blockchain-powered smart city/innovation zone that would house 35,000 people. Blockchain acquired the land in 2018, but the plans were dropped around 2021.

Ground was broken on the first 810MW phase of the Reno development – now known as the Peru Shelf Technology Park – in May 2024. The company has also acquired another 510 acres within the TRIC currently in the planning stages.

It has also alluded to another 1,500-acre, 1.2GW campus nearby known as the South Valley Technology Park.

Then came Eagle Mountain, Utah. The company has acquired just under 670 acres south of Salt Lake City and is working with Rocky Mountain Power to deliver more than 400MW by 2028.

That campus is identified as ‘Project Tripletail’ on the Economic Development Corporation of Utah’s website, which suggests a potential investment of up to $7bn.

Finally, May 2024 saw the company officially announce plans for a 2.4GW data center park outside Richmond, Virginia. The Hanover Technology Park will span 1,200 acres of land, with bridging power expected on the site by 2026.

Planning documents suggest up to 46 buildings may be developed on the site, located on the south line of Hickory Hill Road (State Route 646) at the intersection with Old Ridge Road (State Route 738). The company reportedly paid $33m for the undeveloped land.

There are more announcements to come. Van Rooyen says Tract is operating in ten markets – with multiple campuses in many locations.

“We haven't announced many of those markets and we won't until we've facilitated the certainty that we require before we'll make that public. We will not announce the project until it's fully entitled, zoned, and owned,” he says.

He notes they are a combination of existing hubs – Phoenix, Arizona, and the wider Virginia market – as well as more up-and-coming markets such as Reno. While van Rooyen says brownfield sites have merit, the “lion’s share” of Tract’s current developments are greenfield sites.

For now, van Rooyen says Tract’s first investment vehicle is focused on the US. On expanding abroad in future, he points to his experiences overseas over the years and adds that he “knows the strategy can travel.”

Stumbles in Phoenix

Bigger and bigger data center campuses are garnering more attention from the public than ever before, much of it negative. It’s becoming increasingly common to see pushback on large-scale projects from residents and local politicians, especially in markets with sizeable data center footprints.

With such sizable proposals, van Rooyen is clear that outreach and engagement are cornerstones of the company’s ambition to deliver long-term certainty around its planned campuses.

“These developments need to work for the communities in which they exist. And we take this seriously at Tract,” he says. “I think for too long, the industry has assumed that the data centers are welcome everywhere and belong everywhere.”

“That may have been okay when we were at 5, 10, 20MW scale. But gigawatt campuses can't just live anywhere and everywhere. We don't just go and buy what's for sale. These locations are unique in their magnitude and they need to be located in places where ultimately you can work productively with cities and counties, and the communities. That doesn't mean everywhere.”

“Over the years, our industry hasn't done itself favors by operating behind the curtain,” he adds. “We're very clear from the start, who we are, what our intended uses are, and who our ultimate customers are likely to be. That transparency is not just overdue, it's fundamentally necessary.”

Tract’s pitch for long-term planning means cities and counties can avoid a lot of ‘ad hoc’ development - what van Rooyen describes as ‘20MW here, 50MW there,’ in a disordered manner that might upset local residents.

“As these areas evolve and clustering intensifies, many participants in these communities look at them and say, ‘How did that happen?’ The solution, we believe, is master planning; let's sit down productively with these communities and engage with them on where they want data centers to develop at scale, and answer that over a 10-plus year period.”

While zoning and entitlements for the site were approved unanimously by both the Hanover County Planning Commission and Board of Supervisors, the Virginia campus only finally gained approval after several months of delays by county officials.

“Our Virginia project has been two years in the making. And that's not two years of checking in once a quarter. That's pick and shovel work every day for two years, from a group of people, not one or two people,” says van Rooyen.

"As these areas evolve, many in these communities look and say, ‘how did that happen?’ Let's sit down with these communities and engage with them on where they want data centers to develop at scale"

Elsewhere, however, Tract hasn’t been so lucky. In April, the company pulled an application to develop a large campus in the Buckeye area of Phoenix.

The company had been plotting a $14bn master-planned data center complex across 1,000 acres in the Buckeye area of Maricopa County, to the southwest of Phoenix.

Known as Project Range, the development was due to span nearly 30 buildings totaling 5.6 million square feet on land along Yuma Road between Jackrabbit Trail and Perryville Road. Buildings were set to range from 149,000 square feet to 260,000 square feet each.

Plans for the project were first submitted in late 2023, and work on the project was set to start in 2025, continuing over the next 15 to 18 years.

Tract withdrew the application from Maricopa County’s planning and development queue after opposition from local residents and other stakeholders.

The project received pushback from both Goodyear and Buckeye city staff because of its "incompatibility" with the designated land uses for the site along with concerns around building heights and noise. The site is currently designated for residential uses and is surrounded by neighborhoods.

Tract had already submitted revised applications reducing the proposed heights of the data centers and increasing the setbacks, before withdrawing the application altogether.

“While we worked productively with the city of Buckeye and Maricopa County, ultimately we came to the conclusion with our partners in the community and in the city that it was not the right place to locate a data center campus of this magnitude,” van Rooyen says of the issue. While that project is done, the company isn’t giving up on Phoenix.

“We withdrew that application, but we certainly haven't withdrawn our appetite for the market and we continue to work very productively with the city of Buckeye. Watch this space.

“We have other projects in that city, and we are advancing those very productively. We very rarely go into a market with only one solution. In some of these markets, we will ultimately have multiple solutions. Reno is a perfect example.”

When DCD suggests it’s rare for a developer to withdraw an application completely, van Rooyen replies that forcing an outcome is never productive.

“We will always be tenacious, but we're reasonable,” he suggests. “It's a massive investment, and it belongs in the right location. We're prepared to be patient to find those locations in partnership with our city and county partners, and the people that live in those communities.”

No leg up

Tract has yet to transact on any of its portfolio, but the company is in no rush.

“We're very focused on assembling the portfolio, and we take a very long view to the path to monetization of the portfolio,” van Rooyen says. “But we certainly are a recipient of a lot of enthusiasm from our prospective customers.”

As well as Cologix and Teraco, ColCap’s digital infrastructure investments include Boston’s DeepEdge, while Berkshire Partners also has investments in Edge firm Vapor IO as well as MedOne.

When asked if his and Tract’s primary investors’ connections and interests in data center developers would be advantageous in securing customers, van Rooyen is dismissive.

“We don't rely on that, or any special leg up,” he replies. “The needs are clear, and our job is to make sure that our developments are thoughtfully planned at scale. If we do a good job at that, we think we'll have many, many choices in terms of who will want to develop on our campuses.

“We're focused on playing for tomorrow's scale. And we feel like we're just getting started.”

POWERING GIGAWATT CAMPUSES

You can’t talk about large data center campuses without looking at power.

Van Rooyen says the fact that Tract is often talking about 10-year timelines means energy companies are receptive to what the company is planning and more easily able to accommodate those needs.

“We spend a lot of our time engaging with our energy partners,” he says.

“These are challenging projects and transformative loads to contemplate positioning their infrastructure to deliver around.

“Those conversations have to be thoughtful. They have to be engaged productively.

"Nobody can just wave the magic wand and deliver this magnitude of electrons without long-term planning.”

The company is working to ensure its sites have renewable power options.

In early 2024, Tract signed an energy partnership with solar farm developer Silicon Ranch, a company specializing in solar energy and battery storage, to pair renewable projects with Tract’s data center sites.

Silicon Ranch is working on site acquisition and interconnection processes for utility-scale solar and battery projects – exceeding 500MW – to directly support data centers on Tract’s campuses in Nevada and Utah.

Van Rooyen says Tract has “many more” renewable energy partnerships that the company hasn’t announced yet. 

Stand by for the next generation of generators

The most important power source is the one you hope never to use

As data center operators are forced to focus on the primary power constraints that must be overcome to meet the demand for new data center facilities, there is another type of power that cannot be overlooked. Emergency standby generators, or gensets, are the life-support systems of the data center, ready to kick in to keep the lights on and the data flowing in the event of a power grid failure.

IGSA Power is a Mexico-based manufacturer of emergency standby generators, founded in 1970, which works in partnership with a range of blue-chip companies to produce customized backup power solutions for the data center industry. DCD had an opportunity to chat with CEO Santiago Paredes, who can teach us a lot about the generator market – where it’s been, where it’s going, and how to find the right solution for you.

There has been substantial change in the market over the past five years. Paredes looks back at how current demands have evolved, emphasizing that backup power is more important than ever: “As industries expand and modernize, their power needs to grow accordingly, and clients are now seeking gensets with higher capacities to ensure they can meet their increasing power demands reliably. With the critical nature of uninterruptible power supplies, clients prioritize reliability more than ever, opting for larger gensets to ensure robust emergency power systems that can withstand fluctuations and outages without compromising operations.”

Technological advances have made these higher capacities possible, and newer units often offer the increased power required, without the need for an additional footprint in increasingly space-constrained environments.

The environment itself is a consideration, as Paredes explains: “With stricter EPA (US Environmental Protection Agency) regulations and emission standards being enforced globally, clients seek gensets to comply with these regulations without compromising performance, equipped with advanced emission control technologies and alternative fuel options.”

At the heart of IGSA’s approach are its partnerships with a range of named brands including Baudouin, Mitsubishi, Volvo Penta, John Deere, and Perkins. Paredes talks about the value that these collaborations bring to IGSA’s portfolio:

“We partner with brands globally recognized for their commitment to quality, reliability, and performance. By incorporating their engines into our genset solutions, we ensure that our products meet the highest standards of excellence, providing customers with peace of mind regarding the durability and longevity of their backup power systems.”

These partnerships bring a range of benefits to customers, from technical knowledge in design and manufacturing to ensuring clients receive “exceptional parts and service support”. It also makes IGSA Power more agile:

“Leveraging engines from multiple renowned brands offers production flexibility, and allows us to customize genset solutions to meet the specific requirements of each project. Whether adapting to various building loads, environmental conditions, or integration with existing infrastructure, our partnerships with these brands enable us to tailor solutions that precisely match our customers' needs.”

For IGSA Power and its customers, designing gensets capable of maximizing power output for each liter of fuel used is the key to delivering the performance required by modern workloads, whilst simultaneously reducing operational costs and environmental impact. These gensets are all offered in a modular form that meets National Fire Protection Association (NFPA) 110 standards. The generators are designed to start up and transfer 100 percent of the building's electrical load in under 10 seconds.

Paredes says: “These modular engine designs streamline service and maintenance processes, as well as allow our team to optimize stock and parts inventory management. This allows for greater efficiency in serving multiple genset models, reducing downtime, and ensuring optimal performance over the generator's lifecycle.”

As a welcome side-effect to this approach, IGSA Power has improved its production process, allowing it to offer significantly shorter lead times to its customers.

“IGSA Power has optimized its production processes to streamline the manufacturing and assembly of the gensets packages. By leveraging advanced production techniques and efficient workflow management, IGSA Power can produce generators in as little as 40 weeks from order placement to delivery.”

This target, offered in partnership with Baudouin, is underpinned by a capacity of up to 10,000 engines per year. Paredes compares this to competitors, whose delivery times on generators larger than 2,000kW average 104 weeks, with some lead times as long as 152 weeks. The advantages are obvious:

“We can reduce the lead time, not only to acquire a generator but to build your data center. Our clients are satisfied by getting something up and running quickly, and not having to wait for the two to three-year lead times that our competitors have.”

Once the genset is installed, the story doesn’t end – it begins. Regular testing of diesel generators is crucial to ensure their reliability in those moments of need. Paredes tells us:

“Typically, the generators need to be tested at least once a month to verify operational readiness. However, testing frequency may vary based on specific regulations, industry standards, and critical data. This is done based on how critical the application is, as IGSA Power recognizes the importance of regular testing for gensets but acknowledges the environmental impact associated with diesel generator operations, particularly during testing procedures.”

As well as a range of advanced emission control technologies, such as exhaust after-treatment, which are said to reduce the environmental impact of diesel emissions, IGSA Power is committed to finding maintenance regimes to further minimize impact. This commitment comes along with alternative fuel options, such as biofuel, renewable diesel, and HVO (Hydrotreated Vegetable Oil) to further reduce carbon emissions:

“IGSA Power can develop efficient testing protocols that minimize the duration and frequency of testing while still ensuring a thorough assessment of generator performance by optimizing testing procedures, thus reducing fuel consumption and emissions associated with testing activities.”

Another important aspect of efficient genset operation is the ability to control equipment at a granular level. Paredes tells us why this matters: “By monitoring multiple controls on the generator (amperage, voltage, frequency, temperatures, pressures, etc.) we can integrate real-time performance data that can be overseen from remote locations so you can proactively address situations.”

We’ve already looked at the advantages of the partnerships that underpin IGSA Power’s products. They also put the company in a position to pool all that knowledge and experience into an overriding partnership with its customers.

“IGSA Power boasts a team of highly skilled and experienced engineering professionals, who specialize in power generation solutions, making them well-equipped to assess the effects and advise clients on the most suitable solutions for their specific needs. These experts have in-depth knowledge of generator technology, industry regulations, and application requirements. Our team works closely with clients to understand their unique needs, assess their power requirements, and recommend the most appropriate generator solution tailored to their circumstances.”

This ties in with a consultative approach to client engagement, allowing them to understand operational challenges, power demands, budget constraints, and environmental considerations. Says Paredes: “We recognize that one size doesn’t fit all. Based on our comprehensive understanding, the engineering team provides personalized recommendations and guidance to help clients make informed decisions regarding generator selection.”

By partnering and consulting with customers, IGSA Power can find solutions that don’t just suit today, but offer an individualized genset array that takes into account expectations of a market that moves at breakneck speeds, as Paredes tells us:

“IGSA Power assists clients in futureproofing their generator solutions by prioritizing regulatory compliance and scalability of technology integration, lifecycle planning, and ongoing customer engagement. By taking a proactive and holistic approach, IGSA Power ensures the client's emergency backup solutions remain reliable, efficient, and compliant with future requirements.”

For more information: igsapower.com

The transformation fallacy

How TUI has continued to modernize its IT operations before, during, and after the pandemic

More than 100 years is a long time for a company to stick around, so it’s no surprise that travel company TUI has been through quite a few transformations since it was founded in 1923.

One of the biggest is arguably the most obvious. Formerly known as Preussag AG, a transportation and industrial business, its shift into the service and leisure industry in the 1990s - and subsequent rebranding as TUI in 2002 - was quite the dramatic metamorphosis.

Nowadays, Germany-based TUI offers package holidays, experiences, and cruises to customers across Europe. Its transformations in recent years have been less blatant to the outside world, focusing on digital transformation and modernizing the company’s IT footprint.

Though, according to CIO Pieter Jordaan, considering “transformations” as individual processes is foolish.

“I think there’s a fallacy that transformations have a start and an end,” Jordaan says. “Really, it's a snapshot of time because of certain technology disruptions.”

Despite this, Jordaan is happy to break the changes down into phases when talking to DCD.

“Our early transformation, up until 2017, was just consolidation,” he explains. “It was moving all of our countries’ data onto single platforms.

“We grew through acquisitions - so we would buy a company that had its own platform, database, and multiple ERP and finance systems. Eventually, this becomes prohibitively complex and you need to constantly consolidate that as a matter of survival.”

He is not joking. A brief look through TUI’s history shows a complex series of mergers, acquisitions, divestments, and investments since the turn of the century. In addition, though based in Germany, the company serves 15 countries, adding to the complexity of consolidation, and offers charter and scheduled passenger flights, package holidays, and cruises.

Beyond consolidation, TUI was migrating to the cloud - Amazon Web Services (AWS), specifically.

“If you look at cloud computing, and when AWS emerged on the scene, even when they started it seemed like they did everything. What that has done is lowered the barrier of entry for companies to innovate,” says Jordaan, speaking to DCD at AWS’s London summit.

“In the past you would have had to spend millions to set up your kit, to have all the compute available before you can start. Now, if you are a software startup, you could even be in a broom closet and still have access to the same compute that we [TUI] have as a €20bn ($21.7bn) company.”

Prior to its cloud migration, TUI was spending around 60 percent of its capital on networking, infrastructure, and pipelines for data. For Jordaan, it makes much more sense to let AWS and other cloud providers invest in innovation, while TUI cashes in on the convenience of the service.

The company has opted to steer clear of a ‘multi-cloud’ approach involving several different vendors, instead going all-in with AWS. Many businesses opt for a multi-cloud approach to avoid vendor lock-in, but Jordaan feels this concern belongs in the past.

“You’re trying to mitigate the risks of if you have to move but, in reality, the lifespan of most software is maybe five years,” he says. “You won’t migrate it anyway, you’ll just rewrite it. So the cost of running multi-cloud versus just writing it new is a significant difference.

“I think that's why people are stopping trying to build multi-cloud, or multi-cloud or on-prem and cloud set-ups, because the cost-benefit analysis doesn't make sense.”

Other tech leaders do not seem to share Jordaan’s view. The vast majority of companies still take either a hybrid approach or a multi-cloud approach for their IT strategy, if they are not fully on-prem. Perhaps this choice is just hedging on the “safer” side, but, regardless, TUI seems set on an ultimate goal of being (almost) all in on AWS.

Unsurprisingly, despite being a cloud-first (and AWS loyalist) company, TUI still has some on-premise capacity. According to Jordaan, earlier on in the company’s journey, it had more than 30 data centers. Now it operates just 11, with plans to continue reducing that number this year.

Describing the migration process so far, Jordaan says that they started with some of the very critical systems. “One of which was the e-commerce system that runs 50 percent of our revenue,” he says.

“That was significant, and provided scalability, because we could add more countries into it, and there are events such as airlines that cancel flights and so our customers would suddenly be looking for bookings.”

After that, it was a long-running program of lifting and shifting workloads depending on their priority.

The remaining data centers are either “just really complicated” to decommission, or are in the process of being shut down. By the end of next year, Jordaan expects TUI to have just one or two data centers.

One of the more unique areas of TUI’s business from an IT perspective is that of the cruise ships.

“Cruise ships actually have a whole data center on board,” Jordaan says. “It has completely separate networks for the onboard systems, navigation systems, and everything else, because you're in the middle of the sea. You need all the compute, storage, and networks to run from a data center.”

These systems are being transformed, too. Ships are deploying satellite connectivity to bring greater Internet connectivity on board. TUI is currently deploying Starlink on its cruise liners which, according to Jordaan, is enabling them to reduce the compute on board as well as improving Internet connection for holidaygoers.

Overall, the company’s technical transformation really ramped up during the Covid-19 pandemic. With holidays out of the question as countries around the world went into lockdown, TUI was forced to take an operational break. For once, people weren’t using its services 24/7, so what better time to modernize?

That said, TUI was also, in some ways, on the front lines of the pandemic. DCD asked Jordaan if he remembered what the energy in its offices was like when the situation began to unfold.

“We aren't strangers to extraordinary events,” says Jordaan after a pause. “We have a crisis center to handle these things - be it hurricanes, volcanic eruptions, or even terrorist attacks. We have very strong crisis management abilities, but what made Covid-19 complicated was that it was a crisis in every single country at once.”

In the immediate aftermath, TUI had to locate and repatriate all of its customers who needed help getting home. That meant coordinating the company’s own planes and sending them across the globe out of schedule, and during a “very volatile period where some countries are closing their airspace.” Jordaan added that some of TUI’s ships were refused permission to get into harbors or ports. TUI had to send out eight airplanes just to rescue those passengers.

While, of course, a hectic and strenuous time, being a package holiday provider and having an already consolidated and strong IT setup meant that TUI knew where its customers were stranded, and could get all 200 of them back home safely.

In addition to the physical safety and well-being of TUI’s customers, the second concern was financial. Very few companies escaped the cost of the pandemic, and the travel sector was among those hit particularly hard.

“No one knew how long it would last. Would it be three months? A year? Everyone was, of course, hoping it would be short, but the difficulty was on the revenue side,” says Jordaan. “Maintaining airplanes that aren’t flying for long periods of time isn’t cheap. The planes are still costing you money. You can’t shut down hotels indefinitely, because you will have mold and other issues. Fundamentally, your asset base is costing you money.”

TUI saw significant financial hits to its bottom line. Its 2020 revenue was €8bn ($8.7bn), a 58 percent decrease on the previous year, and the German state gave the company €3bn ($3.3bn) in financial support to soften the blow.

Despite this, it was also the company’s first opportunity to go hard on its IT modernization efforts, cloud migration, and begin putting in the groundwork for future artificial intelligence (AI) solutions.

During its Q4 2020 earnings call, TUI noted that the company had been able to modernize much faster than it may otherwise have because of the reduced traffic - “the migration to the new architecture is easier.”

Jordaan expands on this: “We were already on a transformation journey. We were already consolidating countries into regions of data stacks or data centers. But it’s a hugely iterative process and it’s made worlds more complicated if you are busy flying planes, sailing cruise ships, and selling millions of holidays. It makes it very hard to switch systems.

“Covid gave us an opportunity where planes were grounded, so the risk appetite changed from ‘please don't touch anything' to 'please do it as quickly as possible.’ It’s not a risky thing, it’s a prudent thing. The risky part is when you aren’t finished and things start up again.”

When it comes to AI, Jordaan says that, within one year, the company increased its model deployment and Amazon SageMaker use by 1,000 percent and, in 2023, launched an “AI Lab” to bring generative AI throughout the company.

The initial focus for AI within the company is on efficiency - “How do I make my core center better? How do I get my developers to code faster?” Beyond that, the AI will be used a touch more creatively.

“If you talk to your friend who had a great experience on a holiday, they don’t tell you the pool was 200 meters away from the hotel and the restaurant opened at 6pm. They tell you that the staff were friendly, and the vibe was good. They tell you about how they went up a mountain, and the hotel provided lunch for the trip,” explains Jordaan. “These are data points that are hard to measure but super important to us as humans, and they are how we make decisions.”

Here, Jordaan sees AI changing the vacation experience. Instead of the very quantitative and “crude” star method of rating, the AI will be able to make recommendations based on what an individual wants from their holiday - be it a party scene, a quiet location, or something family-friendly.

“The decision is very emotional. It's based on all the parameters that are important to you, and you can use language to describe that,” he says.

Thus far, TUI has been able to use AI to scale its online content designed to “inspire” customers into purchasing holidays. How long it will be before AI is actually planning our holidays is hard to say, but for TUI, its increased use of automation comes as the company emerges from the deep end of the pandemic.

In its latest earnings call, the company declared it was back on track with holiday bookings, with 60 percent of summer bookings already made. Thankfully, TUI completed enough of its cloud migration before the airplanes took flight again. 

Biting off the right amount to chew

Green Mountain CEO Svein Hagaseth on the company’s strategy, and his role within it

It is no secret that Norway is an ideal place to house a data center.

Ample renewable energy and a naturally cold climate enable data centers in the nation to run at the highest level of sustainability.

Founded in 2009 and acquired by Azrieli Group in 2021, Green Mountain has traditionally focused on the Norwegian market. But in 2023 Green Mountain began to sprawl out into wider Europe and, according to CEO Svein Hagaseth, is in the process of developing a sustainability-focused “pan-European platform.”

“I ended up at Green Mountain a little bit by coincidence,” Hagaseth tells DCD when discussing his career thus far with the data center company.

“A couple of years after moving to Canada, I read an article by the then CEO of Green Mountain in which he announced that the company was going international, so I did what sales guys do,” explains Hagaseth. “I picked up the phone, I called him, and said: ‘What do you mean by international?’ And here I am eight years later.”

Hagaseth started at Green Mountain as senior vice president of North America, taking over as chief sales officer in 2017, and was asked to step up to CEO in 2023.

With a new CEO often comes a new company strategy, but that wasn’t the case in Hagaseth’s takeover - his priority was to stay the course - because the course was working. “I think Norway and Green Mountain have a fantastic value proposition to the market,” he explains.

“So really, I wanted to do more of the same, but to accelerate the pace. I wanted to make sure that I took care of what made Green Mountain great, and to find opportunities to continue to build that pan-European platform.”

And pan-European the company has become. Green Mountain made its first steps internationally in 2022 when parent company Azrieli Group acquired Infinity SDC Ltd, taking over a data center in Romford, East London. Expansion work on that data center began in December 2023.

According to Hagaseth, that move was relatively simple in terms of entering a new country.

“We took over an existing organization that was already delivering high-quality services so we didn't have to do as much there. We just had to reaccelerate the project, invest in new capacity, and build it according to our standard design,” says Hagaseth.

Sadly, the UK doesn’t have the same proportion of green energy in its grid as Norway, but Green Mountain will still offset the data center’s power consumption with renewables, use HVO instead of diesel for its generators, and is aiming for a PUE of 1.2.

While the UK was a simpler transition abroad, Green Mountain is also currently working on a data center project in Germany through a joint venture (JV) with power company KMW.

The decision to pursue this via a JV was tactical as, in this case, the data center project is being built from the ground up.

“This is one of the reasons we decided to do the joint venture with KMW, because they know the market. Not the data center market, but they know the regulators and [KMW] is owned by the municipality which gave us a good edge in knowing how to maneuver in a new country.”

The project itself appealed to Hagaseth because it was the first opportunity to “deliver as close to a truly carbon neutral data center in the German market.”

Located in Mainz, the data center is set to be cooled by the Rhine River for 10 out of 12 months of the year, will be fed energy via renewables that are already in production by KMW, and won't use diesel generators.

Eventually, the site will also be connected to Mainz’s district heating system, reusing excess heat generated by the data center.

These two sites encompass Green Mountain’s current plans for international expansion, although more will eventually follow. However, the company was keen to note it hasn’t forgotten its Norwegian roots.

Green Mountain has three data centers in Norway, two of which are fully colocation, with the third part colocation and part leased to an “international business” which Hagaseth politely declines to name. The company has plans to develop several more.

Perhaps the most high-profile is the data center currently under construction for TikTok. OSL2-Hamar will eventually comprise five buildings, each with 30MW of data center capacity.

The first of these was handed over to TikTok late last year, but Azrieli Group’s most recent annual report suggests that the overall TikTok development had fallen behind schedule.

“We’ve had the handover of the facility [to TikTok] back in November which was within the timeline, and then we are fitting it out as the clients deploy hardware into the facility. We were a little delayed because it took time to get a concession for the power,” Hagaseth tells us.

The project needed a new substation, which requires approval from the Norwegian authorities before it comes online. “It took a little longer than expected,” he says.

The delay in getting approvals caused greater problems. “That meant that winter then came around which can reach the minus 20s, and electrical cables can't be put in the ground when it is less than -15°C (5°F).

“This year, we had six weeks consecutively -20°C (-4°F) or below. The ground is frozen solid, and you can’t bend the cables - even if you heat them - whenever you bend them, it will break. You can’t work in those temperatures.”

TikTok is itself something of a controversial customer. The app, owned by Chinese company ByteDance, has been banned outright by several countries including Afghanistan, India, Nepal, Senegal, Somalia, Jordan, Kyrgyzstan, and Uzbekistan, while others, including the United States, Australia, Austria, Belgium, Canada, Denmark, Estonia, France, Malta, the Netherlands, Latvia, Ireland, New Zealand, Norway, and Taiwan, have enforced partial bans prohibiting access on work devices by lawmakers, civil servants, and other employees, citing data privacy and national security concerns.

In the US, a law has been passed that could force ByteDance to sell its regional operations to a local company or face a full ban. Oracle was previously touted as a potential buyer, as was Microsoft.

In an attempt to alleviate some of these data privacy concerns for EU countries, TikTok launched “Project Clover,” a plan to keep all European user data on the continent. It is developing data centers in Ireland, and has undertaken the massive project with Green Mountain in Norway.

Hagaseth is, outwardly at least, unconcerned by TikTok’s travails.

“Of course, if there's a boycott of TikTok in general, that would have a consequence,” he concedes. “But, at the same time, as long as they have users on the platform, the data centers are going to be the last thing they actually shut down. But we don't see that as a considerable risk. I think there are many things that TikTok can do before that becomes a reality.”

As for Green Mountain’s future, the company is banking on the rise of artificial intelligence (AI), and the opportunity it brings for Norway.

“With everything that's happening on the AI side, there's a huge requirement for capacity,” Hagaseth says. “Norway is becoming a very, very hot place to put data centers.

“We have planned going forward that we have significant capacity available in Norway, and now we're looking to execute on those opportunities. I think AI has probably accelerated that discussion.”

But, as much as looking to the future is important, Green Mountain doesn’t want to risk growing faster than it can handle.

“When we do things we want to make sure that we do things with quality. We’ve had 100 percent uptime since we started, knock on wood. That is the overall objective, that we don't lose focus and bite off more than we can chew.” 

Norway takes the register

Politicians in Oslo are plotting new legislation for the nation’s data centers. Will it keep crypto miners at bay?

Norway’s economy is powered by two engines: fossil fuels and fish.

Figures from the government’s statistical agency show that oil and gas accounted for 96 billion Krone ($9.1bn) of the 160 billion Krone ($15.2bn) of Norwegian exports in April 2024.

Fishing-related industries exported 13.3 billion Krone ($1.2bn) of salmon, halibut, and other fish, crustaceans, and mollusks over the same period.

Matthew Gooding, Features Editor

Could digital infrastructure emerge as a third engine?

Given that Norway sits atop some of the world’s largest reserves of coal, crude oil, and natural gas, it might be a bit of a stretch to think that data centers could match the financial output of the country’s energy industry. But there’s no doubt that the profile of the Nordic nation’s data center sector is growing.

“Our CEO and I recently visited the US with businesses from fisheries and green industries, as well as the tech sector,” says Rob Elder, chief commercial officer at Oslo-based Bulk Infrastructure, referring to a trade mission in April headed up by Crown Prince Haakon, the heir to the Norwegian throne.

The trip saw the delegation visit San Francisco and Seattle to forge links between Norway and the US, and Elder says: “Norway has traditionally been all about oil and gas or fisheries, but now it’s at the forefront of the green energy transition and offshore wind. The three strands we took from Norway to the West Coast were fisheries, green energy, and digital infrastructure, and that shows that data centers are front of mind in Norway right now.”

But with greater profile comes greater scrutiny, and the Norwegian government has announced it intends to become the first European country to introduce a dedicated legal framework for data center operators, which could compel them to detail the type of workloads being run on their servers.

This is being billed as a way to ensure Norway does not become a haven for cryptocurrency mining, but will have to be introduced carefully so as not to become an additional administrative burden for operators that could deter investment just as the sector begins to get revved up.

Registering an interest

Norway’s data center market is not one of the world’s largest. According to Data Center Map, the country is home to 45 data centers, more than half (23) of which are located in and around Oslo, the nation’s capital.

Norway set out its intention to become a “Data Center Nation” back in 2018, with a new national strategy designed to attract the bigger players in the market. But while the government has offered tax breaks on electricity usage for data center operators in a bid to entice overseas businesses, the US cloud hyperscalers have yet to build in Norway.

Despite none of the hyperscalers having their own facility among the fjords, Google is spending €600 million ($646.4m) building a campus on 200 hectares of land in the Gromstul area of Skien, around 85 miles southwest of Oslo, which it acquired in 2019. This facility could offer up to 240MW of IT capacity, with the first phase due to go live in 2026. Microsoft has also operated two Norwegian cloud regions, though not from its own data centers, and one of these has since been delisted. AWS has an Edge location in Oslo.

Norwegian companies and smaller overseas operators have filled the hyperscaler-shaped void. For example, Norway’s Green Mountain is building out digital infrastructure which will be used by social media giant TikTok, having handed over the first data center to the company in December.

Norway’s cool climate means that, in many ways, it is an ideal location for data centers, and the abundance of available clean energy is also a boon for operators; Norway’s grid runs almost entirely on renewable sources, with 88 percent generated by hydropower, alongside some wind energy.

Demand for this power is growing all the time, and so Norway’s government has decided to take action in the form of legislation that it hopes will ensure that the data centers using large chunks of grid capacity are benefiting Norwegian society and the economy. “The purpose is to regulate the industry in such a way that we can close the door on the projects we do not want,” Karianne Tung, Norway’s digitization minister, said in an interview with the VG newspaper in April.

“We need to know more about which data centers we have, what they contain, and what they do. Today we have no overview.”

Under the proposed rules, a register will be created detailing the owners and managers of data centers, as well as the type of digital services they offer. It is hoped this will empower local authorities to make more informed decisions about whether to give new projects the go-ahead.

The government hopes this will help stop the construction of crypto data centers, which use large amounts of power to generate cryptocurrencies such as Bitcoin. According to Terje Aasland, Norway’s energy minister, crypto mining “is associated with large greenhouse gas emissions, and is an example of a type of business we do not want in Norway.”

New rules a step forward?

In emailed comments to DCD, Gunn Karin Gjul, state secretary in the Ministry of Digitalisation and Public Governance, said that the department had carried out “several consultations and dialogues with the data center industry” over the new rules. Gjul said: “Our impression is that they acknowledge the growing importance of the data center industry and therefore see the necessity of a regulation.”

One of the bodies consulted was Norwegian Datacenter Industry, a trade association representing 60 members across data center operators and associated businesses. Bjørn Rønning is the association’s CEO, and says the Covid-19 pandemic brought home the importance of digital infrastructure to policymakers in Norway.

Rønning says members of his association are “pretty relaxed” about the upcoming regulation. “This is a recognition of the importance of the data centers,” he says. “The rest of the industry, in terms of fiber infrastructure, mobile infrastructure, and all the service providers, have been regulated for 20 years, but at the moment with data centers there’s no regulation - if you have permission to put a building up and the necessary power you can set one up - it’s very problematic.”

Bulk, which is a member of the Norwegian Datacenter Industry, operates two data center campuses in Norway, the 12MW N01 Data Center Campus in Kristiansand, in the south of the country, and its Oslo Internet Exchange site in the capital. The company broke ground on a 42MW extension of N01 earlier this year.

Elder echoes Rønning’s view that more regulation is a good thing. “We’re hugely supportive,” he says. “Anything that brings more public confidence to our industry is a good thing, and as the data center sector increases in scale and importance it's inevitable but also important that it has the right governance and the right legislation to enable it to operate in the right way.”

Closing the door on crypto

The new legislation is expected to be considered by lawmakers in Norway later this year, with a date in November penciled in.

Gjul said the government believes “sensible obligations are accepted by the serious data center operators, and even seen as necessary,” but the rules are unlikely to be so warmly received by crypto data center operators, which are currently free to do as they please.

While no accurate picture exists of the number of crypto data centers in Norway, those in the industry say such facilities are relatively common.

“We know there are quite a few out there, and the value of Bitcoin has increased recently, which helps them,” says Jørn Skaane, CEO of Lefdal Mine Datacenter. “The problem is there’s no real benefit to it - you just have a container sitting out in a field somewhere with a noisy generator and a German Shepherd watching it, and there’s no investment in the community. It just converts kilowatts to Bitcoin.”

Despite literally being based in a mine - Lefdal’s data center is located in an abandoned gemstone and mineral mining facility - the company, in keeping with many of its peers, does not allow any cryptomining activity on its servers, though it has previously hosted mining rigs operated by Northern Bitcoin, now known as Northern Data.

Skaane says the company now stipulates “no cryptomining in our contracts, which keeps things simple.” But, he says, “the situation for the government is more complex because there are also some AI services that are not making the world a better place, and other things which happen on platforms like Facebook, and on the Internet as a whole, that are not positive. It’s difficult to just say no [to cryptomining] because you don’t like it.”

Indeed, the convergence of crypto data centers and AI could make the implementation of any new laws in Norway more difficult, with many former crypto companies now using their hardware to power generative AI systems instead. This will potentially make it challenging to discern what types of workloads are running in a specific facility.

On the whole, Skaane is in favor of regulation. “I think it’s good the government is taking an interest, as long as they don’t make any hasty decisions without considering the consequences,” he says.

“We want stable conditions regarding taxation and legal frameworks, and for Norway this is a fantastic opportunity to build out a new export industry, where we convert kilowatts into bits and send them through the fiber network to Germany or the UK. That makes so much more sense than sending power over long distances to fuel data centers in Frankfurt or London.”

To ensure this new export industry takes off, Norwegian Datacenter Industry’s Rønning says it is important to minimize the burden the legislation places on operators, so as not to deter investment. Many of Norway’s biggest data center operators have been reliant on foreign backers, with the London-based Columbia Threadneedle European Sustainable Infrastructure Fund holding a majority stake in Lefdal Mine, and Israel’s Azrieli Group backing Green Mountain.

Rønning says data center companies can expect regular audits if the proposals become law, but that his association is working with the government to ensure this process is “as smooth as possible.”

He adds: “For investors, the instinctive response is always that all regulation is bad, so we have spent some time trying to calm them down. This will put some extra administrative burden on the data center, and you’re probably going to have to find some reports you don’t often produce today.

"But otherwise, I have no worries, this is not a sign that this industry is not welcome in Norway.”

‘Dam busters

Five years after Amsterdam imposed a shock moratorium on new data centers, the market is still feeling the effects

July 12, 2019, was shaping up to be a normal day for Amsterdam’s many digital infrastructure providers. Life appeared to be pretty good in what was one of Europe’s most popular data center markets. Then came an announcement from the Municipality of Amsterdam that changed everything.

The municipality, which covers the historic heart of the Dutch capital, along with the adjoining municipality of Haarlemmermeer, announced an immediate ban on all new data center developments.

This moratorium was lifted a year later, but as the five-year anniversary of this surprise decision approaches, its effects are still being felt across the market.

“Amsterdam is still a top-tier market; it’s currently the third biggest in Europe, but it’s about to drop to number four because Paris will soon grow to a greater size,” says Andrew Jay, head of data center solutions at CBRE. “It’s always been number three, but the original moratorium really knocked the wind out of Amsterdam’s sails.”

Further restrictions and moratoriums have followed, meaning Amsterdam has lost ground to its rivals that it may never recover.

2019 and all that

The reason for the moratorium, the municipalities said, was primarily one of space. DCD reported at the time that officials decided to impose the pause to ensure "that data centers occupy as little space as possible... and [architecturally] fit in well with the environment."

“The arrival of data centers is, in a sense, a result of our own consumption and lifestyle: We want to be online on our phones and laptops all day,” said Marieke van Doorninck, Amsterdam's alderman for sustainability and spatial development. “To a certain extent, we will have to accept the associated infrastructure, but the space in Amsterdam is scarce.”

Haarlemmermeer apparently had similar concerns, with Mariëtte Sedee, alderman for spatial development, environment, and agricultural affairs, adding: “The space in our municipality is of great value... It is necessary to be on-site and formulate policy first, so that we keep a better grip on the location of data centers.”

At the time, Amsterdam had become a victim of its own success, with its data center market regularly growing at a rate of 10-15 percent a year.

The city’s status as a connectivity hub for Europe ensures it is an attractive proposition for operators, Jay says. “It became a big Internet exchange point around the time of the dotcom-boom,” he explains. “It doesn’t have as many big enterprise companies as Paris or Frankfurt, but this interconnectivity drove demand.

“The moratorium knocked everyone for six because we didn’t have as much demand as we have today, but here was one of the main cities saying: ‘You can’t come to this area.’”

Though the ban only covered two of Amsterdam’s districts, its effect, says Edgar van Essen, was to suggest the entire city was closed for new data center business. Van Essen is managing director of Amsterdam-based Switch Datacenters, which runs four facilities in the region, so has witnessed the impact of the ban firsthand.

Data centers in Middenmeer, where both Google and Microsoft have a presence

“The image of the whole of Amsterdam was negatively hit by decisions from a small part of Amsterdam,” he says. “The misconception in the market was that the city had shut down, whereas, in fact, the moratorium related to two municipalities representing less than 20 percent of it.

“But if Americans see headlines saying ‘Amsterdam is closed,’ then they won’t do any further reading, they’ll just move on to another city.”

The initial moratorium ended in 2020, when an agreement was reached to allow new data centers to be built in designated areas of the two municipalities, providing they met stringent energy efficiency requirements.

The Dutch Datacenter Association (DDA) helped broker the agreement on behalf of its members. Stijn Grove, its managing director, says the initial moratorium was damaging because “trust in a market is hard to gain but easy to lose.”

Grove says: “The moratorium on new projects then had significant impacts on the local data center market, influencing both its immediate and long-term dynamics. Initially, the sudden halt on new construction was, of course, a shock to the industry and, in hindsight, not needed as in terms of policies, nothing has really changed since.”

Indeed, Amsterdam already had stringent rules in place around the efficiency requirements for data centers, with new builds required to achieve a Power Usage Effectiveness (PUE) rating of 1.2, and existing facilities having to hit 1.3.
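For reference, PUE is simply the ratio of everything a facility draws from the grid to what its IT equipment actually consumes; the worked figures below are purely illustrative and are not drawn from the DDA or the municipality.

\[ \mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}} \quad\Rightarrow\quad E_{\text{total}} = 1.2 \times 10\,\mathrm{MW} = 12\,\mathrm{MW} \]

In other words, a hypothetical new build with a 10MW IT load could draw no more than 12MW in total under the 1.2 cap, leaving 2MW for cooling, power conversion, and everything else, while an existing site held to 1.3 could draw up to 13MW for the same load.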

On the impact of the moratorium, the DDA’s State of Dutch Data Centers 2024 report, published at the start of June, notes that in recent years “growth in the Dutch colocation market had slowed significantly.”

The report says: “The moratorium in Amsterdam and Haarlemmermeer caused the international demand, especially for wholesale data center space, to drop strongly and move to other locations across Europe.”

Data from the DDA shows that the total number of data centers in the Netherlands was lower last year (187) than in 2019 (189). There are also fewer colo companies operating across the country, with 95 last year compared to 111 in 2019. While other factors will have played a part, given that the Amsterdam metropolitan area represents 71 percent of the entire Dutch market, per the DDA’s figures, it is likely the moratorium has had an impact.

Amsterdam is also lagging behind its Tier One rivals in the other FLAPD markets - Frankfurt, London, Paris, and Dublin - when it comes to adding new capacity. According to CBRE, last year it gained an additional 62MW, significantly less than any of the other FLAPD markets (London, the second lowest, added 82MW). And in the first quarter of 2024, no new capacity was brought online.

What next for Amsterdam?

Van Essen believes the initial moratorium was politically motivated. Despite Amsterdam’s status as a data center hub, the industry’s impact on the environment means it has come under close scrutiny for many years.

Microsoft has been engaged in a long-running battle with farmers living near its data center campus in Hollands Kroon, 50km outside Amsterdam, where Google and CyrusOne also operate facilities. The farmers say the data centers are encroaching on valuable and limited agricultural land, while the high volume of water consumed by the campus has also raised eyebrows during recent droughts.

Photo credit: Switch Datacenters
Amsterdam’s old town, where the 2019 moratorium began

Elsewhere, in 2022 Meta killed a plan to build a 200MW data center in Zeewolde, east of Amsterdam. The five-data hall facility would’ve been the largest campus in the Netherlands at the time, but having initially been given planning permission, Facebook’s parent company ran into opposition from the Dutch Senate, as part of the land for the data center was owned by the government. Lawmakers decided not to sell this land, and as a result the project bit the dust. The site has since been unzoned, meaning it cannot be used for data centers.

“There are a lot of very negative opinions about big data centers in the minds of politicians,” Van Essen says. “As a result, they came up with these rules that bash the industry harder. The rules were the result of a political climate where declaring that you were against data centers became the right thing to say at parties.”

And though the original moratorium has long since been and gone, heavy restrictions remain in place in Amsterdam and across the Netherlands, not least around new hyperscale developments. Projects with an IT capacity of 70MW or more, or those above 10 hectares in size, are banned in most of the Netherlands, with a handful of exceptions for sites in the north of the country. The government says this is due to a lack of power capacity for the facilities, and the hyperscale ban has been compounded in Amsterdam itself by a fresh round of restrictions imposed in the Amsterdam and Haarlemmermeer municipalities in December 2023, in a bid to control the amount of energy used by data centers.

The DDA described this latest development as "symbol politics,” which seeks to blame data centers for wider power supply problems experienced on the Dutch grid. "The issues with the current grid congestion will not disappear,” it warned in January. For its part, the municipality said it was committed to ensuring the region remains “an attractive location for data centers” but that operators must do their bit to ensure residual heat is reused and that water consumption is kept to a minimum.

Development of existing hyperscale data centers, or those that had already received planning permission prior to the restrictions, is still allowed, meaning some building continues. In April, Google broke ground on its fourth Dutch data center, and says it will invest €600 million ($643m) building out the facility in Groningen.

But the effect of this complex web of rules and partial moratoriums is that, according to CBRE’s Jay, the hyperscalers are simply looking elsewhere. “We’re in a firmly hyperscaler-driven market now,” he says. “In any given quarter, they’re responsible for 85-90 percent of the new capacity coming online, so the kind of moratoriums we’ve seen in Amsterdam are not doing wonders for supply.

“There’s still demand for Amsterdam, and it’s not a case that the other FLAPD locations have done anything to make themselves more attractive. They just haven’t made the moves Amsterdam has to make itself unattractive.”

The picture isn’t entirely bleak for the Dutch capital. The DDA report predicts the market will grow at a rate of 10 percent over the next year, presenting opportunities for companies like Switch Datacenters.

“We try and show the added value we can bring to communities,” Van Essen says. “We go directly to municipalities and explain we’re a local company building data centers and that we can help them meet their sustainability targets because we have this green residual heat that can be put to use in district heating systems.

“We think that’s a smart approach, because by trying to work together we can show them that data centers are not the enemy.”

LESSONS FROM SINGAPORE

Those concerned about the long-term impact of the Amsterdam moratorium may not be reassured by developments in Singapore, which is emerging from a moratorium put in place by the city state’s government in 2019. This was imposed after concerns were raised about a lack of power and development space in the market.

The ban was slightly relaxed in 2022, with new developments permitted subject to being granted licenses, and in May the country’s Infocomm Media Development Authority (IMDA) revealed a plan to unlock up to 300MW of capacity by making existing data centers more efficient. The IMDA said it will work with operators to put the so-called Green Data Center Roadmap into action.

However, the damage may have already been done. A report from financial analysts BMI said the roadmap is unlikely to persuade data center investors, who have fled Singapore to neighboring markets like Malaysia and Indonesia, to return.

“Our core view is that large-scale data center capacity investments are unlikely to go back to Singapore as a result of global key trends in the industry and momentum held by neighboring markets, primarily Malaysia,” BMI’s report said, pointing out that the 300MW on offer in Singapore is dwarfed by the combined 2,500MW of new capacity that is expected to be on offer in other markets in the region over the next few years.

Amsterdam isn’t the only FLAPD market that has imposed a moratorium on new data center developments.

In Dublin, grid operator EirGrid imposed a moratorium on new developments in 2022, which is set to last until 2028.

This was put in place because of the amount of power data centers are consuming; figures published last year by Ireland’s Central Statistics Office showed that data center power consumption in the country increased by 31 percent in 2022, accounting for 18 percent of all electricity used in Ireland. Data center developments are still permitted elsewhere in Ireland with few restrictions, and a separate report from the International Energy Agency, released in January, said data centers could gobble up 32 percent of Ireland’s power by 2026 due to the number of new builds planned. 

Leading the Way in Data Center Sustainability

Overview

AMI Data Center Manager (DCM) is an on-premise, out-of-band solution for monitoring server and device inventory, utilization, health, power, and thermals.

• AMI DCM addresses data center manageability, reliability, planning, and sustainability challenges with real-time data, predictive analytics, and advanced reporting.

• DCM is designed to enhance operational efficiency, reduce costs, and ensure the availability and uptime of critical infrastructure.

• DCM manages a data center's carbon footprint to achieve sustainability goals.

Carbon Emissions Measurement and Projections

• Centralize the management of diverse devices across data centers, even in multiple locations.

• Reduce operating expenses by optimizing power consumption, identifying zombie servers, addressing thermal hotspots, and improving cooling.

• Improve data center reliability and uptime proactively by monitoring device component health and responding to alerts.

• Monitor, manage, and control data center carbon emissions to achieve sustainability goals and comply with regulations.

Paying the premium: Why 2023 was a bad year for space insurance

2023 was the space insurance market’s worst ever year – could it stifle satellite innovation as a result?

The first commercial communications satellite in geosynchronous orbit (GEO), Intelsat I, was launched in April 1965. Otherwise known as Early Bird, it was also the first insured satellite, with Lloyd’s of London insurance syndicates providing pre-launch cover against physical damage to the spacecraft.

Thankfully, COMSAT, the company that launched Early Bird, never needed to make a claim on the 34.5kg machine, which ended up operating for more than twice its expected 18-month lifespan and broadcasting the Apollo 11 moon landing to viewers on Terra Firma.

But space is dangerous and difficult, and failures costly. The first year of “turmoil” for space insurance was 1984, which saw the failure of several satellites (some later recovered), most notably the $100 million Intelsat 5.

Today there are thousands of satellites in orbit – hundreds of which are insured – but the market dynamics have changed. The number of satellites launching every year is booming; yet most are now too small to warrant insurance, while the value of the largest machines is on par with what the entire insurance market can afford.

Last year was one of the industry’s worst on record for space insurance claims, and 2024 isn’t set to be much better, with insurance providers anticipating heavy losses. While few will shed a tear about insurance firms losing money, the result is higher premiums for operators, which could potentially hurt future satellite launches.

Insuring GEO vs LEO

Satellite insurance coverage largely falls into two buckets: cover for the launch plus one year of a satellite’s operations (aka launch plus one), and in-orbit coverage that is renewed annually. In-orbit claims can be for partial losses – where the lifespan or capabilities of a machine are noticeably reduced – or for a total loss after an issue leaves a machine unsalvageable from an operational or commercial point of view.

Of the ~10,000 satellites in orbit, David Wade, space underwriter at Atrium Space Insurance Consortium (ASIC), tells DCD only around 300 are actually insured, almost all in GEO. More than 9,000 satellites are in LEO (low Earth orbit), of which only around 50 are insured. Such insurance isn’t mandatory in many countries, meaning the decision is often down to the particular company and its financiers.

The multi-ton satellites flown to GEO, 35,786 kilometers (22,236 miles) above the Earth, can routinely require investments in the hundreds of millions of dollars. For the roughly 250 GEO satellites that are insured, the insurance kicks in at launch and lasts for much of their expected lifespan, but satellite values decline enough that, after ten years, insurance often isn’t worth the cost.
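As a back-of-the-envelope check on that altitude – standard orbital mechanics rather than anything Wade cites – a geostationary satellite must complete one orbit per sidereal day (T ≈ 86,164 seconds), so Kepler’s third law gives its orbital radius as

\[ r = \left(\frac{\mu T^{2}}{4\pi^{2}}\right)^{1/3} = \left(\frac{3.986\times10^{14}\,\mathrm{m^{3}\,s^{-2}} \times (86{,}164\,\mathrm{s})^{2}}{4\pi^{2}}\right)^{1/3} \approx 42{,}164\,\mathrm{km}, \]

and subtracting Earth’s equatorial radius of roughly 6,378km yields the 35,786km altitude quoted above.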

Wade says, however, that the number of insured satellites is drifting down because satellites are not being replaced at the same rate as they are retiring. The gradual decline of traditional broadcast satellites - so-called ‘bent-pipe’ satellites that simply retransmit broadcast feeds - has seen the number of insurable satellites launched every year drop.


“What were once three colocated satellites are being replaced by two, or you see a couple of satellites being replaced by one which has a high-throughput payload and a heritage broadcast payload,” he says.

Consolidation in the market – including Viasat’s merger with Inmarsat, and SES’s with Intelsat – will also likely result in fewer GEO satellites being launched as merged operators combine and rationalize their respective networks.

Market dynamics are also changing as LEO operators take a different view to resilience and redundancy. Large, one-off GEO satellites have traditionally been built with resiliency and redundancy in mind. That resilience comes at a premium, but the cost of failure was much higher. LEO satellites, however, have been built cheap, with the idea that the sheer number in large constellation fleets can cover any individual failures with in-orbit spares.

While GEO operators would generally buy insurance for the launch and usable lifetime of the satellite, LEO operators are typically eschewing post-launch insurance and relying on strength-in-numbers for in-orbit operations.

“At best, they're purchasing insurance for the launch phase only,” says Wade. “The only time that they might want insurance is when they’re launching 40 or 50 of these satellites on a single launch vehicle.”

“We do see more policies covering the launch phase for constellation satellites or small satellites, but that just does not replace the income loss through traditional GEO.”

LEO can be at risk of generic failures that impact common components, and such issues do occasionally occur across entire batches of craft, forcing operators to take the financial hit. Starlink last year said it would be de-orbiting an unspecified number of its then-new v2 mini-satellites after observing unexpected altitude changes, with some demonstrating eccentric orbits. However, the speed at which smaller satellites can be built and launched means failing craft are quickly replaced.

Larger, high-resolution Earth observation satellites in LEO might traditionally buy insurance, but they too are being replaced by a greater number of smaller and cheaper machines. And, even for those smaller satellites that might want cover, it can often be hard to justify.

“It can be difficult to get insurance for some of the smaller satellites because it's just not cost-effective,” says Wade. “If you've got a satellite valued at a couple of million dollars or less, it might not make sense or be worth it to insure.”

2023: Space’s very bad, not so good year

This decline in the insurance market comes at a period of a growing number and value of satellite failures, which could have a knock-on effect in the years to come.

While never a high-margin business, the market is used to paying out on smaller claims and maybe a large failure or two every year – the industry has generally hovered around a five percent failure rate since 2000 and one report suggests there have been around 165 claims for more than $10 million across the history of the industry.

Though there were hundreds of satellites successfully launched into orbit, 2023 was not a good year for the space industry in terms of reliability. Issues and failures hit both a large number of smaller satellites as well as several of the industry’s marquee machines – with some of the industry’s largest-ever claims coming last year.

Jan Schmidt, head of space at Swiss insurance provider Helvetia Insurance, told Connectivity Business News that last year marked “the most challenging period for the industry in over two decades.”

It saw close to $1 billion in claims and some $500m in losses for insurers; the space insurance market generally operates at a premium of $550m, meaning 2023 was a year of major losses for the industry.
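Taking those round numbers at face value – they are market estimates rather than audited figures – the implied loss ratio for the year is

\[ \text{loss ratio} \approx \frac{\text{claims}}{\text{premium}} \approx \frac{\$1{,}000\mathrm{m}}{\$550\mathrm{m}} \approx 180\%, \]

meaning insurers paid out close to twice what they collected, before their own operating costs are even counted.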

“The losses in the space insurance market are unsustainable,” Melissa Quinn, general manager, Slingshot Aerospace, said after the company posted a recent market report.

“Some insurers are exiting the space industry, while the ones who remain are substantially increasing premiums to hedge against the record losses in the industry. While anomalies in the early part of a satellite’s life account for most of the major losses, the increased cost of insurance for overall operations is also meaningful.”

Chief amongst the casualties was Viasat, which saw its new ViaSat-3 F1 satellite suffer an "unexpected event” during reflector deployment.

The first in a constellation of three geostationary Ka-band communications satellites, F1 was set to cover the Americas, with future satellites covering EMEA and APAC. The satellites are each expected to have a throughput of more than 1Tbps and download speeds of 100+ Mbps.

Boeing provided the satellite bus, a 702MP+ platform. The design of the satellite’s unusually large reflector is thought to be from Northrop Grumman’s ‘AstroMesh’ line of reflectors.

The company has since said it won’t replace the ailing satellite, despite the machine being expected to handle “less than 10 percent” of its planned throughput. Viasat is making a $420 million claim against the loss – the largest single claim in the history of the industry.

Last year, Viasat acquired rival Inmarsat. In August, in more bad news for the company, Inmarsat’s I6 F2 satellite suffered an “unprecedented” power subsystem anomaly during its orbit-raising phase – when the satellite has detached from the rocket and is rising to its desired final altitude.

The 5.47-ton I6 F2 – based on Airbus Defence and Space's Eurostar E3000e platform – was launched in February 2023 and set to have a 15-year lifespan. The satellite was to offer 4Gbps of additional Ka-band capacity to Inmarsat’s network, as well as L-band services over the Atlantic.

Viasat said it has insurance coverage of $348 million in place for the I6 F2 satellite.

While two such major unprecedented failures in a single year would be bad enough – and enough to make alarm bells go off for space insurers – the incidents turned out to be merely the beginning of the market’s problems.

Astranis’ debut Arcturus microGEO satellite – launched aboard the same rocket as ViaSat-3 – also suffered a power issue with the system that controls its solar panels. Though the company remains in control of the satellite, Astranis estimates it can only get six to 12 hours a day of service from the spacecraft, as opposed to the expected 24. Astranis is launching a multi-mission ‘UtilitySat’ to offer reduced-capacity services in Arcturus’ wake until a full replacement can be deployed in 2025. Reports suggest the satellite was insured for around $40 million.

Though not lost, four other satellites last year were reported as suffering power issues that would likely curtail their lifespans, resulting in more insurance claims. Yahsat’s Al Yah 3, Avanti Communications’ Hylas 4, and Northrop Grumman’s two Mission Extension Vehicles (MEV-1 and MEV-2) were said to be having problems with onboard power processing units (PPUs).

The PPUs from Aerojet Rocketdyne – recently sold to L3Harris – provide the electrical power their thrusters need for station-keeping in geostationary orbit (GEO). Sources said Al Yah 3, Hylas 4, and MEV-2 have each lost one of two onboard PPUs since the issue emerged in 2022. The youngest of these spacecraft, MEV-2, launched in 2020.

The faults will not cause the machines to fail altogether, but will likely impact their 15-year lifespans. All four were built at Northrop Grumman’s facility in Dulles, Virginia. SpaceNews reported insurance claims for the four machines could total around $50 million.

ASIC’s Wade says there’s no single reason for so many large failures and claims in one year.

“I cannot see a good reason why last year was so bad,” he says. “In some cases, there were external events that led to the loss of a satellite which could not have been predicted [and] could not be foreseen.

“There's no single thing that links them all, but for some of the other claims there are different factors that could be influencing each one; workmanship, poor quality control, design issues.”

Wade wonders if issues during Covid-19 – including supply chain difficulties, layoffs, changes in processes – may have been a factor, but has found nothing conclusive.

A bad run of results set to continue

Wade notes the satellite insurance industry has made a loss in three of the last five years.

2019 saw the failure of the VV15 Vega rocket launch, taking with it a military observation satellite for the United Arab Emirates and resulting in a $411.21 million insurance claim – the world’s largest up to that point. It could have been worse; that year Intelsat 29e failed after a fuel leak, serving just three of its planned 15 years. Intelsat hadn’t insured the satellite due to its reliability track record up to that point – only a handful of its machines were insured before the incident.

Despite a successful launch in December 2020, SiriusXM's seven-ton SXM-7 satellite, built by Maxar to provide digital radio to consumers, failed during in-orbit testing. The company made a $225 million insurance claim after the satellite was declared a total loss. The VV17 Vega rocket launch failure in 2020 saw the loss of two satellites with a combined value of around $400 million. Two Pléiades Earth imaging satellites were lost in 2022 after the VV22 Vega launch failed.

Other notable failures in recent history include 2015’s loss of the Centenario mobile communications satellite, due to be part of Mexico’s MexSat system, which resulted in a $390.7 million claim.

September 2016 saw a SpaceX Falcon 9 rocket fail while preparing for a static-fire test, destroying Spacecom’s $200 million Amos-6 communication satellite and wiping out 20 years of insurance premiums for prelaunch coverage in the process.

2024 is set to look equally bleak for insurers. Several of the claims from 2023 are ongoing, and more large claims are being filed. Reports suggest losses for the space industry this year have already surpassed $500m.

Last year, SES reported the four O3b mPower satellites it had in orbit at the time were facing power issues that will shorten the machines’ lifespans. SES and the satellite manufacturer Boeing said the satellites will have an operational life and capacity “significantly lower” than previously expected, but five machines yet to launch would be upgraded, with two more satellites launched to cover the issue.

In its April quarterly earnings report, the company said it was “continuing to engage with insurers” over a $472m claim relating to O3b mPower satellites 1-4.

In May, SpaceIntelReport reported Ligado Networks had declared its SkyTerra 1 mobile communications satellite a total loss and filed a $175m insurance claim, later confirmed to DCD. Launched in 2010, the five-ton GEO satellite featured 152 L and Ku band transponders to provide 4G-LTE open wireless broadband network coverage to North America.

However, in good news, Yahsat’s Thuraya-3 communications satellite seems to have recovered from an issue. The satellite, which provides services over Asia, suffered an outage of its communication payload in April. In May, the company said it had managed to successfully recover services across a large part of the region – though seemingly not Australia. The Boeing-made L-band satellite was launched in 2008 with a 15-year planned design life.

Market dynamics changing; bad for insurers, bad for operators?

ASIC’s Wade says changing market dynamics have seen the premium base eroded at the same time that high-end satellites are increasing in value. The market has “become much more volatile,” he says.

His organization has a combined capacity of $45m, meaning it can underwrite any single launch or satellite up to that value. Across the 35 other space insurance providers globally, there’s close to $600m of capacity.

But some of today’s large satellites are routinely being valued at more than $400m. It only takes a couple of satellite failures to wipe out the entire market, and such events seem to be happening increasingly often.

“Satellite values have gone up, and the premium has gone down,” Wade says. “It used to be that we made enough premium to pay two total loss claims and still make a profit. That's certainly not been the case for the last decade.

“Now, if the largest satellites were to fail in any one year, that's usually going to be enough to tip the market into a loss for the year.”

Wade notes that with the growth in lower-value satellites, the premium base for insurers has dropped. And when the really high-value machines get lost, the loss ratio of claims to premium goes through the roof.

“A single event throws the market into a loss. That is getting attention, and some firms are questioning whether it's valuable as a line of business. If we keep on losing money, they’re going to take that insurance capacity and give it to a more profitable line of insurance.”

While companies focused on delivering connectivity from space may be more concerned about their own operations than the profit margins of insurance firms, a bad year for insurers can have knock-on effects in the space industry.

Inevitably, the only way to reduce losses is to either raise prices for operators, or have more launches and satellites to cover. Richard Parker, co-head of space at underwriter Canopius, told SpaceNews last year his firm raised prices across the board after the Viasat-3 announcement – before the other notable failures later in the year.

“We know that right now the number of risks insured isn't going to increase for the foreseeable future,” says Wade. “The only room that we have to maneuver is to increase rates to try to get the premium level back to a level where it can pay out the claims we're seeing.”

Ideally, Wade says he’d like to see around 50 launches a year, where he could underwrite $20 million on each launch; and of those 50 launches, one would fail. Under that market dynamic, the risk is spread and insurance firms can make a profit and support some failures.

“But what we are now seeing is maybe 20 launches a year,” he says. “The majority of those might be small satellites or constellation launches, where we might have a line of $10m; we might have a couple of Earth observation satellites that are $20m; and a couple of GEO commsats for $20-25m.

“And then you get this single, massive value satellite comes along where everybody needs to write a big line to get the thing insured, and you end up with one single satellite that's $40m exposure. And if that one goes wrong, suddenly the whole book is blown.”
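Wade’s concern can be made concrete with some simple portfolio arithmetic. The sketch below is illustrative only: the line sizes and the single expected failure per year are assumptions lifted from his round numbers, not a real book of business.

```python
# Illustrative sketch of the underwriting arithmetic Wade describes.
# All line sizes (in $m) and the single expected failure per year are
# assumptions taken from his quotes, not real market data.

def portfolio_summary(lines_m, expected_failures=1):
    """Summarize exposure and concentration for a list of per-launch lines."""
    total_exposure = sum(lines_m)
    avg_line = total_exposure / len(lines_m)
    expected_claims = expected_failures * avg_line  # failures assumed equally likely
    worst_single_loss = max(lines_m)
    return total_exposure, expected_claims, worst_single_loss

# Wade's preferred market: 50 launches, a $20m line on each, one failure a year.
ideal = portfolio_summary([20] * 50)

# Something closer to today: ~20 launches, mostly small $10m constellation lines,
# a few $20-25m Earth observation and GEO lines, and one $40m "whale."
today = portfolio_summary([10] * 14 + [20, 20, 25, 25, 20, 40])

for name, (exposure, expected, worst) in (("ideal", ideal), ("today", today)):
    print(f"{name}: ${exposure}m exposure, ~${expected:.0f}m expected claims, "
          f"worst single loss ${worst}m ({worst / exposure:.0%} of the book)")
```

On those assumed numbers, a single total loss consumes two percent of the spread-out book but roughly 14 percent of the concentrated one – the “whole book is blown” effect he describes.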

He notes that microGEO satellites – the smaller machines with smaller coverage areas being sent to GEO – may be a good thing for the industry and are “very insurable” from his perspective.

These machines – like those from Astranis – are lower cost than their high-throughput cousins, but might only provide coverage to a single country or region. Astranis has contracts for eight machines to provide coverage over the Philippines, Mexico, Argentina, Thailand, the US, and Peru.

“I would prefer to see 20 of these microGEO satellites launched to fill particular needs rather than a company struggling to raise the finance to buy an underutilized big satellite,” says Wade.

More failures and claims coming?

Things may get worse before they get better. Through 2024 and beyond, a number of new launch vehicles will enter service, heightening risk at a time of increased space weather and reduced satellite reliability.

Currently, there are no dual-launch rockets in operation after the retirement of the Ariane 5 launch vehicle, meaning the largest satellites are being launched one-by-one. However, a number of new dual-launch vehicles are set to come into operation soon, potentially meaning all of the space insurance market’s underwriting capacity will be attached to one launch, adding greater risk to the market.

Increasingly turbulent space weather could also cause more issues. The sun is heading towards the peak of its 11-year solar cycle, meaning more solar flare activity is likely over the next few years.

Significant weather events that damage a large number of satellites could easily have a major impact on constellation providers, leaving them with limited coverage, reduced revenue, and no cover.

SpaceX lost up to 40 Starlink satellites in the wake of a geomagnetic storm in February 2022 – partly due to increased drag impacting orbit-raising maneuvers. Starlink seems to have held up better in the wake of recent space weather.

“Major geomagnetic solar storm happening right now,” SpaceX CEO Elon Musk posted on X (formerly Twitter) in May amid major solar activity. “Biggest in a long time. Starlink satellites are under a lot of pressure, but holding up so far.”

This increasingly unpredictable weather is occurring at a time when satellite operators are also using newer, less resistant chips, which can potentially push up failure rates.

“I think we're seeing issues with some of the new space players, where component parts are maybe not tested as thoroughly as they used to be,” says Wade.

The desire to bring down the cost of a satellite is resulting in moves towards commercial off-the-shelf components, while the desire to improve satellite capacities means greater use of more modern chips that might not have been hardened against space to the same extent as older, less capable units.

“We're seeing the density of those devices getting smaller and smaller, and their susceptibility to space weather events becomes more critical,” says Wade.

No insurance, no innovation

While he hopes the space insurance industry makes money in 2024, Wade concedes it’s “not going to be a great year” due to the low number of GEO communication satellites launching this year.

Some insurers have already given up. Brit, which offered more than $50m of space insurance capacity, left the market last year. AIG ($20-25m) also left last year, with Swiss Re, Allianz, and Aspen Re all leaving the space insurance market since 2019.

Insurance Insider has suggested other players are “teetering on the edge” in fragile positions, with Brit the “tip of the iceberg” as others potentially slowly withdraw their capacity from the market in the future. Indian insurance specialist Tata AIG, however, announced it was entering the space in May 2024.

“If we continue to see losses, there will just be more and more insurers that start to pull out as less insurance is available, the competition dries up, and prices rise in response,” Wade says.

Between 2012 and 2021, annual investments in the space sector increased from $300 million to $10 billion, according to McKinsey. But if commercial space players can’t get or can’t afford the insurance banks or investors might require to finance new machines, projects may not be able to (literally) get off the ground.

“We're already seeing some insurers say that they're not prepared to insure satellites in LEO. My concern is that if the insurance isn't available, then the finance isn't available, and that really stifles innovation,” Wade warns.

Longer term, however, he still sees “great prospects” thanks to the growing commercialization of space – including a number of space stations and lunar missions.

“We know that space is required in everybody's daily life, and it has an incredibly important role to play,” he says. “It will be increasingly commercial - we just need to get through the next two or three years to get there.”

Grundfos Data Center Solutions

Keep your cool

Efficient water solutions for effective data flow

Meet your efficiency and redundancy goals

Smart pumps offering up to IE5 efficiency, redundant solutions that meet up to 2N+1 redundancy – whether you’re planning a Tier 1 or Tier 4 data center, you can rely on Grundfos to keep your data center servers cool with a 75-year history of innovation and sustainability at the core of our corporate strategy.

Your benefits:

• High-efficiency cooling solutions saving water and energy

• Redundancy meeting up to Tier 4 requirements

• End-to-end partnership process

Will small GEO revitalize the orbit?

Bringing LEO sensibilities to GEO orbits

Small, geosynchronous Earth orbit (GEO) satellites have been an increasingly discussed design trend in the industry, boasting the same regionalized scope as traditional GEO models at a price point reduced by the cost savings of miniaturized technology. It’s a fresh idea at a time when some are worried GEO technologies are being seen as stagnant.

At the Satellite 2024 conference held in Washington DC this March, market research firm Northern Sky Research spoke at length of the compounding acceleration of low-Earth orbit (LEO) connectivity expansion, fueled by SpaceX’s Starlink, with Amazon’s Kuiper hot on their heels. Yet, the analysts were keen to emphasize that the sun was not setting on geostationary satellites.

“The days of 25-30 big GEOs ordered every year are gone,” said Christopher Baugh, founder and CEO of Northern Sky Research.

“GEO will coexist with LEO as a multi-orbit option, but it’s out of its heyday,” added Jose Del Rosario, research director at Northern Sky.

They spoke to wider conversations elsewhere at that conference concerning how GEO was adapting to new market realities. One such development has been the exploration of geosynchronous smallsats, like Boeing’s 702X small GEO satellites, the European Space Agency’s SGEO missions with Hispasat 36W-1, and more recently Astranis’ MicroGEO project and Swissto12’s HummingSat.

While larger geosynchronous powerhouses could aspire to span many transmission wavelengths, multiple beams to serve several applications and customers at once, and various redundancies to ensure continuous operation, small GEO focuses on more specific applications, often for a singular customer, intending to offer the same throughput and range.

“While the LEO satellite constellations offer an impressive amount of broadband capacity with low signal latency, GEOs can offer low-cost connectivity to customers with the lowest cost ground infrastructure,” Mike Kaliski, chief technical officer at Swissto12, a satellite systems developer exploring small GEO, says to DCD.

“The future will be built on integrated networks that combine the best services based on a delivery through the combination of LEO, MEO, and GEO assets.”

Aramis Dodgson Contributor

Modern small GEO

This “small” definition can, and has, been interpreted creatively in the past, though geostationary satellites weighing under a ton are a reasonable benchmark, such as Astranis’ 800kg MicroGEO Omega, which is intended to enable 50 gigabits per second in Ka-band.

Omega is set to be completed in 2025 and launch in 2026. The company is still due to launch four of its original model satellites in 2024, beginning their road to launch 100 MicroGEO satellites by 2030.

“MicroGEO satellites can be built quickly, which means MicroGEOs beat traditional GEOs on price,” Christian Keil, vice president at Astranis, tells DCD. “Their unique, smaller form factor also means they can be dedicated to individual customers, which gives those customers unprecedented flexibility.”

At the Space Symposium this year, Astranis CEO John Gedmark suggested a smaller design was preferable for defense procurers, allowing for a resilient architecture. “No more big, fat, juicy targets,” he said.

The approach appears to apply the proliferated manufacturing sensibilities of the LEO business to GEO, increasing order numbers and speeding up production, though this move has ramifications for the technology’s longevity.

Keil confirmed MicroGEOs were not being built with compatibility for refueling, presumably ruling out upgrading and repair in the same stroke, preferring to launch new satellites every decade, yet another LEO calling card.

With the geosynchronous regime being the traditional satellite servicing market, this move bets against the formation of that emerging economy.

“Small GEO is a relative term,” explains Jean-Luc Froeliger, SVP of space systems at Intelsat.

“Satellites that were built in the 70s and 80s had the same capacity as what we call small GEO today. The technology has progressed so much over the last 50 years that the 70s large GEOs are now the small GEOs of today. With today’s small GEOs we are able to procure and launch satellites that have a similar capacity to the 70s/80s satellites but at a much lower price point.”

Small GEO isn’t a universal economic upgrade, Froeliger adds, but can suit market applications or regional sectors where large investments in GEO don’t make commercial sense, a point Swissto12’s Kaliski agreed upon.

“[Small GEO’s] affordability makes it an option of choice to address regional markets, to replace a large GEO at endof-life, or to offer secure and sovereign connectivity solutions to small and medium-sized states,” he explains.

With many world governments eager to demonstrate space capability in defense in an increasingly advanced and contested era, a competitive price point could prove effective for selling to smaller states with belligerent neighbors, though many of these players have proved most interested in Earth observation over satellite connectivity.

Small GEOs a new trend?

New technological trends can create, and have created, significant change across the satellite industry, although small GEO has been around for over five years while geosynchronous operators have continued producing large, high-power, long-life, and upgradable satellites.

Tracking the popularity and progress of the new direction may depend on whom you ask.

“The days of 25-30 big GEOs ordered every year are gone”
>>Christopher Baugh, CEO and founder of Northern Sky Research

“Orders of traditional large GEO satellites have reduced in the last 10 years to a lower level, while at the same time, we see a large demand for this GEO SmallSat class satellite Swissto12 has pioneered with HummingSat,” Swissto12’s Kaliski told us. “The four satellites we have on order from industry-leading operators such as Intelsat and Viasat/Inmarsat are a testament to the relevance of this new class of GEO satellites.”

HummingSat claims to be a tenth the size of conventional GEO satellites at just over 1.5 cubic meters in volume – loosely half the size of a small car – with 200kg of payload. Though small, this telecommunications payload is supplied with 2kW of power and ought to have an operational life of 15 years.

Swissto12 describes the satellite as serving “a wide range of RF frequencies from L-band to Q/V band … [offering operators] wide-area shaped beams and high-throughput spot beams.” Swissto12 is the first company to sell a GEO SmallSat, its HummingSat model, to established global GEO satellite operators, Kaliski says.


“The complexity of global communications means that we believe many different designs, approaches, and technologies will be needed to continue to meet customer demand in the future,” Mark Dickinson, head of space systems at Viasat, told us.

Viasat seeks a hybrid space and terrestrial strategy that integrates the best functions of many technologies, which is what they saw in Swissto12.

Satellite-based augmentation services (SBAS) and proprietary 3D printing of radio frequency payload technology set Swissto12 apart, in Viasat’s eyes, widening the flexibility to adapt to emerging business cases.

“[These advanced technologies] mean [Swissto12] have the capability to underpin [Viasat’s] critical safety services well into the 2040s,” says Dickinson.

A new multi-orbit ecosystem?

While eager to hedge its bets across investments, Dickinson remains adamant that geosynchronous orbit would never lose relevance.

“GEO is simply more effective and cost-efficient,” he says. “That said, Viasat is not anti-LEO or against any orbit. There are benefits to different orbits for different purposes and operating bands. … We stand on the brink of a satellite revolution, with the anticipated influx of new satellites in LEO, and we are proactively enhancing our GEO capabilities with terrestrial and non-geostationary orbit (NGSO) solutions to stay ahead in service innovation and differentiation.”

The certainty of multi-orbit integrations across satellite regimes appeared more comprehensive than convictions about small GEO.

Intelsat’s Froeliger believes that the best connectivity solution is multi-orbit.

“While LEO solutions have their advantages, including lower latency, GEO still provides the best economic solution in terms of cost per bit,” he says.

“The technology has progressed so much over the last 50 years that the 70s large GEOs are now the small GEOs of today”
>>Jean-Luc Froeliger, SVP of space systems at Intelsat

Swissto12’s Kaliski saw this multi-orbit trend playing out at established GEO operators like SES, Eutelsat, and Telesat: SES already owns one of the largest GEO fleets, and has started complementary medium-Earth orbit (MEO) services through the O3b mPower constellation; Eutelsat owns a large fleet of geostationary satellites and has merged with LEO operator OneWeb to hybridize offerings; and Telesat is another GEO player that is developing its own LEO constellation, Telesat Lightspeed.

“By combining LEO or MEO services, which have high capacity with low signal latency, with GEO, which brings global capacity at the lowest cost, operators can combine the best of both worlds,” Kaliski explains.

Swissto12 is currently focused on assembly, integration, and test activities on HummingSat and hopes to announce upcoming orders after completion.

While consensus appears to be building on the subject of the cost-effectiveness of small GEO, with some operators and manufacturers waiting to see pioneers deliver cost savings and longevity before they jump on the bandwagon, it seems a safer assumption that the future of GEO will play out in concert with LEO, and that multi-orbit networks will become more common. 

IN FOCUS

Founded in 2015, California-based Astranis develops small – around 350kg – geostationary communications satellites. The company’s first MicroGEO, Arcturus, launched in 2023 but experienced a pointing issue with its solar array drive assembly, which motivated a redesign of the associated components for the company’s subsequent Block 2 MicroGEO satellites.

The problem meant the company’s Internet services to Alaska would be delayed, and required support from a UtilitySat satellite to compensate, though the measure wouldn’t match the coverage Arcturus was intended to provide. The machine was set to offer 7.5Gbps of throughput in the Ka-band.

In February 2024, Astranis announced the satellite would move to an undisclosed orbital position where it would spend three months serving Israeli satellite operator Spacecom’s reservation of an orbital slot under international regulatory rules before returning to 163° West.

Keil insisted that Arcturus “did not need saving” through satellite servicing measures, and that it was currently performing commercial missions, and would go on doing so.

The company aims to deploy a UtilitySat – which has Ka, Ku, Q, and V band transponders – as an interim replacement until a permanent one is launched in 2025.

Despite the initial setback, Astranis has a backlog of customers. The company is to launch a dedicated MicroGEO satellite for Argentina in partnership with LATAM ISP Orbith. Thaicom has also ordered a GEO satellite from Astranis to launch in 2025.

It also has plans to launch two satellites over the Philippines in 2024 with local telco Orbits Corp, with an additional two satellites set to launch for Mexican firm Apco Networks. US-based mobile satellite connectivity specialist Anuvu (again for two machines) and Peruvian cellular backhaul provider Andesat have also ordered satellites from Astranis. 

Is your data center earthquake-proof?

Chip fabs and data centers are often located in countries prone to extreme seismic activity, presenting challenges for operators

Between January and April of 2024, two earthquakes measuring 7.6 and 7.2 on the Richter scale hit East Asia, causing billions of dollars of damage and resulting in hundreds of people losing their lives.
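For scale – this is textbook seismology rather than anything the operators quoted below cite – earthquake magnitude is logarithmic, with radiated energy growing roughly as

\[ E \propto 10^{1.5M}, \]

so the magnitude 7.6 event released on the order of \(10^{1.5\times0.4} \approx 4\) times the energy of the 7.2.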

Despite the widespread destruction in Japan and Taiwan, where the earthquakes hit, one thing remained relatively unscathed in both countries: their chip fabs.

Charlotte Trueman Compute, Storage & Networking Editor
Colt’s data center after the 2011 Tohoku earthquake in Japan
Colt’s Seismograph measuring the movement of its data center

This is because, although the fabs are situated in areas that are prone to level 4 and 5 seismic activity, many of the factories had been built with enough structural integrity to withstand such natural disasters.

This is also often the case when it comes to ensuring the structural integrity of data centers, a significant number of which are situated on the West Coast of America and in East Asia, where much of the land sits on fault lines and, as such, the earthquake risk is significant.

According to Ibbi Almufti, principal, risk and resilience at engineering consultancy firm Arup, seismic activity is one of the hardest natural phenomena to protect against. As such, he prefers to use terms such as ‘resilience’ or ‘strengthening’ rather than ‘proofing’ when discussing such building techniques.

Some of these techniques are described by chip maker TSMC on its website, where the company has a webpage outlining what it calls “pioneering antiseismic methodologies.”

Following the 1999 7.3 magnitude Chi-Chi earthquake in Taiwan, the company implemented a series of earthquake protection management plans that surpass the legal requirements of the Taiwanese government.

These include adding seismic anchorage onto all equipment and facilities, installing floating piles at new fabs in Tainan Science Park to decrease seismic amplitude by 25 percent, and appointing 180 earthquake protection guards who are fully trained in seismic knowledge and practices.

As a result, after the most recent round of earthquakes, TSMC reported no major damage to equipment at any of its facilities.

California-based Almufti works with a lot of data center providers and often becomes involved in the process right from the get-go, when companies are still thinking about which sites to build on.

He explains that one of the biggest drivers of site selection currently is power availability, but data center providers are often already very clued up about potential hazards when they approach him to consult on projects, listing things he hadn’t necessarily considered, such as the potential for a train derailment to impact a facility built in the vicinity of railway tracks, or a toxic gas spill.

On the flip side, Almufti says some providers believe that a facility built to meet local building codes will automatically have the level of seismic resilience the site requires. However, he explains that this is not the case as building codes are often only designed to protect the lives of building occupants.

There are still a few occasions where, to pardon the pun, these considerations slip through the cracks. Retroactively strengthening buildings against seismic activity is another service Almufti consults on, and right now he’s helping a data center provider that unwittingly constructed its facility on a fault line and is now trying to minimize any potential damage that could befall the structure in the future.

"With seismic, you have to really focus on the inside guts of the structure, as well as the nonstructural elements and the envelope… it really is the most academically rigorous process that you go through"

“Most of the time, they’re pretty smart about it,” he says. “You would never knowingly build in those types of zones.”

To further assist less experienced clients, Almufti has also helped to develop the Resilience-based Engineering Design Initiative (REDi) guidelines, a framework for building resilience that offers Silver, Gold, and Platinum rating tiers.

Created to help owners, architects, and engineers implement “resilience-based design,” according to its website, the REDi Rating System provides “design and planning criteria to enable owners to resume business operations and provide liveable conditions quickly after a disaster,” such as earthquakes, extreme storms, and flooding.

Designing resilient data centers

While some design and construction techniques come as standard, Mauro Leuce, global head of design and engineering at Colt Data Center Services, says that different regions tend to take different approaches when it comes to seismic strengthening. In Japan, where Colt’s data centers are located, base isolation is the preferred technique, due to the intensity of the energy generated by the fault lines the country sits on.

First used by Colt in 2011, base isolation involves placing flexible bearings or pads made from layers of rubber and lead between the building’s foundations and the structure. If an earthquake were to hit, the base isolators would absorb most of the impact and, therefore, reduce the swaying and shaking of the data center.
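
To see why a softer, isolated base transmits less shaking, here is a minimal single-degree-of-freedom sketch. It is illustrative only: the frequencies and damping ratios are assumed for the example and are not Colt’s design figures.

```python
# Minimal sketch: a building idealized as a single-degree-of-freedom oscillator.
# Base isolation lengthens the natural period (lowers the natural frequency),
# so less of the ground's acceleration is transmitted into the structure above.
# All numbers below are illustrative assumptions, not Colt's design values.
import math

def transmissibility(ground_freq_hz: float, natural_freq_hz: float, damping_ratio: float) -> float:
    """Ratio of structure acceleration to ground acceleration for harmonic shaking."""
    r = ground_freq_hz / natural_freq_hz  # frequency ratio
    num = 1 + (2 * damping_ratio * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * damping_ratio * r) ** 2
    return math.sqrt(num / den)

ground_freq = 2.0  # Hz, an assumed dominant frequency of strong ground shaking

# A stiff, fixed-base building whose natural frequency sits near the shaking frequency
fixed_base = transmissibility(ground_freq, natural_freq_hz=2.5, damping_ratio=0.05)
# The same building on soft rubber-and-lead isolators, which also add damping
isolated = transmissibility(ground_freq, natural_freq_hz=0.4, damping_ratio=0.15)

print(f"fixed base: structure shakes {fixed_base:.1f}x as hard as the ground")
print(f"base isolated: structure shakes {isolated:.2f}x as hard as the ground")
```

With these assumed numbers, the fixed-base case amplifies the shaking roughly 2.7 times, while the isolated case transmits less than a tenth of it, which is exactly the effect the isolators are there to achieve.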

All of Colt’s data centers are now designed with base isolation as standard, and while Leuce notes the technique is widely used across Japan, in the US it is a more recent innovation. By comparison, in Europe, where seismic activity is much rarer, data center providers favor the so-called tie-down technique.

“From a cost perspective, it’s probably easier to isolate the structure so that it will float above the earth while everything else is moving, instead of what is usually done in commercial or residential buildings, where you try to restrain the systems, such as pipework, electrical cables, transformers, generators, anything else,” he says.

“Going around and trying to fix things as strongly as possible against the walls definitely costs more and is also much more time-consuming than simply starting with a design that can be isolated from the ground.”

Leuce explains that when Colt designs the layout of a data center, it ensures the most critical parts, such as the data halls, electrical rooms, and other ancillary rooms required for business continuity, are placed on the isolation base. Other elements, such as generators, which are often designed to withstand an earthquake, can then be placed directly on the ground.

He adds that it’s also important to make sure you don’t have anything heavy suspended above your servers, in case it becomes dislodged during an earthquake and ends up crushing them.

A final technique employed by Colt is the use of dampers – hydraulic devices that dissipate the kinetic energy of seismic events and cushion the impact between structures.

Having previously deployed lead dampers at its first data center in Inzai, Japan, Colt has gone a step further at its most recently built facility in Keihanna, Japan, where it is using an oil damper combined with natural laminated rubber plus a friction pendulum system, a type of base isolation that allows you to damp both vertically and horizontally.

“The reason why we mix the friction pendulum with the oil damper is because with the oil damper, you can actually control the frequency in the harmonics pulsation of the building, depending on the viscosity of the oil, while the friction pendulum does the job of dampening the energy in both directions, so you bring both technologies together,” Leuce explains.

As a result of having base isolation in place, when the 9.1 magnitude Tohoku earthquake hit Japan in March 2011, Colt’s entire data center moved by only 10cm.

Putting racks through their paces

Rubber damper. Source: Colt Data Center Services
Mauro Leuce

It’s all well and good making sure you include base isolation systems, state-of-the-art dampers, and that no heavy pipes are suspended precariously above your servers, but how do you know if that’s enough to protect your data center should an earthquake strike?

When New York was caught off guard by a 4.8 magnitude quake in April, IBM engineer PJ Catalano joked on X, formerly Twitter: “I am happy to report that all 200 mainframes in Poughkeepsie, NY have successfully passed this FREE earthquake test!”

Speaking to DCD after the earthquake, Catalano explains that, to ensure its mainframes survive when disaster does strike, IBM has designed a multi-phased testing approach that all hardware headed for the facility is subjected to before being installed.

“We start with computer simulation so, before we build anything, we have models that we take through simulation to gauge what materials needs stiffening, weight distribution,” he says. “From there we go to a second phase of prototyping so we can test the real materials in the real world.”

Once those stages have been completed, IBM carries out shake table and operational vibration tests. The operational vibration tests ensure the rack and its components will continue to function through events more akin to a high-speed train line or a busy highway in close proximity to a data center.

There’s also a shipping test where IBM simulates its racks being in the back of “an 18-wheeler flying down the highway at 60 miles an hour” because, as Catalano notes, “if it can't handle that, then an earthquake is out of the realm.”

Finally comes the earthquake test, where IBM tests the stiffener and the earthquake kit it sells as an optional extra to customers that operate in areas with significant seismic activity.

“Anytime we design, build, and ship a brand new generation mainframe, like the z16, we go through this whole suite of testing,” Catalano says. “We generally do it once for each hardware release to make sure that the cage, the frames, all the components that were released, go through this set of testing at least once.”

IBM also seeks certification from independent labs to give it additional credibility and prove to its customers that the system is working as it should.
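
As a rough illustration of that gated, multi-phased flow, here is a minimal sketch. The phase order follows the article, but the pass/fail stubs and names are invented for the example and are not IBM’s tooling.

```python
# Minimal sketch of a gated qualification pipeline, loosely modeled on the
# phases described above. Phase names follow the article; the pass/fail
# lambdas are invented stand-ins for real test rigs, not IBM's process.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TestPhase:
    name: str
    run: Callable[[], bool]  # returns True when the hardware passes

def qualify(phases: List[TestPhase]) -> bool:
    """Run each phase in order and stop at the first failure."""
    for phase in phases:
        passed = phase.run()
        print(f"{phase.name}: {'pass' if passed else 'FAIL'}")
        if not passed:
            return False
    return True

pipeline = [
    TestPhase("computer simulation", lambda: True),
    TestPhase("prototype testing", lambda: True),
    TestPhase("operational vibration", lambda: True),
    TestPhase("shipping vibration", lambda: True),
    TestPhase("shake table earthquake test", lambda: True),
]
qualify(pipeline)
```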

Climate change is the new resiliency frontier

While seismic activity remains one of the main disasters organizations want to protect their critical infrastructure against, Almufti said that the extreme weather being experienced as a result of climate change is bringing with it a whole host of new resilience challenges.

While UK-based data center providers probably don’t have seismic proofing near the top of their construction to-do lists, when the country was struck by a record-breaking heatwave in the summer of 2022, Google, Oracle, and London-based Guy’s and St Thomas’ NHS Foundation Trust all experienced data center outages as temperatures soared to a record 40°C (104°F).

Almufti said flooding is the most intuitive natural disaster to protect against as you just have to raise the foundations or put in retention ponds or storm drainage. He also helps operators predict how future heat waves or extreme temperatures might impact their mechanical systems, allowing that to be factored into any building plans.

Additionally, as most data centers are constructed with precast concrete or tilt-up walls containing very few openings, they’re pretty resilient to most strong winds. However, Almufti explains that unless you build a concrete bunker - something he says is possible but very costly - it is very challenging to protect against another common US weather phenomenon, tornadoes.

“Every [technique] is slightly nuanced and ideally, what you're doing is trying to find synergies between the measures so they're co-beneficial,” he says.

But, to end where we began, Almufti reiterates his opening gambit that seismic remains the most difficult hazard to protect against.

“With seismic, you have to really focus on the inside guts of the structure, as well as the non-structural elements and the envelope,” he says. “That's why I'm most excited about it - I love the other stuff too, but seismic really is the most academically rigorous process that you go through.”

PJ Catalano
Ibbi Almufti

Silicon Valley still doesn't understand data centers

There’s a problem at the heart of the AI revolution.

Silicon Valley has long fostered dreamers. People who have played fast and loose with reality and challenged what was thought possible. Sometimes, this has led to revolutionary products and ideas that have transformed our world. Other times, it has bred bubbles, scams, and embarrassing failures.

The push and pull between what is feasible and what is aspirational is core to the culture and mythos of the region, but the Valley is still bound by fundamental reality.

No matter how inspirational a leader, how deep a pocket, or how aggressive a company, “move fast and break things” cannot defeat core constraints of physics or time.

With the AI boom, we may be about to hit those limits. Valley technologists have long espoused the theory of exponential growth, claiming that AI will grow ever larger, ever faster.

To build what they have today, AI developers have sucked up all available high-density compute and funded a dramatic expansion. But their expectations that the data center industry will keep up with exponential curves are fanciful.

As we go to press, the talk of the town is Leopold Aschenbrenner. He has all the hallmarks of a glowing Fortune profile in waiting: He graduated valedictorian of Columbia when he was 19, worked at OpenAI as a super-alignment researcher, and has now started an AGI-focused investment firm.

Aschenbrenner, one would hope, understands AI and what it takes to build it. And yet.

In a recent podcast with Dwarkesh Patel, the researcher pontificated on the future of data centers, talking about the shift to gigawatt-scale campuses we are seeing now, and the rumors of the Microsoft Stargate campus on the horizon.

“By 2030, you get the trillion-dollar cluster using 100 gigawatts, over 20 percent of US electricity production,” he said, adding: “Six months ago, 10GW was the talk of the town. Now, people have moved on.”

This is fantasy. 1GW is still a huge challenge, the rumored 5GW Stargate is far from guaranteed, and anything substantially above that is simply impossible. Power plants, transmission infrastructure, chip fabs, and data centers are multi-year and even multi-decade projects.
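
For a sense of the scale being thrown around, here is a rough back-of-the-envelope check - a sketch, not a forecast. The roughly 4,200TWh figure used for annual US electricity generation is an approximate assumption.

```python
# Rough arithmetic only: what a hypothetical 100GW of always-on compute would
# draw over a year, compared against total US electricity generation.
# The ~4,200TWh/year US generation figure is an approximate assumption.
HOURS_PER_YEAR = 8_760

cluster_gw = 100            # the figure quoted above
us_generation_twh = 4_200   # assumed annual US generation, roughly recent levels

cluster_twh = cluster_gw * HOURS_PER_YEAR / 1_000  # GWh -> TWh
share = cluster_twh / us_generation_twh
print(f"{cluster_twh:.0f}TWh per year, or about {share:.0%} of US generation")
# Prints: 876TWh per year, or about 21% of US generation
```

The arithmetic in the quote roughly holds up; the problem is that the generation, transmission, fabs, and buildings behind such a number cannot appear on that timescale.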

The data center sector knows this all too well, but Aschenbrenner’s comments were accepted at face value, and have been repeated by much of the AI-hype machine.

OpenAI chief Sam Altman, who has long been accused of lying and spinning truths, hasn’t been as ridiculous as Aschenbrenner - but has still overegged the potential speed at which compute can be manufactured, deployed, and powered. He’s also betting on unlimited fusion power becoming real before 2030.

Some hype is natural, but the disconnect between AI bluster and AI construction is growing ever wider. Data center operators must ensure that they don’t get lost in the fog of excitement, and stick to the path of reality. 

The Business of Data Centers

A new training experience created by

Delivering expertly engineered solutions that meet the ever-evolving needs of data centers and building network infrastructure.

• Customer-focused

• Time-tested innovations

• Acclaimed product quality and efficiency

• Industry-leading brands

• Unbeatable support

• Solutions for Critical Power, Physical Infrastructure, and Network Infrastructure

Learn more about Data, Power & Control at: https://www.legrand.us/data-power-and-control

Raritan | Server Technology | Starline | Ortronics | Approved Networks | Legrand Cabinets and Containment
