DCD Magazine #46: The Last Data Center


The Last Data Center

Boat shortages: Cable ships head for rough seas

DE-CIX’s CEO Ivo Ivanov peers into the future

A second look: Why big tech is taking on time

Targeting cybercrime: Raiding data centers

Long-term data storage enters a new epoch

Issue 46 • November 2022 • datacenterdynamics.com

Contents - November 2022

6 News
Loudoun's limits, bankruptcies, fires, saboteurs, and layoffs. Plus the climate crisis claims a data center

14 When everything else is gone
Exploring the frontiers of long-term data storage to find out what remains of the human race

24 The CEO interview
“We see a huge wave of new market participants on platforms like DE-CIX, who are not interested in peering directly,” DE-CIX's CEO says.

28 Light matters
How photonic computing could revolutionize AI and interconnect technology

31 A second look
The dangers of a negative leap second mean that it's time to rethink time

35 Cable ship crisis
Everyone wants submarine cables. Nobody wants to buy the ships that lay them

42 The flamingo universe
Simulating the cosmos requires a lot of RAM.

46 A necessary shutdown
Why we're killing off 2G and 3G, and what it means for 5G

48 Raiding data centers
When law enforcement comes for servers, what happens?

51 Amped up
Ampere talks Arm data centers, and why it's building its own cores

54 Telco consolidation
Behind Vodafone's merger with Three UK

58 Reusing waste compute
How digital boilers could heat homes, and do compute on the side

62 Op-ed: Brace for impact
The economy is turning, and that means tough times ahead

ISSN 2058-4946


From the Editor

When everything else is gone

What will remain of the human race?

We don't know how long our species will survive, and even if we can hold on for thousands more years it is not clear that the knowledge of today will carry on with us.

On the cover, we talk to those building records of our present to bring hope to our future. From a mine at the end of the Earth, to Microsoft's cutting edge research labs, to the edge of space, we travel in search of the next stage of data storage.


Over in a second

While that feature focuses on immense timescales, elsewhere we delve into the impact of a single second. A fight over time itself risks causing serious outages, as we face the first ever negative leap second (p31).

Connection issues

Our vulnerability to perturbations in collective time is the result of our interconnected digital world.

How that came to be, and why it's important to build an open network, are the focus of our interview with the CEO of DE-CIX (p24).

But we may not be able to keep building out as easily as we have in the past, as the economy starts to turn and money becomes less attainable (p62).

Further down the line, we will start to suffer from years of underinvestment in the cable ship industry, just as submarine cables start to boom.

There's no incentive for operators to build more, and that could mean a crisis for those that depend on them (p35).

Darkness

One sector that relies on the growth of the Internet is that of cybercrime.

But nefarious platforms, spammers, and criminal enterprises still need somewhere to call a home.

The world of cybercrime enforcement is not as glamorous as Hollywood might pretend, but it still means the occasional data center raid.

Now, however, the denizens of the dark web are embracing the cloud, causing new challenges for law enforcement (p48).

A different form of darkness can be found in Durham. We travel to the northern English city to see a supercomputer built to study dark matter at immense scales.

You can find the answer to life, the universe, and everything on page 42.

The telco challenge

Like supermassive black holes, telecoms companies must consume all around them.

Our new telco editor, Paul Lipscombe, analyzes the next big merger in the telco sector - Vodafone and Three UK - and looks at what it means for their competitors (p54).

Elsewhere, he charts the end of 2G and 3G services (p46).

10,000: The number of years Microsoft believes that Project Silica will be able to hold data for

Meet the team

Editor-in-Chief Sebastian Moss @SebMoss
Executive Editor Peter Judge @Judgecorp
News Editor Dan Swinhoe @DanSwinhoe
Telecoms Editor Paul Lipscombe
Reporter Georgia Butler
Partner Content Editor Claire Fletcher
Head of Partner Content Graeme Burton @graemeburton
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Designer Eleni Zevgaridou
Head of Sales Erica Baeta
Conference Director, Global Rebecca Davison
Conference Director, NAM Kisandka Moses
Channel Manager Alex Dickins
Channel Manager Emma Brooks
Channel Manager Gabriella Gillett-Perez
Chief Marketing Officer Dan Loosemore

Head Office
DatacenterDynamics, 22 York Buildings, John Adam Street, London, WC2N 6JU

© 2022 Data Centre Dynamics Limited. All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.

Data storage is nearing its limits, unless we embrace new technologies
Dive even deeper: Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color: Events, Intelligence, Debates, Training, Awards, CEEDA.
Sebastian Moss Editor-in-Chief

News

The biggest data center news stories of the last three months

NEWS IN BRIEF

Meta ups server room temperatures to 90 degrees F

The company also reduced humidity to 13 percent. But as many of its large facilities are in drought-stricken areas, some are concerned about local impacts even with wider restoration efforts.

Vertiv CEO Rob Johnson to retire on health grounds

Giordano Albertazzi, currently President, Americas, has been appointed chief operating officer with immediate effect. On January 1, 2023, Albertazzi will succeed Johnson as CEO.

Microsoft installs Ambri high-temperature ‘liquid metal’ batteries as backup

The Ambri batteries use calcium alloy and antimony electrodes, with a molten salt electrolyte. They are also used by TerraScale data centers.

Loudoun County puts limits on data center growth

Loudoun County, Northern Virginia, home of the world’s largest concentration of data centers, has adopted new rules which could limit future development there.

In late September, the Board of Supervisors of the County approved proposals from the county planners, which will limit data center projects in some neighborhoods, particularly along Route 7. The new rules will also require data centers to adopt higher-quality building designs and tougher environmental rules depending on their proximity to housing.

The new zoning rules, based on proposals set out by the planners in July, were adopted by the Board of Supervisors on September 20.

The adopted rules will exclude data centers from certain areas, in particular keeping them off Route 7, where there is no more power infrastructure to support them. Route 28 will remain open to new facilities.

The problem on Route 7 arose because data centers were approved automatically in many locations along the highway, so many were approved that the number exceeded the capacity of Loudoun's power distribution networks. This happened because data centers were approved “by right” in certain types of neighborhood. They were fast-tracked without any hearings with local boards, without the backing of residents - or even, apparently, without reference to whether the utility, Dominion Power, could even supply them with electricity.

To fix this problem, Loudoun is reclassifying neighborhoods, according to the “place types” used by zoning rules.

Data centers will no longer be approved by right in “suburban mixed-use” locations, which make up 887 acres of vacant land in the County. They will also lose automatic approval in areas including “urban transit center,” “urban mixed-use,” “urban employment,” and “suburban neighborhood.”

They will also be barred in “urban transit center” areas, where tall buildings cluster around Metrorail stations - and that place type has been enlarged by also including “urban employment” areas.

Data centers continue to be allowed by right in areas designated “suburban industrial/mineral extraction,” “transition light industrial,” and “transition industrial/mineral extraction.”

Data centers that are permitted will face stricter environmental controls, using “high-quality building design” to make their exteriors less ugly, and limiting the noise made by air-conditioning systems used to cool the servers.

bit.ly/LoudounBreakingPoint

Stack deploys beehives at data center campus in Milan

“Honeybees are responsible for 80 percent of the world’s pollination, but their population is threatened and drastically declining,” Stack said. “To support their preservation, we’ve adopted three hives that will host nearly 200,000 bees.”

EcoDataCenter to reuse heat in fish farms and greenhouses

EcoDataCenter and Wa3rm will focus on large-scale cultivation in fish farms and greenhouses in the projects, though further details haven’t been shared. There are several data centers warming fish farms, including Green Mountain and the White Data Center.

Switch begins work on expanding pyramid campus in Grand Rapids, Michigan

Gaines Township officials approved a site plan amendment in July 2021 for a 312,000-square-foot (29,000 sqm) expansion at Switch’s existing data center, located at the former Steelcase Inc. “pyramid” building in Gaines Township. Construction appears to have begun in October this year.


Compute North files for Chapter 11 bankruptcy, with $500m owed

Cryptomining data center firm Compute North has filed for bankruptcy.

Minnesota-based Compute North Holdings Inc., which provides data center hosting services for cryptocurrency miners and blockchain companies, filed for Chapter 11 bankruptcy in Texas this September.

The company said it owed as much as $500 million to at least 200 creditors. The firm’s CEO Dave Perrill has also stepped down but will remain on the board.

Data center operators worldwide are seeing energy prices rise, while cryptomining firms are facing a double-whammy of low Bitcoin prices and Ether’s recent move away from ‘proof of work’ mining, reducing the need for mining hardware.

Bloomberg reports Compute North faced delays in energizing mining machines for its client Marathon Digital Holdings Inc. in Texas due to local regulations in the state.

Compute North’s 280MW mining facility in Texas was planned to be operational in April, but was held up by the approvals process. By the time it could launch, Bitcoin prices had plummeted, and funding opportunities had dried up.

In February, Compute North raised $385 million, consisting of an $85 million Series C equity round and $300 million in debt financing. However, one of its main backers, Generate Capital, withdrew funding at the end of the year, sparking the bankruptcy process.

Founded in 2017, Compute North operates four US data centers; two in Texas in McCamey (280MW) and Big Spring, and one each in Kearney, Nebraska (100MW), and North Sioux City, South Dakota (&MW). A third 300MW facility was in development in Granbury, Texas. The company’s assets are worth between $100 million and $500 million, according to its Chapter 11 petition.

“The Company has initiated voluntary Chapter 11 proceedings to provide the company with the opportunity to stabilize its business and implement a comprehensive restructuring process that will enable us to continue servicing our customers and partners and make the necessary investments to achieve our strategic objectives,” a spokesperson said.

Both Compass Mining and Marathon Digital are customers of Compute North, and released statements saying the filing shouldn’t affect operations.

However, Marathon has $31.3 million invested in Compute North, and has paid the company about $50 million in security deposits and prepayment.

Separately, rival cryptomining firm Core Scientific said that it was facing bankruptcy this October, warning that it would not pay its debts for the month, or in November. By the end of the year, it could run out of cash.

“In the event of a bankruptcy proceeding or insolvency, or restructuring, holders of the company’s common stock could suffer a total loss of their investment,” the company said. “Substantial doubt exists about the company’s ability to continue as a going concern.”

bit.ly/CryptosGambleFails

Dutch colo firm Datacenter Almere files for bankruptcy

Dutch data center firm Datacenter Almere has filed for bankruptcy.

The Flevoland company was declared bankrupt this month by a court in Midden-Nederland.

Led by Andrew van der Haar, the company offered colocation services and operated a single data center on 153 Randstad 22 in Almere, outside Amsterdam. Subsidiaries ICT Campus Almere and IaaS provider NL Datastore are also subject to the bankruptcy decision.

The court has appointed a trustee, Mr. KCS Meekes, associated with De Advocaten van Van Riet BV, who will take care of the settlement of the bankruptcy. DCD has reached out to van der Haar for comment.

The facility was originally built in 2001 for Sara (now teaching and research cooperative Surf.nl) before being taken over by Vancis in 2008. Interxion/Digital Realty acquired the facility in 2018 and operated the site as Almere ALM1. Datacenter Almere relaunched the facility in 2019.

The company reportedly suffered a major outage in September after a break in a high-voltage cable. A number of companies, including NorthC and Keppel, own and operate data centers in Almere.

bit.ly/DataCenterBankruptcy


Europe could face a winter of mobile network blackouts

Mobile networks across Europe could start going down this winter, as operators warn that the energy crisis may lead to regular power cuts and energy rationing.

This has led to fear within the telecoms industry, notes Reuters, which reports that industry officials are concerned that a challenging winter could put telecoms infrastructure to the test.

The potential power issues have been fueled by Russia’s invasion of Ukraine, with Russia deciding to halt gas supplies via Europe’s key supply route in the wake of this conflict.

Four telecoms executives say that there are currently not enough backup systems in many European countries to handle widespread power cuts. This potentially increases the prospect of mobile phone outages.

This has led some European countries to try and ensure communications can resume as normal even if power cuts end up exhausting backup batteries.

Europe has nearly half a million telecoms towers, most of them with battery backups that can run the mobile antennas for around half an hour.

French electricity distributor Enedis has put forward plans for a ‘worst-case scenario’ that will see power cuts lasting up to two hours, affecting different parts of the country on a rotational basis, notes Reuters. Any blackouts would exclude hospitals, police, and government facilities.

Sources claim that the French government, telecom operators, and Enedis have discussed the issue over the summer.

“Maybe we’ll improve our knowledge on the matter by this winter, but it’s not easy to isolate a mobile antenna (from the rest of the network),” said a French finance ministry official with knowledge of the discussions.

Swedish, German, and Italian telcos have also raised concerns, while Nokia and Ericsson are working with mobile network operators to mitigate the impact of potential power shortages.

UK telecoms company BT recently told the FT that it was not currently seeking more backup power for the winter, but was assessing which of its non-critical hardware could be switched off.

In the data center industry, the likes of Equinix and Digital Realty have increased their diesel reserves in preparation for potential grid and fuel supply issues.

Equinix usually fills its tanks to 60 percent capacity, but is now raising that to 90 percent across many of its sites. Digital Realty said that it had established more priority delivery agreements with its diesel suppliers. One of the UK’s largest equipment rental groups is also stockpiling extra diesel generators in anticipation of high demand going into winter.

“We’ve been doing contingency planning since the war in Ukraine broke out,” Gary Aitkenhead, Equinix’s SVP of Europe, Middle East and Africa operations, said.

“We don’t ever expect to have to run for more than a few hours, or at worst case a day, on diesel but we’re prepared to run for up to a week.”

bit.ly/EuropeanEnergyCrisis

UK government could ration diesel supplies to keep data centers up

The UK government is considering rationing data centers’ access to diesel fuel for backup if the power crisis accelerates in coming months.

Government officials have discussed allocating diesel for data center backup generators if the continuing energy crisis leads to power cuts from the National Grid, according to Bloomberg.

Data centers make up 2.5 percent of the UK's electricity demand, and in recent years have faced increasing opposition from local residents. In West London, it was reported that housebuilding projects were not being allocated power, because data centers had already taken the available capacity.

As well as grid power, data centers need backup, currently nearly always provided by diesel generators. There have been reports of data centers stockpiling diesel for what is expected to be a difficult winter.

“Our members have taken all necessary precautions by filling up their reserves, but we need to see government take necessary measures to ensure a continuous supply in the unlikely event of prolonged blackouts,” Matthew Evans, markets director at technology industry group techUK, told Bloomberg.

Data center operators in turn are coming forward with suggestions to progress the idea of using their backup systems to shift demand off the grid at key times, to avoid the need for blackouts.

bit.ly/DieselSupplyCrunch


Dominion to resume connecting new Loudoun data centers, but capacity still limited

Dominion Energy has found a way to connect some new data centers in Loudoun County, Northern Virginia - but there will still be some that have to face delays till 2026.

In July, the energy utility shocked data center operators in the world’s most concentrated data center hub by announcing that it could not guarantee new facilities would get power through its network of overhead lines, meaning planned connections for some facilities in the East of the County would be delayed for years.

But in late September, Dominion said that it is once again connecting new data centers. However, some planned projects will still be held up for as much as four years, because of constraints in its transmission infrastructure.

Dominion Energy now says it has been working with customers to identify strategies to squeeze out some incremental capacity for new projects, according to Data Center Frontier.

“After completing a comprehensive analysis of our system and accelerating several near-term projects, we’ve been able to lift the temporary pause and resume new data center service connections on an incremental basis,” Dominion’s Aaron Ruby said.

bit.ly/AtLeastThatsSomething

AWS buys 105 back-up diesel generators for new data center in Dublin

In response to concerns over impact on electricity networks in the region

Amazon has applied to Ireland's Environmental Protection Agency (EPA) for an emissions license to install 105 diesel generators at its new Dublin data center site.

First reported by The Times, the application from Amazon requests 105 backup diesel generators and four diesel-powered fire pumps to be located at the data center site in Clonshaugh Business and Technology Park. The units will have the ability to generate a total of 674MW of power. Industrial emissions licenses are required when units are expected to generate more than 50MW of power.

Over the summer Amazon was granted planning permission for two new data centers at the Clonshaugh Business and Technology Park. Located on the site of a former Ricoh building once earmarked for T5’s first European facility, it’s unclear if Amazon had been previously granted a connection by EirGrid before the current moratorium was brought in.

According to The Times, applications for licenses have ‘flooded into the agency,’ and nine of the ten applications received this year are for Amazon’s data centers.

The application comes as a response to concerns over the impact data centers are having on electricity networks in the country. It was reported earlier this year that the share of metered electricity consumed by data centers in Ireland reached 14 percent in 2021.

It was this report that led to the consideration of a country-wide moratorium on data centers. Instead, the government published a revised statement in which it said that data centers should make efficient use of the country's electricity grid by using available capacity and alleviating constraints, increase renewable energy use, be colocated with renewable generation or energy capability, be decarbonized by design, and provide opportunities for community engagement.

There remains a de facto moratorium enforced by the grid in the Dublin area, with EirGrid stating no new grid connection applications for data centers will be accepted in Dublin until 2028.

Interxion (Digital Realty) paused plans for expansion in the area as a result, while Dataplex recently entered voluntary liquidation after EirGrid denied power contracts at two data center sites. Microsoft, AWS, and Equinix have also reportedly paused projects in the area.

Ireland’s Commission for Regulation of Utilities has reportedly encouraged data centers and other large consumers of energy to turn to emergency power generators. Since January of last year, a total of ten facilities — nine operated by Amazon and one by Microsoft — have applied for industrial emissions licenses. K2, Equinix, and Echelon applied for permits in 2020 and 2021, ahead of the CRU decision.

bit.ly/BackUpALotOfSecs

“No credible pathway to 1.5°C in place,” UN warns

The world is heading for a catastrophe as nations fail to reduce carbon emissions, the UN's environment agency has warned.

The agency said that there is ”no credible pathway to 1.5°C in place” for 2030, and even reducing emissions to hit 1.8°C by 2050 seems unlikely.

“This report tells us in cold scientific terms what nature has been telling us all year through deadly floods, storms, and raging fires: we have to stop filling our atmosphere with greenhouse gases, and stop doing it fast,” Inger Andersen, the executive director of the UN Environment Programme (UNEP), said.

“We had our chance to make incremental changes, but that time is over. Only a root-and-branch transformation of our economies and societies can save us from accelerating climate disaster.”

She added “every fraction of a degree matters: to vulnerable communities, to ecosystems, and to every one of us.”

bit.ly/ItWasGoodWhileItLasted


Intel to lay off a “meaningful number” of employees in $10bn spending cut, as chip sales crater

Semiconductor designer and manufacturer Intel plans major layoffs and spending cuts as its internal issues are compounded by a slowing global economy.

Revenues fell 20 percent year-over-year in the last quarter, while net profit fell by 85 percent.

Other than its networks and Edge division, which grew in revenue by 14 percent to $2.3 billion, revenue fell across the board.

PC processors dropped by 17 percent to $8.1bn, its relatively new chip foundry business dropped two percent to $171 million, and its servers and AI chips fell 27 percent to $4.2bn.

But CEO Pat Gelsinger tried to pitch its server chips as a positive sector, saying that its upcoming 4th Generation Xeon Scalable chips “will be our fastest ever Xeon to a million units.”

The next three generations of Xeon server products are all on target, he claimed, and are “making very good milestones.” The 4th Gen ‘Sapphire Rapids’ had been repeatedly delayed due to technical issues.

Irrespective of the economic slowdown, Intel's server business has faced pressure from a resurgent AMD, which has eaten at its once-near-monopoly market share. AMD's Epyc processors have scored an increasing number of wins in the data center and supercomputer space, while issues with Intel's manufacturing process meant that its products fell behind. At the same time, x86 as a whole is under attack from Arm processors, as well as RISC-V and novel architectures.

Intel commanded an 86.1 percent presence in the x86 server chip market in the second quarter, down from 90.5 percent in the year before - and well down from highs of 98 percent.

With both internal and external pressures cutting into sales and profit, Intel said that it would cut costs by $3 billion in fiscal 2023, and up to $10bn by 2025.

“These savings will be realized through multiple initiatives to optimize the business, including portfolio cuts, right-sizing of our support organizations, more stringent cost controls in all aspects of our spending and improved sales and marketing efficiency,” David Zinsner, chief financial officer at Intel, said during an earnings call.

Speaking to Barron's, Zinsner said there would be a “meaningful number” of job losses, echoing earlier reports that the company could be set to fire thousands of employees.

bit.ly/AllLayoffsAreMeaningful

Oracle lays off more than 200 California-based workers

Oracle has laid off more than 200 workers at its Redwood City location.

The job cuts happened last week as the tech giant filed a Worker Adjustment and Retraining Notification in California.

In total, 201 jobs have been cut, said to range across data scientists, application developers, marketing specialists, and software developers.

Oracle previously announced cost-cutting plans to save $1 billion, noting that job cuts would be likely. Earlier this year DCD reported that further job cuts are likely to happen beyond the US, in Europe, India, and Canada.

The layoffs follow Oracle’s $28.3bn acquisition of healthcare IT company Cerner in June.

Oracle still plans to invest in its cloud service, as it ramps up to serve TikTok, which it gained as a customer after then-President Trump tried to ban the Chinese social media platform, causing it to shift to the cloud provider founded by a Trump donor.

bit.ly/CoveringCernerCosts

Microsoft lays off 1,000 employees across company

Microsoft has quietly laid off around 1,000 employees across multiple divisions of the company.

The true scale of the cuts, first reported by Insider, is not known.

Among the divisions impacted by the layoffs are Xbox, Xbox Cloud, Microsoft Strategic Missions and Technology organization, Azure, and Microsoft government. The Mission Expansion cloud government team is among those potentially on the chopping block.

“Like all companies, we evaluate our business priorities on a regular basis, and make structural adjustments accordingly,” Microsoft said in a statement. “We will continue to invest in our business and hire in key growth areas in the year ahead.”

The company announced it would lay off less than one percent of its 180,000-person workforce in July, and has slowed hiring since May.

A number of high-profile executives have also left Microsoft Azure over the past year, including its VP of global infrastructure, VP of cloud infrastructure M&A, CVP of Azure IoT, and VP of global data center construction.

bit.ly/AQuietFiring


Rogers and Shaw merger deal hits a snag

The proposed $20 billion merger of Rogers Communications and Shaw Communications faces uncertainty after mediation with Canada’s competition bureau failed.

Rogers, Shaw, and Quebecor failed to mediate their differences with Canada's competition bureau, according to Reuters.

Rogers first announced its intentions to buy Shaw in March 2021, in a move that would see four Canadian operators consolidate down to three, but the move was met with fierce opposition by Canada’s Competition Bureau over fears it would hurt competition.

In a bid to mitigate competition concerns, Rogers outlined its plans to sell Shaw-owned Freedom Mobile to telecoms and media firm Quebecor, through its subsidiary Videotron, for CAD$2.9bn (US$2.3bn).

However such proposals have not been enough, with Rogers, Shaw, and Quebecor detailing in a joint statement that “the mitigation did not yield a negotiated settlement.”

The companies plan to try to come to a resolution with a tribunal this November.

bit.ly/DealGetsRogered

Police raid SK Group over Kakao fire outage, as Korea's government imposes safety drills on data centers

Korean police have raided the offices of SK Group, and the SK data center which caught fire in October.

Local police confiscated documents relating to the fire, which brought down the KakaoTalk messaging service on Saturday.

KakaoTalk is used by 90 percent of South Koreans, and the outage brought down many finance and travel applications that rely on KakaoTalk IDs. The disruption continued for much of a week and, at the time of publication, the data center is still operating without backup power, so further disruption is possible.

Kakao, which has seen one of its CEOs step down in the wake of the incident, has blamed SK’s Li-ion batteries for the fire.

The police will also be interrogating SK Group officials, according to Yonhap News Agency.

Meanwhile, the country's Ministry for Science and ICT (MSIT) is stepping in to impose disaster management procedures on private companies in Korea. All the country's large data centers will be subject to a government disaster management system, as well as regular inspections and safety drills, according to Korea JoongAng Daily. The measures promise to establish a center for disaster prevention for digital infrastructure, which will address private data centers as well as government facilities.

Nationwide inspections of data centers and network infrastructure will be carried out by MSIT with the National Fire Agency, and an expert group will create a list of measures to improve data center safety.

bit.ly/KakaoGoesKaka

Elon Musk says SpaceX will pay for Ukraine's Starlink service after pushback

SpaceX CEO Elon Musk has reversed calls for the US government to continue to foot the bill for Starlink Internet services in Ukraine.

The world’s richest man tweeted: “The hell with it … even though Starlink is still losing money & other companies are getting billions of taxpayer $, we’ll just keep funding Ukraine govt for free.”

The company had long claimed credit for the tens of thousands of Starlink dishes sent to Ukraine since the outset of Russia’s invasion, and said that it had not received money for the deliveries.

But in April it was revealed that USAID spent millions on buying thousands of terminals, while the governments of France and Poland also acquired thousands of the dishes. Some Ukrainian citizens have also tweeted that they’ve been paying for Starlink out of their own pocket.

But SpaceX was covering some costs, and sent a letter to the Pentagon asking for money, which leaked. This was met by a public outcry, exacerbated by recent Musk comments that echoed the demands of Russia’s government.

bit.ly/AStrangeMan

Peter's Starlink factoid

Researchers have reverse-engineered Starlink's communication network to use it as a global positioning system. But to be as accurate as GPS, SpaceX would have to provide more data.

Saboteurs cut fiber cables in France, in second incident this year

Multiple fiber cables were cut in Southern France, in what appears to be a targeted attack.

The incident is the second in the country this year. Back in April, multiple cables were cut in the country overnight in a coordinated incident.

“We are aware of a major cable cut in the South of France that has impacted major cables with connectivity to Asia, Europe, US, and potentially other parts of the world,” cloud security company Zscaler said in a blog post.

“As a result of the cable cut, customers may see packet loss and or latency for websites and applications which traverse these impacted paths.”

The company said that there were at least three cable cuts - Marseille-Lyon, Marseille-Milano, and Marseille-Barcelona. Police were on-site investigating the first cut, which caused delays to the repairs.

The cuts echo those in April, although it is not known for certain that they are connected. The Spring cuts caused outages across France, impacting 10 Internet and infrastructure companies and several cities.

“The people knew what they were doing,” Michel Combot, the managing director of the French Telecoms Federation, told Wired. “Those were what we call backbone cables that were mostly connecting network service from Paris to other locations in France, in three directions.”

Arthur PB Laudrain, a researcher at the University of Oxford’s department of politics and international relations who has been studying the attacks, added: “It implies a lot of coordination and a few teams.”

No groups or individuals claimed responsibility for the damage, and French police have not announced any arrests related to the damage. It is not known if the cuts are related to Covid conspiracy theories, anti-tech activity, or for another reason.

Fiber optic cables used for Orange’s network were intentionally cut in the Paris region back in 2020.

The week before the cut, submarine cables to the Shetland Islands were severed, causing Internet outages in the remote Scottish archipelago (see right).

While such outages are usually accidental, British tabloids have focused on the presence of a Russian “research ship” near the cut, suggesting that they could have intentionally damaged the cable.

Russia is also accused of sabotaging the Nord Stream gas pipeline in the Baltic Sea, but there is no proof it was involved in the Shetland cut.

bit.ly/WillThereBeAThird

Shetland outages caused by submarine cable breakages

The Shetland Islands experienced a major outage that affected landline, Internet, and mobile services.

The outage at the archipelago in the Northern Isles of Scotland was caused by a breakage in the SHEFA-2 submarine cable. The cause of the break hasn't been disclosed.

BT blamed the outages on breakages to a 'third-party cable that connects the Shetland', which is 100 miles off the North coast of Scotland.

The police declared the outages a major incident, while the MP for Orkney and Shetland Alistair Carmichael said that the damage had caused a “catastrophic impact.”

BT Group said in a statement: “Engineers are working to divert services via other routes as soon as possible and we’ll provide further updates. Our external subsea provider is also looking to restore their link quickly.”

SHEFA-2 was initially deployed in 2007.

bit.ly/OhShetNo

Twitter data center brought down by California’s extreme heat wave

A Twitter data center was brought offline by California’s extreme heat wave, as equipment shut down during record temperatures.

The company avoided an outage by transferring its workloads to its data centers in Atlanta and Portland, but said that if either of them had failed, the social media website would have gone down too.

“On September 5th, Twitter experienced the loss of its Sacramento (SMF) data center region due to extreme weather. The unprecedented event resulted in the total shutdown of physical equipment in SMF,” Carrie Fernandez, Twitter’s VP of engineering, said in an internal message seen by CNN.

“All production changes, including deployments and releases to mobile platforms, are blocked with the exception of those changes required to address service continuity or other urgent operational needs.”

The outage occurred before Elon Musk acquired the site.

bit.ly/WelcomeToOurFuture


The last data center

Long-term data storage enters a new epoch

Images by Sebastian Moss

Buried deep in a mountain at the edge of civilization there lies what may end up being humanity's last message.

To get there we traveled over permafrost and up a steep passage, past signs warning of polar bears ahead. Then we descended into the dark mines, our headlamps illuminating falling ice crystals disturbed by our presence, bathing us in glittering and ephemeral showers.

It's not clear how long we traveled down the shaft; time moves differently underground. Here in this Svalbard mountain, it is measured in eons, not hours.

We go past the Global Seed Vault, a backup facility for the world's seeds in case of disaster, and journey further downwards. At last, we come to a door, glowing in the pitch-black. Emblazoned on it are the words "Arctic World Archive: Protecting World Memory."

Before we talk about what's behind that door, we should understand the two great data challenges it hopes to solve. It is joined by dozens of startups, researchers, and even a trillion-dollar corporation in competing to figure out the future of data.

One challenge is philosophical: that of our death and destruction, and what we leave behind. The other is more immediate: the growing hordes of data, which threaten to overflow our current systems and leave us unable to keep critical data economically.

The beginning of recorded knowledge

The story of data is an ancient one. Some 73,000 years ago in what is now South Africa, an early human picked up a piece of ocher and scratched a symbol into a shard of stone, in what is our earliest recorded piece of human artwork.

It took the majority of our species' history to get to written records, with the Kish tablet of 3500–3200 BC, where humans etched pictographic inscriptions into limestone. Even then, it took thousands of years to advance to clay tablets, and still further to get to papyrus, parchment, and finally paper.

Most of what happened in the world was not recorded. Of what was, the majority has been lost in wars, fires, and through institutional decay, never to be recovered. Our understanding of ourselves and our past is told through what little survived, providing a murky glimpse that is deeply flawed and relies on the skewed records of kings and emperors.

Now, things are different. We are flooded with data, from individuals, corporations, and machines themselves. But we keep that data primarily on hard drives and solid state drives, which last mere decades if kept unused in ideal conditions, and just a handful of years if actively run.

Other common storage platforms are magnetic tape and optical discs, which themselves come in multiple formats of varying density and lifespan, but are often used for ‘cold’ longer-term storage.

All have their uses and individual benefits and drawbacks, but the simple fact is that if we stopped transferring data to new equipment, nearly all of it would be gone before the century is out.

"The only written records of our time would be the embossing on stainless steel cooking pots saying 'made in China' and probably the company logos on ceramics," Memory of Mankind (MoM) founder Martin Kunze explained.

Kunze is one of a select few hoping to prevent such a tragic loss for our future. To do so, he is looking to our past.

A collection of memories

"I studied art with a focus on silicate technologies and ceramics," he said. "The idea of using ceramics as a data carrier is not new, it's 5,000 years old."

Like the archive in Svalbard, and the Dead Sea Scrolls of the past, he has also turned to depositing data underground as a method of long-term storage.

2km deep, inside the world's oldest salt mine beneath the Plassen mountain in Hallstatt, Austria, rows of neatly organized ceramic tiles attempt to provide a snapshot of our world.

The most immediately discernable tiles are readable to the human eye - with words and images printed onto them at 300 dpi resolution, similar to a normal color printer.

Less visually exciting at a distance, but perhaps far more important, are ceramic microfilm plates. Kunze turned to physical vapor deposition, a method of vacuum deposition that produces thin films and coatings on substrates such as metals, and then laser etches data at five lines per millimeter. This gives around 500 times the density of the original plates - which will be used to store 1,000 of the world’s most important books.

This, in turn, sparked a government-funded project to build a femtosecond laser, which could write onto even thinner materials such as glass-ceramic. "It's very early days, but we have proved that it's possible to write and read 10 gigabytes per second" at much higher density, he said.

As for its lifetime, it should "far exceed the existence of our Solar System, so you could say it's eternal," he said. The technology is being developed by a company Kunze cofounded, Cerabyte, currently still in stealth mode.

Cerabyte does not expect to produce the tech on its own, and has turned to Sony - which has an optical disc factory in Austria that is slowly declining with the death of the format - to potentially develop "a minimum viable product that we aim to have in one and a half years," he said.

He's far from alone in trying to reinvent the data storage landscape. While Kunze turned to the past for inspiration, others have gone for an approach that appears ripped from science fiction.

The data soup

"I guess you could say we're in the business of trying to build a bio-computer," Dave Turek, chief technology officer at Catalog, said. "We're manipulating DNA for the purpose of both storage and compute and making some real progress here."

Turek knows all about the intricacies of traditional computing - the last time we spoke, it was about a supercomputer that was then the world's most powerful, a project he spearheaded during his nearly 23 years at IBM.

Wetware computing is a new avenue. "I'm not a molecular biologist, so we're on even ground here," he joked before launching into a dense explanation of DNA.

The classic double helix construct of DNA is made of ladders formed of just four bases: Adenine, Guanine, Cytosine, and Thymine. From that simple starting point come all the sequences of bases that make every living thing unique. "It's biology's way of encoding information," Turek said.

Human DNA is about 3.5 billion bases long, containing all that we are, and hinting at the storage potential that could be harnessed. It's not a new idea: Early attempts date back to 1988, while American geneticist George Church encoded a book into DNA in 2012.

Those approaches took each of the letters A, G, C, and T, and assigned bit values to them, potentially allowing incredible data storage density. "Immediately, people started seeing DNA as the remedy to the overflow of information created in modern society," Turek said. "And they're completely wrong."
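That early mapping is simple enough to sketch. The pairing below (00 to A, 01 to C, 10 to G, 11 to T) is arbitrary and not any particular project's published scheme; real systems also layer error correction and addressing on top:

```python
# Illustrative sketch of the naive "two bits per base" encoding described above.
# The mapping is arbitrary; real DNA storage schemes add error correction,
# addressing, and chemistry-aware constraints that are omitted here.

BITS_TO_BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn each byte into four bases, most significant bit pair first."""
    return "".join(BITS_TO_BASE[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def decode(strand: str) -> bytes:
    """Reverse the mapping: every four bases become one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BASE_TO_BITS[base]
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    strand = encode(b"DCD")
    print(strand)                 # "CACACAATCACA" - three bytes become 12 bases
    assert decode(strand) == b"DCD"
```

The mapping itself is trivial; the problem Turek goes on to describe is the write path, where every appended base costs around 30 seconds of synthesis.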

Catalog believes that those researchers, as well as some rival DNA storage companies, have made a crucial error. "You then have to solve a fundamental problem, which is that every time you add another base to your synthetic piece of DNA, it takes 30 seconds. So if the cost is 30 seconds for every two bits of information, that's not going to work very well."

The company has decided to scale back and not work at the base pair level, instead constructing an alphabet composed of small snippets of DNA. Catalog now has 100 of these oligonucleotides, as they are known, with which it can create data by connecting them together in what is called ligation.

"And the machine that we invented and developed automates that process, which is a big deal," Turek said. "It uses inkjet printheads that contain these oligonucleotides. And each reservoir is unique from every other one. So that each of the nozzles can be instructed to fire a particular snippet of DNA out of my alphabet."

The 'Shannon' machine has three thousand nozzles that deposit DNA ink drops at the picoliter level, 500,000 times a second. These mix together to create a long string of data-holding DNA. "The way it's currently configured, I can create a trillion unique molecules in a day," he said.

This can then be read with a DNA sequencer, with the company currently using Oxford Nanopore machines. These are much slower than the writing machines - "if I take 10 minutes to write a whole bunch of data, it might take me a week to do all the decoding."

This is because the genome industry has prioritized accuracy at the base level over speed, at a fidelity that Catalog doesn't actually need. "We've got to resolve the issue of rapid decoding in parallel to the velocity with which we can write," he said.

"We have some partnerships established to begin to try to do some real innovation on the read side as well. We have ideas today that we think could easily generate two orders of magnitude improvement in speed."
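Catalog has not published its encoding scheme, but the general shift it describes - writing with a fixed library of premade snippets rather than synthesizing one base at a time - can be sketched as a toy model. Everything below (the 100-symbol alphabet, the base-100 packing) is invented purely for illustration:

```python
# Toy illustration of the "alphabet of premade snippets" idea - NOT Catalog's
# actual scheme. Instead of building DNA base by base, imagine a library of 100
# premade oligonucleotides, each identified by a number. Writing then means
# selecting which snippets to ligate together, so the per-symbol cost is a
# selection from inventory rather than a 30-second synthesis step.

ALPHABET_SIZE = 100  # the article says Catalog works with around 100 oligonucleotides

def encode(data: bytes) -> list[int]:
    """Express the payload as a sequence of snippet IDs (base-100 digits)."""
    number = int.from_bytes(data, "big")
    ids = []
    while number:
        ids.append(number % ALPHABET_SIZE)
        number //= ALPHABET_SIZE
    return ids or [0]

def decode(ids: list[int], length: int) -> bytes:
    """Reverse the base-100 expansion back into the original bytes."""
    number = 0
    for snippet_id in reversed(ids):
        number = number * ALPHABET_SIZE + snippet_id
    return number.to_bytes(length, "big")

if __name__ == "__main__":
    payload = b"hello world"
    snippet_ids = encode(payload)   # each ID names a premade snippet to be ligated
    assert decode(snippet_ids, len(payload)) == payload
    print(f"{len(payload)} bytes -> {len(snippet_ids)} snippets from a 100-symbol alphabet")
```

The point of the sketch is the cost model, not the chemistry: every symbol is drawn from an existing inventory, which is what lets the Shannon machine write at inkjet speeds.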

Even at the slightly lower storage density of oligonucleotides, the amount that DNA can theoretically record is mind-blowing. "You could put all the information in the world in this," Turek said, holding up a Pepsi can. Such a feat is still a way off, he admitted, but the company was able to store all of Wikipedia in a few droplets.

As for the long-term storage abilities, the death of a mammoth some 1.2 million years ago gives an insight into its longevity. Last year, researchers successfully sequenced the previously unknown Krestovka mammoth, and its body was not exactly kept in ideal conditions.

Perhaps more enticing still is another idea being cooked up at Catalog - using DNA for compute. The concept is in its early stages, and only works for very specific types of compute, but would still be a profound advancement in computing.

“We're doing one case that is inspired by a real potential user,” Turek said. “This is not a theoretical abstract academic exercise that originated in a textbook. And we're using that customer to guide us in terms of the nature of the algorithms, and the other kinds of things that need to manifest themselves in DNA.”

It'll be a long time before such efforts bear fruit, but Catalog is hopeful that the opportunity is vast. “If you want to build a parallel computer, which I did for 25 years, and you want to add an incremental unit of computing, it typically has a pretty hefty cost to it, and consumes a lot of energy and space,” Turek said.

In DNA, he argues, “I can make it parallel cheaply to an extraordinarily large degree. If you say ‘I want to run this instruction 100,000 times in parallel,’ I would come back to you and say, ‘100,000? Why not 10 million? Why not 10 trillion?’”

The idea is different, to say the least. We’re used to thinking about the computing world in terms of electricity and physics, in bits and bytes. “But look at how people are moving on from von Neumann architectures, and beginning to create quantum computers,” Turek countered.

“We think that we're in the right place at the right time, because there is a de facto acceptance of alternative architectures in the computing world. However strange you think this might be, the guys in the quantum world are stranger - and they can still sell their computers.”

Nonetheless, Catalog still has a long way to go before it can convince companies to put their data lakes into data soups, and embrace unconventional storage solutions. Here, rival Microsoft has an advantage - it can deploy its long-term storage concept in its data centers, and rely on a robust sales network to convince users that its approach is the way to go.

Halls of glass

In Cambridge, the hyperscaler's researchers have been experimenting with fused silica, where a femtosecond laser encodes data in glass by creating layers of three-dimensional nanoscale gratings and deformations at various depths and angles.

“To read it, we image it with a microscope,” Microsoft Research’s distinguished engineer and deputy lab director Dr. Ant Rowstron explained. “It's got these layers in it, and you focus on a layer. And then we take several images concurrently. We don’t spin it, we read an entire sector at a time.”

Project Silica was set up by Rowstron and Microsoft after realizing that conventional storage was set to hit a bottleneck. “We began from the ground up, asking what storage should look like,” he recalled, with Microsoft deciding to build upon silica data storage research from the University of Southampton.

It has gone through multiple iterations over the past few years, slowly getting denser as well as easier to read and write. “If we were to fill our 12-centimeter by 12-centimeter reference platter entirely with data, we’d be at around five terabytes,” Rowstron said.
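Taking just the two figures Rowstron quotes, the implied areal density is a back-of-the-envelope calculation (ignoring margins, metadata, or anything else on the platter):

```python
# Rough areal-density arithmetic from the figures quoted above (illustrative only).
platter_side_cm = 12
capacity_tb = 5

area_cm2 = platter_side_cm ** 2                      # 144 square centimeters
density_gb_per_cm2 = capacity_tb * 1000 / area_cm2
print(f"~{density_gb_per_cm2:.0f} GB per square centimeter")  # roughly 35 GB/cm^2
```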


Writing is still slow and expensive, requiring high-powered lasers to accurately etch the glass in just the right part. Critics, including some working on the other projects in this piece, worry that there are fundamental limits to how fast you can pump energy into the glass without causing issues.

“It's been a lot of work,” Rowstron admitted. “When we first saw the technology at Southampton, it was taking hundreds of pulses to write data into glass. But we’ve been working on how to form these structures with a very, very small number of pulses.”

He declined to disclose exactly how far the project had come, but added “if you compare it to the state of the art, we were significantly below [that number of pulses]. You can think of it as dollars per megabyte writing, and I would say the technology is now in a good spot.”

Microsoft is also thinking about how it would be read in a data center. “We have turned to warehouse-style robotics,” Rowstron said. “And we have these little robots that operate independently, they can move up and down and along the structure in this crabbing motion, which is pretty cool.”

On one end there would be a machine writing the data onto the blocks of glass, on the other a reader ready to uncover what is within each block. Other than the robots and the writer, it wouldn't need any power, and it wouldn't need any cooling.

That means that the first data centers Silica will inhabit will be massively overengineered. “I guess one day we'll end up with buildings or data centers dedicated to just storing that preserved glass,” Rowstron said.

That glass can last around 10,000 years. “No one's asked us to go further,” Rowstron said. “There are things you could trade-off - you could trade density for lifetime and things like that. But you've got to remember our goal is to get a technology that will allow us to use this in our data centers. No data center is going to exist for 10,000 years.”

Others could go further, should there be demand. The original Southampton work found that its much-slower-to-write silica could last 13.8 billion years at temperatures of up to 190°C (375°F). The researchers there stored the Universal Declaration of Human Rights, Newton's Opticks, the Magna Carta, and the King James Bible on small discs.

“My hope is that in 200 years’ time there will be a new storage technology that is even more efficient, even denser, even longer life, and people are going to say ‘we don't need to use glass anymore,’” Rowstron said. “But they’ll move formats because they want to, not because they have to.”

Rowstron believes Silica will prove useful for both major challenges of data storage. "You want to make sure that whatever else happens, data from the world is not lost," he said. But the tech is naturally focused on the more immediately pressing concerns faced by businesses.

"Hard drives are languishing, we've had so little capacity increases in the last five years; tape is suffering," he said. "There is more and more data being produced, and trying to store that sustainably is a challenge for humanity."

Cold storage that doesn't require constant power would be a huge boost for the environment, as would moving away from rare earth metals such as those found in HDDs (of which there may not be enough to meet future demand). It is also much cheaper over the longer term, when all you have to worry about is where to put the glass.

The guardians of knowledge

In popular culture, the Library of Alexandria - one of the greatest repositories of knowledge in antiquity - was lost to the flames, its endless rows of papyrus consumed by fire. Scientific theories, fantastical tales, and bureaucratic minutiae became embers in a matter of hours.

But this is likely not true, Richard Ovenden, the head of the Bodleian Library, said. Its death was slow, as papyrus became less relevant with new technology - parchment - and no transition was made, unlike at the Library of Pergamum.

More profound was an institutional decay, as a lack of management, investment, and care led to its tragic demise.

"I think we are in a moment in time where that risk is high again," Ovenden, author of Burning the Books: A History of Knowledge Under Attack, said. "You certainly see that with the failure of public libraries in many Western countries."

Our story on how we store data for years to come has focused on the technology, but it's important not to forget the humans working to protect and share knowledge.

Librarians and archivists have passed the torch of knowledge from one generation to the next, ensuring the light of humanity does not go out.

"Throughout history, there have been moments of destruction of knowledge, but they've often been matched with moments of renewal or of rescue," Ovenden said.

"That certainly is the case today, both in terms of communities rallying around their libraries to protect them, and in libraries' ability to adapt to the various challenges posed by the big tech companies today."

We live in a strange time where we have more access to information than ever, but it is through the lens of corporations who put profit over fighting misinformation or loss. "Preservation is not in Google's mission," Ovenden said.

"We need places on the Internet where information can be relied on, and I think many of those are libraries and archives that are not profit-making companies."

Preserving the creations of corporations, with their platforms and proprietary information, is also a difficulty.

"As those preserved worlds become much more complex and exist much more in real-time, that task of preservation becomes much more complex and philosophically challenging, as well as financially challenging for organizations," he said.

"There should be a link with regulation and taxation on the profits of the major technology companies in order to allow third-party preservation to take place by not-for-profit preservation organizations."


Glass is also much cheaper over the longer term, when all you have to worry about is where to put it.

The technology caught the eye of Guy Holmes, CEO of Tape Ark, an Australian company focused on moving aging tape libraries to the cloud. "I've actually got it just sitting here in my office," he said, grabbing what at first looked like a clear square of glass, but revealed rows of microscopic etchings when a light was shined into it.

"It's very early days, but it seems to have legs internally there, and it appears to be picking up its density," he said. "I spent a lot of time with the guys at the program just talking through density, access times, access frequency, etc. Customers tend to do one restore per week at the moment based on backups. As the tapes age, the number of access requests reduce significantly.

“We're finding there are these archives of infinite retention or tapes that have never been accessed since they were created.”

His company remains excited about the technology, but is currently focused on the more immediate challenge of simply getting data off of antiquated tape systems and onto the cloud, including Microsoft Azure. There, it might even go back on tape, but first Holmes recommends trying to see if there is hidden value in the data.

"The number of people that don't even know what's on their tapes is pretty startling," he said. "On there could be a cure for a disease, but nobody knows. So we get pretty excited by some of the projects we do."

Other efforts have immediate, clear value. "We got to work with Steven Spielberg on six petabytes of Holocaust survivor videos, which needed long-term preservation. It needed immutability so that nobody could go and do deep fakes on a Holocaust video and say that it didn't occur," he said. The work required moving critical data off of both tape and film, in an effort to give it a longer life.

With film, we come back to the door in Svalbard. The company behind the project has its roots in the cinema industry, selling film projectors and digital light modulation technologies to Hollywood and Bollywood.

“And then that movie Avatar came out, and there was a super fast transition from a film-based world to digital projector-based world,” Piql CEO Rune Bjerkestrand said.

The return of film

While film appeared to be on its way out, the company was convinced that there was still something to the medium. Piql studied different types of film from around the world, hoping to find one that would last a long time, and store a lot of data. Some last up to 500 years, but don’t have much density, and are dangerously flammable.

“So we embarked on developing our own film, Piql Film, which is a nano silver halide film on a polyester base,” Bjerkestrand said.

The company takes binary code and converts it into grey pixels, for a total capacity of 120GB per roll of film. Under good conditions, that film can last around 1,000 years, Piql claims.
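Piql’s actual frame format, pixel depth, and error correction are not public, but the general idea - binary data rendered as grey levels on film, with each pixel carrying a couple of bits - can be sketched in a few lines. The four-level mapping and helper names below are assumptions for illustration, not Piql’s specification:

```python
# Illustrative only: map bytes to four grey levels (two bits per pixel) and back.
# Piql's real encoding, density, and error correction are proprietary and differ.

def bytes_to_grey_pixels(data: bytes) -> list[int]:
    """Turn each byte into four 2-bit symbols, rendered as grey levels 0-255."""
    levels = (0, 85, 170, 255)            # four shades of grey = 2 bits per pixel
    pixels = []
    for byte in data:
        for shift in (6, 4, 2, 0):        # most significant bit pair first
            pixels.append(levels[(byte >> shift) & 0b11])
    return pixels

def grey_pixels_to_bytes(pixels: list[int]) -> bytes:
    """Invert the mapping: four grey pixels back into one byte."""
    inverse = {0: 0, 85: 1, 170: 2, 255: 3}
    out = bytearray()
    for i in range(0, len(pixels), 4):
        byte = 0
        for level in pixels[i:i + 4]:
            byte = (byte << 2) | inverse[level]
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    payload = b"A message for the next millennium"
    assert grey_pixels_to_bytes(bytes_to_grey_pixels(payload)) == payload
```

Under this simplified two-bit scheme, the 120GB-per-roll figure would imply close to half a trillion pixels spread across the frames of a single reel.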

The company has split its data archiving in two. One is more traditional, where it works with companies like Yotta in India to convert data to film and store them in commercial data centers around the world - with Piql highlighting its use against ransomware or data center disasters.

Then there’s the Arctic World Archive, pitched as a more humanitarian mission to preserve crucial data for future generations.

Due to its distance from other land masses, the fact that it has been declared demilitarized by 42 nations, and its lack of valuable resources, the hope is that Svalbard is unlikely to be nuked in any future conflicts. However, just like everywhere else on the planet, the Norwegian archipelago cannot claim total safety - it is one of the fastest-warming places in the world due to climate change, and the territory has faced increased aggression from Russia.

Still, the land is remote and its empty mines are of little use to would-be invaders. At the depositing ceremony we attended, works of art from national archives around the world were placed in storage (and images of an Indian couple’s wedding, who paid to have “their love recorded for eternity”). Large reels of film are carefully vacuum sealed and then placed in the shipping container 300m underground, nestled deep in the permafrost.

Anyone can pay to store their data at the archive, but it is primarily finding business with governments and public bodies. Microsoft’s software collaboration platform GitHub also stored 6,000 software repositories on the site (a backup is also stored at the Bodleian library, see box out).

“It's for people that want to bring their valuable, irreplaceable information into the future to the next generation,” Bjerkestrand said, but added that the company was not planning to be a curator in and of itself. “That has been a serious discussion, and we came to the conclusion, that no, why should we have an opinion on what's worth bringing into the future?”

When pressed on how the AWA will last 1,000 years or more when tied to a corporation, Bjerkestrand said that the company planned to spin it off as a nonprofit foundation.

Currently, however, people pay for storage for up to 100 years, after which the data is sent back to them if they don’t keep paying. In an eventuality where the institution that first contracted Piql collapsed, that could cause problems: "They need to make sure that somebody gets the right to the reel,” he said.

A foundation, on the other hand, would have a longer-term focus. “Such a foundation would have interest across the world for organizations to support, it's basically supporting world memory to survive into the future,” Bjerkestrand said. “So I think it's a good cause that you could get sponsors, donors, and supporters for.”

When thinking about scale in that way, you cannot assume that future generations will be able to understand how data storage formats work. Here, Piql has an advantage: As it’s film, you can simply hold it up to the light, and the first few frames are pictures of how to access the information.

That data then needs to be able to be readable with what is available on the roll. “We have a fundamental principle that it should be self-contained,” Bjerkestrand said. “We don't compromise that it should be open source license and free to retrieve the data with the tools that are on the data. We had scenarios where we could do more data on the film, but then it would be too complex to read.”

Simply recording reality and putting it on one of the above media doesn’t work if future civilizations can’t understand it, posing a critical challenge for much of the data of our day.

The death of tools

“My biggest concern is the loss of knowledge of the software that's needed to quickly interpret digitized content,” Vint Cerf, TCP/IP co-creator, said.

“An increasing amount of digital content that we create was made by software, which is needed to correctly understand, render, and interact with data, like with spreadsheets and video games where you actually need a piece of software, plus a bunch of data in order to exercise it. If you don't have that software running anymore on the platforms that are available 100 years from now, then you won't be able to do that.”

Cerf, known as one of the fathers of the Internet, told DCD that he was “worried about the kinds of software that's needed to interact with databases, for instance, timesheets and other kinds of complex objects, where we may not have as widespread implementations available, some of them may even be proprietary.”

Fellow Internet Hall of Fame member Brewster Kahle shares the concern about specific pieces of software and data being proprietary or in the hands of corporations.

"If you look at the history of libraries, they are destroyed or they're strangled such that they're left irrelevant,” he said. “And that used to be by king and churches, but these days it is governments and corporations.”

As one of the developers of the World Wide Web precursor the WAIS system, Kahle found himself at an inflection point of humanity - where data was set to be shared and accessible by the world, but was transient and easily lost. This, he hoped, gave an opportunity for a new kind of library.

"As we're coming to a change in media type, can we go and start a library, right away?" he wondered, launching the nonprofit Internet Archive, which tries to create a long-term copy of websites, music, movies, books, and more.

"The goal of the Internet Archive is to try to build the Library of Alexandria for the digital age," he said. And, so far, the effort has been wildly successful: "We're a small organization, we're $20-25 million a year in operational costs, and yet we're the 300th most popular website in the world."

But its continued success is threatened by a changing world.

“I could only start the Internet Archive after we had gotten the Internet and the World Wide Web to really work, both are open systems. So if we go into a period where the idea of public education or universal access to all knowledge starts to eclipse, we're in trouble,” Kahle said.

“We see that now within corporate environments, and we're starting to see it in government environments, whether it's banning books or where you can pressure organizations to do things without going through the rule of law, but you go through the rule of contract.”

Layers of control

As the web becomes more centralized and in the hands of a few cloud providers, their terms of service risk controlling which information lasts long enough to be stored for greater timescales. At the same time, as systems get built on top of cloud providers, or platforms like Facebook, storing them would require also recording all that backend software that is not shared.

Kahle is less worried about the longevity of data storage devices, and more about ensuring immediate access in an open world.

“100 years ago, microfilm was a new technology, and it was greeted with this fanfare that we'd be able to make it available so that people in rural areas could have access to information just like the people in big universities,” he said. “Well, that didn't really come about, it ended up just being used to just reinforce the power structures and the publishers that existed at the time.”

The question of world memory is less about the medium it is stored on, and more about how it is used when it is on that medium, he argued. “Microfilm may last 500 years… if you don't throw it away. But it turns out, people will just throw it away even if it hasn't been copied forward.

“We need not just formats that will last a long time, we need to keep the material in conversation, in use so that people will continue to love it and keep it going.” 

Our obituary

We don’t know when humanity will end. It could be soon, with climate change, nuclear war, or another pandemic. Perhaps we will survive, eventually spreading to distant stars.

When we do, we may come across a golden disc floating through space, carrying a message from our past. Or, possibly, someone else might come across it.

“There were two audiences we designed for - one was the extraterrestrial audience, and that was the one that I was most concerned with, and the other was the message to ourselves,” Jon Lomberg, NASA's design director for the Golden Record, recalled.

“In a sense, it was a message to ourselves, as well as a message to extraterrestrials saying this is a snapshot that one group of people thought would be a good capture of the Earth at this point in time.”

While destined for the stars aboard the Voyager spacecraft - which are the furthest man-made objects from our planet - the team had to contend with very terrestrial challenges.

First, there was copyright, with some songs not allowed on the golden vinyl.

Then there were issues with nudity, after the earlier Pioneer plaque which included a line drawing of a naked man and woman drew criticism from the more prudish for showing a penis, and those on the other end of the spectrum for the censorship of female genitalia.

Lomberg’s anatomically correct drawings of a man and woman were not included on the Golden Record.

Another image that was not included was that of an atomic bomb or mushroom cloud, despite the pervading threat of nuclear holocaust when the vessels launched in 1977. “Carl [Sagan] didn't want it to seem like a threat,” Lomberg said. “He didn't want anything on record that could seem like ‘if you mess with us, this is what we could do.’ He wanted to greet the cosmos with open arms.”

The discs contain music from around the world (which is available online for those curious), as well as different languages, all of which will last an incomprehensibly long time. The side facing outwards is covered by a box that “lasts about a billion years, and then it's about another billion years to erode the outward facing side,” Lomberg said. “The inward-facing side lasts a lot longer - they think up into the trillions of years."

Should it be found by another species billions upon billions of years from now, it will be our last message to the cosmos. And, by dint of data storage limitations and intentional curation, it will be an imperfect one.

But Lomberg is happy with that: “In a sense, this is our obituary; let's be remembered for Mozart, not Hitler. We'll likely be done in by our own flaws, our own shortcomings may well destroy us, and that's punishment enough. But that wasn't the whole story of us.”

Next-Gen Networks Supplement

INSIDE: Connecting the world

Building light into chips - Can we use photons for compute and networking?

How peering went global - DE-CIX’s new CEO reflects on the growth of interconnection

A second look - Ending the leap second and saving time from itself

The networking challenge

You know the story. The amount of data the world produces is growing exponentially. More importantly, the data is created from more sources than ever before, and then has to travel to yet more end points than ever.

Gotta connect them all

The story of how the data center sector linked up cannot be told without looking at DE-CIX (p24).

We talk to the CEO of the German Internet exchange company about how it became one of the key lynchpins of our modern world.

“When I’m asked 'what's the difference between DE-CIX and Equinix?' I say 'we have more data centers,'" CEO Ivo Ivanov told us.

It doesn't actually own any of those data centers, of course. "We are in more than 700 facilities around the globe," he said. "Including Equinix.”

Looking ahead, it represents an opportunity for an open peering network, where these networks are not controlled by a single company.

This could have a profound impact on the next iteration of the Internet, especially as Edge comes to the fore.

Let there be light

Over fiber, data travels at the speed of light. Then it reaches the data center.

We're used to compute being in the realm of electrons, but that means heat and latency.

Lightmatter hopes to use photons for computing and interconnection, bringing light further into the data center (p28).

That could mean processing at the speed of light, just in time for an explosion in artificial intelligence workloads.

“We have about a 42x improvement in latency, because the processing is happening at the speed of light," CEO Nick Harris said.

"You're doing multiplication and addition as light flies through the chip."

Ain't got time for that

There are 31,536,000 seconds in a year. It takes just one of them to break everything.

Since 1972, the world has added leap seconds to UTC every few years in an effort to bridge the gap between atomic clock-based time and the time derived from the Sun's position as our spin decelerates.

This may not sound like a big deal, but every time it happens is a calamity for networks. Computer systems that rely on a precise and identical time cannot handle the change, crashing spectacularly.

Companies like Google and Meta have come up with workarounds, smearing a second over hours. This is a step forward, but still carries a great risk.

That danger is only growing. The Earth's spin is no longer slowing - in fact, it is speeding up.

This change means that we will soon have to do a world first and roll out the negative leap second.

With systems crashing even with a time addition that we're used to, how will they react to such an event?

Companies don't want to find out, so suggest killing off leap seconds entirely (p31).

Next-Gen Networks: Contents

24. How peering went global - Changes in the world’s networks turned Germany’s Internet peering platform into a global phenomenon

28. Building light into chips - The AI revolution needs so much processing power - could it be time to bring in a new generation of chips that combine photonics with conventional silicon?

31. A second look - Hyperscalers want to end the leap second, as things start to turn negative

How peering went global

Changes in the world’s networks turned Germany’s Internet peering platform into a global phenomenon

If your business wants a connection in Africa, India, or the Far East, you might find yourself looking at a platform originally founded as a very local service handling Internet traffic in Germany.

The story of how DE-CIX changed from a single-country Internet exchange to a global interconnection player is the story of big changes in networking - and in the way society consumes those networks.

Starting in a post office

The Internet was created in the United States. By the early 1990s, it had grown internationally, linking networks round the world. But all peering still took place in the United States, so when two Internet users on different networks communicated, all their traffic still had to go through the US.

This applied in every country in the world, creating heavy delays, and massive network costs in days when international bandwidth was much more limited and expensive than it is today.

Two users connecting in London or Frankfurt had to effectively communicate over a two-way link to the only Internet exchange points, with all their traffic going to the US and back, adding massive delays and telecoms costs.

Local groups could see an answer. In late 1994, a group of five British Internet service providers (ISPs) set up the London Internet Exchange (LINX), using a Cisco Catalyst switch with eight 10Mbps ports in London’s Docklands.

In 1995, Germany followed suit swiftly, with its own Internet exchange point (IXP), the Deutsche Commercial Internet Exchange (DE-CIX).

Internet exchanges in all countries have grown massively: DE-CIX, for instance, now connects more than 3,000 networks.

“In 1995, DE-CIX was established as a pure peering point, a fabric operator,” says Ivo Ivanov, DE-CIX’s CEO.

Three ISPs launched the German IXP in the back room of a post office in Frankfurt, and expanded into an Interxion data center, becoming one of the largest Internet exchanges in Europe.

“Over time we grew in Frankfurt, and we added Hamburg and Munich,” he says. “In 2007, Telegeography said we were a new telecommunications capital of Europe, because of the huge concentration of traffic between the Western and Eastern hemispheres.”

That’s when DE-CIX realized it had an international opportunity. If you’ve created a distributed carrier-neutral exchange platform in multiple data centers in one metro area, then you can do it in others.

“A data center- and carrier-neutral, distributed platform present in a metro in different data centers, all interconnected into one integrated fabric, is great from the redundancy point of view, and from the distribution point of view, with a very high level of accessibility.”

The nature of an IXP began to look more like a business model than a simple network exchange, he says: “Potential participants can join the fabric regardless of which data center they're physically located in. It's a matter of cross-connects only.”

That’s pretty similar to the Equinix International Business Exchange (IBX) model, he says, but with one big difference: “They [Equinix] offer their fabric to their own customers only.”

“Our success in Frankfurt gave us the confidence to export this type of knowhow, this experience on the engineering and business development side, to other regions.”

DE-CIX looked for regions that lacked this type of infrastructure or ecosystem, where there was no localization of content and data gravity. Its first target was Dubai.

‘UAE-IX powered by DE-CIX’ was launched with local telco Datamena in 2012. Since then, it’s become an international and regional hub, and Ivanov reckons the presence of DE-CIX was “one of the main factors” contributing to Dubai’s ascendence in digital commerce in the Middle East.

“To create an attractive framework for operators who are not from the region, the regulatory authority created the so-called free interconnection zone based on our guidance.”

Global networks are changing

With that success, DE-CIX next went into areas that already had infrastructure, but wanted to consume networks differently. The German outfit set up exchanges in the US cities of New York, Dallas, Chicago, Richmond, and Phoenix.

Alongside that, there are now European exchanges in Marseille, Palermo, Lisbon, and Barcelona, with another serving the Turkish market in Istanbul. Five years back, an exchange was established in Mumbai, with other hubs following in Delhi, Kolkata, and Chennai.

DE-CIX’s Indian operation claims to be the largest interconnection platform based on the numbers of participants in Asia, Ivanov says.

Southeast Asia followed with IXPs in Malaysia, and Singapore. There’s a partnership announced with the Getafix exchange in the Philippines - and a move to offer exchange services across the whole of Africa.

The African service takes a new network approach, matching a more consumer-centric network market. It’s a managed solution that Ivanov calls “DE-CIX-as-a-service” or ‘interconnection fabric in a box.’

“We deliver a turnkey full-fledged interconnection fabric with hardware, software, and processes - but also including marketing and business development activities,” he says. “It's literally the full cycle needed to run a successful interconnection fabric and create an ecosystem.”

Coming back to the comparison with Equinix, he says: “When I’m asked 'what's the difference between DE-CIX and Equinix?' I say 'we have more data centers.'"

Of course, DE-CIX doesn’t own data centers, he explains: “We do not have data center operations. We collaborate with the data center operators. We have more data centers enabled. We are in more than 700 facilities around the globe. Including Equinix.”

The service has expanded up the stack as well, offering cloud connectivity over and above network peering, with a so-called “multi-service access port,” that can support what DE-CIX calls a “cloud router,” designed to let customers hook together multi-cloud networks.

That means DE-CIX can offer a network security service, giving a direct Layer 2 network connection into cloud services such as Microsoft applications like Azure, Office 365, CRM, and ERP.


Enterprises are network operators now

These services are aimed at a different set of customers from the traditional ISPs and network providers linking to an Internet exchange port. They are picked up by large enterprises in multiple industries.

“We see a huge wave of new market participants on platforms like DE-CIX, who are probably not interested in peering directly,” he says. “They're interested in cloud connectivity.”

All the big industries are getting involved in direct management of their connectivity, says Ivanov. “And the bigger they are, the higher the demand for global network presence.”

For instance, he says, eight years ago Netflix and Apple didn’t own a single Point of Presence (PoP). “They were not connected to even one Internet Exchange. Today, Netflix and Apple have hundreds of PoPs in hundreds of different exchanges around the globe.

“They're still huge enterprises,” he says. “But today they are also a global network operator. They run their own global network for resilience, for cost saving, but even more importantly to control the data journey to their customers.”

Ivanov thinks that banks and automotive companies will be the next big global network operators: “We know from different projects which have been announced that finance institutions have started investing in their own subsea cables, and automotive companies are building their own mobile networks. If you control the data journey, you control the gateway, and you can control the gateway only if you are directly involved in infrastructure.”

What these customers want is virtual private networks - and that’s something a peering platform can offer, by setting up a closed user group. This means, for instance, that an auto firm can keep tabs on all the data that flows in and out of a connected car.

“The closed user group approach gives them a private virtual ecosystem for direct interconnection, which is more isolated, more secure and in specific compliance with regulatory requirements.”

Peering at the Edge

The neutral exchange approach also lends itself to Edge services, he says: “As we all know, all the applications which are extremely important today, and will evolve even further in the future, have one thing in common - they're extremely performance sensitive. That means they are latency sensitive. Every single millisecond counts.”

He goes on: “All the crucial processes in a company are somehow related to real time communication, and latency is an extremely important factor. I love to say latency is the new currency in our industry.”

As-a-service exchanges can be set up anywhere, he says: “We’re creating new telecommunications hubs in the Edge, in tier two and tier three. We have projects for creating Edge interconnection setups on highway crossroads or next to a mobile cell tower.”

As with the large facilities, DE-CIX isn’t making its own Edge facilities. It’s placing a “pizza-box” of its own kit into the appropriate Edge modules - or any other configuration up to four cabinets in a Tier One hub.


Peering into the future

In the US, DE-CIX is working with Edge players like DartPoints, while Ivanov says some interesting options are emerging: “AtlasEdge is an interesting one in Europe, and I believe companies like EdgeConnect have started looking into a related approach - so I think the landscape will be very interesting.”

The joy of duplication

With a lot of players offering access to Edge networks at different layers of the network stack, there are bound to be overlaps, Ivanov concedes, but actually this can be a good thing.

In terms of network peering, there are other players offering software-defined networking (SDN) services, such as Megaport, Packet Fabric, and Console Connect. But he’s happy to collaborate with them to extend the reach of both services, and he argues that overlapping services are complementary, and can reduce the inherent risk of moving to one cloud provider.

“This overlap means there are redundant solutions for the market. Enterprises do not want to rely on only one fabric, and the bigger they are, the higher the need for redundancy so they can be extremely solid in the case of an outage.”

This sort of redundancy might even be mandated by regulators, which are starting to require financial sector organizations to have a plan to mitigate the risks of cloud concentration.

These regulations will require organizations to have a multi-cloud strategy to avoid relying on one single cloud service provider, he says: “But it also means ideally that you physically connect to different cloud zones and to different cloud connectivity solutions as well, for redundancy.”

It’s another example of an opportunity emerging from the combination of economic reality and network technology. In the end, it’s the basic changes in the way we use networks that have turned one German peering organization into a global network platform. 


Building light into chips

The AI revolution needs so much processing power - could it be time to bring in a new generation of chips that combine photonics with conventional silicon?

It’s well known that Moore’s Law is coming to an end. We can no longer expect processor power to double every two years, as more transistors are packed onto each silicon chip.

That’s inconvenient for conventional IT, which has been riding high on the continuing dividend from Moore’s Law. It’s potentially a disaster for artificial intelligence (AI), which is on the verge of a big expansion… but an expansion that depends very much on fast processing.

One startup believes the answer is to combine conventional silicon with photonic processors that operate with light.

The AI explosion

Artificial intelligence is in a furious growth phase right now, says Nick Harris, CEO of Lightmatter: “People have found use cases that are insatiable. They will take as much as they can get, they'll spend any amount of money. Google, Microsoft, Amazon, and Facebook will pay anything for these things.”

This is a recent development. After surges in the 1960s and 1980s, AI research was progressing slowly. Then in 2012, a neural network called AlexNet, created by Alex Krizhevsky, won an image recognition contest running on low-cost GPU hardware.

That showed commercial possibilities, Google bought Krizhevsky’s company, and investment began.

“There was this massive investment in scaling these things out,” says Harris. The investment bore fruit. “In the past ten years, the complexity of AI models has followed a 3.6 month doubling period.”
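Taken at face value, that doubling period compounds quickly. A quick back-of-the-envelope check - just arithmetic on the quote, not a figure from Lightmatter:

```python
# Compounding the quoted 3.6-month doubling period over the ten years Harris mentions.
months = 10 * 12
doubling_period_months = 3.6
doublings = months / doubling_period_months       # ~33 doublings
growth_factor = 2 ** doublings
print(f"{doublings:.1f} doublings -> roughly {growth_factor:.1e}x larger models")
```

That is roughly a ten-billion-fold increase in model complexity over a decade - far beyond what two-year transistor doubling can supply.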

The trouble is, even cheap general-purpose silicon can't keep up with that. And, while it's possible to throw extra time and resources at an AI in the lab, it needs fast performance when it is deployed in real applications.

“The challenge with AI is, you can train very big models, but if you'd like to deploy them and have people interact with them, the time between a user making a query and getting a result back, is very important,” says Harris. “You need real-time feedback. The big challenge in the field is to build machines that can run these huge neural networks so you get an answer back within milliseconds.”

28 DCD Supplement • datacenterdynamics.com
The AI revolution needs so much processing power, could it be time to bring in a new generation of chips that combine photonics with conventional silicon?
 Next-Gen Networks Supplement
>>CONTENTS

Silicon can’t keep up

Processor performance has been doubling every two years for decades, ever since 1965, when Gordon Moore - who went on to co-found Intel - noted the trend.

That’s been good, but that rate of progress was not enough to keep up with AIs emerging this century, says Harris: “Even if you have the best case scaling for electronics, you're not really powering this.”

And to make matters worse, just at the moment smarter AIs arrived, the rate of silicon acceleration slowed.

Moore’s Law held because chip makers could double the number of transistors packed on a fragment of silicon every two years. Now, while processors are still packing in more transistors, they are running hotter.

“The reason we have this heat problem is Dennard Scaling,” explains Harris. Robert Dennard invented DRAM, and observed that smaller transistors used less energy, scaling with their area: “Around 2005, that broke down.”
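Classical Dennard scaling can be sketched with a few lines of arithmetic (a textbook simplification, not Lightmatter's analysis): shrink every linear dimension and the supply voltage by a factor k, and power density stays flat, so chips could get faster without getting hotter.

```python
# Back-of-the-envelope Dennard scaling: all quantities are relative to the
# previous process generation, with k the linear shrink factor.
def dennard_scale(k: float) -> dict[str, float]:
    capacitance = 1 / k                    # C shrinks with feature size
    voltage = 1 / k                        # V shrinks with feature size
    frequency = k                          # switching gets faster
    power_per_transistor = capacitance * voltage ** 2 * frequency   # ~1/k^2
    area_per_transistor = 1 / k ** 2
    return {
        "power_per_transistor": power_per_transistor,
        "area_per_transistor": area_per_transistor,
        "power_density": power_per_transistor / area_per_transistor,  # ~1.0
    }

print(dennard_scale(2.0))   # power density stays ~1: more transistors, same heat per mm2
```

Once voltages stopped scaling around 2005, power per transistor stopped shrinking as fast as transistor area, and power density - and heat - began to climb.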

Today’s fast processors use 300W and upwards, and Harris says that’s heading for 1kW chips.

“We're still getting more transistors per unit area. But you can't really use them, because the cooling solution does not support you using them. The chip will burn. You need to be able to develop chips that perform more operations per Watt.”

Enter photonics

What makes chips hot is resistance. Electrical signals face resistance as electrons flow in current. By contrast, light signals don’t face the same resistance, and don’t create heat - and photons also travel faster than anything else.

For some years, advanced computer designs have tried to introduce photonics, and use “electrons for processing, photons for communication,” in the words of John Sontag, an HPE scientist (HPE is an investor in Lightmatter).

Long-distance communications use fiber optics, and those fibers now penetrate deep into the racks of data centers. “You have companies selling 100 Gig pluggable optics, and they're just now deploying 400 Gig pluggable optics. They send 400 gigabits per second of data over the optical fiber to lace together racks and things that are spatially separated.”

Recent developments have allowed transistors and photonics to merge on the same wafer, in so-called “co-packaged optics.” Initially, this has been seen as a way to reduce the size and power consumption of those optical plugs, bringing the signals into the chip as light, instead of converting light signals to electrical ones at the borders of the CMOS chip.

According to the roadmap, “optical components get closer and closer to the silicon until, eventually, the optics are 3D stacked and co-packaged with the processors and networking chips, giving you very high data rates at low energy consumption.”

Intel has been demonstrating co-packaged optics for a year or more, Broadcom has demonstrated a co-packaged optics switch, and Marvell bought photonics company Inphi for $10 billion in 2021, but the industry is skeptical about it coming into play quickly.

“It’s just too early to have a co-packaged optics solution that is ready for mass deployment and volume production within the next few years,” said Dell’Oro Group analyst Sameh Boujelbene in a comment to SDxCentral this year.


Co-packaged optics could be useful for making the highly-interconnected GPU systems used in training AI, but that still requires compute clusters with a “rat’s nest” of interlaced optical fibers, comments Harris. “They're planning to lace together the processors inside the server using the optics. When every chip is connected to every other chip using a fiber, there are performance benefits, but it's very hard to service those things.”

Lightmatter’s approach is to push the optical elements further inside the chip, so all those interconnections are handled by a switchable photonic network within the silicon, that generates no heat, and takes up minuscule volume.

“Fiber is macroscopic, it's on the order of a millimeter,” he says. “Our devices are two microns.”

This could drastically reduce the hardware required, effectively integrating a complex AI training system onto a single chip: “If you open our server, there's one chip in there. It contains all of the processors for the server. And they're optically interconnected inside of the chip. And they can communicate with other platforms over optics as well.”

He continues: “Ultimately, what this thing does is extreme integration, enabling everything with optical interconnect, and allowing for really absurd bandwidths.”

And it’s done in standard processes available from merchant silicon fabs: “We built our wafers with GlobalFoundries,” says Harris. “We have transistors that are very close next-door neighbors, within 100 nanometers of the photonic components. It's all monolithic.”

The same etching tools make the CMOS, and the photonic connections, which are on the same nanometer scale as transistors, he says.

“We use all the same etching tools. So it's all completely standard CMOS. We use a ‘silicon on insulator wafer,’ which is used in the production of many electronic chips.”

Harris and colleagues developed the idea at MIT, and have been commercializing it since 2018, with the aid of $11 million in startup funding.

Going to silicon

The company has two products. Passage is an interconnect which takes arrays of traditional processors and links them up, using a programmable on-chip optical network.

“Lasers are integrated into the platform, along with modulators and transistors,” he says. “If you take a scanning electron microscope to the thing, you can see the waveguides - they are spaced about two microns apart, and are a few hundred nanometers wide.”

The other product is Envise, a general-purpose cloud inference accelerator, which combines computation elements with a photonic computing core.

The promise here is to address the issue of AI processing speed: “We have about a 42x improvement in latency, because the processing is happening at the speed of light. You're doing multiplication and addition as light flies through the chip.”

The technology is still at an early stage, but Harris says Lightmatter has “about five customers,” who are large enterprises. The company has silicon in the lab, and expects the chips to be generally available later in 2022.

“In the Passage case, we're looking at the communication between chips, and in the Envise side, the optical processing core helps with communication energy, and also offloads computer operations,” says Harris.

The products are “big chips,” says Harris. Much like another AI chip startup, Cerebras, Lightmatter has found that integrating multiple cores and a network can be done across a single wafer.

Cerebras is further advanced commercially, with products adopted at the EPCC supercomputing center of the University of Edinburgh, and at biopharmaceutical company AbbVie, among others. However, it has had to create its own liquid cooling system to deal with the heat generated in the on-chip network.

Lightmatter’s optical network sends signals with photons and runs cooler. It’s also somewhat smaller, but is still “inches across,” with Passage fitting into an eight-inch by eight-inch chip socket: “The biggest chip socket I've ever seen in my life.”

It does, however, offer that “absurd” bandwidth: 768Tbps.

Wafer-size chips might sound like a liability, given that all silicon wafers can suffer from small point defects, so a large wafer has a higher chance of failing. “We do a lot of work on yield engineering,” says Harris. “But there are not a lot of transistors on the chip.”

With few transistors, there’s less chance of point defects: “We have very low densities, so there's a very low probability of getting a point defect in manufacturing that kills the transistor. The yields end up being high because it's not a very densely integrated transistor circuit.”
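The intuition can be captured with the standard Poisson yield model used across the chip industry (the defect densities and areas below are placeholders, not GlobalFoundries or Lightmatter figures):

```python
# Poisson yield model: the probability a die has zero killer defects falls
# exponentially with the defect-sensitive ("critical") area of the die.
import math

def poisson_yield(defects_per_cm2: float, critical_area_cm2: float) -> float:
    return math.exp(-defects_per_cm2 * critical_area_cm2)

# A wafer-scale die is enormous, but if only a sliver of it is dense transistor
# logic, the critical area - and so the yield penalty - stays manageable.
print(poisson_yield(0.1, critical_area_cm2=400))  # whole 20x20cm die sensitive: ~0
print(poisson_yield(0.1, critical_area_cm2=4))    # ~1% of it sensitive: ~0.67
```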

Applications

The first applications for this will be companies that do analysis of real time videos, says Harris. These could include security firms, but also companies monitoring a manufacturing line using cameras to spot when a part has a defect.

It’s also potentially useful for speech analysis and other AI applications: “It's across the board.”

There’s one common factor - customers are interested in the “transformer” type neural networks pioneered by Google, and want to implement them more cheaply.

“The first application would be principally trying to address dollars-per-inference cost. If you're a product person who is working on Google Cloud, there are a lot of AI models you'd like to deploy, but you can't afford to, because the dollars per inference doesn't make sense.”

Will it all work? One positive sign is the caliber of the engineers joining the company.

Richard Ho, one of the leaders of Google's custom AI chip family, the Tensor Processing Unit (TPU), joined Lightmatter in August, following Intel's VP of engineering, data center, AI group, Ritesh Jain. In May it hired Apple finance director Jessie Zhang as VP of finance.

The prospects for photonic computing could be bright. 


A second look

Hyperscalers want to end the leap second, as things start to turn negative

Sebastian Moss, Editor-in-Chief

Meddling with time can have unintended consequences. In the data center world, it only takes a second to cause an outage or even corrupt data.

But since 1972 we have done just that 27 times, adding a leap second every few years in an effort to reconcile two different ways of tracking what time it is.

Now, many in the industry are calling for a rethink of how we approach time, and demanding an end to leap seconds.

Our species has always been in search of precision, with complex mechanical watches replacing centuries of hourglasses, astrolabes, water clocks, and sundials.

But it was in the last century that we made a breakthrough in precision: The atomic clock, which measures time by monitoring the resonant frequency of atoms.

Hundreds of these clocks around the world are used to create International Atomic Time (TAI), a weighted average of their readings, with the second defined as a constant duration based on the radiation of cesium atoms.

But scientists soon realized that something was amiss - the time did not match up to that of Universal Time, or UT1, which is based on observed solar time, where noon is when the Sun is at its apex.

The Earth is not a perfect sphere, nor is its orbit perfectly circular. Complicating matters further is the fact that the Earth's rotation has been slowing due to tidal deceleration and other factors, changing the length of a day.

In the latter half of the last century, this meant that UT1 fell behind TAI by an average of 1.3 milliseconds per day.

In 1972, the international reference time scale Coordinated Universal Time (UTC) was launched in an effort to combine these two competing visions of time.

It began based on TAI (with an initial difference of 10 seconds), but periodically has whole seconds added to bring it closer to the slower time tracked by UT1. Outside of the leap second adjustments, UTC is mapped to atomic time by a constant offset.
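The practical upshot is a fixed gap between the two scales that only moves when a leap second is inserted. Using the figures above, a quick check of today's offset:

```python
# TAI - UTC today, from the numbers in the text: a 10-second starting offset
# in 1972 plus the 27 positive leap seconds inserted since.
initial_offset = 10
leap_seconds_added = 27
print(f"TAI - UTC = {initial_offset + leap_seconds_added} seconds")   # 37 seconds
```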

This is the time that the networked world relies on. Computers need to know the precise time to communicate with each other, with accurate time stamps required for billing systems, database sorting, network diagnostics, transactions, and more. If they get the time wrong, things can crash.

Computers come with their own clocks, of course. But quartz oscillators drift, slowly going out of sync with time, causing havoc when multiple systems have a different concept of ‘now.’

So enterprises turn to time servers to tell their systems the time. These use Network Time Protocol (NTP) - a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks - to get within a few milliseconds of UTC.

A big virtual cluster of timeservers is known as a pool, where a large number of computers volunteer to provide highly accurate time via NTP based on their own source of time from a DCF77 receiver, WWVB receiver, or a GPS receiver, among others.
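Under the hood, the exchange is simple. A hedged sketch of a bare-bones SNTP query follows (pool.ntp.org is used purely as an example server; production systems use vendor-run or internal time services and full NTP clients):

```python
# Minimal SNTP client: send a 48-byte request, read the server's transmit
# timestamp, and convert from the 1900-based NTP epoch to Unix time.
import socket
import struct
import time

NTP_TO_UNIX = 2208988800   # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = "pool.ntp.org", timeout: float = 2.0) -> float:
    packet = b"\x1b" + 47 * b"\0"          # LI=0, version 3, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        reply, _ = sock.recvfrom(48)
    seconds, fraction = struct.unpack("!II", reply[40:48])
    return seconds - NTP_TO_UNIX + fraction / 2**32

if __name__ == "__main__":
    remote = sntp_time()
    print(f"server time: {time.ctime(remote)}  offset vs local: {remote - time.time():+.3f}s")
```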

Both Meta and Google offer their own NTP service, based on their own atomic clocks. "Every pool defines its own rules, engineers have strong opinions," Oleg Obleukhov, the creator of Facebook's Public NTP and co-creator of the company's internal time card, said.

It’s all a careful balance, where if one mistake happens, everything can come crashing down.

That's why periodically adding a new second can be a significant threat to uptime.

When data centers receive the satellite signal announcing the leap second, they either show the impossible time of 23:59:60, or they miss a second.

That could cause "a negative number, which of course blows up everything in your code," Obleukhov explained.
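The failure mode is easy to reproduce on paper: if the wall clock is stepped back by one second between two readings, a naive duration calculation goes negative (the timestamps below are hypothetical):

```python
# Hypothetical wall-clock readings taken either side of a stepped leap second.
t_start = 1483228800.700   # reading taken just before the clock is stepped back
t_end = 1483228799.900     # reading taken just after the one-second step
elapsed = t_end - t_start
print(elapsed)             # -0.8: an "impossible" negative duration
```

Code that divides by, sorts on, or logs that value rarely expects it to be below zero.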

"There are outages all across the industry all around the world when leap seconds hit, where CPUs spin at 100 percent because of such events, where the only remediation was to go and physically reboot devices. This has happened again and again every leap second."

Each one has caused problems, taking down platforms like Reddit, Cloudflare, Foursquare, LinkedIn, and Yelp, among others.

"Throughout my career, I went through multiple leap seconds, and everywhere it was a disaster, and everything was falling apart every time," Obleukhov said.

A report by the National Institute of Standards and Technology (NIST) and France's Bureau International des Poids et Mesures (BIPM) found that “contrary to our expectations, the number of problems reported has increased with time."

In an effort to mitigate such a risk of a sudden change in seconds, Meta has begun 'smearing,' a concept first proposed by Google in 2011, instead of ‘stepping’ a whole second in one go.

Smearing adds a couple of milliseconds every now and then over a longer period of time, reaching a full second just as the new leap second comes in.


There are numerous ways to smear, either by adding equal amounts of milliseconds, or doing so in different amounts at varying intervals.

Google does it over 24 hours, while Meta goes for 17. Alibaba is believed to smear for 12 hours on either side of the leap second.
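A linear smear is simple to express. The sketch below follows the spirit of Google's 24-hour window (window length and shape vary between implementations, and the real logic lives in the NTP servers themselves):

```python
# Fraction of the extra leap second already applied at a given moment,
# for a linear smear ending exactly when the leap second takes effect.
from datetime import datetime, timedelta, timezone

def smear_fraction(now: datetime, leap: datetime,
                   window: timedelta = timedelta(hours=24)) -> float:
    start = leap - window
    if now <= start:
        return 0.0
    if now >= leap:
        return 1.0
    return (now - start) / window          # linear ramp across the window

leap = datetime(2017, 1, 1, tzinfo=timezone.utc)   # the most recent leap second
print(smear_fraction(datetime(2016, 12, 31, 12, tzinfo=timezone.utc), leap))  # 0.5
```

Smeared servers deliberately report a time that is up to a second away from true UTC during the window, which is exactly the error-between-machines problem Obleukhov describes.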

"There are many different techniques for smearing - all of them are bad," Obleukhov said.

"You have to do it over many machines, and this introduces errors between machines,” he explained.

“Depending on how sensitive your systems are you might have a problem. And when you’re smearing, if appliances get rebooted or if something else goes wrong, then the chances of a fatal issue raises drastically.”

Smearing is the best option we have at the moment, "but you still may get negative time," Obleukhov said.

Meanwhile, public pools like NTP.org do not smear. “What you will end up doing if you join them is stepping, which is just dangerous,” he added.

This is not the only problem. After decades of leap seconds being added as the Earth's spin slowed, the planet began to accelerate in 2016, reaching its fastest spin yet this August.

Why this is happening is not clear, but scientists have several theories.

Seismic activity such as the 2011 earthquake in Japan shifted the planet's axis by 6.7 inches, which sped up the rotation.

Another potential reason is known as the 'Chandler Wobble,' where the movement of the geographic north and south poles causes the planet to wobble, slowing it down - but in recent years it has wobbled less.

Finally, there is our own impact on the planet. Mountain ice caps have historically melted and refrozen, affecting the rotation like the arms of a spinning figure skater - with the arms out, the skater spins more slowly; pulled in, the spin speeds up.

Now, due to anthropogenic climate change, that great mass of ice has melted and is not returning, instead staying at a lower altitude.

Whatever the cause, we now face the first time our rotation has sped up since UTC began, potentially leading to a completely new challenge: The negative leap second.

Instead of adding an extra second, UTC could remove one.

This could also theoretically be smeared, but that introduces its own risks, most notably that the networked world has never tried this.

"These events have never happened, so that it is almost a certainty that there will be widespread errors in realizing the event, if it happens," the NIST and BIPM report states.

As it stands, if the current rate of change between UTC and UT1 continues, then a negative leap second is expected to be required by 2030.

Given all this, Meta - along with Google, Microsoft, and Amazon - suggest killing off the leap second entirely. They are joined in this recommendation by the NIST and BIPM, although the time-tracking bodies have a slightly different approach.

There are still those that wish to keep the status quo, arguing that scientists and astronomers observing celestial bodies rely on UTC. Were it to move out of sync with UT1, then legacy equipment would need to be adjusted, and there could be a period of inaccurate astronomical observations and celestial measurements as a UTC-based infrastructure has to be painstakingly shifted to UT1.

But Ahmad Byagowi, time appliance project lead at the Open Compute Project and research scientist at Meta, argues that ultimately they will benefit from such a move.

At the moment UTC and UT1 are already out of sync, he reasoned, as their times are only normalized when a whole second is added. Between those leap seconds "you have an error," he said.

"To those scientists that want to observe the sky, we're suggesting that they will always be able to go to a website that says 'this is the offset between UTC and UT1.' It's much more granular, you can go into milliseconds, and you can actually see things much much better. That's what we're proposing."

They want December 31, 2016, to mark the date of the last leap second, with computers no longer having to worry about interruptions to constant time.

NIST and BIPM aren't quite as aggressive, just yet. Researchers at the time institutes suggested that perhaps a temporary answer lies in increasing the maximum difference between UT1 and UTC. That could mean a leap minute, or even a leap hour.

The benefit would be that it would occur much less regularly, making risky events less common. But there's a danger to that, they admitted.

By making it a once-in-a-generation or more event, whole systems would be birthed and die between leap events. Knowledge and preparation could be lacking, making the time change all the more dangerous.
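The timescales involved explain why. At the historical drift rate of around 1.3 milliseconds per day quoted earlier, a whole minute of divergence takes more than a century to build up (purely illustrative arithmetic; the drift rate varies):

```python
# How long a leap minute would take to accrue at the historical drift rate
# quoted above (~1.3 ms per day).
drift_ms_per_day = 1.3
days_per_minute = 60_000 / drift_ms_per_day
print(f"{days_per_minute:,.0f} days, or about {days_per_minute / 365.25:.0f} years")  # ~46,000 days, ~126 years
```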

"Therefore, it will be necessary to place an increased emphasis on education and awareness ahead of such a step," they said.

Such a move would add a huge risk, but reduce the times we will face that gamble. "We do not consider that there is a 'perfect' solution to the problem," the national bodies said.

"Defining a time scale that satisfies the needs of time and frequency users and is also in agreement with astronomical phenomena is not straightforward and a series of trade-offs are necessary. We consider that enlarging the tolerance [between UTC and UT1] is a wise provisional solution, which should be re-considered when new discoveries and deeper understanding could result in a better solution."

It is not clear whether either of these calls will lead to immediate action. The quest to kill the leap second faces an obstacle almost as inevitable as the passage of time: Inertia.

Time has no single ruler. The decision over the leap second will have to be agreed on by multiple governmental, research, and non-governmental bodies, with its detractors having to navigate complex politics and a natural unwillingness to change.

Such an effort kicks off this month, with a vote on the future of the leap second. The decision at the Consultative Committee for Time and Frequency could help decide when the next major outage hits.



The cable ship capacity crunch

Demand for cable continues to increase, but the fleet laying them is small and aging

Today there are more than 400 subsea cables in operation, with dozens more due to enter service over the next few years. These cables are the lifeblood of the Internet, with the majority of the world’s data flowing through fiber sitting on or under the ocean floor.

However, the world’s supply of ships that can actually lay and maintain these cables is surprisingly small: just 60 vessels worldwide. Most of those ships are long in the tooth. Following a glut of new ships deployed around the millennium at the height of the dot-com boom, new vessels have been few and far between.

As the industry sees huge demand for new cables, largely driven by OTTs and hyperscalers, there is an increasingly acute crunch in available ship capacity, meaning projects are facing lengthy delays.

Cable ships: in demand, full of veterans, and rarely replaced

According to the ISCPC, there are around 60 cable ships in the world. According to SubTel Forum’s 2021/2022 Annual Industry Report, after a splurge of investment around the turn of the century, there were no newbuild cable ships delivered between 2004 and 2010, and only five ships were delivered between 2011 and 2020.

And new ships aren’t being added at the same rate older ships are being retired. Only eight of those 60 ships are younger than 18 years old, with most between 20 and 30 years old. 19 are over 30 years old - the oldest, the Finnish Telepaatti, was built back in 1978.

“There were a lot [of ships] built about 20 to 22 years ago,” says Gavin Tully, Managing Partner at Pioneer Consulting, which provides services on deploying submarine cable networks. “There's definitely a crunch in the industry; projects are really at the mercy of ship availabilities.”

“You can't just walk in and purchase ship time,” he adds. “Scheduling is really paramount right now; it takes time to get a slot in the ship schedules, and things are not very flexible.”

As an example, cable ship operator Alcatel-Lucent Submarine Networks (ASN) currently has a fleet of seven cable ships – a mix of purpose-built and retrofits, including one in development that was originally a cable ship that was moved to the Oil & Gas sector and is being re-fitted back to telecoms use – with several dedicated to either cable deployment or maintenance. Business is booming.

“Our fleet is now occupied up to 2024,” explains Jérémie Maillet, VP of Marine Operations at ASN. “The contracts we are negotiating right now are for installation post-2024.”

Demand is so high that cable companies are often buying capacity or chartering other cable ships to try to keep up with business.

“Three years ago, we were not hiring external vessels apart from in specific areas where local resources were mandatory due to local regulation or customer requirements. At times [recently] we have up to four external vessels working in parallel with our own vessels on projects.”

At the same time, operators are trying to keep their aging fleets out at sea for as long as possible, transporting cable to the cable ships via freighters rather than having them return to port.

“With such a demand for installation activity, we can't really expect the cable ship to come back all the way on a long transit back to the cable factory to collect cables,” adds Mick McGovern, ASN’s Director of Projects. “We’re using freighters a lot more to keep feeding the installation vessels cables in the region that they’re working in.”

Demand for cable continues to increase, but the fleet laying them is small and aging

Delays and re-routes

While the existing fleet of cable ships had been more than enough to keep up with industry demand since the dot-com days, the recent boom in new subsea cable projects has seen the cable ship industry quickly become a seller’s market, where the power is in the hands of the ship operators.

“The suppliers are in a good position right now where they're basically able to say, ‘give me money and I'll give you a schedule. And if you don't have the money, come talk to me when you do and I'll tell you what the schedule is then,’” says Tully. “And that's a very different situation than five years ago, where the suppliers would be elbowing each other out of the way for business.”

As a result, most projects will likely be faced with delays. Even the hyperscalers, which may get more lenient treatment as suppliers know they are good for the money and are likely repeat customers planning multiple projects, are seeing delays creep up.

“There’s definitely frustration on the part of the developers, and I would include the hyperscalers,” says Tully. “Projects are taking longer than anyone is planning for. The majority of the projects that we see finish later than when they were initially conceived.”

Projects that might include a year’s buffer to allow for financing, permitting, and other delays are “plowing through” those time contingencies, finishing anything from six months to more than a year later than initially planned.

Delays caused by external factors can also have a knock-on effect, ASN’s McGovern says. “[Delays with permits] lead to huge inefficiencies in the operations of the vessels. You might have the cable loaded and the configuration on board, but if a permit is not in place or it’s delayed and unclear when it is going to be freed, then you end up turning over the cable, laying the system in a different direction, and then turning the cable back when the permit becomes available and going back and laying it.

“It can lead to you installing a system during the period of the year that is inefficient in terms of weather and create restrictions in terms of your ability to deploy and recover plows or land shore ends, etc.”

While hyperscalers might see a project slightly delayed, Pioneer’s Tully notes that the smaller cable projects are the ones that are more likely to be affected.

“The suppliers right now won’t commit to anything until you give them a downpayment, and that downpayment is also hand-in-hand with proof of full funding for the entire project,” he says.

“The suppliers are prioritizing the hyperscalers, which is sometimes a disadvantage to the smaller, more entrepreneurial customers who may need a signed contract with the supplier so that due diligence by the financier can be completed. But the schedule inside that contract won’t be confirmed until downpayment and proof of full funding is made.”

Even if a developer does have a schedule with a supplier, if there are delays achieving financial close, the schedule can easily slip by six months or more, or see price increases to hold the schedule.

“On existing projects where all the money is committed, we’re still seeing delays creep up, and it’s creating a lot of tension between clients and suppliers. Some of these delays are in the order of six months, even when a project is already fully financed, as the suppliers themselves are encountering a lot of difficulties scheduling all the different projects and prioritizing things.”

As a result, smaller cable companies are beginning to take a ‘disaggregated approach’ to financing a cable project: instead of a turnkey project and a small number of large financiers, companies are working with smaller, more nimble, and risk-tolerant financiers who are willing to finance smaller parts of a project – for example, funding a marine survey, then purchasing the cable, then permitting, and so on.

“It’s being bitten off in chunks, which when you’re asking someone to take a risk of giving me $5-10 million versus $200-300 million, clearly the risk tolerance is very different.”

Tully adds that the company is seeing more phased implementations in longer projects. Instead of doing all 10,000 kilometers of a trans-Pacific cable at once, for example, the project may be broken into smaller chunks – i.e. Asia to Guam first, then Guam to North America later – which lowers the financing hurdle and means a smaller window of ship time is needed at any one moment.

The most extreme public example of the impact of the cable ship shortage is in Canada. In May 2021, Maple Leaf Fibre, a Canadian project to lay fiber cables between Kingston, Toronto, and Montréal, scrapped plans to lay a cable under Lake Ontario due to a shortage of cable ships.

Announced in 2018 as a joint venture between Metro Optic and Crosslake Fibre along with Utilities Kingston, the cable was set to be terrestrial between Kingston, Ontario, and Montréal, and under Lake Ontario westwards from Kingston to Toronto. However, a shortage of cable-laying vessels led to a change of plans, with the whole cable system now due to be terrestrial, running from Toronto east via Kingston to Montréal.

Fergus Innes, chief commercial officer of Toronto-based Crosslake, told Capacity: “Vessel availability [is] one of the reasons we have pivoted from a subsea design to a full terrestrial build on our Maple Leaf Fibre project.”

The requirement of needing a ship that was big enough to carry and lay the cable yet also small enough to fit along the St. Lawrence River lock system likely added further complexities. DCD reached out to Crosslake, who declined to comment further.

The Maple Leaf route was unique in having a terrestrial route as a viable alternative – with the ‘wet’ route likely a cheaper option in the original project timelines – with most other cables unlikely or unable to make such route adjustments and instead will have to accept delays.

Where’s the next-generation fleet?

The main issue causing the crunch is the lack of ships, but the fact the fleet is largely full of vessels closer to retirement than launch isn’t helping.

“We’re seeing projects now that have been delayed because of just maintenance issues with ships,” says Tully. “That’s not due to negligence; these ships are just old and there’s no slack built into the schedule.”

The equipment and technology aboard the vessels are always improving, and new ships do occasionally come into service, but they are often on a one-in, one-out basis as older vessels are retired. Even then, the recent trend has been to retrofit older ships to save costs and speed delivery.

In 2020, Orange subsidiary Orange Marine said it would build a new cable ship designed to help maintain both fiber and power cables, due for launch in 2023 to replace the 40-year-old CS Raymond Croze. Its last new ship was the Pierre de Fermat, in 2014.

This year saw SBSS launch a new cable ship, CS Fu Tai. Built in Spain in 2007 as an offshore construction vessel, the Fu Tai was purchased by SBSS in 2021 and converted into a bespoke cable vessel. South Africa’s Mertech Marine recently announced the retirement of cable retrieval ship MV Lida. It plans to replace the vessel, but hasn’t made any announcements yet.

Many cable ship owners and operators are reluctant to make such a large-scale investment in new builds as the costs and business case can be hard to justify, and there’s no guarantee that demand will continue at the current red-hot levels once any new ships do enter service.

However, Pioneer’s Tully notes that ship owners and operators are incentivized to maximize every available minute these ships are working, due to the high cost of standby, which can run into the tens of thousands of dollars per day.

“The worst possible thing you can do is have these ships sitting around idle. But now they have a book of business and can say this ship is now booked for the next 24-plus months, and that is exactly what these companies want.”

New ships can cost upwards of $100-150 million, and delivery can take several years. The soaring price of steel also means the costs for new ships are going up rapidly.

These ships can be in service for 20 years or more, and it seems many cable ship owners and operators are reluctant to pile money into the next generation of vessels when many automation and sustainability technologies are still immature. However, Orange Marine has said the replacement for the Raymond Croze will include electrical storage backup using batteries when she launches in 2023.

At the same time, change is on the horizon. The future of shipping involves more automation – partly to reduce costs and partly to deal with an ongoing skills shortage – yet remote navigation and maintenance technologies are still in their nascent stage.

The fossil-fuel-reliant industry is also looking to decarbonize over the next couple of decades and meet the various 2050 net zero goals being set by countries the world over. However, large-scale batteries that can reliably support large vessels at sea for long periods are still in development, and the supply chains to ensure such technologies can be supported wherever in the world a cable ship might be needed are still a ways off.

“The question of the renewal of the worldwide fleet is still a big question mark. We don’t have the financial capacity of a Maersk or a big company owning tens or hundreds of vessels,” says ASN’s Maillet. “But at some stage, we will have to face and take the decision to start to renew the fleet.

“The market ramped up very quickly during the last few years. Nobody really anticipated it, and the lead time for new build construction is a minimum of three years. Will this market be sustainable for the next 10 years? Maybe it’s not enough to defend a strong return on investment to build new cable ships.”

In the meantime, retrofits of smaller ships are becoming more common for smaller projects. Maillet notes that more retrofits and smaller, more specialized vessels tailored to specific roles – rather than very large multipurpose ships – may be a way forward in the short term to alleviate some of the capacity crunch in a more cost-effective way.

“In the future, we may design different types of vessels, more specialized and not capable of doing everything,” he says, “but extremely efficient for what they have to provide in terms of service.”

For the longer trans-oceanic projects, however, the cost and wait times for a new cable ship capable of carrying thousands of miles of cable mean new ships are unlikely for now.


Maintenance: thin margins mean even less chance of new ships

While the lack of cable ships is affecting the deployment of new cables, a similar issue may be bubbling in the background for the maintenance of existing cables.

While Global Marine’s maintenance account director Steve Holden tells DCD there is probably currently ‘sufficient repair tonnage’ for the current market, the market is facing similar barriers to investing in new vessels, driven by narrower margins compared to the cable deployment side. The company is looking to extend the lifespan of its fleet – seven ships including those on charter ranging from 10 to 30 years old – to 40 years.

“The maintenance contracts are not really long enough and not conducive to any new build globally,” he says. “Many just see it as a cost to be driven as low as possible.

“At the moment the economics for replacing a traditional cable ship don't stack up at all. If we saw that a conversion opportunity arose that was suitable, then we would take it. But we believe that it's actually better to extend the life of the fleet at the moment.”

While it hasn’t made an official announcement, SubCom, a major player in the subsea cable space, is reportedly looking to exit the maintenance market and focus its ships entirely on cable laying, meaning there will be even fewer cable ships dedicated to maintenance at a time when there are ever more cables entering operation.

On the plus side, maintaining cables is often less demanding than laying them and often doesn’t require as much cable or heavy equipment such as plows. This allows for smaller vessels that may not need to go out to deep sea, as most cable breaks are close to shore, meaning there is a greater opportunity for customizing older ships to fill shorter-term gaps.

ASN’s Maillet said he’d like to see the hyperscalers and other companies invest in larger, longer contracts that would allow cable ship companies to invest in new and converted ships.

“There is a hard competition on the maintenance market, but the duration of the contract for the moment is probably not at the right level,” he says. “The cable system owners should understand that by keeping pressure on this market, it will prevent the cable ship operators’ abilities to invest.

“In a sustainable industry, three or four vessels should be under construction or conversion right now to address the demand and to be able to cover all the repairs.”

The future: More crunch, more long-term charters?

There is no magic bullet to solve the capacity crunch; cable demand continues to go up, and the number of cable ships to deploy and maintain them is unlikely to increase substantially in the near future.

As a result, it remains a seller’s market and cable owners should likely expect longer timelines on their projects.

Though it would likely be affordable for companies making billions of dollars of profit per quarter, none of the people DCD spoke to for this piece think it likely that a hyperscaler such as Google will build or buy a cable ship or cable ship company in the near term.

While most admit it could be remotely possible in the long term, a more likely scenario would be a hyperscaler leasing a vessel on a long-term charter.

This is a common practice in the Oil & Gas industry, and also happens in the telco/subsea fiber sector, and it would give the hyperscaler ready access to such a vessel and its skills without the long-term investment or ongoing management.

NEC recently signed a long-term charter contract with Global Marine, securing the Normand Clipper for approximately four years from September 2022 to May 2026. NEC said the contract “strengthens its provision of submarine cable systems” and allows it to “respond to expanding demand for submarine cables.”

Built in 2001, the 127-meter ship can carry up to 5,000 tons of cable, equating to around 7,000km worth of fiber.

“Until now, NEC has procured submarine cable-laying ships for each project separately,” the company said. “In order to respond to the growing demand for new submarine cables due to the recent spread of 5G and the increase in data traffic between data centers in various countries, NEC has chartered a long-term dedicated cable-laying ship for the first time.”

But as demand for subsea cables continues to grow, something may have to break.

“The situation does seem unsustainable in the long term. In the near and medium term, I think it'll just continue and we'll band-aid it as we go,” concludes Tully. “It doesn't seem sustainable to continue without a big delivery of new ships. At some point, there has to be a tipping point.”

“I wonder if it will mean the entrance of new companies stepping in and saying ‘we’re going to build new ships.’” 


A BRIEF HISTORY OF CABLE SHIPS

While today cable ships are custom-built specifically to lay subsea fiber cables, the first ships involved in deploying undersea telegraph cables were paddle ships chartered and customized where possible.

One of the first offshore cable proofs of concept was conducted in 1849 by Charles Vincent Walker of the South Eastern Railway Company: Walker successfully laid two miles (3.2 km) of cable in UK waters from the ship Princess Clementine off the coast of Folkestone to the shore, where it connected to the railway telegraph lines, sending telegraph messages from the ship to London. The Clementine was reportedly a 147-ton, 180-hp iron-hulled paddle steamer launched in 1846 as a cross-Channel passenger ferry between England and France that was briefly used as a transport during the Crimean War in 1853.

English cable pioneer John Watkins Brett's Channel Submarine Telegraph Company was the first to lay a cable between England and France. In 1850, the converted paddle tugboat Goliath laid an unarmored cable between Dover and Cap Gris Nez in France. The cable failed the night after its first test, possibly due to damage by fishermen. Despite its status as the first cable ship, very little is known about the Goliath, though it was likely a wood paddle tug built in 1846, measuring around 100ft and 100hp.

A year later, a stronger second cable was laid by the reconstituted Submarine Telegraph Company from a government hulk, Blazer, which was towed across the Channel. The cable was laid between South Foreland and Sangatte with the Blazer under tow from two tugs.

A month later the steam tug Red Rover was tasked with replacing a temporary part of the second cable with a new section of armored cable, but weather and navigation issues meant it missed a planned rendezvous with HMS Widgeon which had been tasked with making the splice at sea. The Widgeon did eventually make the splice at a later date.

The paddle steamer Monarch, built in the UK in 1830, was the first ship to be permanently fitted out as a cable ship and operated on a full-time basis by a cable company, and was the first of a series of cable ships named in that regal fashion.

The vessel was acquired and converted by the Electric Telegraph Company in 1853 and subsequently laid a number of telegraph cables around British and European waters. After nationalization in 1870, Monarch irreparably broke down on her first cable mission for the General Post Office and was turned into a coal hulk.

Though technically successful, the first attempt to lay a transatlantic cable in 1857 required two vessels, was plagued with problems, and quickly failed once activated. Two converted warships, the HMS Agamemnon and USS Niagara, borrowed from their respective governments, were loaded with cable; both ships were needed as neither could hold 2,500 nautical miles of cable alone.

At the first attempt, cable laying began off Ballycarbery Castle in County Kerry, on the southwest coast of Ireland, and broke on the first day. It was grappled and repaired, but broke again over a region of the North Atlantic nearly 3,200 m (10,500 ft) deep known as Telegraph Plateau, and the operation was abandoned for the rest of the year. Around 300 miles of cable were lost, but the remaining 1,800 miles were sufficient to complete the task.

A year later, after improving the mechanisms for rolling out cable, the Agamemnon and Niagara tried again. The vessels arrived at the middle of the Atlantic, spliced cable from the two ships together and headed off; Agamemnon east towards Valentia Island, and Niagara westward towards Newfoundland. The cable broke three more times. A third attempt was successful, though the cable was damaged within a few days after misuse by an engineer and failed within a month.

A second, more successful transatlantic cable was laid by the SS Great Eastern in 1866 and the ship, unlike its predecessors, continued to be used specifically for cable operations for years afterwards. An iron sail-powered, paddle wheel, and screw-propelled steamship designed by English engineer Isambard Kingdom Brunel, she was the largest ship ever built at the time of her 1858 launch.

Originally a passenger ship before being contracted out for cable laying in 1865, she was converted to hold 22,450 kilometers (13,950 mi) of cable. After a successful laying project across the Atlantic, the Great Eastern continued to lay and repair subsea telegraph cables until the 1880s. Later re-fitted as a liner, then a showboat, and then used for advertising, she was scrapped in 1890.

The CS Hooper, built in 1873 in Newcastle, was the world's first purpose-built cable-laying ship. It was designed to carry the whole of the cable to be laid between England and Bermuda for the Great Western Telegraph Company; however, the project was abandoned. It laid a number of cables for the company before it was sold to the India Rubber, Gutta Percha and Telegraph Works in 1881 and renamed Silvertown. A series of dedicated cable ships, including the CS Faraday, followed shortly after the Hooper.

The CS H. C. Oersted, built for the Great Northern Telegraph Company in Denmark in 1872, was the first ship specifically designed for cable repair. She was scrapped in 1922.

One cable project was responsible for not only the first ever loss of a cable ship, but the second also. Though details are sparse, the ill-fated CS Gomos was reportedly rammed by another ship in the 1870s while laying a cable between Brazil and Uruguay for the Brazilian Submarine Telegraph Company. Chartered alongside CS Ambassador for the project, she was the first cable ship ever to be sunk. Replacement cable was manufactured and the CS La Plata chartered. However, La Plata foundered in the Bay of Biscay with the loss of 58 lives. The Ambassador did eventually complete the laying.

The most recent cable ship to be lost was KT Submarine’s CS Responder. Built for Maersk in 2000 and belonging to KT Submarine since around 2016, she sank in September 2020 in the East China Sea off the coast of South Korea. A fire broke out on deck while laying cable, and the ship sank due to the flooding caused by the firefighting. No one was hurt, and the 60 crew were evacuated to a nearby smaller cable-laying ship working in tandem with the Responder.

The first trans-Pacific telegraph cable from San Francisco in the US via Hawaii, Midway, and Guam to Manila in the Philippines, and onto China and Japan, was laid around 1901-2 by the India Rubber, Gutta Percha and Telegraph Works Company using CS Silvertown (previously the Hooper), and the Telegraph Construction and Maintenance Company (Telcon) using CS Colonia and CS Anglia, two custom-built ships.

The first submarine transatlantic telephone cable system, TAT-1, was laid between Oban, Scotland, and Clarenville, Newfoundland in the 1950s by the cable ship HMTS Monarch, a successor to the original Monarch and built in 1946.

TAT-8, the first transatlantic fiber optic cable, landing in Tuckerton, New Jersey, Widemouth Bay, England, and Penmarch, France, was laid in 1988 by CS Long Lines (owned by AT&T), CS Alert (BT), and CS Vercors (France Télécom). Capacity on the cable was reportedly reached within eighteen months, despite some predictions that it would take a decade and others suggesting it would never be filled and no other cables would be needed.

Long Lines, built in 1961, was involved in a number of cable firsts. The ship laid the first trans-Pacific telephone cable, TRANSPAC-1 (TPC-1), in 1964, and laid TPC-3, the first trans-Pacific fiber cable, along with CS KDD Maru.

The ship was acquired along with CS Charles L. Brown by Tyco International in 1997 when it bought AT&T Submarine Systems (which was spun out in 2000 and is now known as SubCom). As with all these ships, she wasn’t saved for posterity and was sold for scrap in 2003. 


Data centre battery solutions

High energy in a compact design: Improved technology of inside components, like advanced alloys and manufacturing processes, allows the highest performance in a very compact design.

Safety first: Flame retardant, high quality ABS plastic casings ensure maximum safety.

Designed for life: All our products are designed and tested to ensure the best performance for the whole life.

Maintenance free: Our maintenance free products are easy and quick to install and reduce overall operating costs during life.

Simulating the FLAMINGO universe, and other challenges at trillion-particle scales

Since man first looked to the skies, we have tried to comprehend the cosmos.

But peering outwards is just one way to help understand the universe. Another answer lies within, in the highly detailed simulations now possible thanks to profound microprocessor advances and decades of investment in high-performance computing (HPC).

In the north of England, one such supercomputer hopes to do its part to present a history of the universe in unprecedented detail, providing new insights into how we came to be.

Earlier this year, as the UK suffered an unprecedented heatwave caused by climate change, DCD toured Durham University’s data center at the Institute for Computational Cosmology (ICC), and learned about its most powerful system, the Cosma-8.

Cosma-8 is part of the UK government’s Distributed Research utilising Advanced Computing (DiRAC) program, which is formed of five supercomputers around the country, each of which has a specific unique feature.

In the case of the Durham system, that differentiating factor is its breathtaking amount of random-access memory (RAM).

At a university campus in England, a RAM-heavy supercomputer tries to unlock cosmic secrets

"For the full system, we have 360 nodes, 46,000 cores, and very importantly for us a terabyte of RAM per node - that's a lot of RAM," Dr. Alastair Basden, head of the Cosma service, said.

Two nodes in the system go even further, cramming in 4TB of RAM per node. "These are for workloads that don't scale as well across multiple nodes. So things like accessing large data sets and code which aren’t very well parallelized," Dr. Basden said.

This huge amount of RAM allows for specific scientific problems to be addressed that would otherwise not be possible on conventional supercomputers.

But more on that later; first, a quick rundown of the system's other specs. It boasts dual 280W AMD Epyc 7H12 processors per node, each with a 2.6GHz base clock frequency and 64 cores, installed in Dell Cloud Service C-series chassis with a 2U form factor. It also has six petabytes of Lustre storage, hosted across 10 servers that have their own two CPUs and 1TB of RAM.
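
As a sanity check, the headline core count follows directly from the node specification; here is a minimal back-of-the-envelope sketch, using only the figures quoted above:

```python
# Back-of-the-envelope check of Cosma-8's headline figures, using the
# node count and per-node specs quoted above (illustrative only).
nodes = 360
cores_per_node = 2 * 64        # dual 64-core AMD Epyc 7H12 CPUs per node
ram_per_node_tb = 1            # 1TB of RAM per standard node

total_cores = nodes * cores_per_node
total_ram_tb = nodes * ram_per_node_tb

print(f"{total_cores:,} cores in total")   # 46,080, matching the ~46,000 quoted
print(f"~{total_ram_tb:,} TB of RAM across the standard nodes")
```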

The supercomputer uses direct-to-chip cooling, and a CoolIT CDU.

You may notice a distinct lack of GPUs, despite their usefulness in a number of other simulation-based systems.

"Basically the codes that we're doing don't match well to GPUs. There are efforts that are going on to port these codes to GPU, but the uplift you can get in performance is a small factor rather than large," Dr. Basden said.

However, the data center is home to a two-node cluster, funded as part of the UK's ExCALIBUR exascale efforts, that has six AMD MI100 GPUs in it. "MI200 GPUs should follow shortly," the researcher added.

Cosma-8, however, has no plans for GPUs, instead aiming to push CPUs and RAM to their limits, fully connected by a PCIe-4 fabric. “Although our system isn't as big as many of the larger systems, because we have this higher RAM per node, we can actually do certain workloads better,” Dr. Basden said.

One example is the MillenniumTNG-XXL simulation, which aims to encapsulate the large-scale structure of the universe across 10 billion light years. “It's basically the largest simulation of its type that can be done anywhere in the world,” Dr. Basden said.

“So this is 10,240³ dark matter particles - this is the trillion-particle regime - a large step up from anything simulated previously,” he said. “You can begin to see within the simulations it actually building spiral galaxies and things like that, all from the physics that we put in.”
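
To put the trillion-particle regime in context, here is a rough, illustrative memory estimate; the bytes-per-particle figure is an assumption for the sake of the sketch, not a number from the Durham team:

```python
# Rough scale of the quoted MillenniumTNG-XXL particle count, with an assumed
# (hypothetical) memory budget per particle to show why RAM per node matters.
n_particles = 10_240 ** 3        # ~1.07 trillion dark matter particles
bytes_per_particle = 100         # assumption for illustration; real codes vary

total_tb = n_particles * bytes_per_particle / 1e12
print(f"{n_particles:.2e} particles")                        # ~1.07e+12
print(f"~{total_tb:.0f} TB to hold particle data at 100 bytes each")
```

Even under that modest assumption, the particle data alone runs to roughly 100TB, which is why hundreds of terabyte-class nodes matter more here than raw core count.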

The simulation takes data from telescopes, satellites, and the Dark Energy Spectroscopic Instrument (DESI) to see “how well we can match what we get in our simulator to what is actually seen in the sky,” Dr. Basden explained. “That then tells us more about dark matter.”

The MillenniumTNG-XXL simulation began in July last year, taking up a huge amount of computing resources. “We dropped about 60 million CPU hours on that,” Dr. Basden said.

“A large amount of memory per node is absolutely essential. HPC codes don't always scale efficiently, so the more nodes you use the more your scale goes down. Your simulation would take longer and longer to run until you reach a point of no return. So it wouldn't have been possible without a machine designed specifically for this."

Dr. Azadeh Fattahi is one of the researchers trying to take advantage of the machine’s unique talents, seeking to understand the importance of dark matter in the formation and evolution of the universe.

"There's actually more dark matter than normal things in the universe," the assistant professor at UKRI FLF in Durham's department of physics, said.

“The normal matter - which is what galaxies are made out of, along with the Solar System, planets, us, everything in the universe that we can observe, basically - includes only a small portion of the matter and energy in the universe.”

Visible matter makes up just 0.5 percent of the universe, with dark matter at 30.1 percent. The final 69.4 percent is dark energy.

To understand how these forces interact requires enormous computing power. “Earlier efforts only looked at dark matter distribution and ignored the more complex systems,” Dr. Fattahi said.

“But we want to include more complex phenomena in the models that we're using,” she explained. “Now on Cosma-8, we can basically run a full hydrodynamical simulation, which means we include all the complex procedures like gas pools, stars forming and exploding into a supernova, as well as supermassive black holes.”

One of the flagship projects on Cosma-8 is the ‘Full-hydro Large-scale structure simulations with All-sky Mapping for the Interpretation of Next Generation Observations’ study, or, as it is more commonly known, the FLAMINGO simulation.

“So FLAMINGO is at the cutting edge,” Dr. Fattahi said. “MillenniumTNG-XXL is a slightly bigger volume, but doesn't have hydrodynamics. Compared to anything that has been done with hydrodynamics it is the biggest in the world.”


FLAMINGO’s simulated universe is about 8 billion light years across, featuring 5,000³ elements of dark matter and 5,000³ of gas. “This is the largest number of resolution elements that have been run on a hybrid simulation anywhere in the world,” she said. It took most of Cosma-8 working for 38 straight days to finish.

Dr. Fattahi's team uses these giant models to then zoom in to work at a comparatively smaller scale, operating at ‘simply’ the galactic level. By choosing a smaller chunk of space, she can focus the computational power while keeping the rest of the universe at a lower resolution.

“I study the low mass, the very small dark matter clumps, and dark matter halos,” she said. “It turns out that small galaxies have a lot of dark matter; they are the most dark matter-dense galaxies in the universe. The question that drives my research is what we can learn from the small-scale structures about the nature of dark matter, which is a fundamental question in physics.”

Even in these smaller simulations, the scale is still immense. Astronomers use solar mass as a unit of measurement, with one solar mass equal to that of our sun. “The target resolution is about 10⁴ solar mass,” Dr. Fattahi said. “FLAMINGO has a resolution of 10⁸.”
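
To illustrate what those mass resolutions mean in practice, consider a hypothetical dwarf galaxy of 10⁹ solar masses (an assumed figure for illustration, not one quoted by Dr. Fattahi):

```python
# Illustrative only: how many resolution elements sample a galaxy at a given
# mass resolution. The 1e9 solar mass "dwarf galaxy" is a hypothetical example.
dwarf_galaxy_mass = 1e9          # solar masses (assumed)
zoom_resolution = 1e4            # ~10^4 solar masses per element (target zoom runs)
flamingo_resolution = 1e8        # ~10^8 solar masses per element (FLAMINGO)

print(f"Zoom run:     ~{dwarf_galaxy_mass / zoom_resolution:,.0f} elements per dwarf")
print(f"FLAMINGO run: ~{dwarf_galaxy_mass / flamingo_resolution:,.0f} elements per dwarf")
```

At 10⁴ solar masses per element the same object is resolved by around 100,000 elements rather than a handful, which is why small dark matter halos need dedicated zoom simulations.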

Again, this simulation would not have worked without the high RAM, Dr. Fattahi argued. “If we go over too many connections, the lines become quite slow, so we have to fit these simulations in as small a number of nodes as possible,” she said. “The 1TB per node allowed us to fit it into a couple of nodes, and then we could run many of them in parallel. That’s where the power of Cosma-8 lies.”

The hope is to make it more powerful if and when more money comes in. The exact roadmap to that new funding is not known - when we visited the facility the UK government was in turmoil, and as this article goes to print it is in a different turmoil - but Dr. Basden is confident that it is on its way.

Cosma-8 was funded under DiRAC II, with the scientific community building a case for III. "We put it to the government," Dr. Basden said. "They said 'fantastic, but there's no funding.'

"Year after year, we're waiting for this money. Finally, at the end of 2020 they said ‘you can have some of the DiRAC III funding, but not all of it.’ We're still waiting for the rest, hopefully it will come this year, maybe next.”

When the money comes, and how much they get, will define phase two of the system. Lustre storage will likely be doubled, and it will probably use AMD Milan processors.

"Depending on timescales, we might get some [AMD] Genoa CPUs, where we think we could go up to 6TB of RAM per node," Dr. Basden said. "And we have use cases for that."

When that happens, the data center will be set for a reshuffle. The data hall currently holds Cosma-6, -7, and -8 (with -5 in an adjacent room), but is at capacity.

"Cosmos-6 will be retired. Its hardware dates from 2012 and it came to us secondhand in 2016."

Each system draws around 200kW for compute on a standard day, with around 10 percent more for the cooling demands.

"Sometimes there can be heavy workloads and it reaches about 900kW; our total feed to the room is 1MW. So we're getting close to where we wouldn't want to have much more kit without retiring stuff. Yesterday we saw a 90 percent load."

That day, the hottest on record in the country, taxed data centers across the UK. It brought down Google and Oracle facilities, but the Cosma supercomputers ticked on unperturbed, Dr. Basden said proudly.

"We survived the hottest temperature,"

he said. "Most of the time we use free air cooling, but days like that we use an active chiller. That means that most of the year we have a PUE (power utilization effectiveness) which is about 1.1, which is pretty good, and then it can get up to around 1.4."

It has not always been easy, however, he admitted. "For the last year or so, the generator wasn't kicking in, so if the grid had gone down we only had an hour on the UPS. Fortunately, that didn't happen."

The generator is now fixed, but there still exists another risk: "The chillers are not on UPS, so if the power dies the UPS will take over compute and the chillers will have to be brought back to life by the generator," he said.

"That doesn't always happen. Once I was sitting in the data hall and the chillers went down when we were testing stuff," he recalled. "I was like a frog in boiling water. I was just sitting there getting a bit warmer and warmer. It got quite warm in here, and I was like ‘guys what's happening?’ A circuit breaker had tripped."

Part of that lack of perfect redundancy is down to requirements: unlike a commercial provider that cannot go down, research supercomputers can be more lenient with downtime (the Cosma service, for example, has three multi-day maintenance outages a year), so universities are better off spending the money on more compute than on more redundancy.

Another issue is the location: The data halls are within the Institute for Computational Cosmology, a larger building built for students and researchers.

In the future, as they plan to move into the next class of computing, the exascale, they will have to look elsewhere. "We are going to need to build a new data center for that," he said. "I think we are looking at a 10-15MW facility, which by the time we get to exascale is achievable."

The only official exascale system currently is the US' Frontier supercomputer, which has a peak power consumption of 40MW (but usually is closer to 20MW). However, by the time the UK government will fund such a system, advancements will have brought that power load down.

By then, we might also have a better understanding of how the universe works, with scientists around the world now turning to the simulations built at Cosma-8 to help unpick the complexities of our cosmos. 


How do we produce the energy and performance you’re used to from a back-up generator, while producing 90% less carbon? Simple. By powering it with Hydrotreated Vegetable Oil (HVO), a fossil-free and 100% renewable fuel. No generator modifications required. No waiting for future engineering developments. Our journey to carbon zero begins right here, right now. And with a KOHLER® generator, so can yours.

FOR TODAY’S GENERATORS, FOR TOMORROW’S GENERATIONS. WITH 90% LESS CARBON, OUR JOURNEY TOWARD CARBON ZERO BEGINS RIGHT HERE, RIGHT NOW. HVO.KOHLERPOWER.COM

Why telcos are switching off legacy networks, and what it means for 5G

The discussion about the rollout and impact of 5G has dominated the telco sector for a number of years, as the industry looks to the future for new markets to conquer.

But we can’t forget about our past - before 5G there was 4G; before that, there were 3G, 2G, and 1G. The launch of 2G in 1991 marked the beginning of digital mobile telecommunications, as 1G was analog, while 3G in 2001 was when it really hit its stride.

Decades later, those networks are still going. Even 1G survived until 2017, when the last network was reportedly switched off in Russia.

Now, as operators across the world focus on upgrading their 4G and 5G networks, it’s come to the point where these operators are finally turning off 2G and 3G services.

And with good reason: over half the countries in the world (53 percent) are forecast to have launched 5G services by next year, according to analyst firm CCS Insight's latest predictions for 2023 and Beyond.

The refarming of spectrum is essential to unlocking the full potential of what 5G can do and ultimately deliver, Wireless Logic group head of MVNO Paul Bullock explained.

“Modern mobile core technology is not focused on 2G or 3G, so as operators want to upgrade their infrastructure, having to maintain the services either imposes greater costs on them or substantially limits what they can do with their upgrades,” he said.

Why are we retiring legacy networks?

Network operators are scrambling to make the most out of their 5G networks, but why does this mean switching off 2G and 3G networks?

Bullock also notes that 2G and 3G just aren’t required anymore, and it’s in the mobile operator’s interests to focus on pushing more modern technology. It’s just not profitable for MNOs to push these older networks.

“So, the operators are trying to find a way to be more efficient in their spectrum allocation, and also more modern in building infrastructure which will enable them to be more profitable because they just deliver one common plane for network access for all the consumers.”

Because 5G is effectively upgrading and enhancing the 4G networks, 4G will not be shutting down anytime soon, with many observers agreeing that 4G still has a lot to give.

Bullock agrees and says that 4G is still evolving, and the shutdown of 3G services won’t have any immediate impact on these services.

“4G is completely mature and is still evolving. There'll be new releases of 4G software stacks, that will fit into existing infrastructure. Personally, I don't see any material impacts on the state of 4G when you retire 2G and 3G.”

Refarming key to unleashing 5G

Bullock does make a point to say that the current 5G we’re seeing now is more akin to a faster version of 4G, with plenty still left in the tank to extract.

Dropping the older technologies “will mean that operators have network resources to deploy 5G from spectrum to people, to various technical capacities it takes to deliver cellular network services,” he said.

“This is not going to make 5G deployments happen any faster necessarily, but it will be economic for operators to deploy.”

He adds that with the shutting down of these legacy systems, operators will have a greater focus on delivering 5G, ultimately making the most out of this next-generation technology.

3G going first

With many operators preparing to phase out the legacy networks, it’s actually 3G, which came later, that is being phased out earlier than 2G.

Surprisingly, 3G is less in use right now than 2G - although there are still those who will be impacted. The Alarm Industry Communications Committee found in a survey of its members that about two million security, fire, and medical alert devices remained on 3G. Senior citizens who still have their old cell phones may be impacted, too.

As of mid-2019, there were 80 million active 3G devices in use across North America.

2G, meanwhile, is still used in rural areas and for some early smart meters and Internet of Things and M2M services that rely on 2G for support.

2G may cling on a little longer, but 3G’s shutdown is proceeding at pace. For example, Italy’s Telecom Italia (TIM) recently shut down its 3G service, but has yet to confirm when its 2G service will go offline.

Bullock adds that it’s much more straightforward to switch off 3G services than 2G services, labeling 3G as a “stopgap technology.”

“Switching off 3G is more straightforward from a revenue and also a direct impact point of view,” he said. “There’s just not that many 3G things out there, it was almost a bit of a stopgap technology.

“There are no [new] consumer devices that are 3G-only, and there aren’t very many IoT devices that are still around that use 3G, so it’s easier to kill off.”

As for 2G, he explains that it’s a bit more complex, with a lot of smart meters using 2G in the past two and a half decades.

“For 2G, the biggest victims will be the smart metering industry. There have been a lot of smart meters around the world, in the last 20 to 25 years, and they'll have 2G SIMs in them, and 2G modules.

“So, it’s not just a matter of driving from one place to another, switching SIMs. Comms units will also need to be replaced and this could be an expensive pain. So, the pace at which 2G actually gets retired will be a function of the political weight of the electricity companies in their respective countries.”

Impact on IoT

With IoT products such as smart meters, water meters, and tracking meters still using 2G networks, Bullock noted it will have a big impact on this segment of the industry.

“A lot of businesses still depend on 2G, with a lot of devices out there needing these services, so there could be a serious business impact once 2G is switched off.”

He predicts that it will be hard to compromise and make everyone happy when 2G services are eventually retired, and that it could be expensive for some of these businesses to upgrade to replacement technologies such as LTE-M, NB-IoT, or LTE CAT-1.

“There's a lot of stuff in IoT that is still attached to 2G networks. Not all those people are going to be able to be made happy, and so there will be an impact on these businesses,” said Bullock.

“This flows into the replacement technologies of either LTE-M, NB-IoT, or LTE CAT-1. And all of the IoT businesses that have a large 2G installed base have been frantically figuring out what to do next for a few years, and they're mostly solving these problems now.”

Bullock warns businesses that don’t upgrade their IoT devices that it will be more expensive in the long run, and says their technology will stop working, with little that can be done.

Leaders in the switch-off

The US has been one of the leading markets for switching off these services, observed Bullock, who expects Europe to catch up.

AT&T and T-Mobile terminated 3G services earlier this year, while Verizon plans to shut down its 3G network by the end of this year.

For comparison, in the UK the government has given all four MNOs until 2033 to retire both 2G and 3G services, although it’s widely anticipated that services will be phased out much sooner; Vodafone, for example, is planning to switch off its 3G service next year.

In Belgium, the switch-off of 3G services is slightly further down the line: Telenet has signaled its intention to switch off its 3G network by 2024 as part of a phased shutdown, while Orange Belgium plans to switch off 3G a year later, in 2025, with 2G networks to be turned off by 2030 at the latest.

Either way, it remains to be seen what full benefits 5G will have once operators are able to redistribute the 2G and 3G spectrum.

As Bullock says, it won’t be an instant process and won’t necessarily happen everywhere at once.

But it’s an exciting time for the industry and could be vital in unleashing the true potential of 5G. Users just need to make sure they are prepared for when their networks may shut down. 


Taking down the infrastructure of cybercrime

Data center raids are rare, but cybercriminals love hijacking legit cloud instances

While data centers will often shout about the strict security around their perimeters, and in some cases even point to the presence of armed guards, it's rare that there’s anything close to conflict occurring. And on the few occasions there has been any sort of action on-site, it has usually been led by law enforcement, without resistance.

But while data centers are rarely the stuff of action films, they are regularly the source of illegal and nefarious activity. And the move to cloud is making it much harder for law enforcement to track down and take out the infrastructure of cybercrime.

Data center raids: Rarely Hollywood fodder

While data center raids are fairly common, they are usually quiet affairs with little fuss. A couple of agents or officers with a warrant are more likely than a SWAT team breaking down the door.

“Search warrants, or raids at hosting providers, are really not all that glamorous, to be honest,” says Matt Swenson, Division Chief of the Cyber Division at the Homeland Security Investigations Cyber Crime Center. “You usually just go into a data center with a search warrant that says you're legally authorized to search XYZ server, and the provider will find where that's being hosted. We’ll then make a copy of the data and take it with us. And we do that fairly regularly.

“If we're doing a search warrant at a threat actor residence then it's a little different, but that is very rare these days. It's not very Hollywood at all.”

However, while most raids are done quietly, there have been examples of major law enforcement activity at data centers through the years, as well as quieter searches that made the press.

Most notably, the 'CyberBunker' facility in Traben-Trarbach, western Germany, was raided by more than 600 police officers in September 2019. Eight people were convicted in 2021.

Built by the West German military in the 1970s, the site was used by the Bundeswehr’s meteorological division until 2012. A year later, it was sold to Herman-Johan Xennt, who told locals he would build a webhosting business there. Illegal services allegedly hosted at the German data center were Cannabis Road, Fraudsters, Flugsvamp, Flight Vamp 2.0, orangechemicals, and what was then the world's second-largest narcotics marketplace, Wall Street Market.

While less malicious than drugs, Swedish Police raided the Pirate Bay more than once in an effort to take the site down, including once in 2006 when some 65 Swedish police officers entered a data center in Stockholm, and again in 2014. During the 2006 raid, servers belonging to a number of other companies, including a Russian opposition news agency and GameSwitch, a British game server host, were seized. The site is still in operation today. Apparently at the time of the 2014 raid the Pirate Bay required just 21 virtual machines (VMs) to run; 182GB of RAM, 94 CPU cores, and 620GB of storage.

A similar example was Kim Dotcom of Megaupload. The New Zealand Police arrested Dotcom and three other Megaupload executives at a mansion outside Auckland in 2012. Reports suggest dozens of armed police swooped on the estate in helicopters around 7am on the morning of Dotcom’s birthday party, including several members of New Zealand’s elite counter-terrorist force. Dotcom remains in New Zealand and continues to operate the successor site Mega. Mathias Ortmann and Bram van der Kolk, who were both arrested during the 2012 raid, recently reached a deal that will see them avoid being extradited to the States in exchange for facing charges in New Zealand.

In 2014, the US Drug Enforcement Administration (DEA) and Internal Revenue Service (IRS) agents raided an Albuquerque, New Mexico data center run by a local provider called Big Byte. The DEA also searched the Pagosa Springs resort in Albuquerque, also owned by the same family. No arrests were made at the facility, which is still in operation today. No charges were brought against the owners, though a relative of the owners pleaded guilty to submitting a false federal income tax return.

In 2011, the FBI raided a colocation site in Virginia – reported at the time as possibly CoreSite’s facility in Reston – in search of servers being used to hack into the CIA and other major institutions and corporations. The agency seized servers of Switzerland-based hosting firm DigitalOne.

The same year, Dallas-based Tailor Made Servers was raided in hopes of finding the initiators of that month's cyber attacks on PayPal. As part of the same investigation, German police executed a warrant for a search of a German hosting company's offices.

Most recently in October 2021, police in South Korea raided an SK Corp data center that had recently suffered a major fire. Local police confiscated documents relating to the fire, which was caused by a battery and brought down the KakaoTalk messaging service and disrupted much of the country.

Working with law enforcement to bring cybercrime infrastructure down

Most cloud and colocation providers take little interest in what their customers actually do with the hardware or instances in a provider’s facility, and even providers in major data center markets can be used to host cybercriminal infrastructure.

Last year an Iranian malware campaign attacking targets across the world was found to be being hosted out of Dutch colocation data centers. Cyber firm BitDefender found the command and control (C2) infrastructure of two strains of malware linked to Iranian-attributed Advanced Persistent Threat (APT) actors being hosted within the Netherlands. The server was being hosted by American hosting company Monstermeg, which provides services out of Evoswitch’s AMS1 Amsterdam data center in Haarlem, and the malware had been present there since April 2020.

Monstermeg owner Kevin Kopp told Argos the company was not aware that this malware was on the server, despite two scanners that should detect this type of malware, but did cooperate in the investigation and gave Argos access to the information on the server. They have since stopped working with the tenant previously utilizing that machine.

“We see stuff hosted at big providers like AWS and DigitalOcean. We see a lot of infrastructure hosted at big Internet service providers like OVHcloud in the UK and France and throughout Europe, and a lot of smaller providers that are being utilized,” explains Swenson. “You name it, these guys will utilize it.”

He says that ‘most of the large companies’ are very cooperative and respond to the vast majority of legal processes. However, the international nature of cybercrime means US law enforcement often has to deal with actors and infrastructure based abroad, which can complicate issues.

“When we're working a case within the United States, and infrastructure is being hosted abroad, we rely on the cooperation of foreign governments to respond to legal process,” says Swenson. “But the process is not fast, particularly abroad, and a lot of times we don't have months to wait.

“If that country doesn't respond or isn't responsive to the US legal process, there's nothing we can really do in order to get a copy of that server. A lot of infrastructure is being hosted in Russia and Belarus, and we just can't get a lot of cooperation. A lot of cybercriminals know that, so they specifically stand up infrastructure in countries that are untouchable by US law enforcement.”

Swenson does note, however, that the FSB will cooperate with the US if it's a child exploitation investigation online.

Difficult, but takedowns do occur

While difficult and time-consuming, major takedowns of illegal infrastructure do happen.

Last year four men pleaded guilty in the US to conspiring to engage in a Racketeer Influenced Corrupt Organization (RICO) and face 20 years in prison for providing bulletproof hosting services to cybercriminals. According to the DOJ, between 2008 and 2015 the group rented Internet Protocol (IP) addresses, servers, and domains from which cybercriminals conducted attacks, including malware distribution, botnets, and banking trojans. Malware hosted by the organization included Zeus, SpyEye, Citadel, and the Blackhole Exploit Kit.

Operation Onymous was a concerted effort by agencies including the FBI, Homeland Security, and Europol to take on darkweb markets. Through cooperation with police forces in 17 countries, notorious markets including Silk Road 2.0, Cloud 9, and Hydra were taken down.

Artem Vaulin, founder of KickAss Torrents, was arrested after investigators cross-referenced an IP address he used for an iTunes transaction with an IP address used to log into KAT's Facebook page. The FBI also posed as an advertiser and obtained details of a bank account associated with the site.

While most of the world is looking at ways to lengthen the lifecycle of hardware and reuse the likes of servers, in the wake of raids and seizures little hardware survives once the investigation is over.

“If it's used in the commission of a crime it'll be wiped and destroyed,” says Swenson. “Way back when I first started, in kind of the early 2000s, we used to wipe a lot of computers and then repurpose them. But we moved away from that. So almost all of it now gets wiped and destroyed.”

He says one of the reasons for this is precautionary security, in case hardware has particularly resilient malware present that may be able to survive a hardware wipe.

Crypto: dangerous but useful

Cryptomining can be profitable but dangerous for criminals. A shootout at a cryptomining data center in Abkhazia, a separatist state recognized as part of Georgia, led to one man being killed during an attempted robbery by armed gunmen.

In February 2021, Spain's national police raided a building that they thought was being used to grow marijuana, only to find that it was an illegal cryptocurrency mining operation.

“We have seized a lot of equipment being utilized to mine for crypto,” explains Swenson. “A lot of times dark web criminals will have a side business where they are cryptomining. But it's not as common as it was a few years back, I think the hobbyists have kind of been pushed out.”

More common, however, is the use of cryptocurrency to pay for hosting and obfuscate their identity if there is an investigation by law enforcement.

“We see a lot of movement to the payment of infrastructure via cryptocurrency,” says Swenson.

“A lot of the hosting providers are now accepting various forms of cryptocurrency and that can add a layer of anonymity because they no longer have to provide a credit card or a bank account; they can just move it from a wallet that's been completely stood up without any sort of information that can be used for threat actor attribution.

“The hosting providers, they're in it to make money. And I don't necessarily think their number one concern is who's paying the bill. I don't know that they really care all that much because they're usually going to do the bare minimum that they have to do in order to be compliant.”

Cybercrime moves to the cloud

While malware and cybercrime infrastructure continues to live in physical data centers, much of it has been abstracted and virtualized to the cloud. And in the same way legitimate enterprises are looking to the cloud to reduce the amount of on-premise hardware they need to manage, criminals are copying that trend.

“[The German facility] is the only ‘illegal data center’ I've personally seen and heard of in the physical sense,” says Andrew Barratt, Principal Consultant of Adversary Ops at penetration testing firm Coalfire. “And I suspect because it's just really hard to do and go unnoticed; there's loads of just really dull logistical stuff that make it hard to run physically dark operations without making yourself a huge red flag to lots of people very quickly.

“But we've seen that the more sophisticated intruders are heavily leveraging compromised cloud environments where their approach is more about building up virtual data centers that can leverage infrastructure that they don't have to pay for.”

Threat reports from Palo Alto Networks' Unit 42 suggest Cloaked Ursa, a threat actor group affiliated with the Russian government, used Google Drive cloud storage services as well as Dropbox, a company that notably transitioned off the public cloud back to its own data centers. In 2019 and 2020, RiskIQ (since acquired by Microsoft) reported that Magecart credit card-skimming attacks were repeatedly being launched from poorly-configured Amazon Web Services Simple Storage Service (AWS S3) buckets. According to Malwarebytes, malware delivered over the cloud increased by 68 percent in 2021.
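The 'poorly configured' part of that Magecart story is often nothing more exotic than a bucket left open to the world. As a rough illustration of the kind of hygiene check that catches it - not a reconstruction of how RiskIQ worked, and assuming AWS credentials are already configured - a short script can flag buckets whose public access is not fully blocked:

```python
# Minimal sketch: flag S3 buckets without a full public access block.
# Assumes boto3 and AWS credentials are already set up; illustrative only.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        fully_blocked = all(config.values())
    except ClientError as err:
        # A bucket with no public access block configured at all is the riskiest case.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False
        else:
            raise
    if not fully_blocked:
        print(f"Review bucket: {name} (public access not fully blocked)")
```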

Lumen’s research arm Black Lotus Labs recently published research pointing to more than 12,000 Microsoft domain controllers - servers hosting the company’s Active Directory services - that are regularly abused to magnify the size of distributed denial-of-service (DDoS) attacks.

Such attacks are called ‘living off the land’ attacks, and they can be harder to spot and stop because companies often whitelist traffic from legitimate providers such as Google, Amazon, and Microsoft. Access to cloud accounts with credits already in hand to procure more compute resources can be sold for a high price, IBM reports.

Coalfire’s Barratt says it's not uncommon to see cloud accounts hijacked and used to mine cryptocurrency. A 2021 report from Google said: “86 percent of the compromised Google Cloud instances were used to perform cryptocurrency mining, a cloud resource-intensive for-profit activity.”

The NSO Group, which is less a cybercrime group and more a hacking-for-hire company serving state clients, was previously hosted on AWS infrastructure until it was kicked off the platform in the wake of an Amnesty International report into its operations. NSO’s Pegasus spyware is used by numerous governments around the world to spy on media, opposition political figures, activists, NGO workers, diplomats, and others. NSO is also known to use DigitalOcean, Linode, OVHcloud, UpCloud, Neterra, Aruba, Choopa, and CloudSigma.

The difficulty enterprises face managing virtualized, multi-cloud, and increasingly serverless infrastructure is also creating huge opportunities for cybercriminals.

“Most enterprises lack awareness of what's in their environment, or even where their crown jewels sprawl to,” says Joel Fulton, CEO of security startup Lucidum and previously CISO of Splunk. “And the bad guys are building infrastructure now so that it can be transitory.”

This combination of cloud-enabled sprawl and increasingly ephemeral infrastructure is providing a safe haven from which attackers can develop, store, and launch attacks.

“Cybercriminals need a place to store their software and a safe environment to distribute it,” says Fulton. “And those could be cloud EC2 instances or S3 buckets, for example, that are never well monitored; they'll find universities, non-profits, and large enterprises that don't control their sprawl, and they'll squat there in order to assemble the kits, practice their exploits, execute them on unmonitored systems, and refine the tool.”

He says criminals are also increasingly using short-lived cloud instances from hijacked legitimate accounts to probe and scan network perimeters and defenses, and then launch attacks.

“The attackers who make use of the cloud do so because it makes them a continuously moving target,” says Fulton. “With autoscaling groups, elastic responsiveness can spin up 20,000 or more computers in seconds, and sometimes they last just minutes. And enterprises don't have the ability to know that all 20,000 are theirs, or what is on them.

“If, for instance, a ‘legitimate’ server that only exists for three minutes probes you for vulnerabilities, it's fast, nobody can notice it. I would use one of those short-lived instances to collect my tools and preposition them. ”
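Fulton's 20,000-instance problem is, at heart, an asset inventory problem. A minimal sketch of the first step - assuming an AWS estate and a hypothetical 'owner' tag convention, neither of which comes from Fulton - might simply walk every running instance and flag the ones nobody has claimed:

```python
# Sketch only: list running EC2 instances and flag any without an "owner" tag.
# The tag name is a hypothetical convention used here for illustration.
import boto3

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instances")

unclaimed = []
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "owner" not in tags:
                unclaimed.append(instance["InstanceId"])

print(f"{len(unclaimed)} running instances have no owner tag")
for instance_id in unclaimed:
    print(" ", instance_id)
```

Even a sweep like this only sees what exists at the moment it runs - an instance that lives for three minutes will usually slip through, which is precisely the gap Fulton describes.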

That move to the cloud has changed how law enforcement approaches and deals with investigations, and has seen a massive shift in the types of devices seized during raids.

“I started as a digital forensic analyst in the mid-2000s and there was no cloud back then. Everything was stored locally, and we would see a lot of external drives and stuff like that,” says Swenson. Nowadays, it's almost the opposite, with very little data being stored locally.

“Where we used to go in and seize a bunch of computer towers, now it's a lot of iPads, Chromebooks, and phones that are then connecting to the cloud, and they're not storing anything locally.”

That change has made investigations far more difficult for law enforcement from a legal perspective, as cloud-hosted data can often escape warrants.

“[The cloud] has made things more problematic for us from a legal perspective: If I have a search warrant for a house and computers in a residence, I don't necessarily have the authority to grab the data from a cloud provider because it doesn't exist at the actual physical residence. We either have to write a separate warrant for cloud storage, or add it to the warrant if we're going into a residence. It's just a matter of figuring out where that's being hosted and then adding additional legal process into what we do.”


Ampere's core challenge

Building an Arm chip for the cloud

“The story of Arm in the data center isn’t new, that’s not an important detail. The real story is that we've hit a tipping point where the cloud requires something new because of the way it runs, and it needs to be efficient. Nobody who’s doing x86 is building that. It just so happens that we’re Arm-based, but we are building it.”

Over a wide-ranging discussion in London, Ampere Computing's chief product officer Jeff Wittich gave DCD a detailed run-through of the company’s technology roadmap, and why he thinks the five-year-old business can dominate the cloud CPU market.

Arm’s multiple attempts to break into the data center may not be new, but quickly revisiting the tale provides valuable context.

“You had the early failures that just weren't ready,” Wittich recalls. “The Calxeda chips, the Applied Micro stuff - the model was wrong. People thought that if you took small cell phone chips and scaled them out enough, you would have enough performance. The problem is that there is a minimum bar for performance per core; it's not zero.”

But they laid the groundwork, and got the ball rolling on a software ecosystem.

Next came the middle phase, where "a bunch of people had the wrong approach, and/or got out too early when it got hard, because it was a side bet as part of a differentiated business."

One such example is Qualcomm's Arm server processor line. Its Centriq 2400 chip was well received, and may have proved successful in the long run. But then the company got into a legal fight with its largest customer, Apple, and spent most of 2018 fighting off a hostile takeover attempt by Broadcom.

Bruised and in need of reassuring investors with cost cutting, it laid off hundreds from its data center division and killed the project. Ampere and Microsoft hired many of those let go.

But now, after years of failures, the stars finally seem aligned. Amazon Web Services released its Graviton series of chips, while Fugaku, the fastest supercomputer of 2021, runs on Fujitsu's Arm-based A64FX processor.

"The software ecosystem is there, so stuff is either at parity or in some cases even a little bit ahead of where the x86 stuff is," says Wittich. "The time is now."

Of course, once you have come to the conclusion that Arm is inevitable due to its energy efficiency and flexibility over x86, the next question has to be which form of Arm.

With AWS several generations into its own processor family, Microsoft known to be working on its own Arm chips, and Google likely cooking up something, what space is there for Ampere?

"They're big companies,” Wittich admits. “But our customers are a blend of all those companies, so we can get to massive scale. It’s also useful from a network effect perspective. As an end user, is it possible to be running on Graviton and the Ampere Altra processor across two clouds? Sure.

"But it's a level of complexity that doesn't need to exist when we can come in and run the same processor at Microsoft, at Google, or on an HPE box.”

Still, if a hyperscaler-made processor were significantly better, the lack of a network effect would not dissuade users. For Ampere to succeed, it needs to show a clear advantage, Wittich admits.

That's where its big bet on cores comes in.

UK-based Arm licenses out its eponymous instruction set architecture to chip designers. It also licenses out processor core technologies, with the latest being Neoverse.

That's what AWS relies on for Graviton. And it's what Ampere used to rely on.

"On Altra & Altra Max, we decided that we couldn’t sit around for five years building a core,” he says. “We said ‘let's get something out because otherwise we won't have customers, we won’t have feedback, and we won’t have an ecosystem.

“Going forward, with what we're doing with our own cores, it looks a lot different. And it starts to really deliver huge density and power efficiency, while delivering the type of performance that the cloud wants.

"I just don't know that we'll see that from Arm [cores].”

Ampere's view is that Arm's own cores will never put the data center first. "Arm develops the cores for the client product first, and then they adapt them to infrastructure cores a few months later. But at their heart, they were still developed for a different market."

That means a lot of other approaches to Arm server CPUs are flawed, Wittich argues. “It's one of the fundamental problems with a bunch of the CPU models today: they have features that make complete sense for a client processor, but make no sense in the cloud.”

Its cores, it argues, are targeted directly at the cloud. Even traditional high-performance computing is too tangential a market. “I'm not focused on that - you would make a different type of core.”

There is one other market that Ampere is targeting beyond cloud and on-prem cloud, though: the high-end Edge.

"The self-driving car company Cruise uses us in their vehicles,” Wittich reveals. “This isn't us getting into the automotive space, more that they needed a really high performance Edge server that would sit in the car. They couldn't actually find any other CPUs that within 100 watts that gives a reasonable performance, and our 64 core chip consumes 70 watts."

But the company hopes that cracking the automotive space will tie back to the cloud. "There's a bunch of other smaller Arm devices sitting everywhere in the vehicle, but a lot of the developers are just doing that stuff on x86 machines in the cloud, and then moving it over to the car. You're constantly porting back and forth. And it's a waste.”

Building its own cores also reduces its licensing fees to Arm, and insulates it from the chip designer's ups and downs. In late 2020, Nvidia announced it would acquire Arm for $40 billion but, after two years of distracting regulatory investigations, the deal collapsed.

Arm soon appointed a new CEO, who announced layoffs and plans for an IPO. "I didn't have to worry that much during all this stuff over the last two years," says Wittich. "We have an architectural license, and we can build what we want to build."

But Ampere is not alone in that, with others developing their own cores.

On the consumer side, Apple took that approach for its M1 Arm chips, first launched in 2020. On the data center side, things get a little more complex.

In 2018, three Apple veterans launched Nuvia to build their own Arm server CPU with their own cores. Just a year later, Apple sued one of the founders, claiming that he had worked on Nuvia while still employed by Apple.


In 2021, it seemed that Nuvia had put the controversy behind it, selling to Qualcomm for a respectable $1.4 billion. Curiously, it also seemed to have put the server chip behind it, with Qualcomm announcing that it would use the tech in mobile, IoT, and networking products, integrating it into Snapdragon.

A year later, it pivoted back, shopping around a server product over the summer.

Yet more confusion soon followed. In September, Arm sued Qualcomm - one of its largest partners - claiming that it had not agreed to Qualcomm’s use of Nuvia’s licenses, which it says it terminated in February.

Should it win the case, it could unwind a major acquisition for Qualcomm, and wreck its desktop and server chip plans. Even if the case is ultimately settled, it will delay and distract Nuvia - and it's hard to have faith in Qualcomm's management to maintain focus.

Wittich is diplomatic in his views on Nuvia. "We have our own cores, and that gives us a five-year lead over anyone who decides it might be time to start designing their own cores. Now, that's a big differentiator."

Another factor it hopes will give it a lead over Arm and non-Arm processors is its own chip-to-chip interconnect, which will let it go to a chiplet approach, where tiny dies are used instead of one monolithic die.

“Our first two products went monolithic, because it’s critical that our performance is really, really consistent,” he said. “We don't want any variability across the chip, where if you got placed in one core versus another core, the performance looks a lot different.

"We wanted to avoid any bottlenecks. A lot of the chiplet approaches to date have big bottlenecks, because there are too many hops and the latency is still too large from chip to chip.

“Our chiplet interconnect is done in such a way that we remove a lot of the common bottlenecks that occur in a chiplet-based approach,” he claims.

The company was able to get 128 cores into a single die; it plans several hundred as it goes chiplet. “We want to make sure that as you bring more and more cores online, that the performance per core doesn't go down. And that's not really the case with a lot of legacy x86 CPUs.”

That’s why this is not a story about Arm, he argues. “This isn't the old days of the Arm chips that come in and just undercut everybody on price, we're not the lowest price. But what we are is the highest performance processor, and we're the most power efficient processor.”

With x86, “the problem with it is that you're getting into a space where that additional performance gets really power inefficient,” Wittich says. “So you're adding 20 percent more power to get 10 percent more performance.”

At the rack and data center level, that doesn’t make sense, he says. “Each chip looks like it's delivering more performance, but overall you've just reduced your overall capacity. For no reason.”
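That rack-level argument is easy to make concrete. The figures below are purely illustrative - the 15kW rack budget and 250W baseline chip are assumptions, not Ampere or Intel numbers - but they show how chips that spend 20 percent more power for 10 percent more performance can shrink what a fixed power envelope delivers:

```python
# Illustrative arithmetic only; the rack budget and chip figures are assumed.
RACK_BUDGET_W = 15_000     # fixed power available to the rack
BASE_CHIP_W = 250          # baseline chip power draw
BASE_CHIP_PERF = 100       # baseline chip performance, arbitrary units

# "Adding 20 percent more power to get 10 percent more performance"
new_chip_w = BASE_CHIP_W * 1.20        # 300W
new_chip_perf = BASE_CHIP_PERF * 1.10  # 110 units

base_rack_perf = (RACK_BUDGET_W // BASE_CHIP_W) * BASE_CHIP_PERF  # 60 chips fit
new_rack_perf = (RACK_BUDGET_W // new_chip_w) * new_chip_perf     # only 50 fit

print(f"Baseline chips: {base_rack_perf:.0f} units per rack")   # 6,000
print(f"'Faster' chips: {new_rack_perf:.0f} units per rack")    # 5,500
```

Each chip is 10 percent faster, but the rack as a whole does roughly eight percent less work - the "reduced your overall capacity" Wittich describes.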

Still, while he eyes those x86 workloads as land to conquer, Wittich notes that he’s aware of where the Ampere chips’ limits are.

“Trying to make everything one size fits all is a disaster. We do awesome at inferencing on a CPU, but if you’ve got batch inference jobs that you're gonna plow through over the next 12 hours, maybe move that stuff off to an inference accelerator.”

“I don't think we're going back to a market where you've got one CPU that's deployed at 99 percent of the servers out there, we're not going back,” says Wittich, who was lured to the company after 15 years at Intel watching that market share fall. “The world's changed now.” 


The UK’s next big telco merger

Will four become three in the UK? Advanced talks between Vodafone and Three have got the industry talking about a potential merger and its chances of success

The telecoms market loves to consolidate.

We can see it in Europe, where Orange and MásMóvil are set to combine in Spain, and in South Africa, where MTN and Rain have been tussling over a merger deal with Telkom, with MTN recently pulling out of talks.

And we saw T-Mobile merge with Sprint in 2020, in a bid to compete with AT&T and Verizon in the US.

The next big potential merger on the line could be between Vodafone and Three, in the UK.

There have been rumblings of this potential merger for a while, but in late September Vodafone confirmed both operators are in advanced talks over a deal, which could potentially value the combined business at between £12 billion and £15bn ($13.4bn to $16.8bn).

In a statement, Vodafone said the envisaged transaction would see Vodafone owning 51 percent of the business, with conglomerate CK Hutchison owning the remaining 49 percent.

"By combining our businesses, Vodafone UK and Three UK will gain the necessary scale to be able to accelerate the rollout of full 5G in the UK and expand broadband

>>CONTENTS

connectivity to rural communities and small businesses," Vodafone said in a statement at the time.

The operator also added that the “merged business would challenge the two already consolidated players for all UK customers.”

Vodafone is referring to EE, which merged with BT in 2016; EE itself was created as Everything Everywhere back in 2010, when Orange and T-Mobile merged, before rebranding as EE in 2012.

The other big operator in the UK is of course O2, which merged with Virgin Media in a £31.4bn ($35.3bn) deal last year - another fusion of mobile and broadband assets, this time split 50:50.

So, unlike the two previously mentioned mergers, this deal would connect two mobile-focused operators, although Vodafone does boast a sizeable broadband business.

An obvious move

"Confirmation of a potential tie-up between Vodafone and Three comes as no surprise - the two companies have made no secret of their interest to consolidate,” says CCS Insight’s director for consumer and connectivity Kester Mann.

"The leading motivation to join forces is scale. In telecommunications, the most successful companies tend to be the largest; bulking up would offer many synergies and cost-saving opportunities. Under the status quo, it’s hard to see either operator growing enough organically to challenge BT and Virgin Media O2 for size in the UK."

Meanwhile, James Gray, managing director of Graystone Strategy, says the ramped-up discussions and talk of the merger have been the worst-kept secret in the industry.

Gray is uniquely positioned to comment on these talks as he’s worked for both companies. At Vodafone, he worked across a range of marketing roles for a decade between 2002 and 2012. More recently he was a marketing strategy consultant at Three for close to three years until 2020.

He agrees with Mann that a merger would make sense for both operators.

“They do make good complementary partners. Vodafone is more premium than Three and has a strong enterprise base compared to Three’s limited enterprise base.

“Three has a more youthful brand than Vodafone and it appeals to younger datahungry consumers who are less likely to be found on the Vodafone base.”

Why now?

Understanding how this deal came to be first requires understanding the UK telecoms market. The mergers between EE and BT, and O2 and Virgin Media, have seen these companies increase their connection numbers drastically.

After combining forces last year, Virgin Media O2 now has more than 47 million customers across broadband, mobile, phone, and home services.

Meanwhile, BT and EE, along with broadband provider Plusnet, which was acquired by BT Group in 2007, boast 35 million subscribers, across mobile and broadband services.

In comparison, Vodafone has over 18 million subscribers, most of which are mobile, bar around 600,000 broadband customers. Three UK has about half that, with around 9.3 million customers.

Although the combined total of more than 27 million customers would still be lower than that of the other two big operators, mobile subscribers would make up the bulk of it.

Phil Sheppard, formerly director of network strategy and architecture at Three UK and now a self-employed telecoms consultant, says it’s in the interest of both operators to merge, noting that Three UK especially could benefit from the scale that a merger would bring.

“Both parties need this,” he says. “The public analysis of Vodafone is that its UK business is an underperforming market compared to some of its other markets globally and so Vodafone needs to do something about this. As for Three, it’s the smallest operator so it desperately needs some scale in order to perform better. I do think it works for both.”

Sheppard points to a merger between Vodafone and Three in Australia, which provides evidence that a merger between the two groups can be successful in the UK.

“From what I’ve seen, it was a tricky project to integrate the networks at the time, and there were some challenges combining the two,” explains Sheppard.

“But once these challenges were overcome, it’s become a scalable enough network for the company. It’s successfully managed to keep trading in the country and has a long-term vision.”

5G is a big asset

It’s worth noting that Three UK has a solid asset at its disposal, and that’s the 5G spectrum the operator has picked up in recent years.

The company has acquired 160MHz of 5G spectrum in total, with 100MHz of this in a contiguous block.

As a result, Three claims its 5G network covered 56 percent of the UK as of July 2022, spanning more than 400 locations and some 3,200 sites.

Sheppard acknowledges Three’s 5G spectrum gains, and notes that Vodafone has a good stock of spectrum in the lower-frequency bands as well.

“I’d say that all the operators have a decent amount of spectrum, so I don’t think the merger is being done purely for this basis,” he said.

“Three’s 5G spectrum holding is very good, slightly better than the others, but then again Vodafone’s low-frequency spectrum holding, which provides the wide-area coverage, is better, so I think the combination is good.”

“With the potential merger of Three and Vodafone, the joint company will have access to a large spectrum asset,” said NTT Data UK & Ireland’s head of networks, Sharad Sharma.

“This will enable a joint venture to take some technologically advanced initiatives around 5G and later 6G, putting the UK at the forefront of tech innovation.

“Moreover, with the joined-up might of two large organizations, we expect to see the expedited rollout of 5G networks and broadband to rural areas. Vital services will benefit from faster connectivity – enabling faster response times, better patient care, and more reliable service all across the public sector in underserved areas.”


Vodafone under pressure to perform

A merger has been on the horizon for some time for Vodafone, with its chief executive Nick Read coming under considerable pressure from shareholders in the past year following disappointing financial performances.

It’s no secret that Vodafone is keen to streamline its business, with a recent investment from French tycoon Xavier Niel, who purchased a 2.5 percent stake in the company.

Niel, who is a founder and major shareholder in French telco Iliad, is keen to pursue consolidation opportunities, further driving speculation that a merger between Vodafone and Three is likely to happen.

The operator recently acquired Portuguese operator Nowo from MásMóvil, but has sold off large chunks of its business in other markets, notably selling its Hungarian business unit for $1.8bn in August and its Egyptian unit to Vodacom (in which it owns a large stake), while it is reportedly in talks to sell its Ghana business to Telecel Group.

Vodafone Group is also looking to sell a stake in its Vantage Towers business unit, with several potential buyers reportedly lining up bids, including American Tower, Cellnex, Brookfield Asset Management, and DigitalBridge Group - a sale that could fetch the company as much as £12bn ($12.83bn).

Meanwhile, Three’s CEO Robert Finnegan has made no secret of his views on a merger deal, and consolidation for the UK market. He’s previously said that the UK market has too many players, and has repeatedly referred to the market as “dysfunctional.”

T-Mobile US blueprint?

But can consolidation be successful for both operators? Even if this does go through, both operators will continue to lag behind the other two for customer numbers.

Still, a similar situation occurred in the US, where the third and fourth biggest operators at the time, T-Mobile and Sprint, merged.

The merger was first proposed in 2018 by the then T-Mobile CEO John Legere and Sprint CEO Marcelo Claure. It was estimated to be worth $23bn once it was finalized in 2020.

The merger has been something of a success: the combined company, which kept the T-Mobile name, has been driving its 5G deployment, claiming to cover more than 5,000 cities and towns nationwide.

The company is also clearly number one among US operators for 5G area coverage, with a report by WhistleOut revealing that its 5G footprint extends to 53 percent of the country. In comparison, rival networks AT&T and Verizon are reportedly lagging behind with 29 percent and 12 percent coverage, respectively.

Mann believes that the merger of T-Mobile and Sprint could be a useful blueprint for Vodafone and Three to follow, as the US operators shrunk down from four MNOs to three.

“T-Mobile is on a roll at the moment. It has all that great mid-band spectrum from Sprint, and has been able to roll it out fast. It’s definitely putting pressure on Verizon and AT&T and that’s good for the industry because it puts pressure on the incumbents. I think so far, this is a great example of what a successful merger can be.”

It should also be noted, however, that T-Mobile has laid off a significant number of employees in recent times, something that then-CEO Legere said the company wouldn’t do.

Little is known about what Vodafone and Three’s plans would be, but any merger would likely see a lot of empty retail spaces on the high street as four operators dilute into three. Job losses are also likely, framed no doubt as improving efficiency - but if cuts are too aggressive, they could cause problems.

Sheppard is keen on the idea of consolidation, but only if the impact on consumers is positive.

Approval will be needed

Before any merger deal is completed, there will be a fair bit of scrutiny, especially around competition, and all three people DCD spoke to agreed that the Competition and Markets Authority (CMA) and UK regulator Ofcom will have a lot to look at.

“Should any deal materialize, regulation would be a major hurdle,” says Mann.

“It would be up to the competition authorities to decide whether reducing the number of players is for the overall good of the market. Advocates will argue it encourages investment; dissenters will claim it’s a reason to push up prices.”

Back in 2016, a proposed £10.45bn ($11.7bn) merger between O2 and Three was knocked back by the European Commission, on the grounds that it would have reduced competition in the UK mobile market.

The decision was overturned in 2020, after Three's parent company CK Hutchison appealed, with the court finding there was no evidence that the merger would hurt competition.

By then it was too late: the merger had been blocked, and O2 has since merged with Virgin Media instead. Gray, though, believes this time will be different, noting that the UK is no longer in the EU, and that the appetite for consolidation is greater in general.

“The CMA and Ofcom will want to put a lot of scrutiny on this, as the companies are very similar mobile offering businesses. It’s different from the Virgin and O2 merger, with one business being more broadband and the other mobile-focused.

“However, the UK is no longer part of the EU and that deal was referred to the EU for review. This would be a decision made largely in the UK, while this EU decision actually was overturned.”

Gray expects there to be a less aggressive approach from regulators over a potential merger, adding that the amount of investment needed for 5G is significant for all the operators, and that it may be in everyone's interests - and more sustainable for the market - if the two operators merge.

Sheppard points out another potential barrier to a deal: the network-sharing agreements that the operators have in place.

“The complex network sharing deals could be an issue, notably as Vodafone shares with O2, and Three shares with EE, and this involves sharing different pieces of networks, sites, sharing arrangements, and suppliers. They’ll need to work out what to do with that in order for a deal to be approved,” says Sheppard.

The market has consolidated a lot in recent years, even if there has been some resistance to the idea of slimming down choice.

However, there have been successful showings of consolidation, notably the previously mentioned T-Mobile and Sprint merger.

Gray believes that, although the other UK operators, EE and Virgin Media O2, won’t necessarily welcome a merger, it won’t have come as a surprise to them either.

“They will have watched the speculation, much like we have, and they’d have planned how they will react to this when it happens.

“There may even be a short-term benefit for the other operators, as Three and Vodafone will be somewhat distracted initially as they attempt to merge. It will take some time, and this could be an opportunity for the other operators to step in and pick up customers if the customer experience drops during the merging process,” adds Gray.

Mann expects that the deal will likely go ahead, as does Sheppard.

“If I was to guess on this [whether a deal will happen], I’d say yes,” said Sheppard.

“However I’d expect it to be heavily scrutinized and there will be some mitigations put in there that might be complex and difficult. By mitigation, I mean obligations to the other telcos as there are network-sharing deals and potentially some concessions. It should get through this time.”

The Virgin Media and O2 merger was provisionally cleared by the CMA within 12 months of the deal first being announced. It remains to be seen if the same will happen for Vodafone and Three.

Both parties can take encouragement from Thailand, where regulators recently approved a merger between DTAC and True - one that effectively leaves just two major operators jostling for position.

However, this October Virgin Media O2 pulled out of plans to acquire TalkTalk due to market and regulatory uncertainties, with the deal thought to be worth £3 billion ($3.44bn).

It remains to be seen what the outcome will be, but for Vodafone and Three, it might just be a necessary fit to challenge the other two players in the UK. 


Making the most of your waste heat

After years of hot air, the digital boiler market is heating up

As data center operators strive to make their facilities more efficient, some are embracing the concept of reusing waste heat.

Greenhouses, fish farms, and apartments have benefited from the idea - with servers using the heat that they don't want to warm those that need it.

But it requires data centers to be near district heating systems or farms, and is viewed as a secondary priority after the main job of being a data center. What if we flip the idea on its head? What if we find the places that are already using electricity to create heat, and stick compute in the middle?

A few companies have tried to build digital boilers and heaters over the past decade, but the idea has struggled to find traction, with several startups going bust or pivoting out of the sector. Now, French company Qarnot thinks it has managed to solve the challenge.

"10 years ago, we were considered to be crazy," Paul Benoit, the company's CEO, told DCD

"And what is very interesting is that for a year now, what we do is considered the way to go to deploy more IT infrastructure with less energy," he proclaimed.

It's a bold statement that may slightly oversell the company's success - many are still uncertain about the concept - but there are signs things are changing. In 2020, the company raised around $6.5 million, adding to the $2.5m it had received from Data4 Group a few years earlier.

Qarnot also has clients it can point to, including Société Générale, KTH Royal Institute of Technology, and animation house Illumination (best known for the Minions franchise).


Before we unpick how it got to those deals, first let's understand how we got here. "In the beginning, we made a space heater for houses," Benoit explained.

The concept was that, instead of using your electricity to produce heat through a traditional radiator, you would use the same amount of electricity to power a Qarnot box that looks like a radiator. As well as producing heat, the box would generate compute as a side product, which could be sold to other companies as distributed computing.

But the economics were tricky as the equipment cost significantly more than a comparable heater. There was also a fundamental flaw: Data centers require 24x7 compute, while people don't require 24x7 heating. There were also security concerns about putting servers in people’s homes, and limitations to residential fiber connectivity.

In the Netherlands, these issues led to rival e-radiator business Nerdalize declaring bankruptcy in 2019.

"So now we go to social housing [and large apartment blocks], where we can run 24x7 for hot showers and stuff," Benoit explained. "We're stopping doing heaters and going more for boilers,” he said

One exception, a much-publicized crypto-heater launched in 2018, was mostly a marketing stunt, Benoit admits.

“It was B2C, which we’re not doing anymore, and it was quite expensive,” he said, but added that those that bought it would have nearly doubled their money on Ethereum mining at the currency’s peak. Benoit would not disclose how many units were sold, but said it was small. “We’re focused on B2B now.”

There's still a limit to housing blocks, though: "The scaling is not great, because you cannot put 500kW of hot water 24x7 in that housing [complex], even if it's a large one," he said. But Europe is building more local district heating, covering multiple such properties from one location - "this is our strategic focus."

The hope is to convince such sites to use their energy twice, both for heating and computing, while paying only for the heating. The digital boilers themselves are more expensive than conventional ones, because you're replacing simple heating equipment with semiconductors and IT infrastructure. The IT infrastructure also becomes obsolete much faster than normal boilers do.

"We provide the chassis, and replace the servers every five years," Benoit said. The company works with circular economy firm IT Renew to use recertified and new Open Compute Project servers.

To offset the higher costs, Qarnot partners with a heating company "and then we sell the heat we produce at a much lower price to them than they sell the heat themselves."

Traditional air-cooled data centers produce waste heat at around 30°C (86°F); Qarnot’s boilers use direct water cooling to deliver water at 65°C (149°F) through 2cm copper pipes.
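Those temperatures matter because the useful heat in a water loop scales with flow and temperature difference (Q = ṁ·c·ΔT). Taking the 500kW site size mentioned elsewhere in the piece, and assuming a 40°C return temperature - the article only gives the 65°C supply figure - the required flow is modest:

```python
# Back-of-envelope only: the 40°C return temperature is an assumption.
SPECIFIC_HEAT_WATER = 4186    # J/(kg·K)
supply_c, return_c = 65, 40   # °C
heat_kw = 500                 # thermal output of a boiler site

delta_t = supply_c - return_c
flow_kg_s = heat_kw * 1_000 / (SPECIFIC_HEAT_WATER * delta_t)
print(f"Required flow: {flow_kg_s:.1f} kg/s (roughly {flow_kg_s * 3.6:.0f} m³/h)")
```

Run the same numbers with a 30°C supply and the ΔT collapses, which is why low-grade data center exhaust is so hard to sell.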


"Megawatts of heat at 30 degrees? It's crap, you cannot do anything with it," Benoit said.

At the same time as it sells the heat, Qarnot then sells its compute to customers, at a low price because the electricity costs have been offset.
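The result is that one kilowatt-hour of electricity is monetized twice. The sketch below models that with entirely made-up prices - none of these figures come from Qarnot, and hardware, fiber, and maintenance costs are ignored - but it captures why discounted heat plus discounted compute can still add up:

```python
# Assumption-only model of the double revenue stream; no Qarnot figures used.
ELECTRICITY_PRICE = 0.20   # €/kWh paid for power (assumed)
HEAT_MARKET_PRICE = 0.09   # €/kWh the heating partner normally charges (assumed)
HEAT_DISCOUNT = 0.5        # heat sold to the partner at half its usual price (assumed)
COMPUTE_PRICE = 0.16       # €/kWh of server draw charged for compute (assumed)

kwh_in = 1.0
heat_revenue = kwh_in * HEAT_MARKET_PRICE * HEAT_DISCOUNT  # nearly all power ends up as heat
compute_revenue = kwh_in * COMPUTE_PRICE

net = heat_revenue + compute_revenue - kwh_in * ELECTRICITY_PRICE
print(f"Gross margin per kWh: €{net:.3f}")  # €0.005 with these assumed prices
```

The margins on energy alone are thin; the appeal is that the compute is sold below data center rates while the heat partner still gets cheaper heat - the same electricity doing two jobs.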

That compute infrastructure, which is either paid for by the customer upfront or on demand, is accessed through interconnection points at investor Data4's data centers, and managed through a custom software stack.

While the deployments are close to potential end users, it’s important to note that it is currently not an ‘Edge’ system focused on low latency. Instead, Qarnot is targeting batch-processing high-performance computing workloads.

“Most of our clients are not very sensitive to latency, because they are doing simulations, and workloads like that,” Benoit said. There is still an opportunity for placing the boilers in the same building as clients, or at least nearby, but Benoit does not see it as a priority. “It would be great,” he said. “But the problem is more on the business side, because if you have to discuss with the heat guy of the building and the IT guy of the company, it may take 10 years to make the deal. We don't do this, but it can be a way to go [in the future].”

French bank Société Générale is trialing the system for risk computation. "It's a first proof of concept, but the project [is expected to grow to] 500kW-1MW sites, the sweet spot for district heating that is 24x7," he said.

As Qarnot deployments grow larger, they bump into a different challenge: Connectivity. Nobody wants their residential block's Internet being sapped by their boiler, and nobody wants to pay for a data center that is limited to residential fiber rates.

"Historically when we deployed heaters in the building, we would deploy 100-300 of them and use fiber to the building," Benoit said. "Now we're building larger sites, so we deploy dark fiber because the investment is reasonable compared to the infrastructure and it can be financed by the end user."

The company currently rolls out dark fiber for projects of 500kW and above, but plans to do so for sites in the lower hundreds of kilowatts going forward.

It may also need to build in more redundancy as its sites grow bigger. "It's totally possible if clients want it, and some do, but most do not ask for it," Benoit said. "We have to be able to understand the needs of the client and then discuss whether they want to pay for it.”

The next difficulty is finding those sites, with district heating systems only available in some countries. “In France, district heating penetration is very low,” Benoit admitted. “We work in Finland and want to work there more, as well as in all the Nordic countries, where we have partners. We also have discussions in Canada, and in Japan - but Japan is special in terms of infrastructure.”

His hope is that, as countries invest in net zero infrastructure, more district heating systems will begin to be rolled out across the world - at least in the colder regions.

“The opportunity is gigantic,” he said.

This feature is from our Critical Power supplement: Read the rest for free here 


Brace for impact

The last decade has been one of prosperity and growth for the data center sector. Even the pandemic, which destroyed other industries, meant a fresh wave of investment and expansion as the world rushed online.

But now comes trouble. The global economy is teetering, and the future is growing ever more uncertain. How will the data center sector fare in a recession?

The good news is that it’ll do better than many - core data center services are a necessity, so enterprises won’t be able to drop them entirely. But it would be naive to think that there will be no impact.

As cost-cutting becomes a priority for businesses, expect them to drop non-essential workloads and expansions. Startups will slow, relying less on unsustainable business models that leverage the cloud to scale rapidly.

On the building side, debt markets will not be as friendly as they once were, and investors will not be as abundant. Chip fab manufacturing will be scaled back, slowing innovation in new processors and technologies.

At the same time, pressures on energy markets remain heavy, with the Ukraine war unlikely to end soon.

Already, the most fragile in the sector have declared bankruptcy: Sungard, UKCloud, Datacenter Almere.

They had preexisting issues that were exposed by the worsening times. Others will fare better, but they will not be unscathed. Even the hyperscalers, often seen as endless pits of money and power, are feeling the pinch. Their market caps have fallen dramatically, and they warn of lower capex spending - which will filter down through the whole industry.

A recession is not an existential threat for the sector, but it is a time to be cautious. The dotcom bubble brought Equinix to its knees, and led to the end of Exodus Communications.

Equinix recovered, prioritizing a sustainable business model over the excess of the early Internet era. Others would be wise to take note. 

