From the Editor
How to move a data center out of a war
This issue, we hear how cloud technology evacuated a data center from the war in Ukraine.
It's a heroic tale, and teleporting servers from Kyiv to Frankfurt is a powerful application of modern cloud technology.
As well as the workloads of Wise Infotec, the rescuers also saved the files of two universities, otherwise at risk from Russian invaders.
Despite the grim backdrop, which continues as we write, it's a story of optimism and ingenuity.
How data centers relate to their neighbors matters, because they are so huge
Tech geopolitics
Way back in the 19th century, war impacted tech: Britain lost contact with a large part of its empire when Egypt severed a vital telegraph cable.
Unbelievably, the successors to that cable are still a single point of failure in communications between Europe and Asia - a risky and expensive bottleneck which, it seems, could only be opened by Google brokering a deal involving Israel and Saudi Arabia.
Meanwhile, in the rest of Africa, as fiber and data centers bring new generations online, there's another issue. Who owns the infrastructure, and who owns the skills? The emphatic answer is African nationals.
On land and water
This issue, we visited facilities in unusual real estate. One is on a barge near San Francisco Bay, while another is in Singapore, where until recently a moratorium prevented other new projects from going ahead.
Like any data center, both had to address the issues of cooling and efficiency, but in both cases, those issues were put in sharper focus by the land or water where they stood.
And elsewhere in our APAC supplement, we found out about a data center in Japan, where shoveling snow is the bane of local life, but that snow turned out to be a boon for the White Data Center.
Who goes there?
Keeping to practical issues, we decided to have a thorough look at how data centers keep out intruders.
At this stage, we haven't gone out and attempted to break into data centers ourselves. But we have spoken to people who have - and some of their methods may surprise you.
Penetration testers see things very differently to you or me.
An odd-looking future?
We may be seeing the end of supercomputing history. Over the next few years, new standalone HPC systems may become prohibitively expensive, and supercomputing will move to the cloud.
At the same time, advanced AI has become so pervasive, we were able to borrow some to experiment with magazine illustrations.
The results are well worth looking at - but we think our designers won't be out of a job any time soon.
Meet the team
Executive Editor Peter Judge @Judgecorp
Editor-in-Chief Sebastian Moss @SebMoss
News Editor Dan Swinhoe @DanSwinhoe
Telecoms Editor Paul Lipscombe
Reporter Georgia Butler
Partner Content Editor Claire Fletcher
Head of Partner Content Graeme Burton @graemeburton
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Designer Eleni Zevgaridou
Head of Sales Erica Baeta
Conference Director, Global Rebecca Davison
Conference Director, NAM Kisandka Moses
Channel Manager Alex Dickins
Channel Manager Emma Brooks
Chief Marketing Officer Dan Loosemore

Head Office
DatacenterDynamics 22 York Buildings, John Adam Street, London, WC2N 6JU
© 2022 Data Centre Dynamics Limited. All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.
The data shifted from Kyiv to Frankfurt to save Wise Infotec

Peter Judge Executive Editor

The biggest data center news stories of the last three months
Dominion Energy admits it can’t meet data center power demands in Northern Virginia News
North American utility Dominion Energy says it may not be able to meet demands for power in Ashburn, Northern Virginia, delaying building projects in the world’s fastest-growing data center hub by many years.
Dominion has told customers that it has power supplies, but can no longer guarantee to deliver the quantity of electricity customers want via overhead powerlines. If these warnings prove true, this could stall projects with billions invested, and Loudoun County’s tax revenue would take a severe hit if the hub of data centers in Ashburn stalls. For now, local authorities and industry bodies are struggling to understand the sudden warning from Dominion.
Dominion supplies electricity in Virginia, North Carolina, and South Carolina, as well as natural gas to parts of the US. In the data center-rich counties of Loudoun, Prince William, and Fauquier, most of the electricity is carried by overhead powerlines marching along roads - a delivery method that has led to protests.
Loudoun County has 26 million square feet of data center space, with 5 million more in development and many more projects planned. Data center equipment taxes provide one-third of the County’s tax income, but the County has recently faced a backlash from residents who want fewer facilities built.
Now they may get their wish - and may suffer from a lack of funds to the County coffers in the process.
Dominion could halt power delivery for new data center developments until 2025 or 2026, according to a warning that emerged in a note from Wells Fargo analysts to clients.
Facilities which are only three to six months from completion should get their power, said the note, but any further from completion might face “significant delays.”
“The issue stems not from a lack of power generation capabilities, but rather, an inability to distribute over high-voltage power lines to Ashburn,” said the Wells Fargo note, which DCD has seen. Even if developers have a commitment letter from the utility, they might still not get their power.
Wells Fargo predicts that data center construction planned for 2023/2024 could be “significantly delayed,” which would derail 90MW of hyperscale data center commitments, along with more leases planned for years to come.
bit.ly/LeftPowerless
NEWS IN BRIEF
FedEx to close data centers, retire all mainframes by 2024, saving $400m
A known Oracle Cloud and Microsoft Azure customer, FedEx plans to go all-in on the cloud. It is also a Switch customer.
European operators plan to cut water use to 400ml per kWh by 2040
The proposal was put forward by the Climate Neutral Data Centre Pact, a group of operators which includes 90 percent of the European data center sector.
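For a sense of scale, here is a rough back-of-envelope calculation of what that target implies - the 100MW facility size and round-the-clock full load are our own illustrative assumptions, not figures from the Pact:

```python
# What a 400ml-per-kWh water target implies for a large facility.
# The 100MW load and 24/7 full utilization are illustrative assumptions.

target_wue_l_per_kwh = 0.4          # 400ml of water per kWh
facility_load_mw = 100              # hypothetical facility
energy_kwh_per_day = facility_load_mw * 1_000 * 24

water_l_per_day = energy_kwh_per_day * target_wue_l_per_kwh
print(f"{energy_kwh_per_day:,.0f} kWh/day -> "
      f"{water_l_per_day:,.0f} litres/day ({water_l_per_day / 1_000:,.0f} m3)")
# -> 2,400,000 kWh/day -> 960,000 litres/day (960 m3)
```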
Home building to halt in West London, due to data center power demands
The Greater London Authority has told developers that new housing projects in West London could be banned until 2035, because data centers have taken all the electricity capacity.
Coolant shortage will hit chips and data centers after pollution forces 3M to cease production in Belgium
Because of high levels of toxic chemicals around the Zwijndrecht plant, 3M was forced to cease production of the two coolants 3M Novec and 3M Fluorinert there.
Ammonia could power data centers, says Fujitsu, given better catalysts
The Japanese giant has been working with Icelandic startup Atmonia since April to try and find better catalysts for a green production process. One reason for this is that Fujitsu wants green hydrogen to power its own data centers and those it operates for customers.
Customers sue 365 Data Centers over alleged ransomware which caused outage
According to the legal action, 365 Data Centers failed to secure its systems properly, leading to a ransomware attack which caused a significant outage to 365’s retail colocation services. “The intended target was a third party,” a leaked email by the CEO said.
Short seller Jim Chanos bets against data center REITs
Investment manager Jim Chanos is taking large short positions against data center real estate investment trusts (REIT), betting that cloud providers will take their business.
Short selling is when you borrow a security and sell it on the open market, then purchase it back at a later date for a lower price - keeping the difference after repaying the initial loan. The more a share price collapses, the more a short seller makes - however, if the price rises, they will lose money.
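For readers unfamiliar with the mechanics, the arithmetic is simple enough to sketch - all figures below are invented for illustration:

```python
# Short-sale profit and loss: borrow shares, sell now, buy back later.
# Prices and fees here are invented for illustration.

def short_pnl(shares, sell_price, buyback_price, borrow_fee=0.0):
    """Profit (negative = loss) on a closed short position."""
    return shares * (sell_price - buyback_price) - borrow_fee

print(short_pnl(1_000, 140.0, 120.0, borrow_fee=500.0))  # price falls: +19,500
print(short_pnl(1_000, 140.0, 160.0, borrow_fee=500.0))  # price rises: -20,500
```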
Chanos rose to prominence for successfully shorting Enron, and profited from the fall of Luckin Coffee, Wirecard AG, and The Hertz Corporation. However, he has lost money betting on the collapse of the Chinese real estate market and the fall of Tesla.
“This is our big short right now,” Chanos told the Financial Times about his bet against data center REITs.
REITs are companies that own, and usually operate, income-producing real estate - in this case, data centers. To qualify, companies must invest at least 75 percent of their capital in real estate assets, must get at least 75 percent of their income from those real estate assets, must have at least 100 shareholders, and must distribute at least 90 percent of their taxable income to shareholders.
In return, they receive tax breaks and other benefits.
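Those thresholds amount to a simple checklist. The sketch below just encodes the figures quoted above - it is an illustration, not a legal test, and the actual US tax rules are far more detailed:

```python
# REIT qualification thresholds as quoted above - an illustration only;
# the real IRS rules are considerably more detailed.

def qualifies_as_reit(assets_in_real_estate_pct, income_from_real_estate_pct,
                      shareholders, taxable_income_distributed_pct):
    return (assets_in_real_estate_pct >= 75
            and income_from_real_estate_pct >= 75
            and shareholders >= 100
            and taxable_income_distributed_pct >= 90)

print(qualifies_as_reit(80, 85, 5_000, 92))  # True
print(qualifies_as_reit(60, 85, 5_000, 92))  # False: too few real estate assets
```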
Data center companies like Digital Realty, CyrusOne, Equinix, Iron Mountain, CoreSite, QTS, and Keppel DC are all REITs, with JLL estimating that more than 70 percent of the world’s data centers are REIT-owned. More than 72 million square feet (6.7m sq m) of data center space is owned by publicly-listed data center REITs.
But Chanos, and his investment group Chanos & Company, are betting that data center REITs’ market domination is set to come to an end.
“The story is that although the cloud is growing, the cloud is their enemy, not their business,” he told the FT. “Value is accruing to the cloud companies, not the bricks-and-mortar legacy data centers.”
He continued: “The real problem for data center REITs is technical obsolescence. Their three biggest customers are becoming their biggest competitors. And when your biggest competitors are three of the most vicious competitors in the world then you have a problem.”
Digital Realty’s Bill Stein told CNBC that Chanos “maybe isn’t aware that demand has never been stronger in our space,” pointing to increasing bookings and revenue.
He added: “[Chanos] refers to cloud service providers as our competitors, but we view them as our partners, and they view us as partners. We enable their growth around the world.”
Sector investors Jefferies and GGV also came out in support of data center REITs following Chanos’ public bet. Shares in Digital Realty fell at the outset of the short, but have since risen on strong earnings.
bit.ly/ShortThinking
Three people in critical condition after “electrical incident” and explosion at Google Iowa data center

Three people were brought to hospital after being critically injured by an “electrical incident” at the Google data center in Council Bluffs, Iowa.
The patients are awake and talking, but were rushed to the Nebraska Medical Center in critical condition - with one taken by helicopter due to their condition.
“We are aware of an electrical incident that took place today at Google’s data center in Council Bluffs, Iowa, injuring three people onsite who are now being treated,” Google said in a statement.
“The health and safety of all workers is our absolute top priority, and we are working closely with partners and local authorities to thoroughly investigate the situation and provide assistance as needed.”
It is believed three electricians were working on a substation close to the main data center when an arc flash electrical explosion caused major burns.
The data center is one of Google’s earliest facilities, announced in 2007 and brought online in 2009.
The incident happened several hours after an outage to Google Search, which some publications erroneously tied to the explosion.
bit.ly/ArcFlashSafety
Ireland publishes new principles on sustainable data center development
There won’t be a country-wide moratorium on new data centers, but facilities must adhere to new green principles, the Irish government has said.
The government this week published a revised Statement on the Role of Data Centres in Ireland’s Enterprise Strategy.
“The revised statement adopts a set of principles to harness the economic and societal benefits that data centers bring, facilitating sustainable data center development that adheres to our energy and enterprise policy objectives,” the government said.
It added that, while data is an “essential enabler” of an increasingly digital economy and society, the country needed to align the twin transitions which are “both digital and green.“
Data centers were responsible for 14 percent of electricity usage in Ireland as of 2021.
The new principles within the statement focus on data centers that create a positive economic impact, make efficient use of the country’s electricity grid by using available capacity and alleviating constraints, increase renewable energy use, are colocated with a renewable generation or energy capability, are decarbonized by design, and provide opportunities for community engagement and assist SMEs.
“Data center developments which are not consistent with these principles would not be in line with government policy,” the government said.
Ireland first published a statement on the role of data centers in the country in 2018, which largely focused on their importance to the Irish economy and attracting investment into the country.
Taoiseach Micheál Martin recently confirmed there will be no moratorium on new data centers, saying that they were important to attracting investment in the country.
“When particular companies invest in Ireland and have a big presence in Ireland they’re saying it’s not one or the other. You can’t say you want all of our investment but by the way, we don’t want you doing anything with a data center.”
Ireland would have to “weigh that up,” he said: “We can’t say no to all data centers because that potentially would be saying no to a lot of investment on the technology front both on the digital and the bigger companies.”
Last year it was reported that data centers use more than 10 percent of Ireland’s electricity supply, and continued growth could cause blackouts and prevent the country from reaching targets for renewable energy use.
After political parties the Social Democrats and People Before Profit called for a limit on data center development, the electricity regulator CRU refused to set a moratorium, but warned that grid operators could demand that data centers should be able to provide their own power at times of need.
Since then, a de facto moratorium appears to be in operation in Dublin after EirGrid said the grid in the capital was under strain, and no new data centers are expected to be authorized before 2028, with Digital Realty pausing plans for expansion.
bit.ly/IrelandsSilentMoratorium

Irish data center developer Dataplex has entered into voluntary liquidation, after waiting 12 months for grid power approval, only to be refused power.
Co-founder and CEO Eddie Kilbane told DCD that “for us, Ireland is now closed to building out any new data centers.” He explained: “Unfortunately Dataplex Group has entered into voluntary liquidation, due to being refused power contracts from Eirgrid, the National Grid provider, for two projects in Dublin.
“This was too much for the investors to take as we had spent a lot of money to get to the position for our applications to be refused, this was after 12 months waiting for an answer and with full technical approvals from EirGrid.”
Dataplex planned to develop brownfield projects involving retrofitting existing structures and larger greenfield projects, with a specific focus on hyperscale and wholesale colocation customers. It hoped to turn a warehouse in the Willsborough Industrial Estate in the Clonshaugh Business and Technology Park into a data center, expanding a small Vodafone facility that existed at the site. It also acquired a land parcel in Abbotstown for a greenfield 70MW+ data center and industrial campus development in Dublin, Ireland, in partnership with investors Chirisa Capital and DAMAC Group.
bit.ly/CasualtyoftheGrid
Charter to pay over $7bn after employee murdered customer
Charter is liable for $7bn in punitive damages and $375m in compensatory damages for the murder of one of its customers by an employee.
The company failed to do a job history check on Spectrum employee Roy James Holden Jr., who stabbed 83-year-old Betty Jo McClain Thomas to death. It then destroyed evidence and forged documents in an effort to avoid culpability. Charter is appealing the verdict.
Spectrum cable repairman Holden visited Thomas’ house to fix her fax machine. He drove a company van to her house the next day, where she caught him attempting to steal her credit cards. He murdered her and went on a spending spree with her cards.
Holden pled guilty to the 2019 murder and was sentenced to life in prison. A civil lawsuit against Charter revealed that Holden had lied about his job history, but the company never checked to verify the details. Holden repeatedly turned to management for personal problems and told them he at one point thought he was a Dallas Cowboys player. He may have been sleeping overnight in his Spectrum van. Following the murder, Charter destroyed video surveillance and tracking information on Holden. Charter tried to compel the case into arbitration using forged terms of service documents.
bit.ly/CharterMurder

Perpetrators & reasoning for April attack on Paris fiber cables still a mystery
No groups have stepped forward, no arrests made
The vandalism against a number of fiber cables around Paris earlier this year was likely a coordinated and purposeful attack, but no group has stepped forward to take responsibility.
In April this year, a number of fiber optic cables across France appeared to be intentionally cut, causing Internet outages and slowdowns in cities across the country.
Cables connecting Paris to the cities of Lyon, Strasbourg, and Lille were physically cut in several places in what looked like a coordinated attack. French media reported major Internet outages in cities including Paris, Lyon, Bordeaux, Reims, and Grenoble.
Wired reports that French Internet companies and telecom experts familiar with the incidents say the damage was more wide-ranging than initially reported and extra security measures are needed to prevent future attacks.
In total, around 10 Internet and infrastructure companies—from ISPs to cable owners—were impacted by the attacks.
In the space of around two hours, cables were surgically cut and damaged in three locations around the French capital city—to the north, south, and east—including near Disneyland Paris.
“The people knew what they were doing,” says Michel Combot, the managing director of the French Telecoms Federation. “Those were what we call backbone cables that were mostly connecting network service from Paris to other locations in France, in three directions.”
“The cables are cut in such a way as to cause a lot of damage and therefore take a huge time to repair, also generating a significant media impact,” added Nicolas Guillaume, the CEO of telecom firm Nasca Group, which owns business ISP Netalis, one of the providers directly impacted by the attacks. “It is the work of professionals.”
The cuts were clean, suggesting power tools were used, and in some places had sections of cable removed. Whoever conducted the attacks would have had to know the exact locations of the cable ducts and been informed about the targets—the incidents were also carried out in the dark.
“It implies a lot of coordination and a few teams,” said Arthur PB Laudrain, a researcher at the University of Oxford’s department of politics and international relations who has been studying the attacks.
No groups or individuals have claimed responsibility for the damage, and French police have not announced any arrests related to the damage.
bit.ly/IfYouDidItLetMeKnow
Cooling failure brings down Google Cloud data center in London on UK’s hottest day
Cooling failed at a Google Cloud data center in London on the day when the UK experienced a record-breaking temperature of more than 40°C (104°F). Oracle’s London region also suffered cooling issues.
Multiple Google services were brought down on Tuesday at 18:13 local time (01:13 ET), according to the Google status page, which described the failure as “cooling related.”
Google said the outage only affected a small number of customers - including DCD.
“There was a cooling-related failure in one of our buildings that hosts a portion of capacity for zone Europe-west2-a, for region Europe-west2, that is now resolved,” said the status report.
Among the services affected were Google Cloud, Persistent Disk, and Autoscaling. By 22:00 BST, some users were still affected, with HDD-backed Persistent Disk volumes showing IO errors.
The unprecedented record temperatures were made more likely by man-made emissions.
bit.ly/BuiltForTheOldWorld
Credit: Tony Webster

Alibaba disconnects Taiwan from cloud network
Alibaba Cloud has disconnected Taiwan from its private network, preventing users in the country from connecting to its cloud services.
“Dear customers of Alibaba Cloud Cloud Enterprise Network (CEN), due to Alibaba Cloud’s business adjustment, Alibaba Cloud will cease the operation of China (Taiwan) region for CEN starting from June 30, 2022,” the company said in a short statement.
CEN is a network built on Alibaba’s global private network that customers can use to send data and connect to other regions & facilities. It can be used to facilitate communication between different virtual private clouds (VPC) in different locations as well as to on-premise data centers.
The Chinese cloud company, also known as Aliyun, said that customers in Taiwan can no longer use CEN to establish network communication to other regions and will be disconnected from other networks, including those deployed outside Taiwan.
To mitigate the impact, Alibaba recommended that customers migrate services from Taiwan to an available region nearby.
Commerce Secretary: Chinese invasion of Taiwan would cut off chips, cause “deep and immediate recession” in US

The US Commerce Secretary has warned that losing access to Taiwan’s semiconductors would spark a deep recession, and impact the US military.
Commerce Secretary Gina Raimondo told CNBC: “If you allow yourself to think about a scenario where the United States no longer had access to the chips currently being made in Taiwan, it’s a scary scenario.
“It’s a deep and immediate recession. It’s an inability to protect ourselves by making military equipment. We need to make this in America. We need a manufacturing base that produces these chips, at least enough of these chips, here on our shores because otherwise we’ll just be too dependent on other countries.”
The majority of the world’s most advanced semiconductors are made in Taiwan, primarily by contract semiconductor manufacturer Taiwan Semiconductor Manufacturing Company (TSMC).
This has given the nation an outsized impact on the world’s economy - with delays to manufacturing impacting industries around the world.
That represents a critical vulnerability for all those industries, including the data center sector, given political tensions between Taiwan and the Chinese mainland.
China has long viewed the sovereign state as a breakaway province, with China’s President Xi Jinping saying last year that “reunification” with Taiwan “must be fulfilled.” The number of Chinese military jets flying into Taiwan’s air defense zone has hit record highs.
bit.ly/WarWithoutWinners
“Nobody can control TSMC by force”

The chairman of TSMC claims that a Chinese invasion of Taiwan would not give the occupying forces access to its chip fabs.
TSMC is the world’s largest contract semiconductor manufacturer, and industries around the globe are reliant on its chips - including in China.
The comments, made to CNN, come as the Speaker of the US House of Representatives visited Taiwan, sparking condemnation from Beijing and raising tensions.
“Nobody can control TSMC by force,” Mark Liu said. “If you take a military force or invasion, you will render TSMC’s factories not operable, because this is such a sophisticated manufacturing facility.”
He added: “It depends on the real-time connection to the outside world - With Europe, with Japan, with US.
“From materials, to chemicals, to spare parts, to engineering software diagnoses - it’s everybody’s [combined] effort to make this factory operable.
“So, if you take it over by force, you can no longer make it operable.”
bit.ly/TaiwansSecretWeapon
Peter’s chip factoid: With the majority of advanced chips coming from Taiwan, the US plans to spend $52bn to reverse years of decline in domestic chipmaking. In August, President Biden signed the CHIPS and Science Act.

bit.ly/CloudedThinking
Ukraine awards Microsoft and AWS peace prize for cloud services & digital support
The government of Ukraine has awarded Microsoft Azure and Amazon Web Services (AWS) peace prizes for providing critical cloud and digital services during Russia’s unprovoked invasion.
Ukraine awarded Google the same prize in May, but did not disclose why.
“Big tech support Ukraine,” Ukraine’s vice prime minister and digital transformation minister, Mykhailo Fedorov, said. “Microsoft delegation has been awarded today with ‘Peace Prize’ from President Zelenskyy. We are grateful to have you on the light side of digital. Microsoft stands for truth and for peace.”
Microsoft previously claimed that it had helped shift 16 Ukrainian government ministries to the cloud, as well as a number of Ukrainian corporations. The company said in May that it had spent over $100 million in providing technology support to the Ukrainian government, including on cyber security.
It has also developed an artificial intelligence system to log the time and date of Russian bombings to be used in future war crimes investigations.
“It’s important to ensure that we create the foundation to do what was done after WWII, at a place like Nuremberg,” Microsoft president Brad Smith said in May.
“That’s why we’re providing the technology platform free of charge.”
AWS has also helped transfer data out of Ukraine, and onto its cloud platform.
“One more Peace Prize by Zelenskyy comes to AWS Cloud,” Fedorov said. “AWS literally saved our digital infrastructure — state registries and critical databases migrated to AWS cloud environment. Ready to cooperate on gov tech solutions and reform judicial sphere radically.”
AWS said that it met with Ukrainian officials on February 24, the day of the invasion.
AWS Snowball devices - ruggedized compute and storage hardware for data transfer - were then brought into Ukraine. Huge quantities of data were then moved to AWS data centers outside the country.
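Requesting a Snowball is, in normal times, a routine API call. A minimal sketch using Python’s boto3 library might look like the following - the bucket, role, and address are placeholders, and this is not the configuration AWS used in Ukraine:

```python
# Sketch: ordering an AWS Snowball import job with boto3.
# All ARNs and IDs are placeholders, not AWS's Ukraine setup.
import boto3

snowball = boto3.client("snowball", region_name="eu-central-1")

job = snowball.create_job(
    JobType="IMPORT",  # device ships to the customer, data flows into AWS
    Resources={"S3Resources": [
        {"BucketArn": "arn:aws:s3:::example-migration-bucket"},
    ]},
    AddressId="ADID-00000000-0000-0000-0000-000000000000",  # placeholder
    RoleARN="arn:aws:iam::123456789012:role/example-snowball-role",
    SnowballType="EDGE",
    ShippingOption="EXPRESS",
    Description="Bulk import of on-premises data",
)
print("Job created:", job["JobId"])
```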
By June, the company said it had migrated more than 10 petabytes of data from 27 Ukrainian ministries, 18 Ukrainian universities, the largest remote learning K-12 school, and dozens of other private sector companies, with more still being added. There are 61 government data migrations to AWS underway, with more expected to come.
bit.ly/GoodNewsInDarkTimes
Yandex CEO Arkady Volozh resigns after facing EU sanctions
Arkady Volozh, co-founder of the largest Russian technology firm Yandex, has resigned as CEO after being targeted by EU sanctions over the war in Ukraine.
The Moscow-based company, which has its business address in Amsterdam, announced on June 3 that Volozh has stepped down as executive director and CEO, and will hand his voting powers to the board. The move came after the EU sanctioned Volozh for “materially or financially” supporting Russia, which invaded Ukraine in February.
Yandex, which Volozh cofounded in 1997, is widely seen as the “Russian Google.” It has not been sanctioned itself.
Yandex has already had to make several moves to avert sanctions. In April, it sold its media arm to Russian social media firm VK (VKontakte), a deal which included the Yandex News aggregator and the blogging platform Zen, both of which have been accused of spreading Russian propaganda and blocking accurate reporting on the war in Ukraine.
bit.ly/PutinNext
Russia’s Rostelecom stops data center development amid equipment shortages
Russia’s largest provider of digital services, Rostelecom, has suspended the development of its data centers due to a lack of access to equipment.
The partially state-owned telecoms and colocation provider said that it could no longer build facilities in Russia’s western ‘Central Federal District’ due to parts issues.
The company is focused on using its available equipment to finish projects that are near completion, including in Moscow and St. Petersburg.
Rostelecom has an even harder job getting equipment than most Russian data center firms as it is also sanctioned by the US and UK governments, while the EU has sanctioned the company’s president.
Rostelecom confirmed to Kommersant that it was pausing projects “in order to wait for the market situation to stabilize.”
Pavel Kulakov, founder of data center and cloud provider Oxygen, said that components ordered last year have not arrived.
bit.ly/RussianTechStalls
Data migration during an invasion
How Wise escaped Russia’s war in Ukraine, with the help of Equinix Metal and others

Sebastian Moss Editor-in-Chief
"People in Ukraine didn't believe it was going to happen," Dmytro Iolkin recalled. "Not even large companies had backup plans in place when Russia invaded."
As a tech lead at Ukrainian managed service provider Wise Infotec, Iolkin was faced with ensuring his company maintained critical services in the middle of a war.
This was not just about keeping customers online, this was about vital applications, including a security checkpoint app, government workloads, and hospitals. All were running in a data center in Kyiv, which at the time many thought would quickly fall.
"We didn't have a place to go," Iolkin said.
To pull off a rapid data migration in a conflict zone, Iolkin relied on a network of IT technicians and data center workers around the world to keep Wise online.
Among those crucial in making this possible was Zac Smith of Equinix Metal, the colocation giant’s bare metal service. "I knew him from when he was CEO at Packet," which Equinix bought in 2020 to form Metal, Iolkin said.
He didn't know Smith well, having only spoken to him about a small project that never got past the drawing board. Still, it was the best shot he had of finding some help in a crisis.
"I knew he wasn't on the colocation side of things, but he was the only point of contact I had, so I asked him about that. We first thought about physically moving our hardware," Iolkin said, despite the great difficulty of moving a large number of servers in a war, and the fact that militaryage males weren't (and still aren't) allowed to leave Ukraine.
"When all of this started, it was hard to think through things straight - I was just doing what I could," Iolkin admitted. "It was just thinking about how to get to the next day, not how we would actually do it. I had no plan."
Zac Smith had a better idea. "He said 'why don't you just use our Metal service for free?'"
"Wise is hosting critical infrastructure," Smith told DCD. "We wanted to help."
The company had already decided to offer free services to existing Ukrainian customers that were struggling to continue in the face of Russian assault, but was now finding that new companies were reaching out for assistance.
The company took an ad hoc, case-by-case approach to new requests. Some received full support for free, others were given huge discounts, in a process that “is very similar to what we would do with any new customer, except with more generosity,” Smith said. "Like if some things that we have are readily available on our balance sheet and are easier for us to provide, then we can be more liberal.”
The company is still trying to work out how widespread such discounts and free services can be, especially in the face of a lengthening conflict. "To be transparent, it's very expensive for us to provide this infrastructure," Smith said. "It's not like we're providing a $2,000 a month colo cabinet, we're providing a couple $100,000 a month of physical infrastructure."
For Wise, the intervention was a critical lifeline, and one that came within days of the invasion. But companies usually spend months or years planning moves, and it wasn’t an immediately easy fit.
“In Metal, if you have a server that is running and it stopped working for any reason, the assumption is that you would just pick up another server, spin it up and start using it,” Iolkin explained. “But we would have had data on the server, and if it dies the data will die with it.”
The company realized it needed dedicated or distributed storage - either servers just for storage, or redundant storage across the existing servers. “But we figured out that there would not be enough space on the hard drives in the servers that Metal provided us,” Iolkin said. “They are not providing all of the possible Metal options, just those from their 'On Demand' service, and those don't have enough space.”
"I couldn't sleep at all. I dedicated my time to data migration. Because that's what I could do. That's the only thing I'm a professional at. What else could I do? It was difficult"
He told Metal about the problem, and they called in their partner Pure Storage, which installs dedicated persistent storage servers near the Metal servers - and this time offered to do so for free, using equipment from their demo pool. But Pure’s product was in the US, and Wise was moving to an Equinix facility in Frankfurt.
“I didn’t know how long it would take,” he said, remembering the urgency of the situation. “Things can happen during transportation: They could break the system, they could lose it, it could get stuck somewhere for who knows how long.”
Again, he turned to the community for help.
Iolkin had long used hybrid cloud data services and data management software NetApp, and was part of a group known as the 'NetApp A-Team,' a group of customers that evangelize the product and help each other with tricky problems.
He knew NetApp would work as it was what Wise used for its storage system in Kyiv, and he trusted its data transfer and compression capabilities. He sought the A-Team’s help: "My idea was to install the NetApp system on the Frankfurt side and then replicate some or all the data, and then gradually migrate from NetApp to Pure, whenever it would be available."
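Setting up that kind of replication is routine NetApp territory. A rough sketch against ONTAP’s REST API (available since ONTAP 9.6) might look like the following - the cluster address, credentials, and volume paths are placeholders, not Wise’s actual configuration:

```python
# Sketch: creating a SnapMirror relationship via ONTAP's REST API, so a
# volume on the Kyiv-side system replicates to the Frankfurt-side one.
# Cluster address, credentials, and volume paths are placeholders.
import requests

DEST = "https://frankfurt-netapp.example.com"   # destination cluster
AUTH = ("admin", "example-password")

resp = requests.post(
    f"{DEST}/api/snapmirror/relationships",
    json={
        "source": {"path": "kyiv_svm:data_vol"},
        "destination": {"path": "frankfurt_svm:data_vol_dst"},
    },
    auth=AUTH,
    verify=False,  # demo only - verify certificates in production
)
resp.raise_for_status()
# Initializing the baseline transfer is then a PATCH on the new
# relationship, setting its state to "snapmirrored".
```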
André Unterberg, a member of the team, put Iolkin in touch with a company selling used NetApp systems in Germany, Miller Anlagen. When he told them about the problem, they gave the systems to Wise for free, for as long as they needed to use them.
"They provided hardware and they delivered the system, and André, Alex Scholz [both from Bechtle], and some other engineers actually installed the system." The whole process took under three days - and then the Pure Storage systems came around a week later.
The data migration happened in stages, first with most of the colder data replicated in Frankfurt, before moving to the live data. It only required a few minutes of downtime "because we did this granularly, virtual machine by virtual machine," Iolkin said.
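The shape of that staged cutover is easy to sketch. Every function below is a hypothetical stand-in for what was, in reality, NetApp replication plus a great deal of manual work:

```python
# Staged, per-VM cutover: replicate the bulk of the data while the VM is
# still serving traffic, then take a brief outage to copy the final delta
# and restart at the new site. All functions are hypothetical stand-ins.

def sync_storage(vm):
    print(f"[{vm}] replicating changed data to Frankfurt")

def stop_vm(vm):
    print(f"[{vm}] stopped - downtime begins")

def start_vm(vm, site):
    print(f"[{vm}] started in {site} - downtime ends")

def migrate_vm(vm):
    sync_storage(vm)   # bulk copy while the VM keeps serving traffic
    stop_vm(vm)        # brief outage starts here
    sync_storage(vm)   # final delta is small, so this is quick
    start_vm(vm, "frankfurt")

for vm in ["checkpoint-app", "gov-portal", "hospital-db"]:  # coldest first
    migrate_vm(vm)
```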
Wise's network engineer also had to reconfigure the network to a new data center architecture, all as quickly as possible.
In total, 182 terabytes was shifted. It wasn't easy.
"At the time when we were moving all of this data, Kyiv was under attack," Iolkin said. "And when the Russians were bombing, they broke one of the three cables that we used for the Internet connection, so traffic was slower."
The cable was eventually fixed, as part of an ongoing momentous effort by Ukraine's network engineers to keep the country online amid targeted attacks by Russian forces. Their work has helped the world see the atrocities being committed, and has allowed Ukrainians to communicate and plan escapes amid the unfolding horror.
Among those using such connectivity was Iolkin's own family, who had been slow to leave the Kherson region, wishing to stay with their community and in the house they had built and lived in all their lives.
He didn't hear from them after Russian troops took over the region, nor in the weeks that followed. And then, at last, thanks to repeated fixes to telecommunications networks, he heard from them: "They wanted to move out but it was not feasible," he said, his voice cracking with the memory. "But one week ago, they found a way - and they have moved to my apartment in Kyiv."
For Iolkin, who had been fortunate enough to move to the US for work before the conflict, watching his homeland be invaded, his family go dark, and his childhood home be taken over, was an intensely fraught experience.
"I couldn't sleep at all. I dedicated my time to data migration. Because that's what I could do. That's the only thing I'm a professional at. What else could I do? It was difficult."
While it served as a distraction from the horror, it also had tangible benefits - supporting online tools that are currently being used in Ukraine. It will also help with the nation’s rebuilding.
At the outset of the conflict, he reached out to his alma mater, Severodonetsk University, to move workloads to Wise's data center, and then Frankfurt.
"I wasn't able to reach them in time to get all of it, but they moved some of the data over, so now we have the university running on Metal," Iolkin said. "Now, there is no Severodonetsk."
After a lengthy battle, Ukrainian forces were pushed out of the eastern city. Russian forces shelled the city every day from late February until the end of June, reducing it to ruins.
Iolkin was also able to migrate most of the data out of Kherson State University, which is under Russian occupation.
He, like many Ukrainians, hopes that the conflict will be over soon - but is realistic about the likely long and brutal road ahead. When peace does return, and should the territory remain independent, he is optimistic about rebuilding.
The tech sector could steer clear of Ukraine, or, he argues, it could be crucial in helping the country return to prosperity, showing that Russia does not have the power to destroy a civilization. "We can look at South Korea, which is still in a conflict with North Korea," he said. "Or Israel, where there is a lot of stability issues, and they have some of the biggest tech companies and startups."
He added: "It can go both ways. And that depends on us, on everyone in the world to move it one way and not the other."
That future is still tragically a long way off. We shouldn't wait until then, and allow the continuing horrors to slowly slide down the news pages and into obscurity, lost among the noise of global suffering, local news, and the latest frivolities, Iolkin implored.
"You can see it going down and down and down," he said. "People are losing interest in it, and we need to speak up, we need to remind what's going on," noting that such vigilance was crucial not just for the people of Ukraine, but to prevent Putin expanding further, nuclear weapons proliferation, and China being emboldened to invade Taiwan.
Throughout the conflict, Wise has continued to operate its data center in Kyiv - with some customers unwilling, or legally unable, to leave the country. Staff continue to work there, moving in and out to the safer western edge of the country.
Shelling continues in Kyiv. "Fortunately, they never hit the data center in Ukraine. They hit lots of places, but not the data center - it hasn't gone down once,” he said proudly.
"People are losing interest in the war, and we need to speak up, we need to remind what's going on"
"They moved some of the data over, so now we have the university running on Metal. Now, there is no Severodonetsk"Zac Smith, Equinix - Sebastian Moss
The acquisition man
Marc Ganzi of DigitalBridge seems to have venture funding for the most ambitious acquisitions. What’s his angle?

Peter Judge Executive Editor

The data center industry has more than its fair share of men with money. There are giant investment companies everywhere you look, buying and building data center capacity to meet a level of demand that’s gone through the roof.
But who’s the most significant and important right now? It could be Marc Ganzi. He’s CEO of DigitalBridge, a holding company that already manages more than $48 billion worth of digital infrastructure - and has venture capital partners with a seemingly insatiable appetite for more acquisitions.
DigitalBridge manages data center brands including DataBank, EdgePoint, Landmark Dividend, Scala Data Centers, Vantage Data Centers, and more. Switch Data Centers is set to join, soon.
Each month seems to bring new acquisitions so, by the time you read these words, it’s highly likely the company will have more data center space under its control. And it’s all happened in the last eight years.
Modeled on Liberty
“To go from zero to managing $48 billion on a global scale in eight years is hard to even fathom and comprehend,” he tells us.
“Arguably, if you accumulated all of our data centers around the globe, I suspect we're probably the second largest data center operator in the world today.
"And surely the number of our facilities rivals that of Equinix.”
We meet Ganzi in a gleaming, almost empty office near Victoria, and he starts right in on the modus operandi, and the history that’s led to this massive, loosely knit empire.
DigitalBridge is a center of expertise that spots digital properties with good management and growth potential, and funnels investment into them. He makes it sound simple: “We look at great businesses and great management teams and try to recognize greatness.”
Unlike a lot of data center entrepreneurs, Ganzi comes from a different digital domain: telecoms towers.
And he models his company on an icon from yet another sector: Liberty Media, the entertainment and communications company built up by cable TV tycoon John Malone.
By 2013, Ganzi had built up a wireless infrastructure REIT, Global Tower Partners. At that point, he sold GTP to American Tower for $5 billion, but he wanted to carry on, and address digital infrastructure more broadly.
He recalls: “I had just sold my digital REIT, and I really felt like there was a lot more work to do in the sector. Thankfully, I was correct. There was a lot more work to do. I don't think I quite had envisioned how much.”
Ganzi, and his former GTP colleague Ben Jenkins, looked at John Malone’s Liberty, and asked themselves: “Could someone build the Liberty Media of digital infrastructure?
“I have a great admiration for John Malone at Liberty,” he says. “He has built this incredible holding company that owns a disparate amount of assets around the globe, in media and telecom. He's been an absolute legend for four or five decades.”
He goes on: “It was perhaps a bit brazen to suggest in 2013 that Ben and I could create a holding company, with resident expertise in deal making, capital formation, operations, financing, and all the things that are critical to building great companies.”
Skin in the game
A key part of the first stage was for Ganzi and Jenkins to pledge their own capital, from the sale of GTP: “I think in this business, you have to be in the check writing business. When you're managing other people's capital, you have to have conviction around what you're doing.
“Investors want to see that you not only have skin in the game, but that you care. And I think the best way to show you care as an investment manager is to go out and write checks.”
The first investment was Mexico Telecom Partners, which has become the largest independent operator of mobile infrastructure in Mexico.
“In the same year we went on to start Vertical Bridge REIT,” an owner and developer of telecom towers - a follow on to GTP, Ganzi says. “And off we went!”
“Over the first four or five years of the business, we were doing deals without the backstop of a fund,” he said. “We didn't have a fund structure, so it was pretty harrowing having to go out and raise capital, deal by deal, almost in a merchant banking model.”
Over those few years, the group raised about $4 billion of capital, and built six successful brands: Mexico Telecom Partners, Vertical Bridge, network company ExteNet, DataBank, Vantage, and another South American tower company, Andean Telecom Partners. That last one abbreviates to ATP - not to be confused with American Tower, which bought his original REIT.
“All told, we raised about $4 billion of equity, we ended up putting on $4 billion of debt on those businesses. And seven or eight years later, those businesses are marked at about $12 billion today. So we did a really nice job.”
Those first few years were “conservative” he said. “We started the business with a conservative capital structure, backing really good management teams in really good sectors.”
Then, in 2018, they stopped getting funds deal-by-deal. “Five years into the journey, we decided to raise our first fund.”

“I think the best way to show you care as an investment manager is to go out and write checks”
In 2018, the group raised DigitalBridge Partners 1, a $4 billion fund: “That was quite successful, we did 10 investments out of that.”
Two years later, in 2020, they stepped up: “We began to raise DigitalBridge Partners 2, which ended up being $8.3 billion, so double the size of the first fund.”
Data center expertise
Moving into data centers was planned, and based around expertise: “When we started, we were focused on towers. But in 2013, one of the first operating partners we hired was a guy named Michael Foust.”
Foust was one of the founders and the first CEO of Digital Realty, and Ganzi describes him as “the godfather of the data center industry.”
Ganzi knew Foust in the 1990s, when he was at property firm CB Richard Ellis, and kept in touch when he went to GI Partners to start identifying data center opportunities - a project which eventually spawned Digital Realty, which Ganzi calls “the first data center operator.”
“Michael and I just became good friends,” he says. “And as he ascended and grew Digital Realty, I was growing Global Tower Partners. We always exchanged notes about the similarities and the differences between data centers and towers.”
In 2013 Mike left Digital Realty. “He was on the bench, or more literally on the beach, in California, and I caught up with him and, and said, ‘Gee, why don't you come work with us? We need a partner that understands data centers.’”
As Ganzi tells it, Foust said he had to talk to his wife. The next day he called back with a greeting from his wife, who said Foust needed to get out of the house and get working again.
Ganzi also brought in Jon Mauck, who was CFO of IO Data Centers: “John and Mike have chaired our global data center practice and have done an absolutely brilliant job.”
The data center investments only started in 2016, with DataBank, but Vantage followed quickly. “Besides towers, this is fundamentally our most important practice, and it's gone incredibly well.”
A changing sector
Today’s data center sector has changed, with the pressures of data sovereignty, expanding storage needs, and the arrival of Edge: “The complications of the ecosystem today is far different than it was 20 years ago, when Mike Foust was imagining what Digital Realty was going to look like in 2003.”
Today, data centers are interesting, because “you can't take one big paintbrush and say, ‘this is data centers,’” he says.
“It used to be that we had big colocation data centers, and we had managed cloud services or hybrid IT, which were bought inside of a data center. And that was it. As an industry, we were a one-trick pony,” he remembers.
“Today, you have five very distinct and different businesses,” he says. “You have large public hyperscale campuses; you have large, private cloud hyperscale campuses; you have hyper-Edge, which is the 500kW down to 4MW sector that Equinix and DataBank are doing so well; you certainly have enterprise colocation; and then you have managed services.”
These different sectors create “swim lanes” for investors, he says, but it sounds very much like the biggest and fastest moving of these lanes is the hyperscale sector which supports cloud.
“The reality is the opportunity just gets sharper and clearer every day - and it keeps growing,” he says. “Close to $200 billion of capex will be spent this year on cloud infrastructure. So data center spend on a global scale has never been bigger than it is today.”
This is because “data begets data” he says, with high-powered applications creating more data that needs to be stored.
To REIT or not to REIT?
A lot of data center companies have formed as real estate investment trusts (REITs), which confers tax advantages, but DigitalBridge this year converted from a REIT to a conventional company or C-Corp.
The concept has almost universal acceptance, but this year, investor and iconoclast Jim Chanos, famed for short-selling stock, announced he would bet against REITs.
So far, the data center ecosystem has declined to comment on Chanos’ statements. Our view is that Chanos needs to back up his ideas, and show he understands the industry well enough to issue his prophecy of doom.
Ganzi, likewise, declined to comment, but continues to be positive about REITs: “I think a REIT is a great idea,” he says. “I think it's a great idea when a business hits a certain phase in its cycle. I think where Digital Realty is, and Equinix is, they've achieved a size and scale and credibility with public investors. There's a steadiness to their cadence about returning capital and growth.”
While the holding company is no longer a REIT, some of its companies are: “Two of our companies are REITs: Vertical Bridge is a private REIT, and Vantage is a REIT as well. And DataBank is moving to REIT.” Upcoming acquisition Switch is also a REIT.
Others might follow: “To the extent that we can make other businesses digital REITs, we are in favor of that.”
To operate as a REIT is a balance, he says. “Equinix and Digital Realty really found that nice balance between doing greenfield and mergers and acquisitions, but also divesting of assets at the same time.”
The established REITs sell off older, less attractive facilities, he notes, “recycling capital back into younger next-generation facilities. That makes a lot of sense, because some investors want to pay for the yield, and that was a great trade.”
Selling older properties is sensible: “I think when you're ultimately a steward of other people's capital, you have a burden to return that capital and see that it goes well, so I think that’s essential.”
At the top of the data center tree, these people all know each other: “Those guys are great competitors. And candidly, they're friends. I think the world of [Digital CEO] Bill Stein. I think he's absolutely a gem of a nice guy. And I think Charles [Meyers, Equinix CEO] is a really great guy as well.”
Prospects for Switch
We are speaking shortly after DigitalBridge bought US data center operator Switch and took it private, in an $11 billion deal that has yet to close. Some observers balked at that price, which was something like 20 percent more than the then share price.
Ganzi explains this move happened because the stock market was obsessed with cloud stocks, and missed the value of Switch - an operator which deals with private cloud for large enterprises.
Essentially, he felt Switch was worth more as a private company, using long-term venture money to deliver private cloud, than it was as a public company, which had to post good figures each quarter.
“I think Switch is interesting, because it’s a great business that doesn't work as a public story but works as a private story,” explains Ganzi.
“There's a massive amount of growth in private cloud,” says Ganzi, but the stock market couldn’t see Switch’s differentiation.
“What Rob [Roy, Switch CEO] and his team does is differentiated, in the ecosystem that he's created for corporate users that don't want public cloud, but really want privacy and security and on-demand capacity. It’s a very unique offering. I've looked at every data center business in the world over the last decade. And I find that the Switch model is probably one of the most unique models out there.”
Roy bet against the cloud hype, we suggest: “Not a lot of people understand what Switch is doing,” says Ganzi.
Small companies (like DigitalBridge itself) can get all their IT from the public cloud, but larger organizations need a private cloud, and someone to host it: “There's going to come businesses that do require a private cloud solution, whether it's the US Federal Government, or a banking institution, or a shipping company or an IT services business. Some people don't want to put their workload environments into the public cloud,” he says.
Rob Roy saw other operators chasing public cloud, and “decided to skate to where the puck was not going,” says Ganzi. “He made a conscious decision to focus on a core group of customers that required specialization and required a really tailored approach to their cloud environments. On that basis, he's been incredibly successful. I have a lot of deep respect for him.”
Ganzi likes the quasi-military feel of Switch facilities, labeled with a self-proclaimed “Tier V” resilience standard, and manned by security staff visibly armed with tasers: “Those environments are highly secure environments. When you walk into a Switch data center, it's a very different feel than walking into a CyrusOne facility, a Digital Realty facility, or even a Vantage facility.”
Switch is operating in a different one of Ganzi’s “swim lanes,” he tells us: “Same geo, different markets. You've got the big part of the circle, public cloud, where Vantage is doing a great job, focused on 5MW to 100MW opportunities. I'd say Switch is next, with private cloud, which is anything from half a megawatt to 20MW. DataBank is focused on what I call hyper-Edge, chasing half a megawatt to 4MW workloads.
“The market is becoming a lot more tailored and differentiated,” he says. “It’s the same as if you said, Crown Castle does the same thing as American Tower does. They don't, Crown is in the connectivity solutions business and American Tower goes out and provides towers.”
Making the deal work
On the high price he paid for Switch, he says: “Based on first quarter and second quarter leasing results, we're effectively going to pay 27 times [the EBITDA] for that business.”
But he lists Switch’s assets: “A land bank of over 900 acres, 1.3GW of unused capacity, and a pipeline of big customers that need big workloads,” and says it’s similar to when DigitalBridge bought Vantage in 2017.
“We paid $1.2 billion for Vantage in 2017 - and everybody looked at me like I had 20 heads. I said: ‘You don't get it. We're backing a great CEO, who has a great product, who has a deep pipeline of opportunity.’”
Like Switch, Vantage had land (in Quincy, Washington and Santa Clara), along with building permits and customers.
“You can take a deal that's expensive, and you can say, ‘look, I understand how to underwrite this. I'm going to start at this entry multiple here, and I'm going to take it down three turns through the leasing pipeline, I'm going to take it down another turn and a half, because we have this expansion land, and then I can take it down another two turns over two years, because I'm going to build this, I'm going to buy that.’
“It requires an intense understanding of the environment, the business, the team, the customers, the markets, the permits,” he says. “We're incredibly thoughtful buyers, and at the end of the day, based on what we had paid and how much EBITDA has been generated at Vantage, we're now into Vantage at a single-digit multiple. That took five years to get there.”
By that same process, he plans to get DigitalBridge’s investment multiple in Switch down to eight to twelve in a period
of five years: “And we're going to do that through building and leasing into that 1.3GW of future capacity. We're confident we can double the EBITDA in the next five years.”
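For readers who want the arithmetic behind “taking the multiple down,” here is a minimal sketch, using an assumed $11 billion purchase price purely for illustration - these are our numbers, not DigitalBridge’s underwriting model:

```python
# A minimal sketch of multiple compression - illustrative figures only.
price = 11.0e9                          # assumed purchase price, $11bn
entry_multiple = 27.0                   # "27 times the EBITDA", per Ganzi
entry_ebitda = price / entry_multiple   # ~$0.41bn implied at entry

# Growing EBITDA shrinks the effective multiple paid on the original price.
for growth in (2.0, 2.5, 3.0):
    print(f"EBITDA grows {growth:.1f}x -> effective multiple {entry_multiple / growth:.1f}x")

# Reaching the stated eight-to-twelve range implies growing EBITDA by
# roughly 2.3x to 3.4x on the entry figure (27/12 and 27/8 respectively).
```

On these assumptions, a straight doubling of EBITDA - the figure Ganzi quotes - lands at around 13.5 times, so hitting eight to twelve implies growth somewhat beyond a doubling, presumably helped by the expansion land and build-out “turns” he describes.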
That couldn’t happen to a company listed on the stock market, he says: “They have to post figures every quarter. They have to raise capital every quarter, which is painstaking. When you're an infrastructure fund like us, and you have funds that are 11 to 13 years in duration, we can be patient. There's a velocity and a cadence to how we put money to work. And we're not in a rush. Right? This isn't a sprint, this is a marathon run.”
Switch had been expanding internationally through joint ventures in Italy and Thailand, but Supernap Italia was bought by IPI Partners, and Switch Thailand is reportedly for sale, as the company slimmed down for a sale. Could DigitalBridge’s backing revive Switch’s international ambitions?
“I can’t comment on that,” says Ganzi.
High-flying champion
Ask a CEO for their stance on diversity and the environment, and you can predict the response. But Ganzi is unusually eloquent on some topics.
“Here's my philosophy. We have two
responsibilities as a corporate citizen. One, we have to endeavor to return the planet in a better condition than we found it. Two, we have to provide a path for every young person that wants to be in digital infrastructure, to have a vocation.”
On his first point, we have to raise a question, as Ganzi uses a private jet, probably the most carbon-intensive form of travel available. Private jets like Ganzi’s 20-seat Gulfstream emit two tons of CO2 equivalent per hour of flying time. The actual environmental impact is double that, because of non-CO2 emissions at high altitude.
Environmental sites reckon that works out at more than four times the emissions of a passenger on a regular flight, even if the private jet is full to capacity.
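As a back-of-envelope check on those numbers (the per-hour figures are from the sources above; the annual flying time is our assumption, purely illustrative):

```python
# Back-of-envelope jet footprint from the article's per-hour figures.
co2_per_hour = 2.0       # tons CO2e per flying hour
altitude_factor = 2.0    # non-CO2 effects at altitude roughly double the impact
hours_flown = 200        # assumed annual business flying time (illustrative)

footprint = co2_per_hour * altitude_factor * hours_flown
per_seat_hour = co2_per_hour * altitude_factor / 20   # 20-seat jet, fully loaded

print(f"Annual footprint: ~{footprint:.0f} tons CO2e")       # ~800 tons here
print(f"Per seat-hour when full: {per_seat_hour:.2f} tons")  # the 4x claim implies
                                                             # ~0.05 tons for a
                                                             # commercial seat-hour
```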
We failed to put the question directly to the man in our interview. Later, his people handed back a no-comment on the issue.
We did find, from the company’s 2022 proxy statement, that DigitalBridge pays part of the costs of the jet which is owned by Ganzi. The board of directors has approved an agreement to reimburse “certain defined fixed costs of any aircraft owned by Mr. Ganzi” based on the number of hours it is used for business purposes, as well as “variable operational costs of business travel on a chartered or private jet.”
The report says: “The company reimbursed Mr. Ganzi $3.0 million in 2021.”
Aside from the money, this means that the emissions of those business flights will need to be included in the company’s carbon footprint, in ESG reports.
Diversity player
The second point is what got him excited. He tells us that DigitalBridge is helping to fund data centers for historically Black colleges and universities.
“It's not about just creating and building a data center. It's also creating a core curriculum with classrooms and teachers that can give young diversity candidates a chance to get the education.”
He’s talking about DigitalBridge’s
partnership with ImpactData, a company with a plan to build and operate purpose-built colocation data centers on the campuses of Historically Black Colleges and Universities (HBCUs), backed with integrated learning.
The project will turn students into “the most wanted young employees,” says Ganzi, “because they come with a skill set that's specialized, that's built for the next economy. These young adults will come out of school with hardcore, real digital training. They don't teach that at Harvard. They don't teach it at an Ivy League school.”
ImpactData appears to be at an early stage of rolling out its data centers, but is clearly a company to watch. The company’s press materials say the DigitalBridge partnership is helping it roll out “Dream Centers” which combine digital learning infrastructure with carrier-neutral colocation.
The company has been given a $500,000 grant to support a Dream Center at North Carolina Agricultural and Technical State University (A&T) in Greensboro, North Carolina. The plan is for a 9MW Tier III colocation facility, combined with an innovation center for use by A&T, along with a lab for workforce training and community engagement.
A&T Chancellor Harold L. Martin Sr. says the project “will allow the university to expand academic and research offerings in high-demand areas, such as cybersecurity and engineering.”
ImpactData says it is investing $100 million in the site over four years, to create 28 permanent jobs, as part of a $1 billion digital infrastructure plan placing similar centers at other HBCUs in markets including Atlanta, Dallas, Houston, Nashville, Birmingham, and Charlotte.
Ganzi mentions the likes of Morehouse College, the alma mater of Martin Luther King Jr. and Samuel L. Jackson, but ImpactData doesn’t yet have a project announced there.
It does, however, say it has partnered with an ‘influential HBCU in a key southern edge market’ to pilot its Dream Center concept, and partnered with a ‘Global Fortune 100 client’ to consolidate its existing data center assets, while elsewhere promising a “flagship” facility on a “soon-to-be-announced” HBCU campus.
Changing trajectory
Ganzi goes on: “If we can help change the trajectory of a young person's life, man or woman, or otherwise, then we've done something incredibly powerful. We've changed not only their arc, we've changed the narrative. We can help create the great young minds of the digital economy.”
Historically, he says “parts of our youth are told, you're not welcome here. You're not invited. That's rubbish, right? They are invited.”
This is better than getting a good ESG report out, he says: “Anyone can write an ESG report, they put you to sleep. What excites us is that there's hope. And that hope has to come in the form of restoring the planet to where it needs to be, and rekindling the desire in youth to imagine, and to create, and to be the next great digital entrepreneurs.”
DigitalBridge mentors high school students to go to college, and then creates internships and a vocation path for them, before hiring and promoting “great diverse candidates that have the vocation and the experience in digital.”
More than half of DigitalBridge’s public board is made up of “diversity and female candidates,” he says, and its chair is London-based Nancy Curtin.
“I have the most diverse board of all my peers, and I'm proud of that,” he says.
Appointing Curtin as chair “took a lot of courage, in a very male-dominated digital world” he says, even though she is chief investment officer at an investment firm, Alvarium.
“I deferred being chairman, because I think she's wise, and she has great expertise,” he says.
“In every instance, we've always put the most qualified person in the chair. We just have had the tenacity, and the ability to find the great minds that can help us build this great platform.”
APAC Supplement
Data center technology meets the markets of the Asia Pacific region
Singapore opens up
Australia at the Edge
Cooling with snow
FOR TODAY’S GENERATORS.
FOR TOMORROW’S GENERATIONS.
Today’s generators should be more than just the answer to emergency power needs. They should also be the solution for the zero-carbon economy. That’s why Kohler goes beyond providing the energy you need, to power the progress the world demands. You see positive results right now. The next generation will see the benefits tomorrow.
Contents
The APAC difference
The Asia Pacific region is not like other data center markets.
There's plenty of growth there, as APAC catches up to the longer-established regions of Europe and North America.
Each APAC nation is finding it has a growing population of middle-class digital consumers that need online services.
But beyond that, the similarities end, and every APAC market is different.
India has a creaking power system, but an established digital class.
Singapore is home to APAC headquarters for vast numbers of multinationals, but hasn't got room for data centers.
Australia has plenty of room, and a rich economy, but few people.
Japan has an industrial economy with a lack of renewable energy.
Across the region, we've found stories which show an ingenious set of solutions to the problems in each individual APAC nation.
Cooled by Japanese snow
For centuries, Japan has been working creatively with snow, producing such masterpieces as Nobel Laureate Yasunari Kawabata's novel Snow Country.
So, while US wisdom says that when you have snow, you should shovel or make snow angels, it's no surprise to see Japan come up with another option.
The White Data Center turns snow from a waste product to a source of cooling.
At the same time, the facility is using its waste heat to feed Japan's love of eels (p10).
Towering ambition in Singapore
With no spare land to build on, operators in Singapore have gone upwards. Equinix's SG5 is Singapore's tallest facility at nine stories (p4).
That record only stands till Facebook's 11-story tower opens next door.
But SG5 is well worth a visit, not just because of its height, but because of its role in supporting colocation, in a data center hub which is redefining itself.
SG5 was planned before 2019, when the government responded to data centers' rising power demands with a ban on new projects.
That moratorium is opening up now - but future would-be builders will find a lot of competition for a very limited number of data center permits (p6).
Australia's Edge
There is no market in the world like Australia. A tiny population, making up a relatively wealthy economy, distributed across untold acres of open space.
Edge computing players have realized that these factors make Australia the perfect market for the pitch, which is to distribute tech resources close to users.
Most Australian towns can't muster the resources for large data centers, but they are spread too far apart to support low-latency applications without some local Edge resources.
The downside of this is that multiple Edge players are moving in to compete for the market. A price war on capacity could drive a race to the bottom, leaving Australia's Edge patchy, till we get consolidation (p13).
Inside SG5, Singapore’s tallest multi-tenant data center
Equinix is no stranger to the data center hub of Singapore, where it has been operating since 2002. In the intervening two decades, it has steadily grown its portfolio of data centers in the island state to five facilities, with the newest launched just last year.
Constructed with an initial investment of $144 million, and opened in August 2021, the nine-story SG5 is the tallest data center in Singapore - at least until Facebook’s mammoth 11-story facility goes live, virtually next door.
Even when the Facebook tower opens, SG5 will still be the tallest multi-tenant facility in the data center-dense nation, where data centers consume a staggering seven percent of the available electrical power.
Why SG5 matters
SG5 is noteworthy for more than its height. It’s Equinix’s fifth data center in Singapore, but only its second greenfield facility here. Its first greenfield, SG3, launched in 2015, and there are clear differences between the two.
Paul Mah, APAC Editor

It would be interesting to see how Equinix would design a data center for a tropical, land-scarce location if it started on a new project today.
Another significant fact about SG5 is it was approved before a moratorium on new data centers in 2019. Though the moratorium was lifted this year, new data centers will now be subject to a raft of guidelines around PUE and innovation around energy use. Though the rules are still being ironed out, it is safe to say that future data centers will be quite different.
An exploration of SG5 won’t be complete without an overview of the other Equinix data centers in the country. Equinix claims that it hosts the most network-dense data centers in Singapore, housing many of the international and regional networks connecting South Asia.
Much of this can probably be traced to Equinix’s SG1 data center at Ayer Rajah Crescent, which houses its Asia-Pacific Network Operation Center (NOC) and is often labeled a “carrier hotel” on the presentation decks of local telecommunication providers and competitors alike.
Though SG1 is old in data center terms, it is supported by fiber optic connectivity that goes under a driveway to the larger, purpose-built SG3 right next door. Crucially, all Equinix data centers are part of an island-spanning metropolitan area network, built using a ring topology for protection against a single point of failure.

[Photo caption: Equinix’s newest data center spans nine stories, not counting underground storage tanks for fuel]
Inside Equinix SG5
DCD was invited to visit SG5 last year, as Covid restrictions eased in Singapore and shortly after the facility's official launch. Located in the Tanjong Kling data center park at Sunview Drive, the new facility is right across the street from Facebook’s upcoming data center and adjacent to Telkom Indonesia’s Telin 3, which we toured in 2016.
SG5’s location in a data center park allowed designers to incorporate features to enhance reliability and security, such as its three fiber paths for diverse connectivity. And like all new data centers in Singapore, the building is compliant with the Threat and Vulnerability Risk Assessment (TVRA) guidelines from the Monetary Authority of Singapore (MAS).
The compound is fronted by a standalone security building at the entrance, where security personnel verify the identities of visitors. There is a small private car park in front of the building, which you cross to enter.
According to Equinix, the first phase saw SG5 offering an initial capacity of 1,300 cabinets, which was recently expanded to 2,950 cabinets. When fully built, SG5 will hold 5,000 cabinets. Data halls are located between the ground level and the roof at level nine, giving it seven floors' worth of data halls.
The facility is powered by dual redundant 66kV power feeds, supplied from dedicated substations built to serve the data center park. It is understood that levels two through four offer 21MW of power in total, while the next four levels are designed to support up to 3MW per floor.
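Taking those reported figures at face value, the implied design envelope works out as follows - our arithmetic, not an Equinix specification:

```python
# Implied IT power envelope at SG5, from the figures reported above.
levels_two_to_four = 21.0        # MW in total across three floors
levels_five_to_eight = 4 * 3.0   # four further floors at up to 3MW each

total_mw = levels_two_to_four + levels_five_to_eight
print(f"Implied full build-out: ~{total_mw:.0f}MW")   # ~33MW over seven floors
```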
Sophisticated design
SG5 was clearly designed to maximize every inch of its compound: Backup generators and high-efficiency water-cooled chillers are installed on the roof, which comes with upper and lower levels. The diesel fuel for powering the backup generators is stored underground, while backup power is supplied from lithium-ion UPS systems – which have a smaller footprint than traditional lead-acid batteries.
Things get interesting inside the data halls: each of the two data halls on each level is cooled with a fan wall, which Equinix calls the Equinix Cooling Array. These take up the entire length of each data hall and are positioned as part of a hot-aisle and cold-aisle deployment.
In addition, Equinix has eschewed a raised floor design altogether, which together with a slab-to-slab height of 6.5m presumably allowed for more floors
within the building’s height restriction without compromising its ability to support tall racks.
Equinix says its Equinix Cooling Array supports high-density customers whilst reducing water and power consumption needs. But though there is no question that this is optimal for hyperscalers which tend to roll out more standardized systems, the downside is the need for more careful planning and allocation of space for smaller colocation customers.
The shape of things to come
Unlike SG4, which was converted from a warehouse, SG5 was designed from the very beginning to be a data center, which means its space, height, and various technical specifications are optimized, says Yee May Leong, the managing director of South Asia at Equinix, in an interview with DCD last year.
When asked about the Equinix Cooling Array, Leong noted that it was an example of a creation that came out of Equinix’s sustainability research. She added: “We are constantly looking at our design and construction to review how we can build it faster, more efficiently, [and with higher] energy savings.”
Indeed, Equinix has had its eyes on sustainability for a while now and had previously revealed that it has operated with 100 percent renewable energy in Singapore since 2020. Leong was eager to highlight the value that data centers bring to today’s digital economy in general, though she also emphasized that not all data centers are the same, drawing a distinction between hyperscale facilities operated by a single provider and retail colocation providers.
In that vein, Leong sees Equinix’s mission as an “everyday retail data center” powering digital ecosystems and essential services.
“[Given the moratorium], we are very strategic in terms of the customers we want. We want to ensure that these are customers that will help continue to enable the digital economy, [that they are not here] just for colocation, but [are here to] interconnect to our clouds, interconnect to our business partners, interconnect to the network and the different enterprises,” she said.
The moratorium has since been officially lifted, but the restrictions mean that Leong’s words are probably just as true now.
Dissecting Singapore’s pilot for green data centers
Who will take the coveted prize of a new data center permit?
The moratorium on new data centers in Singapore has finally been lifted after three years, and the country has made clear that it will now be “more selective” of data center projects in future. But how exactly will new data centers be selected?
The launch of a pilot scheme for new data centers by the country's Infocomm Media Development Authority (IMDA) and Economic Development Board (EDB) at the end of July finally put that question to rest. The pilot Data Center - Call for Application exercise (DC-CFA) outlined the criteria against which proposals for new data centers will be evaluated.
Call for application unveiled
A summary of the key criteria can be found here (pdf), but the three key evaluation requirements asked of hopeful data center operators can be summed up broadly as sustainability, strategic value in strengthening Singapore as a regional or international connectivity hub, and the potential to advance the nation’s economic objectives.
Many of the requirements in the sustainability category are hardly new, and include certifications such as the Singapore-developed Green Mark for Data Center (Platinum), increasingly found in new builds, a PUE of 1.3, and the ability to demonstrate “best-in-class IT energy efficiency.”
However, it was the strategic and economic evaluation requirements that got some quarters of the industry to sit up and question whether hyperscalers and smaller operators are being left out in the cold.
Jabez Tan, the head of research at Structure Research, told DCD that the evaluation criteria appear to favor retail operators such as Equinix, Global Switch, and Digital Realty, given their interconnected assets from a carrier and cloud on-ramp perspective, as well as a global footprint.
“It would likely be tougher for operators
that specifically cater to large hyperscale deployments to differentiate given the emphasis on connectivity,” observed Tan.
He also pointed to the capacity allocation of up to 60MW per application as “somewhat perplexing.” This could result in only one data center being built in this pilot, due to the higher efficiencies achieved by larger facilities.
As we reported previously, the initial plan outlined at the start of the year was for data centers of between 10MW and 30MW capacities, to be allocated from a total pool of 60MW. However, Tan also noted that the Singapore government has now expressed a willingness to increase the initially proposed 60MW depending on the attractiveness of submitted applications.
Creating the incentive to go green
It is worth noting that this pilot scheme came in the wake of various government-initiated or government-funded efforts over the years to find a path to sustainability in hot and sunny Singapore, a small island nation that is bereft of most renewable energy sources.
Just counting the publicly-announced projects, there was a trial of a tropical data center, an attempt to explore the potential of high-rise data centers, and even funding to research technologies such as water cooling.
There is no question that the DC-CFA represents a radical shift in strategy towards a private-led approach to building more sustainable data centers. What does the government hope to gain from this? The answer can probably be found in the decarbonization clause under the sustainability requirement, which called for proposals for renewable energy use, or plans for “innovative energy pathways” to offset carbon emissions.
Interestingly, hydrogen is named as a specific example of an innovative energy pathway. Multiple studies and consortia have been initiated in Singapore to explore this, including initiatives by STT and Linde, KBR, Keppel, and a Keppel and Osaka Gas joint project.
But is the data center industry ready for hydrogen? When quizzed on this, Chris Street, the head of data centers at JLL, is unequivocal that hydrogen is not currently available on a production basis for mission-critical workloads such as a data center.
He said: “That is simply a fact of the current market situation. With that said, there are a significant number of investors, agencies, and industry participants that are looking into the situation and trying to pull this forward given Singapore’s position in the hydrogen supply chain.”
Jason Plamondon, a senior manager of sustainability at Equinix Asia-Pacific, concurred: “While hydrogen does represent a potential future source of green energy for data centers, it remains some distance away from being commercially viable.” Plamondon pointed out, however, that Equinix is currently piloting hydrogen-ready fuel cells in Italy.
The price to the prize
When viewed from this perspective, it is evident that the intention is to create an irresistible incentive to entice the data center industry to explore more radical technologies and bring them to fruition.
And the prize is a coveted slot to build a new data center in Singapore. It won’t be a free ride, however, and industry stakeholders can expect to piece together the resources or partnerships to get in the running.
“[Projects that want to] stand out will have to show this commitment and then push beyond the standard rhetoric and demonstrate that they will make a positive impact in Singapore’s digital ecosystem. Whether that contribution comes from cutting edge technologies, improved operational strategies, or new business partnerships, is what every investor and operator is trying to [figure out],” said Street.
But will we end up sacrificing opportunities in terms of digital capabilities and innovation? Probably not, when one considers that Singapore already hosts all the top public cloud players, which operate out of multiple data centers. Moreover, capacity requirements for the Singapore cloud regions will likely ease as new public cloud regions are established in the region.
No matter how you dice it, there will be far fewer data centers built in Singapore, relative to the many new builds continuing apace in other Southeast Asian countries. Is there a risk of Singapore losing its current position as a data center hub? The current consensus appears to be a “no.”
Tan from Structure Research summed it up this way: “I don't think it's a binary outcome. Singapore will always be a data center hub. [But] moving forward, there will likely be other data center hubs across Southeast Asia that will eventually decentralize the need to solely rely on Singapore as the only data center hub in ASEAN.”
The DC-CFA pilot application closes on 21 November 2022.
Close to home
Here in Singapore, a large scale industrial facility in a western part of the city-state plays a critical role in Southeast Asia’s rapidly expanding data center sector.
Within its four walls, diesel generators are carefully assembled to individual customer specifications. These highly advanced pieces of equipment, often with outputs of up to 4MW, provide the mission-critical power for multimillion-dollar hyperscale and colocation data centers in many countries across the region.
The Jurong Pier Road plant is the headquarters of Kohler Power Systems in SEA. It is here that large power node generators are designed, built and tested before being shipped to tier 1 data center hubs in Singapore and other hotspots such as South Korea, Indonesia and ANZ.
Recent investment at the plant reflects the growing importance of having a ‘close-to-market’ production strategy serving the data center sector in Southeast Asia. This approach allows for a more bespoke service – with 90
percent of generators built in the plant having some form of customization. It also drives better business relationships and shorter lead times, resulting in on-the-ground cost and logistics efficiencies.
Keith Khoo, Head of Marketing, Power Systems, Kohler

Delivering end-user advantage
So, let’s take a more in-depth look at the benefits of local design and manufacture for data center applications. Firstly, customization is vital because it allows generators to be designed for the specific requirements of local cities and countries. This could relate to stringent emissions regulations or on-site restrictions around noise. Kohler’s technical teams have extensive knowledge of regulatory variations across Southeast Asia and can optimize designs to help customers meet any requirements.
The Singapore factory also provides a regional base for rigorous testing and approvals, often attended in person by data center customers. By working together with local engineering consultants or end-users at the plant, test results for first-of-type data center generators can result in modification of the design, with Kohler working swiftly to perform refinements to systems and components, as required.
An understanding of local conditions on the ground is also crucial for logistics. Southeast Asia, as a region, comprises an impressive diversity in religion, culture and history. Its terrain and infrastructure also vary from one country to another – and transporting large pieces of equipment such as generators to different data center locations can present a logistical challenge. Therefore, local insight and knowledge are critical to ensuring that equipment gets to the site without delay. For example, large-scale 4MW generators often need to be lifted carefully into place, requiring mobile cranage on-site. A detailed assessment of floor loadings needs to be arranged before delivery and installation can occur. These arrangements are better made in close collaboration with a local plant.
The Singapore facility also acts as a foundation for consistent after-sales and technical support, with 24/7 service support including instant access to genuine parts that help deliver optimal performance. Distributors, dealers and teams of service engineers benefit from having close business relationships with Kohler’s in-house design and production experts, ensuring up-to-date knowledge and training.
Above all, a regional plant allows for a truly global footprint, giving customers an advantage in production redundancy. Uncertainty created by the ongoing pandemic and production costs makes a regional location all the more important. With Kohler’s Singapore plant, we
can offer alternatives in the event of a disruption in other countries - ensuring customers get the backup power they need when they need it.
Committed to local customers
Data center operators have exacting standards and demand the highest levels of design, manufacturing, testing, delivery, installation and after-sales. That is true now and will remain so in the future.
A close-to-market strategy for mission-critical equipment such as generators delivers advantages in multiple areas. As a result, Kohler is committed to investing in its Singapore facility, ensuring customers get the best products and service every time.
Data centers cooled by snow
Peter Judge, Executive Editor

In the city of Bibai, on Japan’s northern island of Hokkaido, an artificial hill of snow is slowly creaking and settling, gradually melting in the island’s mild summer climate.
The hill has been covered with a layer of grayish insulation, as if to preserve it. But the hill’s owners don’t want to keep a pile of snow forever. They just want to use the cold it produces as it melts.
An ice-cold pipe extends from the mound to a nearby building.
The pipe carries antifreeze. But more importantly, it carries the hopes of a small band of data center experts, who believe that one future for data center cooling could rely on snow.
No skiing on these slopes
Hokkaido is a center for winter sports. It’s the northernmost part of Japan, and the coldest. Even its summer days only reach 17 to 22°C (62.6 to 71.6°F).
There’s no shortage of snow in Bibai, the fifth largest city on the island. Around 10m falls every year. And while tourists may enjoy
skiing and snowboarding, it’s a headache for the city to keep the streets clear.
Each year, the city spends 400 million yen ($3.5m) clearing the frozen precipitation. Because heaps of snow might not melt all year, the city gathers it in dump trucks and removes it to dedicated snow-melting sites.
Basically, snow is a costly nuisance.
Or it was, until Professor Kobiyama Masayoshi of the Muroran Institute of Technology began experimenting with pipes of antifreeze.
Under Professor Kobiyama, the Bibai Natural Energy Research Association looked into ways to make use of snow, and found that its coldness, or “snow cooling energy,” could actually be useful.

It’s said that snow gives you two options: shovel or make snow angels. Now you have a third choice: cool your data center.
The group proposed cooling data centers with melting snow.
One source of inspiration seems to have been Lawson, a Japanese convenience store chain, which in 2012 introduced a self-contained snow-based cooling system for a store in northern Japan. Rather than using piles of snow, the store was equipped with a 100 cubic meter insulated container that was filled with cold snow.
Water running through pipes in the container was cooled by the snow and used in the air-conditioning system.
Professor Kobiyama developed that idea and, by 2014, a prototype known as the White Data Center was up and running, backed by funding from Japan’s NEDO (New Energy and Industrial Technology Development Organization).
The system cools data center servers using half the energy required by conventional techniques, and uses a waste product with no environmental impact.
The White Data Center’s snow mountain has no associated CO2 emissions. Even the trucks that gather the snow would have to do it whether or not the data center existed.
It’s a comparatively low-tech solution, with low costs to implement. The antifreeze circulates between the snow mound and the data center, where a secondary water circuit cools the servers.
There is no need to clean up the snow, which is gathered in a pile that also contains trash and mud from the streets. The team simply runs a pipe through the heap, circulating the antifreeze, which goes back through the heap after its journey through the data center.
Obviously, in winter, no snow-cooling is needed as the ambient temperature is low enough to cool the servers unaided.
And the snow mound - covered with insulating material to preserve it - lasts all year. More will be gathered next winter before
this year’s heap has melted away.
Other plans from Kyocera…
As we said, Hokkaido has plenty of snow, and the White Data Center was not the only project on the island to use snow to cool data centers.
In Ishikari, 50 km away from Bibai, Kyocera Communication Systems has announced plans for a zero carbon data center solely powered by wind, solar and biomass.
Construction on the “zero-emission” data center was announced in April 2019, with the facility due to come onstream in 2021, and be hooked up to renewable power facilities owned by Kyocera in 2022.
The plan was for 2MW each of wind and solar power to be available to the site, which would also draw energy from a nearby third-party biomass plant.
The project is due to go live in the Ishikari Bay New Port industrial park, which has a “100 percent renewable energy area” commitment that requires all companies who build in the area to power their facilities with renewable energy.
We haven’t found any more recent updates on the project, and we suspect the pandemic may have put it off target, but we’ve reached out to Kyocera to find out about progress, and hope to report more details.
… and Data Dock
We’ve also had reports that a third company brought the snow cooling idea to the more temperate main island of Japan, Honshu.
In the Niigata Prefecture, data company Data Dock reportedly ran a sustainable data center using renewable power, cooled with snow meltwater, in the city of Nagaoka, just 174 miles north of Tokyo.
The area has an average temperature of 12°C (53.6°F) from February to December, so there will be less snow around, but the site is reported to have made use of it, along with cool air.
Sadly the Data Dock website seems to be down and its Facebook page has not been updated since 2019, so we suspect this chilly data center may have melted away.
Time to go commercial
Of the three Japanese snow-cooling projects we know of, it seems it was the White Data Center which was successful enough to get commercialized.
After running for five years, from 2014 to 2019, the WDC proved its viability, and in 2021, the White Data Center company was established.
In April 2021, the commercial White Data Center began operations with approximately 20 racks of servers, on a 3.6-hectare plot of land bought by one of the project's partners, Kyodo News Digital.
“Currently, a 20-rack data center is in operation as an experimental facility,” a White Data Center spokesman told us by email.
It offers commercial services, we were told, with four unnamed companies using it or planning to use it: “Fees are lower than those of general data centers,” said our source at WDC.
And the company has ambitious growth plans: “We plan to begin construction of a new building with a capacity of 200 racks by the end of this year [2022].”
While the experimental facility operated at 2kVA load per rack, the new facility will have a higher power density of 5kVA per rack.
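A rough latent-heat estimate hints at how much snow the expanded facility might consume in a year. The kVA-to-kW equivalence and the six-month cooling season below are our assumptions, not WDC figures:

```python
# Latent-heat estimate: how much snow might a 200-rack facility melt?
LATENT_HEAT_FUSION = 334e3      # J/kg to melt ice at 0C (standard value)

racks, kw_per_rack = 200, 5.0   # treating 5kVA as roughly 5kW (assumption)
it_load_w = racks * kw_per_rack * 1000
cooling_season_s = 180 * 86400  # assume ~6 months when snow cooling is needed

heat_joules = it_load_w * cooling_season_s
snow_tonnes = heat_joules / LATENT_HEAT_FUSION / 1000
print(f"~{snow_tonnes:,.0f} tonnes of snow")   # ~47,000 tonnes at these numbers
```

Against Bibai’s roughly 10m of annual snowfall and a city-wide clearing operation measured in dump trucks, a mound of that order looks plausible rather than fanciful.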
According to a government press release issued earlier this year, WDC plans to operate a zero CO2 emission data center, using renewable power from biomass generation plants.
“The next data center we plan to build will be ten times the size of the current one, with 200 racks of servers running,” says WDC president & CEO Ijichi Shinichi in a statement.
Warmth for fish
There’s a side benefit to the facility too. The president spoke about how the Center may develop.
“In order to use energy efficiently, we’re experimenting with vegetable cultivation and fish and seafood farming in greenhouses using waste heat produced by the servers during winter,” said Shinichi.
“We plan to turn this into a reality as the data center grows in scale.”
During its research phase, WDC explored various agricultural options for this heat, including abalone, sea urchin, Japanese mustard spinach, cherry tomatoes, and other products.
As it enters a commercial phase, it has chosen eels and mushrooms as the first products, according to a report in Asahi Shimbun. Both can be harvested after a short period of cultivation.
Eels are a major food staple in Japan, where a massive eel-farming industry has ramped up to some 250,000 tons per year in the last 40 years.
WDC has imported 1,700 elvers (young eels) which will grow to maturity in tanks at the data center. The water cycle of the data center cooling system produces water at 33°C, which is ideal for eel farming, as the
tanks can be kept at 27°C all year round with no heating costs.
WDC expects to ship in around 300,000 eels, which will be grown on-site for seven months, till they reach a commercial weight of 250g.
At that point, they will be sold nationwide and included in local school meals. They will be the first eels cultivated in Hokkaido, Asahi Shimbun reports.
Can we all join the snow party?
WDC’s success is pretty cool. Operators elsewhere might be wondering if they can imitate it, but WDC is not encouraging on this score.
“It is difficult to imitate a place with snowfall alone in terms of revenue and expenditures,” said the WDC spokesperson we contacted.
“This is because there are surprisingly few places with the right conditions, such as land price, amount of snow and collection methods, and stable collection of fuel needed for biomass power generation.”
WDC has filed patents for snow cooling systems and heat utilization circulation systems, we were told, but they might have limited applications, because there simply aren’t many places which have enough winter snow, the right amount of summer sun, and a combination of local data demand and renewable power.
The Australian Edge: the perfect market for an Edge industry
Dan Swinhoe, News Editor

Despite spanning an area almost the size of the US or Europe, Australia has a population of just 25 million, close to that of Madagascar or the Ivory Coast.
The US boasts major data center hubs on both coasts as well as growing markets in the south and north, while Europe has at least five established markets (the FLAP-D of Frankfurt, London, Amsterdam, Paris, and Dublin) alongside nascent locations across the continent. But the world’s largest island has just two notable data center markets in Sydney and Melbourne.
Because of its sprawling geography but relatively small population, Australia may actually be better suited for a nascent Edge data center industry than Europe or North America. The demand for local compute in smaller cities and towns is there, but many areas lack nearby data center facilities of any sort, and most will never justify build-outs at large scale.
Australia’s geography might make it a perfect market for a proper Edge industry, but is there room for multiple players?
Battle for the Australian Edge
A number of firms have sprung up in Australia, looking to fill the gap for Edge locations outside the traditional metros.
Founded in 2018, Leading Edge Data Centres (LEDC) uses prefabricated data centers that can be quickly erected. The prefabs come in either 30 or 75 rack configurations. The company has data centers in Tamworth, Newcastle, Albury, and Dubbo; it has plans for 10 more in New South Wales (NSW) before moving into Victoria and then to Queensland and the Gold Coast. The company has partnered with Schneider Electric and Cisco, and its backers include Australian investment firm Soul Pattinson and DigitalBridge.
Edge Centres (EC) builds smaller, providing modular ‘off grid’ data centers powered by on-site wind and/or solar power and connected to the main grid as backup. Each facility is equipped with just under 1MW of solar infrastructure, a 48-hour battery, and UPS backup equipment, which supports 64 1kW quarter racks. The company says the sites can produce more electricity than they use.
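That stated sizing roughly hangs together, as a quick sanity check shows; the PUE overhead here is our assumption, not an Edge Centres figure:

```python
# Sanity check on Edge Centres' stated off-grid sizing.
racks, kw_per_rack = 64, 1.0
it_load_kw = racks * kw_per_rack       # 64kW of IT load
pue = 1.4                              # assumed site overhead (not an EC figure)
site_load_kw = it_load_kw * pue

ride_through_h = 48
battery_mwh = site_load_kw * ride_through_h / 1000
print(f"Site load ~{site_load_kw:.0f}kW; 48h ride-through needs ~{battery_mwh:.1f}MWh")
```

At these assumptions, the 48-hour ride-through implies a battery in the region of 4MWh alongside the roughly 1MW solar array.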
As well as Traralgon and Bendigo in Victoria, EC has or is developing Edge locations across Grafton and Dubbo, NSW; Toowoomba, Cairns, Mackay, and Townsville, Queensland; and Hobart, Tasmania. In September 2021, the company
broke ground on an Edge network operations center in Albury, New South Wales.
By the end of 2023, it aims to have 20 sites operational, and 40 sites operational by the end of 2024. Some are built in containers outside, others are installed in purpose-built rooms inside existing buildings.
Other players including DXN and DC Two develop small modular facilities for enterprises as well as operate a small number of colocation facilities. As well as operating three permanent data centers, DXN has delivered at least 20 modular data centers to customers including Boeing and Covalent Lithium as well as a prefabricated Cable Landing Station to the Belau Submarine Cable Company (BSCC) for its new cable spur to Palau.
The state of the Edge now
As DCD has previously reported, many within the data center industry believe we are still in the early days of the Edge. While future use cases are being developed, the likes of mobile network operators (MNOs), hyperscalers, and local managed service providers/SaaS players are likely to be the early customers.
Edge Centres has secured contracts with the likes of local IT & ISP players, while Leading Edge says its initial customers include regional businesses such as manufacturing & distribution as well as
government and telcos.
“What we're seeing initially, which is what we anticipated, is telcos, ISPs, retail service providers,” says LEDC CEO Chris Thorpe. “We've really built these facilities with government in mind: if it's good enough for government, it's good enough for everybody else.”
“Last year for us was extremely challenging with Covid. This year, completely different; we're seeing a very different sort of open-minded approach to the edge, which is now very much becoming accepted.”
“We've got quite a substantial MSP very close to launching on a multi-site deal. We've got a tier-one telco on board as well.”
Thorpe says LEDC is close to launching its own private cloud that can be used by small, medium, and large enterprises, in partnership with an as yet unnamed ‘significant cloud partner’.
“This is where we really see the ecosystem evolving; initially with telco and then building out with ISPs, RSPs, and major cloud providers as well. And that in turn is going to start bringing enterprise in en masse.”
DigitalBridge is the 800-pound gorilla in the data center space, so the fact that it is interested and investing in the Edge is seen as a boon. Executives in Europe have told DCD that the company’s investment in the AtlasEdge venture alongside Liberty Global has helped raise the profile of the Edge with investors and customers, and the same effect has happened in Australia.
“It's made a big difference. It's definitely been a milestone moment for us,” says Thorpe. “Even from an institutional investor perspective, we've got a stack of inquiries coming through now. But also from a client perspective, it adds a layer of credibility as to 'well this is really happening'.”
Even Edge Centres’ CEO Jon Eaves acknowledges DigitalBridge’s investment in Leading Edge has been good for the local industry: “It’s validated the Edge, I think. DigitalBridge blessing a particular technology legitimizes it.”
Is Australia big enough for more than one Edge player?
As previously mentioned, Australia is huge geographically, but has relatively little existing data center build-out compared to Europe or the US. But the need and early demand for the Edge is already there and could help spur further development.
“People in regional Australia, which is a third of the population of Australia, have had a serious challenge accessing cloud-based services, which is why the penetration is still quite low,” says Thorpe.
“You've got these vast distances with no credible data center facilities. There's no credible, resilient place to have serious IT infrastructure, and that's exactly where the opportunity is; I think if anywhere in the world is suited for an Edge network, it's Australia.”
“Sovereignty is becoming a massive issue over here. Where is your data; is it held in Australia or in the States? You need to know as a company director exactly where your information is held.”
Despite the vast geography, the population of Australia is small, especially outside the existing hubs. Is there enough room and demand to allow multiple Edge players to operate within the country? Both Eaves and Thorpe say probably not.
“Australia's a tiny market,” says EC’s Eaves. “We've only got 25 million people, the towns we're building in have never even had a data center.”
“You look at the size of our cities, region by region, you're probably talking 200,000 to 300,000 people,” adds Thorpe. “It’s not a huge capture in each location, so that first-mover advantage is really critical.”
Though their offerings are all different – prefabbed containers for DXN, solar-powered pods for EC, and Tier III designs for LEDC – is there enough differentiation between the market players to create a leader?
“I don't think anything separates us at the moment. There's three companies all going after the same market,” warns Eaves.
“What Leading Edge offer, what Edge Centres offer, what DXN offer are all the same, which is detrimental because then it becomes the race to the bottom which is obviously price.
“This is effectively a land grab; we all want to position ourselves as an Edge player in a particular country or a particular location. There's also the long game that you've got to be able to manage that out. There's a lot of companies talking about going to the Edge and I look forward to there being more partners and more competitors in the space.”
If more players enter the space, we’re likely to see some consolidation and failures. Earlier this year, Edge Centres acquired fellow local Edge player DC Matrix, adding two facilities currently in development in Sippy Downs, Queensland, and on the Gold Coast, to its portfolio. In the US, early Edge data center startup EdgeMicro entered liquidation late last year.
The APAC Edge
Beyond Australia, Edge Centres is heavily focused on expanding in Asia, which the company predicts could be more of an opportunity in the long run.
“The regional areas of Asia are as underserved as the regional areas of Australia,” says Eaves. “But there is no competitor; there is no DXN or Leading Edge in Asia, they are very much focused on the hyperscale towns.”
Edge Centres is developing facilities in Kuala Lumpur, Malaysia, and another in Vietnam’s Ho Chi Minh City. It is planning further sites in Johor, Ipoh, and Penang in Malaysia, as well as at locations in Indonesia, the Philippines, Japan, and three locations in Vietnam.
There is, however, some competition for the Edge in APAC: DXN has delivered containers to Palau and the Cocos (Keeling) Islands for cable landing stations, while Turbidite – an APAC-focused Edge player led by former Global Cloud Xchange CEO Bill Barney – has a presence in Guam and Hong Kong.
“With the emerging markets of Asia Pacific set for solid growth in the next wave of digital transformation, and a huge gap in highly connected, safe data centers, we believe this is a perfect time to build our footprint in the capital cities of the key emerging Asian markets, then expanding to the second tier developed markets,” Barney said last year.
Thorpe says LEDC is currently focused on Australia, though it has had ‘some
discussions’ about projects overseas.
Edge later; what will it look like?
It was 2018 when Michael Dell said “the Edge will be bigger than the cloud.” Whether that will be true in the long run is still unclear. So too is the route to get us there.
Will 5G be the major driver of Edge use cases? The ‘Metaverse’? Gaming? IoT? No one is sure, and none have yet taken off in a way that could sustain a new segment of the data center industry.
“There's definitely a 5G arms race as far as trying to get some 5G connectivity out there. But as far as business applications, I think that will be a little bit slower,” says Thorpe. “The carriers are still trying to work out how they're actually going to make money out of it.”
“We don't have a true Edge customer because we don’t really know what defines an Edge customer yet,” adds EC’s Eaves. “We've got people using compute in the region, but that's not necessarily for Edge; it's just that they don't want to have to drive it to Sydney when something goes wrong.”
“I have my money on the hyperscale really being the Edge customer,” says Eaves. “Because really all the Edge is doing is filtering heavy workloads that are then transferred back to the cloud anyway.”
Eaves also says over-the-top providers (OTTs) like Facebook, as well as major streaming services, will likely be future customers.
Thorpe adds that agri-tech may also be a large customer of the Edge in future, especially in the likes of Australia where farms can span thousands of hectares.
“It's a huge market. There's a lot of need for IoT out there. The whole ag-tech sector, I think absolutely will be red hot. But not just yet.”
“I think the Edge is in its infancy,” concludes Eaves. “We're all still pre-Edge. What the Edge is right now is not what it's going to be; I still don't think any of us know what it will look like in 2026.”
Floating a new idea
Sebastian Moss, Editor-in-Chief
Photography by Sebastian Moss

Nautilus is ready to move from bold claims to a shippable product
Deep in the bowels of the vessel, surrounded by large metal pipes and the soft hum of pumps, you can't feel the surrounding water. But, if a large ship passes by, you can hear the distant whir of propellers, a hint that this data center is rather unusual.
Nautilus Data Technologies is best known for the barge data centers it has been working on since it was founded in 2014. And yet, even as we toured its sole facility in Stockton, California, the company was keen to highlight that it was more than just floating servers.
Instead, key to its pitch is simply access to ample supplies of nearby water, preferably already moving. While it has plans for additional barges, Nautilus hopes
to develop more traditional data centers on land, pumping in water through its actual unique selling point - its patented Cooling Distribution Unit (CDU).
On the barge, it takes external water, holds onto it for 15-16 seconds, and transfers heat to it from an internal closed loop of pure water via a heat exchanger. The external water leaves the CDU 4°F/2.5°C warmer.
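The standard heat-transfer relation Q = ṁ × cp × ΔT gives a feel for the volumes involved. The heat load below is the barge's IT capacity as reported later in this piece, and we assume for simplicity that all server heat reaches the external loop:

```python
# Q = m_dot * cp * dT: external water flow implied by a 2.5C rise.
Q_WATTS = 2.5e6    # nominal heat load, the barge's reported IT capacity
CP_WATER = 4186    # J/(kg*K), specific heat of water
DELTA_T = 2.5      # K, the quoted outlet temperature rise

flow_kg_per_s = Q_WATTS / (CP_WATER * DELTA_T)
print(f"~{flow_kg_per_s:.0f} kg/s, roughly {flow_kg_per_s:.0f} L/s of river water")
```

That is on the order of 240 liters per second at full load - easy to find in a shipping channel, and a hint at why the land-based version caps its distance from the water.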
On land, it will need water brought to it. "We've arbitrarily set our distance [from the river/lake] at about a kilometer,” Robert Pfleging, president (now CEO), said. “And that's a function of pumping horsepower, and then you have rights of ways and the costs of installing those pipe lengths."
Each approach has its benefits, and accompanying drawbacks. Land-based data centers can grow larger, and there are more possible locations to choose from, even with the kilometer cap.
Floating facilities, meanwhile, have to
find a waterway where they are permitted to permanently moor - such as a port - and have to come in the shape of a barge, or at least something that floats.
But, in return, the facility can be moved if required. You don't have to waste power pumping water over a distance, and there is even a small conductive cooling effect from the water passing around the vessel. Nautilus argues that you also save time on permitting, because you can slip under a port's existing permitting umbrella.
These differences have made the two approaches appealing to different customers - or, at least, potential customers. "I didn't personally like the floating platform idea," Pfleging admitted.
"But the big guys love it, because of the flexibility that affords them," he added, referring to possible hyperscale interest. "I can't wait to do a large 40-50MW brownfield - that's what Maine is."
In Millinocket, Maine, Nautilus hopes to build (on the land) at the site of an old paper mill once operated by the Great Northern Paper Company, a site with a fast-flowing water source that also drives a hydroelectric power station.
"It's got a large intake and discharge system," Pfleging said. Google built a data center in Finland on the site of another disused paper mill for the same reason, using seawater to help cool its facility.
"In 12 to 18 months, I can float in 10MW, sign a 10-20 year lease on a port slipthe power's there, the connectivity is there"
However, Pfleging is keen to point out that Nautilus data centers embrace the water-cooled concept far more aggressively than the Google site. "Oh my gosh, they're using naturally cooled water, but that is the starting sentence of War and Peace from what we're talking about," Pfleging said. "Here, it is much more extensive than that."
There are other parallels. As well as its paper mill installation, Google also considered a floating facility, applying for a patent on a floating barge concept way back in 2008. The company built four barges between 2010 and 2012, the use of which was never clear. Google once claimed one would be a "marketing center for Google Glass," the company’s now-nearly-forgotten smart glasses failure.
In a cruel twist of fate, one of Google’s barges found its way up to the San Joaquin River-Stockton Deepwater Shipping Channel, near Nautilus' facility. "There are still remnants of it, up along the river," Pfleging said.
Now, Nautilus hopes that Google, or a hyperscale rival, will instead turn to it as a partner, with the small company setting up barges it can float to places where the cloud provider needs more capacity.
Along with its Maine plans, the company is targeting an undisclosed city in California, another in Europe, and has signed a Memorandum of Understanding with DFNN for a potential data center in the Philippines, and with Raimon Land to explore licensing its tech in Thailand and the Philippines.

"We have interest from a hyperscaler, and another with a large colo," Pfleging said. He added that they were drawn to the potentially fast permitting times and rapid deployment of the barge.

"In 12 to 18 months, I can float in 10MW, sign a 10-20 year lease on a port slip - the power's there, the connectivity is there," Pfleging claimed. "And while I'm doing that, the customer can work on their 50MW data center, and maybe keep the barge or float it to its next city."

Similarly, it hopes to find hyperscale or large wholesale interest at its potential Maine facility.

"Our Maine site continues to leap forward and then stall, because it really begs to be 30-50MW out of the gate," Pfleging said. "So it really narrows the number of customers that we would want to put in there.

"I honestly wouldn't open the Maine site for 5MW, it just doesn't make any sense. We're going to build the greenest data center in the world: It's going to have new green materials, with energy recapture on water, a little bit of solar, feeding downstream customers, and those types of things," he said, adding they were aiming for the hydro-powered facility to have a PUE of 1.08-1.10. It is also talking to fish farms about using its warm waste water, similar to Green Mountain in Norway.
It also has a long-delayed barge project in Ireland. "I was just in Limerick, Ireland, last week," Pfleging said, claiming that the project is getting back on track.
After being proposed in late 2018, the port-based facility was quickly approved despite complaints by local businesses. But a new wave of objections, Covid-issues, and the lengthy process of getting data center power in Ireland held up the facility.
"We have just got a power letter from [Irish utility] ESB, they committed that we are in the 2022 batch, we're on their timeline now," he said.
"So it might take as much as 90 more days for them to identify, but as soon as they identify a delivery date of power, we're off to the races there - we’ve got a conditional lease, we validated connectivity, we have line of sight to all of our permits.
"It's close, I would say our [California] site and our European site are closer."
With these projects, and its colocation facility in Stockton (which has space to be joined by other barges), one could be forgiven for thinking Nautilus was looking to muscle in on the same turf as Digital Realty, Equinix, and dozens of other colocation and wholesale companies that dominate the land.
"People often get confused by that," Pfleging said. "I've been there and done that," as a former VP of CenturyLink (now Cyxtera), where he ran 55 data centers. "When I was interviewing here, I said that if we're gonna be another Savvis or CenturyLink, I'm out.
"It's not our desire to go out there and build up 50 data centers and be an owneroperator. I don't want that headache again."
The company will do some of its own projects, such as Stockton - a necessary proof-of-concept - and has plans to proactively develop in some locations ahead of finding a customer. But mostly, it hopes to work with hyperscalers and colos, including the Equinixes and DRTs of the world.
"We'll deliver the full product to you, and they fund it," Pfleging explained. "We have a percentage of ownership for carrying the project that you can effectively call our margin, and then we will stand up operations through testing, around 24 months.
"We'll stand up that operations company, hire the people, get all through compliances. And then contractually, at some point in time, then the customer can go, 'Okay, great. We're going to now buy the operation company away from you. And those are now our co-workers and we will buy out your equity stake in the company. It's ours. We know now it runs well.'"
Further down the line, if it has done this multiple times with a company, "I would then have a level of comfort where we can just do a straight license." The barges themselves will be built modularly by shipbuilders and designed in tandem with naval architects Elliott Bay Design Group.
But that's all in the future, if everything goes to plan. For now, we have the Stockton data center, floating on the Stockton Channel.
The company made a mistake with that facility when it first began. "We went and bought a used barge," Pfleging said. "We thought it was great for the story, and saves on steel."
Another reason was that California has a permitting process called 'shade on the waterway,' where permanent platforms that block sunlight to the river bed need a permit - but if you buy an existing barge, it comes with a permit.
"We thought we were really smart," Pfleging said. "We spent about twice as much refurbishing that than just buying a new one, and the footprint was not ideal.
"We have to live with the footprint we have," he said, explaining why PDUs were in the data hall where the customer gear is. "It's not like that in any of the new designs, where it's in its own electrical room. But lesson learned."
The whole facility is a series of lessons learned, with improvements in design and layout apparent from the early first data hall to the more modern fourth - including going from 2MW of IT capacity to 2.5MW and swapping the lead acid UPS for lithium-ion. The ship could hold 440 IT racks; a newer design boosts it to 480.
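A quick back-of-envelope division (assuming, purely for illustration, that the IT load is spread evenly across the racks) puts those numbers in perspective:

\[
\frac{2{,}500\,\text{kW}}{480\ \text{racks}} \approx 5.2\,\text{kW per rack on average}
\]

Actual per-rack densities will vary, but the redesign lifts both total capacity and average density in one step.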
Evolution is also apparent in the cooling rooms, which we were not allowed to photograph for fear of disclosing information about Nautilus’ proprietary CDU.
The first CDU room, a 'Gen One' design, is a mess of pumps and machines, akin to a jungle. Gen Two, a floor above, is closer to a delicate bonsai garden.
"Everything you're seeing here is a piece of our IP," Pfleging said, gesturing proudly. "We actually bought this IP," he added, tapping on a box. "The original IP was owned by an entity out of the UK, we bought it and then we continued to update it since then." Pump and control monitoring software is also designed in-house.
Gen Three is in development, moving the system's air handler onto the CDU so that it can help cool itself. It is also expected to be increasingly pre-fabricated, reducing on-site labor time.
We go outside via the CDU room, and are immediately confronted by a wall of hot, sticky, Californian air. It's not wholly pleasant, with the odd waft of sulfurous algae occasionally overpowering the industrial odor of an active port.
Such natural and man-made contaminants - as well as fish, driftwood, and rubbish - are common to all major waterways. But they do not present a threat to the barge's intake, Pfleging argues.
Here, the company did not need to develop new IP or revolutionize the water wheel. "We find people that have been doing it for 50 years and use them," Pfleging said. "There are great companies out there where you tell them the body of water you're sitting on and your flow rates, and they'll size your intake system to include the little slots or slivers so that a fish can swim along and not get sucked into the inside of it."
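The underlying sizing rule is simple: make the screen area large enough that the water's approach velocity stays below what fish can swim against. A sketch with made-up numbers - the 0.15m/s (about 0.5ft/s) limit is a common fisheries guideline, and the flow rate is illustrative, not a Nautilus figure:

\[
A_{\text{screen}} \geq \frac{Q}{v_{\max}} = \frac{1\,\text{m}^3/\text{s}}{0.15\,\text{m/s}} \approx 6.7\,\text{m}^2
\]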
Large debris, fish, and larvae are first strained out. Then the water is ionized so that it doesn't stick to anything. Pipes are vibrated with ultrasonic transducers, a tech developed by the US Navy to stop barnacles from being able to attach to their ships. The water then travels through stainless steel or high-density polyethylene, which is hard to attach to, and doesn't impart chemicals into the water.
Should nature, in its irrepressible inventiveness, find its way through and cause a blockage, the jet can be reversed. Along with the filters, the heat exchanger can also be opened up to be cleaned. "And there's a redundant one for every data hall," Pfleging pointed out. "So I'm N+1 at the heat exchanger level for every data hall."
In the case of the Stockton data center, water is sucked from 11-12 feet below the surface level to ensure a consistent temperature that isn't warmed by the scorching Californian sun, but is also not so low that it is sucking up silt.
"Go over there and look at the water," Pfleging said. "It's murky water, but we can do it."
As he speaks, a vast oil tanker passes slowly, flanked by two protective tugboats, there to avoid an Ever Given-in-the-Suez-Canal situation.
It's a relatively straight journey for the vessel, and it has its guiding ships - but mistakes can happen, and the 183m tanker could theoretically veer off target.
"It's a very narrow, very deep channel," Pfleging reassured me. "They'd run aground before they get to us."
Still, freak accidents can happen, perhaps with a smaller boat that could make it across the channel and flood the barge. What would that mean for its precious, electrically-powered cargo?
The barge has 13 watertight compartments, five of which would have to be compromised to sink it. In such a case, all the IT equipment would still be above the waterline should the barge touch the riverbed. In its other planned Californian location, where the water is deeper, it is placing pilings underneath it so that it would still settle above the water line. All the equipment below the line is designed to live underwater.
"The fear of sinking is a non-starter," Pfleging declared.
On the small deck, you can see signs that this barge lived a whole life before it became a data center. "It spent 20 years of its life floating between here and Hawaii carrying aggregate rock back and forth, so it took some abuse," Pfleging said, giving a railing an affectionate pat.
On its land side, the traditional high, barbed-wire fence looms imposingly. But here, on the river, the railing is all there is between the barge and open water. Across the channel, residential houses are visible, with the nearest garden proudly sporting a miniature Statue of Liberty - base and all - to welcome passing ships.
This makes the side of the data center feel strangely open and unguarded. Not so, Pfleging countered. "Within 50 feet of us is Homeland Security," he said. "And we've had two instances where they put the boat out after people."
One time, Nautilus was doing weekly water testing and forgot to notify the agency. "Our boat pulled up alongside to talk to somebody here, and turned around just in time to see the fast attack boat coming down with an M16 machine gun on the nose pointed at these guys," Pfleging said.
The other time was a bass fisherman. "We were trying to be nice and told him to stay away, but he was all 'you don't tell me what to do,'" Pfleging recalled. Unfortunately for the recreational fisherman, the camera footage is linked to port police and Homeland Security.
"So they pulled his identifier number off his boat," Pfleging said. "DHS went and paid him a visit, they impounded his boat and said he can have it back in a year or so. He's also on the no-fly list. Just because he was an ass, not because he was a bad guy."
Keen to not jeopardize our flight home, we turned our attention to the other side of the data center, where it is linked to the land.
Power cables and fiber connectivity snake in along with a gangway. All are capable of moving up and down, because the tidal river shifts around three and a half feet. "We could handle a 15-foot wave or 100+ mile an hour winds, not that we're going to get any of that," he said.
On land, the data center plant continues. Diesel generators and fuel storage are based on terra firma, as is a Network Operations Center (NOC), which includes a small warehouse. "If a customer wanted it, we could do it all floating, but at some point it just becomes a question of what you are really trying to accomplish," Pfleging said.
That question is at the forefront of his mind in what he calls the “twilight of his career.” He explained: “I couldn’t be more excited that it has finally evolved to be something meaningful.”
After years at both APC/Schneider Electric and Emerson Network Power/Vertiv, as well as CenturyLink and healthcare IT, he remembers chasing one percent efficiency gains in cooling systems. "Frankly, we're moving the needle tens of percentage points overnight here, it was just this complete paradigm shift in the way we think about mechanical cooling," he said.
“Data centers have a black eye right now, a lot of places don't want us - Singapore, Amsterdam, Ireland… and rightfully so,” he said. “It's about making data centers part of community planning and part of the ecosystem that it lives in, as opposed to a drag on the system.”
African data centers for African people
As investors finally build data centers for Africa’s emerging technical generation, will the continent get the infrastructure it needs?
Data centers are developing rapidly on the African continent, but don't expect them to follow the same path they have taken in the rest of the world.
Africa is a different environment, and a diverse one, where different nations’ economies are developing rapidly - and at different rates.
More importantly, the African nations are starting from a different place to the rest of the world. They are developing their digital infrastructure using today’s tools, while other nations got on board when technology was at an earlier stage of development.
Africa has to play catch-up, but it can also play leapfrog - getting ahead of the rest of the world by skipping whole generations of technology.
Peter Judge Executive Editor
Influx of money
For outside investors, Africa is a new territory, where finance is needed for new projects.
"Based on the critical need for infrastructure in Africa, we've seen a heightened focus and interest within the continent," said Colm Shorten, senior director of data centers at JLL, at a recent DCW conference panel. "From some of the recent announcements we've seen, there's now between $2 and $5 billion of investment being targeted in Africa."
Part of the acceleration is down to Covid: “Post pandemic, over an 18-month period, we've seen more investment in the last 18 months than we had in the previous 18 years,” says Shorten.
DCD has been told there is a desperate need for digital infrastructure across the continent. For instance, in early 2021, the Africa Data Centres Association told us that Africa needs 700 data centers totaling 1,000MW, to enable the spread of digital services across the continent.
But not everyone agrees. Funke Opeke, founder and CEO of Nigeria’s MainOne, is skeptical: “Is there a shortage of data centers? Is a demand being presented locally and then the data centers aren't being built to satisfy it? Have global players decided they don’t want that footprint for their global platforms?”
She believes that a more realistic reason data centers have been slow to appear is that demand has been lacking: “Part of what drives this is the volume of use, otherwise every country globally would have its own large data stores for its population. With costs and the economies of scale, we know that's not feasible, so you have to work with what technology has to offer.”
She says: "I don't see a shortage. But I would say, as the continent reaches a critical mass in terms of data consumption and the digital transformation of society, more is being done. You will see more data centers being built to meet these requirements. But I don't think there was demand that presented itself locally and went unsatisfied. I think there just was not enough demand."
MainOne has some 5MW of capacity live at the moment. Other providers including 21st Century and Rack Centre have similar amounts.
These are figures which make Nigeria ripe for expansion, she agrees: "From a digital services perspective, Africa represents one of the untapped growth markets for international players. If you look at the number of unconnected people who are just coming online, the growth of 4G and smartphones, the size of the population and the amount of data they are consuming per capita, we recognize there's still a lot of growth in the African markets, and I think that's what the global players are paying attention to."
And she agrees that Covid has sped things up: “Everybody here did the pivot to working from home, with endless conference calls, and changing how we travel. I think some of those things have come to stay.”
North-south divide
In the past, there was an extreme digital divide between South Africa and the rest of the continent, but that is improving as other hubs develop.
Right now, South Africa has the lion’s share of capacity: “South Africa is probably two-thirds of the capacity of the whole of Africa today, with somewhere north of 150MW,” says Shorten.
But even South Africa is under-developed compared with the rest of the world: “In the context of global data center, that is relatively small, even though it is the most advanced African ecosystem from a technology standpoint.”
As the whole of Africa develops, the balance will change: "This is no longer just a South Africa conversation. We're moving beyond South Africa. We're looking at other parts of the continent, particularly those that are centered around coastal areas, because they have a distinct advantage, in that it's very easy to take the subsea cables which are coming, and which will effectively light up the infrastructure on the continent of Africa."
Funke Opeke thinks South Africa used to be a special case, but no longer: “South Africa was not particularly integrated with the rest of the continent in the apartheid days,” she says. Since apartheid ended, that’s changed, “but the demographics and the economics are different, although there are also similar challenges in some areas. I think the divide has closed somewhat, and I think it will continue to get better.”
One tangible sign of this is that South Africa is no longer the first foothold for companies moving into Africa, she says: “As technology has developed, other hubs have grown. South Africa is not necessarily the first place that a multinational looking to do business in Africa feels they have to open an office.
“In previous years, that would be the option. If you're going as a multinational to Africa, you’ve got to open an office in South Africa and see what you can do from there. With technology, there's just a lot more access to different countries. I think the divide between South Africa and the rest of the continent is definitely closer.”
For Shorten, there’s a lot of interest in North African states: “Egypt has the highest number of submarine cables [see page 50], with connections from Turkey and the Middle East and Africa. Morocco is another key strategic area because of the proximity to Europe.”
For smaller countries, like Zambia and Zimbabwe, operators will have to accept that they must start small, as he says: “You have to start somewhere. Nigerian operators, whether it’s 21st Century, Rack Centre or MainOne all started small. The journey that they go on is they start from retail, they start small, moving to wholesale, before they begin to address hyperscale cloud.”
Nigeria takes the lead
In this context, the most exciting country in Africa is Nigeria, says Shorten: "It has the largest GDP in the continent of Africa. At over $500 billion, it's significantly bigger than South Africa, which has a GDP of around $330 billion. It also has a population of around 211 million, most of them young, who are becoming Internet users with a strong appetite for digital technology."
It’s only set to grow. “We're going to have an additional 20 percent of Nigerians going on the Internet in the next three years - and 20 percent of 211 million is a big number.”
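The arithmetic is worth spelling out:

\[
0.20 \times 211\ \text{million} \approx 42\ \text{million new Internet users}
\]

That is more people coming online in three years than most European countries have in total.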
Against this background, Nigeria is clearly ripe for growth: "When you think about the capacity that's in South Africa, it's heading towards 180 megawatts. Yet today in Nigeria, we only have 8MW." For Shorten, that explains the interest big players are taking in Nigeria.
One of the data center players there, Wale Ajisebutu, CEO at 21st Century Technologies, has a prediction: "Nigeria will become a trillion dollar economy in four years - and what will drive that is data. Nigeria is becoming digitally connected, and more importantly, data-driven."
Unlike Opeke, he sees a shortage: "How do we keep this data without building enough data centers to warehouse the data? It is impossible. So we need in the next three, four years, about 200MW of data center capacity."
Ajisebutu is courting hyperscaler players, with the idea of creating large quantities of space for them: "We are the only hyperscaler-ready data center in Nigeria, that is ready to be used, that people can move into today."
Phase 1 of 21st Century's Lekki data center is available, and there are plans for a 36MW campus in Ikeja.
Ajisebutu has ambitions to have 17MW of data centers in Nigeria, and this includes smaller facilities: "We're building data centers in every business district," he says. "We have Edge data centers that the GSM operators use."
Foreign takeovers
Ajisebutu says that any foreign investor should at least have a local partner: "My recommendation would be, it would be better to partner with Nigerians. But, if you can do it on your own, you're welcome, Nigeria will welcome you."
More often, foreign investors don't come as tenants. Big players from outside the African continent invest in local operators, and make them part of their empire.
Shorten thinks this is often a way to get that local knowledge: "I think you need to strike a balance. When we look at mergers or acquisitions, or purchases, they are predominantly to get access to available network or connectivity, but also to partner with people, to keep it local. Because you do need to have local networking, you need to have local cultural understandings. And you also need to have somebody with a proven track record."
He warns: "I'd say going it alone is probably not wise. But I think using collaboration is probably the best approach."
Opeke knows about the acquisition route at firsthand, of course: MainOne is in the process of becoming part of the digital infrastructure behemoth Equinix in a $320m deal.
Joining Equinix is a plus, she says: "There's been a positive reaction. The broader tech community sees more capability in Nigeria and West Africa, which has been a high growth market. Having MainOne in Platform Equinix, we're now able to service that. They'll get bigger and better and faster, and all the latest bells and whistles here."
The investment should also enable Nigerian businesses to connect outwards: “For the local community there is that excitement about having access to the global platform that Equinix brings. It is going to deepen infrastructure in West Africa.”
Opeke says she is staying with MainOne.
Elsewhere in Africa, Digital Realty has bought into a few different providers, with a stated aim of achieving a “pan-African” position.
In January, Digital took a controlling stake in Teraco, a South African provider with seven facilities in Johannesburg, Cape Town, and Durban totaling 75MW.
That investment followed the acquisition of Nigeria's Medallion Data Centres in 2021, while it also has a controlling stake in Kenya's icolo.io via its Interxion subsidiary.
Teraco has 187MW of total planned capacity, and owns land next to its campuses in Johannesburg and Cape Town that could support 93MW.
The company has a 19MW development project underway, and it’s natural to expect these developments to accelerate now that they are backed by Digital’s money.
African control, African skills
This process of investment raises questions: when foreign companies buy up African infrastructure providers, does the control pass out of African hands?
Opeke isn’t concerned about that: “The infrastructure that Equinix is focused on is all within the country. So local laws and regulations still take precedence on the structure that we're continuing to deploy. So in that sense, it doesn't pass out of African ownership. Each of these entities is locally regulated.”
Another question is whether foreign investors will invest in local talent, or ship skilled staff in from abroad, with the result that the top skills remain abroad.
Opeke says there's very little danger of that: "I don't see that yet, and I don't expect that." She points out that building in Africa requires working in different environmental conditions, with different infrastructure, so cooling and power systems will differ. "Our teams will continue to play an active role."
Even if foreign owners were tempted to rely on their familiar staff, it wouldn’t even be possible for them to do that, she says. “It's not as if global players have excess skills. They're facing skills shortages as well. So I think rather than remove or eliminate the local roles for top talent, we'll be actively doing more recruiting.”
21st Century took a proactive role, says Ajisebutu: “Five years ago, we set up a digital infrastructure academy in Nigeria, where we train a pipeline of talents to cater for our needs and the needs of our customers.”
Ten percent of 21st Century's profits go into the academy, the company claims. "My idea is to upgrade that academy to an infrastructure university, where I can lecture when I retire in about five years' time," he said. "So we partner with the likes of Schneider; we have a lot of global partners that are supporting us in this space."
Of course, there’s a danger that some may see data centers as an exciting route for career development abroad, but Opeke thinks there will be plenty to keep them engaged in their homelands.
“Some will take that route, and it's okay,” she says. “But we really hope we can retain the core of our talent to grow the business and have interesting opportunities for them.”
She herself is an example of this. After graduating from Columbia University, she rose through the ranks at Verizon in New York, before returning to Nigeria, and eventually founding MainOne.
Is fiber a data siphon?
Some people have suggested that the large quantity of fiber coming into Africa could actually impede demand for local facilities, as it would enable users to access cloud services such as AWS instances in the US, or elsewhere around the world. In this scenario, the overall effect might be to siphon data out of the continent.
Set against that, though, individual countries are increasingly applying data sovereignty or data protection measures, which would tend to encourage or require organizations to store data locally.
Opeke thinks this will have some impact: “There's some regulations for data sovereignty, or data protection. I think that does have an impact but I don't think it has a huge impact. I think over time it's just easier to operate in the region, if you have data centers there.”
Shorten thinks there’s little to worry about, because an increasingly connected population will draw data towards it: “These young, modern, smart people are well educated and are pushing for that technology.”
With the arrival of subsea cables, this population will drive more online activity on the continent: "When you have that greater connectivity, you have a shift not just in the financial sector, where we've seen the origins of data centers in Africa. They'll move to healthcare, retail, or 5G, and AI. You'll find everybody trying to figure out how to make it happen."
More reliable facilities
There’s a popular image of a data center in Africa. It is prefabricated in Europe or the Far East, and shipped to the site. When it’s operating, it has to run on diesel for a larger proportion of the time than an equivalent facility in Europe or the US, because of flaws in the local grid.
Opeke says that picture is changing, but it has taken effort: “As far as MainOne is concerned, that is out of date. I can't speak for all data centers, but we've been very intentional about power sourcing and the environment. We've strategically placed our data centers across West Africa with direct access to the grid. We build dedicated lines to take power from a substation, and that results in 96 percent or higher power availability from the grid.”
That 96 percent figure may be somewhat less than European grids can give, but it represents massive progress: "Yes, we are burning diesel four percent of the time - but that's a deliberate strategy."
Achieving that figure requires different strategies from country to country, and in Ghana, it drops to 95 percent, but MainOne is keeping to that target: “In Ghana and Cote d'Ivoire you have to think it through. Burning that much diesel isn't good for the environment or the stability of the data center. We are deliberate about power sourcing. We get 95 percent in Ghana, and in Cote d’Ivoire we do better than that.”
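Converted into generator runtime, the difference those percentages make is easy to see (a year being 8,760 hours):

\[
0.04 \times 8{,}760\,\text{h} \approx 350\,\text{h/year on diesel at 96 percent}; \qquad 0.05 \times 8{,}760\,\text{h} \approx 438\,\text{h/year at 95 percent}
\]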
Wale Ajisebutu of 21st Century is a big believer in data centers that can stand on their own two feet: “We don't believe that the grid can support the data centers of today, so we have to do something about it. So our philosophy is microgrids, and captive power.”
He explains: “You build your own power plants and use the grid as a support, or vice versa. But you have to build your own, and you have to rely on a microgrid to be able to survive.”
21st Century has the largest solar farm that is powering a data center in Nigeria, claims Ajisebutu, and it is also investing in battery power for its facilities: “The era of connected to the grid is over. You have to do captive power to be able to get power all the time.”
Going green
Renewable energy is also harder in Africa than in many other geographies, so while European operators can aim for 100 percent renewable energy, that’s not yet attainable: “Nigeria has a lot of hydro, but it's not 100 percent renewable,” says Opeke.
“We're starting to see some interesting proposals now for building solar farms that might actually generate enough power, and storing it locally, that could be used as a separate source. But I think that's still very early days.”
In the meantime, she says: "There are other things that we're doing that we can continue to do to reduce our carbon footprint."
In Europe, there’s a need for older facilities to be brought up to date and made efficient, either through regulations or voluntary movements like the Climate Neutral Data Center Pact. Opeke doesn’t see things like that happening in Africa.
“Regulation could be a good thing, from an environmental and also from a business standpoint,” she says, but Africa won’t have such a problem with legacy facilities. “We're able to deploy the technology that gives us those efficiencies today from day one.”
African facilities can skip generations, and go straight to optimal energy use and cooling, she says.
How to break into a data center: Pen testers reveal their secrets
Physical security isn’t a commodity, but an ongoing challenge
Dan Swinhoe News Editor
A man in disguise. Armed to the teeth with fake documents, plans of the building, and tablets to help him foam at the mouth and fake a fit in a tight squeeze. This hit has been in the works for weeks.
Having called ahead under false pretenses to ensure he's expected, he walks up the driveway, mentally preparing to break into a high-security data center brimming with critical data and applications… hey, is the fire escape door propped open?
We need to talk about physical security
While cyber takes the headlines, the physical security of data centers can’t be overlooked. While it may seem simple, the access controls that protect your facilities – and the people that roam the facilities – can easily be compromised if a company is lackadaisical in its approach.
So, is the physical security of data centers as good as companies make out?
“No,” Andrew Barratt, principal consultant, adversary ops, at security firm Coalfire, says simply. “I could probably count on one hand how many are well thought-out. It gets forgotten because it's a presumed commodity.
“Everyone thinks all that stuff just works. And then they don't think about the real-world threats to those physical controls.”
Physical security needs thought
Barratt notes that, to do physical security well, companies have to really think designs out and model threats properly. Otherwise, security controls can often become 'theater': looking like they are doing the job, when they are actually easily circumvented either by attackers or the staff meant to be enforcing them.
“Some of the newer data centers, they'll have things that will look cool from a security perspective, but then you'll see people smoking outside the fire escape and you could have just walked in with a packet of cigarettes.”
He notes that the poorest levels of security – where simple confidence tricks of wearing high-vis jackets are most likely to work – are in corporate data center facilities which serve one company and its subsidiaries.
“It's much more common that you've got peripheral defense, a bunch of very rudimentary access controls, or you'll see gates that you could sneeze and fall over to get into a building. I've even seen some environments where the security guards themselves would let you in if you just looked like you were struggling with the key card.
“In my experience, the ones that have been really well crafted have normally been designed by folks who are ex-military,” he explains. “They’ve been thought out purely from a military perspective for use as critical national infrastructure environments. Generally, the best people who do this professionally are those with the intentions of getting either government, military, or critical infrastructure hosted with them.”
An example of good design might be in the parking. There should be an area to pre-scrutinize and hold visiting cars before they reach the final car park so they can be rejected without creating gridlock. Another would be gates – it should be impossible for someone to tailgate the person ahead of them. However, this is expensive, and corners are often cut for commercial reasons.
Barratt suggests that instead of ‘crappy solutions’ that may be vulnerable or create a false sense of security in a facility’s personnel, companies should sometimes simply accept the risks.
“It's sometimes better to just not do it and know there’s a risk that people have got to be more vigilant towards personally,” he says.
Rishab Verma, of the penetration testing team at Defense.com/Bulletproof, notes that fire exit doors are often neglected and make an excellent point of both entry and exit.
“Sometimes people use it, maybe going to lunch and wanting to get out quick, and it's just left open,” he says. “There is no good security, or access control in place for fire exit doors; I can just simply use the fire exit to get out of the building.”
He notes that a lack of logging can make it harder for companies to track personnel. People should be automatically logged for both entry and exit time, and failure to do so - if, for example, a door was held for them at the exit - should be flagged.
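At heart, the check Verma describes is a reconciliation of entry events against exit events. A minimal sketch, assuming a hypothetical log of (badge, direction) events rather than any real access-control system's API:

```python
from collections import defaultdict

def find_log_gaps(events):
    """events: iterable of (badge_id, action) pairs, action 'in' or 'out'."""
    balance = defaultdict(int)
    for badge_id, action in events:
        balance[badge_id] += 1 if action == "in" else -1
    # Anyone left with a non-zero balance has unmatched entries or exits -
    # e.g. they tailgated in, or a door was held open for them on the way out.
    return {badge: n for badge, n in balance.items() if n != 0}

events = [("alice", "in"), ("bob", "in"), ("alice", "out")]
print(find_log_gaps(events))  # {'bob': 1} - bob badged in but never out
```

A real deployment would run this continuously and alert on gaps, but even this toy version shows why both directions need to be logged: with exit events missing, every gap is invisible.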
The human factor should be a focus
One of the primary routes for a penetration tester to gain access to a data center is through the people; tricking reception and security staff into letting you in directly, or creating scenarios where they can be fooled or distracted long enough to allow an attacker inside.
"I've lost track of the amount of times I've put on a striped suit and walked into a building because people just think you look important," says Barratt. "The old school confidence tricks are very successful and hard to defend against."
Security guards, despite themselves, can often be a weak point in defenses. Often low-paid and outsourced roles, these staff can be over-eager to help for fear of losing their job.
“What really is required is a degree of hostility, in a social environment where people have got a very customer service mindset. The big changing point when it comes to physical security is making them feel like they're actually part of the business and have a valued role,” says Barratt.
“You need the CEO to say ‘you can stop me and I'm not going to fire you.’ The CEO even probably needs to make an example of themself occasionally. It requires leadership and good management and actually really good soft skills and team management, so that they don't feel like they can just be bowled over by somebody playing the ‘I'm more important than you’ card.”
Physical security teams operate most effectively when they feel like they're a cohesive part of the overall business and feel empowered. If security guards are hired help on low pay, and worried about being let go, they aren't as likely to stay sharp or feel brave enough to challenge people that might be senior to them.
Barratt notes that a CISO he worked with had a portfolio of buildings including data centers, and was concerned about the security guards at his premises.
“On a number of the security tests, the security guards were actually the biggest weakness because they were socially conditioned to be helpful to people; anything that they could do to feel like they were valuable to the business they would try and do.”
What this CISO did was halve his physical security team and triple their pay. They were then split into two teams and made to operate in military-style tactics permanently against one another. They ran a leaderboard and would offer rewards to successful teams.
“It was a really fascinating play because the people were mostly the same but they permanently had a team that was on high alert because they knew their counterparts were always trying to simulate a break in.
"A colleague’s favorite prop to break into buildings was a fake pregnancy belly. She'd be very ‘pregnant’ in the latter stages, waddling and struggling...
That level of alert rapidly changed their security team almost overnight to the point they wouldn't trust anybody.”
Access controls aren’t impenetrable
Physical access controls such as key cards, biometrics, CCTV, and mantraps can make a facility much harder to break into.
However, many keycard-based systems can be easily circumvented. There are devices that can scan cards in the immediate vicinity and clone them. Some older cards could have their encryption broken to allow attackers to make entirely new profiles for them.
Employee mistakes can also make it easier to compromise fixed access controls. Staff should know to keep items such as key cards hidden and secured.
"Depending on the notoriety of the target, users often post pictures of their key cards on social media," says Nicky Whiting, director of consultancy at Defense.com. "From there on, it's very simple to create your own key card with a similar layout and print that out on some plastic."
Eric Florence, cybersecurity consultant at SecurityTech, tells DCD that a small facility he previously worked at conducted regular penetration tests, with one incident involving an attacker pickpocketing a security card from an employee.
Keatron Evans, principal security researcher at Infosec Institute, says constantly reminding staff – whether security or more technical roles – of potential security threats is the best way to ensure procedures are followed properly.
“The regularity at which you're reminding them of these things is important,” he says. “When you do a good security awareness campaign, for a month after, the success rate of people blocking attacks and things goes up tremendously. But you go out five to six months later, and it's almost back to where you were in the beginning.”
Evans also points out the importance of screening those given access in the first place to prevent keycards and important information from falling into the wrong hands.
“Companies should do a better job of scrutinizing and background checking because there are cases of people slipping into places through that mechanism. Not just the technical employees, but people like janitors, cleaning crews; anyone that has physical access to the facility.”
Sometimes companies make even more basic mistakes that render existing controls moot. Gillian Vanhauwaert, of the penetration testing team at Defense.com/Bulletproof, notes that one facility had a sign noting gatherings every Wednesday at a certain time. During that gathering, a paper box was put in the door to allow people to walk in and out.
"It was advertising how to get in, so I just had to wait till that time and I just walked in," she says.
Tricks of the trade: how to break into a data center
Confidence tricks
First and foremost, penetration testers will seek to simply be let in by staff at the facility. Tailgating – the act of closely following someone through a door or gate – is a common technique, as is wearing a hi-visibility jacket and pretending to be a construction worker.
Phil Robinson, principal security consultant and founder of Prism Infosec, tells DCD he was once conducting a physical assessment of an organization that had two sites – the main office building and a data center.
The data center seemed reasonably secure, so he gained access to the office via a fake pass and tailgating. After finding an empty office, he found a workstation with the name of a senior member of staff on the login screen.
“I used the desk phone in the office to dial reception and asked for the extension for the data center. They gave me the number and I called the data center, which clearly originated from an internal DDI and I mimicked the local regional accent, announced myself as the person who used the office and said that there was an urgent IT issue and that I’d need access to the IT rack to be able to fix the issue.
“The guard on reception gave me an access number that I could use and said he’d expect me within the hour. I walked to the data center and was signed in within 5 minutes, with no check of my ID. I was escorted to the rack and managed to plug in my pen testing laptop and started exploits against the environment directly from the local data center network!”
Another example goes back to vetting practices, and ensuring visitors on-site - even if expected - shouldn't be left alone for too long, if at all.
"We saw that there was a job being advertised, and we had someone with us that had a profile that fitted that job and had the right credentials," says Defense.com's Vanhauwaert. "We made them submit an application and they got in for an interview. We then debriefed that person saying 'when they ask if you want coffee, you say you want a special kind of coffee that will probably take a while to make, and while they leave you alone you try and look for any kind of port and you just plug this [device] into it.'"
“You can act like a new joining employee, go in with a letter that says you're a new employee, that your card’s not been delivered yet and you just need to go in,” adds Defense.com’s Verma.
"You then challenge the receptionist that you have a meeting at 10 o'clock, and act like you're really scared that it's your first day and you need to go in."
Another example of a confidence trick from Verma is the time he impersonated a council health and safety inspector.
“I sent an email acting like I was from the council, saying ‘we need to do health and safety checks, look at your fire exit doors, etc. Please let me know your availability when I can come in and have a look at the building.’
"We received a reply back and they said 'you're more than welcome to,' gave us their availability, and we then gave them a call to discuss what to do."
In this example, Verma was given a tour of the council building, but the staff member was reluctant to provide a tour of the data center.
A recent case study from NetSPI, a Minneapolis-based penetration testing firm, detailed how the company was tasked with breaking into a facility owned by a colocation firm with one security guard and two employees.
"We started looking at the vendor list and noticed that they use a very well-known national pest control brand," said Dalin McClellan, senior security consultant at NetSPI. "One of the consultants we work with just had that same company in to work on their home and they had all the confirmation emails. We took those emails and modified them and then we sent a spoofed email
that looked like it came from one of the employees at the data center and sent it to the other employee at the data center."
The other employee didn’t notice the email was a fake and gave the OK for the visit. A van was rented and filled with ladders and other hardware. The security guard, expecting a visit, allowed the attackers in, before the colo employee brought the attackers through the building.
“We tried [to access customer cages], but the employee said no - but he did let us get into the ceiling tiles to check for pests, where it would have been easy to install microphones, video cameras, or splice a device into the cables."
Props and uniforms
If charm and confidence don't work, get creative. Penetration testers are happy to use props to create diversions or scenarios where they will be let in.
"A colleague's favorite prop to break into buildings was a fake pregnancy belly," says Defense.com's Vanhauwaert. "She'd be very 'pregnant' in the latter stages, waddling and struggling with a door and holding a bunch of boxes, and people just open any door for you at that stage."
Coalfire principal consultant Justin Wynn notes that crutches work similarly well to a fake pregnancy belly: "Props like these are the equivalent of skeleton keys. These props are force multipliers when going solo, but having a team who can run distractions opens a world of opportunities."
CyberOpz CEO and founder Peter Clay agrees that your best weapon to gain physical access is a pack of cigarettes: "Find out where the smokers go and hang out there as long as possible.
"After sharing a cigarette you aren't a faceless stranger but a new friend that shares the forbidden habit, and your new friends are much more likely to open doors for you, let you tailgate into the environment, and share useful information than they would with a 'stranger.'
"The second option is 'Alka Seltzer man.' One person walks towards the guards and pops two Alka Seltzers in their mouth simulating a seizure. When the guards react to the seizure, a second or third confederate makes their entry. Once they are inside, the 'seizure' victim recovers and says that they don't need medical attention and exits the premises."
Infosec Institute's Evans says that on a recent test on a bank and data center, the team noticed there seemed to be remodeling work going on - including within the data center - with construction workers coming and going regularly.
"We took some pictures and we went and got some green vests, printed out that logo, and put it on a blank white hard hat and we were able to literally walk into that bank, walk into their data center. We were able to just walk right in there, get in and plug in stuff without any challenge whatsoever just because we looked the part."
Construction fails
If the people of a data center can't be tricked into allowing access, sometimes flaws in building design provide a way for penetration testers to gain access.
In a recent series of tweets, penetration tester Andrew Tierney, known as CyberGibbons, revealed how he once broke into a data center via plumbing engineer corridors behind the toilets.
"I needed to gain access from the less-secure side of a sub-basement floor to the more-secure side," he said. "By studying the floor plans of the building, I could see what I ended up calling the 'piss corridor' running along the back of the toilets."
Tierney noted that in buildings with concealed cisterns, rooms can be designed with either panels that fold or a small corridor along the back side for easy access by plumbers. In the pen test in question, the building had been built using the corridor method. In this instance, the toilet corridor bypassed cylinder man-trap gates, and was discovered on publicly accessible planning documents.
"After gaining access to the insecure side, I entered the toilets. Via the accessible cubicle, there was a concealed door into the piss corridor. I opened it, walked along, minding my own business," he said. "After *really* making sure there wasn't someone else in the other accessible cubicle, I let myself out. And I'm in the toilets on the secure side, in the data center."
Jacob Ansari, security advocate and emerging cyber trends analyst for security compliance assessor Schellman, has a similar story of construction insecurity: "Many years ago, when I was a little leaner, some colleagues and I got into a less-secure area that was under construction and where the keycard lock didn't work correctly.
"It shared the same raised floor as the data center, so we found a floor tile puller, lifted up a tile and crawled under the raised floor into the data center space. There's a photograph of us covered in dust from crawling beneath the raised floor, and it's one of my favorite moments in my career."
Enter drones
The cheap price and increasing ease with which drones can be bought and flown present new opportunities for potential threat actors to conduct reconnaissance on properties.
"We use drone technology, and in one instance, we found a ceiling access door that was propped open with a brick on top of a data center building. We were of course able to get some people to climb the building, and just walk right into the data center from the roof," says Infosec Institute's Evans. "If you got a five-storey building, the assumption is the top of it is more protected than the ground, so people tend to not put as much security on the roof."
Both Evans and Coalfire’s Barratt note, however, that an armed response can quickly deal with any rogue UAVs.
“One of my hobbies is clay pigeons, so I think a 12-gauge would certainly deal with most drones quite quickly; it's cheap, and it's relatively easy to train people with,” jokes Barratt.
“But you probably don't want to see people walking around with shotguns just because there might be a random drone.”
Egypt's submarine cable stranglehold
Understanding the Middle East bottleneck, and how things could be set to change
Sebastian Moss Editor-in-Chief
The world's digital infrastructure has been built by the paranoid. At every turn, equipment is duplicated, routes are triplicated, fuel reserves are over-filled. Astronomical sums are spent on building layers and layers of safety into the system, as suspicious minds game out various scenarios that could put the precious flow of data at risk.
And yet, there remains one giant bottleneck, a quirk of geography and geopolitics, that is anything but redundant.
If you take a map of the world’s submarine cable infrastructure, responsible for shuttling data between nations and entire continents, and zoom in on the Middle East, you will notice something striking: Everything goes through Egypt.
Data traveling to and from Europe and Asia, as well as Northern Africa and the Middle East itself, has just one route.
Coming from the Gulf of Aden, cables snake up along the Red Sea, and into the Gulf of Suez. There, they make landfall in Egypt, traversing little more than a hundred miles, before breaking out into the Mediterranean Sea.
"There's no way a network operator would design their network like this under ideal conditions, right?" said Paul Brodsky, senior analyst at Telegeography, best known for its maps of cable routes. "They don't like having everything funneled through one place."
This route concentration is a concern for reliability, putting an estimated 17 percent of the world's Internet traffic in the hands of one country, and in one shallow and narrow sea. But it is also a concern for businesses, which have to contend with a monopoly.
To get through Egypt, companies have to pay exorbitant fees to state-owned Telecom Egypt. Prices have risen dramatically, amid claims of corruption, but operators have had little choice but to pay. At least until now.
The only route
The story of Egypt's submarine stranglehold is hard to tell. Several analysts declined to talk on the record due to business relationships with Telecom Egypt. Cable providers either declined to talk, or did not respond to requests for comment. "I am afraid I won't be open to discuss the Egyptian submarine cable bottleneck due to certain concerns," one industry figure said, declining to elaborate.
In Egypt itself, it’s even harder to talk about the cable situation. In 2019, the TV host of local news program 90 minutes, Ossama Kamal, accused the government of corruption with the way it charges submarine cable operators, and said it risked destroying its position as the gateway between Asia and Europe.
Immediately following the broadcast, he was suspended from his show, fined, and forced to apologize. He did not respond to requests for comment.
Whether Telecom Egypt abuses its market dominance is a matter of debate - some, speaking on background, called the fees extortionate. Others accepted it as the cost of business for using the most logical route through the Middle East, with more than a dozen major cables choosing to go across the country.
Egypt’s position as a critical communications node between East and West dates all the way back to the colonial era (see p.57), and remains, due to a few simple reasons.
First is geography: It's the shortest stretch of land between the Mediterranean and Arabian seas, hence the creation of the Suez Canal for shipping. Network operators like to avoid needlessly traveling across land, with its expensive owners and pesky national sovereignties that need to be dealt with.
Then comes geopolitics. Do Western companies want data to travel through Iran? How about Iraq, Afghanistan, or Syria? Operators like to steer clear of sanctioned nations, or active war zones, so they are off most people’s preferred routes - although some have still tried, but we’ll get to that later. There is one other journey they could take, but that too, we shall save.
Finally, there are market forces. "Once you establish a route and everybody's using it, the cost goes down as more people use it," Doug Madory, director of Internet analysis at Kentik, explained. "So it's really hard not to use it, and it's hard to break out of what ends up being the most selected path.
"With this Egypt chokepoint, obviously the geographic layout is the number one reason, but then once it gets established, it's super hard to break out because then there's so many cables, so many lines, so much infrastructure built along that path."
With this in its favor, Telecom Egypt has been able to charge huge fees - between 6.6 percent and 17.4 percent of its total revenues came from cable fees between 2008 and 2019, according to Submarine Cable Networks. The founder of SCN declined to comment.
It took a while for the state telco to realize it was sitting on a goldmine: It used to sell a perpetual license for somewhere in the ballpark of $100k. Then they moved to a monthly fee, a source told DCD. "Then they said 'oh no, we want to have the transit costs, where people pay by volume of traffic.’ So if tomorrow traffic doubles for a telecom, they get double pay or whatever the tiering system is," Madory said. "I feel like that was too far - people started to revolt, although what can you do? It's not like there's another Egypt you can go to."
Another industry figure called the fees "ridiculous." An SCN report found that 12 submarine cables crossing Egypt paid the telco at least $369 million for Indefeasible Right of Use, with additional Operation and Maintenance (O&M) charges during the lifetime - however, it is not clear if this is before the telco tried to shift to charging more for more traffic.
Warzones and warnings
The lack of diversity and huge costs of traveling through Egypt have led some to seek alternative routes - some of which appear to have been scams, others of which were ill-advised.
In 2010, Saudi Telecom Company teamed up with Jordan Telecom Group (JTG), Turk Telekom, and Syria Telecom for an ambitious terrestrial cable spanning some 2,530km (1,570mi).
JADI - named so because it would link Jeddah, Amman, Damascus, and Istanbul - had a short life. "There was a lot of hype around this being an alternative to Egypt," Madory recalled. "We were able to test the line and see that it was up… and then it went down."
His contact at Jordan Telecom, sent from France by parent company Orange, let him know why: "It got blown up." Just months after JADI launched, the Syrian Civil War broke out. To bring the cable back online, someone would have had to repair it in the middle of a warzone, with no guarantee it wouldn't just immediately be broken.
"I don't think anybody's ever mentioned that route again," Madory said. "But there was a brief period that existed."
Another alternative came in the EPEG cable, stretching an impressive 10,000km (6,200mi), primarily across land. Announced in 2011, it set off from Oman, briefly crossing the Gulf of Oman and into Iran. From there, it traveled north, up through Azerbaijan, and into Russia, where it veered west via Ukraine, Hungary, and Austria, before terminating in Frankfurt, Germany.
Madory remembers asking network operators about the cable back in 2013, when they told him it was too expensive.
But EPEG had a unique opportunity: That year, the Seacom cable went down after thieves tried to steal copper by setting fire to its terrestrial connection traveling through Egypt. Just a few months later, divers were arrested off the coast of Alexandria for damaging SeaMeWe-4 in a purported effort to get scrap metal (this is a story that has raised many eyebrows in the industry, but alternative explanations are not known).
EPEG raced to take advantage of the trouble, pushing forward its launch, but the price was still offputting. There were other challenges, namely its route. "Iran was still volatile after the Arab Spring," Madory said.
Curiously, it managed to find one major customer, the state telco of Bahrain, Batelco - despite deeply strained relations, and just a year after Bahrain had accused Iran of inciting upheaval in the small state.
Madory has worked with the Bahraini telecoms regulator, so he let them know about the strange decision.
"I said: ‘I don't have a dog in this fight, but we think you should care that you are sending your government stuff through Iran.’ People usually flip out over that kind of stuff.'"
Instead, the regulator told him it couldn't do anything, but asked him to talk to the Batelco engineers.
Iran’s telco wasn’t much better when it tried to sell access to EPEG: "My contacts in the region said they were quite inflexible on price, so they walked away," Madory said.
With the cable also passing through Russia, it's far from a preferred route for US allies.
Its website is no longer active, but archived versions state it had a capacity of 540 gigabits per second - not much compared to today's cables.
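To put that figure in perspective - taking the rough industry rule of thumb that a single modern fiber pair carries on the order of 10-20 terabits per second, not a number from EPEG's operators:

\[
540\ \text{Gbps} = 0.54\ \text{Tbps} \ll 10\text{-}20\ \text{Tbps per pair}
\]

The whole system's capacity would fit comfortably inside a fraction of one fiber pair on a new build.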
However, it may still be active: Madory noticed Internet outages in Iran that were tied to cable cuts in Ukraine due to the invasion, and he thinks it must have been EPEG.
None of these efforts have proved a viable alternative to the Egyptian route. But there exists another way.
Answers in Aqaba
If you head up the Red Sea, you have a choice as you reach the tip of Egypt's South Sinai province. Head northwest and you will travel up the busy Suez Gulf, topped by the eponymous canal. Or you could turn northeast, passing through the narrow Straits of Tiran and into the Gulf of Aqaba.
At its end, you will have another choice. Virtually the entire west of the gulf is still Egypt, almost all of the east is Saudi Arabia (which does not border the Mediterranean).
But right at the tip, crowded into just a few dozen kilometers are the edges of Israel and Jordan, the former of which does open up to the Mediterranean on its other side.
Travel along this gulf is treacherous, prone to sudden squalls in a narrow channel filled with islands and coral reefs. What little seaside territory Israel and Jordan own is precious, with a focus on port or beach access.
But it would be possible to get a cable through there, if only you had the money and the political power to get Saudi Arabia, Israel, and Jordan to work together.
Enter Google.
In 2020, it was revealed that the company was working on the $400 million Blue-Raman cable, traveling from Mumbai, India, across the Indian Ocean, touching on Djibouti and Oman, before going to Saudi Arabia, and then ending up at the Jordanian port of Aqaba.
It then goes into Israel, out to the Mediterranean, and across to Genoa, Italy.
A tale of two halves
Blue-Raman is a single cable, but Google pretends it is two - the Raman portion covering the section from India to Jordan, and the Blue section going from Jordan, through Israel, to Italy.
This act of theater is thought to be just so that Israel and Saudi Arabia can pretend that they don't share a cable, even though they do. Google won't confirm the reason for the double name, nor how much of the cable runs across Saudi territory, declining to comment for this piece.
The name switch, lots of money, and an intense behind-the-scenes transnational lobbying effort allowed Google to pull off a geopolitical coup that will offer the first real alternative to Egypt since the Internet began.
The cable is expected to go live in 2024, offering 16 fiber optic pairs to Google, and partners Omantel and Telecom Italia Sparkle. "In time, consortium members hope to make additional landings and connect the two systems through terrestrial network assets," Bikash Koley, VP and head of Google Cloud for Telecommunications global networking, said when the project was officially announced in 2021.
The company made no mention of Egypt, just hinting at the bottleneck by saying that "developing additional network capacity and routes is critical to Google users and customers around the globe."
Unsurprisingly, operators and industry figures were again reluctant to disclose the background of how the cable came to be. One said: “Google has been able to negotiate something with the various parts… I should probably stop talking.”
Madory was more open: "They're a bottomless pit of money, so if they want it, they can get it."
Helping matters is the fact that Google Cloud is building data centers in both Saudi Arabia and Israel.
"There was a lot of hype around this alternative to Egypt. We were able to test the line and see that it was up… and then it got blown up"
In the former, after slightly delaying its announcement due to the state-sanctioned murder of Jamal Khashoggi, it teamed up with state-owned fossil fuel giant Saudi Aramco to launch data centers in a joint venture.
In echoes of the Blue-Raman wordplay, Google claims that it can work with oil company Aramco without helping the company's oil business, because it is working with Aramco's technology division. When Google previously contacted DCD to make this clarification, we pointed out Aramco’s supposed non-oil-related technology arm does not have any website or public presence. At this point, Google stopped responding.
The cloud region has also been criticized by 39 human rights groups due to Saudi Arabia's well-documented record of illegally surveilling its own citizens, and torturing dissidents.
Over in Israel, Google is building its fourth data center, after winning a $1.2bn government cloud contract (jointly with AWS). That contract has also been criticized by human rights groups and some of Google's own staff, due to the supported agencies' treatment of Palestinians. Israel equally has a long history of state surveillance and nation-state cyber attacks.
The cable is far from perfect, having to travel through nations with long track records of espionage and surveillance.
They also are nations that do not talk to each other, which sits oddly with sharing a communications cable.
Saudi Arabia has not recognized Israel since the state's founding in 1948, although the two do work together behind-the-scenes in the Arab-Israeli alliance against Iran.
While Google's own backdoor meetings may never become public, what is clear is that it was able to pull off what others have failed to do, charting a new route through the Middle East.
Concern at home
In Egypt, it sparked consternation. It is this contract that caused the journalist Kamal to make his comments, arguing that the greed and corruption of officials charging such high fees for transit was what caused Google to go elsewhere. This, he and others argue, could spell the end of what was an easy revenue stream for the country.
Telegeography's Brodsky is careful to not overstate the impact. "It's not like Egypt is suddenly out of the submarine cable business, quite the opposite,” he said.
“There's plenty of cables that have either very recently launched or are in planning or development right now that are absolutely going to run through Egypt: The 2Africa cable just launched to Africa, which is going to be the longest submarine cable system in the world by far. That's planned to run through Egypt.
“Africa-1, IEX, SeaMeWe-6 - these are all new cables that may come online in the next several years. They all transit Egypt. So it's not like Egypt is suddenly being abandoned."
He also noted that Egypt's submarine cable concentration is not as dire as it first appears - they do take some different routes across land, and benefit from four cable landing stations in the north and three in the south, offering some level of diversity.
Telecom Egypt claims there is sufficient diversity so that if one cable goes, the others will be unaffected. However, a June cable cut to AAE-1 appeared to also impact SeaMeWe-5, potentially suggesting more overlap than disclosed. The outage impacted huge swathes of Europe, East Africa, the Middle East, and South Asia, as well as all the major cloud providers.
"Cuts happen," Brodsky said. "They happen all the time, and to the best of us."
It's also unlikely that Egypt itself would decide to exercise too much power over the cables - just as with the Suez Canal, it knows it can keep collecting revenues as long as it doesn't threaten to shut things down.
At the peak of the Arab Spring, as protestors flooded into Tahrir Square, President Hosni Mubarak cut Internet services to the country, enacting a digital blackout. Wild and desperate, these were his final days in power, but he knew not to mess with international cables or the Suez Canal - both flowed freely.
It is unlikely any Egyptian government would change that equation. The same cannot be said for individuals. In April 2022, a coordinated attack was carried out in Paris, with multiple cables connecting the French city to Lyon, Strasbourg, and Lille physically cut in several places. It caused major outages across the country - and no one knows who did it.
A similar event in Egypt would wreak a lot more havoc on the global Internet, as would any damage that could come from more revolutions or civil wars - as we saw with the JADI cable in Syria.
These may seem unlikely, but the job of those providing constant uptime is to prepare for such eventualities. Issues with the Suez Canal were dismissed as groundless fears, until a single ship - the Ever Given - wedged itself in the middle of the channel, disrupting global commerce.
Google's cable would be immune to such events in Egypt but, of course, will itself be vulnerable along the route it travels. "Many-country terrestrial lines are hard to pull off," Madory said. "If anything goes wrong in any of the countries..."
But at least Blue-Raman changes the calculation: it is exposed to different risks than every other cable that travels through the Middle East.
What comes next
It is not clear how long Google and its partners will maintain a monopoly on its alternative route. The hope is that, by helping lay the geopolitical and infrastructural groundwork, it will spark investment in more cables through Israel.
In 2020, the CEO of Israeli telecoms company Cellcom, Avi Gabbay, said that it hoped to build a terrestrial fiber cable from Israel to Dubai.
The company then hoped to launch a submarine cable from Israel over to the EU. However, no other details about the project were made public, and Gabbay was fired by shareholders in 2021 for being too independent. The company did not respond to requests for comment.
This July, Saudi Arabia signed a strategic partnership with Greece to explore a submarine cable. It appears to be very early days for the system, which may not come to pass, and its route has not been disclosed. Passage through Egypt remains the most likely, but Saudi Arabia could choose Israel - however, it would then have to openly accept sharing a single cable. Saudi government representatives did not respond to requests for comment.
Eventually, however, more cable systems will replicate the passage Google found. But they will augment what goes through Egypt, not replace it. They may also help lower the prices Telecom Egypt charges, making it a more reasonable investment.
Because, if those charges are normalized, Egypt still represents the most logical route - the shortest stretch of land before the safety of the open sea.
The colonial roots of Egypt’s submarine cable routes
Sebastian Moss Editor-in-Chief
How did Egypt come to be the Middle East’s primary data route?
To understand that, we have to head back - way back - to the mid-1800s. At the time, communication between Egypt and Europe passed through Turkey, while telegraphs to Asia went from Egypt to India via the inventively named Telegraph to India Company.
Messages moved slowly and expensively across the land, making the British government's management of its vast realm a challenge. The world’s largest empire was constrained, unable to quickly respond to issues at its periphery.
Long routes had much greater latency, but also cost significantly more - at each intermediate point you would need a 24x7 mission-critical manned station. There, someone would have to manually resend the message to the next hop along the chain.
In an effort to speed things up, the SS Queen Victoria set sail in 1860 to lay a government-operated cable between the British-dominated territories of Myanmar (then Burma) and Singapore. Sadly, the ship sank before it had left the English Channel.
Next, in 1868, the government funded a cable that stretched from Egypt to what is now Pakistan, but it soon failed, leaving the government in debt.
The British Empire encircled the globe, but had to rely on Egypt to tighten its grip
"The government had also got burned on the first Atlantic cable and on the cable to Crimea," submarine cable historian Bill Burns explained. "So it became almost entirely a private industry, with the government sometimes giving a guarantee of a certain amount of traffic."
For the Europe-Asia effort, salvation would come in 1869, thanks to a seminal submarine cable figure, John Pender. A British Member of Parliament, Pender would found 32 telegraph companies, connecting many of Britain's willing and unwilling subjects around the world, as well as the US.
He created multiple different companies due to the high risk of cable laying at the time, insulating his other operations from the financial impact of a broken system. Once they proved successful, he would merge them into one conglomerate that eventually became Cable & Wireless.
In 1869, Pender formed the British Indian Company, acquiring the rights from the Telegraph to India Company.
His cluster of interconnected businesses then set about connecting Egypt to Yemen and India, and onwards through to Malaysia and Singapore, as well as Australia a year later. They would go on to be known as the Eastern Telegraph Company.
On June 11, 1870, the first message was sent from Falmouth, UK, directly to India. It passed through Egypt.
"The system of submarine telegraphs which is generally known under the name of the ‘Eastern’ may truthfully be said to be one of the greatest monuments of British enterprise and perseverance that the world has ever seen," the 1894 trade paper The Electrician says in breathless colonial prose.
The opening gave unprecedented access to the East. "The Earl of Mayo was murdered in the Andaman Islands, and the news was confirmed by a special message brought through the submarine telegraphs in a few minutes (February, 1872)," The Electrician recounts.
"During the war in Afghanistan in 1878, 1879, and 1880 the Government made large use of the telegraph, and the British public was enabled to read full details of all actions almost as soon as they took place."
Already, though, the dangers of a single route were clear. The Egyptian war of 1882 brought down landlines in the country, heralding months of outages where the British government could not communicate with India and beyond.
"The inconvenience of these total interruptions to the Governments and telegraphing public, as well as the loss of revenue sustained by the Companies showed the management the absolute necessity of duplicating and triplicating the communications," The Electrician states.
But despite all its wealth and power, the British government and Pender made a decision on redundancy that was based on minimizing distance rather than route diversity: It simply built another cable through Egypt in 1882.
"It is the obvious route through the Mediterranean," Burns said. "You want as little land as possible - remember, in many places, the nature is so hostile, the terrain so impenetrable. To put it bluntly, if you tried putting in a landline telegraph in most places you might find your poles disappeared in short order."
The cable operators primarily stuck to land that had already been carved out by the great railway endeavor, running parallel to the tracks that ran up and down Egypt.
Little has changed, in Egypt and elsewhere, Burns said, producing a map of the Eastern Telegraph company's network in 1901. "If you look at a modern cable map, there's not much different."
Count your carbon
Organizations deciding whether to run a data center or move to the cloud should do some carbon accounting
Peter Judge Executive Editor
Enterprises considering their options will automatically look at the financial impact of each one. They should also look at carbon emissions.
And they may find that decisions about their IT resources - including whether to run a data center - will have a big impact on their emissions.
Have you set targets?
If your company has set targets for limiting emissions, then you will need to track those emissions so you know if you have met the targets.
Even if fighting global warming is not a number one corporate goal for your company, there are plenty of other good reasons why you will have to keep track. Among other things, proposed changes to the SEC rules on risk reporting could mean US companies above a certain size ($25 million in assets) will have to report their carbon emissions. Other nations have similar rules.
So having a weak story on emissions can harm your prospects for raising money from investors and other sources. At some point, you need to do carbon accounting.
The major standard for carbon accounting is the Greenhouse Gas Protocol (GHG Protocol), a global standardized framework which measures emissions from private and public sector operations and their ecosystems. It is a joint effort from the World Resources Institute (WRI), and the World Business Council for Sustainable Development (WBCSD).
The GHG Protocol is where the Scope 1, 2 and 3 emissions are defined (see Box).
Carbon accounting uses ideas from lifecycle analysis (LCA) and there is also an ISO standard (ISO 14064) for it.
There are concerns that ISO 14064 might not be exactly in line with the GHG Protocol. For this and other reasons, large companies have set up Carbon Call, a movement to make sure carbon accounting is actually useful and consistent.
The new SEC rules are likely to apply to the most obvious emissions your company produces - Scope 1 (direct) emissions and Scope 2 emissions produced by your energy suppliers.
You may also have to report on Scope 3 emissions - those you cause within your entire ecosystem of suppliers and customers - which is normally a much larger figure.
If you have set targets for Scope 3 emissions, then you will have to account for them. And the SEC could well come after you for detailed figures.
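The arithmetic behind this accounting is simple in outline: each activity is multiplied by an emission factor, and the results are summed per scope. Here is a minimal sketch in Python - the factors and figures are illustrative assumptions, not official values:

```python
# A minimal sketch of GHG Protocol-style bookkeeping: emissions are
# activity data multiplied by an emission factor, summed per scope.
# Both factors below are illustrative placeholders, not official figures.

DIESEL_KG_CO2E_PER_LITER = 2.7   # assumed factor for generator fuel (Scope 1)
GRID_KG_CO2E_PER_KWH = 0.4       # assumed grid carbon intensity (Scope 2)

def scope1_kg(diesel_liters: float) -> float:
    """Direct emissions from fuel burned on site."""
    return diesel_liters * DIESEL_KG_CO2E_PER_LITER

def scope2_kg(grid_kwh: float) -> float:
    """Indirect emissions from purchased electricity."""
    return grid_kwh * GRID_KG_CO2E_PER_KWH

# Example: a server room that burned 200 liters of diesel in a generator
# test and drew 50,000 kWh from the grid in a month.
total_kg = scope1_kg(200) + scope2_kg(50_000)
print(f"Monthly footprint: {total_kg / 1000:.1f} tonnes CO2e")  # 20.5 tonnes
```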
What’s this got to do with your data center?
Given that IT is likely only a small part of your carbon footprint, it can get overlooked, or dealt with too quickly - but there’s a serious debate in the data center and cloud sector over who has the best story on emissions.
If you are calculating the carbon footprint of your IT, you must determine the emissions (Scope 1, 2 and hopefully Scope 3) of the servers and network equipment you run in-house. If you build a data center, there will be significant Scope 3 emissions embodied in the equipment and the construction of the building.
But the chances are high that you also run some of your IT in the cloud. You will need to account for the emissions that causes. But will those emissions be counted in the same way you account for in-house IT?
When the cloud began to take off in the 2010s, cloud providers asserted that they were reducing the carbon footprint of their customers, because the IT resources in the centralized cloud data centers were deployed more efficiently.
All the IT loads were virtualized and aggregated on the smallest number of servers, so there was less wasted hardware - a full data center can be run more efficiently than an empty one, so when enterprises shift their IT into the cloud, it is often counted as a reduction in greenhouse gas emissions.
In 2020, a study led by Lawrence Berkeley National Laboratory found that between 2010 and 2018, there had been a massive surge in computing capacity in data centers, with only a marginal increase in energy used - and therefore little increase in Scope 2 emissions.
The result was attributed in part to small inefficient enterprise data centers being replaced by more efficient capacity in the hyperscale facilities run by cloud service providers.
Coauthor Arman Shehabi of LBNL said: "Less detailed analyses have predicted rapid growth in data center energy use, but without fully considering the historical efficiency progress made by the industry.”
How green is your cloud?
The cloud leader Amazon Web Services (AWS) has lost little time in capitalizing on this, and offers a free tool for customers, which tracks the carbon footprint of cloud resources in AWS data centers. It then helps users compare this with what they might emit if they ran those resources in an in-house facility.
Needless to say, the in-house figures are estimates made by Amazon, and AWS instances always come out much better. In many cases, they come out an unlikely 88 percent better. The tool is also limited to reporting monthly aggregate totals.
Amazon has promised that it will have net-zero carbon emissions by 2040, so the company tells users that moving to the cloud is a surefire way to reduce emissions.
"If you are an AWS customer, then you are already benefiting from our efforts to decarbonize and to reach 100 percent renewable energy usage by 2025, five years ahead of our original target," said AWS evangelist James Barr in a blog post.
Barr says "the AWS path to 100 percent renewable energy for our data centers will have a positive effect on [customers'] carbon emissions over time."
However, it’s worth pointing out that the AWS tool only takes account of Amazon’s plans to use renewable energy (Scope 2) in the AWS cloud, ignoring Scope 3.
And there are question marks over the way AWS accounts for its own emissions, since it makes heavy use of power purchase agreements (PPAs). It pays for renewable energy to match the amount of energy it uses - but it matches variable renewable sources with AWS’s steady consumption - so its PPAs may only cover about half the energy used in the AWS cloud, according to a report written by McKinsey for the Long Duration Energy Storage Council.
AWS is not alone - Google also offers a carbon footprint tool to cloud customers, and its version includes useful features such as a reminder to switch off server instances which are not being used.
Microsoft also offers a footprint tracker for customers of its Azure cloud. Again, it will be important to make sure this is tracking emissions in the same way you track them for your in-house resources - and to note that the tool's maker has a vested interest in presenting a good record for Microsoft-hosted resources.
Look for a third party
Given the potential conflicts of interest, you may want a third party to measure your cloud footprint. One company that claims to offer this is Cirrus Nexus, which has moved into cloud carbon accounting from straightforward financial measures.
"The same data that we collect for cost optimization also works for carbon," Cirrus Nexus CEO Chris Noble told DCD at the launch of its TrueCarbon tool. "If a company is running 100 VMs in a data center, we can tell them the most costoptimized place to run that - whether it be in that data center, some other data center, or another cloud provider. At the same time, we can say you're causing X amount of kilos of carbon to be produced - and you'll produce less carbon somewhere else."
The Cirrus Nexus tool examines cloud use in real time, and cross-references that with the known footprint of the data centers used in the regions they operate.
Customers can set their own internal carbon price, which then creates an incentive to move resources to the least environmentally damaging cloud. “The business is now incentivized to go and put it in a less carbon-generating region, or a less carbon-generating data center," says Noble.
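That mechanism is simple to sketch. The toy example below illustrates the internal-carbon-price idea only - it is not Cirrus Nexus's tool, and every number in it is invented:

```python
# Sketch of region selection under an internal carbon price.
# All regions, prices, and emissions figures are invented for illustration.

INTERNAL_CARBON_PRICE = 200.0  # $ per tonne CO2e, set by the business

regions = {
    # region: (monthly cost in $, tonnes CO2e per month)
    "region-a": (9_000, 12.0),   # cheapest power, dirtiest grid
    "region-b": (10_400, 4.0),   # pricier, mostly renewables
    "region-c": (9_800, 7.5),
}

def effective_cost(cost: float, tonnes_co2e: float) -> float:
    """Financial cost plus the internally priced cost of emissions."""
    return cost + tonnes_co2e * INTERNAL_CARBON_PRICE

# On raw cost alone region-a wins; with carbon priced in, region-b does.
best = min(regions, key=lambda r: effective_cost(*regions[r]))
for name, (cost, co2) in regions.items():
    print(f"{name}: ${effective_cost(cost, co2):,.0f} effective per month")
print(f"Best region under carbon pricing: {best}")
```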
As with all the other cloud carbon accounting tools, the job of comparing with in-house resources remains. For that job, you will need to have your own internal expertise - or work hard to find someone outside your organization with no vested interest in selling cloud or on-premise solutions.
This feature is from our Enterprise supplement: Read the rest for free here
Understanding Scope 1, 2 and 3
Carbon emissions are not simple to account for. As well as the greenhouse gases you produce yourself on site (for instance by running a diesel generator), there are more which you are indirectly responsible for.
Scope 1
These are the direct greenhouse gas emissions from operations that your company owns or controls.
Scope 2
These are the indirect emissions created by generating the energy used by the company. This includes electricity, but also includes steam, heating or cooling if your organization buys those in.
Scope 3
This is the potentially vast category of emissions created within your supply chain. This includes both upstream and downstream emissions. If your company has a building constructed, there will be a lot of Scope 3 emissions in materials such as steel and concrete, and Scope 3 would also include the emissions embodied in making the equipment such as IT systems that you use, and in providing you with raw materials to carry out your business.
Scope 3 also includes downstream emissions from products shipped, used and eventually recycled by customers.
The end of the supercomputer era
As exascale system power requirements reach tens of megawatts, on-premise facilities are becoming less feasible
Georgia Butler Reporter
There’s some debate over what can be counted as the first supercomputer, but it is possible that we may soon see the last one.
Supercomputers are unique facilities, providing exceptional computing power. So on that basis, the world’s first programmable computers in the 1940s could be described as supercomputers: they weren’t just exceptional, they were unique.
By today’s standards, the performance of 1945’s Eniac was less than “super.” The 1,500 sq ft machine’s 40 nine-foot cabinets, housed in the University of Pennsylvania, held more than 18,000 vacuum tubes and 1,500 relays, as well as hundreds of thousands of resistors, capacitors, and inductors.
It was capable of 5,000 calculations a second, with its then-hefty 160kW energy consumption even reportedly causing blackouts in Philadelphia.
Then there’s Control Data’s CDC 6600, seen by many as the first supercomputer. It had other systems to compete against when it launched in 1964, and gave triple the performance of the previous record holder, the IBM 7030 Stretch.
In the decades that have followed, power has risen by orders of magnitude from the CDC machine’s now-puny three megaflops.
For its first decades, the field was led by Seymour Cray, who left Control Data after building the CDC 6600 to form Cray Research - which is now the supercomputer division of HPE.
Supercomputers have consumed huge sums of money and years of research - and, despite efforts to maximize energy efficiency, the energy demands of high-performance computing (HPC) have kept growing.
This year, the HPC industry officially hit a major milestone - Frontier broke the exascale barrier, which means a system that is capable of at least a billion billion (10¹⁸) floating point operations per second - a target China is believed to have secretly hit one year earlier.
That performance is around 300 billion times (3x10¹¹) that of the CDC 6600.
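That ratio is easy to verify with a back-of-the-envelope check using the nominal figures above:

```python
# Back-of-the-envelope check on the performance ratio quoted above.
cdc_6600_flops = 3e6    # ~3 megaflops (1964)
exascale_flops = 1e18   # the exascale barrier Frontier broke (2022)
print(f"Ratio: {exascale_flops / cdc_6600_flops:.1e}")  # 3.3e+11, ~330 billion
```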
The Frontier system, at the Oak Ridge Leadership Computing Facility in Tennessee, cost $600 million and uses 30MW of power, more than many data centers.
While it represents the pinnacle of computing achievement, it’s not clear whether it represents the future.
Last of their kind
"Leadership HPC appears to be engaging in unsustainable brinkmanship while midrange HPC is having its value completely undercut by cloud vendors," Glenn K. Lockwood, storage architect at the National Energy Research Scientific Computing Center (NERSC), said in a blog post announcing his resignation.
"At the current trajectory, the cost of building a new data center and extensive power and cooling infrastructure for every new leadership supercomputer is going to
become prohibitive very soon. My guess is that all the 50-60MW data centers being built for the exascale supercomputers will be the last of their kind, and that there will be no public appetite to keep doubling down."
He left to join Microsoft.
That destination, and the timing of his departure, may well be significant for the HPC sector.
While supercomputers have become bigger, faster, and more powerful, they are also much more in demand.
No longer limited to governments, research universities, and the most well-heeled corporations, HPC is becoming a powerful tool for commercial firms, and for anyone else who can afford it.
But while everyone wants HPC, not everyone can afford the prohibitive IT hardware, construction, and energy bills of dedicated supercomputers. They are turning to HPC in the cloud.
Cloud HPC emerges
In many ways, HPC has never been as big as it is today. But that’s only if you broaden the scope beyond the standalone facilities of the lineage that runs from the CDC 6600 to Frontier.
The fact is that you no longer need a dedicated HPC facility in order to run these kinds of applications, as cloud providers now offer HPC services that can be rented by users, allowing for temporary HPC clusters that spin up when needed.
Those providers, as we shall see, include Glenn Lockwood’s new employer, Microsoft, as well as the other cloud giants Amazon Web Services (AWS) and Google.
Last year, Yellow Dog created a huge distributed supercomputer on AWS, pulling together 3.2m vCPUs (virtual CPUs) for seven hours to analyze and screen 337 potential medical compounds for OMass Therapeutics.
It was a significant moment, because the effort won the temporary machine the 136th spot in the Top500, a listing of the world's fastest supercomputers. It managed a performance of 1.93 petaflops (1.93x10¹⁵ flops), which is roughly 1/500th of the hard-won exaflop of the Frontier machine.
Instead of sending a workload to a supercomputing center, to be popped on a waitlist for its turn, Yellow Dog and OMass had opted for cloud HPC, where the capacity appears to be ready and waiting on demand - as long as you can pay.
Larger and more traditional workloads are also moving towards cloud supercomputers. One of the most significant is the UK Met Office, which this year awarded a $1 billion contract for a 60 petaflops supercomputer for meteorological analysis.
This performance could put it in the top ten of the Top500 list, and yet the Met Office’s plan makes use of the cloud. The contract has gone to Microsoft Azure, which partnered with HPE Cray.
But it’s not an ad hoc machine like Yellow Dog’s effort. This is somewhere between a dedicated supercomputer and a cloud offering.
Best of both worlds?
The Met Office’s HPC jobs will be run in Microsoft Azure cloud facilities which are not open to access by anyone else, and are combined with extensive on-premises systems from HPE Cray.
“Microsoft is hosting the multiple supercomputers underlying this service in dedicated halls within Microsoft data centers that have been designed and optimized for these supercomputers, rather than generic cloud hosting,” Microsoft told DCD in a statement.
“This includes power, cooling, and networking configurations tuned to the needs of the program, including energy efficiency and operational resilience. Thus, the supercomputers are hosted within a ‘dedicated’ Microsoft supercomputing facility for this project.
“However, that supercomputing facility sits within an overall cloud data center site. This brings the best of both worlds – the cost-optimized nature of a purpose-built supercomputing data center along with the agile opportunities offered by integration with Microsoft Azure cloud capabilities.”
Microsoft makes a strong pitch - and one that has convinced many in the industry, as the movements of significant staff make clear.
When HPE acquired storied supercomputing company Cray for $1.3bn in 2019, a notable number of senior employees left to join Microsoft, including CTO Steve Scott, and exascale pioneer Dr. Daniel Ernst. Others have also left the company for pastures new, including CEO Pete Ungaro and senior software engineer David Greene.
A huge driving force for the Met Office was Microsoft’s potential integration with cloud computing. The Met Office supercomputer is, in essence, an on-prem supercomputer hosted in a Microsoft data center. It holds its own storage capabilities, while also being able to leverage those offered by the cloud.
However, this is a decision born out of necessity, and one that we will see made more and more, according to Spencer Lamb, the COO of Kao Data, a hyperscale provider hosting HPC in a campus in Harlow, North London.
“It's how things will move on and it's how things will happen because, ultimately, the Met Office and other organizations of their ilk cannot build a 20MW data center on their existing campus because it physically won't happen.
“They can either go and utilize a colocation facility and go and buy the computing infrastructure and do it in that fashion. Or, they can outsource it to someone like Microsoft.”
The field of HPC has become so collaborative that there are strong fears UK research could fall behind now the country has detached from the European Union, which has strong shared supercomputing initiatives.
Without EU partnership, the UK at least needs to organize its own actions, according to the Government Office for Science, which released a review of large-scale computing, ‘Large-scale computing: the case for greater UK coordination.’
The report called for a single unified national roadmap and policy direction around its supercomputing capabilities in order to further research capabilities, and reach the goal of a 20MW exascale supercomputer for the nation in the 2020s.
While a noble goal, there are questions over the practicality of building these facilities. As stated in the report: “A single exascale system in the 40MW range would consume roughly 0.1 percent of the UK’s current electricity supply, the equivalent of the domestic consumption of c. 94,000 homes.” Even at the goal of 20MW, the impact is significant.
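Those figures check out on a napkin. The sketch below uses rough, assumed values for total UK supply and household consumption:

```python
# Rough check on the report's 40MW exascale figures.
# UK supply and household consumption are approximate assumptions.
system_mw = 40
annual_gwh = system_mw * 8760 / 1000        # ~350 GWh/year at full load
uk_supply_gwh = 300_000                     # ~300 TWh/year, assumed
home_kwh = 3_700                            # typical UK household, assumed

print(f"Share of UK supply: {annual_gwh / uk_supply_gwh:.1%}")   # ~0.1%
print(f"Equivalent homes: {annual_gwh * 1e6 / home_kwh:,.0f}")   # ~95,000
```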
That power has to come from somewhere, and the energy cost of data centers is in danger of becoming a political issue. Ireland, Singapore, and Amsterdam have imposed de facto moratoriums, followed by tight regulations, and grids are even struggling to meet demand in the world’s largest data center hub in Northern Virginia.
The Greater London Authority (GLA) issued a warning that data center power projects in West London have annexed so much electrical power capacity that future large house building projects may be unable to get connections.
If HPC can be hosted in data centers that already have the capacity, not to mention the technology and cooling equipment needed, the supercomputing problem could become much simpler.
Another way - colocation?
Cloud-based HPC is one option, but there’s another alternative: colocation, where the customer owns the hardware, but puts it in a shared space.
Spinning up HPC on demand in the cloud can be a simple option, but its costs can become large and uncontrollable, warns Kao Data.
In a white paper, the North London provider compares the cost of HPC in the cloud versus the cost of buying the hardware yourself and hosting it in a colocation facility - and reckons the cloud could cost 20 times as much.
“For the colocation facility, the cost of a [Nvidia] DGX-1 machine and its storage plus switching is on the order of $238,372. If you round that up and depreciate it using a straight-line method over two years, that’s $10,000 a month. Then, add in 10 kilowatts of power and colocation rent, and that is another $2,000 a month or so.
“On AWS, a DGX-1 equivalent instance, the p3dn.24xlarge, costs $273,470 per year on-demand and $160,308 on a one-year reserved instance contract. Comparably, Microsoft Azure charges about 30 percent less for an equivalent instance, but AWS is the touchstone in the public cloud. Add in AWS storage services to drive the AI workloads, and it is around a cool $1 million to rent this capacity for two years.”
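The comparison in those quotes can be lined up directly. This sketch uses only the numbers quoted above; note that it leaves out cloud storage, which is what pushes the white paper's all-in figure toward $1 million:

```python
# Lining up the colo-vs-cloud numbers from the Kao Data quotes above.
MONTHS = 24  # two-year comparison window

# Colocation: buy the hardware, depreciate it, pay rent and power.
dgx1_capex = 238_372
colo_monthly = dgx1_capex / MONTHS + 2_000   # ~$10k depreciation + ~$2k colo
colo_total = colo_monthly * MONTHS

# Cloud: rent an equivalent instance. Storage is excluded here - adding
# AWS storage for AI workloads is what reaches the quoted ~$1m figure.
aws_reserved_per_year = 160_308              # p3dn.24xlarge, 1yr reserved
aws_total = aws_reserved_per_year * 2
azure_total = aws_total * 0.7                # ~30% less, per the quote

print(f"Colocation, two years:   ${colo_total:,.0f}")   # ~$286,000
print(f"AWS reserved, two years: ${aws_total:,.0f}")    # $320,616
print(f"Azure estimate:          ${azure_total:,.0f}")  # ~$224,000
```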
So did the Met Office get burnt? Probably not: its deal was awarded to Microsoft after a lengthy public procurement tender (which was challenged by Atos in court). It’s a long-term deal, with better financial terms than renting instances by the hour.
Kao’s Lamb hopes to offer a space for those who still want their own HPC infrastructure, without the hassle of building a warehouse and finding power and cooling. “We've set ourselves out to be somewhere where they can put these systems and rely upon them being looked after in the way that's needed for them to be looked after,” he said.
“They can then go in and do their research, rather than trying to build data centers within their own campus, which ultimately is something they are not very good at doing because they're not experts in that field.
“As these systems grow in size and scale, to be able to build a data center to house a very power-hungry supercomputer becomes increasingly challenging. They can buy a supercomputer within a period of months, but it will take probably two to three years to build a data center around it.”
Kao’s Harlow campus provides 8.8MW of IT load in a single building, and there will be four buildings on the campus once it's fully complete.
The HPC field has always pushed the boundaries of technology, so Kao is promising more advanced options than standard colocation offerings, including liquid cooling which has become de rigueur in the higher rankings of the Top500.
“Because of the high-power nature of the systems, what we are working through at the moment is bringing a water coolant to the chip. So there's a combination of traditional air-cooled, as well as bringing direct cooling to the technology as well. That hybrid approach is something that we very much see as the future and it's necessary for an organization like us with the ambitions we have.”
The company scored an early win with Nvidia, which wanted a supercomputer in the UK - nominally to help with healthcare research, but also as part of its failed lobbying effort to win approval from the government to allow it to acquire Cambridge-based Arm.
Cambridge-1 was the UK’s fastest supercomputer when it launched in 2021, but it has since been surpassed by the University of Edinburgh’s in-house Archer2 system.
Global comparison
However, it pays not to read too much into the microcosm that is the UK, where a few petaflops are counted as a big deal.
To get a more global view, we spoke to Jerry Blair, co-founder and SVP of strategic sales at US data center provider DataBank, as well as the company's SVP of managed services, Jeremy Pease.
“We are seeing higher density cabinets,” says Blair. “It's taken a long time for the average to start going up above 5-6kW per cabinet, but over the last year or two, the chipsets have gotten to a price point now where they can put so many chips in a cabinet that it now requires more power.
“We're seeing a lot more requests for over 10kW and up to 20kW of capacity to be delivered to a cabinet, and even higher than that - in several cases that we're working on, up to 50kW in a cabinet.”
As densities start reaching this level, data centers have to be specifically designed to manage the cooling requirements. DataBank has turned to water-cooled back cabinet doors which take cold water right to the CDU (cooling distribution unit) in the cabinet.
But what is perhaps most important for DataBank has been the realization that many more customers have HPC needs than ever before.
“We're seeing a lot more GPU use at a high density that I would term HPC or supercomputing. We're seeing it from the universities, and we're seeing it from a lot of standard enterprise clients actually,” continues Blair.
“They’re not putting everything in at that density, but if they have 100 cabinets that are 5kW to 10kW cabinets, they may have five cabinets that are 25kW to 50kW cabinets, which are more GPU-based, for particular projects that they're working on.”
It is because of this, as well as supply chain issues, that DataBank is seeing the need for a different approach to providing HPC services to clients, and is introducing a bare metal product.
“We're in this dynamic where gear is hard to get, and networking gear is one of the hardest things to get and you can have nine to 12-month lead times just to be able to get the networking gear that can run all that equipment,” explains Pease.
“So that's why we're launching our bare metal products, which are meant to have GPU capabilities, where we actually have this stuff stocked and ready to go - we have the networking gear and equipment in place and core facilities where we can manage it.
“With the gear that we have, we can get the high-end chipsets, like the GPU chipsets that can manage as high as they want to go. If they want to go 50kW per rack, we've got the chipsets that can enable that, we've got the processors, we’ve got the cores, we've got the RAM,” says Pease.
“Unless they're talking something super high-end with a very special configuration, we should be able to manage that within the chipsets that we've got on the GPU side.”
It is those ‘super high-end’ projects which are the problem. With the variety of options now available, it really doesn’t make sense for many to turn to dedicated supercomputing centers to run their HPC workload, but when it comes to those specific use cases - like the Met Office - building these facilities at scale becomes a real issue.
Uncontrolled power demands?
Whether housed in purpose-built facilities, in colocation buildings, or in the rarified atmosphere of the cloud, all these petaflops need to run on hardware which has a demand for power and cooling.
Wherever it is located, HPC will need a close consideration of the power it uses, and the cost to the planet (and its owner’s pocket).
As Microsoft’s Met Office announcement put it: “There is also a prudent need to minimize those costs where practical, both in terms of money and – perhaps more critically – in terms of environmental sustainability.
“For this reason, the Met Office and Microsoft, who each have longstanding commitments to environmental responsibility, have worked to ensure this supercomputing service is as environmentally sustainable as possible.”
Microsoft and the Met Office appear to be relying on renewable energy PPAs (power purchase agreements), where the IT consumer pays for energy generation in bulk.
But, as the demand for these bigger, more powerful supercomputers continues to grow, there are many ways to rein in power use and address sustainability.
When asked about this issue, Bill Magro, chief HPC technologist at Google, told DCD that the cloud was the logical solution for greener HPC.
“The demand for HPC compute seems insatiable, and the power consumption associated with that demand continues to rise. At the same time, the HPC industry has embraced parallelism, through multi-core CPUs, GPUs, and TPUs. This parallelism enables ever higher efficiency, measured in performance/watt,” he says.
“One of the best ways to minimize the environmental footprint of compute is through highly-efficient, highly-utilized data centers, powered by clean energy,” he added, launching into a pitch for Google’s renewable energy PPAs and power matching.
When asked if there is an upper limit to what we can feasibly power, Magro, like everyone we asked, had little to offer.
To a certain extent, we can hope for the “law of accelerating returns” (Ray Kurzweil’s term for the way some technologies seem to improve exponentially). Perhaps as the power and capabilities of supercomputers continue to grow, our ability to make them more efficient and produce renewable energy will keep pace.
Until then, these facilities will be limited by what Magro dubbed ‘the available power envelope.’
Goodbye to all that?
It is too early to declare the end of the standalone supercomputer, just as the enterprise data center has outlived most predictions. But it is no longer the obvious choice for enterprises and researchers needing access to HPC resources - the cloud offers easy access, with potential sustainability benefits.
For other deployments, colocation and bare metal can fill the need - as long as the facility can meet the increasing power and cooling demands.
That leaves the ‘leadership’ systems, like Frontier, which capture the headlines and are at the forefront of what’s possible in the industry.
"You can stick a full Cray EX system, identical to what you might find at NERSC or OLCF, inside Azure nowadays and avoid that whole burdensome mess of building out a 50MW data center," Lockwood said in his resignation post.
Why, he asked, should the Department of Energy spend billions on the next wave of ginormous supercomputers? Government agencies have already begun to shift traditional workloads to the cloud, cutting down on sprawling data center portfolios that were deemed inefficient and expensive.
“That all said,” he admitted. “The DOE has pulled off stranger things in the past, and it still has a bunch of talented people to make the best of whatever the future holds."
Exploring algorithms and art
Introducing our new resident artist: The data center
The images on the following pages came from a data center.
We try hard to visualize this industry - taking our own photographs, hiring external photographers, illustrators, and designers. But what if the data center could visualize itself?
OpenAI's DALL·E 2 offers that opportunity, running an advanced artificial intelligence program in a Microsoft Azure data center to generate new images.
In coming issues of the magazine, as well as on our news website and supplements, we plan to use AI tools like DALL·E and Midjourney to help us illustrate our industry. We then use another AI system to upscale the images to a high enough resolution.
However, we will only use it when it makes sense, and in conjunction with our own photography and illustrations. We will also always make sure that you know the images came from an AI.
For now, though, we wanted to end this issue of the DCD Magazine with a selection of images we created with AI - to show the promise and potential of the medium, as well as highlight creations that we just think are cool to look at. Beside each image you will find the prompt used to create it.
Above: A data center on stage as a stand up comedian, digital art
Right: Server rack, as painted by Tullio Crali
Sebastian Moss Editor-in-Chief
Short thinking
The news that someone was betting against data centers was a shock to many in an industry that has grown rapidly since the difficult days of the dotcom bubble.
More surprising was the stature of the man making the bet - Jim Chanos, a well-respected short seller who made his fortune going against the market - and the scale: A cool $200 million.
The data center sector was abuzz with hot takes on what the short meant - with most mischaracterizing it as a bet against the concept of data centers. "But data centers are huge, everyone needs connectivity!" they cried, pointing to the rise of the Edge, cloud, and things like the metaverse.
But it pays to closely parse what Chanos is suggesting will fall: Data center real estate investment trusts like Digital Realty.
He argues that as more and more of the world's compute and connectivity shifts to just three hyperscalers, they'll either drive down wholesale margins or cut them out entirely as they ramp up their own building.
This may prove true - corporations don't like to share if they don't have to, and claims by Digital Realty's CEO that they're friendly partners are misleading.
Hyperscalers use partners because of necessity. They can't build as fast as they want to, they can't connect up to the rest of the network at the pace they need, they can't juggle as many projects as the cloud demands.
So then the question shifts to one of timing and scale. That is: A) How long will it take for hyperscalers to grow to a point where they can do it all, and B) Will they truly continue to scale indefinitely, or is there a hard cap, because they will always prefer to use some wholesale or colo space?
Answering either of those on a long timescale is a fool's errand - especially as we slide into a recession that will tighten corporate budgets and cause IT rethinks - and as Equinix’s Metal and Digital’s PlatformDigital expand what they do.
On a shorter timescale, the stats are against Chanos - REITs continue to grow, hyperscalers remain dependent on them, and the overall growth of digital transformation is plumping up everyone's revenues.
Quarterly reports will not show what Chanos is predicting any time soon - he will have to convince the market of a coming storm. But he has made little effort on that front, preferring to rely on an air of mystery and ‘all knowing’ to support his claims.
Industry figures, who are admittedly biased by wanting to remain in a profitable field, have derided him as not understanding the sector and oversimplifying a complex ecosystem of hyperscalers, wholesale providers, interconnection companies, builders, and more.
I may also suffer from the blindness of being in this sector, but I can see where they are coming from.
If Chanos wants anyone to take his thinking seriously, he needs to show his working.
- Sebastian Moss, Editor-in-Chief