CUSTOMIZATION IS OUR STANDARD.
MILLIONS OF CONFIGURABLE RACKS AVAILABLE IN TWO WEEKS OR LESS
6 News
Meta madness, Power takes power, and Equinix heats up
14 Data centers’ Buddy
The future of Loudoun, with Buddy Rizer - the man behind the rise of Data Center Alley
16 Digital Realty’s next move
Building Digital Dulles, a gigawatt campus
22 The newcomers
PowerHouse and CorScale on developing for hyperscalers
27 NTT GDC’s moment
The Japanese giant is ready to take on the US. Can it compete in the big leagues?
31 The Queen of Prince William
How PW’s Digital Gateway came to be, from the lady behind it all
36 There’s something about Maryland
Touring Quantum Loophole’s giant data center campus with CEO Josh Snowhorn
41 The cooling supplement
Over cooling, liquid breakdown, and plant-based oils
57 Australia’s great migration
Moving away from the East Coast, and building data centers out west
59 The Awards winners
We dive into the winners of the 2022 DCD Awards, highlighting who won the industry’s biggest competition
78 Exploring immersion cooling
What are Dug McCloud and Microsoft up to? We take a deep dive
82 Asia explores the multi-cloud
And an intro to HashiCorp
85 5G in Africa
Why the rollout is so slow, and what comes next
88 Op-ed: Social media drama
Meta and Twitter have ripped up their data center plans. What does that mean for the industry?
What happens when the world's data center hotspot runs out of power and land?
Last month, armed with a recorder, a camera, and a drone, Dan Swinhoe and I toured Virginia and Maryland in search of an answer to that simple question: what's next for Loudoun and Virginia?
Start big
Our journey to understand the future of Virginia begins with Digital Realty.
We tour one of their massive million square foot data halls and then head to Digital Dulles, an ambitious plan to build a gigawatt campus next to an airport.
The company still has plenty of runway for growth, but warned that there is no more land left in Loudoun.
Our Buddy
The success of Loudoun as the data center capital is thanks to the work of hundreds, as well as some fortunate events in history. But few can claim more credit in its rise than Buddy Rizer, the county's economic director.
We caught up with Rizer to understand that story, and find out how Loudoun is recovering from Dominion's surprise power crisis.
The newcomers
Next, we travel to the construction sites of PowerHouse and CorScale, the new kids on the block.
Backed by large funds, and with a history in real estate, the two companies hope to cash in on the data center goldrush by building powered shells for the hyperscalers.
The challenger
We then talk to NTT Global Data Centers, touring one of its many Loudoun sites.
After buying up a number of regional contenders like RagingWire and leaving them mostly independent, NTT has pieced them together as a global giant.
Now, the company is ready to spend billions to try and become one of the biggest data center companies in the world.
Its next move? Build out in Prince William County.
The dealmaker
Over in PWC, we talk to the lady who created the PW Digital Gateway, a massive land sale to data center operators that could reshape the Virginia data center landscape.
Mary Ann Ghadban never wanted to sell, and was settling into retired life - until a data center and high voltage power lines came along. So she took control, and decided to leave on her own terms.
The mega campus
Finally, we leave Virginia for nearby Maryland to understand Quantum Loophole's plans to build a giant campus for data centers to build on.
The amount of electricity Buddy Rizer expects Loudoun data centers to consume
Meet the team
Editor-in-Chief
Sebastian Moss @SebMoss
Executive Editor
Peter Judge @Judgecorp
News Editor
Dan Swinhoe @DanSwinhoe
Telecoms Editor
Paul Lipscombe
Reporter
Georgia Butler
Partner Content Editor
Claire Fletcher
Head of Partner Content
Graeme Burton @graemeburton
SEA Correspondent
Paul Mah @PaulMah
Brazil Correspondent
Tatiane Aquim @DCDFocuspt
Designer
Eleni Zevgaridou
Head of Sales
Erica Baeta
Conference Director, Global
Rebecca Davison
Conference Director, NAM
Kisandka Moses
Channel Manager
Alex Dickins
Channel Manager
Emma Brooks
Channel Manager
Gabriella Gillett-Perez
Chief Marketing Officer
Dan Loosemore
Head Office
DatacenterDynamics
22 York Buildings, John Adam Street, London, WC2N 6JU
permission should be directed to the editorial team at editorial@datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.
The landscape of Ashburn is dotted with cranes; construction never stops in DC Alley
Meta is reviewing a number of under-construction data center projects around the world as part of a drastic design change for AI workloads.
The Facebook-owner in December laid off its primary contractor on two data center projects in Denmark, ending development on the Odense expansion.
Now, DCD understands, it is reworking some of its 11 under-development projects for new designs, which themselves are still under development - with the first such pause in Temple, Texas.
This may include the cancelation of contracts with contractors for existing designs, but our sources note that Odense is the only one where a new development is currently not planned. The others are being “rescoped,” which will likely impact their construction timelines, and require new contracts (and potentially approval by local authorities), but are still expected to ultimately end in new data centers.
“Supporting AI workloads at scale requires a different type of data center than those built to support our regular online services,” Meta’s Nordics comms manager Peter Münster told DCD when the Odense expansion was canceled.
“This is why we are focusing our efforts on building a new generation of data centers.”
When asked why the Odense site couldn’t be retooled for the new designs, Münster said: “We are focused on building AI capacity and as of now, this site does not fit our current needs.”
Those facilities will be liquid cooled, the company said at the Open Compute Summit in October.
At the time, it appeared like the shift would be gradual, but the company now seems to be making a drastic switch. The Denmark decision, at least, appears to have been on quite short notice - for example, the contractor for the Odense data center was only brought in this August.
Odense will still see the completion of a smaller expansion that was well underway. It is believed that data centers that are near completion on the older design will still be finished as is.
Meta last month announced it would lay off 11,000 employees, some 13 percent of its entire workforce, amid worsening economic conditions, impacts to its business model from Apple, and an as-yet-unsuccessful pivot to the metaverse.
bit.ly/Metastasized
“These [bills] are corporate cronyism at its finest,” Michigan Democratic Representative Yousef Rabhi said of two proposed bills.
Google has received a $5.2 million tax bill from Wasco County for the facility in The Dalles which it built in 2006, after tax breaks on the site expired.
Datagrid bought a 43-hectare site in January 2022. The company aims to build up to ten 6,500 sq m (70,000 sq ft) 10MW modules totaling 65,000 sq m.
Governments around the world have agreed to put an end to leap seconds, which are added to coordinated universal time (UTC) to keep it aligned with astronomical time (UT1). 27 such leap seconds have been added since the practice began.
The companies said the acquisition will allow Power Control and Legrand to help customers ensure that they have the right UPS and other mission critical power solutions in place to meet potential supply volatility over the coming months. “We want to develop our presence in the UK,” Legrand’s CEO said.
Amazon Web Services (AWS) said that it plans to keep hiring next year, after the broader company laid off more than 10,000 people. AWS implemented an executive hiring freeze earlier this year, and laid off contractors, as well as workers at Luna cloud gaming.
Equinix has promised to “adjust the thermostat,” increasing the temperature in its data centers to reduce energy spent in cooling systems.
The giant colocation company has promised to shift temperatures closer to 27°C (80°F) across its global data center fleet, which will lead to less energy wasted in unnecessary data center cooling. This is an unusual move, as colocation providers often operate at lower temperatures than necessary to avoid upsetting the customers who own the equipment in their facilities, and fear that overheating may damage it.
Equinix plans to allay these fears by applying the increased temperatures gradually over several years, explaining that
the announcement will have no immediate impact on existing customers. Customers will be notified when the thermostat is going up at the site where their equipment is hosted.
A temperature of 27°C has been recommended by industry bodies for many years and is approved by hardware manufacturers. Hyperscale cloud companies successfully run equipment in their own facilities at that temperature or even higher. However, colocation customers and enterprise data centers have often continued to use temperatures that industry standards bodies regard as unnecessarily low.
Equinix explains that increased temperatures will result in less energy used
in cooling, and will start the process to move gradually: “Starting immediately, Equinix will begin to define a multi-year global roadmap for thermal operations within its data centers aimed at achieving significantly more efficient cooling and decreased carbon impacts,” says the press release.
In its announcement, Equinix points out that increasing temperatures will allow its customers to reduce the Scope 3 carbon emissions associated with their data center operations. Scope 3 emissions are those from a company’s supply chain, and are important - but are proving difficult to reduce.
Equinix has lined up support from analysts and hardware vendors.
“Most data centers operate within the restrictive temperature and humidity bands, resulting in environments that are unnecessarily cooler than required,” says a press release quote from Rob Brothers, program vice president, data center services, and analyst at IDC.
He added that Equinix wants to “change the way we think about operating temperatures within data center environments”, saying: “With this initiative, Equinix will play a key role in driving change in the industry and help shape the overall sustainability story we all need to participate in.”
In response to a question from DCD, Equinix said that: “There is no immediate impact on our general client base, as we expect this change to take place over several years. Equinix will work to ensure all clients receive ample notification of the planned change to their specific deployment site.”
bit.ly/HotDataCenterNews
Digital Realty's Bill Stein has been terminated from his role as CEO, effective immediately.
The colo giant announced today that its board of directors has appointed current president and chief financial officer, Andrew P. Power, as its CEO and to the board of directors, effective immediately.
In an SEC filing on the change, Digital said the board “approved the termination of A. William Stein as Chief Executive Officer of the company without cause, effective immediately” on December 13. No reason for the change was given, but it noted Stein will receive around $15 million in cash and various other separation payments and benefits.
According to another SEC filing from August 2021, Stein’s employment with the company was changed to “automatically be extended each year for successive one-year periods until either the employer or Mr. Stein provides 60 days written notice of non-extension prior to the expiration of the then-current term.”
Digital said in a statement: “Bill was explicitly terminated ‘without cause’ pursuant to his employment contract. This is different from being terminated for cause.”
bit.ly/PowerTakesPower
Immersion cooling specialist Iceotope and social media company Meta have demonstrated that immersion cooling can be safely used with hard drive storage - by re-engineering an air-cooled storage system to be cooled by liquid.
The study found the liquid-cooled version of the system had a more uniform temperature, and the power required to cool the system was reduced to less than five percent of the system’s total power consumption. The silent operation of the system also protected the hard drive from acoustic vibrations which can be an issue for air-cooled hard drives.
The test took a standard commercial air-cooled, high-density storage system that held seventy-two hard drives in a 40U
rack, along with two single socket nodes, two SAS expander cards, a NIC, and a power distribution board, and re-engineered it for single-phase immersion cooling.
The test was important because hard drives, with capacities up to 20TB, currently provide 90 percent of the storage in data centers (according to research by Cybersecurity Ventures). While increasing power densities are driving data centers to consider immersion cooling, hard drives have normally been excluded for fear that they might be incompatible with the technique.
According to Iceotope, the test found that hard drive systems in a rack form factor “turned out to be an ideal fit for precision immersion cooling technology.” One
reason for this seems to be a change in hard drive engineering. While hard drives have normally been sealed to prevent the ingress of dust, the arrival of helium-filled hard drives means that such drives are now hermetically sealed, making them compatible with immersion cooling.
To carry out the test, Iceotope and Meta added an Iceotope precision immersion liquid cooling system, immersing the drives in dielectric fluid and fitting a dedicated dielectric loop and a liquid-to-liquid heat exchanger and pump.
Facebook-owner Meta then measured temperature variation across the hard drives and cooling pump power in the air-cooled and liquid-cooled systems.
The results showed the variance in temperature between all 72 HDDs was just 3°C, regardless of their location inside the rack. The cooling system released its heat to a secondary water circuit, and the drives operated reliably with rack water inlet temperatures up to 40°C.
On top of that, the system was efficient, with cooling power at less than five percent of the total power consumption. And the companies assert that liquid cooling will mitigate vibrations that have been known to cause damage or failure of hard drives.
Chassis immersion might seem an extreme option, but Iceotope argues that it is less invasive than other forms of liquid cooling such as cold plates, tank immersion, or two-phase immersion, and allows user access for servicing and the ability to hot-swap drives.
bit.ly/ColdColdStorage
A fire broke out at a QTS data center in New Jersey in the early hours of Wednesday, November 23.
The fire was reported at 02:45 and extinguished by 05:00, with no casualties. However, the local fire department reports the fire was extinguished with heavy flows of water, and extra care had to be taken because of flammable building materials stored on site for an extension to the facility.
“Early Wednesday morning, authorities responded to a fire on a concrete and steel structure under construction adjacent to QTS’ Piscataway data center,” QTS said. “The local fire department... fully extinguished the fire shortly after arrival.”
QTS has been building a two-story, 90,000 sq ft extension to a data center on the site. The fire broke out in this new construction, and did not spread to the adjacent operational data center, which was unscathed.
The spokesperson explained: “QTS determined that several pallets of roofing material stored on the roof for future installation caught fire. The cause has not been determined. No injuries or customer disruption was reported. The operational data center adjacent to the construction site was not impacted.”
QTS bought the data center site from DuPont Fabros in 2016. The 38-acre campus already holds two facilities, as well as the new building under construction.
bit.ly/RaiseTheQTSRoof
Being involved in a project as early as concept design stage allows us to manage and coordinate the utility interaction and significantly reduce the design and procurement timeframes.
Engineering & Design
Our Engineering & Design teams are engaged with Distribution Network Operators in the leading Data Centre Markets
High Voltage
We have established ourselves as a leader for Engineering, Procurement and Construction of energy solution projects.
Electrical & Mechanical
Our Electrical and Mechanical teams have extensive Project delivery experience having delivered significant projects globally.
hmvengineering.com
French data center engineering company APL is proposing a novel solution to data center location: lights-out data centers in newly-constructed underground caverns.
Eco-Caverne, designed by Swiss-based underground construction startup Eccus, is based on new underground space excavated 30m under existing terrain. APL says these underground facilities can be built up to 20 percent cheaper than building above ground, and have benefits including providing waste heat to warm nearby buildings.
Numerous data centers have been built in underground caverns, claiming advantages in cooling and resilience.
Eccus intends to take these advantages to urban areas, adding further benefit: the Eco-Caverne will be easy to access, and create available space in completely built-up areas.
The Eco-Caverne is a waterproof cylinder 30m underground, up to 150m long. The chambers will be built in one of three standard diameters; “Vega” approximately 10m, “Rigel” approximately 13m, and “Hadar” around 15m. Each chamber will be provided with a 3m x 6.5m lift with a capacity of 12 tonnes at one end, and an emergency exit at the other.
The chambers will also have ventilation, security systems, and a fire detection and prevention system.
Eccus says it can build underground, using the same techniques as underground road and railway tunnels, and deliver secure underground space quickly compared with building above ground, and with other benefits. The Eco-Cavernes can be built under existing buildings, and provide space where land is otherwise unavailable.
APL is applying this idea to data centers, which it plans to offer in France and Switzerland: “This new solution makes it possible for a company to create or complete its computer hosting area on land that
has already been built or in a saturated zone,” says APL’s announcement.
The data center builder is making much of the heat advantages, proposing the facilities’ waste heat can be used directly to warm buildings above. APL says a 2,000 sqm (21,500 sq ft) data center can supply 22GWh of heat energy per year, enough for 2,000 homes.
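As a rough sanity check of our own (only the figures above are APL's), the claim works out to about 11MWh of heat per household per year:

$$\frac{22\ \mathrm{GWh/yr}}{2{,}000\ \mathrm{homes}} = 11\ \mathrm{MWh}\ \text{per home per year} \approx 11{,}000\ \mathrm{kWh}$$

a plausible figure for a home's annual heating demand.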
The facilities can also be run without day-to-day staff, allowing the operator to reduce oxygen levels to 13 percent, cutting fire risks.
bit.ly/DataDownUnder
A French startup is proposing to float data centers on urban rivers.
Denv-R plans to launch facilities cooled by river water, starting with a test facility in Nantes next year. The company’s two founders, based at the IMT Atlantique engineering school in Nantes, say this will reduce energy consumption and CO2 emissions.
The company is planning to float a demonstration system on the Loire river by the Quai Wilson island in Nantes, in June 2023.
There is only limited information on Denv-R’s website, but according to an article on 20 Minutes, the idea sounds similar to the barges pioneered by the US company Nautilus, which has a floating facility in California, alongside one in development in Ireland, and a land-based data center in Maine.
Denv-R says its system will be smaller than Nautilus’, making it more suitable for deployment on urban rivers. It also circulates water passively, without the need for pumps.
Based on the render provided by IMT Atlantique, the facilities appear to follow a catamaran-like design, with two hulls next to each other.
bit.ly/DataDownUnder
Thales Alenia Space will lead the European Commission’s ASCEND feasibility study for data centers in orbit.
‘Advanced Space Cloud for European Net zero emission and Data sovereignty’ is part of the EU’s Horizon Europe research program, and aims to see if data centers in space would lead to fewer emissions than those on Earth.
The data centers would rely on solar power plants generating several hundred megawatts, which would also be in space.
They would then link to the Earth via high-throughput optical communications.
The first stage of the study sets out to answer a critical question - whether the emissions created by producing and launching the space data centers would be less than those generated by ground-based ones.
Then it will study whether it is possible to develop the necessary launch solution and to ensure the deployment and operability of these spaceborne data centers using robotic assistance technologies.
bit.ly/SpaceToExpand
From designing your power solutions, through installation and commissioning, our rainmakers are on call 24/7. They’re authorized to contact any data center expert to help you with anything related to protecting your data, now and in the future. Our rainmakers are the supply chain who keep your confidence on.
GPU maker Nvidia and cloud giant Microsoft have entered into a multi-year collaboration to build “one of the most powerful AI supercomputers in the world.”
The cloud-based system will use Nvidia GPUs and networking gear, as well as use Nvidia’s AI software stack.
Specifics were not disclosed, but Nvidia said that the deal will add tens of thousands of Nvidia A100 and H100 GPUs, as well as Quantum-2 400Gb/s InfiniBand networking gear.
“As part of the collaboration, Nvidia will utilize Azure’s scalable virtual machine instances to research and further accelerate advances in generative AI, a rapidly emerging area of AI in which foundational models like Megatron Turing NLG 530B are the basis for unsupervised, self-learning algorithms to create new text, code, digital images, video or audio,” Nvidia said.
When the system comes online, customers will be able to deploy thousands of GPUs in a single cluster to train large language models, complex recommender systems, run generative AI models, and more. A date was not disclosed for when the supercomputer is expected to launch, but it will likely be installed in phases.
bit.ly/BringBackSiliconGraphics
Microsoft has acquired UK-based Lumenisity Limited, a manufacturer of hollow core fiber (HCF) solutions.
A type of optical fiber technology, HCF features an air-filled center channel that is surrounded by a ring of glass tubes, akin to a honeycomb pattern. The design allows for higher capacity with minimized chromatic dispersion.
Though not a new technology, interest in HCF has been growing as its performance and reliability have improved.
Lumenisity was formed in 2017 as a spinoff from the Optoelectronics Research Centre (ORC) at the University of Southampton to commercialize its HCF technologies. The company had raised £12.5 million; euNetworks was a customer, while BT had conducted trials with the fiber firm. It recently opened a 40,000 sq ft HCF manufacturing facility in Romsey, UK.
Microsoft said the acquisition will expand its ability to ‘further optimize its global cloud infrastructure’ and serve Microsoft’s Cloud Platform and Services customers with strict latency and security requirements. Terms of the deal were not shared.
Lumenisity’s HCF solutions use a proprietary design where light propagates in an air core, which it claims has ‘significant advantages’ over traditional cable built with a solid core of glass.
“Organizations within the healthcare, financial services, manufacturing, retail, and government sectors could see significant benefit from HCF solutions as they rely on networks and data centers that require high-speed transactions, enhanced security, increased bandwidth, and high-capacity communications,” Microsoft said of the acquisition.
bit.ly/AHollowVictory
The London Stock Exchange announced that Azure will be its preferred cloud provider... after Microsoft invested in it. The company has won similar contracts by investing in Cruise, OpenAI, and others.
Microsoft is believed to have picked up DPU firm Fungible.
Data processing units (DPUs) are a relatively new class of programmable processor that manages how data moves through a data center, offloading networking tasks and helping optimize application performance.
Fungible was founded in 2015 as the first company to pitch such a product to the cloud, and managed to raise over $370 million.
But the company, co-founded by the founder of Juniper Networks, struggled as
larger players entered the market, including Nvidia, Intel, and AMD. Lightbits, Liqid, and GigaIO also took market share.
This August, the company laid off staff as its sales slowed and its cash piles dwindled. SemiAnalysis reports that the company initially tried to sell itself to Meta, but failed. It was in talks with Microsoft for a custom silicon deal, but as its options narrowed, it sold to the company for a fire sale price.
Microsoft is believed to have no interest in selling Fungible’s kit to external customers.
bit.ly/AFungibleToken
Pictured: Joanna Stiles, Business Manager for Data Centres at Meesons A.I., preparing to present winners Vantage at the DCD Awards
Proud sponsors of the Middle East and Africa Data Centre Development Award at the 2022 DCD Awards.
Keeping Data Centres safe and secure with layered physical security.
Our approach to entry security at Meesons focuses on better-rated, better-tested products. As an international physical security provider, we use our knowledge of standards, certifications, policies and 3rd party accreditations to support public safety demands worldwide. We firmly understand the implications of not protecting data suitably, which is why we are committed to securing the data and assets within your building with the following:
High-security portals
Speed gates
Full-height turnstiles
HVM
Find us at www.meesons.com
The making of Loudoun County as the heart of the data center industry wasn’t always a foregone conclusion.
It had to survive the dot-com bubble bursting, build its credibility in a cautious industry, and find ways to keep on growing, no matter what. Now, its position is secure, with the Virginia county cementing its place as the capital of a demanding industry.
But Loudoun, and Ashburn within it, face new headwinds as power and land demands outstrip supplies, and some locals push back against the preponderance of one industry.
To understand how Loudoun will adapt in the current climate, and what it means for the future of data centers in the country, we sat down with the man who made it possible - Buddy Rizer, Loudoun’s executive director for economic development.
“When it all started here, it came by accident,” he explained. “The Internet exchange was in Tyson's Corner, but the federal government realized that you could just drive a truck into that building, and moved it out here.”
That was followed by AOL and WorldCom, and some of the first dedicated data centers in the country.
It was the height of the dot-com bubble, and "they were all putting fiber in the ground. Companies like PSINet and UUNET were just throwing money around and valuations were just stupid."
Then came the crash. "That kind of blew up," Rizer said, with PSINet going bankrupt in 2001, and UUNET owner WorldCom filing for what was then the largest Chapter 11 bankruptcy protection in history a year later due to widespread fraud.
Rizer was brought on to the economic development team in 2007 with a simple aim: Increase county revenues. "81 percent of our tax revenue was coming from residences," he recalled. "And, as we saw during the housing bubble a year later, that was not good."
Looking out his window, Rizer pointed to three data centers built during the first boom. "They sat empty for years, they never got filled, until I was able to convince Digital Realty to move in."
The early companies may have left, but their infrastructure was all there - available for a new crop of corporations to buy up for cheap and build more sustainable businesses on top of.
"That was purposeful, that was something that we saw as an opportunity. When I hear people say, ‘Loudoun’s success was a lightning strike’… it was purposeful, we looked at that as
“If we're at 28 million square feet in Loudoun today, we could definitely exceed 40 million at total build-out"
an opportunity and I had to be proactive in building that," Rizer said.
"I was going to every show, we were trying to get deals done," he said, sitting in front of a stuffed gorilla wearing hundreds of lanyards from various events he visited. "With CyrusOne, wherever [then-CEO] Kevin Timmons was speaking at a conference, I was going to that conference and sitting in the front row, just so he could see me, and so I could talk to him after. He always said, 'I think we're too late to get in,' and all these things, but then when they came in, they were incredibly successful.”
Proudly surveying the data center landscape from his window, he continued: "We worked very hard to build this. And it was not just us, it was landowners and Dominion," the power utility.
But after decades of building out the infrastructure that made the unprecedented data center expansion possible, Dominion this summer shocked the industry with the surprise announcement that it could no longer guarantee new power connections for four years.
"To wake up one day and to find out that there's now no new power till '26? I was flabbergasted. I really couldn't believe that we ended up there,” Rizer said, admitting that despite their close ties he found out at the same time as everybody else.
The challenge was understandable, he said - "there's no case study to point to given the power density we have here," he cautioned. "There were a lot of factors, there was the Covid growth, and the shift in 2017, when the hyperscalers came in, where their ramp up of power is much quicker. They bring it on like that,” he said, snapping his fingers. “That changed everything.”
Dominion’s announcement was a similar sudden snap. “No one's really been able to explain to me how we went from the idea of unlimited power that we always just assumed we had, to suddenly 'we don't know.'"
Beyond just the immediate impact on the data center sector, the sudden cessation of power and buildouts meant that the county’s tax revenue plans were - and still are - thrown into disarray.
"We had projected 20 percent year-over-year revenue growth because that's what we traditionally had," Rizer said. "Now we know that that's probably not going to be the case. So that does impact our county budget. When you're dealing with half a billion dollars or more, 20 percent of that is a big number.
“When we're trying to plan schools, roads, community services, parks, sheriffs, fire departments, and all of those things, having
that unpredictability that's going to impact our budget process for the next three or four years is not ideal. And we were already into our budget planning when we found this out.”
But the Dominion delays, the scale of which we are still learning, represent just a temporary blip in the story of Loudoun, Rizer argues.
“It sucks for those companies that had already deployed investment into the county, and then now find out that they can't power the buildings that they built. But that hasn't paused any of the demand for the land. People are still trying to secure long runways so that when the power is here, they'll be able to really move pretty quickly. Nothing has slowed down the demand on our land.”
As it currently stands, the data center industry takes up around three percent of the land in Loudoun (with a higher concentration within Ashburn), and consumes around two gigawatts of power. At full-zoned buildout, “maybe it’s five percent of the landmass,” Rizer said, adding that older facilities will eventually be torn down and rebuilt taller and denser.
“If we're at 28 million square feet, we could definitely exceed 40 million at total build-out. That gets us to five gigawatts, and over a billion dollars of annual revenue.”
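Taking Rizer's numbers at face value, that projection implies an average build-out density of roughly (our arithmetic, not his):

$$\frac{5\ \mathrm{GW}}{40\times 10^{6}\ \mathrm{sq\ ft}} \approx 125\ \mathrm{W/sq\ ft} \approx 1.3\ \mathrm{kW/m^2}$$

consistent with his expectation that older facilities will be rebuilt taller and denser.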
After years of building without limits, 2022 also saw the Board of Supervisors limit data center projects in some neighborhoods, particularly along Route 7. The new rules will also require data centers to adopt higher-quality building designs and tougher environmental rules depending on their proximity to housing.
“There's always going to be pushback when things start to get big,” Rizer said. “That's a natural part of the conversation. But there is a difference between process and policy - the process has been tough, I’m not gonna hide from that. But when you look at it from a policy standpoint, honestly, I don't find it overly restrictive.”
He believes the certainty of the new policy will be welcomed by the industry, while design standards are only for the worst offenders - most new builds already meet the requirements. Rizer said that while he welcomes more data centers, it’s important to set restrictions in some areas to build “unique communities where people want to be and also protect Western Loudoun County and protecting the farmland there. I don't see a scenario where we're going to go west of Route 15. I just don't know that that's what we need to do.”
He added: “There are places where we don't think there should be data centers. And I think that's okay. There's been a decade or more of unrestricted growth. In the last year, do you know how many data centers we've turned down? One. It's not like we've turned off the spigot.”
Where data centers call home
While they haven’t turned down that many data centers, the dwindling number of large tracts of land and skyrocketing land prices have driven companies to neighboring counties.
“It's not like if Prince William wins or Frederick wins a deal that we lose,” Rizer said. “We have limited resources and limited land, and now limited power. I would rather it come to the region than not at all. I'm competitive and I don't like losing, but if there’s a deal that they can't put here, I'd rather it go to Prince William."
The county receives around 31 percent of its tax revenue from data centers but, at some point, data centers will max out all the space made available to them, even if the industry stays strong for the foreseeable future.
“I think that the idea that data centers are going to go away anytime soon is probably science fiction,” he said. “Our internal research tells us that the demand for data centers outstrips supply through the next 20-30 years. And if it does start to deteriorate, I don’t think that it starts here.”
But with a limit to growth on the horizon, he hopes to use the money raised by data centers to help promote a more diverse economy. “Even the data center sector wouldn't believe that it makes sense to have all of your eggs in one basket,” he said.
“We're trying to build a Life Science cluster, we're trying to build a cybersecurity industry, and take advantage of the Dulles Airport, and take advantage of the fact that we now have Metro,” he said, referencing the Silver Line rail extension connecting Loudoun to DC that took nearly two decades, but opened in the week of our conversation.
“There's really three things that have been total game changers for us - one was 60 years ago, when the airport came, the second was the tech infrastructure that was built here pre-bubble burst, and then I think Metro is going to be the third.”
The future of Loudoun, he hopes, will involve more data centers than ever before - but will rely on them less as new sectors sprout alongside it. “I don't feel like what we've done here in Loudoun County is the end of anything.”
Still, he is happy to reflect on how far the county has come from its dot-com bubble days. “It's so cool to be the biggest in the world at something, to be able to have this industry that has built what it has. There's something to be said for that.”
As one of the world’s largest data center companies, Digital Realty has an outsized presence in the data center hotspot of Virginia.
The company has some of the earliest buildings in Data Center Alley, dating back nearly two decades, along with some of the largest, and is building a massive new campus right near Dulles Airport.
We visited a million-square-foot facility and toured its new construction site to understand Digital’s future in Virginia. “It's such a critical market, not only for Digital, but for the industry overall. It is the gold standard of availability zones on Earth,” company CTO Chris Sharp said.
The company operates around 600MW of IT load in Loudoun today, and is building a new ‘Digital Dulles’ campus that is planned to add another gigawatt on top. “What I think is interesting is we have almost 1,000 acres inside of the Loudoun County area, and 575 acres of that is left to develop,” Sharp said. “There's a lot of runway.”
Its involvement in the county dates back to Digital’s earliest days, when it acquired three data centers starting in 2005. It also built its own, and gained more facilities with the acquisition of DuPont Fabros.
“Once we started development of our campus, we just never stopped,” Rich Becher, design manager at Digital Realty, said. “When I was preparing for this interview, I learned that the first customer we put into that campus is still with us,” he added. “That was a happy thing to learn.”
“The largest cloud availability zones on Earth are in this market. And what we're seeing now is the adjunct workload that is going to drive more demand is AI"
The company currently has no plans to tear down its older facilities and rebuild them larger and denser. "They still serve their needs," Becher said. "Our customers are happy with them, and are staying in them."
He noted that the components and features of the data centers that age faster are modular, allowing for them to be replaced more frequently than the building itself.
"That modularity is what really allows those buildings to keep up with customers' evolution within them," Sharp added. "It is extremely tedious for these larger deployments to ever lift and shift."
Beyond the size and the number of stories, telling apart the older facilities from the new ones is easy: Just look for windows. “It's funny how that topic has evolved,” Becher said. “In the beginning, we wanted to make the building’s appearance comfortable for our customers - concrete so cars don't get in, and no windows so people don't get in.
“What we're seeing now, and not just in Virginia, is that [local governments] are pushing back on the appearance of the buildings, they don't want these blank wall buildings. So now they have glass on the outside, but you still can’t get in.”
The facility DCD toured, 'Building L' at Round Table Plaza, is built to the company's
newest designs, several generations on from the older concrete Virginia sites.
At over one million square feet, it is a huge data center in its own right. "When we were building 'Building L,' one of the crazy things we considered was a moving walkway," Becher admitted. "It's so long that we went to a manufacturer to understand how they work, but ultimately never put it in."
We had to rely on our legs to tour the site, which has a utility power capacity of 120MW. The company's largest in the state, the building welcomed its first customer in October 2017.
"We put our last customer in that building in December of 2020," Becher said proudly. "I think we sold the building faster than we built it."
Developing a single building of that size was a learning moment for the company. “When it launched, we only had one freight elevator, and then we added another one because it was so busy with everybody moving in,” Becher said. “Now any building that gets close to that size will get two freight elevators.”
With Digital Dulles, its huge planned campus adjacent to the airport, Digital Realty is planning smaller individual buildings, but a larger overall footprint. “We have 14 buildings planned on Digital Dulles,” Becher said. “The biggest buildings are larger than half the size of Building L.”
The company hopes that the facilities will represent the next stage of Digital’s life in the data center capital. It also may be the company’s last major land deal in the vicinity. “Believe me, these large blocks of land - there are no more left,” Sharp said. “And, at the size we operate, it was getting upwards of a million to a million-and-a-half per acre inside of this area,” he added, noting that with Digital Dulles they were able to get the land for “around half of that.”
That fundamental land limit will eventually mean the end of the unperturbed growth in Loudoun (delayed somewhat by Dominion’s power issues, which will impact Digital Dulles to an unknown extent). But it doesn’t mean the end of Loudoun as a data center hub - far from it, Sharp argued.
“The largest cloud availability zones on Earth are in this market,” he said. “And what we're seeing now is the adjunct workload that is going to drive more demand over some period of time is artificial intelligence. We see this as being the next epicenter of artificial intelligence, because of the fact that some of the largest data oceans on Earth exist in this market. And you want to do analytics against that.”
Cloud providers don’t want to have fiber repeaters within the same availability zone, so that means operating within 2-12km of another data center to have a “contiguous” or parent/child setup. Sharp envisions high-density data centers for AI workloads close to facilities working on analytics, storage, and other tasks, all operating under a single availability zone.
That requires a lot of interconnection. “Once you build an epicenter, there's a lot of value both in how it's interconnected, and just the efficiencies from the amalgamation of infrastructure and the matchup of customers,” Sharp said.
With Digital Dulles, the meet-me-rooms are much larger, on a percentage basis, than at its existing sites, “because of the amount of physical fibers and conduits required to run it,” Sharp said, with those fibers connecting to its data centers as well as those of its rivals.
“ServiceFabric is absolutely everything,” he said, referencing the company’s global service orchestration platform. “Because we are open, and so we don't care if your workloads are in another competitor's data center or ours.”
Given the desire for every major company to have an IT presence in Virginia, and the preponderance of potential customers, it is unlike any other market. “There's just some uniqueness with other competitors, it’s more like coopetition,” Sharp said. “Quite frankly, there's more demand than we could ever meet.
“And we don't see that slowing down anytime soon.”
•50% + savings in capital cost vs chilled water systems
•Lightweight and compact footprint, manufactured from Stainless Steel and Composite materials
•Approaching 2000 units installed throughout the world cooling over 500MW of Data Centre space
•Rapid deployment allowing simple installation and set up
•Minimal maintenance and water usage
•Low connected power, reducing the size for generators and power distribution
+44 (0)1527 492750
info@excool.com
www.excool.com
Virginia remains the main hub of data centers worldwide. But while the area looks increasingly full, new companies are still popping up looking to serve the hyperscalers’ insatiable demand for new capacity.
Two new debutantes, PowerHouse and CorScale, are set to launch new hyperscale facilities in Virginia in the near future.
Both are newly-founded by large, wellestablished real estate firms. Both are backed by large investment firms. And both are set to develop large amounts of new capacity dedicated to serving the big cloud providers.
DCD visited both companies’ maiden data center development sites – PowerHouse in Loudoun County’s Ashburn and CorScale’s outside Gainesville in Prince William County – and spoke to company executives about being the new kids on the block.
In the heart of Ashburn, PowerHouse is
opening its account with a sizeable data center project. The company, founded by American Real Estate Partners (AREP) and backed by investment firm Harrison Street, is seeking to start with an 80MW facility.
“Like a lot of other companies, it was Covid that really pushed it,” says Luke Kipfer, vice president of data center development and construction at AREP/PowerHouse Data Centers. “AREP's portfolio was heavily in class A office space, so we've been actively diversifying. And our partnership with Harrison Street lets us go all in on the data center model.”
The company is planning developments on three sites across Northern Virginia. Six buildings are currently in planning or underway, totaling 2.1 million sq ft (195,100 sqm) and up to 338MW of critical power.
Sat on 10 acres and currently under construction, ABX-1 is set to be the first building completed. It will comprise one two-story 265,000 sq ft (25,000 sqm) building. Located at 21529 Beaumeade Circle in Ashburn, the first 15MW will launch in 2023; at full build-out at the end of the year, the facility will offer 45MW across six data halls. The site, which DCD visited in November 2022, is reportedly expandable up to 80MW.
Harrison Street and AREP acquired the site for $21.5 million in January 2021. The land, formerly home to a retail strip mall-type facility, was previously owned by Chirisa Investments, which bought the site in 2018 and had planned to build a 280,000 sq ft (26,000 sqm), 30MW data center.
DPR is the construction partner with PowerHouse on the facility. And despite being in one of the areas of Ashburn most affected by Dominion’s surprise capacity issues, Kipfer suggests the site will have enough power for the launch of the first phase, and that Dominion should have the issues sorted by the time the facility reaches full capacity around 2026. An onsite substation is being developed on part of the project which will serve both ABX-1 and surrounding facilities.
On Sterling’s Pacific Boulevard, PowerHouse Pacific will comprise three three-story buildings totaling 24 data
halls across 1.1 million sq ft (102,200 sqm). Construction is due to start in 2024 and complete in 2024/2025; the site will offer 265MW.
The Pacific campus was previously part of the former AOL headquarters and most recently used by Yahoo!. New parent Oath sold the 43.3-acre property in December for $136 million.
And, finally, the company is developing a 23-acre site on Arcola Boulevard in Arcola. PowerHouse Arcola will comprise two two-story buildings and 12 data halls across 614,300 sq ft (57,070 sqm). The buildings will offer 54MW and 66MW. Construction is due to begin in April 2024 and end in 2025, with the second building finished in 2026.
Kipfer said the sites aren’t yet pre-leased, but there is “strong interest” from clients and the company expects them to be leased soon.
Before joining AREP, Kipfer was regional director at Direct Line Global and director of construction at Markley Group in the Boston area.
“I came out here just to be more involved in some of these larger-scale projects,” he says.
“In Boston, it was more Edge deployments, smaller enterprise, higher ed, pharmaceutical. So they had some real requirements, but they were all a one-off build. Even though the numbers and scale is different, the projects here are almost less complex. With a lot of hyperscale builds, it's the same widget 100 times over.”
Kipfer said PowerHouse is an AREP-owned entity, with the projects operated in a joint-venture model with Harrison Street.
While Harrison Street is well-versed in data center investments, AREP doesn’t have a long tradition of developing data centers, and it’s flexible in what it will deliver.
“While AREP's new to data centers, they've been a developer in this area for 20 years,” says Kipfer. “We've got very deep contacts with brokers, with local utilities, all the players here that you have to know to make things happen. We're able to get a lot of sites before they come publicly to market, so we're able to get good sites and identify power.”
“We're not limited to one development model. We’re open to just about everything right now,” he adds.
“We’re developing relationships with hyperscalers and understanding their needs; as long as it's something that has market viability, it's something that we're interested in. These guys are just building so quickly, we're basically giving them a
head start in their deployments.”
Prior to setting up PowerHouse, AREP, alongside hedge fund Davidson Kempner Capital Management, acquired what is now known as Quantum Park from Verizon in 2016 for a reported $212 million.
The park is the former UUNet/MCI Worldcom site that was a key connectivity hub dating to the early days of the Internet; Verizon had an existing data center there and Aligned has since developed on the site. Last year, DigitalBridge-backed Landmark Dividend acquired a portfolio of assets in the Quantum Park campus.
Harrison Street has a number of data centers under its ownership. January 2021 saw the company acquire the Pittock Block carrier hotel in Portland, Oregon, for $326 million alongside 1547 CSR, and in November 2021 it acquired CIM Group’s stake in four US data centers co-owned with 1547.
Harrison and 1547 have completed a number of data center deals together; the two companies previously acquired the Wells Building carrier hotel in Milwaukee, Wisconsin, for $7.25 million in 2020.
“Harrison are a knowledgeable data center firm, they've got a lot of experience, and the team is great to work with; they know the market,” says Kipfer, who adds the company is looking to expand out of Northern Virginia with Harrison’s help.
On whether hyperscalers will continue to need outside developers, he says the combination of having the right sites and development teams, alongside the need for flexible capital deployment, will mean there is always demand for them.
“There's always going to be a certain need there. There seem to be a lot fewer individuals in the hyperscale enterprise teams that are really good at site development; a lot of them are focused on the nuts and bolts of a data center; the UPS, network methodology, things like that,” he says.
“Certain users have different priorities in terms of deployment of capital; some would rather lease it out and spread it out, and some want to own.
"And that's where we've been flexible; we'll do long-term leases, we'll do leases with option to buy, we're able to work with a lot of different users’ of needs for how they are deploying their infrastructure.”
Like PowerHouse, CorScale is backed by a large and established real estate firm, this time in the shape of Patrinely Group. And it has the financial backing of real estate investment firm USAA Real Estate.
While not big names in the data center space, the two companies have the portfolio and capital to operate at scale, and have experience in the industry. Nic Bustamante, senior vice president at Corscale/Patrinely Group, tells DCD that the two companies have a history working together and developing enterprise data centers – usually in the 1-5MW range, but in some cases up to 20MW – as part of office developments. One of the most recent examples is HPE’s new HQ in Houston, Texas.
“The company [Patrinely] had this experience building these smaller data centers, and for around five years considered getting into the hyperscale space. It is
heavily diversified in real estate assets, and data centers are one of those asset classes that was interesting to them.”
Like PowerHouse, CorScale isn’t starting small. The company’s debut project is a 300MW campus in Gainesville, Prince William County. The campus, known as Gainesville Crossing, is located close to Manassas Battlefield and looks set to be joined by QTS and Compass once the two companies start development on the PW Gateway project.
After breaking ground in early 2022, the first 72MW phase is expected to come online around Q4 2022 and will consist of a single two-story building with eight data halls and office space totaling approximately 483,000 square feet (44,900 sqm). At full build-out, the 130-acre campus will comprise 2.3 million sq ft (213,700 sqm) over five two-story buildings and 306MW of utility power. A Dominion Energy on-site substation will deliver power.
The site was bought for $74.5 million from Buchanan Partners in August 2020, by real estate firm Wolff Co, on behalf of Patrinely.
“We didn't have to spend a ton of time and resources putting together the utility plan for that project, it was already relatively de-risked,” says Bustamante. “We have direct access to connectivity adjacent to the site, and the power plan for that area with Dominion is already really well vetted, so line of sight to power was really great at that site. It is closer to Loudoun and the core network market in Ashburn than something further south, and we thought it was kind of a nobrainer.”
Discussions with an end-user are ongoing, and the company is close to agreeing on a pre-lease for the first building. While CorScale and Bustamante declined to name a customer, DCD understands AWS is in talks to be their first customer.
Bustamante said the closing of the site acquisition occurred around a month before he joined the company. He previously worked on data centers at Apple, Google, Microsoft, and Rackspace; and has been joined at Patrinely by Stuart Levinsky, formerly of Iron Mountain Data Centers, Switch, CyrusOne, and Cincinnati Bell.
“I got to be CorScale employee number one, so I could handpick our team,” he says on what attracted him to the company. “It requires very little effort to get capital and get committee approval to deploy that capital, and I think that's a significant part of the execution; the visibility to property on a global basis, through our partner at USAA.”
Prior to its official launch, Corscale’s existence was revealed in a press release that said the nascent data center firm was signed up to use Ledger7860's carbon accounting package. Bustamante said the focus on green credentials is one of the ways the company is hoping to differentiate itself from other hyperscale-focused developers in the space and area.
“These guys are also very comfortable with and prefer Green development. So they've already had a number of LEED Platinum projects. I see a lot of hyperscale developers who prefer not to do that; the traditional hyperscale data centers don't seek LEED Platinum, Net Zero type of approach.”
Another differentiator, according to Bustamante, is that the company’s ability to deliver complex projects quickly makes it a desirable partner.
“A lot of developers will say ‘I need perfectly
level, square, rectangular properties.’ Our ability to execute on complexity was already pretty evident with the prior relationships.”
As well as the Gainesville Crossing site, the company is planning a second Virginia development in the Kincora area of Sterling in Loudoun County. There, on a 22-acre site, it is planning a single 500,000 sq ft (46,450 sqm) building, consisting of eight 9MW data halls across three stories. Other projects are planned in California and outside London in the UK.
On the potential capacity issues Dominion has in Loudoun’s Ashburn, CorScale is confident it won’t be affected, thanks to its building timeline.
“We weren't going to start construction until 2024 anyway, so we think that our initial connection date and service date may not end up being impacted depending on how they perform fixing things in Loudoun,” explains Bustamante. “You can expect us to do more in Virginia. I think we're pretty well positioned in that market to continue development there.”
Going forward, CorScale aims to stay focused on greenfield development with some brownfield sites that make sense. On acquiring existing facilities, the company is less keen.
“We've looked at a few acquisitions and generally said no because they look more like repositions or they're relatively distressed and not of the right scale,” says Bustamante. “We see a lot of capacity come on the market that isn’t bad, but it is not at the magnitude that we like. We can't take a 20MW data center and turn it into a 50MW data center too easily.”
While Loudoun is and will remain the epicenter of the world’s data center industry, things are changing. Quantum Loophole is looking to bring millions of square feet of new data center space to Maryland, while Prince William County is seeing huge amounts of development in a short space of time. But as the geography of the industry changes, the way facilities look and operate will likely have to change too.
CorScale’s Gainesville site gaining planning permission, combined with the roll-out of new transmission lines, was among the driving reasons behind local landowner Mary Ann Ghadban gathering other locals together to sell the initial 800 acres that make up part of the new PW Digital Gateway project.
Bustamante tells DCD the fact QTS and Compass are investing so heavily in land adjacent to CorScale is ‘validation’ of their idea of building outside of the area’s traditional data center hubs. However, he says the sheer magnitude of those projects means development needs to be very carefully considered to ensure it meets energy and sustainability requirements that locals are happy with.
“I fear data center development that is unchecked. Data centers need to be comfortable coexisting wherever they go, in any market. We're successful because we work hand in hand with the municipality and user groups; and we shouldn't find ourselves at odds with the local community.
“What I see in the Gateway Project is a whole lot of people concerned about those things and more, but just the sheer magnitude and scale of it. Hopefully, the
green spaces that they put in will separate them from those things, but development needs to be very considerate not just to the battlefield, but to the locals who are very concerned about it.”
In Loudoun and Ashburn, both Bustamante and Kipfer can foresee data centers getting taller, but also more aesthetically considered. Two and three stories are becoming more common, and DCD has heard rumors of an application for a five-story facility being submitted.
Over the summer Loudoun officials proposed new zoning rules for data centers that would also set out new environmental standards for building design and noise.
“A typical data center that is a big grey wall-to-wall box? I think that's pretty passé. And those days are probably behind us, particularly in Loudoun, and I see that becoming table stakes in PWC and other markets,” says Bustamante. “Those operators that design data centers that have long mechanical gantries and generators spread out across hundreds of acres, I think those users are going to have more problems obtaining consent to develop, and they're also going to continue to be viewed negatively.
“I think you've got to build a product that is Class A. People are going to be held to a higher standard [going forward], and I think that's only fair given the capital that's coming in.”
Kipfer agrees. “Ashburn is never going to go away,” he adds. “This is a desirable place to be in terms of connectivity and power. But we want to keep building here, and if we don't keep a good relationship with the county and the neighbors, we know it's going to get turned off.”
“The hyperscalers are just building so quickly, we're basically giving them a head start in their deployments”
Photography by Sebastian Moss
NTT is here
Sebastian Moss, Editor-in-Chief
There’s a throughline to my conversation with Don Schopp, as we toured NTT Global Data Centers' VA3 facility in Ashburn, Virginia: NTT GDC is ready to play in the big leagues.
Schopp joined RagingWire as a senior national account manager way back in 2012. “We were a family-owned privately held company, in just one market with two buildings,” he recalled. A year later, the Japanese telecoms giant NTT took a large stake in the company, but remained mostly hands off.
“We were only so relevant,” Schopp said. By 2017, NTT said it would buy the whole business, but its involvement was still limited. At the same time, it acquired India-based NetMagic, UK-based Gyron, Europe's e-shelter, and South Africa-based Dimension Data, while NTT built out its own data centers in Malaysia, Japan, and elsewhere.
It took until 2019 for NTT to mix the different companies together, under the brand name of NTT GDC, and still a few more years for the disparate businesses to work together under one corporate culture and a primary design template.
“We've arrived,” Schopp, now VP of strategic growth & channel sales, said several times as we walked through the facility. “We're at over a gigawatt across our global portfolio. I don't know how many people are really today running a data center business at that scale.”
There are obviously two competitors that blow that number out of the water - Equinix and Digital Realty.
"When you look at NTT globally, we're getting to that level," Schopp argued. "We're in 30 different countries, and we're growing... we're looking at Nairobi, Cape Town. Warsaw, Milan, Ho Chi Mihn, Singapore, and more."
The company has its roots in Asia, where it has a huge presence, but "we're now entering the main stage in the Americas," he said.
"What our construction plans in the United States are right now over the next two years will double what we currently have under operations today," he said. "It took all of that time to get to this point. And the next two years will beat that."
Currently, its 'Americas' business just means the United States, "but expect the Americas to include Canada and South America," Schopp said. “And then there are second-tier markets like Nashville, Austin, Charlotte, Miami, Montreal, Toronto, Salt Lake, and Denver that we are looking at.”
Crucially, while its ambitions are still dwarfed by the industry's two giants, Schopp noted that "NTT is way bigger than them, bigger than IBM, bigger than Bank of America, bigger than Cisco. It's a huge company, but 80 percent of its business is in Japan, and we're part of the global business."
Being tied to a wider business can have its positives and negatives. It could mean being tied to a slower business, and lost in the shuffle of more profitable ventures, but Schopp argues that it means access to a global customer base and lots of money.
One of the markets NTT GDC is looking to expand its presence in is Africa, first with a presence in South Africa, before expanding north.
We caught up with Michael Abendanon, head of NTT GDC MEA, soon after the launch of its Johannesburg data center.
Like Schopp, Abendanon is a transfer from an NTT acquisition - Dimension Data, which operates nearly a dozen data centers on the continent.
But NTT has kept that subsidiary somewhat separate, deciding not to merge the older and smaller data centers into the GDC brand. "Those data centers have been around for a long time," Abendanon said.
"Things have moved on. Ultimately, Dimension Data won't be building any more data centers going forward, they will be focusing on taking up space in GDC facilities," he said, with the company bringing over customers to the new facilities.
That first new data center opened this October, with Johannesburg 1 Data Center providing 12MW of capacity across 6,000 sqm (64,600 sq ft) once fully built out (it is currently half that).
"We're looking to expand in Johannesburg beyond that facility," Abendanon said. "We are feverishly putting plans together as to the expansion journey there and in Cape Town."
Beyond South Africa, the company is looking to East Africa in the short term. "And the natural location for East Africa would be Kenya," he said.
The opportunity is vast, he argues: "Africa typically lags first world economies by four to seven years, and if you have a look at the data center penetration in Africa, comparative to the rest of the world, it's got less than one percent of the data center space versus the rest of the world.
“There is definitely lots of room for growth in Africa, considering the population, considering what's happening with Internet penetration. One might say ‘guys are you not over-investing in Africa?’ Our view is that it is only the tip of the iceberg in terms of demand.”
“What you'll see out of us over the next few years is billions and billions poured into the US and globally to construct data centers,” Schopp said.
That has required a maturation of its business proposition, as it pursues larger customers like the hyperscalers.
“We're building and attracting more clients who want powered shell or want to take down an entire server room or an entire building,” he said. “We even build to suit for them, and that's not necessarily what we were known for, but we're now competing against the heavyweights.”
The shift is part of the company’s evolution, Schopp said. “We came into this market as a colo company, and started moving upstream to bigger single tenants, building data centers that are designed for those clients.”
Currently, hyperscalers are “small potatoes” as a percentage of NTT GDC’s customer base, but it hopes to increase their presence in the years to come. “But we'll never get away from our heritage, because we'll offer colocation down to the single cabinet,” Schopp said. “Not in every building and every market around the world, but where it makes sense.”
It’s also had to improve on how it builds
data centers with a standardized design, and how it handles its supply chain. “I would say that we've gotten a lot better than what we were as RagingWire,” Schopp said.
“Part of that is to admit that we had to improve, and then the other part was bringing people in to do something about it. Brittany Miller [previously of Microsoft] leads our construction, and a lot of people from Facebook and Google are now part of the NTT family.”
The other part is its vendor-managed inventory. “On our newer sites we have a standard build, and we can ship the same products to different data centers,” Schopp said. “Before, we were kind of a wannabe on pre-builds and supply chains,” he admitted.
“Companies like Compass were out in front of us, and they paved the way that we follow. Now we’re doing that too, and I think that's also helped us secure more of those large-scale deals.
"The people on the other end of the table would ask the same question that you were - 'how do I know you're predictable?' Well, we have the capital source, the design, and we have this vendor managed just-in-time inventory.”
The facility we toured was mid-way through NTT GDC’s transformation,
representing 16MW of critical IT load out of its 224MW Ashburn data center campus. With 112,000 square feet (10,400 sq m) of data floor space, it is larger than its new standard design that spans 21,000 square feet and 6MW.
Uniquely, this site also has a huge amount of space given over to a large open staircase, conference-like center, and offices.
"We thought it would be nice to have people in here, a conference room, a NOC, and all that," Schopp said. "That's really changed in this era."
That space isn't used as much as it was pre-pandemic, but it still isn't worth knocking it down for more data hall space, Schopp said.
With Loudoun's space at a premium and hard to find, NTT GDC is looking to Prince William County as its next Virginia buildout.
This June, the company said that it had purchased nearly 104 acres in the county to develop a 336MW data center campus. A month later, utility Dominion said that it wouldn't be able to provide power to new builds in Loudoun.
"People were like 'what did you know?' but we were just a little bit fortunate in that," Schopp said. "We tried many times to get land and were unsuccessful and then we finally got land."
While the south side of the Potomac river is home to the world’s highest concentration of data centers, Maryland and Frederick County to the north are largely virgin space.
But as land becomes increasingly scarce and expensive, a new company is looking to turn Maryland into the next great data center market outside Washington DC, with a data center park offering for hyperscalers and wholesalers.
Led by former Terremark and CyrusOne executive Josh Snowhorn, Quantum Loophole has partnered with TPG Real Estate Partners (TREP) and is developing a 2,100-acre, gigawatt-scale data center park in Maryland’s Frederick County.
The world’s first multi-tenant gigawatt data center campus looks to change the local geography of the data center world
Located some 25 miles north of Ashburn in Adamstown, the campus is centered around the former Alcoa Eastalco Works aluminum smelting plant. The land includes the plot on which the now-demolished metals plant stood, as well as a number of surrounding greenfield plots currently used to grow animal feed, as well as a manor house from the 1800s.
“We provide land, energy, water, and fiber services at an unprecedented scale,” says Quantum Loophole CEO Josh Snowhorn, who has pitched the project as “the wholesaler to wholesalers.”
Work has begun on the site, with groundworks on supporting infrastructure - power distribution and underground power ducts, water and sewer pump stations and piping, and underground fiber distribution ducts - beginning in July 2022. Given its previous use, most of the land is already zoned for industrial uses.
Under general contractor STO Mission Critical, the site is due to go live with power, fiber, and water infrastructure in late 2023 or early 2024. Groundworks for the first data center have already begun, so the first facilities will likely be ready around the same time.
According to Quantum Loophole director of operations
Chris Quesada, the company will be offering different ownership models depending on customer needs: the land will be available to buy outright, with Quantum owning the supporting fiber, water, and power infrastructure; available on a long-term lease on which customers can build their own facility; or customers can ask Quantum to build the facility in more of a powered shell-type arrangement. The latter is reportedly the company’s least favored option due to the upfront capital requirements, but one it will happily do if required. Quantum has previously said it expects to deploy individual data center modules of 30-120MW capacity in less than nine months.
A key part of the project is the company’s QLoop network; a 43-mile fiber conduit system able to hold more than 200,000 strands of fiber running from the campus
to Leesburg in Virginia’s Loudoun County and back, running under the Potomac in two places as well as the Monocacy river on the journey. The infrastructure build out will include two on-site network centers.
Quantum raised $13 million in seed funding in 2021 before TREP, the real estate equity investment platform of asset firm TPG, invested. The size of TREP’s investment in Quantum hasn’t been disclosed, but it has around a 20 percent stake in the company.
While its investment portfolio includes stakes in the likes of Airbnb and Dropbox, TPG isn’t known for its dedicated data center investments; its previous telco/communications investments include Alltel, later acquired by AT&T, and Astound Broadband. However, TPG previously invested in the department store chain Neiman Marcus, which was founded by members of Snowhorn’s family.
“We had bidders hunting us down and competing to invest in us,” says Snowhorn. “TREP are wonderful partners with a deep understanding of the data center sector.
“They looked at the assets that were for sale; Switch and CyrusOne and lots of other folks out there, but I think that they looked at it as a world that was starting to commoditize itself a little bit. But they looked at us as a business, as a wholesaler to the wholesalers, that is unique and less at risk of being commoditized. We're going to be the single greatest return they've ever seen.”
When asked which came first, the site as an opportunity or the concept of a gigawatt wholesaler, QL tells DCD the company and concept predated the interest in the property.
“I always felt that there was something missing that hadn't really pushed the limits
of scale and building an Internet city-scale ecosystem that would support mass-scale interconnection and commonality, something like a master-planned community,” says Snowhorn.
The site was acquired for around $100 million; at around $48,000 an acre, that’s significantly less than the $3 million per acre paid in parts of Virginia for prime data center land.
In May 2022, Aligned Data Centers became the first company to publicly announce plans to develop a data center at the campus. Andrew Schaap, CEO of Aligned Data Centers, said at the time that the “attractive tax exemptions, power availability, and proximity to Northern Virginia” were key drivers in its decision to choose Frederick County.
On its website, Aligned says its Maryland plot spans 75 acres, where the company is planning a total of 1.3 million sq ft (120,800 sqm) and 192MW of capacity
across four multi-story buildings, each spanning 325,000 sq ft (30,200 sqm) and 48MW per facility. However, Aligned reached out to us to say it's planning a total of 3.3 million sq ft (306,600 sqm) and 264MW.
Aligned has already begun work on the site. The company’s plot is located in the center of the Quantum Loophole campus, with a small number of Aligned-affiliated staff on-site as DCD was given the tour.
Snowhorn previously said the company has signed contracts with four different entities totaling more than 240MW for the Frederick site, representing the first phase of power available to the site. It is unclear which other companies have leased space at the site, or the scale of each project, but
Snowhorn makes reference to ‘multiple hyperscalers’ and government customers.
While some of the ‘smaller’ plots measure more than 70 acres, operations director Quesada told us that most interested parties are looking at developing multi-building mini-campuses within the wider Quantum Loophole park averaging around 200 acres or more.
“One of the hyperscalers is engaged with us for 300 acres and around 800MW of power across eight buildings; that will be two parcels of four buildings at two stories tall and 100MW each, and they'll interconnect those and use those as two distinct availability zones,” explains Snowhorn. “Another client is taking 50 to 65 acres and they're going to put up anywhere from three to four two-story buildings in a denser environment.”
While the site is part of the same PJM marketplace as Virginia’s Dominion Energy, FirstEnergy is the transmission provider. 230-kilovolt transmission lines already run through the site and a new 230-kilovolt on-site substation is scheduled to be built by FirstEnergy subsidiary Potomac Edison.
“My biggest worry is that 2.4GW is not enough power; what happens when we go to liquid cooling, and other things that densify that same environment? All of a sudden you could have a four to 10× level of power load demand requested on the campus, so it's going to be very interesting to see how well we can accommodate the future densities.”
The campus will also reportedly include a “battery farm” to offer large-scale energy storage.
“We're thinking about a 100-hour level battery farm, so very long duration to accommodate any variability of outage,” says Snowhorn, “but also give us the ability to potentially store energy and to play some arbitrage in the marketplace like peak shaving.”
While the bulk of the Frederick site is either former smelting plant or farmland, the land includes a little piece of American history.
The site is home to Carrollton Manor (also known as Tuscarora), previously owned by Charles Carroll, one of the signatories of the US Declaration of Independence and at the time the wealthiest man in America.
The land was gifted to him by his father, Charles Carroll of Annapolis. The Carrolls were wealthy landowners, with the estate once measuring some 17,000 acres. Built around 1820, the three-story, 21-bedroom limestone house was never officially lived in by Carroll, who merely visited for various periods. It was, however, home to several of Carroll Junior’s daughters and granddaughters.
Later tenants at one point used the manor house as a farmhouse, with turkeys raised in a bedroom, hogs in the basement, and cured hams dripped from the attic. While most of the land was sold off piece by piece, mostly to farmers or food canning factories, the manor house and some 2,000 acres were bought by the Baker family in the 1920s and then the Renns in the late 1940s.
The Renns sold the site to Eastalco the following decade, which kept the manor as a guest house and meeting center. The manor house was listed on the National Register of Historic Places in 1997.
The smelting site opened in 1970 as a plant for the French and Japanese partnership Howmet/Pechiney. The plant was bought by Alumax in 1983, which itself was acquired by Alcoa in 1998.
The smelting plant, which included its own railway line, took up around 340 acres of the property, with most of the rest leased to a local farmer. At its peak, it employed more than 800 people, but the high cost of power and cheaper imports made the plant unprofitable; the smelter closed in the mid-2000s and was finally demolished around 2017.
“There's a lot of history around that property. We were probably the 100th party to engage with Alcoa to try and buy it,” says Quantum’s CEO Josh Snowhorn. “They were very careful about who they wanted to sell it to because they wanted people to be good stewards of the land. I think a lot of other folks probably had their eyeballs on this site, and we were just able to take a risk and execute very quickly.”
Quantum Loophole tells DCD that, given the manor's listed status, the company will be keeping it for historical preservation.
“It'll be part of the property owners association, and they'll be able to utilize that for community meetings and events. It's quite a beautiful area, and we're able to maintain that along with large data centers being within it. I think I'm going to be quite proud of the end result.”
With land prices soaring, power capacity limited in some areas, and growing opposition from residents, building data centers in Northern Virginia isn’t as simple as it once was.
Some developers are heading further south. Amazon is looking to expand into Culpeper and Fauquier Counties, though has faced pushback from residents opposed to such developments. Many are looking to Prince William County, which already has a sizeable market, but faces continued opposition from local groups despite government support.
Maryland, however, is largely devoid of data centers, with Frederick surprisingly lacking facilities despite its immediate proximity to Loudoun. During the tour, Quesada tells us he thinks there is at most a handful of other small data centers in the whole county belonging to local government and/or telcos serving the area. Colocation and wholesale facilities in the area seem non-existent.
The company reportedly “looked hard” to find a suitable plot in Virginia but found itself competing with a lot of the companies it was hoping to court as customers. QL also found many of the Virginian counties outside of Loudoun and Prince William were either too far away from the centerpoint of Ashburn, lacking the required infrastructure, or not particularly welcoming to data centers.
Local government in Maryland is on side. After watching Virginia, and particularly Loudoun and Fairfax Counties, prosper from increased tax earnings from data centers, the state of Maryland and Frederick County are looking to attract investment.
2020 saw Maryland introduce new tax breaks that will see data centers exempt from certain sales and property taxes if they meet the required investment and jobs thresholds.
“When we were looking, we always discounted Maryland because there was no tax legislation offering tax benefits like you would see in Virginia counties, particularly Prince William and Loudoun,” says Snowhorn. “That all changed in the summer of 2020, legislation was passed that provided tax incentives in Maryland, and that was everything.”
Frederick has also made changes in local regulations to allow for quicker and more streamlined development processes after losing what could have been a second major project.
AWS had hoped to develop a number of data centers in the county, but pulled out after local officials said the county couldn’t meet the cloud company’s aggressive timelines. This was partly because zoning changes were required, and such amendments couldn't be made on the eve of county elections. Known as Project Holiday, the development was reportedly planned for the west side of I-270 near Sugarloaf Mountain.
In response to losing AWS, the county has since passed an amendment to zoning laws that lists critical data infrastructure, such as data centers, as a permitted use under Frederick County zoning laws and would allow them in industrial-zoned plots.
“The government itself locally in Frederick County, they love it,” says Snowhorn. “Because of those tax benefits and low impact on the things that they have to worry about funding. They're very excited to bring that in as well as the additional benefit of the massive fiber backbone we're bringing.”
Many data center workers live in Maryland and commute into Virginia. Quesada noted that the project has attracted attention from potential employees looking to cut their commute times.
Local residents can be the hardest to win over. We’ve seen elsewhere in Virginia that failure to do so can result in protracted battles and sometimes canceled projects. A recent local press report said Quantum executives answered questions from Adamstown residents for more than two hours during a town hall meeting.
“There were farmers happy to see Alcoa go away,” says Snowhorn. “And we had to alleviate concerns within the community about the scale of what we're doing.
“It's millions of square feet, and we had to make them understand the tax benefits, that the demand on housing, schools, and other community infrastructures can be quite nominal.
"While initially there'll be a lot of traffic with construction, once these data centers are up and running, they'll hardly see a car on the road. Getting them comfortable with that was important.”
Despite the potential to change the geography of data centers in the area,
Snowhorn is confident his company will be the main ‘land & services’ provider in town.
“I very much doubt you'll see a dramatic amount going to Montgomery County. In Frederick, whatever ecosystems develop out will simply be a supporter with us at the heart of that.”
It seems issues in Loudoun could help push more operators north into Maryland. Quesada notes that the project was already attracting interest from potential customers with operations or interest in Virginia, but Dominion’s surprise announcement about a capacity crunch in Loudoun County has accelerated that interest. DCD’s tour of the site was the first of many that day from interested parties.
“There's very much a stretching of the marketplace; there’s almost no land left in the center of Loudoun County at all, it looks like Silicon Valley in that respect,” says Snowhorn. “The power crunch has certainly been eye-opening for a lot of folks. The growth out to Manassas or up to Maryland, it was happening anyway; that simply accelerated with that crunch happening because people were forced to very rapidly make decisions to come up to us and to go to Manassas.”
While Maryland is currently the company’s only project, Snowhorn has previously said that Quantum Loophole is laying the groundwork for similar “data center cities” to serve north, south, and western US.
Despite asking for further details, the company isn’t ready to share any more information about its wider ambitions.
“In some markets, we're trying to assemble up to 10,000 acres,” Snowhorn tells DCD.
“The growth areas that we're focused on, going outside of the Virginia area, will be Chicago, California, and potentially the Dallas markets.”
The company is reportedly in the process of working through details such as energy studies and fiber right of ways, legislation, tax rules, etc.
“We're building the biggest campuses in the world, so a lot of work has to go in place to make that happen. But you'll see some announcements hopefully next year.”
While we’re yet to hear of a second gigawatt campus, in the time since Quantum Loophole announced its plans there have been a number of very large projects measuring more than 400MW.
South of Loudoun, QTS and Compass are both part of the PW Gateway project that could see more than 25 million sq ft (2.3 million sq m) of data centers developed in Prince William County; the 2,000+ acre development could reportedly support up to 1GW of capacity. Switch Inc.’s Reno and Las Vegas Prime campuses each offer more than 400MW of capacity. Digital Realty’s Digital Dulles campus in Loudoun is expected to grow to a gigawatt (see page 16).
In the UK, developer Reef is looking to create a 175-hectare, 600MW data center campus in the east London borough of Havering. Campuses offering a hundred megawatts or more of capacity have gone from extremely rare to not uncommon in the space of a couple of years.
“Anybody can put a dot on the map and say I'm going to build a campus. That's happened many, many times,” says Snowhorn.
“What's important is that we prove that we are able to put our money where our mouth is and go and actually accomplish what we're doing in Maryland. So that's really our first step as a business. And that's happened now; the industry is incredibly confident in what we're doing now.”
Dan Swinhoe, News Editor
Since the 1990s, Northern Virginia has been the center of the data center world, with Loudoun County as the capital.
But that geography could be changing. 2022 has seen officials in neighboring Prince William County –already a sizeable data center market in itself – vote to replan 2,133 acres of the county's "rural crescent" for data centers, paving the way for up to 27.6 million square feet (2.56 million sqm) of data centers.
If fully built out, the PW Digital Gateway could more than double the county's existing data center footprint and see it overtake neighboring Loudoun County.
Some 18.5 million sq ft (1.71 million sqm) is set to be developed by just two companies – QTS and Compass Datacenters – over the next decade or so, potentially turning two already large players into the biggest data center operators in the state.
But the site, located along Pageland Lane between Manassas and Gainesville, is adjacent to Manassas National Battlefield in a historically rural area. Opposition to the project has been fierce, and is expected to continue.
Reports of a PW Digital Gateway surfaced in early 2021 after a group of landowners unveiled plans for a mammoth 800-acre data center development.
At the time, the proposal aimed to string together 30 parcels of agricultural land owned by 15 property owners along the county’s “rural crescent” to be developed by a single unnamed data center developer, later revealed to be QTS. The company hopes to develop 7.9 million sq ft (734,000 sqm) of data centers on 812 acres.
The project is being led by local landowner, land broker, and commercial real estate consultant Mary Ann Ghadban. Aged 68, she has lived on a plot in the center of the Gateway land for around 40 years with her family and horses.
“I didn't know anything about data centers until 2019,” she tells DCD. “I didn't know that our area was even in the running for data centers. Then I found out all the revenue that Loudoun County gets and all the world-class schools they have because of all the revenue.
“It was a lot of work. We had been working seven days a week, most days for the last two years. A lot of people kind of laughed at me and thought we were crazy.”
Ghadban said there was a combination of factors that led her down the road to create the Gateway proposal.
The first was the Dominion transmission lines that run through her land along the battlefield north to Loudoun, which were installed around 2008. The second was a quarry expansion nearby to the north in the last few years. And the third was a data center: CorScale’s Gainesville Crossing project, currently in development at the southern end of Pageland Lane.
The recently-launched data center platform of real estate firm Patrinely Group and USAA Real Estate, CorScale is developing a 300MW campus featuring five two-story buildings. The site was bought from Buchanan Partners for $74.5 million in August 2020; CorScale broke ground on the first building in February 2022 and the site is due to go live early next year (see page 22).
“I've been here forty years. And it's just been my dream home, we thought we'd stay here forever. But when Gainesville Crossing got approved in 2019, that's when we said, ‘we're stuck between a quarry and a massive data center,’” says Ghadban. “That was the final nail in the coffin.
“We've become an industrial quarter. There’s nothing rural about this area anymore. And when you put these transmission lines through here, it's been proven to destroy your property values and the ruralness,” she says as we stand under the towering lines, which are noticeably buzzing in the light rain.
“We had to take matters into our own hands. When can you find 194 neighbors agreeing on anything? It just so happened, instead of just saying let's build houses, we said let's build data centers.”
Ghadban claims the site makes more sense as a large ‘data center corridor’ than the county’s current Data Center Overlay District – an area where data centers are permitted and require less zoning and planning permission applications – given its self-contained nature and proximity to existing infrastructure.
“This is how you should have planned data centers to begin with. You've got the power lines here, you have the fiber, why wouldn't you put data centers where the power lines already exist?” she says.
“We'd been listening to what's going on over in Loudoun, and the attitude changed; data centers don't want 20 acres anymore. They want 100 acres, because they want a runway. Plus we were not selling at the prices of Loudoun County; more like a third of the price of Loudoun. We wanted to entice those data centers to come here, get a footprint in where they can be for years.”
While Ghadban has been the driving force behind pushing the Gateway project through, she and other landowners in the project had for years been staunch defenders of the area.
“There's never been peace on Pageland, never,” she tells DCD. “But the writing's on the wall, and we can't be here anymore. It doesn't matter how much you love it.”
Ghadban and another local landowner, Page Snyder, were key players in a years-long battle against a planned ‘Bi-County Parkway’ which would have taken acres of the Battlefield and connected I-66 in Prince William to Dulles International Airport in Loudoun. One profile piece from that time dubbed them ‘the Ladies of Pageland Lane.’
During the 1950s, Snyder’s mother Annie fought against the widening of local roads and a motocross speedway racetrack. The Snyders had previously fought against an amusement park, a large retail mall, and the proposed Disney theme park. Ghadban, however, was in favor of Disney.
Like Ghadban, Snyder has featured heavily in the press over the Gateway project. “We’ve spent our entire lives fighting one thing after another, it’s just gotten worse and worse,” Snyder, 71, told the Wall Street Journal. “Basically, we’ve just thrown in the towel.”
While the initial 800 acres was already a massive land offering in a constrained area, Ghadban claims that once the county saw the proposal, it sought to expand it further.
Another 1,300 acres of land were added to the proposal by more than 160 other local landowners looking to sell up – Ghadban said she wasn’t involved in discussions for those tracts – with around 800 acres set to be bought and developed by Compass. The company wants to build 10.52 million sq ft (977,350 sqm) of data centers by 2030.
The original Gateway landowners claim to have spoken to around 11 interested parties for the original 800 acres, before settling on QTS. Ghadban says she was firm that the land should be sold to an operator, not merely a developer.
“Because of my experience, I know what happens if you sell to a developer; they start squeezing the landowner and/or the county,” she said. “This is a very sensitive area, and we knew we had to go above and beyond on everything to get this off the ground.
“We turned down two major developers, who were very mad at us. I was looking for a data center user that could go through the school of hard knocks with us and take the heat, and do what we need. And QTS met all the criteria.”
The QTS and Compass projects combined cover around 1,630 acres and 18.42 million square feet of data centers. A letter from NOVEC in a previous county staff report suggests the project could total more than 1,000MW.
Aside from what is described as a ‘small parcel,’ the rest – around 500 acres that could potentially hold a further 9 million sq ft (836,100 sqm) of data centers under the plan – is set to become public parks and trails, or remain undeveloped.
The number of buildings the companies each plan to develop is unclear. Documents seen by DCD suggest that, between them, the two companies have around 10 sizeable plots to develop, each
large enough to hold multiple multi-story buildings.
DCD understands QTS is likely to start developments on the south end of Pageland closest to existing infrastructure and work its way north with future developments. The first buildings aren’t expected to begin development until 2024.
Despite repeated attempts, neither QTS nor Compass were willing to be interviewed by DCD for this or previous articles on the Gateway project.
Opposition to data centers, especially in rural areas not used to such developments, is not uncommon. But the scale of the Gateway project has unsurprisingly seen opposition on a scale rarely seen for such buildings.
Opponents of the projects argue against developments in the rural area, worried about the potential impact the rezoning could have on the nearby Manassas National Battlefield and other local historical sites, as well as noise pollution and impact to the local water table and rural nature of the area.
The Prince William County Historical Commission, Manassas Battlefield National Park, and American Battlefield Trust are all opposed to the development.
Environmental officials at Prince William County asked the board of supervisors to reject the proposal while Fairfax County officials submitted a letter to PWC officials requesting they rethink the proposals due to the potential impact to the Occoquan Watershed and drinking water in the area.
US Rep. Jennifer Wexton, a Democrat representing Virginia's 10th Congressional District, said the project could have a "significant negative impact" on the surrounding environment and community.
Even documentary filmmaker Ken Burns has spoken out against the plans, saying the proposals could have a “devastating impact” on the Manassas National Battlefield. The scale of opposition saw the proposals make it into mainstream news including WSJ and Reuters, a rare feat for data centers.
In February 2022, around 50 people from the Coalition to Protect PWC gathered outside QTS’ facilities in Manassas, with chants and signs saying “stay out of the rural crescent” and “save our sacred battlefield.” QTS officials did not acknowledge the rally, on the day or subsequently.
The selling landowners and prospective buyers claim the view-shed from both the battlefield and the neighboring Heritage Hunt community will be protected, as will the watershed. The group promises to make some 400 of the 2,100 acres public parkland, and also promises a 0.30 floor area ratio (FAR), less than that permitted in the Data Center Overlay District, which will result in more green space between facilities.
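Floor area ratio is simply total building floor area divided by the area of the land it sits on. As a rough check of our own - using the roughly 1,630 acres and 18.42 million sq ft cited earlier, not any figure supplied by the applicants - the promised cap appears to leave some headroom:
# Rough back-of-the-envelope check of the promised 0.30 floor area ratio (FAR),
# using the acreage and square footage figures cited earlier in this article
SQ_FT_PER_ACRE = 43_560
floor_area_sq_ft = 18_420_000  # combined QTS and Compass plans
land_acres = 1_630             # combined QTS and Compass acreage
far = floor_area_sq_ft / (land_acres * SQ_FT_PER_ACRE)
print(f"Implied FAR: {far:.2f}")  # roughly 0.26, under the promised 0.30 cap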
“There's all kinds of new parks and open space being created,” says Ghadban. “Without these 2,100 acres, that would never happen, because people would just subdivide their lots and it would still be private property with no open space.”
Ghadban also points to the financial benefits – estimated by the county to eventually reach $400.5 million in local tax revenue annually under current tax rates – as a way to improve local schools and services.
Like many others selling up, Ghadban aims to remain in the county. She says she is largely nonplussed about the idea of leaving the home she built, or any potential bad feeling from what she says is a small minority of local people (though she admits some opponents have been “rude” to her).
She does, however, direct her ire towards local supervisor Jeanine Lawson, who has historically been very pro data center with the exception of the Gateway project.
“Supervisor Lawson created this nightmare. She agreed to 3 million square feet of data center to come right next to the battlefield [with the CorScale project]. Stop making us out to be the bad guys.”
County officials voted in favor of changing the area's comprehensive plan in November 2022, providing by-right zoning for data centers on the land in question. Normally a formality, the meeting itself was filled with acrimony. Officials voted 5-2 in favor with no abstentions (although one recusal), in a marathon meeting lasting more than 13 hours.
The meeting began at 7.30pm local time, and continued until after 9am the next day. Local news reported more than 250 people registered to speak during the meeting. Some 40 people spoke remotely via Internet calls, which the meeting didn’t get to until around 5am and didn't finish until 8am local time. The board had to take a short recess a little after 1am because the audience repeatedly ignored Chair Ann Wheeler’s warnings to follow rules of decorum.
The result, while disappointing to some, wasn’t a surprise. The Planning Commission had previously voted 4-3-1 to recommend approval of the application, and most councilors had already stated their position on the matter in the weeks and months before the vote.
However, councilors bickered repeatedly during the actual vote over process and lack of collaboration on the proposals, with Chair Wheeler threatening another recess if things didn’t calm down.
Supervisor Victor Angry’s decision to try and put forward the motion to approve the GPA as soon as the board had heard from the public without further discussion proved particularly irksome to other supervisors. At one point Supervisor Lawson said the board's Democrat majority had treated the Republican minority like a "battered wife."
Chair Wheeler and Supervisors Angry, Bailey, Boddye, and Franklin voted in favor,
with Supervisors Lawson and Vega voting against. The supervisors could be heard over the microphones continuing to bicker after the results of the vote were announced.
“This is a bold plan and it will change the landscape of Prince William County,” Wheeler said before voting in favor.
While the comprehensive plan amendment set the wheels in motion, there are potentially still many battles to be fought.
Both QTS and Compass have filed separate rezoning applications, and will require planning permission to actually develop the facilities. DCD understands landsales are conditional on QTS and Compass getting zoning approval for their projects, meaning landowners will remain in their homes for months, if not years.
“Compass Datacenters is committed to being a good neighbor and working through the County’s zoning process to solicit input and feedback from stakeholders on our construction and operating plans,” Chris Curtis, SVP of Development and Acquisitions for Compass Datacenters, told DCD after the GPA vote.
QTS provided a similar statement: “QTS is pleased that the Prince William County Board of Supervisors recognizes the compelling economic and community benefits of the Digital Gateway project and has approved the proposal to move forward.
"We are eager to continue working with stakeholders and members of the community to make this project a reality and help Prince William County continue to flourish.”
Opposition groups are likely to fight each application as they come through, and upcoming elections could change the political landscape.
“I think once you're an obstructionist, you're always an obstructionist,” says Ghadban.
Speaking previously to DCD, Elena Schlossberg, executive director of the Coalition to Protect Prince William County, made it clear the group plans to keep fighting.
“I don't think QTS understands the passion of this community,” says Schlossberg. “And I don't think that this new board does either. We believe you don't have to sacrifice your natural resources, your environment, your hallowed ground, your clean drinking water, for economic development.”
DCD reached out to Schlossberg for further comment after the GPA vote, who told us: “We are not done fighting.”
There have been ongoing marketing campaigns from both sides, with each accusing the other of lies and disinformation. The side of the road along Pageland was littered with flyers both for and against development when DCD visited the area in November in the wake of the decision. We continue to receive regular updates and newsletters from both pro-PW Gateway and opposition groups, and further legal challenges and protests against the project are already underway.
At time of writing, two lawsuits have been filed against the county in the wake of the GPA authorization.
The Oak Valley Homeowners Association, Inc. and Gainesville Citizens for Smart Growth have both filed lawsuits against Prince William County’s supervisors; both seek to have the decision reversed and to prevent similar changes being passed in future.
Supervisor Lawson told local press she has vowed to continue the fight against the project, and encouraged residents to contact their respective supervisors to oppose the development.
December 2022 saw Supervisor Pete Candland resign over the matter, saying his ability to serve “has been greatly diminished” by the project and the fact that the county’s attorney had advised he no longer vote on data center projects until the rezoning of his land was complete.
Candland was initially against the Gateway project, but later became one of the landowners involved in the proposal. This conflict of interest saw him recuse himself from the matter, which was controversial given the project was literally in his backyard, in an area he was meant to represent. Local residents had started a recall petition to remove Candland and force a new election prior to the GPA decision, alleging ‘neglect of duty and misuse of office.’ He was also facing a federal lawsuit over censoring anti-Gateway views posted to his social media accounts.
“Candland’s resignation isn’t unexpected, it’s more than a year overdue,” the PWC Coalition’s Schlossberg said. “His financial conflicts of interest in the approval of the Digital Gateway have not only deprived his district’s residents of effective representation – they have tainted the entire county review and approval process, including the 2040 Comprehensive Plan.
“But there is no fond farewell for farcical representation - and no pausing in a legal and political battle that is far from over.”
Whether the prospective replacement supervisor is for or against the Gateway project is likely to be a key issue in upcoming elections to replace Candland.
Chair of the board Ann Wheeler is also the target of a recall effort over conflicts of interest due to her investments in data center companies, including between $100,000 and $500,000 in Amazon and QTS-owner Blackstone.
The wider national press will likely move on from the latest battle for Pageland Lane, but the war is set to continue until the last data center is built.
The end of over-cooling
> Data centers are letting temperatures go up
An intro to liquid cooling
> There are many kinds of liquid cooling. Here’s how it works
Plant-based cooling
> Don’t use fossil fuels for immersion cooling
Cooling has been at the center of thinking about green data centers since the very dawn of the discipline, more than 15 years ago.
You might think it's all been sorted out by now - but if you thought that, you'd be wrong.
The last ten years have seen a sustained effort to reduce the energy used in data center cooling systems, traditionally provided by air-conditioning technologies, shifting energy to the racks.
At the same time, there's been a growing movement that says liquid cooling is going to be more efficient, and better for the environment.
This supplement questions both those assumptions - and provides a primer on liquid cooling, because we believe that, whatever the details, liquid cooling is the future.
Liquid cooling is not a new technology, and it's not a single technique. It's been around for 60 years, and is available in around half-a-dozen forms.
Our primer takes you through the history of the liquid cooling movement, which has been widespread in computers from mainframes to humble desktop computers.
We also walk you through the various options, from simple circuits of water, kept from the electronics by coldplates, through immersion tanks, up to two-phase systems where sophisticated fluids bubble and recondense.
As liquid cooling emerges from its niche, you will need to learn a lot of new tools.
The biggest effort in the green data center movement has been around reducing the energy wasted in cooling data centers.
The old consensus was that data centers should be kept at a chilly temperature below 20°C, to be absolutely sure that the electronics were not stressed.
Hardware vendors disagreed, and industry bodies assured data center operators that warmer temperatures were safe.
Now the conservative colocation sector is acting on those recommendations.
But wait. Research engineers are sounding a cautious note. It turns out that the electronics use more power when the temperature goes up, to perform the same work.
Some are suggesting that the move to raise temperatures is a big mistake, based on over-reliance on simplistic efficiency measures like PUE (power usage effectiveness).
Could it be that we need a bigger dataset and more research to sort this out?
Finally, immersion cooling has earned its reputation for environmental friendliness.
It uses less energy, it's quieter, and it removes heat in a concentrated form which makes heat reuse practical.
But there's a small snag. All too often, the immersion fluids used by the tank makers are fossil fuel-based hydrocarbons.
We spoke to a food giant that wants to fill your immersion tanks with a plant-based alternative.
And apparently it's fully recyclable, too.
Fourteen years after definitive proof that warmer is better, colocation companies are still struggling to turn their cooling systems down
In 2008, ASHRAE issued definitive guidance that it is perfectly safe to run data centers at temperatures up to 27°C (80.6°F). But large parts of the industry persist in over-cooling their servers, wasting vast amounts of energy and causing unnecessary emissions.
There are signs that this may be changing, but progress has been incredibly slow - and future developments don’t look likely to speed things up very much.
When data centers first emerged, operators kept them cool to avoid any chance of overheating. Temperatures were pegged at 22°C (71.6°F), which meant that chillers were working overtime to maintain an unnecessarily cool atmosphere in the server rooms.
In the early 2000s, more energy was spent in the cooling systems than in the IT rack itself, a trend which seemed obviously wrong. The industry began an effort to reduce that imbalance, and created a metric, PUE (Power Usage Effectiveness) to measure progress.
PUE is the total power used in the data center, divided by the power used in the racks - so an “ideal” PUE of 1.0 would mean all power is going to the racks. Finding ways to switch off the air conditioning, and letting temperatures rise, was a major strategy in approaching this goal.
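For readers who want to see the arithmetic, here is a minimal sketch of that calculation in Python, using made-up illustrative figures rather than numbers from any facility mentioned in this article:
# Minimal sketch of the PUE calculation; the figures below are hypothetical
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    # PUE = total facility power divided by IT (rack) power; 1.0 would mean zero overhead
    return total_facility_kw / it_load_kw
total_kw = 1_500.0  # hypothetical draw for the whole building
it_kw = 1_000.0     # hypothetical draw at the racks
print(f"PUE = {pue(total_kw, it_kw):.2f}")  # prints "PUE = 1.50" - half as much power again spent on cooling and losses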
In 2004, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) recommended an operating temperature range from 20°C to 25°C. In 2008, the society went further, suggesting that temperatures could be raised to 27°C.
Following that, the society's Class A1 guidelines raised the allowable limit to 32°C (89.6°F), depending on conditions.
This was not an idle whim. ASHRAE engineers said that higher temperatures would have little effect on the lifetime of components, but would offer significant energy savings.
Figures from the US General Services Administration suggested that data centers could save four percent of their total energy, for every degree they allowed the temperature to climb.
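Taken at face value, that rule of thumb compounds over a few degrees. The sketch below is our own back-of-the-envelope arithmetic, assuming the four percent figure holds across the whole range - something the GSA guidance does not guarantee:
# Back-of-the-envelope sketch of the 'four percent per degree' rule of thumb quoted above
# The compounding assumption is ours, for illustration only
saving_per_degree = 0.04
baseline_c = 22  # the traditional setpoint discussed in this article
target_c = 27    # the ASHRAE-recommended upper bound
energy_fraction = (1 - saving_per_degree) ** (target_c - baseline_c)
print(f"Energy used at {target_c}C: {energy_fraction:.0%} of the {baseline_c}C baseline")
# prints roughly 82 percent - a saving in the high teens for a five-degree rise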
Hyperscale companies are often best placed to pick up advanced technology ideas. They own the building, the cooling systems, and the IT. So if they allow temperatures to climb, then it’s their own equipment that feels the heat.
So it’s no surprise that cloud giants were the first to get on board with raising data center temperatures. Facebook quickly found it could go beyond the ASHRAE guidelines. At its Prineville and Forest City data centers, it raised server temperatures to 29.4°C (85°F), and found no ill effects.
“This will further reduce our environmental impact and allow us to have 45 percent less air-handling hardware than we have in Prineville,” Yael Maguire, then Facebook’s director of engineering, said.
Google went up to 26.6°C, and Joe Kava, then vice president of data centers, said the move was working: “Google runs data centers warmer than most because it helps efficiency.”
Intel went furthest. For ten months in 2008, the chip giant took 900 servers, and ran half of them in a traditionally cooled data center, while the other 450 were given no external cooling. The server temperatures went up to 33.3°C (92°F) at times.
At the end of the ten months, the chip giant compared the two groups of servers. The 450 hot servers had saved some 67 percent of the power budget.
In this higher-temperature test, Intel actually found a measurable increase in failure. Amongst the hot servers, two percent more failed. But that failure rate may have had nothing to do with the temperature - the 450 servers under test also had no air filtration or humidity control, so the small increase in failure rate may have been due to dust and condensation.
Academics backed up the idea, with support coming from a 2012 paper from the University of Toronto titled Temperature Management in Data Centers: Why Some (Might) Like It Hot.
“Our results indicate that, all things considered, the effect of temperature on hardware reliability is weaker than commonly thought,” the Canadian academics conclude. “Increasing data center temperatures creates the potential for large energy savings and reductions in carbon emissions.”
At the same time, server makers responded to ASHRAE’s guidelines, and confirmed that these new higher temperatures were acceptable without breaking equipment warranties.
Given that weight of support, you might have expected data center temperatures to rise dramatically across the industry - and you can still find commentary from 2011, which predicts a rapid increase in cold aisle temperatures.
However, look around for recommended data center temperatures today, and figures of 22°C and 25°C are still widely quoted.
This reluctance to change is widely put down to the industry’s reputation
for conservatism, although there are some influential voices raised against the consensus that higher temperatures are automatically better (see Box).
All of which makes a recent announcement from Equinix very interesting. On some measures, Equinix is the world’s largest colocation player, housing a huge chunk of the servers that are not in either on-premises data centers or the cloud.
In December, Equinix announced that it would “adjust the thermostat of its colocation data centers, letting them run warmer, to reduce the amount of energy spent cooling them down unnecessarily.”
“With this new initiative, we can intelligently adjust the thermostat in our data centers in the same way that consumers do in their homes,” said Raouf Abdel, EVP of global operations for Equinix.
Equinix’s announcement features congratulatory quotes from analysts and vendors.
Rob Brothers, program vice president, data center services, at analyst firm IDC, explains that “most data centers … are unnecessarily cooler than required.”
Brothers goes on to say that the announcement will see Equinix “play a key role in driving change in the industry and help shape the overall sustainability story we all need to participate in."
The announcement will "change the way we think about operating temperatures within data center environments,” he says.
Which really does oversell the announcement somewhat. All Equinix has promised to do is to make an attempt to push temperatures up towards 27°C - the target which ASHRAE set 14 years ago, and which it already recommends can be exceeded.
No Equinix data centers will get warmer straight away, either. The announcement will have no immediate impact on any existing customers
in any Equinix data centers. Instead, customers will be notified at some unspecified time in the future, when Equinix is planning to adjust the thermostat at the site where their equipment is hosted.
"Starting immediately, Equinix will begin to define a multi-year global roadmap for thermal operations within its data centers aimed at achieving significantly more efficient cooling and decreased carbon impacts," says the press release.
And in response to a question from DCD, Equinix supplied the following statement: "There is no immediate impact on our general client base, as we expect this change to take place over several years. Equinix will work to ensure all clients receive ample notification of the planned change to their specific deployment site."
Reading between the lines, it is obvious that Equinix is facing pushback from its customers, who are ignoring the vast weight of evidence that higher temperatures are safe, and are unwilling to budge from the traditional 22°C temperature which has been the norm.
Equinix pushes the idea of increased temperatures as a way for its customers to meet the goal of reducing Scope 3 emissions, the CO2 equivalent emitted from activity in their supply chain.
For colocation customers, the energy used in their colo provider’s facility is part of their Scope 3 emissions, and there are moves to encourage all companies to cut their Scope 3 emissions to reach net-zero goals.
Revealingly, Equinix does not provide any supporting quotes at all from customers eager to have their servers hosted at a higher temperature.
For Equinix, the emissions for electricity used in its cooling systems are part of its Scope 2 emissions, which it has promised to reduce. Increasing the temperature will be a major step towards achieving that goal.
"Our cooling systems account for approximately 25 percent of our total energy usage globally," said Abdel. "Once rolled out across our current global data center footprint, we anticipate energy efficiency improvements of as much as 10 percent in various locations."
Equinix is in a difficult position. It can’t increase the temperature without risking the displeasure of its customers, who might refuse to allow the increase, or go elsewhere.
It’s a move that needs to be made, and Equinix deserves support for setting the goal. But the cautious nature of the announcement makes it clear that this could be an uphill battle.
However, Equinix clearly believes that future net-zero regulations will push customers in the direction it wants to be allowed to go.
"Equinix is committed to understanding how these changes will affect our customers and we will work together to find a mutually beneficial path toward a more sustainable future,” says the statement from the company.
“As global sustainability requirements for data center operations become more stringent, our customers and partners will depend on Equinix to continue leading efforts that help them achieve their sustainability goals."
Surprisingly, just as the industry seems to have reached a consensus that data centers should be run as warm as possible, there are dissenting voices - some of them very authoritative.
There are two main objections to running data centers warmer. One is that data center staff working in a contained hot aisle will be subjected to a harsher working environment. The other is that the chips in the servers will also be subjected to more extreme conditions.
John Haile, a retired 24-year veteran of Telehouse, commented on a LinkedIn discussion about Equinix’s announcement: “The people that work in the data center generally have to work in the hot aisle once the row goes live. The temperatures in there are well over 40°C - it dries your eyes out.”
While many professionals are prepared to work at higher temperatures, and some even relish the opportunity to work in shorts, others question whether the effort is even beneficial in the first place.
Running with hotter air temperatures may create a completely spurious benefit, based on over-reliance on one efficiency metric, argues Professor Jon Summers, research lead in data centers at Research Institutes of Sweden (RISE).
Data centers measure efficiency by aiming for a low PUE (power usage effectiveness), which can be achieved by shifting power consumption from the building’s air conditioning into the racks. This makes sense if the energy in the racks is all used for computation, but some of it is used for cooling fans, points out Professor Summers.
“Increasing temperatures will improve the ISO PUE of a DC, which a vast majority appear to cite as a measure of efficiency,” says Summers. His research suggests that a reduction in the energy used by the air conditioning will be offset by the increased energy used in the servers.
“At RISE Research Institutes of Sweden, in the ICE data center we have researched the effect of supply temperature on DC IT equipment using wind tunnels, full air-cooled data centers, direct-to-chip, and immersion systems connected to well-controlled liquid cooling testbeds,” says Summers. “The upshot is that irrespective of the cooling method, the microprocessors draw more power when operated hotter for the same digital workload due to current leakages.”
This effect varies between different processors, says Summers, with Xeon E5-2769-v3 CPUs running at 50 percent workload drawing 8W more when the temperature was increased from 40°C to 75°C in a wind tunnel, with the server fans set to target a fixed CPU temperature.
Essentially, when the air inlet temperature goes up, the cooling work shifts from the air conditioning systems to the fans in the servers, which have to work harder. This automatically reduces the PUE, because the fans are in the racks, and PUE is designed to maximize the energy used within the racks, compared to energy used in the external cooling systems.
Running at hotter temperatures can create completely illusory benefits, says Summers: “With increased supply temperatures we do see an increased overall energy consumption even though the PUE drops.”
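A toy calculation, using invented figures purely to illustrate the effect Summers describes, shows how the metric can move in the "wrong" direction:

```python
# Hypothetical figures, purely to illustrate the effect Summers describes:
# raising supply temperature shifts cooling work into the racks (faster fans,
# more leakage current), so PUE falls even though total energy rises.

def pue(it_kw, facility_cooling_kw):
    """PUE = total facility power / power delivered to the racks."""
    return (it_kw + facility_cooling_kw) / it_kw

# Cooler supply air: modest server fan power, more chiller work.
cool_it = 1000 + 50            # compute + server fans (kW)
cool_pue = pue(cool_it, facility_cooling_kw=400)   # ~1.38, total 1,450 kW

# Warmer supply air: chillers do less, but fans spin faster and CPUs leak more.
warm_it = 1000 + 120 + 30      # compute + faster fans + leakage (kW)
warm_pue = pue(warm_it, facility_cooling_kw=320)   # ~1.28, total 1,470 kW

print(round(cool_pue, 2), round(warm_pue, 2))
```

In this made-up case, total consumption rises from 1,450kW to 1,470kW, yet the reported PUE improves from roughly 1.38 to 1.28 - exactly the kind of spurious gain Summers warns about.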
In immersion systems, which have no server fans, Summers ran 108 of the same CPUs in a tank. In this situation, his team found a six percent drop in power requirements at the same 50 percent workload when the tank coolant was dropped from 50°C to 30°C.
“Other than less energy consumed by the DC cooling equipment resulting in a lower ISO PUE, what are the reasons for pushing up air-supply temperatures?” Summers asks.
Summers’ colleague Tor Björn Minde, head of RISE’s ICE data center, agrees: “Why in the world would you like to do this?”
Allowing warmer temperatures might make sense if the outside air temperature is above +30°C, says Minde, but otherwise “you should run it as cold as possible. The power draw of the IT is less at low temperatures. If you have free cooling, run it cold. You will have less fan speed overall, both in the facility and on the servers.”
Minde thinks the industry should aim for constant CPU temperatures, and use the air conditioning compressor only when the CPU temperature is getting too high.
Further work will be done on this – and Interact, a division of TechBuyer, has also been researching the issue, and will be publishing a paper with the IEEE in 2023.
Image captions: Top images show how standard perimeter CRAH units create high-velocity airflows that translate into negative pressure at the front of racks at the beginning of the row; bottom images show how chilled water units for non-raised floor applications help lower airflow velocities to balance pressure across the row.
Slab, or non-raised floor, data centers have helped cloud and colocation providers meet growing capacity demand by accelerating speed-to-market and reducing capital costs. Those benefits have, however, come with new data center cooling challenges. Cooling solutions not tailored to the needs of slab floor facilities can jeopardize equipment reliability and reduce cooling system efficiency. But with new challenges come new opportunities, and recent developments in control strategies and cooling technologies are enabling high performing cooling in non-raised floor environments.
When slab floor data centers were first gaining traction, the airflow control strategy that had proven effective in raised floor environments was applied to
these non-raised floor data centers. But this strategy — which manages airflow and fan speed based on pressure differential, or Delta P — hasn’t been as effective in slab floor data centers as it is in raised floor environments.
Without the duct provided by the space beneath the floor, pressure is more difficult to measure and manage in slab floor data centers. Data center designers also lose the ability to control airflow to racks using properly sized and positioned floor tiles. Instead of cold air being distributed directly to the front of racks through the tiles, air must travel the length of the row. To compensate, many operators drive fan speeds too high, wasting fan energy and resulting in lower return air temperatures that prevent cooling units from operating at their design efficiency.
The need for air to travel down the row also creates airflow patterns that can limit the ability to cool racks closest to the cooling units when standard data center cooling units are used. The velocity of the air at the beginning of the row has to be high enough to ensure adequate airflow at the end of the row. With standard cooling units,
that requires velocities so high they create negative pressures in front of racks at the beginning of the row. This increases the potential for temperature-related failures in these racks.
As a result, operators of slab data centers have had to compromise both cooling system efficiency and equipment reliability. But that is no longer necessary, as new strategies and technologies designed specifically for slab floor data centers are now available.
With control based on Delta P proving inefficient in slab floor data centers, Vertiv developed a control strategy based on the temperature differential (Delta T) between the supply air leaving the cooling units and the return air to the cooling units.
Temperature is much easier to measure than pressure, and by setting a temperature control point for return air above the supply air temperature, operators can ensure enough airflow is reaching each rack.
This strategy takes into consideration numerous failure conditions, such as blocked cold aisles, and provides monitoring to ensure air temperatures at the rack are precisely controlled and consistently meet temperature service level agreements (SLAs) — something that isn’t possible with a Delta P control strategy. The need to run fans at higher-than-necessary speeds to compensate for pressure variations across the row is eliminated, and return air temperatures are maintained at the setpoint to optimize cooling unit efficiency. For more on this control strategy, see the Vertiv white paper, Overcoming the Challenges in Cooling Non-Raised Floor Data Centers.
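As a minimal sketch of the idea (assuming a simple proportional controller and hypothetical sensor inputs - this is not Vertiv's implementation, which is described in the white paper), a Delta T based fan control loop might look something like this:

```python
# A minimal sketch of Delta T based fan control, using hypothetical sensor
# readings; real systems are considerably more sophisticated (failure modes,
# SLA monitoring, multiple units coordinated across the row).

def fan_speed(supply_temp_c, return_temp_c, delta_t_setpoint_c=10.0,
              current_speed_pct=60.0, gain=2.0):
    """Nudge fan speed so the measured return-minus-supply temperature
    difference converges on the setpoint.

    If the measured Delta T is above the setpoint, the racks are not getting
    enough airflow, so speed the fans up; if it is below, the fans are
    over-delivering and can slow down.
    """
    measured_delta_t = return_temp_c - supply_temp_c
    error = measured_delta_t - delta_t_setpoint_c
    new_speed = current_speed_pct + gain * error
    return max(20.0, min(100.0, new_speed))   # clamp to a safe operating band

# Example: supply 25C, return 37C -> Delta T of 12C, 2C above the setpoint,
# so the controller raises fan speed from 60% to 64%.
print(fan_speed(25.0, 37.0))
```

The point of controlling on Delta T rather than Delta P is simply that temperature is a stable, easy-to-measure signal in a slab floor environment, so the loop does not have to chase noisy pressure readings with excess fan speed.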
Chilled water cooling systems offer a number of benefits to cloud and colocation providers developing or operating slab floor data centers. One of the most significant is the ability of chilled water systems to reduce direct and indirect greenhouse gas emissions compared to other cooling technologies. Reductions in direct emissions are enabled by a chiller’s ability to use low global warming potential (GWP) hydrofluoroolefin (HFO) refrigerants. Indirect emissions are reduced through the overall efficiency of these systems, which can achieve very low power usage effectiveness
(PUE) values through the use of intelligent control systems. To learn more, read the Vertiv white paper, How Chilled Water Systems Meet Data Center Availability and Sustainability Goals.
To address the challenge of airflow distribution discussed previously, new chilled water cooling units have been engineered to meet the airflow requirements of slab floor data centers, including perimeter and thermal wall cooling units.
Perimeter units, for example, have been redesigned to relocate the fan at the top of the unit and create a larger surface area for air distribution. This allows these units to distribute more air at lower speeds, improving the ability to move air down the length of the row and reducing the risk of negative pressure at the beginning of the row.
New thermal wall units adapt the air handling unit (AHU) concept to the needs of slab floor data centers. Installed in the service corridor, they blow air horizontally to the server room, providing high volumes of air that move at low speeds. These systems are particularly well suited when high-density cooling units are required.
Both products can be integrated with the chilled water system manager, which optimizes the entire system by coordinating the operation of external and internal units.
Developers and operators of slab floor data centers no longer have to accept compromises in cooling system performance to realize the cost and speed benefits enabled by eliminating the raised floor. By using control strategies and cooling technologies engineered specifically for slab floor data centers, they can leverage the environmental and operating benefits of chilled water cooling while effectively managing airflow and temperature across the facility. For more information on selecting the right cooling system for your data center, see the white paper, Chilled Water Data Center Cooling for Non-Raised Floor Applications.
With data center workloads ever increasing due to advanced analytics, AI, and the digitization of every process, the average rack power draw has shot up considerably. And, as we know, with more power draw comes more waste heat that needs to be removed from the rack and eventually the white space.
In the recent past, when racks consumed up to 20kW, air-based cooling methodologies could be relied on to keep the IT hardware operating safely and efficiently. But as some racks start to exceed 30kW or more, new cooling approaches need to be used.
This is in part due to the densification of IT hardware in general, with each new CPU generation packing more processing capacity into smaller and smaller die sizes. Workloads such as artificial intelligence (AI) and machine learning (ML) require floating point operations, which are usually delivered via a graphics processing unit. These GPUs are designed to have a normal operating temperature above 80°C (176°F) when fully utilized for a particular workload.
Although air-based cooling options exist for racks drawing more than 20kW, they are often cumbersome to install and maintain effectively, essentially passing the point of diminishing returns in terms of cooling capacity. As such, owners and operators of data centers are now cautiously looking towards liquid cooling for their new facility projects.
Liquid cooling of IT equipment seems like a new technology, but that could not be further from the truth.
Liquids in general can be a great heat transfer medium and with a little chemical engineering, boiling and condensation points can be tailored precisely, improving the heat transfer using dielectric fluids.
Various forms of liquid cooling have been around since the late 1800s, when they were used to insulate and cool extra-high-voltage transformers. The automotive industry is another ecosystem that relied, and still relies, on liquid cooling - the water in a typical auto radiator.
Liquid cooling entered the computer sector early in its history, when IBM released a series of enterprise-grade computers called System/360, in the early 1960s.
The System/360 has been one of the most enduring lines of commercially available computers. While the original hardware is now retired, S/360 code written in the early 1960s is still found in new mainframes today. It was also the first computer family to share a common instruction set, making upgrades or changes to the mainframe easier than ever.
The System/360 was also cooled with a hybrid approach, using both air and liquid cooling. These systems were big and cumbersome to install, but IBM developed the hybrid model to accommodate increased heat loads. With them, as much as 50 percent of the heat dissipated was removed from the cooling air via water-cooled heat exchangers.
Image: Interboard water-based heat exchangers. Source: Exploring Innovative Cooling Solutions for IBM’s SuperComputing Systems: A Collaborative Trail Blazing Experience, by Dr. Richard C. Chu, IBM Fellow; Academician, Academia Sinica, ROC; Member, National Academy of Engineering, USA.
Today, liquid cooling is an option for pretty much every desktop PC - and the concept has essentially remained the same. The cooling process is made up of three distinct parts: the heat plate, the supply and return pipes, and the radiators and fans.
The heat plate is essentially a metal plate that covers the whole CPU die with a small reservoir on top. The plate is engineered to be as conductive as possible in terms of heat. Any heat generated by the chip will be transferred to the reservoir on top.
The liquid in this closed loop will travel via the supply and return pipes to the radiators where heat will be pushed out of the PC enclosure through the radiator fins – these fins being actively cooled by fans.
Consumer-grade liquid cooling options originally dealt only with CPU heat, but now almost every component of a modern-day PC can be liquid-cooled.
That is the consumer-grade version of liquid cooling - but what about larger-scale deployments and enterprise-grade solutions? We’ll look at these next in the context of the data center.
When analyzing liquid cooling options for enterprise-grade IT hardware there are essentially two main categories of liquid cooling – Direct-to-Chip Liquid Cooling (sometimes called conductive or cold plate liquid cooling) and immersive liquid cooling.
When considering the phases (what state the fluid is in - either liquid or gas) that the coolant goes through, there are five distinct types of liquid cooling, as seen in the figure above.
This method of cooling requires delivering the liquid coolant directly to the hotter components of a server - CPU or GPU - with a cold plate placed directly on the chip. The electronic components are never in direct contact with the coolant.
With this method, fans are still required to provide airflow through the server to remove the residual heat. While the air-cooling infrastructure is greatly reduced, it is still required for the correct operation of this liquid cooling method.
Coolants can be either water or dielectric fluids, but water carries a downtime risk from leakage - although Leak Prevention Systems (LPS) are available. Single-phase refers to the fact that the coolant does not change state - i.e. from a liquid to a gas.
This is also the same method used in the previous desktop PC example.
The two-phase direct-to-chip liquid cooling method works like the previous single-phase method, the only difference being that the liquid coolant changes state - from a liquid to a gas and back again - as it completes the cooling loop. These systems will always use an engineered dielectric fluid.
Image: Layout of the hybrid air/liquid approach in the System/360.
In terms of heat-rejection, two-phase systems are better than single-phase systems and have a lower risk of leakage due to the coolant's state-changing nature. They do however require additional controls which will increase maintenance costs over the lifetime of the system.
This cooling approach uses a single-phase dielectric fluid that is in direct contact with the IT components. Servers are fully or partially immersed in this non-conductive liquid, effectively removing heat from every source within the chassis.
Essentially, it is a rack turned on its back, filled with dielectric fluid - instead of mounting servers horizontally, they are now mounted vertically.
These systems are usually fitted with centralized power supplies, and the dielectric fluid is cooled through a heat exchanger, either by a pump - which can be installed inside or outside the tub - or by convection.
As with Single-Phase, in this method the IT equipment is completely submerged in fluid vertically within a tank. But, importantly with this approach, the dielectric fluid must be capable of changing states from liquid to gas as it heats up.
Dielectric liquids are used as electrical insulators in high voltage applications, e.g. transformers, capacitors, high-voltage cables, and switchgear (namely high voltage switchgear).
Their functions are to provide electrical insulation, suppress corona and arcing, and serve as a coolant. Generally, they are split into two categories, fluorochemical, and hydrocarbons.
Fluorochemical fluids, generally with a lower boiling point, are predominantly used for two-phase immersion cooling.
Hydrocarbons typically are not used for Two-Phase immersion cooling systems, as most hydrocarbons are combustible and/or flammable. Therefore, hydrocarbons are typically only used in Single-Phase applications.
Both fluorochemicals (or fluorocarbons) and hydrocarbons (e.g., mineral oils, synthetic oils, natural oils) can be used for Single-Phase immersion cooling. Fluids with a higher boiling point (above the maximum temperature of the system) are necessary to ensure the fluid remains in the liquid phase.
The cooling can happen either passively via conduction or actively pumped. Both heat exchangers and pumps can be found inside the chassis or in a side arrangement where the heat is transferred from the liquid to a water loop.
This approach also involves no fans, so its operation is nearly silent. In contrast, some air-cooled facilities can reach upwards of 80 dB in the data hall, with workers requiring hearing protection for longer exposures.
Sometimes referred to as an "open bath,” this immersive liquid cooling method involves the IT equipment being completely submerged in fluid.
In such a system, the submerged components generate heat, turning the liquid into a gas, which rises to the surface and condenses on a coil, falling naturally back down once it has cooled enough to return to a liquid state.
Considerations when deciding among various fluorochemicals and hydrocarbons include heat transfer performance (stability and reliability over time, etc.), ease of IT hardware maintenance, fluid hygiene, and replacement needs, material compatibility, electrical properties, flammability or combustibility, environmental impact, safety-related issues, and total fluid cost over the lifetime of the tank or data centers.
While far from mainstream, liquid cooling is positioning itself as the cooling solution for high-performance computing. Its mainstream adoption will however depend on advances in technology and chip designs.
Retrofitting existing data centers is costly for some forms of liquid cooling, while the weight of immersion tanks makes it impractical for many current raised floor facilities.
Immersion-based cooling is currently perceived as one of the greenest technologies in data centers, as it reduces the energy needed to cool a facility, while extracting heat in a quiet and efficient manner.
But there could be a problem. Data center cooling systems from the likes of Asperitas or Submer consist of large tanks of fluid in which electronics are submerged. Generally, that fluid is a synthetic oil composed of various hydrocarbons, ultimately derived from petroleum.
That might not be a big issue, because immersion cooling systems don’t burn their cooling fluid; it circulates within the tanks until it needs replacement.
But the hydrocarbon-based fluid will eventually need to be disposed of, and will reach the environment.
Could there be an alternative?
Peter Judge, Executive Editor
US food giant Cargill thinks there is. The company is one of the largest in the US, starting 150 years ago as a salt distributor, and is now best-known for egg-based products. But it also works with grain and vegetable oil, and a few years back, it quietly branched out. Into data center cooling.
“We saw an opportunity for the renewables aspect, and the environmental aspect,” explains Kristin Anderson, Cargill’s business development manager for cooling solutions. “We're really excited about this product and the environmental opportunities.”
The product, NatureCool, is at least 90 percent based on soy oil, and is designed to replace petroleum-based immersion coolants in data centers and cryptomining facilities. Because it comes from plants that have naturally trapped carbon, it can be said to be CO2 neutral - although it uses land that would otherwise be used for food.
Environmentally friendly products can involve a trade-off on performance, but Cargill believes that doesn’t apply here. It claims the fluid has a 10 percent higher heat capacity than leading synthetic immersion cooling fluids.
It also passes safety standards, with a high flash point of 325°C (617°F). Unlike some other immersion fluids, it can't self-ignite, and its flames will go out after the heat source is removed.
And there are other benefits in its practical use, Cargill claims. The company says that synthetic spills require expensive remediation, using solvents which then need to be cleaned up, using techniques which are highly regulated.
By contrast, Cargill says NatureCool spills just need soap and water.
In fact, when the fluid is outside the
tanks, Cargill says it can biodegrade quickly and easily, within ten days - even though it is stable and long-lasting inside the system.
Cargill has considered the lifecycle of the product and made it recyclable - not only the fluid but the packaging as well.
The company can supply fluids in tanker trucks, holding 5,800 gallons, but that doesn’t work for data centers, as the trucks can’t be driven into the facility and up to the tanks.
Instead, most customers use what are known as “totes”: 330-gallon containers made of heavy plastic with a metal cage reinforcement, which can be transported by forklift truck. Totes are about four-by-four-by-four (feet), and add to the price of the fluid.
Large facilities, with up to 500 tanks, can get through a surprising amount of
fluid, sources suggest. Individual tanks hold from 250 to 500 gallons and, while the fluid lasts a long time, it will have to be replaced at some point.
There are anecdotal stories of data centers placing orders for 25,000 gallons of coolant at one time, which amounts to around 60 totes.
What appears to be happening is operators are choosing to replace their immersion cooling after perhaps five or six years.
When this happens, if customers dispose of the fluid, it will still have effectively zero net carbon emissions, since it will only be releasing carbon that the plants originally captured. However, there is a possibility that customers might reuse that fluid, if it can be processed to make it suitable for biodiesel use.
Given that NatureCool is 90 percent soy oil, the other 10 percent might need to be removed, in some sort of processing, leaving an oil which can be safely burnt in diesel generators.
The reuse doesn’t end there, as the totes themselves are a potential source of waste. Industry practice is generally to discard them, but Cargill recycles totes.
Cargill’s cooling customers all get virgin totes rather than second-use containers, but customers are encouraged to send them back so they can be cleaned and reused or sold on the secondary market.
All too often, the recycled option is an expensive niche product, but Cargill appears to want to take a substantial share of the market.
The product was initially conceived around 2017, and started out in tests with small partners. It has been available commercially for four and a half years,
finding a market amongst early adopters of immersion cooling.
This year, in 2022, the company has built up enough momentum to hire a team to market the product and make a formal launch.
With its massive food volumes, Cargill can produce large quantities of NatureCool to meet potential demand. However, forecasting could be an issue, as the company’s initial market is mostly in the unpredictable cryptomining sector.
In the general data center sector, immersion cooling is still a small niche, as most operators are dealing with a huge installed base of air-cooled systems. It is difficult to get those data centers to consider converting to immersion cooling: it would involve junking their air conditioning systems, and investing in different sorts of support infrastructure and staff.
High-performance computing (HPC) has moved further towards immersion cooling, but it’s clear that cryptomining is the current opportunity. Crypto operators are not tied to existing installed hardware, and they simply want to run equipment as fast and cheaply as possible.
They routinely overclock equipment to get maximum performance, creating higher demands for heat removal, which immersion cooling can deliver.
Taken in context with the rest of Cargill’s
business, immersion cooling is obviously a good opportunity, because it provides a higher-margin outlet for vegetable oil.
However, there are already a variety of synthetic oils in competition, so there will be pressure on Cargill to keep the fluid cheap. Certainly, operating in the price-sensitive crypto sector will require that.
One might wonder if immersion cooling in data centers could expand so quickly that the facilities start to take raw materials away from the food sector, perhaps pushing up prices, but there’s no current danger of that.
The fluid is available internationally, shipped in ISO-standard shipping container tanks. Cargill is a large enough organization to have an entire transportation team that will handle this task, and also cover the minutiae of the international shipping process, including customs and VAT.
Users buying immersion cooling systems do not want to have to buy the coolant separately. In the event of any failure or incompatibility, this would mean finger-pointing and potentially a failure of the tank’s warranty.
For this reason, Cargill aims to sell its product directly through the tank vendors, and it will be getting it certified as compatible with those tank products.
We can also expect marketing campaigns which are based around its
environmental credentials, possibly linked to pending regulations on greenhouse gases and other chemicals with a global warming potential.
Cargill hopes that potential immersion cooling customers will demand a plant-based zero-emissions product in tanks, and ask vendors to endorse and supply it.
DCD has approached leading immersion cooling providers to ask if they are aware of NatureCool or have certified it, and the initial response seems to be favorable.
While some vendors are staying quiet for now, Asperitas says it is “excited” by the development.
Asperitas says there don’t seem to be any issues with compatibility, but it will need to confirm this with OEMs. “We look forward to working with Cargill through a special OCP immersion cooling fluids group to assess performance using the newly published Figures Of Merit (FOMs),” said a statement.
Cargill has joined the Open Compute Project (OCP), an industry group aiming to reduce the environmental footprint of data center hardware, and hopes to raise the profile of immersion coolants.
“Immersion cooling is the new frontier of technologies that allows for more efficient, higher performing systems that also help make the IT industry more sustainable,” said Kurtis Miller, the managing director of Cargill’s bioindustrial business, and a contributor to the OCP's Requirements document for immersion cooling.
Sometimes, it feels like everyone is moving to the cloud.
While that isn’t exactly true, with the cloud loudly professing to offer more sustainable, cheaper, easier, quicker - better - IT services, it is unsurprising that so many are heading to the hyperscalers.
It’s no different ‘down under.’ In 2013, the Australian government set out guidelines for its agencies to take a ‘cloud-first’ strategy. Spending on cloud computing has since risen from around AU$4.7 million (US$3.18m) between 2010 and 2014, and is predicted to reach AU$20.8bn (US$14.1bn) in 2025.
In other words, the Great Aussie Migration to the Cloud is looking good. Money is being thrown at the problem, and we are regularly seeing new agencies joining the list of those on the cloud.
But it has not all been smooth sailing. In the Australian Government’s Digital Transformation Agency’s ‘Secure Cloud Strategy,’ several obstacles and hesitancies were identified, including a lack of ‘common understanding of the cloud’ and ‘no confidence in how to meet compliance obligations.’
Concerns around data security when it comes to cloud computing are nothing new, and that this is a live issue for the Australian government was only emphasized by the Global Switch exodus.
In August 2019, China’s Shagang bought the final quarter of Global Switch, leaving the data center operator entirely under Chinese ownership. With many nations boycotting Chinese-owned technology and data services, this had a negative impact on the Australian Government’s trust in remaining with the provider.
In 2017, the Aussie Department of Defense exited the Global Switch facility (and went on to sign an AU$109.4 million (US$76.8m) contract with Data#3 for Microsoft Azure cloud services in August of this year), and in July of this year, the Department of Home Affairs, the Australian Securities and Investments Commission, and the Australian Communications and Media Authority officially broke off their relationships with Global Switch. The end of an era.
Richard Burley, CCO of DCI Data Centers, an Australian Government-Certified Service Provider, admitted that the Global Switch departures were not bad news for other providers, including DCI.
“We're part of the Commonwealth hosting certification framework and we're listed as certified strategic,” Burley explained. “Therefore, one must be a low-risk entity. Because our end/beneficial investors are predominantly Australian, British, and North American, we tick that box.
“Then we agree to operate our facilities within the stewardship guidelines of the commonwealth, in return for which our facilities are permitted to be used by the public and private cloud service providers for all classifications of government use.”
However, it is not only data sovereignty and compliance that present an obstacle to the widespread adoption of the cloud.
In fact, according to Burley, it is geographical.
“If you look at cloud-ready certified data centers in Australia, they are 40 to
one located on the eastern seaboard in number and capacity, but the Australian population is six to one. The purpose of our business is to bring the cloud to the Edge, and by that, we mean to bring it away from the eastern seaboard concentration and more to the west of the country, and that explains why we started in Adelaide.”
Adelaide, in South Australia, sits almost directly in the middle of the country, width-ways. While by no means ‘west,’ it is west of the cloud hotspots in New South Wales and Queensland.
DCI already offers a colocation data center in Adelaide, ADL01, but it is currently working on two cloud facilities: ADL02 and ADL03.
“We have an existing facility, ADL01. Next door is ADL02, which will in fact be a twin to ADL03 in Mawson Lakes.”
This decision is strategic. With the South Australian government already hosting some IT equipment in ADL01, having a cloud provider just next door is ideal, as well as having another facility to back it up, just 14 miles away in Mawson Lakes.
“It [ADL02] is a 4MW box with two halls, and 2,350 square meters of floor space. The PUE is 1.2, and that's even at very low loads. We have 800 racks, and it's designed to comply with cabinet security accreditations.”
It is not only DCI that is expanding its footprint in Adelaide. Companies like NextDC and CDC are also setting up sites in the city.
A dramatic expansion of any kind raises sustainability concerns. In Australia, this is somewhat mitigated by a grid with a large share of green energy (in Adelaide, it is primarily wind power), but that grid is also highly compromised.
Climate change has exacerbated the already tempestuous weather in Australia, and the country is plagued by extreme weather incidents like storms and wildfires. For example, on December 12, 2022, more than 2,500 customers reported a blackout in Sydney as a result
of a massive storm.
Data centers themselves are built with these incidents in consideration. According to Burley, DCI facilities are all constructed with a ‘TVRA’ in mind: a Technology Vulnerability Risk Assessment.
“We make sure we build in above the one in 100-year, or one in a 200-year flood zone. Particularly in Darwin, there are storm surges, so we're building above the 120-year flood zone. We always have regard to bushfire risk, and so you tend to find a lot of exclusion zones and barriers to building in bushland.”
But while the data centers themselves are strategically located, the grid is liable to outages, and this is a huge obstacle to DCI.
“For us, the greater issue is the variability of the grid. In Australia, there's a good bit of instability occurring. And so, in Adelaide, we maintain the 72 hours of backup generation on-site fuel.”
This has been particularly necessary in the last month. South Australia has been experiencing the ‘worst statewide blackout since 2016.’ According to ABC, gusty winds, heavy rain, and 423,000 lightning strikes caused widespread damage to power lines across the state, leaving more than 34,000 people without electricity. Fortunately for DCI, their Adelaide facilities have not yet been affected.
The resulting floods are still having a significant impact on the grid, and as the state enters its sunny season and the risk of bushfires again goes up, this instability is unlikely to be resolved just yet.
Despite this, the data center industry in Australia remains set to expand over the coming years, particularly for the cloud.
Government agencies cannot consistently and successfully implement a cloud-first strategy until the cloud infrastructure exists at the regional Edge of the major cities in Australia. Until the over-concentration problem is overcome, we can expect to see continuing massive growth, regardless of how well the grid can support it.
In an industry undergoing massive change, some projects and people have stood out as the leaders of the pack.
Now in its 16th year, the DCD Awards show has always sought to uncover and highlight the individuals, teams, and projects that represent the best of what the industry has to offer.
59 judges from across the industry voted across 15 categories in the sector's largest and longest-running awards series. After hundreds of submissions, months of discussions, and a careful tallying of the votes, we present here the winners of the data center industry's original and unbiased awards show.
This supplement looks at those winners, delving into each project and profiling the people behind them.
While the show itself was held in London, UK, the finalists represented every corner of the world.
The winners were equally diverse, highlighting that the sector cannot just look to the US and Europe for inspiration and ideas.
Winners came from Mexico, Malaysia, India, South Africa, and more.
Predictions of the future are always flawed, but perhaps we can get a glimpse at what is to come in some of the award winners we saw this year. Augmented reality (AR) headsets and digital twin technology look poised to overhaul how we approach construction,
while machine learning technologies are already beginning to reduce server waste.
Data centers have always been a sector of enormous wealth, made possible by a huge appetite for power, water, and other resources. It's possible to be mindful of that, and work to improve our relationship with the wider world. That could be through circular centers aimed at recycling servers, or it could be through phasing out water use. It could also be shown in other ways, like working with local communities to improve digital literacy - connecting those around you, and not just those that can pay.
It is no secret that data center audiences tend to skew older and more male than most.
In our award highlighting the upcoming talent of the sector, it was good to see a diversity of faces competing for the award - providing new perspectives for an industry that doesn't want to be stuck in the past.
But that doesn't mean those that have worked in the industry should feel they are being put out to pasture. We also highlight the lifetime achievement of someone who helped lift the sector, as well as the lives of veterans.
As we celebrate those that won in 2022, it's worth remembering that the process will soon kick off for the next one.
As the industry grows, its efforts to improve should grow with it. Will you help make 2023 better?
The UK is a thriving market for data centers, with cloud providers leading in recent years. Now, as the Edge begins to pick up its pace, companies are trying to work out how it fits into the nation’s digital landscape.
One UK digital Edge infrastructure provider, Pulsant, recently launched its own next-generation Edge platform called Pulsant Cloud to try to bridge the gap.
The company has been recognized for this platform, after scooping the award for Edge Data Center Project of the Year, which was sponsored by Moy Materials.
This award recognizes projects demonstrating a unique and strategic approach to how a successful Edge deployment is designed, set up, and operated.
According to the firm, this platform has been designed to extend the power of the Edge into complex hybrid environments.
It’s clear that the company is serious about this too, following over £100 million worth of investments into building its Edge digital infrastructure platform in the past year. Pulsant acquired two data centers during this period, in Manchester and Reading. It now counts 12 data centers across the country.
Pulsant wants to enable 95 percent of UK businesses to benefit from the major advances of Edge computing and give
regional enterprises and service providers unmatched scale and reach.
“The launch of Pulsant Cloud is another significant milestone in the development of our Edge infrastructure platform. We have invested in the network, our data centers, and now the hybrid cloud to give enterprises orchestration all the way to the Edge,” said the company in a statement.
According to the firm, Pulsant Cloud is able to “resolve the most significant control and optimization challenges facing organizations with hybrid environments.”
It’s been designed to deliver the cost control and workload flexibility needed for the Edge.
Pulsant Cloud has been developed for the whole of the UK to access, through its Edge infrastructure platform. In simple terms, clients are able to develop Edge applications within a hybrid cloud environment via Pulsant’s 100Gbps fiber network.
The service keeps regional mid-market organizations connected, enabling them to build and deploy applications.
And the company is quite keen to point out that its coverage is nationwide, with its connected data center network stretching from London to Scotland, filling the infrastructure gap between regional businesses and innovative software providers. Bridging the digital divide across the UK is a key target of Pulsant’s.
Paul Lipscombe, Telco Editor
A partnership with Megaport, a network-as-a-service provider, has given Pulsant the opportunity to connect to more than 360 cloud service providers, including some of the biggest hyperscalers around, such as Alibaba, AWS, Google Cloud, IBM Cloud, Microsoft Azure, Nutanix, Oracle Cloud, Salesforce, and SAP.
“We're delighted to announce we won the "Edge Data Center Project of the Year" award at last night's DCD Awards 2022,” Pulsant said.
“The award truly recognizes our work delivering a next-generation Edge infrastructure platform to UK regional enterprises and service providers, offering low latency access and hybrid cloud through our 12 UK-wide data centers. This is a huge testament to our dedicated team and everyone involved in the project.”
Online banking is a necessity for nearly everyone these days and is an absolute must for speedy transactions.
This method of transferring money has replaced cash, and stopped people from having to travel to banks. But all these transactions - and there’s a hell of a lot of them - must be going somewhere. And of course, the answer is data centers.
One such financial services company is Brazil-based Itaú Unibanco. The firm has two data centers, both of which are located in Mogi Mirim, roughly 150km from São Paulo.
The company’s work has been recognized after it claimed the Enterprise Data Center Evolution award, sponsored by Datalec Precision Installations.
This award recognizes the process of data center evolution that enables the enterprise to meet all the required objectives of its IT strategy.
Itaú Unibanco has an initiative to improve processes, restructure teams, and automate manual processes. Itaú Unibanco - Centro Tecnológico Mogi Mirim comprises two data centers with more than 10,000 sqm of IT space and 30,000 sqm divided between facility and support.
These spaces together generate a huge amount of assets that should be monitored and managed on a digital platform: the DCIM system. DCIM has around 1,500 facility assets with more than 120,000 automation monitoring points, 5,000 IT assets installed, and integrations with at least eight external systems.
But what is the DCIM platform?
This platform allows for full integration between data center infrastructure areas, simplifying and automating infrastructure delivery through integration with ITSM, Asset Management, and CMMS platforms.
According to Itaú, its facilities infrastructure provisioning becomes fully automated thanks to the platform's capacity-based advisory functions, which indicate the best position for new equipment, aiming to maximize the energy and occupancy efficiency of the IT environments.
Because of this DCIM platform, the data centers now have a digital twin tool that allows for the automation of infrastructure delivery routines, reducing lead times, improving quality, predicting the reactions and possible impacts of infrastructure changes, providing online monitoring of electrical loads, generating instant CFD models, and more.
Another important focus for Itaú Unibanco has been its new colocation initiatives, and its DCIM platform has been able to support such initiatives.
With the goal of giving its clients centralized management of their infrastructure, Itaú’s DCIM is able to centralize the management of colocation spaces on a unified platform, giving customers a complete view of space and occupation, power consumption and demand, current and historical thermal conditions of the IT space, and a detailed database of all IT assets installed in their environment.
The biggest problem in data center construction is errors where the building doesn’t quite match the plans. Misplaced concrete pads or mistakes in the fiber popups can put a data center build behind schedule or decrease the performance of the eventual building.
Around 30 percent of any construction project is “rework” - putting right such mistakes when they have been identified.
The Atom headset from XYZ Reality can wipe out rework, by validating the building in real-time as it is being constructed, using a 3D augmented reality view of the building designs overlaid on the construction site, before the engineers’ very eyes.
“The value proposition is, build it right the first time,” XYZ Reality CEO David Mitchell said in an interview with DCD. “The industry is plagued by rework.”
Atom is a safety-certified hardhat with a built-in head-up display, 16GB of RAM, and 1TB of storage. It connects to the HoloSite augmented reality platform and shows building plans as a 3D hologram.
It provides engineering-grade AR, with millimeter-level accuracy, able to align building information model (BIM) files with the built reality, showing the placing of electrics, pipework, and the eventual physical structure, and validating the build in real-time.
That’s a benefit which stood out for judges of the Mission Critical Tech Innovation award, sponsored by Jones Engineering Group.
The accuracy is crucial, as data centers are built with tolerances as small as 5mm. By replacing the traditional six- to eight-week scanning and checking process, XYZ Reality says an Atom can pay for itself in six months by eliminating costly rework.
This also delivers a reduction in the wastage of energy and materials during the build.
PM Group deployed Atom in a beta test in November 2020, and field engineers quickly identified significant discrepancies between the design and the construction.
The system was fully deployed on-site and in 2022 PM Group made an important decision to integrate its Autodesk BIM 360 project management system with the Atom’s cloud platform, streamlining the process and enabling more inspections and swifter resolutions.
Now rework has been reduced to less than one percent of the project.
Data centers are keen to minimize the energy they use - but large parts of that energy are almost invisible. In particular, the energy used by servers can be hard to access.
Most data centers assess their energy efficiency using power usage effectiveness (PUE), a metric designed to show inefficiencies in the power and cooling infrastructure, but giving no insight into what happens within the racks.
Most servers are run very inefficiently, at low utilization, so the PUE scores can mask a lot of waste.
When operators refresh their servers, they hope to reduce their energy costs and their carbon footprint but, without actual figures, they rely on guesswork and industry myths about the environmental impact of their hardware.
This means they may inadvertently increase their carbon footprint, when all factors are taken into account.
Interact, from TechBuyer, aims to change that. It’s a non-intrusive, machine learning, SaaS tool that provides tailored server upgrade recommendations that will reduce the cost, energy consumption and carbon footprint of an organization’s server estate. The tool provides easy-to-use reports, and comparisons between multiple options, so users can find opportunities to improve efficiency, while saving money, energy, space, and CO2 emissions.
TechBuyer provides refurbished servers,
aiming to promote a circular economy and reduce wasted materials and energy in the data center sector. But customers are reluctant to take the opportunity because of a myth widely believed in the industry - that new servers are so much more energy efficient that it is never worth repurposing second-user systems.
“This myth is contributing to huge amounts of unnecessary waste (energy and material) and emissions,” says TechBuyer’s Rich Kenny - so the company led a research project to compare the footprint of new and refurbished servers.
TechBuyer knew that, since 2014, the price-performance of servers has not been increasing as rapidly as Moore’s Law originally predicted, and refurbished servers are now as reliable and efficient as new ones.
An Innovate UK Knowledge Transfer Project (KTP) with the University of East London ran from 2019 to 2020, producing robust primary research, measuring the energy use and performance of new and refurbished servers.
As well as comparing the energy costs, the KTP looked at reliability. Since any broken parts are replaced during refurbishment, correctly configured second-user servers are as performant as, and more efficient than, new ones, the report found.
The KTP’s results were published in an academic journal - the IEEE’s Transactions on Sustainable Computing. But TechBuyer realized that the methods used in the project could be offered to organizations wanting to understand the cost and footprint of their
server estate. This work won the Energy Impact Award at the DCD Awards 2022, sponsored by Node Pole.
“Looking at the research, we saw the opportunity to apply the results beyond academia via a brand-new commercial tool,” says Kenny, who is now director of TechBuyer’s new Interact division, set up to offer the Interact tool to users.
Interact launched in 2020, served its first customer in 2021 and now works with cloud providers, and global leaders in financial services.
Interact is the only tool to go beyond PUE and calculate the energy savings from server configurations, and offer clear and usable guidance that can reduce the impact of an organization’s IT.
Its guidance is tailored according to the carbon mix of the electricity available in a given geography, and the type of data center under consideration - for instance enterprise, or colocation.
According to an analysis of more than 150 data centers, the tool’s recommendations could save the average data center 8.3MWh of energy, 2.8 tonnes of CO2e emissions, and £1 million in costs per year.
Put another way, customers can unlock 300 percent more compute power for just seven percent more energy.
For too long, data centers have treated servers and other IT hardware as a consumable that can be trashed when its lifetime is over.
The world is learning that natural resources are finite. Right now, electronic waste (e-waste) is the fastest-growing stream of such material. More than 11 billion tons of it are produced each year and, despite directives and urging from national and regional governments, only 20 percent of it is recycled.
Microsoft’s cloud is expanding rapidly, with millions of servers deployed in more than 140 countries. This represents a considerable throughput of hardware, and Microsoft has created Circular Centers at major data center campuses, to pioneer the use of regenerative and restorative cycles for e-waste.
Circular Centers process decommissioned cloud servers, sorting components and equipment to optimize material that can be reused or repurposed - an activity which earned them the DCD Environmental Impact Award for 2022, sponsored by H&MV Engineering.
Microsoft’s plan is for the whole company to achieve zero-waste by 2030, and the company is on track to reuse or recycle 90 percent of its cloud computing hardware assets by 2025.
In 2021, the company ran a pilot Circular Center in its Amsterdam campus, a major site which was able to gather seven percent of the servers discarded by Microsoft’s global cloud operations.
This facility now processes decommissioned cloud hardware so that 82 percent of all decommissioned assets are reused or recycled. The Center is on track to reach 90 percent by 2025.
The Center uses routing software and Microsoft’s own Dynamics 365 ERP/CRM application.
At the start of 2022, Circular Centers were opened in Boydton, Virginia, and Dublin, Ireland, with Singapore following in June 2022, and most recently Chicago, Illinois. Microsoft plans to extend the model to most of its cloud assets.
As well as addressing Microsoft’s own waste stream, the Circular Center is offered as a model for others. The business process has been shared transparently, for others to adopt or adapt for their own situation.
“We are transparently sharing our approach, impact, and lessons learned to help other organizations reimagine business models towards a more sustainable and circular economy,” the company said in a statement.
In particular, Microsoft learned that a Circular Center must have involvement from multiple stakeholders and teams, from Finance to Construction to Planning to Engineering to Operations. Getting all these players on board is a crucial first step.
The company also learned that the circular economy is not limited to any one company. Its Circular Centers make partnerships with suppliers, third-party IT Asset Disposition partners, and the community.
The Circular Center approach feeds back into sustainable design and responsible sourcing of the original assets. As Microsoft designs much of its hardware portfolio, it has been able to build sustainability into the systems that it commissions.
The company now runs an Intelligent Disposition and Routing System (IDARS) that will deliver the zero waste plan, picking the best disposition path for every component.
Across the whole system, security of customer data, which may be held in discarded storage systems, is built in.
Much of the repurposed equipment is provided for schools and specialized skills training programs, and the Circular Centers are also creating new jobs with transferable skills, seeding a whole new generation of circular economy experts.
“The core of our Circular Center strategy is to empower customers and the world to decouple growth from the use of virgin resources,” says Microsoft. “But the impact goes beyond sustainability. Our Circular Centers increase value, ensure compliance, and increase resiliency. Circular Centers demonstrate the business opportunity available to any company if they place circularity and sustainability at the center of their operations.”
Microsoft's corporate vice president Noelle Walsh also won the Sustainability Pioneer Award, sponsored by Eaton. "I very much consider it an organizationwide recognition," she said. "My team is incredibly dedicated to making our ambitious sustainability goals a reality, and this award is for them. Thank you, DCD, for this recognition."
The global buildout of data centers around the world has improved digital services for many, ushering in a new world of connectivity.
But such a future has not been evenly shared, with billions left poorly connected or entirely unconnected. This means they are unable to take part in the global economy, and lack access to education and healthcare resources.
With the pandemic, this digital divide became even more apparent. As many simply shifted to remote work, the disconnected were forced to risk their health every day.
With the data center sector having benefitted more than most from the digital age, it is only fitting that it gives back.
That's why the DCD Global Awards 2022 winner of the Social Impact Award was Equinix for its work in India, with the award sponsored by Huawei.
The company worked with the Magic Bus Foundation to donate laptops, smart TVs, and a projector to 23 schools in and around Mumbai.
Volunteers at Equinix India then held virtual career advice sessions with students, as well as digital training sessions with teachers. “[I] finally got a clear path towards my career,” one student said. “My heart is filled with gratitude for all volunteers who gave me this mind-blowing window of opportunity.”
The company estimates that its hardware donations will help around 6,000 students, while its virtual sessions have reached around 2,500 students.
Equinix said that it also initiated and sponsored Menstrual Hygiene and Awareness drives, along with an NGO and medical organizations, to teach menstrual health to adolescent girls in nine schools as well as women in the neighboring communities around its data centers.
The company currently operates two data centers in Mumbai, which it acquired in 2021 from GPX Global Systems for $161 million.
Incinerators were also provided to help dispose of menstrual pads. The company claims more than 1,500 girls have benefitted from the effort.
With the work being done in tandem with Magic Bus, other companies and individuals are welcome to donate time or money to help improve digital literacy in India.
Around 60 percent of India's rural population does not use the Internet, Nielsen reports, partially due to a lack of access, and partially due to a lack of digital literacy.
Though Singapore’s data center moratorium is slowly ending, demand is still outstripping supply in the city-state.
As a result, Johor, just across the border in southern Malaysia, is fast becoming the home of a number of new data center developments. Bridge recently launched the first phase of its MY06 data center, for which it won DCD’s Data Center Design Innovation Award.
This project was the first cooperative construction outside China between Chindata Group, and its wholly-owned subsidiary, Bridge Data Centers.
Located on a 40-acre site in the Sedenak business park outside of Johor, the new campus will span three buildings and have a combined capacity of 110MW at full build out.
The site, first announced in November 2021, uses a containerized modular building method, using modules that are made in a fabrication facility in China and assembled quickly on site in Johor.
Bridge said the team had to manage close to 400 containerized building and facility modules and complete assembly on site in less than 30 days, using the company’s Prefabricated, Prefinished Volumetric Construction (PPVC) modular building method. PPVC uses free-standing containerized modules that arrive complete with internal finishes, fixtures, fittings, white space, and IT fit-out.
The first phase of MY06 was delivered - from site due diligence, land acquisition, design, procurement, prefabrication, and shipment, through to site construction, testing, commissioning, and data hall handover - in just 12 months. The first 20MW phase of the site went live in October 2022, with TikTok-owner ByteDance as the anchor tenant.
“Using such technics enables the project to be delivered ahead of time, being cost-effective and lowers environmental impact, while mitigating the labor shortage in the labor crunched eco-system. It also promotes safety, whilst producing high quality-controlled data centers by reducing errors and improved efficiencies,” the company said. “Led by experienced and highly positive Singapore, Malaysia ground team and China’s remote team, they made sure the structural design as well as each process and fitting up was timely for this fastpaced project implementation.”
However, construction was not without its challenges. Due to Covid-related delays, some equipment did not arrive on site in time, so the construction team had to repeatedly adjust installation processes and procedures around disrupted deliveries. The China team also played an active role in adjusting the schedule for containers leaving the Chinese factory.
The project utilizes immersion cooling in combination with cold plate liquid cooling (a warm water-cooled, indirect liquid cooling technology) and evaporative cooling (both direct and indirect). MY06 phase 1 was designed around cold plate cooling, while phase 2 will use an immersion cooling system; MY06’s cooling is designed to achieve an annualized PUE of under 1.2.
“We are deeply honored to receive the Data Center Design Innovation award at the DCD Global Awards 2022,” the company said after winning the award, sponsored by Schneider Electric. “It is a great inspiration that motivates us to continuously design and operate our data centers not only in Malaysia, but also in Thailand, India, and other emerging markets.”
“With the development of MY06, comes the creation of an international and local supply chain, career advancement, and upskilling for the local workforce in the high-tech industry and imparting knowledge to the data center industry, benefiting the entire ecosystem and community.”
While the continent has not traditionally been a data center hub, digital infrastructure projects are blooming across Africa.
The deployment of new subsea cables, the roll-out of 5G, and a global trend towards the cloud mean the continent is seeing huge numbers of new data centers. And South Africa - always the biggest hub on the continent - continues to see a number of new projects being developed.
One of those projects was from DigitalBridge’s Vantage Data Centers, which launched a new facility in Johannesburg, South Africa, this year. The company won DCD’s Middle East & Africa Data Center Development Award for its efforts.
In March 2021, Vantage was contacted by one of its strategic customers who “urgently needed significant IT capacity in South Africa.” A few months later in October, after a site was located and secured alongside planning permission and power, Vantage commenced the build of a data center campus in Johannesburg, the company’s first in Africa.
Located in Waterfall City in the Midrand area of Johannesburg, the first data center (JNB11) was completed in July 2022: a two-story, 35,000 square foot, 16MW building. Delivery took place ahead of schedule in just 10 months, using prefabricated electrical containers and equipment designed and pre-manufactured by the company’s suppliers, with zero lost-time incidents
over 1.5 million working hours.
One challenge was the short delivery timescale required by the customer for the initial facility, exacerbated by the imminent rainy season (December to February).
Once fully developed, the campus will consist of three facilities across 30 acres, with 650,000 square feet (60,000 square meters) of data center space and 80MW of capacity. Vantage has said it is investing more than $1 billion in the site.
Powered by Eskom, the campus features a dedicated on-site, high-voltage substation. The buildings use a closed-loop chilled water system served by air-cooled chillers, alongside an integrated economizer which reduces compressor energy based on outside ambient temperature. The facility has an annual PUE of 1.25.
To mitigate power rationing or load shedding – common to the region – a 260,000-liter back-up fuel system was installed in the first phase, equal to 48 hours of fuel at full load to serve JNB11. This will be expanded to 1.6 million liters once the campus is fully built-out.
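Those fuel figures imply a simple budget. As a rough back-of-the-envelope check - assuming the quoted 48 hours applies to JNB11's 16MW load and that diesel burn rate scales roughly linearly with load - the numbers work out as follows:

```python
# Rough check of the quoted fuel figures; assumes the 48 hours applies to JNB11's
# 16MW load and that diesel burn rate scales roughly linearly with IT load.
phase1_litres = 260_000
phase1_hours = 48
phase1_mw = 16

litres_per_hour = phase1_litres / phase1_hours        # burn rate at 16MW
litres_per_mw_hour = litres_per_hour / phase1_mw

full_campus_litres = 1_600_000
full_campus_mw = 80
autonomy_hours = full_campus_litres / (litres_per_mw_hour * full_campus_mw)

print(f"~{litres_per_hour:.0f} L/hour at {phase1_mw}MW")
print(f"~{autonomy_hours:.0f} hours of autonomy at {full_campus_mw}MW full load")
```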
Vantage has also signed a 20-year 87MWp power purchase agreement (PPA) with SolarAfrica to power the facility with renewable energy from a solar farm. Vantage has pledged to reach net zero by 2030.
“The project was a smooth process having thoroughly researched the right location to ascertain available land,
power and ease of planning – greatly assisted by the partnership forged with Attacq, a leading Real Estate Investment Trust (REIT) in the region, and the good relationship maintained with Eskom,” the company said.
“We are thrilled to announce that we've won the DCD Middle East & Africa Data Center Development Award in recognition of our Johannesburg campus,” Vantage said after winning the award, sponsored by Meesons. “This award recognizes the hard work and the expertise of our teams who worked diligently to complete our first African data center (JNB11).”
At the event, Abed Jishi, VP of design engineering EMEA, told DCD: “It’s awesome [to win] after all the hard work that we’ve done for the past couple of years to develop that campus.”
“It’s one of the most resilient data centers in the region. One of the biggest achievements for Vantage is to make sure that, building such a big campus in such a developed country, we’re still tapping onto renewable energy sources.”
“One of the big hurdles was the expertise, trying to find the right expertise for the different engineering discipline we go through. I can’t say it was an easy task, but South Africa being so connected to the world, it gave us the leverage to bring expertise to the country in conjunction with local expertise.”
How the Edge will be deployed across the world is one of the fundamental questions faced by the data center sector over the next decade.
Much of the industry's focus, including our own, has been on what's happening in the US and Europe. But perhaps we should have been looking to Mexico.
Mexico Telecom Partners (MTP), which is owned by Digital Bridge and Macquarie Mexican Infrastructure Fund, operates thousands of towers across the country.
Now, it hopes to get into the Edge. Under the project name of 'The Data Center in Your City,' MTP has begun to deploy small Edge containers in cities across the country.
The company now has some 59 Edge sites (deployed on behalf of a mobile customer), with 11MW of total power capacity, but it won the Latin America Data Center Development Award at the DCD Awards show for its work on the two most recent facilities.
Those sites, in León and Tijuana, are based on its new generation of design, which has achieved an Uptime Tier III design certification. The first was deployed in November 2021, with the second site coming a month later.
MTP developed a proprietary design for its containers, which it says allows it to have more control over changing the system based on feedback and experience. It has a LEGO-type architecture, to allow for adding and replacing components.
The deployments are unmanned, and use free cooling, aisle containment, and load distribution. They run at temperatures above the ASHRAE average, and have a centralized BMS to spot issues.
The company claims a PUE of 1.5, and says each site has 2N in power.
MTP's Sales SVP Javier Wiechers Veloz told DCD that "it has been a lot of effort for myself and my team, as well as our investors and our leaders, thank you for this award.”
Given the supply and labor shortages that have constrained the market in recent years, any data center construction that manages to remain somewhat on track is worth celebrating.
But NTT Global Data Centers' Phoenix PH1 facility had to overcome another challenge - the team was still being formed.
At the same time, it also had to compete for talent and resources in a region with larger, more established projects.
NTT GDC has moved to a standardized global design that it hopes will speed up future builds, but this project was its first - adding yet more complexity.
Dealing with these issues and still building a LEED-certified facility on schedule are among the reasons why the company won the DCD Awards 2022 Data Center Construction Team of the Year award, sponsored by ZincFive.
The company is building a 102-acre data center campus along the Elliot Road Technology Corridor in Mesa, Arizona. At full build-out, the campus will consist of seven buildings offering a total of 240MW of critical IT load, along with an on-site substation with 480MVA.
The data center relies on a closed-loop chilled water system with air-cooled chillers and integral free cooling that
NTT GDC says minimizes water usage - a key feature in the resource-constrained Arizona landscape.
The architecture and engineering team began work in late 2020, while construction began in early 2021, with an aim of opening the first two-story 36MW building in February 2022.
Staffing proved an immediate challenge, as NTT GDC looked to expand rapidly beyond its RagingWire roots in the US and become a major player. It took until late summer 2021 for the final team to be established, with key hires made midway through the project.
The company said that it held off-site team-building events with new hires, general and sub-contractors, and the original team to ensure that everyone knew each other well.
NTT GDC said that it held structured pull-planning sessions focused on the details of the schedule, where everyone could give input and commit to the plan.
During the commissioning phase, all responsible parties reviewed and agreed to the commissioning plan and scripts. The company also held daily and weekly meetings to ensure teams were aware of upcoming activities.
"One of our biggest successes was keeping morale high through challenging moments by going above and beyond to celebrate wins," NTT said in its awards submission.
"Having an owner’s team that worked closely with and identified as an extension of the general contractor’s
team was clutch in building and maintaining a trusting relationship and sense of shared success for all involved."
With the project just the first phase of a larger campus, the team put in miles of underground fiber and electrical duct banks for later phases.
As for the concrete and steel needed for the building, NTT GDC used its in-house Vendor Managed Inventory to order equipment earlier than usual in hopes of getting ahead of supply chain challenges – but the challenges remained.
“Having strong partnerships with our suppliers and vendors coupled with a hybrid modular approach in our equipment yards allowed the project team to pivot rapidly and keep the project on schedule," the company said.
While it was the first of its new standardized builds, NTT GDC said that the approach has already begun to pay dividends.
"We had to have a clear scope so the contractors could buyout the project in a timely manner and not worry about an evolving design or ongoing changes that can cripple large projects," the company said. "This allowed us to lock in production slots early for labor, materials and equipment. Our approach to solidify the scope early and not change it paid off."
NTT Global Data Centers built its NAV1A Mahape project in Navi Mumbai, Maharashtra, India, on an industrial site, but it created a setting with a futuristic design - and built it in 20 months, despite multiple unusual challenges.
“The futuristic and sustainable data center design has helped our customers to meet their business goals and reduce TCO,” says the NTT project entry. “This is a 90MW facility spread across 4.3 acres of land, which helps our customers to scale up as per their business requirements.”
The campus has high levels of power redundancy, and impressive environmental credentials, while providing employment for 500 staff.
The building process involved dealing with the challenges of a high water table and potential flooding - features which stood out for judges of the Asia Pacific Data Center Development Award, sponsored by the DCD>Academy.
In the past, the NAV1A plot held industrial plants, which had one immediate bonus: the availability of power which enabled a good level of power redundancy for the new facility. The site also has global connectivity with high-speed Internet to NTT’s Data Center Interconnect-network backbone.
The facility is aimed particularly at banking, financial services, and insurance (BFSI) customers, so resilience and reliability were particularly strong requirements.
Although the availability of power is a
plus, it represented a challenge: NTT had to connect the mega campus to an on-site gas-insulated substation (GIS) with power distribution at 220kV.
That was the first time NTT India had dealt with this power configuration, so the build was carried out in phases, from an initial 22kV line, to an eventual 220kV line after commissioning.
During the build, engineers had to deal with water. The site has a high water table, 10ft below ground level, and is surrounded by a natural drain line.
This caused significant extra engineering work in the basement area, which is used for parking and mechanical and electrical services. The construction team had to quickly sink more than four industrial bore-wells, and to avoid delays it ran submersible de-watering pumps 24 hours a day.
Water figured again in a significant redesign. Mumbai suffered serious floods as recently as July 2005, and water levels worldwide are rising. The plans were adjusted to raise the entire building 1.5m above the current road level.
The facility was originally planned for wholesale use by hyperscale tenants, but a new client requirement emerged during the build. One floor had to be redesigned and implemented within the timeframe.
Operating at this speed needed a large labor force assembled from across India, which could have presented a safety challenge. NTT adopted safety programs including strict adherence to safety rules and regulations.
The safety procedures were made more complex by the arrival of the global Covid pandemic. During the imposed lockdown, the team extended its existing first aid resources, deploying doctors and nurses to take care of the entire construction crew and provide regular health check-ups.
Amid the lockdown, the team stayed safely within the construction campus, with all necessary facilities provided for laborers, engineers, and consultants creating a “family environment.”
The project began in late 2020, with a target schedule of 20 months, including the process of finalizing the location - a decision which has a massive impact on aspects like the facility’s access to renewable energy or its ability to use free cooling.
The design has sustainability built in, down to its smaller subsections. It uses modern diesel generators which minimize fuel consumption, and power transformers with a natural mineral oil coolant, which is environmentally friendly and biodegradable.
NTT also chose efficient and quiet chiller units, alongside the use of liquid immersion cooling and direct contact cooling, aimed at reducing the PUE and increasing efficiency. This is the first time a service provider has deployed these technologies in India.
The whole building is sealed to reduce energy loss, and rainwater is harvested with “zero discharge” methods. The building can store 100,000 liters of rainwater, all of which can be utilized throughout the campus. Recycled water is used for gardening and toilet flushing.
Another detail was the use of local suppliers to minimize the energy use and delay involved in transporting materials. Over 95 percent of all construction material was purchased from within 800km of the eventual building, to minimize the carbon footprint.
One-third of the open terrace is given over to solar panels which provide renewable energy to the facility.
E-waste is managed, and organic waste is separated with dry waste turned into compost for in-house gardening purposes.
Notably, NTT did not take this approach in isolation, creating benefits only for itself: its investment in stable power to the site during construction actually helped other companies to open their plants in the same location during this period.
California-based healthcare company Kaiser Permanente is one of the largest nonprofit healthcare plans in the United States, with over 12 million members. The company operates 39 hospitals and more than 700 medical offices, with over 300,000 personnel, including more than 87,000 physicians and nurses.
Like many companies, in recent years it has been looking to reduce its infrastructure footprint. As part of a program to reduce IT waste, the company was looking to exit a leased data center and migrate any non-decommissioned systems to the cloud or a Kaiser-owned facility.
But when dealing with healthcare systems, access to data and applications can literally mean life or death in some cases. So ensuring that the company was able to exit the leased site without downtime was a key imperative for the company. For its efforts, the company won the DCD Data Center Operations Team of the Year award.
Kaiser Permanente first entered the Irvine, California, colocation facility in 2008 due to limited floor space and power capacity in its owned data centers. Eventually, the company decided to terminate its lease at the facility as part of its DCO Urban Renewal (UR) Program.
The UR Program was developed to find
unused, under-utilized, abandoned, or partially decommissioned IT equipment that was consuming data center resources and decommission it. The company’s main goal was to close down the Irvine facility without any impact to Kaiser Permanente members or those who perform patient care.
The project encompassed migrating all compute environments to Kaiser Permanente-owned facilities and cloud providers, and/or decommissioning them. At the time the project started, the company’s footprint at the facility totaled 274 racks hosting just under 1,900 servers; the equipment covered 11,000 sq ft and required 1.4MW to power.
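Put in per-rack terms, the estate was of fairly typical enterprise density. A quick illustrative calculation, using simple averages derived from the figures quoted above (not Kaiser's actual per-rack loads):

```python
# Simple averages from the figures quoted above - illustrative arithmetic only,
# not Kaiser Permanente's actual per-rack loads.
racks, servers, sq_ft, megawatts = 274, 1_900, 11_000, 1.4

print(f"servers per rack: {servers / racks:.1f}")                  # ~6.9
print(f"average kW per rack: {megawatts * 1_000 / racks:.1f}")     # ~5.1
print(f"average W per sq ft: {megawatts * 1_000_000 / sq_ft:.0f}") # ~127
```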
“Given the complexity of shutting down an entire data center, there were many roadblocks along the way. The risk of impact to the customers was extremely high,” according to Kaiser Permanente. “Every server connection had to be evaluated, every application studied, and every migration carefully planned.”
Before the team decommissioned the first equipment, several brainstorming sessions were held with the core team members - including compute, storage, network, application, and design engineering - as well as non-technical members such as customer advocates, communications, and healthcare professionals.
Kaiser’s Burneko oversaw data center operations for the project, while Darren O'Toole, senior director of data solutions at the company, was the technical lead. The company also worked with IBM spin-off Kyndryl for the project.
During weekly planning sessions, the team developed multiple migration options for each application environment and jointly made the recommendations on which approach was best.
The teams then worked together to develop the step-by-step detailed migration plans; as each application was unique, the company wasn’t able to utilize a cookie-cutter implementation plan.
Each core team member had the right to object to anything in the implementation plan and voice their concerns openly.
“You can’t have a great plan without some healthy debates!” the company said.
Given the healthcare-critical nature of the company, downtime was unacceptable, even during a decommissioning project.
“Since we provide urgent medical care to millions of members, there was no option to just shut a system down without studying every possible impact,” the company said.
In one scenario, after months of planning, the team determined that there was just too much risk involved in physically moving a storage system associated with approximately 80,000 users. Although much planning had gone into a “lift and shift” physical move over a weekend, it would have required an outage for all users.
Instead, after much consultation, the team determined that a no-impact, across-the-wire migration would be much easier for the user community, and it was able to migrate all data with no user impact.
One of the key challenges identified during the project was that end users were using their personal and shared drives to store information which could impact patient care if downtime was experienced during the migration. As a result, the company’s original plan to lift-and-shift some servers from Irvine to a Kaiser Permanente-owned facility was changed on the fly to a network migration, so no downtime would be experienced.
“This took a tremendous amount of coordination and teamwork across multiple Kaiser Permanente organizations as well as our vendor partners,” the company said - but the network data migration approach has since been utilized for other company efforts involving moving large amounts of data across the network.
Kaiser said the project took “many long nights, weekends, and holidays to complete.” Just over 1,000 servers were decommissioned, saving more than $1 million in license and support costs and reducing the need to build new data center space. The company saved more than $4 million through the termination of the Irvine lease.
As of August 2022, the Irvine data center was officially shut down with all compute and business operating equipment fully removed from the data center. And, crucially, the data center was shut down with zero impact to Kaiser Permanente members and employees.
On winning the award, sponsored by Excool, Kaiser’s Burneko said: “On behalf of the entire Kaiser Permanente organization we are very excited to be selected as the winner. It certainly did take a village across many arms of the Kaiser organization to accomplish the goal of closing down one of our colo facilities in Irvine, California. We are very honored to be selected by peers in our industry for this very prestigious award.”
The data center skills gap is a well known issue, but solving it has proved a slow and difficult challenge.
That's why the strength of the applicants for this year's Young Mission Critical Engineer of the Year at the DCD Awards 2022 was a cause for hope.
Michael Murray, associate director at award sponsor Kirby Group Engineering, said: "We've supported this category proudly each year for very good reason. We're nothing without the next generation of talent. Past winners have gone on to progress and advance their careers in many ways."
The winner this year was Niovi Papanikolaou, a consultant at Northshore IO Limited. "I'm thrilled. It's great to be recognized," she said.
During the submission process, colleagues raved about her enthusiasm, ability to ask the right questions, and rapid growth at the company.
After gaining an MEng degree in Mechanical Engineering and an MSc degree in Sustainable Energy
from Imperial College London, Papanikolaou worked at ENGIE as an Energy Analyst for Energy Centres, before becoming an Energy Performance Engineer at SSE.
Then she moved into the data center sector. "I decided to go into the industry because the impact you can have with energy efficiency in this sector is massive, I can't even put it into words," she told DCD. "It's nice doing something that you see in the end as an actual result."
At Northshore, Papanikolaou became the leader of data center energy modeling projects in three months, developing models that consider technical variables such as CRAC/CRAH and UPS performance.
"The impact that you can have on energy efficiency is why I think other young people should join this amazing sector."
Looking to the future, she said that in five to ten years, she hopes that she "will still be in the sector and making a difference from an environmental perspective, which is my true passion, and I know that this can be achieved in data centers."
Finally, some fresh blood for an industry in need of new voices
DCD’s Outstanding Contribution Award goes to an individual who has made their own distinct mark on the industry, achieving a goal that is different to what anyone else might offer.
Lee Kirby is this year’s winner of the award, which is sponsored by Mercury Engineering - and he can certainly claim a unique data center career.
In his 40 years in the data center industry, he has assumed roles including the president of the Uptime Institute, the industry’s authority on reliability. He has led startups and turnarounds, and built global operations.
But alongside that, Kirby has had an equally stellar military career. In 36 years he has combined active service and reservist roles, with tours in multiple countries including Iraq (2009-2010) where he helped rebuild civilian infrastructure.
He retired with the rank of Colonel and serves as an advisor to many veteran support organizations.
But the work which earned him this Award combines his two passions. In 2013, he founded Salute Mission Critical, to help data centers operate more reliably, by providing trained military veterans - at the same time as ensuring those veterans have valuable work which recognizes their unique abilities and expertise.
“We started Salute Mission Critical in 2013 to build a bridge from the military to the data center industry,” he explained to DCD at our Awards event.
“There were a lot of unemployed veterans and the rate was too high to be acceptable. And our industry is short of talent. So we brought those two problems together.”
When Salute started, the problems were extreme. “When we first started, the unemployment rate in the States was over 25 percent for first-time soldiers who had deployed, come back, and gotten out of the military,” Kirby remembers. “Twelve percent of the people we were hiring were homeless at the time.”
Today the problem is not unemployment, but under-employment: “Veterans will be hired into positions that just aren't challenging - and I think our industry has a great opportunity to challenge them and let them continue to grow and contribute.”
Every soldier has had thousands of dollars and thousands of hours of training invested in creating skills and responsibilities that would not be available in the civilian sector: “They've got leadership skills, a work ethic, complex problem solving, and they can work under pressure.”
Military veterans have a special mindset, he says: “The one thing I really like is that mission failures are not an option. They will make it happen.”
They also hold a special place with the public: “The great thing about veterans is people always have a heartfelt response to them.”
However, all the staff in Salute earn their way: “No one ever makes a decision to hire veterans unless there's a commercial reason, because that's our capitalist society. We've shown there's a return on investment, that hiring veterans is good for business. It's the smart decision.
“I think it's a good decision morally, but I think it's a smart commercial decision and more people should get the training programs in place to do just what we've done.”
Despite his role as the founder and leader of the effort, Kirby says, “I don't feel deserving. This is the work of a lot of people with Salute Mission Critical. I feel humbled and honored to be even recognized for this.”
As rack densities rise and chips get hotter, some are turning to immersion-based, open-tub liquid cooling to beat the heat.
In an open-tub scenario, there are two distinct types of coolant - single phase and two-phase - the phase meaning the state the coolant is in at any given moment during the cooling loop.
Single-phase coolants will remain in a liquid state while two-phase coolants will change from a liquid state to a gaseous one as the heat transfer occurs. We will explore both examples through two real-world deployments.
Oil and gas computing specialist DownUnder GeoSolutions (DUG) opened its 15MW 'Bubba' supercomputer in a 22,000 square foot (2,044 sq m) data hall built in partnership with Skybox Data Centers in Houston, Texas. It was deployed in 2019.
At 250 petaflops (single precision) once fully deployed, DUG's 15MW high-performance computing system requires unique power and heat rejection systems to operate.
Designed in-house, the DUG HPC’s compute elements are entirely cooled by complete immersion in a dielectric fluid,
specifically selected to operate at raised temperatures. Servers are slotted vertically into an open tub - essentially a rack on its back - with heatsinks removed so that the chips are in direct contact with the fluid.
The fluid is non-toxic, non-flammable, biodegradable, non-polar, has low viscosity and, crucially, will not conduct electricity.
The heat exchangers are submerged in the tank with the computer equipment, meaning that no dielectric fluid ever leaves the tank, and it has a centralized power supply.
The deployment comes with a swathe of benefits, from considerably reducing total power consumption of the facility, to massively reducing the cooling system’s
complexity. Mark Lommers, chief engineer at DUG and the designer of this solution, told DCD that “for every 1MW of real-time compute you want to use, you end up using 1.55MW of power or thereabout” for traditional chilled water cooling systems.
In an immersion cooling system, lots of power-hungry equipment is removed. A prime example of this is the server fans. Lommers added that “there are no chilled water pumps and there are no chillers that get involved because there's no below room temperature water involved,” concluding that “the actual total power that we get from that is only 1.014MW, which is a big change over the 1.55MW that we had before.”
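Those figures translate directly into an effective facility overhead. A quick illustration using only the numbers Lommers quotes (1.55MW of total power per 1MW of compute for a traditional chilled-water design, versus 1.014MW for the immersion tanks):

```python
# Effective overhead implied by the figures quoted above (a PUE-like ratio;
# note the 1.55 figure includes loads such as server fans, so it is not a strict PUE).
def overhead(total_mw: float, it_mw: float = 1.0) -> float:
    return total_mw / it_mw

chilled_water = overhead(1.55)   # traditional chilled-water cooling
immersion = overhead(1.014)      # DUG's immersion tanks

saving = (chilled_water - immersion) / chilled_water
print(f"chilled water: {chilled_water:.3f}, immersion: {immersion:.3f}")
print(f"total power reduced by ~{saving:.0%} for the same compute")
```

On these figures, total power drops by roughly a third for the same compute.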
The cooling loop is massively simplified and thus more reliable. Because it must deal with fewer changes in temperature, the overall system is also more robust, with fewer controllers needing to work in tandem for efficient operation.
As the components sit below the fluid level, the company claims that there is no component oxidation or fouling. “We see a very, very high benefit in reduced maintenance costs and reduced equipment failure rate as well,” Lommers said.
In 2021, Microsoft deployed a two-phase immersion cooling solution for its public cloud workloads, developed in partnership with Taiwanese server manufacturer Wiwynn.
At the time, the company said that “emails and other communications sent between Microsoft employees are literally making liquid boil inside a steel holding tank packed with computer servers at this data center on the eastern bank of the Columbia River.”
Inside Microsoft’s steel holding tank the heat generated by the bare chips
makes the fluid boil. As vapors rise, they meet a condenser coil found in the lid of the tank. Vapors hit the coil and condense, turning back into a liquid state and falling back into the tub, effectively creating a closed-loop cooling system.
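The loop works because vaporization absorbs a great deal of heat per kilogram of fluid. A very rough sketch of the energy balance follows - the IT load and latent heat figure are assumed placeholders for illustration, not properties of Microsoft's actual tank or coolant.

```python
# Rough energy balance for a boiling/condensing (two-phase) tank. Both figures
# below are assumed placeholders for illustration, not properties of Microsoft's
# actual tank or coolant.
it_load_kw = 100        # heat dumped into the fluid by the servers (assumed)
h_fg_kj_per_kg = 100    # latent heat of vaporization of the coolant (assumed)

# At steady state, all of the IT heat is carried away as vapor and returned by
# the condenser coil in the lid: Q = m_dot * h_fg, so m_dot = Q / h_fg.
vapor_kg_per_s = it_load_kw / h_fg_kj_per_kg   # kW / (kJ/kg) = kg/s

print(f"~{vapor_kg_per_s:.1f} kg/s of coolant boils and recondenses at {it_load_kw} kW")
```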
As with the previous example, the cooling infrastructure is greatly reduced, as no air handlers or chillers are needed - a dry cooler (basically a large radiator) circulates warm coolant to the open-bath immersion tank, providing waterless cooling with no need to lose or evaporate water.
Special server boards are used that are smaller in size and blind-mate to their power connectors at the bottom of the tank; as in the other example, they are effectively slotted in, in this case stacked horizontally.
Another, less well-known aspect of this cooling approach is its ability to concentrate heat in the liquid loop. This in turn enables real heat reuse scenarios such as district heating, where the waste heat from air cooling is often too low-grade to be of any real use.
Furthermore, Microsoft also recognizes the overclocking opportunity such a solution brings. Husam Alissa, director of advanced cooling & performance at Microsoft, explained: “We could go as high and as low with densities
as we want to, we'd be able to support hundreds of kilowatts in single tank as we densify hardware.”
Microsoft has shared the designs and the learning behind this project with the wider industry through the Open Compute Project, as it looks to grow the ecosystem around this technology.
But there’s a reason why Microsoft hasn’t deployed this system in all of its data centers - the ecosystem is not quite there yet, staff aren’t trained for the new approaches, and critical questions around cooling solution supplies, security, and safety have yet to be answered.
More crucially, it is not yet clear how large the market for ultra-dense systems will be, with many racks still happily humming away at below 10kW.
Should that density rise, operators currently have a number of different approaches to choose from, and within that a variety of form factors and pathways they could take. There are no agreed standards, and no settled consensus, as the technology and its implementation remain in the early stages.
It will take projects like these, and their long term success, for others to feel comfortable to take the plunge.
While it was not the earliest to hop on the cloud computing bandwagon, Asia is arguably one of the strongest adopters of the cloud today. Indeed, data center growth is expected to accelerate in the Asia Pacific (APAC), driven by a new wave of hyperscale data centers designed to power the facilities of cloud giants and meet fast-growing demand in the region.
For businesses juggling multiple clouds, infrastructure as code offers an easy path forward.
Within the cloud, businesses are increasingly eyeing hybrid, multi-cloud deployments. The appeal of this approach lies in how it gives organizations the ability to shift workloads across cloud platforms for heightened resilience, while ensuring that they are not beholden to any one cloud platform.
Multi-cloud deployments aren’t just for nimble startups or technology-savvy enterprises, either, but also for the public sector. For instance, the Singaporean government agency GovTech shared some years ago how it was developing a hybrid, multi-cloud architecture.
Businesses in the region are spoilt for choice in terms of rolling out multi-cloud deployments on the public cloud. In Southeast Asia in particular, one can now find multiple cloud regions from the top cloud players such as Microsoft Azure, Amazon Web Services (AWS), and Google Cloud, as well as those from Chinese cloud firms such as Alibaba Cloud, Huawei Cloud, and Tencent Cloud.
But for all the enthusiasm for multi-cloud deployments, what is often glossed over is the inherent complexity of a multi-cloud deployment. Fully understanding and leveraging the capabilities of one cloud platform is a demanding enough undertaking all by itself, and is even more challenging when additional cloud platforms are thrown into the mix.
And building cloud-native applications or repurposing existing services to function flawlessly on top of disparate clouds calls not just for cloud know-how but also requires a thorough understanding of their many quirks and differing architectures.
This is where HashiCorp comes into the picture. The San Francisco-based software company offers a suite of open-source tools designed to support the development and deployment of large-scale cloud computing infrastructure. One of its linchpin products is Terraform, a well-established solution that lets businesses build and modify both cloud and on-premises resources using code.
This ability to manage and provision infrastructure with code instead of manual processes is known as infrastructure as code. Though HashiCorp was hardly the first on the scene, Terraform has become one of the most popular open-source tools for infrastructure automation.
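The "as code" part is literal: desired resources are declared in configuration files - HCL, or an equivalent JSON form that Terraform also accepts - and the tool reconciles what exists with what is declared. The sketch below is a minimal, hypothetical illustration of that idea; the resource type and arguments are placeholders, not a working configuration for any particular cloud.

```python
# Minimal sketch of "infrastructure as code": declare the desired resources in a
# file and let the provisioning tool reconcile reality with the declaration.
# Terraform can read JSON configurations like this one (*.tf.json); the resource
# type and arguments below are hypothetical placeholders, not a real provider.
import json

desired_infrastructure = {
    "resource": {
        "example_instance": {          # hypothetical resource type
            "web": {                   # a name for this particular resource
                "machine_type": "small",
                "region": "ap-southeast-1",
            }
        }
    }
}

with open("main.tf.json", "w") as f:
    json.dump(desired_infrastructure, f, indent=2)

# The file, not a sequence of console clicks, is now the source of truth: it can
# be reviewed, version-controlled, and re-applied to recreate the environment.
```

Because the declaration is just a file, it can be code-reviewed, versioned, and re-applied across environments - the properties that make the approach attractive across clouds.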
Crucially, its plugin architecture has attracted a massive network of third-party providers that actively build compatible products to significantly broaden its appeal. And, as with other firms that offer open-source products, HashiCorp makes money by charging for the additional operational and collaboration features that enterprises need.
But how does HashiCorp ensure continual support for the top public clouds, considering that they are constantly evolving and updating their features? Grant Orchard, the APJ Field CTO at HashiCorp, attributed this compatibility to a joint engineering effort with the cloud providers to minimize any gaps between feature delivery and availability within Terraform.
And though the various cloud providers have their own infrastructure as code offerings, Orchard says the advantage of going with HashiCorp is that it works across clouds, giving enterprises a single, scalable solution.
So, what is it that attracts Asia Pacific (APAC) customers to his company’s suite of products? Orchard highlighted two reasons that he consistently hears from APAC firms that adopt his organization’s products: a broad ecosystem, and the ability to bridge the cloud skills gap.
“With the breadth of technologies in use by our customers across both traditional data center vendors, public clouds, and SaaS providers, they need a vendor whose focus is on the ecosystem. And with over 2,600 providers for Terraform, we fit that bill better than any other vendor in the industry,” he told DCD.
In addition, Orchard says standardization through the HashiCorp Configuration Language (HCL) used to configure its solutions can help address the ongoing skills shortage in cloud professionals. Indeed, HCL as used by Terraform was lauded in GitHub’s latest State of the Octoverse report as the fastest-growing language on GitHub.
Though the focus of infrastructure as code is on provisioning infrastructure, there are secondary benefits to organizations. Referring to HashiCorp’s
managed offering which runs in its cloud, Orchard noted that it allows businesses to audit configuration changes with ease.
“[Another benefit] is providing audit controls through policy-as-code in Terraform Cloud. Requests that fall outside of compliance, and any decision to override them are all captured and logged. This makes the controls easier to implement, and the auditing process itself less arduous and expensive,” he explained.
Multi-cloud deployments are increasing in APAC. According to HashiCorp’s recent State of Cloud Strategy Survey 2022, over eight in 10 APAC respondents choose multi-cloud, with 46 percent already using multi-cloud infrastructures and an additional 38 percent saying they will be within the next 12 months. Financial services firms were the early adopters in this space, says Orchard, though uptake has also been strong across retail, resources, telecommunications, and the public sector.
One of the organizations that took to the cloud to complement its IT infrastructure in Manila is the Asian Development Bank (ADB). With the pressing need to establish a new disaster recovery location in APAC, the team turned to Terraform to quickly build up its disaster recovery site on the Azure cloud in the Singapore region.
According to team lead Krista Lozada, HCL was easy to pick up and served as a unifying language between the network and server teams. And defining everything as code meant that the latest configuration is always captured, while changes can be quickly made and pushed out within minutes, instead of days or weeks.
For now, Orchard says organizations that are early adopters of the cloud are less prone to viewing multi-cloud challenges as a top problem. However, industries new to the space and not traditionally tech-savvy, such as the public sector, are experiencing this skills gap much more acutely.
Regardless, hybrid, multi-cloud deployments are the way forward – with infrastructure as code easing the journey. “If you were hesitating three to five years ago, I could argue that was prudence. Today I couldn’t make the same argument,” summed up Orchard.
5G
The world’s first commercial 5G services launched in 2019 in South Korea, with the US, UK, Germany, and China quickly following.
In most of the world, 5G is now blossoming. The US now reports 5,000 cities covered, and China says it has over 250 million 5G subscriptions, served by two million 5G base stations. Network provider Ericsson says there will be one billion connections worldwide by the end of this year, beating 4G’s rollout by two years.
With Ericsson predicting five billion 5G subscribers by 2028, the equivalent of 60 percent of the world’s population, you might think that the whole world is adopting 5G - but you’d be wrong.
One entire continent is falling behind in 5G. In Africa, around a dozen nations have launched services (Botswana, Kenya, Mauritius, Madagascar, Nigeria, Seychelles, South Africa, Tanzania, Togo, Zimbabwe, and Zambia).
But Africa is a patchwork of 54 countries. And penetration is predicted to be slow.
By 2027, Ericsson predicts that 80 percent of phone users in Europe will have 5G service. At the same time, 5G subscriptions in Africa, home to 1.4 billion people, will hit just 10 percent.
Why will so few people in Africa get access to 5G services?
A handful of African countries have switched to 5G services, but how successful are the rollouts?
A big part of the answer is cost, and a lack of demand in the largely rural populations of Africa, says Mark Walker, associate vice president for South, East, and West Africa at IDC Middle East, Africa & Turkey.
“Concept-wise, 5G is a great technology - there’s no disputing that. From a physical science point of view, there are constraints, notably in terms of range,” says Walker, who is based in South Africa.
“The deployment model is for high-density environments such as factories, so it’s not good for long-range comms. In Africa, a lot of things are long range and the other issues are availability and cost. You have to get those things right in Africa.”
Walker says operators will pick only certain opportunities.
“The cherry-picking is done based on usage patterns and industry uptake, so where it will have the biggest impact. This will tend to be in financial districts or where the government is, plus manufacturing environments.”
This might change, he says, if AI and IoT get traction in Africa, but that is “a bit of a chicken and egg situation.”
There’s another issue. A lot of use cases for 5G are around automation, and these can be a lot less compelling in Africa, where labor is relatively cheap: “It makes sense to deploy labor (because it’s cheap) to do things instead of deploying technology to do certain things that rely on 5G communications.”
Orange is certainly choosing its opportunities cautiously. The operator has 120 million customers in ten African countries, which gives it access to nearly 10 percent of the population, and it invests €1 billion ($1.1bn) every year in Africa and the Middle East.
Despite this, Orange has only just deployed its first 5G network in Africa, in Botswana.
“When it comes to launching 5G we’re aiming to do this country by country,” Jocelyn Karakula, CTIO, Orange MEA, said in an interview with DCD.
“The access to the technology is directly dependent on spectrum or location, plus the price of it, which varies from one country to another.”
Botswana came first because the spectrum was affordable, Karakula told us.
Next year, Orange plans to launch 5G services in three to six more countries
across Africa and the Middle East, Karakula said, with the Ivory Coast and Senegal likely high on the list.
Vodafone subsidiary Vodacom was the first to launch 5G in its home market, South Africa, as far back as May 2020. Vodacom’s 5G is now available in all nine provinces in the country.
Vodacom also operates in several other African countries including the Democratic Republic of Congo, Lesotho, Mozambique, and Tanzania, while its subsidiary Safaricom operates in Kenya and Ethiopia.
Like Orange, Vodacom is launching where the demand is, said a spokesperson: “Our 5G coverage rollout will continue to be driven by relevant use cases, as well as consumer and corporate demands. We are currently deploying 5G at our existing infrastructure (where 2G/3G/4G sites are deployed already).
Advanced 5G use cases will need MEC (multi-access Edge computing) to support the technology implementation, which most African countries still need to deploy.”
Despite the interest in 5G, the demand for 4G and its services won’t go away overnight, and if anything will flourish further as operators across the world begin to switch off 2G and 3G services, to re-purpose this spectrum into 4G and 5G networks.
With South Africa outlining plans to switch off these legacy systems within the next three years, operators will be able to repurpose the spectrum into 4G and 5G.
“4G will continue to play an essential role in our network coverage plans,” Vodacom told DCD, stressing this is the best way to cover rural areas: “We continue to introduce new network sites in rural communities across South Africa, with 95.8 percent of the rural population now covered by our 4G network.”
Orange’s Karakula agrees: “When thinking about accelerating the rollout of 5G services, it’s important that we look to modernize our 4G networks first.”
4G addresses where the people of Africa are at the moment, it seems: “4G is really the accelerator for mobile data and the services that users are consuming,” said Karakula.
You can see this in Guinea where, far from launching 5G, MTN is only just launching the pilot stage of its 4G network in the country.
Meanwhile, Namibia is still looking to expand its 4G network and eyeing investment from the private sector to support this. There will be a 5G launch there next year, but the priority seems to be around 4G at the moment.
With investment money tight, one organization seeking to benefit is Chinese vendor Huawei, which recently helped South African operator Telkom launch its 5G network.
The vendor has faced setbacks in the US, UK, Australia, and Canada, because of its links to the Chinese government, but perhaps hopes its investments will afford it more of a welcome in Africa.
In early November, during Huawei’s ‘5G Lighting Up Digital’ event, Benjamin Hou, president of Huawei Northern Africa Carrier Business said that the company “will further increase its investment in Africa to support the steady development of 5G to facilitate digital transformation in the region.”
Karakula wouldn’t say which network partner Orange is using for its 5G network, but suggested the operator has more freedom there than in Europe, and would not put all its eggs in one basket.
“We have no limitation in working with partners compared to Europe, where it has been more the focus of the conversation,” he said. “We are very attentive to the fact that we do not want to get too dependent on any supplier, be it Chinese, European, or American and so on. It's very important for us to have balance.”
Returning to Ericsson’s predictions, even by 2028, 5G subscriptions will only account for 14 percent of Africa’s overall connections, while 55 percent will remain on 4G.
Strikingly, 2G, which is being phased out of networks in many countries worldwide, will still have more connections than 5G in 2028, in Africa.
Karakula flatly denies that this is an issue: “With 2G, 3G, and 4G it’s come late to Africa generally speaking compared to other markets such as Europe,” he said.
Despite Ericsson’s gloomy forecast, he asserts that “Africa has bridged the gap,” and is getting 5G “at the same time compared to other continents.”
Whether that is true or not, it looks as if Africa should consider pushing its 5G potential by first fully making the most of its 4G networks.
As demand for data and connectivity surges nationwide, Northern Virginia continues to reign supreme as the hottest market for data center builds worldwide. How do you stay competitive, though, in a market that is built out from a land and building perspective? It takes advanced planning and out-of-the-box thinking. For build-to-suit data center providers such as PowerHouse Data Centers, the key to successful construction of new powered shells is having the right strategies in place.
Choosing the right data center developer is vital to the success of your business, especially when looking for capacity in a demanding market. These are the characteristics PowerHouse Data Centers recommends for tenants looking for a data center development partner:
In an industry as capital-intensive as ours, financial backing is crucial. A data center company that has joint venture commitments with credible investment management firms is the green flag you need to ease your mind. You can rest easy knowing that your project will be completed to world-class data center standards when the financials are in place to see it through to the finish line.
Data center companies that can provide swift assistance with technical real estate solutions are critical, especially in busy, sought-after markets like Northern Virginia, where land is scarce and the know-how to make real estate deals for hyperscalers is a must.
An in-house development team with decades-long relationships in the area, one that can build out your project from site planning to completion while staying flexible to clients’ needs, is absolutely critical in today’s world. Having those resources in a congested region tells you that your particular data center build is in good hands, and that credible staff are on hand to work with the myriad of government agencies and planning boards that must approve aspects of your project.
Data center experience is more than just construction. It means having high-caliber people with professional experience in all aspects of the data center industry, so that everything runs smoothly from start to finish. For hyperscalers that can’t afford any delays in their efforts to manage more data, this is a must-have. Those seeking this kind of professionalism should also look for a company whose tenured executives and front-line personnel have hands-on, mission-critical experience leading multi-million-dollar data center design and construction, project management, and operations.
Offering build-to-suit and build-to-spec configurations with robust connectivity, future-proofing, and customization is the way of the future. Customers now, more than ever, expect developers and owners to handle all aspects of their future powered shells. Even more important? They are looking for a company to handle everything, from land purchases to fast-tracked zoning approvals and build-outs.
PowerHouse Data Centers is excited to offer state-of-the-art facilities in the heart of Virginia's Data Center Alley to meet the rising capacity demands of hyperscalers. We offer the unique advantage of not only owning precious real estate in NOVA’s availability zones, but also serving as the land-site developer of those next-generation projects. Our relationships in this community, combined with a team that has decades of know-how in the telecom industry, give us numerous advantages, including speed to market.
PowerHouse’s Arcola Data Center project, set for completion in 2026, will provide 120 MW of maximum capacity, with 80 MW of critical power, across a two-building facility with 614,300 sq. ft. of developable space and 364,100 sq. ft. of data hall area. PowerHouse’s data center campuses also include the site of the former AOL headquarters on Pacific Boulevard in Ashburn, known as PowerHouse Pacific, which will host three state-of-the-art build-to-suit data centers for hyperscale tenants. Other projects under construction include PowerHouse ABX-1 at Beaumeade, which is bringing a 265,000 sq. ft. two-story powered data center online in 2023.
Even better? All of the above construction is also backed by a $1 billion joint venture commitment with AREP and Harrison Street to develop and construct world-class data centers.
To learn more about PowerHouse Arcola, Pacific, and ABX-1 data center opportunities in Ashburn, connect with the PowerHouse team or visit www.powerhousedata.com.
In a year when the global economy stagnated and investors turned against tech darlings, the tale of two companies stands out - Twitter and Meta.
Both have seen their valuations crater, and their data center plans thrown into disarray, due to the hubris of their billionaire owners.
In the case of Twitter, Elon Musk’s chaotic acquisition has been well-documented - first, he tried to get out of the deal, then he burdened the company with debt, and now he’s panicking as Tesla shares crater. At the same time, his leadership has been wanting, as far-right tweets cause advertisers to flee, and attempts to charge users have stuttered.
But such trials and tribulations are not the purview of DCD; there are enough publications covering that insanity. More interesting to us is what’s going on with their data centers. Musk is looking to cover the added $1bn in interest repayments he brought with him by gutting the company’s IT infrastructure.
Servers used for handling demand spikes are out, cloud contracts are being trimmed, and the company’s Sacramento data center may be killed off entirely. Already a server room at its headquarters overheated as no one was left to maintain it, locking staff out of their offices.
Every day, Twitter users note more errors and glitches, but the service has not crashed just yet, held up by the work of those already fired and those left behind on work visas. On LinkedIn, a number of Twitter data center employees have announced their departures. How long can it continue?
With Meta, you’d be forgiven for thinking that having the same CEO in charge for 18 years would mean a little more stability. But Mark Zuckerberg is panicking. Young people are turning away from his platform, Apple’s privacy changes have threatened its core business model, and regulators are blocking his acquisitions.
His response has been to pin everything on a pivot to the metaverse, a move that has so far resulted in mockery, poor reviews of its latest VR headset and virtual worlds, and thousands of layoffs.
Investors have abandoned the company, and public perception has soured. But perhaps he has a grand vision, and this is just the painful - but planned - transformation effort? Maybe, but its recent data center move suggests that things are not calm and collected behind the scenes.
The cancellation of its Odense data center and ‘rescoping’ of others for ‘AI data centers’ came just months after it signed a deal with its contractors, one which it is now reneging on. It was not carefully planned out. Worse, DCD understands that Meta still hasn’t actually worked out what it wants from the new data centers. It’s scrapping its plans, without knowing what to do next.
For those fortunate enough to not be beholden to such whims, these two companies prove to be a cautionary tale of what happens when you don’t plan carefully. As we head into 2023, and face a difficult economy, a failing planet, and technological transformation, it’s important to remember the value of acting with care.
- Sebastian Moss, Editor-in-Chief