Scale > eBook
Size, speed, sustainability: Considerations for data center deployments at scale

Contents

Introduction
Part one: Building at scale
Finding the right scale for data center deployments
In search of the world's largest data center
Building at speed
Panel: How can you continue to deliver on data center construction demand and meet the need for speed?
Unveiling a disruptive data center design
Part two: Scaling up sustainably
Q&A: Patrick Quirk, CTO, Nautilus
Panel: When building green at scale, what type of innovation is needed?
Ready to dive into water cooling?
It pays to be different
Conclusion
Scale>On

Introduction

It's no secret that the data center market - as well as the facilities themselves - is growing exponentially. With the market projected to grow by $615.96 billion from 2021 to 2026, is it now a case of go big or go home?

Perhaps this rapid evolution was spearheaded by the pandemic, but whatever the cause, building big is a whole different ball game. Our unprecedented demand for data means we want it all and we want it now, putting pressure on data center owners and operators to build bigger and faster than ever before.

Customers are demanding more, but so are bigger data centers. Building big requires more planning, more power, more water, more space and generally more resource. But where does that leave us on the sustainability scale? In this eBook we explore the challenges that come with building these mega facilities and discuss how we can scale up sustainably.
Part one: Building at scale
The bigger the build, the bigger the challenge, and big challenges call for better solutions. But when you want to build big, how big is too big? One size doesn’t fit all, and in this section of the eBook, we dive into how to find the right scale for data center deployments and go in search of the world’s largest data center, to contextualize just how massive these facilities can be.
And with data center projects experiencing ever tightening budgets and time constraints, we also explore the importance of building at speed, and take a look at some of the new ideas and innovations that will help us build not only bigger, but better.
The data center industry is used to growth. The market is getting bigger, the power demands are skyrocketing, and the dollar valuations are through the roof. "10 years ago, 20MW was huge," Yondr chief development officer Pete Jones told DCD. "If someone offered you 20MW, you would have bought a Ferrari before you'd have done anything else."
If things go wrong at scale the consequences are so much larger – you need to have a much more robust, thick-skinned leadership team for these projects
> Pete Jones, Yondr
Mega data centers are here to stay, but how and where we build them is changing
Finding the right scale for data center deployments
“Your burden goes up, and if things go wrong at scale the consequences are so much larger – you need to have a much more robust, thick-skinned leadership team for these projects.”
“There’s a certain complexity when you start to scale that isn’t just linearly proportional to the number of megawatts,” Jones warned, noting that the bigger you grow the harder it gets.
In just a few years, expectations have expanded massively – with 100MW+ data centers dotting the US countryside and growing in rural regions in the Nordics.
Sebastian Moss DCD
"So we're moving away from just the five key campuses, into almost all the tier one metros," he said. In many places, that starts with a "toehold," Henry explained, of around 3MW. "But then we have the ability to scale up, and it may be to what our standard design now is – an 88MW facility. And then you grow that to a campus where you may have four or five buildings within a campus."

Starting small but in countless metros, and then expanding rapidly, "is really what we're seeing as scale across the region," Henry said.
This is a whole different kind of scale – an astonishingly large footprint across countless metros and regions. “I can anticipate that we’ll be in every country in EMEA at some point,” he said.
To pull this off, the company is in the midst of changing how it designs and builds its facilities, big and small. Historically, every data center it has built has been different, based on the cutting-edge tech and ideas of the time. "It's very difficult to shorten our lead time, and be able to be best in class on schedule and cost delivery when we have that continuous change," Henry explained.
Add in data residency laws, latency demands, and cutthroat cloud competition, and you have a reality where hyperscalers can't just live out in the wilderness. Now, they're coming for the suburbs and city centers.
That's when things really get complicated. "How do you get 3x 100MW for each player, in every metro? For just the three [biggest] players, that's 900MW you got to create to have three substantially sized availability zones in every metro," Yondr's Jones said. "That's not an unsubstantial challenge to pull off."
The changes have helped Google bring construction time down from 22 months to less than 18 months. It hopes to squeeze that further, down to just 12 months – reducing cost and making it easier to predict demand.
"We are [now] standardizing not only our design, but our overall execution strategy, as well as developing all of our systems into a series of products that are built into an execution strategy that is really a kit of parts," he explained.
"We've been predominantly in five campuses within EMEA," Google's Henry said. "And those have been fairly large-scale data centers, ranging anywhere from 32MW to 60MW per data center," with multiple facilities on each campus. "But we're seeing a bit of a shift as to our strategy – scale for us now in the region is really looking at how do we get into all the metros that we need to expand into, and that's happening at a rapid pace."
"The biggest companies used to build these 200-400 megawatt data centers and everything would be in there," CyrusOne's SVP of corporate development Brian Doricko said at DCD>Building at Scale. "But now those same firms are selling more and more cloud services, [and customers] want to know their applications are going to live in multiple buildings and multiple places."
This standardized system "takes a lot of design work on the front end to build a modularization strategy, rather than stick build in the field," Henry said. "We've done that – in our new generation of data center design, we're actually looking to take about 50 percent of our job hours off of the construction site, and move it into manufacturing facilities."
Google’s regional director of EMEA data center infrastructure delivery, Paul Henry, concurred. The company knows how to build huge campuses, he said, but is now focused on bringing costs “as close to the raw input cost as possible.”
The biggest builders have done a good job of getting really efficient, but you have to deliver faster and cheaper
> Pete Jones, Yondr
Before breaking ground, Google creates a work package defining the entire bill of materials for a scope of work, including job hours and crew size, as well as component cost. “So very much akin to the Ikea strategy,” he said. “It’s all been pre-defined.”
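As a rough illustration of this pre-defined work package idea, the sketch below models one scope of work as plain data – a bill of materials plus labor assumptions – so that cost and schedule fall out of the definition rather than being estimated on site. The field names and figures are hypothetical, not Google's actual schema.

```python
from dataclasses import dataclass

@dataclass
class BomItem:
    name: str
    quantity: int
    unit_cost: float  # component cost per unit, in dollars

@dataclass
class WorkPackage:
    """A pre-defined scope of work: materials, labor hours, and crew size."""
    scope: str
    bom: list[BomItem]
    job_hours: float   # total labor hours for this scope
    crew_size: int     # people assigned to the scope
    labor_rate: float  # dollars per labor hour

    def material_cost(self) -> float:
        return sum(item.quantity * item.unit_cost for item in self.bom)

    def labor_cost(self) -> float:
        return self.job_hours * self.labor_rate

    def duration_days(self, hours_per_day: float = 8.0) -> float:
        # Calendar duration if the whole crew works in parallel.
        return self.job_hours / (self.crew_size * hours_per_day)

# Hypothetical example: one electrical-room scope within a larger build.
package = WorkPackage(
    scope="Electrical room fit-out",
    bom=[BomItem("UPS module", 4, 120_000.0), BomItem("Switchgear panel", 2, 85_000.0)],
    job_hours=1_600.0,
    crew_size=10,
    labor_rate=75.0,
)
print(f"Materials: ${package.material_cost():,.0f}")
print(f"Labor:     ${package.labor_cost():,.0f}")
print(f"Duration:  {package.duration_days():.1f} working days")
```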
Take cement – “at some level, you can’t get it any cheaper, same for steel,” he said. “The manufacturers that build UPSs, and generators, at some point they’re getting down to really razor-thin margins. The biggest builders have done a good job of getting really efficient, but you have to deliver faster and cheaper, and so forth.”
Hyperscalers are coming to town

But Google and other hyperscalers are not only changing how they build data centers. They're also changing where they build.
Still, with hyperscalers now more than a decade into their cloud push, the process of building "these large-scale things in the middle of nowhere is a pretty well-oiled machine," Jones noted, admitting that uptake of the company's hyperscale-focused HyperBloc (150-300MW) has been "a hell of a lot lower than MetroBlock" (40-150MW).
Master planning
This organic growth has worked – "it's such a miracle, and it's such a great thing," Noteboom said. But as we look back, "can we take a look at all of these attributes, and remove the conspicuous nature of the data center, remove all the complexity, remove all the knots of Northern Virginia. And then when we build a master plan community, we can ask what does that community look like?"

This vision could lead to huge data centers built on huge campuses, which themselves are part of huge master-planned megacampuses. But given the scale of the Internet, it still could not be enough – with those sites then connected to growing facilities within metros, and to smaller Edge sites spread across the region.

That may force a reckoning among data center operators as they find the industry increasingly butting up against the reality of living alongside humans.
"On the energy side, the transition from in-building UPS to the community level allows for critical power-as-a-service using utility energy storage solutions," he said.
Instead, they hope to build reasonably sized facilities in city locations – where you face all sorts of regulations, permitting, local protests, and space issues.
"I think that there's building blocks that are bigger, more efficient, and more economical than an individual data center can ever hold," Noteboom said.
Such limitations restrict the scale at which a hyperscaler can operate in some locations, Jones said. "I think choosing the right scale has to be case-specific – what are the constraints that exist market by market that might stop you from achieving scale?"
One of the ways to square that circle has been to relax site restrictions, Jones said. "10 years ago, end users would have a campus profile that said the site could not be near an airport, a train line, etc. Before you've even left your office, you've excluded two-thirds of the city.

"Fast forward 10 years, it's like 'oh, as well as meeting all of those ludicrous kinds of constraints, it also has to be 100MW in size,'" he said. "Forget about it. So we've seen a real acceptance of the trade-offs [required to locate in a city]."
“And I think we are finding communities becoming more astute. They smell bullshit very quickly.”
Hyperscalers aren’t comfortable spreading that out across tons of small sites, Jones said. “They are saying ‘we need fewer, we cannot deal with even the contractual burden of managing 800 leases.’”
Fitting data centers into the fabric of an existing city is always a challenge that will likely leave at least one community unsatisfied. How about building an environment just for data centers?
Or with cooling, he envisions cooling-as-a-service operating at the community level instead of the individual data center level. “Lastly, the data center now serves as the network exchange,” he said. “If we were to do Northern Virginia over again – instead of having 50 data centers that had hundreds of individual construction projects, building fiber that took many, many months and years to each of those buildings, all crossing the chasm of easements and rights of ways – we would have all of that pre-planned
and built out in service by network center."

How about, he suggested, building network centers designed end to end for network, which then connects to the surrounding data centers.

"We're talking about scale – data centers are getting bigger and bigger throughout Northern Virginia," Noteboom said. "They're next to schools, they're next to condominium complexes, the noise they're emitting is annoying to neighbors and others. Power plants have just sloppily thrown substations all over the place."
We are [now] standardizing not only our design, but our overall execution strategy
> Paul Henry, Google
"Data centers are going to consume as much as entire countries," Jones said. "If you're in a local community, or you're going for a permit or planning, I don't think local job creation and renewables have ever been hotter issues to have evolved answers to."
“If we were to look at Northern Virginia, how would we do it differently if we could master plan it instead of just the natural, organic way that it grew on its own?” Scott Noteboom asked.
Such difficulties have also helped a cottage industry grow up to help hyperscalers navigate the complex and sometimes contradictory regulatory framework of different cities. Take Frankfurt: “There aren’t any big campuses, part of which is a land problem,” Jones said. “So you solve the land problem, and immediately you’re into a power problem. And then you run into a regulatory limit where once you build 18MW you have to start a new building. Then there’s Seveso legislation” (on how many hazards can be on site).
As CTO of Quantum Loophole, Noteboom hopes to find out. “We’ve acquired north of 2,000 acres, with a gigawatt to start from a substation off our primary transmission that can scale to 3GW,” he said, with the company looking to serve as a master planner that runs the campus for hyperscalers and colos to then build on top of.
Back in 2014, China Mobile said that it would spend $1.92 billion on a 715,000 sq m (7.7 million sq ft) data center campus in Hohhot.
Finding out who owns the biggest data center on the planet is harder than you might think
Across the six halls, that’s 104,664 sq m (1.1 million sq ft) – or a tenth of the public figure – and it’s not clear how much of that space is dedicated to data centers.
Sebastian Moss DCD
Who owns the world's largest data center?
State media reports from 2017 show images of six halls, which we matched to satellite photography.
In search of the world’s largest data center
Even with a margin for error on satellite photography or an extra floor we couldn’t see, it’s hard to understand how this cluster of buildings could be seen as the world’s largest data center. We reached out to data center technicians working at the facility, but have yet to hear back.
The visible buildings appear to be the beginning of a centrally planned urban and industrial project, where development was either abandoned mid-way, or is still ongoing a decade after the campus began. Empty ten-lane highways end abruptly, giving way to dry grasslands.
There's one problem, however: it's not clear the 2013 project is actually anywhere near that scale. We asked China Telecom, and the company – recently banned in the US – did not confirm nor deny its existence.
But, after trawling through a number of simple listicles, it became clear that the reality is a little more complex. According to numerous publications, the world's largest data center is the China Telecom-Inner Mongolia Information Park. At a cost of $3 billion, it spans one million square meters (10,763,910 square feet) and consumes 150MW across six data halls.
On its website, the company does show that there is a China Telecom-owned data center in the region. Further digging finds that it is on Jinsheng Road in Horinger, Hohhot.
"With 5G and China's ambitious plans, a lot more data centers will need to be built and China wants these to be inland (coastal cities are too crowded), and also wants them to use less energy (as it aims for carbon neutrality by 2060)," Jeroen Groenewegen-Lau, at the Mercator Institute for China Studies, told DCD. "Places like Inner Mongolia, with abundant renewable energy and a cool climate, are set to benefit."
Satellite measurements put each hall as 89m long, 46m wide, giving the buildings around 4,361 sq m per floor (we confirmed the accuracy of the measurements against cars viewable in the same images). State media photographs suggest four floors per building, making each data center span 17,444 sq m.
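That back-of-envelope method is easy to reproduce: multiply the measured hall footprint by the number of floors and halls, then compare the result with the publicly claimed figure. Below is a minimal sketch using the dimensions quoted above; a straight length-by-width product comes out slightly below the quoted 4,361 sq m per-floor figure, so treat it as an order-of-magnitude check rather than an exact reconstruction.

```python
# Rough reproduction of the satellite-based estimate described above.
# Inputs are the figures quoted in the article; per-floor area here is the
# simple length x width product, so it lands a little under the published
# 4,361 sq m per-floor figure.
hall_length_m = 89
hall_width_m = 46
floors_per_hall = 4
halls = 6
claimed_total_sq_m = 1_000_000  # the publicly reported campus size

floor_area = hall_length_m * hall_width_m      # ~4,100 sq m per floor
per_hall = floor_area * floors_per_hall        # ~16,400 sq m per hall
estimated_total = per_hall * halls             # ~98,000 sq m across six halls

print(f"Estimated data hall space: {estimated_total:,} sq m")
print(f"Share of the claimed figure: {estimated_total / claimed_total_sq_m:.0%}")
```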
So let's move on to the next largest data center. It turns out that it's also in Hohhot – a region with low ambient temperatures, cheap power, and lots of land, making it an attractive investment. In fact, it's meant to be right next to the China Telecom data center, on what is referred to as the Shengle Modern Services Cluster.
Earlier this year, I tried to answer this question for a feature I was working on, assuming the result would be a quick Google search away.
At the time, it had space for 9,000 racks, with 100,000 servers. A stage two development will add 15,000 racks and 150,000-200,000 servers, while a further stage after that could add yet more space (although square footage calculations are complicated because the company is also building an exhibition center). This all could indeed make it one of the world’s largest data centers, but it’s not completed as far as we can tell.
It is possible that the companies began with large multi-billion dollar plans, and scaled back as demand failed to keep pace with their ambitions.
Size is certainly important because you're able to do more and facilitate more customers. The other really important critical question here is just how much power can be delivered to the facility, and what is the density? Just because the size is large doesn't ultimately mean you can fit as much as some smaller data center
> Bill Kleyman, Switch
There are similar issues with several other purported massive China Mobile data centers. Companies often report the total size a facility could grow to, either based on what planning permission allows, or what sounds good to investors. Then they grow in stages, hoping to reach that ultimate number.
Chinese publication Sohu reports that three computer rooms were built, including a spare parts storage center (which matches a nearby building we found). A second phase was reportedly underway, but Sohu said that work had yet to begin on new data halls as of 2020.
A similar scaling back of ambitions may have happened for the next facility that is often touted as the world’s largest: The Range International Information Hub.
China Unicom also claims to operate facilities at the Shengle campus, so it is not clear whether it owns any of the visible buildings. Representatives for both companies were not available for comment.
"Our view was that traditional data centers were very inefficient from both an energy and technology use perspective."
We searched the area again for data center-like buildings and only found two 12,240 sq m (131,750 sq ft) structures that fit the profile of a data center.
Steven Sams, an IBM executive at the time, told DCD that "we had a series of conceptual meetings [about 10 years ago] with the technical and executive teams for the project to talk about designing and building highly energy-efficient and scalable data centers."
This project is real – but, again, the figures might not be.
Facebook’s Prineville data center
This may have been difficult for many of these eastern Chinese data centers.
Located in Langfang, China and co-built with IBM, this was originally meant to be a 585,000 sq m (6.3 million sq ft) facility, according to numerous reports. Situated between Beijing and Tianjin, it is a perfect place for a data center.
In 2018, China's Ministry of Industry and Information Technology found that demand for data centers in Beijing and Shanghai outstripped supply by 20-25 percent, but that in the northeast there were twice as many facilities as required.
The nation is now trying to incentivize data center construction in the east and offload resources from cities, with major state subsidies – but such efforts will have been too late for the above projects.
The US Green Building Council lists show a 7,400 sq m (80,000 sq ft) China Mobile Hohhot Data Center Office which is LEED-certified, confirming its existence, and in 2019 the company said that it had completed the development of some data halls.
AT TOKYO’s Chuo data center
IBM declined to say if it was still involved in the partnership, suggesting that its spun-out Kyndryl business might have an answer. Kyndryl did not respond to requests for comment. Moving on.
Multiple reports say the next largest data center is AT TOKYO's Chuo Data Center, with a total floor area of 140,000 square meters (1.5 million sq ft). The site is real, and indeed it is huge – it's simply a hefty cube in the middle of Tokyo. The colocation facility is the largest single data center building in all of Japan. But is it the biggest in the world?

It's time to break away from poorly researched listicles and look for ourselves at the thousands of facilities we have written about over the past two decades.
"Scalable technology virtualization, which was emerging through cloud computing models, required different data center designs, flexible for different computing models and technologies.

"I had visited the site that had been defined for the massive multi-building complex and the Chairman of the project in China. In January 2011, coinciding with a state visit to Washington by President Hu Jintao, the chairman and I signed an extensive agreement in Chicago in which IBM was the design principal."

He added: "The work has obviously proceeded significantly over the last ten years, but without my involvement."

But the question that is critical to our search is just how much work has proceeded, and whether the focus is on data centers or other space. While many reports claim Range is a giant data center that is as large as the Pentagon, initial documents state that it will also include offices, apartments, and a hotel – so a lot of that space is not data center-related.
Perhaps one of the largest proponents of massive data repositories is the National Security Agency. The NSA tried to keep most of its giant Utah data center a secret, but being a giant building in Utah, that's not been entirely successful.
Spanning two large data center structures, each with two halls, as well as surrounding infrastructure, the data center is believed to be around 139,000 sq m (1.5 million sq ft), of which only 9,300 sq m (100,000 sq ft) is data center space and more than 84,000 sq m (900,000 sq ft) is technical support and administrative space. That's large, but just shy of what's found in Tokyo.
There are some caveats: Do we count a building that contains multiple data center companies as a single site? Plus, Digital Realty operates another giant building with seven data centers within it just 2.5km away – should we include that?

Before we get too deep into the weeds, let's keep looking. Another potential mega data center can be found in China – Centrin's data center in Wuhan. The company, which recently partnered with SpaceDC, claims it spans around 207,000 sq m (2.2 million sq ft) in the city's Lingkonggang Economic and Technological Development Zone.
The Range Technology website still talks about the data center as a future project, saying that it has a "planned professional data center room area of one million square meters," which is actually more than the initial pitch.
According to the plan, that size will be spread across 22 data centers. The website says that currently six have been built and that two more are in construction (although as the site refers to 2020 as a future date, this figure may be out of date). “It is estimated that by 2020, the park will have a computer room environment of 550,000 square meters.”
Another fan of largess is Digital Realty, one of the biggest data center companies out there. It owns the Lakeside Technology Center (350 East Cermak), a huge carrier hotel in Chicago. With more than 70 tenants and a robust business from financial firms serving Chicago's commodity markets, the building spans 102,200 sq m (1.1 million sq ft).
Most companies don't build that large – economies of scale only go so far, and cloud providers and colo customers often value geographic redundancy over mega projects. But there are still those that like to go big.
In a post from November 19, 2021, the company said that the top of the main structure had just been completed – suggesting there was still some way to go before the project is finished. The whole complex was initially planned for 2016 completion.
Let’s head back to the US.
At full build, the 650MW+ campus will include 'up to' 761,800 sq m (8.2 million sq ft) of data center space across 12 data centers. That would make it, without any doubt, the largest data center complex in the world.

But it is not at full build, and 'up to' includes a lot of wiggle room. Switch told us that the campus currently features 120,800 sq m (1.3 million sq ft) of operational data centers, with two additional buildings under construction for another 111,500 sq m (1.2 million sq ft).
Of the dozens of buildings visible in photographs, only nine hold data center servers, a state media visit appears to suggest. “Each data center equipment room has three floors, and each floor has two modules,” Huawei’s William Dong told them. “In these modules we deploy servers, storage, and network devices.” Eventually, there may be 14 data centers on the site.
The NSA’s data center
Whatever the true size of the site – we have asked Huawei – it does not appear to be fully finished. The campus officially opened on December 20, 2021, but the main structure is expected to be completed this August. Images from a state media visit earlier this year show that the artificial river and lake are currently dry.
That means it's a huge site, but not the largest. At the moment, it is not even Switch's biggest.
However, the facility is not finished – it is still in the first phase, totaling 70MW of IT load, and hopes to grow to 225MW. Work appears to be ongoing, with the company recently awarded Uptime Tier IV design certification for upcoming data halls at the site. For now, the facility is too small.
For now, it is too early to crown this facility as the world’s largest –although it gets points for being one of the strangest to look at.
It's a peculiar-looking campus, more akin to a Disney theme park or a movie set, and yet the company claims it will be home to one million servers. As Huawei's largest data center campus, the site currently spans 480,000 square meters (five million sq ft), if local media can be believed. However, that also includes 98 training rooms capable of holding 3,000 people, R&D labs, an IT Maintenance Engineer Base, and what appears to be a 'Huawei University.'
10,000 people are expected to visit the campus a year – much more than would come to see a standard data center.
It’s time to look at Switch, which has five large campuses dotted around America that it calls “Switch Primes.” The company has never been shy about its love of embracing scale, and its Switch Citadel campus is set to be its largest Prime site.
Facebook loves to go big, building multiple 41,800 sq m (450,000 sq ft) data centers on sprawling campuses around the world. The largest of those is in Prineville.
Can we beat that?
A different possible huge campus is one from Huawei, in the Gui’an New Area of southwest China’s Guizhou Province. The company often styles its campuses in an almost fairytale-like rendition of European architecture, with this one based on Prague facades.
Across nine buildings and 344,000 sq m (3.7 million sq ft), it is an astronomically large site – and it's getting bigger. By 2023, the company plans to open two more 41,800 sq m halls, and this time they will be two stories each. That means a total of 427,000 sq m (4.6 million sq ft).
As for the largest potential data center project, it is either Switch’s Citadel, Range, or Huawei’s cloud campus, depending on whose publicity you believe.
However, he told DCD, "the other really important critical question here is just how much power can be delivered to the facility, and what is the density? Just because the size is large doesn't ultimately mean you can fit as much as some smaller data center."
But, as it stands, and as far as we can tell: the largest data center cluster owned by a single entity is Meta/Facebook's Prineville data center campus.
Other large potential data center projects include the 57,935 sq m (1.7 million sq ft) Digital Crossroad campus in Indiana, Corscale’s planned 213,700 sq m (2.3 million sq ft) campus in Northern Virginia, and Amazon Web Services’ planned 162,600 sq m (1.75 million sq ft) data center in Loudoun County.
That prize goes to its Las Vegas Core site, with 250,800 sq m (2.7 million sq ft) and additional buildings 15, 16, 17, and 18 "currently under various stages of construction for a total square footage of 390,200 sq m (4.2 million sq ft)," Switch told DCD.
While this was mostly a thought experiment, Switch’s EVP of technical solutions, Bill Kleyman, noted that “size is certainly important because you’re able to do more and facilitate more customers.”
How about the largest single data center building? It is not actually AT TOKYO's Chuo Data Center, but may instead be another Facebook facility – the 170,000 sq m (1.8m sq ft), 11-story Singapore data center (although it is still in its first phase). There, land constraints meant that it made sense to concentrate a lot of servers in a single structure, something that companies usually avoid.
Talking about other large data centers, Kleyman said that "if you have 10 million square feet, but your density is like 5kW a rack, then you're wasting a lot of space. You're doing something wrong."
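Kleyman's point about density is easy to put into numbers. The sketch below compares how much white space a fixed IT load occupies at different rack densities; the floor space allotted per rack is a hypothetical planning figure, not a number from Switch or the article.

```python
# Illustrative only: how rack density changes the floor space a fixed IT load needs.
# The 30 sq ft per rack allowance (rack plus a share of aisle space) is an assumed
# planning figure, not a number from the article.
SQ_FT_PER_RACK = 30

def white_space_needed(it_load_mw: float, kw_per_rack: float) -> float:
    """Return the approximate white space (sq ft) needed to house the load."""
    racks = (it_load_mw * 1000) / kw_per_rack
    return racks * SQ_FT_PER_RACK

for density in (5, 10, 50):  # kW per rack
    area = white_space_needed(it_load_mw=50, kw_per_rack=density)
    print(f"50MW at {density:>2}kW/rack -> ~{area:,.0f} sq ft of white space")
```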
The company has bought a 2,100-acre property in Frederick County, Maryland, where it hopes to develop a 1GW campus consisting of 30-120MW data center modules it sells to other companies. That’s twice as much land as Switch has for its Citadel – but again, one could debate whether Quantum’s planned community of data centers can be counted as a singular data center campus, or rather a collection.
Neither site is bigger than Facebook’s current 344,000 sq m, but Core could briefly overtake it until Facebook grows to 427,000 sq m. Should Citadel fully build out – to a timeline Switch declined to disclose other than “within 10 years” – it would then comfortably overtake Facebook.
Huawei’s Gui’an campus
He added: "And when people tell you how much power is available at the facility, they're saying how much power is available at the substation, or how much power is actually going into the facility? With Citadel, it's 900MVA that's in the building, and 1.5 gigawatts at the substation."

Still, that is a ways off. If we look at future promises of data centers, then we should also consider Quantum Loophole.
Switch’s Citadel campus (rendering)
We visited the Sterling campus last year (see the magazine for more from the junket) and were given a tour by Stuart Dyer, the REIT’s business development manager.
Peter Judge DCD
Take its Sterling II data center in Northern Virginia, which was built in 180 days.
"A normal data center building has tilt-up concrete walls, which are cast on-site at the construction site," Dorris said.
CyrusOne brought the modular units to the site “and set them up in ‘lineups’ outside the facility. Using modular power units speeds up construction, saves money and reduces the building’s footprint because we don’t have to build additional rooms inside the data center to house power equipment.”
Elsewhere, the company “set up another off-site facility where we could assemble modular power units. Each unit included an uninterruptible power supply, a backup generator and a utility transformer, all housed in weatherproof containers.”
"This is a two-story building, with two 60,000 square foot 'pods' on the first floor, and another two on the second floor. That's 240,000 square feet, 36 megawatts. On day one, we lit up one pod, but I had capacity to light up the other three on 16-week intervals."
Building at speed
"If you look at how fast these web-based revenue generating companies are going, they absolutely want the product faster," Tesh Durvasula, then-European president of data center real estate investment trust CyrusOne, told DCD.
"But for Sterling II, we set up a separate off-site facility where we could cast pre-fabricated concrete wall panels. We then brought those panels to the construction site on trucks and used them to set up the data center building. It saved time because we didn't have to stop work at the building site while the concrete walls were being cast."
Sterling work
For CyrusOne, the trick has been to try to move the slower things away from the construction site. Using what it calls a ‘Massively Modular design’ “enables CyrusOne to commission large data center facilities in approximately 12-16 weeks, which is virtually an industry record,” Laramie Dorris, VP of design and construction at the US-based company, told DCD.
Every second counts
To shave further days off the schedule, Durvasula said, the company keeps “inventory available and in many cases pre-ships inventory to destination” ahead of starting work. “When we anticipate something happening we’ll get stuff to the site beforehand – even if we’re in the midst of a negotiation.”
“Every day we can save, every hour we can take off a project, matters – if we can get the product into the customer’s hands sooner, it means more money for everybody.”
How to launch a data center quickly, according to CyrusOne
CyrusOne's Aurora data center campus in Chicago, mid-construction
Customization is the enemy of scale, you've got to give some amount of customization, but [most of] what you're going to do is going to have to be standard
> Tesh Durvasula, CyrusOne
In the classic 1994 film Speed, Keanu Reeves must keep a bus above 50 mph (80 km/h), or it will explode. With data demands rapidly growing, and hyperscalers' appetites increasingly insatiable, data center construction can feel just as terrifying. You need to move quickly, while trying not to crash.
It’s an obvious point: the sooner companies can use their data centers, the sooner they can benefit from them. But it’s not an easy thing to manage – we’d all like to be faster at what we do, but some things just take time.
“Language barriers and cultural barriers aside, the rules of each country are very different and we’re working with all of our advisors and partners to make sure that we understand them as best we can. The rules are different here, but you have to play by them.”
"People are expecting concentric security, border perimeter security, both audio-visual biometrics and then component-level security," Durvasula said. "And then, very strict policies around that – so you've had to adjust your systems, your monitoring, your policies and procedures to accommodate that. Even the size of your lobby – you can't have people milling around there anymore, so you want to be able to get them in and out."
Previously the company’s chief commercial officer, Durvasula is now heading CyrusOne’s push into Europe, building upon its acquisition of Zenium, and the creation of greenfield sites.
For example, people's expectation of physical security has changed.
In this case, CyrusOne used a 60,000 sq ft (5,500 sq m) pod, but it also has a smaller design, half the size. "Typically, what we do is we build one large structure, and then we build out pods within that structure," Dyer said. "Then we use the same generators, the same UPS systems, the same air handlers, the same PDUs across our portfolio. Having that rinse and repeat process."

It is here the company has to be careful. Standardization allows for speed, but it can risk slowing innovation and preventing customization. "So somewhere between 70 and 75 percent of [the design] we're going to keep standard," Durvasula said, with the rest used for innovations learned from previous constructions, acquisitions, or customer requirements.
There, the company is facing the usual local and national regulatory hurdles that define different nations: "Just generally speaking, I would say as you move further south in Europe it gets a little more complex. It takes a little bit longer in Spain than it would in Paris than it does in Germany than it does in London.
“The customers definitely won’t give you years to finish their project because they’re anticipating the capacity,” Durvasula said. “They typically will give you somewhere between 60 and 120 days. And after 120 days, just based on the sensitivity of that business and how intense they are about negotiating that point…” he trailed off.
After 120 days, perhaps it is time to get off the bus.
Every day we can save, every hour we can take off a project, matters – if we can get the product into the customer's hands sooner, it means more money for everybody
> Tesh Durvasula, CyrusOne

Panel: How can you continue to deliver on data center construction demand and meet the need for speed?
> Building at Scale
Standardization also makes maintenance easier, he noted: “if a technician knows that every time he/she goes into our data centers, they’re always going to have a nine foot clearing – not six feet in one market and ten feet in another – that makes it a lot easier to say ‘yes I can schedule seven chillers per hour, per day and I can be done with that whole site in two days.’”
CyrusOne Sterling II
One thing that is not different is the demand for ever faster speeds. Mainly targeting Fortune 1,000 companies and the hyperscale giants, CyrusOne expects its European customers to mostly be the same as its US ones.
For Durvasula, it is about finding the balance between changes and standardization. “Customization is the enemy of scale, you’ve got to give some amount of customization, but [most of] what you’re going to do is going to have to be standard.”
Unveiling a disruptive data center design

We all know that the world demands more data centers. According to JLL, "the global construction pipeline also reached a new record in 2021, particularly in the United States, where it grew by 18.9 percent year-over-year, reaching 727MW."

However, coping with the accelerating demand for new data centers around the world isn't easy, for many reasons.
Site selection/land availability: It's becoming harder and harder to find suitable sites for new data center construction, and often, the sites are in such desirable locations, acreage costs too much.
Efficiency: Data center operators also wish the build process could be more efficient, less wasteful, and less complicated. There are too many custom elements in a data center build that have to be done on-site, too many components, and too many steps.
Sustainability: Finally, most organizations want to make their new data centers as environmentally friendly as possible from the onset, aiming for the lowest possible PUE, seeking carbon-neutral sources of power, and even doing everything they can to reduce water consumption. We've seen countries, including Singapore, the Netherlands, and Ireland, blocking, delaying, or heavily restricting data center builds due to environmental pressures. Organizations need more sustainable data center technologies to hedge their operations as more stringent sustainability regulations emerge.
Speed: Building a data center typically takes 18 to 24 months, and that's after site acquisition, permitting, and all the other time-consuming preliminaries.
Efficiency is better for business and the environment
We offer a new approach to modular that leaps past conventional, prefabricated modular data center designs to deliver unprecedented speed to completion, massive improvements in sustainability, and we do it all without an increase in cost
Our approach to cooling, relying on ambient temperature bodies of saltwater or freshwater as heat sinks, cuts water consumption and pollution to zero (with only a 4F increase in water temperatures) while delivering high density cooling at over 800 watts per square foot and 50-100kW per cabinet.
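Those water-side figures follow from basic heat-transfer arithmetic: for a given heat load and allowed temperature rise, the required flow rate is the load divided by (specific heat of water × temperature rise). The sketch below runs that calculation for an assumed 10MW of heat rejection at roughly the 4°F (about 2.2°C) rise quoted above; the load figure is an example for illustration, not a Nautilus specification.

```python
# Back-of-envelope heat-rejection arithmetic for a water-to-water cooling loop.
# Assumed example: 10MW of IT heat rejected into a loop limited to a ~4F (2.2C)
# temperature rise, as described above. Not a Nautilus spec sheet.
SPECIFIC_HEAT_WATER = 4186.0   # J/(kg*K)
WATER_DENSITY = 1000.0         # kg/m^3

def required_flow(heat_w: float, delta_t_c: float) -> float:
    """Mass flow (kg/s) needed to carry away heat_w watts at a delta_t_c rise."""
    return heat_w / (SPECIFIC_HEAT_WATER * delta_t_c)

heat_load_w = 10e6           # 10MW example load
delta_t_c = 4 / 1.8          # 4F expressed as a Celsius/Kelvin difference

mass_flow = required_flow(heat_load_w, delta_t_c)   # kg/s
volume_flow = mass_flow / WATER_DENSITY             # m^3/s
print(f"Required flow: {mass_flow:,.0f} kg/s (~{volume_flow:.2f} m3/s)")
```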
3. End-to-end management and operations
We provide basic building blocks for a bespoke design. Your data center can be standardized (according to our mix of designs) or customized for your individual requirements. We can accommodate different levels of redundancy and different ceiling heights, for example.
If you have heard of us, you've heard about our distinctive approach to data center cooling that doesn't use refrigerants and doesn't consume or pollute water. And perhaps you've heard about our successful deployment in Stockton, California, or our upcoming data centers in Maine, Limerick, and other locations.

We're offering a new way to deliver a data center. We have the technology and proven processes allowing us to launch high-performance, sustainable data centers in less time, with a smaller footprint, and with fewer materials across every dimension, while increasing usable capacity and exceeding sustainability goals.
This design works because, unlike all other data center designs, we have consistent cooling distribution units that aren’t dependent on different chiller and cooling tower designs for different sized data centers. In our approach, we can accommodate tens to hundreds of megawatts with no cooling redesign requirements. Also, our patented technology can be applied in whole or in part on land or water, giving our customers exceptional data center placement flexibility.
Our approach, building standardized data center modules in factories, means that we can conduct 70 percent of data center construction work offsite, in parallel, across multiple factory locations if needed. The bulk of your data center is built, tested, and shipped without on-site personnel.
1. Nautilus cooling technology
At Nautilus, we’ve seen that organizations recognize that efficiency is better for business and the environment. We’re working to help organizations address all these challenges so that they’re able to reach for better business outcomes, faster than before, while increasing sustainability.
What Nautilus delivers
What we do rests on three critical, patented or patent-pending innovations.
Inside the data hall, customers can utilize traditional air cooling through hot/cold aisle, rear-door cooling units, direct-to-chip, or immersion. Each data hall has N+1 redundant leak-proof cooling distribution units. Our systems are under vacuum, making them leak-proof – water will never touch the IT load. They're designed to maintain ASHRAE A1 standards. Mechanical refrigeration is never required in your data center.
Nautilus integrated system design and build approach
To be clear, mechanical refrigeration is never required in your Nautilus data center.
We offer a new approach to modular that leaps past conventional, prefabricated modular data center designs to deliver unprecedented speed to completion, massive improvements in sustainability, and we do it all without an increase in cost.
The integrated system modules include primary electrical, networking, fire suppression, reserve power, cooling distribution units, and hot aisles. To build a data center of any size, we can simply combine a mix of the prefabricated integrated systems.
2. Nautilus integrated system design and build approach

Our cooling designs enable our distinctive approach to integrated design and build, going beyond conventional modular design by creating combined MEP, structural, and data hall floor/ceiling modules.
Mechanical refrigeration is never required in your Nautilus data center
The Nautilus advantage
Our clients come to us for:
Also, once the right combination of integrated system modules is placed onto a fixed support structure (on land or water), a shell of any architectural design can be built around the modular data center.
• Architecture and engineering: We handle the entire architecture and engineering process, either applying our standard designs or customizing according to your needs.
We've proven, through our data center in California and upcoming data centers elsewhere, that our approach delivers a fully functional data center that exceeds industry standards. We can deliver a data center that's:
• EPC: Our vetted and trusted partners can, if needed, assist with full engineering procurement construction onsite.
• Technology manufacturing: While we work with you to validate your site, we manufacture our integrated, modular data center building blocks, creating and testing them within the factory, guaranteeing quality with CI/CD, reducing delivery time, and lowering the risk of complications –keeping costs low and speeding deployment.
• Site selection and development: We can help you by assessing market location, site sizes, demographic needs, power, networking capacity, and more.
Our approach, building standardized data center modules in factories, means that we can conduct 70 percent of data center construction work offsite, in parallel, across multiple factory locations if needed
1. Always sustainable: No greater than 1.15 PUE at any utilization level, 70 percent reduction in power needed, no water consumption, no refrigerants, no water treatments, no adulterants, very low thermal impact, minimal acoustical signature, very fish and wildlife friendly, and no cooling leaks.
Despite this ability to tailor to your requirements, the fundamentals of any Nautilus data center are the same from data center to data center. We go above and beyond compared with what a traditional design-build firm can offer.

We provide a full suite of standardized controls and a full operational package that's consistent for any Nautilus data center, which cuts training demands – if you have ten Nautilus-built data centers, any trained operator can move from one to another without re-training. Once we're done with your data center, we provide everything you need to move it and operate it – or we can operate it for you, guaranteeing 100 percent uptime SLAs.
• Operations and maintenance: We can hand off a fully functional data center to your operations team, or we can continue to operate and maintain the data center for you, guaranteeing efficiency and reliability with 100 percent uptime SLAs.
• Project management: Integrating Nautilus technology into your project, through the entire project lifecycle.
Whether you’re focused on delivering data center capacity, delivering compelling new digital services to market, or improving the sustainability of your organization, Nautilus offers a way forward for you. With Nautilus, you can:
• Have a better design, superior construction quality, a smaller footprint, and less waste
2. Built for efficiency: For a 10MW data center, physical footprint is cut in half, requires 36 shipping splits vs. a conventional design that needs more than 120, simplified supply chains due to fewer components, one global voltage, one global certification, performance and resilience optimized due to full system factory testing, can be standardized or customized.
CLICK TO WATCH
3. Faster functionality: Fits a wider range of sites, reducing time for site selection and purchase. 70 percent of the build and testing work is done in the factory, rather than in the field. On-site assembly is quick and easy. Conventional builds take 18-24 months; ours take 9-12 months to be fully functional. Or, to summarize: we maximize operational efficiency, optimize performance, lower risk, and reduce net build costs while cutting your build time in half.
Conclusion
• Guarantee lower electrical consumption, no water utilization at all, no use of refrigerants, and a PUE no higher than 1.15, preparing your organization for future regulatory restrictions
• Cut your capital and operational expenses.

To learn more about how Nautilus is revolutionizing the industry, giving organizations a new way to move forward faster without added costs or environmental impact, watch the video below or visit www.nautilusdt.com.
• Accommodate unusual sites, including brownfields and even placing data centers on floating platforms
• Build a data center in 6-12 months
>The Nautilus design approach
Part two: Scaling up sustainably
In this section of the eBook, we examine how to scale up sustainably, and take a look at some of the technology and ideas that will help us get there. We speak to Patrick Quirk, CTO at Nautilus Data Technologies, who offers us some interesting insights into how far we've come, the challenges the industry is currently facing, and why economic advantage and environmental responsibility aren't a balance to be struck.
We also take a deeper dive into the cooling aspect of these facilities and ask, when it comes to building green at scale, what innovations do we actually need to make this a viable reality?
Q&A
A: We do need to be adopting a more holistic approach. We're starting to see it in the areas of the market that have the most to gain by taking risk. If you think about cryptocurrency and its blockchain functions, those guys get a bad rap because they consume a lot of power. But actually, they're forcing the IT side and the data center side to start thinking about building a holistic approach.
Facebook, or Meta, are probably the most public about steps they've taken towards sustainability and everything they've fed back into the Open Compute Project. They've been removing fans and excess equipment and really thinking about the IT gear from a system level inside their data centers, as opposed to what each individual server does. Until all parties involved in the data center are working on a single solution, we're never going to get to the level of efficiency we need to.
Q: With sustainability now a necessity for a data center rather than a nice to have, do you think this shift in attitude will spearhead a change in the way we design and build these new, larger, denser facilities? What kind of changes can we expect?
But the one that’s not directly in our lane but doesn’t get a lot of credit is the IT virtualization and the advancement of a lot of the software techniques that are now being used for information that’s stored and processed in data centers. The efficiency gains from this progress essentially enables us to create more information with less power. So there have been advances from the data center industry itself on the critical infrastructure side, but we’ve also seen advances on the IT infrastructure side and the software side, all of which combined have made a tremendous impact over the last 10 years.
A: The primary barrier is probably all three areas of the data center working together – the critical infrastructure, the IT infrastructure, and the software. The IT industry is still very much about building a box, regardless of the number of servers inside.
Patrick Quirk, CTO at Nautilus Data Technologies
Q: What do you think have been the biggest technological strides made in sustainability in recent years in terms of the data center industry?
DCD
The progress everyone has made, whether it's the hyperscalers or innovative companies like ourselves, to reduce the inefficiencies of keeping the gear cool is clearly an advance.
A: From a technical perspective, sustainability is about having a closed loop solution, from the foundation of a data center, until the point that you decommission it. Having that circular lifecycle is really the key thing. Data centers effectively take power, use space and create information. The primary input there is power, so that’s the fundamental thing we need to focus on. It’s all about how we can be more efficient and ensure these inputs are in a closed loop lifecycle.
A: Liquid cooling is definitely the one that gets the most press, and the industry has made great steps in the right direction from an efficiency perspective.
The need to make sure that box works, is reliable, and doesn't cause any trouble from a warranty perspective means it has all kinds of inefficiencies built in. There is this myopic focus on that one single piece of IT gear.
The changes, the challenges and why there is no balance to strike
The more efficient you are, the less impact you will have. As for the impact you do have, you’ve got to go back and say, ‘how can we make this more sustainable.’ Using green power solutions wherever possible is a big one, and I’m not talking about buying carbon credits or buying the rights to a wind farm, because those are fundamentally greenwashing. What I’m talking about is getting your power from a truly sustainable source and knowing exactly where those electrons are coming from.
Patrick Quirk Nautilus
Q: So Patrick, what does sustainability mean to you, particularly as a CTO?
Q: Despite the positives, data centers are only getting bigger, so there is still a long way to go. What do you feel are the biggest challenges or barriers to achieving a truly sustainable data center right now?
Until we can get some of those existing data centers retrofitted to where they can handle a higher density level, you're not going to see customers pushing for higher densities, because it restricts the number of places they can take their IT load to. It's not a good idea from an economic point of view.
Q: In terms of sustainable operations, cooling of course accounts for the majority of a data center’s energy bill. What advances in technology are we seeing that can make cooling more efficient and therefore more environmentally and economically viable?
An IC is generally designed to operate from 0 to 70 degrees Celsius and a midpoint of 35 degrees is obviously going to be the optimal temperature. Today, we’re cooling our data centers down to between 22 and 24 degrees, so if we can actually optimize it around what’s best for the semiconductors and get that temperature to around 30-35 degrees, that would be a tremendous efficiency and sustainability improvement. So it kind of ties back to looking at this problem holistically.
Q: It is of course more ‘sustainable’ to work with what we already have (rather than build new). So as facilities continue to scale up, could advances in technology help retrofit older facilities into ‘green’ data centers?
A: Absolutely. And this kind of ties into the last point I was making.
A: Yeah, so as data centers get bigger - which isn't necessarily a bad thing - there is efficiency in scale. But that means having to move data centers to different locations. When you start looking at critical infrastructure, it's not typically placed in the middle of a housing estate or business district. But more and more, those are the places utilizing the information and the technology gains that have come from it, so the industry is being forced to look at sites differently.

The hyperscalers, although not necessarily setting up their massive facilities in the middle of residential areas, still haven't made the leap to the industrial side of town, and that's certainly what we believe to be the right answer, because that's where the power is being generated. If you have a data center sitting closer to where the power is being generated, it can actually create more efficiencies and helps us with the sustainability problem. We need to get these companies to recognize being in an industrial location as the kind of fourth pillar of critical infrastructure.
Q: Can we expect the criteria for site selection to change in keeping with this new sustainable trajectory? In what way?
But if you look at these preexisting warehouse facilities, changing out the cooling system and improving density are of course going to help. I was discussing rack densities with a colleague and he said he's been hearing for 20 years that rack densities are going to go up to 150kW per cabinet, but we just keep plugging along at 6kW a cabinet. And it goes back to the fact that this has to be solved by all parts of the industry. Until everyone agrees we can go in and raise our densities beyond 6-8kW per cabinet, that's what's going to then drive the ability to retrofit some of these existing data centers to make them more efficient and more sustainable.
A: It depends on the building. If you find something with a solid foundation built a few hundred years ago, it's almost always better to retrofit and improve it. But we went through a phase around the 50s where we weren't building quality buildings. So in general, it's always going to be cheaper to build new. You're going to be able to get cheaper, more sustainable materials. There are a lot of advantages to building new because we know more now.
A lot of other metrics out there are far too complex. PUE has this beautiful simplicity to it. So if we can come up with a way to extend it, whilst retaining that simplicity, we could capture more of the overall sustainability aspects we’re trying to measure
A: If you look at integrated circuit (IC) manufacturers, so the Nividias and Intels of the world, they’re reaching a point where they’re not going to have any choice but to move to some form of liquid cooling. The minute that happens and starts to become a significant enough portion of the landscape, there are actually more efficiencies that we can incorporate from removing that bulk heat out of the data center, because we can raise the water temperature a little bit.
>PatrickNautilusQuirk
to start thinking about building a holistic approach. They’ve come up with some pretty innovative solutions in that space because they’re motivated by the economics of it.So any improvements in sustainability need to be economically driven. Because if governments start forcing sustainability rules, that’s going to stifle innovation as it will end up picking a solution, or driving people towards a solution that isn’t necessarily best overall from a sustainability perspective. It’s a matter of those portions of the industry that can gain the most economic value, driving the innovation that can then get trickled back down to the rest of the industry.
Q: Scaling up isn’t cheap. Would you say advancements in cooling technology is opening the door to scale that may have previously been unattainable for some organizations?
gains happening, it might not be as much as that flagship nu,ber that gets put out there and would allow us to look at the whole system more holistically.Also,Ithink
Our answer would be that there are certainly better solutions, where we can still use the advantages that water has, for example, energy transfer over just a purely airbased system, but do it in a more sustainable manner. So rather than just boiling it off and having to deal with the residual chemicals that need to be added to prevent things like Legionnaires disease, we believe that the better solution is to do a water to water heat transfer and utilize the heat sinking and energy sinking capabilities of large bodies of water. Thermal plants have been doing this for well over 100 years.
including the power that’s required to process the water to make it semi-potable, to pump that water to the facility, the power required to pump that waste water back to the wastewater treatment facility and the power needed to cleanse it. Of course you’re not going to get the full measurement of impact there because you’re not necessarily taking into account the chemical aspect of it, but at least you’re capturing the power portion of it.
A: So we have our own view of this and we call it TRUE, so ‘total resource usage effectiveness’. And the idea behind that is, we’re really trying to capture a broader view and keep the simplicity of PUE without adding too much complexity to it. And honestly, it’s not an easy problem to solve, to keep it as simple and as useful as PUE has proven to be over its lifecycle.
you have the time measurement aspect of PUE. Data centers get away with planting a flag that says its overall PUE is, say, 1.25 because it’s averaged over a full year. So utilizing some of the metrics from a time based perspective that are already in PUE would help us get a better grasp on the right choices for a given location or workload.
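To make the "WUE from a pure PUE perspective" idea concrete, here is a minimal sketch using hypothetical annual figures. It simply folds the water-chain energy listed above (processing, pumping to the site, returning and cleansing the waste water) into a PUE-style overhead ratio; it is an illustration of the idea, not a published metric.

```python
# Illustrative sketch of the idea above, not a published metric.
# All figures are hypothetical annual totals in MWh.
it_energy         = 50_000   # energy delivered to IT equipment
facility_overhead = 12_500   # cooling, power distribution, lighting, etc.
water_treatment   = 600      # processing the water to make it semi-potable
water_pumping     = 900      # pumping supply water to the facility
wastewater_return = 400      # pumping back and cleansing the waste water

conventional_pue = (it_energy + facility_overhead) / it_energy
water_aware_pue = (it_energy + facility_overhead + water_treatment
                   + water_pumping + wastewater_return) / it_energy

print(f"Conventional PUE:             {conventional_pue:.3f}")
print(f"Including water-chain energy: {water_aware_pue:.3f}")
```

As noted above, this still misses the chemical side of water treatment, but it captures the power portion of the picture.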
Q: In terms of these metrics, PUE has been the star of the show for a while now. Would it be safe to say PUE alone is no longer enough, particularly as facilities continue to grow?

A: I think it definitely has an encore performance in it. One of the things that would make a significant improvement would be to add a fourth level that looks at the efficiency of the IT equipment itself, because right now PUE essentially stops at the cord into the gear. That's one way we could extend PUE, and it would force a more cooperative effort around efficiency that we don't see today.

Then you have the time measurement aspect of PUE. Data centers get away with planting a flag that says their overall PUE is, say, 1.25 because it's averaged over a full year. So utilizing some of the metrics from a time-based perspective that are already in PUE would help us get a better grasp on the right choices for a given location or workload.
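The time-averaging point is easy to demonstrate. The sketch below uses hypothetical monthly figures: the annual ratio lands on the flagship number, while the hottest months tell a different story.

```python
# Illustrative sketch with hypothetical monthly data: a flagship annual PUE of
# 1.25 can hide much less efficient operation in the hottest months.
monthly = {
    # month: (IT energy MWh, total facility energy MWh)
    "Jan": (4000, 4720), "Feb": (3800, 4480), "Mar": (4000, 4760),
    "Apr": (4100, 4920), "May": (4200, 5160), "Jun": (4200, 5420),
    "Jul": (4300, 6090), "Aug": (4300, 6130), "Sep": (4200, 5380),
    "Oct": (4100, 4960), "Nov": (4000, 4760), "Dec": (4000, 4720),
}

annual_it = sum(it for it, _ in monthly.values())
annual_total = sum(total for _, total in monthly.values())
print(f"Flagship annual PUE: {annual_total / annual_it:.2f}")

worst_month = max(monthly, key=lambda m: monthly[m][1] / monthly[m][0])
it, total = monthly[worst_month]
print(f"Worst month ({worst_month}) PUE: {total / it:.2f}")
```

A time-resolved view of the same data is what makes it possible to judge whether a given location or workload is the right choice, which is the extension being argued for here.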
Q: Overall, we can't improve what we aren't measuring. Are there any new, more effective methods of measuring our resource consumption we should be looking at to maximize operational efficiency?

A: So we have our own view of this and we call it TRUE, for 'total resource usage effectiveness'. The idea behind it is that we're really trying to capture a broader view and keep the simplicity of PUE without adding too much complexity to it. And honestly, it's not an easy problem to solve, to keep it as simple and as useful as PUE has proven to be over its lifecycle.

One of the things we've not really talked about is the efficiency and sustainability value of taking up less physical space. Every time I see an announcement for a new data center, I'm still blown away by the fact they come out and say it's 850,000 square feet of space. That's a horrible use of space. What we should be talking about is: what is the efficiency footprint of this?

We need this total resource approach so that, rather than boasting about square footage, we can say things like 'we shaved 40 percent off the steel usage' or 'we utilized 20 percent more concrete and did carbon sequestration on that concrete' - you know, the things that ensure we have that closed loop around what we're building.
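Nautilus does not spell out the TRUE formula in this interview, so the sketch below is only an illustration of the idea being described: several resource streams, each reported as a single PUE-style ratio against the IT output, rather than a square-footage boast. Every input is hypothetical.

```python
# Purely illustrative sketch - not the TRUE formula. It shows the idea of a
# "broader view with PUE-style simplicity": one simple ratio per resource.
# All inputs are hypothetical annual totals.
it_energy_mwh    = 50_000    # energy delivered to IT equipment
total_energy_mwh = 62_500    # everything the site draws
water_m3         = 120_000   # site water consumption
steel_tonnes     = 3_500     # embodied steel in the build, annualized
floor_area_m2    = 15_000

avg_it_mw = it_energy_mwh / 8760   # average IT load over the year

ratios = {
    "energy overhead (PUE-like)": total_energy_mwh / it_energy_mwh,
    "water, litres per IT kWh":   water_m3 * 1000 / (it_energy_mwh * 1000),
    "steel, kg per IT MWh":       steel_tonnes * 1000 / it_energy_mwh,
    "floor space, m2 per IT MW":  floor_area_m2 / avg_it_mw,
}
for name, value in ratios.items():
    print(f"{name}: {value:,.2f}")
```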
Q: So, how do you ensure the economic value that's created is part of the sustainability equation?

A: They're not at loggerheads, and there's not a tradeoff between the two. It's about how you move it to the forefront and make your decisions from a design, engineering and construction perspective, looking at those things to decide: how do I build the most efficient, most circular economic structure that I can? And there is economic value in doing that. There is ROI in that.

Q: Although sustainability is important, for businesses the bottom line will always be the top priority. What in your opinion is the key to the industry moving forward in a way that strikes a balance between economic advantage and environmental responsibility?

A: Everyone talks about trying to strike a balance. I almost think that what that image sends is that you've got two competing interests, and that's the wrong way to look at it. They're not really competing interests. We need to look from an economic value perspective, because we know that's what's going to drive all decisions.

So I believe framing the conversation as trying to strike a balance is part of the problem. Ensuring we're doing things in a circular manner, with sustainability as part of that, will ultimately drive the innovations and economic value that makes decisions easier.

Panel > Broadcast: When building green at scale, what type of innovation is needed? Click to watch.
Heat rejection into water: The ultimate liquid cooling

Dissipating data center waste heat into bodies of water, rather than the atmosphere, can provide efficiency benefits year round

Jacqueline Davis, Uptime Institute

Uptime Institute's data on power usage effectiveness (PUE) is a testament to the progress the data center industry has made in energy efficiency over the past 10 years. However, the global average PUE has largely stalled at close to 1.6 since 2018, with only marginal gains. This makes sense: for the average figure to show substantial improvement, most facilities would require financially unviable overhauls to their cooling systems to achieve notably better efficiencies, while modern builds already operate near the physical limits of air cooling.

Looking to liquid

A growing number of operators are looking to direct liquid cooling (DLC) for the next leap in infrastructure efficiency. But a switch to liquid cooling at scale involves operational and supply chain complexities that challenge even the most resourceful technical organizations. Uptime is aware of only one major operator that runs DLC as standard: French hosting and cloud provider OVHcloud, which is an outlier with a vertically integrated infrastructure using custom in-house water cold plate and server designs.

When it comes to the use of liquid cooling, an often-overlooked part of the cooling infrastructure is heat rejection. Rejecting heat into the atmosphere is a major source of inefficiencies, manifesting not only in energy use, but also in capital costs and in the large reserves of power held for worst-case (design day) cooling needs.

A small number of data centers have been using water features as heat sinks successfully for some years. Instead of eliminating heat through water towers, air-cooled chillers or other means that rely on ambient air, some facilities use a closed chilled water loop that rejects heat through a heat exchanger cooled by an open loop of water. These cooling designs using water heat sinks extend the benefits of water's thermal properties from heat transport inside the data center to the rejection of heat outside the facilities.
The idea of using water for heat rejection, of course, is not new. Known as once-through cooling, these systems are used extensively in thermoelectric power generation and manufacturing industries for their efficiency and reliability in handling large heat loads. Because IT infrastructures are relatively smaller and tend to cluster around population centers, which in turn tend to be situated near water, Uptime considers the approach to have wide geographical applicability in future data center construction projects.

Figure 1: Schematic of a data center once-through system

Uptime's research has identified more than a dozen data center sites, some operated by global brands, that use a water feature as a heat sink. All once-through cooling designs use some custom equipment - there are currently no off-the-shelf designs commercially available for data centers. While the facilities we studied vary in size, location and some engineering choices, there are some commonalities between the projects.

Operators we've interviewed for the research (all of them colocation providers) considered their once-through cooling projects to be both a technical and a business success, achieving strong operational performance and attracting customers. The energy price crisis that started in 2021, combined with a corporate rush to claim strong sustainability credentials, reportedly boosted the level of return on these investments past even optimistic scenarios.

Rejecting heat into bodies of water allows for stable PUEs year-round, meaning that colocation providers can serve a larger IT load from the same site power envelope. Another benefit is the ability to lower computer room temperatures, for example to 64°F to 68°F (18°C to 20°C), for "free": this does not come with a PUE penalty. Low-temperature air supply helps operators minimize IT component failures and accommodate future high-performance servers with sufficient cooling. If the water feature is naturally flowing or replenished, operators also eliminate the need for chillers or other large cooling systems from their infrastructure, which would otherwise be required as backup.

Still, it was far from certain during design and construction that these favorable outcomes would outweigh the required investments, as all the undertakings involved nontrivial engineering efforts and associated costs. Committed sponsorship from senior management was critical for these projects to be given the green light and to overcome any unexpected difficulties.
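The capacity point above is simple arithmetic, sketched below with hypothetical numbers: because a site has to reserve power for its worst cooling day, a lower and more stable PUE frees more of a fixed utility feed for sellable IT load.

```python
# Illustrative arithmetic (hypothetical site): IT capacity available from a
# fixed power envelope at different design PUEs.
site_power_mw = 30.0   # fixed utility feed available to the site

for pue in (1.6, 1.4, 1.2, 1.1):
    it_capacity_mw = site_power_mw / pue
    print(f"Design PUE {pue:.1f}: ~{it_capacity_mw:.1f} MW of IT load from the same feed")
```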
We predict a more mature market

Encouraged by the positive experience of the facilities we studied, Uptime expects once-through cooling to gather more interest in the future. A more mature market for these designs will factor into siting decisions, as well as jurisdictional permitting requirements as a proxy for efficiency and sustainability. Once-through systems will also help to maximize the energy efficiency benefits of future DLC rollouts through "free" low-temperature operations, creating an end-to-end liquid cooling infrastructure.
It pays to be different

Exploring the difference of Nautilus' water heat sink technology in the data center

"So, you're just liquid cooling, right?" ... "What's the difference between your technology and direct-to-chip or immersion?" ... "Maybe you're a little different, but do those differences matter?"

As we've talked to potential colocation customers and data center industry experts, we're frequently asked how the Nautilus cooling technology is different from "liquid cooling" in a data center. And we completely get it - we've introduced new technology, and new approaches to existing problems can be challenging to communicate. But we think we can do it in a simple way.

Cooling the data center

First, let's start by talking about the differences between heat rejection at a micro level and a macro level. When we're thinking about micro-level heat rejection, we're exploring all the ways to get heat out of data center devices - servers, storage, networking, and the like. At the macro level, we're thinking about ways to remove heat from a data hall. In both cases there's an energy transfer medium, like air or water, and the medium could be the same at both micro and macro levels.

Traditionally, data centers cool at both the micro and the macro level with air.

On the micro side, take an individual server as an example. Modern servers contain multiple processors, solid-state drives, RAM modules, and high-speed networking devices. The semiconductors in a server are typically designed to operate at ambient temperatures between 0C and 70C, with custom heat sinks to extract the heat produced by each device and multiple fans to produce the airflow that removes the heat and keeps the devices within their design parameters. Without chilled air at the front of the server, the server fans would be unable to cool the devices, which would rapidly overheat and shut down. Within the server room, all the heat we can see (and feel) coming off the servers, storage and network gear would cook the room (and the hardware) without chilled air.

That brings us to the macro side. Somewhere in the building, a computer room air conditioner (CRAC) is blowing cooled air into the server room. There it's drawn in by the server fans, blown over the hot components, heated, ejected from the servers and the rack, and finally brought back to the CRAC. The hot air from server rooms throughout the building is brought into the CRACs and, using a heat exchanger and compressed refrigerant, the heat is pushed out of the building using more air. There's a loop for cool air to and from the server rooms, and another loop for the hot air that's ejected from the building. Another macro option is a CRAH, which takes chilled incoming air from the evaporative chiller and pushes it through the data hall; the warm air is simply expelled, or pulled out through the roof using fans.

With liquid cooling, the micro side changes: direct-to-chip cold plates or immersion tanks pick the heat up from the components in a liquid rather than in air. At the macro level, data centers with liquid cooling combine all the micro-level liquid from across the data center systems and pump it to a central location for cooling. But here's where they often make a mistake. Most of them transfer the heat in the cooling liquid to air, using a heat pump, into a refrigerant loop, and then pump hot refrigerant out of the building, blow air over it, and expel masses of hot air out of the building. Others pump the heat, using refrigerant, to enormous evaporative coolers that take potable water, pump heat into it, and then evaporate it into the atmosphere, carrying the heat away.

Those phase changes from liquid to air are inefficient, require extra power, and force designs that use very large amounts of exhaust air or water to cope with the concentrated heat in the liquid (remember, liquid holds thousands of times more heat than air can).
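That parenthetical claim is worth a quick physics check. The sketch below uses textbook property values rather than Nautilus or DCD figures, and compares how much air versus water has to be moved to carry the same heat at the same temperature rise.

```python
# Rough physics check with textbook property values (not vendor figures).
rho_air, cp_air     = 1.2, 1005      # kg/m3, J/(kg*K) at roughly 20C
rho_water, cp_water = 998.0, 4186    # kg/m3, J/(kg*K)

vol_heat_air   = rho_air * cp_air        # J per m3 per K (~1.2 kJ)
vol_heat_water = rho_water * cp_water    # J per m3 per K (~4.2 MJ)
print(f"Water carries ~{vol_heat_water / vol_heat_air:,.0f}x more heat per unit volume")

heat_w, delta_t_k = 1000.0, 8.0          # remove 1 kW with an 8 K temperature rise
air_flow_m3s   = heat_w / (vol_heat_air * delta_t_k)
water_flow_m3s = heat_w / (vol_heat_water * delta_t_k)
print(f"1 kW at an 8 K rise needs ~{air_flow_m3s * 1000:.0f} l/s of air "
      f"or ~{water_flow_m3s * 1000 * 60:.2f} l/min of water")
```

On a volume basis the gap is in the thousands, which is why air-side designs end up moving such large volumes of exhaust air.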
So is there a way to improve? We believe that's where Nautilus comes in. We do liquid cooling at both micro and macro levels, and we keep all the liquids liquid.

How the water cooling system works

Nautilus works at the macro level by offering water-based heat sink cooling, and that's a unique approach that gives us distinctive advantages over any other data center cooling method. Instead of taking heat from a liquid and putting it into the air, we take heat from a cooling liquid and put it into water using a heat exchanger. Then we pump the slightly warmer water (about 4-6 degrees Fahrenheit warmer, in our design) into a natural body of water, a river, or a sea. We use natural water as a data center-sized heat sink.

And while the ideal (most efficient) way is to keep both micro and macro heat in liquid, Nautilus works with any micro-level cooling technology. We can take micro-level air and put that heat into water. We can take direct-to-chip liquid and put that heat into water. We can take heat from immersion-cooled servers and put that heat into water.

Video: How does the Nautilus cooling system work?

What are the advantages of the Nautilus method?

1. We ideally keep heat in a liquid all the time, but always at the macro level.
2. We avoid the inefficiency of phase changes. We don't have to convert a liquid into a gas like a heat pump or an evaporative chiller does.
3. We don't blast very hot air out of a data center.
4. We don't waste water by converting liquid water into water vapor.
5. We don't use drinking water for evaporative cooling. We can use greywater, discharge from water treatment plants, or natural water, including saltwater.
6. Our approach is as thermodynamically efficient as possible, with subsequent cost and reliability efficiencies.

The Nautilus approach also has another advantage. With CRACs, CRAHs, and other technologies, it's usual to push or pull air over dozens of feet. With immersion cooling, it is usual to pump the viscous hydrocarbons used over hundreds of feet before getting them to a heat exchanger. Our designers put the heat exchanger as close to the heat source as possible, cutting the energy needed to pump micro-level air, glycol, or oil.

We also have within-data center cooling advantages that air can't match. For example, in one of our data centers, we supply water that's 70 degrees F to a 15-kilowatt rack and cool it. At that point, the water is 85F, which is still cool enough to cool another rack.
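The flow rates implied by those temperatures follow from a simple energy balance (heat = flow x heat capacity x temperature rise). In the sketch below the rack load and the temperature rises come from the article, the water properties are standard values, and everything else is assumed.

```python
# Back-of-envelope flow rates from Q = m_dot * c * dT (standard water properties).
CP_WATER = 4186.0     # J/(kg*K)
RHO_WATER = 998.0     # kg/m3

def water_flow_lps(heat_w, delta_t_f):
    """Litres per second of water needed to absorb heat_w watts with a
    temperature rise of delta_t_f degrees Fahrenheit."""
    delta_t_k = delta_t_f * 5.0 / 9.0
    kg_per_s = heat_w / (CP_WATER * delta_t_k)
    return kg_per_s / RHO_WATER * 1000.0

# A 15 kW rack with water in at 70F and out at 85F (a 15F rise)
print(f"15 kW rack: ~{water_flow_lps(15_000, 15):.2f} l/s of water")

# Rejecting 1 MW of heat into a natural water body at a 4F rise
print(f"1 MW at a 4F rise: ~{water_flow_lps(1_000_000, 4):.0f} l/s of water")
```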
Our approach minimizes the environmental impact in a way other technologies can't. All the other mechanical cooling approaches use large amounts of water, produce large amounts of hot air, and typically use polluting coolants. Our approach makes natural water four degrees warmer than it was. That's it.

So you've seen the advantages of what we can do, but the critical takeaway is a simple one: most people who claim to be doing liquid cooling are only using liquid for part of the equation. We do it all. Our water heat sink tech makes a difference.
Conclusion

From this eBook it is clear to see that big data centers are here to stay, and will be a staple of our digital future if our demand for data is to be satiated.

But new challenges are bringing about new solutions, and as the industry has always done, it will continue to adapt through new innovations and ideas.

Scaling up sustainably is possible, but in order to achieve it, it needs to be a collaborative process as opposed to a competitive one, sharing what we've learned along the way.

And new technology is only the tip of the iceberg; it's not only how and what we build that needs to change, but how we think. If we can move away from the traditional methods standing in the way of sustainable success at scale, there's no limit to what we can achieve.

Scale>On Demand

DCD>Talks scale with Mark Flanagan, Kirby - Click here to see the full presentation
DCD>Talks scale with Ian Wilcoxson, Kohler - Click here to see the full presentation