DCD eBook: Hybrid IT


> eBook

Hybrid IT

Why having the best of both worlds could be the best way forward
Are you ready to expand your infrastructure in the Nordics? Do it right with Sweden's first climate-neutral colocation provider. Learn more at www.conapto.com

Contents

In recent years the market has been awash with "cloud first" messaging, suggesting that migrating as much of your IT infrastructure as possible to the cloud is the best way to go. However, the reality looks somewhat different.

Different organizations have different requirements. Some applications simply aren’t tailored for cloud consumption at all or have security implications hindering such deployments.

And with the need for speed so prevalent today, applications requiring low latency may be better suited to Edge deployments in on-premises or private cloud colocation solutions.

It's time to dispel the myth that companies must first and foremost apply a strategy for moving everything to the cloud, and instead view it from an actual need-based perspective. Organizations need to be able to adjust their infrastructure as needed. Enter hybrid IT.

In this eBook we delve into the what, why and how of hybrid IT, who's taking advantage, sustainable solutions, and why having the best of both worlds could be the best way forward.

04 Chapter one: What is hybrid IT?

05 The hybrid hype

10 Success at the Edge

12 DCD>Broadcast series: Planning for hybrid IT

13 Chapter two: Making the change

14 How Dropbox pulled off its hybrid cloud transition

16 Are banks ditching their data centers?

20 Enterprise healthcare considers a cloud shift

22 Chapter three: Colocation and hybrid IT

23 The rebirth of colocation

28 Why the colocation craze is poised to continue

30 Hybrid cloud strategies drive customers to colocation data centers

32 Chapter four: Climate change and hybrid IT

33 Count your carbon

36 Panel: How is the world of colocation adapting to carbon-negative 'hyperscaler' pledges?

37 Panel: How do you know how well your cloud and edge data centers are really doing when it comes to sustainability?


Chapter 1: What is hybrid IT?

Hybrid IT - also referred to as hybrid cloud - combines the delivery of enterprise applications, data and services to accelerate innovation and keep the business running. Spanning both on-premises and off-premises environments, it encompasses people, processes, and technology in a data center, private cloud, public cloud or at the Edge of the network.

In a hybrid IT environment, enterprises often blend Capex, Opex, as-a-service and pay-per-use consumption models. Consequently, a hybrid IT model enables organizations to lease a portion of their IT resources from a public or private cloud service provider.

A hybrid approach empowers organizations to provision IT resources from the cloud - gaining the cost effectiveness and flexibility offered by cloud vendors - while retaining full control over specific resources that they might not want to expose to the cloud.

In this chapter we focus on the advantages of going hybrid, the solutions available today, how to plan for the transition, and take a look at a hybrid success story.


The hybrid hype

The advantages of a hybrid approach

The word hybrid is defined as 'something that is a combination of two different things', generally taking the best of each 'thing' so that they work in harmony to ultimately create something better.

In recent years, a lot of good has come from a hybrid approach. For instance, we now have hybrid cars, hybrid working, hybrid dogs - although if someone could explain to me how anatomically Pomskys (a combination of a Siberian Husky and Pomeranian) are created, it would answer a lot of questions.

Designer dogs aside, when we go hybrid in terms of IT (also known as hybrid cloud) we create a more flexible environment geared towards the challenges operators face today. Hybrid IT also allows companies to migrate systems at their own pace, acting as a gateway into new and emerging technologies. And there are significant benefits to this approach.

Security, control and data integrity

First and foremost, let's get security covered. It's a tale as old as time that operators, for data security and control reasons, tend to fear the cloud, or at least approach it with a healthy dose of trepidation. But at the risk of sounding like a Marks and Spencer's advert: this isn't just cloud, this is hybrid cloud.

Maintaining control of data is extremely important. However, it tends to become a bit more challenging when the cloud is involved. You will still need to secure your applications, ensure your databases are encrypted and limit access.

This might feel like stating the obvious, but in fact some of these security practices are not activated by default in some cloud environments - meaning that administrators need to go into their cloud solutions and actively enable them.
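To make that concrete, here is a minimal, hedged sketch - assuming an AWS environment, the boto3 SDK and a hypothetical bucket name - of the kind of explicit calls an administrator makes to turn such controls on rather than assume them:

```python
# Sketch only: explicitly enabling security controls on an S3 bucket with boto3.
# Assumes AWS credentials are already configured; the bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")
bucket = "example-finance-archive"  # hypothetical bucket

# Require server-side encryption (AES-256) for objects written to the bucket.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Block every form of public access to the bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```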

Working with hybrid IT allows you to make informed decisions on which assets would benefit from being stored in the cloud and which wouldn’t. It also allows you to keep tabs on all of your applications and the flow of data between cloud and on-premises resources.

Bridging legacy into the next generation

In some cases, the migration of legacy applications to the cloud is predicted to be so complex, difficult, and costly that the decision is made to leave them be.

It's in situations like these that you may find the application in question is still bringing you value, operating quickly and even supporting your agile operations - and just because a cloud migration is possible doesn't mean it's beneficial.

In that scenario, it might be efficient to leave the application on-prem or in a colocation data center. However, hybrid IT allows you to be ready in case a rapidly evolving technology landscape forces you to move the asset - or a substitute for it - to the cloud further down the line.

Claire Fletcher DCD

Efficiency, latency sensitivity and end-user productivity

We’ve learned that not everything belongs in the cloud. This is the main reason why technologies around the Edge and distributed computing are gaining traction.

A good hybrid IT strategy will take the Edge, your data center, your colocation partner and the cloud into consideration. And above all, it will allow you to ease into new concepts like Edge without having to move all your workloads at once.

Compliance, governance and risk mitigation

Hybrid IT models enable you to move data at your own pace, whether it’s a legacy or modern asset. Geopolitical challenges, as well as geographical distribution of resources, can be challenging when working with a cloud-only model.

However, hybrid IT allows you to leverage existing systems, even if they are part of a legacy estate, alongside more modern solutions. So, unlike a hybrid cloud scenario, hybrid IT will enable you to leverage cloud-like services and continue to use your existing systems while still planning for the future.

Multi cloud or single cloud?

Regardless of whether you use a multi cloud strategy - or have chosen to gather everything at one single cloud service provider - the need for a cloud connected data center is key. Freedom of choice and flexibility are both driving forces for a multi cloud strategy, while safety and lower complexity speak for a single cloud strategy. Common to both is the need for a high-performing, secure connectivity solution.

Single cloud environment

A single cloud environment uses one single cloud provider to deliver all applications or services that an organization decides to migrate to the cloud. Single cloud environments can be built on either private or public clouds - whichever better serves the organization's current and future needs.

It enables organizations to move workloads to the cloud as their demand grows, with the option to expand the number of virtualized servers if their need increases beyond a single cloud server’s limit. Often, organizations with a single cloud model are using the cloud for a single service or application, such as email, enterprise resource planning (ERP), customer relationship management (CRM) or similar.

How does single cloud work?

Organizations that employ a hybrid model combining either a private or public cloud with an on-premises infrastructure also fit into a single-cloud environment. Hybrid clouds that utilize both public and private clouds would be considered a multi-cloud environment if the private cloud is served by a different provider.

Additionally, Infrastructure-as-a-Service cloud environments may also be considered a single cloud if they utilize either a private or public cloud offering by an IaaS provider.

A single cloud environment may be more fitting for smaller or less technically complex organizations that would like to gain the many benefits of the cloud without risking it becoming overwhelming. It can also be a great starting point for start-ups with plans to grow in the future, but who use a narrower scope of cloud resources for the time being.

Multi cloud environments

Multi cloud is the use of multiple cloud computing and storage services in a single distributed architecture. Multi cloud also refers to the distribution of cloud assets, software, and applications across several cloud environments - using multiple cloud computing platforms to support a single application or ecosystem of applications that work together in a common architecture.

Multi cloud can include multiple public cloud providers, on-premises environments, private cloud infrastructure with a public cloud provider (hybrid cloud) or a combination of the above.

Not everything belongs in the cloud. This is the main reason why technologies around the Edge and distributed computing are gaining traction

How does multi cloud work?

There are various architectural approaches to multi cloud. You can build different portions of an application stack in different clouds, with each portion accessing different systems and services that are required to work together. The intelligence in such an approach is often built into the application itself rather than on the infrastructure side of the stack.

In another scenario, the same application might be required to run in more than one cloud, where few (if any) code changes would be required for the different physical locations.

Although this approach used to be challenging to accomplish, modern container orchestration, such as Kubernetes, has made application portability across different clouds, both public and on-premises, far more feasible.
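As a small illustration of that portability - assuming the official Kubernetes Python client, two hypothetical kubeconfig contexts (one per cloud) and a hypothetical image name - the same deployment can be pushed to clusters in different environments without touching the application itself:

```python
# Sketch: deploying one container image to two clusters in different clouds
# by switching kubeconfig contexts. Context names and the image are hypothetical.
from kubernetes import client, config

def deploy(context: str) -> None:
    config.load_kube_config(context=context)  # select the target cluster
    apps = client.AppsV1Api()
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(name="web", image="registry.example.com/web:1.0")
                    ]
                ),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)

# Hypothetical contexts pointing at a public cloud cluster and an on-premises one.
for ctx in ("aws-cluster", "onprem-cluster"):
    deploy(ctx)
```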

Why multi cloud matters

There are many reasons to consider implementing a multi cloud architecture:

• Building an application to utilize best-in-class cloud services for specific functions across two or more public cloud providers.

• Using two or more public cloud providers to reduce risk or cost.

• Avoiding vendor lock-in.

Furthermore, many multi cloud environments still involve an on-premises architecture component. This approach is typically for economic, regulatory or technical reasons related to the accessibility of ancillary systems that were previously built to run in the data center.

Multi cloud solutions are sometimes inherited within an organization. For example, separate teams might have made different architectural decisions and then come together after an M&A or following a decision to integrate two autonomous applications.

In these situations, there is often a lack of cohesiveness which makes an integration challenging. It is important to partner with an open, agnostic vendor who can help solve this problem and create a forward-looking hybrid multi cloud strategy.

Benefits of a multi cloud architecture

Organizations whose cloud environments incorporate a full breadth of enterprise capabilities will gain competitive advantage. Advantages follow the delivery of a consistent hybrid multi cloud experience based on frictionless consumption, self-service, automation, programmable APIs and infrastructure independence. This advantage ensures that customers can thrive by unleashing agility and latent abilities in their own organizations. Given a well-designed and smoothly executed multi cloud strategy, many business benefits can be gained, including but not limited to:

• A common, integrated experience across multiple public cloud and private cloud environments.

• Deployment and management automation to efficiently meet disparate needs in a timely fashion.

• Simplified, trackable end-user consumption of IT resources across multiple clouds, often including self-service.

• Flexible access to best-in-class public cloud services from multiple providers, including Amazon Web Services, Microsoft Azure, Google Cloud, IBM Cloud or Oracle Cloud.

Demand for cloud connected data centers

Today's CIOs have changed their approach toward solutions that are tailored for their specific needs. They want to design and deliver with customer value top of mind - and with the ability to scale as business grows.

In this context, speed, scalability, and connectivity are key factors. A multi cloud solution from a cloud connected data center can play a crucial role for success. With hybrid multi cloud, the options can be kept open, and services or resources can be deployed when needed.

When a company needs new features or resources in its cloud, they can be deployed with flexibility and speed. This will boost innovation as well as competitiveness, as all features and resources of public cloud from any provider are available if needed.

In hybrid multi cloud solutions, an optimal cloud environment can be obtained without the customer being forced to pay for anything they do not actually need.

The new generation of colocation data centers are the cloud connected ones. They offer the platform for a hybrid IT solution with a mix of private and public clouds. A great data center colocation provider also has the know-how to provide a modern connectivity solution - as well as to build the robust, redundant and secure data center colocation solution needed.

Regardless of whether you use a multi cloud strategy - or have chosen to gather everything at one single cloud service provider - the need for a cloud connected data center is key

Conapto's cloud connected data centers

The perfect platform for your cloud connected data center

Like any kind of construction, when building your cloud connected data center, you first need a solid foundation.

Conapto's secure, scalable and climate neutral data center colocation with 100% availability - together with our CloudHub solution - is the perfect platform for building your cloud connected data center.

This solution enables you to safely place your on-premises or private cloud infrastructure in one or two interconnected colocation data centers and at the same time ensure private high-performance connections to public cloud providers of your choice.

This is the hybrid IT solution that enables you to be agile and innovative - and to seamlessly move between private cloud, public clouds and SaaS without lock-in and long agreements.

Conapto CloudHub

Mix and match your cloud applications by connecting to one or multiple of the world's leading providers and access the best-of-breed services. You can securely connect to multiple cloud regions from a single interconnection point with the security and integrity you need.

CloudHub is a secure cloud access service which enables private connections to multiple cloud providers. CloudHub gives you a private highway to the cloud, enabling secure and high-performance interconnections to multiple cloud providers like Microsoft Azure, Amazon Web Services, Google Cloud, IBM Cloud and Oracle Cloud.

CloudHub provides a Layer 2 Ethernet connection between the customer port in Conapto's infrastructure and Conapto's port that connects to the cloud provider's infrastructure. All traffic from a customer port to the corresponding cloud supplier is transported via Conapto's MPLS backbone.

• One customer port for transport of chosen VLAN capacity. Two-port redundancy can be ordered as an add-on service.

• One or several VLANs that connect via the customer port to the cloud provider with individual capacity.

Allow your cloud services to perform better by building your network architecture on a foundation of dedicated connectivity in a cloud connected data center. Whatever the strategy, Conapto CloudHub gives you the power, agility and speed for connecting your resources and doing smarter business in the cloud – all over the world.

Regardless of whether you have a multi cloud strategy or have chosen to gather everything at one cloud service provider, you need a cloud connected data center solution - Conapto CloudHub offers the perfect platform for secure high-speed connectivity to the cloud.

[Map: CloudHub connection latencies to cloud locations including Paris, Oslo, Slough, London, Hong Kong, Ashburn, Chicago, Dallas, San Jose, Amsterdam, Frankfurt, Seattle, Houston, Los Angeles and Hillsboro, ranging from 8ms to 194ms]

THE GATEWAY TO THE CLOUD
Sandhamnsgatan 63A, 115 28 Stockholm
+46 8 666 32 00 | contact@conapto.se
www.conapto.com

Success at the Edge

How retail giant Ahold Delhaize embraced the Edge - after filling its stores with a growing collection of incompatible IT

Seb Moss DCD

Ahold Delhaize had a problem. Across thousands of stores in Europe, its IT footprint was getting out of hand.

In a single store, one could find a server for its own internal applications, another for external parties and checkout counters, yet one more for self-scanning devices, still more for the car park management system, firewalls, and further infrastructure for public and private Wi-Fi.

And that's just in Delhaize's own self-branded stores. After decades of mergers and acquisitions, the company has numerous subsidiaries and franchises, including the US-based Food Lion, Giant Food, and Stop & Shop.

"Some of these franchises might have an affinity with IT and have bought the latest, greatest beautiful servers in a high availability setup, and others were still running 12-year-old servers, using them as a coffee plate, and sometimes spilling coffee over them," Johan Pellicaan, Scale Computing VP and MD for EMEA, said in a DCD>Inside Retail & Logistics panel.

The result was a lot of finger-pointing between different vendors and divisions when something went wrong, extended downtime, and security concerns. Ultimately, however, Delhaize could not simply get rid of all the troublesome IT and move it all to the cloud.

“We need IT because some of our applications in the stores needed to have an Edge computing solution because of the design of the application,” Delhaize’s IT infrastructure manager, Frédéric Paulet, explained.

“And because of the latency - if you talk about POS (point of sale), and people are scanning items on our cash register, then you need to have a very fast time of response. So we had to choose to keep all the applications running in the store.”

Rolf Vanden Eynde, manager of network, strategic infrastructure, at the Dutch company, added: “Delhaize needed to rapidly deploy resilient in-store infrastructure to support existing workloads as well as new data and processing-intensive initiatives such as cashier-less checkout and customer safety and security measures.”

Facing this challenge, Delhaize called on the tech community for help. After a lengthy tender process, Delhaize settled on a solution co-developed with Scale Computing and Lenovo, which would go on to win a DCD Award in 2021.

“The concept is very interesting - what Delhaize has created is what they call the ‘One-Box,’” Pellicaan said. “Basically it’s a complete, integrated, system with three servers and a firewall.”

He continued: "They use off-the-shelf two-thirds rack servers, which was a key element also because it helped Delhaize to save space. And they have physical switches in there."

For the system, the company moved from a physical firewall to a virtualized one offered by F5 Security.

After an eight-month trial at a limited number of stores, Delhaize in 2018 began to roll it out across the hundreds of stores that carry its own brand name. Then, in 2020, came a much greater challenge - trying to get it to work with the thousands of myriad franchise stores around the world.

In many cases, space was even more of an issue, with a two-thirds rack still too large. The team also realized that there was no need to have a traditional server, because there was no one on-site who could service or replace it anyway.

“As a result of this, the One-Box Level Two came out,” Pellicaan said. “And that contained a number of new elements.”

Notably, it shifted to the Intel NUC platform, essentially a tiny PC crammed into a small, contained box. "It basically means a completely new way of thinking about service and setup," Pellicaan explained. "It's specially designed, I would say, for the Edge where size is important.

"And when one breaks, we just put a new one in and send the other for warranty exchange. It's a very easy setup, completely different, which makes the lives of a lot of people filled with much fewer headaches."

There are other benefits he rattles off - lower power usage, fewer material emissions given the smaller size, and cheaper shipping. These were all critical factors in deciding to take this approach, Delhaize’s Paulet said. “We are always driven by the cost, that was really the main point.”

This cost-cutting focus also meant that the Level Two went back to physical firewalls.

“F5 was an expensive solution and also it needed people with a good knowledge, and it was really a mess to find people who are able to manage the solution,” Paulet recalled, adding that there were also issues where the virtual machine went down and took the service offline. “So we looked for another solution, looking for something cheap, then cheaper, and cheaper.”

They settled on a Fortinet virtual firewall, but then noticed that the physical version was even cheaper. It also allowed for another cost-cutting measure - removing the two switches found in the Level One, and just using the physical firewall.

“The two drivers were cost and stability,” Paulet said.

The new One-Box is now making its way through Delhaize's sprawling network of retail outlets, bringing a modern Edge to some stores that date back to the 1800s.

Delhaize needed to rapidly deploy resilient in-store infrastructure to support existing workloads as well as new data and processing-intensive initiatives such as cashier-less checkout and customer safety and security measures
> Rolf Vanden Eynde Delhaize

DCD>Broadcast series: Planning for hybrid IT

After all, a goal without a plan is just a wish

As the dust settles, the post-pandemic business value of a hybrid IT environment has come into sharp focus for the modern enterprise. With the ability to rapidly deploy infrastructure in a third-party facility and scale with ease in the public cloud, whilst maintaining control over confidential proprietary data on-premise, it is no surprise that firms are doubling down on their efforts to move workloads between colocation and the cloud.

With the rise of AI, IoT, 5G and Edge adoption igniting the need for digital transformation within the enterprise sector, it is essential for enterprise IT and capacity planning professionals to balance the need for speed, agility, security and cost-efficiency with a focus on how their data infrastructure impacts the environment.

In this DCD>Broadcast series, we host the world’s leading voices on hybrid IT and multi-tenant computing and ask:

• Can public cloud fend off workload repatriation?

• Is there sufficient transparency around sustainability in your enterprise IT supply chain?

• How sale leaseback unlocks sustainability opportunity for major FMCG company


Chapter 2: Making the change

As discussed in chapter one, hybrid IT, AKA hybrid cloud infrastructure, is a blend of public and private cloud services in addition to on-premise resources, the combination of which is entirely dependent on the unique needs of each business.

Hybrid cloud can be used for a variety of different applications, whether it’s to separate sensitive workloads, process Big Data, expand an organization’s cloud presence or simply to cover temporary processing requirements.

The truth is, as our world becomes increasingly digitized, the application of cloud technologies is no longer optional for most organizations, as pressure to meet escalating customer demands continues to mount.

Today, hybrid IT is transforming the way we do business, and is particularly prevalent across large industries such as government, education, finance, healthcare, and retail. In this chapter we take a look at some of the industries making the change, why they're doing it and, most importantly, how.


How Dropbox pulled off its hybrid cloud transition

We explore Magic Pocket, and whether others could do the same

When file hosting service Dropbox first announced its hybrid cloud effort Magic Pocket in 2016, many saw it as a sign that the company was done with Amazon Web Services and was betting on an on-premise future. But the reality is more nuanced, said Preslav Le, former lead developer at Dropbox.

The company has always had its own data center presence, but Dropbox needed more capacity and soon grew to become a major customer of Amazon S3 (Amazon Simple Storage Service) after joining in 2013. It didn’t take long for the company to wonder whether it made more sense to do it themselves.

“We used AWS S3 because storage at scale was an extremely hard problem to solve,” Le said. “It was only a few years later, when we really believed we could tackle this problem better for our needs, that we even tried.”

The result was Magic Pocket, one of the largest data migrations off the cloud in web history. This, Le said, has allowed for significant cost savings and more control - but is not something that most other companies could easily replicate.

Over a two-and-a-half-year period, the company built its own massive on-premises platform, officially launching it in 2015. This involved a huge amount of software work - including switching from programming language Go to Rust mid-way through to reduce memory use - and getting deeply involved with the hardware to ensure that every ounce of possible storage was squeezed out of a rack.

“It’s not only the language we changed,” Le said. “We also significantly improved the architecture. We moved from using a file system to just managing the drive directly - we literally open the drive as a block device and then we have our own formats. This allowed us to gain a lot of efficiencies from avoiding the file system, but also move quite a bit faster.”
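As an illustration only - Dropbox's production system is written in Rust and is far more sophisticated - the basic idea of bypassing the file system and addressing a drive as a raw block device can be sketched in a few lines. The device path is hypothetical, and this assumes root access to a disposable disk:

```python
# Sketch: treating a raw block device as flat storage, reading and writing at
# fixed offsets with no file system in between. WARNING: overwrites disk data.
import os

BLOCK = 4096  # work in fixed-size, aligned blocks

fd = os.open("/dev/sdx", os.O_RDWR)  # hypothetical device path; requires root
try:
    payload = b"hello".ljust(BLOCK, b"\x00")
    os.pwrite(fd, payload, 10 * BLOCK)      # write block 10 directly
    data = os.pread(fd, BLOCK, 10 * BLOCK)  # read the same block back
    assert data[:5] == b"hello"
finally:
    os.close(fd)
```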

For example, the company could adopt shingled magnetic recording (SMR) hard disk drives without waiting for drivers to support them. SMR disks can be much denser by writing new tracks that overlap part of the previously written magnetic track, somewhat like overlapping roof shingles.

“This is one of the examples where we were able to work closely with hard drive companies and were able to move much faster than some other companies,” Le said. “They need to build a new file system, etc. Some of the big players still don’t use SMR.”

Seb Moss DCD

It’s all about keeping one’s options open. For the initial migration out of S3 to Magic Pocket, we built the ability to move data back and forth between the two locations. Over the years, we decided that it’s worth retaining that capability

He added: “If adopting other cloud providers made sense, we’d do that too.”

There are other areas where the cloud comes first, too. “Some workloads from our analytics and dev box and other auxiliary things, we’ve moved to the cloud, where we can allow people to move faster, and the cost is acceptable.”

The company helps design its own custom servers, cramming more and more storage into its data centers. “We replace our hardware every four years, but have at least a couple of new generations in those four years,” Le said.

“Back when we started, we worked with four terabyte drives. Now we have 20 terabytes... but we also increased the number of drives per chassis so we really increased the density quite a bit.”

By 2016, the company said that it had moved around 90 percent of its files over to on-prem, comprising four data centers. “What we’ve seen in the last couple of years is that we tend to move more things on-prem than towards the cloud for our core storage production,” Le said, but declined to share the exact percentage.

The initial move was a big risk. “Looking back, it really turned out to be a great investment for both our velocity and the business,” Le said. “Amazon and the cloud have to solve really broad problems - just imagine all the different usage patterns for S3. Our usage patterns are much simpler, and we understand them, so we can [build for them].”

So does this mean Dropbox has dropped the cloud, and is essentially an on-premises business now?

Not so, Le argues. "Magic Pocket is this very famous system, and often people say, 'what's the Magic Pocket team?' We don't have one, we have the Storage Team. The reason we call it Storage is because their job is not to do Magic Pocket.

“Their job is to provide the best, most reliable and cost-efficient storage for Dropbox. So if ever Amazon can innovate and they’re better than us, and they’re cheaper, or we can secure better deals wherever makes sense, their job is to advocate us moving the data back.”

Indeed, in places where Dropbox doesn’t have the scale, or prices differ, it still relies on S3 - including the UK, mainland Europe, Japan, and much of the non-American world. It does, however, operate its own Point of Presence network.

It’s all about keeping one’s options open, Le said. “For the initial migration out of S3 to Magic Pocket, we built the ability to move data back and forth between the two locations. Over the years, we decided that it’s worth retaining that capability.

“So if we ever decide because of supply chain issues, Covid, or whatever, that you want to spin over some capacity to S3, we can just do it with a click of a button - we don’t need to write code, you don’t need to deploy, you can literally click a button and then some data can go back.”
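A toy sketch of keeping that option open - emphatically not Dropbox's actual tooling - is a storage wrapper that keeps both back ends wired up, so a runtime flag (rather than a code change or redeployment) decides where data lands. The bucket name and the in-memory "on-prem" stand-in below are hypothetical:

```python
# Sketch: a dual-backend store where a runtime flag redirects traffic to S3.
# The dict stands in for an in-house block store; the bucket name is hypothetical.
import boto3

class DualBackendStore:
    def __init__(self, s3_bucket: str):
        self.onprem = {}                 # stand-in for the on-premises store
        self.s3 = boto3.client("s3")
        self.bucket = s3_bucket
        self.use_s3 = False              # flipped by operations tooling at runtime

    def put(self, key: str, data: bytes) -> None:
        if self.use_s3:
            self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)
        else:
            self.onprem[key] = data

    def get(self, key: str) -> bytes:
        if self.use_s3:
            return self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()
        return self.onprem[key]
```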

If you have the scale and the team, you should try to really embrace hybrid cloud

The cloud still makes sense for most businesses, Le said. “I think if you’re starting a company, just go use the cloud. Operating your own infrastructure comes with a cost.

“And the only way to justify it is if A) You have a very good understanding of the problem. B) You have the right scale - usually, that means a huge scale: with Magic Pocket we store exabytes of data. And then there’s C) Do you have the right talent?”

Dropbox is also fortunate that it is primarily a storage-focused company, so it’s hard to get locked into the cloud. Users of more specialized cloud services or databases are increasingly finding themselves trapped on platforms that are hard to extricate their workloads from.

“Sometimes vendor lock-in is okay when building a prototype. It’s a small scale, it’s not expensive, just go use AWS. But if you’re building something where your business margins are seriously affected, then you should seriously think of vendor lock-in.”

That’s why, if you have the scale and the team, “you should try to really embrace hybrid cloud,” he said.

The cost of R&D on Magic Pocket “has not been hard to sustain” since the initial flurry of investment in the shift. “There are all these other costs like hardware and data center operations but whenever we compare costs, we take all those things into account.

“Magic Pocket was a really sound investment that really paid off multiple times over.” 

> Preslav Le Dropbox

Are banks ditching their data centers?

Or are they not quite ready to let go?

Georgia Butler DCD

It is the 27th of June, 1967. A sweltering day. Outside of a Barclays bank in Enfield, North London, a crowd has gathered to witness the unveiling of the world’s first Automated Teller Machine (ATM). Despite the 27°C (81°F) heat, the assembled officials wore traditional suits and ties - but there was a sense that something was changing.

The introduction of the ATM was a huge technological leap for what had been a relatively static sector. But by the 1960s the seeds of digitization had been sown in the banking world.

Banks already used mainframes, and these developed into data centers during the later decades of the 20th century, enabling new ways for customers to contact their bank.

In 1989, Midland Bank launched First Direct, a branchless bank operating through call centers.

Other banks and financial services such as Smile and Egg encouraged banking at a distance by phone and online. By 2001, Bank of America reported that three million of its customers used online banking.

When change came for the banks

But all of this was gradual change. It’s only recently that things have sped up - and the 2008 financial crisis was a major catalyst. It’s possible that the 2008 crash will be seen as the real beginning of the end for in-house data centers at banks.

In the recession that followed the crash, banks lost the trust of the general public.

Perhaps in response, they began to work more closely with FinTech companies on new approaches. Online banking had arrived, and mobile banking was available - increasingly capable smartphones put the two together so people could handle money on the move without branch visits.

Meanwhile, FinTech companies had new tools, and began to promise that AI and the ubiquity and speed of mobile networks would allow them to dramatically advance the culture of convenience and immediacy. For the banks’ part, they were struggling to see how that convenience could be delivered.

In the wake of the financial crisis, regulations were adding additional hurdles, increasing banks' workloads and operational complexity in an attempt to reduce risk. This set of pressures forced banks to rethink a lot of things. They welcomed in FinTech companies and moved faster in offering more flexible services.

Welcoming the cloud

They also dug back inside their own infrastructure, and questioned whether their internal digital infrastructure was up to the job.

The on-premise data center began to look like a liability, rather than an asset. Banks moved their servers from their own back offices into shared spaces such as colocation data centers. But the next step - moving to a cloud-based infrastructure running on shared machines in a centralized data center - could still seem more risky.

"If you have followed financial industries' data centers, I would say going back even as recently as 10 years ago, a financial services company embracing colo or cloud would have been viewed as highly unlikely," Marcus Hassen, a group manager from the US financial holding company Truist, said in a DCD panel.

For Hassen, the rate of change is impressive, given banks’ conservatism and the recent development of the cloud. Online applications didn’t start much before Salesforce.com, and the public cloud didn’t take off until Amazon got serious about offering Amazon Web Services.

“Public cloud has only even been a segment since around 2006 when Jeff Bezos wanted to find ways to diversify,” said Hassen. “You have to hand it to the hyperscalers for the way they’ve been able to sell many CTOs and CIOs on the cloud being the superior business model.”

Digitization on steroids

Ten years on from the financial crash, we had the cataclysmic incident of a global pandemic, which hastened digitization in many ways.

During the pandemic those of us with jobs that could be done from home were forced to huddle indoors. This led to a boom in the data center industry - but a boom primarily reflected in spending on cloud-based services, not on on-premises systems.

Following the pandemic, it’s clear that all sectors, not just banking, are shifting resources towards the cloud, and away from on-premises facilities.

In IDC's Worldwide Industry CloudPath Survey (May 2020), 57 percent of banks responding to the survey said that they already run in hybrid environments, with another 31 percent moving to hybrid models in 12 months, and a further nine percent moving to hybrid in 24 months.

Ali Moinuddin from Uptime Institute spoke of this transition: “Over the last few years, what we’ve seen is that more and more organizations are being much smarter about how they are deploying their IT assets, and which venues they are using.

“They often have a multi-cloud, multi-colo service partner, and they are also running their own data center. They’ve transformed their own infrastructure, which they were planning to do before the financial crisis. But after the financial crisis, many of those were taken off the balance sheet. And hence, we’ve seen a significant increase in the use of cloud infrastructure and more importantly, colocation service providers.”

The future is hybrid

This being said, on-premises IT is not (yet) dead. There is a good reason why banks are moving into hybrid - keeping their own on-premise infrastructure alive alongside new applications in the cloud.

That reason is risk. It turns out that using multiple cloud providers can create a smokescreen behind which single points of failure may hide.


Moinuddin explained this in further detail: “As you start to distribute your infrastructure across multiple service providers, you start to increase complexity. And as you start to increase complexity, you can actually increase the level of risk in terms of potential outages that may happen within critical IT services that are supporting critical business services themselves.

“There are some concerns around the risks associated with concentration, whereby certain significant service providers could be hosting a number of financial institutions which are critical to a domestic economy in a specific region in the same availability zone.

“So if there was an outage event, it wouldn’t just be one bank that is impacted, it would be several banks, which would have an actual very specific, and very negative impact on the reputation of the financial services sector.”

In February this year, five banks simultaneously went down in Canada, leaving customers unable to use online or mobile banking, or use their debit cards. There was no explanation given for the sudden outage.

Life without cash?

Another impact of the pandemic was a move away from using cash.

Half a century on from that first cash machine, we are starting to move towards a cashless society. During the pandemic, many shops accepted only card payments to limit physical contact as much as possible, and in 2020 cash payments fell by 35 percent. Since the pandemic, things have not really bounced back.

The unforeseen consequence of this is an increased dependence on those online systems. Hard cash is something that can be reliably carried and used, whether or not we have online access to working digital banks. Mobile banking, debit cards and credit cards all rely to a greater or lesser extent on online services.

In this world, when services fail people can be left extremely vulnerable. It is essential that banks protect themselves and their customers against this risk. When banks assess the relative reliability of the cloud and on-premises IT, they must be aware that the stakes are getting higher.

In this situation, it can be tempting to keep IT on-site and under your control. Charles Hoop, global lead IT sourcing and category management at Aon, told DCD that "a lot of this is philosophical, religious almost, in terms of some of the biases that drive it [desire to stay on-prem].

“As things have been outsourced, it’s all third party. I don’t know that there are many electrical engineers on staff who can read a single line and actually spot that single point of failure.”

But Hoop believes that increased control is not enough to justify the cost of on-premise systems compared with the cloud: “I think if you just looked at the dollars and cents, the technical cost benefits, I can’t see why you’d be building your own facility.”

Over the last few years, what we’ve seen is that more and more organizations are being much smarter about how they are deploying their IT assets, and which venues they are using

The real cost of on-prem

Of course, building your own facility from scratch would trigger a lot of additional costs. But in the banking sector, we are often not talking about building new data centers, but upgrading within an already functioning on-premise facility, or moving resources from that facility to colocation or cloud.

This kind of process is in itself costly, but it can ultimately save money in the long run, if it is done right. The real solution comes from planning and understanding what data and computing need to be in the cloud and what should remain behind.

Given all that, Ali Moinuddin argues that the true future is in hybrid.

“We are seeing a steady migration from legacy enterprise assets into both public and private cloud, and colocation. As we [Uptime Institute] have been developing our Financial Services Assessment, about 50 percent of the banks have told us that currently, they have a no public cloud policy. These were global banks from across the world.

“However, they are building private clouds within colocation, and their own enterprise data centers.”

In January 2022, it was announced that JPMorgan had done just this: built its own data centers for hosting private clouds.

The company spent $2bn on new data centers in 2021 despite having an overall strategy to get IT into the cloud. The spend was met with criticism, and even a drop in share prices, but the company stated that the investment was necessary in order to provide data centers and cloud services in new markets like the UK.

"We spent $2 billion on brand-new data centers, which have all the cloud capability you can have in private data centers," chief executive Jamie Dimon told analysts on a call.

“All the stuff going to these new data centers, which is now completely up and running, is on apps. Most of the applications that go in have to be cloud-eligible. Most of the data that goes in has to be cloud eligible.”

This is still part of a long-term plan to become fully cloud-operated. But Moinuddin argues that: “We’ve seen some financial services, as they scale their requirement in the cloud, realize that not everything needs to be in the cloud and they start to repatriate some of the data and services that were being outsourced.”

Some apps aren’t cloud-friendly

Rocco Alonzi, AVP of data center operations and governance for Canadian financial company Manulife, has experienced just this issue.

“When we started looking at if we could move this application that we’ve had for many years into the cloud, it may not be cloud-friendly, and it may not work properly, and then you have to ask what is it going to cost to do that? You never really reclaim the RTI [research technology and innovation].

“But there’s definitely hybrid IT coming into play and it should for a couple of reasons. Way back in the day, if a data center was bursting at the seams, we would need to build a new data center or we would have to pack up and move everything.

“But as you start moving your loads into the cloud, you can maintain that data center and probably have it razor-sharp in the sense that it’s only the critical application processing that sits there.”

While several banks, including Barclays and Natwest, have welcomed cloud computing with open arms and are looking to move entirely to the private cloud, banking as a whole seems currently unwilling or unable to fully leave enterprise data centers behind.

On-premise computing still offers solutions for the most critical and secure data, but a reluctance to move towards a hybrid IT architecture could render traditional banks unable to keep up with the newer FinTechs who embrace the changes and trends in the industry, and cause them to lose out financially in the long run.

All tech has a life cycle

While the ATM was a great advance, it has peaked. Cash machines are being removed from the walls in many sites as people use less cash. At the same time, the data centers which backed that generation of banking have also passed their peak, with fewer being built, and many of the old ones closing. But it’s not the end of the line for on-premises data centers just yet. They aren’t dying out, they just need to find a new role.

We are seeing a steady migration from legacy enterprise assets into both public and private cloud, and colocation
> Ali Moinuddin Uptime Institute
I think if you just looked at the dollars and cents, the technical cost benefits, I can’t see why you’d be building your own facility
> Charles Hoop Aon

Enterprise healthcare considers a cloud shift

Checking up on healthcare IT

Seb Moss DCD

The pandemic forced businesses to dramatically bring forward IT plans as consumer habits changed virtually overnight.

Some companies saw demand dry up, while others struggled to keep up with a surge in activity. Getting the transition wrong could have meant the end of a firm’s existence, as users shifted to faster platforms more suited to the new world.

But for the healthcare sector, this digital transformation was a literal matter of life and death.

Pandemic push

“The pandemic accelerated the change,” says Bashir Agboola of the Hospital for Special Surgery (HSS) in New York City.

“Part of that shift has also been a change in where technology gets deployed and where computing occurs and where data is stored and manipulated,” Agboola explained during a DCD>Debate on the healthcare sector.

“More and more is going from on-prem data centers into cloud facilities. That is what has allowed us to have the successful response that we have had over the past two years to the pandemic as far as technology and care delivery is concerned.”

Agboola is a firm believer in what the cloud can do for the future of patient care.

"I am hearing the same conversation at other healthcare providers - should we spend millions of dollars upgrading our data centers, or do we just begin to move workloads into the cloud?" he said.

“More and more, particularly those with aging data centers, are beginning to move their workloads out. They’re not investing in data center infrastructure anymore - no one wants to spend tens of millions of dollars to upgrade a data center.”

Expertise is the crucial factor

But in a separate DCD>Debate, the company’s senior director of data center services, Keith Montalvo, was a little more circumspect.

“I think one way I decide whether to outsource or do it in-house is how quickly does the technology need to be deployed,” he said. “Do we have the expertise in-house to implement or not?”

Agboola also cited staff expertise as a critical factor, noting that the shift to cloud will require new staffers versed in cloud technologies, "but remains a challenge to find strong talent to help you with that journey."

Shane Brauner, CIO of biotech software company Schrödinger, concurred: "For a lot of businesses, I don't think running a data center or replacing hard drives in a system is super core to their business. So being able to take our resources internally and move them to start building talent as a cloud engineer, rather than racking servers, it's a game-changer."

Montalvo cautioned that losing too many data center technicians has its drawbacks - especially when it comes to colocation usage.

“I think one of the things I’m seeing in the industry is, as folks farm out their internal data centers, and maybe leverage colo space more, the internal knowledge about critical infrastructure, like power and cooling, and even, just regular facility infrastructure is kind of being lost,” he said.

“We’re becoming more dependent on the colos to manage all that. And I think there is a due diligence internally that we need to maintain some knowledge about facility design to keep our colo partners honest around redundancy.”

He added: "You need to be versed in maintenance requirements around facilities like thermal analysis on breakers, understanding input breaker diversity, understanding UPS capacity, understanding distribution redundancy, etc. Some folks will sell you on circuit redundancy, but then you don't know that both circuits are coming from the same distribution panel."

Montalvo believes that "anyone looking to go to a colo should have an audit checklist of things you're going to ask the colo about their infrastructure: What kind of cooling do they have? Is that redundant? Do they do adequate testing on the generators? Is it weekly? Is it monthly? What is the capacity of those generators? What are the switchover systems? What is the end of life of those critical pieces of equipment? And when do they look to refresh them?

“These are all things that I think, internally, companies should be armed with.”

Cloud is the future

While HSS and other healthcare providers are maintaining some on-prem presence, and expanding into colocation, the future lies in the cloud, Agboola said.

"In my organization, when we put together our enterprise cloud strategy, one of the key decisions we made was that we will build net-new capabilities in the cloud, rather than trying to build those on-prem," Agboola said. "And then we think about cloud migration for other stuff opportunistically."

The cloud allows for new tools and systems “to be quickly ramped up,” he continued. This is what is being demanded by customers, “as they come to healthcare with experience working with retail and banking. There’s so much change there, where many consumers have never set foot into a banking hall in a long time. They do all their banking remotely, with technology.

"Consumers have similar expectations of healthcare providers," he said, something only possible with the cloud. "We really have no choice, if we fail to reinvent digitally, it will be the end of the business."

I am hearing the same conversation at other healthcare providers - should we spend millions of dollars upgrading our data centers, or do we just begin to move workloads into the cloud?
DCD>Broadcast panel: How do you optimize connectivity whilst living between on-prem, colo and cloud?

Chapter 3: Colocation and hybrid IT

For some years, there’s been a school of thought that colocation is out of date, and will eventually wither away in favor of the cloud. But that idea runs counter to the facts.

The colo market is stubbornly and continually growing. But it’s not the same market as it once was. Early cloud adopters are partially returning to colocation - and these born-again colo users are very different to the old school.

Particularly in the aftermath of the pandemic, our insatiable demand for data is now forcing organizations to adopt a hybrid approach. Bringing together cloud and colo means these companies are able to deliver the speedy, agile, resilient service customers have come to expect, as well as the ability to scale up or down as needed.

In this chapter we examine why colo is here to stay, and rather than driving customers away from colocation as might be assumed, discover why the advent of hybrid cloud strategies is in fact having the opposite effect.


The rebirth of colocation

Early cloud adopters are coming back to colocation services. But the born-again colo customers are very different, and providers face completely new challenges


It’s been fashionable to see the cloud as an all-consuming future. The cloud can handle massive workloads, services are easy to buy, and are scalable. So why would anyone go to the trouble of buying racks and servers and installing them in retail colocation space? Surely you should let the cloud handle the grunt work, and get on with your real job!

Market figures tell a different story. Averaging out forecasts from a bunch of providers, it seems the colocation market as a whole is growing massively, at around 16 percent per year. Over the next ten years, that adds up to a market that will quadruple in size, going from roughly $46 billion in 2020 to $200 billion in 2030.

Market researchers say the retail colocation sector is bigger than wholesale colocation, where whole data centers are rented by large operators - and retail colo will keep its lead at least till 2030. What’s going on?

Cloud is massive - and efficient

First off, it’s more complicated than that. Cloud data centers really are massive because, alongside the ones leased in wholesale colo deals, hyperscalers own a massive number of sites, which they’ve built themselves. These are huge beasts, with power demands up to 1,000MW.

The colocation market as a whole is growing massively, at around 16 percent per year. Over the next ten years, that adds up to a market that will quadruple in size, going from roughly $46 billion in 2020 to $200 billion in 2030

from a floor space perspective.”

But hyperscale includes some behemoths which are actually giant in-house IT services, like Facebook/Meta, Bachar points out: “Facebook is probably one of the biggest data center operators in the world nowadays. But they’re serving their own enterprise needs. They’re not a public cloud service - they’re running their own internal cloud.”

Bachar says hyperscale cloud data centers do indeed have a big advantage over other sectors, in their ability to deliver cheap IT power: “These sites are usually located in remote areas where the land is inexpensive, and power is available from multiple green sources.

If those sites don’t have connectivity, the hyperscalers have the muscle to provide it: “The large companies who are building those mega data centers need to bring connectivity into those sites and be creative to create the network backbone. And each and every

On these sites, hyperscalers “start with one or two buildings, and then expand in a replication mode, on the same site,” Bachar says. “They create a very high level of efficiency operating the data center with a PUE of 1.06 to 1.1.”

In his view, the hyperscalers are “creating a very, very significant level of green data centers.”

Colocation has challenges

Smaller colocation sites are very different, he says. They were set up to host physical servers owned by enterprises which “decided not to actually build their own data center but actually to put part of their IT load into a colocation site.

“These are small sites between 50 and 75MW, and in some cases can be even smaller than 15MW. They are closer to urban areas - because historically those sites actually have been put closer to the headquarters of their customers.”


These colo providers situated in urban areas have big challenges, says Bachar: "These buildings are not scalable. Because they're sitting in urban areas, the size they have been built to is the size they're actually going to operate under for the remainder of their life. They don't have expansion space."

A second challenge is, “they are heavily regulated - because the closer you get to the middle of the city, the heavier you are regulated for emissions, power availability and every aspect that impacts the environment around you.”

So the odds are stacked against smaller colocation companies. But their market share resolutely refuses to decrease - and there’s a surprising reason for this. According to Greg Moss, a partner at cloud advisory firm Upstack, large numbers of early cloud adopters are moving capacity out of the cloud.

Cloud defectors come back to colo

“The public cloud as we know it has been around for 12 years, right? I mean, the big three - GCP, Azure, and AWS. Everyone sees the growth, everybody sees people going pure cloud, and just running to the cloud kind of drinking the Kool-Aid. What they don’t realize is there’s two sides to that coin.”

According to Moss, the early adopters - the "sexy, innovative" companies who went all-in on the cloud twelve years ago - "are now at a point where they're pulling out at least a portion of their environment, it could be 20 percent, it could be 80 percent, and hybridizing, because what they've realized over the last 12 years is that cloud isn't perfect.

“To really get the efficiencies from an economic and technical perspective, you really need to be in some sort of hybrid environment.”

Companies started with a “knee jerk reaction” to put everything in AWS, he says: “Why? Because some board member mandated it, or because our competitors are doing it, or because it’s the rage right now.”

Later on it goes sour, because "someone's losing their job, because they realize they're spending 30 percent more than they were - and the whole exercise was around cost reduction and innovation!"

The trouble with cloud

It turns out that going to the cloud isn't a simple answer to all questions: "It doesn't solve anything. It just hands your data center environment to a different company. If the data center just went away, and is miraculously living in the ozone, then fine. But it's not. You're just shifting infrastructure around in a different billable model. It makes sense: some people want to consume hardware in a day-to-day or hour-by-hour function."

The hyperscale cloud operators can afford to lose some custom, says Moss, because they still have massive growth due to the late adopters: “AWS, GCP, and Azure are still seeing so much growth right now, because of healthcare, because of not-for-profit, because of legal, because of all the non-sexy companies that are just now getting comfortable enough to move to the cloud.”

But the early adopters really aren't happy - and they have problems: "They're stuck for five to 10 years, because no one's going to pull out of a massive migration or massive decision after just doing it - regardless of the outcome."

Innovative companies who went all-in on the cloud 12 years ago are now pulling out a portion of their environment because they’ve realized the cloud isn’t perfect > Greg Moss Upstack

And there’s company politics: “There’s a person who’s been there 15 years, who just doesn’t want to do more than what he’s doing. He picks up his kid every day at school at three, and he knows that if the IT sits in AWS, he can continue to do his job and leave at three and pick up his kid. He could be the gatekeeper.

“I’ve seen large companies dismiss $50 million a year savings because the gatekeeper, a $150,000 employee, just doesn’t let the management know that there’s an opportunity.”

Sooner or later, those early adopters can get past the gatekeepers, and start shifting the balance of their IT provision towards a hybrid model, with some loads returning to colocation. But these customers are a new generation, and they will want more than just the resilient racks with power and networking that were good enough in days gone by.

Born-again colo needs: Bare metal and cloud onramp

“You can’t just have great resiliency, you have to have a total solution. That means big buckets - a data center that’s resilient. And some sort of bare metal or custom managed component, like Equinix Metal for instance.

“And then there’s the connectivity to the large public clouds - through a partner like Megaport or a direct onramp. Those are the three components that make up hybridization.”

The resilient data center speaks for itself, while bare metal is a way to own dedicated capacity in someone else's infrastructure. Customers may need this to meet privacy rules which require customer data to be kept in a specific location, away from shared hardware.

And the need for on-ramps to the public cloud is obvious. If customers are building hybrid clouds that include public cloud services as well as their own colocated servers, there should be easy-to-use links between the two.

Unlike the early cloud enthusiasts, the born-again colocation customers are thinking ahead, says Moss. Privacy rules might force some loads onto bare metal in future. Or they might open up a new commerce branch which would have seasonal peaks - and that could require a quick link to the cloud.

They’re thinking ahead because of the trouble they’re experiencing coming off their cloud addiction, but also because, if they pick the wrong colo, they could have to move all their IT.

And, as Moss says, “nobody wants to move a data center. It’s the biggest pain in the ass.”

There are companies that will physically move racks of servers from one facility to another, but Moss says: “They charge $5,000 in insurance for every million dollars in hardware, even if you’re moving three blocks away. If you move $10 million worth of hardware, your insurance cost is going to be upwards of $50,000. And will they even turn back on?”

Power and networking

According to Bachar, the new colo customers have another demand: they are much more power-hungry: “If we look at the technologies in the mega data centers and the colos, 80 percent of the IT load is compute and storage servers now.

“We’re starting to see the emergence of AI and GPU servers, which are growing at a much faster pace than the compute and storage servers, and specialty storage servers going hand in hand with the GPUs and AI.

"And the reason for that is that we're starting to deal with very large data sets. And to process those very large data sets, we need a server which is beyond the standard compute server."

But GPU servers, and standard compute servers with integrated GPUs, demand more power: "Those high-power servers are challenging our infrastructure. If you look at a typical high-end GPU server, like the ones from Nvidia, these servers are running between 6,000W and 8,000W for every six rack units (RU). That is very difficult to fit into a standard colocation facility, where the average power per rack is 6kW to 8kW."

On those figures, a standard rack is 42 RU, so a full rack of GPU servers could demand a sevenfold increase in power.
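To make that arithmetic explicit - a minimal sketch using only the figures quoted above, not vendor data - a 42U rack filled with 6U GPU servers lands far beyond a conventional rack power budget:

# Illustrative only: compare a rack of GPU servers against a typical colo rack budget,
# using the figures quoted by Bachar above.
rack_units = 42                 # a standard rack
server_units = 6                # one GPU server occupies 6U
server_power_kw = (6.0, 8.0)    # quoted draw per GPU server, kW

servers_per_rack = rack_units // server_units            # 7 servers per rack
rack_power_kw = tuple(p * servers_per_rack for p in server_power_kw)  # 42-56 kW

typical_rack_budget_kw = (6.0, 8.0)                      # average colo rack today
print(f"Full GPU rack: {rack_power_kw[0]:.0f}-{rack_power_kw[1]:.0f} kW "
      f"vs typical budget {typical_rack_budget_kw[0]:.0f}-{typical_rack_budget_kw[1]:.0f} kW")
# Roughly a sevenfold jump in power per rack.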

One thing which would help is more flexibility: "Am I taking a high-power rack or a low-power rack? Can I actually mix technology within the rack? We need a very flexible capability in the data centers."

New apps also need more network bandwidth, says Bachar: "Networking today is 100 and 400 Gigabit Ethernet as a baseline. We will continue to grow this to 800G and 1.2Tb in the future."

Can small colos cope?

All these changes are placing huge demands on small colocation firms just as demand for what they provide is booming - and that, says Moss, is a big factor driving the current wave of colocation mergers and acquisitions.

Smaller colos realize that they can’t actually fund all the changes they need to be truly successful: “So you see a lot of these smaller data centers selling off to the larger guys.”

Meanwhile, he says: “The larger guys are buying them because it speeds their go-to-market - because the infrastructure is already in place.


It takes a long time to build a data center. You could probably get away with a brownfield build in the US within 18 months. If it’s greenfield, it’s more likely in three years.

A lot of requests are on a shorter timescale than that: “Imagine you are Equinix, you have three data centers in a market and they’re all bursting at the seams. You have very little inventory left. But one of your largest customers, or an RFP from a new customer, says ‘In 12 months, we’re going to need a megawatt and a half.’ But you can’t build in that time.”

In that situation, the large player can buy a smaller regional player, whose data center is only 30 percent full, and put that customer in there.

“You invest some money in upgrades, you bring it up to standards, and you get certain certificates that aren’t there, and you now have an anchor tenant, and maybe the facility is 60 percent full,” says Moss.

“The bank loves it, because the bank takes on the existing customer leases to finance, and they also take the new signature tenant lease, that’s probably 10 years long.”

The other customers are happy too, as the data center gets a perhaps-overdue facelift, along with the addition of those new must-have features, bare metal services and on-ramps.

The odds are on big colo players

Small colo players often rail against giants like Equinix or Digital Realty (DRT), claiming they overcharge for basics like power and cooling, as well as services like cross-connects - direct links between two customers' equipment within the facility.

It's very cheap for a large colo to activate a network link between two of its customers, who may even be in the same building - and yet customers are charged a high price for those cross-connects.

Multinationals don’t see that as a problem, says Moss: “A company like Equinix or DRT has everything that you would need to be successful. You are going to pay a premium, but that premium, if utilized properly, isn’t really a premium. If I’m using Equinix in three countries, I may be paying 30 percent more in space and power, but I’m saving a hell of a lot of money in my replication costs across those three data centers because I’m riding on their fabric.

“A local 200-person business in Pennsylvania, whose network engineer wants to touch every part of the hardware, is going to TierPoint, because it’s two miles down the road,” he says.

"He doesn't have this three-country deployment, he has just 10 racks in a cage and wants to make sure he's there if something fails. There's still plenty of that going on in the country, but most of the money's being spent with companies like Equinix and DRT."

Bigger issues on the horizon

But there are more issues to come, which will have even the largest players struggling. Bachar sums these up as Edge and climate.

Colocation providers are going to have to keep providing their services, offering increasing power capacity, from a grid which is having to shift to renewable energy to avert climate catastrophe.

“Our power system is in transition,” says Bachar. “We’re trying to move the grids into a green grid. And that transformation is creating instability. Grids are unstable in a lot of places in the world right now, because of that transition into a green environment.”

At the same time, capacity is needed in the urban locations where grids are facing the biggest crisis.

At present, all Internet data has to go through exchange points. “In the United States, there are 28 exchange points covering the whole country. If you’re sending a WhatsApp message from your phone to another phone, and you’re both in Austin, Texas, the traffic has to go through Chicago.”

The next stage of colo will need localized networks, says Bachar: “In the next three to five years, we’re going to have to either find solutions to process at the Edge, or create stronger and better backbone networks. We’re having a problem with Edge cloud. It’s not growing fast enough.”

The colocation data centers of the future will have to be in urban areas: “They will have to complement and live in those areas without conflict,” says Bachar. That means they must be designed with climate change in mind - meeting capacity needs without raising emissions.

“We cannot continue to build data centers like we used to build them 15 years ago, it doesn’t work. It doesn’t help us to move society forward and create an environment for our children or grandchildren.” 

We cannot continue to build data centers like we used to build them 15 years ago, it doesn’t work. It doesn’t help us to move society forward and create an environment for our children or grandchildren
> Yuval Bachar
Hyperscale veteran

Why the colocation craze is poised to continue

The past two years have drastically altered our view of how markets work on a global scale. For most data center operators, the last few years have been challenging, with the shift to remote working and the need to re-learn how to handle new kinds of projects that require different processes.

The industry understands that consumers’ increased usage of gaming and streaming activities, as well as firms’ demand for greater storage capacity, is not a passing fad.

Nevertheless, there is a silver lining to this. Despite the undeniable negative effects of the pandemic, it has been a driving force in the creation of technologies that serve our increasingly digital lifestyles. The growing trend toward digitization and greater dependence on computing has helped the data center industry considerably.

Secondly, there is now a greater demand for cloud services that places additional strain on the data center/colocation markets; but this same demand has driven us to innovate, expand capacity, and adapt to become even more robust.

Thirdly, the pandemic has raised awareness of the industry’s most pressing issue: sustainability. Most, if not all, operators are now revising their methods, whether it’s from tapping into solar energy or using hydropower.

Our sector is undoubtedly busy and is showing no sign of slowing down. So, let’s take a look at some of the fastest growing trends right now and what we can expect to see more of in the years to come.

Everything has changed since the pandemic and our sector will face continuing demands
Giancarlo Giacomello Aruba
The industry understands that consumers’ increased usage of gaming and streaming activities, as well as firms’ demand for greater storage capacity, is not a passing fad
> Giancarlo Giacomello Aruba

Post-pandemic aftermath

It is no secret that the pandemic has put everyone behind schedule, and this has particularly been the case for operators in the colocation market.

With new restrictions in place that didn’t exist before, we had to navigate the challenges in operating remotely, as well as re-learn how to take care of data centers. The true test here was to continue working regardless of circumstances.

The biggest impact, however, was felt by those on the supply side. These suppliers, who had been delivering product for years, were always fully aware of how, when, and at what cost they needed to deliver.

But everything has changed, and they’ve had to adjust and rethink entire processes as a result. For example, copper has now more than doubled in price. Meanwhile, vendors are taking more than twice as long to get raw supplies as they were before.

Greater resiliency or larger volumes?

The colocation market is currently divided into two areas. On the one hand, there’s this massive demand from the larger enterprises who are buying huge amounts of data center capacity to meet customer demand for cloud-based services.

These businesses purchase capacity volumes at a large scale and try to save as much of it as possible. It is the costliest option, and the standard infrastructure design might appear less robust in the long run, compared to other options.

SMEs, on the other hand, are striving for as much resiliency as possible, as well as highly rated, certified infrastructures, to safeguard their data and services. To ensure that their data and services are always safe and easily accessible, their demand is centered around resilient and reliable facilities.

In today’s market, these are the two types of demand we’re seeing a lot of and can expect to see more of in the future.

Environmental factors to consider

An important outcome to stem from the pandemic is a greater focus on the factors impacting the environment. The industry has collectively turned its attention to re-evaluating and enhancing operations to be ‘greener’.

At this stage it's difficult to measure, because industry-wide sustainability initiatives have been underway for some years now, but the pandemic has undoubtedly hastened things. Before it, industry requirements were minimal. Since then, sustainability has become increasingly important - in fact, it has shot to the top of the agenda for businesses today. The key is to be carbon neutral, acquire the right certifications, source sustainable materials, and harness renewable energy.

At the same time, we are noticing an increase in demand for higher density colocation – the ratio between square meters and capacity. In the data center sector, reliability is arguably the most vital thing; reliability requires redundant infrastructures, and redundancy leads to losses and less effective results.

Having a guaranteed capacity of operation is of particular importance to factories that require large quantities of energy, so when energy prices go up across the entire market, our business is directly impacted.

With the current socio-economic issues across Europe, it will be interesting to see how far the price of energy will increase. However, high-density colocation is the trend for companies who wish to strengthen and streamline their critical IT infrastructures, as well as reduce their footprint and costs.

What the future holds

The market will continue to grow, especially given the expected demand for its services in the future. Furthermore, more services will be made available as operators shift away from legacy infrastructures and towards more digitally oriented systems.

Companies will also look to change their operational approach; we can expect to see a move away from the “faster” machines to ones that harness less power. Power is undeniably the greatest colocation cost, so that remains a major concern. With the ongoing war in Ukraine, this is likely to intensify, as our access and payment for electricity is likely to change.

Nevertheless, the data center and colocation market have proven resilient, pushing through many obstacles the industry has faced over the years, and I anticipate we’ll have no difficulty finding the right solution for our customers. 

Companies will look to change their operational approach - we can expect to see a move away from the faster machines to ones that harness less power
> Giancarlo Giacomello Aruba

Hybrid cloud strategies drive customers to colocation data centers

Lines are becoming blurred between cloud and data center, with organizations finding that cloud isn’t always the most cost-effective option

We all know the old saying: no one ever got sacked for buying IBM. Well today, many organizations appear to have the same attitude towards the cloud providers, as they look to shift their compute requirement away from in-house facilities and are buying into the one-stop cloud platform for their computing requirements.

Who wants the cost of big ticket items?

However, removing the big-ticket technology items from your organization's balance sheet, and pushing the vast majority of your applications and data into the cloud, may be a premature and overly expensive solution in the longer term.

These days, organizations are leveraging more applications to stay competitive in their markets, and many are finding that not all of these are best suited to the public cloud.

The main cloud operators, companies such as AWS, Microsoft and Google, to name just a few, have worked hard to position their platforms as easy to implement, simple to scale and reasonably priced.

The growth of artificial intelligence and machine learning applications presented the cloud with an opportunity to promote itself as the go-to solution for data-heavy compute requirements.

With low-cost start-up contracts, many organizations were attracted by the massive compute capability on offer, without properly investigating the scale-up costs or the difficulty of switching contracts once budgets were stretched.

Many customers of cloud platforms learnt the hard way and, as a result, cloud repatriation is on the rise. According to IDC, in the report 'Increased services, pullback from public clouds huge IT disrupters', cloud repatriation has grown increasingly popular in recent years, with 80 percent of companies planning to repatriate at least some of the workloads that are currently hosted in the public cloud.

Many organizations now view a hybrid approach to data compute capacity as essential to their developing strategies
> Michael Akinla Panduit
Michael Akinla Panduit


A key reason for the change in mindset is the diverse workloads that come with multiple applications, which can be highly complex and have unique requirements for server instances, storage volumes, networking, power, heating and cooling - not forgetting physical location.

Blurred lines

The lines are becoming blurred between cloud and data center with more opportunities for organizations to site workloads where it maximizes the benefit of the data compute. Many organizations now view a hybrid approach to data compute capacity as essential to their developing strategies.

And now many colocation providers are investing in ways to help tenants improve efficiencies between public and private cloud workloads while controlling costs and meeting SLAs.

Energy costs relating to cooling account for around 37 percent of overall data center power consumption, so it has become increasingly important for data centers of all persuasions, for the sake of their sustainability credentials, to establish pathways to greater energy efficiency across their sites.

However, converged infrastructure solutions reduce time to production for tenants by up to 80 percent by using pre-configured solutions that are fully tested, validated and fast to deploy.

Automated environmental monitoring within the technology space, often implemented alongside converged infrastructure, will help improve cooling and equipment efficiency as well as increase operational effectiveness, which can reduce overhead costs, often quite considerably.

Interconnection to partner ecosystems

The pandemic has demonstrated to senior management and technical teams that virtual business practices can be effective. Colo tenants and providers have been quick to embrace this, developing new ways to engage at each stage of the relationship cycle.

Colo is fast becoming the destination for connecting enterprises, service providers and cloud platforms. Interconnection services are the physical connections that enable data exchange between two or more partners at the fastest available speeds by combining high-performance networks with physical proximity.

Leading colo operators now offer tenants interconnection services that streamline migration across facilities as well as providing easy access to partner ecosystems.

For latency-sensitive applications, a distributed architecture provides the deployment flexibility to support the most demanding customer requirements. With this level of connectivity in place, deploying compute and storage across the locations that best suit the application and its cost requirements is not only possible, but essential.

Colo success lies in finding ways to help tenants improve efficiencies between public and private cloud workloads while controlling costs and meeting SLAs. Hybrid cloud architectures provide the flexibility organizations need to allow application workload requirements to determine where they should run.

Inevitably sustainable

Sustainable practices have become important touchstones in boardrooms around the globe, as well as with customers, investors, governments, and the public. Colos are rising to the challenge with numerous sustainability initiatives.

Customers want to be associated with suppliers that can demonstrate positive environmental policies and actions. Many data centers have negotiated renewable energy utility contracts and other off-setting policies to reduce their net-carbon emissions.

Physical network infrastructure is a strategic foundation that helps future-proof colos in a hybrid compute environment. It provides the solutions that ensure smart, scalable, and efficient connectivity as the platform for colos and their customers to compete and succeed in this hybrid global marketplace. 

Cloud repatriation has grown increasingly popular in recent years with 80 percent of companies planning to repatriate at least some of their workloads that are currently hosted in the public cloud
> Michael Akinla Panduit
Colo is fast becoming the destination for connecting enterprises, service providers and cloud platforms
> Michael Akinla Panduit

Chapter 4: Climate change and hybrid IT

With data centers now consuming as much as 1.5 percent of the world's power, sustainability has been rapidly ascending both corporate and customer agendas over the last few years.

As touched upon in the previous chapter, customers now want to be associated with suppliers that can demonstrate positive environmental policies and actions.

That said, with greenwashing rife across the industry, it's often difficult to tell what constitutes tangible action and what doesn't. This is where hybrid IT could well help organizations move away from lofty climate claims and realize their ESG (environmental, social and governance) goals.

According to AWS, ‘the average corporate data center has a dirtier power mix than the typical largescale cloud provider.’ Therefore, a combination of cloud, on-premise, and colocation services could be a significant step in carbon cutting. Whatever approach you choose, sustainability in your IT strategy is no longer a nice to have, but a necessity.

In this chapter we take a look at carbon counting and how colocation facilities are adapting to carbon-negative 'hyperscaler' pledges, and we ask: when it comes to the environment, how do you know how well your data center is really doing?


Count your carbon

Organizations deciding whether to run a data center or move to the cloud should do some carbon accounting

Peter Judge DCD


Enterprises considering their options will automatically look at the financial impact of each one. They should also look at carbon emissions.

And they may find that decisions about their IT resources - including whether to run a data center - will have a big impact on their emissions.

Have you set targets?

If your company has set targets for limiting emissions, then you will need to track those emissions so you know if you have met the targets.

Even if fighting global warming is not a number one corporate goal for your company, there are plenty of other good reasons why you will have to keep track. Among other things, proposed changes to the SEC's rules on risk reporting could mean that US companies above a certain size ($25 million in assets) will have to report their carbon emissions. Other nations have similar rules.

So having a weak story on emissions can harm your prospects for raising money from investors and other sources. At some point, you need to do carbon accounting.

The major standard for carbon accounting is the Greenhouse Gas Protocol (GHG Protocol), a global standardized framework which measures emissions from private and public sector operations and their ecosystems. It is a joint effort from the World Resources Institute (WRI), and the World Business Council for Sustainable Development (WBCSD).

The GHG Protocol is where the Scope 1, 2 and 3 emissions are defined (see box out).

Carbon accounting uses ideas from lifecycle analysis (LCA) and there is also an ISO standard (ISO 14064) for it.

There are concerns that ISO 14064 might not be exactly in line with the GHG Protocol. For this and other reasons, large companies have set up Carbon Call, a movement to make sure carbon accounting is actually useful and consistent.

A full data center can be run more efficiently than an empty one, so when enterprises shift their IT into the cloud, it is often counted as a reduction in greenhouse emissions

The new SEC rules are likely to apply to the most obvious emissions your company produces - Scope 1 (direct) emissions and Scope 2 emissions produced by your energy suppliers.

You may also have to report on Scope 3 emissions - those you cause within your entire ecosystem of suppliers and customers - which is normally a much larger figure.

If you have set targets for Scope 3 emissions, then you will have to account for them. And the SEC could well come after you for detailed figures.

What’s this got to do with your data center?

Given that IT is likely only a small part of your carbon footprint, it can get overlooked, or dealt with too quickly - but there’s a serious debate in the data center and cloud sector, over who has the best story on emissions.

If you are calculating the carbon footprint of your IT, you must determine the emissions (Scope 1, 2 and hopefully Scope 3) of the servers and network equipment you run in-house. If you build a data center, there will be significant Scope 3 emissions embodied in the equipment and the construction of the building.
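As a minimal sketch of what that tallying looks like (hypothetical figures and category names, not a substitute for a full GHG Protocol inventory), the exercise boils down to summing emissions by scope across the IT estate, including the embodied carbon in hardware and buildings:

# Minimal carbon-tally sketch with hypothetical numbers (tonnes CO2e per year).
# Real GHG Protocol accounting is far more detailed than this.
it_footprint = {
    "scope1_onsite": 12,              # e.g. diesel generator test runs
    "scope2_electricity": 850,        # grid power for servers, cooling, network gear
    "scope3_embodied_servers": 300,   # amortized manufacturing emissions of hardware
    "scope3_embodied_building": 150,  # amortized construction of the data center
    "scope3_cloud_services": 220,     # emissions caused by workloads run in the cloud
}

total = sum(it_footprint.values())
for category, tonnes in it_footprint.items():
    print(f"{category:28s} {tonnes:6d} tCO2e  ({tonnes / total:5.1%})")
print(f"{'total':28s} {total:6d} tCO2e")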

But the chances are high that you also run some of your IT in the cloud. You will need to account for the emissions that causes. But will those emissions be counted in the same way you account for in-house IT?

When the cloud began to take off in the 2010s, cloud providers asserted that they were reducing the carbon footprint of their customers, because the IT resources in the centralized cloud data centers were deployed more efficiently.

All the IT loads were virtualized and aggregated on the smallest number of servers, so there was less wasted hardware - a full data center can be run more efficiently than an empty one, so when enterprises shift their IT into the cloud, it is often counted as a reduction in greenhouse emissions.

In 2020, a study led by Lawrence Berkeley National Laboratory found that between 2010 and 2018, there had been a massive surge in computing capacity in data centers, with only a marginal increase in energy used - and therefore little increase in Scope 2 emissions.

The result was attributed in part to small inefficient enterprise data centers being replaced by more efficient capacity in the hyperscale facilities run by cloud service providers.

Coauthor Arman Shehabi of LBNL said: “Less detailed analyses have predicted rapid growth in data center energy use, but without fully considering the historical efficiency progress made by the industry.”

How green is your cloud?

The cloud leader Amazon Web Services (AWS) has lost little time in capitalizing on this, and offers a free tool for customers which tracks the carbon footprint of cloud resources in AWS data centers. It then helps users compare this with what they might emit if they ran those resources in an in-house facility.

Needless to say, the in-house figures are estimates made by Amazon, and AWS instances always come out much better. In many instances, they come out an unlikely 88 percent better. The tool is also limited to reporting monthly aggregate totals.


A study by LBNL found that between 2010 and 2018, there had been a massive surge in computing capacity in data centers, with only a marginal increase in energy used.


Amazon has promised that it will have net-zero carbon emissions by 2040, so the company tells users that moving to the cloud is a sure-fire way to reduce emissions.

"If you are an AWS customer, then you are already benefiting from our efforts to decarbonize and to reach 100 percent renewable energy usage by 2025, five years ahead of our original target," said AWS evangelist Jeff Barr in a blog post.

Barr says, “the AWS path to 100 percent renewable energy for our data centers will have a positive effect on [customers’] carbon emissions over time.”

However, it’s worth pointing out that the AWS tool only takes account of Amazon’s plans to use renewable energy (Scope 2) in the AWS cloud, ignoring Scope 3.

And there are question marks over the way AWS accounts for its own emissions, since it makes heavy use of power purchase agreements (PPAs). It pays for renewable energy to match the amount of energy it uses - but it matches variable renewable sources with AWS’s steady consumption - so its PPAs may only cover about half the energy used in the AWS cloud, according to a report written by McKinsey for the Long Duration Energy Storage Council.
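A toy example (with an invented hourly profile, not AWS data) shows why annual volume matching and hour-by-hour matching tell very different stories: a solar PPA can equal 100 percent of annual consumption on paper while covering only around half of a steady load in practice.

# Toy example: annual-volume matching vs hourly matching of renewable supply.
# Invented numbers - a flat 10MW load and a solar-shaped PPA of equal annual volume.
hours = range(24)
load_mwh = [10.0] * 24                                        # steady consumption, every hour
solar_shape = [max(0.0, 1 - abs(h - 12) / 6) for h in hours]  # zero at night, peak at noon
scale = sum(load_mwh) / sum(solar_shape)                      # size PPA to match annual volume
ppa_mwh = [s * scale for s in solar_shape]

annual_match = sum(ppa_mwh) / sum(load_mwh)                   # 100% by construction
hourly_match = sum(min(l, p) for l, p in zip(load_mwh, ppa_mwh)) / sum(load_mwh)

print(f"Annual volume matched: {annual_match:.0%}")
print(f"Hourly matched:        {hourly_match:.0%}")           # well under 100%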

AWS is not alone - Google also offers a carbon footprint tool to cloud customers, and this one includes useful features such as a reminder to switch off server instances which are not being used.

Microsoft also offers a footprint tracker for customers of its Azure cloud. Again, it will be important to make sure this tracks emissions in the same way you track them for your in-house resources - and to remember that its provider has a vested interest in presenting a good record for Microsoft-hosted resources.

Look for a third party

Given the potential conflicts of interest, you may want a third party to measure your cloud footprint. One company offering this is Cirrus Nexus, which has moved into cloud carbon accounting from straightforward financial measures.

“The same data that we collect for cost optimization also works for carbon,” Cirrus Nexus CEO Chris Noble told DCD at the launch of its TrueCarbon tool. “If a company is running 100 VMs in a data center, we can tell them the most cost-optimized place to run that - whether it be in that data center, some other data center, or another cloud provider. At the same time, we can say you’re causing X amount of kilos of carbon to be produced - and you’ll produce less carbon somewhere else.”

The Cirrus Nexus tool examines cloud use in real time, and cross-references that with the known footprint of the data centers used in the regions where they operate.

Customers can set their own internal carbon price, which then creates an incentive to move resources to the least environmentally damaging cloud.

"The business is now incentivized to go and put it in a less carbon-generating region, or a less carbon-generating data center," says Noble.
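A minimal sketch of that incentive (hypothetical regions, prices and carbon figures - not Cirrus Nexus data) shows how pricing a workload's bill and its emissions in the same currency can change which region looks cheapest:

# Hypothetical example of an internal carbon price steering workload placement.
carbon_price_per_kg = 0.15   # internal carbon price, $ per kg CO2e (chosen by the business)

regions = {
    # region: (monthly compute cost $, monthly emissions kg CO2e) - invented figures
    "region-coal-cheap":  (9_000, 40_000),
    "region-hydro-dear": (10_500,  4_000),
}

for name, (cost, kg) in regions.items():
    effective = cost + kg * carbon_price_per_kg
    print(f"{name:20s} bill ${cost:>7,}  carbon {kg:>7,} kg  effective ${effective:>9,.0f}")
# The dearer, low-carbon region wins once carbon is priced in:
# 9,000 + 6,000 = $15,000 vs 10,500 + 600 = $11,100.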

As with all the other cloud carbon accounting tools, the job of comparing with in-house resources remains. For that job, you will need your own internal expertise - or you will have to work hard to find someone outside your organization with no vested interest in selling cloud or on-premises solutions. 

Understanding Scope 1, 2 and 3

Carbon emissions are not simple to account for. As well as the greenhouse gases you produce yourself on site (for instance by running a diesel generator), there are more which you are indirectly responsible for.

Scope 1

These are the direct greenhouse gas emissions produced directly from operations that your company owns or controls.

Scope 2

These are the indirect emissions created by generating the energy used by the company. This includes electricity, but also steam, heating or cooling if your organization buys those in.

Scope 3

This is the potentially vast category of emissions created within your supply chain. It includes both upstream and downstream emissions. If your company has a building constructed, there will be a lot of Scope 3 emissions in materials such as steel and concrete, and Scope 3 would also include the emissions embodied in making the equipment, such as IT systems, that you use, and in providing you with raw materials to carry out your business. Scope 3 also includes downstream emissions from products shipped, used and eventually recycled by customers.

Panel: How is the world of colocation adapting to carbon-negative 'hyperscaler' pledges? (DCD>Broadcast, available to watch on demand)

Panel: How do you know how well your cloud and edge data centers are really doing when it comes to sustainability? (DCD>Broadcast, available to watch on demand)
Welcome to our new Conapto Stockholm 4 South data center adding 20 MW and 6400m2 of computer room space. Together with our existing Stockholm 2 South, this brand new facility will form our upgraded south campus with a total capacity of 24 MW and 7600m2 of computer rooms. Opening September 2023 this will be the perfect place for your next data center deployment. Sustainable, Secure and Well-connected! NEW 20 MW DATACENTER IN STOCKHOLM Find out more on www.conapto.com
