Enterprise Supplement
INSIDE
The search for control over hybrid IT spend
US Government goes Cloud First
Banks and healthcare weather the storm
It’s time to count your carbon
> The US Cloud First initiative aimed to shutter data centers
> Two key sectors responded differently to the pandemic
> IT resources in house and in the cloud will face a carbon audit
What if your data centre could be designed, constructed, and tested before it reaches your facility?
Guess what, it can. Learn more about EcoStruxure™ Modular Data Centres: se.com/datacentre
Editorial
Contents

4. Down with data centers
The US government has spent a decade closing data centers and attempting to move to the cloud. What has it learned?

10. Are banks ditching their data centers?
Finance is a conservative sector, but the Crash and the pandemic are changing things

13. Healthcare's cloud shift
At the sharp end of the pandemic, healthcare firms found they had to come up with new procedures

14. Count your carbon
Financial reporting rules could require you to track your emissions - and that may affect your IT choices
In search of control
Right now, the story of enterprise IT is a search for control.
In-house and on-premise data centers grew massively in the 1990s and early 2000s, and then alternatives emerged in the form of both colocation and the cloud. During this period, enterprises began to realize - or be persuaded - that they should be addressing efficiency and reducing their carbon footprint. That has led to a situation where any IT choice is contingent on issues which might have seemed tangential or irrelevant ten years ago. In this supplement we take a look at how this environment is affecting IT choices in several sectors.
Cloud First for Government
For more than ten years the US public sector has been given an overarching goal: use the cloud to get efficiency gains (p4). Big changes like that, at big organizations, don't get anywhere unless the idea can be summed up simply - so it's often been presented as a drive to close data centers. This mission looked like a never-ending struggle at many stages. It seemed that for every data center that was closed, a new one was found. But now we can take stock and we can see that progress has been made. It's also clear that the experience has created some lessons which could be learnt outside the public sector.
Banking on a new way
Finance can be as big and hidebound as the public sector, but a series of shocks has forced banks to take stock of how they get their IT done (p10). The rise of mobile apps and the decline of cash mean that banks' IT systems must reach more users and serve them better than ever before. At the same time, the cloud is offering opportunities to become more flexible. Not long ago, IT resources outside the banks' premises would have been unthinkable. Now it's the norm.
Healthy developments
During the pandemic, medical organizations had to meet new demands and adapt to new situations (p13). Maybe the cloud's role has been oversold, however. DCD hears from healthcare providers who point out that no strategy will work well if you don't have the in-house skills to manage it. That goes for on-premises data centers, but it also goes for cloud provision.
Carbon counting
Finally, enterprise IT is increasingly driven by environmental measures as much as by financial ones. Carbon accounting is becoming an important measure alongside regular accounts. This may have a big impact on choices such as whether to move to the cloud or maintain an on-premises facility. Check out our final article for some pointers on how to get your carbon figures straight and make the best decision for the world.
Down with data centers
Peter Judge, Global Editor
The US Government is ten years into a project to reduce and eliminate data centers and shift to the cloud. The end is not yet in sight
Add up all the Federal government bodies in the US, and you have a giant, sprawling enterprise. There's a lot we can learn from what it has been doing with its data centers.

Back in 2009, President Barack Obama's administration decided that the IT provision of Federal agencies needed coordination, and appointed the United States' first Federal CIO, Vivek Kundra. Among other things, Kundra turned his attention to data centers, which were, at the time, just emerging as a potential issue of concern.

The great thing about Federal efforts to consolidate data centers is that they show us a large-scale effort taking place in the public eye. Private sector organizations have had similar issues, but they rarely get the same scrutiny.
Cloud First

Federal bodies were all operating with little coordination and strategy, with the result that new data centers were being opened willy-nilly. In 2010, Kundra issued a "Cloud First" instruction: the government should use cloud computing where possible, to save costs and reverse the data center sprawl.

By June 2011, Kundra had left his government job to become a Harvard professor (and then moved swiftly on to an executive position at Salesforce.com). But his initiative has continued, under various names, and become an epic quest for efficiency.

Before he left, Kundra followed up Cloud First with the Federal Data Center Consolidation Initiative (FDCCI). By 2015, he said, the federal government needed to close 40 percent of its data centers, to hit a target of 800 facilities. The IT resources in those facilities should all be moved to the cloud, or else consolidated into fewer, more efficient data centers.

The data center hydra

Things got interesting when that plan encountered reality, and Government advisors realized how hard it is to actually count data centers. The more they looked, the more they found. Pretty soon, the FDCCI had a target of shutting down 1,200 facilities, a move which was reckoned to save the government between $5bn and $8bn per year.

That figure then grew even more. In 2014, preparing for a new streamlining initiative, the Federal Information Technology Acquisition Reform Act (FITARA), the Government Accountability Office (GAO) did a count, and found good news and bad news. The good news was that under FDCCI, a massive 3,300 data centers had been closed. The bad news was that, like a hydra, government IT had spawned new facilities. There were still another 11,700 facilities that needed closing (many of them "created" by new classifications for data centers).

In the intervening years, the definition of data centers had broadened. Given the broad brief to shut data centers and get more efficient, agencies first had to find out what facilities they had. And they found there were a lot of racks and cabinets in closets distributed throughout agencies.

In 2010, a facility counted if it was 500 square feet or more, and met stringent availability requirements. Only 2,094 data centers met those criteria. In the ensuing years, the definition grew. It still excluded print servers and network wiring closets, but some said it now included "any room with a server."

By August 2017, under the new definitions, the government said it had 12,062 data centers. Some had been commissioned since the original FDCCI, but many were simply newly classified as data centers.

How do we assess the progress of an initiative when the environment has changed so rapidly? Delie Minaie, principal in digital and cloud solutions at Booz Allen, says the goals themselves have also changed: "Ten years later, the paradigm is changing around Washington from 'Cloud First' to 'Cloud Smart' - going beyond accelerating agency adoption of cloud-based solutions to enabling the way the mission gets executed."

Given that, she tells DCD, "the verdict is still out on what constitutes 'success'."

Look up the stack

In parallel, agencies found that the real savings came by looking up the stack and finding the services which should be consolidated. No agency should be operating two email services, for instance - and before they commission any new service, agencies should see if other bodies have already developed something that meets their needs.

In 2016, the Government added another specific program: the Data Center Optimization Initiative (DCOI), which went into more detail. All the remaining data centers should improve efficiency using measures such as virtualization, power metering, and killing unused "zombie" servers.

At this point, the Office of Management and Budget (OMB) suggested splitting facilities into "tiered" data centers (with a UPS, dedicated cooling and a backup generator) and "non-tiered" facilities. OMB wanted 25 percent of tiered data centers to go by October 2020, along with 60 percent of non-tiered ones.

Twenty-four agencies fell under FITARA and the DCOI, and in 2019 the Government Accountability Office checked on progress, finding that 6,250 data centers had been closed by August 2018, saving more than $2.37 billion over the years 2016 to 2018. This wasn't enough for the GAO, which found only 13 agencies were on target: "Several agencies indicated that they were seeking revised closure goals because they viewed their goals as unattainable," said the GAO's report. Some of those non-tiered facilities were too essential to go, agencies pleaded.

Better utilization

Remember that the DCOI added some efficiency measures? Back in 2009, Kundra's team had been shocked to find that some server utilization rates were as low as five percent. The OMB demanded this be raised to 65 percent. The DCOI also wanted all tiered data centers to have 100 percent energy metering and 80 percent utilization - and a power usage effectiveness (PUE) of 1.5 or lower. Any new data centers that were allowed should achieve a PUE of 1.4.

Progress to these targets has been slow. PUE targets were met by eight of the 24 agencies, but energy metering, server utilization and automatic monitoring were only in place at three agencies each in 2019.

The poster child for success was the National Oceanic and Atmospheric Administration (NOAA), which adopted a cloud-first policy that has enabled it to deal with surges in web demand during the storms and weather events which have plagued the US since hurricanes Irma and Harvey in 2017.

Alongside this, analysts and advisers have been urging the authorities to keep an eye on the goals and address the actual activities of the organizations, not just their assets. As Minaie puts it: "To truly realize the benefits, an interdisciplinary approach centered around IT modernization is needed for federal enterprises to provide ROI, enhanced security and resiliency while deliberately reducing the data center footprint."

Where are we now?

In 2021, the OMB announced the FDCCI had saved $6.24 billion, but again said more needed to be done, particularly against performance metrics. In 2022, there were still 29 data centers slated to close by the end of the fiscal year, but things actually looked better. All 24 agencies met their cost savings goals for fiscal year 2020 - amounting to $875.10 million saved in 2020, and $335.88 million saved in 2021 up to August.
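The DCOI optimization metrics above boil down to a few simple ratios, so they are easy to check once a facility is properly metered. Below is a rough, unofficial sketch of that check; the thresholds mirror the targets described in this article, and every input figure is invented for illustration.

```python
# A rough, unofficial sketch of checking one facility against the DCOI-style
# targets described above. All input figures are invented for illustration.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by IT energy."""
    return total_facility_kwh / it_equipment_kwh

def check_targets(total_kwh: float, it_kwh: float,
                  avg_server_utilization: float, metered_fraction: float) -> dict:
    return {
        "pue": round(pue(total_kwh, it_kwh), 2),
        "pue_ok": pue(total_kwh, it_kwh) <= 1.5,           # existing tiered facilities
        "utilization_ok": avg_server_utilization >= 0.65,  # OMB's 65 percent goal
        "metering_ok": metered_fraction >= 1.0,            # 100 percent energy metering
    }

# Example: 1,200,000 kWh of total facility energy against 800,000 kWh of IT load
# gives a PUE of 1.5 - just inside the target for an existing facility.
print(check_targets(1_200_000, 800_000, avg_server_utilization=0.40, metered_fraction=0.9))
```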
The trouble is, as the most obviously wasteful facilities go, the DCOI is running out of low-hanging fruit. Future closures will be tougher and have smaller paybacks. "Closures and savings are expected to slow in the future according to agencies' Data Center Optimization Initiative (DCOI) strategic plans,” said the OMB’s 2022 report. “For example, seven agencies reported that they plan to close 83 data centers in fiscal year 2022 through 2025, and save a total of $46.32 million." It’s also hard to assess how well optimization is going. If agencies can convince the OMB that data centers have to be the way they are, they get an exemption. It seems that a very large proportion get such exemptions, as around one-third of agencies are reporting that the DCOI targets are “not applicable.”
What can we learn?

At this point, it's worth looking across at the private sector, where plenty of large organizations have made bold plans to move to the cloud. The results of those strategies don't always get thoroughly scrutinized, but it's clear they don't always go well.

"Government IT professionals operate in an entirely different ecosystem than their private sector counterparts," points out Minaie, "one that includes complex, high-stakes mission sets and a budget of taxpayer dollars overseen by Congress." Despite this, the private sector can learn a lot from government, she adds: "While the sectors are distinctly different, the core tenets of cloud security bear equal importance for the private sector."

On the one hand, the government has an effective, repeatable approach, she says, because "government-grade standards and processes" and regulatory requirements such as FedRAMP, NIST, HIPAA, and HITECH will "drive peak performance and strengthen security posture." On the other hand, she adds: "The private sector has achieved compute and energy efficiency gains that would be difficult and costly for federal agencies to replicate going at it alone."

One private sector institution that did share its cloud-first experiences was JPMorgan - and its results were as mixed as those of the public sector. Despite a strategic drive to the cloud, the bank spent $2 billion on new data centers in 2021, out of a total tech budget of $12 billion. The news brought the bank sharp criticism from investors, who were conditioned to expect savings from the cloud, and a small drop in its share price. Answering a probing question from Mike Mayo of Wells Fargo Securities on an earnings call, Jamie Dimon was forced to explain that, no matter how well it used the cloud, the bank still had to keep its data centers going - and even open new ones.

We will come back to banks, and meet Dimon again, elsewhere in this supplement. But for now, let's observe that they have a lot in common with the public sector, including high levels of bureaucracy and conservatism. The difference is that the private sector is often better funded. Government agencies might find it harder to win an increased tech budget, and shrug off spending $2 billion on activity that diametrically opposes a strategic direction.
Change the scorecard

In the public sector, agencies have to report and measure their progress, but it's clear that metrics aren't simple. Moving to the cloud is no panacea, and Federal watchdogs want to see better ways to measure progress - tracking actual benefits rather than proxies.

Each year, Federal agencies are graded by a scorecard issued by the House Oversight and Reform Committee, guided by the Government Operations subcommittee. In that committee, there's a strong movement to get better measures of progress, according to Meritalk, a site which reports on Federal services.

The committee's Democratic chair, Congressman Gerry Connolly, said: "The scorecard needs to evolve to reflect the changing nature of IT services and to guarantee we are accurately assessing the modernization and IT management practices of federal agencies."

A Republican colleague on the committee, Representative Jody Hice, asked if continued consolidation was worth doing: "I think it's a fair question as to whether indeed we've reached a point of diminishing returns. Beyond the current scorecard, I believe it's time to take a hard look at how FITARA can evolve from this point."

"The goal here is to incentivize progress," said Connolly, "not to get a gold star on our foreheads."
An outside view

Although there have been hiccups along the way, some outside observers are impressed with the progress. "While there may be some edge cases, we have seen a trend in adoption of colocation services where they support cloud, or hybrid colocation use cases and maximize resilience, security and efficiency with an eye towards sustainability," says Minaie.

"Almost every federal agency has progressed toward climate change goals, looking inward to seek greater efficiency and access to Edge computing, which unlocks greater speed, flexibility and productivity," she says. "Federal agencies that work with cloud and digital ecosystem partners inside a vendor-neutral colocation provider are well positioned to meet their data center modernization and sustainability needs."

But she's realistic that this is not a journey to a single end goal: "'Cloud Smart' is the North Star, where it will be a business imperative to redefine how the mission gets executed, promoting service management, innovation and adoption of emerging technologies.

"As we see more cloud adoption across the federal landscape, federal agencies must focus on upskilling their workforce, enhancing security postures, and sharing knowledge in best practice cloud acquisition approaches."

She thinks the pandemic drove cloud adoption, but that it won't produce a complete shift: "While the Covid-19 pandemic was a forcing function for cloud adoption, I think we will live in this hybrid on-prem/cloud world for quite some time.

"The good news is that we are making progress and federal agencies are actively getting out of the hardware management business and paving the way to a new generation of federal IT that's far more agile and resilient than in decades past."
Schneider Electric | Advertorial

A Recipe for Award-Winning Data Centre Digital Transformation
Newcastle City Council turns to Schneider Electric and their partner Advanced Power Technology for data centre resilience and system visibility
By Lavina Dsouza
Newcastle City Council has recently transformed its data centre operations, consolidating its main IT systems into a single data hall, with upgraded power and cooling infrastructure and new management software by Schneider Electric. In the process, it has improved resilience and uptime, simplified the management of all its infrastructure equipment, and made part of its data centre available to other organisations, which helps to offset the costs of its operations.
Setting the Scene at Newcastle City Council

Newcastle City Council employs over 5,000 people providing local-government services to citizens throughout the city. Its data centre hosts numerous applications, including those supporting council tax collection, social services, library services, education and road traffic management. It also has links with the IT systems of other essential public-service bodies such as the NHS and Police. Given the vital nature of these services, the Council's IT systems must run reliably around the clock, and any downtime will have a significant effect on the local populace.
The Challenge: tangled legacy issues

The Council's IT systems had grown steadily over the years to support the evolution of its e-Government approach with the automation and digitisation of many of its activities. But the situation had evolved to the point where the data centre layout had become haphazard and disorganised, many infrastructure elements were nearing their end of life and in need of regular maintenance, and management of the infrastructure was labour intensive and time consuming.

"We had three different server rooms with links between them," says James Dickman, Senior ICT Solutions Analyst at Newcastle City Council. "Telecoms routers were in one room and servers in another, so it was difficult to manage them. We also had separate UPS systems in each room, and air handlers for cooling, many of which were old and in need of replacement.

"Also, we had the inevitable 'spaghetti effect' of legacy systems, with numerous cables installed under the floor over many years, now causing choke points and becoming very difficult to manage and maintain."
The Solution: standardisation and consolidation

As part of a refurbishment of its Civic Centre, Newcastle City Council consolidated its data centre into a single room with a raised modular floor. Following a competitive tender, the Council chose EcoStruxure™ for Data Centers, Schneider Electric's IoT-enabled, open and interoperable system architecture, for the new facility. The data centre was designed and built by Schneider Elite Partner, Advanced Power Technology (APT).

The new integrated data centre infrastructure solution incorporates a variety of equipment from Schneider Electric, including APC™ NetShelter™ racks, Galaxy range UPSs and PDUs, and monitoring and management software. 40 NetShelter IT racks are installed in three aisles with cold aisle containment to optimise cooling efficiency.

For uninterruptible power, Newcastle City Council has standardised on the Galaxy range UPSs, specifically the Symmetra PX 250 modular system. In an N+1 redundant configuration, the new UPS solution enables Newcastle City Council to scale power protection and runtime as its business requirements evolve and change. Standardisation on the UPS has greatly improved the data centre's ability to withstand power outages.

"Previously, we were able to withstand a loss of power for about 20 minutes," says Dickman. "Now we can operate for three hours on batteries, if needs be. We also have a backup generator, which we didn't have before, to provide alternative power in the event of a lengthy loss of our mains supply."

James Dickman continues, "Our resilience and uptime have been greatly improved. On one occasion recently, there was a power outage which affected many buildings close to the Civic Centre where the data centre is housed. But the UPS systems took over, the backup generator came online when it was needed and, 20 minutes later, the system rectified itself once power was restored. Nobody knew there had even been an issue until I checked the system logs the following morning!"

A further benefit of the EcoStruxure for Data Centers solution is a more effective approach to data cable management. More structured cabling provides greater certainty about connectivity within the data centre, reducing complexity and the potential for human error, and improving maintenance and serviceability with easier and safer access. The cable management solution also increases cooling efficiency by improving airflow in the cabinets, as well as providing improved scalability by simplifying moves, adds and changes in the space.

EcoStruxure IT aiding Newcastle City Council to make the most of its data centre power

The new data centre is managed using Schneider Electric's next-generation data centre infrastructure management (DCIM) software, EcoStruxure IT Expert. In addition, the technical environment is being monitored using an APC NetBotz appliance together with temperature and humidity sensors. The visibility this gives to the operation of the data centre is a marked improvement on the previous monitoring capability, according to James Dickman.

"We did have various monitoring systems in place before," he says, "but they were not integrated, and we still had to perform manual checks to make sure everything was functioning properly. Now there are sensors in each one of the racks, allowing them to be monitored constantly. We also have CCTV in the data centre, which we never had before, so that we can be alerted to any security issues."

The monitoring and management capabilities of EcoStruxure IT enable the City Council's data centre operations team to identify any emerging concerns early - such as batteries suffering impaired performance as they near end of life. Armed with such information, upgrades and maintenance can be scheduled and performed with the minimum of downtime, avoiding any disruption to the ongoing provision of services to both internal and external customers.
Benefits: greater insight and more efficient operation

The result has been greatly improved visibility of data centre operations and, consequently, a greater ability to respond to issues as they arise. "We know everything is being monitored constantly and that gives us great reassurance," says James Dickman. "Any issue gets flagged and can be routed by the system to mobile devices, like smartphones, which is very useful if events occur out of hours."

Another key benefit of the software is that the power consumption of each of the IT equipment racks can be monitored. Power consumption data not only helps the Council to improve its own electrical efficiency, but also opens up elements of the facility to cooperation with other bodies.

For example, about 10% of the data centre's real estate is now leased out to other public sector bodies, including HM Courts and the arbitration service ACAS. By carefully monitoring the power supply of each rack, the Council can charge accurately for its hosting services, producing a revenue stream that helps to offset its overall operating costs. It also makes possible a reciprocal disaster-recovery operation with another council, which greatly improves the resilience and continuous uptime of each body.

Schneider Electric and APT meet the data centre needs of Newcastle City Council

James Dickman says, "Working together, Schneider Electric and its Elite Partner, APT, were able to deliver a new data centre while the building was being refurbished, on what was effectively a construction site. They drew up the specification, delivered the solution and had everything up and running with no unplanned downtime! Such pressure on all parties involved means it's not an experience I would like to go through again - but delivery of the new facility was highly successful."

"The project to design and deliver a new data centre for Newcastle City Council demonstrates how each service and product line provided by Advanced Power Technology comes together to deliver on performance and resilience," said John Thompson, Director of APT.

James Dickman concludes, "As a public body we are always looking for cost and energy efficiencies. Schneider Electric and APT were able to design and deliver an overall data centre solution that meets our needs and our expectations. The new facility enables us to meet our service commitments to all stakeholders while minimising the carbon impact of delivering IT services."

You can download the case study here.
Are banks ditching their data centers?
Or are they not quite ready to let go?
Georgia Butler, Reporter
It is the 27th of June, 1967. A sweltering day.
Outside of a Barclays bank in Enfield, North London, a crowd has gathered to witness the unveiling of the world’s first Automated Teller Machine (ATM). Despite the 27°C (81°F) heat, the assembled officials wore traditional suits and ties - but there was a sense that something was changing. The introduction of the ATM was a huge technological leap for what had been a relatively static sector. But by the 1960s the seeds of digitization had been sown in the banking world. Banks already used mainframes, and these developed into data centers during the later
decades of the 20th century, enabling new ways for customers to contact their bank. In 1989, Midland Bank launched First Direct, a branchless bank operating through call centers. Other banks and financial services such as Smile and Egg encouraged banking at a distance by phone and online. By 2001, Bank of America reported that three million of its customers used online banking.
When change came for the banks

But all of this was gradual change. It's only recently that things have sped up - and the 2008 financial crisis was a major catalyst. It's possible that the 2008 crash will be seen as the real beginning of the end for in-house
data centers at banks. In the recession that followed the crash, banks lost the trust of the general public. Perhaps in response, they began to work more closely with FinTech companies in new approaches. Online banking had arrived, and mobile banking was available - increasingly capable smartphones put the two together so people could handle money on the move without branch visits. Meanwhile, FinTech companies had new tools, and began to promise that AI and the ubiquity and speed of mobile networks would allow them to dramatically advance the culture of convenience and immediacy. For the banks’ part, they were struggling to see how that convenience could be delivered.
In the wake of the financial crisis, regulations added further hurdles, increasing banks' workloads and the complexity of their operations in an attempt to reduce risk. This set of pressures forced banks to rethink a lot of things. They welcomed in FinTech companies and moved faster in offering more flexible services.
Welcoming the cloud

They also dug back inside their own infrastructure, and questioned whether their internal digital infrastructure was up to the job. The on-premise data center began to look like a liability, rather than an asset. Banks moved their servers from their own back offices into shared space such as colocation data centers. But the next step - moving to a cloud-based infrastructure running on shared machines in a centralized data center - could still seem more risky.

"If you have followed financial industries' data centers, I would say going back even as recently as 10 years ago, a financial services company embracing colo or cloud would have been viewed as highly unlikely," Marcus Hassen, a group manager at the US financial holding company Truist, said in a recent DCD panel.

For Hassen, the rate of change is impressive, given banks' conservatism and the recent development of the cloud. Online applications didn't start much before Salesforce.com, and the public cloud didn't take off until Amazon got serious about offering Amazon Web Services.

"Public cloud has only even been a segment since around 2006, when Jeff Bezos wanted to find ways to diversify," said Hassen. "You have to hand it to the hyperscalers for the way they've been able to sell many CTOs and CIOs on the cloud being the superior business model."
Digitization on steroids

Ten years on from the financial crash came the cataclysmic incident of a global pandemic, which hastened digitization in many ways. During the pandemic, those of us with jobs that could be done from home were forced to huddle indoors. This led to a boom in the data center industry - but a boom primarily reflected in spending on cloud-based services, not on on-premises systems.

Following the pandemic, it's clear that all sectors, not just banking, are shifting resources towards the cloud, and away from on-premises facilities. In IDC's Worldwide Industry CloudPath Survey (May 2020), 57 percent of responding banks said that they already ran hybrid environments, with another 31 percent moving to hybrid models within 12 months, and a further nine percent within 24 months.

Ali Moinuddin from Uptime Institute spoke of this transition: "Over the last few years, what we've seen is that more and more organizations are being much smarter about how they are deploying their IT assets, and which venues they are using.

"They often have a multi-cloud, multi-colo service partner, and they are also running their own data center. They've transformed their own infrastructure, which they were planning to do before the financial crisis. But after the financial crisis, many of those were taken off the balance sheet. And hence, we've seen a significant increase in the use of cloud infrastructure and, more importantly, colocation service providers."
The future is hybrid

This being said, on-premises IT is not (yet) dead. There is a good reason why banks are moving to hybrid - keeping their own on-premise infrastructure alive alongside new applications in the cloud. That reason is risk. It turns out that using multiple cloud providers can create a smokescreen behind which single points of failure may hide. Moinuddin explained this in further detail:
"After the financial crisis... we've seen a significant increase in the use of cloud infrastructure and more importantly, colocation service providers"
"As you start to distribute your infrastructure across multiple service providers, you start to increase complexity. And as you start to increase complexity, you can actually increase the level of risk in terms of potential outages that may happen within critical IT services that are supporting critical business services themselves.

"There are some concerns around the risks associated with concentration, whereby certain significant service providers could be hosting a number of financial institutions which are critical to a domestic economy in a specific region in the same availability zone. So if there was an outage event, it wouldn't just be one bank that is impacted, it would be several banks, which would have an actual very specific, and very negative impact on the reputation of the financial services sector."

In February this year, five banks simultaneously went down in Canada, leaving customers unable to use online or mobile banking, or use their debit cards. No explanation was given for the sudden outage.
Life without cash?

Another impact of the pandemic was a move away from using cash. Half a century on from that first cash machine, we are starting to move towards a cashless society. During the pandemic, many shops accepted only card payments to limit physical contact as much as possible, and in 2020 cash payments fell by 35 percent. Since the pandemic, things have not really bounced back.

The unforeseen consequence of this is an increased dependence on those online systems. Hard cash is something which can be reliably carried and used, whether or not we have online access to working digital banks. Mobile banking, debit cards and credit cards all rely to a greater or lesser extent on online services. In this world, when services fail, people can be left extremely vulnerable.

It is essential that banks protect themselves and their customers against this risk. When banks assess the relative reliability of the cloud and on-premises IT, they must be aware that the stakes are getting higher. In this situation, it can be tempting to keep IT on-site and under your control.

Charles Hoop, global lead for IT sourcing and category management at Aon, told DCD that "a lot of this is philosophical, religious almost, in terms of some of the biases that drive it
[desire to stay on-prem].” “As things have been outsourced, it's all third party. I don't know that there's many electrical engineers on staff who can read a single line and actually spot that single point of failure.” But Hoop believes that increased control is not enough to justify the cost of on-premise systems compared with the cloud: “I think if you just looked at the dollars and cents, the technical cost benefits, I can't see why you'd be building your own facility.”
The real cost of on-prem

Of course, building your own facility from scratch would trigger a lot of additional costs. But in the banking sector, we are often not talking about building new data centers, but about upgrading within an already functioning on-premise facility, or moving resources from that facility to colocation or cloud. This kind of process is in itself costly, but it can ultimately save money in the long run, if it is done right. The real solution comes from planning and understanding what data and computing need to be in the cloud and what should remain behind.

Given all that, Ali Moinuddin argues that the true future is in hybrid. "We are seeing a steady migration from legacy enterprise assets into both public and private cloud, and colocation. As we [Uptime Institute] have been developing our Financial Services Assessment, about 50 percent of the banks have told us that currently, they have a no public cloud policy. These were global banks from across the world.

"However, they are building private clouds within colocation, and their own enterprise data centers."

In January 2022, it was announced that JPMorgan had done just this: built its own data centers for hosting private clouds. The company spent $2bn on new data centers in 2021, despite having an overall strategy to get IT into the cloud. The spend was met with criticism, and even a drop in share prices, but the company stated that the investment was necessary in order to provide data centers and cloud services in new markets like the UK.

"We spent $2 billion on brand-new data centers, which have all the cloud capability you can have in private data centers," chief executive Jamie Dimon told analysts on a call. "All the stuff going to these new data centers, which is now completely up and running, is on apps. Most of the applications that go in have to be cloud-eligible. Most of the data that goes in has to be cloud-eligible."

This is still part of a long-term plan to become fully cloud-operated. But Moinuddin argues that: "We've seen some financial services, as they scale their requirement in the cloud, realize that not everything needs to be in the cloud, and they start to repatriate some of the data and services that were being outsourced."

Some apps aren't cloud-friendly

Rocco Alonzi, AVP of data center operations and governance for Canadian financial company Manulife, has experienced just this issue. "When we started looking at if we could move this application that we've had for many years into the cloud, it may not be cloud-friendly, and it may not work properly, and then you have to ask what is it going to cost to do that? You never really reclaim the RTI [research technology and innovation].

"But there's definitely hybrid IT coming into play, and it should, for a couple of reasons. Way back in the day, if a data center was bursting at the seams, we would need to build a new data center or we would have to pack up and move everything. But as you start moving your loads into the cloud, you can maintain that data center and probably have it razor-sharp in the sense that it's only the critical application processing that sits there."

While several banks, including Barclays and NatWest, have welcomed cloud computing with open arms and are looking to move entirely to the private cloud, banking as a whole seems currently unwilling or unable to fully leave enterprise data centers behind. On-premise computing does still offer solutions for the most critical and sensitive data, but a reluctance to move towards a hybrid IT architecture could leave traditional banks unable to keep up with the newer FinTechs who embrace the changes and trends in the industry, and losing out financially in the long run.
All tech has a life cycle

While the ATM was a great advance, it has peaked. Cash machines are being removed from walls in many places as people use less cash. At the same time, the data centers which backed that generation of banking have also passed their peak, with fewer being built and many of the old ones closing.

But it's not the end of the line for on-premises data centers just yet. They aren't dying out; they just need to find a new role.
Enterprise healthcare considers a cloud shift
Checking up on healthcare IT
Sebastian Moss, Editor-in-Chief
The pandemic forced businesses to dramatically bring forward IT plans as consumer habits changed virtually overnight.
Some companies saw demand dry up, while others struggled to keep up with a surge in activity. Getting the transition wrong could have meant the end of a firm's existence, as users shifted to faster platforms more suited to the new world. But for the healthcare sector, this digital transformation was a literal matter of life and death.

"The pandemic accelerated the change," says Bashir Agboola of the Hospital for Special Surgery (HSS) in New York City.

"Part of that shift has also been a change in where technology gets deployed and where computing occurs and where data is stored and manipulated," Agboola explained during a DCD>Debate on the healthcare sector. "More and more is going from on-prem data centers into cloud facilities. That is what has allowed us to have the successful response that we have had over the past two years to the pandemic as far as technology and care delivery is concerned."

Agboola is a firm believer in what the cloud can do for the future of patient care. "I am hearing the same conversation at other healthcare providers - should we spend millions of dollars upgrading our data centers, or do we just begin to move workloads into the cloud?" he said. "More and more, particularly those with aging data centers, are beginning to move their workloads out. They're not investing in data center infrastructure anymore - no one wants to spend tens of millions of dollars to upgrade a data center."

But in a separate DCD>Debate, the company's senior director of data center
services, Keith Montalvo, was a little more circumspect. "I think one way I decide whether to outsource or do it in-house is how quickly does the technology need to be deployed," he said. "Do we have the expertise in-house to implement or not?"

Agboola also cited staff expertise as a critical factor, noting that the shift to cloud will require new staffers versed in cloud technologies, but that it "remains a challenge to find strong talent to help you with that journey."

Shane Brauner, CIO of biotech software company Schrödinger, concurred: "For a lot of businesses, I don't think running a data center or replacing hard drives in a system is super core to their business. So being able to take our resources internally and move them to start building talent as a cloud engineer, rather than racking servers, it's a game-changer."

Montalvo cautioned that losing too many data center technicians has its drawbacks - especially when it comes to colocation usage. "I think one of the things I'm seeing in the industry is, as folks farm out their internal data centers, and maybe leverage colo space more, the internal knowledge about critical infrastructure, like power and cooling, and even just regular facility infrastructure, is kind of being lost," he said.

"We're becoming more dependent on the colos to manage all that. And I think there is a due diligence internally that we need to maintain some knowledge about facility design to keep our colo partners honest around redundancy."

He added: "You need to be versed in maintenance requirements around facilities, like thermal analysis on breakers, understanding input breaker diversity, understanding UPS capacity, understanding
distribution redundancy, etc. Some folks will sell you on circuit redundancy, but then you don't know that both circuits are coming from the same distribution panel."

Montalvo believes that "anyone looking to go to a colo should have an audit checklist of things you're going to ask the colo about their infrastructure: What kind of cooling do they have? Is that redundant? Do they do adequate testing on the generators? Is it weekly? Is it monthly? What is the capacity of those generators? What are the switchover systems? What is the end of life of those critical pieces of equipment? And when do they look to refresh them? These are all things that, I think, internally, companies should be armed with."

While HSS and other healthcare providers are maintaining some on-prem presence, and expanding into colocation, the future lies in the cloud, Agboola said. "In my organization, when we put together our enterprise cloud strategy, one of the key decisions we made was that we will build net-new capabilities in the cloud, rather than trying to build those on-prem," Agboola said. "And then we think about cloud migration for other stuff opportunistically."

The cloud allows for new tools and systems "to be quickly ramped up," he continued. This is what is being demanded by customers, "as they come to healthcare with experience working with retail and banking. There's so much change there, where many consumers have never set foot into a banking hall in a long time. They do all their banking remotely, with technology.

"Consumers have similar expectations of healthcare providers," he said - something only possible with the cloud. "We really have no choice: if we fail to reinvent digitally, it will be the end of the business."
Count your carbon
Organizations deciding whether to run a data center or move to the cloud should do some carbon accounting
Peter Judge, Global Editor
Enterprises considering their options will automatically look at the financial impact of each one. They should also look at carbon emissions.
And they may find that decisions about their IT resources - including whether to run a data center - will have a big impact on their emissions.

Have you set targets?

If your company has set targets for limiting emissions, then you will need to track those emissions so you know if you have met the targets. Even if fighting global warming is not a number one corporate goal for your company, there are plenty of other good reasons why you will have to keep track.

Among other things, proposed changes to the SEC rules on risk reporting could mean that US companies bigger than a certain size (having $25 million in assets) will have to report their carbon emissions. Other nations have similar rules. So having a weak story on emissions can harm your prospects for raising money from investors and other sources. At some point, you need to do carbon accounting.

The major standard for carbon accounting is the Greenhouse Gas Protocol (GHG Protocol), a global standardized framework which measures emissions from private and public sector operations and their ecosystems. It is a joint effort from the World Resources Institute (WRI) and the World Business Council for Sustainable Development (WBCSD). The GHG Protocol is where Scope 1, 2 and 3 emissions are defined (see box).

Carbon accounting uses ideas from lifecycle analysis (LCA), and there is also an ISO standard for it (ISO 14064). There are concerns that ISO 14064 might not be exactly in line with the GHG Protocol. For this and other reasons, large companies have set up Carbon Call, a movement to make sure carbon accounting is actually useful and consistent.

The new SEC rules are likely to apply to the most obvious emissions your company produces - Scope 1 (direct) emissions, and Scope 2 emissions produced by your energy suppliers. You may also have to report on Scope 3 emissions - those you cause within your entire ecosystem of suppliers and customers - which is normally a much larger figure. If you have set targets for Scope 3 emissions, then you will have to account for them. And the SEC could well come after you for detailed figures.

What's this got to do with your data center?

Given that IT is likely only a small part of your carbon footprint, it can get overlooked, or dealt with too quickly - but there's a serious debate in the data center and cloud sector over who has the best story on emissions.

If you are calculating the carbon footprint of your IT, you must determine the emissions (Scope 1, 2 and hopefully Scope 3) of the servers and network equipment you run in-house. If you build a data center, there will be significant Scope 3 emissions embodied in the equipment and the construction of the building.

But the chances are high that you also run some of your IT in the cloud. You will need to account for the emissions that causes. But will those emissions be counted in the same way you account for in-house IT?

When the cloud began to take off in the 2010s, cloud providers asserted that they were reducing the carbon footprint of their customers, because the IT resources in the centralized cloud data centers were deployed more efficiently. All the IT loads were virtualized and aggregated on the smallest number of servers, so there was less wasted hardware - a full data center can be run more efficiently than an empty one. So when enterprises shift their IT into the cloud, it is often counted as a reduction in greenhouse emissions.

In 2020, a study led by Lawrence Berkeley National Laboratory (LBNL) found that between 2010 and 2018, there had been a massive surge in computing capacity in data centers, with only a marginal increase in energy used - and therefore little increase in Scope 2 emissions. The result was attributed in part to small, inefficient enterprise data centers being replaced by more efficient capacity in the hyperscale facilities run by cloud service providers. Co-author Arman Shehabi of LBNL said: "Less detailed analyses have predicted rapid growth in data center energy use, but without fully considering the historical efficiency progress made by the industry."
How green is your cloud?

The cloud leader Amazon Web Services (AWS) has lost little time in capitalizing on this, and offers a free tool for customers which tracks the carbon footprint of cloud resources in AWS data centers. It then helps users compare this with what they might emit if they ran those resources in an in-house facility. Needless to say, the in-house figures are estimates made by Amazon, and AWS instances always come out much better - in many cases, an unlikely 88 percent better. The tool is also limited to reporting monthly aggregate totals.

Amazon has promised that it will have net-zero carbon emissions by 2040, so the company tells users that moving to the cloud is a surefire way to reduce emissions. "If you are an AWS customer, then you are already benefiting from our efforts to decarbonize and to reach 100 percent renewable energy usage by 2025, five years ahead of our original target," said AWS evangelist James Barr in a blog post. Barr says "the AWS path to 100 percent renewable energy for our data centers will have a positive effect on [customers'] carbon emissions over time."

However, it's worth pointing out that the AWS tool only takes account of Amazon's plans to use renewable energy (Scope 2) in the AWS cloud, ignoring Scope 3. And there are question marks over the way AWS accounts for its own emissions, since it makes heavy use of power purchase
agreements (PPAs). It pays for renewable energy to match the amount of energy it uses - but it matches variable renewable sources against AWS's steady consumption, so its PPAs may only cover about half the energy used in the AWS cloud, according to a report written by McKinsey for the Long Duration Energy Storage Council.

AWS is not alone. Google also offers a carbon footprint tool to cloud customers, and this one does include useful features such as a reminder to switch off server instances which are not being used. Microsoft also offers a footprint tracker for customers of its Azure cloud.

Again, it will be important to make sure this is tracking emissions in the same way you follow them for your in-house facilities. It is also important to note that the tool's provider has a vested interest in presenting a good record on Microsoft-hosted resources.
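As a rough illustration of why like-for-like accounting matters, the sketch below estimates operational (Scope 2) emissions for the same workload run in-house and in two hypothetical cloud regions, using the common energy-times-PUE-times-grid-intensity approximation. All figures are assumptions for illustration; it also assumes the workload draws the same IT energy wherever it runs (ignoring the utilization gains cloud providers claim) and leaves out embodied Scope 3 carbon entirely.

```python
# An illustrative sketch comparing operational (Scope 2) carbon for the same
# workload run in-house versus in two hypothetical cloud regions. PUE values,
# grid intensities and the energy figure are all assumptions, and embodied
# (Scope 3) carbon is ignored entirely.

def operational_kgco2(it_energy_kwh: float, pue: float, grid_kgco2_per_kwh: float) -> float:
    """Estimate Scope 2 emissions: IT energy, scaled up by PUE, times grid carbon intensity."""
    return it_energy_kwh * pue * grid_kgco2_per_kwh

workload_kwh_per_month = 10_000  # assumed IT energy drawn by the workload

scenarios = {
    "in-house data center":        operational_kgco2(workload_kwh_per_month, pue=1.8, grid_kgco2_per_kwh=0.45),
    "cloud region A (same grid)":  operational_kgco2(workload_kwh_per_month, pue=1.2, grid_kgco2_per_kwh=0.45),
    "cloud region B (low carbon)": operational_kgco2(workload_kwh_per_month, pue=1.2, grid_kgco2_per_kwh=0.05),
}

for venue, kg in sorted(scenarios.items(), key=lambda item: item[1]):
    print(f"{venue}: {kg:,.0f} kgCO2e per month")
```

Even in this toy example, the grid mix of the region dominates the result - which is why the accounting method matters more than the marketing.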
Look for a third party

Given the potential conflicts of interest, you may want a third party to measure your cloud footprint. One company that claims to offer this is Cirrus Nexus, which has moved into cloud carbon accounting from straightforward financial measures.

"The same data that we collect for cost optimization also works for carbon," Cirrus Nexus CEO Chris Noble told DCD at the launch of its TrueCarbon tool. "If a company is running 100 VMs in a data center, we can tell them the most cost-optimized place to run that - whether it be in that data center, some other data center, or another cloud provider. At the same time, we can say you're causing X amount of kilos of carbon to be produced - and you'll produce less carbon somewhere else."

The Cirrus Nexus tool examines cloud use in real time, and cross-references that with the known footprint of the data centers in the regions where they operate.

Customers can set their own internal carbon price, which then creates an incentive to move resources to the least environmentally damaging cloud. "The business is now incentivized to go and put it in a less carbon-generating region, or a less carbon-generating data center," says Noble.

As with all the other cloud carbon accounting tools, the job of comparing with in-house resources remains. For that job, you will need to have your own internal expertise - or work hard to find someone outside your organization with no vested interest in selling cloud or on-premise solutions.

Understanding Scope 1, 2 and 3

Carbon emissions are not simple to account for. As well as the greenhouse gases you produce yourself on site (for instance by running a diesel generator), there are more which you are indirectly responsible for.

Scope 1
These are the direct greenhouse gas emissions produced from operations that your company owns or controls.

Scope 2
These are the indirect emissions created by generating the energy used by the company. This includes electricity, but also steam, heating or cooling if your organization buys those in.

Scope 3
This is the potentially vast category of emissions created within your supply chain, including both upstream and downstream emissions. If your company has a building constructed, there will be a lot of Scope 3 emissions in materials such as steel and concrete. Scope 3 also includes the emissions embodied in making the equipment, such as IT systems, that you use, and in providing you with raw materials to carry out your business, as well as downstream emissions from products shipped, used and eventually recycled by customers.
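To make the internal carbon price mechanism Noble describes concrete, here is a minimal sketch of a placement decision that adds priced-in carbon to cash cost. The venue names, costs, emissions and the $100-per-tonne price are all illustrative assumptions, not figures from Cirrus Nexus or any provider.

```python
# A minimal sketch of the internal carbon price idea: convert each venue's
# estimated emissions into a notional cost and add it to the cash cost before
# deciding where to place a workload. All names and figures are illustrative.

INTERNAL_CARBON_PRICE = 100.0  # dollars per tonne of CO2e, set by the business

venues = [
    # (venue, monthly cash cost in dollars, estimated tonnes CO2e per month)
    ("on-prem hall", 9_000, 4.0),
    ("cloud region A", 8_400, 3.0),
    ("cloud region B", 8_500, 0.6),
]

def effective_cost(cash_cost: float, tonnes_co2e: float) -> float:
    """Cash cost plus emissions priced at the internal carbon rate."""
    return cash_cost + tonnes_co2e * INTERNAL_CARBON_PRICE

for venue, cash, tonnes in venues:
    print(f"{venue}: ${effective_cost(cash, tonnes):,.0f} effective monthly cost")

best = min(venues, key=lambda v: effective_cost(v[1], v[2]))
print("Cheapest once carbon is priced in:", best[0])
```

In this toy example the lowest cash cost and the lowest effective cost point to different venues - which is exactly the behavior an internal carbon price is meant to produce.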
Deploy your data centre with less risk using EcoStruxure™ Data Centre solutions.

EcoStruxure™ for Data Centre delivers efficiency, performance, and predictability.

• Rules-based designs accelerate the deployment of your micro, row, pod, or modular data centres
• Lifecycle services drive continuous performance
• Cloud-based management and services help maintain uptime and manage alarms

Discover how to optimise performance with our EcoStruxure Data Centre solution. se.com/datacentre

©2022 Schneider Electric. All Rights Reserved. Schneider Electric | Life Is On and EcoStruxure are trademarks and the property of Schneider Electric SE, its subsidiaries, and affiliated companies.