The Data Centre Issue | EM360°


THE DATA CENTRE ISSUE

Is Prefabricated Modularity the Future for the Data Centre? - Kevin Brown, Schneider Electric
Why OpenStack is Winning the Open Source Race - Dr Mike Kelly, CEO, DataCentred
Uptime's Advice to Data Centre Owners: Involve Operations from the Start - Andy Lawrence, 451 Research


DATA CENTRE

Uptime's Advice to Data Centre Owners: Involve Operations from the Start
Andy Lawrence, 451 Research

The term 'data centre efficiency' means a lot of things to different people. 451 Research's Andy Lawrence discusses some best practices for how an enterprise should prioritise its data centre operations.

To improve uptime, data centre owners and operators have traditionally focused on the physical infrastructure that supports IT, incorporating independent redundancies, monitoring systems, failover schemes and more. On the whole, the strategy has worked, yet research by Uptime Institute (an independent division of The 451 Group) and others shows that large-scale outages continue to plague the data centre industry, and that some operators continue to do much better than others. The level of downtime that still occurs might be surprising to some, given the significant economic consequences of service disruption and the widespread use of standards, techniques and technologies dedicated to maintaining continuous availability.

A major reason for some of the continuing problems, according to Uptime, is that design alone cannot guarantee data centre efficiency or availability. Operations management (e.g., capacity management, change management, incident management), maintenance strategies, staff and contractor training, and emergency-response procedures all affect availability.

There are signs that this message is beginning to sink in: Uptime reports that a growing number of data centre owners and multi-tenant data centre (MTDC) clients are requiring third-party validation of operational best practices to ensure optimal facility performance. (An analogy might be that airline owners and passengers don't just want to know that an aircraft is certified as functional when it leaves the factory; they want to be assured that the crews know how to maintain and fly it safely.)

As the physical infrastructure of the data centre becomes increasingly 'commodified', operational performance rises in prominence.

In its latest 'Start with the End in Mind' initiative, Uptime goes even further. In new data centres, it says, operations hold the key to efficiency (including availability). Operations is the ultimate client in any data centre expansion, Uptime asserts, and as such should be integral to the project from conception. By focusing on the way the data centre will be run from the earliest planning stage, owners increase the efficiency, uptime and ROI of their facilities while reducing cost and risk.

Most data centre outages are caused by human error

Partly as a result of the success of the Tier-classification system and the general adoption of redundancy in data centre designs, outages caused by component failures are rare, and attempts to increase uptime solely through improvement of the physical infrastructure are reaching a level of diminishing returns. Less than one-third of the unplanned outages reported in a recent survey conducted by the Ponemon Institute for Emerson Network Power were attributed to equipment failures, and respondents reported that most of those outages were avoidable: almost all were attributed to either human error or an equipment failure that might have been prevented had adequate training, monitoring or maintenance procedures been in place.

These results are not dissimilar to Uptime's Networks data. Uptime's Networks have tracked incidents and outages in member facilities for over 25 years, compiling a detailed dataset of more than 5,000 incidents in over 400 data centres. Although outages in Networks member facilities are exceedingly rare (approximately one per decade), virtually all can be traced back to human error. These findings reinforce the importance of a comprehensive management and operations program to ensure data centre availability and maximise efficiency. Organisations that closely align data centre operations with business objectives, and use industry best practices as the benchmark for continuous monitoring and improvement, optimise data centre performance and realise the most efficient return on their investment possible.


The new focus on training and operations from Uptime opens up further possibilities: while availability is generally good, almost all research suggests that energy efficiency and use of capacity are not. Over time, the focus on training and ongoing operations may offer a new channel for disseminating best practices in capacity management and energy efficiency.

Will independent verification of operational best practices become a requirement?

As the physical infrastructure of the data centre becomes increasingly 'commodified', operational performance rises in prominence. For MTDC operators and other IT service providers that need to meet client-imposed uptime requirements, an objective performance assessment can be a key differentiator. Most organisations have internal training and review procedures in place, and some standards developed for other industries or other purposes (e.g., ISO, ITIL, SSAE 16, SAS 70, EN 50600) have been adapted to also address data centre facility availability. Historically, however, third-party validation of operational best practices based on a data centre-specific system has not been generally available. That has now changed, and facility owners are taking note.

Uptime Institute, in consultation with industry stakeholders, has developed an operations standard that is delivered via two operations-assessment protocols specifically designed for data centres and created by data centre owners: Tier Standard: Operational Sustainability, for Tier-certified facilities, and the M&O (Management and Operations) Stamp of Approval, for data centres that are not Tier certified.

…certification could be a key differentiator for an IT service provider.

Both these methodologies address the site management behaviours and decisions that impact long-term data centre performance, such as staffing and organisation (staffing levels, qualifications and skill mix); training and professional development; preventative maintenance programs and processes; operating conditions and housekeeping; planning, management and coordination practices and resources; and more.

Are we entering a new stage in data centres, where operations are certified?

Certainly, design and build certification has become increasingly important in recent years. It is now common for owners to include design or constructed-facility certification requirements in data centre construction requests for proposals (RFPs), and for potential tenants to ask for certification from MTDC operators.


Now that credible operational certification is available, an increasing number of owners and tenants are including requirements for operational certifications in their facility management RFPs; some even carry significant penalties if the contractor fails to meet or sustain minimum standards. For example, the Province of Ontario recently included a requirement for operational certification in an RFP, with a $1m penalty should its IT service provider fail to comply.

Operations holds the key to reliability

Operational excellence is not just about availability, but also efficiency. Uptime Institute research and field experience indicate that even in new builds, operations holds the key to efficiency. The design-build phase typically accounts for less than 5% of the data centre's lifespan, yet the team responsible for the other 95% of the facility's life, the operations team, is often not involved until the facility is commissioned. This is a mistake, Uptime states: organisations that view data centre expansion as a 'design build operate' process rather than a function of change management put the efficiency, uptime and ROI of their facilities at risk. Uptime reports that data centres where operations staff were integral to the construction process from conception run more reliably and profitably from day one.

And according to Uptime, conception really does mean 'conception': in the most efficient and reliable data centres, those who will operate the facility are brought into the new build, retrofit or expansion process in the preconstruction/planning phase. This ensures that the team that will run the facility on a daily basis is involved in the decisions that will affect how efficiently it can be run.

This observation is the inspiration behind Uptime's 'Start with the End in Mind' initiative. Led by Lee Kirby, CTO of Uptime Institute and former senior executive at Lee Technologies, Uptime's new program details how design/build and operations development should occur simultaneously. A typical data centre build, retrofit or expansion process involves five phases: pre-construction, design, construction, commissioning and turnover. Involving the operations team at each phase ensures not only that the facility is engineered to optimise maintainability, but also that the operations team can provide continuity for knowledge management and the transition to production. Certifications, if desired, are incorporated as milestones, and review and optimisation of operational procedures continue as an iterative process throughout the facility's lifespan, ensuring that, as Uptime puts it, "it doesn't end in tiers."

The following table shows the activities that should occur concurrently to ensure the facility is running optimally on day one.

Business objectives | Construction phase | Operations team activity
Data centre strategy | Pre-construction/planning | Operations strategy
Operations planning | Design | Operations planning
Operations program development | Construction | Develop standard operating procedures (SOPs), methods of procedure (MOPs) and emergency operating procedures (EOPs); implement operations systems
Operations readiness | Commissioning | Develop critical MOPs, SOPs; populate operations systems; perform operations training; review and optimise procedures
Turnover and transition | Turnover | Implement operations program; review and optimise all SOPs, MOPs, EOPs; refine procedures
Sustained operations | | Periodic review and optimisation of procedures to ensure continual improvement
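For readers who prefer a programmatic view, the phase-to-activity pairing in the table can be condensed into a simple lookup. The structure and abbreviated labels below are our own summary of the table, not an Uptime Institute artefact.

```python
# Concurrent operations-team activity per construction phase,
# condensed from the table above (summary labels are ours).
PHASE_ACTIVITIES = {
    "pre-construction/planning": "operations strategy",
    "design": "operations planning",
    "construction": "develop SOPs, MOPs and EOPs; implement operations systems",
    "commissioning": "develop critical MOPs/SOPs; train; optimise procedures",
    "turnover": "implement operations program; refine procedures",
}

def activity_for(phase: str) -> str:
    """Look up the operations activity that should run concurrently with a phase."""
    # After turnover, the table prescribes ongoing review rather than a phase.
    return PHASE_ACTIVITIES.get(phase.lower(), "sustained operations: periodic review")

print(activity_for("Design"))
```

The point of the mapping is the one Uptime makes in prose: there is no construction phase without a corresponding operations activity.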



Is Prefabricated Modularity the Future for the Data Centre?
Kevin Brown, Schneider Electric

Kevin Brown, VP Data Center Strategy and Technology at Schneider Electric, talks to us about the changes in data centre infrastructure, the forces behind the spread of prefabricated data centres, and the reasons why this disruptive technology will transform the data centre industry.

What do we mean by this term 'prefabricated modular data centres'?

First of all, it's important to understand the term 'modular'. You can build modular data centres in a traditional way or you can do it in a prefabricated way. Modularity to us means that I deploy only the amount of data centre capacity that I need, when I need it, and then scale as I go. Prefabrication involves pulling as much of the construction as possible into a factory environment so that you get more predictable performance and faster deployment. The key point is that prefabrication is the critical development in the industry: it takes the complexity of what happens in the field and brings it into the controlled environment of the factory.
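Brown's definition of modularity ("deploy only the capacity I need, then scale as I go") can be illustrated with a toy capacity calculation. The module size and load figures below are hypothetical examples of ours, not Schneider Electric numbers.

```python
# Toy illustration of modular vs monolithic capacity deployment.
# All figures are hypothetical, chosen only to show the mechanism.

MODULE_KW = 500                      # capacity of one prefabricated module
demand_kw = [300, 700, 1100, 1600]   # projected IT load, year by year

def modules_needed(load_kw: int, module_kw: int = MODULE_KW) -> int:
    """Smallest number of modules that covers the load (ceiling division)."""
    return -(-load_kw // module_kw)

# Modular: capacity tracks demand, one module at a time.
deployed = [modules_needed(d) * MODULE_KW for d in demand_kw]
# Monolithic: peak capacity must be provisioned on day one.
monolithic = [max(demand_kw)] * len(demand_kw)

for year, (d, m, mono) in enumerate(zip(demand_kw, deployed, monolithic), 1):
    print(f"Year {year}: load {d} kW, modular {m} kW, monolithic {mono} kW")
```

The stranded capacity in the monolithic column each year is the capital that, in Brown's framing, modular deployment defers.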

Companies now are laser focused on their capitalisation […] making sure that they’re realising the benefits of any capital expenditure as quickly as possible.


Is the concept of prefabrication widely in use?

What's interesting about the data centre industry is that for a number of years it has been trying to work out whether it should be using prefabrication. What is also fascinating to me is that other industries have been prefabricating solutions for years. If you think about the power infrastructure industry, which is very close to Schneider Electric's background, there has been prefabrication of power infrastructure for a very long time. So the question that really should be asked is: why hasn't the data centre industry been able to take advantage of this approach?

Prefabrication is accepted in many industries, but I don't think it's fully accepted in the data centre industry, and I really believe that's because the industry providers themselves have not made available an offer that meets customers' demands. The customers I talk to want the benefits of prefabrication: they want the predictable performance, they want to know that what they designed is what gets built, and they want to increase the speed of deployment so that they can get the data centre up and running faster. Prefabrication is the best way to approach that problem, but up until now all of the offerings available to these customers have been very limited.


If the idea is generally accepted, why don't we see more use of prefabrication and why isn't a compromise acceptable?

Most data centres are not being built in a completely new environment. Almost every data centre is dealing with the issues of the site: they might be going into an existing building with only a limited number of options they can consider. Therefore, if you want to prefabricate, you have to be able to deal with the constraints of the site, which means every project might be just a little bit different. We can start with a standard design, but we might need to modify that design to meet the site constraints. Also, many customers have preferences on how they want to see the data centre designed in terms of power or cooling infrastructure. However, there isn't really a vendor out there that has the ability to come to a customer, work with them to meet their preferences and site constraints, and do so on a global scale.

How is the market for prefabrication evolving?


I like to think that we are in the midst of a perfect storm for prefabricated solutions. When you look at where the industry has come over the last few years, companies now are laser-focused on their capitalisation, and they want to make sure that they're realising the benefits of any capital expenditure as quickly as possible. If you talk to co-location companies, for instance, they talk about speed to revenue. As a result, a focus on the financial return on their investment is driving them to look at other options. Therefore, we're in a very good place as an industry to start capitalising on this trend towards prefabrication.

But again, this isn't just about being able to make a module or a box. It's about starting with a reference design. Customers want to see how the whole data centre goes together and then be shown how they can build it. They don't want to have discussions about only one part of the architecture; they want the architecture to encompass the IT space, the power infrastructure and the cooling infrastructure. When these work hand in hand, it's a much more powerful message, and we think the time is right for a broader adoption of the prefabricated approach.




Why OpenStack is Winning the Open Source Race
Dr Mike Kelly, CEO, DataCentred

OpenStack has arrived. Its recent adoption for production environments by non-digital-native blue-chip companies, including Disney, Wal-Mart and BMW, undoubtedly proves the cloud platform's viability, while the fact that it is supported by multiple established vendors, including Red Hat, Oracle, IBM and Dell, is testament to their view of its potential. OpenStack's avoidance of vendor lock-in, the versatility and scalability of the platform, and its approach to security, combined with the vast community of cutting-edge developers that the project offers, create an enterprise environment that is highly customisable to your business's needs.

No risk of vendor lock-in

The fear of vendor lock-in has been a major barrier for enterprises considering cloud computing. While open source software significantly reduces the cost of using cloud and is ideal for businesses concerned about being tied into costly long-term software contracts, OpenStack takes this one step further. Its open APIs mean that users can cost-effectively explore different providers to establish which services and tools are best suited to their requirements, and can switch between them easily. And given that OpenStack is supported by nearly every major IT vendor, it offers the widest range of high-quality providers from which to choose.
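As one concrete illustration of those open APIs: OpenStack's Identity service (Keystone) exposes a documented v3 endpoint, POST /v3/auth/tokens, whose password-authentication request body any client can construct without vendor-specific tooling, which is exactly what makes switching providers practical. A minimal sketch, with placeholder credentials and scoping:

```python
import json

def keystone_v3_auth_body(username: str, password: str,
                          project: str, domain: str = "default") -> dict:
    """Build the JSON body for POST /v3/auth/tokens (Keystone v3 password auth)."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"id": domain},
                        "password": password,
                    }
                },
            },
            # Scope the resulting token to a project.
            "scope": {"project": {"name": project, "domain": {"id": domain}}},
        }
    }

body = keystone_v3_auth_body("demo", "secret", "demo")
print(json.dumps(body, indent=2))
```

Because the same body works against any conformant Keystone endpoint, moving between OpenStack providers is largely a matter of changing the auth URL and credentials.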


Speed and flexibility are of the essence

With the boom in technology-led start-ups, more traditional businesses have to innovate to remain competitive. Household brands like Disney have adopted OpenStack in conjunction with a devops approach, which allows them to accelerate their development cycles, leading to faster innovation and enhanced customer service. The highly flexible nature of the OpenStack architecture supports this. In a hybrid cloud environment, enterprises can easily and cost-effectively move workloads between the private and public cloud, taking advantage of the benefits each has to offer: the ability to scale up on the public cloud during peak times while benefiting from the security, regulatory control and lower cost of operating a private cloud.

Security is critical

Security is a key issue for enterprises to consider when moving to a cloud-based deployment: breaches are costly in terms of reputation, time and money. OpenStack has prioritised security through its Keystone Identity Service, a role-based access system that can be controlled at the level of users, roles and projects, and which currently supports token-based authentication and user-service authorisation. Specific features include a range of protocols that protect information stored in the private cloud, including the recording of all security operations and transactions, conformance to compliance standards, and securing gateway payments.

Largest open source community

Perhaps OpenStack's greatest appeal, however, is that it boasts the largest open source community: over 25,000 global contributors, including some of the world's most cutting-edge developers, continually working to improve the platform's open source code. Collaborating around a six-month release cycle that creates a tight feedback loop between developers and testers, software development progresses faster, and the enterprise customers of the contributing vendors have access to the latest software innovations as a result.

Ultimately it is this highly collaborative community supporting OpenStack development that will enable it to win the open source race. Because the software can be configured to the specific requirements of any business, enterprises have control over the speed, path and cost of enterprise development. OpenStack users have much greater freedom to set their own course, rather than being trammelled by the rules of unresponsive providers.
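The user/role/project access model described under 'Security is critical' can be sketched as a toy lookup. This is our own illustration of the idea of role assignments scoped to projects, not Keystone's actual implementation.

```python
# Toy model of role-based access controlled at the level of
# users, roles and projects (illustrative only, not Keystone code).

assignments = {
    # (user, project) -> set of roles held on that project
    ("alice", "web"): {"admin"},
    ("bob", "web"): {"member"},
}

def allowed(user: str, project: str, required_role: str) -> bool:
    """True if the user holds the required role on the given project."""
    return required_role in assignments.get((user, project), set())

print(allowed("alice", "web", "admin"))  # True
print(allowed("bob", "web", "admin"))    # False
```

The same user can hold different roles on different projects, which is what lets a single identity system serve many tenants with distinct permissions.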



www.enterprisemanagement360.com

EM360° are committed to helping your business grow Our flexible marketing options ensure your campaign is perfectly tailored to meet your business & marketing goals

Online campaigns

Thought leadership & education

Brand awareness

Bespoke publishing options

Digital magazine

Video content

Radio

Podcasts

Lead generation

Email news

Webinars

Mobile app

To find out how EM360° can make sure your message reaches the people who matter, contact Patrick Agyeman on +44 207 148 4444 or get in touch by email: pagyeman@imipublishing.com

