CLOUDCOMPUTING WORLD Issue 1
August 2014
The cloud: OpenStack builds momentum
Understanding cloud load balancing
Why planning should be central to your cloud adoption process
Journey to the cloud: challenges posed by security
Will Linux cause problems with load balancers?
Launch Partners
ARE YOU ON CLOUD NINE? One major cloud computing company is, after we saved them more than £19m. We showed our client there was a better solution for their data centre needs and, after two well-thought-out acquisitions, they saved big. Could you too unlock savings from your critical environment? Speak to our Data Centre Solutions team today. Visit our website to find out more here or challenge us on the spot by calling +44 20 7182 3529.
CONTENTS

6 CCW News
All the key news in the world of cloud.

8 Understanding the need to reduce data centre PUE levels
Power issues in today's data centres

12 How CRM changed cloud and cloud changed CRM
Moving business applications into the cloud

16 Customer-defined data centres
Redefining cloud service delivery

18 Removing the risk for data centre and enterprise IT
WhiteSpider develops a cloud solution for Parsons Brinckerhoff

22 Cloud: the 60-year-old hot topic
The cloud: it's older than you might think / Giving data centres a new perspective

24 A well-balanced hybrid cloud
Understanding cloud load balancing - load balancing for a more robust cloud environment

Also in this issue: Service price differences under the microscope; Audiocast: total remote/cloud security becoming reality, says veteran pen tester; Looking towards an open source cloud future - cost cutting without service reduction.
CLOUDCOMPUTING WORLD 26 St Thomas Place, Cambridge Business Park, CB7 4EX Tel: +44 (0)1353 644081 info@cloudcomputingworld.co.uk www.cloudcomputingworld.co.uk
26 OpenStack Builds Momentum
Understanding data centre software

30 Cloud Computing in an On-Demand World
Why planning is essential when it comes to the cloud

32 Journey to the cloud: challenges posed by security
How the cloud brings challenges, as well as benefits

34 Why planning should be central to your cloud adoption process
Breaking down the planning process into more manageable steps

36 Security questions to ask your cloud provider
Reducing security risk with due diligence

38 Understanding cloud disaster recovery services
How the cloud can make your IT systems more robust

40 Taking your first steps into the cloud
Strategies for adopting the cloud

44 Will Linux cause problems with load balancers?
How next-gen Linux containers could cause problems

46 Using OpenStack in an all-IP environment
Deutsche Telekom taps into the cloud

LGN Media, a subsidiary of The Lead Generation Network Ltd
Publisher & Managing Director: Ian Titchener
Editor: Steve Gold
Production Manager: Rachel Titchener
Advertising Sales: Bob Handley
Reprographics by Bold Creative
The views expressed in the articles and technical papers are those of the authors and are not endorsed by the publishers. The authors and publisher, and their officers and employees, do not accept any liability for any errors that may have occurred, or for any reliance on their contents. All trademarks and brand names are respected within our publication; however, the publishers accept no responsibility for any inadvertent misuse that may occur. This publication is protected by copyright © 2013 and accordingly must not be reproduced in any medium. All rights reserved.
Cloud Computing World stories, news, know-how? Please submit to steve@lgnmedia.co.uk
FOREWORD

Hello everyone. What is the cloud to you? Welcome to this, the first issue of Cloud Computing World, which I'm hoping will entertain and inform you on the highly topical subject of cloud computing.

Is the cloud new? It depends on who you speak to, but for me, the concept of the cloud dates all the way back to 1986, when I purchased my first portable mobile phone for the princely sum of £1,825.00. Plus VAT, naturally. That Cellnet handset cost me £25 a month for line rental, before I even made a call. And in the event I was unable to answer a call - for any reason - it went to voicemail, at a cost of 25 pence per minute.

And where was the voicemail service? In the cloud: hosted on one of three (of eight) Cellnet mobile switches, with the voicemail promptly replicated between the switches to allow me to dial in from anywhere in the UK where the cellco had coverage - or via the PSTN, where calls were routed to the Slough NOC (Network Operations Centre). Can you tell I write about this stuff? But I digress.

That was my first experience with cloud services. Today, some 28 years later, I have a wide variety of cloud services that I use on a regular basis, ranging from multiple email service providers and cellular visual voicemail, through Dropbox and Gmail, all the way to 200 gigabytes of business backup storage with data mirrored across three data centres spread across Europe. And that's before I talk about our Netflix, Roku and Spotify accounts for use in the house and when out and about.

This is the modern person's use of the cloud. And it's not just me - you probably recognise many of these services yourself, as you subscribe to and use them on a regular basis.

But we have a long way to go before these cloud services are mature. For starters, what happens when a given CSP goes bust? And what happens if one CSP takes over another - how would the services be merged? Would my business cloud data still be stored in a European data centre, or a US one? And what about the Patriot Act where a US-owned but UK-based cloud service provider is concerned?

It's these types of questions that I'm hoping to answer in Cloud Computing World - I hope you enjoy the new publication. May all your IT problems be little ones.

Steve Gold
Editor - Cloud Computing World
Storage performance up to 30 times faster than leading cloud providers
< 1ms network performance SLA
Secure Network Architecture
Embedded WAN Optimisation
Standard Global Architecture
You only pay for what you use
Test drive our cloud service for free: click here for a 14-day trial.* *No credit card is required for this 14-day free trial
REGULARS
CCWNEWS
All the key news in the world of cloud. Please don't forget to check out our Web site at www.cloudcomputingworld.co.uk for a regular weekly feed of relevant news for cloud professionals.

Attix5, the data protection software specialist, has taken the wraps off DynamicRestore, an instant cloud-based disaster recovery platform. The new service is billed as providing users with immediate recoverability in the event of a loss of critical servers and data. According to Luv Duggal, Attix5's general manager, DynamicRestore is guaranteed to increase the efficiency and delivery of business continuity and disaster recovery. Lost servers or data, he says, can have dramatic cost implications for businesses when they are not recovered to an operational level in minimal time. "Even with this in mind, there is still a large segment of the market that is unable to buy expensive recovery solutions because of the high level of investment involved. What we have created is a means of helping small and medium enterprises around the world employ world-class security, at the SME price point - without sacrificing quality for the end-user, or profitability for the service provider," he explained. CCW notes that DynamicRestore forms part of the new Attix5 Dynamic product, which combines the features of the company's current Attix5 Pro platform with the new DynamicRestore technology. www.attix5.com
Hibernia Networks has added the Cork Internet eXchange (CIX), the regional data centre for Southwest Ireland, as a new Point of Presence on its network. The PoP allows Hibernia to further expand its high-capacity international services throughout Cork, Munster and the island of Ireland. Built in 2007 and open for business in March 2008, CIX is a critical piece of communications infrastructure for Cork and Munster. The facility is responsible for delivering IP connectivity to thousands of businesses and tens of thousands of homes from Kerry to Waterford via large telcos and regional ISPs. According to Hibernia, CIX connects upstream to an extensive list of fibre providers and has a 30-metre telecoms mast onsite, with a line of sight to Cork City and Cork County.
CIX customers will gain access to Hibernia's Project Kelvin network. Project Kelvin is an extensive submarine and terrestrial cable deployment that directly connects Northern Ireland to North America and Europe. The subsea cable comes ashore at Portrush, Northern Ireland and connects to Hibernia's terrestrial fibre optic ring, which links more than a dozen Irish towns and cities, providing local and global commerce opportunities between the island of Ireland and the rest of the world.
www.cix.ie
Gridstore, the SDS (Software-Defined Storage) provider for Windows Server and Hyper-V, has announced the integration of Gridstore 3 with Microsoft System Centre 2012, a move it says will enable its delivery of the Cloud Data Centre. According to the firm, the integration with System Centre allows for management of all resources via a single console. This central management, says Gridstore, provides for better overall efficiency and flexibility. With System Centre Virtual Machine Manager (SCVMM) integration, Gridstore is billed as delivering policy-based provisioning and orchestration of storage resources at VM-level granularity, including key characteristics such as Quality of Service and Data Protection Schemes. System Centre integration will be available by the end of Q3 - the company says that all current and new customers running Gridstore 3 can upgrade with no disruption or hardware change. www.gridstore.com
Telstra has announced new cloud infrastructure services in the US, expanding on its offering already available in the UK, Hong Kong, Singapore and Australia and strengthening its global virtual private cloud solution for multinational customers. Martin Bishop, Telstra's global lead of network applications and services, said the US extension - which will be located on the East Coast - is an important milestone in the CSP's ongoing strategy to provide cloud infrastructure services to support business growth initiatives. "The new US node brings our total cloud presence up to seven distinct locations throughout the United States, Europe and Asia Pacific and will enable customers operating across multiple geographic locations, including the US, to quickly and efficiently realise the benefits of enterprise cloud services on their global operations," he said. www.telstra.com
More than one-third of IT security pros are sending sensitive data outside of their organisation without encryption. Despite headline-making breaches that have called attention to the importance of data encryption, nearly 36 per cent of IT security professionals admit to sending sensitive data outside of their organisations without using any form of encryption to protect it. The research, from Voltage Security, gathered responses from more than 200 IT professionals on encryption, big data security and EU data privacy regulations. The survey showed that almost half of respondents are not de-identifying any data within their organisations. The ability to "de-identify" information by employing standards-based encryption technologies such as FPE (Format-Preserving Encryption) is said to provide very effective mechanisms to secure sensitive data as it is used and managed at the personal and professional level. Voltage says that discussions surrounding data residency, lawful intercept and protecting data from advanced threats have been top of mind for many years. While recent stories shine a spotlight on the risks to data, including theft and extortion, the need to protect data from inadvertent risk while ensuring the business isn't constrained is a clear problem every business needs to solve. www.voltage.com
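To make the idea of de-identification concrete, here is a minimal sketch in Python. It uses keyed hashing (HMAC) from the standard library as a simple stand-in for the format-preserving encryption mentioned above - note that real FPE schemes keep the output in the same format as the input (a 16-digit card number stays a 16-digit number), which this simpler one-way approach does not, and the key value below is purely hypothetical.

```python
import hashlib
import hmac

# Illustrative only: keyed pseudonymisation with HMAC-SHA256, a simple
# stand-in for true format-preserving encryption (FPE).
SECRET_KEY = b"replace-with-a-properly-managed-secret"  # hypothetical key

def de_identify(value: str) -> str:
    """Replace a sensitive value with a stable, non-reversible token."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# The same input always yields the same token, so de-identified records
# can still be joined and analysed without exposing the raw data.
print(de_identify("john.smith@example.com"))
```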
DATACENTRES
Power issues in today’s data centres
UNDERSTANDING THE NEED TO REDUCE DATA CENTRE PUE LEVELS
Mark Awdas discusses some of the power consumption challenges - and solutions to those challenges - that face modern data centre facilitators and managers. By Mark Awdas, Engineering Manager, Cannon Technologies
Introduction
The rising price of energy - coupled with a growing understanding amongst management of the social responsibility that companies have to reduce their energy consumption footprint - means that data centre owners, their clients and managers have been revisiting power consumption issues in a big way over the last few years. In parallel with this, the data centre industry has developed a measure of how effectively a data centre uses its energy. Known as PUE (Power Usage Effectiveness), this measure quantifies how much energy is being used, and for what. PUE is defined as the ratio of the total amount of energy used by a data centre facility to the energy delivered to the computing equipment. It is calculated by taking a measurement of energy use at or near the facility's utility meter, and then measuring the IT equipment load after the power conversion, switching and conditioning processes are completed.

The Green Grid
According to The Green Grid (www.thegreengrid.org) - an industry consortium active in developing metrics and standards for the IT industry - the most useful measurement point is at the output of the computer room PDUs (Power Distribution Units). This measurement should represent the total power delivered to the server racks in the data centre. Data centre association the Uptime Institute reports that a typical data centre has an average PUE of 2.5 - this means that, for every 2.5 watts in at the utility meter, only one watt is delivered to the IT load. The Institute estimates that most facilities can - using the latest (2014) technologies, the most efficient equipment and best practice - achieve a PUE of 1.6.

This ratio can usually be achieved in most data centres using a relatively simple set of steps to boost power efficiency levels - steps which also have the advantage of generating a good ROI (Return on Investment) as far as Capex (Capital Expenditure) is concerned. The steps that can be taken include the retirement of legacy hardware, in order to significantly reduce the power and cooling requirements of the IT systems and so create a greener data centre. It's worth remembering here that legacy hardware - once it has been suitably 'scrubbed' of stored data (where appropriate) - can often be traded in with many vendors and their dealers.
PUE in practice
So how does PUE work in practice? Well, in a data centre with a PUE of 2.5, supporting a 600W server actually requires the delivery of 1,500W to the data centre as a whole.

InfoBurst Reduced power consumption in the data centre can help to reduce our reliance on non-renewable energy sources
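The arithmetic is simple enough to express in a few lines of code. The short Python sketch below (illustrative figures only, not measurements from any real facility) shows both directions of the calculation: deriving a PUE from two power readings, and deriving the facility draw needed to support a given IT load at a known PUE.

```python
# Illustrative PUE arithmetic; all figures are examples, not measurements.

def pue(total_facility_watts: float, it_load_watts: float) -> float:
    """PUE = total facility power / power delivered to the IT load."""
    return total_facility_watts / it_load_watts

def facility_draw(it_load_watts: float, pue_ratio: float) -> float:
    """Total power the facility must draw to support a given IT load."""
    return it_load_watts * pue_ratio

print(pue(2500, 1000))          # 2.5 -- the 'typical' figure quoted above
print(facility_draw(600, 2.5))  # 1500.0W for a single 600W server
print(facility_draw(600, 1.6))  # 960.0W if best practice brings PUE to 1.6
```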
Unfortunately, most organisations lack power-consumption metering that can break down usage at a level that allows them to gauge the results of their optimisation efforts. To help solve this problem, efforts to monitor energy use should start with the creation of a manufacturer's 'power profile' for each rack in an existing data centre.

InfoBurst Keeping cable and power blocks tidy makes life easier for rack amendments and other changes

Each department with an IT facility - and not just within a data centre itself - faces its own separate challenges that can cloud (no pun intended) the power consumption and efficiency issue for the systems concerned. For example, facilities staff may be struggling with limits on rack and floor space, power availability and kit, whilst IT staff will be trying to ensure they have sufficient processing power, network bandwidth and storage capacity to support their upcoming IT initiatives - as well as ensuring sufficient redundancy to handle system disruptions. Although balancing the needs of these two groups may sound relatively easy, the task is often compounded by the fact that - in the past - facilities staff and IT professionals have tended to treat their operational costs separately, spreading their overall costs across the organisation and making it difficult to assess their full impact.

Because of the operational differences that exist between facilities staff and their IT colleagues, it is clear that optimising data centre energy efficiency requires a high degree of careful planning. This is in addition to the deployment of components such as power, cooling and networking systems that can meet both current needs and also scale for future requirements - and so minimise TCO (Total Cost of Ownership) issues, both now and in the future. The scalability issue is such that, when data centres reach 85 to 90 per cent of their power, cooling, space and network capacity, organisations must seriously consider either expanding their existing data centre or building a new one - this is, we have observed, a difficult strategic decision that can have a major impact on the company's bottom line.

Adopting a green strategy
The good news, however, is that adopting a 'green strategy' can show how best practice for capacity expansion can increase the energy efficiency of a data centre - and also help to increase density, reduce costs, and extend the life expectancy of existing data centres. In a green data centre, the mechanical, electrical and spatial elements (facilities) - as well as servers, storage and networks - are designed for optimal energy efficiency and minimal environmental impact.

The first step in energy-efficiency planning involves measuring existing energy usage. It's worth noting that the power system in a given data centre is a critical element in the facilities infrastructure, so knowing where that energy is being used - and by which equipment - is essential when creating, expanding or optimising a data centre. As energy costs continue to rise, it is clear that aligning the goals and requirements of business, facilities and IT departments will become more critical to optimising overall energy use and reducing power costs in enterprise data centres. Following the strategies outlined in this article - including monitoring current energy usage, retiring idle servers, and deploying energy-efficient virtualised servers - can help enterprises take a major step toward the realisation of a green data centre.
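One of the strategies just mentioned - retiring idle servers, which the next section takes up - is easy to cost out on the back of an envelope. Every figure in the sketch below is an assumption for illustration; substitute your own server count, power draw, tariff and PUE.

```python
# Back-of-envelope savings from retiring idle servers.
# All figures are illustrative assumptions, not measurements.

SERVER_COUNT = 200      # servers in the estate
IDLE_FRACTION = 0.10    # 10 per cent idle (the article cites 5-15 per cent)
AVG_DRAW_WATTS = 400    # average draw per idle server
PUE = 2.5               # facility overhead multiplier
PRICE_PER_KWH = 0.10    # electricity tariff in GBP

idle_servers = SERVER_COUNT * IDLE_FRACTION
facility_kw = idle_servers * AVG_DRAW_WATTS * PUE / 1000
annual_cost = facility_kw * 24 * 365 * PRICE_PER_KWH

print(f"{idle_servers:.0f} idle servers waste {facility_kw:.1f} kW at the meter")
print(f"Retiring them would save roughly £{annual_cost:,.0f} per year")
```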
In many data centres, between 5 and 15 per cent of servers are no longer required and can usually be turned off. The cost savings from retiring these idle servers can be considerable. Average server performance has also increased - today's servers are far more powerful than those of a decade ago, and virtualisation allows enterprises to take advantage of that performance to consolidate multiple physical servers onto a single virtualised server. It is worth noting that server upgrades can also help in this regard.

One of the pivotal moments in the evolution of data centre efficiency was the introduction of version 1.0 of the European Commission's 'Code of Conduct on Data Centres Energy Efficiency' (http://bit.ly/1luw7kK) back in 2008. In many ways the publishing of this code was something of a wake-up call for the data centre industry - and it has helped to generate a better industry understanding of the need to 'go green' where data centres are involved.

The Green Grid, however, has not rested on its laurels: last year the IT/energy industry association teamed up with ASHRAE - formerly known as the American Society of Heating, Refrigerating and Air-Conditioning Engineers, and which has re-positioned itself as a sustainability association - to publish a review of the PUE standard. Entitled 'PUE: A Comprehensive Examination of the Metric' (http://bit.ly/1eo5o4E), this is the 11th book in the Datacom Series of publications from ASHRAE's Technical Committee 9.9.
"Mark discusses some of the power consumption challenges - and solutions to those challenges - that face modern data centre facilitators and managers"

Its primary goal, says ASHRAE, is to provide the data centre industry with unbiased and vendor-neutral data in an understandable and actionable way. At the time of the book's publication, John Tuccillo, chairman of the board for The Green Grid Association, said that data centres are complex systems for which power and cooling remain key issues facing IT organisations today. "The Green Grid Association's PUE metric has been instrumental in helping data centre owners and operators better understand and improve the energy efficiency of their existing data centres, as well as helping them make better decisions on new data centre deployments," he explained.
below Reducing energy requirements translates to real cost savings on power bills
Conclusions
As energy costs continue to rise, it is clear that aligning the goals and requirements of business - as well as facilities and IT departments - is now critical to optimising energy usage and so reducing power costs in enterprise data centres. Our broad recommendations to help reduce these costs - as well as to optimise power consumption for all types of data centres - are to closely monitor a centre's current energy usage, to retire idle servers, and to deploy energy-efficient virtualised servers wherever possible.

Our observations also suggest that, if you are involved in the management or operation of data centres, then the PUE ratio will matter to you. In view of this, you should also be looking at reducing the power consumption of the data centre - and so improving your facility's benchmark along the way. The human element in the data centre power efficiency stakes should not be ignored either - especially in today's facilities management arena. Vendors and data centre staff should always be able to advise clients on how to reduce temperatures and energy usage using technologies such as innovative hot- and cold-aisle designs.

Since the UK Carbon Reduction Commitment (CRC) obligations were enacted back in April 2010 (http://bit.ly/1luwLPb), it should be clear that vendors and data centre providers need to work together in developing industry standards and ratings that work. Cannon Technologies believes that the data centre industry - from the power suppliers all the way to the rack makers - needs to work together to improve efficiencies and so ensure that we are all at the forefront of efficient and green data centre operations. www.cannontech.co.uk
Digital Realty Data Centres Powering the World’s Leading Companies
9 of the Top 15 INVESTMENT BANKS
5 of the Top 5 CLOUD SERVICE PROVIDERS
3 of the Top 5 SOCIAL MEDIA PROVIDERS
www.digitalrealty.co.uk
CLOUDBUSINESSISSUES
Moving business applications into the cloud
HOW CRM CHANGED CLOUD AND CLOUD CHANGED CRM
By Ian Moyse, Sales Director, Workbooks; Eurocloud UK Board Member and Cloud Industry Forum Governance Board Member
Ian Moyse explains the close relationship between the cloud and business applications...
InfoBurst Customer Relationship Management in the call centre - all smiles when things are running smoothly
Introduction
CRM (Customer Relationship Management) is one of the forerunners of cloud technology and remains one of the great success stories in the space - and it has been dramatically changed as it has moved from an on-network, product-led market to the verge of being dominated by cloud offerings.

The cloud is, of course, a heavily hyped term in both the IT and business sectors and has come to cover a wide range of options as vendors have jumped on the bandwagon, many 'cloud-washing' their old solutions to be able to use this hip new term - for example, many have simply put a Web front-end admin console on a product, or added Web update portals, to be able to claim they are cloud-enabled. True cloud solutions outweigh these pretenders and are truly changing the way IT is consumed, moving us from an IT-led to a business-led agenda.

Traditionally, customer and contact management solutions were on-network products from legacy vendors, and remained a limited market of DOS- and early Windows-based solutions such as ACT, Goldmine, Maximiser, Superoffice and the like. These solutions provided the ability to share information - usually limited to organisations, people, activities and notes - and to act as a company-wide shared database of clients and prospects. Then along came Siebel (founded in 1993), delivering a wider functional experience and richer customer information, and really defining the market as CRM. By the late 1990s Siebel had become the dominant player, with a peak market share in 2002 of 45 per cent.

Salesforce
In 1999 Salesforce was founded with a SaaS (Software-as-a-Service)-only offering - and remains so today - that rapidly started to disrupt the status quo of the aforementioned vendors, and it has grown to be one of the top ten IT vendors worldwide: proof positive that SaaS CRM is both a lucrative space and one customers are flocking to.

InfoBurst CRM - if the IT is running smoothly, everyone is happy...

Alongside Salesforce, a wide range of other cloud CRM providers have sprung up to disrupt, replace and become heavy competitors to the legacy providers. The cloud enables these vendors to develop quicker (three to four release updates a year, compared to a typical one every two to three years from a software vendor) and to reach further and wider (a cloud vendor can attain a worldwide profile, and customers, quickly and affordably, in comparison to the costly and slow launch model of the old software product world). They can also be more agile when it comes to function and flexibility: a cloud service needs far less testing - support the required browsers and mobile devices and you're away - compared to a product-based system that has to work on a wide range of operating systems and versions, and worry about software incompatibilities, network and hardware issues and a testing regime that can simply never cope with the wide variety of customer on-network device environments.

The cloud enables CRM vendors (and others) to innovate and compete in a global market. It empowers a vendor such as Workbooks to deliver a rich, intuitive Web-based system that can compete fairly with vendors such as Salesforce - something previously difficult to do in a product world.
Cloud customers rely less and less on brand equity to make a decision and increasingly have more choice available to them. For example, a US business can find a UK cloud provider, turn on the service, and use it and be supported equally well from the other side of the world. SaaS-based CRM now contributes 50 per cent of all new sales and is expected to reach 70 per cent market penetration within a few years. Cloud CRM providers lead the way in winning awards (Workbooks won CRM of the Year in 2013 and 2014, with most of the finalists being cloud-only vendors), and market reports such as G2 Crowd's show the leading players all being cloud-based CRM offerings. On-network CRM providers still have customers, but most are fighting to retain their share; they are not experiencing the growth, and certainly not at the pace, that cloud CRM vendors are delivering.

Microsoft is the exception
Microsoft, of course, is the exception to this, maintaining an on-network option alongside its cloud CRM whilst it transitions its own business market approach from on-network vendor to cloud-focused vendor - having realised the market shift a few years back, when it quickly moved 95 per cent-plus of all its development to focus on its cloud offerings. Once the shift is complete and the market accepts Microsoft fully as a cloud-first vendor, when will the step come where Microsoft joins the throng of vendors offering cloud CRM as their only form-factor option, for its own advantage?
Cloud CRM was there right at the start, displacing existing approaches and disrupting the status quo of business application deployment methods, and it has proven consistently that this is increasingly the customers' preferred approach. Cloud solutions are now designed to work well over slower links and transient connections, making even remote customers - who would previously have found their bandwidth limiting - viable users of the SaaS-based CRM options available. Increasingly, we also see customers with higher connection speeds demanding more mobile access, from any device, anywhere, at any time (mostly from user demand and not led by the business itself) - all needs well suited to a cloud-based CRM solution. Legacy solutions still survive, but the emphasis is on 'survive', whilst cloud CRM could be termed 'thriving'.

We are now at the tipping point where cloud is an everyday term - whilst many still do not understand it or its nuances, seeing it only as the Internet, few have not heard of it or seen the branded marketing it features in, and accelerated adoption has started. The cloud is extremely disruptive - this is nothing new to those who are familiar with Clayton Christensen's theory of disruptive innovation - and those ignoring it in vendor land and supply channels do so at their peril.

Many still dismiss the cloud, demanding on-network only, not for a logical reason but normally on an emotive basis: believing the Internet to be insecure and reasoning, therefore, that the cloud will be too. This approach is not new and has affected the adoption of 'new things' across industries. Take the motor car - when it was first introduced it was deemed the 'devil's work', with a man carrying a red flag having to walk down the street in front of each car, and people were recorded as believing that 'if you went in a car and it travelled at over 20 miles an hour it would rip the skin from the human face.' Now, of course, we smirk at such things, but at the time that was a very real belief and emotion towards replacing a horse and cart with a car. We are experiencing something similar with the cloud.

Ignoring the cloud
Ignoring cloud computing and the new form factor underpinning it can be a dangerous tactic, causing you to miss out on competitive advantage, flexibility, cost savings, functional benefit and greater resilience.

"We are now at the tipping point where cloud is an everyday term - whilst many still do not understand it or its nuances, seeing it only as the Internet"

Many examples already exist of major brand-name leaders not recognising the change being driven by the cloud in general, and the rapid effect user acceptance can have on changing the historical norm. Take, for example, Blockbuster video - once a world-leading brand, now gone, devastated by the likes of Netflix and Lovefilm (Amazon), which changed the delivery method for consumers renting a movie from taking a video tape home to clicking and streaming your choice - faster, easier and cheaper. The brand equity Blockbuster had was not enough to overcome a new cloud-based option that customers simply preferred. Not because of the cloud, or because of disliking Blockbuster, but because someone made it better and delivered something the customer preferred. The same happened with Kodak, as photography rapidly went digital and online, with cloud-based uploads and sharing replacing the old format. The music industry, with iTunes versus bricks-and-mortar music stores, is going through the same transition, as are other markets. So to hold a belief that cloud will not affect IT delivery, and not to consider it fairly in any business application or IT project, is a naive approach that may leave you and your business out in the cold.

Conclusion
The cloud is not a be-all and end-all; it is not right for every customer in every situation, just as the horse and cart still has its place in certain situations - the right tool for the right job - but it will be advantageous in the vast majority of situations. The technology sector's ability to change has accelerated. Moore's Law back in 1965 predicted silicon power would double every two years. But what its creator, Gordon E. Moore, couldn't have predicted was the dramatic economies of scale the cloud would eventually bring to all of our lives. For one, it has helped lead to a drop in price for essentials like computing power and storage by making them more accessible. But also, it's enabled conveniences no one would have imagined four or so decades ago. The cloud has not only driven down costs, but it's helped increase our satisfaction with - and expectations of - our Internet experience. It's enabled mobility and delivered immense computing power to anyone, anywhere, at any time. Perhaps an update to Moore's Law will be formed to hypothesise that the number of applications running in the cloud will double every two years; based on today's adoption and consumption rates, however, it's also possible we could see it being represented as the computing power available to an individual consumer - via the cloud - doubling every two months. www.workbooks.com
11 – 12 November 2014, RDS, Dublin
Cloud & IT Security Ireland is a NEW independent Conference & Exhibition at which enterprise and business organisations can see the latest solutions available and receive independent, practical information on the business arguments, software, technology and solutions they need to make better informed decisions.
The Conference
Utilising a combination of case studies, panel discussions, technical papers and interactive forums, the conference will showcase the latest in new ideas, software, solutions and best practice.
The Exhibition
Featuring leading companies, brands and value-added resellers, this is your chance to see and compare the latest in technology, software and innovative solutions, and source the suppliers who can assist you.
Themes addressed will include:
• What are the available options
• How do I assess my future needs
• Considerations when migrating to the cloud
• Does one size fit all?
• Security and the Cloud
• Future Technology
• Virtualisation and Storage
• Big Data
11-12 Nov 2014, RDS, Dublin. Co-located for success: Cloud & IT Security Ireland benefits from being co-located within DataCentres Ireland, the leading IT technology infrastructure event in the country.
To register your interest and receive more information contact Hugh on +44 (0) 1892 518877 or email hughrobinson@stepex.com
DATACENTRES
Redefining cloud service delivery
CUSTOMER-DEFINED
DATA CENTRES
The official opening of one of its newly-expanded data centres by the UK Home Secretary prompts Bill Strain to re-define cloud service delivery... By Bill Strain, CTO, iomart
Introduction
The phrase 'Software Defined Data Centre' has been the mantra for those of us working to build the next generation of data centres since it was first coined at VMworld back in 2012. It means that the provision and operation of the data centre infrastructure is entirely automated by software, with minimal human intervention. However, a recent visit by the UK Home Secretary Theresa May to officially open one of our newly expanded data centres in Maidenhead, Berkshire, has made me think we need a new phrase to describe what we're doing in our DCs.
It was the first time Mrs May had visited a data centre, and she echoed the thoughts of many who venture inside when she said: "It is interesting to see that the cloud has a physicality to it and isn't just something up in the ether."

InfoBurst Bill Strain shows the Home Secretary what a data centre looks like...

When a senior government minister is genuinely intrigued by the physical infrastructure that powers the delivery of cloud services, we need to listen. Few ministers have been inside a data centre, yet they are collectively responsible for the G-Cloud framework, which was set up to encourage the adoption of cloud services by the public sector. The whole G-Cloud initiative has been pushed by the need to allow local authorities and other public sector organisations to find easier ways to procure services from companies like ourselves, on a pay-as-you-go basis, instead of having to endure lengthy and often expensive procurement processes. So it is vital that the people responsible understand that the companies who own and manage data centres are focused on giving them fast and effective ways of getting the cloud services they require.

The same goes for other senior decision makers; few of them probably get the chance to step inside, so we need to illustrate how valuable data centres are to the economy by explaining what goes on in them in much simpler terms. This applies to how we educate members of the public as much as it does to small business owners and officials in local government, right up to the CEOs of the biggest corporations.

We need to be focused on the customer. The people who are increasingly using cloud services do so because it adds value to what they do. It might make their own jobs easier - for instance, allowing a busy IT department to back up data quickly and securely without having to assign staff to physically change and store tapes - or it might allow them to deliver better products and services to their own customers, for instance by enabling accountants to use financial software accessed via the internet to provide a service to their clients.

Customer defined
After initial scepticism, the value of the on-demand, pay-as-you-go cloud services model is now being embraced by government and enterprise business, but it is also being driven and changed by the needs of those same organisations.
This makes me think that what we should be talking about today is the 'Customer Defined Data Centre' (CDDC), rather than defining DCs by the way they use software to set up the servers and the network inside them. The importance of the customer in the delivery of our services should be at the forefront of how we architect the physical infrastructure that makes up the backbone of the cloud. The innovative Cisco and Corning fibre technology we've deployed in the data centre the Home Secretary visited allows us to provision, automatically and dynamically through our control panel, whatever services our customers need, at any time, on any scale. The technology has been designed with our end-users, our customers, in mind, providing them with what they need to do their work. The challenge for us was to make sure that each rack of servers that goes into the seven data halls of the facility is capable of catering for every network requirement, for all business groups, encompassing both initial and rapid future expansion as and when required. There is of course a benefit to us - we no longer have to physically plug wires into servers, which reduces our management burden - but there is also huge benefit to the customer.

Conclusion
We are managing thousands of servers and the high-capacity networks that deliver the computing power to support modern business in the age of digital. No longer do companies have to make huge capital investments in their own hardware on their own premises; instead they invest in us, and so we need to have that same investment in them. By talking about not just Software Defined but Customer Defined Data Centres, I think we can show that we are transforming our networks to deliver the highest levels of agility, performance and flexibility to drive the development of the new world economy.
UK Home Secretary Theresa May opens new iomart data centre
The UK Home Secretary, The Rt Hon Theresa May MP, officially opened a multi-million-pound extension to a data centre owned and operated by iomart Group in June of this year. The Home Secretary was given a guided tour of the new highly secure, state-of-the-art, 1,500 square metre extension to the data centre on the Clivemont Road industrial estate in Maidenhead. The Rt Hon Theresa May said: "Data centres are an important part of the global economy so I'm delighted to open this new facility for iomart. The technology on show is impressive and will allow businesses to be better connected than ever." iomart purchased the data centre as part of its acquisition of Maidenhead-based web hosting company RapidSwitch in 2009. This upgrade, says the firm, makes it one of the most advanced data centres in the UK and showcases the first major deployment of brand-new technology from Cisco, which allows network infrastructure and services to be automatically provisioned and scaled for customers.
Angus MacSween, CEO of iomart, said: “We are delighted that The Home Secretary has officially opened our next generation data centre and seen first-hand the technology involved in creating the infrastructure needed to support the dynamic and ever-changing web hosting and data storage needs of SME and enterprise business.” “Our data centres are the motorways of the future and this facility enables us to provide flexible and bespoke services to our customers and puts us at the heart of the next generation of software defined data centre technology,” he explained. The new extension took 12 months to complete and has capacity to hold up to 630 racks containing up to 30,000 physical and as many as 500,000 virtual servers. It has been designed to meet the needs of all the different hosting brands that make up the iomart Group of companies.
www.iomart.com
CASESTUDY
WhiteSpider develops a cloud solution for Parsons Brinckerhoff
REMOVING THE RISK FOR DATA CENTRE AND ENTERPRISE IT
How cloud computing helped a company with more than 150 offices around the world...
InfoBurst How Parsons Brinckerhoff called on the assistance of WhiteSpider to implement a wide-scale cloud topology across its many offices around the globe.
Introduction
Parsons Brinckerhoff was suffering from a problem common to many established enterprises, where the IT infrastructure had grown with the company. The need to respond to growing demands by adding new technologies had resulted in a piecemeal infrastructure, with the associated risks, inefficiencies and inflated costs. The company had already started work on its DCCAMP (Data Centre Consolidation and Migration Project) when it was introduced to WhiteSpider. Working alongside the client's IT team, WhiteSpider's team of experts quickly identified the major areas where improvements could be made and applied its unique ea4 framework for enterprise architecture to help the client achieve its key objectives: reducing risk, consolidating and simplifying its enterprise architecture, and cutting the overall costs of running its IT systems whilst improving performance. The result was a positive evolution from a fragmented environment into a coherent, reliable, scalable and future-proof architecture that delivers greater performance at a fraction of the operating cost.

The client in depth
Parsons Brinckerhoff is a global consulting firm assisting public and private clients to plan, develop, design, construct, operate and maintain thousands of critical infrastructure projects around the world. Founded in New York City in 1885, Parsons Brinckerhoff is a diverse company of 14,000 people in more than 150 offices on five continents. With a strong commitment to technical excellence, a diverse workforce, and service to its clients, the company is currently at work on thousands of infrastructure projects throughout the world, ranging from the mega-projects that define an entire region to smaller, more local projects that keep a community humming.

InfoBurst The Palm Jumeirah - just one of many locations for PB's cloud deployment.

The company offers skills and resources in strategic consulting, planning, engineering, program management, construction management, and operations and maintenance. It provides services for all modes of infrastructure, including transportation, power, energy, community development, water, mining and the environment.

The challenge - and objectives
Parsons Brinckerhoff is a company with over 130 years' history and a federated structure, so inevitably its IT systems had evolved in a haphazard fashion, responding to needs as they arose in various parts of the business, with each business unit choosing its own solutions and standards. The company had been having issues for some time with power outages, makeshift server room arrangements and legacy equipment which could no longer be maintained. Its systems were under-utilised and difficult to manage. However, it was the arrival of Hurricane Sandy in October 2012, and the near-disastrous flooding of the primary data centre in Carlstadt, New Jersey, that served to highlight the level of risk the company faced. With increasing single-point-of-failure events, and a site that was both unsuitable for future development and nearing the end of its lease, Hurricane Sandy was the final straw that brought forward the company's plans for consolidation and migration of its systems, with the aim of creating a private cloud platform and a fully maintained data centre, proofed against disasters and with built-in resiliency. Parsons Brinckerhoff therefore needed to conduct a thorough review of all its systems in order to create a truly robust, consolidated architecture that would be resilient, easy to manage and future-proof.
With little more than a year until the lease on the existing data centre site expired, the client looked to find a partner who could help it manage this in the timescales available. The company needed a partner with experience in large-scale migration projects, plus the technological vision and expertise to design, plan and implement a solution that would deliver a good Return on Investment, excellent performance and significant cost savings.

The solution - the DCCAMP project
Parsons Brinckerhoff had a disparate infrastructure, with many different systems in different business units and a very dispersed estate across several sites. WhiteSpider had to react quickly to review and understand the objectives of the project, including the key services, dependencies and stakeholders, with first results needed within just a few days. Using its unique ea4 approach, which provides a framework for developing and implementing enterprise architectures, WhiteSpider was able to engage quickly with the client team and carry out a high-level audit of the service environment, dependencies, locations and user footprint. The information from the audit provided valuable insight for WhiteSpider to plan the client's migration and transformation strategy, including the size, type and location of a co-located data centre provider. WhiteSpider also supported the client in the procurement process, helping to define objectives, core requirements and selection criteria for the new data centre environment. This included writing the RFP document and helping to evaluate the proposals and choose the right data centre provider and location. WhiteSpider also used the knowledge gained from the audit to inform the process of designing a new, agile service delivery platform for the client, based on the creation of its own private cloud infrastructure. As part of this enterprise architecture process, WhiteSpider also helped to manage the comparison of technologies for the new architecture in a technology bake-off.

Planning the migration involved several enterprise alignment steps in a staged migration for the client's various sites, bringing all the company's IT systems and data centre facilities into one consolidated infrastructure. This involved creating and implementing a consolidation and migration plan for all systems across a number of sites. It included a new design and infrastructure for the company's headquarters at One Penn Plaza in New York, moving many of its servers and consolidating into a smaller space, rationalising technology to create a more coherent infrastructure. In addition, the client was able to vacate its premises in Carlstadt and move to its new co-located data centre environment in Culpeper, Virginia, with economies of scale and the cost advantages of co-location.
InfoBurst Hurricane Sandy of 2012 required a move to a new colo data centre in Virginia...
The migration plan undertaken with WhiteSpider as part of its enterprise alignment services included the consolidation of all systems, the new design and infrastructure in Parsons Brinckerhoff's HQ, and the new data centre in Culpeper. It involved deploying new technology solutions and standards, a new virtualisation platform and a new storage platform, in order to create a powerful private cloud environment for Parsons Brinckerhoff. One of the major gains was the reduction in the complexity of the system, with five storage platforms reduced to one and 70 per cent virtualisation of the system.

About the ea4 framework
ea4 was developed out of the WhiteSpider team's desire to see technology used effectively in order to transform the way global enterprises work. It does this by delivering enterprise standards through the four key elements of the ea4 framework, based on total vendor independence. The first element of ea4 is 'enterprise auditing', aimed at gathering an in-depth understanding of a customer's organisation and its business requirements, as well as technical knowledge of the operational environment.
"One of the major gains was the reduction in the complexity of the system, with five storage platforms reduced to one and 70 per cent virtualisation of the system"

This is followed by a detailed plan and design, the 'enterprise architecture' element: engaging with the business to understand the key business objectives in relation to the IT infrastructure and assets, and developing a blueprint to design and build a core foundation of processes and systems. The third step is 'enterprise alignment' - once the architectural designs are defined, they can be implemented through a comprehensive portfolio of services that maps across all aspects of IT infrastructure. In the fourth element, 'enterprise assessment', WhiteSpider uses its experience in modelling, capacity planning and performance management to ensure that the network and applications are tuned to deliver optimal performance and reduce business risk.

The results
The DCCAMP project had a number of clear objectives to help Parsons Brinckerhoff build a robust and agile private cloud environment that would provide high-performance IT services for all its business units globally, now and into the future. The project has already shown significant gains in terms of reduced cost, improved performance and greater manageability. From a disjointed infrastructure, with significant risks and areas of under-utilisation, the client now has a new, structured and streamlined architecture and capabilities, already delivering benefits. The infrastructure is based around a new, risk-free data centre environment delivering a private cloud platform. The client's headquarters at One Penn Plaza have been refurbished, and the consolidation of servers into the purpose-built ROBO (Remote Office Branch Office) Room has meant that costly real estate space has been freed for other activities, whilst servers are housed in more appropriate conditions with better cooling and power supply.
Cooling requirements
Initial studies indicate that cooling requirements have been reduced by 80 per cent, power consumption has been reduced by two-thirds (66 per cent), and the server room footprint is down from 108 square metres to just 15 square metres - a reduction in floor space of roughly 86 per cent, representing a cost saving, at New York real estate prices, of $600,000 per year. The new infrastructure across the client's sites has also improved connectivity and future-proofed the network, with the expectation that the current infrastructure will need little upgrading in the next three to five years. Resiliency has also been improved, and the overall performance available to users is significantly greater, with the capability for up to 10 Gbps to the desk. In addition, availability of the new service environment has now reached the desired five-nines on a 24/7/365 basis, due to the elimination of risk, oversubscription, device failure and power outages, plus new maintenance contracts around new technologies. The new environment has been designed and configured in line with industry best practice, and is therefore more agile around service delivery, easier to operate and manage, and integrates seamlessly with legacy equipment and components. As a result it is delivering substantial cost savings, including in operating costs, streamlined time to deliver new services, reduced equipment footprint and maintenance costs. www.whitespider.eu
DATACENTRES
Giving data centres a new perspective
CLOUD:
THE 60-YEAR-OLD HOT TOPIC
Andrew Roughan discusses the nature of the cloud and how data centres fit into a cloud-based future... By Andrew Roughan, Commercial Director at Infinity SDC
A short history of cloud For something that started in the 1950s, cloud computing might seem to be late to the buzzword party. In fact, that pervasive, omnipresent trend of today is technically more than 60 years old. In those days, of course, time-sharing allowed multiple terminals to share the physical access and CPU time on mainframes. But the vision for cloud was already there: in the 1950s, scientist Herb Grosch predicted that the world would operate on dumb terminals powered by about 15 large data centres.
Commercialised in the 1960s, cloud computing evolved through the early VPNs of the 1990s, virtualisation and the dotcom bubble that fuelled Amazon's rise to success, until the point in 2008 when Gartner remarked that cloud computing could "shape the relationship among consumers of IT services, those who use IT services and those who sell them." The research firm later observed that businesses were "switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas." More recently, in October 2013, Gartner predicted that cloud computing would account for the bulk of new IT spend by 2016. Cloud is clearly reaching its apex.

InfoBurst The data centre: concentrated and power hungry technology...

Cloud confusion
The length of time that the cloud has taken to reach this point perhaps accounts for the confusion that continues to surround it. There is, for example, confusion about cloud technology, confusion over IT infrastructure development and now, with the illusion of unbounded capacity in the cloud, confusion about data centre options and their place in the IT strategy. Public, private, hybrid, on-premises, co-located - with so many options and approaches, many mid-sized enterprises are finding it difficult to understand the myriad data centre solutions on the market. Many companies have commenced their IT transformation journey, but the data centre typically continues to be viewed simply as real estate. The data centre must become more than that. No longer can there be a single procurement approach; multi-sourcing is here to stay. At the heart of the transformation to the cloud, the data centre needs to become more relevant to the enterprise in supporting the transition from basic virtualisation to its latest stage of evolution: software-defined data centres (SDDC). This means understanding both the enterprise IT revolution and the individual needs of each business.

The goals for businesses moving to the cloud tend to be similar: whether private, public or hybrid cloud, users seek to increase agility, boost flexibility, reduce time to implement, enable efficient international operations and reduce costs. This does not mean that all companies can be herded in the same direction; they won't take the same journey in the IT transformation and will have different needs.

A cloud by any other name
Some industries are more accepting of cloud than others. At one end of the scale, the retail industry tends to be very comfortable with the concept and adoption of cloud and can articulate how it works and its benefits. At the other end of the scale, those driven by strict regulatory standards - charity-funded research organisations and legal in particular - are extremely cautious about cloud. A huge disconnect between the business and IT sides of these industries means that to them, cloud is public, out of their control and a security risk. That being the case, the mere use of the word 'cloud' causes ripples, even when looking to deploy private clouds. More palatable to the lawyers, partners and research leaders is terminology such as "utilising the benefits of automation and orchestration in an on-premises environment".
Will your data centre flex like your IT?
Whichever path best suits each business, it needs to be agile, able to burst and ultimately dynamic. As part of the journey to the cloud, CIOs have typically deployed virtualisation to increase the utilisation rates of their owned IT assets, while also outsourcing to "as-a-service" providers to reduce the overall size of the owned IT estate.

However, the virtualisation journey can be unpredictable. At the start, companies expect an overall reduction in their owned IT assets, but find it difficult to predict accurately by how much. Whether in-house or outsourced, the data centre carries costs for a level of capacity that is almost impossible to foresee and plan for. In addition, there are times when capacity needs to increase so that new IT can be deployed before older assets are retired. Often, and despite growth in data, the net IT assets shrink as a result of these changes. This can strand power and space capacity and create unrecoverable costs.

Seasonal or campaign-based peaks - such as retail holiday sales, midnight on New Year's Day for mobile operators, and major charity events such as Children in Need - create what we in the industry call demand peaks. The data centre needs the provision to cope, but should be flexible enough that the user isn't paying for that full capacity all the time unnecessarily.

The next stage: software-defined data centres
As businesses continue along the IT journey, the milestones they reach include converged infrastructure, private cloud and software-defined data centres (SDDC). The owned IT assets will range from non-virtualised legacy IT, to virtualised private cloud IT, to the management and support applications that provide the augmentation, management and security of the SDDC. However, unable to predict the power densities and resiliencies required for those IT assets, planners face having to over-cater for an unknown future.

This leaves the CIO with a specific issue to contend with - how to manage data centre capacity to provide a right-sized private cloud environment at each stage of the IT journey. It is vital that CIOs consider the attributes they need from a data centre as they continue along that journey: for example, space flexibility with no minimum commitment; the ability to pay only for power used, rather than for maximum power capacity; or predictability of the cost of change. One thing is clear - a new breed of flexible data centre must emerge to put the CIO back in the driving seat of the outsourced data centre. Ultimately, what these changes all provide the CIO with is high levels of flexibility and agility.
www.infinitysdc.net
ENERGY CONSUMPTION
Load balancing for a more robust cloud environment
A WELL-BALANCED HYBRID CLOUD
By Jason Dover, Director of Product Management, KEMP Technologies
Jason Dover looks at why - and how - organisations are adopting the hybrid cloud, and at the importance of good load balancing
Introduction
Back in early 2007, I recall an enthusiastic speaker at a tech conference opening with this statement: "Even though you might not realise it, over 95 per cent of you are already consumers of cloud computing services." This came just after the same speaker had asked everyone to indicate, by a show of hands, whether or not they were Yahoo and Gmail users. Seven years on from this early evangelism at the start of the cloud hype cycle, we're at a point where cloud computing is real.

The forming of the cloud
Interestingly, even though the mid-2000s marked the beginning of cloud computing as we know it, the concepts were born more than five decades ago. Mainframe computing laid the groundwork in the 1950s, with pooled resources in a cloud-like infrastructure shared by dispersed users, and a vision of an interconnected globe with access to easily scalable programs, resources and data, regardless of location and without the bounds of a rigid system infrastructure. The full potential of this vision wasn't realised then, but fast-forwarding to 2006 brings us to the point when Amazon delivered a resurgence of the notion with the development of Amazon Web Services (AWS) and then Elastic Compute Cloud (EC2). This made possible the delivery of cloud-based storage and compute, letting companies rapidly provision services without large capital expenditure or a rigid system infrastructure.

This model has dramatically changed computing and, since then, Infrastructure-as-a-Service (IaaS) and Software-as-a-Service (SaaS) frameworks have multiplied by an order of magnitude. IT decision makers now have a plethora of options when it comes to leveraging public cloud service offerings to augment their overall IT delivery strategy. Despite the advances and benefits of public cloud computing, governance implications, economics, and concerns over reliability and security for custom business-critical applications have staved off adoption of a public-cloud-only model by an overwhelming majority of organisations.
These limitations have proven to be a main driving force for a hybrid approach to cloud computing. Unfortunately, hybrid cloud is often over-simplified as merely being an IT environment that leverages public cloud infrastructure for some applications and on-premise infrastructure for others. While this definition does start to paint a picture, and in the strictest sense is true, it misses the mark in conveying the depth of the expected outcome of building a hybrid cloud infrastructure in the first place - integration of heterogeneous services, both in front of and behind the corporate firewall, with such symmetry that each single entity behaves as part of a bigger whole.
InfoBurst Load balancing - more than just a balancing act...
Challenges
While this all sounds good, actual execution isn't easy. Successful hybrid cloud implementation assumes a well-architected private cloud, as opposed to simply a well-built traditional IT infrastructure. This means that adoption of hybrid cloud starts with the transition from a traditional on-premise environment to one that includes the concepts and supporting technologies that enable functionality normally associated with public cloud - self-provisioning for application owners, dynamic resource scaling, a charge-back model for lines of business, orchestration for automating repeatable tasks, and a high-visibility management platform to monitor how and where services get deployed. It is familiarity with the very nature of the public cloud model that has fuelled the business and technical requirements in the enterprise for, essentially, an IT-as-a-Service framework that allows for agile self-service, provisioning and consumption monitoring, while simplifying the load on application owners. Because on-premise legacy data centre environments were not built with these principles in mind, transitioning can be a challenge.

Hybrid cloud also opens the possibility of workload overflow processing, or cloud bursting, so that applications can bring up new instances as needed in the public part of the hybrid cloud once data centre capacity is reached (a simple placement sketch appears at the end of this section). Load balancing instances, among other dynamic, virtualised network functions, are a core enabler in making service assurance and optimised delivery possible. However, without application delivery controller (ADC) technology running natively in the cloud, virtualisation admins can find it challenging to know deterministically where data centre capacity will be exhausted, and how many external resources will need to be consumed in varying scenarios, for proper planning. Additionally, applications actually built with the capability to traverse public and private cloud boundaries bring the further challenges of ensuring that the underlying data is in the right place at the right time, and of enforcing the same governance and security policies regardless of where active instances are operating.

Where is it all heading?
Fortunately, these challenges are not insurmountable. Cloud-focused security solutions capable of propagating a unified set of policies across cloud borders have come onto the market. Technology leaders such as VMware, Microsoft and IBM have launched many new offerings to help companies build better private clouds and extend the benefits of a virtualised infrastructure beyond the on-premise data centre. And finally, advancements in application delivery technology have made possible the use of complex traffic-steering algorithms across a fabric of private and public clouds, based on business rules that dictate how company resources should be consumed.
These enablers have all driven the adoption of a hybrid cloud strategy in the enterprise, and the outlook is positive. Modern IT delivery's need for increased agility, rapid provisioning of innovative applications and focus on the quickest time to market, coupled with the current gap left by an all-in public cloud model, all mean one thing: hybrid cloud is here to stay.
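As a concrete, deliberately simplified illustration of the cloud-bursting placement decision described above - the capacity figure, threshold and function name here are invented for the sketch, not any vendor's API - the core logic is deciding how much of a provisioning request fits on-premise and how much overflows to the public side:

    # A minimal cloud-bursting sketch; capacity and threshold are assumptions.

    PRIVATE_CAPACITY = 100   # instances the private cloud can host
    BURST_THRESHOLD = 0.9    # start bursting at 90% private utilisation

    def placement(running_private: int, requested: int) -> dict:
        """Split a request between the private cloud and public overflow."""
        limit = int(PRIVATE_CAPACITY * BURST_THRESHOLD)
        headroom = max(0, limit - running_private)
        to_private = min(requested, headroom)
        return {"private": to_private, "public": requested - to_private}

    # 85 instances already running, 10 more requested:
    print(placement(85, 10))  # {'private': 5, 'public': 5}

In practice the same decision would also weigh data locality, licensing and the governance policies discussed above, but the headroom calculation is the heart of it.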
Cloud load balancing revealed
An application delivery controller (ADC) directly assists in the management of client connections to enterprise and web-based applications. ADCs are normally deployed behind firewalls and in front of application servers, and make networks and applications more efficient by managing traffic shaping and distribution. The ADC directs client access requests to the best-performing servers based on factors such as concurrent connections, CPU load and memory utilisation. This ensures that performance-sapping bottlenecks do not occur; and if a server or application fails, the user is automatically re-routed to another functioning server. The process is seamless to the user and critical to delivering an optimised and reliable experience.

When it comes to private, public or hybrid clouds, ADCs ensure the availability of applications while maximising performance, regardless of the user's location or device. In a hybrid cloud environment, traffic running at normal levels is directed to dedicated, optimised application servers; when traffic spikes occur, the load balancers direct the 'spill over' to servers that can be located in the public cloud. In some hybrid cloud environments, dependencies between cloud and on-premise devices may also exist: the high availability of AD FS servers delivered through a load balancer can provide guaranteed access to on-premise Active Directory servers for Microsoft Office 365, for example.

Cloud balancing simply increases the choice of where a given application should be delivered from, and allows application routing decisions to be made on a wider range of network as well as business variables, such as the ability to meet an SLA, or the value of a transaction on a per-user or per-customer basis. Other criteria could include user location, time of day, regulatory compliance, energy consumption and contractual obligations.
When it comes to load balancing and traffic management across public cloud providers, it is important to consider some of the inherent limitations. For example, the built-in load balancer provided in Microsoft Azure does not offer Application Layer (Layer 7) visibility to provide the best level of service to users. While basic Layer 4 balancing directs traffic based largely on server response times, Layer 7 switching uses application-layer criteria to determine where to send a request, providing more granular control. This leads to an improvement in the utilisation of data and application traffic management, and at the same time allows the virtual machines to be used more effectively.

It is possible to deploy a third-party Layer 7 virtual load balancer that runs directly on the cloud platform, rather than just directing traffic to the cloud network. Deploying a virtual ADC with an application in the cloud ensures that the organisation is able to monitor and manage the health of the application and make global routing decisions to deliver optimum performance and resilience. A virtual ADC can also provide a platform for global load balancing and DNS routing, enabling internal and external cloud implementations to behave as if they were a single network.
www.kemptechnologies.com
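To make the Layer 4 versus Layer 7 distinction described above concrete, here is a minimal, vendor-neutral sketch; the pool names, health flags and path-based rule are illustrative assumptions, not any ADC's actual configuration:

    # Layer 4 vs Layer 7 balancing decisions, illustrated in miniature.
    from dataclasses import dataclass

    @dataclass
    class Server:
        name: str
        active_connections: int
        healthy: bool = True

    def layer4_pick(pool):
        """Layer 4: choose on transport-level state alone (least connections)."""
        return min((s for s in pool if s.healthy),
                   key=lambda s: s.active_connections)

    def layer7_pick(rules, path, default_pool):
        """Layer 7: inspect application-layer criteria (here, the URL path)
        to select a pool, then balance within it."""
        for prefix, pool in rules.items():
            if path.startswith(prefix):
                return layer4_pick(pool)
        return layer4_pick(default_pool)

    web = [Server("web-1", 12), Server("web-2", 7)]
    api = [Server("api-1", 3), Server("api-2", 9, healthy=False)]

    print(layer7_pick({"/api/": api}, "/api/orders", web).name)   # api-1
    print(layer7_pick({"/api/": api}, "/index.html", web).name)   # web-2

A real ADC adds health checks, session persistence and the business variables mentioned in the article, but the essential difference - routing on connection state versus routing on request content - is captured here.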
SOFTWARE
Understanding data centre software
OPENSTACK
BUILDS MOMENTUM
David Fishman explores the future of OpenStack…
By David Fishman, Global Vice President, Mirantis
InfoBurst Open architecture makes life a whole lot easier...
Introduction
In this Q&A, David Fishman, global VP of marketing for commercial OpenStack distribution vendor Mirantis, looks at OpenStack's current position - and future developments.
That 'closed garden' makes AWS analogous to the Apple of the cloud; by contrast, OpenStack is the equivalent of Android, helping organisations tailor it to their specific needs and avoid being locked into a single vendor's cloud solutions.
How has OpenStack got to where it is now?
Many companies have wanted to build a Google- or Amazon-like infrastructure for their operations, but didn't want to outsource, for several important business reasons. For example, they saw the value of cloud infrastructure, but felt that Amazon could not guarantee data privacy and security, or that they had limited opportunities to tailor the infrastructure to their specific needs, such as SLAs (service level agreements).
A range of software, hardware and service companies have joined OpenStack. What's in it for them and for end-users?
For the end-user, the benefits of OpenStack are rapid deployment, easier scalability of cloud infrastructure and, importantly, no vendor lock-in, because it's open. It provides tremendous flexibility, allowing customers to configure their infrastructure exactly to their needs and to integrate with existing systems.
// Cloud Solutions // Business Continuity // Managed Service Provider
SIRE helps businesses make the best use of IT systems to create a competitive advantage. We are an award-winning supplier of leading-edge cloud technologies, systems and processes. As specialists in Tailored Cloud solutions, we have been providing organisations with reliable, flexible and financially viable IT infrastructure, coupled with a robust business continuity plan, for over two decades. With SIRE alongside, you are free to get on with running your business, leaving us to make sure your IT infrastructure is protected, optimised and keeping pace with technical and legislative changes.
SIRE’s Cloud Solutions offer reliability and scalability: • Cloud Consultancy • Tailored Clouds • Private Clouds • IaaS and PaaS Providers • Virtualisation • Data Protection
For more information about cloud technology and solutions, please contact one of our specialists on 01344 758700.
www.sire.co.uk
// your essential partner
We also recently benchmarked how quickly private clouds could be provisioned using OpenStack, and hit a rate of over 9,000 virtual servers launched per hour for eight hours in a multi-data-centre set-up. The result was 75,000 virtual machines running, which is the scale required by the largest banks (such as Barclays) or mobile telecoms infrastructure (such as Ericsson).

As for software, hardware and service companies, they realise that their customers increasingly want cloud infrastructures that enable rapid change. That works in two ways. First, the transparency and common interfaces that span compute, network and storage mean that companies can more easily update and automate the software that serves their customers, which improves the ROI on the infrastructure. Second, the common standards that OpenStack enables mean that vendors can continuously compete for a piece of that infrastructure, without being locked out by their rivals.

It seems enterprise adoption has been a little slow so far - is this true?
Naturally, organisations have been approaching cloud deployments with an element of caution, but I believe momentum is building very quickly now. For example, Ericsson has committed to using Mirantis OpenStack as the foundation for its telecoms networks, internal data centres and cloud computing services for its customers. Cisco recently announced that its huge InterCloud initiative will be OpenStack-based. There's a great deal of pent-up demand for faster, more agile infrastructure.

Some argue there is a lack of clarity about what OpenStack does. Do you agree?
One of the key points that needs to be communicated about OpenStack is that it's more than just open-source cloud software. It's commoditising cloud infrastructure, so that cloud deployments can become more vendor-agnostic, with broader interoperability. The aim is to make it easier for customers to build their cloud the way they want, with the best tools for the job, and to adapt to marketplace opportunities over time. One of the things that will help this is open-sourcing OpenStack cloud certifications, to remove the traditional software vendor ecosystem lock-in that says "we only certify this particular solution with our software." Open certifications - which are supported by over a dozen infrastructure vendors, including VMware, NetApp and HP, as well as OpenStack users such as Yahoo, Dreamhost and AT&T - are making OpenStack the more buyer-friendly ecosystem. This way, using the open certifications approach, buyers can see for themselves, using publicly available dashboards, which solutions work best with each other.
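For readers curious what the self-service provisioning described earlier looks like in practice, here is a minimal sketch using the openstacksdk Python library (a present-day library, offered as an illustration rather than anything Mirantis prescribes); the cloud, image, flavour and network names are placeholders for your own environment:

    # Launch one server on an OpenStack cloud via openstacksdk.
    # "my-cloud" must be defined in clouds.yaml; resource names are examples.
    import openstack

    conn = openstack.connect(cloud="my-cloud")

    image = conn.compute.find_image("ubuntu-14.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private-net")

    server = conn.compute.create_server(
        name="demo-server",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)  # block until ACTIVE
    print(server.status)

Scripting a loop around calls like these is essentially how large provisioning benchmarks are driven; the same request path serves an application owner launching a single instance.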
What business model will accelerate adoption of OpenStack?
Its openness and vendor-agnostic nature are the keys to OpenStack's rapid adoption, together with the fact that OpenStack users are realising it can be used to add more computing capacity in minutes, as opposed to the several weeks or months it can take to buy and provision new hardware. It's this that will drive OpenStack's momentum.

How should organisations use OpenStack for the best results within a heterogeneous environment?
One of the hallmarks of open source - and particularly so for OpenStack - is the rapid pace of innovation. For example, Mirantis has partnered with VMware to make it possible to extend VMware environments with OpenStack, so that companies who have invested in ESX hypervisors can benefit from using OpenStack for their IaaS and protect that investment. OpenStack has evolved, and continues to evolve, rapidly. The concerns that CIOs might have had 18 months or two years back have been addressed, as commercially supported OpenStack distributions resolve the concerns about security, scalability, support and so on, while still giving customers all the benefits of openness and interoperability.

Where do you see OpenStack going over the next year or two?
OpenStack adoption will accelerate in the years ahead, moving even faster than Linux did a few years back. There are four key trends driving OpenStack adoption. First, the overwhelming majority of companies building applications for strategic advantage are using cloud as a platform; as a result, they're comfortable building applications that leverage cloud resources rather than traditional servers. Second, open source is no longer foreign and mysterious: most IT organisations know how to use it and manage it effectively, and understand the benefits it brings. Third, the vast majority of infrastructure vendors recognise that OpenStack accelerates market adoption of new technologies, and as the market shifts to cloud, they want a piece of that. Finally, the ability of SaaS companies to offer more compelling, information-driven value to their customers is a lesson in competitive advantage. Any organisation that uses IT to innovate is going to look for better, faster ways to make its infrastructure more nimble, and more capable of attracting and keeping customers. The flexibility and agility of OpenStack can play a central role in achieving that competitive advantage.
www.mirantis.com
ARE YOUR COLLEAGUES AS WELL-INFORMED AS YOU ARE?
NETCOMMS europe magazine is the first, and only, pan-European journal dedicated to the network communications infrastructure marketplace. NETCOMMS europe features news, legislation and training information from industry-leading bodies, application stories and the very latest information on cutting-edge technology and products. NETCOMMS europe compiles editorial contributions from worldwide industry figureheads, ensuring that it is the No. 1 place to find information on all aspects of this fast-paced industry.
If you think your colleagues would be interested in receiving their own regular copy of NETCOMMS europe, simply register online at www.netcommseurope.com. And don't forget to renew your own subscription every now and then, to make absolutely sure that you never miss an issue of the most up-to-date publication in the industry!
LGN Media is a trading name of the Lead Generation Network Ltd, 26 St Thomas Place, Ely, Cambridge, CB7 4EX. Tel 01353 644081 www.netcommseurope.com
OPINION
Why planning is essential when it comes to the cloud
CLOUD COMPUTING IN AN
ON-DEMAND WORLD
Amit Khanna looks at how the architecture of the cloud can help businesses to scale more effectively and at lower cost...
By Amit Khanna, Vice President - Technology, Virtusa
Introduction
Cloud computing is changing the IT landscape and redefining how software is built, deployed and managed. Enterprises have reached a stage where they can no longer ignore cloud computing, or the tangible benefits it can deliver. As companies and employees demand more flexibility from their IT, and lower costs, cloud usage will only increase.
InfoBurst Cloud computing - full of technology acronyms?
Yet what is it about cloud that enables this? What are the cost benefits? How do the economies of scale work?

Keeping cloud costs down
Firstly, it is worth noting that cloud computing is not a single technology; it is in fact a computing paradigm that combines many existing technologies to provide distinct characteristics, such as:
• Multi-tenancy: allows multiple applications, users and entities to share computing resources
• Scale: software can scale almost linearly by leveraging shared resources
• Elasticity: the resources used (compute and networking) automatically adjust to the peaks and troughs of computing demand
• On demand: the time taken to provision and de-provision resources is negligible
• Pay as you go: no upfront infrastructure investment is required; pay as you use.

Each of these aspects of cloud computing results in lower overall costs for enterprises. For example, the fact that many clients share cloud platforms means that cloud vendors are able to realise much higher utilisation than they can using traditional models. This higher utilisation of resources results in cost savings, which can then be passed on to clients.

Most businesses see a huge variance in their computing requirements - high demand during office hours, for example, or peak seasons such as holiday shopping. Traditionally, these businesses had to plan for investments in technology infrastructure and solutions that would support the peak usage, resulting in a lot of capacity lying unutilised during the off-peak season. Now, the elastic nature of the cloud allows enterprises to scale in accordance with demand: excess capacity can be automatically released, resulting in overall cost savings, and cloud computing allows for this elasticity with little to no manual intervention (the toy cost model below illustrates the effect).

Discussions around cloud costs tend to be heavily focused on operational aspects. However, there are far more important cost benefits of cloud computing, namely opportunity costs and the cost of failure.

Opportunity cost - cloud computing enables enterprises to respond to business needs at a much faster rate than traditional IT: for example, if the business has an opportunity that involves adding more capacity or opening an office in a new geography, the resources can be provisioned almost immediately.

Cost of failure - the fact that cloud computing offers pay-as-you-go models obviates the need for heavy upfront capital expenditure on any new products and services. This means enterprises can not only bring these products to market faster, but can also experiment a lot more, as no heavy additional investments are required.
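To see how elasticity translates into savings, here is a toy cost comparison; the prices and the demand profile are invented purely for illustration:

    # Provisioning for peak vs elastic pay-as-you-go, with made-up numbers.
    HOURLY_RATE = 0.10            # assumed price per server-hour
    HOURS = 24 * 30               # one month

    # Demand: 100 servers during a daily 9-to-5 peak, 40 otherwise.
    demand = [100 if 9 <= h % 24 <= 17 else 40 for h in range(HOURS)]

    peak_provisioned = max(demand) * HOURS * HOURLY_RATE   # own the peak 24/7
    elastic = sum(demand) * HOURLY_RATE                    # pay for use only

    print(f"Provisioned for peak: ${peak_provisioned:,.0f}")   # $7,200
    print(f"Pay as you go:        ${elastic:,.0f}")            # $4,500

The exact figures are meaningless; the point is that the gap between the two results is the unutilised capacity described above.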
Supporting innovation
In addition to the cost savings implicit in cloud computing, it provides other benefits, such as simplification and standardisation of IT architectures from the consumer's standpoint, consolidation of infrastructure and application investments, and increased virtualisation of an organisation's entire IT landscape. Here are some of the direct and indirect ways in which cloud computing can benefit organisations:
1. It levels the competitive landscape across industries - cloud computing will profoundly shift how IT is consumed by enterprises and end consumers alike.
2. It accelerates the convergence of technologies - cloud technologies will increasingly be the platform on which other technologies, such as mobile and big data solutions, rely.
3. It creates a platform for innovation - with cloud computing providing a platform that can potentially scale indefinitely, the focus shifts from technology to business innovation.
4. It causes a shift in enterprise IT buying patterns - enterprises that have traditionally depended on the CIO organisation for IT solutions will now have their business units consuming IT solutions directly, thanks to the simplification brought by cloud-based consumption of solutions.

So what's next?
Before plunging headlong into cloud adoption, companies will have to do the required groundwork and plan their adoption based on their business needs. It is important for enterprises to see the big picture of the impact that cloud adoption will have on their long-term IT infrastructure needs. This requires careful planning, with all aspects clearly thought out before taking the step towards cloud adoption. Different organisations will have different technology needs based on, among other considerations, the markets they operate in, their scale and the competitive scenario. Today, the focus for enterprises is not just to sell products and services in their markets, but also to create value for their customers.

While adoption of cloud computing does require companies to relinquish control in some ways, the opportunities that arise from improved performance, reliability and scalability outweigh many of the concerns. Cloud computing is set to alter the information technology paradigm permanently, and these benefits will drive adoption to levels significantly higher than today's in the not-so-distant future.
www.virtusa.com
SECURITY
How the cloud brings challenges, as well as benefits
JOURNEY TO THE CLOUD:
CHALLENGES POSED BY SECURITY
Phil Turner explains how to contain the security challenges that the cloud creates...
By Phil Turner, Vice President of EMEA, Okta
Introduction
The cloud offers a host of benefits to businesses, from control over applications and ease of accessibility to fast access and openness. Yet, despite the clear benefits of cloud-based services, security still remains a barrier to cloud adoption. According to Okta's research report, Identity and Management in a Cloud and Mobile World, data security risk is by far the most significant concern around the use of cloud applications within organisations, with 70 per cent of respondents citing it as a concern. But, in reality, most information is actually more secure in the cloud than in a lot of the costly on-premise infrastructures.

When it comes to cloud security, cloud businesses have to build secure data centres that are independently audited, adhere to standards such as SOC 2 Type II, and are used by hundreds to thousands of tenants. Add to this the reputational and business damage that a cloud provider would suffer should its data not be secure, and it's easy to see why it is in their vested interest to uphold high levels of security. So why, then, are so many businesses concerned about security in the cloud?

Why visibility is a problem
The real danger of cloud adoption arises from the lack of visibility and control. While the cloud provides employees with the freedom to choose, control and manage their own applications, businesses now have to contend with a whole host of different devices and applications, not all of which are vetted by the IT department. According to the report, more than a third (37 per cent) of employees are believed to be accessing a minimum of eight cloud applications a month without IT jurisdiction. In reality, the problem is likely to be much worse than estimated, with only nine per cent of IT decision makers highly confident that they have full visibility of all the applications being used by their employees.
As a result, it's no surprise that only six per cent are confident that cloud applications are integrated into their existing governance and IT security policies.

The issue of visibility also stretches beyond the internal enterprise, with access to cloud applications now encompassing suppliers, consultants and contractors. Indeed, 70 per cent of organisations use portals comprising multiple applications to engage with partners, customers and other external users, with nearly two-thirds (64 per cent) needing third parties to access cloud apps at least once a month. By opening their virtual doors to partners and suppliers and allowing them access to data and information, businesses are also opening the door to a number of risks. Today, a supply chain can consist of tens, or even hundreds, of different suppliers, each of which provides businesses with another potential point of failure, or entry point for a cybercriminal to attack. As well as the risk of malicious attacks, there is the risk of counterfeit products entering the supply chain, or a loss of intellectual property caused by data leakage, whether intentional or accidental. There is also the risk of ideas being copied, particularly in innovative sectors such as the high-tech, automotive and pharmaceutical industries. In this new, complex environment, what can businesses do to ensure their sensitive data remains protected?

Minimising the risk
There are a number of simple steps that businesses can take in order to secure applications and multiple access points. Rather than relying solely on passwords to authenticate users, multifactor authentication can ensure users are who they say they are and reduce the risk of unauthorised access.
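One concrete example of a second factor is a time-based one-time password (TOTP). The sketch below uses the open-source pyotp library; treat it as an illustration of the mechanism rather than a production design, and note that the secret would normally live in a credential store, not in code:

    # Time-based one-time passwords with pyotp (pip install pyotp).
    import pyotp

    secret = pyotp.random_base32()   # provisioned once per user at enrolment
    totp = pyotp.TOTP(secret)

    code = totp.now()                # what the user's authenticator app shows
    print(totp.verify(code))         # server-side check -> True

Because the code changes every 30 seconds, a stolen password alone is no longer enough to gain access.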
InfoBurst Data extracts from Okta's report, Identity and Management in a Cloud and Mobile World...

Confidence that cloud applications are integrated with infrastructure and covered by IT policies:
• Not confident at all: 2%
• Not particularly confident: 20%
• Somewhat confident: 72%
• Highly confident: 6%

Confidence in full visibility by the IT department:
• Not confident at all: 4%
• Not particularly confident: 33%
• Somewhat confident: 54%
• Highly confident: 9%

Journey to the cloud - top IT priorities:
• Cost reduction: 62%
• Creating/maintaining IT security: 60%
• Efficient resource utilisation: 47%
• Driving business growth/innovation: 42%
• Improving IT/business alignment: 39%
• Supporting new technologies (e.g. mobile, social, cloud): 28%
• Risk management, regulatory compliance: 28%
• Speedy ROI on projects: 20%
Another way to safeguard applications is to provide a single access point to all of them, such as a centralised portal. This enables businesses to quickly and easily automate all customer and partner user-management functionality. IAM (Identity and Access Management) has become an important tool for businesses looking to regain control of their IT security, with 57 per cent believing the adoption of cloud-based services has made IAM more of a priority in recent years. Services such as cloud-based IAM can not only provide businesses with a better way to secure and control an order of magnitude more users, devices and applications spanning traditional company and network boundaries, but also let businesses see who has access to applications and data, where they are accessing it, and what they are doing with it.

Conclusions: stand still or differentiate
The cloud is the next logical architecture for delivering business applications at massive scale and the right cost model, but it's clear that security is still seen as both a benefit and a barrier to cloud adoption. Rather than shying away from the cloud due to security concerns, businesses should look to cloud providers for support and help in alleviating any concerns around security, access and control. Companies can elect to stand still and bury their heads in the sand like an ostrich, or differentiate themselves through new business models enabled by an agile cloud infrastructure. To me, it comes down to people, and that is the only element I think will "truly" hold back cloud adoption. Security issues have always been around and they have always been addressed - for some people they are a useful delay to stave off the inevitable change that is coming.
www.okta.com
OPINION
Breaking down the planning process into more manageable steps
WHY PLANNING SHOULD BE CENTRAL TO YOUR CLOUD ADOPTION PROCESS
Russell Cook explains how breaking down the cloud planning process can make the task a lot more manageable…
By Russell Cook, Managing Director, SIRE Technology
InfoBurst The cloud: an amalgam of many different technologies...
Introduction
Unlike computers in the workplace, the evolution of cloud computing has been quite rapid: instead of the three decades of evolution we have seen with PCs, cloud technology has evolved in just a few short years to its current state of play - an economical and highly flexible IT resource that can be scaled up or down as and when required. For most organisations, however, implementing a cloud platform is a little more complex than opting for an off-the-peg set of office PCs and a server, and installing the system over a weekend - it takes a fair bit of planning, we have observed.

This planning - as with all good preparations - is perhaps best undertaken by breaking the process down into a series of four easily managed steps: analysis, risk assessment, due diligence and implementation.
The initial step, analysis, involves identifying the benefits and risks to your organisation, with the benefits splitting into financial aspects, flexibility and scalability - and the risks breaking down into the challenges of standardisation, the uncertainty of flexible pricing, and licensing issues.

On the risk assessment front, managers need to look closely at compliance issues very early in the planning process, covering topics such as data protection, legal compliance - from both a UK and a European perspective - and understanding where your company's data is going to be stored.
This is an important issue, we have observed, as cloud service providers often duplicate their data - your data - for resilience purposes, but do not always tell their clients where these backup copies are located. This can be a problem on the compliance front, as data stored in cloud resources outside the European Union can fall foul of data privacy and security legislation. And then there is the complex issue of whether a US company is involved with the cloud service provider in any way, as the US Patriot Act requires all US companies and their subsidiaries to allow the US government - and its agencies - complete access to their data, including the cloud files of their clients.

The due diligence step then involves discussing the project with potential suppliers: asking questions about the provision of support services, who ultimately owns the data, what layers of contracts with third parties exist, and what lock-ins are imposed. You should also be asking what will happen to your data when the contract is up and your data is transferred to another supplier, and what plans are in place in the event that the supplier goes out of business, for whatever reason. You may, for example, want to know what facilities exist for you to obtain direct physical access to your cloud data, and what logistics are involved in completing a site visit and removing data on suitable media, such as tape cartridges or similar. It is also necessary at this stage to decide which type of cloud resource is best for your company - public, private or hybrid - and which applications are provided by the cloud vendor, e.g. SaaS (Software as a Service), PaaS (Platform as a Service) and so on.

The final stage - implementation - is arguably the easiest, as the deployment and test process, followed by an effective pilot programme and its evaluation, should be a breeze - assuming the earlier stages have been completed reliably.

Business continuity
One of the most frequently overlooked aspects of the cloud planning process is business continuity (BC), an element that is often confused with disaster recovery. BC involves planning for a worst-case scenario - and then stepping back to lesser scenarios and planning accordingly. We take BC issues very seriously here at SIRE, and in June of this year we joined the Business Continuity Institute (BCI), an organisation that has established itself as the leading international institute for business continuity and certification, both for organisations and for individuals keen to be recognised for a professional approach to this relatively new area of technology and business. Being accepted as members of the BCI gives SIRE's services and knowledge real credence, and allows the company to display its BCI membership as well as participate in some of the organisation's initiatives and campaigns.
Conclusions
There is a lot of talk about cloud computing, and many SMEs may be wondering whether it can really benefit them or is just for larger organisations. The answer, we have observed, is that cloud computing is the next stage in the Internet's evolution and, when managed correctly, provides the means through which everything - from computing power to computing infrastructure, applications and business processes - can be delivered to your business as a service, wherever and whenever you need it. Our observations also show that the cloud offers any organisation significant benefits, including flexibility and business continuity, regardless of its size or the nature of its business. If effective planning and suitable allied processes are carried out, we have found that clients can enjoy the considerable cost savings that accrue from a well-planned and well-implemented cloud process.

It is worth remembering that the economic imperative behind the cloud, and the lack of human interaction in automated cloud service provision, can sometimes reduce the selection process to a 'lowest cost is best' route. This is actually a false economy, as opting for the lowest-cost service over a slightly more expensive one may lead to extra costs in the longer term. Our observations suggest that a premium-economy approach to buying in business cloud services is often the better option in the long run.
www.sire.co.uk
SECURITY
Reducing security risk with due diligence
SECURITY QUESTIONS TO ASK YOUR CLOUD PROVIDER
Stephen Coty explains some of the questions you should be asking your cloud service provider... By Stephen Coty, Chief Security Evangelist, Alert Logic
InfoBurst Securing the cloud - a complex process that needs to be carried out correctly...
Introduction
The cloud is here, and it's only set to grow, because its scalability and on-demand capacity present the perfect medium to support businesses and their need to be agile. The benefits are many, ranging from the ability to manage costs more effectively (which makes the finance team happy) to not having to worry about installing and maintaining hardware in data centres that don't have enough space, power or cooling (which keeps the IT team happy). Offloading the burden on to a cloud provider that [says it will] take care of everything from performance and storage to email is certainly an attractive proposition.

However, this doesn't mean that these are the only considerations to take into account when undergoing a cloud project. Companies need to do their due diligence and ensure that it is the right choice for their business, just like any other business decision.
Part of this has to be thinking about the scale and type of information that will be placed in, and in transit within, a cloud provider's infrastructure. Businesses that do take advantage of cloud infrastructure must therefore give the security of the data they put in the cloud careful deliberation, whether they are about to make the move or have already done so. This is for a number of reasons:

The same types of attack typical of on-premise data centre environments are moving to the cloud - what used to be historically on-premise attacks, such as malware, botnet and brute-force attacks, are now targeting cloud environments. A big driver for this is that businesses are starting to deploy traditional enterprise applications, such as ERP and VDI (Virtual Desktop Infrastructure), in the cloud.
Hackers that see this happen run vulnerability scans and brute-force attacks that attempt to siphon off valuable company data, in the hope of finding and exploiting lax security policies in the cloud. Furthermore, as more end-user applications move to the cloud, malware and botnet attacks follow suit.

The breadth and depth of attacks mean that threat diversity in the cloud is on the rise - threat diversity is basically a measure of how many different types of attack exist and companies face. This year, threat diversity in the cloud increased to rival that of on-premise data centres. This means that companies need to be just as vigilant, with the same security sophistication, in the cloud as they would normally be in protecting an on-premise data centre.

The point solutions typically relied upon to combat these threats are not enough - to gauge the effectiveness of security solutions, such as anti-virus protection, in major public clouds around the world, new patterns of attack and emerging threats were observed through a honeypot project. One particularly interesting and disturbing observation was that 14 per cent of the malware collected was considered undetectable by 51 of the world's top anti-virus vendors.

So, those are the cold, hard facts - and they are certainly not to say that businesses should stop using the cloud; there are just too many benefits. The good news is that there is a lot that organisations can do to protect themselves in the cloud, and the first step is to get educated on what their businesses and applications require from a compliance and security posture. The following guide to the questions you should be asking your service provider about security in the cloud is a good starting place. Make sure that the cloud service provider can answer these questions confidently and comprehensively, so you feel confident that it takes the security of your business-critical data seriously.

1. What is their data encryption strategy and how is it implemented? Encryption is the industry ideal for protecting critical data by making it unreadable to unauthorised parties. While there are many considerations when it comes to encryption, the cloud service provider should preferably be able to answer questions such as who controls the keys and what standard of encryption is used.

2. What is the hypervisor and provider infrastructure patching schedule? As previously explained, malware and exploits continue to rise, so it is important that the cloud service provider patches and updates its infrastructure on a regular and frequent basis. This will minimise the threats to customers' data by fixing any "holes" that malicious actors can exploit to gain access to their systems.

3. How do you isolate and safeguard my data from other customers? Because of their huge capacity, cloud providers will undoubtedly (unless specified as private) house data for more than one company (multi-tenancy). Ask how they segment the data, what controls they have in place to make sure data isn't accidentally shared, and how those controls are implemented.

4. How is user access monitored, modified and documented? Naturally, where security is concerned, it is vital to know who is accessing the data so that it remains uncompromised. It is also important that separation of duties is in place, so that the service provider's administrators do not have end-to-end authority and control over your data.

5. What regulatory requirements does the provider subscribe to? There are a number of regulatory controls that a cloud service provider can adhere to in order to demonstrate best practice and compliance. If you are putting cardholder information in the cloud, for example, you will want to make sure that the provider is PCI compliant. If it adheres to industry standards such as ISO 27001, that is a good indication that it takes security and the integrity of your data seriously.

6. What is the provider's back-up and disaster recovery strategy? This is often referred to as resiliency. As with most services, occasional downtime is an inevitability. Find out what the provider's track record on availability is, and make sure there is transparency into its infrastructure. It may very well be that you will be responsible for your own backup of information, so make sure the boundaries are defined and each party knows its responsibilities. The recent Code Spaces demise, for example, could have been avoided had the company kept a separate backup of its infrastructure: without one, it lost everything.

7. What visibility will the provider offer your organisation into security processes and events affecting your data, from both the front and back end of your instance?

These are just some of the questions that you may want to ask a cloud service provider about the security of sensitive information residing in the cloud. Depending on the level of confidence and completeness of the answers, they will help you quickly judge how safe your data is with the cloud service provider, and how seriously it takes the security of the data that backs and fuels your business.
www.alertlogic.com
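As a postscript to question 1: the sketch below shows what symmetric encryption at rest looks like using the open-source cryptography library's Fernet recipe. It is an illustration of the mechanism only - in a real deployment the key would be held in an HSM or key-management service, which is precisely why "who controls the keys?" matters:

    # Symmetric encryption at rest with Fernet (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # in production: an HSM or KMS, never code
    f = Fernet(key)

    token = f.encrypt(b"sensitive customer record")
    print(f.decrypt(token))            # readable only to holders of the key

Whoever holds that key - you, the provider, or both - effectively holds the data.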
INFRASTRUCTURE
How the cloud can make your IT systems more robust
UNDERSTANDING CLOUD DISASTER RECOVERY SERVICES
Peter Godden looks at how virtualisation is helping organisations strengthen their disaster recovery positions.
By Peter Godden, Vice President of EMEA, Zerto
Introduction
Judging by the stream of TV advertisements and the buzz in the technology press, cloud computing is a methodology that can solve deeply intractable problems in the data centre. However, many organisations adopt cloud to help solve one initial issue, using it as both a remedy and a test bed to gain an understanding of its potential. A survey last year at the Amazon Web Services Global Customer and Partner Conference found that a majority (60%) cited cost savings and disaster recovery as the factors most heavily driving cloud storage adoption.

However, the desire to use the cloud is tempered by practical realities and additional fears. To quantify this position, Zerto conducted a further survey, which found that cost and complexity are the two biggest concerns, with 'difficult to manage' a close third. Even among companies that have a DR implementation, only 23% are confident their DR will work in a real emergency.
One of the fundamental problems with using the cloud for IT recovery is that current array-based replication techniques are not well suited to the increasingly virtualised workloads that are becoming more common across the IT landscape. Array-based replication products are provided by the storage vendors and deployed as modules inside the storage array; examples include EMC SRDF and NetApp SnapMirror. As such, they are single-vendor solutions, compatible only with the specific storage solution already in use. Array-based replication, currently the most popular replication method in organisations, does not have the granularity that is needed in a virtual environment, or to replicate these virtual environments into the cloud.

Mapping across
For example, mapping between virtual disks and array volumes is complex and constantly changing, creating management challenges and additional storage overhead.
InfoBurst Zerto’s technology: creating a powerful Disaster Recovery platform...
Often, multiple virtual machines reside on a single array volume, or logical unit. An array-based solution will replicate the entire volume even if only one virtual machine in the volume needs to be replicated. This under-utilises the storage and results in what is known as "storage sprawl." Because array-based replication lacks the visibility and granularity to identify specific virtual machines in different locations, organisations tend to put all the disks from an enterprise application into a single storage logical unit, when in fact there are operational advantages to splitting them up over a number of logical units.

Array-based replication has several other important disadvantages that limit its suitability for a cloud-based DR position. Essentially, it is designed to replicate physical entities rather than virtual ones. As a result, it doesn't "see" the virtual machines and is oblivious to configuration changes - and, due to their dynamic nature, virtual environments have a high rate of change.

As the starting position for a successful cloud DR strategy, a growing trend is to use hypervisor-based replication technology, which protects virtual machines (VMs) at the virtual machine disk file level rather than at the LUN or storage volume level; replication can thus be done without the management and TCO challenges associated with array-based replication. Because it is installed directly inside the virtual infrastructure (as opposed to on individual machines), hypervisor-based replication is able to replicate within the virtualisation layer itself, so that each time the virtual machine writes to its virtual disks, the write command is captured, cloned, and sent to the cloud recovery site. This is more efficient, accurate and responsive than prior methods.
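The capture-clone-ship idea is easier to see in miniature. The following is a conceptual sketch only - a toy, not Zerto's implementation or any hypervisor's actual API - showing a write path with a replication tap that forwards each write to a recovery copy:

    # Conceptual write-capture replication; all names here are invented.
    import queue
    import threading

    replication_log = queue.Queue()   # captured (offset, data) write records

    def vm_write(disk: bytearray, offset: int, data: bytes) -> None:
        """Local write, with a tap that clones the write for the DR site."""
        disk[offset:offset + len(data)] = data
        replication_log.put((offset, data))

    def replica_worker(remote_disk: bytearray) -> None:
        """Drains captured writes and applies them to the recovery copy."""
        while True:
            offset, data = replication_log.get()
            remote_disk[offset:offset + len(data)] = data
            replication_log.task_done()

    local, remote = bytearray(64), bytearray(64)
    threading.Thread(target=replica_worker, args=(remote,), daemon=True).start()
    vm_write(local, 0, b"hello")
    replication_log.join()            # wait until the write reaches the replica
    print(bytes(remote[:5]))          # b'hello'

Real products must also handle write ordering across many disks, consistency grouping and WAN failures, but the essential point stands: because the tap sits in the virtualisation layer, it sees every write regardless of the underlying storage.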
Hypervisor replication
Hypervisor-based replication is fully agnostic to storage source and destination, natively supporting all storage platforms and the full breadth of capabilities made possible by virtualisation, including high availability, clustering, and the ability to locate and replicate volumes in motion. Hypervisor-based replication technologies are becoming standard in virtualised environments but, even with the technology, there are still a number of options to consider: although the cloud is well suited to DR, it is not a one-size-fits-all approach. It is helpful to define the options, as this helps in understanding the benefits and limitations of the different cloud-based approaches.

The first type of approach is a private cloud, where business continuity and disaster recovery sit between two or more geographically separate sites, all under the control of the enterprise's IT team and deployed as a private cloud. This approach allows enterprises to create a flexible and dynamic environment in which their IT departments can scale and mobilise applications depending on needs and resources at any point in time, by delivering IT infrastructure across multiple geographical sites. Taking this approach also helps enterprises to distribute production load evenly between multiple data centres and recovery sites. However, it is more complicated to set up and manage, and places more of the technical heavy lifting on the internal IT department.

Conclusions
The advent of virtualisation and the growth of cloud computing offer a significant opportunity to strengthen disaster recovery processes. With the inclusion of hypervisor-based replication technologies and the benefits of private and as-a-service options, the cost and complexity of disaster recovery are falling, offering the economies of scale to drive down costs even further.
The Zerto 2.0 option
Whatever path enterprises choose in their application deployment, Zerto provides a BC/DR solution that fits. Zerto Virtual Replication is the only cloud-ready BC/DR platform providing enterprise-class protection to applications deployed in virtualised environments and private or public clouds. The technology enables Disaster-Recovery-as-a-Service and true cloud BC/DR for cloud service providers and enterprise customers respectively. Enterprises can expand BC/DR support to include not just the traditional data centre, but also smaller branch offices and other sites, through multi-site capabilities. Additionally, this lowers the barriers to entry for the enterprise to evaluate the cloud for other applications in the environment, perhaps a tier-2 application. The multi-tenancy features greatly increase efficiencies at the disaster site, especially if there are geographically separate production sites replicating to the same disaster site.
One infrastructure, managed centrally through VMware vCenter and vCloud Director, can now simplify management and reduce operational costs. CSPs (cloud service providers) are able to attract new customers by offering a cost-effective service that enables customers to evaluate the CSP effectively, without complete dependency. CSPs can make the price very attractive to enterprises, as they do not have to create a completely duplicated infrastructure with matching hardware, software and networking. Additionally, they do not need a highly specialised team and can focus on what they have in their environment. Finally, with true multi-tenancy, economies of scale can be leveraged to drive down costs for customers even further.
www.zerto.com
INFRASTRUCTURE
TAKING YOUR FIRST STEPS INTO THE CLOUD
InfoBurst Breaking down the cloud planning and adoption process into small segments can make life a lot simpler...
Strategies for adopting the cloud
Gordon Howes discusses the strategies that companies need to adopt when embracing the cloud...
By Gordon Howes, Director, VMhosts
Introduction
Anyone in the cloud industry knows that cloud computing - and indeed hosted services - are nothing new for businesses. Companies have been adopting cloud technologies for many years now, and cloud deployment is often the first choice when looking to roll out a new application or service. With that said, however, does the same apply to companies of all sizes? Are smaller SME companies well versed enough to know about the benefits of hosted services?

The cloud can be a daunting topic for many businesses. Some will already know the benefits of moving some or all of their services into the cloud, but may not know who to turn to and what first steps are required to make it all happen. Other companies may have little to no knowledge of the technology and the process involved - and will often find the whole topic very confusing. From connectivity to costs, there are a number of questions that businesses need answered before they take their first step into the world of cloud computing. Many of them will be obvious to people already leveraging the cloud, but for those not in the know, they are questions that need answering before a move to the cloud is viable. There will undoubtedly be more questions than the following few; however, as a cloud provider, these are the ones most commonly asked of us.

Isn't the cloud expensive?
This is a common misconception. In traditional IT purchasing, a server or a piece of hardware is acquired for a particular purpose. It may be a new piece of software that needs a dedicated operating system to run on,
InfoBurst Treading a cloud tightrope - simply a question of balance...
or maybe an upgrade to an existing application. Whatever the reason, when purchasing hardware there is usually an element of guesswork involved that can lead to large up-front costs: many organisations don't have the time or resources to run capacity plans for every application or service. Typically, when an IT department is considering purchasing a server to perform a particular process, the company is investing in hardware and software with the expectation that it will need to last around three to five years. As it is unsure what the company's requirements will actually be in three to five years' time, there has to be a bit of guesswork - albeit an educated guess. If not enough hardware is specified, the company will be purchasing expensive upgrades before it knows it; if over-specified, the company has not made the best use of its large up-front investment. More often than not, for fear of under-specifying hardware requirements, IT departments over-estimate the hardware they need, leaving the business with a big up-front bill and a woefully under-utilised server.

Cloud-hosted services often work on a monthly costing model with little or even no up-front investment. The guesswork is more or less taken out of the equation, as hosted server resources can be scaled up or down when the business needs it. Resource is calculated on the application's performance at that time, rather than on what it might be doing further down the line. Cloud computing is utility-based computing, in the same way that gas and electricity are utility-based energy resources: if more or less resource is needed, the price can be scaled up and down easily depending on the usage requirements.
By paying for cloud resources in this way, businesses can budget for their computing needs more effectively, with minimal capital expenditure on day one.
How do I know I’m ready for the cloud?
This can be looked at in one of two ways: being ready for the cloud from a physical point of view, or being ready from a business point of view. Physically, cloud is generally all about connectivity: as long as you have relatively decent connectivity from wherever you are connecting, you will typically be fine. Have a chat with a cloud service provider and they will be able to advise on how adequate (or inadequate) your connectivity is for the solution you are looking for. You’ll often be pleasantly surprised - a huge number of hosted services work over relatively slow connections.
Being ready for the cloud from a business perspective can be trickier to answer. As the term ‘cloud’ is quite broad and can encompass a variety of different services, there isn’t a one-size-fits-all solution. Take some time to audit your current applications and processes; if a problem has been around for a long time, users may silently accept that it’s “just the way it is” rather than making it known. Businesses quite often think about moving services to the cloud only when hardware becomes end-of-life and needs replacing. Instead of finding the capital expenditure required to purchase new equipment, check whether the application or service would work in a hosted model. Services such as email hosting, remote access and backup, for example, are all extremely viable hosting options. In fact, many businesses have no idea they are already making use of hosted services: if you have ever used applications like Dropbox or Microsoft’s own Office 365, then you are already making use of the technology.
InfoBurst Planning your cloud component strategy - not as easy as it first looks...
What should I look for in a provider?
With a vast array of cloud providers out there, how can you make an informed choice about whom to choose as a hosting partner? Here are a few checks to help you make the best choice for your business:
• Check for any certifications or codes of practice - seeing that a provider adheres to recognised standards helps set your mind at rest that it has passed, and is committed to, regulated guidelines. Typically this means it has processes and procedures in place to help protect your data and services.
• Ask for a data centre tour - physically seeing where your data is held often goes a long way towards trusting the provider. Be wary of any company that refuses a tour without a very good reason - they may not be all that they seem.
• Check for testimonials or ask for references - speaking to a provider’s existing customers will go a long way towards making an informed choice.
• Ask if the provider has any disaster recovery or business continuity plans of its own.
• Check if the provider can offer any geographically redundant high availability or disaster recovery options - if it does, ask what they are and how they work.
Do I have to move all my systems into the cloud?
Not at all - although of course you are welcome to, and in some cases it makes perfect sense. Cloud is an enabling technology: it complements your existing infrastructure and allows you to extend your IT department by moving certain processes to it. A good example is backup and disaster recovery, as these services can be very expensive and problematic to run in-house. By moving your backup to a cloud provider, you are immediately making use of the cloud without moving any of your company’s servers to a hosted service.
Will my company have to hire any cloud experts?
Not at all - it is the responsibility of the cloud provider to maintain and manage the infrastructure the service is provided on, meaning there is no requirement to employ or hire cloud experts. If you are a company that uses outsourced IT, have a chat with them about migration plans for moving to the cloud, and check with the cloud provider as well. Most of the time they will offer some free migration advice, although any complex migration may have a charge attached to it. www.vmhosts.co.uk
THREE PHASE POWER
Designed to bring maximum power to your servers, the G4 three phase range is built to exacting standards to ensure maximum safety for your facility.
Available with: • C13/C19 locking outlets • C13/C19 fused outlets • BS1363 UK outlets • Continental outlets • Individual circuit protection per outlet • Overall metering of V, A, kWh, harmonics, PF.
G4 MPS Limited Unit 15 & 16 Orchard Farm Business Park, Barcham Road, Soham, Cambs. CB7 5TU T. +44 (0)1353 723248 F. +44 (0)1353 723941 E. sales@g4mps.co.uk
Vertical Rack Mount
Maximise your rack space: specify mixed-connector PDUs built to your exact requirements to give you just the solution you are looking for.
Horizontal Rack Mount
Thermal overload protection or fused outlets mean that you only lose a single socket in the event of a fault, not the whole PDU, thereby removing the risk of a total rack failure.
SOFTWARE
How next-gen Linux containers could cause problems
WILL LINUX CAUSE PROBLEMS WITH LOAD BALANCERS?
Richard Davies discusses some of the current challenges with cloud-based load balancer technologies... By Richard Davies, CEO, ElasticHosts
Introduction
Modern IT infrastructure needs to be highly flexible as the strain on servers, sites and databases grows and shrinks throughout the day. Cloud infrastructure is meant to make scaling simple by effectively outsourcing and commoditising your computing capacity so that, in theory, you can turn it on and off like a tap. However, most approaches to provisioning cloud servers are still based around the idea that you have fixed-size server “instances”, offering you infrastructure in large blocks that must each be provisioned and then configured to work together. This means your infrastructure scaling is less like having a handy tap and more like working out how many bottles of water you’ll need. There are traditional approaches to ensure all these individual instances work efficiently and in unison (so that those bottles of water don’t run dry or go stagnant); one of the more popular tools
for cloud capacity management today is the load balancer. In fact, load balancers are quite often bought alongside your cloud infrastructure. The load balancer sits in front of your servers and directs traffic efficiently to your various cloud server instances. To continue the analogy, it makes sure everyone drinks their fill from the bottles you’ve bought, using each bottle equally, and no one is turned away thirsty.
Horizontal scaling
If your infrastructure comes under more load than you have instances to handle, the load balancer makes an API call to your cloud hosting provider and more servers are bought and added to the available instances in the cluster. Each instance is a fixed size and you start more of them, or shut some down, according to need. This is known as horizontal scaling.
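As a rough sketch of what that API-driven loop looks like in practice - the provider client, thresholds and method names below are hypothetical stand-ins, not any particular vendor’s API:

import random
import time

class CloudProvider:
    """Hypothetical stand-in for a real cloud provider API client."""
    def __init__(self, instances=2):
        self.instances = instances
    def add_instance(self):
        self.instances += 1              # provision one more fixed-size server
    def remove_instance(self):
        self.instances = max(1, self.instances - 1)
    def average_load(self):
        return random.random()           # stub: fraction of capacity in use

def autoscale(provider, high=0.8, low=0.3, period=60, steps=10):
    """Naive horizontal scaler: add or remove whole fixed-size instances."""
    for _ in range(steps):
        load = provider.average_load()
        if load > high:
            provider.add_instance()
        elif load < low and provider.instances > 1:
            provider.remove_instance()
        time.sleep(period)               # react only once per period

autoscale(CloudProvider(), period=1)

Note that capacity only ever changes in whole-instance steps, and only once per polling period - exactly the coarseness and lag criticised below.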
InfoBurst Containers - can be filled, emptied at will...
Existing virtualisation technology also allows individual server instances to be scaled vertically after a reboot. A single instance can be resized, on reboot, to accommodate increased load. This would be like going from a small bottle of water to a 5-gallon demijohn when you know that load will increase. However, frequently rebooting a server is simply not an option in today’s world of constant availability, so most capacity management is currently done by adding servers, rather than resizing them.
There are, however, many challenges with this traditional horizontal scaling approach of running multiple server instances behind a load balancer. The need to spin up extra servers to handle spikes in load means greater complexity for those who have to manage the infrastructure, greater cost in having to scale up by an entire server at a time, and poor performance when load changes suddenly and extra servers can’t be started quickly enough. Since computing power is provisioned in these large steps, but load varies dynamically and continuously, enterprises are frequently paying to keep extra resources on standby just in case a load spike occurs. For example, if you have an 8GB traditional cloud server that is only running 2GB of software at present, you are still paying for 8GB of provisioned capacity. Industry figures show that typical cloud servers may carry 50 per cent or more of expensive - but idle - capacity on average over a full 24/7 period.
The latest developments in the Linux kernel present an interesting alternative to this approach. New capabilities of the kernel, specifically namespaces and control groups, have enabled the recent rise of containerisation for Linux cloud servers in competition with traditional virtualisation.
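Control groups are what make live resizing possible: a container’s resource limits are just values in the cgroup filesystem, which can be rewritten while the container runs. A minimal sketch, assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup and an invented container cgroup path (on the cgroup v1 systems common at the time, the memory file was memory.limit_in_bytes rather than memory.max):

from pathlib import Path

# Assumed cgroup v2 mount point and an invented container cgroup name.
CGROUP = Path("/sys/fs/cgroup/machine/demo-container")

def set_memory_limit(limit_bytes: int) -> None:
    """Resize the container's memory ceiling in place - no reboot needed."""
    (CGROUP / "memory.max").write_text(str(limit_bytes))

def set_cpu_weight(weight: int) -> None:
    """Adjust the container's relative CPU share (cgroup v2 accepts 1-10000)."""
    (CGROUP / "cpu.weight").write_text(str(weight))

# Grow the running container from 2GB to 8GB as load rises...
set_memory_limit(8 * 1024**3)
# ...and shrink it back when the spike passes, paying only for what is used.
set_memory_limit(2 * 1024**3)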
InfoBurst Load balancing - think of the process as tapping a series of containers...
Container-based isolation
Container-based isolation, such as Linux Containers (LXC), Docker and Elastic Containers, means that server resources can be fluidly apportioned to match the load on the instance as it happens, ensuring cost-efficiency by never over- or under-provisioning. Unlike traditional virtualisation, containerised Linux cloud servers are not booted at a fixed size; instead, individual servers grow and shrink dynamically and automatically according to load while they are running.
Naturally, there are certain provisos to this new technology. Firstly, as it currently stands, a Linux host can only run Linux-based cloud servers. Also, the benefit of not needing a load balancer at all is most relevant to servers that can scale within the resources of a single large physical host. Very large systems that need to scale beyond this will still require load-balanced clustering, but can still benefit from vertical scaling of all of the servers in that cluster.
Conclusions
Vertical scaling of containerised servers can therefore handle varying load with no need to pre-estimate requirements, write API calls or, in most cases, configure a cluster and provision a load balancer. Instead, enterprises simply pay for the resources they use, as and when they use them. Going back to our analogy, this means you simply turn the tap on at the Linux host’s reservoir of resources. This is a giant leap forward in commoditising cloud computing and takes it closer to true utilities such as gas, electricity and water. www.elastichosts.co.uk
CASE STUDY
Deutsche Telekom taps into the cloud
USING OPENSTACK IN AN ALL-IP ENVIRONMENT
Axel Clauberg explains how OpenStack has been the key to a new all-IP triple play network offering... By Axel Clauberg, Vice President Aggregation, Transport, IP and Fixed Access, Deutsche Telekom
Introduction
Deutsche Telekom is piloting TeraStream, an all-IP network that delivers triple play and other services from the cloud, as a model for next-generation operator networks. TeraStream is also a proving ground for software-defined networking (SDN) and network functions virtualisation (NFV), as Deutsche Telekom looks to automate and orchestrate cloud services to launch new revenue-generating services and adapt to customer needs more quickly. Deutsche Telekom has partnered with A10 Networks to develop a carrier-grade, IPv4-over-IPv6 ‘softwire’ solution as a virtualised network function, enabling Deutsche Telekom to differentiate and scale cloud services. A10 Networks’ software-based and API-driven architecture, commitment to open standards like OpenStack, and willingness to create innovative solutions were key to helping Deutsche Telekom develop what is widely regarded as one of the most innovative service provider networks today.
The challenge
• Build a new, elastically scalable model for the core central-office data centre, optimised for performance, low latency and cost
• Deliver IPv4 services to customers in a native IPv6 network
• Automatically provision IPv4 and other L4-7 services quickly and efficiently
• Architect in compliance with core ETSI NFV documents
• Maintain the prime directive of simplicity and openness
The results
• Increased business agility with virtual carrier-grade networking services and pay-as-you-go licensing based on A10 Networks’ cloud services architecture
• Differentiated services on a per-subscriber basis
• Reduced time-to-deploy for the IPv4-over-IPv6 softwire service, with highly responsive partners
Deutsche Telekom TeraStream virtualises IPv4 services with vThunder CGN
Hyper-connected
Today’s hyper-connected world has not been kind to service providers. The demand for broadband has exploded, as customers want always-on connectivity for work and play, but don’t want to pay a premium for their growing bandwidth consumption. In fact, fierce competition among traditional telcos, cable operators and mobile operators is driving ARPU (Average Revenue Per User) lower and lower. Capturing new market growth, such as over-the-top (OTT) video and cloud services, requires innovation and speed. Yet many service providers are hampered by the complexity of their networks, which drives up lead time and cost, while their more nimble competitors and OTT service providers deliver services that are faster, cheaper and better. Traditional service delivery times, which require weeks or months to configure using conventional networking technologies, are no longer competitive.
Innovation and agility
Deutsche Telekom is in the vanguard of this change. As a leader in next-generation operator networks, Deutsche Telekom is piloting TeraStream, an all-IP cloud-enabled network, at Hrvatski Telekom in Croatia. In TeraStream, Deutsche Telekom says it has re-imagined the network to deliver all services, including voice, IPTV and Internet access, as cloud services that are provisioned on demand. Deutsche Telekom has taken bold steps to fundamentally change how it delivers new services faster, at a lower cost and with a better user experience. TeraStream is an integrated packet-optical network that runs IPv6 in the core and is built on an infrastructure cloud model. TeraStream has drastically simplified the network architecture and embraces the concepts of
SDN (Software-Defined Networking) and NFV (Network Functions Virtualisation), including software appliances, COTS (Commercial-Off-The-Shelf) hardware, and automated provisioning and service orchestration. “We designed TeraStream as an architecture that breaks many of the rules on the operator side,” said Axel Clauberg, Vice President of Aggregation, Transport, IP and Fixed Access at Deutsche Telekom AG. “The attitude of ‘things-were-always-done-this-way’ doesn’t exist here. We questioned all layers and all protocols in today’s network, and asked ‘how would you run an efficiently managed IP network moving forward?’ We realised that if we truly wanted to change our cost base, we needed to change the model,” he explained.
TeraStream is an open multi-vendor network, which allows for greater innovation and avoids vendor lock-in. “It is really key for operators to build a foundation based on an open platform,” said Clauberg. “We don’t want a dependency on a single vendor in our critical infrastructure.”
TeraStream uses OpenStack for cloud orchestration, allowing it to control the compute, storage and network resources in its data centres, while empowering customers to provision resources easily. TeraStream virtualises network functions so they can be chained together to create customised communications services quickly and as needed.
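For a sense of what on-demand provisioning through OpenStack looks like, the sketch below uses the openstacksdk Python library to boot a server. The cloud name and the image, flavour and network IDs are placeholders; this is a generic illustration, not a description of TeraStream’s actual orchestration code.

import openstack

# Connect using credentials for a cloud defined in clouds.yaml;
# "mycloud" is a placeholder name, not TeraStream's configuration.
conn = openstack.connect(cloud="mycloud")

# Boot a server on demand; the UUIDs below are placeholders.
server = conn.compute.create_server(
    name="tenant-service-vm",
    image_id="IMAGE_UUID",
    flavor_id="FLAVOR_UUID",
    networks=[{"uuid": "NETWORK_UUID"}],
)

# Block until the instance is ACTIVE; it can then be chained into a
# service path alongside other virtualised network functions.
server = conn.compute.wait_for_server(server)
print(server.status)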
Figure 1: TeraStream is a model for next-generation operator networks – an IPv6 network that’s built on an infrastructure cloud model.
Virtualising network functions
As an IPv6 network, TeraStream does not have native support for IPv4, yet it must still deliver IPv4 as a service to its customers to support legacy applications. “There is an expectation that IPv4 traffic will go down significantly by the end of the decade, but we’ll need to deliver that function for some time,” said Clauberg. “Producing IPv4 as a service is ideal, because we can react based on our current load and we don’t need to drastically overprovision the way you might in a physical appliance scenario.”
The TeraStream team looked for a partner that could drive a scalable, virtualised softwire encapsulation service in its data centres. There are multiple ways to transport IPv4 traffic over IPv6.
“TeraStream is an open multi-vendor network, which allows for greater innovation and avoids vendor lock-in.”
The team considered Mapping Address over Port (MAP) as well as Lightweight 4over6 (LW4o6), an emerging IETF standard that extends Dual-Stack Lite (DS-Lite). In DS-Lite, address translation is done at the operator; LW4o6 moves this translation to the customer premises equipment. The team decided that the LW4o6 approach would scale more efficiently and allow tenants to be managed individually, as the sketch below illustrates.
The search for a virtualised softwire solution led the TeraStream team to A10 Networks. “We were looking for a partner who could develop LW4o6 softwires and prove that it works,” said Clauberg. “We felt there was common ground with A10 Networks,” he added. A10 moved quickly to implement LW4o6 in its Thunder Series CGN, and TeraStream deployed vThunder as a virtual service. With vThunder, TeraStream has a high-performance, highly transparent and scalable solution for its customers, which is delivering a strong return on investment.
The Thunder CGN product line is part of the A10 aCloud Services Architecture, which enables cloud operators to dynamically provision Layer 4-7 tenant services while improving agility and reducing cost. In addition, aCloud on-demand licensing helps operators provide cloud services consistent with the cloud consumption model. The aCloud Services Architecture integrates with OpenStack, SDN network fabrics and cloud orchestration platforms, so operators can dynamically deliver application and security services and policies per tenant. Automation through OpenStack and integration with aCloud on-demand licensing make it possible to turn up new services for customers as they are needed, and tear them down once they are no longer needed.
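The per-tenant management that made LW4o6 attractive comes from its binding table: the operator-side element maps each subscriber’s IPv6 address to a shared public IPv4 address plus a restricted port set. A toy Python sketch of that lookup, with invented addresses and port ranges - not A10’s implementation:

from typing import Optional

# Toy LW4o6 binding table: each subscriber (keyed by IPv6 address)
# owns one public IPv4 address plus a restricted range of ports.
BINDINGS = {
    "2001:db8::a1": ("198.51.100.7", range(1024, 2048)),
    "2001:db8::a2": ("198.51.100.7", range(2048, 3072)),
}

def lookup_subscriber(ipv4_addr: str, port: int) -> Optional[str]:
    """Map an inbound IPv4 packet to the subscriber's IPv6 tunnel endpoint."""
    for ipv6_addr, (public_v4, ports) in BINDINGS.items():
        if public_v4 == ipv4_addr and port in ports:
            return ipv6_addr   # encapsulate towards this softwire
    return None                # no binding: drop the packet

print(lookup_subscriber("198.51.100.7", 2100))  # -> 2001:db8::a2

Because two subscribers can share one IPv4 address with disjoint port sets, the operator conserves scarce IPv4 addresses while still metering and managing each tenant individually.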
A10 tuned vThunder to use LW4o6 and deliver optimal performance, scalability and automation, which allows TeraStream to scale elastically to support more customers and deliver a better experience. “When you virtualise a network function coming from hardware, there is a lot of potential for optimisation and automation,” said Clauberg. “A10 was very helpful in optimising the performance so we could serve our customers without burning hardware resources,” he added.
Clauberg went on to say that the IPv4-over-IPv6 softwire is the first example of a high-volume, data-plane-oriented network function being virtualised. “When people talk about NFV today, they are focusing on the control plane, not the data plane. But if we truly want to change our cost basis, we have to look at virtualising network services that also touch the data plane,” he explained.
A business model built for the cloud
TeraStream is taking advantage of A10’s pay-as-you-go licensing model so it can offer on-demand cloud services to customers on a subscription basis. With this licensing model, TeraStream can offer and deliver IPv4 and other advanced L4-7 networking tenant services with automated metering, reporting, billing and licence management, as is necessary in a cloud environment. “A10’s pay-as-you-go licensing is key,” said Clauberg, adding that a flexible licensing scheme is a win-win, because it makes the vendor profitable and it makes us profitable.
Figure 2: TeraStream is a proving ground for network functions virtualisation. It uses Lightweight 4o6 softwires to elastically scale the delivery of IPv4 traffic to customers.
About Deutsche Telekom
Deutsche Telekom is one of the world’s leading integrated telecommunications companies, with over 142 million mobile customers, 31 million fixed-network lines and over 17 million broadband lines (as of December 31, 2013). The group provides fixed-network, mobile communications, Internet and IPTV products and services for consumers, and ICT solutions for business and corporate customers. The company is present in around 50 countries and has approximately 229,000 employees worldwide. The group generated revenue of 60.1 billion euros in the 2013 financial year - over half of it outside Germany.
About A10 Networks
A10 Networks is a specialist in application networking, providing a range of high-performance application networking solutions that accelerate and secure data centre applications and networks for thousands of the largest enterprise, service provider and hyper-scale web providers around the world. The company’s products are built on its proprietary Advanced Core Operating System (ACOS), a platform of advanced networking technologies designed to deliver substantially greater performance and security. A10 Networks’ software-based ACOS architecture also provides the flexibility that enables the company to offer additional products to solve a growing array of networking and security challenges arising from increased Internet, cloud and mobile computing. www.a10networks.com www.telekom.com
CLOUDCOMPUTING WORLD
CCW is the UK’s first digital publication totally dedicated to the subject of cloud computing.
CCW reaches an audience of over 15,000 individual subscribers
on a bi-monthly basis, delivering them up-to-date information on this fast-paced subject, enabling them to use the processing power of the cloud and its unlimited opportunities for collaboration to enhance and grow their businesses.
CCW - The Format
CCW is fully interactive and will be available on all major electronic devices from the first issue. Thanks to the use of the digital format, content in the publication will be freed from the two dimensions of print and include rich media that readers will not find anywhere else. In this context, advertisers and editorial contributors will be able to present content in a rich media format. Put simply, this means that content submissions will move beyond the printed page and into the realm of video and audio. We believe this offers those involved a much greater opportunity to engage, entertain and inform our readers. CCW will also deliver advertisers real-time and identifiable metrics, enabling them to calculate their ROI and identify where response comes from.
For Editorial Enquiries Steve Gold steve@lgnmedia.co.uk 0114 266 3063
For advertising enquiries Ian Titchener ian@lgnmedia.co.uk 01353 644081
www.cloudcomputingworld.co.uk 26 St Thomas Place, Cambridge Business Park, Ely, Cambridgeshire CB7 4EX
01353 644 081
ISO 9001 | ISO 14001 | ISO 27001 | PCI DSS LEVEL 1
IS YOUR DATA IMPORTANT TO YOUR BUSINESS? THEN IT’S TIME TO MOVE INTO THE GATEHOUSE DATA CENTRE
At the Gatehouse Data Centre you can take advantage of our award winning Colocation Services. The data centre provides you with a secure, efficient and cost effective place to house your business IT equipment, whilst allowing you to retain control over your IT environment.
SECURE DATA CENTRES TRUSTED ADVICE
DATA CENTRE CONSULTANCY | COLOCATION | OPERATION | MIGRATION
0845 251 2255 info@migsolv.com www.migsolv.com