CCi Issue 23


Issue 23

Cloud Computing Intelligence

www.cloudcomputingintelligence.com




EDITORIAL DIRECTOR Marcus Austin, t. +44 (0)7973 511045 e. ma@cloudcomputingintelligence.com

From the editor

PUBLISHING DIRECTOR Scott Colman, t. +44 (0)7595 023 460 e. scott@futurepublishingsolutions.com
SPECIAL FEATURES EDITOR Graham Jarvis, t. +44 (0)203 198 9621 e. graham@futurepublishingsolutions.com

Welcome to CCi Issue 23

There are some really interesting predictions this issue, and some general themes emerged as common to all. The most common is that 2016 is the year when we’re finally going to see hybrid cloud start to take off. For some years we’ve been talking about the potential of hybrid cloud, and how businesses need the security and low latency that a private cloud can bring, combined with the ability to burst to the public cloud should they need the capacity. The problem has been that while it’s easy to talk about hybrid cloud, actually doing it requires a set of tools that just weren’t ready for the mass market. However, in 2016, according to our experts, we will start to see enterprise-ready tools that will make producing hybrid solutions a simple process. Or, at least, it will if all our experts are right.

I know we’re looking forward, but I thought I’d indulge in a little look back at just what a difference cloud can make to a business when it’s used correctly. A friend recently recounted a problem he’d had with a customer. The customer was getting a little frustrated as the site kept rejecting the short video he was trying to upload. My friend was slightly perplexed, as the file size limit was more than enough for short videos. So he asked the customer what size the video was. “It’s a broadcast-quality 800MB file,” came the reply.

It took just two minutes to make the changes, and now the file size limit is 2GB; it could just as easily have been 20GB or 200GB. The beauty is that neither he nor the client has had to compromise. Because everything is hosted in the cloud, my friend’s business hasn’t had to shell out for any new infrastructure to accommodate the larger file sizes, and the customer has got their solution; while they pay a little more for storage than before, they’re happy.

The ability to be flexible and agile is the real secret of cloud, and it’s something that isn’t pushed enough, and that probably includes even here in CCi.

Happy New Year

Marcus Austin
Editorial Director, CCi Magazine
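If you’re wondering what a two-minute change like that can look like, here’s a minimal sketch, assuming — purely hypothetically, the real stack wasn’t disclosed — a Python/Flask app fronting the uploads on cloud-hosted infrastructure:

```python
# A minimal sketch of that kind of two-minute change; the Flask front
# end and paths are illustrative assumptions, not the actual site.
from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)

# The one-line change: raise the cap to 2 GB. It could just as easily
# be 20 GB or 200 GB; larger requests are rejected with HTTP 413.
app.config["MAX_CONTENT_LENGTH"] = 2 * 1024**3  # bytes

@app.route("/upload", methods=["POST"])
def upload():
    video = request.files["video"]
    video.save("/srv/uploads/" + secure_filename(video.filename))
    return "uploaded", 201
```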

WEB & DIGITAL Apal Goel, t. +91 (0)97 171 6733 e. apal@futurepublishingsolutions.com
CIRCULATION & FINANCE MANAGER Emma Colman, t. +44 (0)7720 595 845 e. emma@futurepublishingsolutions.com
DESIGN & PRODUCTION Jonny Jones, t. +44 (0)7803 543 057 e. jonny@futurepublishingsolutions.com

Editorial: All submissions will be handled with reasonable care, but the publisher assumes no responsibility for the safety of artwork, photographs or manuscripts. Every precaution is taken to ensure accuracy, but the publisher cannot accept responsibility for the accuracy of information supplied herein or for any opinion expressed.

Subscriptions: CCi Magazine is free to qualified subscribers in the UK and Europe. To apply for a subscription, or to change your name and address, go to www.cloudcomputingintelligence.com, click on ‘Free Subscription – Register Now,’ and follow the prompts.

Reprints: Reprints of all articles in this issue are available (500 minimum). Contact: Emma Colman +44 (0)7720 595 845.

No responsibility for loss occasioned to any person acting or refraining from acting as a result of material in this publication can be accepted. Cloud Computing Intelligence (CCi) Magazine is published 10 times in 2014 by Future Publishing Solutions Ltd, and is a registered trademark and service mark of Future Publishing Solutions. Copyright 2014 Future Publishing Solutions Ltd. All rights reserved. No part of this publication may be reproduced or used in any form (including photocopying or storing it in any medium by electronic means, and whether or not transiently or incidentally to some other use of this publication) without prior permission in writing from the copyright owner, except in accordance with the provisions of the Copyright, Designs, and Patents Act (UK) 1988, or under the terms of a licence issued by the Copyright Licensing Agency, 90 Tottenham Court Road, London, W1P 0LP, UK. Applications for the copyright owner’s permission to reproduce any part of this publication should be forwarded in writing to: Permissions Department, Future Publishing Solutions Ltd, First Floor Offices, 2-4 Swine Market, Nantwich, Cheshire, CW5 5AW.

Warning: The doing of an unauthorised act in relation to copyright work may result in both a civil claim for damages and criminal prosecution.

First Floor Offices, 2-4 Swine Market, Nantwich, Cheshire, CW5 5AW • +44 (0)2031 989 621 • info@futurepublishingsolutions.com • www.futurepublishingsolutions.com


WHAT’S BEHIND YOUR CLOUD?

Energy efficient • Multi-layer security • Tier III+ resilience • Carrier-neutral connectivity

A cloud is only as reliable as the data center behind it. As the popularity of cloud-based services continues to soar, so does the importance of having the right infrastructure in place to sustain it. Our data centers provide multi-layer security, diverse power, carrier-neutral connectivity and Tier III+ levels of resilience – all designed to support exceptional cloud services. Visit our website to find out more and make sure your cloud is rock solid.

www.zeniumdatacenters.com



Contents


News 4-12

Featured


14 How to increase your network agility in 5 steps
16 How to make IT keep up with the speed of business
20 Disaster Recovery: Reducing the business risk at distance
24 Big data: from investment to business as usual
26 Solving your storage challenges in a virtualised environment
32 StaaS and deliver: Welcoming cloud economics to enterprise storage
34 Cloud security and your cloud provider





Dark DDoS attacks to rise in 2016


Distributed Denial of Service attacks used as a smokescreen to distract victims from other hidden activity, such as network infiltration and data theft, are set to increase in 2016. Corero Network Security predicts a rise in DDoS attacks being used as a smokescreen to distract victims – aka ‘Dark DDoS’ – with ransom demands associated with DDoS attacks tripling in 2016.

According to the 2016 predictions in Corero Network Security’s latest Trends and Analysis report, attackers are continuing to use sub-saturating DDoS attacks with increasing frequency, and with shorter attack durations, to distract IT teams by causing network disruptions. The vast majority of DDoS attacks experienced by Corero customers during 2015 were less than 1Gbps, and more than 95% of these attacks lasted for 30 minutes or less. Corero’s Security Operations Centre has also recorded a sharp increase in hackers targeting their customers with

Bitcoin ransom demands. During October 2015, 10% of Corero’s customer base faced extortion attempts which threatened to take down their websites, or to continue an attack on them, unless a ransom was paid. If the volume of DDoS attacks continues to grow at the current rate of 32% per quarter, according to Corero’s latest Trends and Analysis report, the proportion of customers receiving Bitcoin ransom demands could triple to 30% by the same time next year. The growth is being fuelled by the increased automation of DDoS attacks, which allows cyber criminals to enact hybrid, multi-vector attacks and expand their reach on an industrial scale. The Armada Collective cyber attackers recently claimed that their DDoS attacks can be as powerful as one terabit per second, and the increasing industrialisation of DDoS attacks could soon bring even larger ones. Corero’s Security Operations Centre is already seeing a rise in automated DDoS

tools being deployed. In these situations, attackers leverage one attack technique, such as a DNS flood, and if unsuccessful automatically enact a second, such as a UDP flood, continuing to cycle through different attack techniques automatically until their target’s internet service is successfully denied.

Dave Larson, COO at Corero Network Security, explains: “The highly sophisticated, adaptive and powerful Dark DDoS attack will grow exponentially next year as criminals build on their previous successes of using DDoS attacks as a distraction technique. Traditional approaches to DDoS defence simply cannot catch these sophisticated attacks – only by using an always-on, inline DDoS mitigation solution that automatically removes the threat and provides real-time visibility will IT teams be able to harden their security perimeter to deal with this emerging security threat.” W. www.corero.com
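Corero doesn’t publish its detection internals, but the ‘sub-saturating, short-duration’ pattern the report describes can be illustrated with a toy monitor. The link capacity, thresholds and one-reading-per-second sampling below are illustrative assumptions, not Corero parameters:

```python
from collections import deque

LINK_GBPS = 10.0     # assumed link capacity, not a Corero figure
WINDOW_SECS = 60     # sliding one-minute window, one reading per second

class SubSaturatingDetector:
    """Flags short bursts that disrupt service without saturating the
    link -- the sub-1Gbps, under-30-minute pattern the report describes."""

    def __init__(self, baseline_gbps: float):
        self.baseline = baseline_gbps
        self.samples = deque(maxlen=WINDOW_SECS)

    def observe(self, gbps: float):
        self.samples.append(gbps)
        elevated = [s for s in self.samples if s > self.baseline * 3]
        # Traffic well above baseline yet far below line rate is exactly
        # what a purely volumetric threshold tends to miss.
        if elevated and max(elevated) < LINK_GBPS * 0.5:
            return f"alert: {len(elevated)}s elevated, peak {max(elevated):.2f} Gbps"
        return None
```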

CTERA adds cloud-to-cloud backup


Cloud Server Data Protection for Enterprise CloudOps allows businesses to back up and restore public cloud applications to other public clouds and on-premises infrastructure, including Amazon Web Services, Microsoft Azure, IBM Cloud, and OpenStack.

Backup and recovery business CTERA Networks today announced a fully automated and secure in-cloud and cloud-to-cloud data protection solution that allows enterprises to protect any number of servers across any cloud infrastructure. CTERA Cloud Server Data Protection is designed for enterprise Cloud Operations (CloudOps) teams to back up their public cloud solutions to a choice of the same public cloud, another public cloud or back to their data centre. In an interview with CCi, Jeff Denworth, SVP Marketing at CTERA Networks, explained that the solution has been up and

running since late summer and has been used by many of CTERA’s Fortune 2000 clients. It is a response to requests for “in-cloud data protection” from customers who are running as many as 30,000 servers on AWS and need a “continuous data-protection solution.” The solution is based on the backup capabilities developed as part of the CTERA Enterprise File Services Platform, and adds an additional cloud layer to allow backup of in-cloud workloads, all managed from a single pane of glass management console on the cloud infrastructure of an organisation’s choice. It works with all the major cloud vendors, including Amazon Web Services, Microsoft Azure, IBM Cloud, OpenStack and more, and enables organisations to be platform and hypervisor agnostic.

The solution provides application-consistent backup and granular, file-level recovery of Microsoft SQL Server, SharePoint, and other applications running in any cloud infrastructure. This means that if you have an application running on an instance of Windows Server 2012 on AWS, you can back up and restore the application to an Azure Windows Server 2012 instance, as long as the version is compatible; with SharePoint and SQL Server you can restore to a different version. CTERA also adds a multi-tenant data protection management system capable of protecting thousands of tenants and tens of thousands of servers from a single console, where data is encrypted by the user at the source to eliminate the risk of an administrator recovering sensitive data to the wrong user. W. www.ctera.com
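That source-side encryption point is worth a sketch. The following is a conceptual illustration only — not CTERA’s implementation — using the well-known Python cryptography library; `upload` and `download` stand in for whichever cloud SDK (AWS, Azure, OpenStack…) is in use:

```python
# Data is encrypted *before* it reaches any cloud, so an administrator
# restoring to the wrong tenant recovers only ciphertext.
from cryptography.fernet import Fernet

def backup(path, upload, key):
    """Encrypt locally, then hand ciphertext to any cloud SDK."""
    with open(path, "rb") as f:
        token = Fernet(key).encrypt(f.read())
    upload(path + ".enc", token)   # only ciphertext ever leaves the source

def restore(name, download, key):
    return Fernet(key).decrypt(download(name))

key = Fernet.generate_key()        # held by the tenant, not the cloud admin
```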


Ready for NOW? Introducing a new approach to data centre services that gives you the agility and flexibility you need to take control of Now.

Scale up and scale down • Only pay for the power you need • Platform flexibility

Call us on +44 (0)20 7079 2990. For more information visit www.infinitysdc.com



Red Hat CloudForms 4 adds Microsoft Azure support for hybrid cloud


Red Hat’s management platform CloudForms gets a significant update with support for Microsoft Azure, new advanced container management and enhanced self-service.

Red Hat has added a significant number of new features to its CloudForms hybrid cloud management solution. Based on the open source ManageIQ project, the new Red Hat-supported solution adds support for Microsoft’s Azure public cloud, as well as enhanced support for containers. Speaking to CCi, Red Hat’s General Manager, Cloud Management Strategy, Alessandro Perilli, called the new version a “massive release” aimed at allowing businesses to take different workloads and “scale them up and scale them out on to different platforms.” As Perilli points out, the problem with current cloud solutions is that management is a massive challenge, usually involving working

on “3-4 different platforms, each with its own management console, creating silos of management.” With CloudForms the aim is to allow businesses to use a “single pane of glass” to build and manage bi-modal, stand-alone and hybrid solutions at the Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) layers, and on premise. The support for Azure comes on top of the previous version’s support for OpenStack and AWS and, according to Perilli, is the result of strong demand from Red Hat customers to add the public cloud solution; it also follows on from Red Hat and Microsoft’s recent partnership announcement. Likewise, the support for containers follows requests from customers for “greater visibility” into their containers, although this is limited to Kubernetes-based containers, including

workloads running in OpenShift Enterprise by Red Hat and the infrastructure hosting OpenShift. Additionally, CloudForms gets an improved dashboard, and charts that enable users to better see the relationships between different cloud platforms and container hosts running in production. Red Hat CloudForms now has support for Amazon Web Services, Hyper-V, Microsoft Azure, OpenShift by Red Hat, OpenStack, Red Hat Enterprise Virtualization and VMware. Red Hat CloudForms 4 is the third release based on ManageIQ, and since the last release the community has added new contributors, including Booz Allen Hamilton, which contributed code to integrate Project Jellyfish, its open source cloud broker. W. www.redhat.com
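The “single pane of glass” idea reduces, conceptually, to one interface with several provider back ends. A hypothetical sketch follows — CloudForms itself is built on the Ruby-based ManageIQ, and these adapter classes are invented purely for illustration:

```python
# One console view across clouds instead of one console per cloud.
from abc import ABC, abstractmethod

class Provider(ABC):
    @abstractmethod
    def list_instances(self) -> list[str]: ...
    @abstractmethod
    def scale(self, workload: str, replicas: int) -> None: ...

class AzureAdapter(Provider):
    def list_instances(self): return ["web-01", "web-02"]   # stubbed
    def scale(self, workload, replicas): print(f"azure: {workload} -> {replicas}")

class OpenStackAdapter(Provider):
    def list_instances(self): return ["db-01"]              # stubbed
    def scale(self, workload, replicas): print(f"osp: {workload} -> {replicas}")

def dashboard(providers: dict[str, Provider]) -> None:
    for name, p in providers.items():
        print(name, p.list_instances())

dashboard({"azure": AzureAdapter(), "openstack": OpenStackAdapter()})
```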

Organisations say yes to cloud ERP


New report finds senior finance professionals and large organisations are open to ERP in the cloud, with almost a third already signed up.

A survey commissioned by Hitachi Solutions Europe has revealed that 31% of organisations have moved all or part of their Enterprise Resource Planning (ERP) systems to the cloud, or are in the process of doing so. Of the other 69% of respondents, who have not moved their ERP systems to the cloud, almost half (44%) said they would consider moving in the future, and of that 44%, two thirds (67%) said they would contemplate moving in the next two years. Respondents were asked what they would do differently if they were migrating their ERP to the cloud again. Recommendations from

survey respondents included making “a more informed choice on the actual providers” and “look[ing] more seriously at third party provision”. Looking at the responses from larger organisations (those with over 500 employees) suggests that it is no longer just smaller organisations taking a cloud-based approach to ERP: 27% of larger organisations surveyed said they have moved all or part of their ERP to the cloud, or are in the process of doing so. Tim Rowe, Director at Hitachi Solutions Europe, said: “While in some areas movement to the cloud has been quite rapid, ERP, because it is seen as business critical and inherently complex, has been slower to move to the cloud. Now, however, we are starting to see a shift as the benefits start to outweigh the perceived risks.”

Of the 22% of large business respondents who have already moved all or part of their ERP to the cloud, the main benefit experienced is easier access to information, with 30% ranking this as the number one benefit, followed by reduced operating costs and ERP performance. In addition, 80% of this group rated their experience of using cloud-based ERP as excellent or good. All respondents were asked to rank what they perceived to be the main risks associated with moving ERP to the cloud: 38% ranked data security and privacy as either their first or second greatest risk. This was followed by connectivity and dependency on a third party provider, ranked by 35% of respondents as the first or second greatest risk. W. uk.hitachi-solutions.com




IT worried about IoT data volumes and network throughput


The move to IoT is inevitable, but IT teams fear that new data from connected devices will swamp networks and database systems by next year unless action is taken.

A new report compiled by independent analysts Quocirca finds that businesses understand the benefits of the Internet of Things (IoT) but are concerned about data volumes, which will start to overwhelm their networks in 2016, and about their ability to secure and analyse the data collected. The report found that while a small number (14%) of those polled think the IoT is overhyped and won’t affect them, the overwhelming majority say the IoT is already impacting their organisations (37%) or soon will (45%). When the survey looked at IoT uptake, there were some real differences between take-up driven by IT and take-up driven by the line of business (LoB).

Where IT was in the driving seat the average take-up was high (29%), with 50% of transport businesses and 30% of retail businesses already using IoT. The overall concerns over IoT were principally around three areas: networks being overwhelmed by the data volume produced, the inability to analyse the data produced, and a lack of standards in IoT. Businesses are, however, taking action to address these concerns, including deploying network edge processing to reduce data at the core (a simple illustration follows this story), new business and operational intelligence tools, and integration with enterprise applications such as ERP systems. In general, the report identified four broad security groups:

• Data protection: many devices gather sensitive data, so its transmission, storage and processing need to be secure, for both business and regulatory reasons.
• Expanded attack surface: more IoT deployments mean more devices on networks for attackers to probe as possible entry points to an organisation’s broader IT infrastructure.
• Attacks on IoT-enabled processes: hacktivists wanting to disrupt a given business’s activities will have more infrastructure, devices and applications to target.
• Botnet recruitment: poorly protected devices may be recruited to botnets.

When it comes to securing IoT there are two choices: secure the endpoints, or secure at the aggregator level. Nearly half (47%) of respondents are already scanning IoT devices for vulnerabilities, and another 29% are planning to do so. W. www.quocirca.com
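As a rough illustration of the edge-processing idea mentioned above — aggregating at the gateway so only summaries cross the core network — here is a sketch; the field names and the one-minute window are assumptions, not from the Quocirca report:

```python
# Aggregate raw sensor readings at the gateway and forward one summary
# record per window instead of hundreds of raw samples.
import statistics
import time

def edge_gateway(read_sensor, send_to_core, window_secs=60):
    readings = []
    deadline = time.monotonic() + window_secs
    while True:
        readings.append(read_sensor())        # e.g. many samples per minute
        if time.monotonic() >= deadline:
            send_to_core({                    # one record, not hundreds
                "n": len(readings),
                "mean": statistics.fmean(readings),
                "max": max(readings),
            })
            readings.clear()
            deadline = time.monotonic() + window_secs
```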

Verizon promises seamless workload transfer between public clouds


Verizon has added IBM SoftLayer as the latest partner to its Secure Cloud Interconnect service, enabling businesses to transfer workloads between a choice of clouds, data centre providers and on-premise systems.

The addition of IBM SoftLayer to Verizon’s Secure Cloud Interconnect (SCI) system offers organisations the ability to swap workloads between eight public cloud providers using software-defined networking (SDN). With SoftLayer added, SCI now offers connections between eight cloud providers, the others being Amazon Web Services, Google Cloud Platform, HPE Rapid Connect, Microsoft ExpressRoute for Office 365, Microsoft Azure ExpressRoute, Microsoft

Azure Government and Verizon’s own public cloud. Additionally, there’s access to three data centre providers (CoreSite, Equinix, and Verizon) at more than 50 global locations in Europe, the Americas, and the Asia-Pacific region. Verizon’s Secure Cloud Interconnect service is advertised as a “secure, flexible private link that enables workloads to move seamlessly between clouds.” It claims to give businesses the option to store data in a variety of settings, including a traditional IT environment, a dedicated on- or off-premises cloud, and a shared off-premises cloud, making it easier to scale and meet business requirements. Because the service is based on SDN, businesses also get additional control and agility over their connectivity beyond that

offered by services such as Amazon’s own Direct Connect and Azure’s ExpressRoute, including consumption-based bandwidth, pre-provisioned on-demand resources, controlled application performance and varying classes of service. The service comes in three different options – Cloud Exchange, Network Service Provider (NSP), and Colocation – and is currently only available to connect to IBM SoftLayer data centre sites located in Dallas and San Jose in the US, and Tokyo and Sydney in the Asia-Pacific region, with connection to two additional sites in Europe (we’re guessing Paris and London) promised for the beginning of 2016. W. www.verizonenterprise.com/uk



Why IT transformation just isn’t happening fast enough


Seven in 10 say their companies are “just getting started” or have not even begun down the road to digital transformation.

The latest survey by the Business Performance Innovation (BPI) Network shows businesses aren’t adopting digital transformation initiatives quickly enough, with the majority of businesses only just taking the most basic steps on the road to IT transformation because of major failings in the planning, resource allocation, staffing, collaboration and financial commitments needed to fulfil that vision. Most IT leaders around the world give their businesses failing or near-failing grades for their ability to adapt and innovate around transformational new technologies. This is a significant problem, as executives globally see IT transformation as central to their future plans to be competitive. The study, entitled “Bringing Dexterity to IT Complexity: What’s

Helping or Hindering IT Tech Professionals,” is part of an ongoing initiative by the BPI Network and Dimension Data looking at the state of IT change and innovation at enterprises globally. Just a few months ago, 92% of business executives globally said in a BPI Network survey that they were making progress toward adopting modern technologies to transform their companies into dynamic digital businesses. They predicted the three areas that would benefit most from transformation would be increased agility in the face of business changes (70%), greater cost efficiencies (57%) and faster development of innovative new applications (47%). To be successful in their IT transformation, businesses need to move from a dependency on costly hardware systems located on company premises to cloud and software-driven environments that have a more flexible

cost structure, the ability to scale up to meet rising demand, and the flexibility to reshape the enterprise to meet unexpected business challenges. Yet the BPI survey shows major failings in planning, resource allocation, staffing, collaboration and financial commitments that are impeding progress. Among other complaints from IT teams are that business managers wait too long to bring them into the process (52%), don’t provide sufficient funding and resources to get the job done (48%), and then change job requirements before work can be completed (46%). IT workers also indicate that they are frequently not viewed as trusted partners in the innovation process, with more than half of respondents indicating that business leaders have a negative impression of the IT department. W. www.reinventdatacenters.com

Hewlett Packard Enterprise and Microsoft team up on Hybrid


Microsoft Azure will become the preferred public cloud partner for Hewlett Packard Enterprise hybrid infrastructure.

At the Hewlett Packard Enterprise (HPE) Discover event in London, HPE announced it would be looking to Microsoft Azure as its choice for public cloud, and that HPE would be the Microsoft “preferred partner” for Azure, providing infrastructure and services for Microsoft’s hybrid cloud offerings. Garth Fort, Microsoft’s GM for Cloud and Enterprise, speaking at Discover, said the relationship came about because “both Microsoft and HPE see hybrid very similarly.” According to Fort, the work on the relationship and the integration of Azure into HPE has taken a year. The businesses will collaborate across engineering and services to integrate HPE

and Microsoft Azure compute platforms or, as Satya Nadella, CEO of Microsoft, puts it, “blending the power of Azure with HPE’s leading infrastructure, support and services to make the cloud more accessible to enterprises around the globe.” Interestingly, while Microsoft is now a preferred partner, one of the key takeaways from the HPE session on the new partnership at Discover was a comment from Bill Hilf, SVP and GM for HP Cloud at HPE, that “there is no vendor that owns the cloud”; indeed, HPE was also keen to point out that it has “deep expertise in Amazon Web Services and OpenStack,” and made plenty of references to Cloud28+, an HPE-sponsored European answer to both AWS and Azure. Hedging bets? The first product off the back of

this relationship is a hyper-converged system with hybrid cloud capabilities: the HPE Hyper-Converged 250 (CS 250) for Microsoft Cloud Platform System Standard. The system is based on HPE ProLiant server technology and adds a dedicated connection into Microsoft Azure. The jointly engineered solution is aimed at data centres and includes an Azure management portal, allowing businesses to self-deploy Windows and Linux workloads. Built in is an Azure-powered backup and disaster recovery service, with HPE OneView for Microsoft System Center to manage the system, and with hardware and software support, installation and startup services delivered via HPE. It is available to order today. W. www.hpe.com



EU Data protection laws could mean substantial costs to move cloud data


UK businesses are unprepared for new EU data protection laws or a possible UK exit from Europe, and could end up with costs of up to £1.6 million each to move their data stored in the cloud to local data centres.

The upcoming EU Data Protection Regulation is expected to increase requirements around where business data is stored throughout Europe. Meanwhile, a UK exit from the EU could also spell stricter, more independent UK data laws for businesses to adhere to – laws UK businesses aren’t ready for, particularly as they could leave UK businesses paying up to £1.6 million each to move the data they hold outside of the UK back on to safe territory.

A new VMware survey of IT decision makers’ attitudes to, and knowledge of, forthcoming EU legislation and possible changes shows more than a third (34%) of UK business data is currently located outside of the country, with more than three quarters (76%) of businesses having at least some business-critical data residing overseas. With so much data stored offshore, almost seven in 10 businesses (69%) are concerned they may need to move their data in line with regulatory, compliance or customer requirements. The research also shows 95% of respondents use some form of cloud services to host their data (including 37% using

public cloud and 34% using hybrid cloud). Over two thirds (70%) are concerned they would need to move their data to a different cloud provider who could host it in the appropriate location if the European landscape changes. Despite the potential upheaval, half (50%) said they are yet to start making contingency plans, while only 10% are fully prepared to move their data to UK soil if necessary. 96% of organisations also admitted it would cost them a significant amount to move their data to a different location if need be, with the average cost estimated at over £1.6 million and an average timeline of three months. W. www.vmware.com/uk

OVH launches its Public Cloud service in the UK


The new public cloud service from OVH has automatic DDoS protection, European-based hosting and triple data replication, and is based on open standards.

OVH has launched its full Public Cloud service in the UK. The UK Public Cloud service is targeted at developers, system administrators and DevOps teams; it is backed by a 99.999% ‘five nines’ Service Level Agreement (SLA) for availability and reliability, and is based on the OpenStack open source cloud computing platform, of which OVH is a foundation member. The Public Cloud infrastructure comes with customisable DDoS (Distributed Denial of Service) protection against cyber-attacks, triple data replication and hosting in European-based data centres. Like its main competitors, the solution has monthly and hourly payment options. OVH offers two types of public cloud

services: Public Cloud Instances and Public Cloud Storage. Public Cloud Instances provides a choice of two types of virtual machine: RAM instances are designed for memory-hungry purposes such as running SaaS applications, creating multimedia content, or managing large databases, while CPU instances lend themselves to processing-heavy tasks including data analytics, computer simulations and managing peak server loads. The Public Cloud Storage service offers high-availability object storage, which can be used to warehouse and access large amounts of binary content for application development; in this way, software developers can avoid the hassle of having to set up NFS or FTP servers specially. Classic and high-speed storage options are also available. All of the packages also come

with access to OVH Net, OVH’s own global fibre-optic network. The network is managed using DWDM (Dense Wavelength Division Multiplexing) and is currently being migrated to 100G coherent technology, offering a total capacity of 3Tbps. “OVH Public Cloud is more than just a new service we’re launching to the market. It will form the basis of all future services on the OVH roadmap, so we’re putting our money where our mouth is,” explains Octave Klaba, CTO of OVH. “Our Public Cloud is designed to create complex hybrid architectures, combining private infrastructures with virtual machines and public instances. This is supported by a comprehensive selection of IaaS services which we believe to be unique in the industry.” W. www.ovh.co.uk
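Since the platform is OpenStack-based, its object storage can in principle be driven with the standard python-swiftclient; the endpoint, region and credentials below are placeholders for illustration, not real OVH settings:

```python
# Store and fetch an object in OpenStack Swift-style object storage.
from swiftclient.client import Connection

conn = Connection(
    authurl="https://auth.example.ovh/v3",   # placeholder Keystone URL
    user="demo", key="secret", auth_version="3",
    os_options={"project_name": "demo", "region_name": "GRA"},
)

conn.put_container("builds")                 # bucket-like container
with open("app.tar.gz", "rb") as f:
    conn.put_object("builds", "app.tar.gz", contents=f,
                    content_type="application/gzip")

headers, body = conn.get_object("builds", "app.tar.gz")  # retrieve it back
```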




CIOs are challenged by cloud complexity

CIOs are struggling to identify and implement the cloud services most suitable for their business, with many finding cloud more complex than they were led to believe.

New research from Trustmarque has examined CIOs’ views of cloud, and has found that as a result of vendor hype and differing definitions of cloud, a significant number (81%) are struggling to identify and implement the cloud services most suitable for their business. The survey, ‘The CIO Cloud Conundrum’, also revealed that two thirds (66%) of CIOs stated the complexity of existing IT infrastructure and services is a barrier to moving to cloud, with almost three quarters (74%) finding the interdependencies between different parts of their IT environment a barrier to moving certain IT services to the cloud. Nearly three quarters of CIOs also believe cloud is making data governance more complicated, with changes to the US Safe Harbor framework set to make this situation even more complex.

Trustmarque found more than three quarters of CIOs (78%) state that integrating different cloud services is a challenge. Furthermore, many current applications used by businesses have not been built with cloud in mind; as such, the research found more than two thirds (68%) of CIOs believe modernising or re-architecting certain applications will slow their journey to cloud.

“While cloud can undoubtedly deliver many benefits to businesses, ‘moving to the cloud’ is often, in reality, easier said than done. Selecting and implementing the right cloud services remains a challenge for CIOs,” said James Butler, CTO at Trustmarque. “Many CIOs

struggle to understand the differences between the many cloud options, what these offer them and how to choose – often because of vendor hype and a lack of clarity around the solutions on offer. Couple this with the complexity of existing IT environments and the need to re-architect applications to get the most from cloud, and many IT departments face hurdles on their cloud journey. By assessing the functions that can be moved to the cloud with the least disruption, CIOs can identify the ‘quick cloud wins’ and clearly demonstrate the business value needed to justify more complicated moves that involve transformation. This hybrid approach can be a way of delivering the benefits of cloud to the business rapidly, with reduced risk.” The research found almost three quarters (73%) of CIOs believe cloud services are making data governance more complicated. With recent changes to the Safe Harbor arrangements between the EU and US expected to impact many businesses, the governance environment is set to grow even

more convoluted. Furthermore, the desire of employees to use personal cloud services (such as Google Docs and Dropbox) is also posing potential information security threats: almost four in five (79%) of CIOs stated that they find it a challenge to balance the productivity needs of employees against potential security threats when it comes to authorising the use of personal cloud and file sharing applications. The sheer number of ways that organisations can procure cloud is also presenting challenges, with models including pay-per-user, pay-per-month and pay-as-you-go, through to monthly and annual subscriptions. As a result, over three quarters (76%) of CIOs stated the number of ways they can pay for cloud makes selecting the right cloud service difficult. In addition, 80% of CIOs believe existing software licensing agreements will delay them moving certain services to the cloud. W. www.trustmarque.com




Network Agility: Joy Gardham

How to increase your network agility in 5 steps

Building for the cloud is just like building a new house: it’s important to get the foundations laid and the services connected and working smoothly before attempting to build the fabric of the building. We offer five simple steps to increase your business’s agility with a flexible network.


Joy Gardham
Joy Gardham is a regional sales director for Western Europe at Brocade, with responsibility for managing and driving the sales and services teams, and partner and channel strategies. Gardham has held a number of technical sales positions with several large IT vendors, including ICS Solutions, a UK cloud services provider, and Sun Microsystems. She moved to Oracle in 2009, a year before Sun was acquired by the company, and helped lead the integration of the Sun and Oracle sales and operations teams.

BYOD, remote working and data analytics are just a few of the many capabilities that businesses across the world have integrated into their daily activities. Enterprises have increasingly demanded improved connectivity and data accessibility over the past few years, which has brought an enormous increase in the volume of data travelling across their networks. Today, these underlying technologies have reached breaking point, causing an urgent need for change in the form of a move to cloud. However, to take full advantage of this shift to the cloud, businesses need to make sure they have the right foundations in place. That means a network that allows the business to respond quickly and flexibly to its changing needs. This requires a new type of networking: a New IP network that is more intelligent, more agile and more responsive than legacy infrastructures. As capacity demands on old infrastructures continue to grow, it is becoming clear that businesses that fail to prepare for this impending rise in data will inevitably suffer the consequences of a disconnected workforce.

Therefore, as the New Year fast approaches, CIOs need to start making preparations for 2016. Here are five simple steps you could take to increase your business’s agility with a flexible network in the year ahead:

Step 1: Audit your network
It might sound obvious, but an audit is the best place to start. IDC recently projected that the majority of businesses underestimate how many devices they have in their data centre by up to 50 percent. Don’t be fooled – take time to look at what you already have and what you are going to need moving into 2016 (a starting-point sketch follows this article). This way you can ensure that your network will mirror the elasticity of your business strategy.

Step 2: Focus on application acceleration and cloud connectivity
Almost two thirds of CIOs rate providing fast deployment of new applications as an ‘extreme’ or ‘significant’ concern, and 65 percent say the same about delivering fast access to applications from multiple devices. Applications connect your workplace to the digital environment and allow your business to access, engage and


understand data and information. An inability to access applications can and will affect productivity and, ultimately, your bottom line. To overcome the threat, you need a highly automated, operationally aligned infrastructure that delivers application acceleration and secure cloud connectivity, while eliminating network downtime. By adopting a fabric network, applications can be deployed up to 90 percent faster and operational expenses reduced by 50 percent. Virtual application delivery solutions strengthen the benefits of fabric network automation by providing additional levels of agility, targeted acceleration and intelligent protection. As your colleagues are not sat next to the servers which host their applications, extending network automation and intelligence across the campus local area network (LAN) is the final step in delivering application acceleration and optimising investments in cloud services and Software-as-a-Service (SaaS).

Step 3: Virtualise to increase efficiency and reduce risk
It is estimated that 80 percent of IT budgets are spent servicing technology

which was purchased around ten years ago. Imagine what you could do if you could refocus your budget onto innovation and create a services-orientated and software-enabled IT environment to keep up with the trends of 2016. Virtualised solutions such as virtual network functions (VNFs) can provide the flexibility to scale at the speed of the business beyond what traditional hardware devices allow, and the convergence of storage and data networks can reduce cost and risk whilst retaining the technical benefits of both.

Step 4: Make the most of big data
The explosion of big data presents enormous challenges, but even bigger business opportunities. You need to be armed with reliable insight and analytics, as the demand for real-time insight is only increasing. Software-defined networking (SDN) and advanced network analytics will support your business’s investments in big data analysis platforms, while maximising the value of your IT investments. It’s a win-win. Deciding right now what investments you’ll need to make in three to five years’ time

to deliver the right IT-as-a-Service (ITaaS) model is a long bet. For many, the ultimate goal is to support analytics by providing an internal ITaaS model.

Step 5: Lead through constant innovation adoption, with elastic IT services delivery
Simply adding more of the same devices to your network is not the answer; nor is sticking with a rigid, physical, legacy approach to design while trying to deploy new flexible, application-centric technologies. Opt for a solution designed on open standards and software that will keep your network agile for tomorrow, as well as improve performance and efficiency today. Whether your business wants to improve productivity and efficiency internally, to expand, to improve supply chain management, or to deliver new differentiated services externally, the chances are that cloud services will play a major role in making that possible. By planning ahead and developing a long-term network strategy, you will be able to fully embrace the benefits of cloud computing and give your organisation the best chance of achieving its goals in 2016.
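As promised in Step 1, here is a starting-point sketch for a device census. A real audit would lean on SNMP/LLDP discovery and vendor tooling; this ping sweep merely shows how to compare live hosts on one subnet against what the asset register thinks is there (the subnet is a made-up example):

```python
# Crude device census for one subnet; Linux-style ping flags assumed.
import ipaddress
import subprocess

def sweep(cidr: str) -> list[str]:
    alive = []
    for host in ipaddress.ip_network(cidr).hosts():
        # -c 1: single probe; -W 1: one-second timeout
        r = subprocess.run(["ping", "-c", "1", "-W", "1", str(host)],
                           capture_output=True)
        if r.returncode == 0:
            alive.append(str(host))
    return alive

found = sweep("10.0.12.0/28")   # hypothetical management subnet
print(f"{len(found)} devices responded:", found)
```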



Business: Nigel Moulton

How to make IT keep up with the speed of business

In the future, 2015 could be looked back upon as the year in which modern business mentality changed its attitude towards technology and fully embraced IT.

Nigel Moulton
Nigel Moulton is EMEA CTO at converged infrastructure business VCE, responsible for VCE’s technology strategy in EMEA. Prior to joining VCE, Moulton was the EMEA CTO for Avaya. He has over 25 years’ experience in the IT and telecommunications industry, with previous roles at 3Com, D-Link and Cisco Systems.


Whilst recently formed businesses such as Airbnb, Alibaba and Uber have had technology at their core from their inception, legacy businesses are now realising that their focus also needs to change. Whether the business is Boots, British Airways or Burberry, they need to understand that in 2015 they are not a pharmacy, an airline or a retailer that has an IT department, but a technology company that happens to sell medication, flights or high-end fashion. This paradigm shift is being born out of necessity. Consumers are demanding that businesses continually update their systems of engagement to keep pace with technology trends. Lean digital companies like the aforementioned Airbnb, Alibaba and Uber all have the ability to deliver new customer experiences and routes to market quickly. Yet traditional businesses wanting to deliver the same find their legacy technology infrastructure can shackle any ability they have to react speedily to changing consumer demands. In addition, this legacy infrastructure can be


especially budget-sapping, with a never-ending cycle of expensive maintenance and low-value upgrades meaning the average company devotes more than 70% of its technology spend to sustaining legacy applications, according to Forrester’s research.

Stealing a march on the competition
So how can businesses transform into a data-driven, customer-focused digital enterprise, with a faster pace of innovation and speed to market – all the while cutting costs, boosting efficiency and adhering to data privacy laws, in a world where big data is putting immense pressure on the IT infrastructure? In the dynamic international marketplace that we all now trade in, it’s not always the best ideas that win but the quickest to market. This is how Airbnb et al have been so successful. One of the keys to their success is understanding that they can’t control the hardware and devices their customers are using, so they focus their attention on developing software applications (apps) for the two most popular mobile operating systems – Apple iOS and Android – and this is essentially their primary route to market. Once in a leadership position it is important that businesses don’t become complacent but remain agile. The corporate graveyard is filled with companies that have been out in front but taken their eye off the ball and been overtaken. HMV, Kodak and RIM (BlackBerry) were all guilty of not reacting quickly enough to changing market conditions, and all lost their leadership positions seemingly overnight. In some cases, those companies no longer exist.

Those that have been able to operate on an agile footing have found it easier to weather the economic storm of the last seven years.

Bytes not boxes
Businesses from the largest multinational organisations to the smallest family-run start-up now store the majority of their data in bytes instead of boxes, meaning they have access to more data than ever before. Increasingly, the ability to react quickly to changing market conditions will require these data assets to be effectively analysed. With this data growing exponentially, in businesses wanting to embrace digital transformation the computing, storage and network solutions are increasingly characterised by clutter, complexity and cost. They typically comprise a plethora of different hardware and software components, which leads to significant over-provisioning and resource duplication.

Changing the perceptions of IT
The starting point for digital transformation is to change the view of IT from a cost centre to a revenue enabler. Only when this thinking is flipped can an organisation truly build an agile, customer-centric business. Once technology is put at the core, businesses can achieve the integration, speed, scalability and resilience goals of a modern, digitally transformed business.

Converging towards a dynamic future
Converged infrastructure technology is one of the first steps on this path, and can help circumvent the sluggishness of physical infrastructure change and bring the cloud-like agility, efficiency and

capability benefits to IT that businesses strive for. A scalable, agile, software-defined, high-capacity converged IT infrastructure can help organisations to manage and resolve emerging business trends quickly and effectively, reducing resource drain and the type of risks associated with shadow IT that will continue to occur if IT can’t provision new solutions fast enough. The global research company IDC is in agreement: its ‘Business Value’ research into the business agility of organisations that have turned to a converged infrastructure platform found that they have dramatically improved the time to deploy and scale new services. In addition, IDC’s research found that moving to a converged infrastructure platform freed up IT resources to focus on more value-added, strategic activities, all while reducing operational costs. It is imperative that traditional businesses transition quickly to stave off the competition from the new breed of companies that have only ever existed in the software realm. Those that have managed to do so have been the ones who have redefined themselves within an app: British Airways has changed how it communicates with customers via a mobile device; Marriott changed its whole booking system to make it device-friendly. As forward-thinking organisations continue their digital transformation and migrate their infrastructure towards converged systems, they take an important first step in joining the 2015 revolution and placing technology at the core of their business processes, opening the door to a more agile existence in the years to come.





Disaster Recovery: Graham Jarvis

Disaster Recovery: Reducing the business risk at distance

Graham Jarvis asks how far apart business continuity data centres need to be, and looks at the technologies that make disaster recovery at distance practical.


Graham Jarvis
Graham Jarvis is an experienced technology and business journalist and a staff member on CCi.

Let’s start with a question: how far apart do you need to keep your business continuity data centres to maintain a credible level of business continuity should there be a disaster? The answer depends on the type of disaster and the circle of disruption it causes. This could be a natural phenomenon like an earthquake, a volcanic eruption, a flood or a fire; calamities are caused by human error too, so the definition of the circle of disruption varies. In the past, data centres were on average kept 30 miles apart, as this was the wisdom at the time, but today the circle’s radius can be up to 100 miles or more. In many people’s view a radius of 20 or 30 miles is too close for comfort for auditors, putting business continuity at risk. So what constitutes an adequate distance between data centres to ensure that business goes on, regardless of what happens within the vicinity of one of an organisation’s data centres? “Well, many CIOs are faced with the dilemma of how to balance the need

of having two data centres located within the same metro area to ensure synchronisation for failover capability when, in their hearts, they know that both sites will probably be within the circle of disruption,” says David Trossell, CEO of SCION vendor Bridgeworks. So to ensure their survival, they should be thinking about the minimum distance from the edge of the circle for a tertiary DR site. “After all, Hurricane Sandy ripped through 24 US states, covering hundreds of miles of the East Coast of the USA, and caused approximately $75bn worth of damage. And earthquakes are a major issue throughout much of the world too – so much so that DR data centres need to be located on different tectonic plates,” he explains. A lack of technology and resources is often the reason why data centres are placed close to each other within a circle of disruption. “There are, for example, green data centres in Scandinavia and Iceland which are extremely energy efficient, but people are put off because they don’t think there is technology


available to transfer data fast enough – and yet these data centres are massively competitive,” says Claire Buchanan, Chief Commercial Officer at Bridgeworks.

Risk matrix
Due to the effects of latency, too many data centres are placed within a circle of disruption, but there are solutions on the market which reduce the need to choose data centres that are in many respects too close together. This doesn’t mean that organisations should relax and feel comfortable if their data centres are located far from each other: the risks need to be taken seriously. They can be analysed by creating a risk matrix to assess the issues that could cause disruption, allowing red flags to be addressed both before and as they arise. Even if a data centre happens to be within a circle of disruption, it’s advisable to situate another one at distance elsewhere. Japan is prone to earthquakes, so it would be a good idea to back up the data to a New York data centre. Parts of Europe aren’t immune to natural disasters either, and this also needs to be considered.

Internet limitations
With regards to time and the latency created by distance, Clive Longbottom

– Client Service Director at analyst firm Quocirca – says: “The speed of light means that every circumnavigation of the planet creates latency of 133 milliseconds; however, the internet does not work at the speed of light, and so there are bandwidth issues that cause jitter and collisions.” He then explains that active actions taken on the packets of data will increase the latency within a system, and says that it’s impossible to say “exactly what level of latency any data centre will encounter in all circumstances, as there are far too many variables to deal with.” He also thinks that live mirroring is now possible over hundreds of kilometres, so long as the latency is controlled by using packet shaping and other wide area network acceleration approaches. Longer distances, he says, may require a store-and-forward multi-link approach, which will need active boxes between the source and target data centres to “ensure that what is received is what was sent”.

Jittering networks
Trossell explains that jitter – packets of data arriving slightly out of time – is a problem. The issue is caused, he says, by data passing through different switches and connections,

which can cause performance problems in the same way that packet loss does. He explains that packet loss occurs when the line is overloaded – more commonly known as congestion – and this causes considerable performance drop-offs which don’t necessarily reduce if the data centres are positioned closer together. According to Buchanan, the solution is the ability to mitigate latency and to handle jitter and packet loss, and this needs to be done intelligently, smartly and without human intervention, to minimise the associated costs and risks and give IT executives freedom of choice as to where they place their data centres – protecting their businesses and the new currency, data. It is therefore important to know how to mitigate the effects of jitter and latency.

Mitigating latency
Here are Buchanan’s top five tips for mitigating latency and reducing risk at distance:
1. Take a fresh approach: if you could forget about the limitations that distance imposes, how would your DR plan evolve?
2. Think about the data you need for disaster recovery and about the data you don’t need. How much does data


22

ISSUE 23

change each day?
3. Go with new technology: are you putting the business at risk for the sake of budgetary constraints? Look behind the marketing spin, dated views and vendor bias; just because the big vendors tell you to architect around the problem to the extent that you will have two of everything, there are new technologies available that do the job better and at a fraction of the cost.
4. Don’t be limited by distance; do what’s right for your business.
5. Embrace machine intelligence to reduce human error.

Deploy machine intelligence
Self-configuring optimised networks (SCIONs) use machine intelligence to mitigate latency. With machine intelligence, the software learns and makes the right decision in a microsecond, according to the state of the network and the flow of the data, no matter whether it’s day or night. A properly architected solution can remove the perception of distance as an inhibitor for DR planning. “At this stage, be cautious. However, it does have its place, and making sure that there is a solid Plan B behind a SCION Plan A means that SCIONs can take away a lot of uncertainty in existing, more manual approaches,” suggests Longbottom. Buchanan cites one company that has explored the benefits of the SCION solution WANrockIT: CVS Healthcare. The main thrust was that CVS could not move its data fast enough: instead of being able to do a 430GB back-up, it could manage just 50GB in 12 hours, because its data centres were 2,800 miles apart – creating latency of 86 milliseconds.

This put the business at risk, due to the distance involved. Machine intelligence has enabled the company to use its existing 600Mb/s network connectivity and to reduce the 50GB back-up from 12 hours to just 45 minutes, irrespective of the data type. Had this been a 10Gb/s pipe, the whole process would have taken just 27 seconds. This magnitude of change in performance enabled the company to do full 430GB back-ups on a nightly basis in just four hours. The issues associated with distance and latency were thereby mitigated.

Learn from Hurricane Sandy
Machine intelligence will have its doubters, as does anything new. However, in our world of increasingly available large bandwidth, enormous data volumes and the need for velocity, organisations would do well to consider what technology can do to help underpin a DR data centre strategy based upon the recommendations and best-practice guidelines that have been learnt since disasters like Hurricane Sandy. Despite all mankind’s achievements, Hurricane Sandy taught us many lessons about the extensive destructive and disruptive power of nature. Having wrought devastation over 24 states, it dramatically challenged the traditional perception of a typical circle of disruption in planning for DR. Metro-connected sites for failover continuity have to stay, due to the requirements of low-delta synchronicity, but this is not a sufficient or suitable practice for DR. Sandy has taught us that DR sites must now be located hundreds of miles away if we are to survive.
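The back-of-envelope arithmetic behind those figures is worth making explicit. These are line-rate floors that ignore protocol overhead; the article’s point is that untreated latency keeps real-world throughput far below them:

```python
def transfer_hours(gigabytes: float, megabits_per_sec: float) -> float:
    """Ideal line-rate transfer time, ignoring latency and overhead."""
    return gigabytes * 8 * 1000 / megabits_per_sec / 3600

print(f"430 GB at 600 Mb/s line rate: {transfer_hours(430, 600):.1f} h floor")
print(f"430 GB at 10 Gb/s line rate: {transfer_hours(430, 10_000) * 3600:.0f} s floor")

# 50 GB taking 12 hours on the 600 Mb/s link implies the effective
# pre-optimisation throughput was a tiny fraction of line rate:
print(f"effective throughput: {50 * 8 * 1000 / (12 * 3600):.1f} Mb/s")
```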





Big Data: Thierry Bedos

Big data: from investment to business as usual

Thierry Bedos, CTO at Hotels.com, offers four pieces of advice to help CIOs successfully integrate big data into their everyday business.

Thierry Bedos
Thierry Bedos is CTO at Hotels.com, in charge of the global technology teams that deliver and operate all of Hotels.com’s websites and mobile apps. He founded several startups in Asia and has co-authored patents on web technologies. He holds an engineering degree from Supélec, France, and currently lives in London.


Hotels.com is one of the world’s leading online booking services and an early adopter of advanced and predictive data analytics, with over 15 million Rewards members and over 14 million customer reviews. But how can big data help to create a successful online booking service? One of the most important success metrics for an online booking platform is offering customers an excellent user and booking experience, independent of the device they are using or where they are. The continuous improvement of our sites’ usability is based on analysis of various sources including clickstreams, reviews, users’ personal preferences, and hotel profiles. App developers are able to improve the user interface and the algorithms behind the booking procedure by understanding the customer journey beyond just one session.

Choose the right platform
We all know that the right technology platform makes or breaks an IT project; the same can be applied to big data solutions. But how can we make the right choice? A technology decision should be based on a thorough assessment of business needs, deciding what could benefit from data analysis in the future. CIOs shouldn’t focus purely on investment costs when choosing the technology. The evaluation should also include performance, reliability, usability, data security and – this is very important – scalability.



After weighing up all the pros and cons, Hotels.com chose DataStax Enterprise as its online data platform, which is based on the open-source Apache Cassandra NoSQL database. This allows Hotels.com to benefit from features such as built-in management services, extra security controls and external Hadoop integration, which complement what a pure open-source solution offers free of charge. We also benefit from DataStax's support, maintenance and update services, leaving us free to focus on data analysis and on supporting the Hotels.com business.

Get the bosses on board
A big data project requires both investment and cross-company collaboration: silo thinking can be an obstacle to long-term success. A main goal for Hotels.com, for example, was to break down the barriers between the online world of our booking platform and the offline world, with our data warehouse solution at its heart. This required a massive change in entrenched processes. We found that the best way to get backing from the bosses was to prove that we could achieve a quick return on investment (ROI). To demonstrate this we collected more than 150 business use cases that would be possible with the proposed platform.

We then selected a subset of 10 that were best suited to illustrating a proof of concept within a narrow time frame. Seeing our proposed platform deliver quick wins helped to convince the board and served as a stepping stone to more challenging use cases.

Data privacy comes first
Customer trust is a precious commodity, and respect for data privacy is one important key to success. CIOs should therefore ensure that their big data strategy is carefully balanced with a commitment to protect customer data. Anonymisation is vital to protect every user's privacy, especially when analysing large quantities of aggregated data.

Make data analysis business as usual
Despite cost, time and organisational constraints, project leaders should always keep sight of long-term development to build a reliable, efficient and future-proof platform. Scalability is an important component of long-term planning, ensuring the technology platform can keep pace with the ever-increasing flood of structured and unstructured data. CIOs should start to integrate the analytics processes into everyday


business once the platform has been established and the first use cases have yielded results. Only when this has become business as usual should IT and data teams take on new projects to advance the company's business even further.

More than analytics
Big data is closely linked to data analytics, but Hotels.com's big data platform goes far beyond that. One of the most important uses of data is to deliver a consistent user experience across multiple screens, wherever users are. On the one hand, we analyse user data to determine which devices are being used. On the other, we log data so that we can recognise the destinations each user has been searching for on the company's booking platform, and ensure customers are recognised on any device so they can pick up where they left off in their search for a hotel. This allows users to begin looking for accommodation on a tablet while travelling by train, for example, before confirming the booking on a desktop at home without having to start the booking journey again. This is only one example of how Hotels.com uses its data platform to improve the booking experience and make it as personal as possible.
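To make the cross-device example concrete, here is a minimal sketch of the kind of search-session store described above, written against the open-source DataStax Python driver for Apache Cassandra. The keyspace, table and column names are invented for illustration; this is not Hotels.com's actual schema.

```python
# Hypothetical cross-device search-session store on Cassandra.
# Schema and names are illustrative, not a production data model.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])  # assumes a local Cassandra/DSE node
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.search_sessions (
        user_id     text,
        updated_at  timestamp,
        device      text,
        destination text,
        PRIMARY KEY (user_id, updated_at)
    ) WITH CLUSTERING ORDER BY (updated_at DESC)
""")

# Written from the tablet on the train...
session.execute(
    "INSERT INTO demo.search_sessions (user_id, updated_at, device, destination) "
    "VALUES (%s, toTimestamp(now()), %s, %s)",
    ("user-42", "tablet", "Lisbon"),
)

# ...read back later from the desktop at home: resume the newest search.
row = session.execute(
    "SELECT destination, device FROM demo.search_sessions "
    "WHERE user_id = %s LIMIT 1",
    ("user-42",),
).one()
print(f"Resume search for {row.destination} (last seen on {row.device})")
```

Keying the table on user_id, with newest-first clustering, makes "pick up where you left off" a single-partition read that any device can issue.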



Cloud Storage: Kieran Harty

Solving your storage challenges in a virtualised environment
The rapid adoption of virtualisation has created a disconnect in the data centre, with physical storage becoming the source of growing costs, bottlenecks and frustration.

When it comes to storage, flash is undeniably a hero. But it is unlikely to save the day on its own. While it has an important role to play, flash is just one of a number of solutions required to address the storage pain points in the data centre. Just like The Avengers or The Fantastic Four, it needs to be part of a team to get the best results. In any case, whatever role flash does play in an organisation's storage infrastructure, there's an underlying problem (or villain) that needs to be tackled first. The real issue stems from the shift from physical to virtualised workloads.

Kieran Harty Kieran Harty is the Chief Technology Officer and co-founder of Tintri. Prior to becoming CTO, Kieran served as CEO and Chairman of Tintri. Before founding Tintri, he was Executive Vice President of R&D at VMware for seven years, where he was responsible for all products. He led the delivery of the first and subsequent releases of ESX Server, Virtual Center and VMware’s desktop products. Before VMware, he was Vice President of R&D at Visigenic/Borland and Chief Scientist at TIBCO.


While an organisation has gone virtual, its storage is still built on an architecture meant for a physical world. To gauge the extent of the trend, it's worth noting that the percentage of virtualised workloads has grown from 2% to 75% in just ten years. This rapid adoption of virtualisation has created a disconnect in the data centre, with physical storage becoming the source of growing costs, bottlenecks and frustration.

Virtualisation brings storage pain
A recent survey of 1,000 data centre professionals found that the two most cited storage pain points were performance (50% of respondents) and manageability (41%). That's hardly surprising, given that growing numbers of virtual workloads generate far more random I/O patterns, which are bound to choke disk-centric storage. To try to improve performance, storage admins shuffle virtual machines from one storage LUN or volume to another, but

this presents them with all manner of manageability shortcomings along the way.

Buying time won't solve the problem
One way to try to overcome performance pain is to use flash, because it is low latency and can handle random I/O. It's also a lot faster: a single commodity SSD (solid-state drive) is 400 times quicker than a hard disk drive (HDD). To put that comparison in context, the speed of sound is "only" 250 times faster than walking! But flash's super speed can only buy admins time. It doesn't have the powers required to deal with the root cause of storage pain or relieve any management burden, namely the disconnect between virtual workloads and physical-world storage; it only addresses the symptomatic pain. Over time, data centre professionals are likely to add more virtualised workloads as they expand their footprint from virtualised desktops to servers to private cloud. To keep up with the pressure put on their infrastructure,

they may need to buy more and more (high-cost) flash. Not only is that bad for their budget; worse still, it won't resolve the disconnect.

Matching storage to virtualised environments
The best way to solve the root cause of storage pain is to deploy storage specifically built for the world of virtualised workloads, in other words storage that is VM-aware. VM-aware storage (VAS) has none of the remnants of physical storage: no LUNs or volumes, no striping or stripe widths. It operates at the most granular level, allowing admins to take action on individual virtual machines. Conventional storage groups virtual machines into LUN or volume 'containers' and applies policies at the container level, assigning an amount of performance to be shared by all the virtual machines inside. A rogue virtual machine in the container will waste performance that should be used by its neighbours.



VM-aware storage gives admins the x-ray visibility to see and act at the VM level, assigning a specific performance level to each individual virtual machine. That means they can give mission-critical applications more performance while setting a cap on any rogue virtual machine, guaranteeing that every workload gets the exact performance it needs.

Managing the pain away
Manageability is an important part of the equation. Most storage and/or virtualisation admins have to maintain a large spreadsheet mapping every virtual machine to its respective LUN or volume. As the virtual machines are shuffled around, the spreadsheet must be meticulously maintained. VM-aware storage makes the spreadsheet obsolete with its x-ray-like powers.

Admins can log in and see every individual virtual machine, drill in for full analytics or set a policy (replication, cloning, etc.) at the VM level.

Getting the solution with the best value
A storage admin can only improve performance in the long term with VM-level manageability and the ability to align performance levels to individual virtual machines. What is required is a combination of the brawn to handle large-scale deployments with the brains that provide VM-level visibility, control, automation and analytics. VM-aware storage relies on its software brain to assign tasks to flash and spinning-disk storage based on the speed and strength required to handle a particular workload. Just like The Fantastic Four, The Avengers or

any other superhero troupe, it's the perfect combination of brains, brawn and agility. With the arrival of VM-aware all-flash on the market, organisations have the choice of enhancing a hybrid storage strategy that mixes flash and spinning disk, or adopting an all-flash approach where it is appropriate. In other words, customers can decide which workloads benefit from an all-flash or hybrid-flash approach. This means VM-aware storage is smart enough to help balance workloads, so organisations only buy storage when they need it, and only buy the type of storage they need. Flash might not be able to rescue an organisation's data centre single-handedly, but it can play a prominent role in helping the cause of VM-aware storage to save the day.
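As a thought experiment, the contrast between container-level and VM-level policy can be sketched in a few lines of code. This is a toy model for illustration only; it is not Tintri's implementation or any vendor's API.

```python
# Toy model contrasting LUN-level QoS with per-VM QoS. Illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class VM:
    name: str
    demand_iops: int                 # what the workload asks for
    cap_iops: Optional[int] = None   # per-VM ceiling (VM-aware only)

@dataclass
class Container:
    shared_iops: int                 # one policy for the whole LUN/volume
    vms: List[VM] = field(default_factory=list)

    def conventional_allocation(self) -> Dict[str, int]:
        """LUN-level policy: a rogue VM crowds out its neighbours."""
        total = sum(vm.demand_iops for vm in self.vms)
        return {vm.name: self.shared_iops * vm.demand_iops // total
                for vm in self.vms}

    def vm_aware_allocation(self) -> Dict[str, int]:
        """VM-level policy: caps confine the rogue, guarantees hold."""
        return {vm.name: min(vm.demand_iops, vm.cap_iops or vm.demand_iops)
                for vm in self.vms}

lun = Container(shared_iops=10_000, vms=[
    VM("critical-db", demand_iops=6_000),
    VM("rogue-batch", demand_iops=50_000, cap_iops=4_000),
])
print(lun.conventional_allocation())  # rogue grabs ~89% of the container
print(lun.vm_aware_allocation())      # rogue capped; critical-db gets 6,000
```

Run as written, the conventional container leaves the critical database with less than a fifth of the performance it needs, while per-VM policy gives it exactly what it asked for.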



StaaS: Akinbola Fatoye

StaaS and deliver: Welcoming cloud economics to enterprise storage
Why the cloud economics of a Storage-as-a-Service solution are particularly attractive to businesses wanting to maintain financial agility when it comes to budgeting for their IT infrastructure.

Akinbola Fatoye
Akinbola Fatoye is Head of Technical Services at TelecityGroup UK and has over 17 years of IT experience in the management and implementation of complex infrastructure solutions.

Storage is changing. While IT on the whole has become an increasingly commoditised asset, provisioning storage has kept a much more conventional acquisition model. For organisations accustomed to buying IT on demand based on business needs, storage's up-front costs often don't compute. While the traditional model of procuring storage up front remains a common approach within the industry, companies that have leapfrogged conventional attitudes towards sourcing expect flexibility from their infrastructure providers. The popularity of cloud-based platforms such as AWS and Microsoft Azure is partially driven by buyers taking a modern approach to meeting storage demand: the ability


to flex up or down according to business requirements, paying only for what's been consumed.

Digitally transformed
Until recently, IT departments typically acquired enough data storage to meet anticipated needs for as much as a year in advance, often managing and hosting it themselves on-premise. This approach meant that increasing capacity when needs exceeded availability added cost and time to provisioning and deployment. Today, the options available are very different, thanks to Storage-as-a-Service (StaaS). Offering enterprise-quality storage for businesses that need to save large volumes of data and access it on demand, StaaS combines cloud flexibility with dedicated physical storage delivered from local data centres. Cloud-based storage offers an attractive OpEx-only commercial model that provides simple, on-demand provisioning with the same level of configurability one would find in physical storage arrays. This not only relieves pressure on in-house IT resources (leaving the business free to focus on what it does best), but puts storage in the hands of experts who can provide service reliability with the added benefits of a cloud economy. The cloud economics of a StaaS solution are particularly attractive to businesses wanting to maintain financial agility when it comes to budgeting for their IT infrastructure. With traditional storage investments, in some instances up to four years can pass before ROI is realised. It's also not unusual for CapEx-intensive hardware solutions to have limited shelf lives, failing to remain fit for purpose as the needs of the business evolve. This sits at odds with the accelerated timelines often associated with how services are delivered today, where expectations around capacity and storage

may change quickly, and at an unpredictable rate.

Changing the store-y
The realities of running a modern business will undoubtedly include the need for a strategy for handling the vast amounts of data being generated. Storage, and the protection of the information being stored, are increasingly critical priorities from both an operational and a procurement perspective. The pace of growth in storage capacity delivered as a service will eventually overtake traditional storage arrays, making such architectures increasingly obsolete. Flexible storage delivered through the cloud, interconnected with other ecosystems and applications, has become the preferred route for many an organisation looking to transform its digital presence. Earlier this year, 451 Research Group's Simon Robinson called the lack of a unified approach to data management one of the "largest elephants in the CIO's corner office." While the management and handling of data has several strands associated with it, businesses are increasingly aware of the implications that a lack of investment in storage capacity can have on operations. Data management strategies will continue to work their way up the corporate agenda, making storage considerations an integral part of the decision-making framework for building an effective master data strategy. A perfect storm of continuous data growth, capital expense, technological complexity, operational overheads and unnecessary future-gazing is prompting many enterprises to rethink their storage strategy. In a world of virtualised desktops and infrastructure-as-a-service, the modern enterprise needs the entire IT stack to look – and behave – like a cloud. The demand and technology are now in place for storage infrastructure to make the leap.
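The OpEx argument is easy to make concrete. Below is a back-of-envelope comparison of buying a year's capacity up front versus paying monthly for consumed capacity; every price and growth rate here is invented purely for illustration.

```python
# Hypothetical CapEx vs StaaS OpEx comparison. All figures invented.

UPFRONT_ARRAY_COST = 120_000     # one-off spend: capacity bought a year ahead
STAAS_PRICE_PER_TB_MONTH = 45    # pay only for what is actually consumed

used_tb = 40.0                   # consumption at the start of the year
opex_total = 0.0

for month in range(1, 13):
    opex_total += used_tb * STAAS_PRICE_PER_TB_MONTH
    used_tb *= 1.06              # ~6% monthly data growth
    print(f"month {month:2d}: CapEx £{UPFRONT_ARRAY_COST:,} sunk "
          f"vs StaaS £{opex_total:,.0f} cumulative ({used_tb:.0f} TB in use)")
```

On these invented numbers the as-a-service route costs roughly a quarter of the up-front array in year one, with the gap closing only as consumption grows into the pre-bought headroom – the delayed-ROI pattern described above.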



Cloud Security: Monica Brink

Cloud security and your cloud provider
Data breaches are rarely out of the news, so we asked Monica Brink to look at some of the steps a business needs to take to ensure it is securing its sensitive data.

Monica Brink
Monica Brink is Director of Product Marketing at cloud services provider iland and is based in Houston, Texas. She has more than 10 years of global experience in product and channel marketing in the cloud computing and ERP sectors. Prior to this role, Monica worked in both Europe and the Middle East with Microsoft, BMC Software and Meeza. She can be found tweeting at @MonicaBrink.

Cyber security is currently centre stage; no matter where you turn, it is all over the news. Just this last month we've heard a plethora of stories about companies affected by breaches and hacks: UK telecoms provider TalkTalk experienced a damaging cyber security attack in which customers' personal data was breached, and in a separate incident at the end of October, Scan Computers, Novatech and Aria Technology all encountered website disruption, with Aria confirming this was due to a Bitcoin-based DDoS attack. With the UK government also doubling funds for cyber security programmes, with plans to fend off more sinister threats, many businesses are realising the very real need to protect the sensitive and confidential data they hold. Focusing on cloud security within your company is therefore not only justified but more important than ever. However, it can be difficult to determine the practical steps that cloud managers, CIOs and architects need to take to ensure cloud security for their enterprises. Deploying workloads in the cloud does not necessarily present more security risks than deploying in the traditional on-premise data centre, as long as your company has the right security controls in place and you ask the right questions of your cloud services provider. A partnership with your cloud services provider that is open and transparent about cloud security, combined with ongoing support, is the foundation for

establishing, monitoring and maintaining cloud security. Many companies are simply not talking to their cloud service provider about security issues, nor are they demanding the data about their cloud resources that would help them monitor and maintain the levels of cloud security that are so essential in the current climate. Security discussions with your cloud services provider need to start with ground-level issues: segregation of data from other customers, user access control and two-factor authentication, security of networks and firewalls, availability and performance SLAs, and data sovereignty. For many customers, the most pressing issue is whether they're covered for cloud-based disaster recovery in addition to their IaaS requirements. It is also important not to overlook the details: customers and service providers need to work together on very practical aspects of maintaining cloud security, including matters such as:
• Scanning and reporting on network and server vulnerabilities

• Detection and remediation of virus and malware intrusions
• Encryption of servers and networks, with options for the customer to hold the keys themselves
• Monitoring and reporting on firewall events and login histories
The good news is that there have been many advancements in cloud security that can negate cloud risks, when matched with cloud service providers such as iland that are willing to work closely with customers to map specific security requirements to cloud infrastructure and services. There is no doubt that cyber-attacks and security breaches will happen again to businesses in every sector; however, much of the damage can be prevented, and this starts with the infrastructure implemented and an open line of communication with your provider. Organisations can move forward with their cloud initiatives and aspirations without being held back by security risks. Now more than ever, it is important to ensure that you have the right cloud security in place.
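On the customer-held-keys point in the list above, the shape of the arrangement can be sketched with standard tooling. This is a minimal illustration using the open-source Python cryptography package, not any particular provider's key-management service; real deployments would use a KMS or HSM.

```python
# Sketch of customer-held-key encryption before data leaves for the cloud.
# Illustrative only; production systems would use a KMS/HSM, not a raw key.
from cryptography.fernet import Fernet

# The customer generates and keeps the key; the provider never sees it.
customer_key = Fernet.generate_key()   # store offline or in your own HSM
cipher = Fernet(customer_key)

# Encrypt locally, then ship only ciphertext to the provider's storage.
ciphertext = cipher.encrypt(b"customer record 1138: ...")

# Whatever the provider stores is opaque without the customer's key.
assert cipher.decrypt(ciphertext) == b"customer record 1138: ..."
```

Holding the key on the customer side keeps the arrangement honest by construction: a breach of the stored ciphertext alone discloses nothing.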





