
CLOUDCOMPUTING WORLD Issue 2

October 2014

Audiocast: Why co-location has become a key driver in cloud computing

Removing the risk for data centre and enterprise IT
Cloud Data Sovereignty in Europe

How embracing SaaS could evolve your brand
Network Infrastructure is Critical to Public Cloud Services
Launch Partners


SHOULD YOU STAY OR SHOULD YOU GO?

If your data centre isn’t in the right place it can be a business-critical and costly mistake. Our international Data Centre Solutions team understands the importance of being well located, with the right power, fibre, security and transportation, ensuring every millisecond counts. If you want to know whether you should stay or go, speak to the Data Centre Solutions team, who have transacted in 189 cities around the world and saved one major cloud client over £19 million in 15 years. Why would you risk it? View more here or call us direct on +44 (0)20 7182 3529



CONTENTS

3 CCW News
All the key news in the world of cloud.

6 Cloudy Data Sovereignty In Europe
EU data protection and the Patriot Act explained

10 The single largest improvement to G-Cloud is on its way
The G-Cloud is now coming of age

11 Exponential-e's Cloud Gives The UK A Creative Cutting Edge
How creative industries are embracing the cloud

14 Sustaining Business Through ICT
Why it's time to take IT action

16 Operating Your Data Centre At Peak Efficiency
Data centres - it's all in the temperature

20 Taking Flight To A Scalable Cloud Solution
Managed cloud services help to reduce overheads

22 Controlling The Cyber Challenge
Security is the answer - now what was the question?

26 Towards a more efficient cloud planning process
Why planning is now an essential part of the cloud process

30 The Role Of Data Centre Co-Location In The Cloud
Why co-location has become a key driver in cloud computing

32 How embracing SaaS could evolve your brand
How clients can get better value from SaaS technology

34 How The Cloud Built A Multi-Billion Dollar House
Enabling a major 19-country network with the cloud

36 It's All Change For G-Cloud Users
The G-Cloud is now coming of age

38 Network Infrastructure Is Critical To Public Cloud Services
Why advanced IT will always be integral to the cloud

40 Defining Best Practice In An Evolving Ecosystem
How a standards body is taking a different approach

42 Launch Partners

Also in this issue:
The cloud: it's older than you might think
Understanding cloud load balancing
Service price differences under the microscope
Audiocast: total remote cloud security becoming reality, says veteran pen tester
Looking towards an open source cloud
Future cost cutting without service reduction
Desktop Strategy: Why Virtualisation Is Not The Only Answer
Why an effective desktop strategy is now a must-have option

CLOUDCOMPUTING WORLD
e-space north business centre, 181 Wisbech Rd, Littleport, Ely, Cambridgeshire CB6 1RA
Tel: +44 (0)1353 865403
info@cloudcomputingworld.co.uk
www.cloudcomputingworld.co.uk

LGN Media, a subsidiary of The Lead Generation Network Ltd Publisher & Managing Director: Ian Titchener Editor: Steve Gold Production Manager: Rachel Titchener Advertising Sales: Bob Handley Reprographics by Bold Creative The views expressed in the articles and technical papers are those of the authors and are not endorsed by the publishers. The author and publisher, and its officers and employees, do not accept any liability for any errors that may have occurred, or for any reliance on their contents. All trademarks and brandnames are respected within our publication. However, the publishers accept no responsibility for any inadvertent misuse that may occur. This publication is protected by copyright © 2013 and accordingly must not be reproduced in any medium. All rights reserved. Cloud Computing World stories, news, know-how? Please submit to steve@lgnmedia.co.uk



FOREWORD
The economics of the private cloud - how do they stack up?

Hello everyone,

Whilst the economics of the cloud are the main driver for most firms to adopt the technology, not all organisations are looking for the lowest possible price that a public shared cloud service offers. We are now starting to see the arrival of so-called private clouds in the marketplace, with major firms electing to occupy their own cloud-based data centres, but using the remote resource on an exclusive basis.

This isn't actually as odd as it might sound, as the economics of the cloud mean that it can be more cost-effective to locate a data centre in, say, Eastern Europe than in, for example, London. Does this mean sacrificing security? Far from it: provided the data is encrypted at source, and all data is transmitted and stored in its encrypted format, then security can be assured. Challenges only start to rear their head when the need to manipulate the data remotely arises. Without going into the subject too deeply, as a general rule of thumb it pays to download the encrypted data to a local IT resource, decrypt and manipulate it, and then re-encrypt and re-upload the data to the remote private cloud system.

This might sound a lengthy process, but the falling cost of moving data around these days makes the operation perfectly viable. In fact, it's what corporates have been doing with their network VPN data for many years. I had an interesting discussion on this very subject with a major corporate at a recent communications conference, where I discovered that the marginal cost of double-routing data (via HQ) was around 3 to 5 per cent, making the process highly cost-effective.

As IP transmission costs continue to fall, I suspect we may soon see the day when the cost of moving data around the world becomes negligible. At that point, the only reasons for hosting data in a given geographic area will be regulatory, along with the marginal cost of locating data in a private cloud in a given country or region. The only downside to this evolution of the cloud is that more expensive 'cloud' areas of the world will start to lose their appeal, causing previously cost-effective data centres to go dark. Is this a positive or a negative thing? I tend to sit on the fence in this regard, as we've seen the effects of this business strategy play out in the telecoms arena for several years, resulting in so-called dark fibre tariffing. I can't help wondering if the same thing will happen in the cloud computing space.

May all of your cloud problems be little ones.

Best Regards,
Steve Gold, Editor, Cloud Computing World
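The encrypt-locally, process-locally pattern described above is easy to sketch in code. Below is a minimal illustration in Python, assuming the widely used cryptography package; the upload and download callables are hypothetical stand-ins for whatever storage API a given private cloud exposes.

```python
# Minimal sketch of the encrypt-at-source workflow described above.
# Assumes the `cryptography` package; upload()/download() are hypothetical
# stand-ins for the remote private-cloud storage API.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # kept on-premises, never sent to the cloud
cipher = Fernet(key)

def store_remotely(upload, name: str, plaintext: bytes) -> None:
    """Encrypt locally, then push only ciphertext to the remote cloud."""
    upload(name, cipher.encrypt(plaintext))

def process_locally(download, upload, name: str, transform) -> None:
    """Download ciphertext, decrypt and manipulate locally, then
    re-encrypt and re-upload the result to the private cloud."""
    plaintext = cipher.decrypt(download(name))
    upload(name, cipher.encrypt(transform(plaintext)))
```

The point of the sketch is simply that the key never leaves the local estate: the remote private cloud only ever sees ciphertext, which is why the location of the data centre need not dictate its security.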



REGULARS

CCWNEWS
All the key news in the world of cloud. Please don't forget to check out our Web site at www.cloudcomputingworld.co.uk for a regular weekly feed of relevant news for cloud professionals.

Skyhigh Networks claims that only 1 in 100 cloud providers meets proposed EU data protection requirements. The firm says that the EU general data protection regulation is expected to be passed later this year and take effect in 2015, but after analysing its CloudRegistry of over 7,000 cloud services, it notes that the vast majority are not prepared for these new laws, with numerous and significant issues pertaining to new requirements such as the right to be forgotten, data residency, data breach notifications, and encryption plus secure passwords.

Charlie Howe, Skyhigh Networks EMEA director, said that it is staggering how few cloud providers are prepared for the new EU regulations but, fortunately, there's still time for providers to get into shape. “This means addressing a number of complex issues now, such as the right to be forgotten, as well as implementing data protection policies that meet these new standards. For cloud providers this will inevitably require additional resources and expenditures, but it's a snip given the proposed penalties for violating the new laws, which can be up to five per cent of a company's annual revenue or up to €100 million,” he explained.

On the right to be forgotten issue, he said that one of the most well-publicised and controversial amendments to the new regulation is the right for individuals to request deletion of data identifying them. “It is a complex issue but, given the media interest surrounding it, one that's unlikely to blindside cloud providers,” he said. “Still, when you consider that the average organisation uses 738 cloud services, complying with this requirement presents some unique challenges. A big problem is that 63 per cent of cloud providers maintain data indefinitely or have no provisions for data retention in their terms and conditions. On top of this, another 23 per cent of cloud providers maintain the right to share data with another third party in their terms and conditions, making it even more difficult to ensure all copies are deleted,” he added.

With this in mind, Howe went on to say that it is fair to say that the right to be forgotten could turn out to be a massive headache for many organisations – cloud service providers themselves and those companies using these services – it's not just an issue for Google.
www.skyhighnetworks.com

Amazon says that its new Zocalo service - which has been on test since July - is now generally available. The document storage and sharing facility for the enterprise is available on a 30-day free trial (200 GB of storage per user for up to 50 users), after which users pay $5 per user per month. As part of the move to general availability, Amazon has announced that AWS CloudTrail now records calls made to the Zocalo API. The API, says the firm, is currently internal, but there are plans to expose it in the future.
aws.amazon.com/zocalo

Google has trimmed its cloud compute prices by 10 per cent, saying that the change represents an opportunity for the company to edge itself into the cloud leadership position ahead of competitors Amazon and Microsoft. The price reduction applies to all Google Compute Engine instances across all regions. CCW notes that, earlier this year, Google dropped its cloud storage prices by more than two-thirds. Soon after, Amazon and Microsoft cut their storage prices by between 10 and 68 per cent. Industry observers suggest that the price cuts are possible largely because of cloud computing itself, which reduces hardware costs and improves data centre operability.
https://cloud.google.com

Rackspace boosts its OpenStack private cloud services
Rackspace has announced a number of major enhancements to its OpenStack-powered private cloud offering, including a 99.99 per cent uptime guarantee and the ability to scale to hundreds of nodes. This means, says the firm, that Rackspace will now be able to help organisations build their own private clouds, with the scalability and flexibility of a public cloud environment combined with the highest availability guarantee in the industry.

Launched in 2011, the Rackspace Private Cloud was designed and built by the OpenStack experts who co-founded OpenStack and run one of the world's largest OpenStack-powered clouds. It delivers the agility and efficiency of a public cloud combined with the enhanced security, control and performance of a dedicated environment.

Nigel Beighton, VP for technology with Rackspace, said that for businesses that have traditionally been locked out of the public cloud for control and compliance reasons, or very large resource requirements, this private cloud environment now lets them experience all the benefits of the OpenStack platform. “This is an important indication that OpenStack is maturing fast. Today businesses don't want to be tied into proprietary systems that stifle innovation and agility, so the principles of an open source platform that is easy to implement, massively scalable and feature rich are increasingly attractive. The fact that we are also offering unprecedented API uptime guarantees for our private cloud is testament to our faith in the capability of the platform,” he explained.

Plans call for the Rackspace Private Cloud enhancements to roll out for clients in the UK during October, with the addition of Rackspace DevOps Automation Services to be determined at a later date.
www.rackspace.co.uk/cloud/private

Alert Logic, the security-as-a-service cloud specialist, has completed its European data centre in Newport, Wales, which is now available for partners and customers. Marty McGuffin, the firm's senior VP of security, said that international expansion is a strategic priority for Alert Logic, so hosting service and customer data within the European Union clearly demonstrates the firm's commitment to providing the same service that clients in the US also receive.



For the European deployment, Alert Logic chose the next generation data centre in Newport, one of the world's largest data centres, located on a 50-acre site with 750,000 square feet of space and designed as a Tier 3+ data centre. This major investment, supported by the Welsh Government, allows the company to collect, analyse and store data within the European Union, ensuring adherence to EU data protection policy around data privacy, protection and residency.

Alert Logic says that the building is an energy-efficient (green) data centre, as it runs on 100 per cent renewable energy and uses less than one per cent of the water volume used by standard data centres. One data hall uses approximately 1.5 megawatts, which is enough to power a small village of 750 houses.
www.alertlogic.com

GVA Connect, the agent for London's 50MVA Gateway Data Centre in West Thurrock, has announced new connectivity to the site via Zayo networks. The move, says the firm, puts the entire site at the centre of both pan-European and US fibre networks - seeding the area as the second major outer London data centre park and an eventual rival to Slough.

Charles Carden, director of GVA Connect, said that the company had already announced significant connectivity for the Gateway Data Centre in West Thurrock, but added that bringing the Zayo dual-redundant, diversely routed fibre network into the equation is a total game changer for the area. The Zayo network, which is within 50m of the site, passes the East and North cable entry points and offers a diversely routed 432 cable, enabling pan-European and US fibre networks for occupiers and a full suite of network services: dark fibre, WDM networks, MPLS/VPLS networks, VPNs and Tier-1 IP services.

According to the company, the connectivity possibilities announced are in addition to those offered by BT, KPN, Vodafone (Cable and Wireless), Level 3 and Fujitsu, whose own data centre is a very close neighbour in this area. Further data centres in the West Thurrock area are already being planned.
www.gatewaydatacentre.co.uk

AMD has linked with Canonical to offer what it claims is one of the industry's easiest ways to deploy an OpenStack private cloud. The package features the SeaMicro SM15000 server, Ubuntu 14.04 LTS and OpenStack, which together include a set of powerful tools to build one of the most flexible and reliable private clouds. AMD says that its Canonical collaboration overcomes the complexity of deploying OpenStack technology and provides an out-of-the-box experience, making it possible to deploy a private cloud in hours rather than days. The joint solution, it adds, automates complex configuration tasks, simplifies management, and provides a graphical user interface to dynamically deploy new services on demand.

Dhiraj Mallick, corporate vice president and general manager for AMD data centre server systems, said that the two companies have dedicated a tremendous amount of engineering resources to ensure an integrated solution that removes the complexity of an OpenStack technology deployment. “The SM15000 server, Ubuntu 14.04 LTS and OpenStack is an amazing system, filling a need in the industry for an OpenStack solution that can be deployed easily without spending a fortune on professional services or hiring teams of people,” he explained.
www.amd.com

Techspace London has expanded into new co-working office space in Shoreditch, increasing its presence in Tech City by 20 per cent. The company claims that the new location positions Techspace as one of East London's largest co-working facilities, with a total of over 15,000 sq. ft. providing a home to 200 co-workers from more than 50 technology companies.

Alex Rabarts, co-founder of Techspace London, said that the expansion illustrates the increased interest in co-working from growing technology-focused start-ups. The new space, located on Great Eastern Street in Shoreditch, is Techspace London's third premises in the Tech City area. The office features 2,000+ square feet, a full kitchen area, a balcony and a roof terrace. Techspace London now provides co-working and private office space for companies from 1 to 100 employees, with options to expand or contract on 30 days' notice.

According to Rabarts, co-working has already experienced huge growth in the past few years. “Technology start-ups are in need of a plug-and-play office solution that can offer the support and infrastructure they need to grow. The facilities and community at Techspace allow our start-ups to focus on developing their businesses and product, rather than dealing with the distractions of managing a workplace,” he explained.
www.techspace.co

Hightail, the file-sharing service, has announced integration with SkySync and Mover, enabling enterprise customers to easily sync, share, transfer or back up files across any connected system - both on-premise and cloud-based. Hightail says the move recognises the significant investment enterprises have made in on-premise storage solutions and, unlike many of its competitors, the company doesn't think you should have to replace your current system to take advantage of the cloud. With 97 per cent of businesses interested in hybrid cloud solutions, the firm adds that it wants to enable users to access their content no matter where it is stored: in the cloud, on-premise or in an online repository. With Hightail, users not only have the simplicity, but also the controls and security that are necessary when collaborating with users outside the company firewall.
www.hightail.com



REGULATORYISSUES

EU data protection and the Patriot Act explained

CLOUDY DATA SOVEREIGNTY IN EUROPE




Ian Moyse explains the intricacies of who has access to your company's data
By Ian Moyse, Sales Director, Workbooks

Introduction
When considering cloud computing, the inevitable security questions arise: where are your data centres? What happens to my data? How can I ensure the decision I am making does not expose us to risk? It is worth remembering, however, that if you are using a cloud provider you are likely no longer in exclusive control of your data - and you will not be the one deploying the technical, organisational and people measures that ensure the availability, integrity and confidentiality of the data stored.

If we refer to the Cloud Industry Forum report 'Cloud adoption and trends for 2013', we can see that data security and privacy are consistently reported as the top concerns - and a key hindrance to cloud adoption. Business trust in the cloud is growing, however: according to a recent Attenda survey of 100 CIOs and IT directors, 87 per cent of respondents said that they have more trust in the cloud today compared with a couple of years ago.

Understanding local and EU data legislation, and any vertical legislation affecting your sector, is key when it comes to making educated choices about which cloud platforms and vendors to consider and use. Example considerations are the European Union's Data Protection Directive of 1995 and the UK-enacted Data Protection Act of 1998. The EU directive requires all EU member states to protect people's fundamental rights and freedoms and, in particular, their right to privacy with respect to the processing of personal data, which includes the storing of data. The EU legislation also directed that personal data should not be transferred to a country or territory outside the European Economic Area except to countries deemed to provide an adequate level of protection.

Infoburst: Data flowing across Europe - where is your company's data now?

Meanwhile in the US...
In the US, meanwhile, the Department of Commerce in 2000 created the Safe Harbor framework to ensure organisations put appropriate controls in place for the protection of European and UK companies' data that may be stored in the US. The Safe Harbor directives consist of seven rules established specifically for US companies to comply with EU data storage directives. The 'safe harbor' approach, which allows data on EU subjects to be moved out of EU territories, does not have the level of adoption you may think, even if you did decide it covers your needs. It is also worth noting that many US cloud firms have not signed up to Safe Harbor and the liabilities it might entail for them.

There has been much discussion recently about storing data in the US or with non-European cloud firms, much of it driven by the realisation that the United States can use the Patriot Act to access European citizens' data without their consent. Since the issues around US-stored cloud data and the Patriot Act's lack of alignment with the Safe Harbor principles came to light, European bodies have been revising and updating the data protection laws that apply to all 27 European member states - and this situation is under review as this article is being written.

It is, however, your data that you are placing into the cloud - and according to the lawyers and the data protection laws, this means that you are responsible for that information. You are, by default, the data controller and must choose a cloud provider that guarantees compliance with data protection legislation. The problem here is that Microsoft, Google, Amazon, Salesforce and any other US-based organisation have to comply with local US laws, meaning that any data housed, stored or processed by a US-based company is open to inspection and interception by US authorities without notice to, or the permission of, the non-US company whose data is hosted on their systems.



Concerns about cloud adoption, by number of employees and by sector (percentage of respondents):

Concern                                                       | Total | Fewer than 20 | 20-200 | More than 200 | Public | Private
Data security                                                 | 82%   | 77%           | 85%    | 82%           | 89%    | 78%
Data privacy                                                  | 69%   | 79%           | 63%    | 68%           | 70%    | 69%
Dependency upon internet access (availability and bandwidth)  | 51%   | 53%           | 52%    | 48%           | 49%    | 52%
Fear of loss of control/manageability                         | 46%   | 35%           | 33%    | 39%           | 25%    | 42%
Confidence in the reliability of the vendors                  | 36%   | 35%           | 33%    | 39%           | 25%    | 42%
Data sovereignty/jurisdiction                                 | 33%   | 37%           | 31%    | 30%           | 30%    | 34%

Only asked of respondents whose company has hosted or cloud-based services in use.

Office 365
During Microsoft's Office 365 launch, Gordon Frazer, Microsoft UK's managing director, admitted to the ZDNet newswire that the Patriot Act can be invoked by US law enforcement to access EU-stored data without consent. The UK MD also admitted that Microsoft would comply with the Patriot Act, as its headquarters are based in the US. While Microsoft has since stated it would try to inform its customers before this should happen, it has also said that it could not guarantee this.

This could illustrate why, in the Cloud Industry Forum cloud adoption outlook report cited above, 47 per cent of UK organisations wanted their data stored in the UK. It also perhaps reflects a sense of national law being perceived as providing a higher level of comfort for users.

The cloud is too important a technological offering to ignore and, whilst there are undoubtedly a number of considerations to address, none are insurmountable; cloud technologies offer a great benefit when used in the right areas and for the right reasons.

Add to this the ongoing revelations, mostly around the US, that have raised awareness of the general issues, and you can see why 'buy British' (or 'buy EU') has been hot on the lips of cloud experts recently. We have also seen the NSA and Prism stories illustrating how US officials have been casting their spying nets far wider than was previously expected. And quite recently we have seen a US judge ordering Microsoft to hand over foreign data it stores (in this case in Ireland) to the US. The logic of the court in this case is that, because the US-headquartered software giant controls the data it stores overseas, its foreign subsidiary companies are held to be subject to US laws. Microsoft has already appealed this ruling once - and lost. If the ruling continues to be upheld, US cloud vendors will face an increasingly emotive challenge of UK/EU customers looking to use European-headquartered cloud firms as a priority over the traditional US giants.

As a client you should select a cloud provider that guarantees compliance with EU data protection legislation, and many articles have suggested going further if dealing with a US vendor. Suggestions include the recommendation that you should verify that the cloud provider will guarantee the lawfulness of any cross-border international data transfers of your data. Some even go as far as to suggest that you ask the US vendor that is providing cloud services to you in the EU to state clearly in their terms with you that “under no circumstances will the data you provide us leave the EEA, even from a request under the US PATRIOT Act.” Having said this, however, our observations suggest you will likely find that this is something none of the sensible US firms can - or will - agree to.

Questions, questions
So can you use a US cloud provider? The answer here is yes. Should you do due diligence and consider your position now on how you feel about your data being held in the USA, or held on EU soil by a USA provider? The answer here is also yes. The message is that you should perform your due diligence, ask the questions of the cloud providers under consideration and then make an educated decision on the acceptability of the answers you get, from both a legal and an emotive perspective.

You need, however, to ask questions such as: where will my primary, secondary and backup data be held? Under what jurisdiction do you hold data legally? And under what jurisdiction is the contract of service held? If a provider avoids giving open and clear answers to your questions, then amber warning flags should go up as to why they are not giving you clarity on these points. More importantly, you may be moving data outside the EU for which you hold legal responsibility to ensure the data is protected and secured as if it were in your region/EU. Without a contract it would be deemed, should any failure occur, that you had not performed your own due diligence and that you would be liable for breaking EU and UK data protection laws. A clear understanding of your liabilities as the data controller is required, and you should not fall foul of easy sales promises. The bottom line here is that, if it sounds too good to be true, then it probably is.
www.workbooks.com


Out of the box vs. outside of the box? Your organisation is different from your competitors, so why assume you have the same cloud needs?

Secure. Private. Personalised. Get a cloud solution that puts you in control of your business’ future. Find out how

www.dimensiondata.com/ukcloud


OPINION

The G-Cloud is now coming of age

THE SINGLE LARGEST IMPROVEMENT TO G-CLOUD IS ON ITS WAY

Introduction
The Cabinet Office's decision to revamp the search functionality of the CloudStore is a positive step in addressing concerns surrounding the G-Cloud framework. Recently, Cabinet Office minister Francis Maude announced that public sector organisations have now spent £175m procuring IT services through the framework, almost doubling the spend since January this year. Central to this growth has been G-Cloud's ability to listen to feedback from suppliers and public sector departments, and make improvements where necessary. There are lots of changes coming in the new Digital Marketplace, but the single most important is the planned improvement in search.

The buyer's side
From the buyer side, what you really want from the CloudStore or the Digital Marketplace is to go to one procurement tool and use it to find the services you need, create your long-lists and short-lists, and then buy the service from the supplier. Our expectation of how accurate and simple a search process should be comes from our use of search engines like Google. In fact, we have heard a lot of feedback from G-Cloud buyers that, in order to find services, they have to go outside the CloudStore to search for services in a search engine, then go back into the CloudStore and find the suppliers by name.

We have tested the CloudStore to see if customers will be able to find our services. If you search for the word “backup,” for instance, you would expect the first results you are presented with to be backup services. Until now, that hasn't been the case. The results you actually receive might be an infrastructure or platform service, but one that includes “backup” in the description.


The improvements to functionality will be significant. The new Digital Marketplace, which will replace the current CloudStore for G-Cloud 6, will prioritise titles and descriptions, so users will start to get the right results. In addition, the Digital Marketplace promises a better, more user-centric approach to search altogether. The new search technology will make the whole process much more user-friendly, allowing users to search more quickly and easily and, most importantly, providing them with the most relevant results.

We think it's great to see these changes happening; now the key is to ensure a continued level of improvement across all areas of the framework. The latest figures reported by Francis Maude are promising, but for the public sector to continue to make the best use of the services available through G-Cloud, other concerns need to be addressed too. Education is still seen as the biggest barrier to cloud adoption by the public sector. More needs to be done by central government to educate local authorities and councils on the benefits of buying services through the framework.

A major task
Making G-Cloud really work is a big task, but it's encouraging to see that steps are being taken to address quick wins like search that will deliver big impact quickly. The framework is maturing and both customers and suppliers are beginning to see the benefits. It's still very much a learning process and further investment is needed to break down the last remaining barriers.

Databarracks has confirmed it has been selected onto the G-Cloud 5 framework, with services available in Lots 1, 3 and 4: Infrastructure as a Service (IaaS), Software as a Service (SaaS) and Specialist Cloud Services.
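To make the idea of prioritising titles and descriptions concrete, here is a brief, hypothetical sketch of field-weighted ranking in Python. It illustrates the principle only; it is not the Digital Marketplace's actual implementation, which has not been published.

```python
# Hypothetical sketch of field-weighted search ranking, where a match in a
# service's title counts for more than a match in its description.
def score(service: dict, query: str) -> int:
    q = query.lower()
    title_hits = service["title"].lower().count(q)
    desc_hits = service["description"].lower().count(q)
    return 3 * title_hits + 1 * desc_hits  # weight titles above descriptions

services = [
    {"title": "Online Backup", "description": "Managed backup for servers."},
    {"title": "Compute Platform", "description": "IaaS with optional backup."},
]
# Rank so that genuine backup services appear before services that merely
# mention "backup" in passing.
results = sorted(services, key=lambda s: score(s, "backup"), reverse=True)
```

With weights like these, a genuine backup service outranks an infrastructure service that merely mentions “backup” in its description - exactly the failure mode described above.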

Peter Groucutt welcomes changes to search functionality as Databarracks is confirmed on the G-Cloud 5 framework...
By Peter Groucutt, Managing Director, Databarracks

The G-Cloud
The UK Government G-Cloud is an initiative targeted at easing procurement, by public sector bodies in departments of the United Kingdom Government, of commodity information technology services that use cloud computing. The G-Cloud consists of:
• A series of framework agreements with suppliers, from which public sector organisations can call off services without needing to run a full tender or competition procurement process
• An online store - the CloudStore - that allows public sector bodies to search for services that are covered by the G-Cloud frameworks
The service began in 2012 and has had several calls for contracts. By May 2013 there were over 700 suppliers, over 80 per cent of which are small and medium enterprises. £18.2 million (US$27.7 million) of sales were made by April 2013. With the adoption of the Cloud First policy in the UK in late February 2014, sales have continued to grow, reportedly hitting over £50M in February 2014. These are based on procurement from some 1,200 providers and 13,000 services, including both cloud services and (professional) specialist services, as of November 2013.
Cloud computing caused a step change in the way information systems can be delivered. Given this, the UK Government initiated the G-Cloud programme of work to deliver computing-based capability (from fundamental resources such as storage and processing to full-fledged applications) using cloud computing.
www.databarracks.com


CASESTUDY

How creative industries are embracing the cloud

EXPONENTIAL-E'S CLOUD GIVES THE UK A CREATIVE CUTTING EDGE

Introduction
From theatre to filmmaking, the UK's creative industries have long been lauded as among the world's best. Visual effects (VFX), in particular, has in recent years evolved into a key sector of the British film industry, delivering work that is the envy of the rest of the world. Much of today's success has come on the back of the skills developed during the production of the eight Harry Potter movies, the steady flow of work bringing stability and growth as well as innovation. Today, movies such as Gravity and Inception are the result: cutting-edge in ambition and execution, and laden with awards.

Needless to say, the computing power required to render high-end VFX is substantial, and in the past this has restricted artists, designers and developers. The processing capabilities needed for animation and motion graphics meant that those working on VFX were tied to the desktop, confined to a single location where enough power could be delivered. This meant a traditional studio set-up using lots of powerful but expensive desktop workstations for creative work - assisted by racks of servers for rendering, cooling and storage, as well as asset/project management.

The VFX industry
But now, cloud computing is beginning to transform Britain's VFX industry, helping it maintain its position ahead of the chasing pack. The power required by high-end graphics applications can now be hosted in the cloud, delivered as a service to virtualised desktops powered by NVIDIA GRID technology. Where once VFX workers were tethered to a workstation, the power of cloud means they can now work remotely.

One UK company using cloud to do just that is Jellyfish Pictures, the BAFTA Award-winning London studio. Founded in 2001, it now employs over 100 artists across two locations, and has worked on BBC productions such as Doctor Who and Line of Duty, as well as Sky 3D's Natural History Museum Alive, with David Attenborough.

Exponential-e details how cloud computing is giving UK PLC a positive boost...

Using the cloud, the company's employees can log onto Jellyfish's network from computers, laptops or other devices in any location. They can utilise a persistent virtualised desktop with all of their usual computing power, software applications and intensive creative tools. From there, artists and editors can work as if they were using a physical workstation, and also have asset management and storage taken care of too.

Jeremy Smith, CTO of Jellyfish Pictures, explains how virtualisation improves their operational business model: “Cloud is absolutely massive for us. Soho is notorious for its power outages, which adds a complex layer to delivering against Hollywood service level agreements.”

“We can now wave goodbye to the restrictions of desk-bound workstations and virtualise our power-hungry, intensive graphics production both securely and privately. We no longer need to be concerned with power consumption, server storage or cooling, as everything is virtualised remotely,” he explained.

The on-demand nature of cloud computing means that IT infrastructure is flexible, scalable and capable of adjusting to increased user demands. Almost all VFX work is project-based, which means that resource requirements fluctuate dramatically. And although the UK is very much at the heart of the VFX world, collaboration with partners and colleagues in dispersed locations is also a prerequisite in today's media industries. “Cloud provides the flexibility to meet any processing demands, anywhere in the world,” says Smith.

infoburst Cloud computing is beginning to transform Britain's VFX industry, helping it maintain its position ahead of the chasing pack. The power required by high-end graphic applications can now be hosted in the cloud - including the now infamous Samsung Bear.

www.exponential-e.com



OPINION

Why it's time to take IT action
Colin Curtis discusses how e-sustainability is now a board-level issue...
By Colin Curtis, Head Of Sustainability, Dimension Data

SUSTAINING BUSINESS THROUGH ICT

infoburst E-sustainability: “Ignoring carbon emissions will not make them go away. With a growing global population of over seven billion and rising global temperatures, reducing carbon emissions is an increasing concern for governments and businesses alike.”



Introduction
These are exciting times we live in. New and innovative technologies are changing the way people communicate, conduct business and live their daily lives. With the expectation that the next development is just around the corner, technological change is constant, and its benefits - cost savings, efficiency and productivity - well documented. However, with the swift adoption of new technologies, businesses are struggling to acknowledge, measure and manage the associated carbon emissions.

According to a recent CDP (Carbon Disclosure Project) and Accenture report, there was an average 22 per cent drop in green investment in 2013. Why? Because, as Ian Marchant, the former boss of SSE, owner of Scottish Hydro Electric, once said: “Our whole economy was built upon high carbon consumption, a path that has undeniably put the world under stress. Businesses and organisations are looking at what disruption it will cause [to lower emissions] rather than what opportunities it can give them.”

Ignoring emissions
Ignoring carbon emissions will not make them go away. With a growing global population of over seven billion and rising global temperatures, reducing carbon emissions is an increasing concern for governments and businesses alike. In fact, to help meet aggressive carbon emission targets set by the European Union, the UK government announced in 2012 that businesses will be held to account: those listed on the Main Market of the London Stock Exchange now have to report their levels of greenhouse gas emissions.

These sustainability issues and government policies will challenge current business models in the coming years, creating both risks and opportunities. ICT will play a pivotal role in helping seize the opportunities and limit the risks. Specifically, businesses should look at three key areas for putting ICT to use to transform their operating models and reduce their carbon footprint: travel, energy and waste. According to the SMARTer2020 report by the Global e-Sustainability Initiative, ICT could cut 16.5 per cent of global business-as-usual carbon emissions in 2020 - saving up to $1.9 trillion in gross energy and fuel costs. These carbon savings are almost seven times the size of the ICT sector's own footprint, indicating the pivotal role that ICT can play in the transition to a low-carbon economy.

Areas for improvement
Businesses need to put a sustainability strategy in place that uses ICT solutions to lower costs, minimise environmental damage and benefit society. Starting with three key areas - travel, energy and waste - businesses can transform their operating models and reduce their carbon footprint, creating a better workplace for employees and contributing positively to the communities in which they operate.

Firstly, businesses must understand where the majority of their carbon emissions come from:

Direct emissions: these come from fuel and refrigerants, such as gas and generator usage, air conditioning, data centre cooling systems and company-owned vehicles.

Indirect emissions from purchased electricity: these can come from any bought energy source, such as electricity in offices and operated data centres.

Other indirect emissions: this category covers everything external to the business, including business travel - air, public transport and privately owned vehicles - and electricity used in co-located data centres.

Unifying dispersed businesses through communication
Businesses need to focus on connecting employees with one another - and with their customers and partners - in a way that quantifiably reduces the cost, time and carbon associated with travel. Improving collaboration amongst dispersed global teams through unified communications and collaboration (UCC) and visual communication solutions allows businesses to avoid needless spending on travel. Implementing a UCC development model can help a business create a roadmap to guide it from its current UCC state to meeting its future strategic and operational requirements. This allows a business to conduct meetings with minimal effort and complexity.

The next step is for businesses to understand the way in which company, customer and partner meetings are carried out - this is vital in grasping how travel can be reduced. Analysing existing meeting trends in a business provides a roadmap for how collaboration and visual communications can be used to reduce cost, increase productivity and improve sustainability.

Decreasing energy levels
With a data centre consuming up to 100 times more energy than a similar-sized office, IT departments can make significant energy savings in their own operations and provide systems that reduce office energy costs. Businesses should consider moving to virtualised IT environments to simplify management, reduce space and increase energy efficiency. They also need to optimise the efficiency of the data centre, focusing on computing architectures and power and cooling mechanisms.

Small steps
By taking these small steps, businesses can reduce their impact on the environment as well as help their customers do the same. Whether it's through UCC, virtualised networking or ethical disposal of e-waste, these changes can make the difference between an energy-hungry company and a sustainable pioneer.
www.dimensiondata.com



INFRASTRUCTURE

How the cloud and SDN can help in a world of IT self-service

THE CLOUDY HORIZON FOR SOFTWARE DEFINED NETWORKING

Introduction
The idea that IT departments are increasingly acting as 'service brokers' has been a hot topic in recent times, but a change of focus is putting the 'self-manage' cloud at the forefront. Having a unified platform that is managed centrally and gives developers freedom helps prevent people internally taking things into their own hands, and is an attractive proposition for those concerned with matters of compliance and regulation. The concept is that the outsourced IT supplier provides cloud services, but the organisation ultimately retains control over its IT. As a result, we are now seeing hybrid cloud models being embraced that play very well into the networking arena, which is all about security, compliance and keeping a handle on workloads.

However, organisations intent on taking too hands-on an approach to networking (when it comes to cloud) may struggle to anticipate future requirements in a rapidly changing market. Engineering infrastructure for short-term requirements can be highly counterproductive. Most companies have a private cloud or some configuration of hybrid public cloud; what they really need to look at is what percentage of workloads, over a period of time, will be possible to outsource, and then plan a network around the findings. They need to look at overall cost, security and compliance and, as long-term contracts diminish, look at 'on demand' services.

The next big thing is SDN (software-defined networking). We have already had the SDDC (software-defined data centre) where, without having direct access, businesses can set up an entire data centre. This is made possible through self-service/managed services through a cloud vendor, where the customer simply swipes a credit card and can add on services in terms of storage or infrastructure.


The missing link
What has been missing is the critical connection to the actual network itself. When it comes to risk and compliance, this is where IT departments are able to get to the crux of the real issues that concern them: the security and protection of their data. What we are now seeing is that SDN is giving IT departments a greater grasp of the networking itself, so they can self-manage the technology. Implementable compliance rules and procedures are what is needed, and they are what the major vendors are starting to bring out.

This approach fits very nicely with the hybrid scenario of private and public cloud, where users can set up rules using the SDN approach to make a more secure and stable connection between the data centres they have and the cloud they are looking to use as well. There is always going to be a lot of discussion about the right and wrong ways of going about cloud and networking. The future lies with IT providers that can combine their networks and create an effective cloud offering.
www.centurylink.com
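As a sketch of the kind of self-managed rule being described, consider a minimal, hypothetical policy model - deliberately vendor-neutral, since real SDN controllers expose far richer APIs than this - in which traffic between the private data centre and the public cloud is only permitted over an encrypted link:

```python
# Hypothetical sketch of declarative, SDN-style policy rules. Real
# controllers expose richer APIs, but the principle is the same: the
# tenant declares intent, and the controller programs the network.
from dataclasses import dataclass

@dataclass
class Rule:
    src: str         # source zone, e.g. "private-dc"
    dst: str         # destination zone, e.g. "public-cloud"
    encrypted: bool  # whether the path is carried over an encrypted link
    allow: bool

policy = [
    Rule("private-dc", "public-cloud", encrypted=True, allow=True),
    Rule("private-dc", "public-cloud", encrypted=False, allow=False),
]

def permitted(src: str, dst: str, encrypted: bool) -> bool:
    """First matching rule wins; anything unmatched is denied."""
    for r in policy:
        if (r.src, r.dst, r.encrypted) == (src, dst, encrypted):
            return r.allow
    return False

assert permitted("private-dc", "public-cloud", encrypted=True)
assert not permitted("private-dc", "public-cloud", encrypted=False)
```

The design point is that the compliance requirement lives in one declarative policy that the controller enforces everywhere, rather than in per-device configuration scattered across the estate.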

William Rabie discusses how IT departments are rapidly becoming service brokers...
By William Rabie, Cloud Strategy and Business Development Director, CenturyLink Technology Solutions

infoburst What we are starting to see now is the demise of long-term contracts; companies now need to look at providers who will give them on-demand, short-term, scale-up and scale-down capability...



CLOUDINFRASTRUCTURE

Data centres - it’s all in the temperature

OPERATING YOUR DATA CENTRE AT PEAK EFFICIENCY

Dave Wolfenden extols the benefits of running a data centre within tolerances

By Dave Wolfenden, Managing Director, Mafi Mushkila

infoburst Operating a data centre - and allied IT systems - within normal operating tolerances is very important.




Challenges
Modern data centres are complex systems that need to operate within tolerances if their owners and/or users are to extract maximum efficiency and the best possible value from the IT infrastructure they contain. Several decades of IT history have shown that, if elements of a data centre - or the entire system - are operated outside normal tolerances, then this has an increasing effect on the efficiency of the hardware and allied systems. Put simply, this situation decreases the time between equipment failures, something that IT professionals and engineers describe as the Mean Time Between Failures (MTBF).

Although it may sound complex, the MTBF of a component or system is simply a measure of how reliable a hardware product or element is. For most IT components, the measure is typically in thousands or even tens of thousands of hours between failures. For example, a disk drive system may have a mean time between failures of 300,000 hours. The MTBF figure is usually developed by the manufacturer or supplier as the result of intensive testing, based on actual product experience, or predicted by analysing known factors.

If multiple component systems start to fail in a data centre, the efficiency of the centre will take a nosedive. In many cases, as these failures compound, the result can be the automatic shutdown of the data centre due to equipment failure. The reason this matters is that, whilst a couple of decades ago a typical data centre was manned on an extended-hours basis - or had local engineers available on call - today's centres are rarely manned and often located on client sites, requiring an engineer visit if something goes wrong.

Systems redundancy
Whilst systems redundancy - which all adds to the expense of the data centre - can help to ensure 24x7 operations even when a single piece of hardware fails, ultimately an engineer will have to visit to swap out and/or remediate the equipment problem. This can be an expensive option where the centre is located remotely. And it gets really expensive in sparsely populated countries, where an engineer's visit may necessitate journey times of several hours - and often involve travel by light plane or helicopter.

It's for this reason that operating a data centre - and allied IT systems - within normal operating tolerances is very important. If tolerances are not maintained, then a partial or complete shutdown can add significantly to the operating costs of the centre, as well as dramatically decreasing client satisfaction levels.
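As a worked example of what an MTBF figure implies in practice, the disk-drive number quoted above can be converted into an expected annual failure count. The 1,000-drive fleet size below is an assumption chosen purely for illustration.

```python
# Worked example: converting the MTBF figure quoted above into an
# approximate annualised failure rate (AFR) and an expected failure count.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_failure_rate(mtbf_hours: float) -> float:
    """Approximate probability a unit fails within a year (valid while small)."""
    return HOURS_PER_YEAR / mtbf_hours

afr = annual_failure_rate(300_000)  # the disk-drive MTBF cited above
print(f"AFR per drive: {afr:.1%}")  # roughly 2.9%
print(f"Expected failures in a 1,000-drive hall: {1000 * afr:.0f} per year")
```

Even a component that sounds extremely reliable in isolation therefore produces a steady trickle of failures at data-centre scale, which is exactly why running outside tolerances, and so shortening the MTBF, compounds so quickly.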

But the effects of running outside tolerances can be subtler than a partial or complete shutdown. The risk of running at higher temperatures, for example, does not so much involve electronic breakdowns as material changes, such as problems with insulation, wiring and connectors. Some older connectors, for instance, will corrode as temperatures start to creep up.

It's worth noting here that higher temperatures do not normally present a problem for the people working in the data centres, because there are no staff: a growing number of centres operate on a 'lights out' or dark basis, so as to reduce their energy footprint as well as their staffing requirements. The downside of this arrangement is that an automated system is far less able than a human member of staff to spot a temperature runaway situation as early as possible. This is especially true where the affected systems are localised, meaning that air cooling - typically in a cold aisle environment - will compensate for a heat problem for some time, until the situation gets out of hand. Being able to minimise the potential for a temperature runaway scenario is, therefore, a significant advantage where unmanned data centres are concerned.

Testing, testing
Whilst there are several players in the data centre testing ecosphere, there are few that can match our company's experience in the load and heat testing specialist space. Mafi Mushkila has been steadily evolving its technology in parallel with the data centre industry for several decades, meaning it fully appreciates - and fundamentally understands - the risks of temperature problems and their effect on operating efficiencies.

In an ideal world, modern data centres have a 20-year lifespan before they need to be replaced, usually for efficiency and obsolescence reasons. When temperature issues raise their ugly head, however, the lifespan is usually reduced - and, by implication, the Opex (operating expenditure) costs of that centre are increased.

It's important here to understand that it does not matter if the design of a data centre is old or new, since most centres have on-board and integrated capabilities as standard features. These systems normally allow remote monitoring systems to control most aspects of temperature and power consumption, but their operation - crucially - presumes that the testing and installation phase has been completed correctly. This situation is similar to the dashboard diagnostic systems in a modern motor vehicle - lights and alarms will sound in most modern cars, alerting the driver if something goes wrong.




These engine and transmission-related diagnostic systems are, however, only as good as their installation process - put simply, if they are not installed or calibrated correctly, then the diagnostic alerts they generate will not be reliable. The same is true for data centre monitoring and diagnostic systems.

It's also worth noting that some IT system vendors offer highly comprehensive warranties on their kit but - to avoid misunderstandings when claims under warranty are submitted - the guarantees can be declared invalid when hardware hits problems as a result of temperature or power runaway issues. These void-warranty situations - whilst perhaps understandable from the vendor's perspective - can add to the Capex (capital expenditure) costs of the IT systems involved. Nor is this a theoretical issue: when we helped to design a data centre for a major bank recently, the potential for void warranties caused by IT systems operating outside normal tolerances was a key issue for the commissioning staff concerned.

One interesting issue is the challenge of operating a data centre on a partial occupancy basis. This may be due to the client wanting to build options for future expansion into the centre - or it may simply be due to an occupant company delaying or cancelling its involvement in the project. Where a data centre is only partially occupied, the operating efficiencies of the centre are rarely anywhere near as good as those of a fully occupied centre, meaning that loading and temperature testing of the systems are all the more important.


infoburst Where a data centre is only partially occupied, the operating efficiencies of the centre are rarely anywhere near as good as a fully occupied centre.

And as the current round of economic issues continues to dog businesses - as it has done over the last six or seven years - there is a significant possibility that elements of a normally fully-occupied, but shared, data centre may be removed, resulting in a partially-occupied centre continuing to operate.

Conclusions
There is a strong need for data centres to operate within tolerances at all times in order to minimise their downtime and maximise efficiencies. Whilst operating a data centre at lower-than-normal temperatures does not have any significant effect on the reliability and efficiency of the centre, operating at higher-than-normal temperatures will usually result in significant impairments to the efficiency of the systems - even where the excess over tolerances is minimal. These efficiency impairments - which can also result in a cascade failure of components and systems as the temperature climbs - can directly affect power efficiencies, as well as ongoing Capex and Opex costs.

Using a reputable, tried-and-tested heat loading system means that companies can fully test a centre at the installation stage, rather than having to expensively retrofit technology solutions when problems start to occur. The best method of testing a data centre is to use a device known as a heat load bank, which is useful for testing the centre's electrical and cooling systems in a controlled environment.
www.mafi-mushkila.co.uk
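As a back-of-envelope illustration of the kind of question a heat load bank test answers empirically, consider checking whether a hall's cooling plant covers its IT load with headroom at different occupancy levels. All figures here are illustrative assumptions, not vendor data.

```python
# Back-of-envelope sketch of the check a heat load bank test makes physical:
# can the hall's cooling plant absorb the heat of the planned IT load, with
# headroom? All figures below are illustrative assumptions only.
def cooling_headroom(it_load_kw: float, cooling_capacity_kw: float) -> float:
    """Fraction of cooling capacity left spare at the given IT load.

    Virtually all electrical power drawn by IT kit ends up as heat,
    so the heat to be removed is taken as equal to the IT load.
    """
    return 1.0 - it_load_kw / cooling_capacity_kw

# A hall planned for 1,500 kW of IT load with 1,800 kW of cooling,
# tested at partial (40%) and full occupancy.
for occupancy in (0.4, 1.0):
    spare = cooling_headroom(1_500 * occupancy, cooling_capacity_kw=1_800)
    print(f"{occupancy:.0%} occupancy: {spare:.0%} cooling headroom")
```

A heat load bank lets this sum be verified physically at commissioning time, at full design load, rather than being discovered only when the hall finally fills up.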


// Cloud Solutions // Business Continuity // Managed Service Provider

SIRE helps businesses make the best use of IT systems to create a competitive advantage. We are an award winning supplier of leading edge cloud technologies, systems and processes. As specialists in Tailored Cloud solutions, we have been providing organisations with reliable, flexible and financially viable IT infrastructure coupled with a robust business continuity plan for over two decades. With SIRE alongside, you are free to get on with running your business, leaving us to make sure your IT infrastructure is protected, optimised and keeping pace with technical and legislative changes.

SIRE’s Cloud Solutions offer reliability and scalability: • Cloud Consultancy • Tailored Clouds • Private Clouds • IaaS and PaaS Providers • Virtualisation • Data Protection

For more information about cloud technology and solutions, please contact one of our specialists on 01344 758700.

www.sire.co.uk

// your essential partner


CASESTUDY

Managed cloud services help to reduce overheads

infoburst FlightDataPeople provide software-based flight data and safety solutions to the aviation industry worldwide...

TAKING FLIGHT TO A SCALABLE CLOUD SOLUTION

How a flight data and safety company tapped the power of the cloud...



CASESTUDY The client FlightDataPeople provide software based flight data and safety solutions to the aviation industry worldwide. Clients use their software products to record and analyse operational safety information. The information processed comes from the on-board aircraft flight data recorders, often referred to the black box, as well as from operational safety and hazard reports submitted by pilots, cabin crew, technicians and ground handlers. Their clients are able to use the software to collect safety related information, conduct investigations, and issue mitigation actions to reduce their operational safety risk, identify emerging high risks areas and to proactively address these before they become more serious.

With the platform operating only from within UK data centres, on Sire’s own enterprise grade IBM server hardware and storage platform, FlightDataPeople know that the data is secure and highly available at all times. Benefits of a VPS The benefits of using virtual private server (VPS) include: •

The solution Understanding FlightDataPeople, their clients and their offering led Sire to propose a virtual private server that could scale both in terms of storage capacity and processing power. This provided a replicated model to be used as the client grew. Secure access to the systems was of paramount importance and had to be achieved without detriment to end user access or experience. Being able to provide both site-to-site VPNs and SSL-VPN connectivity into the servers - for both the end users and FlightDataPeople administrators - allowed Sire to meet the security criteria no matter where the end user was located. In combination with the SSL-VPN, Sire says it was also able to provide secure Web-based access for end users anywhere in the world using any Microsoft Windows, Apple Mac, Apple iOS or Android based device.
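As a minimal operational sketch of the secure-access idea in Python: before pointing users at an SSL-VPN portal, an administrator might confirm the gateway presents a valid, unexpired certificate. The hostname below is an illustrative placeholder, not a real Sire endpoint.

```python
# Check that a (hypothetical) SSL-VPN gateway presents a valid certificate.
import socket
import ssl

GATEWAY = "vpn.example.co.uk"  # placeholder hostname - an assumption
PORT = 443

context = ssl.create_default_context()  # verifies cert chain and hostname

with socket.create_connection((GATEWAY, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=GATEWAY) as tls:
        cert = tls.getpeercert()
        print("TLS version negotiated:", tls.version())
        print("Certificate expires:", cert["notAfter"])
```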


The outcome Built around the client's application, the solution gives the software everything required to perform efficiently, and FlightDataPeople now have a true cloud-enabled platform that is flexible and scalable. As a result, FlightDataPeople have been able to grow their business based on a proven model with a supplier who understands their needs.

With the platform operating only from within UK data centres, on Sire's own enterprise-grade IBM server hardware and storage platform, FlightDataPeople know that the data is secure and highly available at all times.

Benefits of a VPS The benefits of using a virtual private server (VPS) include:
• Cost efficiency - even if you are just starting out, you can get a small VPS, avoiding the risks associated with shared hosting accounts and paying only for what you need
• Security - virtual private servers provide a dedicated operating system environment (OSE) to the client. There is no co-mingling of clients within the OSE
• Control - a VPS gives the client complete access to their environment. If you need a custom software package installed, you can do so without having to wait for your hosting provider to help out
• Scalability - a VPS can grow with your company's needs
• Stability - as you are not sharing the OSE, you are unaffected by other clients' actions

About Sire Technology Sire Technology is a supplier of leading-edge cloud technologies, systems and processes. As specialists in tailored cloud systems, the firm has been providing organisations with reliable, flexible and financially viable IT infrastructure - coupled with a robust business continuity plan - for two decades. Whether you engage Sire to implement a one-off project or to provide ongoing support, the company says that clients will enjoy the same high standard of service. The company's success is founded on the quality of the relationships it develops with its clients. Sire has had ISO 9001:2008 accreditation since 1997. www.sire.co.uk


About FlightDataPeople FlightDataPeople (FDP) started as a Flight Data Monitoring (FDM) service to British Airways and other airlines, using BAFDA (British Airways Flight Data Analysis). The team has extensive safety management, operational, engineering, maintenance and IT experience within airlines. In 2013 the company expanded its products and services to include SMS software, and is the first - and currently only - software provider to produce both FDM and SMS solutions. www.flightdatapeople.com




OPINION

Security is the answer - now what was the question?

CONTROLLING THE

CYBER CHALLENGE

Introduction The UK is in the grip of an online fraud epidemic, with 2014 expected to be a record year for losses - as witnessed by household names like Tesco, eBay, Facebook, PayPal and even various departments of the UK government falling victim to what is perceived to be a new landscape of cyber threats. Organisations need to take notice of a broadening agenda of cyber risks, and be aware that regulatory pressures on businesses are set to increase with the introduction of the new EU data regulation, which is likely to encourage an even more robust compliance environment. The EU legislation proposes significant fines for companies that do not comply with the proposed regulation - of up to 5 per cent of annual worldwide turnover, or €100m - with the possibility for individuals and associations, acting in the public interest, to bring claims for non-compliance. For many businesses, if adopted, these new obligations will require a significant review of existing security and data protection measures, policies and procedures, with training of staff and provision of additional resources - and will go further than the current model of convention when it comes to engaging with cyber risk. Coverage concerns Cyber insurance has existed since the 1990s, but companies were forced to consider coverage limitations when a New York court ruled in February 2014 that Sony's general liability policy would not cover the $2 billion in costs the company incurred from a huge data breach in 2011 involving the online network for its PlayStation game console. The decision highlights two important points. Firstly, because of the insurance industry's continued efforts to limit coverage for cyber claims under commercial general liability policies, most businesses should consider policies specifically written to insure against cyber risks. Secondly, policyholders need to purchase adequate limits of liability for cyber risks. Ironically, Sony had purchased cyber insurance, and its cyber insurer provided coverage, but Sony


Professor John Walker explains why security is integral to the modern IT equation

By Professor John Walker, CTO, Cytelligence

infoburst Once a hacker has breached a company’s security, the number of potential claimants may equal the number of clients the company has...


quickly exhausted its limits of liability defending the class action lawsuits. Once a hacker has breached a company's security, the number of potential claimants may equal the number of clients the company has. Litigation costs resulting from a breach will likely be proportionately high, so it is important to purchase adequate limits of liability. It also pays to work with insurers whose products are best suited to your company's needs, and to negotiate favourable terms, realistic limits and appropriate cover at a fair price. PwC's latest (2014) Global Economic Crime Survey confirms the ongoing impact of cyber crime on business - the volume of detected incidents, says the report, increased by 25 per cent in 2013, and consequent average financial costs were up by 18 per cent. The advent of the GameOver malware variant in June 2014 - and its associated strains of viruses - seems to have particularly focused attention. The GameOver malware, in case you are not familiar with this darkware, has the ability to compromise a business or end user's PC or laptop with an adverse payload. This payload can include one or more of the following actions:
• Remote viewing of sensitive and private files stored on the local PC's hard drive
• Allowing access to information relating to bank accounts and other such online financial transactions
• Sending emails from the system/email account without the owner's knowledge
• Invoking an attached web cam to visually infiltrate personal space and view the locality from afar
• Launching a distributed denial of service (DDoS) attack against other machines and/or organisations from the compromised system
• Activating other attached devices, such as microphones
An associated threat is ransomware, which cybercriminals can leverage to prevent the authorised user from accessing their own files by locking them down with encryption, so locking out the legitimate user. Whilst the attacker may offer the impacted owner the opportunity to pay to regain access to their locked files, there is no guarantee that they will be unlocked once the transaction has been made. These recent attacks are also associated with the distribution of social engineering emails, claiming to be from a bank or a government agency, such as HM Revenue and Customs, urging the end user to go online to check their account, or to claim an outstanding refund. The real purpose of these communications, however, is to capture - and of course abuse - valuable and sensitive data.

Following the well-publicised Stuxnet computer programme - which is considered to have been created by Israel and the US, and which succeeded in infecting and sabotaging Iran's uranium enrichment infrastructure - the SCADA industrial control systems of hundreds of European and US energy companies have recently been infected by a sophisticated cyber weapon operated by a state-backed group with apparent ties to Russia. This powerful piece of malware - known in some circles as `Energetic Bear' - allows its operators to monitor energy consumption in real time, or to cripple physical systems such as wind turbines and gas pipelines at will. According to Symantec, which produces security applications and tools, this particular new strain of virus emanates from Eastern Europe and has all the hallmarks of being state-sponsored. Hackers and cybercriminals are also getting smarter, with an imaginative evolution of criminal techniques. But this state of cyber insecurity is nothing new, and has been a subject of conversation for many years. In fact it was around seven years ago - in a conversation with a UK CPNI representative - that he commented that the perception that cyber exposure was "way over hyped" was precisely the problem: no-one had been willing to listen. These threats were also clearly outlined in a report written some ten years ago by myself, which was, at that time, dismissed in the IT industry as scaremongering.

infoburst Hackers and cybercriminals are also getting smarter, with an imaginative evolution of criminal techniques...

An inter-connected world One of the key challenges with what we now call cyber is the shortage of relevant technical




infoburst This [threat] group is made up of cybercriminals, hacktivists, black/grey hat hackers, some specialist members of law enforcement, and the intelligence agencies plus a very small number of imaginative forward thinking security professionals...


skills. This is directly linked to what would seem to be an inability to recognise or accept the real scale of the threat - an inability that plays into the hands of the criminals and hackers who are harvesting many millions in revenue from their malicious activities. It was US Defence Secretary Donald Rumsfeld who commented about Iraq's armaments capability: "There are known knowns. These are things we know that we know. There are known unknowns. These are things we know we don't know. And then there are the unknown unknowns - the things we don't know we don't know." It is these unknowns - very real and present threats, unseen by the conventional eye of security - that pose the highest degree of danger to today's cyber landscape: a complex, interconnected globe of systems. And in his 2007 book The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb tells of a presentation on uncertainty he was requested to give to the US Department of Defence shortly before Rumsfeld's speech. The core message of The Black Swan is that `unknown unknowns' are responsible for the greatest societal change. In this new-age methodology of presenting information, we have developed a profiled approach which individuals and businesses are happy to accept. We work to an agreed formation of rules, which are based on known knowns. In other words, we have evolved to trust that we have no unknowns in existence, and so work alongside the assertion of endowed and all-encompassing knowledge. In the main, this group is made up of a cross-spectrum of individuals who are comfortable with the status quo and

the accepted sociological order associated with right-minded individuals, and a growing number of security professionals who believe compliance and governance are the road to cybersecurity. On the other side of the divide, there are members of society who recognise that if they can acquire an understanding of the things we don't know - those nuggets of isolated intelligence which, when aggregated or conjoined, manifest as an opportunity for exploitation or compromise - they can underpin a partial or complete objective. This group is made up of cybercriminals, hacktivists, black/grey hat hackers, some specialist members of law enforcement and the intelligence agencies, plus a very small number of imaginative, forward-thinking security professionals. The bottom line is that this is the very methodology, and applied level of thinking, that is exercised by the criminals and hackers seeking out the vulnerable points of presence - or surfaces of attack - to reveal artefacts of potential interest in the form of data leakage. These criminal elements maliciously plan their projects, utilising what is exposed to the public domain to identify rich targets. Once target selection has been achieved, the next step is to find out everything they can about those unknown unknowns - an activity which may be underpinned by a methodology known as OSINT (Open Source Intelligence), and which the Cytelligence platform supports. Professor John Walker, CTO of Cytelligence, is a specialist in the fields of cyber investigations, OSINT, expert witness work, and cyber forensics. www.cytelligence.co.uk
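To make that concrete, the sketch below shows the shape of a first-pass OSINT harvest in Python, aggregating a few public data points about a target domain. The domain and candidate host names are illustrative placeholders, and this is emphatically not the Cytelligence platform - merely the kind of public-domain aggregation the article describes.

```python
# Illustrative OSINT-style sketch: harvest a few passive data points about a
# target domain using only public lookups. Domain and subdomains are placeholders.
import socket
import ssl

DOMAIN = "example.com"                                # placeholder target
CANDIDATE_HOSTS = ["www", "mail", "vpn", "intranet"]  # guessed, illustrative

findings = {}
for sub in CANDIDATE_HOSTS:
    host = f"{sub}.{DOMAIN}"
    try:
        findings[host] = socket.gethostbyname(host)   # public DNS only
    except socket.gaierror:
        pass  # host does not resolve - itself a (negative) data point

# A public TLS certificate can leak further names via its subjectAltName field.
ctx = ssl.create_default_context()
with socket.create_connection((f"www.{DOMAIN}", 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=f"www.{DOMAIN}") as tls:
        cert = tls.getpeercert()
        alt_names = [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]

print("Resolved hosts:", findings)
print("Names exposed in certificate:", alt_names)
```

Individually these data points are trivial; aggregated across many sources, they are exactly the "unknown unknowns" an attacker conjoins into an attack surface.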


When your data centre is your business, partner with the best. Fully manageable, adaptable, scalable, and efficient modular physical infrastructure

Fast and easy-todeploy power and cooling modules

Cloud-enabled, energy-efficient high-density pods

StruxureWare software applications and suites for real-time capacity visualisation

Only Schneider Electric can provide the total solution for energy-efficient availability. Our modular, best-in-class DCPI is only the beginning. Data centre physical infrastructure (DCPI) from Schneider Electric™ enables you to adjust to your unique needs while also right-sizing your entire data centre. From ultra-efficient InRow™ cooling units, to our innovative EcoBreeze™ economiser modules, to scalable three-phase UPS solutions, our DCPI supports your availability and performance requirements. But our best-of-breed, modular DCPI is just the start.

Know how much capacity is available now and in the future. By partnering with Schneider Electric, you can run your data centre more effectively, efficiently and profitably than ever before. Our data centre infrastructure management software, StruxureWare for Data Centers, and its Operation module give you real-time visibility into available capacity and cage management. It also protects uptime, optimises performance and drives your data centre towards optimal efficiency.

Get comprehensive services from a true solution provider. In addition, our data centre life cycle services support you with planning, commissioning, operating, maintaining, monitoring, assessing, and optimising your data centre. Because we provide all elements of your data centre’s physical infrastructure, you’ll know whom to call when there’s a problem, and we can be on-site quickly due to our global presence and wide-ranging service locations. Schneider Electric offers the total solution, giving you stability and peace of mind in today’s ever-changing tech world.

Peace of mind with StruxureWare software and data centre life cycle services StruxureWare Data Centre Operation software and data centre life cycle services make running your data centre easier, more secure, and more profitable than ever before. Why?

> Make faster and better informed decisions on new sales opportunities through real-time capacity visualisation.
> Customisable coverage plans fit any budget and provide support throughout all stages of your data centre's life cycle.


APC by Schneider Electric is the pioneer of modular data centre infrastructure and innovative cooling technology.

Business-wise, Future-driven.™ Want to see what StruxureWare software can do for you? Get a FREE software demonstration and enter to win a Samsung Galaxy Note III! Visit www.SEreply.com Key Code 50052P Call 0845 080 5034 Fax 0118 903 7840

©2014 Schneider Electric. All Rights Reserved. Schneider Electric, APC, InRow, EcoBreeze, Business-wise, Future-driven, Square D, and D-in-a-square logo are trademarks owned by Schneider Electric Industries SAS or its affiliated companies. All other trademarks are the property of their respective owners. • 998-1182344_B_GB • www.schneider-electric.com/uk


INFRASTRUCTURE

Why planning is now an essential part of the cloud process

TOWARDS A MORE EFFICIENT CLOUD PLANNING PROCESS Challenges As an outsourced IT option, cloud computing has an economic imperative that is second to none, with cost savings of between 40 and 85 per cent when compared to conventional `bricks and mortar' data centres. The actual cost saving, of course, is dependent on a number of issues, including whether you want a dedicated (private) cloud resource or are happy to use a shared (public) cloud system. Other factors that influence the price - and therefore the cost savings - include the level and

Karl Robinson explains the strategies that IT management professionals can employ when developing a cloud computing master plan... By Karl Robinson, Chief Commercial Officer, StratoGen

speed of access to the cloud resource, as well as the required `uptime' of the service. The economic imperative is perhaps best compared to the total cost of owning and running a company car, set against the cost of using a rental firm that supplies company vehicles on a pooled basis. The key advantage of storing your organisation's data in the cloud, however, is that your business then pays only for the facilities it actually uses - rather than paying for the cost of data centre resources whether or not you use them

infoburst The economic imperative [of the cloud] is perhaps best compared to the total cost of owning and running a company car, as compared against the cost of using a leased rental firm that supplies company vehicles on a pooled basis...



to their fullest extent. No small wonder a growing number of organisations are moving their data and IT resources over to the cloud. The planning process with cloud computing, however, is the icing on the cake in terms of cost savings. If well executed, a well-planned cloud migration/implementation can mean the difference between truly saving money on the project in its first year and only breaking even during the same period. And these cost advantages are not just a one-off - they recur. Groundwork If you have carried out your groundwork - and other areas of due diligence - with regards to cloud computing, you will almost certainly have realised there are large differences in the cost of the various cloud facilities that are available. These differences are not simply market-driven, but a reflection of the relative lack of maturity of the business models operated by many cloud computing service companies. This is not a criticism, by the way, merely an observation - it is also a market differentiator that allows the clients of cloud services - that's you and your company - to select only those services they truly need, and to pay a fair price for those facilities. It's worth noting, when it comes to price differential factors, that criteria such as the ease (and speed) of access to your data - as well as where the data is physically stored - come into play here, as EU data protection laws often mandate that your company's data must be stored within the confines of the European Union. There is a degree of pragmatism at work here: whilst it is perfectly possible to host one's cloud-based data outside of the European Union, there are regulatory issues associated with this option. Increasingly, for example, many companies are discovering that legal issues such as the US PATRIOT Act come into play. The PATRIOT Act is an Act of the US Congress that was signed into law by President George W. Bush in 2001. The name stands for the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001. Critics of the legislation in the IT sector point to the fact that the Act allows the US government and its many agencies easy access - without a court order - to data held on an IT resource that can be operated anywhere in the world, as long as the owning entity is a US company. This means that, whilst your cloud data may be held in a data centre in Dublin, if the owning company is based in the US, then the US government can request direct access to that data. Some companies also elect to store their data in a UK cloud resource, either for financial regulatory reasons, or for ease of access. Assuming that you have a cloud project in mind, the time then comes to complete the due diligence stage of the planning process, which typically involves scoping out your potential cloud service suppliers. This can range from discussing the suppliers with your colleagues at other companies,

all the way through to requesting references from the supplier's existing clients. This step is actually more complex than it initially appears, as there are a number of security and allied requirements that cloud service providers choose to meet in order to better satisfy their clients' needs. These range from ISO 27001 compliance all the way through to compliance with PCI-DSS rules, a set of security standards that are mandated by the credit card companies before a business is allowed to process credit and debit card transactions.

infoburst The most fundamental question that potential clients ask their cloud service providers (CSPs) is where their data is physically located...

Where is your data? The most fundamental question that potential clients ask their cloud service providers (CSPs) is where their data is physically located. This can be a more complex issue than it first appears, as many CSPs choose to mirror (back up) client data across multiple data centres, except where the client has expressly elected to store their data in a specific territory. Other questions that need asking concern the speed of recovering data and the latency of the cloud service itself. It is no good relying on a low-cost cloud resource if the latency is such that it takes several minutes to start downloading a given set of files or folders, and several days to download all of your data in its entirety. Discussing this issue with your potential CSP will also reveal what levels of redundancy the CSP's data centre resources actually offer. This is an important issue, as a partially used CSP resource is usually a lot more responsive to data download requests than a data centre that is almost completely filled with data.
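A quick back-of-envelope calculation, sketched in Python below with illustrative figures, shows why these recovery questions matter: at a sustained 100 Mbit/s, pulling 10 TB back from a CSP takes roughly two weeks.

```python
# Back-of-envelope data egress estimate - all figures are illustrative.

def egress_hours(data_tb: float, sustained_mbit_s: float,
                 efficiency: float = 0.7) -> float:
    """Hours to download data_tb terabytes at sustained_mbit_s megabits/s.
    'efficiency' discounts protocol overhead and contention."""
    bits = data_tb * 8e12                                  # decimal TB -> bits
    seconds = bits / (sustained_mbit_s * 1e6 * efficiency)
    return seconds / 3600

for link in (100, 1000):  # 100 Mbit/s vs 1 Gbit/s
    print(f"{link} Mbit/s: {egress_hours(10, link):.0f} hours for 10 TB")
# 100 Mbit/s -> ~317 hours (about 13 days); 1 Gbit/s -> ~32 hours.
```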



There may well be an argument to operate a cloud resource on a pooled or shared basis, since any resources that your organisation does not use can then be used by other customers of the cloud facility. This is the traditional cloud computing approach, and differs from the so-called `private cloud' facilities that a growing number of major companies now operate. A private cloud resource is one where the company concerned has full access to - and control over - the cloud computing data centre, meaning that third parties are not involved. Generally speaking, the larger a cloud computing data centre operation is, the better - as is a global portfolio of customers, since this strengthens the case for 24x7 active customer support. There is also the issue of effective SLAs - Service Level Agreements. These are the minimum set of service levels to which the cloud service provider agrees, and are often drawn up before a given cloud service goes operational. In our experience very careful attention needs to be given to SLAs, as there are signs that a few short-sighted CSPs try to include a number of limitations in their standard client agreements, hoping that this will - God forbid - allow them to side-step their responsibilities if something goes wrong.

SLAs are also important when it comes to dealing with what happens to the client's data at the end of the service contract, or in the event that the CSP - for whatever reason - ceases operations, or curtails its services when (and if) it is acquired by a third-party company. This leads us neatly into the questions that the diligent would-be cloud client should be asking of their CSP. These questions are broadly in line with the due diligence questions that a company should be asking of all its IT systems suppliers, and centre on the type and quality of hardware the CSP uses. A growing number of service providers, we have observed, are opting for premium hardware systems for their cloud infrastructure, so increasing the MTBF (mean time between failures) of their systems, and helping to ensure that a given service is as close to 100 per cent uptime as possible. It's worth noting at this point that StratoGen elects to use high-end - and known - vendor IT systems to maximise systems reliability and ensure the highest levels of support possible.

infoburst It's worth noting at this point that StratoGen elects to use high-end - and known - vendor IT systems to maximise systems reliability and ensure the highest levels of support possible...

Criticism This is not a criticism of CSPs generally, as it is important to understand that the nature of a cloud computing service is such that it is perfectly possible to provide multiple redundancies for a given service with only a modest increase in costs. Put simply, this means that the economic imperative of moving to cloud services is rarely affected to any major degree by pricing, so cost cutting is - usually - not an issue for most CSPs, except perhaps those providers operating at the lowest end of the price spectrum.


Conclusions The adage that you get what you pay for applies in the cloud computing space. As with the provision of IT services generally, there is a fine line between operating a cloud services business profitably and at a break-even/loss-making level. This is because profit margins in the CSP world reflect the growing maturity of the industry - as with the IT hardware industry of around a decade ago, services are becoming commoditised to the point where a profitable CSP of today may find its service platform disrupted by new technology players in the future. Unless the CSP concerned has deep pockets, this can cause the service provider either to seek acquisition by a larger company or to focus its services on the more profitable clients. In most instances, this means that smaller users of a cloud resource - through no fault of their own - may find the entry-level price of the service they are using starts to rise, making the cloud service less attractive in economic terms. The good news is that the cloud industry has yet to reach this point on its evolutionary scale, although observers suggest that this issue will raise its ugly head in the cloud computing space sooner rather than later. Having said this, it is possible to prioritise/deprioritise aspects of the various cloud services that a company uses in order to maximise the cloud resource's return on investment - without affecting or degrading the IT systems resources that are available to your company and its clients. Your mileage - as they say - may differ, but our observations suggest that careful planning at all stages of a cloud services project can go a long way towards avoiding many of the pitfalls that early adopters encountered. www.stratogen.net



AUDIO CAST

Why co-location has become a key driver in cloud computing

THE ROLE OF DATA CENTRE CO-LOCATION IN THE CLOUD Steve Gold discusses the key co-location issues with Andrew Jay of CBRE… By Steve Gold, Editor, Cloud Computing World

Introduction CCW recently caught up with Andrew Jay, Head of Data Centres at CBRE, the world's largest real estate advisor, to discuss the role of data centre co-location in the world of cloud computing. The online definition of a co-located IT service is a type of data centre where equipment, space and bandwidth are available for rental to customers. The idea is that co-location data centres provide space, power, cooling and physical security for the server, storage and networking equipment of other firms - and connect them to a variety of telecommunications and network service providers. So why the interest in co-location data centre facilities? Well, depending on who you speak to, outsourcing your IT infrastructure to a specialist third party provides businesses with the tools to react to unpredictable IT demands and manage costs more efficiently. It is also estimated that in some cases such facilities can be between 50 and 80 per cent more efficient in cost terms than retaining an in-house data centre - and that translates to a considerable cost saving for most organisations. According to Jay, who runs the data centre team based in London - providing a centre of excellence service internationally - advice is a key facet of cloud planning. Where people are outsourcing their IT infrastructure, he and his team advise on the best approach to take.


Typical customers So who are the typical customers that consume these kinds of services? Jay says that CBRE started off in the dot.com boom times with Web hosting and other firms. At that time, the company saw a number of financial firms wanting to use improved technology. The majority of clients today, he says, are cloud companies, as well as smaller companies that have traditionally hosted their IT services in-house. By outsourcing, he adds, smaller companies can support a range of services.

Traditionally, Jay says, enterprise customers have looked at full IT outsourcing, so they have gone to the big IT integrators for such services. These services, he explained, take time to procure, with rigid contracts that can be both expensive and complex. As a result of this, he says that CBRE has seen a range of hybrid cloud services arrive. "There has been a popular shift to public and private cloud services. This has shifted demand to global IT infrastructure providers," he said. "The cloud is actually a lot of data centres. The misconception is that the data sits in the cloud, when the data centres actually hold it. The Amazons and Googles of the world are building out huge new data centres to cater for the demand in their services," he added. At the same time, Jay says that demand is also shifting to connectivity-rich smaller players, with the better quality and more efficient IT infrastructures they can provide. Outsourcing data centres Do you think that greater use of cloud will encourage more enterprises to outsource data centre infrastructure? "Yes, I think it will - especially with the adoption of hybrid solutions. Cloud is becoming part of the established solution for enterprises. Some clients are forward thinking, whilst a lot of them like to see what their peers are using," he replied. This, says Jay, encourages competition, with the pricing of cloud services reducing. And it is, he adds, going to become impossible to ignore the cloud option because of this. According to Jay, where organisations are looking at outsourcing their IT infrastructure, CBRE will advise them on the best co-location facilities they should look at. For more on CCW's audiocast with Andrew Jay of CBRE, please go to http://bit.ly/1oafKxA www.cbre.co.uk


THREE PHASE POWER Designed to bring maximum power to your servers, the G4 three-phase range is built to exacting standards to ensure maximum safety for your facility.

Available with: • C13 C19 Locking outlets • C13 C19 Fused outlets • BS1363 UK outlets • Continental outlets • Individual circuit protection per outlet • Overall metering of V, A, kWh, Harmonics, PF.

G4 MPS Limited Unit 15 & 16 Orchard Farm Business Park, Barcham Road, Soham, Cambs. CB7 5TU T. +44 (0)1353 723248 F. +44 (0)1353 723941 E. sales@g4mps.co.uk

Vertical rack Mount

Maximise your rack space: specify mixed-connector PDUs built to your exact requirements to give you just the solution you are looking for.

Horizontal rack Mount

Thermal overload protection or fused outlets mean that you only lose a single socket in the event of a fault, not the whole PDU, thereby removing the risk of a total rack failure.


OPINION

How clients can get better value from SaaS technology

HOW EMBRACING SAAS COULD

EVOLVE YOUR BRAND

John Davis discusses the effect SaaS can have on businesses By John Davis, Managing Director, BCSG

Introduction SaaS represents an opportunity to move customer relationships online and to move the conversation from selling to support. As more and more small businesses move their day-to-day tasks online, brands have an opportunity to move into that space too. Not only can it evolve their services and positioning, it'll also build stronger customer relationships. The place to start is with what you already have. What would be a natural addition to your core offering? What might your customers need that they can't get from you at the moment? This could be accounting and business planning software from a bank. But it could just as easily be IT support, storage capability and document transfer from a printer supplier.

About the author John Davis joined BCSG as Managing Director in January 2011. Responsible for the day-to-day management and leadership of the rapidly growing tech company, he has overseen a period of substantial growth for BCSG, both in financial and reputational terms. John is now spearheading ambitious plans in the US, Australia and Africa, as well as Europe. www.bcsg.com


Answering customer needs By finding out what your customers need, then providing solutions, you show that you’re both listening and responding. We’ve talked about how you move customers over to cloud services in other posts. What is key here is that, by starting from your place of expertise, these services become a natural extension of existing propositions and messaging. And because they’re designed to assist customers in everyday tasks, you step more into their everyday world, which opens up new opportunities to start new conversations. Nailing the customer journey matters. Only once you’ve achieved that with your anchor services, and people are regularly using them, can you then build. If we look at Amazon, for example, the company mastered online book buying. It had a highly functional website that was easy to navigate, delivery options to suit different situations, and a breadth of inventory that made it almost impossible not to find what you were looking for. Trusted supplier In essence, Amazon delivered. And that meant it could sell almost anything because it became associated with a benefit rather than a product.

Your customers should feel like your services are genuinely adding value to their working life – that way they’ll trust your recommendations, whatever they may be. As you add new services, they then give you a reason to talk about your core products, especially if you’re able to integrate them. An example would be a telco that bundles existing products (mobile and broadband) with new services (web conferencing for example, or cloud storage). In this way the messaging comes full circle. And by diversifying, your products will have a broader relevance for your customers, touching on different points of their day and their working life. You can then legitimately initiate ongoing conversations that reflect that relevance, which will be more about help and support than about selling a product. And you can take that conversation online, which is where more and more people want to transact. You might even be able to create a positive story during difficult times and divert attention from your core services. An example comes from Barclays, which was one of many institutions hit by the financial crisis. Just as UK banks were getting a bad press, Barclays coincidentally launched Credit Focus, an online service that helps small businesses get paid on time and reduce the likelihood of bad debt. This was evidence of a big bank doing something to really help small businesses at a time when people really weren’t expecting it. And it generated a pleasing amount of good press. Conclusions If done well, SaaS can help drive growth for businesses. It can create new reasons to talk to people, it can move those conversations online, it can embed the brand in customers’ day-to-day lives. By doing that it will of course increase wallet share. But it will also mean the brand starts to stand for different things, like innovation, support, delivery, agility, and so on. This kind of evolution is hard to measure, but in today’s highly competitive world, it’s increasingly of value.


Launching November 2014 networks Ireland is Ireland's first publication totally dedicated to the subject of network and data communications in the data centre. networks Ireland will reach an audience of over 3,000 individual subscribers on a quarterly basis, delivering them up-to-date information on this fast-paced subject, enabling them to make the best use of available technology and its unlimited opportunities to enhance and grow their businesses.

To subscribe visit www.networksireland.com

The Dedicated Title for the Irish Data Centre Industry


CASE STUDY

Enabling a major 19-country network with the cloud

HOW THE CLOUD BUILT A MULTI-BILLION DOLLAR HOUSE infoburst With an ambitious aim of becoming fully operational in just nine months, the new company wanted to achieve the status of being one of the world's largest real estate funds, operating in 19 countries.


TIAA Henderson Real Estate - and how it tapped the cloud for a global presence


Introduction With an ambitious aim of launching a new global company from a standing start in just nine months, TIAA Henderson Real Estate turned to the cloud to enable success across 19 different countries. Announced by two well-known and trusted global fund managers, TIAA-CREF and Henderson Global Investors, the launch of TIAA Henderson Real Estate (TH Real Estate) combined more than 90 years of global real estate knowledge to provide innovative real estate investment solutions. With an ambitious aim of becoming fully operational in just nine months, the new company wanted to achieve the status of being one of the world's largest real estate funds, operating in 19 countries. The challenge In order to commence trading from April 1 2014, TH Real Estate needed a full business and technology suite built from scratch, including a global wide area network, private cloud facilities in Europe and Asia, virtual desktops and high quality unified communications tools across the world. "The establishment of a global entity from a standing start and with very tight deadlines presents a number of challenges, particularly when up against the traditionally long lead times which come with networks and equipment," said James Whyte, TH Real Estate's acting head of ICT. "We also had a very specific technology strategy we wanted to incorporate for the new business in order to underpin business operations, cater for growth and provide a great user experience," he explained. The areas of focus for TH Real Estate included:
• Effective operation across 19 countries, with the ability to easily increase capacity, in Asia or Europe, according to business demand, and to grow easily into new global markets
• Information security and regulatory compliance across various markets
• Sustainability and a reduced environmental footprint through best-of-breed technology
• Staff flexibility and productivity
In order to realise its business objectives and achieve the desired growth, TH Real Estate underwent a series of phases to select its technology partner and solution, including competitive bidding, technical design, contracting and implementation. "Choosing a vendor that understood the timelines and was willing to partner in not only the delivery but also the vision was essential," said Whyte. "The decision of vendor in the end became quite simple - a global provider with strength in Asia Pacific and the ability to bring together three streams of services across cloud, network infrastructure and communications," he added.

The solution TH Real Estate selected Telstra, drawing on the telecommunications solutions provider's global capabilities, resources and flexibility to quickly implement a new infrastructure from the ground up. Through the provision of managed network, IaaS and unified communication services, Telstra Global helped TH Real Estate launch its business and lay the foundation for strategic growth and business success. Adopting an `agile infrastructure' approach, Telstra enabled TH Real Estate to commence building key elements of the global network and hosting, which enabled the model to be built according to a design that was still forming. A number of the main applications that the new joint venture relies on were also hosted in the cloud. With the timescales as they were, every opportunity to fast-track the delivery of the applications had to be taken, with Telstra providing applications from various vendors' clouds through a Citrix solution to the end users' desktops. "Telstra was unmatched in its ability to align its approach to our objectives and willingness to share the risks in our ambitious plan," explained Whyte, adding that everything the cloud brings to a new business contributes to its success - end users have total mobility, deployment is rapid, costs are scalable and data centres become more energy efficient. "Often stressful and time-consuming day-to-day IT operations are also eradicated, leaving employees to focus on their core competencies and growing the business," he said. The result TH Real Estate's corporate core business applications and data are now housed within Telstra Global's secure IaaS facilities, located in Asia and Europe. Other critical applications are hosted externally but are integrated into the core, allowing for flexibility to add or take away applications to support the business, as well as supporting a distributed business recovery model, which sets it apart. This innovative solution overcame TH Real Estate's key concerns, such as security, disaster recovery and compliance, whilst also minimising the internal resourcing required to manage these systems. The ability to easily increase capacity as the business grows has also been achieved. TH Real Estate is now operating on a global level and is worth $79 billion. The infrastructure provided by Telstra enables it to keep costs to a minimum and provides a service to the end user which allows TH Real Estate staff to focus on their core competency - real estate investment management. www.telstraglobal.com



G-CLOUD

The G-Cloud is now coming of age

IT'S ALL CHANGE FOR

G-CLOUD USERS

John Godwin explains the changes as pan-govt accreditation is phased out By John Godwin, Head of Compliance & Information Assurance, Skyscape Cloud Services

Introduction From the end of July, G-Cloud Framework suppliers no longer need to seek CESG Pan Government Accreditation (PGA), following the recent launch of the new Government Security Classification Policy (GSCP). The GSCP was launched on 2nd April and replaced the previous Government Protective Marking Scheme (GPMS). The six Impact Levels that were previously in use have been superseded by just three classifications: OFFICIAL, SECRET and TOP SECRET. G-Cloud suppliers will now be required to self-assess their services, and buyers will become responsible for selecting the most appropriate cloud services to meet their individual security requirements. Despite these changes, submissions for cloud services that are to connect to the Public Services Network (PSN) will continue to require PGA involvement and will need to be submitted to the PSN Authority (PSNA). It's worth noting that current PGA accreditations remain valid for a year following their date of issue, so these are likely to remain in use well into 2015. A combination of these activities means that the robustness of PGA checks will continue to provide valuable assurance to those organisations with the highest of data security needs.

Changes The more optimistic of suppliers (including many SMEs) are likely to view these accreditation changes as good news, as the previous PGA process required specialist knowledge, was time-consuming and was often very expensive to complete. On the other hand, the new self-assertion approach has the potential to confuse public sector buyers, who are now solely responsible for determining the security controls needed to deliver the most appropriate protection to their valuable data. Many are likely to find this a significant challenge without being able to refer to the previously respected accreditation system. Without more comprehensive guidance, the buying community is unlikely to be equipped with the competencies needed to confidently identify an appropriate supplier who meets their organisation's specific data security requirements. This could have two potentially adverse consequences: buyers may misjudge their selection, resulting in inadequate security for their data, or the absence of one trusted accreditation system may put them off altogether, with companies potentially abandoning cloud services in order to mitigate the risk of making a wrong decision. From the point of view of reputable and security-conscious suppliers, these changes present the challenge of demonstrating to a potentially confused marketplace how their security credentials will protect data. Most concerning of all in this transition period is the risk that some cloud suppliers could make unsubstantiated claims regarding their security capabilities (either inadvertently or intentionally), which could increase the risk of security breaches occurring. The government has done an excellent job in encouraging the adoption of assured cloud services within the public sector via the G-Cloud Framework, but given these recent changes, all suppliers now need to play their part in maintaining the security track record to date. www.skyscapecloud.com



High Performance, High Density Excel MTP solutions are designed to support high-speed, next-generation Ethernet applications; they also offer patch panel density of up to 120 cores in 1U of rack space. Market-leading performance is assured through the use, as standard, of US Conec Elite low-loss connectors.

Want to save space, time and money?

Contact us +44 (0) 121 326 7557 sales@excel-networking.com www.excel-networking.com


OPINION

Why advanced IT will always be integral to the cloud

NETWORK INFRASTRUCTURE IS CRITICAL

TO PUBLIC CLOUD SERVICES Rick Stevenson explains the underlying complexities of the public cloud By Rick Stevenson, CEO, Opengear

Introduction Cloud services depend on complex network infrastructure. Software-as-a-Service (SaaS) and other cloud offerings have the potential to streamline IT and save organisations tons of money. But for all of their promise, these cloud computing services are still held back in many cases by concerns about security, as well as performance bottlenecks. For example, some cloud providers do not have sufficient network infrastructure to handle demand. As such organisations expand their operations, they should consider cellular out-of-band management tools to ensure that IT assets can handle assigned workloads. Irrespective of whether an organisation opts for private, hybrid or public cloud - or even some kind of SaaS - underneath the sales model is a data centre and complex network infrastructure that needs to deliver the service. The first generation of cloud services were all about defining the core technology and getting customers signed up. Pioneers like Amazon Web Services have done well, but as cloud platforms mature, service providers are now seeking to differentiate themselves through additional offerings layered on top. Value-added services such as global load balancing, collaboration tools, security monitoring, enhanced service level agreements (SLAs) and managed infrastructure services are all potentially available from cloud service providers keen to attract and retain customers. In a market where margins are falling as competition heats up, service providers are looking at ways of improving service delivery and reducing costs. Within these broad categories, key drivers include a desire to reduce complexity at an operational level, which has a massive impact on an ability to run services with very lean human resources. Considering that data centre capacity, electricity costs and rack design are pretty much


infoburst In a market where margins are falling as competition heats up, service providers are looking at ways of improving service delivery and reducing costs...



a fixed cost, allowing fewer staff to accomplish more is a major benefit to the bottom line. In addition, as technical teams gain better visibility and control over the data centre and network elements underpinning the cloud, the business is able to offer tighter service level agreements. These visibility and control drivers will also help underpin the delivery of new cloud services, while offering a better method of dealing with outages and failures. Clouds will still fail Big cloud failures are messy, public affairs. In August 2013, a hardware failure at Amazon's US-East data centre in North Virginia led to spiralling problems for its customers, which include Instagram, Vine, and AirBnB, a popular accommodation booking service. The incident, which lasted 49 minutes, was traced to glitches with a single networking device that resulted in data loss and corporate embarrassment. There have also been incidents at Azure and other cloud providers, and the reality is that hardware fails - or, worse still, partially fails, making it hard to track down the fault. The fact that Amazon was able to track down the issue within potentially thousands of network elements in under an hour and remediate the problem shows remarkable skill. For cloud service providers and customers, the infrastructure has not always got simpler: other than the pure-cloud start-ups, the majority of enterprises adopting cloud are in fact running a hybrid model. According to a 2013 survey by the Cloud Industry Forum, although 78 per cent of companies already use cloud services for at least one application, 85 per cent of companies still operate their own data centres or on-premise hardware. Furthermore, three quarters of these enterprises use their own in-house IT staff to manage these cloud environments. Hybrid environments, where cloud is just part of a service delivery, make sense for a number of reasons, such as compliance with certain industry regulations, and simply because some organisations want to keep applications in-house for competitive advantage. Despite this, the tools for managing the underlying network elements that underpin these services also need some consideration.

infoburst As cloud providers begin to offer end-to-end managed services, there is also a need to manage end-point devices, in particular customer premise based switches and firewalls...

Irrespective of whether you are an IT manager charged with delivering a public, private or hybrid cloud, the worst case scenario is not the failure of a network switch. In a well-architected environment, there should be no single point of failure that shuts down the environment completely. For many, the worst case scenario is a fault that goes unnoticed until it brings down multiple systems - and, worse still, kills the underlying network that allows network admins to communicate with devices and implement a fix. For example, a patch to core switches which seems fine, but then fails when a certain network condition occurs and `bricks' the device. In this case, the device needs to have its firmware rolled back to a known good version. Unfortunately, in this scenario the network is down and the device, wherever it resides, is unreachable by normal means. Accessibility is vital The case for deploying an out-of-band (OOB) management strategy is an easy one to make, and more so for multi-tenant cloud services. The first reason is to simplify the management process and to increase the effectiveness of staff charged with looking after the underlying data centre infrastructure. This includes ensuring that the infrastructure does not have a single point of failure, especially at the critical management layer. As cloud providers begin to offer end-to-end managed services, there is also a need to manage end-point devices, in particular customer premise based switches and firewalls. However, the economic viability of having to send out a field engineer to fix an on-premise problem becomes unfeasible for a lean cloud-based service provider. In many instances, a simple power cycle or reboot will solve many issues, and this is an area where a low-cost OOB device communicating back to the NOC via a 3G/4G mobile network quickly pays back its cost compared to a site visit. OOB is also valuable in instances of lights-out data centres, which may well reside within a co-location data centre. In this instance, direct access to the rack may well be impractical, or complicated by physical security issues. An OOB solution, again using mobile network connectivity, offers remote access to equipment in the rack without the requirement to interact with the co-location provider. Fewer tools = more work The cloud revolution has also intersected with a major change in the status quo in the vendor community. The days of Cisco technology dominant at the network layer, IBM and HP at the compute layer and EMC for storage have been disrupted by new entrants. As the underlying infrastructure for clouds becomes denser and more complex, the need to embrace enhanced management tools will grow. www.opengear.com
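As a sketch of the remote power-cycle pattern described above - a NOC operator bouncing a bricked device through an out-of-band path - consider the Python fragment below. The console server hostname, port and command syntax are all hypothetical placeholders, not Opengear's actual interface.

```python
# Sketch of a remote power-cycle via a hypothetical out-of-band console
# server reachable over a cellular link. Host, port and command syntax are
# illustrative assumptions, not a real vendor API.
import socket

OOB_HOST = "oob-gw.example.net"  # placeholder: console server on 3G/4G
OOB_PORT = 4001                  # placeholder: PDU control port

def power_cycle(outlet: int) -> str:
    """Ask the (hypothetical) managed PDU behind the console server to
    power-cycle one outlet, and return its one-line response."""
    with socket.create_connection((OOB_HOST, OOB_PORT), timeout=15) as sock:
        sock.sendall(f"power cycle outlet {outlet}\r\n".encode())
        return sock.recv(1024).decode(errors="replace").strip()

if __name__ == "__main__":
    # Reboot the failed switch on outlet 3 without dispatching an engineer.
    print(power_cycle(3))
```

The point is not the protocol but the path: because the command travels over the cellular out-of-band link, it still works when the production network itself is down.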



INFRASTRUCTURE

How a standards body is taking a different approach

DEFINING BEST PRACTICE IN AN EVOLVING ECOSYSTEM Introduction The CEF faces multiple challenges if it is to succeed in its objectives. Radical challenges encourage radical solutions, and the CloudEthernet Forum (CEF) has decided to develop standards in a way that is very different from the traditional standards body approach. Will the CEF be a standards body? And will it announce a certification program? These were among the first questions raised when we launched the CEF last year, but it was too early to give a definite answer at that stage. We were faced with a set of critical issues and it was up to us to first define those - then agree the best strategy to resolve them. Listening mode The CEF began life in listening mode. Tata Communications was amongst the first to note nascent difficulties emerging when their bigger customers began to seriously scale up their data centres and take them global. We had our ears to the ground for early signs of impending infoburst The CEF has named its emerging set of standards `CloudE 1.0’, where the `1.0’ is a deliberate tag to suggest an evolving standards development process that is radically different from the norm...


James Walker explains how the CEF is evolving By James Walker, President, CloudEthernet Forum

thunder at a time when a wider public was being confidently assured that the cloud was one hundred per cent silver lining. CEF founder members agreed that in time the cloud would be the promised solution, but serious co-operation would first be needed to resolve these issues before the cloud scene fragmented into proprietary silos. A year later, and our mission is already a lot clearer. When the MEF, for example, began developing standards for Carrier Ethernet, its task was to persuade the industry that Ethernet was a wide area network solution, and creating standards was an essential part of that mission. Our situation is very different, because cloud services are already forging ahead at full steam, requiring sound rails to be laid ahead to make sure the market does not come unstuck. To achieve this we must integrate three relatively new concepts: Network Functions Virtualisation (NFV), Software Defined Networking (SDN) and Carrier Ethernet. These represent uncharted territory, are highly complex, and are needed under the pressure of rapid cloud deployment.


Cloud computing is as radical and disruptive as the arrival of the personal computer. What happened then was an influx of new personal computers with a whole range of incompatible operating systems. The market eventually shook down with the dominance of Microsoft and the IBM PC, but there were still incompatibilities with Macintosh and Unix, and it was generally recognised that the dominant OS was not necessarily the ideal one for business.

This is what we now risk with cloud computing: there are very good cloud services emerging, but they are not designed to global standards. Once again business will face the problem of trying to integrate its global operations while depending on incompatible proprietary services, however good those services might be. Industry standards will be needed to build the sort of vendor-neutral cloud services that are required to support a truly vibrant global business culture.

Objectives

That is the objective of the OpenCloud project announced in May of this year. The CEF has named its emerging set of standards `CloudE 1.0', where the `1.0' is a deliberate tag to suggest an evolving standards development process that is radically different from the norm.

Most standards bodies work on something like a quarterly cycle. They meet and debate towards a consensus on what the required standard should be, then draft versions are shared around and refined until the next version is agreed and approved at the following quarterly meeting. This approach would not work for something as complex and fast moving as the cloud, so CloudE 1.0 will be developed as an on-going iterative process based on a feedback loop involving a Cloud Reference Architecture, a Reference Test Bed (the OpenCloud platform) and a growing set of Use Cases based on real business needs.

Cloud reference architecture

"What exactly is the cloud?" sounds like a naïve question, but it is still being asked. We chose to define our cloud not in abstract terms but by creating a reference architecture containing all the essential components – the interfaces and functions as well as the players – and then building it in real life. One of the first benefits for CEF members is the opportunity to shape this `Cloud Reference Architecture' by contributing elements to it. This casts the membership net very wide: not only NEMs, CSPs, data centre operators and carriers, but also any enterprise anticipating becoming a cloud consumer can become a player, and can now get its hand in to help shape tomorrow's cloud environment to its needs. Membership is already expanding in response. The reference architecture is already well established, but it will continue to be an organic, growing structure – both in response to new developments and as part of an iterative process.

“This provides another interesting opportunity for members – to get a head start by seeing how well their products will perform in a leading edge cloud environment”

Reference test bed

Iometrix president Bob Mandeville is responsible for the creation of a Reference Test Bed that will be very different from a `proof of concept' exercise, as he explains: "Proof of concept is a short term process, whereas we are creating a permanent test platform that mirrors the CEF Reference Architecture and will evolve over time. We are not testing a single concept, but how the reference model responds to realistic use cases reflecting CEF members' actual business needs."

A very simple test scenario could be a virtual machine representing the initial state; the virtual machine is then relocated, or else multiplied into one hundred thousand clones, before finally returning to a single VM for the final state. Can this happen while preserving all the VM's attributes and functions, with the required logical resources (such as connectivity) provisioned automatically? The results of each test feed back data for refining the reference model until it is considered a suitable basis for defining an industry standard.

While the underlying architecture is already taking shape, the test bed is very much a work in progress, with the CEF working with members to supply equipment to represent each component of the architecture. This provides another interesting opportunity for members – to get a head start by seeing how well their products will perform in a leading edge cloud environment.

CloudE 1.0

That is how the CEF will evolve its role as a standards body: reference architecture to test bed; impose use cases; note results and feed them back to the reference architecture until it is seen as a sufficiently stable basis for the next release of CloudE standards. Use cases are already defined for application performance management, cloud security and traffic load balancing, all across multiple providers – with a growing membership keen to suggest their own challenges needing to be resolved. This is very much in keeping with the forum's aim to stay highly relevant to real business needs and not become an ivory tower exercise.

www.cloudethernet.org
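As a sidebar, here is a minimal mock-up in Python of the scale-out use case described above. It is a sketch under stated assumptions, not CEF test-bed code: the VM class, its attributes and the pass criterion are all invented for illustration.

class VM(object):
    # A toy stand-in for a virtual machine and the attributes that must
    # survive relocation and cloning (identity, connectivity, policy).
    def __init__(self, attrs):
        self.attrs = dict(attrs)

    def clone(self):
        return VM(self.attrs)

def run_use_case(initial, clone_count=100000):
    # Scale a single VM out to clone_count copies, then back in to one,
    # and check that its attributes survived the round trip.
    fleet = [initial] + [initial.clone() for _ in range(clone_count - 1)]
    survivor = fleet[0]  # scale back in to a single VM
    preserved = survivor.attrs == initial.attrs
    # In the real test bed, this result would feed back into refining
    # the reference model; here we simply report it.
    return {"clones": clone_count, "attributes_preserved": preserved}

print(run_use_case(VM({"id": "vm-1", "vlan": 42, "acl": "default"})))

A real run would of course also verify that logical resources such as connectivity were provisioned automatically at each step, which is exactly the kind of behaviour the feedback loop is designed to surface.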



OPINION

Why an effective desktop strategy is now a must-have option

DESKTOP STRATEGY:

WHY VIRTUALISATION IS NOT THE ONLY ANSWER

Kevin Linsell explains why a desktop strategy is now a must-have in most large organisations...
By Kevin Linsell, Head of Service Development, Adapt

Introduction

Five to ten years ago, many organisations did not have a desktop strategy at all - a desktop was simply an end-user device, provided based on need and budget. There was often little thought given to the cost and complexity of managing these devices over their lifetimes, and even less to the business impact of these devices not functioning or being lost or stolen.

However, the consequences of losing a device can be serious, particularly if an organisation is subject to formal regulation. Ignoring the obvious data loss and security repercussions, from a pure cost perspective the implications can be significant: in 2007, Nationwide Building Society was fined £980,000 by the Financial Services Authority for the loss of inadequately secured laptops, and in 2013 Glasgow City Council was fined £150,000 by the Information Commissioner's Office for the same reason.

Why a desktop strategy?

Quite simply, the role of the desktop has changed within business. There is a greater reliance on computing for everyday tasks across a wider percentage of employees, and increasing levels of regulation and governance of data security and protection: depending on the industry, there has been a significant tightening of regulations and enforcement regimes, including the Data Protection Act, the Financial Conduct Authority (FCA), the Payment Card Industry Data Security Standard (PCI DSS) and Solvency II, to mention but a few.

Other drivers include technology evolution that has enabled better management and control of end user devices, the recent economic climate forcing businesses to reduce costs wherever possible, and the constant drive to support an increasingly mobile and geographically diverse workforce. The acceptance of mobiles, smartphones and tablets within business, often driven by senior management who have first adopted these premium devices in their personal lives, is also a key driver.


infoburst The acceptance of mobiles, smartphones and tablets within business, often driven by senior management who have first adopted these premium devices in their personal lives, is also a key driver...


Whilst no single major event has driven the need for businesses to create and implement a desktop strategy, a combination of factors, including those mentioned above, now makes a desktop strategy a key part of a business's overall IT strategy.

What does it have to cover?

There is no single correct answer to this, as every business has a different set of requirements and drivers, but typically a desktop strategy will include:

• Deployment of software updates and changes - both security/functionality and user/business request driven
• Hardware break/fix
• Asset management: software licensing and hardware
• Security and compliance: information and data security (physical and electronic)
• Evergreening: the ability to refresh any aspect of the desktop (hardware and software) over time to avoid end of life/support issues
• End user support and assistance: helpdesk, local resources, training etc.
• End device selection, based on a pre-defined internal product catalogue:
  • Technical specifications to meet the business and end user needs - performance, size, weight, battery life, screen size/quantity, power consumption, noise, etc
  • Functional requirements to support disability requirements - screen readers, text to speech, speech to text, braille keyboards, etc
  • Remote and home working policy, including Bring Your Own Device (BYOD) - not just an IT policy but heavily aligned with, or even owned by, HR
• Applications specific to user roles, based on a pre-defined internal software catalogue:
  • Which versions of core applications and operating system/s are supported
  • Availability of, and entitlement to, business needs software tools
  • A roadmap for applications aligned with the evergreening policy and software licensing agreements

The challenge that many organisations face is how best to deliver against these strategic requirements. This challenge is frequently made tougher by an existing broad range of OS, hardware and software and the need to deliver with a static or shrinking budget. Add in the rapidly approaching end-of-life status of many common Microsoft enterprise products - Windows XP, Office 2003, Server 2003 and Exchange 2003 - and the ability to deliver becomes very challenging. One practical step is to hold the product and software catalogues in machine-readable form, as the sketch below illustrates.
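A minimal sketch in Python of such a machine-readable catalogue. The field names, device entries and role requirements are invented for illustration, not drawn from any particular organisation's standard.

# A hypothetical internal product catalogue held as structured data.
CATALOGUE = {
    "std-laptop-14": {
        "spec": {"ram_gb": 8, "battery_hours": 9, "weight_kg": 1.6, "screens": 1},
        "accessibility": ["screen_reader", "speech_to_text"],
        "byod_equivalent_permitted": False,
        "refresh_after_months": 36,  # ties the entry to the evergreening policy
    },
    "exec-ultrabook": {
        "spec": {"ram_gb": 16, "battery_hours": 12, "weight_kg": 1.1, "screens": 1},
        "accessibility": ["screen_reader"],
        "byod_equivalent_permitted": True,
        "refresh_after_months": 24,
    },
}

def devices_for(minimum_spec):
    # Return catalogue entries whose specifications meet a role's minimum needs.
    return [name for name, entry in CATALOGUE.items()
            if all(entry["spec"].get(key, 0) >= value
                   for key, value in minimum_spec.items())]

# Example: which standard devices suit a mobile-worker role?
print(devices_for({"ram_gb": 8, "battery_hours": 10}))

Keeping the catalogue as data rather than a document means deployment, asset management and evergreening tooling can all read from the same source.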

infoburst The challenge that many organisations face is how best to deliver against these strategic requirements. This challenge is frequently made tougher by an existing broad range of OS, hardware and software and the need to do so with a static or shrinking budget...

Won't desktop virtualisation deliver?

Desktop virtualisation, virtual desktop infrastructure and hosted desktops are all terms used to describe the delivery of a desktop from a central location to end user devices. Just as 10 years ago it was not practical to have a strategy that virtualised every server (due to technical compatibility, vendor support, performance and network availability issues), there are similar reasons why a strategy to virtualise every desktop is not viable for most organisations today. Many of those server virtualisation barriers have now been removed as acceptance has grown and vendors have responded to demand with improved products. Desktop virtualisation, however, is late to this party and has not (yet) had the major market acceptance that server virtualisation has enjoyed.

What is clear is that, at a glance, desktop virtualisation could deliver many of the requirements of a desktop strategy. It is important to understand, though, that desktop virtualisation is not always appropriate, for a number of reasons. Some of these will be a major influence on any decision to deploy (or not deploy) desktop virtualisation in an organisation; the sketch at the end of this article turns them into a simple screening check:

• Upfront design and software/hardware investments can be significant ahead of any live deployment to end users.
• The duration of the design, test and deployment phases can cause a lengthy delay to benefit realisation.
• Dependence on network availability and performance can be particularly challenging for some remote locations or mobile workers.
• Licensing complexity and costs, particularly for Microsoft desktop operating systems, threaten any business case return.
• Application compatibility and performance require testing - the more applications an organisation has, the bigger this task.
• Management and deployment of patching, updates and applications is still required, but is often delivered in a different way to traditional desktops.

www.adapt.com
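As a closing sidebar, a minimal Python sketch that turns the caveats above into a go/no-go screen. The field names and thresholds are assumptions chosen for illustration; a real assessment would weigh many more factors and assign costs rather than flags.

def vdi_readiness(site):
    # Screen a site against the common blockers listed above; returns a
    # recommendation plus the blockers found. Thresholds are illustrative.
    blockers = []
    if site["bandwidth_mbps"] < 2:
        blockers.append("network too slow or unreliable for a hosted desktop")
    if site["untested_apps"] > 50:
        blockers.append("application compatibility testing burden too large")
    if not site["desktop_os_licensing_costed"]:
        blockers.append("Microsoft desktop OS licensing not yet costed")
    if site["mobile_workers_pct"] > 60:
        blockers.append("workforce too mobile to depend on the network")
    return ("proceed to pilot" if not blockers else "defer", blockers)

# Example: a well-connected office with a small application estate.
print(vdi_readiness({"bandwidth_mbps": 20, "untested_apps": 12,
                     "desktop_os_licensing_costed": True,
                     "mobile_workers_pct": 30}))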



ARE YOUR COLLEAGUES AS WELL-INFORMED AS YOU ARE?

[Cover shown: NETCOMMS europe, Volume IV, Issue 3 2014, £35/€50 - "Cloud, the next phase: resilient cloud networking revealed"; features on optical fibre, measuring data centre PUE and building Britain's appetite for repair]

NETCOMMS europe magazine is the first, and only, pan-European journal dedicated to the network communications infrastructure marketplace. NETCOMMS europe features news, legislation and training information from industry-leading bodies, application stories and the very latest information on cutting edge technology and products. NETCOMMS europe compiles editorial contribution from worldwide industry figureheads, ensuring that it is the No. 1 place to find information on all aspects of this fast-paced industry.

If you think your colleagues would be interested in receiving their own regular copy of NETCOMMS europe, simply register online at www.netcommseurope.com. And don't forget to renew your own subscription every now and then, to make absolutely sure that you never miss an issue of the most up-to-date publication in the industry!

LGN Media is a trading name of the Lead Generation Network Ltd, e-space north business centre, 181 Wisbech Rd, Littleport, Ely, Cambridgeshire CB6 1RA
Tel 01353 865403 www.netcommseurope.com

