Data Centre Hub Issue 2


DATACENTREHUB Issue 2

The Cat 8 Cabling Revolution ➲ Understanding The Internet Of Everything ➲ The Perils Of Ignoring Load Testing ➲ Improving Data Centre Efficiency



DATA CENTRE SUMMIT 2015 NORTH

Manchester’s Old Trafford Conference Centre

Data Centre Summit North is the first in a series of new one-day conference-focused events, set to take place at Manchester’s Old Trafford Conference Centre on 30th September 2015. DCS will bring the industry’s thought leaders together in one place with the industry’s leading vendors and end users. The focus of the event will be on education, networking and debate, providing an open forum for delegates to learn from the best in the business.

www.datacentreworld.com

30th of September 2015

DATA CENTRE SUMMIT 2015 NORTH Platinum Headline Sponsor

Schneider Electric

The event will also feature an exhibit hall where the industry’s leading companies will show their newest products and services, together with a networking lounge so that you can make connections with like-minded business professionals.

To enquire about exhibiting call Peter Herbert on 07899 981123 or Ian Titchener on 01353 865403

Event Sponsor


| Contents

Case Studies
Cool Runnings

Converged Infrastructure
Understanding The Internet Of Everything

Data Centres
The Perils Of Ignoring Load Testing
The Cat 8 Cabling Revolution

DCIM
Improving Data Centre Efficiency

Regulars
News
Opinion: Reducing the Cost of Cooling
Physical Security


Foreword |


Publisher & Managing Director: Peter Herbert

Design: LGN Media

The views expressed in the articles and technical papers are those of the authors and are not endorsed by the publishers. The author and publisher, and its officers and employees, do not accept any liability for any errors that may have occurred, or for any reliance on their contents. All trademarks and brand names are respected within our publication. However, the publishers accept no responsibility for any inadvertent misuse that may occur. This publication is protected by copyright © 2015 and accordingly must not be reproduced in any medium. All rights reserved. Have Data Centre Hub stories, news or know-how? Please submit to peter.herbert@datacentrehub.com


Welcome to the second edition of Data Centre Hub. As storage requirements continually evolve and expand, the demands for security, power management, connectivity and cooling further increase the expectations placed on the data centre industry and on data centre managers. This issue sets out to look at ways in which data centre managers can find solutions that help run data centres more efficiently. The issue also focuses on Manchester and the state of its data centre industry; Manchester is regarded as the leading digital city outside London. After the successful launch of Data Centre Summit North, Data Centre Hub, in collaboration with LGN Media, is pleased to announce that the second Data Centre Summit event will take place in London on 10th February 2016 at the Barbican Centre. Further information on the event can be found at www.datacentresummit.co.uk. I hope you enjoy the issue and look forward to receiving your comments and articles for future issues. Peter Herbert



News |

All the Latest Data Centre Hub News

Viatel continues European expansion
27 April 2015 – Viatel, one of Europe’s leading telecommunications companies, today announced the connection of two new points of presence to its European fibre backbone network. The demand for bandwidth across key European cities has prompted the connection of Telehouse Metro in London and Paris Equinix 4 to the network. These new sites will enhance Viatel’s wholly owned network infrastructure, which includes more than 150 points of presence and spans Western Europe with direct reach into cities such as London, Dublin, Amsterdam, Frankfurt, Paris and Zurich. The new London and Paris sites are key access points to the Viatel network and enable customers to take advantage of Viatel’s connectivity, voice, data centre and cloud services portfolio. This investment follows a stream of recent developments at Viatel, including a significant infrastructure project between Milan and Marseille. The completed network build is now ready for service and enables Viatel to provide diverse as well as protected wavelength and Ethernet services with speeds of up to 100G from Milan and Marseille. Colm Piercy, CEO at Viatel, comments: “The demand for bandwidth continues to drive our business and we have been working to expand our network reach and capacity significantly over the past year by opening up new routes, adding new POPs to the network and increasing network capacity by adding 100G capabilities across key network routes. This continued investment from Viatel demonstrates our commitment to meet the growing demand for high bandwidth services in London and across Europe.”

DATACENTREHUB If you have any news stories please forward to Peter Herbert. peter.herbert@datacentrehub.com


Zadara Storage expands European availability
March 12, 2015 – Zadara™ Storage, the provider of enterprise-class storage-as-a-service (STaaS), today announced that its award-winning Virtual Private Storage Array™ (VPSA™) platform is now available via TelecityGroup’s Cloud-IX ecosystem. Extending the service’s availability to TelecityGroup’s 39 data centre facilities across Europe through the Cloud-IX platform enables customers to introduce enterprise-grade cloud storage into public, privately hosted or hybrid IT environments. Cloud-IX is TelecityGroup’s software-defined networking platform that enables customers to access multiple cloud services, toolsets and solutions through a single, dedicated, secure connection. The Cloud-IX platform removes the need for customers to perform any network connectivity integration directly, and through an intuitive self-service portal they can simply order, set up and change their Cloud-IX service. Cloud-IX is available throughout TelecityGroup’s European estate of premium data centres, providing direct connectivity to Zadara Storage’s STaaS and leading cloud providers, including Amazon Web Services and Microsoft Azure. The extended service connectivity means that businesses can now meet strict regulatory compliance requirements for data storage through a combination of Cloud-IX’s dedicated, private connection that bypasses the public internet, and Zadara Storage’s advanced file encryption that is controlled by the end-user. This offers enhanced levels of control and security over data stored on the platform. Zadara’s STaaS model

also offers enterprise-class features such as improved service reliability and Quality of Service (QoS), as well as extensive backup and disaster recovery options. Dani Naor, Vice President of International Sales, Zadara Storage, said: “As enterprises take a more data-driven approach to business, storage needs are growing exponentially and businesses are turning increasingly to cloud technology to ensure that their IT infrastructure is equipped to handle these evolving requirements. With advanced cloud-based storage business models running alongside private infrastructure, businesses can enjoy the economies of scale and elasticity of cloud computing while maintaining control of their data and meeting in-country storage requirements. By connecting to Zadara Storage via Cloud-IX, businesses can develop the simple, flexible, and scalable IT set-ups that help drive business innovation.” James Tyler, Chief Commercial Officer, TelecityGroup, commented: “Our Cloud-IX platform continues to build momentum. This thriving and highly connected ecosystem is enabling digital businesses to create truly dynamic, bespoke and innovative IT environments. Zadara’s pure OpEx enterprise platform is another important addition to our vibrant Cloud-IX ecosystem that provides choice and flexibility to our customers. Through direct connectivity to enterprise storage-as-a-service platforms, businesses can access flexible and scalable local storage that meets data sovereignty and compliance requirements.”


SAY GOODBYE TO DOWNTIME THANKS TO DUAL-SIDED LOCKING POWER CABLES!

The zLock dual-sided locking cables simply replace existing power cables. All connected IT devices are well protected against accidental and vibration disconnects and the resulting power supply disruptions, downtime and equipment damage.
• Unique power cable that locks at both ends
• Locks automatically (C13) and via twist lock (C14)
• Requires no mating plug or receptacle
• Prevents power cables from being dislodged
• Increases the reliability of power distribution
• Available in C13 to C14 and C19 to C20
• Choice of different colours and lengths

For a full range of power solutions contact info.uk@daxten.com, +44 (0)20 8991 6200 or visit www.daxten.com/uk/

®


News |

SSE Enterprise Telecoms launches LIGHTNOW in Manchester
High-speed, ultra-resilient networking service now links nine Manchester data centres with 21 in London

London – April 14, 2015 – SSE Enterprise Telecoms – the UK’s leading provider of network infrastructure services and part of the SSE Group – today announced that it is launching Manchester LIGHTNOW, a new high-capacity, ultra-resilient optical networking service providing 1Gb and 10Gb wavelength connectivity between nine of the busiest data centres in Manchester and between dedicated LIGHTNOW data centres in London and Manchester. Manchester LIGHTNOW follows SSE Enterprise Telecoms’ recent announcement bringing seven new Manchester-based data centres on-net and builds on the success of the LIGHTNOW service in the London area. The Manchester data centres will benefit from 1Gb and 10Gb optical wavelengths between commercial data centres, with sub-1ms latency, which can be rapidly provisioned within a week. SSE Enterprise Telecoms is offering customers flexible contract durations which start at three months, a zero-charge set-up option, in-life circuit moves between any of the on-net data centres in real time, and 24/7 support, all at extremely competitive rates. “When we first launched our LIGHTNOW service, we were serving the demand from businesses for high capacity connectivity between data centres in London,” said Colin Sempill, managing director of SSE Enterprise Telecoms. “We could see that increasingly more data centre managers were choosing to use a combination of data centres to best suit the requirements of their data – whether that be high security, maximum availability or the need for speed – but they weren’t willing to compromise on service. These ‘pro-locators’, as we called them, are now emerging in

Manchester too, most likely due to the booming business in the area and resulting increase in data, and they also require faster data centre links both within the Manchester area and to London.” Manchester LIGHTNOW provides meshed connectivity in and around the city and on to London. It follows the original debut of the service in London, where SSE Enterprise Telecoms connects 21 of the busiest data centres. The original LIGHTNOW London service formed part of the extensive network expansion, dubbed Project Edge, which saw SSE Enterprise Telecoms increase the reach of its fibre network to more than 13,700km with a total of 234 points of presence, serving more than 200,000 metropolitan business postcodes, nationwide. “The expansion of the LIGHTNOW service builds on its success in London and demonstrates SSE Enterprise Telecoms’ commitment to delivering ultra-high capacity connectivity services. Our ambition is to design a data centre portfolio that makes a real difference to our customers, supported by a network infrastructure that has no tolerance for downtime,” concluded Sempill. The Manchester data centres connected by SSE Enterprise Telecoms are as follows: DataCentred, Dock 10 – Media City, M247, Telecity Group (Williams House), Telecity Group (Reynolds House), Telecity Group (Synergy House), UKFast – MaNOC 6, Telecity Group (Joule House) and Telecity Group (Kilburn House). LIGHTNOW is powered by Ciena’s 6500 Packet-Optical Platform, equipped with WaveLogic coherent optics. The service leverages the 6500’s ROADM bridge features, which ensure full agility for the provisioning and routing of wavelengths, underpinning the “wavelength in a week” service pledge.

EcoCooling joins the Node Pole Alliance
EcoCooling has joined the Node Pole Alliance, an active international network of over 80 world-leading knowledge partners coming together to build the data centres of the future. The Node Pole region encompasses three municipalities in the very north of Sweden, just by the Arctic Circle, and has the potential to become a global hub for data traffic. This is mostly due to its reliable power infrastructure, the ample supply of low-cost renewable hydroelectric energy and low air temperatures ideal for natural cooling. The Alliance members are companies from the technology and construction sectors who combine their knowledge and experience to build world-class data centres. “We are very proud to have been able to join the Node Pole Alliance”, said Alan Beresford, MD at EcoCooling. “The direct-air evaporative cooling systems we have developed are ideal for the climate in the Node Pole region and make the most of the resources available.” Air temperatures so close to the Arctic Circle are not only cool enough to make refrigeration in data centres redundant – they can even be too cold for the IT equipment. EcoCooling has designed patented control systems and attemperation processes to keep the cooling air within a tightly controlled temperature band – typically 18 to 21 degrees Celsius.

Best in class colocation services to the Manchester market
LDeX Group has today announced that it is to launch a second UK datacentre in Manchester in Q2 of this year, signalling the strength and growth of the carrier-neutral datacentre and network connectivity provider. The new datacentre ‘LDeX2’ will feature best in class colocation facilities with a capacity of 4MVA, network carriers and global Internet Exchange Points (IXPs), as well as offering 24x7x365 customer support. Confirmed on-net carriers offering ultra-fast low latency connections for clients will also be revealed in Q2. Similar to LDeX1, the new 20,000 sq. ft. facility in Trafford Park will have an onsite satellite farm and will provide connections to content delivery networks, OTT players and cloud platforms, enabling customers to broadcast large-scale events and stream content over multiple hosted platforms. Customers can also expect energy-efficient facilities with a 100% uptime SLA, offering 24x7x365 disaster recovery and remote hands support services. Commenting on the news, Rob Garbutt, CEO of LDeX, said: “With an increased uptake of colocation services in the UK, we are delighted to announce our plans to expand and open up another UK based datacentre facility offering best in class colocation, network connectivity and streaming media satellite facilities to clients.” He added: “As a growing technological hub, Manchester is a great strategic fit for us in expanding our customer orientated datacentre portfolio. We look forward to investing in the economy there and providing employment and IT services to the local market.” Further details of the expansion will be confirmed in Q2. For more information, please contact sales@ldexgroup.co.uk.


Innovative Data Centres at the core of your business

Sudlows are leading experts in data centre audit and consultancy. We specialise in the design, build and maintenance of energy efficient, sustainable data centre environments. Call +44 (0) 870 278 2787 or email hello@sudlows.com to discover more about what we can do. www.sudlows.com

Audit | Design | Build | Maintain


Case | Study

Cool Runnings

Neil Cresswell, Virtus CEO

Introduction
This case study looks at how Virtus leveraged Romonet software and services to create a predictive model of Total Cost of Ownership (TCO) over a 10-year period for their new LONDON2 data centre. The move enabled Virtus to independently assess the long-term financial impact of two competing cooling technologies and validate the vendor claims made during the design stage.

Challenge
Virtus’ new facility presented a variety of challenges for the Virtus design team: develop a competitive CapEx profile comparable to London’s first-rate data centres, while ensuring Virtus’ high standards for quality, flexibility and service at minimum operating cost and at a market-competitive Power Usage Effectiveness (PUE). To put it in their own words, a “Virtus intelligent data centre - flexible by design” and independently verified. Virtus engaged Romonet based on their credentials for TCO and data centre modelling.

Solution
The Virtus design team had identified that indirect/free air-cooling could deliver a low PUE while delivering best-in-class operating and financial

performance. To some extent this had already been validated by their own market analysis and vendor references. However, they turned to Romonet to independently assess and determine the lifecycle running costs based on the performance of each vendor solution, and to compare the overall capital and operational cost profiles over a 10-year analysis period. Romonet’s predictive modelling capabilities were used to build a number of models of the Virtus data centre and comparatively predict the TCO of each option for the business.

Why Romonet?
“We liked the Romonet technology,” said Robbie McGhie, Director of Services and Engineering at Virtus. “It fitted the bill and there was no other obvious choice.”

Benefits
The chosen technology for each design option was validated by the Romonet model as to whether it could deliver the required performance at the right TCO. Romonet’s analysis confirmed that indirect free air was the best choice given the climatic conditions of London. In addition, a comparison was made between two competing vendor products using “what if” scenario analysis, with the following objectives: low TCO; high agility/flexibility; class-leading efficiency (e.g. PUE); excellent quality and durability of construction; and superior resilience (Tier III certification required).

Located within London’s metro, Virtus designs, builds and operates a new generation of efficient data centres.



Optimise your IT for today’s challenges. How London’s data centre specialists helped Virtus appraise their new LONDON2 site.

The Romonet model confirmed that the design choices made would meet the business objectives and, furthermore, was helpful in selecting the final vendor cooling solution.

Outcome
Romonet software not only validated the choices made, but also created a long-term forecast, with lifecycle TCO and PUE projections for the data centre based on a load fill-out plan and the actual climatic conditions in London.

Future
Romonet is now performing a similar exercise for the LONDON1 site, which has been in operation for over 3 years. With LONDON2 now an operational site, Romonet’s SaaS-based Portal is being deployed and will determine how site cost performance compares to the original business plan, and validate whether critical sub-systems have performed vis-a-vis the original design intent. Performance models for the two sites will be tracked within Romonet Portal, where actual metered performance will be compared to expected performance on an ongoing basis and any divergences immediately notified to the local ops team to investigate. The Romonet Portal managed service will also allow Virtus to systematically plan for capacity changes and rapidly compare the impact of operational changes and re-investment scenarios, concurrently providing the business with an adjusted long-term TCO view as well as a fully loaded cost-per-customer analysis over the assets’ remaining life.

Conclusion
Virtus contracted Romonet to assess the financial and energy performance of the design and technology choices made for their new LONDON2 site. The resulting report validated Virtus’ technology selections and helped inform the selection of the specific vendor cooling device. It also provided Virtus with a TCO forecast and achievable performance for the lifecycle of the site, given expected load fill-out and climatic conditions. The external “audit” enabled by Romonet technology was also instrumental in reassuring investors and potential customers of the soundness of the Virtus business plan. www.romonet.com


DCIM |

Improving Data Centre Efficiency

Forget DCIM software, what you need is a DCIM process! Steven Bailey unravels the barriers to wider adoption of DCIM solutions

By Steven Bailey, Managing Director AIT Partnership Group Ltd

Like margarine, DCIM software is a good idea that’s been marketed as a panacea, because the problem it was designed to resolve is not widely acknowledged by customers as a problem.




Introduction
Many of us have become bored with the hype and noise generated by DCIM software vendors and their marketing research agencies such as Gartner, 451 and Forrester, much of it resembling the ‘snake oil’ pitch of travelling salesmen in old cowboy films. DCIM is the must-have product for every well-run data centre, we are told. All you have to do is buy the software and you will magically realise the benefits: lower PUE, higher air supply temperatures, reduced stranded capacity, identification of inefficient hardware, more accurate change management, faster fault diagnosis, better and faster reporting - and it will probably make the blind see, the lame walk and improve your sex life as well! All that is left for you to do is to evaluate the products based on the features you want, negotiate the best price for the software licence, install and reap the rewards. That’s the hype. The reality, of course, is very different. DCIM software is a bit like margarine. Margarine was a good idea that originally addressed a genuine problem, which was the need to replace butter due to a lack of refrigeration in the nineteenth century. But it became overhyped and oversold, and over time the problem it was designed to solve was forgotten. With fridges commonplace, the marketeers had to create another benefit, so they sponsored falsehoods about its health-giving properties. What has

this got to do with DCIM software? Well, like margarine, DCIM software is a very good idea that has been marketed as a panacea, because the problem it was designed to resolve - improving data centre efficiency - is not widely acknowledged by customers as a problem. Early adopters have been organisations with a strong culture of improving energy efficiency, but this isn’t widely seen in the UK as a driver.

Great Expectations
The potential market for DCIM software, like margarine, is big, but probably not as big as was once predicted. A 2014 survey by 451 Research downgraded its previous estimate and predicted that the global DCIM market will grow by 27% each year to reach $1.7 billion by 2018. Of course, the size of the market depends on your definition of DCIM and what solutions are included; 451 Research identified more than 60 suppliers. But there does seem to be an increasing consensus that the market isn’t growing as fast as was previously thought. Why then, despite all the efforts of some very well funded marketing machines, has growth been below expectations? To be fair, 27% growth is still impressive, but it doesn’t match the hype. There are many possible reasons for this mismatch: the hype was always over the top, and created as much to attract investors and venture capital as to create real demand;

DCIM software is a bit like margarine. Margarine was a good idea that originally addressed a genuine problem

the sales cycle on a major DCIM project is very long; the barriers to breaking down departmental silos in big organisations can be very daunting; there is often no budget allocated; and, paradoxically, proving a return on investment for a DCIM solution probably requires information to be gathered using a DCIM solution. At AIT we have seen all of these obstacles, but the biggest barrier to wider adoption of DCIM solutions is that senior management teams rarely target the data centre for efficiency improvements, because they rarely understand what’s possible. DCIM software is therefore a product designed to solve a problem of which, in many organisations, there is little recognition. How many senior management teams understand the cost of running a server room or data centre? How often do those who pay the energy bill appreciate how they can reduce the bill by adopting best practice processes? How many are now making strategic decisions to go into co-location hosted space or adopt more cloud services without evaluating how to sweat their assets and make more of their existing facilities? In my experience the answer to these questions is: not many.

Best Practice
When senior management does demand greater efficiency, hard-pressed facilities and data centre managers may find it hard to respond if they don’t have the information or resources they need. Reaching for the shelf marked ‘DCIM software’ may seem tempting, but it can prove costly if objectives and processes are not put in place before the tool is purchased. DCIM software, like any tool, will only add value if it is in the hands of a skilled practitioner and used in the right way. More importantly, like any database, it requires the right information to be loaded into it. DCIM vendors consistently underplay the amount of resource required to do this. Auto-discovery tools and integration with existing asset management systems can help, but won’t eradicate the need for extensive professional services when implementing a DCIM project.
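
The first of those questions - what a server room actually costs to run - can at least be answered approximately with back-of-envelope arithmetic, which is the kind of visibility a DCIM process (with or without software) should provide. The load, PUE and tariff figures in this Python sketch are invented assumptions.

```python
# Back-of-envelope annual energy cost of a server room.
# Example figures are assumptions for illustration only.

def annual_energy_cost(it_load_kw, pue, tariff_per_kwh):
    """Facility energy cost per year: IT load scaled by PUE, 8760 hours."""
    return it_load_kw * pue * 8760 * tariff_per_kwh

before = annual_energy_cost(it_load_kw=100, pue=2.0, tariff_per_kwh=0.10)
# Best-practice measures (containment, raised supply temperatures) that
# bring PUE from 2.0 down to 1.5 cut the total bill by a quarter:
after = annual_energy_cost(it_load_kw=100, pue=1.5, tariff_per_kwh=0.10)

print(f"PUE 2.0: £{before:,.0f}/yr  PUE 1.5: £{after:,.0f}/yr  "
      f"saving £{before - after:,.0f}/yr")
```

A saving on that scale is exactly the sort of headroom senior management rarely knows exists.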



Converged | Infrastructure

Understanding The Internet Of Everything

Why identity matters in the Internet of tomorrow. Geoff Webb discusses the growing link between identity and The Internet of Things...

Complexities
The increasing complexity of technology has brought about a variety of issues for security organisations. As a reaction to trends such as BYOD and cloud, companies are renewing their focus on keeping data safe. In order to protect data in this new landscape, IT departments are now focusing on two key questions: Are you who you say you are? And are you doing what you’re supposed to?

Security professionals are spending an increasing amount of time considering how to define a user. It’s vital that an organisation understands exactly who a user is and what they do, as without the knowledge of what constitutes normal behaviour data is at serious risk from external parties as well as insiders. Understanding where identity fits into IT security is becoming even more problematic as the ‘Internet of

By Geoff Webb, Senior Director Solution Strategy, NetIQ


Servers

Cloud

Datacentre

WELCOME TO YOUR 100% GREEN DATACENTRE

Work with a Datacentre provider with long term goals to reduce energy consumption



If you already colocate, you will need to think about expansion, more resilience and choosing a supplier of not just more space, but better space; one where security, power, cooling and fire concerns are all taken care of.

LIMITED TO FIRST 50 CLIENTS

If you house your own servers, you may already be looking around the office, wondering where the safe, secure, cooled space will come from for your future expansion, be it for greater resilience or more processing power. If you’re in the hosting business, you need to think about your clients’ expanding demands.

Domains

Hosting

Overview
> An industry-standard 42U rack, fully secured & lockable
> APC PDUs to each cabinet
> DDoS Mitigation

Colocation Features
> 24 hour access - 365 days a year
> 24/7 Support
> Instant reboot service
> Up to 64 Amps power per cabinet
> Redundant UPS protection
> Redundant diesel generators
> Advanced VESDA fire detection & FM200/IG55 protection system
> 10 Gbps network
> Meeting, Storage & Build rooms available

100% Uptime

24/7 Support

The Netcetera Dataport

Unlimited Bandwidth

Offshore Hosting Options

19 Years: 1996-2015 | ISO 9001, 14001, 27001

From £259/pm

netcetera

www.netcetera.co.uk/dch Call us FREE on 0800 808 5450 sales@netcetera.co.uk All prices exclude VAT at 20%


Converged | Infrastructure

Things’ (IoT) continues to evolve. A recent report by HP found that devices designed for the Internet of Things are full of inconsistencies when it comes to security. While organisations still have time to adapt before the IoT age is truly upon us, they need to start conducting reviews and implementing standards before the explosion of connected devices makes this an impossible task.

Connectivity growth
Identity is inevitably going to be a challenge as the IoT adds to our complex IT landscape, and as HP’s report demonstrates, it is about to get much, much harder. The number of connected devices is estimated to reach one trillion by 2020, and the extent of the impact this will have on our daily lives is almost unimaginable. This new world of the Internet of Things will change everything from the way we work to the way we play. IT departments need to consider how best to manage and secure these devices to make the most of this trend, balancing the needs of productivity, innovation and security. Arguably, the first step in finding a solution lies in how we define the very idea of identity, because identity lies at the heart of unlocking the real potential of the IoT. This enables us to engage with one another, personally and professionally, in completely new and personalised ways while ensuring our data is protected. In the past, the term ‘identity’ has been used to uniquely define either a thing or a person. Nowadays, identity can be better explained by looking at multiple elements, like contextual clues, including our previous behaviour and interaction with others, as well as our interactions with third parties. Broadening the definition of identity to encompass these ideas will play a key part in managing the variety of IoT devices and is absolutely critical to keeping us and our devices secure.

Everyday Internet
Earlier this year, it was reported that numerous connected domestic devices – including fridges and light bulbs – had been hacked. Following these reports, the IoT started to gain widespread traction in the media for

the first time, and is now perceived by the majority as an important part of our future. In fact, the IoT is already a very real thing in many places. Some devices are able to monitor themselves in such a way that if something were to break, or if the device knew it was time for a regular service, it could automatically schedule maintenance. Another area where the IoT continues to develop is healthcare, with medical devices in constant communication with each other to monitor patients and alert doctors should something serious occur. The example of self-monitoring components is interesting when considered as small parts of a whole - of a large overhead crane, for example. Each component can be given its own identity and individually tracked, right through from the manufacturing phase to the point where it needs to be replaced. Its lifespan can be improved and downtime reduced - especially significant for an industry such as manufacturing, where efficiency is so critical. In order to make this happen, each device requires its own individual identity - an identity that must be assigned and managed. And therein lies the challenge at the centre of the Internet of Things. As the IoT becomes a part of our daily lives, it’s important that we look to develop an ‘Identity of Everything’ alongside the Internet of Things to cope with the management of these multiple identities.

A new norm for business
As the connections between devices and people continue to grow, understanding each unique element will be critical in ensuring that the IoT is a safe, secure environment - both for machines and for users. For organisations looking to make the most of these new technologies, there will be a wealth of new commercial opportunities. Buying behaviour, product preferences, even entire markets can be understood more clearly, creating new business models and ways of engaging with customers - but this is only possible if all the pieces of the identity puzzle are put into context. Fundamentally, companies will be able to relate to their customers on a far deeper level. As an

example, your car will know when you’re ten minutes from home and let your smart thermostat know to turn the heating on in time for your arrival. Your fridge will know when food is running low and order your favourite items to be delivered on your return, so you’ll never run out of milk or eggs again.

Endless possibilities
Alongside these limitless possibilities, there are also inescapable complexities that need to be addressed, such as finding the balance between security and access. As the volume of connected devices continues to increase, their interactions could become overwhelming. We need to understand what interactions are ‘normal’ in order to identify abnormal – and potentially malicious – behaviour. In order to do this, every device needs its own identity so that its behaviour can be contextualised. Imagine you spotted a reliable employee behaving strangely in the office, indicating something abnormal taking place. The same applies when we look at devices, with the ability to identify rogue behaviour in your domestic devices playing a key part in the future security of the IoT. It’s also important to remember that many of these devices will communicate with each other, or with third parties, directly and without our knowledge, such as the gathering of data by an organisation to better understand customer habits. Such data gathering could give hackers another potential source from which to steal personal data or even manipulate connected devices – a risk that is very much at the forefront of concerns regarding the security implications of the Internet of Things. As more devices start to communicate with each other, understanding their unique identities and how they interact with one another will be of the utmost importance to ensure our information stays safe. Not only that, it is knowing what constitutes ‘normal’ behaviour for these devices that will be most integral to ensuring the Internet of Things remains a secure environment, full of opportunity. www.netiq.com
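
As a purely illustrative sketch of what an ‘Identity of Everything’ might mean in code, the Python below registers a behavioural baseline against each device identity and flags anything outside it. The device names, fields and thresholds are all invented; a production system would learn baselines statistically rather than hard-coding them.

```python
# Minimal sketch of an "identity of everything" registry: each device
# gets an identity plus a behavioural baseline, and activity outside
# that baseline is flagged for investigation. Purely illustrative.
from dataclasses import dataclass

@dataclass
class DeviceIdentity:
    device_id: str
    device_type: str
    max_msgs_per_hour: int   # assigned or learned "normal" message rate
    allowed_peers: set       # identities this device normally talks to

registry = {
    "fridge-42": DeviceIdentity("fridge-42", "fridge", 20, {"grocer-api"}),
    "thermo-07": DeviceIdentity("thermo-07", "thermostat", 60, {"car-11"}),
}

def is_anomalous(device_id, msgs_last_hour, peer):
    """Flag behaviour outside the device's registered baseline."""
    ident = registry.get(device_id)
    if ident is None:
        return True          # unknown identity: always flag
    return (msgs_last_hour > ident.max_msgs_per_hour
            or peer not in ident.allowed_peers)

print(is_anomalous("fridge-42", 500, "unknown-host"))  # True: rogue fridge
print(is_anomalous("thermo-07", 12, "car-11"))         # False: normal
```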


IT Cooling Solutions

The Whole Range of Data Center Cooling Solutions from a Single Source

CyberAir 3 Room Cooling

CyberRow High Density Cooling

CyberCool 2 Chiller Units

CyberCon Modular Data Center Cooling

CyberHandler Air Handling Units

STULZ GmbH . Company Headquarters . Holsteiner Chaussee 283 . 22457 Hamburg . Germany products@stulz.com . Near you all over the world: with sixteen subsidiaries, six production sites and sales and service partners in more than 120 countries. www.stulz.com

DATA CENTRE COOLING SOLUTIONS - SALES / SUPPORT / SERVICE / SPARES
STULZ UK Ltd . First Quarter . Blenheim Road . Epsom . Surrey . KT19 9QN 01372 749666 . Sales@stulz.co.uk . www.stulz.com


Data | Centres

The Perils Of Ignoring Load Testing

Why data centre load testing is now a ‘must have’ option. Dave Wolfenden explains the benefits of load testing in modern data centres...

Introduction
Testing a data centre prior to formal handover to the client is a normal part of data centre construction, but few IT professionals look closely at how that testing is carried out, even though it can affect how the centre performs in the future. The usual reason for this is that, until a full set of servers and allied IT systems are in place, it is perceived as impossible to test a data centre under full loading conditions. This is actually an incorrect assumption, and one that seems to be perpetuated by a number of misunderstandings regarding the complex technologies involved in a typical data centre. In many cases airflow within the data centre is modelled using Computational Fluid Dynamics (CFD) software during the design phase. In addition to the testing set out by the commissioning team, the CFD model should be proven before the IT infrastructure is installed. The reality is that the testing of a good data centre needs to be carefully planned and executed to ensure continuous operation for the design life of the data centre. The facility should be tested at a variety of load levels, working up to 100 per cent load. The majority of the energy consumed by IT infrastructure is rejected as heat, meaning that the simplest way to replicate the IT infrastructure is to use fan heaters. In the past these varied from 2 or 3kW domestic fan heaters to large floor-standing space heaters to produce load. In most cases the safety thermal cut-out had to be removed to cope with the elevated

temperatures within modern data centres. The heaters are often connected to temporary power supplies. These types of load do not reflect the airflow and temperature range of real IT infrastructure, and do not test the power supply end-to-end.

100 per cent
The CFD model at 100 per cent is likely to assume that the data centre is fully occupied with floor-standing and rack-mounted IT infrastructure. The reality is that during testing only some of the racks may be installed. To ensure the testing process is valid, temporary measures need to be in place so that the layout and load distribution reflect the CFD model layout. These measures could include the installation of temporary IT racks, blanking, construction of temporary walls / aisle containment, and the implementation of heaters and server emulators that reflect the load distribution across the data centre. If the customer’s IT racks have been installed, the heat load should be connected using the power strips installed within the racks. This may be the only time that the power strips are fully loaded (and therefore completely tested). Whilst the latter two issues can be met using sensible planning, effective heat control is something of a science in its own right, as dissipating heat - from whatever source - within the data centre is a critical process. If carried out poorly or using unreliable technology, then a

runaway heat problem can quickly turn into an IT disaster, shortening both system and server lifespan at best - and causing equipment failures at worst. Given companies’ increasing reliance on data centres to service the IT needs of their business, an equipment failure can cause a number of problems - ranging from a temporary outage of telephony and computer services for staff and allied personnel, all the way to a failure of an organisation’s e-commerce web site - causing customer confusion, loss of brand loyalty and an ongoing loss of revenue.

ROI/cost issues
In an ideal world, a business could throw enough money at a data centre project to ensure 100 per cent uptime and happy customers, as well as staff. In the real world, however - even in a mission-critical application - there are clear ROI (Return on Investment) issues that must be addressed when planning, testing and maintaining an effective facility. For most of our clients, this translates to the effective testing of a data centre at all possible stages in its planning and development, all the way from the computer modelling aspect of the installation, right through to the test heat and power loading prior to the installation of the relevant IT systems and servers. So why do we need server emulators to complete the heat load testing process? The reason is that a new IT equipment room, data centre - or modular data centre - is designed and expected to run continuously for the duration of its design lifetime, which can amount to many years, even in today’s rapidly evolving IT arena. To achieve this level of reliability it is necessary to thoroughly test


By Dave Wolfenden, Director, Mafi Mushkila

the infrastructure before it goes into operation, both physically – using test equipment – and using appropriate CFD software to model the airflow within a facility and provide a graphic analysis of how the hot and cool air flows. Using actual servers to complete the tests is not possible for a variety of reasons, including the cost of filling the data centre with servers, the potential for damage to IT equipment and the time it would take to reset servers after each test. Coupled with the need for fixed, predictable loading during testing, a server emulator provides a variable electrical load and produces a heat load. These loads allow the testing of the electrical and cooling systems in a controlled environment. On the electrical test front, the use of heat load banks and allied systems can make life simpler for data centre developers and facilities managers, as well as on the power governance front, as they help prove the efficacy of static transfer switches under partial and full load conditions. As part of this element of the testing process, good test equipment allows the thermal inspection of all joints and connections under full load conditions before the building becomes operational, so reducing the fire risk. One useful side effect of this process is that the electrical assessment provides confirmation that power monitoring and billing equipment is operating correctly, as well as minimising risks and issues that might not otherwise be found for several years. Allied to the electrical check process is the testing of ancillary systems such as electro-mechanical and mechanical units, pumps, cooling and chiller systems, as well as Room Air Conditioning Units (RACU) where appropriate. These test processes are

also useful for load testing of intermediate heat exchangers, which are usually installed to reduce water leakage loss in the suite, with capacities ranging from 100,000 litres all the way down to 250 litres. Other processes can also include the proving of fail-safe systems on high-density racks - such as confirming doors will open in the event of in-rack cooling component or system failure. On the water chilling side, the testing process normally requires load testing to prove that the chilled water ring has a sufficient volume of cold water to allow the chillers to restart when a generator kicks in, so negating the requirement to UPS-equip the chillers for resilience.

Commercial Risk
All of these methods are, we believe, a fundamental aspect of data centre testing, as the comprehensive checking of electrical

and chilling/cooling systems is infinitely preferable - on several fronts - to destroying a bank of servers. As an example, a rack of heaters can cost just a few thousand pounds, against a rack of servers that can cost into six figures. By including effective testing as an integral part of the commercial risk evaluation and mitigation process, our observations suggest that this supports a timely sign-off for data centre and allied buildings, and their acceptance into service. Arguably more importantly, documenting a safe and reliable testing phase of a data centre deployment can act as proof to insurers that the systems are fit for purpose under full load, as well as providing high levels of assurance that the components and systems are set up and configured correctly. www.mafi-mushkila.co.uk
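
The heat side of such a test comes down to simple sensible-heat arithmetic: almost every watt delivered to a rack must be carried away by the cooling air, so the emulated load dictates the airflow the facility must prove it can deliver. The Python sketch below shows the standard calculation; the rack load and temperature rise are illustrative assumptions.

```python
# Sizing the airflow a heat-load test must move, using the sensible
# heat equation Q = m_dot * cp * dT. Figures below are illustrative.

AIR_DENSITY = 1.2   # kg/m^3 at roughly 20C
AIR_CP = 1.005      # kJ/(kg*K)

def required_airflow_m3s(heat_kw, delta_t_c):
    """Volume of air per second needed to carry away heat_kw at a
    given inlet-to-outlet temperature rise (delta_t_c)."""
    mass_flow = heat_kw / (AIR_CP * delta_t_c)   # kg/s
    return mass_flow / AIR_DENSITY               # m^3/s

# A 10 kW rack emulated at 100% load with a 12C temperature rise:
flow = required_airflow_m3s(heat_kw=10, delta_t_c=12)
print(f"{flow:.2f} m^3/s (~{flow * 2119:.0f} CFM)")  # 1 m^3/s ≈ 2119 CFM
```

Run rack by rack against the CFD model’s load distribution, this is what a set of server emulators allows the commissioning team to verify end to end.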


Data | Centre

The Cat 8 Cabling Revolution



Will Cat 8 change the face of data centres? Ken Hodge explains how Cat 8 is set to become a rack-level standard...

By Ken Hodge, CTO, Brand-Rex

Introduction
Work is rapidly advancing on Category 8 copper cabling - Cat 8 - a technology that looks set to find its applications primarily in data centres. Like its predecessors, this new BASE-T standard will be later to market than its twin-ax and fibre-based competitors, but when it arrives it will rapidly displace them because of its far lower cost. Cat 8 will become the mainstream technology for rack-level interconnects in the data centre. However, unlike earlier Gigabit and 10 Gigabit technologies, it will not have a 100-metre range, and so it will not support centralised switching with passive patch panels at row level, except in smaller server rooms.

The need for speed
Cat 6A supports the fastest BASE-T solution currently available (10GBASE-T) and is the de facto choice for data centres as specified by the TIA/EIA for North America and by ISO/IEC internationally. In the data centre, individual devices require ever faster interconnects. For example, one physical server now runs maybe ten virtual servers - and so the physical interconnect must handle roughly ten times the data. And then there is the seemingly unstoppable move towards streaming more and more video, plus the upcoming wide-scale adoption of ultra high definition ‘4K video’ and ‘Big Data’ that is going to affect a lot of data centres in coming years.

All of this means that, as data centre professionals, we would be very unwise not to forecast that much higher bandwidths will be needed. Already, in certain high performance computing data centres (or HPC sections of data centres), we see that 10Gb/s is not enough. Solutions such as bonded multiple 10Gb/s copper or fibre channels and 40Gb/s or 100Gb/s fibre channels are being deployed. Also - as happened in the early days of Gigabit, again with 10Gb/s, and now with the 40 and 100Gb/s technologies - the first set of cabling products to be standardised (and commercialised) included fibre optics for short, medium and long reach connections, as well as twin-ax copper cabling for high-speed, short-range links to the top of the rack. These solutions, we have observed, are already handling the early-adopter need for very high speed interconnects at 40Gb/s and 100Gb/s. Unlike BASE-T, these short-range twin-ax solutions don’t need all of the complex signal processing that is required for longer channels. As a result, they are far quicker to develop and bring to market. The downside is that the cables and connectors are extremely expensive. Whilst these high costs are not really an issue in early-adopter applications, they are totally unaffordable in the data centre mass market. And that


is where a BASE-T has historically come in, around two years later and at a fraction of the cost. I predict that a similar cycle will happen with 40Gb/s. Cat 8 is still in its early days of development and it will be a good year or more before we really know how it will look technically. But it is almost inevitable that, once standardised and productised, its cost per link will quickly drop to a fraction of the twin-ax and fibre-based alternatives. It will be the solution of choice for mass connection of equipment; commercial imperatives will drive its adoption.

What is Cat 8?
Currently there are a number of similar but different ‘Cat 8’ solutions being considered by the standards bodies for 40Gb/s over twisted-pair copper. In the USA, the TIA/EIA is considering Cat 8 based on an extended-performance Cat 6A cable. Meanwhile, internationally, ISO/IEC is looking at two options: currently tagged Cat 8.1, based on an extended-performance Cat 6A cable, and Cat 8.2, based on an extended Cat 7A cable. Interestingly, all of these are based on shielded cables and connectors because of alien crosstalk difficulties. As yet, there is no clear choice of connector - though there is a significant weight of opinion in favour of the RJ45 footprint rather than the larger ‘square’ contender. This is partly in order to achieve high-density patch panel and switch configurations, and partly because RJ45 is what almost everyone in the industry is used to and comfortable with. If the RJ45 footprint is adopted it will meet the maxims of interoperability and backwards compatibility favoured in the market. This is clearly the most attractive route for the industry itself, as it will allow IT managers to specify Cat 8 knowing that they will not compromise existing installations nor limit the supported technologies on the cabling. It looks likely that an RJ45-profile jack will be used; however, the choice is not simple - there are different styles of RJ45 connector with different electrical performance levels. The essential differences between the connectors are that the contact pins are in a single flat row in the RJ45

type, while the pins in the ARJ45 type are positioned at the four corners. Whilst the ARJ45 has better electrical performance than the RJ45 (because of the separation of the pins), it is not backwards compatible with Cat 6A, Cat 6 etc. The silicon designers involved in the IEEE project have not yet decided whether they will take advantage of the better-performing cabling and use less processing technology, or work with the standard RJ45 solution and add more processing power. The decisions on technology have yet to be taken - although it looks like RJ45 will be the preferred option, we cannot be sure today. Whilst the choice of a new connector type to support the new application is unlikely to create a problem in a new data centre that is designed at the outset to support 40Gb/s, we could anticipate some inherent problems with this approach in established data centres. For example, if Cat 8 horizontal cabling were installed in a data centre that operates lower-speed applications (e.g. 1 and 10Gb/s Ethernet and fibre channel technologies) based on RJ45 connectivity, hybrid ‘Cat 8 to Cat 6A cords’ would be required to attach end equipment, and true Cat 8 cords would be needed when the equipment is installed to migrate to 40Gb/s speeds. In addition, if Cat 8 moves out of the data centre to the horizontal in ‘future proof’ building installations, the lack of backwards compatibility will be a real issue. A new connector type might not be so acceptable if higher speeds (40Gb/s) do reach the enterprise LAN.

Topology considerations
In an ideal world, the LAN connectivity would place no constraints on the designer’s choice of architecture or topology. But the world is seldom ideal. BASE-T standards have always (until now) been based on a 100-metre channel length. However, back in 2008 Brand-Rex launched a data centre Zone Cable product that had a maximum reach of 70 metres. Our research had shown that this would cover 85 per cent of existing data centre link requirements and that, with only a minor amount of re-planning, a data centre could be designed to use this Zone Cable for 100 per cent of links. The massive advantage that

made it worth the designer’s efforts was that our Zone Cable gave Cat 6A 10Gb/s performance but, instead of being the thickness of a small garden hose, it was as thin as a Cat 5e cable. This was - and is - a major benefit both inside racks and under the floor, where thick cables create air dams and cause expensive cooling inefficiencies. The technology also has a lower carbon footprint and saves weight compared to conventional cabling. In the early-adopter implementations of Gigabit, early 10Gb/s and now early 40Gb/s, a major topology change has been essential. This is because of the very short distances of twin-ax based copper links, which has always meant that an expensive top-of-rack switching topology is essential. Later, as the BASE-T solution for each of these speeds became available, designers gained total flexibility to choose cheaper EoR (end of row), MoR (middle of row) or, in many cases, centralised switching with passive in-rack or in-row patching.

Different
The situation with Cat 8 will be different - this is because it is not going to have 100m or even 70m link-length capabilities. No firm decisions have yet been made on its link distance capabilities, but 30m looks likely; allowing for crosstalk, the technology could possibly reach 50m. This issue is going to affect the way that connectivity solutions in the data centre will need to be designed, if they need (or will ultimately need) 40Gb/s and the cost-effectiveness of BASE-T. Gone will be the option of centralised switching, as EoR or MoR switches become essential to stay within the 30m or 50m reach of the network cabling. Interestingly, network planners are already discussing, and in some cases implementing, a move away from hierarchical switching in the data centre to a flatter, distributed or mesh topology, which ties in well with an in-row switching configuration. So perhaps this apparent constraint with 40Gb/s copper will not be a real constraint after all. www.brand-rex.com
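
The practical consequence of a 30m (or even 50m) channel is that switch placement becomes a simple reach calculation against routed cable distances. The Python sketch below illustrates the check; the route lengths and the reach figure are assumptions, since the standard’s final limits were not settled at the time of writing.

```python
# Checking which racks a proposed switch position can serve within an
# assumed Cat 8 channel reach. Routes and the reach figure are
# hypothetical; the standard's final limits are not yet settled.

CAT8_REACH_M = 30   # assumed worst case; 50 m may prove possible

def reachable_racks(route_lengths_m, reach_m=CAT8_REACH_M):
    """route_lengths_m maps rack name -> routed cable distance to the
    switch (actual containment route, not straight-line distance)."""
    return {rack: dist <= reach_m for rack, dist in route_lengths_m.items()}

# Hypothetical end-of-row switch serving a 12-rack row:
routes = {f"rack-{i:02d}": 6 + 2.5 * i for i in range(1, 13)}
for rack, ok in reachable_racks(routes).items():
    print(f"{rack}: {routes[rack]:.1f} m {'OK' if ok else 'OUT OF REACH'}")
```

On these example routes the far end of the row falls outside a 30m reach, which is exactly why EoR or MoR switch placement, rather than centralised switching, becomes the natural fit for Cat 8.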


DATACENTRES are MATURING Mature Data Centres know that protecting their customers’ data isn’t just about being popular, living in the upmarket streets of London, wearing Tier III trainers or comparing the size of their PUE.

A mature data centre understands that high quality, exceptional service, low cost and ultimate flexibility, combined with levels of security unsurpassed elsewhere, are more important than boasting about the size of your PUE or your Tier III label.

Don’t let childish boasts cloud your decision. Choose a data centre that offers maturity and puts your business needs first.

Contact MigSolv Today

0845 251 2255

migsolv.com


Opinion |

Reducing the Cost of Cooling

Air Management Solutions. Mark Hirst reports on effective cooling solutions

Introduction
Facility teams and data centre managers know that to survive in a world where low-cost cloud infrastructure dominates, they need to cut costs to the bone. The hardest thing to cut has always been the cost of cooling. As air temperatures creep up inside the data centre, techniques such as free air-cooling and airside economisers become effective cooling solutions.

Containment and Hotter Input
The two biggest improvements in data centre cooling have been the introduction of aisle containment and the ability of servers to manage higher input temperatures. Containment is an air management solution that can be retrofitted to data centres. It has been responsible both for extending the life of older data centres and for enabling higher densities without having to invest in expensive refits of cooling systems. ASHRAE, the industry body responsible for data centre standards, has promoted higher input temperatures. Just 10 years ago, many data centres were still cooling input air to 65F (18C), while today they are working at 79F (26C) and even higher. This ability to manage higher input temperatures has also been helped by new generations of silicon and motherboards.

Using Natural Resources
Despite these changes, more still needs to be done to cut the costs of cooling. This has led to a group of techniques known as free air-cooling. The idea is to use as much ambient air as possible to remove the heat from the data

centre. The stated goal of most of these systems is to have no mechanical cooling at all. It sounds great, but the reality is that there are few places on the planet where the outside air temperature is low enough to cool most data centres all year round. It is not just ambient air temperature that is a challenge; the particular technology chosen under the free air-cooling banner comes with a number of additional challenges, from data centre design to particulate matter.

Ambient Air
Using pure ambient air inside the data centre is not a technique that can be retrofitted to existing facilities. The first challenge is getting a large enough volume of air below the room. A large volume is needed to create the pressure to push the air through the data hall. The Hewlett Packard data centre in Wynyard, UK uses a five-metre hall to create the required volume of air. To help create the right pressure to draw the ambient air into the data hall, the hot air has to be expelled via a chimney. This needs careful design in order not only to extract all the hot air, but to do so in such a way as to create a partial vacuum, which then draws in the cold air behind it. To ensure that the air does not contain any particulates that would impact the performance of the equipment in the data hall, you need very large filters. Air inside cities tends to have high levels of lead and other particulates, especially from diesel vehicles and general dust. It also tends to be warmer than air in the countryside, and this can severely limit the number of days when ambient air can be used without secondary cooling.

By Mark Hirst, Head of T4 Data Centre Solutions, Cannon Technologies.

To ensure that the air does not contain particulates that would harm the performance of the equipment in the data hall, you need very large filters. Air inside cities tends to carry high levels of lead and other particulates, especially from diesel vehicles and general dust. City air also tends to be warmer than country air, which can severely limit the number of days when ambient air can be used without secondary cooling.

The air in country areas can be even dirtier from a data centre perspective. Pollen, dust and insects, even swarms of bees and wasps, have been reported caught on the filters that guard the large air halls. The ambient temperature is often lower than in city areas, but here wind can be a problem, as high winds can force small particles of dust through the filter screens.

Humidity and Dew Point

Data centre managers are acutely aware of the risks of humid air inside the data centre. Too little humidity and the risk of static electricity rises; when static discharges over electronic equipment it causes havoc and destroys circuit boards. Too much humidity leads to condensation, which can short out systems and cause corrosion, especially in power systems. When conditioned air is used inside the data centre, this problem is handled by the chillers and dehumidifiers. Free air-cooling, however, creates problems of its own. The most obvious arises on rainy days, or when very cold ambient air is drawn into a hot data centre: in both cases, water in the air tends to condense very quickly and, if not handled properly, is a disaster waiting to happen.


Efficiency and technology in perfect harmony.

Today, choosing a new UPS system is about more than simply protecting your critical load: you must also consider how best to minimise your CapEx and OpEx, and ensure you are future-proofing your power protection investment against any eventuality.

Just like our elite sportsmen and women, our UPS solutions utilise the very latest technological developments to deliver class-leading efficiency levels of up to 98%*, whilst still offering you the flexibility and control you need to achieve your financial and business objectives, both now and in the future.

To find out more call or email us today: 01256 386700, sales@upspower.co.uk, www.upspower.co.uk


Some data centres are being built near the sea to take advantage of the natural temperature difference between the land and the sea. This looks like a good strategy at first, but the damage from salt in sea air can destroy data centre equipment in a fairly short period of time, and any use of free air-cooling in these environments requires a significant investment in technologies to clean the salt out of the air before it is used for cooling.

To get around the condensation problem, free air-cooling systems mix existing air from the data centre with the air being drawn in from outside. Where the outside air is extremely cold, this warms it and reduces the risk of cold, damp air condensing on processors, storage devices or power systems. Where the air is simply heavy with external humidity, dehumidifiers must be available to bring online, even though this adds extra cost to the power budget.

Airside Economisers

A technology that gets the most out of free air, reduces the particulate and dew point issues, and can be retrofitted into an existing facility is the airside economiser. Economisers bring ambient air in, filter it, and then mix it with exhaust air to raise the temperature if required. The air is then passed either through an air-to-air heat exchanger (indirect) or directly, via a backup water or DX coil, to reach the right input temperature for the room.

The advantage of airside economisers is that they are not a single approach to free air-cooling. By dealing with the issues identified, and by being able to filter, to exchange heat directly or indirectly, and to cool or heat the air as needed, they can reduce the cost of cooling and get the most out of ambient temperatures.

The Green Grid estimates that even data centres in hot environments such as Florida, Texas, Mexico, Portugal, Southern Spain and the Middle East should be able to manage 2,500-4,000 hours per year of free air-cooling, much of it at night and during the winter months. In more temperate climates, such as the UK, Northern France, the Netherlands, New York and parts of California, this can rise to 6,500 hours. Further north, data centre owners should expect up to 8,000 hours, although there will be additional costs in removing excess humidity and in heating air before injecting it.


To get the most from airside economisers, however, it is essential that users understand the requirements of the technology. One of the most common failure points is poor pressure management: if there is insufficient pressure to draw the air through the data halls, air will stagnate and simply rise in temperature as it circulates poorly.

It is also important to ensure that the temperature sensors are effectively placed. These should be integrated with the Data Centre Infrastructure Management (DCIM) system so that operators can quickly identify any temperature hotspots. One problem caused by poorly placed sensors is that too much return air is added to the airflow, causing the input temperatures to rise unexpectedly. The opposite occurs when sensors sit too close to a heat source: the air is then given additional cooling, creating a large difference between hot and cold and exacerbating the risk of condensation.

When integrating airside economisers into modular solutions, it is essential to allow enough exterior space to install the equipment. This is why modular equipment manufacturer Cannon Technologies has designed its own solution specifically for modular data centres. Depending on the climate, the target PUE can be as low as 1.1.

Conclusion

In one survey, Intel looked at the impact of airside economisers where outside air temperatures reached 90F (32C). It estimated that a 10MW facility would save almost $3 million per year, and that the risk of increased equipment failure was so low as to be insignificant. In the temperate climates of the UK and the mid US, free air-cooling can deliver a Power Usage Effectiveness (PUE) as low as 1.05, against an industry average of 2.0. In other words, for every 1kW consumed by the IT equipment in the data halls during the winter months, just 1.05kW is drawn in total, so non-IT equipment accounts for roughly 5% of the total energy bill.



Physical Security |

Physical Security

Understanding Physical Attacks. Marcus Edwards explains why physical protection should not be neglected.

Introduction

Data centre security is about minimising risk and maximising operational uptime. Of the two types of security, cyber and physical, the emphasis is usually put on cyber security, which is clearly the more obvious risk. High-profile events like North Korea’s attack on Sony underline the threat of this type of action, and the main focus should be on providing security against such attacks, which are happening all the time.

Physical Attacks

Typically, physical attacks on data centres are lower-profile events and much less frequent, but they do still happen and they can be catastrophic. When we imagine this type of physical attack we normally think of thieves stealing physical equipment for resale. When this occurs, the resultant breakdown in service or loss of key data can be an embarrassing and costly by-product, as Vodafone found to their cost when their service was disrupted in 2011 after network equipment was stolen from their Basingstoke data centre. In another attack, in 2007, five thieves disguised as police stole up to £1 million worth of computer equipment from a ‘state-of-the-art’ data centre in the Kings Cross area of London.

The vast majority of UK data centres have very good security measures in place to guard against this type of theft. Security fencing supported by CCTV and lighting, plus controlled vehicle and pedestrian access, makes theft by the casual opportunist nearly impossible. Clearly these measures are necessary and should not be overlooked. Once they are in place, the data centre is secure against all but professionally planned attacks. These are unlikely to be carried out by your usual, home-grown criminal gang, as the

financial rewards of stealing IT hardware do not justify the risks involved.

So, does this mean all is right in the world of physical security for data centres? Unfortunately, gangs of professional thieves turning up to steal lorry-loads of servers are not the major threat. The main threat in terms of physical security comes from within: most large thefts of data are the result of inside jobs or negligence. Edward Snowden, for example, leaked thousands of classified documents, much to the embarrassment of the US and UK governments. The disgruntled or criminally minded employee is probably the biggest physical security threat faced by small businesses. Stealing information to take to another employer, or to set up a business of one’s own, is a crime that appears to be growing rapidly, judging by conviction rates. High security fences and access control into the building will not protect against the authorised employee. Access to data and to the physical storage devices needs to be controlled and recorded per individual, and assets and data need to be ring-fenced and segregated to minimise any potential loss.

By Marcus Edwards, Owner of Server Fortress Limited



Defence in Depth

This is all fairly straightforward and in line with The HMG Security Policy Framework, Version 11.0 – October 2013, issued by The Cabinet Office, which states: “The ‘defence in depth’ or ‘layered’ approach to security starts with the protection of the asset itself (e.g. creation, access and storage), then proceeds progressively outwards to include the building, estate and perimeter of the establishment.” The significant point is that security should start as close to the asset as possible. This limits any potential loss, even from malicious individuals within the organisation.

The framework covers the normal commercial risks; however, there is a lot of sensitive data that could be subjected to another type of professional attack. Government-backed cyber-attacks are not a thing of fantasy: governments have teams looking at this both as a matter of defence and as an offensive tool. Thanks to Edward Snowden, we know the UK’s GCHQ has gained access to the network of cables that carry the world’s phone calls and Internet traffic, and has started to process vast streams of information that it shares with its American partner, the National Security Agency (NSA). This is what we know, so far, about our own ‘friendly’ security organisations; it would be very naive to assume other governments are not doing the same with commercial objectives. I am not suggesting that data centres are likely to be attacked by foreign-backed intruders with guns and ski masks. Physical attack is normally much more subtle.

The next question is who could be targeted. Government institutions, including the police and military, are obvious targets, but banks, financial services firms, technology companies and research institutions are also potential targets. Widen the catchment to cover these areas and nearly all multinational companies, and even universities, become potential targets.

Caging

What types of subtle physical attack are we talking about? Network eavesdropping is the main threat, and it gets easier once the attacker has access to the building where the network is situated. In private office buildings you need to keep network points in meeting rooms away from visitors. Data hosting centres create a further issue. As mentioned earlier, most hosting centres have very good perimeter security and record whoever enters the facility, but is this good enough against truly professional eavesdropping? What other companies are based in the hosting centre? Could any of those companies, or their employees, have links to overseas governments?


Once an individual has open access into the data centre, all sorts of illicit opportunities become available. The standard way to offer some protection against this is to segregate different companies’ server racks by caging. Caging is available in a wide range of costs and qualities. At the lowest end it is little more than cosmetic, allowing both the hosting centre and the end client to tick a box in the contract specification. High-quality caging with audit-tracking locking systems can look very similar, so care should be taken before signing off on the cheapest solution.

Caging also has a few practical problems. It takes up floor space, which may not be an issue if a lot of cabinets are contained by one cage, but for a small number of cabinets that space costs money. Caging can also disrupt airflows within the data centre, causing hot spots and dead zones, and it does not normally lend itself to either hot or cold aisle containment, limiting the thermal and efficiency advantages of those systems. Judged by the ‘layered’ approach to security, it fails to bring the protection as close to the IT assets and their cabling as it should be: once a cage has been breached, all the assets within that cage are compromised.

A possible alternative to security caging within a hosting centre is very secure server cabinets, which have the advantage of securing IT equipment down to the cabinet level. One potential weakness is that the data cables entering the cabinet could be exposed. In an overhead cabling situation this can easily be remedied with enclosed, locking cable ducting systems.

Conclusion

Remote cyber-attacks are the biggest threat to all data and systems, but physical protection should not be neglected. Nearly all data has some value, and the loss of data, or of the systems holding it, carries very high costs for its owner. Physical protection may take second place in this war, but once a professional outside organisation has penetrated the physical barriers, you may never know about it until they want to gain a political or commercial advantage. You need both types of protection, or you may have a very nasty surprise in the future.



DATA CENTRE SUMMIT 2015 NORTH

www.datacentreworld.com

Manchester’s Old Trafford Conference Centre

30th of September 2015

Registration is now open

Data Centre Summit North is the first in a series of new one-day, conference-focussed events, with the first set to take place at Manchester’s Old Trafford Conference Centre on the 30th of September 2015. DCS will bring the industry’s thought leaders together in one place, alongside the industry’s leading vendors and end users. The focus of the event will be on education, networking and debate, and it will provide an open forum for delegates to learn from the best in the business. The event will also feature an exhibit hall, where the industry’s leading companies will show their newest products and services, together with a networking lounge where you can make connections with like-minded business professionals.

Platinum Headline Sponsor

Event Sponsor

TO REGISTER CLICK HERE

