Data Centre Hub July 2015


DATACENTREHUB Issue 3

Increasing energy efficiency ➲ Automating Network Visibility ➲ Heat Load Testing ➲ Network Connectivity



Data Centre Testing Services – UK, Europe and Middle East Rental
Leading Industry Solutions:
• Server Simulator Load Banks, 2kW to 6kW, 4U 19″ rack mounted
• 20kW Portable Mini Tower, 1ph and 3ph
• Combined ATS & Cable Distribution units
• Data Logging: Power, Temperature and Humidity
• CFD Reporting of IST

Check out our YouTube video

www.loadbanks.co.uk Hillstone Products Ltd Tel: +44 (0)161-763-3100 Fax: +44 (0)161-763-3158 sales@hillstone.co.uk


| Contents

Quick Look

Case Studies
A Threat From Within – 14
Increasing energy efficiency – 24
The Excel Solution – 32

Connectivity
Network Connectivity – 30

Data Security
An Evolution in Infrastructure – 28

Data Centres
What's Free, Renewable and Really Cool? – 20

Load Testing
The Mega Data Centre IST Challenge – 6
Heat Load Testing – 8

Network Visibility
Automating Network Visibility – 12

Opinion
Cloud Backup – 18

PUE
The Significance of PUE – 22

Regulars
News – 4


Foreword |


Publisher & Managing Director: Peter Herbert

Design: LGN Media

The views expressed in the articles and technical papers are those of the authors and are not endorsed by the publishers. The author and publisher, and its officers and employees, do not accept any liability for any errors that may have occurred, or for any reliance on their contents. All trademarks and brand names are respected within our publication. However, the publishers accept no responsibility for any inadvertent misuse that may occur. This publication is protected by copyright © 2015 and accordingly must not be reproduced in any medium. All rights reserved.

Data Centre Hub stories, news, know-how? Please submit to peter.herbert@datacentrehub.com

Welcome to issue 3 of Data Centre Hub. Data centre load testing is a subject that has come up more frequently over the past year or two. In this issue, two of the industry's leading specialists examine the subject in greater detail and make some practical suggestions as to how and why load testing should make a difference to your facility. We also take a detailed look at data centre cooling, delivered by Alan Beresford of EcoCooling. In this article Alan looks at how to use a plentiful and renewable resource to cool your environment: fresh air!

This issue contains the industry preview of Data Centre Summit North, which will open its doors at Manchester United's Old Trafford stadium on 30th September 2015. Over 40 exhibitors are already booked to attend, and some of the industry's best speakers will be delivering practical knowledge in the form of 20-minute seminar sessions. If you would like to attend, just visit www.datacentresummit.co.uk and register for free.

After the successful launch of Data Centre Summit North, Data Centre Hub, in collaboration with LGN Media, is pleased to announce that the second Data Centre Summit event will take place in London on 10th February 2016 at the Barbican Centre. Further information on the event can be found at www.datacentresummit.co.uk.

I hope you enjoy the issue and look forward to receiving your comments and articles for future issues.

Peter Herbert



Innovative

Data Centres at the core of your business Sudlows are leading experts in data centre audit and consultancy. We specialise in the design, build and maintenance of energy efficient, sustainable data centre environments.

Innovation Pod – Book a tour of our award-winning data centre

Call +44 (0) 870 278 2787 or email hello@sudlows.com to discover more about what we can do. www.sudlows.com

Audit | Design | Build | Maintain


News |

All the Latest Data Centre Hub News

N2S celebrates achieving government approval for data destruction
Network 2 Supplies (N2S), a UK leader in IT lifecycle management, is celebrating ‘great news’ after achieving accreditation allowing it to work at the highest levels of HM Government. The Suffolk-based company has been awarded CAS-S (CESG Assured Service – Sanitisation) accreditation for data destruction at the highest levels within government departments. CESG is the information security arm of the Government Communications Headquarters (GCHQ) and the National Technical Authority, which is considered the definitive voice on the technical aspects of information security in government.

N2S will now provide government-approved onsite data sanitisation and recycling of IT equipment at the end of its life. The accreditation will also allow N2S to specialise in the destruction of Government Security Classified data-bearing assets in accordance with CESG policy. N2S can shred or disintegrate all IT equipment, including computer hard drives, tapes, mobile phones and associated cabling. All N2S engineers are security cleared. N2S can also offer onsite data wiping and carry out secure moves from one customer site to another.

Andy Gomarsall, N2S director, said: “Obtaining the CAS-S accreditation is fabulous news for us and is part of the ongoing N2S strategy designed to make us the most secure IT data destruction company within the UK. This accreditation will complement the range of services that N2S provides to its client base and one which will be required by our global business partners.”

Data centre infrastructure tips towards 40% by 2019
Consulting firm BroadGroup has released the new edition of its seminal report on the data centre market in Europe. Representing 27% of global market revenues for third-party data centres, Europe is poised for continued growth in co-location and cloud over the next four years. Following its proposed acquisition of TelecityGroup – to be closed in 2016 – Equinix will lead the market in terms of space, holding a total share of 9%.

Much expanded from previous reports, Data Centre Europe 6 provides forecasts to 2019 for co-location, hosting and cloud covering 18 countries in Europe, together with profiles and assessments of 16 major players in the market. The report provides an analysis of the broader trends and future developments in the sector, and market shares at country level. It also extends the area of analysis to include hosting and IaaS in Western Europe.


RiT Technologies Names Yossi Ben-Harosh President and CEO
Tel Aviv, Israel, July 7, 2015 – RiT Technologies Ltd. (NASDAQ: RITT), a leading provider of IIM and structured cabling solutions and the developer of an innovative indoor optical wireless technology solution (Beamcaster), today announced that its Board of Directors has unanimously approved the appointment of Yossi Ben-Harosh as President and Chief Executive Officer, effective immediately. Mr. Ben-Harosh replaces Motti Hania, who has resigned to pursue other opportunities.

“The Board of Directors is excited that Yossi has agreed to join RiT and eager to work closely with a proven leader during this important and exciting time in our history. Yossi brings to RiT tremendous experience and success gained during his more than 20 years as an executive in the high-tech industry,” said Sergey Anisimov, Chairman of RiT Technologies. “On behalf of the Board, I would like to thank Motti Hania for his service during a very challenging period for RiT. I wish him all the best in his future endeavors.”

Motti Hania said, “Throughout my tenure, I have worked closely with the Board to assemble an effective team and develop new, innovative products, with the goal of enabling RiT to achieve sustainable growth. With RiT moving positively towards stable growth, I believe the time is right to transition to a new leader that brings both the necessary energy and experience to drive further improved results. I am grateful to the Board of Directors for their active participation and cooperation in helping develop a thoughtful turnaround plan for the Company.”

Mr. Ben-Harosh, 52, held roles of increasing responsibility over the past six years with Amdocs, most recently serving as President of the Global Operation Division; before that, he was President of Global Customer Operations Management. Prior to joining Amdocs in 2009, Mr. Ben-Harosh was President and CEO of Telrad Networks Ltd from 2006 to 2009. Before joining Telrad, he served as a project manager at Teva Pharmaceutical Industries Ltd. Mr. Ben-Harosh holds an MBA from Bar-Ilan University and a bachelor’s degree in Industrial Engineering & Management from Ben-Gurion University.

Mr. Ben-Harosh said, “Having made favorable progress within its core network infrastructure business over the past year, coupled with the positive strides it has made with Beamcaster, RiT is well-positioned to accelerate growth. I look forward to setting the strategic direction, leading the team to increased success and continuing to improve RiT’s added value to our customers.”

About RiT Technologies
RiT Technologies (NASDAQ: RITT) is a leading provider of IIM and structured cabling solutions and a developer of an innovative indoor optical wireless technology solution. The RiT IIM products provide network utilization for data centers, communication rooms and work space environments. They help companies plan and provision, monitor and troubleshoot their communications networks, maximizing utilization, reliability and physical security of the network while minimizing unplanned downtime. RiT solutions are deployed around the world, in a broad range of organizations, including data centers in corporate organizations, government agencies, financial institutions, airport authorities, healthcare and education institutions and more. Our BeamCaster™ product is an innovative indoor optical wireless networking technology solution, designed to help customers streamline deployment, reduce infrastructure design, installation and maintenance complexity and enhance security in a cost-effective way. RiT’s shares are traded on the NASDAQ Capital Market under the symbol RITT. For more information, please visit: www.rittech.com



Research points to increased focus on value and innovation in the data centre
Independent research commissioned by Zenium Technology Partners (www.zeniumdatacenters.com) suggests that the determination to demonstrate value and deliver innovation to the business is competing with traditional ‘reasons to outsource’ data centre requirements. According to the report, ‘Managing Growth, Risk & the Cloud’, 86% of respondents felt that data centre outsourcing is the most effective way to manage core IT infrastructure, enabling organisations to focus on demonstrating value and innovation to the business. Indeed, 73% reported that outsourcing enabled them to increase the amount of time they were able to devote to this kind of work, with the average increase in available time cited as 24%. Interestingly, the positive impact of outsourcing has been especially pronounced in Turkey, with 81% of respondents saying they have seen an increase in the time they have available for value-add work; on average they have seen an increase of 34%.

Commenting on the research, Franek Sodzawiczny, CEO & Founder of Zenium Technology Partners, said: “At a time of increased globalisation, advancements in technology and the relentless move to the cloud, it should be no surprise that 46% of those surveyed want to free up IT staff by outsourcing core data center requirements. New and innovative ways to support mission critical business systems must be developed if we are to harness the full potential of the cloud, for example, and this can only be achieved if IT staff have the time to explore and exploit new developments for the benefit of the business as a whole.”

The standard motivations to outsource data centre requirements remain constant – cost reduction (61%), improved resilience/uptime (49%), connectivity (41%), scalability (37%) – but 87% of respondents also regard outsourcing as the most effective way to demonstrate accountability and compliance to the board in relation to energy efficiency, carbon footprint, security and resilience. It is also worth noting that 86% of respondents think that companies that outsource benefit from having access to more sophisticated and advanced infrastructure than their budgets would otherwise allow.

DATACENTREHUB – If you have any news stories, please forward them to Peter Herbert: peter.herbert@datacentrehub.com

SAVE 90% ON YOUR DATA CENTER COOLING COSTS
The Fresh Air Cooling and Ventilation Specialists
• Industrial, Commercial, IT and Data Center Evaporative Cooling
• Over 350 UK Installations
• No refrigerants and low carbon
• ASHRAE compliant conditions
• New build and retrofit
• Internal and external product ranges
• Advanced control system
• ROI in under 1 year
• New products

www.ecocooling.org | sales@ecocooling.org | 01284 810586


Load | Testing

The Mega Data Centre IST Challenge

Integrated System Testing. Paul Smethurst highlights the issues of load bank testing 100MW data centres. By Paul Smethurst, CEO, Hillstone

Introduction
The insatiable demand for data, coupled with the growth of cloud-based services, has changed the European data centre landscape with the arrival of the mega data centre. The mega data centre, which allows global software giants like Microsoft, Google and Apple to provide our day-to-day IT services, is also the foundation for colocation providers such as Digital Realty Trust, Equinix, Telecity and Interxion to facilitate connectivity to the cloud for multinational conglomerates in banking, telecoms, and oil and gas. With such a rapid expansion of cloud services, how do you commission mega data centres of 20MW, 40MW, 80MW and 100MW?

Historically, the largest ever load bank solutions have been in the oil and gas sector and used 50-70MW of containerised load banks, situated outdoors as part of very large temporary generator power projects. Fortunately, the evolution of the mega data centre has taken a practical, modular build approach, with roll-out phases of dual halls at 2,500kW or a single 5,000kW empty white space. However, such a reduction in rating does not reduce the challenge of sourcing the quantity of load banks needed to complete integrated system testing (IST).

Integrated System Testing
The primary objective of data hall IST commissioning is to verify the mechanical and electrical systems under full load operating conditions, plus maintenance and failure scenarios, to ensure the data hall is ready for the deployment of active equipment. Today's IST requires a package of equipment that will closely replicate the data hall in live operation. Server simulators, load banks, flexible cable distribution, automatic transfer switches, data logging for electrical power and environmental conditions (temperature and humidity), and the ability to incorporate the load banks within temporary hot aisle separation partitions give the foundations for a successful IST. These tools allow the commissioning report to present a computational fluid dynamics (CFD) model of the actual data hall operation.

The selection and use of server simulators, typically rated between 3kW and 6kW as per the expected IT rack loads, gives a granular distribution of low delta-T heat across the data hall. Such detailed consideration of air distribution during testing is required due to the scale of the IST and the increased volumes of air affected in the mega data centre environment. This replicated heat allows mechanical cooling systems to run at optimum design temperature, which ensures future active IT equipment will not overheat and fail once deployed. If commissioning occurs prior to deployment of IT cabinets, server simulators can be housed in portable mini-towers for distribution across the empty space.

The use of flexible cable distribution facilitates the cabling of high quantities of 5kW to 20kW-rated load banks to A and B feeds on a PDU or busbar infrastructure. If the cable distribution also includes the ability to automatically transfer the load, then the commissioning team can replicate maintenance procedures and failure scenarios during the IST.

Completing an IST to budget takes on a greater importance when commissioning a mega data centre.



In order to report the successful operation and performance of the room during the IST, the commissioning team will need to monitor and record electrical and environmental data. Having electrical data available within the load package avoids the use of a power analyser with exposed connections to live terminals in the data hall. When server simulators include temperature sensors, extensive temperature analysis allows CFD modelling to be performed during the testing period. While the data centre will ultimately have a building management system (BMS) and a data hall fitted out with the latest DCIM system, they are unlikely to be fully operational at the time of testing. The project team should source a provider offering server simulators at the earliest opportunity in the project and avoid alternative load bank solutions that will cause delays to the IST programme.

Common Mistakes
Restricted choice in the market dilutes the availability of suitable load bank equipment. Selecting the wrong type of load bank solution on cost grounds can compromise the validity of the IST, and the hidden problems will not manifest until the data hall goes live with active IT equipment. The temptation to choose 20kW three-phase industrial space heaters rather than load bank server simulators affects the commissioning of mechanical cooling systems: the design of such heaters prevents the ambient room temperature reaching the design criteria needed to commission the CRAC or AHU units. Some suppliers have removed the thermostatic controls, only to find the space heater overheats and in some circumstances catches fire.

The choice of large 110kW load banks can be justified when testing site equipment such as PDU panels, busbars or switchboards to Level 3 ASHRAE requirements. These load banks provide a cost-effective solution for proving the electrical infrastructure of the mega data centre; however, they will create localised hotspots, or areas of concentrated heat, should they be used for the commissioning of the cooling systems. In extreme circumstances during tier certification, the electrical load has been provided by 2kW infrared heaters or 1kW hair dryers. Infrared heaters create an ambient temperature of over 40 degrees Celsius and wall skin temperatures of 70 degrees Celsius, and hair dryers are not designed for the continuous operation required in an IST. This type of low-cost solution should not be considered to replicate the operation of IT equipment: it risks costly delays while compromising the integrity of the testing programme.

Achieving Cost Savings
Completing an IST to budget takes on a greater importance when commissioning a mega data centre, especially given the number of rental load banks that will be required. The increase in size of the facility will increase the time needed to complete the commissioning, so by combining the latest technologies, traditional delays often associated with load banks can now be avoided. By selecting solutions that give enhanced data logging, the commissioning report will also give the client detailed levels of operating information and fully auditable reports for future tenants considering use of the space. Data logging can also be used in CFD models to ensure that the mega data centre is ready to use.
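To give a feel for the quantities involved, the sketch below works through the simulator count for a single hall in Python. All figures are hypothetical, taken only from the ranges quoted in this article (a 2,500kW hall phase, 5kW server simulators); it is an illustration, not a Hillstone sizing tool.

```python
import math

def ist_load_package(hall_kw, simulator_kw=5.0, rack_positions=500):
    """Rough sizing for an IST rental load package.

    hall_kw        -- design IT load of the hall (e.g. one 2,500kW phase)
    simulator_kw   -- rating of each server simulator (article quotes 3-6kW)
    rack_positions -- planned rack positions, used to spread heat evenly
    """
    simulators = math.ceil(hall_kw / simulator_kw)
    per_rack = simulators / rack_positions  # granular, low delta-T spread
    return simulators, per_rack

units, per_rack = ist_load_package(2500)
print(f"{units} simulators, ~{per_rack:.1f} per rack position")
# -> 500 simulators, ~1.0 per rack position
```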




Heat Load Testing

Reducing the Cost of Cooling. What can be done to make heat load testing more effective? By Dave Wolfenden, Director, Mafi Mushkila

Introduction
It doesn't matter whether it's a new data centre or a refurbishment: getting the cooling balance wrong means throwing money away. The problem for many data centre builders is that, irrespective of new build or refurbishment, they often have no idea what the real heat load will be. As a result they do limited testing, which is often poorly designed, and their clients eventually end up paying the price.

Heat Source
One of the biggest problems with testing is making it representative of the end use of the product. It doesn't matter if it's a car, a washing machine, a laptop computer or a data centre: if the tests have no basis in reality, then not only is the time and money spent on them a waste, but the false impression they give of efficiency means that future waste goes completely undetected.

In the data centre, one of the most common ways of testing the heat load is to introduce a heat source. There is nothing inherently wrong with this, provided that it is done in a real-world way. For example, introducing a heat source of 20kW, 40kW, 60kW or higher might seem like a good way to discover what the cooling system can handle. However, if there is no equipment other than the heat source in the room, all that is being tested is the ability of the cooling systems to deal with hot spots rather than normal computer load.

One of the hardest things to get from the end-user client is the expected loading inside the data centre. While customers put down numbers in their specification, these are often 'guesstimates' rather than realistic numbers. With long lead times for data centre construction, it is also possible that the hardware originally destined for the data centre may have changed.

The solution is to get the customer to provide a range of values for each data centre hall, giving the expected lower end, the maximum expected load and some idea of the type of systems to be installed. With the latter, it is then possible to place variable heat sources around the data centre to best match how heat will be generated when systems are running.

Effective Testing
There is a range of actions that can be taken to make testing more effective, whether for a new build or a refurbishment. The key is to get infrastructure and load emulators into the hall and configure them to be as representative of reality as possible. Ten things that can be done to make testing more effective include:

1. Racks and cable trays are the minimum type of equipment required. To make the cable tray testing more realistic, tape off some of the ducts to represent different densities of cable load.

2. Make sure that the racks are all properly blanked off to prevent air mixing, and arrange them in a similar configuration to the end-user requirement.

3. Not all infrastructure will be rack mounted, so add in additional components to represent the type of equipment often found in the data centre.

4. Make sure that the expected means of input and exhaust air are accounted for, along with the common types of airflow interference, and arrange the room accordingly.

It's not difficult to create a valid test environment, but it does require planning.


Server Emulators & Load for Commissioning & Integrated Systems Testing
Temporary Racks & Power Distribution
Temperature & Humidity Logging
Fully installed and managed, or rental only

لا مشكلة

Mafi Mushkila Ltd

Datacentre Testing, No Problem

15MW Heat Load Available for rent

Rack Mounted 2kW Single Phase

Rack Mounted 3.5 or 3.75kW Single Phase
Floor Standing – 3-Phase & Single Phase, 2, 3, 9, 15 & 22kW

Visit us at Data Centre Summit 2015 North, 30th September. www.mafi-mushkila.co.uk | +44 1243 575106



Experienced testing vendors will be able to advise on how best to create realistic baselines for the types of workload to be run.

5. Place multiple sensors in each rack and row to get a granular view of air from the ground to the ceiling.

6. Use multiple heat emulators per rack and place them where the load will occur. For example, if the rack will have multiple switches mounted at the top, place a heat emulator there. Similarly, if there are going to be blade servers that generate large amounts of heat at the bottom, place a larger heat emulator at the bottom.

7. Don't test with a single heat load. Vary the loads from the emulators across the racks and throughout the day to make the test representative of normal workloads. Focus on the edge cases, such as peak logon and backup times.

8. If designing or refurbishing multiple data halls, create or buy movable racks that will hold the heat emulators. With hardware refreshes taking place every 3-5 years and data centres having a life of up to 25 years, it makes sense to invest in equipment that will support a rolling programme of refurbishment.

9. Invest in or hire specialists in computational fluid dynamics (CFD) who will be able to see how airflow moves as you alter the heat load. This will quickly identify where there is a risk of hot spots that are not easily cooled, and provide information as to where certain types of equipment with high heat load can or cannot be located.

10. Ensure that you create a set of baselines for the different types of test load. These can then be used to compare against ongoing readings from the sensors once the data centre has been commissioned. Evaluating actual vs. projected heat is a good indicator of future problems and potential energy waste; a simple sketch of this comparison appears after the conclusion below.

These steps don't form an exhaustive list, and the choice of whether to use some or all will depend on budget and availability of equipment. It is possible to bring in a third party to do the testing, and they will supply more of the equipment required to make the testing realistic. Experienced testing vendors will be able to advise on how best to create realistic baselines for the types of workload to be run. They will also help create the processes that then compare tests, projections and actual heat and cooling figures gathered by sensors.

While these figures are significant in maintaining future costs and efficiency, they are also indicators of non-IT-related problems. Poor housekeeping practices, such as a build-up of old cabling or a failure to maintain cable standards, can have a disproportionate impact on the effectiveness of cooling. Sensors showing an increase in heat, and regular use of CFD to test airflow, will also indicate hidden problems, especially where infrastructure is under the floor or above the ceiling.

Conclusion
It's not difficult to create a valid heating and cooling test environment, but it does require planning. The biggest issue is often the communication between the end-user clients, the contractor creating the data centre and the test team. In many cases, while there is a contract and a handover process, there is little real communication over future use. One reason for this is that those commissioning the data centre are not part of the IT team. There is still a disconnect between facilities management and IT, and this will always create opportunities for money to be wasted. Another reason is commercial sensitivity, where corporate customers want to prevent competitors gaining an understanding of their future data centre requirements. Irrespective of why customer and contractor do not talk to each other, both parties must take responsibility for historical poor practices in testing data centres. Solving the problem is not hard, will save money on energy costs and is an important contributor to any corporate environmental audit.
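As an illustration of point 10, the baseline comparison can be as simple as the following Python sketch. The rack names, readings and the 10 per cent tolerance are invented for the example.

```python
# Compare live sensor readings against the commissioned heat baselines.
# Rack names, readings and the 10% tolerance are illustrative assumptions.
baseline_kw = {"rack-01": 4.2, "rack-02": 6.8, "rack-03": 3.1}
actual_kw = {"rack-01": 4.3, "rack-02": 8.1, "rack-03": 3.0}

TOLERANCE = 0.10  # flag anything more than 10% above its baseline

for rack, expected in baseline_kw.items():
    measured = actual_kw[rack]
    drift = (measured - expected) / expected
    if drift > TOLERANCE:
        print(f"{rack}: {measured}kW vs {expected}kW baseline (+{drift:.0%})"
              " - possible hot spot or energy waste")
```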



Network | Visibility

Automating Network Visibility

DC operators face the challenge of maintaining extremely high levels of availability, whilst significantly improving efficiencies and lowering costs. Automating network visibility can help – but where to start? By Barry Silverman, Senior Business Development Manager, R&M UK


As you connect more hardware, tracking the operational aspects of your servers, switches, cooling and power equipment and any other linked IT hardware becomes increasingly difficult. Furthermore, as the number of applications grows, the underlying architecture becomes increasingly complex. In today's increasingly complex and converging environment, data centre monitoring and management require constant attention. The significantly higher percentage of east-west DC traffic and the complexity of blade servers further add to the problem. When you add mobile workloads, big data and increasingly erratic traffic patterns into the equation, the chance of retaining an up-to-date, clear overview of all DC systems becomes very slim.

The average surface area of data centres is currently between 1,000 and 2,500 m², often with thousands of network ports. You really don't want to rely on manual fault-finding if anything goes wrong. Knowing the exact location of every port, switch, cable, link and router, and how they are all connected, is not a luxury – especially in the light of developments such as cloud, mobile infrastructure, BYOD, 10/40/100G, convergence and virtualisation. It is also worth pointing out that manually managed infrastructure data typically has a 10 per cent error rate* and that 20-40 per cent of ports in a network are forgotten over time**. Mapping and management also take up a great deal of staff time, introduce unnecessary costs and hinder inventory consolidation.

Automated monitoring: the benefits
Automating the monitoring of the entire DC, including all of its components and subsystems, would be an ideal solution. Automated real-time network monitoring allows you to avoid or mitigate problems and security threats at the moment they occur. Increased network visibility not only improves uptime and efficiency in handling errors, it also lowers OPEX and provisioning costs. In addition, monitoring supports auditing and helps find weak spots. Management reporting improves significantly, as does the documentation on which improvements, upgrades, strategic choices and hardware purchases are based. Furthermore, a well-thought-out and well-implemented monitoring solution helps discover external security threats much faster. Increasing DC virtualisation means that reaching core networks is actually easier for hackers, so security needs to be stepped up, and monitoring can help.

Finding a starting point
Knowing where to start and what to tackle first is difficult. Budgets are often limited, so being able to cover every part of the network from the outset is unlikely. Of course, you can choose to start by investing in the areas which will bring the greatest short-term returns, or which are the most vulnerable, or which can be scaled most easily. New network monitoring functions can simply be provisioned whenever new services are set up or customers added. To make the best possible decisions, you need to have access to the most accurate, up-to-date data at all times. However, mapping every single thing in the DC could result in a map that is as complicated as the DC itself. By building visibility directly into the DC instead, you can significantly limit the dangers that come from blind spots without losing your overview.

Data on tap
Traffic access points, or TAPs, offer ongoing, granular insight into everything taking place on the network. TAPs are an innovative way of accessing traffic in real time. They contain passive fibre-optic splitters that deliver an identical copy of a passing optical signal without introducing latency or packet loss. Once the optical signal is out of band and connected to a Fibre Channel or Ethernet network probe, the entire infrastructure can be monitored in real time without agents, helping ensure availability and performance for live applications. Ideally, TAPs are fully compatible with the existing structured cabling infrastructure hardware. Deploying TAPs during a refresh cycle or new build, and making access to these products controllable with RFID-based monitoring, minimises risk without increasing rack space.

Total Network Visibility: choosing wisely
For some years, automation has been considered a key feature when it comes to optimising data centre productivity – for example, by taking away the need to manually program switches. Such benefits can also be realised in the area of monitoring. Adaptive automated network monitoring increases flexibility and reaction speed and takes away dreaded 'blind spots'. According to Gartner, intelligent data centre infrastructure management can cut operational costs by 20 to 30 per cent. Apart from drastically reducing the time spent on creating inventories, capacity is freed up to spend on core tasks which contribute to the bottom line. An automated solution can help significantly enhance performance, improve response times and future-proof your network, and should be selected with the same care as other core network components.

* Source: Watson & Fulton
** Source: Frost & Sullivan
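To make the 'forgotten ports' point concrete, here is a minimal Python sketch of the kind of check an automated visibility system performs continuously. The port names, dates and 90-day threshold are invented for illustration and do not represent R&M's products.

```python
from datetime import datetime, timedelta

# When traffic was last observed on each port, as collected automatically
# (e.g. from TAP-fed probes or polled switch counters). Illustrative data.
last_seen = {
    "sw1/eth0/1": datetime(2015, 7, 1),
    "sw1/eth0/2": datetime(2014, 11, 3),
    "sw1/eth0/3": None,  # patched, but never seen carrying traffic
}

STALE_AFTER = timedelta(days=90)
now = datetime(2015, 7, 20)

for port, seen in last_seen.items():
    if seen is None or now - seen > STALE_AFTER:
        print(f"{port}: no recent traffic - candidate 'forgotten' port")
```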


Case | Study

A Threat From Within

About the Client
The client is a global business-to-business agency with over 600 employees. Plan B, a specialist disaster recovery company, looks after all advertising imagery for this client; that imagery is one of their most valuable assets and totals 12TB of data, predominantly on one large file server. Plan B minimises the risk of IT downtime by pre-recovering and testing their systems every 24 hours, offering a hot standby equivalent in a virtual environment. This offers them the fastest return to service of all virtual DR companies, at a lower cost.

The Disaster
One morning Plan B received a call from a member of the technical team. They had lost around 8TB of data and needed it back quickly. The best way to achieve this was to invoke the Plan B recovery platform, as data transfer times for 8TB, even to disc, were estimated at days rather than hours. The initial diagnosis by the client was a virus; however, a further 3TB had gone 'missing' during the invocation process, leading the Plan B technical team to suspect the cause was closer to home. This was because the deletions were different: the first was a disc deletion, and the second a disc being reformatted. Agreeing that the cause of the missing data could be an internal threat, staff were asked to leave the building and admin domain passwords were changed. Unfortunately, the root cause of the trauma had not been completely resolved by the change in admin domain passwords: the SAN admin passwords were not changed, and the perpetrator managed to delete some of the LUNs on the SAN.

An external provider is much better suited to handle IT disasters

Disaster Recovery Solutions. An IT disaster can impair your performance and capabilities


DATACENTRES are MATURING Mature Data Centres know that protecting their customers’ data isn’t just about being popular, living in the upmarket streets of London, wearing Tier III trainers or comparing the size of their PUE.

A mature data centre understands that high quality, exceptional service, low cost and ultimate flexibility, combined with levels of security unsurpassed elsewhere, are more important than boasting about the size of your PUE or your Tier III label.

Don't let childish boasts cloud your decision: choose a data centre that offers maturity and puts your business needs first.

Contact MigSolv Today

0845 251 2255

migsolv.com



The Recovery
Plan B booted up the client's pre-recovered system, and within just a few minutes of the initial call it was fully available to the client. Due to the nature of the problem, a high-level call was made to the parent company in the USA to establish who could be trusted to handle the recovered system. It was only on their authority that we made the system available, and then only to the Head of IT. The plan going forward was to restore the client's servers from the Plan B appliance rather than give employees access to the rescue platform. Because of the amounts of data involved, this was a very lengthy process, so the Head of IT manually provided information to employees.

As a safety net, it was agreed that in parallel Plan B would start restoring the 12TB of data onto an 'export server' so that it could be physically couriered to site and deployed as a replacement for the original server. This would cover the client if the initial data restoration took too long. This process took eight days over the Christmas break to write to disc (demonstrating how long they would have been without their systems if they didn't use Plan B's pre-recovery service). It was fully tested and shipped to the customer, where the Head of IT took receipt and installed it, providing standalone local access to files while the data restoration to the newly built servers continued.

At the start of the New Year, service was starting to get back to normal as staff came back from the festive period. Unfortunately, however, the incident was not quite over, as the export server was left unprotected within the locked machine room. Someone subsequently accessed the machine room and deleted one of the VMs. At this stage the culprit had been identified, but the damage still remained. The client continued to provide services to their customers during this traumatic period, working from Plan B's protected systems. Having finally secured the perimeter, local services were rebuilt and fully restored, and a short while afterwards the customer was moved back to their live system. The Plan B Disaster Recovery service resumed, protecting the new 'live' platform in the usual, dependable manner.

The Conclusion
The Plan B Disaster Recovery solution was able to save this client from the severe consequences of their IT disaster, but only because:

• Plan B pre-recovers. Recovering after the event would have taken days rather than minutes due to the nature and longevity of the attack.
• Plan B is independent of the client's IT department. If they had been trying to run their own disaster recovery provision inside their management perimeter, it's likely that the attacker would have destroyed that too.

This situation illustrates just how much an IT disaster can impair performance and capabilities. Even though the client was highly capable of withdrawing access from employees appropriately under normal circumstances, they were under such immense pressure that they were unable to lock down their IT system effectively when they needed to. It is very common for the stress and pressure of an IT disaster to adversely affect the performance of individuals, leading to further errors and impairing recovery. An external provider is much better suited to handle IT disasters for exactly this reason.


IT Cooling Solutions

The Whole Range of Data Center Cooling Solutions from a Single Source

CyberAir 3 – Room Cooling
CyberRow – High Density Cooling
CyberCool 2 – Chiller Units
CyberCon – Modular Data Center Cooling
CyberHandler – Air Handling Units

STULZ GmbH . Company Headquarters . Holsteiner Chaussee 283 . 22457 Hamburg . Germany products@stulz.com . Near you all over the world: with sixteen subsidiaries, six production sites and sales and service partners in more than 120 countries. www.stulz.com

DATA CENTRE COOLING SOLUTIONS – SALES / SUPPORT / SERVICE / SPARES
STULZ UK Ltd . First Quarter . Blenheim Road . Epsom . Surrey . KT19 9QN
01372 749666 . Sales@stulz.co.uk . www.stulz.com


Opinion |

CLOUD BACKUP

Implementing a Reliable Strategy. Chris Sigley divulges the keys to preserving information security and risk management. By Chris Sigley, General Manager of Redstor

As IT continues to evolve within business, companies are exposed to more risks than ever before.



Introduction
As the role of information technology continues to grow and evolve within business, the potential risks associated with accessing, storing, sharing and protecting information are similarly increasing. In order to better equip themselves to adjust to these kinds of threats, businesses need to consider the various risks they might be vulnerable to and implement a reliable strategy to deal with them effectively and efficiently.

Firstly, let's consider a few threats. In each of the scenarios below, vulnerability can result in a serious risk to your business:

• A hacker obtains access to your website and remains undetected until the damage has been done. Maybe they have maliciously updated something on the website, or they have defaced it to generate negative public opinion.
• Attacks have resulted in your communications getting blocked, or perhaps your domain name has become blacklisted.
• Hardware failures are exacerbated by the lack of an up-to-date DR plan.
• You recently discovered that a disgruntled employee who left the company a few weeks ago used their high access privileges and deleted or updated some critical internal data.
• An environmental problem (e.g. flood, fire, power failure) means you have no access to your server room, and all the kit is powered off.

In order to stop these kinds of threats from resulting in disastrous consequences for your business, here are three areas you should review, consider and action.

Prevent
As the classic idiom states, 'prevention is better than cure.' Try to prevent attacks from happening in the first place by utilising network and software technologies that detect and block threats while allowing appropriate traffic to proceed with minimal performance impact. This is an area that most of us have already thought about and implemented: firewalls, proxy servers, spam filters, web filtering and isolated DMZs, to name but a few.

React
Consider how you would react to these threats if they were actually to happen. Are you able to roll back your applications and critical data, or restore entire systems? Maybe you can go back to last night easily, but what if you need to go back three weeks? How quickly can you get those restore points back and then reinstate the systems as they were before the incident?

Plan
Plan for the worst, but hope for the best. Have you got copies of all your critical servers, services and data in an offsite location, away from the incident? If so, have you tested that you can actually recover that data? Is it part of your regular DR tests, or do you not even have an up-to-date disaster recovery plan? A short sketch of this kind of restore-point check follows at the end of this article.

Conclusion
As IT continues to evolve within business, companies are exposed to more risks than ever before, and it's essential that they remain robust and agile enough to cope with them. Utilising cloud services is an effective way for an organisation to safeguard itself from a number of critical threats facing businesses today. By implementing secure cloud backup, unified endpoint management and efficient cloud-based disaster recovery, companies can become less reliant on hugely complex disaster recovery plans and are no longer faced with significant upfront expenditure to ensure they're protected.

Nevertheless, one thing that has remained unchanged is the need to choose the right service delivery partner and the right technology for the job. No two businesses are alike, and the same can be said for a reputable cloud provider. It's important that business owners consider their individual needs and choose a service that can be customised to fit those needs.
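One way to make the 'can we go back three weeks?' question concrete is to test retained restore points against the recovery points the business needs. The Python sketch below assumes a made-up retention schedule (nightly for a week, then weekly); it illustrates the check, not any particular vendor's tooling.

```python
from datetime import date, timedelta

# Restore points currently held offsite: nightly for the last week, then
# weekly (a made-up schedule - a real service reports these itself).
today = date(2015, 7, 20)
restore_points = [today - timedelta(days=d) for d in range(7)]
restore_points += [today - timedelta(weeks=w) for w in (2, 3, 4)]

def can_restore_to(target, points, slack_days=1):
    """True if a retained point falls within slack_days of the target."""
    return any(abs((p - target).days) <= slack_days for p in points)

# The "three weeks ago" question from the React section:
print(can_restore_to(today - timedelta(weeks=3), restore_points))  # True
```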



Data | Centre

What's Free, Renewable and Really Cool?

Free Fresh Air Cooling. Alan Beresford explains the benefits of free fresh air. By Alan Beresford, Managing Director, EcoCooling.

Since the dawn of (data centre) time, we've all been conditioned to believe that we need refrigerated air to cool our sensitive IT kit. Yet if you blow hard onto something with only slightly cooler air, you make it colder. So, says Alan Beresford, Technical and Managing Director of EcoCooling, why not apply that same principle in the data centre and do away with refrigeration?

Data centre cooling air doesn't need to be cold; it only needs to be a little cooler than the thing you're trying to cool. What really matters is the volume of slightly cooler air that 'blows' onto the item being cooled. Thanks to changes to both server designs and the guidance from ASHRAE (the standards body for data centre cooling), it is now possible to use free Fresh Air to achieve the newly redefined ideal operating temperature range for IT equipment of 21C to 27C.

Where in Europe and the top half of the northern hemisphere can you find an endless supply of air at 21C or less on most days of the year? Just about everywhere – because we're talking about plain, ordinary fresh air, and it's available for free by the thousands of billions of cubic metres. Giants like Facebook, with their 120MW Lulea data centre in Sweden, and one of Europe's largest telcos are using free Fresh Air to cool their data centres and digital switching centres. And neither has deployed any form of expensive, energy-guzzling, auxiliary refrigeration plant.

How are these two giant data centre operations overcoming the problem of the occasional 'hot' (21C+) days? Facebook's strategy is simple and elegant. Having done much analysis and modelling of the effects on reliability and uptime, Facebook concluded that the most cost-effective solution to the few 'hot' days in Lulea is simply to allow the IT kit to get a bit warmer, and on the very hottest days to use evaporative cooling. The major European telco, on the other hand, has been busy de-installing all of its expensive-to-run refrigeration plant and is installing adiabatic (evaporative) CRAC-like units in its data centres and telephone exchanges to cope with the ten per cent or fewer of days that are 'warm'. These units are far cheaper and less space-consuming than refrigeration plant and consume only 10 per cent of the power.

Key to the Free Fresh Air strategy, however, is designing the whole system so that this evaporative cooling only kicks in on the hottest of days. As a result, the overall cooling cost is reduced phenomenally. That's because for around 95 per cent of days the only 'cost' of cooling is that of running electronically commutated (EC) fans to blow and extract the correct volume of fresh air through the data centre or telephone exchange. In both of these examples, the operators are looking at true annualised PUEs of less than 1.1.

Not Free For All
We're currently installing Free Fresh Air cooling at 70 sites, and as more data centre operators realise what is actually possible and conduct their own investigations, we will doubtless see significant movement to this methodology throughout the top half of the northern hemisphere and the bottom half of the southern one. But this solution isn't suitable for every data centre. Strangely, the thing that you'd probably most expect to be a problem with fresh air – relative humidity (RH) – generally isn't a problem. However, RH combined with the wrong sort of dust or particles or corrosive gases in the atmosphere local to the data centre is problematic.


From this you can see that the location of a data centre relative to other industry and/or to city centre traffic fumes needs to be carefully considered and analysed when deciding whether Free Fresh Air cooling can be deployed. Also key to the decision process is the localised historic weather data relating to temperature and humidity, since this affects the number of 'hot' days where auxiliary evaporative cooling or refrigeration support might be needed.

One of the biggest issues with Free Fresh Air cooling is the generally large number of days where the outside temperature is too cold! Some modern server equipment is programmed to shut down if the temperature drops below around 14C. It's for this reason that we at EcoCooling have put a lot of R&D effort into producing patented control systems and attemperation processes to keep the cooling air within a tightly controlled temperature band – typically 18C to 21C.

Keeping The Fresh Air Out
Because many data centre operators fear the idea of introducing fresh air into their data centres, there's been an upsurge in the deployment of indirect evaporative cooling units that use heat exchangers to keep the fresh air separated from the data centre cooling air. However, while these units solve that particular concern, like all technologies they introduce other problems. One such problem is their size: indirect evaporative coolers are particularly large. The heat exchange also introduces an inefficiency in the form of the temperature differential (∆T) across the heat exchanger plate, meaning that the internal air is always around 3C warmer than the outside air. This significantly increases the number of 'hot' days on which evaporative cooling is operational or refrigeration support is needed. In most cases where indirect air cooling is used, a complete parallel DX or chilled water cooling system also has to be deployed, which seriously adds to the already high capital costs and size of these indirect units. There's also an issue with auxiliary refrigeration cooling in that it needs large amounts of power when it operates.

A further spoiling factor in terms of the amount of 'free cooling' available with indirect air systems is that it is not possible for the fresh air inlet and the hot air exhaust to be very far apart. The result is that eddy currents draw the hotter exhaust air back into the inlet, meaning the inlet temperature is considerably higher than ambient. With Free Fresh Air we generally design the system so that inlet and exhaust are on opposite sides of the data centre building, with no opportunity for hot air recirculation.

Chilled Water
For the cooling of data centres consuming 1MW and above, there are now some very efficient chilled-water systems on the market. By combining refrigeration-based chillers with either dry or adiabatic/evaporative pre-coolers, these systems harness the 'free cooling' power of ambient air to significantly reduce the amount of time that the power-hungry compressors need to run.

Horses For Courses
There is no simple standard solution for cooling; every application is unique. Chilled water can be a great solution for megawatt and multi-megawatt data centres, often needing only 75kW of energy per MW of cooling load. Indirect air cooling is a good solution in appropriate circumstances. DX (direct exchange) refrigeration cooling still has its place as a simple and quick-to-deploy solution. Free Fresh Air is the most efficient, needing only 1.5kW of energy or less per 35kW of IT load. It is suitable for small server rooms right through to multi-megawatt data centres and can bring PUEs down to 1.05.
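Those headline figures are easy to sanity-check using only the numbers quoted above (1.5kW of fan energy per 35kW of IT load for fresh air, 75kW per MW for chilled water). The Python sketch below treats cooling as the only overhead, which is a simplification: real PUE also counts UPS losses, lighting and other plant.

```python
def cooling_pue(it_kw, cooling_kw):
    """PUE with cooling treated as the only overhead (simplified)."""
    return (it_kw + cooling_kw) / it_kw

print(round(cooling_pue(35.0, 1.5), 3))     # fresh air: 1.043 - the ~1.05 claim
print(round(cooling_pue(1000.0, 75.0), 3))  # chilled water: 1.075
```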



UPS | Systems

The Significance of PUE

What is Power Usage Effectiveness? Kenny Green discusses how modern UPS topology can help improve PUE. By Kenny Green, Technical Support Manager for Uninterruptible Power Supplies Ltd.

Introduction
Driven by continuously growing demand for secure data processing capacity, dedicated data centres have become truly enormous. Co-location provider Switch's SuperNAP data centre campus in Las Vegas, for example, has a mission-critical power capacity of up to 200MW and can house up to 20,000 cabinets. As power demand has climbed to these levels, energy efficiency has become a critical issue for both commercial and political reasons. In recognition of this, the Green Grid – an industry group focused on data centre efficiency – created the Power Usage Effectiveness (PUE) metric to determine a data centre's efficiency using a globally recognised calculation.

PUE is defined as the ratio between the total amount of power entering a data centre and the amount usefully consumed by the data-processing load within it. As a data centre's efficiency improves, its PUE drops; a perfectly efficient data centre would have a PUE of one. According to the Uptime Institute's Data Center Industry Survey 2014, the typical data centre has an average self-reported PUE of 1.7, meaning that for every 1.7W taken from the utility, only 1W is used directly for IT activity. The 'useful' power is consumed by data-processing hardware including servers, storage and telecommunications equipment. The 'overhead' or wasted energy is due to chillers and other cooling equipment, switchgear and UPSs.

Modern modular systems contribute significantly to PUE
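The definition reduces to a one-line calculation. A trivial worked example in Python, using the survey's 1.7 average (the 1,000kW IT load is an assumed figure for illustration):

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

# A facility drawing 1,700kW from the utility to run a 1,000kW IT load
# sits exactly at the survey's 1.7 average.
print(pue(1700, 1000))  # 1.7
```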


As cooling equipment has become more efficient, attention has turned to UPS systems, as they offer the major remaining PUE improvement opportunity.

PUE values can vary continuously over a 24-hour period as data centre loads change, on both the overhead and IT equipment sides. External ambient temperature fluctuations can also affect cooling equipment and its contribution to the PUE value. Nevertheless, PUE provides a useful comparative indicator that can reveal improvements and changes within the data centre. One slightly counterintuitive result, however, occurs if a data centre succeeds in improving its IT hardware's efficiency but not the efficiency of its overheads: the overall PUE deteriorates, due to the change in the ratio between 'effective' energy use and wasted energy use. This effect can be negated, and PUE improved, if UPS energy efficiency is also improved. In fact, the benefits are two-fold: in addition to direct energy savings, increasing UPS efficiency cuts energy use by reducing air conditioning requirements.

UPS Technology
UPS design has travelled a tremendous way in the last twenty years, becoming smaller, more powerful, more flexible and, most importantly, more efficient. Today, most modern UPS installations use transformerless UPS topology, which offers several significant advantages over the earlier transformer-based approach it replaced. As a result, efficiency has massively improved in recent years, and the most advanced UPS suppliers are now able to offer efficiency levels of up to 96 per cent; in certain circumstances even higher levels are achievable, albeit with some compromises. What's more, this high level of efficiency is closely maintained across a wide spectrum of loading, even down to 25 per cent or less, meaning efficiency does not suffer if your UPS is underutilised. Further savings arise as the UPS's increased efficiency reduces waste heat output and therefore demand on cooling systems.

Another benefit of modern UPS technology is the ability to configure your UPS system as a scalable set of 'hot-swappable' modules, rather than a single, monolithic installation. The modular approach enables load capacity to be increased throughout the life of the system, in line with business requirements. For example, a load of 60 kVA could be supported by a modular rack-mounted implementation, such as UPSL's PowerWAVE 8000DPA, using three 20 kVA modules. If the modular system requires N+1 redundancy to support a critical load, this can be fulfilled by slotting in a single extra 20 kVA module. This close matching to load size minimises expenditure on unnecessary capacity. By contrast, the old transformer-based solution would have required a second 80 kVA installation to achieve redundancy, which not only incurs costs for excessive capacity but also further reduces efficiency in an already inefficient topology. The modern modular system may also improve energy efficiency by presenting an input power factor much closer to unity and far less load-dependent; this reduces the magnitude of the input currents, and therefore the size of the power cabling and switchgear.
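The modular arithmetic in the 60 kVA example above generalises neatly. A small Python sketch follows (the 20 kVA module rating comes from the article's example; the function itself is illustrative, not UPSL's sizing method):

```python
import math

def modules_needed(load_kva, module_kva=20, spare_modules=1):
    """Modules to carry the load, plus N+x redundant modules."""
    return math.ceil(load_kva / module_kva) + spare_modules

# 60 kVA critical load on 20 kVA modules with N+1 redundancy:
print(modules_needed(60))  # 4 modules (3 to carry the load + 1 spare)
# The old transformer-based approach would have needed a second complete
# 80 kVA installation to achieve the same redundancy.
```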

Eco-mode Operation
Modern modular systems, such as Uninterruptible Power Supplies Ltd's PowerWAVE 9500DPA, contribute significantly to a data centre's efficiency and PUE, as they can provide 96.1 per cent efficiency while operating in true on-line, double-conversion mode. This is the most attractive mode for almost all data centres, as it represents an optimum balance between efficient operation and full UPS protection from mains-borne spikes and disturbances as well as blackouts. However, there are occasionally applications where operators decide to push UPS efficiency up to 99 per cent by operating the UPS in what's called eco-mode. Circumstances under which eco-mode may be acceptable include some industrial applications where the load is not susceptible to damage from mains spikes or blackouts. By contrast, in areas such as data centres, where the critical load comprises sensitive ICT equipment, operators are extremely unlikely to adopt this mode: even if the mains quality is generally good, a single mains aberration may lead to loss of data and permanent damage to unprotected ICT hardware. Despite these major shortcomings, eco-mode may one day become a popular option if IT hardware manufacturers design their products to better cope with the fractional delay between the mains failing and the UPS coming online. If this is achieved, eco-mode may become standard practice, making a significant impact on improving PUE in the process.

Conclusion
As data processing equipment efficiency improves, overall PUE can deteriorate; raising the efficiency of essential support systems, such as the UPSs, is vital to maintain or improve PUE values in the long term. Modern UPS hardware also makes it possible to size correctly for the critical load. Theoretically, PUE can be improved by operating the UPS in eco-mode; however, this benefit must be balanced against the risk of exposing the critical load to raw mains during normal operation. It is equally important to note that the significant increases in efficiency from the latest UPSs make the benefit of operating in eco-mode even less appealing, despite what some manufacturers may say. Overall, PUE can provide a valuable metric for understanding efficiency within the data centre, but improving it is not going to come from one area alone. Lowering your PUE is an ongoing challenge, with reputational, financial and environmental benefits for those who keep up their commitment.
Conclusion

As data processing equipment efficiency improves, overall PUE can deteriorate, so raising the efficiency of essential support systems, such as the UPSs, is vital to maintain or improve PUE values in the long term. Modern modular UPS hardware also makes it practical to size correctly for the critical load. Theoretically, PUE can be improved by operating the UPS in eco-mode; however, this benefit must be balanced against the risk of exposing the critical load to raw mains during normal operation. It is equally important to note that the significant increase in efficiency of the latest UPSs makes the benefit of operating in eco-mode even less appealing, despite what some manufacturers may say. Overall, PUE can provide a valuable metric for understanding efficiency within the data centre, but improving it is not going to come from one area alone. Lowering your PUE is an ongoing challenge, with reputational, financial and environmental benefits for those who keep up their commitment.

Case | Study

Increasing energy efficiency

Introduction

Frankfurt has the highest density of data centres of any city in Europe. Back in 2008, the German branch of Citigroup built its Frankfurt Data Centre (FDC), creating a high-performance, energy-efficient facility. Citigroup appointed Siemens as its technical partner based on Siemens' ability to deliver multiple building systems and create the high levels of protection and availability that are key requirements for an efficient data centre. Siemens has delivered intelligent safety and security technology; reliable building automation that cools and conditions the air based on demand, keeping the servers from overheating; and an uninterruptible and redundant power supply. The project demonstrates Siemens' ability to integrate seamlessly across a variety of disciplines. The data centre has been continually enhanced as it has evolved, with modifications


to the deployed building automation, security technology and power supply solutions to meet the growing requirements for availability, protection and energy efficiency.

Energy Efficiency

Redundancy, high system availability, security and energy efficiency are key elements for Citigroup. Processed data is stored in duplicate within the data centre and mirrored to other Citigroup data centres. In the building itself, a dual power feed, duplicate downstream medium- and low-voltage switchboards with duplicate switches, busbars and dual cooling technology ensure maximum redundancy. If a power failure were to occur, two independent uninterruptible power supply units and the emergency power supply would take over, ensuring continued power for at least 72 hours. This means that the FDC meets the Tier IV standard awarded by the Uptime Institute for maximum redundancy and 99.995 per cent availability. To guarantee the safety and security of the building, assets and processed data, all the installed Siemens security and safety systems were aligned with Citigroup's global safety and security requirements, as well as the facility's regular emergency evacuation and safety drills. The building automation, security, fire safety and power supply solutions include: a Desigo building automation system; 1,600 Sinteso fire detectors; early fire detection via smoke extraction systems; fence sensors and cameras for the 1,300-metre perimeter fence; 150 internal and external CCTV cameras; an intrusion detection system with 168 door sensors; 150 Sivacon low-voltage switchboards; 104 NXAIR medium-voltage switchboards; and 96 Sentron transfer control devices. Approximately 2.3 km of busbars and 22 km of medium-voltage cable were laid for the power supply. The building was awarded Leadership in Energy and Environmental Design (LEED) Platinum certification just one year after it opened, making it one of the most energy-efficient data centres in the world.
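As a quick aside, the 99.995 per cent availability figure quoted above translates into a strikingly small annual downtime budget; a two-line calculation of our own makes the point:

```python
HOURS_PER_YEAR = 8760

availability = 0.99995  # the Tier IV figure quoted for the FDC
downtime_minutes = (1 - availability) * HOURS_PER_YEAR * 60
print(f"{downtime_minutes:.0f} minutes of downtime per year")  # ~26 minutes
```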

The Desigo building automation system from Siemens plays a central role in maintaining high energy efficiency and the standards of the LEED certification.

Saving Energy

Increasing the energy efficiency of data centres is a top priority for the industry. The IT infrastructure of a data centre requires a great deal of energy; however, utilising intelligent lighting and cooling control can considerably lower energy consumption. Other measures at the FDC include cold aisle containment, which allows the cold air that cools the servers through the raised floor to be directed in a more targeted way, thus reducing the air volume. In addition, the warm exhaust air from the server rooms, which has a temperature of approximately 30 °C, can be utilised for local heating through the use of heat pumps. "Our motivation was to continuously save energy," explains Norbert Heberer of Cofely Deutschland GmbH, the data centre's operator. "Building automation allows us to individually control and continuously monitor the heating, ventilation and cooling technology that is so vital to us."

Conclusion

The FDC is designed for thirty years of operation. The systems are flexible and continue to grow along with the data centre's requirements. The planned power capacity of the FDC was 5 MW. In its current configuration, the data centre uses 5,000 m² of its 10,000 m² for server operations. At the beginning of operations, the data centre used 900 kW of electricity; additional utilisation and occupancy has since increased the electricity requirement beyond 1 MW. "One challenge was to continually adjust the power supply and cooling capacity to meet the demand, from the planned 5 MW of total capacity to the actual starting load of 900 kW and then to the current level of about 1.5 MW," comments Heberer. "When operations started, we had a less efficient ratio of consumed energy to server energy demand, giving us a Power Usage Effectiveness, PUE, of 2.8," he adds. Technicians analysed the complete electrical supply to adjust the power supply and cooling capacity. Siemens then worked in collaboration with the technicians to optimise the cooling control. "Now all the dependencies of free cooling, pumps and chillers operate together as a bundle and can be controlled as demand dictates," explains Heberer. To adjust energy efficiency based on demand and lower the PUE value, lighting has been tied to access control, so that lights turn on only when server rooms are occupied. In addition, air conditioning in the server rooms has been set to the optimal operating point, so less cooling capacity is now needed. Air pressure in the cold aisle has also been lowered by 10 Pa. As a result of working closely with Siemens, the current PUE of the FDC is 1.5. Citigroup and Siemens continue to work in collaboration to exploit energy savings and further reduce the PUE.
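For readers who want to check the arithmetic: PUE is total facility power divided by IT equipment power. Assuming the electricity figures quoted above refer to total facility draw (our assumption; the article does not break the figures down), the implied IT loads fall out directly:

```python
def implied_it_load_kw(total_facility_kw: float, pue: float) -> float:
    """IT equipment power implied by a facility draw and its PUE
    (PUE = total facility power / IT equipment power)."""
    return total_facility_kw / pue

# Assuming the quoted electricity figures are total facility draw:
print(implied_it_load_kw(900, 2.8))    # ~321 kW of IT load at start-up
print(implied_it_load_kw(1500, 1.5))   # 1,000 kW of IT load today
```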




Data | Security

An Evolution in Infrastructure

By Sean McAvan, Managing Director of NaviSite Europe

A Multi-Step Approach to Security. Sean McAvan outlines the importance of securing your data centre from human error

Introduction

Ten years ago, few could have predicted what today's data centres would look like. The development of technologies like cloud computing, and the explosion of data generated by the likes of social media and the Internet of Things, has completely changed the modern data centre. This data growth not only affects how and where data is stored, but has created the challenge of how to protect this information. In recent years we have seen an evolution in infrastructure and storage to support these new trends, both for the business community and for consumers, which has driven innovation in how data can and should be protected. Companies and individuals are responsible for securing and protecting all this data, and while great strides have been made to ensure that information is protected from external threats, it is often humans who remain the weakest link in the security chain. Whether through malicious intent or inadvertent carelessness, even the most sophisticated technology can be rendered useless if sensitive information gets into the wrong hands due to human error; data centre providers must therefore take a multi-step approach to security.

Colocation

In a recent survey, NaviSite found that 82 per cent of UK respondents are either using or considering the use of colocation this year, and 54 per cent said security is a main consideration when evaluating colocation services. If you are looking to a third-party provider to host your data, it is essential to seek absolute clarity on what measures of security are in place at the logical and physical level. World-class data centres have a number of sophisticated controls to ensure systems remain protected,


including physical security controls such as cameras and biometric access systems, and may then offer managed services to deliver logical controls at the network level, such as firewalls, intrusion detection or DoS mitigation. At the OS level, operating systems have become more secure and more sophisticated anti-virus software is now available, while threats at the application level can be mitigated in a number of ways; for example, intelligent web application firewalls can be implemented. These are clever enough to learn the normal traffic patterns for an application, and if they encounter traffic outside the defined 'normal' parameters, the firewall can automatically block the problem traffic, averting an incident before it happens.
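The 'learn normal, block abnormal' idea can be illustrated with a toy example. A real web application firewall models far richer features (URLs, payloads, sessions) than this single request-rate baseline, but the principle is the same:

```python
from collections import deque

class ToyRateGuard:
    """Block a client whose request rate strays far from the learned baseline.
    A deliberately simplified stand-in for a real web application firewall."""

    def __init__(self, window: int = 100, factor: float = 5.0):
        self.samples = deque(maxlen=window)  # recent requests-per-minute samples
        self.factor = factor                 # how far from 'normal' we tolerate

    def allow(self, requests_per_minute: float) -> bool:
        baseline = sum(self.samples) / len(self.samples) if self.samples else None
        self.samples.append(requests_per_minute)
        if baseline is None:
            return True  # still learning what 'normal' looks like
        return requests_per_minute <= baseline * self.factor

guard = ToyRateGuard()
for rpm in [40, 55, 48, 60, 2000]:  # the final sample is an anomalous burst
    print(rpm, "allowed" if guard.allow(rpm) else "blocked")
```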



External Threats

Sitting on top of these tools and systems are defined processes and best practice, including specific industry compliance standards such as PCI, HIPAA and FISMA, and others which define broader measures to protect data, such as ISO, SSAE 16 and ISMS. But despite developments in tools, systems and process, new threats continue to emerge, and organisations need to be on alert to stay one step ahead of those external threats. Much of the focus on the human link in the data centre security chain is on protecting networks from outsiders, but the insider threat continues to pose a significant risk. 'Rogue insiders' already have access to systems and can often avoid tripping alarms that might otherwise signal some form of attack. In a 2014 Ponemon Institute survey, 30 per cent of data breaches were related to a negligent employee or contractor, i.e. human error. Recognising the sources of these threats is one thing, but it is quite another to be able to deal with them. However, there are several practical steps data centre managers can take to enable this. Many data centre providers take advantage of the new levels of sophistication in encryption algorithms, which can provide another layer of protection should outsiders gain access to data. However, appropriate measures need to be in place to ensure that rogue insiders do not get access to encryption keys, which would invalidate even the most sophisticated encryption systems. As well as encrypting data for both storage and transmission, it is important to capture all the information about data access attempts – both legal and illegal. This allows privileged users to do their jobs in a climate of transparency, while also acting as a deterrent for unauthorised access.

Multiple Checks

Multi-factor authentication is now more prevalent, where multiple checks take place at a physical level; for example, passwords, together with fingerprint or retinal scans and

personal data, can be incorporated as an additional measure. In some instances a phone factor is used, where a message is sent to a phone to ensure that the correct individual receives the password. This can be strengthened further by authorisation based on least privilege, intrusion detection and notification, and restrictive access controls – measures that are of paramount importance when securing data. Another way in which data centres can reduce the risk of rogue insiders is to eliminate the generic visitor pass. Although this can seem a low-tech safety measure, given the research about data breaches it is key that safety measures are equally stringent at the physical level, and not ignored or viewed as less important. With a unique visitor pass, all personnel entering the data centre are uniquely identified with a photograph, which is placed on their visitor badge. This is supplemented with key information relating to the individual and their role, and the badge is also time stamped, so the visitor is unable to reuse the badge at another time, pass the badge on to someone else or stay beyond their permitted time slot.
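The time-stamp rule reduces to a simple window check. The sketch below is our own illustration, not a description of any particular access-control product:

```python
from datetime import datetime, timedelta

def badge_valid(issued_at: datetime, slot_hours: int, now: datetime) -> bool:
    """A time-stamped visitor badge is only honoured inside its issued slot."""
    return issued_at <= now <= issued_at + timedelta(hours=slot_hours)

issued = datetime(2015, 7, 1, 9, 0)  # illustrative issue time, 4-hour slot
print(badge_valid(issued, 4, datetime(2015, 7, 1, 11, 0)))  # True: within slot
print(badge_valid(issued, 4, datetime(2015, 7, 1, 15, 0)))  # False: overstayed
print(badge_valid(issued, 4, datetime(2015, 7, 2, 10, 0)))  # False: reuse next day
```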

Conclusion

Data centres must take a multi-level approach to security, considering both physical and logical measures. The aim of this approach is to meet compliance and specific legal requirements, as well as to stay one step ahead of the risk posed by rogue employees and external threats. With the right tools in place alongside a multi-level strategy, data centre employees will be able to fulfil their daily tasks, repair and protect systems and, overall, satisfy real security needs. While it is essential that technology continues to develop to protect against external threats, it is evident that internal threats constantly pose a huge risk to companies. A multi-level approach can tackle both, by creating opportunities to proactively detect, deter and overcome data breaches from internal and external sources.


Connectivity |

Network Connectivity

By Derek Watkins, Vice President of Sales EMEA & India, Opengear

Understanding the Hype. Derek Watkins takes a hard look at the benefits of this emerging approach to IT integration

Introduction

The current hype around converged infrastructure (CI) has been fuelled in part by the promise of helping organisations to reduce IT complexity. An approach that integrates compute, storage and networking resources under a single pane of glass is also a potential boon for scaling out IT to meet demand. Demand for CI has spawned a number of new entrants, such as Nutanix, SimpliVity and Scale Computing, who offer all-in-one-style appliances, but even though they have experienced strong growth, they are still small compared to the establishment. CI alliances such as VCE, formed by Cisco and EMC with investment from VMware and Intel, or its rival FlexPod, which

also includes Cisco but effectively switches out EMC for NetApp, have proven popular. There are also a number of single-vendor CI stacks, with rivals Cisco, HP, IBM and Dell starting to be joined by the likes of Huawei and Oracle. Other than the new start-ups, the vast majority of CI vendors have tended to use a software layer to glue together the different elements from existing product portfolios. Although many benefits are claimed for CI, a cynic might suggest that the trend is more beneficial for vendors keen to lock customers into a single-supplier stack.

Reliability & Resilience

CI is not all roses. Some of the solutions are difficult or even impossible to modify from the vendor's standard build. Vendors also tend to bundle their own products within the CI stack, which may include individual items that are far from 'best in class'. A CI solution is not necessarily cheaper in terms of hardware costs; savings are primarily achieved through reduced support and management OPEX. However, these savings are sometimes difficult to gauge. These negatives are real and have led to a number of organisations effectively building their own convergence strategy by combining best-of-breed components with third-party management tools and technologies like virtual SAN software. Although the number one driver for convergence is often stated as some form of performance advantage, improving management and maintenance is often a close number two. This hierarchy is borne out by research at The 451 Group, which also noted fears over reliability as a major inhibitor. Gauging whether CI is more or less reliable than current deployment

methods is very tricky. Irrespective of whether the CI is single vendor or a combination of best-of-breed solutions, CI does not instantly alleviate the inherent reliability issues.

Connectivity

The underlying base for CI deployments is network connectivity, which typically requires a router, switch and firewall. The higher-layer compute and storage management benefits are all for nought if the underlying transport layer components develop a fault, whether within a centralised data centre or when deployed to a remote location such as a branch office. Even with virtualisation and the creation of network-based fabrics, organisations moving to CI need to consider Out-Of-Band Management (OOBM) connectivity to provide direct control into the infrastructure and enable fault diagnosis and remediation. Despite the plethora of CI offerings, networking equipment still has serial or USB console interfaces, and in-band tools like Telnet live on as common methods of accessing and maintaining these devices. Servers fitted with some form of lights-out/IPMI management card provide a further OOBM path, which is equally desirable. These requirements are just as valuable in a single-vendor or hybrid CI deployment. This is a particular strength of solutions such as Opengear's, which are designed to provide OOBM while remaining vendor agnostic for deployment in either an integrated stack or hybrid configuration.

Management Tools

Most CI vendors supply a set of tools to provide management capability, but if there is a problem with the network then these access tools could well be useless. Remote access using out-of-band


management to servers, WAN equipment, networking gear and power control devices enables the IT/network manager to maintain and manage these devices. Monitoring of devices is vital to ensure that problems can be identified and fixed as quickly as possible. With remote monitoring, many problems may be identified, and resolved, before they begin to affect local traffic and the core CI platforms. With out-of-band management the IT/network manager can provide modem or cellular access if the WAN is unavailable, which can prevent a trip to the remote location and speed repair time. CI also needs to integrate with third-party tools. In a survey last year by Zenoss, although 30 per cent of organisations that had deployed CI used multiple tools supplied by the CI vendor, around a quarter still used one or more tools that they already owned. A further 19 per cent needed to buy new tools to manage the infrastructure. The myth of a single pane of glass for CI management tools is pretty much just that, especially as many CI implementations are going to be multi-vendor. This is all the more the case in larger organisations that go through phased roll-in and roll-out of

technology. By and large, the CI management tools tend to be limited to the strictly defined product sets that make up the stack. However, IT departments can't simply ignore the critical parts of the estate that don't fit with the vendor's view of CI. Devices like routers, switches and UPSs still need to maintain uptime, but all that equipment doesn't always exist in the same building, much less the same data centre. CI has proven popular in branch deployments as a method of reducing remote support burdens but, again, the CI does not extend to all of the supporting ICT and OT/facilities infrastructure.

Conclusion

Organisations should consider CI management tools that deliver a central management capability, allowing network engineers and system administrators to centrally view and manage all distributed devices through out-of-band connectivity. Feature considerations should include 'call home' and remote access capabilities that are specifically designed to enable secure access to remote locations in order to optimise network uptime and staff efficiency. Security is paramount,

and these tools should also offer a single point of authentication and access to equipment behind firewalls. Secure comms should at least include SSH, but as best practice a full VPN stack is recommended, along with monitoring, logging and event notification, not just to expedite troubleshooting but also to facilitate auditing and compliance. In addition, support for enterprise authentication services such as Active Directory and TACACS+, even during an outage scenario, rather than relying on master emergency access passwords shared amongst the entire operations staff, should be considered for most environments. Advanced remote infrastructure management tools such as Opengear Lighthouse are able to manage thousands of individual OOBM appliances and finally live up to the single-pane-of-glass promise, unlike the plethora of CI tools. Irrespective of whether an organisation selects an innovative start-up, a big-name vendor or a hybrid CI solution, IT managers must not lose sight of the fact that a large part of the estate is still outside the convergence framework. Without a viable out-of-band strategy, CI is not enough to instantly solve the challenges of reliability and resilience.
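In practice the principle is 'monitor the primary path, and keep an independent secondary'. As a minimal sketch, with entirely made-up addresses, a monitoring script might probe both paths like this:

```python
import socket

# Illustrative addresses: a device's in-band management interface and the
# out-of-band console server (e.g. over a cellular path) that fronts it.
IN_BAND = ("10.0.1.10", 22)
OUT_OF_BAND = ("oob-gateway.example.net", 22)

def reachable(addr, timeout=3.0):
    """True if a TCP connection to (host, port) succeeds within the timeout."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

# Prefer the in-band path; fall back to the out-of-band path when it fails.
if reachable(IN_BAND):
    print("manage via in-band network")
elif reachable(OUT_OF_BAND):
    print("in-band down: manage via out-of-band console path")
else:
    print("device unreachable on both paths: dispatch an engineer")
```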



Case | Study

The Excel Solution

Structured Cabling. Excel partners with LMG to bring News UK under one roof.

News UK needed a structured cabling system that could support the technical requirements of 4,500 staff




About News UK

News UK is part of News Corp – a global media business focused on creating and distributing content that educates, entertains, informs and inspires its customers. News UK includes such prestigious media brands as The Times, The Sunday Times, The Sun and TLS; News Corp also includes HarperCollins and Dow Jones. The titles were previously located in various offices in and around London, and the new News UK building allowed all of them to be brought together under one roof. This prestigious building, sometimes referred to as the 'Baby Shard' due to its exterior glass construction and close proximity to The Shard, is located at London Bridge, providing a perfect central location.

The Requirement

News UK needed a structured cabling system that could support the technical requirements of the 4,500 staff that would be based in the office, including journalists, photographers, web developers and TV studios. A Category 6A solution with a fibre backbone to support 10 Gigabit Ethernet was required, with proven standards compliance, strong UK support and a robust warranty programme. An experienced project team, comprising in-house and external advisors, evaluated a number of systems on the market. As a result of this assessment process, Excel was chosen for a number of reasons, including the advantages of the screened system design; the breadth of the product range, which also carries third-party independent verification; and the fact that it was backed by strong support services and could meet the fast-track installation programme. Once the News UK team had finalised their decision to go with Excel, goods began to ship within a month.

The Integrator

LMG was chosen by the main contractor and News UK as the preferred integration company. This was due to their experience and proven success in delivering fast-track, large-scale, prestigious projects. LMG has worked with Excel for a number of years and is one

of a few companies who carry Excel Solutions Partner status, due to their commitment to Excel and the fact that they offer a total integrated IP solution. At the height of the project, LMG had approximately 80 engineers working day and night shifts in order to meet the tight deadlines.

Design and Installation

John Hunt of the News UK IT team led the infrastructure design, with an SER room located on each floor and larger CER rooms on two of the lower floors. A mix of open frames and bespoke racks was used to house the equipment in each room. Where power was required in the cabinets, bespoke Excel power distribution units were installed. To meet current and future performance requirements, an Excel Category 6A U/FTP screened solution was chosen and over 1.25 million metres of cable installed. Excel provides a wide choice of frames and compatible keystone jacks. After a thorough evaluation process, including Excel support with demonstrations and proof-of-concept sample provision, News UK opted for the unloaded Excel Keystone Jack Patch Panel Frame populated with angled Keystone 6A F/FTP jacks. The Excel angled jack design directs patching naturally to either side of the frame, reducing stress and bend radius on the patch cable, and allows the patch leads to flow neatly. At the rear, cable entry remains perpendicular to the panel via the integrated cable management, allowing for an extremely neat and tidy finish. In total, 35,000 points were installed, including the panel-to-panel links, together with over 5,500 20-metre U/FTP harness leads. In the work areas, standard Excel screened tool-free keystone jacks were installed to floor, ceiling and desk positions; where appropriate, Excel GOP (Grid Outlet Position) boxes and Copex-style assemblies were used. Both the Excel Category 6A U/FTP cable and screened jacks are independently verified by leading test house Delta, which gave News UK further confidence that the Excel products were the right choice for the installation. The verification is applicable at both individual component and channel level, a value not available from the competition considered for

this project. The backbone of the installation was based on Excel OM4 24-core LSOH tight-buffered fibre cable and Excel 50-pair Category 3 LSOH cable. To reduce installation time, much of the system was pre-terminated off site and then brought in and installed overnight. On average, two floors were completed by LMG every month, which allowed staff to be moved across in a phased manner. In total, 11 floors of the 17-floor building were cabled, together with the ground and basement floors. As an Excel Accredited Partner, LMG was able to provide News UK with the Excel 25 Year Warranty covering the copper, fibre and voice elements of the installation. Because of the size of the project, the warranty was applied for floor by floor as each was completed; once the overall project was finished, a final warranty application was made to cover the whole site.

The Result

The first cable was laid in January 2014 and the last in January 2015, making this one of the fastest turnarounds seen on a project of this size in London. Every programme and move date was achieved and all of the technology handovers were met. No work could stop for any of the media titles moving across; around 150 people were moved each weekend until around 4,500 members of staff had moved into the new building. Paul Ovall was the Programme Manager for the project. "Although the project wasn't without its many challenges, everything ran very smoothly. We were extremely pleased with the LMG team that worked on the installation; much of the time they had teams working both day and night and at the weekends. We've been delighted with the Excel solution and impressed by the support provided by the entire Excel team, and its partnership with the LMG delivery team. From pre-tender, throughout the selection process, to on-site support and the warranty programme, we have been very pleased with the knowledge, professionalism and enthusiasm from all involved."


DATA CENTRE SUMMIT 2015 NORTH

30th of September 2015

Manchester's Old Trafford Conference Centre
www.datacentreworld.com
Registration is now open

Data Centre Summit North is the first in a series of new one-day, conference-focused events, with the first set to take place at Manchester's Old Trafford Conference Centre on the 30th of September 2015. DCS will bring the industry's thought leaders together in one place with the industry's leading vendors and end users. The focus of the event will be on education, networking and debate, and it will provide an open forum for delegates to learn from the best in the business. The event will also feature an exhibit hall, where the industry's leading companies will show their newest products and services, together with a networking lounge so that you can make connections with like-minded business professionals.

Platinum Headline Sponsor

Event Sponsor

TO REGISTER CLICK HERE

DATA CENTRE SUMMIT 2015
DATA CENTRE SUMMIT 2016: 10th February 2016
DATA CENTRE SUMMIT 2016: 22nd June 2016
