Volume V, Issue 2 2015
£35/€50
www.netcommseurope.com
The Cloud Effect: Cutting the Cost of High-End Hosting
Connectivity: The Data Centre Decision
Physical Security: Understanding Physical Attacks
The Internet of Things: Choosing the Right UPS Topology
Struggling with managing your IT infrastructure? Automate data centres and IT labs with CloudShell. CloudShell's self-service platform, automated provisioning and reservation/scheduling system empowers cloud-like access to any combination of infrastructure, including legacy systems, physical networking, virtualisation, SDN/NFV and public cloud resources, so you can enable: automated provisioning, faster test cycles, higher utilisation, continuous integration and DevOps.
Learn more info.qualisystems.com/netcomms
CloudShell
CONTENTS

BUSINESS CONTINUITY
34 The Cloud Effect
Is there a silver lining for data centre real estate investment?

CASE STUDY
28 Core Transmission Networks
Meeting next-generation demands
32 Creative Problem Solving
The Phybridge solution

COMMENT/OPINION
24 The Data Centre Decision
Carrier Neutral Data Centres

COPPER CABLING SYSTEMS
8 The End of Copper?
Can copper cabling support coming generations?

DATA SECURITY
20 An Evolution in Infrastructure
A multi-step approach to security
30 Securing the Cloud in 2015
A new era of automated remediation

DCIM
38 Optimising IT
Maintaining accurate data with DCIM

ENCLOSURES AND RACKS
26 Air Management Solutions
Reducing the cost of cooling

INFRASTRUCTURE AS A SERVICE
14 Building an IaaS Cloud
Could a lab be your starter IaaS cloud?

PHYSICAL SECURITY
12 Physical Security
Understanding physical attacks

REGULARS
3 Foreword
4 Industry News

STRUCTURED CABLING
6 Unshielded Copper Cabling
The benefits of new STP systems

UPS SYSTEMS
18 Modular UPS Systems
The Next 'Hot Topic' in UPS
22 Choosing the Right UPS Topology
Meeting the challenge of the 'Internet of Things'

Published under licence by: LGN Media, a subsidiary of The Lead Generation Network Ltd
Publisher & Managing Director: Ian Titchener
Creative Director: Andrew Beavis
Editor: Nick Wells
Production Manager: Rachel Titchener
Advertising Sales: Ian Titchener
Financial Controller: Samantha White
Price: €50 | £35
Subscription rate: €200 | £140

E Space Business Centre, 181 Wisbech Road, Littleport, Cambridge, CB6 1RA
Tel: 01353 865403 | info@netcommseurope.com | www.netcommseurope.com
Printed by MCR Print, 11 English Business Park, English Close, Hove, East Sussex BN3 7ET

Netcomms stories, news, know-how? Please submit to nick@lgnmedia.co.uk, including high-resolution (300dpi+ CMYK) images.

The views expressed in the articles and technical papers are those of the authors and are not endorsed by the publishers. The authors and publisher, and their officers and employees, do not accept any liability for any errors that may have occurred, or for any reliance on their contents. All trademarks and brand names are respected within our publication; however, the publishers accept no responsibility for any inadvertent misuse that may occur. ISSN 2045-0583. This publication is protected by copyright © 2015 and accordingly must not be reproduced in any medium. All rights reserved.

NETCOMMS europe Volume V Issue 2 2015
DATA CENTRE SUMMIT 2015 NORTH
Data Centre Summit North is the first in a series of new one-day, conference-focused events, set to take place at Manchester's Old Trafford Conference Centre on 30th September 2015. DCS will bring the industry's thought leaders together in one place, alongside its leading vendors and end users. The focus of the event will be on education, networking and debate, providing an open forum for delegates to learn from the best in the business. The event will also feature an exhibit hall, where the industry's leading companies will show their newest products and services, together with a networking lounge where you can make connections with like-minded business professionals.
To enquire about exhibiting call Peter Herbert on 07899 981123 or Ian Titchener on 01353 865403
FOREWORD
Always Think Before You Click!
Snooping on a person or a company is nothing new; it's just that the Internet age has brought with it an added dimension: the cyber thief. Ten years ago, few people could have predicted the explosion of data generated by the likes of social media and the Internet of Things, which has completely changed the world of electronic security.

While no business knowingly puts its key assets at risk, digital criminals continue to advance their agility and inventiveness to get into IT networks. Reports estimate that almost 80 per cent of successful data breaches are the result of weak authentication systems. So, is the password the weakest link?

What's clear is that cyber security has shifted from an IT problem to a business problem, as Marcus Edwards reminds us on page 12 of this month's issue. What's more, in today's data centres physical security translates to electronic security, which is where many companies are getting caught out.

The reality is that as we enter the era of the hybrid cloud environment, there's just no way to ensure that you won't be hacked, and let's not forget that internal threats also pose a significant risk. As we're actively encouraged to share intelligence and prevention techniques to help stop attacks, detecting the breach is becoming just as important as preventing it. I recently overheard someone say that "bytes are replacing bullets". To this end, investment in technologies that can reduce the lag time between when a breach occurs and when it is discovered is paramount.

Enjoy the issue.

Nick Wells
Editor, Netcomms Europe
Flexible Network Test to 100G All-in-one transport testing to 100G Today’s core and metro communications networks are implementing 100 GigE and OTN technologies rapidly to provide sufficient bandwidth supporting the explosive increase in mobile communications data. These high-bit-rate networks demand very high reliability due to the large data volumes and variety of client signals in use. With four 100 Gbps ports, MT1100A supports R&D of the latest OTN 400 Gbps technologies using client signals, including Ethernet, SDH/SONET, PDH/DSn, and Fibre Channel, now in development.
Europe: +44 (0) 1582 433433 | www.anritsu.com | © 2014 Anritsu Company
INDUSTRY NEWS
...NEWSWIRE...NEWSWIRE...NEWSWIRE.
Schneider Electric has unveiled its Galaxy VM three-phase uninterruptible power supply, which employs the company's patented ECOnversion technology. "New Galaxy VM represents a major new breakthrough in UPS technology," said Gael Souchet, Global Product Manager, Galaxy VM UPS. "ECOnversion combines the advantages of double conversion on-line topology and advanced ECO mode technologies, providing our customers with an innovative new UPS solution that helps to maintain the highest levels of energy efficiency."

The Galaxy VM UPS can achieve 99 per cent efficiency using ECOnversion mode, which allows the UPS to operate at the greatest efficiency without putting data centre loads at risk. The electrical load is never exposed to unconditioned mains power, and as such the Galaxy VM is compliant with IEC 62040-3 Class 1, a unique achievement in its class.

Additionally, the Galaxy VM offers flexible energy storage with swappable modular battery modules and fans that can be replaced without the need to go to maintenance bypass, increasing availability and load protection. It also offers an unrivalled operating range, able to support loads from 160 to 800 kVA. Designed to integrate into existing electrical, physical and monitoring environments, Galaxy VM works with Schneider Electric's StruxureWare data centre infrastructure management (DCIM) software applications, building management systems (BMS) and the Modbus protocol. It is also easily configured with Schneider Electric's ISX Designer configuration tool.
www.schneider-electric.com

CityFibre, a leading operator of fibre optic infrastructure in the UK, has signed an agreement with leading Scottish network service provider Commsworld for the first phase of deployment of pure fibre network infrastructure in Edinburgh. The initial 50km of an anticipated 150km
build is backed by a contractual commitment from Commsworld to migrate a significant proportion of its large existing base of business customers onto CityFibre's new fit-for-purpose infrastructure. As in all its previously announced projects, the Total Contract Value to capex coverage ratio during the build is anticipated to be well within the company's previously stated range. The initial new network deployment will bring gigabit connectivity within reach of an estimated 7,000 businesses, and establishes Edinburgh as CityFibre's next Gigabit City project. Construction will commence this summer, and initial planning is already underway. As with CityFibre's other Gigabit City projects in Coventry, Peterborough, York and Aberdeen, the network will be deployed in line with the company's 'Well Planned City' design philosophy. This highly sophisticated planning approach optimises network design and deployment to accommodate current and future capacity requirements from all aspects of a city's community under a shared infrastructure model.
www.cityfibre.com
Excel Networking Solutions, the copper, optical cabling and rack solutions provider, has launched a new online configurator to make it easy for customers to choose the exact pre-terminated fibre cables they need. The Excelerator Configurator takes the user through a number of simple steps, including whether they require an assembly or MTP trunk, the construction (Distribution, Breakout or Mini-Breakout), the category (OM3, OM4 or OS2), the number of cores, the connector types, and lengths and quantities. Once the details have been entered, the configurator produces a diagram of the cable and provides a cable specification that can be emailed to the team at Excel to receive a quotation.
Tracey Calcutt, Marketing Manager, commented: "With so much choice and so many options it was imperative that we introduced a tool such as the configurator to assist our customers. Speed and accuracy are essential at each stage and this starts with the customer choosing the right product, and being able to see a diagram of the cable specification really assists them and us to understand exactly what they need. We've had a great response to the range so far."
www.excel-networking.com
Italtel, a leading telecommunications company in next-generation networks and services, has announced it is fully supporting Vodafone's NFV and SDN strategy, called Telco over Cloud, and its implementation in Vodafone Germany, via a partnership with worldwide IT leader Cisco, in a large and ambitious network evolution programme. The project will see the migration of the German operator's fixed legacy networks and the deployment of innovative services onto a common virtualised infrastructure empowered by VCE technology.

"Vodafone is a firm believer and supporter of Telco over Cloud and SDN as two key technology transformational thrusts that will deeply affect the way the telecom industry will consume technology," said Roberto Vercelli, Head of All IP One Core - NFV at Vodafone Core & Transport Center of Excellence - TSO. "Italtel's rapid adoption of this strategy has enabled us to move from theory into real implementation to serve real customers."

Rui Frazao, Director Network Engineering at Vodafone Germany, added: "We were impressed by how quickly Italtel adapted to our Telco over Cloud strategy to offer the right technology services. Italtel has proven to be a reliable partner."
www.italtel.com
The First of its Kind
NetXpert 1400

To claim your free NetXpert baseball cap, go to emea.psiberdata.com/netxpertbaseball
LAN Qualifier for Ethernet Speed Certification

Connection Tests
• Qualification of passive cabling up to 1 Gbit/s

Comprehensive Ethernet Troubleshooting
• Network tests and diagnosis

Ethernet Speed Certification
• Passive link: signal-to-noise ratio, bit-error-rate test, delay skew
Psiber Data Limited Unit 14 Newhouse Business Centre Old Crawley Road Faygate, West Sussex, RH12 4RU Tel.: +44 (0) 1293 852306 info@psiber-data.co.uk
emea.psiberdata.com
STRUCTURED CABLING
Is Unshielded Copper Cabling Routed in the Past?
Unshielded Copper Cabling By Keith Sullivan, Marketing Director EMEA at Corning Optical Communications
Introduction
Keith Sullivan discusses the benefits of new STP systems
Most things are taken pretty seriously in Germany, and none more so than the build of a German car. Can a product ever have too much engineering integrity? High-end German saloon carmakers like to put the engine in the front, and the gearbox and drivetrain at the rear. Sure, it's more expensive to build, but it delivers the best driving experience. Likewise, German enterprise network designers like their copper pairs to be wrapped in a metallic shield, twice. But just because shielded copper (STP) is de rigueur in Düsseldorf, that doesn't make it trendy in the Docklands. Indeed, many UK cabling professionals are as vociferously in favour of UTP (unshielded copper) as the Germans are of STP, and when people are vociferously in favour of anything it can be difficult to change their minds. No one sits on the fence in this debate.
The Argument for Change
The tide is turning, however, and I know of network managers working for the biggest financial services companies in the City of London who had been dyed-in-the-wool UTP supporters since the year dot, only to switch their allegiance having put their prejudices aside and fully examined the respective cases for and against. One of these guys, let's call him Gary, had a major building installation in a very difficult environment: tight spaces, shallow floors, lots of water and power going between levels. Immediately they had to forget about dedicated routes for any cabling; it was never going to happen. Whatever copper they used was going to be susceptible, but choosing shielded cabling would take away any risk of crosstalk and inject extra survivability into the network. Naturally, management asked questions. After all, everything had worked well with unshielded for fifteen years, so what was the argument for change? It was the business case that convinced them in the end. STP and UTP used to be miles apart in terms of cost, but today they are largely the same. Quick and easy termination makes installation just as easy for STP as it has always been for UTP, so no added expense there. Plus,
they knew it worked better over longer distances and could deliver 10Gbps, because they had deployed STP in their data centres. However, it was the immunity to crosstalk and the overall robustness that sealed it. According to Gary, even if it hadn't been such a challenging building environment, he thinks he would still have pushed to go all-STP for the cabling infrastructure.
The Choice for STP
Having been 'a UTP house' for so long, like-minded peers from rival businesses were sitting up and taking notice of this institution's first foray into STP, anxious to see how it would turn out. The organisation's choice of STP was made on the basis of robustness, but what they weren't expecting were the many other advantages. For example, it takes a lot to fit out a very large building, and what you can't have are any delays. Mid-project, with the cabling going into the floors, the place looked like a building site, with fifty or sixty installers and tradesmen walking around and the floor covers all up. Then the inevitable happens and someone's big heavy boot comes down on those cables. With UTP, they'd all have been crushed and in need of replacement, with all the associated time and money written off. Because it was STP, they simply tested it thoroughly, found no damage, and the project carried on without delay.
Reliability
Among the UTP naysayers, a number of whom were invited in to see the new solution in action, some were still unconvinced about reliability, specifically crosstalk interference. When using UTP, you always apply a careful 'never the twain shall meet' planning philosophy (data cables no closer than 300mm to power, no parallel runs over a metre long, and so on) that mitigates the risk factors of interference, meaning you avoid the problem in the first place. STP planners, by contrast, think they can afford to be complacent. Gary put paid to those concerns by pointing at the 1,300-user trading floors supported by the new STP cabling. Each user has a screen and a phone line, and many have hoot-and-holler
lines (a squawk-box open-circuit comms link popular in stock brokerages and the like). If there were any noise or interference on the hoot-and-holler, it would be deemed unusable. No trader would have stood for any interference problems or any degradation of service of any kind. In deciding to go STP, the killer question was: "Can we guarantee it would be OK with UTP?" The answer was a resounding no.

Those who champion UTP over and above STP often have an out-of-date impression of STP's capabilities. They invariably accept that it is a robust solution, but with the Achilles' heel of electrical earthing. STP has always needed earthing, and in most organisations the electrical supply comes under the remit of the building facilities team rather than IT. UTP champions will tell you that this separation of duties between facilities and IT led to problems: firstly because facilities guys aren't cabling experts (maintenance or changes to the electrical systems rarely take into account the health of the data cabling), and secondly because putting the data cables in proximity to other services exposes them to other physical risks. To put it another way, those facilities guys aren't gentle. They like to use hammers and blowtorches a lot.
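The 'never the twain shall meet' rules of thumb quoted earlier in this section (no data cable closer than 300mm to power, no parallel run over a metre) are mechanical enough to check automatically. Below is a hypothetical sketch of such a check: the function name and route data are invented for illustration, and real installations should follow the separation tables in cabling installation standards rather than these two rules of thumb.

```python
# Hypothetical check of the UTP planning rules of thumb quoted in the
# article: data no closer than 300 mm to power, and no parallel run
# alongside power longer than 1 m. Route data here is invented.

def violates_separation(parallel_runs):
    """parallel_runs: list of (separation_mm, parallel_length_m) tuples,
    one per stretch where a data route runs alongside a power route.
    Returns a list of (stretch_index, reason) problems."""
    problems = []
    for i, (sep_mm, length_m) in enumerate(parallel_runs):
        if sep_mm < 300:
            problems.append((i, "separation below 300 mm"))
        if length_m > 1.0:
            problems.append((i, "parallel run over 1 m"))
    return problems

# Three illustrative stretches: the second is too close to power,
# the third runs parallel for too long.
route = [(450, 0.5), (120, 0.8), (350, 2.5)]
for idx, why in violates_separation(route):
    print(f"stretch {idx}: {why}")
```

A plan that returns an empty list passes both rules of thumb; anything else needs rerouting before installation.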
Conclusion
The truth is that you don't come across earthing issues with new STP systems. As you plug system elements together and add cables, you're automatically extending the metallic path for earthing along with the data path. Plus, the earthing requirements on modern electrical systems are so rigorous that you'll find reliable earth points just about anywhere. Just one patch cable connected to a powered component like a switch or router will establish an earth for you. From the moment they were first broadcast in the 1980s, Audi's 'Vorsprung durch Technik' TV adverts struck a chord with the grudging acceptance all British people seem to have of the quality of German engineering. The Germans could still learn a thing or two about fashion from us, but maybe we should learn more about cabling from them.
THREE PHASE POWER
Designed to bring maximum power to your servers, the G4 three-phase range is built to exacting standards to ensure maximum safety for your facility.
Available with: • C13 C19 Locking outlets • C13 C19 Fused outlets • BS1363 UK outlets • Continental outlets • Individual circuit protection per outlet • Overall metering of V, A, kWh, Harmonics, PF.
G4 MPS Limited Unit 15 & 16 Orchard Farm Business Park, Barcham Road, Soham, Cambs. CB7 5TU T. +44 (0)1353 723248 F. +44 (0)1353 723941 E. sales@g4mps.co.uk
Vertical Rack Mount

Maximise your rack space: specify mixed-connector PDUs built to your exact requirements to give you just the solution you are looking for.

Horizontal Rack Mount

Thermal overload protection or fused outlets mean that you only lose a single socket in the event of a fault, not the whole PDU, thereby removing the risk of a total rack failure.
COPPER CABLING SYSTEMS
Data Centre Performance
The End of Copper? By Reinhard Burkert, Product Manager, R&M
Has copper reached the end of its useful application in data centres?
Introduction
Data centres need higher speeds and cost-effective migration paths to accommodate, for example, 10GBase-T and 40G, virtualisation, mobile apps, ‘big data’, network and service convergence and streaming. Cisco’s Visual Networking Index predicts global IP traffic will soon reach 1.6 Zettabytes per year. The IDC Digital Universe Study claims the decreasing cost of managing information will be an incentive to create more data. Data centre performance, however, is increasing at a slower rate than the number of users and the amount of data. So, what steps must be taken to satisfy the fast-growing demand? Can copper cabling handle the bandwidth requirements of today’s active data centre equipment and support coming generations?
New Standards
Many claim that only optical fibre will work for higher data rates, and
twisted-pair copper cabling is limited to 10 Gb/s. Furthermore, the inherent distance limitations of copper limit its active radius. Unlike copper cabling for earlier 1G and 10G technologies, the Cat. 8 classification for next-generation twisted-pair cabling won't have a 100 metre range. So, has copper reached the end of its useful application in data centres? Not quite. The 'limited distance' argument isn't that relevant, and new solutions can help meet higher bandwidth requirements. For most data centre purposes, the fact that Cat. 8 doesn't have a 100 metre range is not a problem: IEEE's cabling surveys show most data centre configurations can be serviced with a 30-metre overall reach. Copper is absolutely fine for short distances and bandwidth up to 40 Gb/s, providing a low-cost, user-friendly solution within or between cabinets and racks. Cat. 8 is specified up to 2 GHz, four times today's 500 MHz bandwidth. That makes copper capable of accommodating 40GBase-T with a potentially lower cost-per-port than
FO connections. Cat. 8 may become the mainstream technology for rack-level interconnects.
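The reach and rate figures above suggest a simple selection rule for choosing a medium per link. The sketch below uses only the figures quoted in this article (10 Gb/s over 100 metres for earlier twisted-pair generations, 40 Gb/s over a 30-metre reach for Cat. 8); the helper itself is purely illustrative and is no substitute for proper link design.

```python
# Illustrative media-selection sketch using the reach/rate figures
# quoted in this article. Not a formal cabling design tool.

COPPER_OPTIONS = [
    # (name, max_reach_m, max_rate_gbps) -- cheapest option first
    ("Cat. 6A", 100, 10),
    ("Cat. 8", 30, 40),
]

def pick_medium(link_length_m: float, rate_gbps: float) -> str:
    """Return the first copper option that satisfies both reach and
    rate, falling back to optical fibre when copper cannot serve."""
    for name, reach, rate in COPPER_OPTIONS:
        if link_length_m <= reach and rate_gbps <= rate:
            return name
    return "optical fibre"

# Most in-row, rack-to-rack links fall within Cat. 8's 30-metre reach:
print(pick_medium(25, 40))   # Cat. 8
print(pick_medium(90, 10))   # Cat. 6A
print(pick_medium(150, 40))  # optical fibre
```

The point the article makes falls out of the rule: because most data centre links are short, the 30-metre limit rarely forces a switch to fibre.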
To Shield or Not to Shield?
This question comes up whenever structured cabling investment decisions are made. With fibre DC cabling, shielding is not an issue. Copper, however, has a number of specific shielding requirements. Is a total transition from unshielded copper to shielded cabling inevitable, then? No. If installed correctly, both shielded and unshielded technologies offer sufficient reserves for applications up to and including 10G Ethernet. While shielded cabling is an excellent choice for supporting 10GBase-T, you can reach your goals with unshielded cabling as well - providing you make sure the conditions are right and you take care to select the right quality. With traditional unshielded Cat.6A cabling, distances between individual components need to be increased as
Proper connectivity management is essential to accommodating growth and offering flexibility.
Efficiency and technology in perfect harmony.
Today, choosing a new UPS system is about more than simply protecting your critical load, you must also consider the best way to minimise your CapEx and OpEx and ensure you are future-proofing your power protection investment against any eventuality.
Just like our elite sportsmen and women, our UPS solutions utilise the very latest technological developments to deliver class-leading efficiency levels of up to 98%*, whilst still offering you the flexibility and control you need to achieve your financial and business objectives, both now and in the future.
To find out more call or email us today: 01256 386700, sales@upspower.co.uk, www.upspower.co.uk
COPPER CABLING SYSTEMS
much as possible. However, the latest generation of unshielded cables can, in fact, offer ANEXT reserves that were previously only possible with shielding. Choosing the best possible quality of unshielded cable is vital. After all, your investment should keep operating for fifteen years and longer without error. Once a network has been installed or upgraded it should outlive several consecutive generations of active equipment. Shielded Cat. 8 cabling can provide a truly future-proof solution for current and new active equipment. Research has shown that in virtually all data centre applications the inherent distance limitations are not an issue. Current specifications adequately deal with the transmission-related parameters. Furthermore, manufacturers and standards bodies have smart solutions for any potential connection issues. With Cat. 8 cabling, data centres can make a cost-effective choice and lower Capex without increasing Opex.

However...

Far more important than discussions about copper versus fibre is the mapping and monitoring of that copper, fibre and other network hardware. Manually managed infrastructure data typically has a 10 per cent error rate*, and 20-40 per cent of ports in a network are forgotten over time**. Mapping and management also take up a great deal of staff time.

An Automated Infrastructure Management solution offers functions for mapping, managing, analysing and planning cabling and network cabinets. An automated solution can continuously monitor each connection in one or more data centres or local networks while a (remote) central server records the cabling status. It can store detailed information about the physical equipment as well as operational and workflow data and, in many cases, track changes, such as deployment and movement of physical assets. Linking change management to asset management systems means information is always up to date. Updates are automatically generated when new devices are integrated or changes are made. Unused patch panels and ports in active equipment are immediately detected. Connectivity can be traced in real time with a PC or smartphone, locating faulty connections within seconds. This kind of solution can halve network monitoring and management costs and is essential to generating performance data that enables improvements and enhancements and can be fed into a DC asset management tool. Any disturbance or inefficiency can be found and remedied instantly. Audits and inventory maintenance become significantly faster and easier.

Connectivity Management

Integrating cable management into a Data Centre Infrastructure Management solution improves uptime and enables fast, efficient reaction. It allows 'drilling down' to individual links between specific racks and other equipment, including switches, routers, firewalls and network appliances. Proper connectivity management is essential to accommodating growth and offering flexibility. It allows a more proactive approach to infrastructure management, showing up potential growth areas and difficulties. Problems and improvement areas can be examined and solutions proposed before they are physically carried out on site. You can visualise patch panel cable management from remote locations, for example.
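The monitoring behaviour described above (recording each connection's state, flagging changes, and detecting forgotten ports) can be sketched in miniature. Everything below, from the class name to the port identifiers, is hypothetical; commercial Automated Infrastructure Management products expose this functionality through their own interfaces.

```python
# Minimal sketch of an AIM-style port monitor: it records each port's
# state, reports changes, and flags ports that have sat unused.
# All names and states here are illustrative.

from dataclasses import dataclass, field

@dataclass
class PortMonitor:
    state: dict = field(default_factory=dict)  # port id -> "up"/"down"
    idle: dict = field(default_factory=dict)   # port id -> polls spent down

    def poll(self, readings: dict) -> list:
        """Take one snapshot {port: state}; return change events."""
        events = []
        for port, new in readings.items():
            old = self.state.get(port)
            if old is not None and old != new:
                events.append(f"{port}: {old} -> {new}")
            self.state[port] = new
            self.idle[port] = 0 if new == "up" else self.idle.get(port, 0) + 1
        return events

    def forgotten_ports(self, threshold: int = 3) -> list:
        """Ports down for at least `threshold` consecutive polls."""
        return [p for p, n in self.idle.items() if n >= threshold]

mon = PortMonitor()
mon.poll({"rack1/p1": "up", "rack1/p2": "down"})
mon.poll({"rack1/p1": "down", "rack1/p2": "down"})  # -> ["rack1/p1: up -> down"]
mon.poll({"rack1/p1": "down", "rack1/p2": "down"})
print(mon.forgotten_ports())  # -> ['rack1/p2']
```

Real AIM systems do this at the physical layer with sensed patch cords rather than polled state, but the bookkeeping, detecting change and accumulating idle time per port, is the same idea.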
Conclusion
An integrated infrastructure monitoring solution incorporating connectivity offers automated, end-to-end, real-time, on-demand physical asset inventory and management for your data centres at the touch of a button. This can support risk mitigation, compliance with regulations and defined processes, capacity planning and security needs, while drastically reducing time-consuming manual work. This way of operating eliminates error-prone manual checking, processing and data entry. A data centre can easily contain 10,000 individual assets, and a maintenance tech might easily spend a day locating a faulty asset or link. Capacity is freed up to spend on core tasks, which contributes to the bottom line. Audits that once required folders full of spreadsheets and took weeks can now be performed immediately. Management reporting improves significantly, as does the documentation on which strategic choices and hardware purchases are based.

* Source: Watson & Fulton
** Source: Frost & Sullivan
www.netcommseurope.com
PHYSICAL SECURITY
Understanding Physical Attacks
Physical Security By Marcus Edwards, Owner of Server Fortress Limited
Introduction
Data centre security is about minimising risk and maximising operational uptime. Of the two types of security, cyber and physical, the emphasis is usually put on cyber security, which is the obvious risk: high-profile events like North Korea's attack on Sony underline the threat of such attacks, which are happening all the time, and the main focus should be on providing security against them. Marcus Edwards explains why physical protection should not be neglected either.
Physical Attacks
Typically, physical attacks on data centres are lower-profile events and much less frequent, but they do still happen and they can be catastrophic. When we imagine this type of physical attack we normally think of thieves stealing physical equipment for resale. When this occurs, the resultant breakdown in service or loss of key data can be an embarrassing and costly by-product, as Vodafone found to their cost when their service was disrupted in 2011 after network equipment was stolen from their Basingstoke data centre. Another attack occurred in 2007, when five thieves disguised as police stole up to £1 million worth of computer equipment from a 'state-of-the-art' data centre in the Kings Cross area of London. The vast majority of UK data centres have very good security measures in place to guard against this
type of theft. Security fencing supported by CCTV and lighting, plus controlled vehicle and pedestrian access, makes theft by the casual opportunist nearly impossible. Clearly these measures are necessary and should not be overlooked. Once they are in place, the data centre is secure against all but professional planned attacks, and these are unlikely to be carried out by your usual home-grown criminal gang, as the financial rewards of obtaining IT hardware do not justify the risks involved.

So, does this mean all is right in the world of physical security for data centres? Unfortunately, gangs of professional thieves turning up to steal lorry loads of servers are not the major threat. The main threat in terms of physical security comes from within, as most large thefts of data are the result of inside jobs or negligence. For example, Edward Snowden leaked thousands of classified documents, much to the embarrassment of the US and UK governments. The disgruntled or criminally minded employee is probably the biggest physical security threat faced by small businesses. Stealing information in order to help another employer or to set up their own business is a crime that also appears to be growing rapidly, judging by conviction rates. High security fences and access control into the building will not protect against the authorised employee. Access to data and to the physical storage devices needs to be controlled and recorded per individual. Assets and data also need to be ring-fenced and segregated to minimise any potential loss.
Defence in Depth
Remote cyber-attacks are the biggest threat to all data, but physical protection should not be neglected.
This is all fairly straightforward and in line with The HMG Security Policy Framework, Version 11.0 – October 2013 issued by The Cabinet Office, which states the following; “The ‘defence in depth’ or ‘layered’ approach to security starts with the protection of the asset itself (e.g. creation, access and storage), then proceeds progressively outwards to include the building, estate and perimeter of the establishment.” The significant point is that security should
start as close to the asset as possible. This limits any potential loss, even from malicious individuals within the organisation. The framework covers the normal commercial risks; however, there is a lot of sensitive data that could be subjected to another type of professional attack. Government-backed cyber-attacks are not a thing of fantasy: governments have teams looking at this in terms of both defence and offence. Thanks to Edward Snowden, we know the UK's GCHQ has gained access to the network of cables that carry the world's phone calls and Internet traffic, and has started to process vast streams of information that it shares with its American partner, the National Security Agency (NSA). This is what we know, so far, about our own 'friendly' security organisations. It would be very naïve to assume other governments are not doing the same with commercial objectives.

I'm not suggesting that data centres are likely to be attacked by foreign-backed intruders with guns and ski masks; physical attack is normally much more subtle. The next question is who could be targeted? Government institutions, including the police and military, are obvious targets, but banks, financial services, technology and research institutions are also potential targets. Once you widen the catchment to cover these areas, nearly all multinational companies, and even universities, become potential targets.
Caging
What types of subtle physical attack are we talking about? Network eavesdropping is the main threat, and it gets easier once you have access to the building where the network is situated. In private office buildings you need to keep network points in meeting rooms away from visitors. Data hosting centres create another issue. As mentioned earlier, most data centres have very good perimeter security and record whoever enters the actual data centre. Is this good enough against truly professional eavesdropping? What other companies are based in the hosting centres? Could any of those companies, or their employees, have links to overseas governments? Once an individual has open access into the data centre, all sorts of illicit opportunities become available. The standard way to offer some protection against this is segregating different companies’ server racks by caging. Caging is available in a wide range of costs and qualities. At the lowest end it is little more than cosmetic, allowing both the hosting centre and the end client to tick a box in the contract. High-quality caging with audit-tracking locking systems can look very similar, so care should be taken before signing off on the cheapest solution. Caging also has a few practical problems. It takes up floor space, which may not be an issue if a lot of cabinets are contained by one cage, but for a small number of cabinets that space costs money. Caging can also disrupt airflows within the data centre, causing hot spots and dead zones. It also doesn’t normally lend itself to either hot or cold aisle containment, limiting the thermal and efficiency advantages of these systems. From the ‘layered’ approach to security, it fails to bring the protection as close to the IT assets and their cabling as it should be. Once a cage has been breached, all the assets within that cage will be compromised. A possible alternative to security caging within a hosting centre is very secure server cabinets, which have the advantage of securing IT equipment down to the cabinet level. One potential weakness is that the data cables entering the cabinet could be exposed. In an overhead cabling situation this
can be easily remedied with enclosed, locking, cable ducting systems.
Conclusion
Remote cyber-attacks are the biggest threat to all data and systems, but physical protection should not be neglected. Nearly all data has some value, and the loss of data, or of the systems that hold it, carries very high costs for the owner. Physical protection may take second place in this war, but once a professional outside organisation has penetrated the physical barriers, you may never know about it until they want to gain a political or commercial advantage. As a result you need both types of protection; otherwise you may have a very nasty surprise in the future.
INFRASTRUCTURE AS A SERVICE
Could a Lab Be Your Starter IaaS Cloud?
Building an IaaS Cloud
By Alex Henthorn-Iwane, VP Marketing, QualiSystems
Lab as a Service clouds can offer tremendous productivity gains.
Introduction
Infrastructure as a service (IaaS) clouds can be a confusing and daunting topic. Some mistake having virtualized their data centre servers for having created a private cloud. Others are daunted by the thought of trying to achieve anything like ‘self-service’ deployment for production applications, which are usually carefully shepherded into operational deployment with extensive controls and oversight. For many, a relatively safe way to start the journey to private or hybrid IaaS clouds is by building infrastructure self-service for IT developers and testers: what is being called a Lab as a Service (LaaS) cloud.
What is an IaaS Cloud?
We’ll define IaaS as a way to deliver IT infrastructure resources with the following characteristics:
• IP-Accessible: clouds must be accessed via the Internet or an IP network
• Self-Service: clouds must offer programmatic or user self-service access to IT resources via an API or web portal
• Automated: clouds must provide automated configuration or provisioning of the requested IT resources as part of the self-service cycle
• Elastic: clouds must be able to deliver resources dynamically and elastically to serve variable use cases and usage patterns
Note that while in most cases server virtualization with a hypervisor like VMware or KVM precedes building an IaaS cloud, it is possible and sometimes important to build IaaS clouds that address infrastructure beyond what is virtualized: non-virtualized servers, which are still employed by 80 per cent of enterprise IT groups (according to Enterprise Management Associates’ 2014 Software-Defined Data Center report), and networking devices.
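As a sketch of what the ‘self-service’ and ‘automated’ characteristics look like in practice, the fragment below builds a provisioning request against a purely hypothetical portal endpoint. The URL, payload fields and resource types are illustrative assumptions, not any particular vendor’s API:

```python
import json
import urllib.request

# Hypothetical self-service portal endpoint -- illustrative only.
CLOUD_API = "https://cloud.example.com/api/v1/environments"

def request_environment(name: str, resources: list[dict]) -> urllib.request.Request:
    """Build a self-service provisioning request: the caller asks for
    resources over an IP network (IP-accessible, self-service); the
    cloud's back end is expected to configure them without manual
    intervention (automated) and release them on demand (elastic)."""
    payload = {
        "name": name,
        "resources": resources,   # may mix virtual, physical and network gear
        "lease_hours": 8,         # elastic: short-lived by default
    }
    return urllib.request.Request(
        CLOUD_API,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A mixed virtual/physical request, as discussed in the text:
req = request_environment("regression-test", [
    {"type": "vm", "image": "ubuntu-22.04", "count": 3},
    {"type": "physical-switch", "model": "any-48-port", "count": 1},
])
print(req.get_method(), req.full_url)
```

Note the request mixes virtual machines with a physical switch: the point made above is that an IaaS cloud may need to reach beyond the hypervisor.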
Why LaaS?
The reasons that many organizations are building Lab as a Service clouds are very similar to why cloud computing
became popular to begin with. The primary motivator today is business agility: getting products, services and applications to market faster. Cost savings are also a consideration. Development and test labs can constitute a significant investment for IT and technology organizations, and manual processes for allocating, connecting and provisioning infrastructure are very wasteful, typically resulting in asset utilization rates of only around 15 per cent. From an IT evolution point of view, development and test labs are a good place to build a starter IaaS deployment because the relative gains in productivity can be dramatic, while the risk of any downtime along the way is relatively low, especially compared with production application deployments.
Major Gains from LaaS
There are three major types of organizations that typically see major gains from building a Lab as a Service:
• Large Enterprises: often, large IT organizations must operate a variety of labs for application development.
• Telecoms: landline, mobile, Internet and other communications operators have to perform functional and performance testing across real-world, end-to-end network scenarios.
• Technology Manufacturers: manufacturers of everything from flash memory to communications switching must perform extensive functional, performance and interoperability testing in their development cycles.
Exploring Lab Automation Challenges More Deeply
There are many opportunities to turn labs into self-service clouds. These opportunities are valuable because typical lab operations are plagued with productivity-draining issues.
Absence of Inventory Visibility
In most labs, equipment inventory is not tracked in a way that provides practical visibility to engineers. While most organizations track assets for financial purposes, what passes for inventory management by engineers is a simple spreadsheet that is often poorly maintained. As a result, it is difficult to tell without exhaustive investigation what equipment exists, who is using what, and what is actually available.
Offline Development/Test Environment Design
In the absence of usable inventory visibility, the design of infrastructure environments is performed offline without regard to resource availability. Visio or other diagramming tools are generally used to produce what is essentially the electronic version of a paper drawing, which is then printed to aid a manual process to secure the resources. If any of those resources are physical equipment, the time and aggravation involved multiply.
Chaotic Connectivity
Once inventory is found that appears to be available, engineers must often manually re-cable connections between physical equipment. With different people adding, moving and changing components, typically without up-to-date documentation, errors such as accidentally disconnecting someone else’s test inevitably occur.
Manual Provisioning
An engineer, after painstakingly assembling the topology, must then perform a variety of further time-consuming logical provisioning steps. Engineers may be highly paid knowledge workers, but in effect they spend the vast majority of their time on low-level provisioning tasks. With so much time needed simply to allocate and provision resources, utilization rates on costly IT assets are very low. Making things worse is human psychology: because the manual process to get resources is so costly, engineers are reluctant to release them, so hoarding drives IT resource utilization down even further.
A Few Keys to Building a Lab as a Service Cloud
• Reservation system: it’s important to understand that labs have a different usage cycle from production data centres. In a production data centre, infrastructure is initially provisioned
into a rack or pod once in a long while, then applications are deployed and expected to run for a long while. By contrast, lab usage cycles are much more dynamic. Development, test and other user environments may only be used for hours to days. This means that the ability to track who is allocated what and for how long, with built-in business logic to encourage (or force) reclamation of those resources once engineers are done with their short-term use case, is important. In practice, a reservation and scheduling system is pretty much a necessity for lab clouds.
• Cover the Entire Infrastructure: realistically assess the need to support the assembly, connectivity and provisioning of a variety of infrastructure. If the teams are solely working on virtualized infrastructure, great. However, if they need access to physical servers and networking,
make sure that the IaaS cloud automation system can handle that, since most cloud management systems only handle virtualized resources. If the cloud management system does need to handle physical/networking gear, how diverse is it? Some cloud management platforms can handle physical gear, but perhaps only a couple of vendors’ equipment. If you need more diverse infrastructure coverage, assess cloud management platforms on that basis.
• Start Small, then Federate or Consolidate: lab consolidation or federation is a great way to drive large-scale savings by allowing distributed teams to utilize shared lab infrastructure. If your organization is contemplating this, it is recommended, timeline permitting, that you run a pilot Lab as a Service automation deployment before taking that step, so that you understand what the cloud automation will take before committing to a massive, high-pressure project. If you design your pilot correctly by deploying lab cloud services to remote users, it should set the stage for scaling that automation deployment out fairly effectively.
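To make the reservation idea above concrete, here is a minimal sketch of a reservation book with conflict detection and expiry-based reclamation. The class and field names are invented for illustration; a real LaaS platform would add approval workflows, quotas, notifications and forced tear-down of the provisioned topology:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Reservation:
    asset: str          # a switch, server or whole topology
    user: str
    start: datetime
    end: datetime

@dataclass
class LabScheduler:
    """Toy reservation book for shared lab assets."""
    reservations: list = field(default_factory=list)

    def is_free(self, asset: str, start: datetime, end: datetime) -> bool:
        # Free if every existing booking is for another asset or does not overlap.
        return all(
            r.asset != asset or end <= r.start or start >= r.end
            for r in self.reservations
        )

    def reserve(self, asset: str, user: str, start: datetime, hours: int) -> bool:
        end = start + timedelta(hours=hours)
        if not self.is_free(asset, start, end):
            return False          # conflict is visible up front, not at the patch panel
        self.reservations.append(Reservation(asset, user, start, end))
        return True

    def reclaim(self, now: datetime) -> int:
        """The anti-hoarding business logic: expired bookings are dropped."""
        before = len(self.reservations)
        self.reservations = [r for r in self.reservations if r.end > now]
        return before - len(self.reservations)
```

A second engineer asking for the same switch in an overlapping window is simply refused, and `reclaim` models the scheduled release that keeps utilization up.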
Conclusion
Lab as a Service clouds can offer tremendous productivity gains and significant cost savings to your organization. These benefits, plus the chance to cut your teeth on a relatively safe IaaS project, could make lab clouds a great starting point for your private and, ultimately, hybrid IaaS cloud practice.
UPS SYSTEMS
The Next ‘Hot Topic’ in UPS
Modular UPS Systems
By Robin Koffler, Director, Thamesgate
If you ran a data centre, which UPS topology would you choose?
Introduction
Electricity availability, costs and sourcing remain top agenda items for many data centre operators. The UK grid was designed for centralised generation and has become increasingly unstable as energy demands rise and the grid is extended with decentralised, renewable connections. There is an increasing likelihood of insufficient energy generation and storage capacity to meet demand, particularly from fast-growing industry sectors and a rising population. How can an industry that powers the Cloud reduce its own electricity consumption and yet meet rising demands for power? This will continue to be a growing issue that hardware suppliers are tasked with helping to solve in order to remain competitive.
Investing in UPS
UPS manufacturers are no exception to this challenge. Regarded by some as a ‘grudge’ purchase, uninterruptible power supplies are now a vital component within the critical power path of any data centre. The larger the data centre, and the higher the resilience Tier-rating it is aiming to achieve, the higher the investment has to be in UPS and overall energy consumption. Most UPS manufacturers have already moved from transformer-based systems to transformerless technology. Use of the latest IGBT power components for their rectifiers (the input/battery-charging stage) and triple-conversion inverter systems has led to the development of more compact and efficient UPS systems. Just as important is the efficiency curve itself, with most transformerless UPS systems able to achieve above 95 per cent efficiency over a wide load range (25-90 per cent). Eco-mode offers an even higher level of operational efficiency (99 per cent), but at a cost to resilience, since the UPS has to run on a line-interactive/standby basis.
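The gap between roughly 95 per cent double-conversion efficiency and 99 per cent eco-mode is easier to judge in energy terms. A quick worked example, where the constant 400kW load is an assumed figure for illustration:

```python
HOURS_PER_YEAR = 8760

def annual_loss_kwh(load_kw: float, efficiency: float) -> float:
    """Energy dissipated by the UPS itself over a year at a constant load."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * HOURS_PER_YEAR

load_kw = 400.0                                   # assumed constant load
online = annual_loss_kwh(load_kw, 0.95)           # ~95% double-conversion mode
eco = annual_loss_kwh(load_kw, 0.99)              # ~99% eco-mode

print(f"online losses:   {online:,.0f} kWh/yr")   # roughly 184,000 kWh/yr
print(f"eco-mode losses: {eco:,.0f} kWh/yr")      # roughly 35,000 kWh/yr
```

At typical industrial electricity prices that difference is substantial, which is why the resilience trade-off of eco-mode gets weighed so carefully.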
System Specification
With all UPS manufacturers now offering high efficiency systems there
can be little to differentiate between them other than brand reputation, service network and price. The only differences in system specification will be marginal, relating to efficiency, footprint, airflow, service accessibility and software reporting and management. The majority of UPS systems are single-cabinet (mono-block) units that can typically be paralleled. Using interconnecting cables and parallel interface cards, a mono-block UPS (10-800kVA) can be scaled to achieve resilience (N+1) and larger capacity. Each UPS manufacturer will have its own ‘buzz’ word for an advanced battery charging system or rectifier and inverter stack, but when you get down to basics the systems offered are all remarkably similar. Within a data centre, all the components at the load end are scalable. This scalability reduces as you move up the critical power and cooling path to the main building infrastructure. On the ‘shop floor’, space permitting, you can add servers, cabinets, cooling and so forth, providing cabling and support infrastructure has been added or built into the facility.
Modularity
‘Modularity’ within the UPS world is the latest bandwagon and for once it appears to be one that will actually make a difference in reducing data centre operational costs. While modular UPS systems have existed for some time, they have not been a popular investment due to their cost and complexity. Even today they represent only around 15-20 per cent of the overall worldwide market for UPS systems. Most UPS manufacturers are starting to offer a modular-based system. For some it is a rework of their existing transformerless technology. For others it may be a first-principles R&D development project or one that builds on their experience within a modular DC environment and power electronics miniaturization. Whichever approach is taken, typical modular UPS systems today comprise slide-in UPS modules of a fixed size
(typically 20-25kW or 40-50kW) that sit inside a cabinet frame. The frame houses the connections to the electrical supply and load distribution points, as well as a central bypass arrangement. Each UPS module has its own static bypass and power management system that self-tests the module before it powers up and connects to the central frame. The main advantage of a modular UPS over a traditional mono-block system is ‘right-sizing’: modular UPS systems are easily scalable through the addition of further slide-in modules. Right-sizing in turn leads to greater operational efficiency and lower overall investment in the UPS system. Mono-block/fixed capacity UPS systems, as the name implies, have fixed capacity ratings. A 500kVA UPS system will deliver 500kVA, and most installations would look to load this no more than 80-90 per cent (i.e. 400-450kVA) to provide an operational safety margin. The UPS could be run at a lower load, but the energy losses may be greater, depending on its efficiency curve. Modular UPS systems allow the data centre operator to mirror the infrastructure expansion strategy adopted throughout the installation. As additional server racks become operational, the modular UPS can be expanded vertically through the addition of slot-in UPS modules, or horizontally through the addition of another UPS cabinet. In terms of reliability, both modular and mono-block/fixed capacity systems are inherently reliable, but the modular UPS can offer higher levels of availability more easily. As a UPS module is slid into a UPS cabinet, it energises and goes through several hundred self-tests before system connection is allowed. Any test failure results in an alarm condition and rejection by the system controller. Once operational, the system controller monitors the overall and module loads, automatically deciding which modules to put into ‘sleep mode’ to save further energy.
Though idle, these sleeping modules can instantaneously take up load within the data centre as server workload increases. The efficiency range of the modular
UPS system is exceptional, with the best reaching 96.5 per cent from 20-100 per cent load. This really comes into play when operating a modular UPS system with parallel redundancy. Most modern UPS systems (fixed capacity or modular) tend to be unity power factor rated, where their kW rating is equal to their kVA rating. Installing a 120kW system, using three 40kW UPS modules to power a 40-80kW load profile, would see two modules at 50 per cent load (20kW each) and one 40kW module in redundancy. System efficiency would be around 96.5 per cent even with the modules running at 50 per cent or less load. The availability would be high (99.9999 per cent). Upgrades can also be attractive for UPS systems included on the
Carbon Trust’s Energy Technology List, allowing organisations to benefit from Enhanced Capital Allowances and reduced tax bills. Adopting ‘Critical Power as a Service’ (C-PaaS) can also be an economical and flexible way to upgrade. As well as enjoying a rebate for the removal and disposal of an existing UPS system, data centre operators can move onto an operating lease that includes the UPS, batteries, annual maintenance and future battery replacement.
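The right-sizing arithmetic in the 120kW example above can be reproduced in a couple of lines. This is a sketch, with the module sizes and loads taken from the example in the text:

```python
import math

def modules_needed(peak_load_kw: float, module_kw: float, redundant: int = 1) -> int:
    """Slide-in modules for an N+r modular UPS sized to the peak load."""
    return math.ceil(peak_load_kw / module_kw) + redundant

def module_loading(load_kw: float, active_modules: int, module_kw: float) -> float:
    """Fraction of rated capacity carried by each active module."""
    return load_kw / (active_modules * module_kw)

# A 40-80kW load profile on 40kW modules with N+1 redundancy:
print(modules_needed(80, 40))      # -> 3 (two active plus one redundant)
print(module_loading(40, 2, 40))   # -> 0.5 at the bottom of the load profile
```

Growing the load simply increases the module count, which is the right-sizing advantage expressed in numbers: capacity tracks demand instead of being bought up front.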
Conclusion
So, if I ran a data centre, which UPS topology would I choose? If it were purely on price, the fixed/mono-block
system is always going to be the most competitive. Mono-block systems are less complex and use a single metal cabinet to house the electronics. Modular UPS systems are more complex by design. Looking to the future, and for 10-20 per cent more budget, the modular UPS would be the one of choice. The overall system footprint (vertically or horizontally scaled), the ability to easily increase resilience or capacity with further slide-in modules, right-sizing for optimal efficiency, and lower service and maintenance costs will always outweigh the lower initial cost of a traditional fixed capacity system.
DATA SECURITY
A Multi-Step Approach to Security
An Evolution in Infrastructure
By Sean McAvan, Managing Director of NaviSite Europe
Sean McAvan outlines the importance of securing your data centre.
Introduction
Ten years ago, few could have predicted what today’s data centres would look like. The development of technologies like cloud computing, and the explosion of data generated by the likes of social media and the Internet of Things, has completely changed the modern data centre. This data growth not only impacts how and where data is stored, but has also created the challenge of how to protect this information. In recent years we have seen an evolution in infrastructure and storage to support these new trends, both for the business community and for consumers, which has driven innovation in how data can and should be protected. Companies and individuals are responsible for securing and protecting all this data, and while great strides have been made to ensure that information is protected from external threats, it is often humans who remain the weakest link in the security chain. Whether through malicious intent or inadvertent carelessness, even the most sophisticated technology can be rendered useless if sensitive information gets into the wrong hands through human error, so data centre providers must take a multi-step approach to security.
Security Measures
In a recent survey, NaviSite found that 82 per cent of UK respondents are either using or considering the use of colocation this year, and 54 per cent said security is a main consideration when evaluating colocation services. If you are looking to a third party provider to host your data, it is essential to seek absolute clarity on what measures of security are in place at the logical and physical level. World class data centres have a number of sophisticated controls to ensure systems remain protected, including physical security controls like cameras and biometric access systems and may then offer managed services to deliver logical controls at the network level like firewalls, intrusion detection or DoS mitigation. At the OS level, operating systems have become more secure and more sophisticated anti-virus software is now available, while threats at the applications level can be mitigated
in a number of ways; for example, intelligent web application firewalls can be implemented. These are clever enough to understand the normal traffic patterns for an application, and if they encounter patterns outside the defined ‘normal’ parameters, the firewall can automatically block the problem traffic, averting a problem before it happens.
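The ‘learn normal, block abnormal’ behaviour described above can be caricatured in a few lines. The sketch below looks only at request volume, whereas a real web application firewall models URLs, parameters, payloads and sessions; the window size and threshold are arbitrary assumptions:

```python
from collections import deque
from statistics import mean, stdev

class RateAnomalyDetector:
    """Crude sketch of anomaly-based blocking: keep a window of recent
    per-minute request counts and flag counts far outside the baseline."""

    def __init__(self, window: int = 60, threshold_sigmas: float = 3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold_sigmas

    def observe(self, requests_per_min: int) -> bool:
        """Returns True if this traffic sample should be blocked as anomalous."""
        if len(self.history) >= 10:   # only judge once a baseline exists
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(requests_per_min - mu) > self.threshold * sigma:
                return True           # outside the defined 'normal' parameters
        self.history.append(requests_per_min)
        return False

detector = RateAnomalyDetector()
for sample in [100, 104, 98, 101, 97, 103, 99, 102, 100, 96]:
    detector.observe(sample)          # learn what normal looks like
print(detector.observe(5000))         # a flood stands out against the baseline
```

Blocked samples are deliberately kept out of the baseline, so an attack cannot slowly teach the detector that flood-level traffic is normal.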
External Threats
Sitting on top of these tools and systems are defined processes and best practice, including specific industry compliance standards such as PCI, HIPAA and FISMA, and broader frameworks that define measures to protect data, such as ISO, SSAE16 and ISMS. But despite developments in tools, systems and processes, new threats continue to emerge, and organisations need to stay alert to keep one step ahead of those external threats. Much of the focus on the human link in the data centre security chain is on protecting networks from outsiders, but the ‘insider threat’ continues to pose a significant risk. ‘Rogue insiders’ already have access to systems and can often avoid tripping alarms that might otherwise signal some form of attack. In a 2014 Ponemon Institute survey, 30 per cent of data breaches were related to a negligent employee or contractor, i.e. human error. Recognising the sources of these threats is one thing, but it is quite another to be able to deal with them. However, there are several practical steps data centre managers can take. Many data centre providers take advantage of new levels of sophistication in encryption algorithms, which can provide another layer of protection should outsiders gain access to data. However, appropriate measures need to be in place to ensure that rogue insiders do not get access to encryption keys, which would invalidate even the most sophisticated encryption systems. As well as encrypting data for both storage and transmission, it is important to capture all the information about data access attempts, both legal and illegal. This allows privileged users to do their jobs in a climate of transparency,
while also acting as a deterrent for unauthorised access.
Multiple Checks
Multi-factor authentication is now more prevalent, where multiple checks take place at a physical level; for example, passwords, together with fingerprint or retinal scans and personal data, can be incorporated as an additional measure. In some instances a phone factor is used, where a message is sent to a phone to ensure that the correct individual receives the password. This can be strengthened further by authorisation based on least privilege, intrusion detection and notification, and restrictive access controls; measures which are of paramount importance when securing data. Another way in which data centres can reduce the risk of rogue insiders is to eliminate the generic visitor pass. This can seem a low-tech safety measure, but given the research about data breaches, it is key that safety measures are equally stringent at the physical level and not ignored or viewed as less important. With a unique visitor pass, all personnel entering the data centre are uniquely identified with a photograph on their visitor badge. This is supplemented with key information relating to the individual and their role. The badge is also time-stamped, so the visitor is unable to reuse it, pass it on to someone else, or stay beyond their permitted time slot.
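Code-based second factors of the kind described above are often derived rather than transmitted: the server and the user’s device both compute a short-lived code from a shared secret and the clock. As an illustrative sketch (standard RFC 6238 TOTP, not any specific vendor’s scheme):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password: HMAC-SHA1 over the current
    30-second time step, truncated to a short decimal code."""
    at_time = int(time.time()) if at_time is None else at_time
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", at_time // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59 s.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at_time=59, digits=8))  # -> 94287082
```

Because the code expires with the time step, a shoulder-surfed or intercepted code is useless moments later, which is the property that makes it a worthwhile second factor.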
Conclusion
Data centres must take a multi-level approach to security, considering both physical and logical measures. The aim of this approach is to meet compliance and specific legal requirements as well as to stay one step ahead of the risk posed by rogue employees and external threats. While it is essential that technology continues to develop to protect against external threats, it is evident that internal threats are constantly posing a huge risk to companies. A multi-level approach will tackle both, by creating numerous opportunities to proactively detect, deter and overcome any data breaches from an internal or external source.
UPS SYSTEMS
Meeting the Challenge of ‘The Internet of Things’
Choosing the Right UPS Topology
By Kenny Green, Technical Support Manager at Uninterruptible Power Supplies Ltd
Kenny Green looks at how operators can use today’s modular UPS technology to manage the implications of the Internet of Things.
Introduction
The Internet of Things (IoT) is about giving devices intelligence and creating new sources of data for improved decision-making. Its future growth is expected to be dramatic, with Gartner forecasting 26 billion installed devices by 2020. These deployment levels will generate large quantities of data that must be processed in real time, which will impact data centre workloads, creating new security, capacity and analytics challenges.
Impact on Data Centres
Organisations seeking to integrate an IoT structure will have to aggregate raw data in multiple, distributed mini data centres for initial processing. The resulting refined data can then be forwarded to a central site for further treatment. The demand on these mini data centres will be heavy. The large volume of data handled will probably make full data backup unaffordable. This will create a need for selective backup operations, in turn requiring more processing. These data centres must exhibit high availability, as they process real-time data that would be lost if not captured on arrival.
UPS Topologies
If high availability, efficiency and scalability are essential to the data centre ICT equipment, these attributes are critical to the UPS supporting it.
Figure 1
Figure 2
Today’s systems are best able to meet such challenges if they are built using modern, modular topology. Advances in semiconductor technology, especially IGBT devices, have led to the transformerless technology now found within modern UPSs. The inverter output AC voltage level is sufficiently high to drive the load directly, without need for a step-up transformer. Transformerless technology has become ubiquitous because it offers so many advantages. These include improved efficiency with a higher input power factor, lower input current harmonic distortion (THDi) and lower audible noise. Importantly, both capital and operating costs are significantly reduced. Users seeking the ultimate in UPS availability and flexibility can take advantage of hot-swappable, modular UPS topology; an advanced concept made possible by transformerless technology’s hugely reduced size and weight. Fig. 1 is an example of a hot-swappable, modular system. The UPS rack on the left has three modules, while that on the right has five. Each module is an entirely self-contained UPS of up to 100 kVA capacity. Because it is hot-swappable, a failed module can be removed simply and quickly, without having to take the UPS system off-line. It is just as easy to ‘plug in’ extra modules to gain extra capacity. Adding modules to a rack in this way is known as vertical scaling; this can be done without needing extra space, cabling or installation effort. If the rack is full, further capacity can be provided by populating additional racks (‘horizontal scaling’). Large IoT device population growth could therefore be
accommodated without interruption to power. This flexibility also allows redundant configurations to be set up efficiently. A 120 kVA load, for example, could be supported either by a pair of non-modular standalone 120 kVA units, one of which is redundant, or by four 40 kVA rack-mounting UPS modules, totalling 160 kVA of capacity. If one module fails, the remaining three have 120 kVA capacity between them, which is enough to fully support the load. This is known as n+1 redundancy, where, in this case, n=3. During normal operation the redundant configuration’s capacity exceeds the load by only 40 kVA, rather than 120 kVA as in the standalone example. Capital expenditure on unnecessary extra capacity is reduced, while each module’s loading is increased to 75 per cent, compared with 50 per cent for each of the 120 kVA standalone pair. High availability is essential to avoid loss of data arriving in real time from arrays of IoT devices. Redundancy improves availability by providing resilience to failure, but modular systems with hot-swapping capability can achieve availability levels up to 99.9999 per cent, often known as ‘six nines’ availability. Mean Time To Repair (MTTR) is a key determinant of availability, and this figure drops from around six hours for a standalone system to half an hour for a modular system.
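The link between MTTR and availability is simple to quantify: steady-state availability is MTBF/(MTBF+MTTR). The MTBF figure below is an assumption chosen purely to illustrate how cutting repair time from six hours to half an hour reaches ‘six nines’ territory:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from mean time between failures (MTBF)
    and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def as_percent(a: float) -> str:
    return f"{a * 100:.4f}%"

MTBF = 500_000.0   # hours: an assumed figure for illustration only

print(as_percent(availability(MTBF, 6.0)))   # standalone, ~6 h repair  -> 99.9988%
print(as_percent(availability(MTBF, 0.5)))   # modular, 30 min hot-swap -> 99.9999%
```

With the failure rate held constant, the availability gain comes entirely from the shorter repair window, which is exactly the argument made for hot-swappable modules.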
Balancing Performance
For a UPS supporting business critical loads, a hot-swap, modular system offers the highest possible power availability. Other UPS operators within less critical
applications may wish to trade off some power availability against cost. The Power Availability (PA) chart in Fig. 2 shows the choices that exist, so data centre managers can choose the solution that best balances their budget and their need for high availability.

A standalone UPS, which is neither hot-swappable nor redundant, provides normal power availability based entirely on UPS reliability. Power availability can be improved with a fault-tolerant system that has redundant components, although many of these are not hot-swappable. Such systems offer high power availability as they continue to support the load if a component fails. However, a failed component can often mean that the entire UPS needs
replacing.

A modular UPS offers high power availability. With hot-swappable components, and often with redundant batteries, modular systems are typically used for multiple servers and critical applications equipment. Their main advantage over a fault-tolerant UPS is that all of the main components susceptible to failure can be hot-swapped, eliminating planned downtime during a service call. Modular UPS systems perform best on the PA chart because all of their major components are hot-swappable and redundant. They offer the highest levels of power availability and protection for data centres, and they also make accommodating future growth simpler, with easier handling if the organisation moves or expands.
Meeting the Challenge
The IoT is new and unpredictable, as is how technology and users’ needs will develop. UPS systems must offer sufficient flexibility, through their scalability and availability credentials, to meet these potentially unknown challenges. They must also remain suitably space, cost and energy efficient to comply with data centre operators’ space and budget constraints. Modular UPS technology has already proved itself well suited to dealing with changing requirements, and is well placed to play an important role in managing exponential growth in IoT data requirements, both now and in the future.
NETCOMMS europe Volume V Issue 2 2015 23
DATA CENTRES
The Data Centre Decision
Carrier Neutral Data Centres By Jonathan Arnold, Managing Director, Volta
Introduction
Whether the requirement is Internet connectivity for cloud-based applications or links to a private WAN, most organisations now recognise the importance of a carrier neutral data centre. However, while the list of possible connections available may look compelling – some data centres boast hundreds of carriers – it is important to look beyond the top line promises.
Jonathan Arnold determines just how much connectivity choice is really on offer at a data centre
When it comes to data centre connectivity, organisations have diverse requirements, from cost to quality of service, and choice is key. But what does that choice actually mean? Telcos routinely route traffic over each other’s physical networks – indeed, they often simply resell each other’s services. So, if a company opts for a connection from a given provider, the chances are that the traffic is routed over networks belonging to a variety of other telcos.

This is a great model for creating competition, with Tier 2 providers aggregating services from different Tier 1 providers to offer different Service Level Agreements (SLAs) and cost models. It is not, however, so good for resilience – especially if a company opts for primary and secondary connections from different carriers that are actually routed over the same physical infrastructure from a Tier 1 provider such as BT or Level 3. If anything happens to damage that physical cable – from road works onwards – both connections will fail. It is therefore essential to ask some pertinent questions before making the data centre decision, from the number of different entry points into the building to the number of Tier 1 and Tier 2 carriers providing connections within that data centre.
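The resilience pitfall described here is easy to express programmatically. In this hypothetical sketch, the mapping of retail carriers to the Tier 1 networks they actually ride on is invented for illustration; in practice it has to be extracted from each carrier’s route documentation:

```python
# Hypothetical carrier-to-infrastructure mapping (illustrative names only).
UNDERLYING = {
    "CarrierA": {"Tier1-X"},
    "CarrierB": {"Tier1-X"},             # a reseller riding the same Tier 1 fibre
    "CarrierC": {"Tier1-Y", "Tier1-Z"},  # aggregates two independent networks
}

def truly_diverse(primary: str, secondary: str) -> bool:
    """Two circuits are only resilient if they share no underlying network."""
    return UNDERLYING[primary].isdisjoint(UNDERLYING[secondary])

print(truly_diverse("CarrierA", "CarrierB"))  # False: one dig cuts both circuits
print(truly_diverse("CarrierA", "CarrierC"))  # True: independent physical paths
```

Two ‘different’ carriers that resolve to the same Tier 1 fibre offer commercial choice but no physical diversity – exactly the trap described above.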
Digging Deeper
The first question for most organisations is whether the data centre has a relationship with its incumbent WAN provider. If so, it will be easy to connect into the network and get operational quickly. If not, there will be a number of
challenges facing that carrier in creating a link to the data centre that could add significantly to the cost.

How many diverse entry points are there into the data centre building? Don’t just assume that because a data centre provider boasts hundreds of carrier relationships there are many different physical connection points: indeed, many data centres have just two. As a result, choice is limited and the carriers will be constrained in the SLAs they can offer simply due to the limitations of the infrastructure.

How many Tier 1 providers and how many Tier 2 providers are there – and which underpinning carrier network is being used by each? To ensure resilience, an organisation needs to use different infrastructure coming into the building at different entry points. To ensure choice, a data centre should have not only multiple entry points, but also plenty of Tier 2 carrier relationships. This will enable both competition and the creation of different cost/quality of service packages to meet diverse business requirements.

How much will it cost to get a connection from your office location to the data centre? For example, a 10Gb connection from a central London office to a data centre outside the M25 will cost significantly more than connecting to a central London data centre located just around the corner.

What are the options for connectivity outside the UK – to Europe, the US and/or Asia Pacific? Depending on both current and predicted business requirements, access to a carrier with excellent international connectivity capacity could be an important consideration.
Planning Ahead

While organisations typically only review the WAN provider every three to five years, it is important to remember that the data centre relationship is likely to last even longer. What happens in three years’ time if the company decides to change carrier for the WAN and the new provider does not have a relationship with the data centre? While it will be possible for the new carrier to connect to an existing service within the data centre, the process will not be straightforward and the company is likely to incur additional costs – costs that may undermine the business case associated with the carrier decision. Ensuring the data centre has a broad range of Tier 1 and Tier 2 providers on board is key to avoiding either additional costs or constrained carrier options further down the line.

Indeed, during the typical life of a data centre relationship an organisation’s connectivity requirements will evolve in line with business changes – from the company looking to add cloud services to the managed services provider expanding into new markets. Hoping that a data centre will add carrier relationships during this time may be a little risky – especially when it comes to those critical Tier 1 relationships that add resilience.

Conclusion

The truth is that once a data centre is in place, adding new physical connections is far from easy: it incurs all the cost, complexity and legal ramifications of digging up roads to lay fibre. So, if a data centre only offers two physical connections into the building, the likelihood is that two is all it will ever offer. And that may be fine; it will still enable a number of Tier 2 carriers to offer a variety of services across the infrastructure – but it does limit a company’s options in the longer term. From resilience to cost, and choice to quality of service, accurately ascertaining the true quality of the carrier relationships on offer at a data centre should be an essential component of any decision-making process.
OPINION
Air Management Solutions
Reducing the Cost of Cooling By Mark Hirst, Head of T4 Data Centre Solutions, Cannon Technologies.
Introduction
Facility teams and data centre managers know that to survive in a world where low cost cloud infrastructure dominates, they need to cut costs to the bone. The hardest cost to cut has always been cooling. As air temperatures creep up inside the data centre, techniques such as free air-cooling and airside economisers become effective cooling solutions. Mark Hirst reports.
Containment and Hotter Input
The two biggest improvements in data centre cooling have been the
introduction of aisle containment and the ability of servers to tolerate higher input temperatures. Containment is an air management solution that can be retrofitted to data centres. It has been responsible both for extending the life of older data centres and for enabling higher densities without having to invest in expensive refits of cooling systems. ASHRAE, the industry body responsible for data centre standards, has promoted higher input temperatures. Just 10 years ago, many data centres were still cooling input air to 65°F (18°C), while today they are working at 79°F (26°C) and even higher. This ability to tolerate higher input temperatures has also been helped by new generations of silicon and motherboards.
Using Natural Resources
Despite these changes, more still needs to be done to cut the costs of cooling. This has led to a group of techniques known as free air-cooling. The idea is to use as much ambient air as possible to remove heat from the data centre. The stated goal of most of these systems is to have no mechanical cooling at all. It sounds great, but the reality is that there are few places on the planet where the outside air temperature is low enough to cool most data centres all year round. Nor is ambient air the only challenge: whichever technology under the free air-cooling banner is chosen comes with a number of additional challenges, from data centre design to particulate matter.
Ambient Air
Using pure ambient air inside the data centre is not a technique that can be retrofitted to existing facilities. The first challenge is getting a large enough volume of air below the room. A large volume is needed to create the pressure to push the air through the data hall. The Hewlett Packard data centre in Wynyard, UK, uses a five-metre hall to create the required volume of air. To help create the right pressure to draw the ambient air into the data hall, the hot air has to be expelled via a chimney. This needs careful design in order not only to extract all the hot air, but to do so in such a way as to create a partial vacuum, which then draws in the cold air behind it.

To ensure that the air does not contain any particulates that would impact the performance of the equipment in the data hall, very large filters are needed. Air inside cities tends to carry high levels of lead and other particulates, especially from diesel vehicles and general dust. It also tends to be warmer than air in the countryside, and this can severely limit the number of days when ambient air can be used without secondary cooling. From a data centre perspective, the air in country areas can be even dirtier. Pollen, dust and insects – even swarms of bees and wasps – have been reported being caught on the filters that guard the large
air halls. The ambient temperature is often lower than in city areas, but here wind can be a problem, as high winds can force small particles of dust through the filter screens.
Humidity and Dew Point
Data centre managers are acutely aware of the risks of humidity inside the data centre. Too little humidity and the risk of static electricity rises; when this discharges over electronic equipment it causes havoc and destroys circuit boards. Too much humidity leads to condensation, which can short out systems and cause corrosion, especially in power systems. When using conditioned air inside the data centre, this problem is handled by the chillers and dehumidifiers. Free air-cooling, however, creates its own problem. The most obvious case is on rainy days, or when very cold ambient air is drawn into a hot data centre. In both cases, water in the air tends to condense very quickly and, if not handled properly, is a disaster waiting to happen.

Some data centres are being built near the sea to take advantage of the natural temperature difference between land and sea. This looks like a good strategy at first, but salt water in the air can destroy data centre equipment in a fairly short period of time. Any use of free air-cooling in these environments requires a significant investment in technologies to clean the air of salt before it is used for cooling.

To get around the condensation problem, free air-cooling systems mix existing air from the data centre with the air being drawn in from outside. Where the air is extremely cold, this helps to heat it and reduces the risk of cold, damp air condensing on processors, storage devices or power systems. Where the air is simply very heavy because of the external humidity, dehumidifiers must be available that can be brought online, even though this adds extra cost to the power budget.
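The condensation risk described above comes down to dew point: moisture condenses on any surface colder than the dew point of the surrounding air. A rough sketch using the Magnus approximation – the 2°C safety margin is an assumption for illustration, not a standard figure:

```python
import math

def dew_point_c(temp_c: float, rh_pct: float) -> float:
    """Approximate dew point via the Magnus formula (b=17.62, c=243.12)."""
    b, c = 17.62, 243.12
    gamma = math.log(rh_pct / 100.0) + b * temp_c / (c + temp_c)
    return c * gamma / (b - gamma)

def condensation_risk(surface_c: float, air_c: float, rh_pct: float,
                      margin_c: float = 2.0) -> bool:
    """Flag any surface within margin_c of the dew point of the surrounding air."""
    return surface_c <= dew_point_c(air_c, rh_pct) + margin_c

# A 26C hall at 60% RH has a dew point of roughly 17.6C, so very cold
# incoming air - and the kit it touches first - is a condensation hazard.
print(round(dew_point_c(26, 60), 1))
print(condensation_risk(15, 26, 60))   # True: mix or pre-heat that air first
```

This is exactly why the mixing of return air with cold outside air, described above, matters: it keeps cold surfaces away from the hall’s dew point.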
Airside Economisers
A technology that gets the most out of
free air, reduces the particulate and dew point issues, and is capable of being retrofitted into an existing facility, is the airside economiser. Economisers bring ambient air in, filter it and then mix it with exhaust air to raise the temperature if required. The air is then passed either through an air-to-air heat exchanger (indirect) or directly through a backup water or DX coil in order to get the right input temperature for the room.

The advantage of airside economisers is that they are not a single approach to free air-cooling. By dealing with the issues identified – and by having the ability to filter, to use both direct and indirect exchange, and additionally to cool or heat air – they can reduce the cost of cooling and get the most out of ambient temperatures. The Green Grid estimates that even data centres in hot environments such as Florida, Mexico, Texas, Portugal, Southern Spain and even the Middle East should be able to manage 2,500–4,000 hours per year of free air-cooling. Much of this will be at night and during the winter months. In more temperate climates such as the UK, Northern France, the Netherlands, New York and parts of California, this can rise to 6,500 hours. Further north, data centre owners should expect up to 8,000 hours, although there will be additional costs in removing excess humidity and heating air before injecting it.

To get the most from airside economisers, however, it is essential that users understand the requirements of the technology. One of the most common failure points is poor pressure management. If there is insufficient pressure to draw the air through the data halls, air will remain stagnant and simply increase in temperature as it is poorly circulated. It is also important to ensure that the temperature sensors are effectively placed. These should be integrated with Data Centre Infrastructure Management (DCIM) systems so that operators can quickly identify any temperature hotspots.
One problem caused by poorly placed sensors is that too much return air is added to the airflow, causing the input temperatures to rise unexpectedly. The opposite is
also true if they are located too close to a heat source, where air will be given additional cooling, creating a large difference between hot and cold and exacerbating the risk of condensation. When integrating airside economisers into modular solutions, it is essential to allow enough exterior space to install the equipment. This is why modular equipment manufacturer Cannon Technologies has designed its own solution specifically for modular data centres. Depending on the climate, the target PUE can be as low as 1.1.
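Estimating how many free-cooling hours a site can expect is, at its simplest, a matter of counting the hours in which ambient air is at or below the economiser’s supply threshold. The threshold and the synthetic temperature series below are assumptions for illustration; a real study would use local meteorological data:

```python
import math

def free_cooling_hours(hourly_temps_c, threshold_c=24.0):
    """Count the hours when ambient air alone can meet the supply temperature."""
    return sum(1 for t in hourly_temps_c if t <= threshold_c)

# Synthetic temperate-climate year: seasonal and daily swings around a 12C mean
temps = [12 + 8 * math.sin(2 * math.pi * h / 8760)   # seasonal cycle
            + 5 * math.sin(2 * math.pi * h / 24)     # day/night cycle
         for h in range(8760)]

print(free_cooling_hours(temps))   # thousands of hours, mostly nights and winter
```

Even this toy model reproduces the pattern The Green Grid describes: only the warmest summer afternoons fall outside the free-cooling window in a temperate climate.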
Conclusion
In one survey, Intel looked at the impact of airside economisers where outside air temperatures reached 90°F (32°C). It estimated that a 10MW facility would save almost $3 million per year, and that the risk of increased equipment failure was so low as to be insignificant. In the temperate climates of the UK and the mid US, free air-cooling is able to deliver a Power Usage Effectiveness (PUE) as low as 1.05, against an industry average of 2.0. What this means is that for every 1kW of power consumed in the data halls during winter months, just 1.05kW of energy is actually drawn in total – non-IT equipment accounts for only around 5 per cent of the total energy bill.
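The PUE arithmetic in that conclusion is easy to check:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_kw

def non_it_share(pue_value: float) -> float:
    """Fraction of the total energy bill consumed by non-IT kit (cooling, losses)."""
    return (pue_value - 1.0) / pue_value

print(pue(1.05, 1.0))                 # 1.05: 1kW of IT draws 1.05kW in total
print(round(non_it_share(1.05), 3))   # ~0.048: around 5% of the bill is overhead
print(round(non_it_share(2.0), 2))    # 0.5: the industry average wastes half
```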
CASE STUDY
Core Transmission Networks
Meeting Next-Generation Demands

400Gbps to be made available to BT Ireland commercial network customers from March 2015
Introduction
BT and Huawei have announced the successful completion of a 400 Gigabit-per-second trial over the optical fibres that form BT Ireland’s live network between Dublin and Belfast, the first trial of its kind in either the UK
or Ireland. The trial was conducted by running 400Gbps transmission through existing live 10Gbps, 40Gbps and 100Gbps wavelengths, proving that BT’s current core network has the capability to support this future, next generation transmission technology.
The trial also revealed how BT’s core fibre optic infrastructure could work even more efficiently in the future, thus reducing the need to invest in more infrastructure as customers’ bandwidth demands grow.
The Results
The trial has now paved the way for full commercial deployment of 200Gbps and 400Gbps speeds on the BT Ireland core transmission network by March 2015 from both Dublin and Belfast. Alex Crossan, Managing Director, Commercial Networks, at BT Ireland, said, “The combination of BT Ireland’s leading-edge network, the expertise of our local team and the optimum geographical distance between Dublin and Belfast, made Ireland the perfect location for our innovative 400Gbps trial. The results essentially demonstrate how we will now be able to maximise the efficiency of our network investment, building on our core network infrastructure, while continuing to meet the ever increasing needs of our customers in a fast evolving digital world. These in-life trials have also been
crucial in understanding the capabilities of these new technologies, and have allowed us to accelerate our plans to deploy with confidence in the very near future.” Karl Penaluna, Managing Director Global Network Services, at BT said, “Innovation in communications technology is core to our business. This trial proves the robustness of cutting edge optical transmission technology that we have developed in our labs, by placing it in a truly testing environment in our live network. It demonstrates to our customers in a very direct way that we are able to deliver innovative, cost-effective solutions that will future proof our global network infrastructure.”
Conclusion
Ryan Ding, President of Products and Solutions at Huawei, said: “This is a landmark innovation, which Huawei is very excited to be part of. With the rapid growth of mobile broadband and video services generating tremendous volumes of traffic for backbone networks, the time is now for 400G. At Huawei, we’re
proud to be leading this development in 400G technology and fully understand carriers’ network reconstruction requirements. We are dedicated to improving network performance, expanding bandwidth capacity, and eliminating bandwidth bottlenecks to help carriers build future-ready backbone networks.”
Key Facts:
The link between Dublin and Belfast is approximately 200km long, and shares wavelengths with an existing live commercial BT Ireland DWDM (Dense Wavelength Division Multiplexing) network between Dublin and Dundalk carrying existing 10G, 40G and 100G customer wavelengths. The 400G signal comprises 2 x 200G (16QAM) in adjacent 50GHz wavelength slots and can be selectively configured to any wavelength slots available on the live network.
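A quick back-of-envelope check of those figures: each 200G carrier in a 50GHz slot achieves 4 bits per second per hertz of occupied spectrum, double what a standard 100G wavelength manages in the same slot width:

```python
def spectral_efficiency(bit_rate_gbps: float, slot_ghz: float) -> float:
    """Bits per second per hertz of occupied DWDM spectrum."""
    return bit_rate_gbps / slot_ghz

print(spectral_efficiency(200, 50))          # 4.0 b/s/Hz per 16-QAM carrier
print(spectral_efficiency(100, 50))          # 2.0 b/s/Hz for a standard 100G wave
print(spectral_efficiency(2 * 200, 2 * 50))  # the 400G super-channel: still 4.0
```

This is why the trial matters for capacity planning: the same fibre and the same grid carry twice the traffic per slot.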
Key technical achievements included:
• Successful error-free transmission of 400G (2 x 200G (16-QAM)) DWDM over a 200km long link between Dublin and Belfast, together with live 10G, 40G and 100G traffic (Dublin to Dundalk), demonstrating the potential of seamless upgrade and future proofing of the existing network to 400G.
• Successful demonstration of end-to-end Ethernet using a mix of fully loaded 10GE and 100GE traffic over the 400G signal.
• Successful error-free transmission of the 400G signal in wavelength slots adjacent (50GHz) to 10G traffic, thereby demonstrating the ability to maximise traffic capacity on the existing network link.
DATA SECURITY
A New Era of Automated Remediation
Securing the Cloud in 2015 By Steven Harrison, Lead Technologist & Head of Products, Exponential-e Ltd. Steven Harrison looks at the emerging field of context-aware security.
Introduction
With the computing world well and truly on its way to 100 per cent cloud adoption, issues continue to arise with regard to the security, or lack thereof, inherent in the cloud. Let’s begin by dismissing the myth that the word ‘cloud’ is interchangeable with the word ‘Internet’. True, many clouds are accessed via the Internet, but this is only one possible design. Let’s look at recent cloud security incidents and think for a moment about whether it was the cloud at fault, or the user.
Cloud Security
If we use a simple analogy that most business executives would understand, let’s say that your data bytes are instead pounds, and the cloud provider is
instead the financial institution you deposit them in. In the same way as financial institutions offer many types of accounts, so too do cloud providers. Now if one of those account types offered you the convenience of a debit card that could be used at any ATM on the public high street, and with only a simple set of credentials you could access your money quickly and easily, like most consumers you’d say “Brilliant! I’ll have one of those please, for my day-to-day use.” Let’s then say that some of your money (remember we’re talking about data bytes here) was more important than the rest. You’d probably ask for an account that didn’t have a debit card, but instead required, for example, dual-signatures on paper delivered privately to the branch to access it.
If someone then stole your debit card and gained your PIN, you would not lose your most important money. Likewise, I doubt very much that you would blame the ATM on the high street for fulfilling its function of dishing out money when presented with a valid card and PIN. Strangely, however, people routinely blame the cloud’s version of that ATM – the web-access portal – for doing just that. You see, it’s not the cloud’s fault that your name and password were stolen, nor is it the cloud’s fault that, when presented with those bits of information, it served up your data. The reality is that your most important data should never have been accessible in that way.
Denial of Service
This situation becomes critically important when we look at the prevalence and increasing severity of distributed denial of service (DDoS) attacks. Any Internet-facing IP address is a point that can be attacked. All the VPNs in the world won’t stop attackers from simply flooding your VPN appliance and shutting it down. Encryption here is not the answer. We need to take the most mission critical applications completely ‘off-net’. Interestingly, this was the norm back in the mainframe days, when connections to the mainframe were only ever highly secure private leased lines. We’ve not lost the technology to do this for the cloud, but the willingness, and the understanding of why we did it that way in the ’80s, has faded and we need a reminder.

With that out of the way, it’s clear that understanding cloud security depends very much on using the right type of cloud for each situation – and that it becomes much more complex to know what acceptable behaviour is. Here we begin looking at the emerging field of context-aware security. This is a security system that understands, at a more intimate level, the difference between, say, a user – let’s call him Bob – accessing the data he normally accesses on the company servers, from a network he normally uses, at the hours he normally works, on a machine he normally uses, and the same username and password accessing data
not normally accessed, from an Internet café in Thailand, in the middle of the night. If we’re reliant on traditional authentication, authorisation and accounting (AAA) systems and traditional VPN or remote worker technologies, we’re sunk. The answer of the day has been security information and event management (SIEM) systems, but these are history professors. It’s ultimately not that useful to know that ‘Bob’ downloaded your customer database from an Internet café in Thailand last night. We need to move away from logging, auditing and compliance to a focus on remediation.
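As a toy illustration of what ‘context aware’ means in practice, consider scoring each access attempt against a learned baseline. The attributes, scoring and threshold below are invented for the example; real products model far richer behaviour:

```python
# Hypothetical baseline of Bob's normal behaviour (illustrative values only)
BASELINE = {"network": "corp-lan", "device": "bob-laptop",
            "working_hours": range(8, 19)}

def anomaly_score(attempt: dict) -> int:
    """One point for each context attribute that deviates from the baseline."""
    score = 0
    score += attempt["network"] != BASELINE["network"]
    score += attempt["device"] != BASELINE["device"]
    score += attempt["hour"] not in BASELINE["working_hours"]
    return score

def decide(attempt: dict, block_at: int = 2) -> str:
    """Remediate rather than just log: block and raise an alert above threshold."""
    return "allow" if anomaly_score(attempt) < block_at else "block-and-alert"

# Bob at his desk at 10am versus 'Bob' in an Internet cafe at 3am
print(decide({"network": "corp-lan", "device": "bob-laptop", "hour": 10}))
print(decide({"network": "cafe-wifi", "device": "unknown", "hour": 3}))
```

The point of the sketch is the decision, not the score: a context-aware system acts at the moment of access, rather than writing a log entry for the history professors.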
Hybrid Cloud

The reality is that with the increase in typical IT environment complexity, and as we enter the era of the hybrid cloud environment, there’s just no way that using static policies is going to work. More to the point, there’s just no way to ensure that you won’t be hacked. Being hacked is inevitable. It will happen. It may be a new exploit, Trojan, worm or crafty piece of malware; or it may be a user who insists on writing down passwords, or just a gap in the policy, but the hackers will one day get in. Finding out what they did, after they’ve done it, won’t help. You need to think about how you’ll respond when they get in: how quickly your systems will react, and how quickly your teams will be alerted so that damage control and remediation can happen.

Enter the new era of automated remediation. A few security vendors are looking at the future of our connected world and finding ways of teaching expert systems how to learn, audit and monitor behaviour – not in the intrusive, in-your-face, big-brother way, but in the metadata way. Sure, netsvc.exe is a valid Windows process, but why does this laptop suddenly have two extra copies running? What has changed on this computer since yesterday? Why don’t I just stop those extra two copies and then confirm with a human specialist that they were authorised? How about, instead of blocking social networking at work, I monitor the use of social networking?

Conclusion

Is there security in the cloud? Absolutely! Just remember that the tools and the designs of the network will be different. Done correctly, there’s no reason at all why a cloud shouldn’t be more secure than your old on-premise IT, in the same way as your bank is (hopefully) more secure than when you kept a cash-drawer or safe in the office. It’s not about where the data is stored; it’s about who’s storing it and what protection mechanisms are in place to monitor it and control access to it.
CASE STUDY
Creative Problem Solving
Life in the Fast Lane By Tony Christopher, Network Engineer, Voice/Data, TransUnion Credit. A financial services organization saves $1.5M with a groundbreaking IP platform.
About TransUnion
TransUnion is a global leader in credit information and information management services. For more than 40 years, TransUnion has helped businesses become more efficient at managing risk, reducing costs and increasing revenue. Today, TransUnion provides solutions for over 45,000 businesses and an estimated 500 million consumers in 25 countries around the world. The 1,400 employees located at the company’s headquarters in Chicago, Illinois, strive to provide information and services to their colleagues and customers around the globe.

With every additional phone call made and email received, TransUnion’s management team recognized the need to improve its communications infrastructure in order to keep up with digital-age demands. Tony Christopher, Network Engineer, Voice/Data at TransUnion Credit, wanted to modernize the communications platform and was looking to move the 1,400 employees to Unified Communications and IP Telephony. The challenge was mitigating financial and operational risk as they migrated to a converged platform.
The Challenge
Like many organizations, TransUnion initially planned to achieve its communications enhancements by building on its Local Area Network (LAN) infrastructure to support an IP Telephony solution – deploying IP phones layered on the data network, with each IP phone acting as a switch for the data device connected to it. Layering voice and data is quite common in today’s communications world. It can also be costly and time consuming to implement. TransUnion estimated that local area network readiness would cost the company over $1.8 million and take more than 12 months to complete, so when a Phybridge partner prospected Tony, he was intrigued.

“I’d like to introduce you to a proven innovation that delivers Ethernet and power over your existing voice infrastructure with four times the reach of traditional switches. It was designed to optimize and future proof your LAN for convergence and beyond, and we believe we can save you money while eliminating risk,” said the partner. Tony agreed to a meeting to better understand the Phybridge proposition.
Solution
The Phybridge UniPhyer is the only data network switch in the world to deliver Ethernet and Power over Ethernet over a single pair of telephony-grade wire with four times the reach of traditional data switches. Customers are leveraging their existing, proven-reliable voice infrastructure to create a separate network path for voice communications, complementing the existing data network while optimizing the organization’s IT infrastructure for voice and data convergence. Tony learned that installing the UniPhyer switch would allow TransUnion to optimize its local area network and create a separate physical path for voice communications. Phybridge claimed that the ongoing management of the network would be simpler, and that the risk of issues compared to a layered network solution would be significantly diminished. Additionally, a plug-and-play deployment would not require major infrastructure changes, resulting in a lower-cost solution. Tony found the Phybridge value proposition very interesting. It fit with TransUnion’s mandate to seek alternative information resources to make sound financial decisions. He admits that he was skeptical; the old adage “if it sounds too good to be true, it probably is” was running through his mind. He agreed, however, to meet with a local Phybridge partner to get more information. At that meeting, Tony was provided with an estimate of $300,000 to install the Phybridge UniPhyer switches supporting all of the IP telephones in TransUnion’s Chicago corporate office. If true, this would result in $1.5 million in savings. To mitigate risk, Tony agreed to a pilot deployment to test the Phybridge solution and confirm that it would meet all of TransUnion’s requirements.
Considerations
32 NETCOMMS europe Volume V Issue 2 2015
TransUnion’s senior management team recognized the technical and economic benefits of the Phybridge solution. The IP Telephony system was purchased and installed at a cost drastically reduced from initial budget forecasts.
TransUnion welcomed the UniPhyer’s ability to improve its emergency preparedness planning by creating a more robust 911 system. The point-to-point topology leveraged by the UniPhyer allowed Tony to map every port on the Phybridge switches to a specific physical location in the 10-story building, creating a robust E911 location database.
• Once in place, the wiring didn’t need to be touched. IP phones could move from one location to another, and the E911 location database was automatically updated with the new location of the IP phone, achieved through SNMP integration.
• The Phybridge backbone was easily integrated into the overall management of the network through SNMP.
• QoS on a Phybridge backbone complementing the existing data LAN is achieved by physically separating voice, with each IP phone having a dedicated point-to-point infrastructure to support its requirements.
• Future data requirements are greatly simplified. The physical separation of voice on its own Phybridge switch fabric greatly reduces future financial considerations and potential risks when bandwidth speeds for data users need to be increased.
• Data switches do not need to be PoE, and the IP phone doesn’t need to be changed to support the higher bandwidth speeds needed by the connected data device.
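The SNMP-driven E911 update described above can be sketched in a few lines. This is an illustrative model only: the port-to-location table, the trap handler and the MAC addresses are hypothetical, and the UniPhyer's actual SNMP MIB and trap formats are not reproduced here. The key idea is that each switch port is wired to a fixed telephony pair, so its physical location never changes once mapped; a phone inherits the location of whichever port it appears on.

```python
# Hypothetical sketch of keeping an E911 location database in sync when an
# IP phone moves. Names and the trap format are illustrative assumptions.

# Each (switch, port) is wired to a fixed telephony pair, so its physical
# location is mapped once and never changes.
PORT_LOCATIONS = {
    ("uniphyer-1", 7): "Floor 3, Office 312",
    ("uniphyer-2", 15): "Floor 6, Office 604",
}

e911_db = {}  # phone MAC -> current physical location

def on_link_up(switch, port, phone_mac):
    """Handle a (simulated) SNMP linkUp trap: the phone inherits the
    location of the port it just appeared on."""
    location = PORT_LOCATIONS[(switch, port)]
    e911_db[phone_mac] = location
    return location

# A phone first registers on floor 3, then is moved to floor 6;
# the database follows it with no rewiring.
on_link_up("uniphyer-1", 7, "00:1B:4F:AA:BB:CC")
on_link_up("uniphyer-2", 15, "00:1B:4F:AA:BB:CC")
print(e911_db["00:1B:4F:AA:BB:CC"])  # Floor 6, Office 604
```

In a real deployment the handler would be registered with an SNMP trap receiver rather than called directly, but the mapping logic is the same.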
The Pilot
With Phybridge’s plug-and-play deployment solution, TransUnion was able to easily test the usability of the UniPhyer solution without making any financial investment. The same level of real-world testing is not possible when voice communications are layered on the data LAN: TransUnion would have had to make significant LAN investments before being able to test even a single phone. Tony identified end points throughout the building to test. He tested some of the furthest end points from the central closet and chose the most difficult office locations to ensure an accurate test was conducted in a real working environment. Several IP phones, including key executive desktops, were connected to the Phybridge switch on various floors of TransUnion’s headquarters. This allowed users from all levels of the company to test the solution and experience first-hand the ease of using an IP phone in their day-to-day activities. The transition during the pilot was seamless for TransUnion’s employees and had no adverse effect on their productivity. Tony was satisfied with the results of all testing and was confident that the Phybridge solution would support TransUnion’s migration to IP Telephony. With the pilot complete, Tony recommended the UniPhyer to TransUnion’s executive team and outlined how the Phybridge solution could be implemented faster and at lower cost, but with the same technical results as the layered solution they had initially considered. TransUnion’s senior management team was impressed with the pilot’s results. They found tremendous value in the ability to test the solution in a real operating environment, thereby eliminating project risk and proving the solution’s viability without having to make a financial investment up front.
Deployment
Given that all the telephony pairs supporting the IP end points could be identified in the main closet, TransUnion decided to consolidate the pairs by department for easy management once fully deployed. They calculated the power and back-up power requirements for the project. With all the Phybridge switches in a single location, TransUnion realized significant savings in back-up power costs while reducing power management complexity. Prior to cutover, TransUnion was able to configure and test all the switches to ensure a successful migration. The following is a summary of the strategy applied by TransUnion:
• Configured WAN routers for QoS and kept the PSTN connectivity for DID/DOD traffic.
• Configured the Phybridge switch fabric for redundancy, enhanced security and optimum performance using VLAN and redundancy strategies.
• Clustered the 48-port UniPhyers into six groups of five across three racks.
• Created specific VLANs for each of the clusters.
• For redundancy, daisy-chained each cluster of Phybridge switches together, connecting the top switch to one gigabit data switch and the bottom UniPhyer to a different gigabit data switch. If either of the data switches failed, or a UniPhyer in a cluster failed, a redundant path remained available.
• Racked all the Phybridge switches, connected them to the PBX, tested the switches against the configuration strategy, and locally tested some end points, without any business impact, prior to cutover day.
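The daisy-chain redundancy described in the last two bullets can be checked with a small graph model: five chained switches, the top one uplinked to one data switch and the bottom one to another, so any single failure still leaves every surviving switch a path out. This is a sketch with illustrative device names, not a representation of any Phybridge tooling.

```python
from collections import deque

def reachable(adj, start, targets, failed):
    """BFS from start, skipping the failed node; True if any target is reached."""
    if start == failed:
        return False
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node in targets:
            return True
        for nbr in adj.get(node, ()):
            if nbr != failed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

# One cluster of five daisy-chained UniPhyers (names are illustrative).
uniphyers = [f"uniphyer-{i}" for i in range(1, 6)]
links = list(zip(uniphyers, uniphyers[1:]))            # the chain
links += [("uniphyer-1", "data-switch-A"),             # top uplink
          ("uniphyer-5", "data-switch-B")]             # bottom uplink

adj = {}
for a, b in links:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

targets = {"data-switch-A", "data-switch-B"}

# Any single failure (a data switch or a UniPhyer in the cluster) leaves
# every surviving UniPhyer with a path to at least one data switch.
for failed in uniphyers + sorted(targets):
    for u in uniphyers:
        if u != failed:
            assert reachable(adj, u, targets - {failed}, failed)
print("redundant under any single failure")
```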
Results
The installation of the Phybridge solution began on a Friday evening. Over the course of two days, a team of 8 to 12 people worked to complete the transition. Part of the team unpacked and delivered 1,400 IP phones to employees’ desks. At each desk, they disconnected the RJ11 cable from the old phone, connected it to the PhyAdapter, and plugged the PhyAdapter into the new IP phone. Other team members began working on the wiring consolidation and mapping. The more accurate the wiring records, the less time this part of the project would take. With consolidation complete and the RJ21 cabling connected to the Phybridge switches, the IP phones were powered up, registered and tested to ensure functionality. On Monday morning TransUnion employees arrived at work to find the new IP phones on their desks. Like every other day at the head office, numerous calls, faxes, video conferences and voice messages were transmitted worldwide, all without a single quality of service issue.
CONVERGED INFRASTRUCTURE
Is There a Silver Lining for Data Centre Real Estate Investment?
The Cloud Effect By Martin Miklosko, Director – Head of Data Centre Valuation at CBRE
Introduction
Martin Miklosko charts the changing nature of the data centre market.
Given the rapid emergence of cloud technology over the last decade, there is a common misconception that the role of data centre real estate, including investment into this sector, will become less significant. Driving this belief is the inherently flexible nature of cloud, meaning data centres can now theoretically be located anywhere, regardless of any issues of latency. Therefore, some hold the view that the advent of cloud is causing the data centre sector to become less reliant on the real estate industry for investment. A crucial factor, often missed, is that real estate and data centres have gone hand-in-hand for decades. The sector was born in the late 1980s, when companies began to move IT infrastructure from in-house server rooms to purpose-built enterprise locations, and later to colocation data centre facilities. Over time, investment into data centres from the real estate sector has created some of the largest owners and operators in the world. Many of these are now listed as Real Estate Investment Trusts (REITs). The last 12 months have seen the creation of a further number of data centre REITs, and this trend suggests that data centres continue to be seen as an investable real estate sector. Critics would argue that this could be short-lived given the increasing dominance of cloud and the number of facilities being constructed by the large cloud providers themselves. Importantly, in many cases such data centres are in remote locations where there is restricted or no sustainable colocation demand, or potential for other high-value uses. Furthermore, such cloud data centres do not benefit from underlying real estate-style contracts, or leases, as seen in colocation facilities, which ultimately create the secure income streams so prized by real estate investors. This means there is often limited scope for investment into such facilities from the real estate sector. However, the perceived threat of diminishing real estate investment into traditional colocation facilities is not
as bleak as some predict. There is still room for investment, irrespective of the emergence of cloud technology.
Data Centre Proposition
In order to understand the future for real estate investment into data centres, we must first delve into its attraction. As with any good asset, the potential for secure long-term income streams cannot be overstated. The difficulty of relocating once deployed, given the risks and challenges of data centre migration, together with the expense of the initial fit-out, means that large-scale end users typically enter into long leases or contracts. The main intention of this is to minimise the risk of having to relocate in the short term, with the knock-on effect being a perception of customer or tenant ‘stickiness’. Furthermore, in order to effectively forecast expenditure, annual rental payments are either subject to pre-determined periodic increases or linked to inflation. This also creates
investment appeal, as the rental income tends to go one way, which is up. Finally, the largest data centre space requirements were historically taken by global corporates including banks and financial institutions, providing extremely strong income security.
Is Cloud The End Of The Colocation Model?
The main question, often posed, is whether colocation will remain a sustainable data centre model going forward, or whether the impact of cloud will see the majority of companies’ IT infrastructure move to the growing number of facilities owned and operated by the cloud providers themselves. Current indicators and our own CBRE research suggest that, rather than threatening the colocation model, the growth of cloud is actually driving increasing colocation demand. While cloud companies continue to build large data centres in traditionally non-core locations, the need to have a presence in the established data centre markets, with high levels of connectivity and proximity to end users, appears to remain. In 2013, 68 per cent of cloud providers’ servers were housed in colocation data centres, up from 65 per cent in 2012. Even more tellingly, the proportion of servers located in their own data centres actually decreased from 39 per cent to 34 per cent over the same period. At this stage it’s impossible to predict to what extent, or for how long, this trend will continue, given the investment by cloud operators into colocation facilities over the last few years. That said, any reversal doesn’t appear particularly imminent.
Does Cloud Make Colocation Facilities Less Attractive Investments?
While cloud technology does give end users greater flexibility in terms of IT infrastructure, the actual cloud providers are far less mobile when it comes to the facilities they use. While the corporate and financial institutions that drove data centre growth during the 1990s and early 2000s took vast quantities of space, much of this remained unused. This resulted in a perceived weakening of income security and lower likelihood of lease renewal on expiry. Today, cloud providers are taking up colocation space in smaller initial quantities. However, given the growing demand for their services and potential for attracting new customers to facilities, once a cloud provider is established and has invested in a colocation facility, the risk of them vacating is significantly reduced and potential for future income growth increased.
The Future
With growing demand from cloud providers for, and continuing expansion within, colocation facilities, such assets can undoubtedly continue to provide long term secure income streams for investors. This, after all, was how real estate investors became attracted to the sector in the first place. While the nature and dynamics of the data centre market may be changing, real estate continues to have a pivotal role to play in the sector as a result of the growth of cloud.
DATA CENTRES
Maintaining Accurate Data with DCIM
Optimising IT By Darren Walsh, Business Development Manager at Temple, and Julie Mullins, Head of Marketing at Cormant, Inc.
Introduction
Getting Value from a DCIM Solution.
Tracking IT equipment in a data centre or multi-building campus has historically been an inaccurate, time consuming and difficult process. Managers now need to track more data than ever before, but since the information is often inaccurate it is less useful than required. An estimated 20 – 30 per cent of servers are unmanaged in most data centres due to poor physical process execution and multiple, disparate data sources that don’t allow for simple cross checking. Spreadsheets are the most common form of tracking IT equipment today, but are no longer effective for data centres with more than a few racks. For these data centres, managers are implementing data centre infrastructure management (DCIM) software to achieve increased efficiency and data accuracy with decreased cost and risk.
What is DCIM?
A DCIM solution consolidates data into a single repository to better visualise, manage and optimise equipment and
connections within the data centre. The consolidated data empowers users to make informed, accurate decisions as all teams work from the same information source. Ultimately, DCIM optimises IT capacity, increasing the data centre’s efficiency and reducing its cost per compute unit. However, DCIM is not a ‘magic bullet’: significant internal work must be completed to get value from a DCIM solution. The right DCIM vendor will act as a partner to help formulate a process framework for your DCIM success. DCIM is not a Building Management System (BMS) or an ITIL Configuration Management Database (CMDB). While many DCIM solutions interface with such systems, they do not typically replace them. Rather, DCIM creates a holistic view by aggregating information from systems such as the BMS and CMDB, as well as data from PDUs, UPSs and more.
Data Accuracy

Managing a data centre with accurate data versus inaccurate data could mean a cost saving of millions of euros. Consider some of the costs associated with having 20 – 30 per cent of all servers unmonitored:
1. Power costs: unmonitored/unused servers, or ‘zombie servers,’ are often powered on despite being unused. Companies have saved hundreds of thousands of euros in power and cooling energy in just one year by decommissioning zombie servers.
2. Server maintenance costs and fees: an average of €550 is saved per decommissioned server.
3. New server costs: accurate data provides insight into available and underutilised servers to maximise capacity and defer the estimated €3,000 cost of purchasing a new server.
4. Time: employee time is saved when a change is planned without a physical audit and when changes are executed quickly and accurately.
5. Opportunity costs: money wasted on resolving unexpected problems would be better spent on business improvements.
6. Cost of downtime: a poorly planned and executed change can result in costly downtime.
7. Security risks: security breaches cause financial and reputational risk. Unmonitored equipment can provide (unpatched) platform access to a company’s network, leaving it vulnerable to malicious attack.
A clear vision for future growth of your data centre can only be achieved with accurate data. The DCIM ecosystem helps improve and maintain accuracy through three principal steps:
• Gather data
• Integrate DCIM with operational processes
• Monitor and maintain data.
These steps are unattainable without first recognising the need for process change and having the will to implement that change.
Steps to Accurate Data

1) Gather Data
During implementation, data is acquired by importing current spreadsheets, discovering devices through SNMP or another network discovery method, pulling data from another system’s API, auditing physical infrastructure on a mobile handheld device or, most likely, some combination of the above. It’s important to recognise that multiple data sources can’t always be imported due to dissimilar data structures, so it may be necessary to use only one database as the main import. The most accurate process uses the DCIM solution to audit the infrastructure, cross-checking and updating the imported data for ultimate accuracy. When gathering data, only gather and record what is essential for your organisation’s needs today; a scalable DCIM solution allows you to add more data in the future.

2) Integrate DCIM with Operational Processes
Internal processes are either integrated directly into the DCIM solution or changed to better suit new goals. Although DCIM features vary by solution, look for one with the configurability to ensure new processes can be seamlessly aligned with current company standards.

3) Maintaining Data
Monitoring and maintaining data occur during regular use of the DCIM software. Maintaining accurate data takes process discipline, but the right DCIM tool helps to simplify the process. Data accuracy is cyclical: a structured change management process enables teams to use the procedure. When teams use the standardised process, data remains accurate. When data remains accurate, processes are easier and the natural cycle continues.

Top 5 Ways DCIM Helps Maintain Data Accuracy
It’s important to maintain and improve data accuracy to realise the projected benefits of the DCIM solution. DCIM will help maintain data accuracy in many ways, but the top five are listed here.
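The cross-check at the heart of step 1 can be illustrated with a minimal sketch: reconcile an imported spreadsheet against the set of devices found by network discovery, and flag the differences for a physical audit. The record format and device names are illustrative assumptions, not any particular DCIM product's data model.

```python
# Sketch of the step-1 cross-check: imported records vs discovered devices.
# Records and names are hypothetical.
spreadsheet = {
    "srv-001": {"rack": "A1", "u": 12},
    "srv-002": {"rack": "A1", "u": 14},
    "srv-003": {"rack": "B2", "u": 3},   # on record, but never seen on the network
}
discovered = {"srv-001", "srv-002", "srv-004"}  # e.g. via SNMP or an API pull

# On record but not discovered: candidates for a physical audit or
# decommissioning (possible zombie servers).
unverified = sorted(set(spreadsheet) - discovered)

# Discovered but not on record: unmanaged equipment to be documented.
undocumented = sorted(discovered - set(spreadsheet))

print("audit these:", unverified)       # ['srv-003']
print("document these:", undocumented)  # ['srv-004']
```

Two simple set differences are enough to surface both halves of the 20 – 30 per cent problem the article describes: equipment that exists only on paper, and equipment that exists only on the network.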
1) Visibility: DCIM consolidates scattered information, so all teams, both horizontally (i.e. IT vs. Facilities) and vertically (i.e. Management vs. Technicians), act on the same data. Changes only need to be recorded once and are instantly viewable to all users. This functionality enhances communication and reduces rework.
Tip: Look for a solution that combines analytics with alerts so you can be proactive in your decisions.

2) Configuration/Ease-of-Use: Intuitive DCIM solutions simplify the steps required for change management and prevent users from forgetting how to use the software. Configurable solutions provide the option to easily match existing company structure with the DCIM solution. A configurable, easy-to-use solution, such as one with a simple UI and open API, is more likely to be used, therefore maintaining accuracy and reducing the likelihood of errors.
Tip: Look for a DCIM tool with a small learning curve, since multiple employees will be using the software.

3) Mobility: Mobility enables timely database updates at the location of the change. Before DCIM, changes were recorded on a laptop, a piece of paper or even someone’s hand, only to be forgotten and never entered. Support for mobile devices differs drastically by solution. Options are available for limited Wi-Fi facilities, for users requiring full, seamless functionality across multiple devices, and for users requesting enhanced features such as barcode and RFID support for improved efficiency and accuracy during change management. A comprehensive mobile solution makes the change process as easy as two scans to instantly record the change, while almost eliminating the possibility of error.
Tip: The cost of a solution with full mobility is not necessarily greater than the cost of one without it, and is worth the savings in assured data accuracy.
4) Incremental Process Improvements: DCIM is a journey where process improvements are uncovered periodically, whether from users discovering features later in the journey or from vendors introducing new features. A powerful, process-enhancing feature in some DCIM solutions is Workflow. Workflow provides a comprehensive view of project status, with task lists to reduce costly assumptions and errors and confirm accurate completion. It keeps teams aligned and projects on schedule until completion.
Tip: Look for a DCIM tool that combines workflow with mobility for complementary process efficiency.

5) Creating an Ecosystem: Data remains accurate with DCIM by creating an ecosystem of data management from the previous four points. Processes and process improvements develop an efficient flow of information, which is augmented by workflow and mobility to maintain accurate data. Control of all required data empowers users to make informed decisions while reducing time, cost and risk during change management. Data is then integrated with other systems to continue the data accuracy cycle for a holistic view of your data centre.

Conclusion

Maintaining data accuracy is a cycle that follows the same thought process as good management. In management, if an employee is always assumed to be wrong, they will stop trying to be right. Management of data is similar. Once data is accurate, teams see the difference in results and are motivated to maintain the accuracy, particularly when the process for maintaining accuracy is easy. By combining ease-of-use and configurability with mobility, DCIM provides the ecosystem to maintain the same level of accuracy on day 1,000 as on day 1.
DIRECTORY
network infrastructure products
Cray Valley is a leading distributor of networking, cabling infrastructure and IP physical security products and prides itself on the innovative range in its portfolio. Its market-leading wireless LAN product from Extricom has a unique single wireless blanket, giving it a number of technical advantages unavailable to traditional cell-based wireless systems. With an innovative and comprehensive range of IP door access, IP cameras and IP environmental monitoring from Axxess ID, coupled with excellent technical back-up support across the range from leading manufacturers, Cray Valley offers a partnership of choice to its customers. This is complemented by a full range of high-speed RF and FSO links, with free manufacturer training courses available for all products. Our cabling infrastructure systems come from Siemon, Nexans and Matrix, well-respected global manufacturers, with a full range of Cat5e, Cat6, Cat6a, Cat7 and fibre. Cray Valley Communications Limited, Unit 11, Concorde Business Centre, Airport Industrial Estate, Westerham, Kent TN16 3YN, UK Tel: +44 1959 573444 Fax: +44 1959 572172 Web: www.crayvalleycomms.co.uk
network infrastructure products
Mills is a leading distributor of structured cabling, cable management and specialist tooling for the communications industry. With a stocked product range of over 4,000 lines, Mills is the one-stop shop for your cabling infrastructure requirements.
• Cabinets & Enclosures
• Structured Cabling
• Fibre Optics & Tooling
• Voice Products
• Active Products
• Coaxial and Audio Visual
• Power Distribution
• Trunking & Cable Management/Fixing
• Test Equipment
• Cable Preparation & Termination Tools
• Power Tools
• Contractors Tools & General Hand Tools
• Overhead & Underground Cabling Equipment
• Safety Equipment
• Tool Kits & Tool Cases
Mills is the premier distributor of the full Fusion structured cabling system range. Established over 90 years, Mills is an ISO9001 and Investors In People certified company. Free catalogue on request. Mills Ltd, 13 Fairway Drive, Fairway Industrial Estate, Greenford, Middlesex UB6 8PW, UK Tel: 020 8833 2626 Email: sales@millsltd.com Web: www.millsltd.com
network infrastructure products
Excel is a world-class premium performance end-to-end infrastructure solution – designed, manufactured, supported and delivered – without compromise. Excel is driven by a team of industry experts, ensuring the latest innovation and manufacturing capabilities are implemented to surpass industry standards for quality and performance, technical compliance and ease of installation and use. Since the brand was conceived in 1997, Excel has enjoyed formidable growth and is now reported in the latest BSRIA UK market report as the second-largest structured cabling brand, with a 17% share of the UK market in 2013. The system is also a growing force in markets across EMEA and is sold and supported in over 70 countries. Excel European Headquarters, Excel House, Junction Six Industrial Park, Electric Avenue, Birmingham B6 7JJ, UK Tel: +44 (0)121 326 7557 Email: sales@excel-networking.com Web: www.excel-networking.com
network infrastructure products
The Fusion Product range represents the outcome of two years of market research and focus groups to establish installers’ and users’ expectations for an end-to-end network cabling system. Altogether better because:
• Completely integrated - so everything fits together
• Cost effective - ensuring maximum return on investment
• Fast to install - every aspect of design optimised to save time
• Comprehensive range - providing a complete solution
• No excess packaging - save time opening packs and minimise impact on the environment
• 25 year warranty - providing peace of mind
The range covers Cat5e, Cat6, Fibre, Voice, Coaxial, Audio Visual, Cabinets & Enclosures and Cable Management.
Fusion, PO Box 556, Greenford UB6 9JS, UK Tel: 0845 370 4709 Email: sales@fusiondatacom.com Web: www.fusiondatacom.com
network infrastructure products
All The Eco Power Supplies You Will Ever Need
UPS - Generators - Batteries - PDUs
FREE UPS site surveys, power analysis and energy audits.
When power is critical, EcoPowerSupplies can keep your systems running and save you money. Our UPS systems run from 400VA to 1MVA and at up to 99% efficiency. They also feature advanced battery life extending technologies and modular scalability.
Sales 0800 210 0088 www.EcoPowerSupplies.com
EcoPowerSupplies.com is part of the Thamesgate Group www.thamesgate.com
WADSWORTH LTD
Established for over 50 years, Wadsworth provides leading brands with exceptional customer service to computer, telecoms and network cabling trade customers. We stock a comprehensive range with easy ordering for next day delivery:
• QUANTUM - Copper, Fibre, Trunking, Containment & Wall Boxes - 25 Year System Warranty
• HELLERMANN TYTON DATA - Premier Distributor
• PRISM ENCLOSURES - Premier Distributor
• DRAKA PRYSMIAN - Full copper with traditional and blown fibre
• TRIPP LITE - UPS, Intelligent PDUs and KVM
• NETGEAR - Switches and WLAN
• AUSTIN HUGHES - KVM
• SELENTIUM - Acoustic Soundproof Cabinets
• PATCHSEE - Traceable patch cords
WADSWORTH LTD, SUNBURY-ON-THAMES TW16 7HE T: 0844 844 44 44 F: 0844 844 10 10 E: sales@wadsworth.co.uk www.wadsworth.co.uk
active products
Austin Hughes solutions provide data centre managers and administrators with instant, secure, local and remote access control to mission-critical equipment. InfraPower: quality rack-mount power distribution and power monitoring solutions that help manage data centre power capacity, reduce downtime and energy costs and improve energy efficiency. Locally metered, remotely monitored and switched rack PDUs are designed for use across the network, either locally via SNMP or over IP. InfraSolution: enhance rack-level security and equipment efficiency by using remote rack IP door access with HID or MiFARE swipe card control, and temperature and humidity monitoring, including integrated monitored and switched rack PDUs. InfraGuard: an environmental solution providing smoke, vibration, water, door and side panel sensors, lamps, alarms and temperature and humidity monitoring. CyberView: our leading-edge LCD drawer and KVM (Keyboard, Video and Mouse) solutions provide the widest range, available on the shortest lead times in the European market today, whilst ensuring capital equipment and software management costs are kept to an absolute minimum.
Austin Hughes Europe, Unit 1, Chancery Gate Business Centre, Manor House Avenue, Southampton SO15 0AE, UK Tel: +44 2380 529303 Email: sales@austin-hughes.eu Web: www.austin-hughes.eu

network infrastructure products
Cablenet Trackmaster Ltd is an importer and distributor of networking, cabling and power products. As well as a wide range of imported copper and fibre optic cabling products and computer cables, Cablenet also distributes for a number of best-of-breed vendors. Cablenet has one of the UK’s widest ranges of copper patch cables in stock, with cables available in 11 different colours and lengths from 0.3m up to 30m, and also has an in-house manufacturing facility to produce cables to your own specifications; call our sales team on the contact details below for more information. Our sales staff are very knowledgeable about the products we sell, with particular expertise in cabinets, KVM and UPS. Our 18,000ft2 southern logistics centre is within an hour’s drive of central London and 30 minutes’ drive from Heathrow airport, making Cablenet an ideal partner for integrators and installers who serve the UK, international financial markets and overseas customers.
Cablenet Trackmasters Ltd, Cablenet House, 2A Albany Park, Frimley Road, Camberley, Surrey GU16 7PL, UK Tel: +44 1276 405 300 Fax: +44 1275 405 309 Email: sales@cablenet.co.uk

network infrastructure products
Brand-Rex is a leading global supplier of structured cabling systems for data networks, and a niche supplier of high-performance cables for extreme environment applications. Brand-Rex data communication solutions include high-performance copper and fibre optic cabling systems, a unique air-blown fibre system, high-density data centre cabinet systems and an intelligent infrastructure management solution. Through sophisticated modelling techniques, extensive research and advanced test laboratories, Brand-Rex designs, develops and manufactures some of the most advanced cable and connectivity solutions available on the market today. Brand-Rex has been manufacturing in the UK for almost 40 years and is one of Europe’s leading structured cabling providers. With a worldwide office network, Brand-Rex delivers international sales and technical support to an extensive global customer base.
Brand-Rex Head Office, Viewfield Industrial Estate, Glenrothes, Fife KY6 2RS, UK Tel: +44 1592 772124 Email: marketing@brand-rex.com Web: www.brand-rex.com
network infrastructure products Cannon Technologies is an international leader in the design and manufacture of IT infrastructure, from fully featured server racks, high-density cooling and power management to remote control systems, all under BSI ISO 9001:2008. Cannon Technologies has serviced some of the world’s leading organisations and is the ideal partner for challenging projects. Drawing on 35+ years of experience in the market, Cannon Technologies has launched a completely unique modular data centre solution that will dramatically alter the way everyone views modular build techniques. The design is based on existing, market-proven solutions and can be deployed in a fraction of the time required for traditional modular builds, offering a wide range of in-built features such as: power protection; power management; cooling; fire detection and suppression; environmental and security monitoring; low PUE. Cannon Technologies Ltd Queensway, New Milton Hampshire, BH25 5NU, UK Tel: +44 1425 632600 Email: sales@cannontech.co.uk Web: www.cannontech.co.uk
network infrastructure products Creating perfect connections is Metz Connect’s core competence. The personal commitment of the founding family characterizes the international success of this independent, medium-sized enterprise group, which together with its subsidiaries pursues the company’s goals with a high degree of responsibility. Highly innovative, efficient processes and partnerships have characterized the Metz Connect Group for decades. The company’s brands RIA Connect, BTR Netcom and MCQ Tech offer a diverse, innovative product portfolio of highly specialized connector components of the highest quality. Metz Connect Ottilienweg 9 78176 Blumberg, Germany Phone: +49 7702 5330 Fax: +49 7702 533 433 Email: sales@metz-connect.com Web: www.metz-connect.com
network infrastructure products
Established for 30 years, Comtec provides the trade with one of the most comprehensive product portfolios for building and maintaining communication networks. We stock everything from structured cabling and tooling to specialist fibre optic and copper test equipment and aim to deliver quality products at the lowest possible price, next day.
Visit our website today: www.comtecdirect.co.uk
• ADC KRONE premier distributor
• Nexans cabling solutions
• Cooper B-Line cabinets
• Over 5,000 product lines stocked
• Volume discounts
• FREE technical support
• Easy ordering by credit card or Trade Account
Orderphone: +44 1480 415400 Orderfax: +44 1480 454724 Email: sales@comtec-comms.com Web: www.comtecdirect.co.uk
Environ SR Racks: think big, think Environ SR. Designed to safely and easily accommodate the most demanding server and equipment technology, choose from two colours and sizes up to 47U high and 1200mm deep, then pack them with up to 1300kg of kit.
Want to save space, time and money?
Contact us +44 (0) 121 326 7557 sales@excel-networking.com www.excel-networking.com