WWW.DATACENTRENEWS.CO.UK
data centre news
May 2018
Protect your Ethernet
Chad Marak and Phillip Havens of Littelfuse examine the four key Ethernet threats board designers should be most wary of.
inside...
Meet Me Room Industry News
Special Feature
Design and Facilities Management
Digital Realty reveals what the UK’s data economy is actually worth
Neil Stobart of Cloudian discusses the issues he thinks will soon be dominating the data industry
data centre news
Editor Claire Fletcher claire.fletcher@allthingsmedialtd.com
Sales Director Ian Kitchener – 01634 673163 ian@allthingsmedialtd.com
Claire Fletcher, editor

So this is it, it’s finally happened. By the time this goes to press, GDPR will have landed. On the plus side, it may curb some of those annoying emails and targeted adverts. You know the ones, where you search for a cushion cover once and before you know it you’re drowning in emails for home furnishings. Sometimes I’m convinced I only have to think about something and a targeted ad appears. The other day I legitimately had a dream about buying a new sofa and woke up to the ads in all their creepy glory. And although GDPR won’t stop the conspiracy theories, it will certainly tighten the rules when it comes to unconsented communication.

On the not so plus side, the last time I checked, it would seem a lot of us still weren’t ready for the advent of GDPR. But in everyone’s defence, it’s been a confusing business and there has been a great deal of conflicting advice. These teething pains are understandable, but organisations that have not adapted are without a doubt likely to suffer. One thing’s for certain though: companies are now legally obligated to clearly inform us about why they are collecting our personal data, how it’s going to be used and who they intend on sharing it with. Overall, this should make our personal data safer and less likely to fall into the hands of those with malicious intent.

Although GDPR is a piece of EU law, the government has made it clear that despite Brexit, the UK will remain signed up after the event, so even that doesn’t get us out of it. Then again, if we suddenly watered down our data protection laws post-Brexit, this may be somewhat frowned upon by those remaining in the EU, and would certainly have a negative impact on trade, so it’s probably best we stick with it.

GDPR aside, I hope everyone had a smashing bank holiday in the glorious and incredibly uncharacteristic sunshine, and if you were stuck indoors, or serving the bank holiday revellers, I salute you, not all heroes wear capes.

Thank you for reading this month’s issue, and if you have any comments, questions or opinions on the topics discussed, please write to: claire.fletcher@allthingsmedialtd.com

*This is actually my last issue as editor of DCN, as I am moving on to pastures new. I would like to thank everyone on the team, our contributors and everyone else I’ve met along the way, you’ve all helped make my time on the mag a blast.
@DCNMag www.datacentrenews.co.uk
Studio Manager Ben Bristow – 01634 673163 ben@allthingsmedialtd.com
EDITORIAL COORDINATOR Jordan O’Brien – 01634 673163 jordan@allthingsmedialtd.com
Designer Jon Appleton jon@allthingsmedialtd.com
Business Support Administrator Carol Gylby – 01634 673163 carol@allthingsmedialtd.com
Managing Director David Kitchener – 01634 673163 david@allthingsmedialtd.com
Accounts 01634 673163 susan@allthingsmedialtd.com
Suite 14, 6-8 Revenge Road, Lordswood, Kent ME5 8UD T: +44 (0)1634 673163 F: +44 (0)1634 673173
The editor and publishers do not necessarily agree with the views expressed by contributors, nor do they accept responsibility for any errors in the transmission of the subject matter in this publication. In all matters the editor’s decision is final. Editorial contributions to DCN are welcomed, and the editor reserves the right to alter or abridge text prior to publication. © Copyright 2018. All rights reserved.
contents
in this issue… May 2018
Regulars
03 Welcome
GDPR has landed.
06 Industry News
Digital Realty reveals what the UK’s data economy is actually worth.
12 Centre of Attention
Patrick Lastennet of Interxion explains why data centres are a critical link in the blockchain.
14 Meet Me Room
Neil Stobart of Cloudian discusses the issues he thinks will soon be dominating the data centre industry.
features
20 Case Study
Matthew Fuller of ABM Critical Solutions discusses his latest project and why DCIM is essential when it comes to e-tailing.
46 Projects and Agreements
UK government selects Plexal to create London’s new Cyber Innovation Centre at Olympic Park.
52 Company Showcase
The next innovation in air handling units from Weatherite.
54 Final Thought
Chris Adams of Park Place Technologies discusses the rise of data residency regulations.
32 Big Data
Darren Watkins of Virtus Data Centres discusses why you should go back to basics when it comes to making Big Data work for your business.
34 Hardware
Steve Grady of Equus Compute Solutions examines how to optimise your hardware in a software-defined data centre.
38 Edge Computing
Tobi Knaup of Mesosphere explains how abstraction and automation could help enable the next wave of distributed computing.
42 Software Lifecycle Development
Phil Bindley of The Bunker discusses integrating security into the software development lifecycle.
SPECIAL FEATURE: Design and Facilities Management
22 Falk Weinreich of Colt Communication Services gives us the five key questions you should be asking when choosing a data centre provider.
24 Jon Leppard of Future Facilities discusses how to get data centre consolidation down to a fine art.
26 Phil Smith of NGD gives us some insight into the company’s best practices when it comes to successful design and facilities management.
30 Ochea Ikpa of Technimove shares his key considerations when migrating from an on-premises infrastructure, to a fully managed off-premises cloud solution.
industry news
IT storage splurge: Spending increases by 25%, yet overall cloud adoption suffers

Spending on IT storage has increased by nearly 25% in the past four years and these expenditures are forecast to increase an additional 6% by 2019. However, companies are battling both high barriers to investment and a lack of testing opportunities for their applications, which is causing uncertainty when it comes to total cloud adoption.

To help companies reduce initial investment to test innovation, measure the impact of service delivery and accelerate time-to-market deployment, e-shelter, a data centre operator in Europe and an NTT Communications group company, has launched its first Innovation Lab in Frankfurt. The e-shelter Innovation Lab, located at e-shelter’s Frankfurt 1 data centre campus, provides a setting for customers, no matter where they are based, to test and validate distributed cloud architectures and disruptive technologies in a real-world production environment faster, with greater flexibility and lower up-front investment. It also presents customers with a ‘try before you buy’ scenario, giving business leaders the opportunity to prove the technology being tested before they have to make any significant investment.

More than 60 partners have already implemented test environments in the e-shelter Innovation Lab. Mirantis OpenStack and Canonical were two of the first companies to demonstrate their cloud offerings, and other industry-leading service providers have since joined with additional end users.
e-shelter, e-shelter.de

Data protection: SCCs in jeopardy

It has been estimated that as many as 88% of EU companies who transfer personal data to third countries outside the European Economic Area do so under the protection of Standard Contractual Clauses (SCCs) – contractual forms approved by the EU Commission as offering adequate protection to the individuals whose data it is. Now, thanks to an Irish data protection case, this entire system may be in jeopardy.

Back in October 2017, the Irish High Court found, in a case brought against Facebook Ireland by privacy campaigner Max Schrems, that there was “mass indiscriminate processing of data by the United States government agencies, whether this is described as mass or targeted surveillance.” Schrems’ original campaign had been against the Privacy Shield, the self-certification mechanism which had replaced the ‘Safe Harbor’ provisions that had fallen victim to an earlier Schrems challenge. However, the questions which the judge has now referred to the Court of Justice of the European Union (CJEU) also call into question the legal basis of SCCs.

Depending on how the CJEU rules on these, it may prevent companies such as Facebook from passing personal data to their US parent companies at all, and cast into jeopardy a wide spread of cloud-processing arrangements. The outcome of this referral will be awaited with great interest.
Clarke Willmott, clarkewillmott.com
Global IT services contracts value continued to decline in 2017, finds GlobalData

IT services deals experienced a steep decline in 2017, both in terms of the number of deals and total contract value (TCV), compared to both 2016 and 2015. While the TCV witnessed a significant annual decline of 33.3% in 2017 to reach a value of $61.4bn, the number of deals announced (4,099) saw a considerable decrease of 25.6% in 2017 compared to 2016, according to GlobalData. The average contract value also took a beating in 2017 compared to the previous two years. However, the average contract duration experienced a slight increase (2%) in 2017 compared to 2016, which shows that companies are still willing to enter into long term contracts with IT services providers. Application outsourcing contracts were at the forefront of the total number of deals signed in 2017, accounting for 38.4%. With respect to the TCV of the deals, infrastructure outsourcing contracts dominated the IT service contracts with a TCV of $31.8bn. North America led in terms of the TCV of the contracts announced in the infrastructure outsourcing segment, with a TCV of $16bn, followed by Europe with a TCV of $10.6bn.
GlobalData, globaldata.com
Each new data centre adds up to £436 million per year to UK economy

In the most comprehensive, first-of-its-kind look at the contribution that data provides to the UK economy, an independent report commissioned by Digital Realty reveals that the UK’s data economy is currently worth £73.3bn annually. The Data Economy Report also highlights the continued contribution data brings, with growth (7.3%) outstripping the wider economy (1.8%). This growth is powered by the UK’s data centre industry – the industry creates between £291m and £320m in value every year from each data centre, with the range even higher for new data centres: £397-£436m in extra annual value from each new data centre.

Data centres create this value by providing and managing the infrastructure, connectivity and services that underpin success across the full range of economic activity. This includes not only IT and financial services, such as powering high-speed trading platforms and cloud storage services, but also other sectors such as agriculture, where data allows more precise use of pesticides, better adaptation to weather trends and automation such as drones to survey crops.

Investment in the data centre foundations which enable all this is essential for the future prosperity of British businesses and the economy. The £6.2bn added value that data centres create demonstrates the rewards to be won by businesses investing in their data infrastructure.
Digital Realty, digitalrealty.com
Bitglass: 85% of organisations unable to identify anomalous behaviour across cloud applications

Bitglass has announced the findings of its ‘Cloud Hard 2018: Security with a Vengeance’ report, which features survey insights from over 570 cybersecurity and IT professionals on their approach to cloud security. Bitglass CMO Rich Campagna says, “Enterprise security teams are concerned about the next generation of cloud threats that pose a risk to corporate data. There has already been immense progress in the past five years as security personnel come to the realisation that legacy security tools and processes are not enough to secure their ever-changing ecosystem.” When asked about the biggest security threats to their organisation, most cited misconfigurations (62%), similar to the numerous AWS S3 leaks over the past year, followed by unauthorised access (55%). 39% said external sharing was the most critical threat, while 26% highlighted malware and ransomware. Less than half surveyed (44%) have visibility into external sharing and DLP policy violations, and only 15% of organisations can see anomalous behaviour across apps. While 78% have visibility into user logins, only 58% have visibility into file downloads and 56% into file uploads.
To protect mobile data, 38% of organisations install agents and 24% use a trusted device model, where only provisioned corporate-owned devices are allowed access to company systems. 69% of organisations rely solely on endpoint solutions for malware protection, tools which cannot detect or block malware at rest in the cloud or employees’ BYO devices. Bitglass, bitglass.com
Gartner forecasts worldwide public cloud revenue to grow 21.4% in 2018

The worldwide public cloud services market is projected to grow 21.4% in 2018 to total $186.4bn, up from $153.5bn in 2017, according to Gartner. The fastest-growing segment of the market is cloud system infrastructure services (infrastructure as a service or IaaS), which is forecast to grow 35.9% in 2018 to reach $40.8bn (see Table 1). Gartner expects the top 10 providers to account for nearly 70% of the IaaS market by 2021, up from 50% in 2016. “The increasing dominance of the hyperscale IaaS providers creates both
enormous opportunities and challenges for end users and other market participants,” says Sid Nag, research director at Gartner. Software as a service (SaaS) remains the largest segment of the cloud market, with revenue expected to grow 22.2% to reach $73.6bn in 2018. Gartner expects SaaS to reach 45% of total application software spending by 2021. Within the platform as a service (PaaS) category, the fastest-growing segment is database platform as a service (dbPaaS),
expected to reach almost $10bn by 2021. Hyperscale cloud providers are increasing the range of services they offer to include dbPaaS. Although public cloud revenue is growing more strongly than initially forecast, Gartner still expects growth rates to stabilise from 2018 onward, reflecting the increasingly mainstream status and maturity that public cloud services will gain within a wider IT spending mix. Gartner, gartner.com
Table 1: Worldwide public cloud service revenue forecast ($bn)

                                                    | 2017  | 2018  | 2019  | 2020  | 2021
Cloud business process services (BPaaS)             | 42.6  | 46.4  | 50.1  | 54.1  | 58.4
Cloud application infrastructure services (PaaS)    | 11.9  | 15.0  | 18.6  | 22.7  | 27.3
Cloud application services (SaaS)                   | 60.2  | 73.6  | 87.2  | 101.9 | 117.7
Cloud management and security services              | 8.7   | 10.5  | 12.3  | 14.1  | 16.1
Cloud system infrastructure services (IaaS)         | 30.0  | 40.8  | 52.9  | 67.4  | 83.5
Total market                                        | 153.5 | 186.4 | 221.1 | 260.2 | 302.5
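As a quick consistency check (the arithmetic below is an editorial addition, using only the rounded figures in Table 1, not part of Gartner’s published analysis), the headline growth rates quoted above follow directly from the 2017 and 2018 columns:

$$
\frac{186.4}{153.5}-1 \approx 21.4\% \;(\text{total market}),\qquad
\frac{40.8}{30.0}-1 \approx 36\% \;(\text{IaaS}),\qquad
\frac{73.6}{60.2}-1 \approx 22.3\% \;(\text{SaaS})
$$

The small differences from the quoted 35.9% and 22.2% are simply rounding in the table.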
Host in Ireland updates its report demonstrating the sustainability of the Irish data industry

Host in Ireland, a strategic global initiative created to increase awareness of the benefits of hosting digital assets in Ireland, and winner of the Datacloud Europe 2016 award for Innovative Data Centre Location, has announced the release of a new update to its report, Ireland’s Data Hosting Industry 2018 Q1 Update, in collaboration with Bitpower. Updating existing baseline statistics and information, the report examines the opportunities and risks associated with the digital asset hosting industry in Ireland. With a thriving digital economy, the country serves as home to some of the biggest names in the tech industry, exporting $71bn of ICT services in 2016 alone. This new update reveals that of the four types of existing data centres in Ireland, including hyperscale, colocation wholesale, colocation and private, hyperscale facilities dominate the market with 74% of the total MW capacity, which has grown from 420 to 480 MW in the first quarter of 2018. In addition, the report predicts that just over €1.1bn will be invested in Irish data centre construction in 2018, reaching cumulative investment of €9bn by 2021. “By providing the industry with the most timely and accurate update on data centre activity throughout Ireland, we are helping to materialise the goal of creating a ‘Connected Planet,’ enabling industry stakeholders to access the necessary information to continue investing in the Irish digital economy,” explains David McAuley, founder and CEO, Bitpower, and Host in Ireland advisory council member.
Host in Ireland, hostinireland.com
Increasingly aggressive malware driving IT professionals to re-examine backup strategies

Asigra has highlighted the 2018 Breach Briefing, a new report by Beazley Breach Response (BBR) Services which found that the threat from ransomware is far from over. In defence of business continuity across all impacted industries, Asigra is calling for organisations to review their backup policies and double down on redundancy, so that multiple remote copies of mission-critical backups are available when the next attack occurs. Data protection specialists agree that the number one strategy for recovering data lost to criminal encryption is through a reliable data recovery strategy. However, with strains of ransomware now targeting backup data, organisations must take extra steps to ensure their backup data is clean before conducting a recovery. Regardless of the backup platform used, a redundant data protection strategy should be employed to ensure an effective recovery. This approach requires that multiple copies of the company’s mission-critical data are created. These backup sets should be stored on multiple media formats, such as secondary disk storage or the cloud, with at least one of the backup data sets stored in an offsite location. Once in place, data policies should also be enhanced to include more regular test recoveries to determine the effectiveness, quality and speed of the recovery.
Asigra, asigra.com
SolarWinds MSP research: Cybersecurity awareness doesn’t fuel better preparation

SolarWinds MSP has released new research on senior security executives’ awareness of and readiness for increased malware and ransomware threats. The study, commissioned with the Ponemon Institute, asked 202 senior-level security executives in the US and UK about emerging security threats. Specifically, the study addressed those propagated by the ‘Vault 7’ leaks, and the more massive global WannaCry and Petya ransomware attacks. Most respondents did not think their organisation had the budget or technology to deal with cybersecurity threats. Just 45% said that they had the technology to prevent, detect, and contain cybersecurity threats, while only 47% felt that they had enough budget to cope. A majority (54%) of security executives admitted that their business had experienced an attack in the last year. Of those, almost half (47%) had been unable to prevent the attack. The results of these successful cyberattacks included the theft of data assets (52%), disruption to business processes (47%), and IT downtime (41%). The survey also revealed that businesses do not feel prepared to prevent attacks, with 29% citing that they would be unable to prevent a Petya attack and 28% that they would be unable to prevent a WannaCry attack. Another key finding was the lack of remediation: 44% of respondents who were aware of the WannaCry patch didn’t implement it and 55% didn’t patch for Petya.
SolarWinds MSP, solarwindsmsp.com
on the cover
Protect your Ethernet
Chad Marak and Phillip Havens of Littelfuse examine the four key Ethernet threats board designers should be most wary of.
Board designers often use TVS diode arrays to provide protection for an Ethernet port. In many cases the designer uses protection to maintain equipment reliability against four main threats:
• Lightning induced surges (IEC61000-4-5, GR-1089, ITU)
• ESD or Electrostatic Discharge (IEC61000-4-2)
• EFT or Electrical Fast Transient (IEC61000-4-4)
• CDE or Cable Discharge Event
Lightning induced surges
Depending on the standard or regulations being adhered to, lightning surges can be differential or common-mode with varying waveshapes. In differential mode two conductors or pins (i.e. J1 and J2) are connected between the positive and negative test equipment terminals so the energy inserted at the RJ-45 port
appears only between these two conductors (see Figure 1). The energy will be dissipated in the line-side protection device, shown here as Littelfuse’s LC03 Series TVS diode arrays, but some of the energy will also pass into the transformer, creating a differential event on the driver side of the transformer, or between the Tx+ and Tx- data lines in this example. For common-mode testing, the individual conductors or data lines themselves will be tested with respect to GND. The positive end of the test equipment will connect to all of the conductors or pins (i.e. J1, J2, J3, and J6) and the negative terminal will be tied to GND (see Figure 1). In this case, very little energy will be dissipated in the LC03, assuming the line impedances are closely matched. The majority of the energy will be capacitively coupled through the transformer’s magnetics to the driver side of the transformer, appearing as a common-mode event to the Ethernet PHY.
Electrostatic Discharge (ESD)
Testing equipment for immunity to ESD (per the IEC61000-4-2 standard) can be conducted via contact or air discharge. There are numerous methods to inject ESD, but in all cases the ESD pulses appear as common-mode events to the circuit, as the discharged energy is referenced to GND.
Electrical Fast Transient (EFT)
Testing equipment for immunity to EFT (per the IEC61000-4-4 standard) is very similar to the testing done for common-mode lightning surges. In the more typical configuration shown in Figure 2, all the conductors (or pins) are capacitively coupled to the positive terminal of the test generator and “surged” with respect to GND. If the data lines are well balanced,
there will be little to no differential energy between the pairs, but again, the transformer’s coupling capacitance will transfer the common-mode energy to the driver side albeit at a reduced level.
Cable Discharge Event (CDE)
CDE is a phenomenon that should be differentiated and considered separately from electrostatic discharge (ESD). The characteristics of a twisted-pair cable and knowledge of its environment play an important role in understanding CDE. The frequently changing cable environment also adds to the challenge of preventing CDE damage. A system designer can maximise protection against CDE through good layout practices and a careful selection of components.

The IEEE 802.3 standard calls out isolation voltages of 2250 VDC and 1500 VAC to prevent connector failures that can be caused by the high voltages generated from CDEs. To prevent arcing during these events, these isolation requirements apply to the RJ-45 connector as well as to the isolation transformers. To prevent dielectric breakdown and sparking on the circuit board, the line-side printed circuit board traces and the ground should have sufficient creepage and clearance between them. Lab tests have shown that to withstand 2000V of transient voltage, the FR4 circuit board trace spacing should have a separation of at least 250 mils.

The UTP cable discharge event can be as high as a few thousand volts and can be very destructive. The charge accumulation comes from two main sources: the triboelectric (friction) effect and the electromagnetic induction effect.
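For reference, here is a conversion added editorially (it is simple arithmetic on the figures quoted above, not an additional test result): 250 mils is a quarter of an inch, so the lab finding corresponds to a withstand capability of roughly

$$
250~\text{mils} = 6.35~\text{mm},\qquad \frac{2000~\text{V}}{6.35~\text{mm}} \approx 315~\text{V/mm} \;(= 8~\text{V/mil})
$$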
Figure 1
Figure 2
Pulling a PVC-covered CAT5 UTP cable across a nylon carpet, for example, can cause charge to build up on the cable. In a similar way, charge can also build up when the cable is pulled through a conduit or dragged across other network cables. This charge build-up is similar to that from scuffing feet across a carpet. The build-up only occurs when the cable is un-terminated and the charge is not immediately dissipated (i.e. both ends of the cable are not plugged into a system). Also, the accumulated charge has to be retained in order to cause substantial damage. The newer CAT5 and CAT6 cables have very low dielectric leakage and tend to retain charge for a long period. Charge retention time is increased in
an environment where there is low relative humidity.

When a charged UTP cable is plugged into an RJ-45 network port, there are many possible discharge paths. The transient current takes the lowest-inductance path, which could be at the RJ-45 connector, between two traces of a printed circuit board (PCB), in the transformer, through the Bob Smith AC termination, or through the silicon device. Depending on the length of the cable, the accumulated charge can be a hundred times larger than a typical ESD model charge. This ensuing high-energy discharge may damage the connector, the transformer circuit, or the Ethernet transceiver.

The twisted-pair cable behaves like a capacitor by storing a charge. Studies have proven that several hundred volts of charge can accumulate on an unterminated twisted-pair cable. Plus, a fully discharged cable can build up half of its potential charge within one hour. Once charged, a high-grade cable can retain most of its charge for more than 24 hours. Because longer cables have the capacity to store more charge, extra CDE precautions should be taken with systems that have cable lengths greater than 60m.
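To put rough numbers on this, here is an illustrative estimate added editorially (the cable capacitance of around 50 pF per metre and the 1 kV charge voltage are assumed typical values, not figures from the article). For a 60 m unterminated cable:

$$
C \approx 50~\tfrac{\text{pF}}{\text{m}} \times 60~\text{m} = 3~\text{nF},\qquad
Q = CV = 3~\text{nF} \times 1~\text{kV} = 3~\mu\text{C},\qquad
E = \tfrac{1}{2}CV^{2} \approx 1.5~\text{mJ}
$$

For comparison, the 100 pF human body model used in ESD testing stores only 0.1 µC at the same voltage, which is why a cable discharge can be far more destructive than a typical ESD event, and why the precautions above scale with cable length.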
Conclusion
Taking the above into consideration, when using TVS diode arrays to protect an Ethernet port, the designer should always be wary of the threats he or she is protecting against. In almost all cases, the threats are a combination of differential and common-mode events that can be effectively clamped when the protection device is connected properly.
Littelfuse, littelfuse.com
centre of attention
Linked In
Patrick Lastennet, director of marketing and business development, financial services segment at Interxion, explains why data centres are a critical link in the blockchain.
For the financial sector, IT is no longer simply a business necessity – it is the business. In the face of unstoppable digital innovation, agile IT is essential to support new strategies, keep pace with the competition and harness emerging opportunities. Today, the pace of change is faster than ever. New technologies are disrupting every aspect of the financial sector. For high street banks, customer satisfaction now turns on 24/7 online banking,
as well as seamless mobile and contactless payments. In high frequency trading, low latency connectivity is the lifeblood of profitability – with even microseconds creating competitive advantage. Meanwhile, a glut of agile FinTech companies – like PayPal, Apple Pay and Oscar – are rewriting the rule book on lending, payments, insurance and many other financial services. Nine-in-ten (88%) financial industry incumbents are now concerned they’re losing revenue to these agile innovators.
A chain reaction
The latest digital innovations are even redefining long-established concepts like money, with the emergence of Bitcoin, Ethereum and other cryptocurrencies into the mainstream. Bitcoin now boasts 12,000 transactions per hour, across 96 countries worldwide. Powered by blockchain technology – essentially an incorruptible digital ledger distributed across the internet – cryptocurrencies present both
substantial risks and enormous opportunities to the financial sector. Cryptocurrencies make it possible for individuals and businesses to transact securely, independent of traditional banks. Writing for BNP Paribas, analyst Johann Palychata states that the underlying blockchain technology powering cryptocurrencies ‘should be considered an invention like the steam or combustion engine’ and has the potential to transform the entire financial sector and wash away established business models.
While blockchain technology is certainly a source of business uncertainty, the financial sector is also seizing its revolutionary capabilities to make clearing and settlement faster and cheaper. Financial transactions which would typically take several business days can now be completed in just seconds. Many of the world’s largest firms, including Bank of America Merrill Lynch, the Royal Bank of Canada and Banco Santander, are already collaborating on the Global Payments Steering Group – harnessing the blockchain for interbank global payments. According to PwC, three-quarters (77%) of financial sector incumbents will adopt blockchain as part of their systems or processes by 2020. Given blockchain’s potential to slash bank infrastructure costs by 30% and drive savings of up to $12bn per year, it’s no surprise that the race to innovate is on.
Agility is essential
Capitalising on the many potential applications of blockchain technology demands that financial firms innovate at scale and speed, with the freedom to experiment and collaborate easily. As such, the data centres supporting firms with connectivity, processing and storage must also be agile.
Colocation strategies are well suited to supporting this kind of rapid innovation. For instance, carrier-neutral facilities enable financial firms to pick and choose the connectivity options that best suit their business needs – whether their decisions are based on cost, service performance, resilience, or a host of other factors. By colocating, firms can also benefit from private cross-connects with other members of the financial community sharing the same facility. With a range of new services and skills within easy reach, any financial firm can dramatically boost its ability to adapt and add fresh capabilities. With more than four-fifths (82%) of financial sector incumbents expecting to increase their number of FinTech partnerships in the next three to five years to keep pace with change, colocation offers the perfect environment for collaborative innovation.

Of course, access to nimble cloud-based resources is also essential for firms embracing innovation. By colocating at a facility that brings instant access to a multi-cloud environment, organisations can reduce infrastructure costs, flexibly select the right cloud for the right workload, increase availability and reduce latency. Colocated hybrid cloud also leaves firms well placed to embrace a fully-optimised, mixed IT setup that combines on-premise, colocated, and cloud-based resources. More fertile ground for technology-led business innovation would be hard to imagine.
Interxion, interxion.com
meet me room
Neil Stobart – Cloudian
Neil Stobart, VP of Global Systems Engineering at Cloudian, talks company news, career motivation, and the issues he feels will soon be dominating the data centre industry.

What were you doing before you joined Cloudian?
I studied Economics at university and had no idea what to do when I finished – a story we’ve all heard before. I had various jobs doing bar work and telecoms sales, but none were particularly fulfilling. I eventually successfully applied for an IT graduate scheme at Woolwich Building Society in ’96, when the IT industry was in an early development stage, worlds away from the industry today. It was a great opportunity to experience various disciplines, but I ended up architecting, designing and deploying call centre IT infrastructure.
From there, I took my experience and moved on to pre-sales consultancy, and that’s where it all really began.

What projects are you currently working on for Cloudian?
Cloudian recently made its first ever acquisition, of Milan-based Infinity Storage, so a lot of my time at the moment is dedicated to integrating the company’s software and team. To help this along, I am building training programmes and identifying the best use cases for our new integrated technology.
What are Cloudian’s aims for the next 12 months?
As well as continuing to grow and expand our customer base following our recent funding round, the company remains an advocate for cutting-edge technology. Our team in Japan are currently working on an intelligent data storage device which is completely weatherproof and sits outside, close to the data source, such as a CCTV camera. All the data can be processed and analysed in the box before being sent back to a centralised storage platform – very much like intelligent edge computing.
What is the main motivation in the work that you do?
It’s great to be able to use my experience and knowledge to help customers overcome challenges and become successful in their own careers – I would say this is probably the most satisfying part of my job. I believe in treating people as you would like to be treated – happy customers are loyal customers. Fostering this mentality is very important to me and is a driver to deliver fantastic customer service.

Which major issues do you see dominating the data centre industry over the next 12 months?
AI, machine learning and data analytics are going to be the biggest issues dominating headlines for the next few years. AI sounds frightening to some people, but it’s a great tool that is going to become more and more useful in all areas of life and will allow us to do some really smart things in the future. The question we should be asking ourselves is ‘how do we make an intelligent decision based on the data that we have, and create an action based on the outcome’. To achieve this, we need a storage platform that can be integrated seamlessly into a continuous workflow that doesn’t just act as a storage ‘dump’. The solution has to be intelligent and have tentacles that can reach into all aspects of the infrastructure. That way, when data is received in one area, a notification can be sent to action it in other areas of the business workflow.

How would you encourage a school leaver to get involved in the industry?
Everyone in the IT industry needs a foundational knowledge base that underpins all of the work we do. But technology is constantly evolving, and that means whether you have been in the industry for a week or 20 years, we all constantly need to learn and develop to stay relevant. There are no barriers in the IT industry for different genders, races or any other background. It’s all about imagination and understanding technological challenges. I was lazy at first, and now I’m running the global team for a Silicon Valley company – an achievement I never dreamed of at school! In the end, once you have the foundational knowledge, it’s simply about working hard and having an inquisitive mind.

Looking back at your career so far, is there anything you would have done differently?
In fact no, I can happily look back at my career in the knowledge that there isn’t anything I would have done differently. I love the IT industry for its fast pace and constant evolution. I made the move to storage early on, at a time when people just didn’t know that data growth would become the hot topic it is today. It was simply luck rather than judgment that led me to the career I am in, and I have made a success of it by ensuring I work with the new technology and staying current.

What is the best piece of advice you have ever been given?
I don’t know if I can remember a specific piece of advice that I have been given. However, when I was younger, I was pretty unreliable – something that was probably lifestyle driven because, at that age, we don’t always take things seriously. But looking back, I would change my attitude if I could. Now the best piece of advice I would like to have been given is work hard first, play hard second – that’s exactly the attitude we like to adopt at Cloudian.

What are your hobbies and interests outside of work?
I absolutely love my music and we aim to go to at least one festival every year, but it’s not just a hobby outside of work! I like to DJ for myself in my little office at the end of the garden as well, it’s something I’ve enjoyed doing for years.

A lot of Neil’s spare time is spent DJing

What is your most treasured possession?
Keeping within the theme of music, I recently bought myself a classic hi-fi turntable, which was released the year I was born. I absolutely love it.

A classic hi-fi turntable, Neil’s latest purchase
company profile
All The Right Moves
DCN spoke with Ochea Ikpa, managing director of Technimove, specialist and market leader in migration, transition and transformation of critical infrastructure environments, to find out what sets the company apart from the competition.
Celebrating 20 years of business this year, Technimove has built its reputation on delivering high-quality migration services by offering what the company feels is a level of peace of mind unavailable elsewhere in the market. When undertaking migrations and transformations of digital infrastructure, Ochea says no other company can deliver the level of control, expertise and accountability available from Technimove.
Technimove was founded in January of 1998, with Ochea setting the company up straight out of university. “We were started to meet a requirement for London Electricity (LE),” explains Ochea. “For the first eight years of the business, LE was our largest client. The company has steadily grown almost every year. On any given week, we can be undertaking projects in the UK, Europe and the US for a variety of clients.”
Ochea adds, “We operate globally and currently control around 50% of the European market. This is because we offer a complete service from project management, to the physical move itself and cable installation. “The key is that we are specialists in this field, other companies might offer to move your equipment, but it will often be just one part of a whole range of moving services on offer, not a dedicated expert service like ours,
we value your equipment, just as much as you do.” Control is a theme which crops up a lot in the company’s approach. Everything in the process is either owned or operated directly by Technimove. All the staff are direct employees, the equipment including the fleet of air ride Mercedes are all owned by the company. In controlling every element of the move, Technimove looks to guarantee a pain free and successful experience. Ochea enthuses, “We are a one call, low risk, solution. Servers and related equipment are amongst the most valuable assets a company can have. What we do is free companies from the risk and allow them the freedom to make the best decisions for their assets. Often companies will avoid relocation even when it is by far the best option because of the perceived danger involved, we take that fear away.”
The complete service
The company’s dedicated, thorough approach begins right from the initial inquiry, through the project migration life cycle and often beyond that too. Ochea explains, “Right from the off we send staff onsite to scope the project. If successful, we will lead with project management services, and any other consultancy services that are needed.

“We will audit the client environment at an application level. We also audit the devices, cable connections and power draw, amongst other things. We will then design the client’s infrastructure and new data centre layout, by way of size, type, alignment and number of racks, structured cabling, enclosures and any other requirements needed.

“We pre-cable the client’s new data centre location, with both structured cabling and patching. Next step is to shut the equipment
down in its existing location, remove all cabling, de-rack, pack, move, re-rack, re-cable and power up. We then re-establish connectivity of all devices, inclusive of storage equipment. All of this is undertaken whilst providing full insurance for each and every migration, so again the client has complete peace of mind and can concentrate on their day-to-day business.”

Early engagement is one of the most important factors in future-proofing for successful transformations of critical infrastructure environments. Ochea describes this process: “We engage with the customer at key and varying levels to ascertain what success looks like for the business, the customer and key stakeholders; and we begin with the end in mind. Our consultative approach undertakes a deep-dive discovery and analysis across all of the critical infrastructure that is in scope, as well as those dependencies from the core to the perimeter and interconnected (Internet of Things – IoT) business applications. Programme and project management services can then be aligned to deliver the desired outcome, while the customer
remains focussed on their ‘live production’ business operations.”

Expanding on what he feels are the key differences between competitors and Technimove, Ochea underlines, “We focus on quality of service to the extreme. Our service levels are exceptionally high. We believe that this is what is needed when moving clients’ critical environments. We ask ourselves ‘why would you settle for anything less?’

“The other key differentiator is that we have all the expertise needed to complete a project in-house, as opposed to outsourcing, like our competitors do. We believe, when you outsource, it weakens you. We also have great experience and a huge amount of our work is won through recommendation; our clients are often our biggest sources of new contracts.”
Delivering on the detail
Technimove’s abilities are widely recognised across the IT sector, with companies such as IBM, HPE and Fujitsu, and enterprise-class data centre providers like Equinix, Ark, Interxion, Virtus, Cyxtera and Global Switch, all acting as resellers for the company’s services direct to their customers.

Just one example of a recent success story came when the company took on a project for Pokerstars, styled as the world’s largest online poker platform. The project involved 750 racked devices and 12 EMC Symmetrix racks, all moving from Guernsey to the Isle of Man. The client’s preferred option was to fly the equipment, so Technimove provided a fully managed service, which included chartering three planes and negotiating landing times at both airports. Ochea explains, “We had a week to complete the migration, which included all of the patching, which of course needed to be perfect. We actually ended up completing the project in just five days.”

Another recent project was for a major hedge fund, which needed 15 racks of equipment to be
relocated from a Docklands data centre to one located in Iceland. Again, the company chartered a plane and provided a full end to end service, inclusive of project management, auditing and all of the cabling work. Another major recent project has been working with the University College London (UCL). This has been a two year contract to migrate the prestigious university’s equipment out of several central London buildings. Technimove worked under UCL’s Programme Lead to provide a project team to design and execute logical and physical migrations. The project involved in excess of 3000 servers. Ochea sums up, “Ultimately, we have found success by offering a higher level of service, yes our services cost a little more, but we deliver a whole lot more. However, we are still ambitious for more growth and stand ready to offer our services direct to companies large and small whose path to updating their systems involves migration. Technimove is always evolving, so new services will be added in the future. In particular we are building our ‘Transitional
Consultancy Offering’. This opens us up more to projects that do not involve migrations, such as digital and business transformation projects, for which we have several consultants currently in roles, offering services. However, we are firm believers in doing everything we do well, so we will never forsake quality and depth for growth. Give us a call and find out how we can deliver for you.”

The Complete Technimove Service
• Transformational Consultancy Services
• Rationalisation or Consolidation Consultancy
• Migration Programme & Project Management
• Application & Infrastructure Auditing
• Cabling Solutions
• Logical & Physical Migration Services
case study
Withstand the demand
Matthew Fuller, technical services director at ABM Critical Solutions, tells us why Data Centre Infrastructure Management (DCIM) is paramount for e-tail businesses to properly handle increasing consumer demand.
Gone are the days when the only way to shop was on a high street. Increased connectivity, coupled with the Internet of Things, is changing the retail landscape. The simplicity of online shopping contributed to the £16.2 billion we were predicted to spend online last year on clothing and fashion, and it doesn’t look to be slowing down. So big is the trend that, over the next five years, the online fashion market alone is forecast to increase a further 79% by 2022, reaching just under £29 billion.

The question for all retailers, not just the clothing and fashion sector, is how do you ensure your Data Centre Infrastructure Management (DCIM) can withstand the demand of online? It might feel like an obvious question, but you’d be surprised how many well-known retailers fail to get this right.
The DCIM of a business represents the collection of tools that help it to organise and manage its data storage within a centre. This includes everything from drives and cables to computers – the nuts and bolts of the back end. An efficient DCIM system will assist a business to meet the growing global demand for storing information electronically. That may be devising more efficient ways to store and access electronic data, or implementing processes that prevent overheating which could cause failures.

Despite being multi-million-pound businesses, many top brands fail to implement a robust DCIM and therefore fail to safeguard their entire operation. In a recent project, a £10m online retail business had not invested in its own DCIM in over 20 years, leaving it in such a fragile condition that it could have fallen over at any time. Costs and the complexity of developing and maintaining the infrastructure are usually the barriers to investment. It’s ironic that cost could prevent investment in this area of the business, yet failure to do so could be the one thing that closes the entire operation down.

Most retail data centres are built on a 10-year cycle, but the average retailer probably invests in its DCIM every 15 years. Without investing, it means that any business, regardless of its size, is at risk. For the project we completed recently, the repercussions of the DCIM failing would have been
huge. Having no recovery system in place, for example, means an organisation is worryingly exposed. A single point of failure without any built-in redundancy could have wiped out the business. A business that has its house, or in this case its DCIM, in order is going to be much less exposed to risk, which can only be a good thing as we move closer to a world that goes beyond cloud storage. Another factor for businesses to consider when thinking about investing in DCIM, is the use of enhanced environmental technology. Data centres are the largest consumer of power, but if the DCIM is carefully and appropriately managed, the energy required to organise and store large
amounts of data can be used with greater efficiency. In the recent project that we undertook, we modelled several energy solutions, using the latest CFD (computational fluid dynamic) software, before introducing them to our client’s DCIM. This ensured they met with design, resilience and total cost of ownership. We also swapped florescent lighting for LED lighting to reduce the risk of failure and to consume less electricity, and reduce costs. We introduced the latest energy efficient UPS (Uninterrupted Power Supplies) distribution and cooling products to ensure productivity
uptime and environmental reduction targets were met, and changes were made to the flooring to reduce potential airborne contamination.

The importance of DCIM should never be underestimated, particularly in the e-tail marketplace. This type of business holds vast amounts of personal data, including addresses and banking details, which effectively keeps it trading. While it is seen as a retailing business, it is also a big data centre, yet so many e-tail businesses have data centre management at the bottom of their priority list.
The question of whether a business should invest in its infrastructure to safeguard its future shouldn’t be a case of ‘if’, but a case of ‘when’. The possible outcomes for a business operating online without an efficient and stable DCIM in place don’t bear thinking about.
ABM Critical Solutions, abm.co.uk
design & Facilities Management
Question time
When it comes to choosing a data centre provider it pays to do your research. Falk Weinreich, executive vice president of Colt Communication Services, gives us the five key questions you should be asking before making your decision.
As the demand for managed cloud and professional services rises, the considerations behind choosing the right colocation partner become more pertinent. There is a large range of variables that affect these providers’ ability to keep your data secure, your connectivity robust, and your costs low. Most data centres were ‘state-of-the-art’ when they were built, yet infrastructure quickly becomes obsolete if it does not adapt to the
constant changes in regulation, cyber threats, and users’ needs and expectations. Choosing the wrong colocation provider can have significant consequences for a business’s operations, including factors such as performance and business continuity problems, security vulnerabilities, and costs. It pays to do due diligence before beginning a relationship of such strategic importance, so here are the five most important questions you should ask when choosing a data centre provider.
1. Location, location, location
In today’s hyper-connected world, it may not seem important if your data is located across town or on the other side of the world. In fact, it’s crucial to know exactly where your precious data will reside. This is important not just from a legal point of view (with legislation such as GDPR governing where some data resides) but also for a range of business continuity and performance factors.
For example, with business-critical data you need to consider factors such as the likelihood of natural disasters affecting your chosen facility. A remote data centre might come with lower costs, but the extra latency will make it unsuitable for high-performance applications. For businesses that rely on the Internet of Things (IoT), or those in industries where milliseconds matter, like financial services, being close to the ‘edge’ is an especially important consideration in choosing the right data centre partner.
2. Security
The last couple of years has seen a significant increase in ever-more sophisticated cyber attacks, committed by a range of well-resourced actors. As a result, security should be one of the most important factors influencing your choice of colocation provider. You should conduct a rigorous security assessment covering both physical protections (such as access controls, CCTV and professional security on-site, 24/7) and logical security. This should cover everything from ensuring that the provider has the most up-to-date firewall protection, along with antivirus and malware protection; as well as a security information and event management system (SIEM) for identifying and eliminating threats on the network, before they can successfully compromise customers’ sensitive data. You should also look for the full range of security certification, including those that are relevant to your business’s particular industry.
3. Power and efficiency
Power is an often overlooked factor when choosing a data centre facility, but it’s one of the most important considerations. A local
power outage or brownout can have devastating effects if a data centre does not have sufficient backup in place. That’s why prospective customers should check that their future provider has appropriate systems in place in the event of a power failure. This should include emergency on-site generation, as well as an uninterruptible power supply (UPS) and full N+1 redundancy of all components. It is also vital to consider how your business’s power needs will change over time, and ensure that the provider can supply this capacity in the future.

Another related concept is sustainability. With IT consuming so much of the world’s electricity, it’s increasingly important for businesses to reduce their carbon footprint and energy usage. You should therefore look for a data centre partner that is committed to reducing its power usage effectiveness (PUE) – the ratio of a facility’s total energy use to the energy consumed directly by IT equipment. The lower the PUE, the more efficient the facility – and the less you will be spending on power.
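As a simple illustration (the figures below are hypothetical, chosen only to show how the ratio works), PUE is calculated as total facility energy divided by the energy delivered to the IT equipment:

$$
\text{PUE} = \frac{\text{total facility energy}}{\text{IT equipment energy}},\qquad
\text{e.g. } \frac{1500~\text{kW}}{1000~\text{kW}} = 1.5
$$

A PUE of 1.0 would mean every watt drawn from the grid reaches the IT load; the closer a provider gets to that figure, the less of your bill goes on cooling and power distribution overhead.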
4. Connectivity
Connectivity is more than just the cable that carries the data from the facility to your business. The best colocation providers will have multiple data centres located throughout a region, providing an extra degree of resilience in the event of unforeseen disruption, such as a natural disaster or energy blackout. Replication to secondary facilities, or an IT architecture that’s distributed across geographic locations are some of the most effective forms of resilience, protecting against even the most
extreme causes of downtime. Ideally, you will choose a supplier that has its own carrier-neutral network, which provides a choice of connectivity provider that will best suit your budget, bandwidth and other needs.
5. Expertise
It’s easy enough to talk a good game, but there’s no reason why you should take a data centre’s claims to expertise and excellence on trust. With around 70-80% of all downtime caused by human error, you should look for evidence of their expertise. This should include relevant qualifications from respected industry bodies, as well as certification from independent auditors such as the Uptime Institute.

One of the biggest challenges facing the data centre industry – indeed, most industry sectors – is the global tech skills shortage. You should therefore ask a prospective provider what time and resources they commit to ongoing employee training, and their strategy for attracting and retaining this scarce talent.

Not every data centre is created equal. There is a significant disparity between different facilities on questions of cost, security, connectivity, resilience and many other factors. This isn’t to denigrate any particular provider: there needs to be a range of different data centres to match the very different needs of a range of businesses and industries. To find the right one for your organisation, these five questions should form the core of your due diligence, and will help you find the right partner for a happy, successful and long-lasting relationship.
Colt, colt.net
design & Facilities Management
Down to a fine art Although consolidation has many advantages, it is an unquestionably complex process. Jon Leppard, COO at Future Facilities, outlines how to simplify success using simulation, and discusses how refining the art of data centre consolidation could help ensure your facility’s performance reflects your business goals.
Technology, and the infrastructure that underpins it, is going through a period of rapid flux. Edge data centres are moving from concept to reality, and connectivity is rapidly advancing as the roll-out of 5G draws closer. As these trends and others emerge, organisations are adapting infrastructure to take advantage of the business benefits – often through consolidating existing facilities.
But given the complexity of this process, how can organisations that own and manage their own infrastructure maintain data centre performance that is aligned with business goals during consolidation, and ensure that it continues to reflect those goals afterwards? The answer lies in simulation.
Adapting performance It is commonly understood that data centres live a transitory life
of replacements and upgrades, and before too long they become unrecognisable from their original form. It is the job of facilities managers to optimise the performance of these assets throughout every stage of their lifecycle, and there are a few vital steps and processes to both increase their site’s longevity and optimise the data centre’s ability to perform mission-critical tasks. One of the most common techniques used to achieve
this is consolidation, which is the process of amalgamating multiple data centre locations under one roof. This enables businesses to revamp their hardware platforms with a view to condensing and optimising those containing vital software. Consolidation is an intensive process that around 62% of data centres are going through at any one time, according to the Data Centre Alliance. It is vital, therefore, that businesses and data centre operators alike continue to maintain operational efficiency during these periods of transition. Equally, with so much at stake, the importance of these projects running both on time and within budget cannot be overstated. Ultimately, if the end product falls short of performance expectations, then businesses are at risk of undertaking an expensive exercise in futility. The good news is that, by factoring in the right considerations and using the latest innovations in the field of simulation, this can be avoided.
The art of consolidation The benefits of data centre consolidation are ample. Not only can significant savings be made on the operational and logistical side of the business, but there are also huge knock-on advantages for the security, compliance and energy efficiency side of data centre management. Moreover, with the recent innovations and improvements in the world of connectivity, many businesses are looking to migrate their IT estate to colocation providers. Consolidation represents an attractive way to reduce local data storage to compliance-critical data. Naturally, this is a highly nuanced and complicated process that raises a multitude of issues
and questions to be considered. While the list of important factors can seem endless, the most frequently experienced involve understanding which sites to close, which items of hardware can be retired, and which are needed to remain operational. It can often feel like using pieces from three different jigsaw puzzles to create one new layout with a common infrastructure. When it comes to capacity planning, there is frequently a divide between expectation and reality, with business leaders often quick to request tight turnaround times and keep overall disruption to IT performance to a minimum. However, the time pressures imposed by consolidation projects can lead to the capacity planning process being cut short, which in turn limits the effectiveness and overall performance of the finished project.
Simplifying success There is little room for live trial and error when it comes to the management of data centres, due to the risk of failure of mission-critical processes. This is particularly true when undertaking a consolidation project. With multiple moving parts and huge expenditure on the line, operators are under an immense amount of pressure to improve overall efficiency and meet their KPIs. This means they have to get it right, and get it right first time. It is through this difficult task that the benefits of engineering simulation come into play. Industries across the engineering space regularly utilise simulation to improve performance and meet business goals, and this should be no different when planning consolidation projects. For example, engineers faced a similar problem when developing smartphones – attempting to
“Consolidation represents an attractive way to reduce local data storage to compliance-critical data.”
consolidate your phone, camera, credit card and web browser simultaneously into a single device. These technicians wouldn’t have been able to amalgamate these functionalities into a single device without extensive simulation and testing prior to even building the initial prototype. Being bold and ambitious when testing the functionality of a smartphone in this way, using simulation technology, has driven the entire mobile device industry forward. Similar benefits can be seen when implementing change throughout the lifecycle of a data centre. Through the use of simulation tools and virtual facility modelling, data centre managers are provided with a safe space to experiment and predict the impact of potential changes to the data centre environment – without the risk of impacting performance. The latest innovations in this field incorporate 3D modelling, power system simulation (PSS) and computational fluid dynamics (CFD) to develop a holistic overview of data centre performance. Simulation has become a pivotal tool in helping organisations execute business-defining transitions with minimal risk of failure. Through effective utilisation of this technology, data centre managers are able to optimise the delivery of consolidation projects, with reduced risk and within a shorter timescale. Future Facilities, futurefacilities.com
design & Facilities Management
How it’s done Owning and operating one of the largest data centres in Europe, Next Generation Data knows a thing or two about successful design and facilities management. Phil Smith, construction director at NGD, gives us some insight into the company’s best practices, which help to ensure the security, resilience and energy performance required of a modern tier three facility.
NGD’s facility, housed in a 750,000ft2 building near Cardiff, was constructed back in the 1990s. The building was originally intended for LG Electronics as a semi-conductor plant, but ended up unused. For NGD’s purposes, however, the site offered the potential for a highly secure location in an ‘out of town’ setting away from London, while still easily accessible to the M4 corridor. Crucially, its vast space and abundant power supply from a direct connection to the Super Grid also provided the means for large scale, high density rack deployments. An earthquake and hurricane-proof
“With 97% of UK power outages occurring in the distribution network, maintaining continuity of power supply poses a significant challenge for many facilities.”
building design, with inner walls of 300mm reinforced concrete, offered levels of resilience usually associated with underground or mountain interior facilities. Spie and Schneider Electric are two of the company’s lead contractors for construction and electro mechanicals, and work closely with NGD’s inhouse design and construction engineering team. This has been the case from the outset of the original building refurbishment programme, undertaken during 2008/9, and continues today with the ongoing buildout of private custom and shared data halls for NGD’s customers.
Power distribution Maintaining continuity of power supply is critical. However, with some 97% of UK power outages occurring in the distribution network, this poses a significant challenge for many facilities. NGD’s 180 MVA power
connection network is sourced directly from the Super Grid via a primary substation located less than a mile from the site. Its own diverse links and circuit breakers ensure NGD maintains complete end-to-end control over the entire power supply.
Cooling NGD’s cooling architecture is capable of supporting standard 4kW to 60kW racks and above, with resilience at a minimum of N+20%, and often higher. It is tailored to efficiently managing customer deployments of high density racks, typically supporting cloud, HPC and IoT environments. In recent years, the company has moved from direct expansion to the latest intelligent cooling solution from Stulz. This system determines the optimal mode of operation according to external ambient conditions and data hall requirements, allowing operation in free cooling mode for the majority of the year, and only providing supplementary cooling in times of elevated external ambient conditions.
NGD Expansion Construction Team
Infrastructure resilience
While the equivalent of an N+1 approach is not uncommon, NGD has more than double the equipment needed to supply contracted power to customers, with comprehensive UPS, backup generation and fully redundant power distribution from the Super Grid to each data hall. This is an N+N electrical infrastructure from the 33kV incoming power right through to the final rack positions. Both A & B powertrains are completely separated, with no common points of failure. However, there is no room for complacency. A small proportion of failures can be caused by human mismanagement of functioning equipment, which puts a huge emphasis on engineers being well trained and, critically, having the confidence and experience to know when to intervene and when to allow the automated systems to do their job. They must also be skilled in performing concurrent maintenance, and in minimising the time during which systems are running with limited resilience. Predictive diagnostics, watertight support contracts and carrying sufficient on-site spares are further prerequisites.
NGD on-site power substation
Testing
Being totally confident of critical infrastructure requires it to be rigorously tested. Some data centres will have procedures to test their installations, but still rely on simulating a total loss of incoming power. This isn’t completely foolproof, as the generators remain on standby and the equipment in front of the UPS systems stays on, meaning that the cooling system and the lighting remain functioning during testing. Ultimate proof comes with black testing: every six months NGD isolates incoming mains grid power, and for up to 16 seconds the UPS takes the full load while the emergency backup generators kick in. This is achieved by isolating one leg of the specific powertrain, and is done under strictly controlled conditions.
Energy monitoring and management
Data centres have historically used separate management systems to control building operations such as heating and lighting, alongside individual systems for the UPS, generators, air conditioning and cooling. From the start, NGD decided to break with tradition, commissioning Schneider to integrate the building management system, PDUs and SCADA monitoring into a single energy management system (EMS). The EMS provides the ability to closely monitor and provide alerts 24/7 on critical infrastructure and energy usage throughout the facility. This helps considerably with optimising the facility’s energy consumption and PUE, and enabling predictive maintenance.
An external view of NGD’s campus
NGD's interior data hall
In practice
A major international insurance firm recently selected NGD’s mega data centre for a complex HPC deployment. The company required a 40kW rack configuration including direct liquid cooling to ensure optimised PUE. The main drivers for the firm’s decision to outsource their requirements were the engineering complexities of the HPC environment, and the significant investment involved with the alternative of building their own high density data centre facility. In addition to removing the requirement for major Capex investment, the company recognised that HPC applications could be run more cost effectively in data centre environments capable of supporting server racks consuming 30kW or more. NGD provided a future-proofed data centre infrastructure to accommodate further expansion, and the essential engineering skills for designing and building the highly bespoke environment. Working closely with the customer, NGD’s engineering team designed, built and installed the 40kW HPC rack environment, including a bespoke direct liquid cooling system, in less than three weeks. The liquid cooling allows highly efficient heat removal and avoids on-board hot spots, therefore removing the problems of high temperatures without using excessive air circulation which is both expensive and very noisy. Next Generation Data, ngd.co.uk
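To put those rack densities in context, here is a rough sketch of the airflow that air-only cooling would need to remove 40kW of heat (the temperature rise and air properties are assumed values for illustration, not NGD’s figures):

    # Rough airflow needed to carry away P watts of heat: Q = P / (rho * cp * dT)
    RHO_AIR = 1.2     # kg/m^3, approximate density of air
    CP_AIR = 1005.0   # J/(kg*K), specific heat capacity of air

    def airflow_m3_per_hour(power_w, delta_t_k):
        return power_w / (RHO_AIR * CP_AIR * delta_t_k) * 3600

    # Assumed 12 K temperature rise across a 40 kW rack:
    print(f"~{airflow_m3_per_hour(40_000, 12):,.0f} m3/h of air")  # roughly 10,000 m3/h

Moving that volume of air through a single cabinet is exactly the kind of noisy, expensive circulation that direct liquid cooling avoids.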
Energy and power monitoring solutions for data centres
ET272 Self-addressing energy transducer
WM50 + TCD12 Modular main and sub metering for PDUs
DEA71 / DEB71 Earth leakage monitoring relays
Carlo Gavazzi UK Ltd. - 4.4 Frimley Business Park, Frimley, Camberley, Surrey GU16 7SG Tel: 01276 854 110 - www.carlogavazzi.co.uk
design & Facilities Management
Good migrations Ochea Ikpa, founder and CEO of server relocation specialist Technimove, shares his key considerations for best practices when migrating applications and platforms from an on-premises infrastructure environment to a fully managed off-premises cloud solution – quickly, efficiently and confidently. Pre-migration stage • Have a clear vision of where IT and business should overlap in the future Consider how this vision will influence your organisation’s strategy, and communicate it broadly. Being able to share clearly why the strategy is important to your organisation and teams is paramount. • Project management to help you through the journey You should look to employ a dedicated project manager whose sole task is to deliver and take ownership of the migration – someone who has not only the technical expertise and experience of migrating to cloud, but also the right agile methodology and project management framework, prior to selecting a cloud partner. Also think
about the operational model you plan on adopting, and whether your existing support vendors can assist going forward once your cloud is off-site and in the ether. • Outline and share a clear cloud governance model Identifying the broader team’s roles and responsibilities, as well as meeting your organisation’s information security tenets of least-access privileges and separation of duties, goes a long way towards ensuring business objectives are met. You’ll need to answer numerous questions before opening up the floodgates for internal users to consume cloud services. How many virtual machines will be needed? What accounts should you have? Who will have access to what? How will you grant that access? With
the General Data Protection Regulation (GDPR) now active, it is imperative that businesses manage their clients’ and employees’ data securely. • Asset management to help identify in-scope devices, and hardware verification pre- and post-migration Invest time in updating your inventory of applications and licensing. This will streamline the migration planning efforts and minimise the risk of missing a dependency during the migration. In addition, equipment within the project scope that is to be migrated to a new cloud solution should be fully health checked (hardware and software). It should also be rebooted prior to any migration activity to make sure it comes back up to production as expected.
• Network/latency planning Once your infrastructure is in the cloud or off-site, what will latency be like? What will the speed of the upstream and downstream links in and out of the new site be? How long will it take to migrate or restore across the network (a rough sizing sketch follows this list)? Any delay logging in, or a slow, unresponsive web page, can substantially affect a company’s brand and reputation in the marketplace. Even though networks and firewalls are becoming virtualised, all physical network connections and backbone networks must be planned, tested and implemented ahead of any migration to avoid any negative impact on services or end users. • Strategy and planning Think about the business objectives, the road map, risk posture, costs and so on. At a high level, you will either decide to move the application ‘as is’, or modify it in some fashion. Whichever option you choose, try to incorporate best practices for resiliency and cost savings wherever possible, and abstract the underlying infrastructure when you can. Some common options are autoscaling and load balancing, but you should investigate all possible scenarios: right-sizing investigations can save huge amounts of time and money. Empower your teams to leverage best practices wherever it makes sense, and start optimising as soon as possible.
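As a back-of-the-envelope aid to the network planning questions above, the sketch below estimates how long a bulk migration or restore might take over a given link (the data volume, link speed and efficiency factor are assumed figures, not Technimove recommendations):

    # Rough estimate of bulk transfer time across a network link.
    def transfer_days(data_tb, link_gbps, efficiency=0.7):
        bits = data_tb * 8e12                          # decimal terabytes -> bits
        effective_bps = link_gbps * 1e9 * efficiency   # allow for protocol overhead and contention
        return bits / effective_bps / 86400            # seconds -> days

    # Assumed example: 50 TB over a 1 Gbps link running at ~70% efficiency
    print(f"~{transfer_days(50, 1):.1f} days")   # roughly 6.6 days

Numbers like these make it obvious early on whether a network transfer, a link upgrade or a physical data shipment is the right approach.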
Migration stage • Pilot migration A quick win early on can really calm the nerves of each line of business, internal teams, users or customers. The more your staff and stakeholders become comfortable with cloud services off-premises, the faster your sponsors and clients
will see the benefits. To do so, you need consistency and transparency. We see many organisations facilitate a pilot migration in order to practise and hone their skills on a smaller, safer dry run. This is a valuable element of a migration, because so much useful information is learnt and fed back to the focus group steering the main migration. This will allow the folks moving bits and bytes to focus on speed and efficiency, without having to make decisions around how to migrate applications that share similar characteristics. • Automate The cloud’s agility is realised through automation, so spend time revisiting processes and establishing new ones that can take advantage of it as you migrate. The chances are that not every aspect can be automated, so carefully determine which ones can. • Approach the cloud as transformational To do so, adjust your internal processes so that they are able to embrace this technological change. Use that transformational nature to your advantage to align stakeholders with this new model.
Post-migration stage • Test your applications A critical piece of the migration is the integration and validation of the workloads being deployed in the cloud. Each application component should go through a series of predetermined and well-documented tests. Obtaining sign-off from the business owners will be a lot smoother if you ask the application owners to provide you with the test plans early on in the project. Ideally, there will be one template that all application owners populate with their specific testing requirements.
“The more your staff and stakeholders become comfortable with cloud services off-premises, the faster your sponsors and clients will see the benefits.”
• Use cloud-native monitoring tools Numerous tools are available that provide application-level insights and monitoring. Use the tools that best fit the business – your operations people will thank you in the long run, and your business owners will have clearer data points to base their decisions on. • Detailed and regular communications All migration decisions need to be clearly documented and signed off. Alert the teams across the organisation (even those not directly involved in the migration) to the fact that there will be outages and potentially new IP addresses/URLs to direct traffic towards. Don’t forget to notify any third parties that may have access to your systems. I hope these steps I’ve listed can help you make the transition to off-premises IaaS (Infrastructure as a Service). Technology is constantly evolving and the Future of Things (FoT) is driving user and customer experiences to new levels of expectation, with demand for online services growing exponentially every year. Businesses will have to work smarter and more efficiently in order to remain competitive and up to date with market trends. Technimove, technimove.com
big data
Big Business Darren Watkins, managing director at Virtus Data Centres, discusses why the basics matter when it comes to making Big Data work for your business.
Big Data is big business. It’s been several years since machine data was famously heralded as the new oil, and there can be no doubt that it has already had a profound impact on business culture. Organisations around the world are seeking to leverage data as a critical strategic asset, helping to uncover new sources of business value – to displace competitors and protect entrenched positions. For many, big data projects have become a normal part of doing business – but that doesn’t mean that big data is easy. According to the NewVantage Partners Big Data Executive Survey 2017, 95% of the Fortune 1000 business leaders surveyed said that their organisations had undertaken a big data project in the last five years – but less than half (48.4%) said that their big data initiatives had achieved measurable results.
An October 2016 report from Gartner added weight to the idea that big data projects present fundamental challenges for many. Gartner found that organisations were getting stuck at the pilot stage of their big data initiatives. “Only 15% of businesses reported deploying their big data project to production, effectively unchanged from last year (14%),” the company said. So, clearly, organisations are facing some major challenges when it comes to implementing their big data strategies. But what are those challenges? And more importantly, what can organisations do to overcome them? For us, while big data is undoubtedly now a strategic boardroom issue, the real issues – and the real solutions – still sit with the IT savvy within an organisation. Companies need to know how to extract the most intelligence from data, and how to make the process as smooth as possible.
The basics – storage and processing Fundamentally, data needs to be stored, processed and delivered in a meaningful way so that it can be used effectively. Data volumes are growing very quickly – especially unstructured data – typically at a rate of around 50% annually. But one of the key characteristics of big data applications is that they demand real-time or near real-time response; very challenging at these sorts of scales. This means intense pressure on the security, servers, storage and network of any organisation – and the impact of these demands is being felt across the entire technological supply chain. IT departments need to deploy more forward-looking capacity management to be able to proactively meet the demands that come with processing, storing and analysing machine generated data.
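To see why forward-looking capacity management matters, consider a simple projection of that growth rate (the starting volume and horizon below are hypothetical):

    # Compound growth of stored data at roughly 50% per year.
    def projected_tb(current_tb, years, annual_growth=0.5):
        return current_tb * (1 + annual_growth) ** years

    # Assumed example: an estate holding 100 TB of largely unstructured data today
    for year in range(6):
        print(f"Year {year}: ~{projected_tb(100, year):,.0f} TB")
    # At 50% annual growth, 100 TB today becomes roughly 760 TB within five years.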
Historically, for a data centre to meet new needs, it would simply add floor space to accommodate more racks and servers. However, the demands for increased IT resources and productivity have also come hand in hand with an increased need for higher efficiencies, better cost savings and lower environmental impact. High Performance Computing (HPC), once seen as the preserve of large corporations, is now being looked at as a way to meet the challenge, and is requiring data centres to adopt high density innovation strategies in order to maximise productivity and efficiency, and increase both available power density and the ‘per foot’ computing power of the data centre.
IT is power Many companies have turned to outside organisations to help them meet these new challenges, and third party colocation data centres have increasingly been looked at as the way to support this growth and innovation, rather than CIOs expending capital to build and run their own on-premise capability. For many, cloud computing is an HPC user’s dream, offering almost unlimited storage and instantly available and scalable computing resource. For us, the cloud is compelling, offering enterprise users the very real opportunity of renting infrastructure that they could not afford to purchase otherwise – and enabling them to run big data queries that could have
“One size doesn’t fit all. Organisations need to take a flexible approach to storage and processing.”
a massive, positive impact on their organisations’ day to day strategy and profitability. However, one size doesn’t fit all. Organisations need to take a flexible approach to storage and processing. Companies must choose the most appropriate partner that meets their pricing and performance level needs – whether on-premise, in the cloud or both – and have the flexibility to scale their storage and processing capabilities as required. They must also make sure they aren’t paying for more than they need and look for a disruptive commercial model, which gives absolute flexibility – from a rack to a suite, for a day to a decade.
The big security challenge It is perhaps obvious that the more data is stored, the more vital it is to ensure its security. The big data revolution has moved at considerable speed, and while security catches up, organisations are potentially more vulnerable. Many turn to colocation to complement a cloud solution, developing a hybrid approach to meeting storage needs. These organisations recognise that moving into a shared environment means that IT can more easily expand and grow, without compromising security or performance.
By choosing colocation, companies are effectively renting a small slice of the best uninterruptible power and grid supply, with backup generators, super-efficient cooling, 24/7 security and dual path multi-fibre connectivity that money can buy – all for a fraction of the cost of buying and implementing them themselves.
Putting it all together Big data is here to stay. IDC forecasts the market will increase to approximately $32 billion this year from just over $3 billion five years ago. The demands that come with big data mean that, ultimately, the data centre now sits firmly at the heart of the business. Apart from being able to store machine generated data, the ability to access and interpret it as meaningful, actionable information, very quickly, is vitally important – and therefore a robust and sustainable IT strategy has the potential to give companies huge competitive advantage. So, while organisations are already collating and storing large sets of data, we know that intelligence is only power if it’s used. The IT industry has a vital role to play in helping organisations to realise these ambitions. Virtus Data Centres, virtusdatacentres.com
Hardware optimisation
Think inside the box Steve Grady, VP customer solutions at Equus Compute Solutions, examines what the ‘big guys’ do to secure success, and what we can learn when it comes to optimising hardware in the software-defined data centre.
The big guys – Google, Netflix, Amazon, Facebook, etc. – use optimised white box servers in their SDDCs. They do this because white boxes are less expensive, infinitely more customisable, and often more effective than standardised servers from big name vendors. For example, a company like Google has very specific needs that standardised servers cannot meet, so the ability to customise and buy only servers that fit its exact specifications enables Google, and anyone else
using white boxes, to optimise their infrastructure. Trying to customise standard off the shelf servers to fit the needs of a large company takes a great deal of effort, and with servers not doing exactly what they’re intended for, problems will arise eventually. Both of these issues can be costly in the long run. By using white boxes, which are cheaper from the outset and meet specifications exactly, the big guys have found a way to save money and create infrastructure that is exactly right for what they want.
Big differences It is of course nonsensical for almost every company to directly emulate the practices of massive companies like Google, as there is no comparison to make in terms of server infrastructure. Google famously has eight data centre campuses in the United States and seven more positioned around the world. The largest of these facilities in the United States, located in Pryor Creek, Oklahoma, is estimated to have a physical footprint of 980,000ft2, and cost Google about $2bn to build and bring live.
Image: Connie Zhou for Google
An overhead view of the server infrastructure in a Google data centre
“No matter how customised the big guys’ server infrastructure, the components inside are available to everyone.”
These data centre facilities worldwide support near incomprehensible amounts of data. For example, as of March 2017, Google’s data centres process an average of 1.2tn searches per year. Google doesn’t disclose exact numbers regarding its data centres, but the total number of servers in the data centres worldwide has been estimated at roughly 2.5m. All of these facts perfectly illustrate the difference between Google and its peers – and every other company.
How ‘not-as-big guys’ can be successful
Despite the unique capabilities and infrastructures the big guys have deployed, the not-as-big guys can still learn from them. Most companies will never have 15 global data centres, be part of an organisation promoting unique and innovative server designs, or be able to spend $2bn on server infrastructure. However, every company can still utilise perhaps the most important aspect of the big guys’ massive data centres: the custom white box servers inside them. Firstly, no matter how customised the big guys’ server infrastructure, the components inside the servers they use are best-of-class commodity parts, available for purchase by anyone. Secondly, while the configurations of an organisation like Google’s servers are often unique, and often include some unique components, the use of virtualisation software such as VMware and vSAN can be instrumental in allowing companies much smaller than Google to fully optimise their servers. The first step for these smaller companies is to invest in white boxes.
Leveraging the power of white box
The power of white box is that they are fully customisable. Just as the big guys do, smaller companies can purchase white box servers from a vendor like Equus to meet their exact specifications. Perhaps a company needs lots of storage space, but not much compute power. Perhaps a company wants dual high core count CPUs and numerous expansion slots built into the motherboard to anticipate growth. A legacy server company cannot offer servers optimised in these ways. But a white box vendor can do exactly what a buyer wants, and build them a server that has, for example, eight SSDs and eight rotating disk drives, all in a 1U form-factor chassis. This kind of hybrid storage server is actually quite common among white box buyers, and is simply one example of how white boxes can lead to total hardware optimisation.
Once an enterprise has made the leap to using white box servers, virtualisation is the next method to use in order to emulate the successful approaches of the big guys, such as Google. The recent progress in hardware virtualisation, largely spearheaded by VMware, has enabled the development of the software-defined data centre (SDDC): an entirely virtual data centre in which all elements of infrastructure – CPU, security, networking and storage – are virtualised and can be delivered to users as a service. The software-defined data centre means companies no longer have to rely on specialised hardware, and removes the need to hire consultants to install and program hardware in a specialised language. Rather, SDDCs allow IT departments to define applications and all of the resources they require – computing, networking, security and storage – and group all of the required components together to create a fully logical and efficient application. One virtualisation software package that can enable the effective use of an SDDC is VMware vSAN (virtual storage area network). vSAN is a hyperconverged, software-defined storage product that combines direct-attached storage devices across a VMware vSphere cluster to create a distributed, shared data store. vSAN runs on x86 white box servers and, because it is a native VMware component, it does not require additional software; users can enable it with a few clicks. vSAN clusters range between two and 64 nodes and support both hybrid disk and all-flash white box configurations, combining each host’s storage resources into a single, high performance, shared data store that all the hosts in a cluster can use. The resulting white box-based vSAN SDDC has much lower upfront costs and up to 50% savings in total cost of ownership.
Inside a Google data centre
A diagram illustrating white box vSAN deployment
Cost optimising a software-defined data centre
Another strategy smaller companies can use to emulate the big guys is to cut licensing costs by utilising VMware intelligently on their white box servers. For example, if a company uses a standardised server from a legacy manufacturer that comes with two CPUs out of the box, and has to run the legacy software that comes with the server, it might end up using only 20-30% of its total CPU capacity. Despite this, the company will still have to pay for software licensing as if it were using 100% of its two-CPU capacity, because legacy software used in standardised servers is usually deployed using per-CPU (socket) pricing with no restrictions on core count. If that company instead uses a custom white box with a single high core count CPU and runs VMware, it can effectively cut its licensing in half, as VMware uses a per-socket licensing policy. Halving licensing costs will often free up a large amount of money that the company can spend elsewhere to further optimise its servers. This use of virtualisation software, along with using it to put virtual backups in place, is a key way in which smaller companies can approximate the methods used by the big guys. Equus, equuscs.com
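As a quick illustration of the socket arithmetic described above (the per-socket price is a hypothetical figure, not a quoted VMware or Equus price):

    # Hypothetical per-socket licence price, for illustration only.
    PRICE_PER_SOCKET = 4000  # assumed annual cost per licensed CPU socket

    def licence_cost(sockets):
        return sockets * PRICE_PER_SOCKET

    dual_socket_legacy = licence_cost(2)       # two lightly used CPUs, both licensed
    single_socket_white_box = licence_cost(1)  # one high core count CPU doing the same work
    saving = dual_socket_legacy - single_socket_white_box
    print(f"Legacy: {dual_socket_legacy}, white box: {single_socket_white_box}, saving: {saving}")
    # Under per-socket pricing, consolidating onto a single socket halves the licence bill.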
Need to know what’s happening in the network infrastructure industry? NCN is published 12 times a year for professionals involved in the installation of network infrastructure products, covering data, voice and image applications. As a vital part of the network communications industry, you cannot afford to miss NCN, so subscribe now to ensure you do not miss a single issue.
Major in-depth features on installation practice and product awareness
Informative articles from leading industry figures
Regular updates on training and industry legislation
All the important news and comment from the network cabling sector
Pages and pages of the very latest product innovations
Exhibition previews
Detailed case studies and application stories
Business information
REGISTER HERE »
edge computing
On the edge
Tobi Knaup, co-founder and CTO at Mesosphere, explains how abstraction and automation could help enable the next wave of distributed computing.
In recent years there’s been a growing conversation about edge computing. Simply defined, it brings compute closer to the data source and consumer, which could be a scenario with connected cars, industrial machines, controllers or sensors. Data is the enterprise’s most valuable asset, and as its volume has exploded in recent years with the Internet of Things (IoT), so has the need grown for edge computing solutions that integrate within an overall IT strategy and architecture. Edge computing is a decentralised extension of data centre networks and the cloud. According to Gartner, around 10% of enterprise-generated data is created and processed outside a traditional centralised data centre or cloud. By 2022, Gartner predicts this figure will reach 50%. The impact of this could be staggering, from both a network traffic perspective and the changes required to architectures that process this content. Creating a coherent strategy for managing this data and incorporating it into applications is a substantial lift for any enterprise and one of rapidly growing importance. Edge computing adoption is being driven by IoT adoption,
and, as a result, edge computing can take as many forms as there are IoT possibilities. Smart cities provide numerous examples, with increased deployment of sensors on roads, bridges and other infrastructure. The number of information nodes is growing at a comparable rate to the expansion of the communications network, with astounding volumes of data. With all this data streaming to and from the edge, information processing and computing topologies will undoubtedly need to change — and quickly. One alternative is to simply transmit all the data over network connections to back-end, centralised data centres or to major cloud providers such as AWS, Azure and GCP. This is impractical due to the data volumes and the limitations imposed by network bandwidth and latency, and also challenging because of the different management needs of each of these cloud providers. A better scenario is to distribute the processing, performing as much as possible at the edge while retaining an overall holistic view. It is a highly complex task to manage a distributed computing environment that fully incorporates on-premise data centres, one or
“Edge computing can take as many forms as there are IoT possibilities.”
more public cloud providers and potentially vast numbers of edge locations. The processing needs at the edge are quickly becoming a lot more sophisticated, even more so with the rapid advance of machine learning and deep learning, and the ability of these technologies to derive even greater insight from data. Fortunately, efforts to harness hybrid cloud topologies now provide an integrated approach. The challenges facing organisations embracing a hybrid cloud strategy stem from the wide discrepancy between cloud providers in terms of architecture, APIs and capabilities, especially compared to on-premise solutions that have been clearly defined for many years. Organisations trying to embrace hybrid cloud frequently find it out of reach because of the following challenges: • Architecting, configuring and securing every cloud requires a specialised skill set, which means hiring more people per cloud and building multiple sets of operational tooling. • Public cloud data services have proprietary APIs, effectively locking data in and preventing portability.
• Configuring application deployment is different across clouds, placing a significant burden on app development teams to develop, test and troubleshoot applications. • Data, the cornerstone of all applications, may not be easily transferable between clouds due to size, transfer cost, transfer time, incompatibility, or security and compliance requirements – especially in the wake of GDPR. • Some mission critical legacy applications (for example those written for and running on mainframes) may not be portable due to incompatibility with cloud technologies. While there are many tools available that attempt to solve hybrid cloud challenges, they are typically designed to solve only one or a subset of the above challenges, requiring a combination of multiple components. For example, Docker and Kubernetes simplify the deployment of container-based applications, but they don’t address the data challenge. And tools like Amazon Snowmobile can migrate massive amounts of data from your data centre, but only send it to one cloud provider (AWS).
edge computing
Royal Caribbean is transforming the customer experience with edge clouds on each ship, managed from the shore with Mesosphere DC/OS
As if this isn’t already enough, there are further challenges as organisations move up the cloud stack from the basic IaaS services layer toward some of the managed cloud services such as databases, message queues, CI/CD and so on. Organisations frequently face a hard choice between the ease of use of cloud-based managed services and going the DIY route, managing the complexity and operational overhead themselves. Those that choose the former pay for this ease of use with the risk of being permanently locked into an expensive cloud service provider. Appropriately managing a distributed environment spanning the edge, the data centre and the hybrid cloud requires a solution that solves all these challenges. The good news is that the edge can be incorporated as an extension of the hybrid cloud design, effectively creating a massively distributed and highly scalable computing system.
At first blush it might seem an impossibility to manage such an environment. Fortunately, that’s where abstraction and automation take over. New resource management platforms allow an organisation to treat multiple physical or virtual servers as a single powerful machine, regardless of where the compute devices are located. This means that individual components can be aggregated (or disaggregated) as management entities whether at the edge, in a cloud or across multiple clouds, or within an on-premise data centre, with resources dynamically allocated based on computing need. By abstracting the underlying data centre, edge and cloud infrastructure, and providing a single unified management interface, operators and application developers have a consistent experience across the distributed computing spectrum. Instead of managing silos of
applications and systems, all applications and services can run consistently anywhere. By abstracting workloads (what is running) from the infrastructure (where it is running), data services and other workloads become truly portable, avoiding infrastructure provider lock-in. The combination of abstracting complexity and automating deployment and management is an especially powerful solution to the challenges presented by new edge computing architectures. To effectively encompass the new paradigm of edge to data centre and cloud requires not only the ability to abstract away complexity, but also the ability to deploy and dynamically manage the environment with near single-click convenience. With these foundational elements in place, an enterprise can confidently take on the next great wave of computing. Mesosphere, mesosphere.com
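To make the idea of abstracting workloads from infrastructure concrete, here is a minimal, purely illustrative sketch; the class, fields and target names are hypothetical and do not represent the Mesosphere DC/OS API:

    # A workload is described once; the same spec can then be scheduled on any target.
    from dataclasses import dataclass

    @dataclass
    class WorkloadSpec:
        name: str
        image: str        # container image to run
        cpus: float
        memory_mb: int
        instances: int

    def deploy(spec: WorkloadSpec, target: str) -> None:
        # In a real platform this call would be translated into the target's own API
        # (edge cluster, on-premise data centre or public cloud) by the resource
        # management layer, so the workload definition itself never changes.
        print(f"Scheduling {spec.instances}x {spec.name} ({spec.image}) on {target}")

    analytics = WorkloadSpec("sensor-analytics", "example/analytics:1.0", 0.5, 512, 3)
    for target in ("edge-site-01", "on-prem-dc", "public-cloud-eu"):
        deploy(analytics, target)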
next issue
Big Data & Internet of Things
Next time… As well as its regular range of features and news items, the June issue of Data Centre News will contain major features on Big Data and Internet of Things. To make sure you don’t miss the opportunity to advertise your products to this exclusive readership, call Ian on 01634 673163 or email Ian@allthingsmedialtd.com.
data centre news
Security
Worlds collide Phil Bindley, managing director at The Bunker, discusses security in the software development lifecycle, and how developers can hope to achieve agility whilst still delivering functionality.
A topic that can often generate an animated debate is the importance of embedded security testing within the Software Development Lifecycle (SDLC). The juxtaposition of developing software in an agile manner and delivering the functionality and end user experience can create challenges. But if done successfully, and balanced against a robust approach to security testing at all stages of the SDLC, this can result in code that ultimately distinguishes an application from the competition, by delivering both business and user requirements. The debate is an age-old one, and even more relevant today than ever before. With an increased focus on data privacy and a ‘wild west’ of cyber attacks, software developers need to
embrace a culture of security and realise the consequential benefits of doing so. Undoubtedly there is a fine balance that has to be struck. The requirement to develop at pace and enrich the application is the primary goal of a development team. But driving the product to deliver the functionality and user-friendliness demanded by the consumer should also be a priority, while making sure you are one step ahead of the competition. Then there’s also recognising the opportunity cost to the business when development delays are incurred for whatever reason. However, integrated and embedded security testing should not be seen as just a tick-box exercise – it is simply sound business practice. Not only that, but it also makes commercial sense.
Alert Logic’s 2017 Cloud Security Report revealed that 73% of attacks are directed towards the web application layer. With insecure coding practices cited as one of the biggest – if not the single biggest – reasons for this chosen attack vector, it is paramount that software developers and vendors recognise this fact and treat it with the attention it deserves. Software development is a complex process and, on occasion, there may also be a fragmented approach to it, with different teams responsible for developing certain facets of the application. The danger is that this can lead to security vulnerabilities being introduced, and so a holistic approach is required to engrain a responsible and pragmatic approach to the continual testing of the application’s security throughout its evolution.
A typical process to follow would be along the lines of: concept, functional requirements and specifications, technical requirements and specifications, design, coding and testing. This is all well and good and will bring the product to market quickly. However, without having security baked-in from the beginning of the process, there are a number of potential risks that are not being properly addressed. Ideally, this process should follow a more security focused approach, whereby information security principles, strategies and requirements should be considered as part of the conceptual, functional and technical stages. Enterprise security architecture and security product standards should then be taken into account at the design stage and the software should then be coded using development standards, practices, libraries and coding examples. Testing plans should also show how to verify each security requirement and, at the
“Software developers need to embrace a culture of security and realise the consequential benefits.”
point at which the software is implemented, procedures that address integrating existing authentication, access controls, encryption and backup will also need to be thought about. This can seem like an onerous process, and by adding layers of complexity, time and cost to the SDLC, it may appear counterintuitive to suggest that this will ultimately cost less. This is the scenario that needs further examination. The debate centres around what developing without these controls in place can mean for businesses that live or die by speed to market and user experience. The single fundamental point is that not following an SDLC with a security-first approach will result in one of the following three outcomes. Firstly, a secure application could be developed by pure luck – essentially a wing and a prayer. Secondly, an insecure, vulnerable application may be launched. However, the collateral damage this
could cause should be unacceptable to any business, and risks damaging the most important aspect of any organisation – its brand reputation. Thirdly, a ‘build it then test it’ approach will result in a minefield of having to unpick months’ worth of coding and trying to reverse-engineer back to the point at which the vulnerability was introduced. At best, this would be a time-consuming and complicated task. At worst, it would require a complete redesign if the flaw is fundamental to the software’s functionality. Security in the SDLC is a continuum from conception to implementation, and in order to deliver secure applications in a timely and cost effective manner we must embrace the business benefits of having security embedded within the entire process. In the modern day, software developers should no longer approach security in the SDLC as a final proofing point. The Bunker, thebunker.net
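As a small illustration of what verifying a security requirement can look like in practice, the sketch below shows one automated check that could sit in a test suite and run on every build (the endpoint URL and expected behaviour are hypothetical examples, not The Bunker’s methodology):

    # One security requirement, expressed as an automated test:
    # "API endpoints must reject unauthenticated requests."
    import requests

    PROTECTED_ENDPOINT = "https://example.internal/api/customers"  # hypothetical URL

    def test_unauthenticated_request_is_rejected():
        response = requests.get(PROTECTED_ENDPOINT, timeout=5)  # no credentials supplied
        assert response.status_code in (401, 403), (
            f"Expected the endpoint to refuse access, got {response.status_code}"
        )

Checks like this catch a regression that weakens access control long before the software reaches production.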
advertisement
Power quality is mission critical Continuity of power is essential in mission-critical applications such as data centres, and the ability for end users to quickly and cost effectively monitor power is crucial. Will Darby, managing director of metering at controls and automation specialist Carlo Gavazzi UK, outlines the latest trends in electrical distribution in data centres and mission-critical applications and highlights an innovative solution to power quality monitoring.
Power outages or power quality issues in mission-critical environments such as data centres, hospitals and industrial process plants can have devastating consequences and lead to financial losses, damage to reputation and reduced business. Mission-critical applications have complex electrical distribution systems, and it is
essential that end-users monitor key parameters affecting power quality, such as leakage currents, neutral-ground voltage, voltage stability, wave shape and harmonics. Equally, effective monitoring of power usage is essential to minimise energy consumption and running costs, and meet environmental legislation and corporate social responsibility (CSR) targets.
Quality time Why are power quality issues such as harmonics important? Let’s take data centres as an example. There are two key issues in any data centre: equipment reliability and running costs – and problems with harmonics will have a detrimental impact on both. Harmonics can be caused by both current distortion and voltage distortion. Typically, current distortion will be caused by non-linear loads, while voltage distortion is most likely to arise when distorted current is drawn through an impedance. Harmonics can lead to a reduction in energy efficiency, because harmonic currents increase losses in conductors and transformers, creating heat and increasing power and cooling costs. The heat generated by harmonics can increase equipment downtime and cause early equipment failure or breakdowns. The overall lifespan of electrical equipment can be shortened, leading to increased capital expenditure as companies are forced to purchase replacement equipment sooner than planned. Capital costs can also be increased, because data centre managers may opt to oversize equipment to compensate for the losses caused by heating and distortion.
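Harmonic content is commonly summarised as total harmonic distortion (THD): the combined magnitude of the harmonic components relative to the fundamental. A minimal sketch of that calculation is shown below (the current readings are invented values, purely for illustration):

    import math

    def thd_percent(fundamental, harmonics):
        # THD = sqrt(sum of squared harmonic magnitudes) / fundamental
        return math.sqrt(sum(h ** 2 for h in harmonics)) / fundamental * 100

    # Assumed example: 100 A fundamental with 3rd, 5th and 7th harmonic currents present
    print(f"THD: {thd_percent(100, [18, 12, 7]):.1f}%")   # ~23% at these invented values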
Monitoring is essential So the ability for end users to quickly and cost effectively monitor power quality is crucial. Today, large data centres are the norm, with multiple servers and data racks. Typically, groups of 48 servers are installed in rack panels in big data centres. Each server supply is protected by a dedicated breaker, and it is necessary to monitor each branch for effective control of electrical variables and energy cost allocation. The mains supply to the distribution panel (located close to the server racks) must also be monitored and controlled. A device is needed that is able to monitor this number of channels, while keeping space and installation complexity under control. A traditional metering system requires too much space, and a short installation time is also crucial, as a single data centre can include 100 to 200 distribution panels. Carlo Gavazzi has conducted extensive research among end users and suppliers to data centres and mission-critical applications. It became clear that an innovative solution was required, one that offers: • Installation time savings • Space savings over traditional metering solutions • The ability to combine branch circuit and mains supply monitoring • Scalable, modular monitoring • High speed data links between the CT block and the main meter, thus reducing EMC issues. The result is Carlo Gavazzi’s WM50 branch circuit monitoring system. The WM50 is a complete solution for data centres and critical load applications. While the base unit monitors the mains supply, its two branch buses link up to eight 12-channel split-core current transformer (CT) blocks.
The system can therefore be scaled according to specific needs up to 96 branch circuits in any combination of three-phase and single-phase loads or two-phase and single phase loads. This approach reduces installation time by up to 75% when compared to existing solutions and affords a similar saving during commissioning.
Fast and intuitive The system configuration is extremely fast and intuitive: by following the graphical suggestions of the proprietary software or app, any different topological panel configuration can be easily made. All data can be transmitted to the BMS or data centre monitoring system via either Modbus RTU or Modbus TCP/IP protocols (a brief integration sketch appears at the end of this piece). The WM50 boasts major benefits for end users and installers: • Low measurement cost per channel – users can monitor up to 96 current channels with a single analyser thanks to the 12-channel current sensors. • Reduced installation time and errors – the system is equipped with detachable terminals for all connections. It connects to 12-channel current sensors with proprietary cables. The clips supplied with the sensors ensure that cables are always in order during installation. • Scalability – WM50 can be integrated with optional modules to expand its control and communication capacity. • Disturbance immunity – digital communications between current sensors and WM50 ensure excellent disturbance immunity. • Granular analysis – it provides total and single load measurements (up to 96 current channels).
“The ability for end users to quickly and cost effectively monitor power quality is crucial.”
• Clarity – the wide backlit LCD display clearly shows the measurements and the configuration parameter values. • Quick configuration – the proprietary UCS configuration software (desktop or mobile version) is free and permits quick system configuration and diagnostics. An optical port is also available for quick analyser configuration. The data centre market is booming and with it the need for a quick and cost effective means of monitoring power quality in such mission-critical applications. Carlo Gavazzi has built on its experience in these markets to offer an extremely flexible, scalable, compact, easy and intuitive solution. Can you afford to miss out? Carlo Gavazzi, carlogavazzi.co.uk
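Because the WM50 publishes its readings over Modbus, pulling them into a BMS or monitoring tool is largely a matter of polling registers. The sketch below uses the open-source pymodbus library (2.x-style API); the IP address, register address, count and slave ID are placeholders, and the real register map and scaling must come from Carlo Gavazzi’s WM50 documentation:

    # Illustrative Modbus TCP poll of an energy meter (pymodbus 2.x-style API).
    from pymodbus.client.sync import ModbusTcpClient

    METER_IP = "192.168.1.50"    # placeholder address
    REGISTER_ADDRESS = 0x0000    # placeholder: consult the WM50 register map
    REGISTER_COUNT = 2
    SLAVE_ID = 1

    client = ModbusTcpClient(METER_IP, port=502)
    if client.connect():
        result = client.read_holding_registers(REGISTER_ADDRESS, REGISTER_COUNT, unit=SLAVE_ID)
        if not result.isError():
            # How the raw registers combine and scale depends on the meter's documentation.
            print("Raw register values:", result.registers)
        client.close()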
Projects & Agreements
Etisalat, Singtel, Softbank and Telefonica create global cybersecurity alliance Etisalat, Singtel, SoftBank and Telefónica have signed an agreement to create the first Global Telco Security Alliance to offer enterprises a comprehensive portfolio of cybersecurity services. The alliance will be one of the world’s biggest cyber security providers, with more than 1.2 billion customers in over 60 countries across Asia Pacific, Europe, the Middle East and the Americas. Through their combined resources and capabilities, the group can protect enterprises against the rising cybersecurity risks as the information security environment becomes increasingly complex. Through the alliance, members can achieve operational synergies and economies of scale that will eventually help lower costs for their customers. The group’s members operate 22 world-class Security Operation Centres (SOCs) and employ more than 6,000 cybersecurity experts. To expand their global footprint, the alliance is open to bringing in new members over time. Under the agreement, the group will share network intelligence on cyber threats and leverage their joint global reach, assets and cyber security capabilities to serve customers worldwide. Leveraging each member’s respective geographic footprint and expertise, the alliance’s members are able to support each other’s customers anywhere and anytime, allowing them to respond rapidly to any cybersecurity threats. Etisalat, Etisalat.ae Singtel, singtel.com Softbank, softbank.jp Telefónica, telefonica.com
Veeam and Pure Storage to deliver data management platform for the always-on enterprise Veeam Software and Pure Storage have announced a new integration between Veeam Availability Platform and Pure Storage FlashArray to deliver business continuity, agility and intelligence for the modern enterprise. Integrating the cloud-friendly storage capabilities of Pure Storage FlashArray with Veeam Availability Platform makes each solution even more valuable to joint customers in today’s digitised world. “Storage integration is a key capability for the Veeam Availability Platform,” says Danny Allan, vice president of Product Strategy at Veeam. “It improves backup and recovery and then goes well beyond that to empower the digital enterprise, to leverage production data in new ways to drive value, including greater agility, faster time to market, lower costs, reduced risk, and enhanced operational management. Through our partnership with Pure Storage, we are providing customers with a radical new way to deliver competitive advantage.” Mike Meloy, EVP/GM of Involta, LLC, the first MSP Partner in the US for Pure Storage and a Platinum Veeam Cloud and Service Provider (VCSP) Partner, says, “Our mission is to simplify IT intelligence and end-to-end infrastructure for organisations that rely heavily on IT to achieve critical business outcomes. Veeam and Pure Storage enable better scale of our existing systems, higher performance, and smaller backup and replication windows.” Veeam, veeam.com
CIM Group and Fifteenfortyseven Critical Systems Realty begin construction of Bay Area data centre campus
CIM Group, in partnership with fifteenfortyseven Critical Systems Realty (1547), last week hosted technology, data centre and real estate industry executives to mark the start of construction activities for its 240,000ft² data centre campus on 7.3 acres at 400 Paul Avenue in San Francisco. "The project will offer a flexible design to serve either a single tenant or multiple users," says J. Todd Raymond, CEO of 1547. The first phase of development includes the comprehensive renovation of two existing 1930s-era buildings totalling 54,225ft², which are being modernised to provide creative office and support space for data centre tenants. In addition, site work will commence immediately for the campus, including a 187,000ft² purpose-built data centre. The new two-storey building will offer a secure and scalable data centre with a robust 24MW of power capacity. Strategically located just outside San Francisco's central business district, the property is at the hub of more than 15 fibre networks served by multiple international carriers and adjacent to one of the most critical interconnection data centres in the United States. CIM Group, cimgroup.com Fifteenfortyseven, 1547realty.com
UK Government appoints Plexal to create London's Cyber Innovation Centre at Olympic Park
Plexal has been appointed by the UK Government to deliver a major cybersecurity innovation centre on the site of London's Olympic Park. Opening in Spring 2018, the £13.5 million innovation centre will be led by Plexal, hosted in Plexal City at Here East (a 1.2 million ft² digital and creative hub), and will be delivered in partnership with Deloitte's cyber team and the Centre for Secure Information Technologies (CSIT) at Queen's University Belfast. Together they have formed a cross-disciplinary team with deep entrepreneurial, engineering and cybersecurity technical skills. The London Cyber Innovation Centre will incubate 72 cybersecurity companies of varying maturity over a three-and-a-half-year period. Each organisation will receive a customised programme of technical and commercial mentoring from some of the world's leading authorities on cybersecurity. Startups will also have access to, and insights from, further industry experts, including connection to the EPIC network of international cyber clusters and research hubs, which brings trade and investment opportunities on a global scale. The centre will also convene a diverse range of investment partners, from angels to VCs and institutional investors, providing participants with a broad selection of investment opportunities. Plexal, plexal.com
Proact improves KMWE Group's availability with 24x7 support
KMWE Group, a supplier and partner to both the aerospace and high-tech equipment industries, has chosen data centre and cloud service provider Proact to improve its IT availability. In order to support business operations, KMWE has chosen a round-the-clock monitoring service from Proact for selected infrastructure components, which will allow the company to deliver a faster, more reliable and scalable infrastructure environment. Proact proposed a solution based on enterprise-class technology combined with 24x7 proactive management and support, via Proact's managed cloud services. Thanks to real-time monitoring of the on-premises environment, KMWE can be reassured that potential problems are resolved before they become harmful and costly to the business, thus optimising availability and reducing risk. In addition to this management service, KMWE also selected Proact's Premium Support to provide incident alerts across the environment. By choosing from Proact's portfolio of managed cloud services, KMWE no longer has to deploy resources to keep the infrastructure running. Instead, the organisation is able to focus on strategy and innovation, concentrating on strategic projects that can add value to the business. In addition, the continued growth of the organisation can easily be supported thanks to the scalable features of the new solution. Proact, proact.eu KMWE, kmwe.com
Netscout joins Linux Foundation Networking
Netscout has announced it has joined the Linux Foundation Networking (LFN) ecosystem, which encompasses OpenDaylight, OPNFV, ONAP, FD.io, PNDA and SNAS. Through membership in LFN, Netscout will be able to collaborate with the open source community, contributing its virtualised visibility instrumentation and smart data solutions, which deliver actionable metadata and KPIs to drive network and services automation. "Netscout's membership in LFN allows us to bring the strength of our smart data technology to enable automation platforms such as ONAP," states Dr. Vikram Saksena, office of the CTO, Netscout. "As NFV and SDN technologies gain traction in the network, smart data driven automation platforms will be essential for creating an agile service delivery infrastructure for network operators. Our membership in LFN is an important step in helping to accelerate the agenda for automating network services in a virtualised environment." LFN was formed on January 1, 2018 as a new entity within The Linux Foundation that increases collaboration and operational excellence across its networking projects. LFN integrates the governance of participating projects to improve operational excellence and simplify member engagement. Netscout, netscout.com
Öresundskraft and Actility announce rollout of Swedish IoT network using unique 'City Hub' model
Öresundskraft, a Swedish energy company and fibre network operator, and Actility, a Low Power Wide Area (LPWA) network provider, have announced that they are working together to deploy a LoRaWAN IoT communication network within the Helsingborg Open City Hub, which could be the first of many rolled out across Sweden under the auspices of the StadshubbsAlliansen – 'City Hub Alliance'. In the Open City Hub model, the IoT connectivity platform is offered by the municipality as an open-access, commercially neutral network available to all companies or consumers in the city. Öresundskraft provides 'connectivity as a service' for the Helsingborg hub. "A Stadshubb is a regional LoRaWAN with an open and neutral wholesale business model for connectivity, which enables anyone who needs to communicate with LoRa sensors to do so easily, without having to build or operate their own infrastructure. This significantly reduces the threshold for service providers and end users to establish IoT services and solutions, thus accelerating and simplifying digitalisation in general and the development of the smart city in particular," explains Öresundskraft's Bo Lindberg. Actility, actility.com
Imperial College Healthcare NHS Trust selects Tintri storage to realise virtualisation strategy
Tintri has announced that Imperial College Healthcare NHS Trust has deployed Tintri as a central piece of its virtualisation strategy. Since deployment, the UK health trust has seen notable benefits, including increased storage performance and capacity, as well as a reduction in downtime and administration. Technology plays a key role in assisting the NHS – critical systems used 24/7 must have predictably fast performance. The trust had begun the process of virtualising its server infrastructure, but its enterprise SAN storage was not meeting performance and capacity requirements. IT staff were constantly tuning storage to maintain performance, drawing them away from higher-impact projects. With close to 1,500 VMs, this represented a significant resource overhead. After considering a number of alternatives, the trust's IT team deployed three Tintri systems. Immediately, the time spent managing storage dropped to near zero. The Tintri systems supported the trust's workloads across both VMware and Hyper-V, shrinking its storage footprint. Tintri's VM-level quality of service controls allowed critical VMs to perform flawlessly at all hours of the day. As a result, the trust was able to redeploy its SAN storage to focus on physical servers and file servers while Tintri managed its virtual estate. Tintri, tintri.com
Virtual 24/7 health advice service becomes a reality for babylon with DataStax
Getting a doctor consultation at a time that suits you can be incredibly difficult, especially at short notice. Using new technologies and artificial intelligence, babylon provides users with constant access to virtual consultations with doctors and healthcare professionals via text, video messaging and AI technology, based on DataStax Enterprise. DataStax, with its distributed cloud database built on Apache Cassandra and designed for hybrid cloud, enables online health service provider babylon to power its real-time application service with DataStax Enterprise (DSE). With more than 1.4 million members spanning the UK and Rwanda, babylon offers real-time, personalised health advice via mobile devices, keeping its members' records secure at all times. babylon needed a technology partner that could support delivering 'health advice in an instant', using a scalable and comprehensive data layer. DataStax Enterprise provides resiliency and continuous availability for applications, enabling companies to grow their services so that they remain scalable, responsive and accessible at all times. Using DSE, companies can deploy applications in the cloud around the world while keeping firm control over where specific sets of data are stored, who has access to them and how that data is used over time. DataStax, datastax.com
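To make that data placement point concrete, here is a minimal sketch using the open source DataStax Python driver: a keyspace replicated only to a named data centre keeps its rows in that location. The contact point address, data centre name and table definition are hypothetical placeholders for illustration, not details of babylon's actual deployment.

from cassandra.cluster import Cluster

# Connect to the cluster (the contact point is a placeholder address).
cluster = Cluster(["10.0.0.1"])
session = cluster.connect()

# NetworkTopologyStrategy sets a replica count per data centre, so rows in this
# keyspace are only ever stored where replicas are configured (here: 'uk_dc' only).
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS uk_member_records
    WITH replication = {'class': 'NetworkTopologyStrategy', 'uk_dc': 3}
""")

# A simple table living entirely within that keyspace, and therefore within 'uk_dc'.
session.execute("""
    CREATE TABLE IF NOT EXISTS uk_member_records.profiles (
        member_id uuid PRIMARY KEY,
        name text,
        last_consultation timestamp
    )
""")

cluster.shutdown()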
DigiPlex gains new Telia Carrier 100G backbone PoP at Stockholm data centre
DigiPlex and Telia Carrier have announced a collaboration to meet customer demand for superfast, high-bandwidth and low-latency internet access through the installation of a new Point-of-Presence (PoP) in DigiPlex's award-winning and carrier-neutral data centre north of Stockholm. The Stockholm PoP extends the partnership between DigiPlex and Telia Carrier, following the success of a previous deployment at one of DigiPlex's data centres in Norway, Ulven in Oslo. The two PoPs give DigiPlex customers in Sweden and Norway direct access to the Telia Carrier backbone, one of the largest and best connected in the world and the first to be 100G-enabled in both Europe and North America. The new PoP will allow DigiPlex customers to gain all the green benefits of using a data centre in a cool climate, while retaining the ability to deliver low-latency content to end users, wherever they are in the world. DigiPlex CEO Gisle M. Eckhoff says, "The data centre is increasingly becoming an interconnected business ecosystem for critical digital operations. We are delighted that DigiPlex customers now may take advantage of the increased level of connectivity that Telia Carrier brings." DigiPlex, digiplex.com Telia Carrier, teliacarrier.com
Molex and TTTech announce collaboration to develop industrial IoT solutions
Molex and TTTech have announced a collaboration based on their shared vision of open, flexible and interoperable systems in the Industrial Internet of Things (IIoT). Today, the industrial automation market is experiencing a tectonic shift towards more openness and tighter integration. Existing inflexible infrastructures are struggling to keep up with the changing demands of this increasingly digitised business environment. Molex and TTTech have agreed to address these demands for greater interoperability, information transparency and connectivity by leveraging their combined OT (Operational Technology) and IT expertise. "TTTech's IIoT platform complements Molex OT solutions, and together we can deliver an open, end-to-end solution operating from the sensor to the cloud and anything in between," says Riky Comini, director of industrial automation at Molex. "By combining the extensive expertise Molex has in industrial automation and industrial communication protocols with TTTech's undisputed leadership in deterministic networking and open IT platforms, we can bridge the gap between OT and IT to build solutions that bring the full benefits of technology to our customers." Molex, molex.com TTTech, tttech.com
3W Infra connects its global network to Asteroid IXP, expanding its networking ecosystem in Amsterdam
3W Infra, a fast-growing Infrastructure-as-a-Service (IaaS) hosting provider from Amsterdam, has added the Asteroid Internet Exchange Point (IXP) to its ecosystem of network infrastructure providers. Under the signed cooperation agreement, 3W Infra will interconnect its high-volume (160Gbps) global network with Asteroid's IXP in Amsterdam. This will significantly expand 3W Infra's networking capabilities and interconnectivity options in the Amsterdam metropolitan area while reducing networking costs for its clients. Asteroid operates its network-neutral Internet Exchange Point from Amsterdam Science Park – an area of 70 hectares in Amsterdam, the Netherlands, where research institutes and related companies have a presence, as well as data centres such as Interxion, Digital Realty, Equinix, and NIKHEF. Asteroid houses its IXP in the NIKHEF data centre – one of the largest internet hubs in Europe and part of a research institute, the Dutch National Institute for Subatomic Physics. "Asteroid delivers interconnection services, but does it quite differently to many of the existing IXP players in the market," says Remco van Mook, CEO of Asteroid. "We don't deliver metro-wide connectivity, transport or cloud services, and we don't compete with our customers. In fact, we have gone back to the origins of peering, focusing on delivering highly efficient, cost-effective, and low-latency local interconnection." 3W Infra, 3winfra.com Asteroid, asteroidhq.com
Ruckus Networks introduces better cloud-managed Wi-Fi for schools, retail and SMBs
Ruckus Networks has announced the European market availability of Ruckus Cloud Wi-Fi, a cloud-managed Wi-Fi solution that lets network administrators manage any number of locations through a single web or mobile app-based dashboard. Ruckus Cloud Wi-Fi is designed to help 'lean IT' teams at schools, retailers and small and medium-sized businesses (SMBs) reduce the time spent managing a multi-site network, while ensuring a first-class connection experience for students, guests and customers. Ruckus Cloud Wi-Fi lets organisations lower their total cost of ownership (TCO) by combining cloud efficiency with high-performance access points (APs) that serve more users over wider areas. As the number of connected devices grows and operations become more digitised, organisations are recognising the need for highly reliable Wi-Fi connectivity that can be easily managed and scaled with minimal effort. This meets the needs of retail organisations such as restaurant chains that want to engage with connected guests and support wireless point-of-sale devices, while providing the great Wi-Fi experience that customers expect. "Ruckus Cloud Wi-Fi is easy to deploy, use and manage," says Fabian Wehnert, head of digital and IT at Apeiron GmbH, a restaurant chain in Germany. "On top of that, it provides our staff and guests with outstanding Wi-Fi, which has allowed us to take our business to the next level." Ruckus, ruckuswireless.com
Satellite Applications Catapult deploys Cloudian for limitlessly scalable storage
Cloudian has announced that Satellite Applications Catapult, a UK-based innovation company helping businesses of all sizes realise the potential of space, has replaced its legacy NAS storage systems with the Cloudian HyperStore object storage system and the HyperFile NAS controller. Since the organisation was founded in 2013, its NAS device estate had steadily grown to the point where data centre footprint was at a premium. With data acquisition rates predicted to double from 5PB of unstructured data per year to an estimated 10PB per year, a new solution was required. Satellite Applications Catapult initially installed four Cloudian appliances, saving nearly 75% on data centre footprint by reducing the rack space needed for this capacity from 60U to just 16U. The move also gave Satellite Applications Catapult limitless scalability to cope with future storage growth: HyperStore allows the company to expand its storage as needed by adding nodes that are automatically incorporated into its storage pool. The HyperFile NAS controller deployed with HyperStore provides the functionality of traditional enterprise NAS, including connectivity with Windows and Linux-based applications. The switch also slashed the company's support costs by 40% compared with the cost of supporting its NAS storage arrays. In addition, savings on power and cooling alone nearly offset the cost of the new storage hardware. Cloudian, cloudian.com
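For readers who want a feel for how applications typically address an object store like this, the sketch below uses boto3 against an S3-compatible endpoint (Cloudian HyperStore exposes an S3-compatible API). The endpoint URL, credentials, bucket and object names are hypothetical placeholders, not details of Satellite Applications Catapult's deployment.

import boto3

# Point a standard S3 client at the object store's endpoint (placeholder URL
# and credentials shown here).
s3 = boto3.client(
    "s3",
    endpoint_url="https://hyperstore.example.org",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Create a bucket and upload an object; capacity behind the endpoint can grow
# simply by adding storage nodes, with no change to this client code.
s3.create_bucket(Bucket="satellite-imagery")
with open("tile_001.tif", "rb") as data:
    s3.put_object(Bucket="satellite-imagery", Key="scenes/2018-05-01/tile_001.tif", Body=data)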
Interoute to support upscaling of global internet company's European business operations
Interoute has announced that it has been selected by a global internet technology company to deliver over 1,000 kilometres of dark fibre across south-west Europe. With growing European end-user requirements, the expanded capacity from Interoute will provide business-critical assurance for the internet technology company's platform. This will enable the organisation to deliver high performance for a range of bandwidth- and processing-intensive application workloads. Joel Stradling, research director at GlobalData, says, "Interoute stands out in the market with its wholesale dark fibre. As owner-operator of a large and advanced pan-European fibre and data network, its strong Mediterranean presence supports customers including OTTs and internet giants with getting close to new and existing cable systems coming into the region." Jonathan Wright, VP of Commercial Operations at Interoute, comments, "In a competitive marketplace we are proud to be a preferred provider for global companies with substantial data demands. With billions of dollars being invested into the online and cloud space, we will continue to provide the underlying infrastructure needed to support their growing expectations." As capacity requirements continue to grow worldwide, Interoute's network infrastructure is able to deliver reliable and versatile solutions for its customers and, in turn, their end users. Customers use Interoute's services to access its pan-continental footprint, which links all four corners of Europe to the rest of the world. Interoute, interoute.com
Company showcase – sponsored stories from the industry
Centiel to quadruple UPS production volumes with new manufacturing facility
Centiel SA has announced it aims to quadruple UPS production volumes with the development of a new manufacturing facility located in Lugano, Switzerland. The new factory will become Centiel SA's global headquarters and will house R&D, production, final test, sales and marketing, logistics and finance, in addition to quality control of all Centiel's UPS solutions. Filippo Marbach, founder of Centiel SA, explains, "Based upon the demand we experienced in 2017 and the growth we are already seeing this year for our 4th generation UPS technology, plus our projected forecasts, it became clear that we would outgrow our existing factory by the end of the year. Our new factory will enable us to quadruple production volumes and we are already planning to increase production capacity even further in 2020/21." Gerardo Lecuona, co-founder and global sales director of Centiel SA, confirms, "The factory move, planned for completion by September 2018, will give us both the additional space we need to increase our capacity and also maintain our existing excellence in logistics and speed of delivery." Filippo continues, "Significant effort has been put into Centiel's pre- and post-sales support infrastructure to ensure it delivers class-leading service across the globe. When your solutions are deployed in the Arctic Circle and in the deserts and rainforests of the world, delivering excellence consistently requires considerable organisation and resources. However, we are passionate about supporting our clients and ensuring their power is always protected, regardless of their location." Centiel has recently launched new 25kW and 60kW UPS modules for its pioneering 4th Generation Modular UPS system, CumulusPower. This three-phase, modular system is now offered with 20% more power density. The new modules complete a family that also includes 10kW, 20kW and 50kW options. Centiel also offers PremiumTower, a stand-alone version suited to applications where minimising total cost of ownership is a significant factor, offering the ultimate in UPS flexibility. Centiel, centiel.co.uk
Rackspace launches Bare Metal as a Service to simplify migration of workloads outside customer data centres
Rackspace has expanded its managed hosting portfolio to include Bare Metal as a Service (BMaaS) functionality. With the addition of six new bare metal instances, Rackspace managed hosting customers who use BMaaS can now provision infrastructure on demand and have it delivered in minutes rather than hours. For organisations seeking more flexibility and automation options in their non-cloud environments, Rackspace bare metal infrastructure provides a pay-as-you-go pricing model and a wide range of self-service APIs. Rackspace has designed BMaaS with performance in mind: two bare metal instances are custom-built for high performance computing workloads, with NVMe SSDs for I/O optimisation and GPUs for advanced acceleration and parallel computational capabilities. Bare Metal as a Service includes access to both physical and virtual firewalls and load balancers, additional storage and advanced networking capabilities. As organisations large and small move away from investing scarce capital into building and maintaining their own data centres, Rackspace delivers a wide range of options to aid their long-term strategic plans. An expanded bare metal portfolio plays an important role in modernising IT by enabling a broad class of applications to migrate unchanged out of customer data centres, minimising disruption, cost and risk. Controlling the risk around app migration is extremely important when moving data-intensive and mission-critical applications off-premises. Bare Metal as a Service enables a simpler 'lift and shift' out of the data centre by reducing the need to refactor legacy applications and providing the increased hardware access and custom OS options enterprises require. With Rackspace bare metal infrastructure, customers can use the same physical load balancing and network security platforms they leverage on-premises, reducing change to deployment architecture and operations processes. Rackspace, rackspace.com
Weatherite launches WispAir range of air handling units
Using specially developed selection and quotation software, integrated with a purpose-designed interface programme that seamlessly links design, purchasing and manufacturing processes, Weatherite's new WispAir range of standard/modular AHUs combines a rapid quotation turnaround with fast-track delivery of a range of innovative, energy-efficient, fully compliant air handling units. Weatherite has responded to the industry's requirement for quick turnaround of quotations and a competitively priced, quality 'finished product', and has spent the last 12 months developing its WispAir AHU range to specifically match the industry's fast-track requirements. As Steve Cartledge, Weatherite's sales director, explains, "Lead times are getting shorter and there will always be increasing pressure on costs. The challenge for us was to design, build and deliver quality, energy-efficient, fully compliant AHUs that meet the client's exact requirements, within the shortest possible time, and at the right price. We've done our homework and have spent some considerable time developing our quotation software and manufacturing systems, and we know we can compete in this market, having won a number of major orders recently." The WispAir range covers typical air flow rates from 0.3m³/s to 35m³/s; however, Weatherite can deliver AHUs in any specific size or configuration, quickly and competitively. WispAir offers an extensive range of configurations to suit each individual application and can even supply units in multiple sections or as a flat-pack solution, to suit dimensional or access constraints. All WispAir AHUs are fully ErP compliant (ErP being the EU directive aimed at improving the energy efficiency and other environmental performance criteria of related products) and incorporate the very latest technology, delivering exceptional performance, reliability and energy efficiency. Using high-efficiency fans and motors and incorporating the latest heat-recovery technology, the WispAir range delivers an exceptionally quiet, compact, cost-effective solution for small, medium and large applications. The WispAir range further enhances Weatherite's extensive range of HVAC solutions. Weatherite, weatheritegroup.com
3W Infra releases mid-range and high-end, Dell-powered dedicated server packages
3W Infra's new high-end and mid-range dedicated server packages are powered by Dell's latest, 14th generation PowerEdge server technology – the PowerEdge R540 and PowerEdge R440 respectively. These server hardware types replace 3W Infra's previous server packages based on the R430 and R530. As a provider of Infrastructure-as-a-Service (IaaS) solutions on a global scale, 3W Infra has thoroughly tested the 14th generation Dell hardware and expects the new dedicated server plans to cater to the efficiency and performance needs, even the most demanding, of a variety of its existing and new customers worldwide. "3W Infra engineers have run multiple tests with both of these Intel-powered Dell server types on the test bench in our flagship data centre in Amsterdam, and they found remarkable improvements when compared to the previous generation of Dell servers," says Roy Premchand, managing director of 3W Infra. "Our testing results show that the equipment is delivering serious efficiency and performance enhancements, as promised by Dell." "Providing up to 140TB of storage capacity, 3W Infra's new high-end dedicated server offering is able to efficiently handle high-capacity business workloads," adds Roy. "This would make the R540 dedicated server package ideal for applications like software-defined storage, messaging, video streaming and virtualisation, to name a few." "3W Infra's mid-range R440 dedicated server plan, on the other hand, is optimised for high-density, scale-out computing. This makes it an ideal solution for running virtualisation applications and web serving. With its dual-processor architecture and ample storage and memory, our R440-powered dedicated server is a good fit for HPC applications as well as standard business applications – for businesses looking for performance but also for efficiency within their dedicated server environments." 3W Infra, 3winfra.com
final thought
Residency regs
Chris Adams, president and COO at Park Place Technologies, discusses the rise of data residency regulations.
There is an emerging trend among nation-states requiring that data on their citizens be stored in their own country. Commonly referred to as data residency or data localisation regulations, these rules are becoming a major challenge to IT operations for a variety of companies doing business across borders.
A recent showdown between LinkedIn and Russia
Under the concept of data sovereignty, digital data is subject to the laws or legal jurisdiction of the country where it is stored. Increasingly, nation-states want oversight of their citizens' data. Moreover, in the post-Snowden era, many governments are especially interested in guaranteeing that any snooping that is going on is done by them, not by allies or adversaries. It may come as no surprise that Russia is one of the countries exerting the strictest data controls. In fact, the country's data localisation rules were central to LinkedIn's recent decision to abandon the Russian market. A 2015 law now requires information on Russians to be maintained on servers in Russia, and after some reprieve, enforcement is rolling. The government claims the new law protects citizens' privacy, but critics argue the real goal is to facilitate Russia's tracking of residents' activities and opinions. No matter the intent of the rule, LinkedIn decided to opt out. From the consumer side, a VPN appearing to be in Russia will allow residents to access LinkedIn, should they see an advantage in doing so. Otherwise the six million local users will be cut off from this employment-focused social media service.
An expanding challenge
It would be one thing if Russia were alone in enacting data residency rules, but dozens of countries on six continents have their own requirements. Some affect all data while others cover only specific types, such as health or genetic information. Overall, the trend is toward more, not less, government intervention and rule-making. In other cases, data residency is a by-product of policies that make out-of-country data storage, hosting, transfer, or processing unfeasible. This may soon create expensive and difficult logistical demands. While there are blanket data localisation regulations, many countries establish such requirements only for specific types of data or particular processes, services, or transactions. Many countries, faced with cybersecurity issues and privacy complaints from citizens, are looking to data residency as a solution, even though it may be of limited value. Other nations are operating in bad faith, using privacy and security as a cover to implement data rules whose real intent is protectionism or state surveillance. In both cases, the trends are worrisome.
The trade effects of data localisation
By definition, data localisation is a barrier to global trade. It looks increasingly possible that the relatively free flow of data over the internet will soon be segmented into state-based 'islands'. This would greatly reduce efficiency and deliver a hit to economic growth. A report by the Information Technology and Innovation Foundation (ITIF) highlights the impacts in terms of lost trade and investment opportunities, higher IT costs, reduced competitiveness, and lower economic productivity and growth. They're not the only ones raising a red flag. Data residency has become common enough to make the 2017 trade barriers report. No wonder, considering cross-border data flows are worth at least $2.8 trillion, according to a McKinsey study. Blockages will have consequences, some say disastrous ones. Undoubtedly, much of the current interest in data localisation is inherently protectionist. Barriers ensure that local firms, whether they be data centres, app creators, or accountants, have an advantage over foreign ones.
Data residency has the benefit of driving domestic investment and jobs, and advocates for emerging economies argue that protectionism is necessary as a temporary measure until their companies are ready to compete. Whether these nations will ultimately lower data residency requirements remains in doubt.
What data localisation means for business
Companies are already complaining about the costs of large data transfers and local maintenance arrangements, which can reach well into the millions of dollars. There are also significant concerns about intellectual property theft and data security, especially for information kept on servers in Asia, where breaches occur at almost twice the global average. The logistical headaches associated with data localisation rules may prove substantial as well. Many enterprises are based on cross-border business models, in which data is collected in one country and transferred to another part of the world for processing. It's unclear how some of these operations can continue. Especially for small and internet-based companies, a lack of legal and financial expertise will make it difficult to adapt to differing regulations in each nation where they have customers. Imagine a simple e-newsletter needing to maintain email addresses for its subscribers in their own countries. Some, but far from all, regulatory structures have taken into account the challenges of under-resourced enterprises, but there are places where it may be too dangerous from a compliance perspective to contemplate doing business.
How are companies complying?
With data localisation laws cropping up all the time, enterprises must find ways to adapt. More and more companies are looking to vendors to stay up-to-date on the rapidly changing compliance landscape for physical storage and data transmission outside national borders. At present, the requirements are more onerous for companies operating in certain spheres, such as healthcare, finance, and government. Fortunately, some cloud services providers are developing specialised offerings and expertise in these and other key verticals.
As comforting as it would be to farm out all responsibility to cloud services vendors, enterprises should consider maintaining a reasonable degree of knowledge in-house, or engaging a specialist law firm to ensure their own data handling practices, as well as those of their cloud partners, don’t overstep any legal lines. Yes, this will cost money and the added complexity across an international data centre network will compromise efficiency. But with high fines and market access at stake, increased vigilance is the only safe option today and probably for years to come. Park Place Technologies, parkplacetechnologies.com
Data Centre News is a new digital, news-based title for data centre managers and IT professionals. In this rapidly evolving sector it's vital that data centre professionals keep on top of the latest news, trends and solutions – from cooling to cloud computing, security to storage, DCN covers every aspect of the modern data centre. The next issue will include a special feature examining Big Data and the Internet of Things in the data centre environment. The issue will also feature the latest news stories from around the world, plus high-profile case studies and comment from industry experts. REGISTER NOW to have your free edition delivered straight to your inbox each month, or read the latest edition online now at…
www.datacentrenews.co.uk