WINTER 2014
DATAcentreMANAGEMENT
MAKING A SPLASH: Is it time to invest in DCIM?
INSIDE: News • DCIM • Storage

WELCOME
EDITOR JOHN HATCHER j.hatcher@turretgroup.com 01923 692670
ADVERTISING PAUL LANE 0207 348 5259 p.lane@closerstillmedia.com
CIRCULATION ELAINE PRENTICE 0844 334 6661 circulation@motivationmarketing.co.uk
PRODUCTION CAROL BAIRD 01923 692676 c.baird@turretgroup.com
Independent research commissioned by Zenium Technology Partners has found mounting pressure on the data centre to meet changing business requirements through the adoption of new technology and the ongoing evolution and optimisation of data centre infrastructure. This shows how quickly the data centre market is moving, that there is still huge opportunity for growth, and that improvements in technology mean the data centre will continue to change and adapt in the coming years. You will be able to keep up with all these changes at next year’s Data Centre World, which takes place on 11-12 March 2015 at ExCeL in London. The event promises to be bigger and better than ever, and you can register at www.datacentreworld.com. More than 150 companies are already booked to exhibit, and the free conference will deliver information to make your operation more efficient.
John Hatcher Editor
CLASSIFIED SALES PAUL LANE p.lane@closerstillmedia.com
CONTENTS

4 News: DCM looks at all the news from across the industry, with news from home and abroad
12 DCIM: An in-depth look at DCIM and its capacity to manage data centres efficiently
14 Geist explores the process of intelligent containment
24 How intelligent workload management can boost efficiency, plus a look at how an SDN-enabled data centre can speed up your business
28 Power: Socomec says that it is possible to futureproof your business
29 Cooling
34 Storage & Software: A look at the creation of 3D V-NAND
37 Energy efficiency: How using IT intelligently can boost the energy efficiency of your data centre

EXHIBITION SALES RABINDER AULAKH 0207 348 5770 rabinder.aulakh@closerstillmedia.com

Produced by Turret Group on behalf of CloserStill, Suite 17, Exhibition House, Addison Bridge Place, London, W14 8XP. Tel: +44 (0) 20 7348 5250
ISSN: 1753-9897 Printed by: Stephens & George © Copyright CloserStill Ltd 2014. All rights reserved. No part of this publication may be reproduced in any material form (including photocopying it or storing it in any medium by electronic means, and whether or not transiently or incidentally to some other use of this publication) without the written permission of the copyright owner, except in accordance with the provisions of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd. Applications for the copyright owner's written permission to reproduce any part of this publication should be addressed to info@turretgroup.com
www.datacentremanagement.com now has RSS feeds DATAcentreMANAGEMENT WINTER 2014 3
NEWS
NEWS IN BRIEF ■ VIRTUALLY THERE Interoute has announced it will open its second German Interoute Virtual Data Centre (VDC) zone in Frankfurt on 1 December this year. The Interoute VDC zone in Berlin has been hosting data and applications for customers since it opened in 2012. The second zone, in Frankfurt am Main, combined with Interoute’s advanced private and public networking capability, meets growing customer demand for secure, enterprise-grade cloud computing inside German national borders, offering complete data control and ultra-low in-country latency on a secure, flexible and resilient in-country platform for all computing needs. The addition of dual-zone in-country capability to the Interoute Virtual Data Centre, available across 12 zones globally, means Interoute can meet the local needs of European businesses while providing a global platform for growth and expansion.
Mounting pressure on the data centre

Independent research commissioned by Zenium Technology Partners has found that there is mounting pressure on the data centre to meet changing business requirements through the adoption of new technology and the ongoing evolution and optimisation of data centre infrastructure. This has driven 70% of companies to allocate at least some of their overall IT budget to modernisation initiatives. However, just 21% of respondents went on to say that they could definitely scale in response to demand from the business. Rising concern about how best to meet fluctuating needs for storage and computing resources also appears to be driving interest in outsourcing as a solution: 51% of respondents said that their IT infrastructure would be considerably improved if they chose to outsource data centre requirements.

The research, entitled ‘Motivation to Modernise’, found that among those with budget allocated, on average only 24% have specifically set aside funds for data centre modernisation. Most companies (56%) have assigned 30% or less of their budgets to this area and, worryingly, 22% have assigned only 10% or less. The modest budgets assigned purely to modernisation may explain in part why 60% of IT professionals were only prepared to say that they could ‘possibly’ scale to support their business needs within the next 3-5 years. Unfortunately, 16% were more negative, saying that their data centre will probably NOT be scalable over this time frame, and another 2% were certain that what they currently have in place is not appropriate for their future business needs.

Interestingly, the report also found that confidence in outsourcing as a potential solution for scalability and modernisation is high: 94% of the senior IT professionals questioned felt that outsourcing their data centre requirements would improve their company’s IT infrastructure to some degree, with 13% believing it would improve radically and another 51% saying it would be considerably improved. Those with current experience of outsourcing were more positive (99% think outsourcing improves a company’s IT infrastructure to some degree) than those who do not outsource (79%). This represents a massive endorsement of outsourcing as a ‘tried and tested’ solution for a variety of short- and long-term data centre issues.

New gen sets

Cummins Power Generation is introducing the QSK95 Series, a new line of high-performance generator sets. The QSK95 generator sets are Cummins Power Generation’s most powerful diesel generator sets to date, offering up to 3.5 MW at 60 Hz and 3.75 MVA at 50 Hz. They are engineered with the highest kilowatt-per-square-foot ratio in their class, resulting in a smaller footprint that achieves a 20 percent improvement in power density. While the new generator sets boast more power, they also offer best-in-class fuel economy: over the course of 8,000 hours of operation, the QSK95 can achieve fuel savings of more than $400,000.
Security still a concern

Over 50% of organisations that host some or all of their data offsite identified security as their main concern, according to research conducted by Pulsant. This is in stark contrast to other issues, such as the reputation of the data centre provider (15%) and the location of the data centre itself (14%), which were identified by only a small percentage of organisations as a main consideration. The research highlighted that 83% of organisations still keep their business-critical data on-premise. However, it also identified a move in the industry towards more hybrid approaches, with the report revealing that two-thirds of organisations are making use of hybrid hosting models, while only 11% of respondents store all business data offsite. The research recognised that although 24% of organisations store all business data on-premise, there is increasing confidence in externally hosted solutions that meet specific certified criteria to host business-critical data to acceptable service levels. “The maturity of hybrid hosting models which operate to certified data management best practices is now enabling businesses to securely migrate data off-premise. Suppliers are mitigating data transition risks through migration expertise and technologies, allowing businesses to maintain control of their data while leveraging external supplier benefits,” says Matt Lovell, CTO, Pulsant. “Our research identifies that organisations continue to have data management and security concerns.”
WWW.THEstack.COM
Data center solutions from Siemens For the factories of the 21st century
Data centers store the most valuable company possession: data. But the challenges for data centers are manifold. To address these challenges, Siemens has pooled its vast expertise and experience across several disciplines into a comprehensive portfolio of integrated solutions and global services for data centers. Our experts have teamed up to
help you manage the infrastructure of your data center for maximum uptime, reliability and efficiency, optimizing everything from data center management, automation and control, power distribution with Totally Integrated Power (TIP) to fire safety, security and services.
siemens.com/datacenters
NEWS IN BRIEF ■ CHANNEL PARTNER Next Generation Data (NGD) says that Turrem Data has signed as a channel partner and will be taking colocation space at its NGD Europe Tier 3 mega data centre. Turrem Data will offer systems integrators and large multinationals NGD’s high-security, ISO 27001-accredited colocation data centre facilities as a pan-European hub for hosting, storing and remote monitoring of world-class digital security and data forensics solutions. Local systems design, implementation and further customer support services are being provided through Turrem Data’s pan-European network of security specialist resellers. ■ NEW OFFICE Node4 has opened a new office in London. The new office, located in the City, is part of Node4’s ongoing commitment to provide direct support to its new and existing customers at a regional level. It will allow Node4 to meet with London-based customers and demonstrate its comprehensive range of solutions, enhancing direct communication with customers in the City and opening up further opportunities for them to take advantage of Node4’s extensive solutions portfolio.
Solid solution

Keysource has commenced work on a data centre for leading cloud computing expert brightsolid. The centre represents a £5 million investment and will be located at the Aberdeen Journals’ Lang Stracht site. Detailed planning is underway, with the data centre build scheduled to commence during October 2014 and the facility due to open in April 2015. The 2,200 sq m site will initially comprise 210 high-density racks with capacity of 30 kW per rack. It has been designed by Keysource to expand to twice this size, allowing for storage of 400 petabytes of data. The facility has also been designed to achieve an annualised Power Usage Effectiveness (PUE) of 1.25. This will make the data centre substantially greener than the industry standard and on a par with the highest-performing data centres, including those of other leading cloud providers.
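As a rough illustration of what an annualised PUE of 1.25 implies at this scale, the back-of-envelope sketch below uses only the rack count, per-rack capacity and PUE target quoted above; everything else is an illustrative assumption, not brightsolid's design data.

```python
# Back-of-envelope sketch of what an annualised PUE of 1.25 implies.
# Rack count, per-rack capacity and the PUE target come from the article;
# the calculation itself is illustrative only.

RACKS = 210          # initial build-out
KW_PER_RACK = 30.0   # design capacity per rack
PUE = 1.25           # annualised Power Usage Effectiveness target

it_load_kw = RACKS * KW_PER_RACK        # total IT load at full design capacity
facility_kw = it_load_kw * PUE          # PUE = total facility power / IT power
overhead_kw = facility_kw - it_load_kw  # cooling, distribution losses, etc.

print(f"IT load:       {it_load_kw / 1000:.2f} MW")
print(f"Facility draw: {facility_kw / 1000:.3f} MW")
print(f"Overhead:      {overhead_kw / 1000:.3f} MW "
      f"({overhead_kw / it_load_kw:.0%} of IT load)")
```

In other words, at a PUE of 1.25 every kilowatt of IT load carries only a quarter of a kilowatt of facility overhead.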
Standing by Ukraine
Madek has successfully completed the delivery of a standby power solution for the largest commercial data centre in Ukraine. Madek’s team of engineers supplied two P1375E3 and four P1700P1 generator sets to the Parkovaya Data Centre, which is located in the centre of Ukraine. Commissioned early in 2014, the project was delivered on time, on budget and to specification. All six generator sets were fitted in a special compartment within the data centre’s engine room. They are synchronised with each other and will provide emergency power in the event of a mains failure. Commenting on the project, Dmitry Gladkyi, commercial director of Madek, said: “The successful delivery of this power solution for such a high-profile data centre is the latest in Madek’s ever-growing catalogue of projects completed to the highest standard and quality. We have developed a reputation as specialists in data centre power provision, with our generator set experts identifying the most effective solutions to ensure clients’ peace of mind.”
FOOD for THOUGHT
Franek Sodzawiczny, CEO, Zenium Technology Partners
As economies stabilise, it’s encouraging to see that the investment community is once again investigating opportunities for the next best return, and that the data centre sector continues to be an attractive proposition. Of course the market landscape is changing. Mergers and acquisitions are changing the face of the better-known brands and, in parallel, new players are attempting to carve out their niche. But what will the critical considerations on the business agenda be in 2015? We decided to explore this question by commissioning an independent research study into the current issues in the data centre sector. Interestingly, it found that whilst data centre modernisation might be a buzzword today, it may not be such a big focus tomorrow. Budgets are assigned to this issue, but it appears that they aren’t as extensive as you might expect. Whilst modernisation is considered important, it’s not regarded as important enough for many C-level executives to feel the need to be involved in the process. This is a real concern when you also take into account that a staggering 94% of the senior IT professionals questioned admitted that their in-house data centre is technologically out of date. The disconnect between what is in place and what is needed for the future is very worrying. There is massive demand for increased storage and computing power to manage huge volumes of data, and the impending impact of the ‘Internet of Things’ will only exacerbate these problems. To that point alone, just 21% of the respondents felt that their data centre could definitely scale in response to demand from the business. The skills gap is set to remain a ‘hot topic’: 64% of respondents said they feel it will have a detrimental impact on their business and, as a result, 29% indicated that they have started taking steps to tackle the issue. But that still leaves a lot of exposure across the rest of the industry.
Outsourcing, thankfully, will also remain popular, and for a perhaps surprising reason. The report findings suggest that companies now view outsourcing as a business improvement process and not just a way of ensuring extra space is available. The key objectives for outsourcing today include the desire to improve scalability, tackle data centre modernisation issues and ultimately improve IT infrastructures (94%). This is a significant endorsement of outsourcing as a ‘tried and tested’ solution for a variety of short- and long-term data centre issues. It’s great to see that the fundamental business drivers for the industry will remain the same: scalability, agility, flexibility and managing data growth. But it has to be said that if data centre and IT strategies continue to be hindered by restricted budgets, lack of foresight and limited innovation, the gap between what is needed and what is in place will only get wider.
NEWS IN BRIEF ■ CLOUD PRICE To examine the real-world cost of cloud computing over time, 451 Research is launching a Cloud Price Index. Like a consumer price index, the 451 Cloud Price Index (CPI) is made up of a basket of goods; in this case, it includes the services required to operate a typical Web server application. In the first edition of the Cloud Price Index, the average hourly price for a typical Web application is $2.56, with the 'hyperscalers' (AWS, Microsoft Azure and Google Compute Engine) slightly cheaper at $2.36. The CPI is based on quotes and estimates derived from a range of cloud providers based on a specification of a typical multi-service cloud application. ■ FLAGSHIP Juniper Networks has introduced a virtualized version of its flagship MX Series 3D Universal Edge Routing platform to deliver the industry’s first full-featured, carrier-grade virtualized router. The Juniper Networks vMX 3D Universal Edge Router, which operates as software on x86 servers, gives service providers and enterprises the ability to seamlessly leverage the benefits of both virtual and physical networking so they can rapidly deliver services and cost-effectively keep ahead of customer demand.
Internet more important than HS2

The vast majority of nearly 200 business leaders based in the North West believe improvements to internet infrastructure are more important for their businesses than improved rail links to the north, and that the Government needs to do more to make the ‘Northern Powerhouse’ a reality, according to a recent survey undertaken by DataCentred, a Manchester-based provider of data centre and open-source cloud computing services. Headline findings from the survey:

· 87% believe that devolving more power to local authorities will increase business growth in the regions, and as such welcome the move to have an elected mayor in Greater Manchester
· 79% think the extension of high-speed internet infrastructure across the UK is more important for their business needs than improved rail links via HS2 and HS3
· Half of respondents are confident that the region can support technology companies and believe that these businesses will be more successful if based outside of London
· Respondents were split on what would best support the development of tech businesses outside of London, between those who favoured tax breaks and those who preferred public/private-backed regional tech clusters
Networking and bandwidth still an issue

Emulex has announced the results of a study of 1,623 IT professionals, in which respondents provided insight into their enterprise data centre networking environments. The study, conducted in October, found that 57 percent of respondents have adopted hyperscale networking environments that require massive scalability in network resources. Of those, more than half (51%) named increasing bandwidth as a major challenge in moving to hyperscale environments. As applications become more network-centric and the volume of data in the cloud grows, organisations are seeing increased bandwidth demand from front-end applications driven by mobility and Bring Your Own Device (BYOD), mid-tier big data analytics and content distribution, and back-end transaction processing and storage management. The survey results indicate that hyperscale and multitenant requirements are driving demand for higher network bandwidth to manage
vast volumes of data, lower latency to accelerate application delivery and performance, and increased security to meet service level agreements (SLAs), and regulatory and compliance requirements. Organisations have responded by increasing their network bandwidth, and more than 77% of respondents running hyperscale environments say the move to the cloud has already necessitated the upgrade of their networks to at least 40Gb Ethernet (40GbE).
New PDUs launched
Excel Networking Solutions has launched a new range of desktop power distribution units. The latest addition to the Excel range is designed to bring greater accessibility to the workplace: it presents both power and network ports in a compact and stylish design that sits on the edge of the desk, removing the need to scramble underneath to charge and power phones, tablets and laptops. The anodised aluminium desktop PDUs are easily fitted to the desktop with
clamps supplied. They are available with UK or Schuko power sockets, with options for 6C apertures to accept data outlets and USB power, and come in a choice of sizes. The USB outlets supply up to 2.1 amps, permitting phones or tablets to be charged.
NEWS IN BRIEF ■ AGREEMENT Equinix has announced an agreement to provide direct access to Google Cloud Platform via the Equinix Cloud Exchange in 15 markets worldwide. By offering high-performance, dedicated connections through Cloud Exchange, Equinix is helping Google customers realise the full benefits of their cloud services. ■ BILLIONS Public IT cloud services spending will reach $56.6 billion in 2014 and grow to more than $127 billion in 2018, according to a new forecast from International Data Corporation (IDC). This represents a five-year compound annual growth rate (CAGR) of 22.8%, which is about six times the rate of growth for the overall IT market. In 2018, public IT cloud services will account for more than half of worldwide software, server, and storage spending growth.
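For readers who want to check IDC's arithmetic, a compound annual growth rate can be reproduced from the endpoint figures quoted above. This is a rough sketch: IDC's "more than $127 billion" is not an exact endpoint, so the result lands close to, but not exactly on, the 22.8% they report.

```python
# Compound annual growth rate (CAGR) from the IDC figures quoted above:
# $56.6bn in 2014 growing to roughly $127bn by 2018 (a 4-year span).
start, end, years = 56.6, 127.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR ~ {cagr:.1%}")  # close to the ~22.8% IDC quotes
```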
Virtually done

Interoute has opened two new Interoute Virtual Data Centre (VDC) locations, one in London’s Canary Wharf and a second in Slough, bringing its total number of Interoute VDC zones to 10. Together with its recent acquisition of Vtesse, one of the largest networks in the UK, this cements Interoute’s position as one of the fastest-growing providers of enterprise cloud services in the world. Matthew Finnie, Interoute CTO, said: “Interoute believes in being close to its customers and key markets because it brings lower-latency, higher-performing solutions, straightforward compliance management and good customer service. That’s why we are investing in many zones, rather than relying on a single or limited presence to serve a continent. Our recent UK data
centre and network expansion brings our cloud within a few milliseconds of our customers, partners and major business hubs in the UK. Low latency means higher throughput, fewer servers for the same application and fewer rewrites to get it there, making Interoute Virtual Data Centre the practical option for enterprise migration to the cloud and for the most ambitious of developers.”
Better connected

euNetworks has launched its dc connect service, an £8 million infrastructure investment providing near-instant, scalable, high-bandwidth connectivity across 35 data centres in London, with onward connectivity to 250 data centres in Europe. euNetworks has invested the £8 million in pre-deploying this high-bandwidth infrastructure in London to help retain the city’s position as a leading technology hub. The company works with both data centres and some of the largest technology businesses in the world, which have an ever-increasing demand for these services. Brady Rafuse, CEO of euNetworks, says: “As a leading European bandwidth infrastructure provider, we invest in our network to provide high-bandwidth, scalable connections. Data centre connectivity infrastructure is critically important to business today as it enables the agility and flexibility customers seek in response to dynamic market conditions and cost efficiencies.”
Not ready yet

Over four-fifths (82 per cent) of UK IT leaders do not believe they are yet fully ready to move from traditional server and hosting environments to IaaS providers, due to a shortage of in-house skills, according to new research commissioned by Reconnix. When asked if they were ready to migrate to Infrastructure-as-a-Service (IaaS), only 10 per cent of the 100 IT decision makers surveyed believed they were, while a further 8 per cent had already migrated or were in the process of migrating. Despite the relatively low rates of migration to IaaS, 88 per cent of IT decision makers stated that moving applications from traditional server environments to the cloud was a top, high or medium priority, with only one in twenty stating it was not a priority at all. Only 7 per cent of respondents were confident that they could call on all the required skills to manage applications running in IaaS environments from an in-house team. While over one-third (36 per cent) believed they had most skills in house, a combined 59 per cent had only some of the required skills, no skills, or did not know. These findings were reflected in planned IaaS buying and management behaviour, with only 26 per cent planning to buy directly from the vendor. Nearly two-thirds admitted they would need some form of third-party support, with 38 per cent engaging a third-party consultancy for total management and 25 per cent planning a mixed model of in-house skills and consultancy services. “There’s a very clear desire for businesses to move applications away from traditional environments and towards Infrastructure-as-a-Service providers; however, a lack of adequate skills seems to be holding back many IT departments from making this move,” says Steve Nice, CTO, Reconnix. “It’s natural for businesses to err on the side of caution, but this conservative approach can mean that many are missing out on the transformative benefits of the cloud.
“It’s clearly a confidence issue, and the challenge is for IT departments to take the necessary steps to prepare themselves for inevitable change. By failing to take action now, they risk putting themselves at a technological disadvantage to competitors, or being caught blindsided and forced to rush through a migration that could end up costing over the odds.” Microsoft Azure was acknowledged as the most trusted IaaS provider, with 36 per cent of respondents choosing it ahead of rivals IBM SmartCloud (22 per cent), Amazon Web Services (14 per cent) and Rackspace (14 per cent). Only 5 per cent of IT buyers placed trust in Google Compute Engine ahead of the competition. “The prominence of Azure and IBM in IT buyers’ minds is surprising, especially considering how far ahead AWS is, both technically and in terms of market share,” explains Nice. “IT departments not used to buying cloud services can sometimes be unaware of the difference in levels of performance between IaaS providers, and it can be tempting to choose a trusted name. This trust, however, could be based on a decades-old relationship and not on the performance of current product offerings.” Cost was identified as the most important factor when making a decision on moving to the cloud: respondents cited potential cost savings as the single biggest motivator (32 per cent) but also as the biggest barrier to migration (30 per cent).
MANAGEMENT
Changing ROOMS A UK company says it has solved the data centre cabinet challenge
As the requirement for offsite data storage has increased, so has the need for additional, larger data centres, and for them to employ bigger and heavier data enclosures. This, in conjunction with the demands of health and safety, has presented the industry with its own set of unique challenges. One such challenge, how to safely move these large data enclosures, was identified by Rittal, one of the world's leading international data centre system providers. Among other things, Rittal manufactures and supplies data enclosures; these data racks or cabinets are large and, when fully populated, can weigh in excess of 3,520 lbs (1.5 metric tonnes) and be worth around £1 million. Once delivered to the data centre, the enclosures need to be removed from their anti-vibration pallets and safely transported to their final install destination. The journey typically involves manoeuvring the enclosures along relatively narrow corridors, through doorways with limited height, around corners and through tight gaps while going in and out of service lifts. Although the enclosures are fitted with castors, these are not suitable for such a demanding journey. The size, shape and weight of the data enclosures also make the load unstable, especially on inclines or when cornering; this could lead to the unit tipping over, causing serious injury to personnel and damage to both surrounding equipment and the data enclosure itself. An answer to easily and safely moving these fully loaded data enclosures needed to be found. Andy Gill, engineering director, Rittal CSM, said: “A number of issues needed to be addressed before we would be able to provide our customers with a
solution for how to move our data enclosures to their final install destination. The combination of their size and shape, and the fact that there are very few secure points to which traditional lifting equipment can connect, made the challenge even harder.” In November 2013, after searching the market, Rittal approached Wiltshire-based materials handling manufacturer BIL Group, having seen one of its existing products, a cleverly designed moving system known as Skoots. The Skoots moving system comprises a pair of wheeled units, each with a hydraulically controlled toe plate. The units attach to either end of a load, with the toe plates inserted between the load and the floor, and both units are secured together with strapping for extra safety. The toe plates on both units are then raised evenly, lifting the load from the ground and making it easy to manoeuvre. The Skoots moving system has been successfully used for many years for applications such as moving vending machines, large freezer units and shop display cabinets. “The existing Skoots system certainly made a good starting point; however, we quickly realised that in order to fully solve all of the unique challenges presented to us it would require some redesign and product development on our part,” said Mark Farrell, managing director, BIL Group. By January 2014, BIL presented Rittal with a first prototype and it was quickly decided that BIL should continue to work on the project in order to produce a final working production model. Over the next few months, one by one, all the issues were solved: a toe plate bar was added to enable the Skoots unit to hook securely under the chassis of the data enclosure to prevent slipping, and it was designed to be adjustable to cope with any changes to
the enclosure's depth, such as the addition of cable management trays. The traditional Skoots strapping used to secure the units together was replaced with heavy-duty adjustable clips, and an improved hydraulic jack was fitted. The stability issues were solved by the inclusion of removable outriggers, or stabilisers. Having them removable makes the units very versatile and allows the load to be rolled the last few feet into its final position even if space is tight. By May 2014, after passing extensive independent testing procedures and succeeding in field trials at data centres in Houston, New York, Santa Clara, Dublin, Amsterdam, Singapore and Hong Kong, the Skoots SKENCSYS1 data enclosure moving system was ready. It has also achieved CE accreditation and has an international patent application pending. “We are very pleased with the end result and now have a product that is extremely versatile and can easily be adapted to fit most OEM data enclosure cabinets currently available. With that in mind, I'm looking forward to helping other data centres solve the problem of how to move their data enclosures both safely and easily,” commented Tim Murrow, UK sales manager, BIL Materials Handling Division. BIL Group is currently in production of the Skoots SKENCSYS1, fulfilling orders for a number of international clients, and has since completed a further project with Rittal, developing a special narrow Skoots unit to move cooling cabinets. To find out more about the Skoots SKENCSYS1 data enclosure moving system call 01249 822 222 or visit www.bilhandling.co.uk/skootsskencsys1-promo. BIL Group is also exhibiting at Data Centre World, 11-12 March 2015, Excel London, stand C100.
GIVING A competitive edge

UK service provider Logicalis was able to bring competitive pricing structures to the marketplace by leveraging the CA DCIM solution, which helped it reduce power costs and other service delivery overheads. As a result, Logicalis has increased efficiency, maximised availability and reduced power utilisation for its managed services offerings, and has been able to go to market with more competitive offerings.
14 DATAcentreMANAGEMENT WINTER 2014
Logicalis is an international enterprise service provider of integrated IT solutions and services. It designs, specifies, deploys and manages complex IT infrastructures for more than 6,000 corporate and public sector customers. The group's expertise spans a range of sectors and IT solutions, including collaboration, cloud computing, data centre hosting, managed services and business analytics.
Understanding power consumption in the data centre is fundamental for the company – but also highly complex. The company's primary facility in Slough, England is designed for 400 racks, each containing a minimum of two power distribution units (PDUs). There are also uninterruptible power supply (UPS) systems, network switches, meters and environmental control devices, bringing the total number of data points to more than 1,000. Collecting periodic data from all of these devices for calculating power usage effectiveness (PUE) and reporting was a time-consuming task, and did not give the Logicalis team the real-time visibility needed to identify anomalies in energy use that represent wasted expenditure. In addition, visibility of power consumption was key to maintaining IT service availability – whether for a hosted infrastructure or a private cloud environment.
With power management becoming such a critical factor in the delivery of both managed and cloud services, Logicalis needed to transform its manual approach to capturing consumption data. The company recognised that by offering its customers energy efficiency,
sustainability data, cost transparency and power reliability, it had the opportunity to differentiate its managed services offerings in the marketplace.
Logicalis was already using a number of CA Technologies solutions to support its network managed service and found them easy to use, reliable and cost-effective. It then chose the CA DCIM solution because of its proven ability to provide a bridge between facilities infrastructure assets and IT devices. CA DCIM gathers data from facilities and IT devices via SNMP, Modbus and BACnet protocols without requiring any additional hardware. It has been integrated with the existing Building Management Systems (BMS) so that advanced analytics and reporting capabilities can complement existing BMS functions. The solution performs advanced calculations, with results stored as time series data. CA DCIM provides that data to administrators as well as to other IT and facilities management solutions, including capacity management, performance management, infrastructure management, cooling systems, virtualisation management and the service desk.
With CA DCIM, Logicalis was able to visualise, monitor and better manage the use of power, cooling and IT capacity in the data centre, helping staff to identify possible faults with devices, detect changes in power and temperature, minimise power hotspots and overheads, and report to customers on their power consumption levels. The solution has also eliminated the time-consuming task of collecting data for management and reporting, as it
automatically polls, calculates, gathers and stores both live and historical data.
The Logicalis team documented financial savings in energy consumption and staff time, as well as top-line revenue generation due to competitive differentiation. Although Logicalis' PUE was already close to the practical minimum by design – it is a brand new data centre – monitoring has enabled the company to catch faults affecting efficiency very quickly. For example, CA's solution picked up a chiller that was not free cooling as efficiently as another, which turned out to be a fault with a valve. The Logicalis team noted that energy savings typically range from 2.5% up to 30% or more for older data centres operating legacy equipment. In addition, the team documented a 93% reduction in time spent collecting data and producing quarterly reports, including PUE calculations.
The results of the financial analysis show that Logicalis achieved a 159% ROI over the three-year term of its investment in the CA Technologies DCIM solution. This represents a payback period of 11 months, which includes a six-month ramp-up period before benefits were accrued. The analysis took into account expenses, asset amortisation and depreciation.
By using the CA DCIM solution as part of its managed services offerings, Logicalis has been able to increase efficiency, availability and overall business agility. The investment has helped Logicalis pass these savings on to its customers, differentiating it in the marketplace and helping it to out-innovate the competition.
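The figures in the Logicalis case lend themselves to a quick sanity check. The sketch below uses illustrative input numbers (the article reports only the resulting ratios, not the raw power or cost data) to show how a PUE figure and a simple payback period are calculated.

```python
# Illustrative PUE and payback arithmetic -- the input figures are
# hypothetical; the article publishes only the resulting ratios.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT load power."""
    return total_facility_kw / it_load_kw

def payback_months(investment: float, monthly_benefit: float,
                   ramp_up_months: int = 0) -> float:
    """Months until cumulative benefit covers the investment, assuming
    no benefit accrues during the ramp-up period."""
    return ramp_up_months + investment / monthly_benefit

print(round(pue(1200.0, 1000.0), 2))        # 1.2 -- a modern facility
print(payback_months(100_000, 20_000, 6))   # 11.0 months, as in the case study
```

The six-month ramp-up simply shifts the break-even point; the 159% three-year ROI in the article is reported over the same term.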
// Data Center Infrastructure Management Experience the inner calm that comes with successful data center management. FNT's Data Center Infrastructure Management (DCIM) solution is the central management and optimization software for your data center. From the building infrastructure (power, cooling, floorspace, etc.) and IT infrastructure (such as networks, servers, and storage) down to the services (software, applications, and services), DCIM from FNT enables a comprehensive and integrated view of your valuable data center resources.
Further information at www.fntsoftware.com/DCIM
// when transparency matters.
MANAGEMENT
DCIM: saving you time and money Keith Sullivan, marketing director EMEA for Corning Optical Communications, tells DCM about simplifying DCIM deployment to save time and money
Data centre owners need to manage and store an increasing amount of data and handle the growth of applications in a finite amount of space, which places increasing importance on equipment density and on environmental, power and infrastructure design. To improve operational efficiency, minimise the risk of downtime and plan for the future, data centres need complete visibility, control and management of their assets, capacity and environmental requirements. All infrastructure and equipment needs to be documented and managed throughout its life cycle, together with environmental data, to provide a total and holistic view of the data centre.
Managing the physical cable infrastructure – thousands of data paths connecting all types of IT-related equipment (such as servers, storage and network switches) using a variety of connectors, cables and patch panels – is an important part of any DCIM requirement, especially when you consider that a single connectivity channel contains many pieces of IT equipment and may travel through multiple rooms and racks. What you don't want is for someone to inadvertently remove the wrong patch cord during the many moves, adds and changes (MACs) carried out over time, causing unplanned, potentially costly downtime.
When talking to data centre managers we have gained an understanding of the practicalities and processes in a new build, as well as in the addition of new servers and storage devices and the refreshing of technology in existing facilities. Typically these processes span multiple teams, driven by work-orders, with each team executing its individual process. Management of connectivity and the various connected and non-connected assets therefore means that an up-to-date record
of the entire infrastructure needs to be constantly updated and made available to everyone, all the time.
One approach to documenting and managing physical connectivity has been the use of active/intelligent patching solutions. Away from the active patching itself, however, these solutions provide management only through paper work-orders that need printing, executing and updating at the desktop. Active/intelligent patching solutions focus solely on the cable and patching element of the DCIM requirement and are therefore of little use to anyone except the patching teams. A true DCIM solution will encompass this cable patching element along with rack space, power, cooling, ports and all the associated environmental metrics.
For a DCIM solution to add any value, it is imperative that the integrity of the data is kept to a maximum. To achieve this, the solution must be simple to operate, foolproof, and must drive rigour into the process. For example, the Cormant-CS handheld device displays and updates work-orders in real time while in the data centre. The final step of any work-order is to scan a barcode on the newly added asset and on the location of that asset, fully updating the database and eliminating any paper trail and/or manual updates to multiple databases. In order to provide the most efficient, rigorous and complete DCIM
solution, Corning has partnered with Cormant-CS, fully integrating Corning's data centre solution and cabling processes into the Cormant-CS database and software. This ensures a best-of-breed physical infrastructure solution supported by best-of-breed DCIM software and management.
As an example of how the two work together, Corning's Pretium EDGE modular high-performance cabling solutions are designed to provide unequalled rack density and ease of access, speeding up installation by up to 35%, reducing MAC costs by 25% and improving cabling ROI by up to 50%. These benefits can then be multiplied when combined with a DCIM solution such as Cormant-CS, which improves overall data centre productivity and asset utilisation by between 20% and 50%. There is no need for low-density active/intelligent patching systems with their management overhead, sensors, flashing lights, proprietary cords and higher price tags. With asset tracking of advanced cabling solutions through standard barcode labels, the intelligence truly is in the software. The result is faster and easier DCIM deployment, saving time and money for today's agile data centre environment and providing a holistic approach to managing the increasing complexity of cabling, IT equipment, space, power and cooling.
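The barcode close-out step described above can be sketched in a few lines: scanning the asset and location barcodes finishes the work-order and updates the database in a single action. The data model and function names below are hypothetical illustrations – the article does not document Cormant-CS's actual API.

```python
# Hypothetical sketch of a barcode-driven work-order close-out: one scan of
# the asset barcode and one of its location updates the connectivity record
# and retires the work-order -- no paper trail, no manual re-keying.

connectivity_db: dict[str, str] = {}   # asset barcode -> location barcode
open_work_orders = {"WO-1017": "install new patch panel"}

def close_work_order(wo_id: str, asset_barcode: str, location_barcode: str) -> None:
    """Final step of a work-order: record the asset at its scanned location
    and mark the work-order complete."""
    connectivity_db[asset_barcode] = location_barcode
    open_work_orders.pop(wo_id, None)

close_work_order("WO-1017", "AST-0042", "RACK-B07-U12")
print(connectivity_db["AST-0042"], len(open_work_orders))   # RACK-B07-U12 0
```

The point of the design is that the database update is a side effect of the physical act of installation, which is what keeps data integrity high without extra process.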
In the future how much power will you need?
protecting you from the unpredictable Modular UPS solution guaranteeing service continuity, scalability and optimized costs. A flexible response for meeting unpredictable changes in power demand.
Fully modular system
Totally redundant design
Enhanced serviceability performance
'Forever Young' service concept
For further information please call 01285 86 33 00 or email info.ups.uk@socomec.com
www.socomec.com SOCOMEC · Units 7A-9A Lakeside Business Park · Broadway Lane, South Cerney · Cirencester · GL7 5XL UK
POWER
The full PACKAGE Paul Brickman of Crestchic explains how the thriving Genset market is boosting sales of packaged transformers on a local and international level
The recent Global Power Rental Market 2012–2016 report from Technavio forecasts the industry to grow at a compound annual growth rate of 17 per cent over that period. It also identifies increasing demand for electricity as one of the key factors in this growth. Originally built upon a niche market, the global temporary power industry is still relatively new, and a good proportion of the general power industry remains unaware of the flexibility it offers to the wider marketplace. Although Crestchic is well established in the power generation arena through the manufacture of loadbanks, temporary power is an area that still shows great potential, and it is important that the industry keeps abreast of the benefits of products like multi-tap transformers that could revolutionise power generation.
We see the regional and international temporary power business as the main driver behind sales of our step-up transformers, mostly where companies require a multi-megawatt (MW) temporary power station at short notice. These can be provided by rental operators supplying reciprocating high-speed diesel and gas generators. Initially, industry leaders were tapping into areas where local energy providers were unable to supply power of the scale required, or at all – for example after natural disasters or for the London 2012 Olympics.
Where is the demand?
Take, for example, Africa, where there is an insatiable demand for power. It is a continent containing some of the fastest-growing economies, yet transmission, distribution and generation capacity is generally under-developed and under-invested. Building multi-megawatt power stations can take years to design, build and commission – whereas temporary rental power stations can provide power on that scale in less than eight weeks. Population is growing rapidly, but this is not matched by the pace of utility infrastructure development.
Another area creating demand for packaged portable transformers is the mining industry, predominantly located in remote areas away from the main electricity grid. Mining is built upon commodity prices and is very energy intensive. Fluctuating commodity prices create urgency to get mines up and running, which is why there is so much demand for temporary power companies that can get sites online quickly. Emergency breakdowns are also a growing market, where old installed sub-stations may fail – especially in the extractive and refining oil and gas industries, where plants need to be up and running again quickly to avoid costly downtime.
Built of steel
Crestchic's oil-filled transformers are built with significant strengthening in the oil tanks, making them more robust for the punishing environments of the portable rental market we sell into – anywhere from the Middle East and Africa to offshore oil and gas. Customers in this marketplace have significant demands: packaged transformers need to be highly robust for the harsh rental environment, yet also easily transportable. It is extremely common for old shipping containers to be re-used for this purpose, as they are readily available at low cost. However, recycled shipping containers are not necessarily the most robust solution, because cutting holes in them weakens the steel and the general structural integrity. By manufacturing containers that are bespoke in design and engineered to be portable, Crestchic ensures they are as strong and safe as possible. Using cross-sectioned steel and additional steel in the build process ensures a minimum lifespan of 10 years. Furthermore, the structural integrity is certified by Lloyd's Register Quality Assurance (LRQA).
The veins of the packaged transformer
The sole reason for packaging portable transformers is to ensure accessibility and flexibility. No time is wasted dealing with several suppliers to obtain the various components – the transformer itself, switchgear, ancillary electrical items and enclosures. There are also no costs for on-site assembly, and little to no civil engineering is required.
If an organisation is generating electricity at between 400 and 480 volts (V) at 50 to 60 hertz (Hz), the transformers step up from this to a range of voltages, typically anywhere between 3.3kV and 36kV, with multiple voltage taps available in between depending on the customer's location in the world. Essentially this creates the capability to generate significant amounts of electricity at a low voltage which is then easily introduced onto medium/high voltage grid systems. Inside the container sit various components such as the input isolators, cooling fans and extraction, voltage tap selection and the medium/low voltage switchgear arrangement. Everything is kept in separate compartments to accommodate the main transformer and the ABB Safe Plus medium voltage switchgear, another important feature of the packaged transformer.
Crestchic offers units from two to four MVA in a 10-foot container and up to eight MVA in a 20-foot container. The voltage range covers a multitude of international standard grids and industrial applications at the relevant frequencies – they have to be global because customers use them all over the world, across a broad spectrum of countries. Some people refer to them as packaged substations. It is important that the general power market is able to differentiate between packaged transformers and traditional transformers, the most obvious benefit being a flexible distribution of power that can go anywhere. All in all, we are seeing this solution become more common across the globe.
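The practical effect of stepping up is worth a quick worked example: for the same power, line current falls in proportion to voltage, which is what makes grid connection and cabling manageable. The figures below are illustrative, using the standard three-phase relation I = S / (√3 × V).

```python
import math

# Illustrative three-phase line-current calculation: the same apparent power
# at generation voltage versus after step-up. Figures are examples, not
# Crestchic specifications.

def line_current_a(power_mva: float, line_voltage_kv: float) -> float:
    """Three-phase line current: I = S / (sqrt(3) * V)."""
    return power_mva * 1e6 / (math.sqrt(3) * line_voltage_kv * 1e3)

# A 4 MVA genset output at 400 V versus the same power stepped up to 11 kV.
print(round(line_current_a(4, 0.4)))   # ~5774 A at generation voltage
print(round(line_current_a(4, 11)))    # ~210 A on the 11 kV side
```

Dropping from thousands of amps to a few hundred is why low-voltage generation can be "easily introduced" onto medium/high-voltage grids once packaged with the right switchgear.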
For 2015, there will be 200 world class speakers and over 250 leading suppliers. Register your interest at www.datacentreworld.com
THE VERY BEST PLACE TO TAKE CRITICAL INFORMATION FROM OTHERS. IRONIC REALLY.
“The largest most influential gathering of data centre expertise ever assembled in the UK.”
11 − 12 March 2015 ExCeL London
www.datacentreworld.com
MANAGEMENT
DCIM IS DEAD, long live............ Philip Petersen, CEO, AdInfa says that DCIM is having an identity crisis
I don't know about you, but I think this little acronym, DCIM, is having a bit of an identity crisis. Think back just a few years and the closest you would have come to DCIM was the memory card in your digital camera – it was the folder with all your photos in it (Digital Camera IMage). Then along came Gartner's David Cappuccio, who used the same four-letter acronym to mean something completely different: Data Centre Infrastructure Management. In this new data centre guise, DCIM was defined to cover tools that monitor, measure, manage and/or control data centre use and the energy consumption of all IT-related equipment (such as servers, storage and network switches) and facilities infrastructure components such as power distribution units (PDUs) and computer room air conditioners (CRACs). So far so good, but then people started to include asset management and space planning, what-ifs and modelling, and things began to get a little messy. Add in some major marketing money from a few vendors with particular features to
promote, and you have the situation of today: DCIM is what you want it to be.
Whilst this flexibility may seem very attractive, it can also appear very confusing to customers and, ironically, to vendors too! Does this matter? I think it is one of the key reasons why the adoption rate of DCIM has been much slower than analysts predicted over the last few years. Other key reasons are the over-hyping of the capabilities of many of the products on the market, combined with pricing that seems designed to put buyers off.
In an effort to better understand how DCIM is perceived in the wild, so to speak, I have spoken to data centre managers and vendors, participated in and reviewed discussions on fora such as LinkedIn, and read numerous articles on the subject. I discovered some common threads; these are some of the comments and thoughts I gathered from those encounters:
• DCIM is more of a platform than a product
• most DCIM products cannot do everything, and most people cannot afford to buy everything up front, particularly when it takes such a long time to implement
• once understood and accepted, DCIM will become invaluable... but
• DCIM never lived up to what they told us it would do, so we had to go back to manual record keeping
• current DCIM products are complex and expensive, with insufficient thought given to the user – the data centre manager
• the Holy Grail is an intelligent dashboard which shows exception conditions only, and through which one can get at all the information associated with a particular piece of kit, from manuals to maintenance records
• reporting tools should have enough analytics that staff can be more proactive than reactive
• DCIM tools need to be modular
Whilst the market waits for the perfect product to come along, the consensus among users seems to be that the best place for most data centre managers to start is with monitoring. It is rare indeed to speak to a data centre or facilities manager who does not think that automated metering and monitoring of power, cooling and environmentals is a good idea. Many want to see this information down to the rack and circuit level, and some want to go further, monitoring at the power strip outlet and ICT device level. Yet few have implemented much or any of this, even when they have the metering infrastructure in situ.
Talk to the same data centre managers about their major concerns and availability and capacity are to the fore. Talk about pain points and they typically include lack of time, lack of visibility, lack of tools and, for some, too many tools to be helpful. As for business issues, the inability to report on actual power consumed accurately and automatically – preferably through a client portal – is a common one for colocation
companies; for corporates, it is often about cost management and social responsibility reporting.
Given these kinds of demands from the market and the increasingly high profile of data centres generally, why has the adoption rate of DCIM, of whatever flavour, been so slow to date? Few, if any, vendors offer a complete, homogeneous solution despite the marketing and advertising claims. DCIM is often perceived as complex, confusing and costly, with long deployment cycles. This toxic combination is stifling adoption and causing many who have been considering what DCIM might do for their data centre to put the decision off, or drop it down the priority list, because making the business case is deemed too hard. When the byzantine and expensive suites are bought, they often take months to implement and customise and require a lot of training; the problem is that when the trained staff move on six months later, use of the suite declines because others lack the know-how. Thus another pricey piece of shelfware is born! That outcome is in no one's interest.
Forget about acronyms and labels and focus on what is important for your business and your data centre. As is true of so many things, keep it simple, or as simple as possible, to begin with. Start with something definable and bounded that can deliver demonstrable benefits quickly. This usually means automated metering and monitoring, with information flows matched to the levels of detail suited to your particular business. Such an approach can provide critical intelligence about your infrastructure and identify opportunities for optimisation, particularly in power and cooling, which are the big drivers of data centre operating costs. Most importantly, it can deliver a rapid return on investment (RoI). No, "DCIM is not dead, it's just resting", as the old gag goes.
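The "exception conditions only" dashboard that practitioners describe as the Holy Grail is, at its core, simple to sketch: poll readings and surface only those that breach a threshold. The threshold values and metric names below are illustrative assumptions, not from any particular DCIM product.

```python
# Minimal sketch of exception-only monitoring: from a set of metered
# readings, report only the values that breach their configured thresholds.
# Metric names and limits are illustrative.

THRESHOLDS = {"rack_temp_c": 27.0, "circuit_load_pct": 80.0}

def exceptions(readings: dict[str, float]) -> dict[str, float]:
    """Return only the readings that exceed their configured threshold."""
    return {name: value for name, value in readings.items()
            if name in THRESHOLDS and value > THRESHOLDS[name]}

sample = {"rack_temp_c": 29.5, "circuit_load_pct": 62.0}
print(exceptions(sample))   # {'rack_temp_c': 29.5}
```

Starting this small is consistent with the article's advice: automated metering plus a bounded rule set delivers visibility quickly, before any full DCIM suite is bought.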
INFORMATION LIFECYCLE MANAGEMENT
Strategic CHOICE Andy Dean, pre-sales manager at OCF, says that Information Lifecycle Management will save you money – but do you have the strategy to take advantage?
Get it right and Information Lifecycle Management [ILM] will ensure the intelligent movement of data between grades of storage based on a number of factors, including the age, type and size of data, and access requirements. In doing so, ILM reduces storage infrastructure costs, ensures fast access to data and increases staff productivity.
Think of it like buying a vehicle for specific journeys. With fuel costs so high these days, wouldn't it be great to have access to different types of car depending on the journey being made – a fuel-efficient car for the motorway, an electric car for the city, maybe even a motorbike for traffic-heavy journeys? But when would you buy the vehicles, up front or piecemeal? How would you decide which vehicles to use, and when? Would you opt for the fastest car for every journey even if you knew better?
In the HPC and big data world, tape is very cheap in terms of electricity and capacity, but it is completely inadequate for day-to-day data retrieval (for most workloads). At the other end of the spectrum, Solid State Drives [SSDs] are excellent for both small files and sequential throughput, but very expensive per terabyte. Somewhere in between these two extremes sit Serial Attached SCSI [SAS] and then Nearline SAS [NL-SAS].
The vehicles to store our data are plentiful, but an ILM strategy is needed to ensure the best, most appropriate use. Get it wrong, and ILM could make an unstructured storage setup worse.
Creating a strategy
Generally speaking, it is usually the storage experts within the IT team who set the ILM strategy, but they aren't always best placed to understand exactly what everyone in the organisation is using their data for. Ideally, creation of the strategy would be a collaboration between the users and the IT team to ensure that the ILM policies are created to everyone's benefit. Once strategy ownership and contributors are agreed, it's time to understand the current and desired lifecycle.
Practically speaking, a simple whiteboard session can help to pull the strategy together. For a larger project, I would recommend creating a focus group to understand everyone's requirements, comprising a cross-section of participants such as data users, management and IT. Whichever method is used, the 'ILM team' needs to understand what data they have stored, where they have stored it, and who in the organisation has stored it. Look at all available sources – project information, databases with records related to projects, data extracts, etc. Metadata should be added to all
this information to help it be retrieved in the future. They also need to think about the future: if we store a file today and still have it on a tape in 15 years' time, how will anyone access it? Will we still have the version of the application that opens what was saved? This thinking requires data retention-type policies and a standardised way of storing the data [ideally based on open standards to ensure the data will remain accessible in the future]. Again, adding metadata can certainly help to get something useful out of the data in the future. As an aside, creating an ILM strategy is a good opportunity to set privacy policies on data access at a high strategic level. Once you have decided on your requirements, there are solutions out there that can help to enforce them.
Infrastructure
Once the ILM strategy has been written and agreed, it's often the case that additional hardware or software will be needed to implement it. If an organisation needs a simple ILM solution, there are plug-and-play options in both software and hardware appliances. Hardware appliances that offer performance tiering at a hardware (block) level could also be placed behind a software solution, which may offer benefits depending on the workload [especially if the workload isn't fully understood]. If an organisation is looking at longer-term storage, the strategy is likely to demand multiple types of hardware and software to build a workable solution that supports all user requirements.
The really exciting aspect of ILM projects is the opportunity to get the most out of an organisation's data – now and in the future. Once the business understands its high-level requirements, it is a good time to speak to specialised providers who can work with it to decide the best way forward.
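An ILM placement policy of the kind described above can be expressed as a small decision rule over the attributes the article lists (age and access requirement). The tier names match the storage grades discussed earlier; the cut-off values are illustrative assumptions, not OCF's actual policy.

```python
# Hedged sketch of an ILM placement rule: choose a storage tier from a
# file's age and access frequency. Cut-off values are illustrative.

def choose_tier(age_days: int, accesses_per_month: int) -> str:
    if accesses_per_month >= 100:
        return "ssd"       # hot, latency-sensitive data
    if age_days < 90:
        return "sas"       # warm project data
    if age_days < 365:
        return "nl-sas"    # cool, capacity-oriented
    return "tape"          # archive: cheap to keep, slow to retrieve

print(choose_tier(age_days=10, accesses_per_month=500))  # ssd
print(choose_tier(age_days=400, accesses_per_month=1))   # tape
```

In practice the rule would also consult the metadata the article recommends adding (project, data type, retention policy), which is exactly why the strategy needs input from users as well as the IT team.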
SOFTWARE
Boosting EFFICIENCY Roberto Mircoli, CEO, Eco4Cloud, says that you can boost energy efficiency in data centres through intelligent workload management
Energy consumption in data centers is gaining crucial relevance as it grows significantly worldwide, and more and more effort is being devoted across the industry to improving the efficiency with which data centers are run. Overall efficiency is a combination of the efficiency of the data center's physical infrastructure and its computational efficiency, and the latter can be improved dramatically thanks to state-of-the-art innovation in Intelligent Workload Management (IWM). In virtualized data centers especially, consolidation of workloads is a great way to increase computational efficiency, since it keeps applications clustered on fewer, more efficient hosts. New techniques that address dynamic workload consolidation, based on self-organizing processes and distributed decision-making, are coming to market – for example the one devised and implemented at Italy's ICAR-CNR and the Politecnico di Torino and then productized and brought to market by Eco4Cloud, yielding key advantages such as higher efficiency, energy and cost savings of between 20% and 60%, better-informed capacity planning, improved quality of
service and high scalability.
A good PUE value is not enough to ensure the overall efficiency of a data center, because the metric does not consider the actual utilization of computational resources; the physical efficiency of the data center should be combined with efficiency in the use of its computational resources. The PUE index is useful for measuring the relative efficiency of the physical infrastructure, but it relates neither to total energy consumption nor to computational efficiency, which is strictly related to the distribution of workload among hosts.
There are several ways to optimize the use of physical hosts and so increase the computational efficiency of a data center. For example, some applications are CPU intensive while others are memory intensive: hardware requirements are clearly different in the two cases. The option of choosing the right hardware, however, is only available at the capacity planning stage, when the data center is first designed and developed or when new machines are acquired. Workload consolidation, by contrast, is a powerful means offered by virtualization to achieve remarkable energy and cost savings at any time during normal data center operation. All virtualization platforms (for example
Example of hosts usage before consolidation (left) and after consolidation (right). In this simple case consolidation allows 50% of the hosts to be hibernated.
VMware, Microsoft Hyper-V, KVM) allow several Virtual Machines (VMs) to be hosted on the same physical host, and provide primitives to move a VM from one host to another in a short time and without service interruption. The objective of consolidation is to cluster the maximum number of VMs onto the minimum number of physical hosts, so that the unneeded hosts can either be put into a low-power state (leading to energy savings and OpEx reduction) or devoted to the execution of incremental workload or additional services (leading to CapEx savings, thanks to the reduced or postponed need to acquire additional physical hosts).
The graphs show an example of workload consolidation. Before consolidation (left), the workload is distributed across 8 hosts whose resource utilization is between 20% and 50%. After consolidation (right), the same VMs are redistributed so that 4 hosts take all the workload, with resource usage between 70% and 90%, while the other hosts are hibernated in order to save energy.
In modern data centers, a scalable approach to Intelligent Workload Management is to distribute the intelligence rather than centralize it in a single point, while switching from deterministic algorithms to self-organizing solutions able to adapt to the dynamic nature of the problem. A bio-inspired strategy of this kind was proposed by a research project carried out by the Institute for High Performance Computing and Networking of the Italian National Research Council (ICAR-CNR) and by the Politecnico di Torino, which led to relevant patents and the spin-off of Eco4Cloud. Deployments in different scenarios have yielded tangible benefits in terms of data center efficiency, a 20-60% reduction in the related energy bill, refined capacity management and improved SLAs.
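Consolidation can be viewed as a bin-packing problem: place VMs on as few hosts as possible while keeping each host below a utilization cap, so the freed hosts can be hibernated. The sketch below uses a simple first-fit-decreasing heuristic with VM loads mirroring the article's 8-host example; it is an illustration of the idea, not Eco4Cloud's self-organizing algorithm.

```python
# Sketch of workload consolidation as bin packing (first-fit decreasing),
# capping each host below 90% utilization. VM loads are illustrative.

def consolidate(vm_loads: list[float], cap: float = 0.9) -> list[float]:
    """Return per-host utilization after packing VMs onto as few hosts
    as possible, never exceeding `cap` on any host."""
    hosts: list[float] = []
    for load in sorted(vm_loads, reverse=True):
        for i, used in enumerate(hosts):
            if used + load <= cap:   # first host with enough headroom
                hosts[i] += load
                break
        else:
            hosts.append(load)       # no host fits: power one on
    return hosts

# Eight hosts at 20-50% utilization carry under three hosts' worth of work.
vms = [0.5, 0.45, 0.4, 0.35, 0.3, 0.3, 0.25, 0.2]
packed = consolidate(vms)
print(len(packed))   # 4 hosts remain active; the other 4 can be hibernated
```

Production systems must also respect memory, affinity and SLA constraints, and redistribute continuously as load changes, which is why the article favours distributed, self-organizing approaches over one-shot deterministic packing.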
SOFTWARE
Improving the speed of business with AN SDN-ENABLED DATA CENTRE
Jean Turgeon, head of networking and chief technologist at Avaya, looks at the need for speed in the data centre
Software defined networking is the new buzzword in the networking and data centre industry. But what are the technology and business benefits it can deliver, and should your data centre manager be looking at this technology now? Jean Turgeon, head of networking and chief technologist at Avaya, outlines what CIOs should know about the impact of this technology on the data centre, before their data centre manager comes knocking on their door.
Leaps in the use of social media, coupled with the omnipresence of mobile technology, mean that we are all consuming more and more bandwidth, using numerous applications and storing ever-increasing amounts of data on our devices and in the cloud. Today's employees work from a host of different locations, on a range of different devices, creating an always-accessible culture
26 DATAcentreMANAGEMENT WINTER 2014
that - as all CIOs and IT Directors are (often painfully) aware - puts the data centre at the heart of most organisations’ IT operations. Consequently, businesses need to ensure that their data centres are robust, yet able to support new services, quickly. This is where Software Defined Networking (SDN) comes in.
The basics
The main goal of SDN is to reduce complexity, eliminate proprietary solutions, allow the fast deployment of business-critical applications and make adds, moves and changes quick and easy. SDN works in the network infrastructure to separate the part of the network responsible for routing traffic from the part that actually carries the traffic. This means the data that is being carried is separate from the control traffic, making the infrastructure more dynamic. As such, it should give data centre managers far greater control and simplify technology management. SDN should respond to changing business speeds, while also providing the efficiencies and the security requirements that a modern enterprise needs. So far so technical, but what does this actually mean from a business perspective?
Automation Automation-related time, cost and accuracy efficiencies are by far the biggest benefits that SDN is predicted to deliver to the data centre. The applications running on any data centre server are subject to access or policy controls. Typically, every router and switch on a network has software pre-installed that controls what it does. For example, the CEO might have access to everything, but the receptionist may only have access to documents that do not contain any financial information. In a traditional data centre, each element of these policies needs
to be manually configured on each router and switch. SDN automation is likely to allow data centre managers to deploy a 'configure once, roll out across all locations' approach, with changes made via a centralised management console - saving time and costs and improving accuracy. However, the real benefit should come with the deployment of new applications. The configuration is applied to the network and not just the interfaces, so any changes are also applied at the network level. SDN promises to provide the CIO with the capability to deploy applications much more quickly than conventional mechanisms, whilst retaining total control.

Cloud-ready
Cloud adoption, by businesses of all sizes, shows no signs of abating. SDN could offer considerable benefits to data centres supporting public cloud apps by using in-built intelligence in the routing layer of the data centre to choose the optimal configuration. Typically, data centres hosting applications within a public cloud environment have to manually provision the required resources, often resulting in configuration errors and delays. SDN should eliminate this manual provisioning, reducing cost, saving time and improving the accuracy of network changes.

The need for speed
Business in the 21st century moves at a much faster pace than ever before. Reflecting this, the data centre also needs to be more nimble and able to provide services more quickly. SDN has the potential to play a key role in delivering this speed, thanks to the automation it provides based upon a centralised view and not a list of interfaces: changes can be made in near real-time across the network as a whole. Services can be automatically provisioned and, if issues do occur, SDN technology should intelligently and instantly re-route services much more quickly than an engineer could.
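The 'configure once, roll out across all locations' idea can be sketched very simply: one central policy definition, pushed identically to every device. The device names and rule format below are invented for illustration — real controllers (for example via OpenFlow) compile far richer rule sets:

```python
# Hedged sketch of SDN-style central policy push: a single source of
# truth replaces per-box manual configuration. Names are illustrative.

POLICY = {
    "ceo":          {"finance-docs": "allow", "hr-docs": "allow"},
    "receptionist": {"finance-docs": "deny",  "hr-docs": "deny"},
}

class Switch:
    def __init__(self, name):
        self.name, self.rules = name, {}

    def apply(self, rules):
        self.rules = dict(rules)     # one atomic update per device

def push_policy(switches, policy):
    """Central console: every device receives the same policy."""
    for sw in switches:
        sw.apply(policy)

fabric = [Switch(n) for n in ("core-1", "edge-1", "edge-2")]
push_policy(fabric, POLICY)
print(all(sw.rules == POLICY for sw in fabric))   # -> True: no per-box drift
```

The point of the sketch is the shape of the workflow: a change to `POLICY` is made once and propagates everywhere, rather than being re-typed on each router and switch.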
While the network needs to be as agile as the business it supports, recent research from Avaya reveals that, because of the time network changes take, companies wait almost a month (27 days) before they are able to make a significant improvement to a business system. At an average of 10 changes a year, businesses can wait up to nine months for improvements that can help their company grow, increase employee and sales productivity and improve business analysis. Bottom line, network complexity affects margins and decreases the ability of IT to
provide the company with a competitive edge. By reducing complexity, SDN promises to dramatically reduce the network waiting game, helping to speed up and improve business performance.

A new way of thinking
However, perhaps the biggest benefit of SDN, for both networks and data centres, will be a fresh mindset. Since SDN separates the control function from the rest of the data centre, the technology enables higher-level management of the data centre environment, allowing a more holistic view of the data centre. This should give data centre managers the opportunity to think in a completely different way about their data centre.
Recently I had a new kitchen installed. When I was planning it with my wife, I assumed that the sink would be in the corner, because that is where the plumbing comes into the room. However, my wife wanted the sink in an island in the middle of the kitchen, because there it would be equidistant between the fridge and the stove. While I defined my request by previous parameters and constraints, my wife defined her request by use and flexibility. Similarly, in an SDN environment, data centre managers should no longer be constrained by the plumbing and can instead concentrate on the services the data centre delivers. With SDN, software is expected to simply become a toolset, and the data centre manager can focus on solving business problems, not overlaying a software vision on top of them.

The role of the CIO
Today, SDN is still in its infancy, with very few businesses currently running fully SDN-enabled data centres, but this is changing rapidly. Most data centres already deploy some elements of SDN: for example, almost all modern enterprise-class switches deploy SDN technology, while fabric-enabled next-generation networking solutions already deliver many of the expected benefits of SDN. According to IDC, the SDN market will be worth more than $3.7bn within two years.
Many data centre managers want to move towards greater SDN enablement but are unsure of where to start, and are therefore often reticent when it comes to discussing migration with their CIOs. I would urge CIOs and IT Directors who want to take advantage of the increased agility and reduced costs that SDN promises, to raise the topic with their data centre teams. Together they can start building the business case for the technology
now. Planning now can help CIOs understand their business goals and map their path to achieving them. With the network an essential means to achieving your business strategy, an evolutionary approach is almost certainly the best way forward. Today is the time to start comparing current costs with the efficiencies you would gain from a software-defined model and demonstrating the compelling benefits of the SDN approach.

6 key questions CIOs should ask their data centre managers
1. "Do you know what our business strategy is?" To be able to deliver the applications and infrastructure for the business, the CIO needs to fully understand the business strategy - if you do not, you are more likely to become a business 'blocker' than an enabler. You then need to ensure that the data centre manager understands this strategy too.
2. "How much time does your team currently spend updating the network configuration?" Hand in hand with this question go, "How long does it take to provide new services?" and "What percentage of errors and service issues have been due to misconfiguration in the past year?" There is likely to be considerable cost associated with manual firewall configurations, router updates and so on; this will form the basis of your business case.
3. "What is your current performance:cost:power ratio?" "How might this change with greater SDN deployment?" Power vs performance vs cost is becoming a key industry measure for SDN and should therefore be part of your business case.
4. "What are the key network management tools you propose to use?" Integrated network management tools remain essential for diagnosing the causes of network problems - unfortunately they often seem to be overlooked!
5. "How interoperable is your SDN vision?" A solution where SDN is fully integrated into the physical network fabric provided by a single, proprietary-standards vendor is unlikely to offer the same long-term benefits and flexibility as an interoperable solution.
Using open standards-based products provides an easy-to-deploy, flexible solution that will work with any physical network infrastructure.
6. "How is your SDN deployment going to impact the design of the data centre?" If your data centre team can convince you that they are building the best data centre combination of servers, storage and networks, then your work in creating the business case will be greatly reduced.
POWER
Futureproof YOUR BUSINESS Socomec says that its Modulys GP2.0 with in-built flexibility can futureproof your business
Rising energy costs, combined with increasingly stringent industry standards and regulatory requirements, are driving building owners and infrastructure managers to apply the latest thinking - and technology solutions - to improve their green credentials whilst maintaining control of operating costs.
Socomec's new Modulys GP2.0 delivers unrivalled flexibility, solving some of the greatest challenges in terms of redundancy and future availability. Because the system is modular - and the cabinet is free from electronic components - potential failure modes have been eliminated and end-of-life criticality ceases to be an issue. Furthermore, the system also allows for the easy integration of future modules. Rather than overspecify - and overspend - in order to guarantee that a system is futureproof, the Modulys range means that facilities and IT managers can now scale as and when they need to. Additional redundancy can be achieved by introducing one more battery and power module - so there is no need to duplicate the system hardware to achieve redundancy. With no single point of failure - and zero risk of fault propagation - the Modulys is totally resilient and guarantees optimum
power availability.” With real-time hot swap functionality, module maintenance is simple, fast and safe – avoiding any risk of downtime as the mean time to repair is reduced to a minimum. Highly Adaptable The Modulys range has been engineered to be highly adaptable; based on a modular, rack-mounting system, it has been designed to work with hot/cold aisle arrangements. Easy to integrate within existing IT infrastructure, the Modulys is also easy to position and assemble thanks to its light, empty cabinets and independent modules. In addition, the range offers power scalability up to 600kW, making it ideal for unscheduled site upgrades or incremental power evolutions. 3 parallel systems can be configured horizontally to achieve the required 600kW. The installed power of a single system can be increased up to 200kW by adding power modules in increments of 25kW. Achieving the desired power scaling is simple: the units feature a redundant architecture employing parallel Plug-In modules. When a new power or battery module is plugged in, the system automatically self-configures to bring the
new capacity online.
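The scaling rules above reduce to simple arithmetic. The module (25kW), frame (200kW) and system (three-frame) limits are taken from the article; the helper function itself is only an illustrative sketch:

```python
# Modular UPS sizing arithmetic, as described above. The limits come
# from the article; the helper is illustrative, not a vendor tool.

MODULE_KW, FRAME_MAX_KW, MAX_FRAMES = 25, 200, 3

def modules_needed(load_kw, spare_modules=1):
    """Modules required to carry `load_kw`, plus `spare_modules` for
    redundancy (N+1 by default) -- no duplicated system hardware."""
    n = -(-load_kw // MODULE_KW)     # ceiling division
    return n + spare_modules

print(modules_needed(150))           # -> 7 (6 modules for the load + 1 spare)
print(MAX_FRAMES * FRAME_MAX_KW)     # -> 600, the kW ceiling across 3 frames
```

This is the sense in which redundancy costs one extra module rather than a duplicate system: N+1 at 150kW means seven 25kW modules, not two complete 150kW UPS units.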
Unbeatable Energy Efficiency
UPS manufacturer Socomec is at the forefront of the development of critical power systems, and is committed to supporting data centres in the pursuit of more energy-efficient buildings and driving down the total cost of ownership (TCO). The Modulys GP2.0 is the latest addition to the Socomec Green Power 2.0 range of UPS products. These transformer-less units combine unbeatable energy efficiency at 96% with unity power factor to provide the ultimate 'future-proof' critical power solution.

Boosting the Bottom Line
With a combined focus on performance and efficiency, energy costs, operating expenditure and environmental impact are minimised and uptime is maximised - throughout the equipment lifecycle. The deployment of Socomec's GP2.0 UPS can deliver a 12% saving compared to other UPS solutions when comparing the TCO over 10 years. Also boosting the bottom line, the Carbon Trust's ECA scheme means that with purchases of new Socomec equipment, a business can write off 100% of the cost against taxable profits in the first year after the investment is made. Manufactured in Europe to exacting standards and stringent specifications, Socomec's product performance and lifecycle support are unrivalled.
Andrew Wilkinson, regional managing director of Socomec, comments: "We invest heavily in research and development to create critical power systems that minimise TCO throughout the product's life-cycle. This enables our customers to deliver savings that are sustainable, providing a long-term contribution to the profitability of their businesses."
Wilkinson concludes: "In order to address the most important industry challenges - of today and tomorrow - the latest product developments require a specialised and interdisciplinary approach to provide high-performance, reliable and cost-effective power solutions that are flexible enough to be scaled to meet the rapidly changing demands of data centres."
COOLING
INTELLIGENT containment
Dave Wilson, global marketing director for Geist, explores the process of intelligent containment as the means for controlling airflow in a data centre environment.

Intelligent containment is an innovative method of controlling airflow that allows you to operate your data centre at the higher end of the recommended ASHRAE scale. Air is supplied at the higher end of the temperature range and airflow volume is controlled automatically. Additionally, hot and cold air are completely isolated and prevented from mixing. This approach differs from traditional cooling as it stabilises the intake air temperature to within a few degrees of the supply temperature across all measured points in the data centre. Furthermore, by segregating the supply from the return, higher return temperatures are achieved. Higher return temperatures mean higher CRAC efficiency.
Key to making this solution work is the software embedded within the intelligent containment system. Air pressure inside the cabinet is monitored and checked against the air pressure of the room. This pressure-differential information is routed to control software, which then converts that information into commands to increase or decrease aggregate supply fan speed, dependent on the server load.
Intelligent containment can be deployed in two ways: in a row-based or rack-based configuration. The row-based solution offers the option to begin with a small number of intelligent containment units spread across the top of multiple cabinets. This allows for the addition of new cabinets and units as growth or density increases. The intelligent containment software will increase or decrease the fan speed synchronously with the IT equipment load. Rack-based solutions are ideally intended for isolated cabinets with higher densities - generally 15kW to 30kW (see illustration 2). The intelligent containment system, in a rack-based configuration, has been shown to be an ideal solution for all types of blade servers. Rack-based solutions are also ideal where a diverse mix of cabinet makes and models exists. Both row- and rack-based systems have been deployed jointly and successfully in the same facility. The flexibility and scalability of the intelligent solution makes it ideal for all facility types: small, medium, enterprise and colocation.

Benefits of Intelligent Containment
Intelligent containment solutions offer a number of advantages:
• The system calculates how much air the servers are using. This data, on how much supply air is being consumed, is fed via a DCIM solution, such as Environet by Geist, back to the CRAC units. This is then the control point for the CRACs on how much to produce - balancing the needs of the servers with the supply of the CRACs. The end result is even greater facility control and lower operating costs, by eliminating the oversupply of air that is a common inefficiency in data centres.
• Intelligent containment eliminates the need for the servers to force air through a 'passive chimney'.
• Because the hot exhaust is sent straight to the drop ceiling or return plenum, there is no need for technicians to work in the oppressively hot environment associated with hot aisle containment.
• The use of free cooling can be extended, as all heat is captured at source and is not allowed to mix with the supply air, causing localised hot spots. This allows for the higher supply temperatures, as recommended by ASHRAE, that are associated with free cooling.
• There is no loss of data centre floor space, as associated with in-row coolers. In-row systems typically require half a cabinet of space for cooling units for every few cabinets of servers, and rear-door heat exchangers take up aisle space, resulting in fewer aisles in your data centre.
• No additional power or plumbing for cooling or condensation is required. And higher return temperatures mean you can regain lost cooling capacity.
• Intelligent containment is scalable for future growth. The modularity of the system allows for both adding cabinets and increasing IT load, allowing you to operate only the CRACs necessary for proper cooling.
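The pressure-differential control described above can be sketched as a simple proportional loop. The gains and the toy "plant" model below are invented for illustration — they are not taken from any vendor's product:

```python
# Minimal proportional control loop for a pressure-differential
# containment scheme: exhaust fan speed is nudged until cabinet
# pressure matches room pressure. All numbers are illustrative.

def fan_controller(diff_pa, speed, gain=0.02):
    """Return a new fan speed (0..1) given the cabinet-minus-room
    pressure differential in pascals."""
    # Positive differential: cabinet over-pressurised -> speed fans up
    # to evacuate more hot exhaust; negative -> slow them down.
    return min(1.0, max(0.0, speed + gain * diff_pa))

def plant(load, speed):
    """Toy model: differential rises with server load and falls as
    the containment fans remove more air."""
    return 10.0 * load - 12.0 * speed

speed, load = 0.5, 0.8
for _ in range(50):                  # the loop settles the differential to ~0
    speed = fan_controller(plant(load, speed), speed)
print(round(plant(load, speed), 2))  # -> 0.0
```

A real system adds integral action, sensor filtering and safety limits, but the shape is the same: the measured differential, not a fixed schedule, drives aggregate fan speed in step with server load.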
Summary
Intelligent containment is a highly innovative, scalable, efficient and cost-effective means of complete airflow control. It can be retrofitted to most data centre environments. By increasing return temperatures it increases CRAC efficiency, which, in turn, can increase capacity and reduce the number of CRACs needed. Fewer CRACs running lowers operating costs and increases redundancy. For new builds, it can lower infrastructure costs by using only the CRACs needed. Additional cooling can be added when needed, and a raised floor is not necessary.
The key to costing an intelligent containment installation is to weigh the capital costs against the savings in ongoing operational costs. That is where intelligent containment makes the most impact. By accurately controlling airflow, installations where this system has been retrospectively deployed have recorded savings of up to 40% on cooling costs. Now you have to agree that's intelligent!
CASE STUDY
Cool runnings
Romonet has provided in-depth engineering analysis and TCO lifecycle modelling to help Iceotope secure funding from a strategic technology investor.

Iceotope servers offer full-time free cooling for hostile environments, cloud services and HPC environments. Its liquid-cooled server platform has been modelled and engineered to ensure it harvests as much heat from electronics as possible, in the most efficient way. As a result, organisations can reduce data centre cooling costs by up to 97%, ICT power load by up to 20% and overall ICT infrastructure costs by up to 50%. This case study looks at how Iceotope used Romonet's Software Suite and professional services to analyse and prove the performance and benefits of its technology compared to traditional, air-cooled servers. With this proof, Iceotope was able to attract $10 million in funding to continue developing its technology.

Challenge
As a technology start-up, Iceotope needed to attract further investment in order to keep developing its technology and improve on the efficiency and sustainability benefits it already provides. Before investing, potential partners needed to be certain that Iceotope's technology could fulfil its potential. In particular, Iceotope needed to demonstrate that it provided superior energy and cost efficiency compared to the high-density air-cooled systems commonly used in High-Performance Computing data centres.
However, providing real-world examples that could demonstrate this was a difficult task. As a start-up, Iceotope did not have customers using its technology inside a production data centre, which would demonstrate the potential of its technology. Even if it had, there would still be obstacles: real data centre operators may be unwilling to offer their precise energy use for comparison in such a way, while, even if a willing customer was available, differences in performance in production data centres could be attributed to factors beyond the cooling technology used. The alternative was creating multiple testbed infrastructures to demonstrate the savings Iceotope's technology could create. Yet this was prohibitively expensive and impractical. It was clear that Iceotope needed another way to show its potential.

Solution
Iceotope decided that it needed an accurate simulation of how its technology would compare to traditional air-cooling systems in order to prove its potential to would-be investors and partners. It therefore approached Romonet, as it knew that Romonet's expertise in modelling and predicting data centre performance and costs would give the most accurate picture possible. "We knew we had developed a great product, we just needed the proof-points. Romonet are quite simply the best at what they do, so they were automatically our first choice," said Peter Hopton, Iceotope founder and CEO.
Romonet used its Software Suite, based on its patent-pending predictive data centre modelling technology, to simulate data centre energy use and costs using both Iceotope's technology and a number of traditional air-based cooling technologies. It also simulated a traditional hot/cold aisle data centre cooling system to provide a suitable 'baseline', as well as rack exit door cooling systems and in-row cooling systems. Romonet then delivered a comprehensive report analysing the cost over time of each technology and showing exactly how the simulation had been created, in order to demonstrate the validity of its results.

Benefits
In Romonet's simulation, Iceotope's technology was shown to produce cost savings more than three times greater than those of the next most efficient air-cooling method, cutting the cost of the scenario data centre by 32%. Armed with this proof of concept, Iceotope could demonstrate to investors and partners that its technology could provide significant savings to customers.
Outcome
Thanks to Romonet's report, Iceotope has been able to secure funding to continue its development: most recently, a $10 million investment in January 2014 from bodies including Aster Capital and Ombu Group, with strategic sponsorship from Schneider Electric (acting through Aster Capital), one of the world's largest engineering technology companies. Iceotope has used this investment to continue developing and refining its technology for large-scale deliveries in 2015, to ensure it will provide the greatest possible benefit to the industry.

Conclusion
In order to demonstrate the potential of its liquid-cooling system and attract investment, Iceotope needed proof that it could significantly outperform conventional cooling systems. Iceotope contacted Romonet to simulate the performance of its technology in the data centre environment, compared with today's most efficient air-cooling systems. Romonet's simulation both validated Iceotope's claims and gave it a vital proof point that was used to attract and secure investment, most recently $10 million in January 2014.
SAVE UP TO 90% ON COOLING COSTS WITH EVAPORATIVE COOLING
Evaporative Cooling from EcoCooling
Why EcoCooling? • 90% less energy than air conditioning • No refrigerants • Low carbon
Data Centre Cooling
Achieve PUEs lower than 1.1 with ASHRAE-compliant conditions.
• New build and retrofit
• ROI in under a year
• Over 200 installations in the UK
• ASHRAE-compliant conditions
• Achieve PUEs lower than 1.1
EcoCooling Ltd @EcoCooling1
Tel: 01284 810586 Email: sales@ecocooling.org Web: www.ecocooling.org
Symonds Farm Business Park, Newmarket Rd, Bury St Edmunds, IP28 6RE
MANAGEMENT
AUTOMATIC choice
Dr Thomas Wellinger, market manager data centre at R&M, says that automated asset and cabling infrastructure management can be achieved without hassle
The more servers you have, the more challenging it becomes to monitor every operational aspect of your servers and switches, as well as cooling and power equipment and any other linked IT hardware. Radio frequency identification (RFID) tagging can, however, provide a solution for inventory automation as well as ultra-fast and accurate core monitoring and feedback.
Today, DC operators face the challenge of maintaining extremely high levels of availability whilst significantly improving efficiencies and lowering costs. DCIM (data centre infrastructure management) plays a vital role in this. One of the key elements of DCIM is monitoring, whereby data is collected from sensors, meters and management systems. Once collected, this data needs to be collated, normalised, analysed and presented in an immediately understandable format. What's more, as DC tasks increasingly move to service models, and especially the cloud, infrastructure and hardware/software changes in the data centre need to be communicated to the internal and external parties providing the service, instantly and in an error-proof, automated manner. Seamless, semi-automated processes need to be introduced which bridge traditional technology and management divides.

Making your DC's rack-based assets more visible
Besides providing alerts, DCIM is essential to generating performance data which can serve as the basis for improvements and enhancements, and can be fed into a data centre asset management tool. This centralised system can store detailed information about the physical equipment as well as operational and workflow data and, in many cases, track changes, such as the deployment and movement of physical assets. Linking change management to an asset management system means information is always up to date. Manually managed infrastructure data typically has a 10% error rate, and 20-40% of ports in a network are forgotten over time. Manual management also takes a great deal of staff time.
An automated solution continuously monitors each connection in one or more data centres or local networks, and a (remote) central server records the cabling status. This type of AIM-based solution offers functions for management, analysis and planning of cabling and network cabinets, and can halve network
monitoring and management costs. Updates are automatically generated when new devices are integrated or changes are made. Unused patch panels and ports in active equipment are immediately detected. Data can be traced in real time with a PC or smartphone, locating faulty connections within seconds.

A holistic approach to DCIM
Another important responsibility of data centre managers is cabling and connectivity management. DCIM software solutions are already catering to this requirement, for example by using a software control layer to map and monitor cabling assets and physical connections in real time. This 'intelligent physical layer management' (IPLM) complements existing IT network management and monitoring products. Any disturbance or inefficiency can be found almost instantly and remedied. Audits also become significantly faster and easier, as does inventory maintenance.
Integrating cable management into DCIM improves uptime and enables fast and efficient reaction. It allows 'drilling down' to individual links between specific racks and other equipment, including switches, routers, firewalls and network devices. In data centre infrastructure, cabling is often one of the key cost points, despite the fact that it is not as visible as hardware components. Proper connectivity management is essential to accommodating growth and offering flexibility. If cabling doesn't seem to be working properly, having an extensive history of moves, adds and changes can prevent extensive field research and testing. This allows a more proactive approach to infrastructure management, showing up potential growth areas and difficulties. Problems and improvement areas can be examined and solutions proposed before changes are actually physically carried out on site. You can visualise patch panel cable management from remote locations, for example.
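The automatic detection of moves, adds and changes described above boils down to comparing the last recorded cabling state against a fresh scan. The sketch below illustrates that comparison; the port naming scheme is invented, and real AIM systems read connection state from intelligent patch panels rather than hand-entered dictionaries:

```python
# Hedged sketch of AIM-style change detection: diff the recorded
# cabling state against a fresh scan. Port names are illustrative.

recorded = {"panel1/p01": "sw1/g0-1", "panel1/p02": "sw1/g0-2",
            "panel1/p03": None}                 # None = unused port
scanned  = {"panel1/p01": "sw1/g0-1", "panel1/p02": None,
            "panel1/p03": "sw2/g0-5"}

def diff_states(recorded, scanned):
    """Return {port: (old, new)} for every port whose connection changed."""
    changes = {}
    for port in recorded.keys() | scanned.keys():
        old, new = recorded.get(port), scanned.get(port)
        if old != new:
            changes[port] = (old, new)
    return changes

print(diff_states(recorded, scanned))
# p02 was unplugged and p03 newly patched -- both flagged in one scan cycle
```

This is how an unused port or an undocumented patch surfaces immediately instead of being forgotten over time.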
When selecting a cable management and documentation solution, it's definitely worth checking whether it fits in with current or future DCIM requirements.

Tagging solution
A data centre can easily contain 10,000 individual assets. Once a service ticket has been issued, a maintenance technician might easily spend a day locating a faulty asset or link. Introducing RFID on DC assets helps automate IT inventory tracking tasks, covering everything from racks and servers to individual blades inside a chassis. The status and location of assets can be dynamically monitored, along with parameters such as temperature and humidity. During relocations, each data centre asset can be tracked separately. RFID is a mature, proven technology which also helps reduce security risks. For critical IT systems, this translates to significantly enhanced reliability, efficiency and effectiveness.
DCIM tools leveraging RFID can apply updates more effectively. Maintenance and warranty information can also be included in RFID tags, allowing asset-specific monitoring of warranties (with expiry dates), SLAs or depreciation. Another huge advantage is the fact that this way of working eliminates time-consuming and often error-prone manual checking, processing and data entry.
In recent years, barcodes have been used to monitor DC infrastructure. However, these are often difficult to access and read, as they're obstructed by cables and rack components. RFID reads 100% of all equipment, and even internal components, easily and accurately. Barcodes tend to be more vulnerable, and scanning introduces errors, especially wherever reading conditions are less than ideal. Furthermore, barcodes are read-only and easily replicated, whereas RFID tags can be modified and used to trigger events, can be encrypted and aren't easily copied. They can also send alerts whenever assets are moved from their designated location.

Keeping an eye on cabling
In short, an integrated DCIM solution incorporating cabling parameters offers automated, end-to-end, real-time, on-demand physical asset inventory and management for your data centres at the touch of a button. This can support your risk-mitigation, compliance, business, capacity-planning and security needs whilst making error-prone, time-consuming manual work obsolete.
Audits that once required folders full of spreadsheets and took weeks to complete can now be performed in hours. Any upfront investment can typically be earned back in a year or so. Apart from drastically reducing the time spent on creating inventories, capacity is freed up for core tasks that actually contribute to the bottom line. Management reporting improves significantly, as does the documentation on which strategic choices and hardware purchases are based.
STORAGE
Overcoming MOORE’S LAW Thomas Arenz, head of MI, SBD and marketing communications EMEA at Samsung Semiconductor looks at the creation of 3D V-NAND
In keeping with Moore's Law, the last 40 years have seen conventional flash memory shrink substantially, with the number of internal cells roughly doubling every two years. The first NAND flash storage, developed in the 1990s, used a basic design rule of 120 nanometers. Now, as a result of continued reductions in size, this design rule has been shrunk to a mere 10-nanometer class, allowing manufacturers to fit 64 times more cells into the same space. As it stands, a typical 1x-nanometer 128Gb 3-bit multi-level cell device contains around 43 billion NAND cells (enough to store 16GB). These cells are organised into a two-dimensional planar structure, with each row of cells packed tightly together to minimise the size of the final memory device. While Moore's Law is often seen as an inevitable part of technological advancement, such advancements do not happen on their own. As the size of technology decreases, engineers are faced with entirely new and often unforeseen challenges that need to be overcome. In the case of NAND flash memory, these challenges first became apparent when improved manufacturing process technology proceeded beyond the 10-nanometer-class
limit. As the size of these devices decreased, more flash cells were forced closer and closer together within smaller spaces. At this point, it became increasingly apparent that engineers might one day be faced with a potential "scaling limit" - a point at which Moore's Law meets inherent physical limitations. Over the last 10 years, many people have begun to wonder whether such a scaling limit has finally been reached. The most common manifestation of this limit is cell-to-cell interference. With 43 billion cells crammed into an ever-smaller space, the gap between each individual cell has had to decrease. As a result, cells will often begin to interfere with one another's behaviour, ultimately leading to long-term cell damage and even data corruption. Previously, when memory products had a design rule of 30 nanometers or more, such interference was easily controlled through the design. As the design rule becomes smaller, however, the probability of data corruption dramatically increases. Additionally, as well as affecting cell function, the shrinking of NAND cells also makes it more difficult to find an appropriate light source for the photolithography process. For example, the light source for a 40-nanometer-class design rule can be used
to inscribe photo mask patterning on a NAND wafer without any issue. Once the design approaches 1x nanometers, however, this light source cannot penetrate the smaller pattern. As a result, the development of an adequate light source for 1x-nanometer patterning requires huge investment in new equipment.

Introducing 3D V-NAND
At Samsung, we spent the last 10 years (as well as building a new fab in excess of $11 billion) looking for an innovative way to address this problem. Either we needed to find a new way of designing our memory products, or we had to accept the possibility that a genuine scaling limit had finally been reached. The most significant roadblock was clear: there were only so many cells that could be contained within the existing planar structure. As such, we looked to develop an entirely new concept for cell arrangement - switching horizontal rows to vertical columns. By adopting the concept of "layers" instead of nanometers, we were able to avoid squeezing more cells into the same horizontal space. Instead, we would build on top of the existing planar structure in a new vertical direction - thus V (vertical) NAND. In its simplest form, this concept started out as nothing more than an analogy comparing living in a house with living in an apartment building. Through this analogy, the advantages of layering vertically grew increasingly clear. Whereas a two-storey home might only house four people, the first apartment block meant that hundreds more people could be housed on exactly the same plot of land. At the same time, this did not involve the people moving closer together and ultimately interfering with one another's daily lives. Going upwards solved the problems of going outward. By applying this analogy, Samsung was able to create the theoretical concept for a vertically layered, three-dimensional V-NAND structure.
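The house-to-apartment analogy can be put into rough numbers. The sketch below is purely illustrative arithmetic (the 15nm figure is an assumption chosen to match the article's "64 times" claim; none of it is Samsung data):

```python
# Illustrative arithmetic only - not Samsung figures.

def planar_gain(old_nm: float, new_nm: float) -> float:
    """Areal cell-density gain from a linear design-rule shrink:
    a 2D planar array gains with the square of the shrink factor."""
    return (old_nm / new_nm) ** 2

def vnand_gain(layers: int) -> int:
    """Capacity gain from stacking cell layers vertically at an
    unchanged design rule - no further shrink (or interference) needed."""
    return layers

# Shrinking from the original 120nm rule to an assumed 15nm (within
# the "1x nanometer" class) packs 8 x 8 = 64 times more cells into
# the same area - the article's "64 times more cells".
print(planar_gain(120, 15))  # 64.0

# The first V-NAND parts instead stacked 24 layers for a 24x gain.
print(vnand_gain(24))  # 24
```

The point of the comparison: the planar gain requires an ever-harder shrink, while the vertical gain leaves cell spacing (and therefore interference) alone.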
While the first successful attempts at building this structure only succeeded in stacking 24 vertical layers, this has already increased to 32 and will continue to grow for years to come without encouraging interference. By significantly reducing the risk of interference, 3D V-NAND can dramatically improve the endurance of each individual cell. In the case of write-intensive applications, this can result in as much as a 5x improvement in the longevity of a device, along with a 100% increase in write speed. For read-centric devices, on the other hand, the improvement in longevity can be as high as 10x. In all of these cases the end user benefits from lower energy consumption, reduced maintenance costs and an increase in storage capacity within the same physical space.

The Challenges
While the analogy of switching from a house to an apartment may make 3D V-NAND sound like an extremely simple transition, it is one that has taken almost 10 years and 300 patents to get this far. The creation of 3D V-NAND faced a series of significant challenges, one of the biggest being the decision of which material to use when developing the technology. Whereas traditional NAND technology is produced using conductors (through which charges can move freely), the vertical layering of V-NAND technology requires the use of an insulator, through which charges struggle to move at the necessary pace. In an effort to overcome this problem, Samsung revisited an eight-year-old technology known as Charge Trap Flash (CTF). By reapplying its cylinder-shaped architecture, we were able to develop a system in which electrical charges are temporarily placed in a silicon nitride holding chamber. This chamber then acts as an alternative to the traditional floating gate, with charges being released as required at the necessary speed.
While solving these challenges did not come cheap, the complexity of 3D V-NAND has allowed Samsung to remain the only V-NAND supplier on the market today. With the rise of big data, Samsung Semiconductor believes that this investment will soon demonstrate its full economic potential. The data centre industry is beginning to realise that it cannot go on indefinitely increasing the number of storage servers without eventually running out of physical space. Sooner or later this expansion will have to stop, whether as a result of corporate budgets, energy consumption regulations, or simply a sheer lack of available space. The only option is to increase capacity without increasing the size of storage devices. This is the true challenge of the data centre marketplace, and it is one that 3D V-NAND is ultimately helping to address.
MANAGEMENT
DATA CENTRE OPTIMIZATION: It’s time to deploy DCIM software! Mark Harris, VP of data centre strategy for Nlyte Software says it’s time to invest in DCIM
For much of the 1990s and early 2000s, data center professionals spent the majority of their time satisfying the needs of their users by pulling various hardware and software technologies together and keeping them running for mission-critical applications. Interoperability was always in question, and the expertise to mix vendors and solutions became highly sought-after. Beginning with the banking crisis in 2008, a significant level of economic stress was felt across all industries globally. This fiscal pressure reached Information Technology (IT) and, ironically, arrived just as IT infrastructures were becoming even more critical thanks to the rapid adoption of mobile, always-connected computing. This drove a cost-based model of computing, which became a forcing function for IT professionals to treat efficiency as a primary metric of their success. In response, all types of efficiency-oriented projects were undertaken. We, the IT professionals, have gone from racks of individual power-hungry servers to low-power servers and even multi-core blade server chassis. We initially deployed servers with a single operating system each, and over the years replaced those with as many as 20 virtualized operating systems per server. Not to mention the
variable-speed drives in our CRAC units and the hot- and cold-aisle containment that reduces cooling costs. We have done an amazing job of squeezing our cost structure using basic, tangible technologies that are simply better at the same type of computing we have all practiced for years. That brings us to 2014, where we must continue to look for ways to optimize the data center even further. The simple "low-hanging fruit" has already been harvested, so what next? The answer is Data Center Infrastructure Management, or DCIM. DCIM allows you to create a baseline - a solid understanding of what you have and why you have it. It enables you to correlate your actual workload handling to the specific servers doing the work, and in turn correlate those servers to the power they are consuming. DCIM goes further by allowing you to add the business value and organizational alignment of each component in the data center into the computing model, and ultimately DCIM allows you to look for ways to align the demand for computing with the actual supply of it. While DCIM has been discussed for many years, as an emerging technology it has been on the periphery of our vision. Many IT professionals have been aware of it, but have been focused on the more easily executed efficiency projects and hence have not been able to explore the application of DCIM to their own data centers. Well, this month
Gartner, the world's leading information technology research and advisory firm, released its DCIM "Magic Quadrant", which is used to benchmark the critical capabilities needed in forward-thinking computing environments. The introduction of Gartner's DCIM Magic Quadrant underscores the critical need to deploy DCIM today. Remember, the majority of the 'easy' optimization has already been done, so DCIM is part of the next phase of optimization. So what capabilities does DCIM bring to your table? It brings deep understanding of your data center, down to the physical components - every patch cable, every server. DCIM offers insight into why each device is in production, who owns it, how much power it is consuming, what business function it is providing and even when it should be replaced. DCIM enables the smooth management of change, whether remedial or planned, and supports technology refresh initiatives. It has been said by the industry's brightest analysts that DCIM provides the most significant opportunity to optimize a data center's efficiency. DCIM is here now, and many of the largest and most successful organizations in the world are already involved in evaluations or deployments. Your data center is likely your single most expensive corporate asset, so a strategic approach to managing all of its change would be well worth your time - it is time to begin this next chapter.
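The correlation chain described here - workload to server to power draw - can be sketched in a few lines (hypothetical inventory data, not Nlyte's product or API; server names, owners and wattages are invented):

```python
# A minimal sketch of the correlation a DCIM tool performs: tying
# workloads to servers and servers to measured power draw.
servers = {
    "srv-01": {"watts": 450, "owner": "finance", "workloads": ["payroll"]},
    "srv-02": {"watts": 430, "owner": "finance", "workloads": []},
    "srv-03": {"watts": 120, "owner": "web", "workloads": ["cms", "cache"]},
}

def watts_per_workload(servers):
    """Attribute each server's draw to the business functions it serves.
    Servers carrying no workload show up as pure overhead - candidates
    for consolidation or decommissioning."""
    report = {"_idle_overhead": 0}
    for name, s in servers.items():
        if not s["workloads"]:
            report["_idle_overhead"] += s["watts"]
            continue
        share = s["watts"] / len(s["workloads"])
        for w in s["workloads"]:
            report[w] = report.get(w, 0) + share
    return report

print(watts_per_workload(servers))
# srv-02 draws 430W while doing no work - exactly the kind of
# misalignment between supply and demand that DCIM is meant to expose.
```

Real DCIM suites add far more dimensions (ownership, depreciation, refresh dates), but the underlying idea is this join between the physical and the business view.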
ENERGY EFFICIENCY
DRIVING energy efficiency Matt Salter, sales director at Redstone considers how using IT intelligently can have a real impact on energy efficiency…
Energy efficiency has been on the business agenda for a long time. However, organisations differ as to how far up their own agenda it sits and to what degree they are actually energy efficient. Technology vendors have made it easier for organisations to be more efficient, as all new products should come with information on energy usage. The government has also played its part in persuading businesses to take energy efficiency more seriously, with large organisations needing to report on their energy usage and meet certain carbon targets. For an organisation to become truly energy efficient, however, it needs to start thinking about the ways in which it uses IT and how IT can be employed to reduce growing energy consumption and costs; it's time to start using IT intelligently. Lights left on, desks powered up when not in use, inefficient use of office space and the HVAC running when no one is in the office are
common daily occurrences in organisations throughout the UK. These practices are an enormous drain on power and needlessly waste expensive energy. Leaving chargers plugged in or the lights on are just lazy habits that we are all guilty of, and they rely on a user taking responsibility for switching things off. Take the equipment on most office workers' desks as an example: there is usually a monitor, a phone and a workstation, and all these devices still consume power when the desk is not in use. If you add up evenings, weekends, working days out of the office and holidays, that is a significant amount of wasted energy. Intelligent booking systems can help solve this issue by only powering up a desk when it is in use. When the user comes into the office they can swipe in to book a desk and the desk's equipment will power up automatically; when they leave and swipe out, everything powers down. This system significantly reduces day-to-day power
consumption in the office environment and doesn't rely on the user to switch anything off. Giving employees the ability to book their own seat in the office allows desks to be used when and where they are needed. On a quiet day when many employees are out of the office, any staff who do need a desk could occupy just one floor, meaning that power to the other parts of the office building can be turned off for that day. Even the office lighting can be programmed to work intelligently. Intelligent lighting can offer real energy savings to businesses and can help ensure the lights are on only when needed. For example, if an office worker comes into the office out of hours, they can swipe in and only the areas of the building where they need to go will light up. Lights can also 'daylight harvest' and respond to ambient light conditions, so if it is bright outside, full lighting is not required in the office. The data centre is perhaps the biggest culprit when it comes to energy usage: analyst group Forrester claims that data centres are responsible for 45% of all IT energy consumption. Controlling energy consumption in the data centre is therefore imperative, but for many owners understanding how and where to measure energy usage is the challenge. RFID energy management systems offer an intelligent solution. These systems use tags or wireless sensors to monitor environmental conditions, such as temperature and humidity, at various locations within the data centre and then relay the data to readers. Software then interprets the information and identifies any problem areas so steps can be taken to improve efficiency and reduce power consumption. Sensors can be used on the front of server cabinets to ensure supply temperatures are within the recommended range. Air pressure sensors can be used to identify low-pressure areas that may be causing hot spots.
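At its core, the kind of check such monitoring software performs is a comparison of readings against a recommended envelope. A minimal sketch follows (the thresholds are illustrative, loosely based on commonly recommended inlet ranges, and the sensor names are invented):

```python
# Illustrative thresholds for a cabinet inlet - real deployments would
# take their limits from their own SLAs and equipment specifications.
TEMP_RANGE_C = (18.0, 27.0)
HUMIDITY_RANGE_PCT = (20.0, 80.0)

def check_reading(sensor_id, temp_c, humidity_pct):
    """Compare one sensor reading against the recommended envelope and
    return a list of problems (empty means the reading is in range)."""
    problems = []
    if not TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]:
        problems.append(f"{sensor_id}: inlet temp {temp_c}C out of range")
    if not HUMIDITY_RANGE_PCT[0] <= humidity_pct <= HUMIDITY_RANGE_PCT[1]:
        problems.append(f"{sensor_id}: humidity {humidity_pct}% out of range")
    return problems

# A hot spot at the top of a cabinet shows up immediately:
print(check_reading("cab-12-front-top", temp_c=31.5, humidity_pct=45.0))
# ['cab-12-front-top: inlet temp 31.5C out of range']
```

Running this over every tag or wireless sensor in the room is what turns raw readings into the "problem areas" the software reports.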
Installing environmental sensors in the correct locations, and fully integrating and using the information they produce, can deliver real energy efficiencies to the data centre. Although IT is a great consumer of power, it should instead be viewed as a solution for better controlling our working environments and as an aid to reducing power consumption where it really matters. Only when we truly embrace the intelligence that IT can deliver will it have a real impact on our organisations.
COSTING the Earth?

Secure IT looks at driving greater efficiency in the data centre

A lot of effort is put into the environmental credentials of data centres at the design stage. Ultimately they have to be as efficient as possible to lower running costs, and that means thinking carefully about layout, cooling, lighting and hardware. Everyone gives themselves a pat on the back when the site goes live, and rightly so, but keeping a data centre performing well from an environmental perspective has to be a front-of-mind and ongoing exercise. The right cooling parameters at go-live are just the start of the cycle.

ISO 50001, introduced in 2011 and developed by experts from more than 60 countries, is the standard that demands the search for continuous improvements in energy efficiency. It targets large companies, compelling them to sign up and demonstrate their energy efficiency improvements year on year. You can't realistically achieve this without environmental monitoring - it's critical for every organisation and data centre: you can't improve what you can't measure.

There are a vast number of monitoring systems and software options available, most of them 'off the shelf'. This is not the place to make any recommendations on software, but there is a very clear trend that we have noticed, and that is the attractiveness of open source solutions. It has nothing to do with cost: customers find the use of open standards that future-proof monitoring, and the ways in which many of these packages can be tailored to the environment, a very powerful combination.

The reach of ISO 50001 will be extended to cover smaller businesses. So, with the importance of monitoring looming, how can companies start to make inroads into improving their energy efficiency in the data centre? It is not just computer hardware and software that is constantly evolving to improve the energy efficiency and utilisation of the estate. Technology throughout the data centre is doing the same, and if incorporated can only have a positive impact on a company's overall energy efficiency rating. All well and good for those building a new data centre with state-of-the-art energy efficiency products and facilities, but for those looking at an existing DC facility - what options are there to cut energy straight from the bottom line?

LED lighting
LED lighting is more efficient and needs replacing less often than conventional lighting. It is more expensive to install, but, just as homes across the UK are finding, it can make a significant difference to your bills when you consider that on average a DC will have 50-60 bulbs.

Plug fans/EC motors
Customers can replace parts of existing air conditioning systems rather than going through the expense of a complete replacement - for example, replacing drive-belt systems with much improved Electronically Commutated (EC) fans. The variable speed of these units reduces wear and tear because they are electronically managed. They can immediately bring energy savings of up to 45%, meaning they pay for themselves in no time.

Blanking plates and air conditioning
Air conditioning, along with UPS, uses the most energy within the DC, so it stands to reason that any energy savings in these areas will assist. The mixing of hot and cold air within the AC environment leads directly to inefficiency, so making every effort to keep these separate is important. Blanking plates can be used to seal off the hot/cold areas and are easily retrofitted. Many companies have started to reclaim heat from air conditioning and are also looking to use solar energy, which, although not suitable for powering a data centre, can be used for general office lighting and so on, in order to offset energy usage within the data centre.

Brush strips!
So, how do you improve data centre energy efficiency by up to 63%? Install brush strips in your data centre to seal cable openings. A basic improvement that, according to Gartner's 11 best practices for saving energy in a data centre, is currently number one. A vast amount of money is spent on cooling data centres, and without brush strips much of it is wasted as cooled air is lost through unsealed cable openings.

The point we are trying to make here is that simple changes and relatively small investments can make a huge difference to energy efficiency and consumption in the data centre. As data centres are among the biggest consumers of energy for many companies, this helps them make incremental and significant progress on their ISO 50001 submissions. But remember to measure: this will help you prove the value of these investments and gain budget for even more improvements.
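The "pay for themselves in no time" claim about EC fan retrofits is simple payback arithmetic. Here is a sketch under stated, illustrative assumptions (the load, tariff and retrofit cost are invented figures, not vendor data; only the 45% saving comes from the article):

```python
# Back-of-the-envelope payback period for an EC fan retrofit.
def payback_years(old_kw, saving_fraction, tariff_per_kwh, retrofit_cost):
    """Years to recover the retrofit cost from the energy it saves,
    assuming the fans run continuously at a flat tariff."""
    saved_kw = old_kw * saving_fraction
    annual_saving = saved_kw * 24 * 365 * tariff_per_kwh
    return retrofit_cost / annual_saving

# A 10kW belt-driven fan system, the article's 45% saving, a 10p/kWh
# tariff and an assumed 8,000 GBP retrofit cost:
years = payback_years(10.0, 0.45, 0.10, 8000)
print(round(years, 2))  # 2.03
```

Even with deliberately cautious assumptions the payback lands at around two years, which is why retrofits like this are usually the first items on an ISO 50001 improvement plan.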
Access Control - Keyless Locks
Fit keyless locks to server racks, cabinets and cages. Go Keyless. Go KitLock. KitLock is a Codelocks Ltd brand.
01635 239645 . sales@kitlock.com . www.kitlock.com

Data Centre Services
MOVE IT, RACK IT, POWER IT, PATCH IT. Dedicated team to assist you with Data Centre migrations, moves and changes. Cabinetry, cabling, fibre/copper patching, transport, PAT testing. Assistance by day/week or as required. Technical couriers.
Contact Philip McCabe 07715 665184 . www.systems-trans-connect.co.uk . contact@systems-trans-connect.co.uk

Data Centre Solutions
High Quality, Managed Lifecycle and IMAC Services for all Data Centre Equipment. Data Centre Migration and Hardware Relocation Specialists. Best in Class Equipment Installation and Cabling Standards. Live Equipment and Cabling Audits. Rapid Deployment and Patching Services. Infrastructure Management, Design and Planning. End of Life Secure Equipment Decommissioning.
Talk to an expert today on 01245 392 572 or visit www.joycesolutions.co.uk

Cold & Hot Aisle Containment Solutions
Cold Aisle / Hot Aisle Containment is a simple, cost-effective, non-intrusive solution to datacentre heat problems. Reduce your PUE. Free site survey. Retrofit solutions.
Call 01753 695090 . www.cold-aisle-containment.co.uk

Structural Cabling Solutions
cablelines - Providing Structure To Your Network. World-class cabling architecture systems from leading manufacturers (KRONE, AMP). Copper cabling solutions for 10G Base-T and 100Gbit Ethernet. Comprehensive stock holding available for next-day delivery to ensure your every need is met.
2A Albany Park, Frimley Road, Camberley, Surrey, GU16 7PL . 01276 405777 . sales@cablelines.com . www.cablelines.com

Complete Data Centre Solutions
Cablenet - Platinum Distributor of Raritan in UK. Make the intelligent choice: affordable metered PDUs with environmental monitoring and other advanced intelligent features built right in. Reduce your DC Management Overhead.
sales@cablenet.co.uk . www.cablenet.co.uk . Tel: +44 (0)1276 405300 . Fax: +44 (0)1276 405309

Fire Protection

Standby Power Systems
www.burtonwoodgroup.com
Products & Services
To advertise here please call Paul Lane on 0207 348 5259