Mission Critical Power


missioncriticalpower.uk

ISSUE 11: June 2017

12 Ian Bitterlin: is the Cloud more or less resilient than your own data centre?

18 Is a skills gap in power management putting mission critical infrastructure at risk?

26 Uptime Institute: data centres explore innovative onsite power generation

The infrastructure designed for your network edge.

The small solution that gives you a big edge... Micro Data Centre Solutions: Winner of DCS Awards




IN THIS ISSUE

08 Data centre trends: Should you look before you leap to the Cloud? There is no mass exodus, but what are the dominant trends?

12 Viewpoint: Is the Cloud more or less resilient than your own data centre?

07 News: Zombie servers pose a serious threat to security

18 Infrastructure: A skills gap could pose a risk to critical infrastructure. A recent survey shows that power management is poorly understood

26 Uptime Institute: Could onsite power generation be the next wave of innovation?


28 Demand-side response: Is traditional DSR dead? There is a lack of understanding of where the true value lies

22 Cover story: Optimising cooling and keeping in control of costs

Comment 4

DCIM solutions 16

UPS 42

News 6

Cooling & Air Movement 22

Lighting 44

Data Centre Infrastructure 8

Energy Storage 31

Products 47

Testing & Inspection 37

Q&A 50

Viewpoint 12

To subscribe please contact: missioncriticalpower.uk/subscribe



4 COMMENT

Zombie attack: fact or fiction?
Recent events have highlighted the vulnerabilities of large organisations to cyber crime and the disastrous impact that malware can have on vital operations. This is a wake-up call for all those tasked with ensuring 100% uptime of mission critical services. The WannaCry ransomware attack, which started on 12 May 2017, is the biggest single incident that the UK National Cyber Security Centre (NCSC) has faced. Furthermore, the cyber threat to UK business is significant and growing; in the three months since the NCSC was created, the UK has been hit by 188 high-level attacks, which were serious enough to warrant NCSC involvement, and countless lower level ones. The NCSC warns that connected devices in areas such as energy (eg smart meters), physical security (eg networked security cameras) and facilities automation (eg connected indoor LED lighting) provide "tangible competitive and business advantage, but the risk of connecting devices may be difficult to assess". As a result, it is "likely that there will be an increase in high profile hacking incidents" which impact businesses due to lax security in connected devices. The threat posed by hackers is not just an issue for managers seeking to protect sensitive data, or prevent disruption to business systems and IT-related services. Cyber security expert Applied Risk has witnessed a trend where hackers are rapidly shifting their scope of focus. While targets previously included financial services and banks, industrial environments now represent an increasingly lucrative target. However, the threat against industrial environments is not limited to cyber-attackers seeking financial gain. Hackers could also include nation states, state-sponsored actors and potentially even competitors seeking an edge. Data centres also need to address the security risk within. A report by Koomey Analytics and Anthesis warns that 'zombie' servers are unlikely to have the latest security patches, which makes them an 'open door' to many enterprise data centres. Other experts have warned that hackers could target SCADA, PLCs, distributed control systems, or other software-based systems within the data centre, to knock out critical power or cooling infrastructure. According to Dell's 2015 Annual Security Report, cyber-attacks against SCADA systems doubled in 2014, to more than 160,000. Outside the facility, stability of the power supply could be compromised. Three Ukrainian energy distribution companies fell victim to a cyber attack in December 2015, resulting in electricity outages for approximately 225,000 customers across the Ivano-Frankivsk region of Western Ukraine. Attackers gained unauthorised entry into a regional electricity distribution company's corporate network and ICS, resulting in seven 110kV and twenty-three 35kV substations being disconnected for three hours. Governments, utilities, industry and all business sectors will need to be prepared. The reality could be far scarier than the apocalyptic fiction... A zombie attack is coming to a server near you…
Louise Frampton, editor

Editor Louise Frampton louise@energystmedia.com t: 020 34092043 m: 07824317819
Managing Editor Tim McManan-Smith tim@energystmedia.com
Production Paul Lindsell production@energystmedia.com m: 07790 434813
Sales director Steve Swaine steve@energystmedia.com t: 020 3714 4451 m: 07818 574300
Commercial manager Daniel Coyne daniel@energystmedia.com t: 020 3751 7863 m: 07557 109476
Circulation enquiries circulation@energystmedia.com

Energyst Media Ltd, PO BOX 420, Reigate, Surrey RH2 2DU. Registered in England & Wales – 8667229. Registered at Stationers Hall – ISSN 0964 8321. Printed by Headley Brothers Ltd. No part of this publication may be reproduced without the written permission of the publishers. The opinions expressed in this publication are not necessarily those of the publishers. Mission Critical Power is a controlled circulation magazine available to selected professionals interested in energy, who fall within the publisher's terms of control. For those outside of these terms, the annual subscription is £60 including postage in the UK. For all subscriptions outside the UK the annual subscription is £120 including postage.

Follow us for up-to-date news and information: missioncriticalpower.uk



6 NEWS & COMMENT

Awards highlight innovation in the data centre sector
The Data Centre Solutions Awards are designed to reward the product designers, manufacturers, suppliers and providers operating in the data centre arena. Highlighting the continued drive for innovation in the sector, the awards were presented during a gala dinner at the Grange Hotel St Paul's, London, on 18 May 2017. Among the winners was Schneider Electric, which won the award for Data Centre Cabinets and Racks Product of the Year. As the IT landscape evolves with more and more services being outsourced to the Cloud, the equipment and applications being kept on-premise have become increasingly important and are frequently business critical. Schneider's Micro Data Centre solutions allow data-intensive, as well as latency and bandwidth-sensitive, edge computing loads to be deployed securely using the same standardised physical infrastructure found in some of the world's largest and most complex data centres. Receiving the award on behalf of Schneider Electric, Rob McKernan, SVP Europe and Global IT Channels Governance, said: "Racks

and enclosures are the building blocks of every IT environment. We're delighted that those involved in the DCS Awards have given this accolade to Schneider Electric's Micro Data Centre solution for its innovative approach to emerging edge data centre requirements." Other winners included UPS manufacturer Riello, which scooped the award for Data Centre Power Product of the Year. Riello's Multi Power Combo was recognised as an 'outstanding product in its field', providing high power in a compact space. Multi Power Combo marks the second modular product developed

by Riello UPS and features both UPS modules and battery units in one. The product is capable of housing up to three 42kW power modules and 20 battery units across five battery shelves, delivering total power output of 126kW or 84kW with N+1 redundancy. Collecting the award, Riello's general manager Leo Craig said: "We're thrilled that the Multi Power Combo has been recognised in these prestigious industry awards, which are very competitive. We're proud of what our Multi Power Combo can offer as a product, with its outstanding power density and highest levels of power protection."
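For readers who want to sanity-check the quoted ratings, here is a minimal sketch of the arithmetic behind an N+1 modular UPS specification. The 42kW module rating comes from the article; the helper function and its names are illustrative assumptions, not Riello software.

```python
# Usable capacity of a modular UPS frame when one module is held in reserve
# for redundancy (N+1). Illustrative sketch only, using the 42kW module
# rating quoted for the Multi Power Combo.
def usable_capacity_kw(installed_modules: int, module_kw: float, redundant_modules: int = 1) -> float:
    """Capacity available to the load once redundant modules are set aside."""
    return max(installed_modules - redundant_modules, 0) * module_kw

print(usable_capacity_kw(3, 42, redundant_modules=0))  # 126.0 kW with no redundancy
print(usable_capacity_kw(3, 42, redundant_modules=1))  # 84.0 kW with N+1
```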

The Data Centre Cooling Product of the Year award was presented to Vertiv for its Liebert AFC adiabatic freecooling chiller range, which combines the extremely high energy efficiency obtained by adiabatic freecooling with the 24/7 availability of scroll or screw backup compressors. The design is said to further improve freecooling operation while reducing energy consumption. The Data Centre Energy Efficiency Project of the Year award went to Telehouse, for its Telehouse North Two facility, which features the 'world's first multistorey indirect adiabatic cooling system', enabling a PUE of 1.16 while providing a power capacity of 18.5MW. Other highlights included the award for Data Centre Facilities Management Product of the Year, which went to Sudlows for its Advanced Load Integrated System Testing (A:LIST), the latest addition to its Data Centre Testing Toolkit, and the Data Centre Cabling Product of the Year award, which went to Corning's EDGE8 Solution. The winners were decided by popular vote.

Demand from data centres energises diesel genset market
Although low economic growth is expected to keep the diesel generator set (genset) market subdued until 2020, several factors are sustaining market potential, according to Frost & Sullivan. These include the growth of data centres, rising infrastructure investments, low diesel prices due to falling oil prices, and capacity market auctions in the UK. Environmental and emission regulations in Europe are expected to play a key role in the future usage of diesel gensets.


This will cause industries to move toward cleaner-burning power generation such as renewables and gas. Companies that focus on client relationships, and offer newer, high-quality, reduced diesel fuel emission products like hybrid gensets, are positioned for growth. "The 1MW power range diesel genset segment is set to benefit as demand intensifies from data centres, with their high-power requirements and need for solutions

with quick start-up times and maximum reliability,” explained Frost & Sullivan energy and environment industry analyst Manoj Shankar. “Data centre networks across Europe are witnessing increasing investment from technology companies such as Apple, Salesforce.com, IBM, Google and Amazon, to tackle intensive data creation, tougher data protection laws and the expansion of banks and technology companies within Europe.”



7 Rapid growth for modular power solutions

Modular solution providers will tap the huge growth opportunities in data centres, the latest report from Frost & Sullivan predicts. Critical power manufacturers are rolling out a range of innovative modular products with leading-edge features and functionalities to keep pace with the technological evolution. As the modularity trend gathers pace, data centres will emerge as a powerful market in 2017. Modular solutions will register a growth rate of 21.3% in the critical power industry, mainly aided by the 83.4% revenue contribution from modular data centres. "Data centres' changing needs in terms of data storage, security and speed will drive the uptake of modular critical power products, especially uninterruptible power supply (UPS) systems," said Frost & Sullivan energy & environment senior industry analyst Gautham Gnanajothi. "Connected UPS systems will have a big role to play in energy management and efficiency, and will provide peaking power and demand response, which translates to high adoption potential." The report, Global Critical Power Industry Outlook 2017, adds that the market is expected to grow at 9% in 2017, with UPS generating the highest share of revenues, and light-emitting diode (LED) drivers and photovoltaic (PV) inverters contributing significantly as well. Non-traditional cooling solutions will account for only a small portion of the data centre cooling market; however, they will grow at 15.7% in 2017. Non-traditional cooling technologies are likely to grow two-and-a-half times faster than traditional cooling systems.

News in brief

66% of cooling fails
According to a major survey into data centre cooling, UK data centres are achieving poor levels of cooling utilisation, with an average 66% of installed cooling equipment not actually delivering any active cooling benefits. EkkoSense analysed some 128 UK data centre halls and more than 16,500 racks to reveal that the current average data centre cooling utilisation level is just 34%. The root cause was attributed to continued poor management of airflow and a failure to actively monitor and report rack temperatures.

Zombie servers pose significant threat to companies' cyber security
A report by Koomey Analytics and Anthesis has highlighted the problem of 'zombie' servers — not only are they a drain on revenue but they also pose a security risk, warn authors Jon Taylor and Jonathan Koomey. In previous work, the pair showed that about 30% of the enterprise servers in a five-facility, 4,000-server sample were comatose, performing no useful computing over a six-month period in 2014. In this follow-up analysis, the authors assessed the percentage of comatose (also known as zombie) servers in a sample taken in 2015, which covered four times as many servers and twice as many facilities. The analysis showed that about one quarter of physical servers were zombies in companies that had taken no action to remove them (which corresponds to the vast majority of companies running enterprise data centres). In addition, the data shows that about 30% of the virtual machines running on some of the physical servers (known as hypervisors) were also comatose, demonstrating that the same institutional and measurement problems that inhibit discovery and elimination of zombie physical servers also lead to significant numbers of zombie virtual machines. The authors point out that finding and eliminating comatose servers would save many enterprises money, but more importantly, taking that action would eliminate an unappreciated security risk. Zombie servers are unlikely to have the latest security patches, which makes them an open door to many enterprise data centres. They warn that "if the monetary incentives are not enough to ensure prompt action, concern over cyber security really should". To download the report, visit: https://tinyurl.com/klmxevj
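The study's working definition of a comatose server (no useful computing over a six-month window) lends itself to a simple screening heuristic. The sketch below is a hypothetical illustration of that idea only; the threshold, field names and data layout are invented for the example and are not the methodology Koomey Analytics and Anthesis actually used, which also considers network, storage and application activity.

```python
# Hypothetical screen for 'zombie' candidates: servers whose peak CPU
# utilisation never exceeded a low threshold across a monitoring window.
# Thresholds and data layout are assumptions made for illustration.
from typing import Dict, List

def zombie_candidates(samples: Dict[str, List[float]], cpu_threshold: float = 5.0) -> List[str]:
    """Return server names whose CPU utilisation stayed below the threshold in every sample."""
    return [name for name, cpu in samples.items() if cpu and max(cpu) < cpu_threshold]

six_months_of_samples = {
    "app-01": [2.0, 1.5, 3.2],    # never busy: flag for investigation
    "db-01": [40.0, 55.0, 12.0],  # clearly doing work
}
print(zombie_candidates(six_months_of_samples))  # ['app-01']
```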

Kohler acquires Pure Power Systems
Kohler Co. has acquired Pure Power Systems, a Dublin-based independent distributor and service provider of uninterruptible power supply (UPS) systems. The transaction will see Pure Power Systems become part of Kohler's UK-based UPS sales and service company – Uninterruptible Power Supplies Ltd (UPSL). Pure Power Systems has locations in Dublin and Limerick and was owned by company founder Ian Jackson.

Equinix expansion
Equinix has announced the completion of its acquisition of 29 data centres and their operations from Verizon Communications. The $3.6bn all cash transaction includes more than 1,000 customers, of which over 600 are net new, and approximately three million gross square feet of data centre space. Spread across 15 cities in North and Latin America, the new assets bring Equinix's total global footprint to more than 175 International Business Exchange (IBX) data centres across 44 markets.



8 DATA CENTRE INFRASTRUCTURE

Data centre trends: it's all about the hybrid
Research by the Uptime Institute has highlighted the latest trends in the data centre sector – the majority are adopting a hybrid strategy, although an 'exodus' to the Cloud has not materialised. Avoiding downtime remains a priority but are enterprises overconfident in their IT resilience strategies? Louise Frampton reports

A global survey of more than 1,000 data centre executives, conducted by the Uptime Institute, reveals that hybrid data centre infrastructure is now the dominant strategy for enterprises – organisations have their own data centre infrastructure, but are using colocation providers, as well as some Cloud providers. While the majority of IT organisations are moving some of their workloads to the Cloud, the percentage of workloads residing in enterprise-owned/operated data centres has remained stable at 65% since 2014. On average, respondents reported that nearly two-thirds of their IT assets are currently deployed in their own data centres, with 22% deployed in colocation or multi-tenant data centre providers, and 13% deployed in the Cloud. With the explosive growth in business critical applications and data, enterprises continue to see the data centre as not just important but essential to their digital-centric strategies. "We are not seeing a wholesale exodus to the Cloud platform… Enterprise-owned data centres are still the primary compute venues," says Uptime Institute senior director of content and publications Matt Stansberry. However, the survey does reveal that the Cloud is 'siphoning off' some workloads, although this trend was more prevalent among the largest organisations. Other findings include the fact that 60% of IT server footprints are flat or shrinking. "There are some reasons why this runaway growth has been slowing," Stansberry explains. "We are seeing a continuation of Moore's law. Processors are going to get smaller, more efficient and less expensive." He points out that there is increased performance at the processor level, continued adoption of server virtualisation, and rapid Cloud computing adoption. "People are getting increased efficiency and better performance out of their existing assets," explains Stansberry. The survey also found that around half of enterprise facility teams are updating legacy sites. The Uptime Institute points out that if organisations are

making the most out of existing assets and upgrading live sites, operations (sequencing, training and maintenance) becomes even more critical, especially at older facilities. The survey found that data centre budgets have remained strong in 2017. Compared to

2016, nearly 75% of companies' data centre budgets have increased or stayed consistent this year.

Data centre resilience
IT resilience is growing, complementing redundancy. The majority of companies



(68%) rely on IT-based resiliency. Using an IT architecture with multiple, geographically distributed data centres, they rely on live application failover in the event of an outage. At the same time, 73% of respondents said they are not deploying lower redundancy physical data centre sites despite increased adoption of IT-based resilience. "You still need to have the batteries, the UPS, the diesel generators," says Stansberry. "People are not ready to make the leap." Nevertheless, the majority of respondents were confident in their organisation's multisite IT resilience strategy (68%) and said they believed that their IT services would 'function as expected' in the

event of an outage. However, the Uptime Institute issued a word of warning, citing the example of Southwest Airlines in the US. A single router error set off systemic failures. "They had a plan for failover and it didn't work… In the process of this IT resiliency failure, millions were lost in earnings and stock devaluation," Stansberry comments. "Generators and UPSs are cheaper than this kind of crippling outage." Downtime matters, as the survey shows: more than 90% of data centre and IT professionals believe their corporate management is more concerned about outages now than they were just a year ago. While 90% of organisations conduct root cause analysis of any IT outage, only 60% report that they measure the cost of downtime as a business metric, however.

Growth in Tier certification
The Uptime Institute also reports significant growth in Tier certifications in the established data centre sectors, particularly the UK, Amsterdam, Paris and Frankfurt. However, it sees the most growth in the newer markets where data centres are less established: "Organisations are seeking reassurance that they are not over-engineering their data centres or building in faults that they may not find for four or five years," says managing director Phil Collerton. "Certification is also important to shareholders. Within the banking sector in Turkey, for example, there are strict rules about keeping data onsite, so brand-new data centres are being built and everyone is becoming Tier certified. There is an element of competition driving this trend… Banks are saying 'our competitor is Tier 3 certified, so why aren't we?' If you are spending $50m you [and your shareholders] want to know it is being spent wisely." The Uptime Institute is »

Data centre power outages: who's to blame?
The Uptime Institute survey shows that minimising the risk of power outages is still a high priority for data centre managers, but what are the main factors that need to be tackled to avoid such disasters? According to the Uptime Institute, the reason is much more complex than simply 'human error' and all too often the issues go much deeper within an organisation. "There have been some high-profile outages in older facilities and I would question the original commissioning of these sites," says Phil Collerton, managing director of the Uptime Institute in Europe, Middle East and Africa. "Some of the data centres that have had problems in the UK were commissioned in the late 90s or early 2000s, when there was starting to be a squeeze on spending, so I would question how rigorous the commissioning was. Usually, when building a data centre, you are running to a tight time-scale and you have a date for going live. As the finish date gets closer, there may be delays, so you end up squeezing the three-week commissioning and integrated systems testing. "This is one area that people should never cut back on. There are so many things that happen five or 10 years later that can be traced back to incomplete, poor and badly recorded commissioning." Another problem area for some older facilities is maintenance. Although there are schedules stipulated by the manufacturers of data centre infrastructure, in times of hardship, it can be tempting to cut back and defer maintenance. "This can come back to bite you," warns Collerton. "If we find a lot of planned maintenance has been cancelled, when we are certifying a data centre, this always raises a red flag… These patterns lead to bad practice and failures down the road." Often data centre failures are attributed to human error and this may refer to the fact that an operator 'pressed the wrong button' or perhaps didn't react properly. But Collerton points out that their action is at the end of a whole chain of events starting at the very top with management decisions on budgets, staffing, training, maintenance and a whole host of other factors, that culminate in something going wrong. "It is often the operator that is blamed, but it is not necessarily their fault," says Collerton. He points out that, at newer sites, outages are often due to cutting corners during the building of the data centre: "The design may be fine when it is handed over to the construction company but when it is being implemented they may see ways of saving money or they may add in 'value engineering'. "Suddenly, it is not the same as the original design. You may find you are trying to fix something when the documentation is wrong. When I used to build data centres, I would have someone walk around the facility every day to check what has been done. "Often you would find things that have been 'botched' or changed. Data centres need to be vigilant during construction, commission adequately, and, above all else, maintain the site properly. It is a false economy to cut corners," Collerton concludes.



focused on risk-mitigation, compliance and ensuring best practice. However, each regional market is different in terms of maturity, so the focus may vary. “The UK, Amsterdam, Paris and Frankfurt built their data centres around 15-20 years ago and they are running strong. When you have been running something for that long, you know its strengths and weaknesses; where the points of failure are and you take the operational procedures to avoid it,” says Collerton. He points out that in the more mature markets, where there is not a lot of building going on, operational best practice is the main focus. “If you run a portfolio of data centres how do you know team A, B and C are doing the same thing?” Internal audits can be conducted but the results risk being biased, he points out. A third-party, independent assessment can be performed, using common criteria, developed with 25 global data centre companies, to evaluate staffing, training, maintenance, management, capacity planning and operations.

All the factors that affect day-to-day data centre operations are assessed: "We look at how the site is running equipment – for example, are they 'killing' the UPS? Have the alarms been set properly? Is it always on-the-job training for the staff? How do they work with their vendors? How often are they cancelling maintenance?" Comparisons can be made with other sites within an enterprise's data centre portfolio. In Africa, where there is currently significant investment, the Uptime Institute is working with governments on flagship projects – such as ensuring the reliability of the first sub-

Saharan electronic voting operation. Countries in these regions that want to secure inward investment need to demonstrate that their infrastructure and operations meet internationally recognised certification. It is not just emerging markets that can benefit, however: "Germany has the TUV standard, which is good, but businesses outside the country want a global standard that they understand," says Collerton. "Spain and Italy are strong markets for us, as they are building new infrastructure." The Uptime Institute technical team is two-thirds engineering and one-third operations and best

practice. This is key to the organisation's approach: "We are not prescriptive. We start by asking, 'what do you use the data centre for?' It is not about how many UPSs you need or how much redundancy you need. What is the business reason behind the data centre?" Collerton explains. "We often have people contact us and say 'we want a Tier 4'… it is the most complex, but it may not be right for your business. It may be too complex… We find that we frequently have to talk people down from Tier 4 to a Tier 3, saving them money in design and certification costs. In some markets, they want 'a Ferrari' when a Ford Mondeo will suit their purposes." The Uptime Institute is recognised worldwide for the creation and administration of the rigorous Tier standards and certifications that enable data centres to achieve their mission while mitigating risk. It has awarded about 1,200 certifications in more than 80 countries and trained nearly 2,000 professionals with accredited Tier training.

'Look before you leap to the Cloud,' councils warned
Researchers from London's Brunel University have tracked what happens when local councils transfer services to Cloud computing and warned that local authorities and public sector organisations should "do their homework" before switching to the Cloud. Local authorities across Europe have been urged to move in-house IT services – such as servers, email and telephones – to internet-based providers, amid pressure to reduce their total investments in IT infrastructures and resources (eg data centres). Warwickshire County Council and the London Borough of Hillingdon were among the UK's first to announce plans to switch in about 2012. A study of three local councils found the Cloud brought several advantages but authorities tend to make the shift too hastily, with one council instantly hit by hackers. "These findings have messages for both local government and central government," says Dr Uthayasankar Sivarajah, part of the Brunel University research team. "One of the authorities faced an immediate security breach that caused chaos," added the lecturer in operations and information systems management. "Data was accessed illegally by an unauthorised third party and the private sector Cloud provider blamed human error." Government strategists predicted in 2011 that switching to the G-Cloud or government Cloud could


save £3.2bn because, as a shared service, costs are spread among organisations. But despite cost-cutting pressure, many public sector managers see the Cloud as more a liability than a labour saver, with data security and downtime the biggest fears. Making it easier to work from home and better information management are key advantages to councils switching to Cloud-based technologies, the team found. Major cons meanwhile are a lack of data ownership and loss of control and governance, because of a grey area around who has access to information. The report, Risk and rewards of cloud computing in the UK public sector, also revealed a general feeling among workers that their authority's move was a purely rushed attempt to meet the political agenda. "There are huge black holes between what the councils are trying to do and what they are achieving," says Dr Sivarajah. The biggest lesson to councils, he underlines, is that "the right person needs to drive and lead the implementation and sell it to the workers. At operational level they could all see real benefits in cost savings. But it is still early days and we don't know what the long-term impact will be. That may take 10 years to find out. It might reduce the headcount in IT departments, but I can't see it cutting out the need for them altogether."



Unlock flexibility Maximise returns

Headline Partner

DSR Event 2017 Demand-Side Response

Thursday 7th September / The Banking Hall, London

100 free places available to end users Register your place today at dsrevent.uk Partners

Media Partners

Organised By


12 VIEWPOINT

The Cloud: more or less resilient than your own data centre?
Ian Bitterlin discusses the likely reasons behind high profile outages, the lessons to be learned, and offers an insight into how to avert failures in the future

The latest outage in Microsoft Azure services, on 31 March 2017, was in Japan and lasted more than seven hours until 'most' services were back on line. This follows a similarly long Azure outage in 2014 that was eventually blamed by Microsoft on 'human error'. The Microsoft press release following the outage makes interesting reading and I will attempt to pick through the snippets of information and come up with a slightly more useful lesson to be learned on

the press release’s title; ‘UPS failure causes cooling outage’. Of course, seven hours of downtime in a year is only 99.9% availability – much lower than any end-user would accept from their own facility, and if you consider a ‘finance’ application then a failure once every couple of years, regardless of how long the outage was, would be beyond a disaster. This raises the interesting point that people choosing ‘Cloud’ versus in-house either don’t seem to realise that

‘Cloud’ is just someone else’s data centre or they focus on a contract littered with servicelevel agreements and penalties and believe the salesman’s puff about the reliability attributes of ‘Cloud’. Very few buyers of Cloud services will ask to see the facility – and where would the salesman take them? It is a Cloud, after all, floating, fluffy and nebulous… In fairness, I don’t think that MS Azure, on its current record, achieves anywhere near the availability offered

(and achieved) by most of the colocation providers, while most prospective Cloud purchasers do not have their own facility to compare anything with. The cost of colocation is certainly a lot less than building your own and, importantly, comes out of Opex rather than Capex. So, what about this latest failure? Well, you can find one version of the press release here: https://tinyurl.com/krxdaj6

There is one salient point: the failure resulted due to a lack of cooling, not one of loss of voltage, and the cooling system was powered by the UPS system – a rare solution only reserved for high-density applications. Unlike a server, the cooling system does not need a UPS system for 'continuity' of voltage (10ms break and it is 'goodnight Vienna') but is only ever needed to avoid rapid increases in server inlet temperature in high-density applications (>10kW/cabinet) while the cooling stops on utility failure and before the generator jumps in (10-15s) and then the cooling system regains full capacity (5-10 minutes even in an old-technology chiller). In this case, where the cooling zone was off-load for hours, clearly UPS was not actually needed for the cooling system, so switching it onto a utility feed might have taken 20 minutes once the problem was noticed.

It appears that MS Azure actually spotted the loss of cooling capacity (only a part of the data centre) from a remote location that was a couple of hours' drive away. Then, for reasons that are not clear to me, it points out that the UPS that 'failed' was 'rotary' and specifically 'RUPS', which isn't a recognised term (it is either hybrid rotary or DRUPS, diesel rotary UPS), but all types of UPS 'fail' by transferring the critical load to its automatic bypass. This slight mystery is compounded by the statement that the UPS was 'designed for N+1 but running at N+2'. This would infer partial load in the facility and a slight disregard for UPS energy efficiency, as turning an unneeded UPS module 'off' would raise the load on the remaining system and save power – something particularly useful with rotary UPS as partial load efficiency is not a strong point.

However, I don't know of any UPS (type, topology or manufacturer) where one module in an N+1 redundant group trips off-line and doesn't leave the rest of the load happily running at N – or in this case dropping from N+2 to N+1. Add to that the statement that only a part/zone of cooling capacity dropped off-line. In fact, there is one 'rotary' solution that fits this scenario and that is DRUPS with a dual-bus output, one 'no-break' feeding the critical load and one 'short-break' that supports the cooling load after a utility failure has occurred. While the 'short-break' output is a single feed, the section of the cooling load is, assuming the system was designed properly, always dual-fed across two DRUPS machines and so should have simply transferred automatically to a healthy DRUPS machine in the remaining N+1 group.

But, so what? The press release clearly states that the site personnel (not MS Azure but a third-party facility management company) incorrectly followed an emergency procedure to regain cooling capacity and that 'the procedure was wrong'. Then they had to wait for MS staff to arrive and fix the problem – something which, no doubt, involved switching circuits that had failed to switch automatically. Could the local staff be described as 'undertrained, unfamiliar and underexercised'? But, if so, whose fault is that? Certainly, the failure has little to do with a UPS. It may have set off a chain of events that turned what should have been a heart-racing 15-20 minutes' recovery procedure into a seven-plus-hour mini-disaster.

Mentioning the UPS in the press release takes the eye off the underlying problem and my view is that it would appear to be, as usual, 100% human error, and several of them. The designer made it too complicated by having UPS-fed cooling that did not respond well to a UPS 'going to bypass' event. Someone wrote an emergency recovery procedure that had a mistake in it. Someone made the decision not to test the procedure(s) in anger, either at the commissioning stage or later. The local technicians were not allowed to simulate failures in a live production scenario and train in the process, so that when the procedure failed, they didn't have the experience of the system to get around the problem. Human error. Latent failures, just like this example, are exacerbated by not testing the system in anger on a regular basis, thus keeping your technicians aware, agile and informed.

So, what about the question posed in the title? You have no way of telling, but as services are increasingly commoditised I would suggest that the answer will increasingly become 'less'. Don't forget what John Ruskin said: "There is nothing in the world that some man cannot make a little worse and sell a little cheaper, and he who considers price only is that man's lawful prey"; or 'you get what you pay for', but my favourite Ruskin quote is: "Quality is never an accident; it is always the result of intelligent effort."
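For readers who want to check the availability arithmetic Bitterlin quotes, a minimal sketch follows. It assumes a single seven-hour outage in a calendar year, which is the only figure taken from the article; everything else is illustrative.

```python
# Rough availability arithmetic for the outage discussed above.
# Assumption for illustration: one 7-hour outage in a 365-day year.
HOURS_PER_YEAR = 365 * 24

def availability(downtime_hours: float, period_hours: float = HOURS_PER_YEAR) -> float:
    """Fraction of the period during which the service was available."""
    return 1.0 - downtime_hours / period_hours

print(f"{availability(7):.4%}")  # ~99.92%, i.e. roughly the 'three nines' Bitterlin refers to
```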


DATA CENTRE RACK SECURITY ACCESS CONTROL E-LINE by DIRAK mechatronic locking solution offers controlled access to enclosures anytime, anywhere. Available in modular designs for all data and server racks, providing 24/7 controlled security access of all your enclosures – whether as a standalone solution or integrated into a management system they offer a seamless audit trail of all activity. E-LINE locking systems can be easily retro fitted to any existing or new make of cabinets. For more information about complete control and monitoring with E-LINE by DIRAK locking solutions, contact 2bm today.

enabled, efficient, ready.

architects of data centre change


2bm Limited t: 0115 925 6000 e: info@2bm.co.uk Eldon Business Park, Eldon Road, Chilwell, Nottingham NG9 6DZ

2bm.co.uk




18 DATA CENTRE INFRASTRUCTURE

Is a skills gap putting critical infrastructure at risk?
Startling findings from a major survey of data centre professionals show there is an urgent need to improve the industry's skills and knowledge of power management in order to minimise the threat of outages and improve efficiency. Louise Frampton reports

Research conducted by independent industry analyst firm Freeform Dynamics reveals that a chronic skills gap risks undermining attempts to future-proof infrastructure to meet the demands of increasing digitisation. With more than a third of respondents (36%) having suffered a prolonged and disruptive outage within the preceding three months (and human error indicated as a major cause), the findings highlight the importance of tackling this weak link.


The survey of more than 300 European data centre professionals showed that only one in three (36%) were fully confident in their knowledge of power management – this skills gap is causing a lack of confidence in data centre resilience, as well as the ability to respond effectively to power-related incidents. The findings also cast doubt on the ability of data centre managers to handle the growing demands of the digital transformation of data centre infrastructure, and their ability to deal with the

increasing complexity of power management. The survey, commissioned by power management company Eaton, also showed that more than half of respondents believed their facilities infrastructure needed strengthening in terms of power and cooling (53%) and resiliency and disaster recovery (55%). Furthermore, 35% said managing power distribution within the data centre was a significant challenge, while 42% said it was becoming more of a challenge.

Some of the feedback given by respondents also suggests that too many data centre managers are using outdated power management techniques, leading to both energy inefficiency and power-related outages. Technology was highlighted as part of a solution to mitigating human error and interviewees acknowledged the growing importance of modern software power management tools, which can help to prevent mistakes being made – eg through policy-driven automation.



One respondent commented: "The power side of things has historically focused on the physical infrastructure but it's now moving towards building more intelligence into the system through software." Another respondent pointed out that this can create yet another skills gap that needs to be plugged: "The increasing role of software has opened up a skills gap. The shift is taking power engineers out of their comfort zone. It's difficult to hire people with relevant skill sets, so we have to train people. They then need to gain experience on the job."

Energy efficiency
The research further highlighted a number of key challenges relating to energy efficiency: for example, improving PUE was considered a 'significant challenge' by almost a third of the respondents (32%) for today's data centre, while

almost half (48%) said it was becoming more of a challenge. Surprisingly, only 26% said that managing power-related charges and costs was currently a significant challenge, although nearly half (48%) agreed that it was becoming more of a challenge. Many commented that they were seeing a renewed focus on energy efficiency. As one respondent stated: "The emphasis on green among our customers waned a while back, but this is now kicking up again, and that puts the focus back on energy efficiency as a competitive differentiator." Other respondents agreed that sustainability is a significant consideration, adding that "most of the pressure for energy efficiency and transparency comes from investors rather than regulators". It is clear from the report that data centres are increasingly being questioned on their social responsibility credentials, as large consumers of energy. However, when it comes to driving improvement, it is often hard to secure cooperation and funding. As one respondent commented, while there is pressure to reduce power consumption, "the application and IT folks do not regard this as their concern. They think it is someone else's problem." Another respondent added that although there is a will to improve energy efficiency, they found it "difficult to get investment even with an ROI case." The researchers point out that, if efforts in relation to the core data centre infrastructure are not going to be undermined, discipline is required elsewhere – not least within IT operations teams and among those involved in software development. Data centre professionals commented that poorly thought through installation of IT equipment can disrupt cooling efficiency, which consumes unnecessary power. "Many of our applications were built with no thought for

efficiency, e.g. compute power consumed,” commented one of the respondents. “This has undoubtedly led to us wasting a lot of energy over the years.” Ongoing monitoring and management are also key to energy efficiency, so having the right tools is important, as the feedback from another respondent illustrated: “Power monitoring and management tools are critical. Yes, they are an expense, but without them you are running with a lot more waste and risk than you need to.” Commenting on the findings, Michael Byrnes, Eaton’s director of sales, data centre business, EMEA, said: “Data centre workloads are intensifying as the business places more demands on it. Those pressures are compounded by a lack of confidence in the skills, tools and expertise to manage the data centre environments effectively, particularly in power. IT managers and data centre professionals need a simple, holistic view and

integrated control of the infrastructure so that they can be confident they are managing the data centre effectively.” Dale Vile, CEO of Freeform Dynamics, added: “Data centres are under a lot of pressure as they deal with continued growth and additional pressures, such as the growing use of virtualisation or new initiatives. “There seems to be a widespread lack of knowledge concerning the availability of new tools that would help data centre managers.” To address some of the issues raised by the survey, Eaton is publishing a series of papers – the latest provides data centre managers with essential advice on how to ensure power supply reliability is taken into account when commissioning a new data centre. The paper, Fast Track to Improved Power Supply Reliability, offers practical advice on optimising a data centre’s power chain and explains how, by considering the individual requirements of »





all components, a data centre's power infrastructure can be designed to meet both current and future requirements to guarantee business continuity. Data centre power distribution systems in their entirety extend from the available power sources – typically incoming transformer, generator and UPS – out through the switchgear and circuit breakers to the supported ICT, cooling and associated loads. The paper points out that it is essential to appreciate issues not only related to each item of power equipment, but also how these items interact with one another. This needs special attention as the nature of these interactions can vary as the data centre load changes. The paper examines what needs to be considered in terms of power distribution and UPS components to achieve a system that is reliable and protected against unscheduled power events, while other topics include: three- and four-pole switching, the impact of a UPS on a power system, possible fault conditions including arc flashes, operation and maintenance issues, minimising exposure to human error, as well as the latest industry standards and possible

consequences of failing to follow good design principles. The implications of future growth in the power infrastructure are also considered – how are fault scenarios and selectivity addressed as modules are added to scalable systems, for example? These and other questions are discussed, with the aim of tackling some of the gaps in current knowledge. While this paper looks at

power distribution, other papers will also cover topics such as UPS and Power Distribution, Feeders and Optimisation. The full series of papers will provide knowledge to improve reliability and safety, while preventing unnecessary outages. “Mission-critical applications rely on having a continuous supply of clean power under all conditions, making the design of the supporting

power infrastructure crucial," Byrnes concludes. "An early consultation with an experienced supplier is essential for identifying and overcoming possible challenges, some of which the installers may not even be aware of, in order to ensure the system's safety, reliability and availability." The papers can be downloaded at eaton.eu/Infrastructure_with_Intelligence

The skills gap: the Uptime Institute’s view Phil Collerton, managing director of the Uptime Institute in Europe, Middle East and Africa, says that there is an ageing engineering workforce and universities are struggling to attract people into mechanical and electrical engineering disciplines. “There is a shortage of well-trained engineers and there is a big shortage of women in the data centre industry. We have been quite vocal in the last 12 months about the need to address this,” he comments. Collerton points out that, in the past, the focus has been on academic qualifications. Now the data centre sector is looking at how it can use new government levy schemes to fund training of the workforce to meet their requirements. One scheme in the US is focused on retraining army veterans and there are efforts to bring this model to the UK. The Forces are a great source of well-trained people able to work in a high pressure, critical environment. Some data centres set up ‘boot camps’ that were quite successful but there is a need to replicate this and to market it further, says Collerton. The difficulty, he explains, is recruiting at entry level. Anglia Ruskin University offers a Masters course in data centre management and there are some other courses now available, but Collerton believes that there needs to be a greater focus from the government. Currently, there is a lack of understanding of the internet at government level and the importance of these issues. More people need to be made aware of the possibility of working in a data centre and be encouraged to consider this as a career path.




22 COOLING & AIR MOVEMENT

Optimising cooling: keeping in control of costs
Specifying cooling systems without considering their control methods can lead to issues such as demand fighting, human error, shutdown, high operation cost and other costly outcomes. So how can data centres effectively optimise cooling efficiency?

The choice of cooling architecture, including hot and cold air containment, is of paramount importance for minimising the operating expense of a data centre. However, an effective control system is also essential when hoping to achieve the maximum energy efficiency and PUE. Achieving efficient use of electrical power is a major concern for data centre operators for both cost and environmental reasons. Next to the IT itself, the largest consumer of power in a typical data centre is the cooling system. Assuming a 1MW data centre with a PUE of 1.91 at

50% IT load, for example, the cooling system will consume approximately 36% of the energy used by the entire data centre (including IT equipment) and about 75% of the energy used by the physical infrastructure (without IT equipment) to support the IT applications. Given its large energy footprint, optimising the cooling system provides a significant opportunity to reduce energy costs. Three steps to achieving this goal are: selecting an appropriate cooling architecture; adopting an effective cooling control system and managing airflow in the IT space. One key approach to

reducing cooling plant energy is to operate in economiser mode whenever possible. When the system is in economiser mode, high-energy-consuming mechanical cooling systems such as compressors and chillers can be turned off, and the outdoor air is used to cool the data centre. There are two ways to use the outdoor air to cool the data centre:
• Take outdoor air directly into the IT space, often referred to as 'fresh air' economisation
• Use the outdoor air to indirectly cool the IT space
In certain climates, some cooling systems can save

in excess of 70% in annual cooling energy costs by operating in economiser mode, corresponding to more than 15% reduction in annualised PUE. The latest white paper from Schneider Electric highlights some of the critical issues affecting the cooling process, which include the fact that cooling system capacity is always oversized due to availability requirements and data centres operating under less than total IT capacity; the IT load in terms of equipment population and layout frequently changes over time; cooling system efficiency varies with factors other than missioncriticalpower.uk



IT load, such as outdoor air temperature, cooling settings and control approaches; and compatibility issues arise due to the installation of cooling equipment from different vendors. The paper warns that traditional control approaches involving manual adjustments to individual pieces of equipment such as chillers and air conditioners lead to uneven cooling performance, as the effect of adjustments made to one unit can lead to hot spots elsewhere in the data centre. Frequently, too, there is no visibility of the performance of the entire cooling system, a flaw that is often compounded by the presence of poor-quality or badly calibrated sensors and meters. The paper also recommends approaches for effective control systems entailing the use of automatic controls for shifting between different operation modes such as mechanical mode, partial economiser mode and full economiser mode. Indoor cooling devices should be coordinated to work together under a centralised control system with the

flexibility to change certain settings based on immediate requirements. The white paper proposes that control systems be categorised into a hierarchy of four levels, namely device-level control, group-level control, system-level control and facility-level control, for maximum efficiency. Device-level control involves the control of individual units such as chillers. Group-level control refers to the coordination of several units of the same type of device, typically from the same equipment vendor and controlled by the same control algorithm. System-level control coordinates the operation of different cooling subsystems within a data centre, for example a pump and a CRAH (computer room air handler). Finally, facility-level control integrates all functions of a building into a common network that controls everything in the building, from heating, ventilation, air conditioners and lighting systems to the security, emergency power and fire-protection systems.

Characteristics of effective control systems
According to the white paper, an effective control system should look at the cooling system holistically and comprehend the dynamics of the system to achieve the lowest possible energy consumption. The following lists the main characteristics of effective control systems:
• Automatic control: The cooling system should shift between different operation modes like mechanical mode, partial economiser mode, and full economiser mode automatically based on outdoor air temperatures and IT load to optimise energy savings. It should do this without leading to issues like variations in IT supply air temperatures, component stress, and

downtime between these modes. Another example of automatic control is when the cooling output matches the cooling requirement dynamically, by balancing the airflow between the server fan demands and the cooling devices (ie CRAHs or CRACs) to save fan energy under light IT load without human intervention.
• Centralised control based on IT inlet: Indoor cooling devices (ie CRAHs or CRACs) should work in coordination with each other to prevent demand fighting. All indoor cooling devices should be controlled based on IT inlet air temperature and humidity to ensure the IT inlet parameters are maintained within targets according to the latest ASHRAE thermal guideline.
• Centralised humidity control with dew point temperature: IT space

humidity should be centrally controlled by maintaining dew point temperature at the IT intakes, which is more cost effective than maintaining relative humidity at the return of cooling units.
• Flexible controls: A good control system allows flexibility to change certain settings based on customer requirements. For example, a configurable control system allows changes to the number of cooling units in a group, or turning off evaporative cooling at a certain outdoor temperature.
• Simplified maintenance: A cooling control system makes it easy to enter into maintenance mode during maintenance intervals. The control system may even alert maintenance personnel during abnormal operation, and indicate where the issue exists. »
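To make the 'automatic control' characteristic above concrete, here is a minimal, hypothetical sketch of a supervisory loop that picks an operating mode from outdoor air temperature. The temperature thresholds, setpoint and function names are assumptions made for illustration; they are not figures from the Schneider Electric white paper, and a real controller would also apply hysteresis, humidity limits and IT load feedback.

```python
# Illustrative only: a simplified supervisory loop that selects a cooling
# operating mode from outdoor air temperature, in the spirit of the
# 'automatic control' characteristic described above.
from enum import Enum

class Mode(Enum):
    FULL_ECONOMISER = "full economiser"        # compressors/chillers off, outdoor air does the work
    PARTIAL_ECONOMISER = "partial economiser"  # free cooling assisted by mechanical cooling
    MECHANICAL = "mechanical"                  # chillers/compressors carry the load

def select_mode(outdoor_temp_c: float, it_supply_setpoint_c: float = 24.0) -> Mode:
    """Pick an operating mode; thresholds are assumed values for the sketch."""
    if outdoor_temp_c <= it_supply_setpoint_c - 8:   # cold enough for free cooling alone
        return Mode.FULL_ECONOMISER
    if outdoor_temp_c <= it_supply_setpoint_c - 2:   # free cooling helps, mechanical tops up
        return Mode.PARTIAL_ECONOMISER
    return Mode.MECHANICAL

for t in (5.0, 18.0, 30.0):
    print(t, select_mode(t).value)
```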

Ecoflair is reported to reduce cooling operating costs by 60% compared with legacy systems based on chilled water or refrigerant technologies



COOLING & AIR MOVEMENT

Next generation cooling technology
Next generation economiser technology has now been developed by Schneider Electric with the aim of addressing some of the aforementioned issues. Launched at Data Centre World (London ExCeL), the Ecoflair indirect air economiser cooling solution uses a proprietary polymer heat exchanger technology to help optimise operating temperatures while keeping energy consumption to a minimum. According to John Niemann, Schneider Electric's director of cooling product management, the technology is capable of reducing cooling operating costs by 60% compared with legacy systems based on chilled water or refrigerant technologies.

Increased efficiency
The company claims that, even when compared with other indirect air economiser systems, the overall efficiency of Ecoflair is between 15% and 20% better. This increased efficiency allows data centre operators to increase the IT load with the same electrical infrastructure. Studies undertaken by Schneider Electric suggest this could mean 30% more IT with the same electrical infrastructure when compared with typical cooling topologies such as chilled water or DX-based technology.

By way of comparison, a 1MW data centre based in London using a traditional efficient chilled-water cooling system would operate at a PUE of 1.14, whereas the same facility using an Ecoflair system would reduce PUE to 1.039. This not only results in annual financial savings of about £75,000 but also greater efficiency and reduced carbon emissions, which are increasingly important in today's environmentally conscious world.

£75K: projected annual savings claimed to be possible using Ecoflair

Effective control for cooling systems will maximise energy efficiency and reduce risk of hotspots

"We have been targeting the Cloud and colocation customers with this product," says Niemann. "Following feedback from data centres, Ecoflair was developed to address four key challenges: to improve Capex, free available power for IT, improve availability and increase flexibility."

Reducing capex
He explains that the reduction in overall Capex is due to the smaller electrical distribution and backup power infrastructure required. Niemann suggests that the Capex savings could be as much as 6% when using the Ecoflair product.

"In terms of helping to ensure availability, it is important that the equipment can be easily maintained while keeping systems up and running. We have introduced features to Ecoflair to facilitate this," Niemann continues. Instead of a large, traditional heat exchanger, the design features small, modular segments which can be easily removed, maintained or replaced, thereby minimising downtime and inconvenience. Niemann explains that the tubular design prevents the fouling that commonly occurs with plate-style heat exchangers, minimising maintenance and the impact on performance over the life of the heat exchanger. In addition, the polymer is corrosion-proof, unlike other designs that use coated aluminium, which corrodes when wet or exposed to the outdoor elements.

"Many of the larger Cloud and colocation sites are worried about the life of the data centre, operational simplicity and how to maintain systems – having modular components, such as the heat exchanger, as part of the design helps offer them peace of mind that they can keep the data centre operational… The modular design of the Ecoflair also means you can allow a smaller space for service clearance, which is another benefit."

Niemann points out that flexibility is also important: "We are seeing centralised Cloud data centres, with Cloud capability being duplicated at The Edge in colocation environments. Because of the variability of different types of data centre and infrastructure projects, flexibility is required to adapt to these different site locations."

Available in 250kW and 500kW modules, Ecoflair is designed to offer this flexibility and enables customisation according to the cooling requirement and local conditions. The scalable approach makes Ecoflair particularly suited to colocation facilities rated between 1 and 5MW (250kW modules) and large hyperscale or cloud data centres rated up to 40MW (500kW modules). The modularity also allows the cooling to grow at the same rate as power upgrades, according to the needs of the data centre as IT loads expand.

In certain climates, some cooling systems can save in excess of 70% in annual cooling energy costs by operating in economiser mode

Indirect air economisation can be deployed regardless of most environmental or climatic conditions pertaining to the data centre's location – the technology is typically suitable for at least 80% of all global locations. Schneider Electric points out that such adaptability helps data centre owners to standardise the cooling architecture of their facilities around the world, providing repeatable designs that speed deployment and reduce operational and maintenance costs.

To download the white paper, visit: https://tinyurl.com/hm9kpuk
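As a rough cross-check of the £75,000 figure quoted above, the saving can be estimated from the two PUE values. The 1MW IT load comes from the example itself; the electricity tariff used below is an assumption for illustration, not a Schneider Electric figure.

# Rough, illustrative estimate of the annual saving implied by the PUE figures
# quoted above. The tariff is an assumed value used only for the arithmetic.
it_load_kw = 1000          # 1MW IT load, as in the example above
pue_chilled_water = 1.14   # traditional efficient chilled-water system
pue_ecoflair = 1.039       # Ecoflair figure quoted above
price_per_kwh = 0.085      # assumed UK commercial tariff, GBP/kWh

annual_kwh_saved = it_load_kw * (pue_chilled_water - pue_ecoflair) * 8760
annual_saving_gbp = annual_kwh_saved * price_per_kwh
print(f"{annual_kwh_saved:,.0f} kWh/yr, about £{annual_saving_gbp:,.0f}/yr")
# Roughly 885,000 kWh and £75,000 a year, consistent with the figure above.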




MAINTENANCE


When the power doesn’t come on: are you covered? When a UPS fails, a quick response is required but many enterprises find that, when it comes to their maintenance contract, what constitutes a ‘response’ is unclear. Riello UPS general manager Leo Craig warns companies to read the small print

Leo Craig wants to shake up the mission critical power industry and is warning organisations to scrutinise suppliers on their response times to failures of UPS systems. "A lot of 'weasel words' are used in maintenance contracts like 'four-hour response', but often it is not defined what this response will actually be," he comments. "Is this simply a phone call or is it a visit? If it is the latter, is it by a qualified person who is able to fix the equipment? We are aware of one instance in which a plumber was sent by the supplier!" He adds that organisations need to ask what happens once their supplier has 'responded'. Will there be a guaranteed 'fix' and what is the timeframe? If they don't fix the UPS, what is the penalty? "Often these maintenance contracts benefit the supplier and not the customer," says Craig. "This is unacceptable. There are many contracts in

the sector that tie in clients and cost them dearly. I believe this is unethical practice.” To address the issue, Riello’s premium Platinum Elite contracts will offer a four-hour response – 24 hours a day, seven days per week – with an engineer on site, and a guaranteed fix within eight hours. Craig explains that there will be financial penalties that benefit the client if these targets are not met. In addition, the client can then decide if they wish Riello to continue to attempt a repair or to replace the equipment. “It is about putting the control back in the hands of the client and not the supplier. The contracts will be fair, transparent and crystal clear,” he says.

Also, there will be no auto-renewal – clients will be asked whether they wish to continue, instead of finding themselves locked into a contract for another year because they simply omitted to cancel it. "We are looking to give the ultimate customer experience with maintenance contracts… There are too many get-out clauses and 'ifs and buts' in contracts today. Often they have 90-day notice periods and are very restrictive; we hear of some horrendous horror stories, and we want to expose this in the industry and take a different approach," comments Craig. He points out that some businesses believe they are getting a good deal from their supplier as they claim to offer

A lot of ‘weasel words’ are used in maintenance contracts like ‘four-hour response’, but often it is not defined what this response will actually be

a two-hour response rate, but on closer inspection of the contract, the 'response' may be just a phone call. "We believe that response times need to be realistic, but we want to set the gold standard for the industry and encourage enterprises to look at the small print," Craig continues. In addition to a choice of packages, with various response times to suit the needs of the client, Riello can offer bespoke maintenance contracts. "A UPS is an electronic device and it will go wrong at some point in its life. Businesses need to decide what their expectations are and what they want to happen. Predictive maintenance is limited in what it can anticipate and the best way to avert disaster is to ensure that the system is designed to be resilient in the first place. If things go wrong, you need to have the right maintenance contract and right expectations to ensure that the equipment is back up and running quickly, so that if the other system fails, you are still covered." Ultimately, maintenance is just one piece of the puzzle. Craig says that he has found some businesses that have had one UPS supporting their entire global infrastructure, putting their operations at serious risk. There needs to be due diligence, a risk assessment and a business continuity plan. "Businesses need to look at the 'what ifs'. For critical environments and where the financial losses can be high, it is not enough to install a single UPS – some organisations make this first step and don't get any further; what happens if the UPS fails? For some businesses an outage may cost millions."



POWER GENERATION

Data centres: where’s the next wave of innovation? In the future, data centres may be powered by nuclear energy or even human waste. Sound too far-fetched? Think again… The sector is exploring innovative approaches to onsite generation and anything is possible. Louise Frampton reports

PUE has gone as far as it can, but energy usage and power generation will become the focus for innovation in data centres, says Phil Collerton, managing director of the Uptime Institute in Europe, Middle East and Africa. Highlighting some of the key trends in the sector, Collerton reveals that energy efficiency and downtime remain high on the agenda, but data centres are also starting to 'think outside the box' in terms of onsite power generation. Could we see nuclear-powered data centres in the future? Collerton thinks it is a possibility. There has been a rise in onsite power generation, which

has included gas-burning generators, and in some markets, such as the Nordics, data centres are now being built in close proximity to hydro-power sources. One example, in the US, is Google's data centre in Oregon, which uses hydro power from a local dam; Thor in Iceland is using geothermal energy to power its servers and other equipment, while a variety of pioneering approaches are now being explored – including nuclear. In 2015, plans were announced to build what could be Russia's largest data centre right on top of the Kalinin nuclear power plant.

Power innovation
"I recently visited a site next to a sewage waste disposal facility where they are looking at using this as a source to generate power… Power is where the next wave of innovation is going to come from," says Collerton. The Uptime Institute's recent survey of over 1,000 data centre professionals found that 21% have installed onsite primary power generation (renewable or natural gas), while a further 22% are considering it. However, interest in onsite power generation varies according to region: installations in Europe account for 13%, compared to 21% in the US and Canada, 24%

in Africa and the Middle East, and 33% in Russia and CIS. Collerton explains that sustainability and cost savings are some of the key drivers behind increasing interest in onsite power innovation, but there is also a desire to be perceived as "leading edge". In addition, we are going to see a lot more power going back into the grid. He points out that, in Switzerland, data centres are obliged to connect their backup generators to the grid and there are benefits in terms of energy taxation. However, there is still reticence to participate in demand-side response schemes in other locations, such as the


UK, which needs to be overcome. "The main reason for this reticence is possibly control; data centres worry what will happen if they suddenly need the spare capacity… there are some considerations but I think it will happen," says Collerton. Pressure from peers and customers will ultimately drive this forward, he believes: "Once one or two of the big colocation data centres start to get on board, others will follow," he predicts.

Data centres are starting to 'think outside the box' in terms of onsite power generation

There is a need to educate and encourage end users to ask searching questions, according to Collerton. Customers are increasingly asking data centres about their green credentials and energy policy, and he believes that having a role in supporting the grid will become a part of this agenda in the future.

Five years ago, people were talking about high density and super high density racks. According to the Uptime Institute, this trend has not materialised – because of virtualisation, there hasn't been a need to build these 'super high power racks' to meet demand. This has had an onward impact on the power requirements of data centre infrastructure. Nevertheless, there is a growing imperative to provide good stewardship of corporate and environmental resources. Uptime Institute recently launched an 'efficient IT programme' to provide a holistic approach to eliminating waste and reducing resource consumption. As part of this, it recently visited LinkedIn, in the US, to look at how they are using energy, how many redundant servers they are operating, whether they have an accurate picture of all their assets within their facility, how much they can save by implementing efficiency measures, as well as how they can reduce or use heat. For adopters of efficient IT, Uptime Institute's experts have identified cost savings of between 3x and 200x the cost of the assessment.

Collerton points out that IT teams are "rarely on the hook" for the actual energy costs within an organisation. PUE has been an important initiative, driven by the Green Grid, which has changed the way people upgrade data centres. People are much more aware of how they run their units to save energy, but it will be difficult for data centres to lower their PUE any further. "We need to look at how to reduce the IT load. It is time to go back to the IT companies and say: 'You need to make your servers more efficient and your applications run more efficiently.' It won't change the PUE number but it will get the overall usage down. We may see some further efficiency in terms of power generation and use, on the UPS side, but the biggest opportunity is in the IT area and for organisations to look at whether there is a need to use all their servers," continues Collerton.

$10m: the amount saved by AOL by turning off 10,000 servers

He reveals that, following a review, AOL turned off 10,000 servers, saving around $10m. "Enterprises need to look at their assets – frequently, organisations do not have up-to-date information on what these are and what they do. There is a lot of waste," says Collerton. "In colocation data centres, people are billed for the energy they use – it is different for the enterprise, where the IT team doesn't necessarily see an energy bill." "With colocation, you pay for everything you use – by nature it is more efficient. But there is still an opportunity to go back in, once the installation has been running for a few years, and identify whether all the assets are being used," he adds.

In the future, Collerton predicts that there will be continued developments in onsite power generation and data centres will look to "do things cheaper and more innovatively". At the same time, there will be an increasing focus on energy efficiency.



DEMAND-SIDE RESPONSE

Is traditional DSR dead? There is a lack of understanding in the market around the value of different demand-side response (DSR) schemes, according to Endeco Technologies CEO and co-founder Michael Phelan. He warns that many businesses are becoming ‘locked into commodity schemes where there is no money’. Louise Frampton reports

As capacity market revenues have fallen, the question has been raised: is traditional DSR dead? According to Michael Phelan, the issue is that the DSR market is changing and businesses now need to be looking to the most profitable schemes. He believes that energy balancing schemes can still offer attractive revenue opportunities, but the real potential lies with fast response schemes. "As a DSR business, you could not rely on the traditional DSR, STOR and capacity

markets, and have a business – or certainly grow a business. There is more of a balancing problem than a capacity problem, at present… This is why we are seeing low prices on the capacity and short-term markets. The main revenue streams are around frequency response – ie the ‘faster’ services. There are other revenue opportunities around smart tariffs, peak avoidance and optimisation, but these are smaller.” Businesses are being asked to play a much more active role, using intelligent energy

platform technologies, to manage how much power they use and when they use it. DSR allows load and/or generation from large energy users to be added or removed from the network, in order to stabilise grid frequency. This adds stability to the system, particularly as increasing amounts of distributed generation, such as wind and solar, are added to the grid, and coal and gas-fired reserves decline. The goal is to give the UK’s power infrastructure more ‘flexibility’. Dynamic firm frequency

response (DFFR) is the latest revenue opportunity for large energy users and is a continuously provided service used to manage the normal second-by-second changes on the system. It relies on complex control and fast response, and offers much higher revenues than traditional DSR. According to Endeco's figures, an organisation can benefit from an income of about £70,000 per MW, in return for its availability to meet unscheduled energy peaks on the grid, by participating in DFFR. This is much higher than the £35,000 for non-dynamic frequency response and £20,000 for STOR.

Businesses must look to the most profitable schemes in light of falling capacity market and balancing revenues

£70K/MW: the revenue achieved by participating in DFFR

"A year ago, we were all getting £27,000 per MW capacity; this year it is around £6,000, so it has fallen quite a lot, which affects the business case. £6,000 per MW is not a lot of money for potentially having to do some work, putting in some equipment and testing… This has caused more than its fair share of problems for aggregators… This is nearly a 70% fall. The value, instead, has gone into fast frequency response (FFR) or DFFR," comments Phelan.

It has been suggested that the DSR market may have become 'too commoditised' or 'over complicated'. Phelan points out that "there are a lot of schemes and a lot of prices". The problem, in his view, is that no one is explaining where the real value is, so

people are "getting locked into 'commodity schemes' where there is no money". If you do not have the capabilities to deliver fast response schemes, you are not going to "shout about" these higher value markets, Phelan comments. While there is greater clarity in Ireland on "where the value is", the UK market needs help in understanding that the faster schemes offer the biggest returns. Phelan suggests that National Grid could have a role to play in offering some clarification to the market.

Fast and slow adopters
As a large aggregator with over 200 sites, operating in the UK and Ireland, Endeco currently enables energy users from a wide range of sectors (including food and drink, metals/glass, plastics, aggregates, chemicals, airports, water, data centres and

utilities) to participate in grid balancing schemes. However, some business sectors require more convincing than others and many organisations still struggle to understand what is required at a technical level – presenting a barrier to uptake. Phelan says there is a need to raise awareness of the potential of DSR, reduce confusion in the market and encourage increased participation. Industrial and commercial operators with refrigerator chiller load or induction heating load are proving to be the most receptive to participating, according to Phelan. The greatest uptake is among operators that are particularly profit and margin driven – such as in the food industry, for example. Mission critical applications, where the risks associated with an outage are very high, are currently more conservative and it will take some 'confidence education' to bring these sectors on board, he acknowledges. Aggregators will need to work with them more closely to demonstrate that participation can be achieved safely, while there needs to be greater understanding that assets need to be utilised and tested – so why not profit at the same time? "The analogy I use is that, if you bought a truck, you wouldn't expect to leave it

outside, and simply turn on the engine and hope that it will work," says Phelan. "You need to test it, to take it for a run sometimes and carry a heavy load. If you don't do this, you increase the risk of it failing." "Data centres have large rotary UPSs and it is a 'no-brainer' for them to participate in frequency response, but they are nervous about participating. Early adopters are confident in their assets but I think many are concerned about contractual barriers that may prevent them from participating in DSR," says Phelan. "Colocation facilities, in particular, are very risk averse – data centres are interested in having a conversation [about these schemes] and we are seeing one or two coming on board, but generally adoption is slow in this sector. We are confident that they will come on board in time, however." He adds that hospitals are starting to look more at DSR opportunities but are also slow to adopt and are 'less confident in their assets' than data centres. "We expect them to come very late to the party," says Phelan.

Investing in technology
The move towards fast response schemes has meant that the business has shifted its focus towards battery technology. Phelan explains that many data centres still

The main revenue streams are around frequency response – ie the 'faster' services. There are other revenue opportunities around smart tariffs, peak avoidance and optimisation, but these are smaller




use lead acid batteries, which are not capable of the amount of switching that the grid wants them to do at present. "We expect there to be a transition towards lithium-ion over the next 12 months. There is a lot of value in this area," he explains. Endeco has received significant inward investment from ESB (a leading Irish utility) and this will help support plans to supply batteries free of charge to deliver DFFR in the future. "With battery technology, you can receive better payments, as you can participate in dynamic or enhanced schemes. Without this, you may end up in schemes that offer £30,000 as opposed to schemes that offer more than £100,000. There is huge potential," Phelan comments. When a business decides to participate, the aggregator will establish the organisation's

parameters for response by defining operational constraints and priorities. Phelan points out that this approach to DSR ensures that businesses can respond when required, without risk or impact on their processes or productivity. "Instead of using random tripping, we use constrained management. This means we do not have any effect on the assets. If random tripping is used, the assets could be off at a time when they can't be off, or damage could be caused to the assets by tripping too much," he says. The company takes care of the necessary hardware and software installation, as well as the online monitoring and reaction systems, and the day-to-day running of the system. This is offered without any Capex requirement. The aggregator points out that DSR schemes will inevitably change over

time, so it is important that companies are technology-ready to access the more financially attractive tariffs as and when they come online. The more lucrative the scheme, the more complex it is to participate in, so participants need an aggregator that can help ensure they are future-proofed with a flexible platform.

The Brexit factor
Ultimately, traditional DSR isn't dead, according to Phelan, but it is lower value. In the wake of Brexit, he believes that fast response revenue schemes will become more important, as businesses will need to take every competitive advantage they can. "In the long term they will need this; they cannot afford for costs to go up at a time when tariffs are just about to kick in," concludes Phelan.

In the wake of Brexit, fast response revenue schemes will become more important as businesses will need to take every competitive advantage they can

Source: Endeco Technologies

Figure 1: Comparison of demand-side response schemes
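Figure 1 itself is not reproduced here. As a purely illustrative comparison using the per-MW figures quoted in this article, the sketch below scales them to a hypothetical site offering 2MW of flexibility; the 2MW size, and the treatment of each number as an annual per-MW value, are assumptions.

# Illustrative comparison of the per-MW revenues quoted in the article, scaled
# to a hypothetical 2MW of flexible load. Asset size and the annual treatment
# of each figure are assumptions, not Endeco data.
revenue_per_mw = {
    "Dynamic firm frequency response (DFFR)": 70_000,
    "Non-dynamic frequency response": 35_000,
    "STOR": 20_000,
    "Capacity market (2017 figure quoted)": 6_000,
}
site_mw = 2
for scheme, per_mw in revenue_per_mw.items():
    print(f"{scheme}: £{per_mw * site_mw:,} per year for {site_mw}MW")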



ENERGY STORAGE


Enhancing power security with lithium-ion technology There are opportunities for data centre operators to use lithium-ion technology to improve security and power supply integration. On the other hand, valve regulated lead acid (VRLA) batteries have been providing reliable, cost-effective service since day one of the data centre industry. Peter Stevenson, senior technical coordinator for GS Yuasa Battery Europe, weighs up the pros and cons of the technologies

Lithium power cabinet

The prime purpose of uninterruptible power supplies (UPS) is data processing security. So a new UPS electricity storage technology should be able to offer improved security opportunities. In the case of lithium-ion batteries, this is possible on several levels, compared with the incumbent lead acid (VRLA) battery technology. As well as power security improvement, the total cost of ownership (TCO) for any commercial data activity must


be an important consideration. Given the extended life, and the opportunities for significantly reduced operating costs, this is also an area where lithium-ion can provide benefits compared with lead acid. On the other hand, the valve regulated lead acid (VRLA) battery has been providing reliable, cost-effective service since day one of the data centre industry and will not be beaten on initial purchase price in the foreseeable future. Reliability in any system is facilitated by simplicity. At a

The longer, flatter life trajectory of lithium is the basis for total cost of ownership reduction as well as security enhancement

chemical level, the processes taking place within a lithium-ion cell are fewer and simpler than those involved in lead acid operation. While charging or discharging a lead acid cell, the internal structure of the active materials is first dissolved then rebuilt within the electrode structures. The sulphuric acid electrolyte also takes part in the chemical reaction, and water is decomposed to oxygen and hydrogen in energy-wasting side reactions. In comparison, the solid electrode structures in lithium-ion metal oxide cells provide fixed crystalline lattices, which act as hosts for single lithium ions that move freely between the positive and negative electrodes as the cell is charged and discharged. Detailed mathematical models are routinely used by lithium-ion design engineers to predict the performance of cells in complex operating patterns. This level of definition is not possible for lead acid cells even after 150 years of development. One particularly elusive property of lead acid cells is the voltage depression effect that occurs in the first few minutes of discharge. This effect is particularly pronounced at high currents and following long periods of float charge. Although the voltage recovers later, the trough voltage can cause unpredictable tripping of a UPS. This effect limits the minimum reliable autonomy that should be applied with VRLA to five minutes or more. By contrast, lithium-ion




does not exhibit this voltage depression, and systems can be sized even for a few seconds of autonomy if a very high power cell design is selected.

The greater predictability of lithium-ion also extends to ageing effects. In the case of lead acid batteries, the primary failure mode is corrosion of the positive electrode grids. This proceeds throughout life but the harmful effects are only manifested over weeks or months at end of life, accompanied by a rapid drop in performance. By contrast, the main ageing process in lithium-ion cells is a steady thickening of a solid electrolyte film at the surface of the active materials. This results in a steady, predictable loss of capacity and an associated rise in internal resistance. If a cell has, for example, taken 10 years to reach 80% of original performance, it will take another 10 years, under the same conditions, to reach 60% of performance. The longer, flatter life trajectory of lithium is the basis for TCO reduction as well as security enhancement. Rather than replacing all cells as they approach end of life, perhaps defined as 80% as with VRLA, it is acceptable to add new parallel battery sets to existing units. This can also fall in line with expansion plans for a data centre that is not fully loaded initially, but requires additional power and incremental replacement flexibility in later life.

Another significant TCO opportunity is to save air conditioning costs with lithium-ion batteries. The design life for VRLA batteries is typically based on 20°C operation, with a halving of life expectancy for each 10°C increase in average operating temperature. Lithium-ion design life calculations are typically based on 25°C, and the cells can be operated continuously at up to 30°C without significant effect. For European operations this means passive or fresh air cooling is adequate rather than air conditioned systems.

VRLA batteries are particularly sensitive to thermal management because they generate heat internally while they are in standby operation. Lead acid batteries require continuous charging to replace self-discharge losses. At the voltages required for complete charging, the water in the cell electrolyte decomposes to form oxygen gas. This gas is recombined within the cell but releases heat during the process. More than 90% of the current used during float charging can be wasted as heat. With lithium-ion chemistry, self-discharge also occurs but there are no significant side reactions to generate heat, and the electricity consumed to maintain full charge is much lower.

A benefit of the oxygen recombination process in VRLA batteries is that it can act as a release valve for surplus energy when a series of cells is in an imbalanced condition. Simply float charging will allow cells with a low state of charge (SoC) to become fully charged while high SoC cells dissipate the surplus energy as heat. For lithium-ion cells this mechanism is not available, so electronic battery management systems (BMS) are used to apply resistor shunts across high SoC cells to dissipate excess energy. The integrated BMS adds significantly to the cost of lithium-ion modules, but in high security applications this is often applied for VRLA at extra cost. Yuasa offers fully integrated cabinet packages that can be connected directly to UPS, in place of VRLA, with minimal alteration.

On the basis that up to 5% of UK electricity passes through a UPS, which contains storage and power management capabilities, it is not too difficult to envisage a smarter use of these assets in future

While the benefits of lithium-ion can be applied to existing data centre operations, there are rapid changes occurring in the wider electricity supply industry. These changes, particularly with regard to renewable energy, could have far-reaching implications for the structure of the electricity supply network and the commercial interaction between users and suppliers of electricity. Storage and demand management are at the forefront of current developments in this field. On the basis that up to 5% of UK electricity passes through a UPS, which contains storage and power management capabilities, it is not too difficult to envisage a smarter use of these assets in future.

While there are clearly opportunities for data centre operators to use the new lithium-ion technology to improve security and power supply integration, the established VRLA technology can still support existing structures most effectively.
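The temperature rule of thumb quoted above, with VRLA design life halving for each 10°C above the 20°C reference, can be written as a simple expression. The sketch below is generic; the 10-year design life in the example is an assumed value, not a Yuasa figure.

# Rule-of-thumb VRLA service life versus average operating temperature, as
# described above: life halves for each 10C above the 20C reference point.
# The design life used in the example is an assumed illustrative value.
def vrla_expected_life_years(design_life_years: float, avg_temp_c: float,
                             reference_c: float = 20.0) -> float:
    return design_life_years * 2 ** ((reference_c - avg_temp_c) / 10.0)

print(vrla_expected_life_years(10, 20))  # 10.0 years at the reference temperature
print(vrla_expected_life_years(10, 30))  # 5.0 years: halved at +10C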

Lithium-ion battery technology from Yuasa


Preventing power failure

A single bad battery within a UPS battery string poses a risk of downtime for critical operations, which is why regular maintenance of batteries is critical to ensure backup power. Midtronics has developed the Celltron Advantage (CAD) battery tester, an easy-to-use, reliable tool for service providers and operations and maintenance teams. The solution includes hand-held battery testers, comprehensive battery monitoring systems and various software applications. Celltron Advantage is designed to test valve regulated lead acid (VRLA), vented lead acid (VLA) and Nickel-Cadmium (NiCd) batteries, and is capable of providing battery voltage readings down to one volt, as well as measuring inter-cell and terminal connection resistance.

"We are constantly looking at ways to improve our service to our customers," says Jim Laughlin, technical director at Linnet Technology, a UK-based UPS sales and service company. "As our service is concentrated on battery maintenance of UPS and emergency lighting systems, it is crucial we have the best diagnostic equipment at our disposal to highlight any problems at the earliest opportunity. This allows our customers to take the necessary steps to reduce costly power outages and ensure power continuity."

After reviewing a variety of solutions, Linnet purchased a number of Midtronics CAD-5000 units (base kit) and found the ease of use, time saved on site and detailed reports via the software were a significant benefit. Instead of following recommended battery replacement schedules, trending batteries via testing with Celltron Advantage allows replacement when it is most cost-effective. This prevents replacing batteries before their life cycle has ended, or after it is too late and they are no longer effective. "Having discussed our needs with BCL Power (another UPS service company in the UK), their advice and help ensured we had the necessary kit to meet all our needs," Laughlin reports.

Built to rugged standards, the battery testers are portable, weigh less than three pounds and are easily transportable. The system provides conductance-based diagnostics (proven effective for identifying and trending battery health), and offers a less invasive testing approach that reduces battery discharge and voltage measurement skew, and allows more tests on a single internal battery without a recharge. The user can also take the battery temperature with the same device, eliminating the need to carry multiple tools, while the system offers the capability to trend battery amp hours using conductance technology.
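Conductance trending of the kind described above generally means comparing each reading against a baseline for the battery block and flagging units that have drifted too far. The sketch below is a generic illustration of that idea, not Midtronics' algorithm; the baseline values, readings and the 20%/40% thresholds are all assumptions.

# Generic sketch of conductance trending for battery blocks: compare each
# reading with the unit's baseline and flag likely replacements. Thresholds
# and readings are assumed illustrative values, not Midtronics recommendations.
def assess_block(baseline_siemens: float, measured_siemens: float,
                 warn_drop: float = 0.20, fail_drop: float = 0.40) -> str:
    drop = 1.0 - (measured_siemens / baseline_siemens)
    if drop >= fail_drop:
        return "replace"
    if drop >= warn_drop:
        return "retest soon"
    return "ok"

readings = {"block-01": (1500.0, 1450.0), "block-02": (1500.0, 860.0)}
for name, (baseline, measured) in readings.items():
    print(name, assess_block(baseline, measured))
# block-01 is fine (about 3% drop); block-02 is flagged (about 43% drop).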



CONNECTIVITY & CABLING

Putting cables to the test The satisfactory performance of newly installed high voltage cables should not be taken for granted. To root out faults in new cables, as well as in those that have been in service for some time, testing is an absolute necessity, warns Hein Putter, product manager at Megger

Is there really a need to carry out anything other than cursory tests on newly installed high voltage cables? After all, installation techniques are well established and contractors are familiar with what's needed. Surely satisfactory performance of new cables can be almost taken for granted? The answer to this question is an emphatic no! Unfortunately, faults are routinely found on newly installed cables. An example, admittedly an extreme case, concerns an 11km cable installed to serve a wind farm. The installation work had been carried out by four different contractors, and all had simply laid the cable in the ground with no sand backfill. On testing, this cable was found to have between 10 and 20 sheath faults per kilometre. So let's be very clear from the outset that exhaustive testing of new cables isn't something that's optional – it's an absolute necessity! But what form of testing should be employed? In answering this question, we can start by saying that DC testing is generally acknowledged to be of limited value in detecting typical problems relating to poor workmanship. Experience shows that very low frequency (VLF) testing is a much better option. It is, however, important to be clear that VLF testing will undoubtedly uncover major workmanship problems but, depending on the voltage source used – VLF, resonant or 50/60Hz – undetected problems may sometimes remain. It is nevertheless reasonable to say that if a VLF test shows a cable to be fault-free, it is safe to energise, although there may still be undiscovered incipient

Cable test van with integrated PD coupler

problems that could shorten the cable's service life and lead to premature failure. The most reliable way to uncover these "hidden" problems is with partial discharge (PD) analysis. Once again, it is necessary to sound a note of caution. No test technique is guaranteed to find every conceivable cable problem. PD analysis is by far the most dependable technique and it will uncover almost all workmanship-related problems, but it will not, for example, find high contact resistance faults within

new joints that will lead to in-service thermal breakdown. What it will find is faults such as poor cable preparation, most types of poor jointing technique and exotic faults including insect damage. In summary, PD analysis is a reliable, easy-to-implement method of quality control for newly installed cables. It makes it possible to check on the performance of service contractors, to check on the quality of work produced by the cable owner's own staff and to significantly increase the

reliability and availability of the network. As well as quality control, PD analysis is also invaluable for post-repair testing to check that the repair has been carried out satisfactorily and that the cable has not suffered further damage as a result of, for example, high fault currents. Finally, PD analysis is an excellent way of monitoring the condition of cables that have been in service for some time, to detect possible deterioration of the insulation and cable accessories.


Of course, when considering PD analysis the next question is what voltage waveform should be used to energise the cable under test? Several internationally accepted and widely used options are available, and each has its own pros and cons. Let's take a look at the three principal options and examine their advantages and disadvantages.

VLF sinusoidal at 0.1Hz
This has the major advantage that it can also be used for Tan Delta measurements. Its biggest drawback, however, is that the PD characteristics obtained from testing with this waveform are not directly comparable with the behaviour of the cable at power frequency, which makes it difficult to relate the results to the future in-service performance and reliability of the cable. There is also a minor possibility of the cable being damaged by the test, though this is only a significant risk with old water-treed cables.

VLF cosine-rectangular (CR) at 0.1Hz
For a given physical size and weight, test sets using this waveform have a much higher testing capacity than their sinusoidal counterparts. This means that they can test longer cables and, in many cases, all three phases can be energised and tested in parallel. Another very significant advantage is that the PD results are directly comparable with those that would be obtained at power frequency, so the test gives a very good indication of the likely in-service behaviour of the cable. The biggest drawback of VLF CR testing is that this waveform cannot be used to perform Tan Delta testing. As with sinusoidal testing, there is a small risk of cable damage during testing but, once again, this risk is only significant with old water-treed cables.

Damped AC (DAC)
As with VLF CR testing, DAC test sets have a high test capacity for their size, allowing long cables to be tested, as well as three-phase cables with the phases connected in parallel. DAC testing also delivers PD results that are directly comparable with those that would be obtained at power frequency, and it has the added benefit that it is very unlikely to cause cable damage, as the test voltage is only applied to the cable for a short time. The drawback of using the DAC waveform is that, as is the case with the VLF CR waveform, no Tan Delta testing can be performed.

PD analysis is invaluable for post-repair testing to check that the repair has been carried out satisfactorily and that the cable has not suffered further damage as a result of high fault currents

Because each of these voltage waveforms has its own benefits and shortcomings, it is clear that it would be very useful to be able to select the best waveform to use on a case-by-case basis, according to the needs of the particular application. It's equally clear, however, that purchasing, maintaining and transporting separate test sets for each waveform is an option that's far from attractive. What engineers really need is a test set that offers a choice of test voltage waveforms in one device, that allows standards-compliant VLF testing at 0.1Hz, that automatically evaluates and interprets measurement data, and that incorporates time-saving voltage withstand testing with accompanying diagnostic measurements. And it is to meet these needs that Megger has developed its new TDM series of cable test sets.

These innovative test sets combine cable testing, cable diagnostics and sheath testing in a single device. They allow standards-compliant VLF testing at 0.1Hz of cables with capacities of up to 5.5µF at 36kVrms, and up to 10µF at 18kVrms. In addition, they have integrated Tan Delta measurement facilities with automatic interpretation of the test results in line with IEEE 400.2, and they support PD analysis with VLF sinewave, DAC and VLF CR (50Hz slope technology), with real-time data evaluation. Finally, they also offer monitored withstand testing.

By offering a choice of four test waveforms in one device, TDM series test sets allow engineers to choose the best option for each and every application. DC is available for sheath testing and sheath-fault pinpointing, VLF sinewave for standards-compliant testing of short cables and for Tan Delta or PD diagnostics, VLF CR for standards-compliant testing of longer cables and PD diagnostics with 50Hz slope technology, and DAC for guaranteed non-destructive PD diagnostics. These versatile cable test sets are available in vehicle-mounted or transportable versions, and feature modular construction that allows users to purchase only the components they initially need, with the option of easily adding extra functionality in the future.

A particularly interesting feature of the new test sets is their support for phase-resolved PD (PRPD) pattern analysis. It is generally accepted that PD diagnostics will find all but a very small minority of workmanship-related problems in cables. It is also widely known, however, that cables with some of these problems

The TDM series of cable test sets combine cable testing, cable diagnostics and sheath testing in a single device




can remain in service for 10 years or more without failing. The difficulty is deciding whether a defect detected by PD analysis is of this type, or whether it is of a type that will result in the cable failing much more quickly. PRPD can help to answer this question. With PRPD, PD intensity is essentially plotted against the phase angle of the applied test voltage. This produces patterns – the characteristics of which are different for different types of defect. This is not, in fact, a new idea. It has been used with motors and generators for many years, but its application to cable diagnostics is novel. By recognising and categorising the PRPD pattern, it is possible to assign it to

a specific discharge family and thereby distinguish, for example, between corona, surface/interfacial and cavity discharges. This provides valuable information about whether a fault detected by PD analysis is likely to cause a failure in the near future, or whether it is likely to remain dormant almost indefinitely. Like most test techniques, PRPD is not a panacea. In particular, it provides little or no useful information if there are two or more faults

on a cable. The main criteria for evaluating defects are still PD inception voltage, PD level (in combination with location data), PD repetition rate and PD concentration. Nevertheless, PRPD is another very useful tool in the cable engineer’s testing toolbox. We have seen that PD analysis is an inexpensive and relatively easily implemented method of checking the quality of workmanship on newly installed cables and that this analysis can be carried out

using a variety of different voltage waveforms, each with its own advantages and drawbacks. We have also seen that the new products in the Megger TDM range put testing with all of these waveforms at the fingertips of cable test engineers, allowing them to choose the most appropriate for every application. Finally, we have looked at a new cable analysis technique – PRPD – that helps to categorise faults detected by PD analysis. Cable fault detection will always be challenging, but these new developments and options are, without doubt, starting to make life much easier and more convenient for its practitioners.
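The phase-resolved plotting described above amounts to recording the phase angle of the test voltage at which each discharge occurs and counting the events into phase bins. The sketch below is a generic illustration of that binning step, not Megger's implementation; the event data and 10-degree bin width are assumptions.

# Generic sketch of building a phase-resolved PD (PRPD) histogram: each
# discharge event is recorded with the phase angle (0-360 degrees) of the test
# voltage at which it occurred, then counted into phase bins. Event data and
# the 10-degree bin width are illustrative assumptions.
from collections import Counter

events_deg = [12, 45, 47, 52, 228, 231, 233, 240, 301]  # assumed example events

def prpd_histogram(events, bin_width=10):
    counts = Counter((int(angle) % 360) // bin_width for angle in events)
    return {bin_idx * bin_width: n for bin_idx, n in sorted(counts.items())}

for start_deg, count in prpd_histogram(events_deg).items():
    print(f"{start_deg:3d}-{start_deg + 10:3d} deg: {'#' * count}")
# Events clustering in both half-cycles would point towards an internal
# (cavity) type pattern rather than corona, in line with the article's point
# that different defect families produce characteristically different patterns.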

Optimising data centre cabling systems Cindy Monstream, data communications specialist for Legrand's Ortronics product line, talks through key areas ripe for cost savings

Data centres are under increasing pressure to deliver new levels of efficiency, while reducing operating expenses. With the rapid movement toward virtualisation and Cloud infrastructure, downtime reduction is a necessity. In terms of performance, it is important to consider your options in relation to structured cabling. There are systems, for example, that can substantially cut insertion losses for fibre systems and offer copper cabling systems with the performance headroom to provide for future growth. Data centres should also improve cable management to support appropriate bend radius and cable connections to remove performance inhibitors. In addition, as data centre staff numbers shrink, delivering time efficiencies is key. It is important to consider:

• Flexibility and scalability: select systems that provide flexibility for multiple cabling types and scalability. Flexible cable infrastructure will support all cabling approaches without having to change cable management

• Speed of installation: device changes and the resultant cabling modifications for moves and additions happen constantly, so specify systems that are quick to install

• Cable infrastructure improvements: simplify how cables are routed and


managed to save installation time. Consider vertical managers to support more cable without adding new management.

Space is another key area that needs to be addressed: data centres should focus on increasing rack density and 'going vertical', which offers new ways to gain space. Potential options include taller racks – a 9ft rack will have up to 38% more RU space than a comparable 7ft one. Support for improved air flow and cooling also needs to be provided – rack-based cooling, passive

cooling and other methods can increase rack density with more active devices within the same space. Cabling solutions that ensure higher port density open up space for active devices – for example, Ortronics can support 144 LC connectors in one rack unit, reducing the number of RUs dedicated to connections. Combining fibre and copper in one rack unit should also be considered, as mixed-media offerings can save space thanks to a single rack unit. Smaller outside diameter cable can also increase the space available in cable runs or in a raised floor.

Improving the customer-vendor experience is often overlooked in terms of efficiencies, yet one of the biggest frustrations is when products do not work together as expected. Data centres should demand a single point of contact – helping coordinate solutions and delivery schedules, and work with contractors. Every project has unique demands, but experience pays. Tapping into your partner's knowledge and experience can shorten a project timeline and enable efficiencies.

Finally, energy use and sustainability are also focal points for reducing costs. Data centres should look for products with minimal hazardous materials; that feature increased recycled materials and waste reduction; as well as next-generation air flow and cooling improvements.


TESTING & INSPECTION


Investing in infrared? Resolution matters! Thermal imaging can provide a useful tool for identifying electrical faults, as part of a data centre’s inspection strategy. But for effective decision-making, capturing the finer details in high resolution is critical, warns Flir Systems’ Andy Baker

Data centre system failures are costly, not just in terms of revenue loss but also company reputation and shareholder value. So it is critical that any electrical fault is spotted in its infancy, before it can potentially compromise service. A popular method for detecting these faults is thermal imaging. This technology has become mainstream in the past decade and its cost has fallen substantially thanks to its scope of application across many industry sectors. It is also the subject of continuous development, presenting prospective purchasers with a lot of choice. The range now extends from pocket-sized models and infrared-enabled smart phones to low cost point-and-shoot troubleshooting cameras and high end models with every function necessary for the professional thermographer. So how do you assess the best model for your needs? Here are some important pointers to help ensure the scope of your thermal imaging camera matches the scope of your job…

The best you can afford
Most thermal imaging cameras have fewer pixels than visible light cameras, so pay close attention to detector resolution. Higher resolution infrared cameras can measure smaller targets from farther away and create sharper thermal images, both of which add up to more precise and reliable measurements. Also, be aware of the

Thermal imaging has become a popular method for detecting faults

difference between detector and display resolution. Some manufacturers will boast about a high-resolution LCD to mask their low-resolution detector when it is the detector

resolution that matters most. For instance, LCD resolution may spec at 640 x 480, capable of displaying 307,200 pixels of image content. But if the IR detector pixel resolution is

only 160 x 120, giving 19,200 measurement points, the greater display resolution accomplishes nothing, as the quality of the thermal image and its measurement data are always determined by detector resolution. Higher resolution thermal imaging not only provides more accurate quantitative results, it can also be very effective in showing findings in finer detail to others. This can help speed the decision-making process for improvements and repairs. As well as clarity of image for effective problem diagnosis, resolution is very




important from a safety perspective too. For electrical inspection, there is no point in buying a low-priced, low-resolution troubleshooting camera that can only give you a clear image when it's six inches away from the target!

Accurate results
Consistency of measurement accuracy is a very important factor when determining the value of a camera. For best results, look for a model that meets or exceeds +/- 2% accuracy and ask your supplier for details of how they assure the manufacturing quality of the detector to guarantee this. That is not the only criterion, however. In order to produce correct and repeatable results, your

camera should include in-built tools for entering values for both emissivity – the measure of how efficiently a surface emits thermal energy – and reflected temperature. A cabinet may be hot in the thermal image but its shiny surface could just be reflecting the heat from overhead lighting, or indeed the body heat generated by the camera operator. A model that gives you an easy way to input and adjust these parameters will produce the accurate temperature

measurements you need in the field. Other helpful diagnostics to consider are multiple moveable spots and area boxes for isolating and annotating temperature measurements that can be saved as radiometric data and incorporated into reports.

Standard file formats
Many thermal imaging cameras store images in a proprietary format that can only be read and analysed by specialised software. Others have an optional JPEG

It is vital not to underestimate the importance of training

storage capability that lacks temperature information. Clearly, the most useful is a format that offers standard JPEG with full temperature analysis embedded. This allows you to email IR images without losing vital information. Radiometric JPEGs can also be imported from wi-fi compatible cameras to select mobile devices using apps that allow further image editing, analysis and sharing. Also look out for models that allow you to stream MPEG 4 video via USB to computers and monitors. This is especially useful for capturing dynamic thermal activity, where heating and cooling occur rapidly, and for recording motorised equipment or processes in motion. Some cameras feature composite video output for cabling to digital recorders, while others include HDMI outputs. In addition, new mobile applications have also been developed that allow streaming video over wi-fi. All these capabilities help you share findings more effectively and enhance your infrared inspections and reports.

Study the options
Today, most thermal imaging cameras come with free software so you can perform basic image analysis and create simple reports. Advanced software for more in-depth and customisable reports is also available, allowing you to take full advantage of your camera's capability and features. Investigate these tailored software programmes thoroughly to see which makes the most sense for your needs. Finally, it is vital not to underestimate the importance of training. The best thermal imaging camera in the world is only valuable in the hands of a skilled operator.
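Returning to the resolution comparison made earlier, the arithmetic is straightforward: the number of genuine measurement points is fixed by the detector array, whatever the display shows. The sketch below also estimates the measurement spot size at a given distance, assuming a 24-degree horizontal field of view purely for illustration; that figure is not from Flir.

# Detector resolution sets the number of real temperature measurement points,
# regardless of display resolution. The field-of-view and distance values used
# for the spot-size estimate are illustrative assumptions.
import math

detector_px = (160, 120)   # detector array from the example above
display_px = (640, 480)    # LCD from the example above

print("Measurement points:", detector_px[0] * detector_px[1])   # 19,200
print("Display pixels:    ", display_px[0] * display_px[1])     # 307,200

# Approximate instantaneous field of view (IFOV) and spot size at 2m,
# assuming a 24-degree horizontal field of view.
hfov_deg = 24.0
distance_m = 2.0
ifov_rad = math.radians(hfov_deg) / detector_px[0]
print(f"Spot size at {distance_m}m: {ifov_rad * distance_m * 1000:.1f} mm")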


The Directors' Energy Report 2017

Download your copy now at theenergyst.com/ directors







ENERGY EFFICIENCY

Waste not, want not: act now to reduce consumption Chauvin-Arnoux UK general manager Julian Grant provides an insight into the use of portable power and energy loggers to identify and address areas of wastage

Year on year, energy costs are rising and the global demand for data storage is increasing. It will continue doing so for the foreseeable future, putting financial pressure on data centres not only to use energy efficiently but also not to waste it. Energy use is also subject to ever-increasing environmental scrutiny and legislative requirements, as the world strives to reduce consumption and businesses attempt to reduce their carbon footprint, adding more weight to the

issue. So important, in fact, are energy costs in relation to data centre performance, that management and measurement metrics, such power usage effectiveness, carbon usage effectiveness and data centre energy productivity, to name just a few, have evolved to enable accurate comparison and benchmarking. Rather than getting into the immense detail of data centre infrastructure management (DCIM), where you will encounter all the metrics detailed above and more, this

article will instead concentrate on the key areas where energy is typically wasted, ways to identify and address wastage or inefficiency, key considerations, and examples of how improvements can be achieved. Reports by the Carbon Trust have shown that up to 20% of any business's energy costs are due to wasted energy caused by inefficient equipment. In addition, British Gas smart meter data analysis from more than 6,000 SMEs showed that they were using 46% of

their total electricity out of hours. While data centres operate 24/7, they are not always manned, and typical examples of areas where energy is wasted include lighting in occasional rooms (bathrooms, corridors, canteens, etc), office lighting, vending machines and hot water dispensers left on when no staff are present, and car park lighting left on during the day or at night when nobody is about. Significant energy use in


data centres can be attributed to the HVAC systems used to cool the IT hardware, and in some cases this can represent as much as 50% of the total energy consumption. It follows, therefore, that not only does the system need to ensure effective and efficient cooling, and the absence of hot spots that could lead to equipment failure, but also avoid any overcooling, or cold spots, that will result in unnecessary energy use. With the majority of the rest of the data centre power consumption being drawn by the IT infrastructure itself, energy efficiency of that equipment becomes paramount and should be measured, and any items that are drawing power but no longer being used should be identified and considered for decommissioning. Surveys conducted in the US have shown instances where one third of all hardware in a data centre may be unnecessary, often remaining powered up for many years for no other reason than it has always been there. These issues can often build up unnoticed over time as a data centre grows and evolves to meet new customer needs, and with new equipment being added as more powerful and cost-effective hardware emerges on the market. Help is at hand, though, and in the absence of a sophisticated DCIM solution, or as part of an additional preventative maintenance programme, there is a selection of portable instruments available to provide the solutions. Before any corrections can be put in place, inefficiencies need to be exposed and understood. Fortunately, identifying electrical usage

over time in areas of a facility, or specifically monitoring consumption of a particular item, has been significantly simplified nowadays with the availability of portable power and energy loggers (PELs).

Power and energy loggers
The best power and energy loggers are extremely simple to use and can be installed in distribution panels or connected directly to equipment around the data centre without difficulty, and removed as easily without the need to shut down any parts of the electrical installation. They are capable of measuring and storing tens of thousands to several million readings of voltage, current, frequency, power, etc over a selectable time period, and the results can be retrieved later or transmitted locally or remotely via Bluetooth or LAN. Apart from showing the consumption of the various items on the electrical network, PELs can also show any imbalances on three-phase systems caused by the IT and HVAC systems not being connected such that an equal amount of current is drawn from each phase. This alone would cause unnecessarily high billing. Every data centre has some kind of backup supply, and the specific knowledge of a data centre's power consumption over time, as measured by a

PEL, is crucial in determining the capability of that backup to supply sufficient power. PELs also have the ability to identify harmonics and other disturbances and can be used to ensure the quality of the supply from standby generators is suitable for the potentially sensitive IT equipment it may be called on to power. Add a thermal imaging camera, allowing the temperature of all of the items in the data centre to be quickly viewed and recorded where necessary, as well as losses identified within the building fabric, and analysis of the effectiveness of the HVAC systems, and most energy inefficiency issues within the data centre can be discovered and corrective action taken. The thermal camera will show overheating IT equipment, which apart from the risk of failure and all that would entail from an operational point of view, must be consuming too much energy by virtue of the fact that it is generating excess heat, and could ultimately result in a fire. To make things even easier, today it is possible to source a complete energy efficiency kit, containing simple to use PELs and thermal imaging cameras, with associated reporting software, that allow any facility manager or maintenance electrician to perform energy surveys on their data centre and potentially make significant savings. In a world where energy is not getting any cheaper, and the competition to provide efficient and cost-effective IT services is growing year on year, there has never been a better time to put in place measures to reduce consumption and save money.
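As a rough illustration of the kind of analysis PEL data makes possible, the sketch below works through hypothetical logged readings to flag a three-phase current imbalance and estimate out-of-hours consumption. The record layout, sample interval, working hours and thresholds are assumptions for the example, not the format or behaviour of any particular logger.

```python
# Hypothetical post-processing of power/energy logger (PEL) readings.
# The record layout and thresholds below are assumptions for illustration.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sample:
    timestamp: datetime
    i_l1: float      # phase currents in amps
    i_l2: float
    i_l3: float
    kw_total: float  # total active power in kW

def phase_imbalance(sample: Sample) -> float:
    """Max deviation from the mean phase current, as a fraction of the mean."""
    currents = (sample.i_l1, sample.i_l2, sample.i_l3)
    mean = sum(currents) / 3
    return max(abs(i - mean) for i in currents) / mean if mean else 0.0

def out_of_hours_energy(samples: list[Sample], interval_h: float = 0.25) -> float:
    """Energy (kWh) consumed outside 08:00-18:00, assuming fixed-interval samples."""
    return sum(s.kw_total * interval_h
               for s in samples
               if not 8 <= s.timestamp.hour < 18)

if __name__ == "__main__":
    samples = [
        Sample(datetime(2017, 6, 1, 3, 0), 82.0, 61.0, 64.0, 47.5),
        Sample(datetime(2017, 6, 1, 11, 0), 85.0, 83.0, 84.0, 58.0),
    ]
    for s in samples:
        if phase_imbalance(s) > 0.10:  # flag anything worse than 10%
            print(f"{s.timestamp}: phase imbalance {phase_imbalance(s):.0%}")
    print(f"Out-of-hours energy: {out_of_hours_energy(samples):.1f} kWh")
```

In practice the same checks would be run over weeks of logged data rather than two samples, but the principle of exposing imbalance and out-of-hours load before acting on it is the same.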





UNINTERRUPTIBLE POWER SUPPLIES

How to select between modular and monobloc UPS systems Deciding which type of UPS is right for your data centre today and in the future is an important consideration, says AEG Power Solutions' Alessandro Nalbone

Should you choose a modular UPS system or go for a more traditional monobloc type?

The reliability of any business-critical application is dependent upon the quality of the mains power supply to which it is connected. Power quality issues, short-term power outages and complete mains power failures can lead to disruption and loss of service, data corruption and downtime. An uninterruptible power supply can mitigate and remove these risks, but there are several topologies to choose from.

Should you choose a modular system (in vogue) or go for a more traditional monobloc type? What should you base your selection on: energy efficiency, load right-sizing, ease of access and maintenance, resilience and reliability, future scalability, and even overall investment budget? The role of a UPS system is to provide a conditioned and stable power supply, protecting against mains-borne

power problems, small outages and longer-term complete power failures. Typically, a UPS system will use a battery pack to provide power long enough to bridge the gap until the mains power supply is restored, a generator starts up, or a further secondary AC source is brought into circuit. A UPS also provides clean power and can tackle instances where the input power waveform is of poor quality, such as under-voltages, over-voltages, input frequency fluctuations, harmonics, spikes or power surges. A typical UPS system should provide up to 15 years' operation. As a maintained

capital investment, it is important to consider total cost of ownership (TCO). The TCO calculation should cover the cost of the initial purchase (Capex), which includes the cost of the UPS and the battery system, but also the cost of any special infrastructure required for the installation – eg HV/LV switchgear, cables, static transfer switches and generator sets. In addition, annual operating costs (Opex) – maintenance, consumables, kWh costs and energy efficiency – must also be considered. It is here that the choice of UPS technology (between traditional monobloc and modular UPS systems) can show significant differences.

Modular approach
Modular is one type of UPS architecture that is gaining popularity in many data centre and server room environments. A modular UPS provides protection in 'power blocks' (typically rated 10, 15, 20, 30 and even 50kVA), enabling protection to scale up to hundreds of kVA in a single system. Modular UPS provide the flexibility and scalability required to respond quickly to changes in the load profile. The use of a modular UPS allows organisations to adopt a 'pay-as-you-grow' investment strategy. Instead of needing to invest up front in a system that will initially be over-specified, a smaller investment can be made, reducing overall capital



expenditure. Similarly, if the requirement for the protected load reduces, modular power blocks can be removed and then redeployed elsewhere in an organisation's infrastructure, or simply 'frozen' and kept in stand-by mode inside the UPS frame until needed by the load.

Flexibility
This flexibility is particularly useful for data centres providing Cloud-based services. Their load and server optimisation will vary from day-to-day and from month-to-month. A scalable UPS can cost-effectively meet this demand. As well as scalability, the other key benefit of a modular system is how it can improve efficiency through right-sizing. This means that the UPS can be configured as an N or N+X system to power today's load, reducing overall power consumption, improving operational efficiency and thereby reducing TCO. This is an important factor, considering how high a percentage of a data centre's costs are electricity related. When using any UPS system (in an N+1 configuration), the actual UPS may also only be protecting light loads (often less than 50% of the overall UPS capacity). Where this is the case, the UPS must still be capable of achieving a high operational efficiency, a feature common to some, if not all, modern modular UPS systems. For larger applications, another approach can offer further benefits: the 'block-modular' architecture. This uses larger blocks of typically 250kVA that can be combined to reach the required load capacity. While this approach retains most of the flexibility of a pure modular approach, and can again provide high operational efficiencies, it can also be a more cost-effective route for large and mega-sized data centres. Once the required load capacity approaches 500kVA, it's usually worth considering block-modular as an alternative power protection strategy.
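The right-sizing argument can be made concrete with a small sketch: given a protected load and a chosen power-block rating, it works out how many modules an N+1 configuration needs and what fraction of installed capacity the load actually represents. The module rating and load figures are invented for illustration, not taken from any particular product.

```python
import math

def modular_n_plus_1(load_kva: float, module_kva: float) -> dict:
    """Size an N+1 modular UPS for a given load (illustrative sketch).

    Returns the module count and the load as a fraction of installed capacity,
    which is what drives operating efficiency in a right-sized system.
    """
    n = math.ceil(load_kva / module_kva)   # modules needed to carry the load
    installed = (n + 1) * module_kva       # one redundant module added
    return {
        "modules": n + 1,
        "installed_kva": installed,
        "load_fraction": round(load_kva / installed, 3),
    }

if __name__ == "__main__":
    # A 120kVA load protected by 30kVA power blocks (assumed figures):
    print(modular_n_plus_1(load_kva=120, module_kva=30))
    # Growing to 180kVA simply means adding blocks rather than replacing the UPS:
    print(modular_n_plus_1(load_kva=180, module_kva=30))
```

Keeping the load fraction high, even with a redundant module in place, is what lets a modular system stay near the efficient part of its curve as demand grows.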

Modular UPS designs are typically transformerless and use an IGBT-based rectifier and charger configuration. These two features provide a compact size and footprint and high operating efficiency. The modules have a standard kVA/kW size and are housed in a frame or cabinet with a compact size and footprint. Batteries may be internal to the cabinet or housed externally. Where a transformer is required for isolation, this can be built into the external design; primarily this will be for more industrial and remote sites. Monobloc UPS also tend to be transformerless and IGBT-

based but operate as standalone systems with little flexibility for load right-sizing and future scalability. Monobloc UPS designs can run from less than 10kVA up to 1,000kVA in discrete sizes. They can be used as N-type systems in isolation or can be configured to operate as a capacity or N+X system for added resilience up to several MVA. Dependent upon the UPS size, the systems may include an internal maintenance bypass switch or have this built into the appropriately sized switchgear. The important point is that the LV switchgear and cabling must be installed from day one, leading to a higher initial Capex.

Key principles for designing a power continuity plan
One of the main challenges can be to ensure that power to the critical load remains uninterrupted, even when the UPS system is being maintained, tested or repaired. For this, the following design principles should be considered:
• Security – in terms of the electrical supply availability and power quality
• Protection – against future potential faults both upstream of the UPS and downstream
• Maintainability – with easy access for maintenance and testing without interruption to the load
• Design simplification – to remove single points of failure while minimising installation costs
• Upgradeability and flexibility – in terms of a scalable architecture for future growth
• Scalable, flexible and resilient power – modular or stand-alone UPS

The UPS may also be run at less than optimal load, leading to lower energy efficiency and a higher TCO. Both types of transformerless UPS system may achieve high operating efficiencies up to 95.5%, and even as high as 98-99% in ECO mode. These levels of efficiency should also be achieved (if the UPS has a flat efficiency curve) down to as low as 25-35% load. Modular UPS also tend to offer an 'idle/sleep' mode for unused modules, which can further improve operational efficiency. Compared to monobloc UPS, modular systems offer improved scalability (both vertically and horizontally) and a more compact kVA/kW per square metre ratio. When choosing which UPS topology to install, consideration should also be given to the type of battery to be used. Valve regulated lead acid (VRLA) batteries are the traditional choice, but today some modular UPS also provide the potential to operate with lithium-ion battery technologies. Using Li-ion provides a UPS with a battery set that can cycle faster (charge/discharge) and may have a longer working life of up to 10-15 years, reducing in-life battery replacement. Another consideration for the future could include the role of the UPS as a virtual power plant (VPP), whereby the UPS is charged overnight when electricity costs are low and then used to power the loads during the day when electricity costs are higher. For this type of application, a Li-ion battery is the only choice. There are many factors to consider when selecting a UPS technology appropriate to an organisation and its actual and anticipated load profile. A comprehensive approach is important and should take into account both day-one costs and overall lifetime operating costs, including UPS topology, battery type, replacement and eventual decommissioning and recycling of the entire system at the end of life.
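As a rough way of weighing the trade-offs described in this article, the sketch below compares total cost of ownership for two candidate systems over a 15-year life, combining purchase cost with the electricity lost to UPS inefficiency. All prices, efficiencies, tariffs and load figures are invented for illustration and would need to be replaced with real quotations and measured load data.

```python
def ups_tco(capex: float, load_kw: float, efficiency: float,
            tariff_per_kwh: float = 0.12, years: int = 15,
            annual_maintenance: float = 0.0) -> float:
    """Illustrative TCO: purchase price plus lifetime cost of UPS losses.

    Losses are the extra input power needed to deliver load_kw at the given
    operating efficiency, billed at tariff_per_kwh for every hour of the year.
    """
    loss_kw = load_kw / efficiency - load_kw
    annual_energy_cost = loss_kw * 8760 * tariff_per_kwh
    return capex + years * (annual_energy_cost + annual_maintenance)

if __name__ == "__main__":
    # Hypothetical comparison for a 100kW protected load:
    # an oversized monobloc running at low load (lower efficiency) versus
    # a right-sized modular system holding a flatter efficiency curve.
    print(round(ups_tco(capex=60_000, load_kw=100, efficiency=0.92)))
    print(round(ups_tco(capex=70_000, load_kw=100, efficiency=0.955)))
```

Even a few points of sustained efficiency difference can outweigh a higher purchase price over the life of the system, which is why the article stresses looking beyond day-one costs.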



LIGHTING

Make sure your emergency lighting is always ready Emergency lighting is an essential safety requirement for any building. Uninterruptible Power Supplies director Alan Luscombe explains how a well-designed, correctly specified and properly maintained battery backup system can ensure the lights come on when they need to

The presence of emergency lights that operate reliably when called upon and provide sufficient illumination along all escape routes could make the difference between safe evacuation and panic, injury or even death. Accordingly, emergency lighting is an essential part of any building services installation — and subject to extensive British and European legislation. Emergency lighting, by definition, depends on a continuously charged battery backup power source. The lighting can detect a mains power failure and switch to battery backup automatically and immediately. The battery power source must be well-designed, well-maintained, always fully charged and ready for use, and highly reliable.

Wider understanding
The Industry Committee for Emergency Lighting has published a guide to emergency lighting design, intended to give engineers a wider understanding of the different types of

emergency lighting and their correct application. The guide references the British and European standard BS EN 1838, which specifies the luminous requirements for emergency escape lighting and standby lighting systems installed in premises or locations where such systems are required. It is principally applicable to locations where the public or workers have access. The overall objective of emergency escape lighting is to enable a safe exit from a location

in the event of failure of the normal supply. The guide shows specific forms of emergency lighting, as shown in Figure 1. The emergency escape lighting allows safe exit if the normal power supply fails, and forms part of a building’s fire protection system. Escape route lighting allows safe exit from buildings by illuminating escape routes and ensures that firefighting and safety equipment can be readily located and used.

Open area or anti-panic area lighting reduces the likelihood that people panic while enabling safe movement of occupants towards escape routes. High-risk task area lighting ensures the safety of people involved in a potentially dangerous process or situation, and allows proper shutdown procedures to be completed for the safety of other occupants; for example, protecting people from dangerous machinery. Standby lighting allows normal


activities to continue after the normal mains supply has failed. This lighting does not provide fire protection unless it meets the same equipment, design and installation requirements as emergency escape lighting systems. The major legislative standards that apply to emergency lighting systems include The Construction Products Directive (89/106), The Workplace Directive (89/654) and The Signs Directive (90/664). Other UK legislative requirements exist for certain premises, such as theatres and cinemas. The Construction Products Directive specifies that prompt lighting is provided automatically and for a suitable time in a specific area when the normal power supply to the lighting fails. The lighting must comply with BS 5266-1. BS EN 50171 also applies. The Signs Directive states that signs requiring some form of power must be provided with a guaranteed supply. Three-hour duration is generally required for emergency lighting, although one-hour duration may be acceptable in some premises if evacuation is immediate and re-occupation is delayed until the system has recharged. To conserve power, light output may be reduced, often down to 10% of normal light output. Emergency light fittings are either maintained or non-maintained. Maintained types operate together with other lights in the area during normal supply; however, they also continue to operate, although at a lower level, during a power failure. Non-maintained fittings are normally switched off, only powering up from battery when the mains fails. They are not part of the general lighting scheme and usually comprise emergency exit signs and other such fittings.

Static inverters
Uninterruptible Power Supplies is well-known as a supplier of both UPS systems for data

centres and static inverters for centralised emergency lighting applications. These products share some attributes. Above all, whether they are supporting a mission-critical data centre or an emergency lighting system, they must offer very high availability and readiness for instant action when it counts. They both make use of the same key components – rectifier, batteries and inverter – and both deploy these components in various configurations according to their applications' priorities. In fact, UPS systems would support emergency lighting perfectly well, but with considerable oversizing. Inverters such as UPS's PowerWAVE EL range include single-phase and three-phase solutions from 500VA to 160kVA and are designed specifically for emergency lighting and similar applications. These inverters can work in different modes – passive, active or no-break standby – to suit the emergency lighting type. For incandescent or fluorescent lights, a passive standby mode is sufficient. Power is delivered directly from the mains to the lights during normal operation, but if the mains fails the load is automatically transferred to the output of the inverter, which is powered by the battery. As the inverter's components are de-energised during normal operation, energy is saved, component life is maximised through reduced stress, and hence less maintenance is required.

Figure 1: Forms of emergency lighting as defined by ICEL




However, if high-pressure discharge lamps are being used, a no-break static inverter becomes a better solution. This is because the lighting load is always fed from the inverter, with no switchover delay during a power failure. Avoiding a power interruption prevents the restrike delay of possibly up to one or two minutes that occurs with many high-pressure discharge lamps. These inverters can also be used with incandescent and fluorescent lights; in all cases, flicker is eliminated and lamp life extended as the lighting system is constantly fed with stabilised, smoothed and conditioned power. Active standby offers a variant on no-break mode, in which the inverter is constantly running but off-load. This reduces stress on the inverter, improving its reliability and reducing the risk of failure when the lighting load is switched in during a mains blackout. The PowerWAVE EL inverters comply with the latest European EN 50171 specification for emergency lighting, and all can support a 120% continuous overload as demanded by the standard. The products power up with a 'soft start' to limit inrush current and prevent overloading of marginally rated mains supplies. For more capacity or higher availability, multiple inverters

can be paralleled in hot standby, redundant or symmetric parallel mode. The EL inverters are easy to install, with front access for fast maintenance. Different models with various options are available, including ingress protection ratings to IP41 and ambient operating temperatures to 40°C. High efficiencies of 97% or better at 100% load minimise operating costs. Remote monitoring and communications are supported with both dry contacts and RS232 serial communications. The inverters feature an LCD display, which provides accurate, detailed information about loads, batteries and the inverter, with advanced diagnostics. Some models include intelligent battery monitoring to maximise service life. While the EL inverters are designed to meet the demands of emergency lighting systems with high availability, they need regular inspection, service and maintenance to underwrite their ability to do so. Most if not all emergency lighting inverter owners will set up an ongoing preventative maintenance and emergency callout contract with their inverter supplier or possibly a third party. As the heart of any EL inverter, and its most vulnerable component, the battery

When the lights go out
Passengers on the underground in San Francisco were recently forced to navigate their way across a pitch-black platform during a power outage when emergency lighting failed to come on. According to local newspaper The Mercury News, passengers on the Bay Area Rapid Transit (Bart) had to use phone lights to find their way out. Following the incident, the problem was found to go far beyond the one station. About one-third of the agency's stations would be left in the dark during a power outage, with no functioning emergency lights, officials reported. The admission came after Bart staff completed an investigation into the battery-powered backup systems that are supposed to turn on immediately in the event the power goes out.

should receive particular attention; however, capacitors, fans, contactors and other components should also be regularly checked and replaced as part of the preventative maintenance schedule. As stated already, legislation calls for three hours’ battery backup during a power failure; this is a long duration, and calls for a significant number of batteries backed by a robust battery care regime to ensure reliable operation if the power fails. Accordingly, UPS is seeing a trend towards adoption of its PowerNSURE dedicated battery monitoring, management and care system. Using remote access via the Web, PowerNSURE checks the

internal resistance, temperature and voltage of every battery sequentially. It then uses an equalisation process to correct charging voltages and obtain a balanced charging condition across every battery in the string. This eliminates gassing, dryout and thermal runaway, and prevents battery undercharging. The constant monitoring and controlling of the individual charging voltages for each battery guarantees the availability of the battery at all times. An early indication of problems allows suspect batteries to be replaced before they compromise the inverter’s backup capability. Figure 2 shows an example of a failing battery. It shows how battery six is underperforming after 30 minutes of discharge into a 45-minute run. By using this warning to replace the battery immediately, the lifetime of the complete battery system is extended without risk to the load. Under ideal circumstances, a building’s emergency lighting will never be needed. However, facilities managers cannot control the quality or availability of their site’s incoming utility power, so a battery system that delivers effective emergency lighting without interruption if the mains does fail is essential. This protects the building’s occupants from harm, allows an orderly shutdown of dangerous equipment and ensures compliance with building and equipment safety regulations.

Figure 2: PowerNSURE battery performance graph
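To illustrate the kind of per-battery check described above, and the failing 'battery six' shown in Figure 2, here is a minimal sketch that flags any block whose voltage sags well below the rest of the string during a discharge test. The voltage figures and threshold are assumptions for illustration, not the behaviour or output of any specific monitoring product.

```python
def weak_batteries(voltages: dict[str, float], margin: float = 0.05) -> list[str]:
    """Flag batteries whose mid-discharge voltage is more than `margin`
    (as a fraction) below the string average - an early sign of a failing block."""
    avg = sum(voltages.values()) / len(voltages)
    return [name for name, v in voltages.items() if v < avg * (1 - margin)]

if __name__ == "__main__":
    # Hypothetical per-block voltages 30 minutes into a 45-minute discharge test:
    string = {
        "battery 1": 11.9, "battery 2": 11.8, "battery 3": 11.9,
        "battery 4": 11.8, "battery 5": 11.9, "battery 6": 10.6,
    }
    print(weak_batteries(string))  # -> ['battery 6']
```

Catching the weak block early, as the article describes, is what allows it to be replaced before it drags down the rest of the string or compromises the three-hour autonomy requirement.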



PRODUCTS


Enclosure-based cooling solutions support growing IT needs
In today's data-driven business models, even small companies have to contend with growing IT needs. When they expand their computing environments and install the latest class of servers, the issue of cooling the equipment becomes acute. Legacy air-cooling systems are no longer sufficient. The Rittal LCU DX and LCP DX solutions deliver enclosure-based cooling via direct expansion units. The units are easy to install, simply by mounting them on the side panels inside IT racks. Expanding a small-scale, air-cooled IT hardware environment to create a multi-enclosure facility often calls for a new cooling strategy. The first and most fundamental question is whether water-based or refrigerant-based cooling is more appropriate. It also makes sense to understand the total cost of ownership – including both capital expenditure and ongoing operating costs. Direct expansion (DX) solutions for cooling IT equipment are the quickest and easiest solutions to implement and require less capital expenditure than water-based

ones. DX solutions employ conventional refrigerant-based cooling with a split system and a compressor. Cooling is via a closed-loop refrigeration cycle, featuring an evaporator, a compressor, a condenser and an expansion valve. Rittal's LCU DX (Liquid Cooling Unit) offers enclosure-based cooling with DX units mounted inside 800mm-wide IT racks. It is available with up to 6.5kW output in both single and dual redundancy variants. The LCU system features horizontal air circulation, supporting the conventional

method of front-to-back air flow to the 19-inch racks. Cold air is blown directly in front of the components. After being warmed by the servers, the air is drawn into the cooling unit at the rear of the enclosure and passes through the heat exchanger, which cools it down. This method requires IT enclosures that are sufficiently air-tight, such as Rittal’s TS IT series. Otherwise cold air will escape, impacting overall efficiency. LCU DX units can be installed in 800mm server enclosures. The Rittal LCP DX (Liquid Cooling Package) is another example of a rack cooling unit. Suitable for 12kW power dissipation, it can be mounted on the side of an IT enclosure, enabling a single device to cool two enclosures. One version of LCP DX blows cool air out to the front and can be employed to create solutions with a cold aisle that cools multiple IT racks. By deploying state-of-the-art modular and rack-based climate control systems, enterprises can remain flexible and responsive to changing IT needs – however uncertain – well into the future.

Compact power solution
ABB's MNS-Up innovation integrates uninterruptible power supply and switchgear technologies into a single and compact system. By integrating with ABB's Ability Mission Critical Power Control System, the reliability of critical power applications can be ensured. The Ability Mission Critical Power Control System is part of ABB's portfolio of digital offerings that enable customers to do more with their assets, providing local and remote system visualisation, control and remote diagnostics for the highest power reliability. MNS-Up enables users to save up to 10% capital in electrical infrastructure. It requires up to 30% less space compared with traditional architectures and can be up and running as much as 20% faster due to reduced installation and commissioning time. Fully integrated, MNS-Up allows switchgear and UPS modules to be safely and rapidly exchanged without disconnecting power. Responsible energy consumption and facility growth are ensured through planned incremental additions. MNS-Up's modular design expands in 100kW steps so that companies just pay as they grow. Each frame of the system can support up to five 100kW UPS modules. Up to six frames can combine to provide 3MW of backup power supply.

Corrosion protection
British cooling systems manufacturer Airedale International has launched an additional after-sales service to extend the life of heat exchangers in HVAC equipment. Airedale now provides long-lasting anti-corrosion treatment to maximise the efficiency and performance of air conditioning systems such as heat exchangers, evaporator coils, condenser coils in external chillers, condensers, dry coolers and more. Corrosion is the number one cause of efficiency loss in heat exchangers, typically when exposed to extreme weather conditions and pollution. Aluminium heat exchangers and coils are relatively corrosion resistant, even without any type of coating; however, the harsh conditions in certain air conditioning applications often require additional protection, more specifically concerning offshore or coastal applications, power plants, industrial environments, dense urban applications, and any areas that have high levels of airborne pollutants. Protecting aluminium coils can triple the lifetime of the heat exchanger, and prevent early deterioration, capacity loss and the need for coil replacement. Applied protection could also save up to 30% on annual energy costs. Airedale now offers three corrosion treatment plans that promise to protect, refresh and renew coils in heat exchangers that are between three and eight or more years old.





Modular circuit breaker range prevents downtime WEG has launched the DWB series of modular circuit breakers, which can reliably interrupt short-circuit currents up to 80kA at 415V to protect plant, equipment, cables and wiring, as well as motors and generators. The circuit breakers are ideal for a wide range of applications and feature compact design, robustness and reliability, so engineers benefit from maximum operational reliability and uptime under the most demanding conditions as well as space-saving installation. The DWB moulded case circuit breakers are designed with rated operating voltages up to 690V AC or 250V DC and are available in three-pole and four-pole versions in six case sizes with rated operating currents from 16 to 1,600A. The circuit breakers and associated equipment and components can be switched on and off remotely with related

accessories. The modular design of the DWB series allows the complexity of the circuit breakers to be adapted to the requirements of each application. For example, the needs of simple cost-sensitive applications can be met using circuit breakers with fixed overload and short-circuit tripping mechanisms based on the thermal-magnetic operating principle. By contrast, circuit breakers with adjustable overload and short-circuit tripping mechanisms can be deployed in more sophisticated solutions. Circuit breakers with adjustable current-dependent or time-delayed overload and short-circuit tripping using electronic tripping mechanisms are available to meet complex requirements. These versions can be used to construct selective protection networks to ensure that only the circuit breaker directly ahead of the fault location is tripped and all other parts of the network are reliably supplied with power.

Reliable surge protection
Eaton's new MTL SD Modular range provides comprehensive protection from transient surge events up to 20kA, offering a high level of protection coupled with high packing density. With more than 50% of premature electronic equipment failures being attributed to surge and maintenance failures, Eaton's MTL SD Modular range offers complete, cost-effective surge protection to valuable instruments and distributed control systems. “The delicate circuits and devices in today's equipment and systems make their susceptibility to transient surge events much greater,” said Roger Highton, MTL process connectivity product line manager at Eaton. “Underestimating the importance of reliable surge protection devices can be extremely costly if the worst should happen. The MTL SD Modular range is unique in offering 20kA protection with a module width of just 7mm, allowing maximum protection of valuable assets in minimal space.” The design of the MTL SD Modular device reduces maintenance cost and downtime, as modules can be quickly and easily replaced. The pluggable part is held in place with a simple retention tag and can be removed from its base without de-energising the protected device, saving the user valuable time and reducing complexity.

Transformerless UPS
AEG Power Solutions has announced Protect Plus S500, its new transformerless UPS that combines high efficiency values with a compact footprint and flexible configurations. Reducing overall cost of ownership, this latest addition to the AEG PS range is designed to protect critical loads for small and medium applications where low power consumption, ease of maintenance and space are important considerations. With Protect Plus S500, AEG Power Solutions is completing its transformerless UPS range, protecting mission critical applications from 400VA to 4MVA, providing data and IT or industrial players with the solution they need to secure power and ultimately data, infrastructure or people.

The Protect Plus S500 is a double conversion UPS (VFI SS 111). Its eco mode allows secured operation at up to 99% efficiency, reducing the utility costs associated with operating a device of this type. Moreover, it produces less waste heat, resulting in minimised air conditioning costs. System AC/AC efficiency is up to 95.5%. Primarily designed for high availability, ease of maintenance is an integrated design factor for the Protect Plus S500, which includes removable internal modules contributing to a low MTTR (mean time to repair). The hot connection and disconnection of parallel units and the CAN bus-based distributed control system ensure optimum load sharing and allow the system to be easily expanded up to six units, either in parallel for power or for N+x redundancy.



PRODUCT & SERVICES DIRECTORY


Contact sales@energystmedia.com

BATTERY MANAGEMENT

FLOW METERING

GENSET CONTROLLERS

www.janitza.com

Clamp-on flow & heat/energy metering solution from Micronics.

DSE8610 MKII SHAPING THE FUTURE OF SYNCHRONISING.

www.micronicsflowmeters.com or call

POWER MONITORING

+44 (0) 1628 810456

3-in-1 Monitoring System

Reliable and efficient power supply

AVAILABLE NOW

EnMS – Energy Management (ISO 50001)
PQ – Power Quality (EN 50160)
RCM – Residual Current Monitoring
Video

REDUNDANT MSC

MADE IN BRITAIN

EXTENDED PLC ENHANCED FUNCTIONALITY COMMUNICATIONS

TO LEARN HOW DSE SYNCHRONISING SOLUTIONS WILL ENHANCE YOUR MULTI-SET APPLICATIONS VISIT WWW.DEEPSEAPLC.COM

T +44 (0) 1723 890099 E sales@deepseaplc.com

UPS

Premium Power Protection for Data Centres

Reliable power for a sustainable world


Call: 0800 269 394 www.riello-ups.co.uk

To feature your company's products or services on this page contact sales@energystmedia.com




Q&A

Mike O'Keeffe
Vertiv's VP of services in EMEA talks about politicians, the Industrial Revolution and playing golf with Rory McIlroy

Who would you least like to share a lift with?
Given the political landscape at the moment, I wouldn't like to find myself sharing a lift with a politician of any persuasion. There is so much time being wasted in politics; endless games and personal endeavours. With our third significant vote in recent years coming up, there has to be a better way to develop a fair society. So if I was stuck in a confined space with a politician, I'd loathe it and no doubt would have a few words to say.

You're God for the day. What's the first thing you do?
If you can somehow get that sort of power, I think it's important you do something meaningful with it. I'd like to improve people's levels of tolerance for each other. There's enough space and resources in the world to go around – as a global society, we need to get on better and learn to tolerate each other's differences.

If you could travel back in time to a period in history, what would it be and why?
I love history, so this is a tough one for me. I would love to go back to the Industrial Revolution because it was a time of incredible infrastructure change. Around the late 18th and early 19th centuries people were building so much with their hands – railways, canals, roads – and that's an unbelievable physical transformation that I would love to witness in person.

Who or what are you enjoying listening to?
I'm not a huge music fanatic but I've recently got my hands on an Amazon Alexa, which is doing the selection for me. When I tell it to play music it tailors playlists for me based on my profile. Being a middle-aged man, the artificial intelligence suggests the likes of the Rolling Stones, Bob Dylan and Van Morrison – I can relax to it and I like how it uses artificial intelligence to do it all for me.

What unsolved mystery would you like the answers to?
We are all intrigued by a missing person or murder mystery. It would be very special if a missing persons case could be solved, and you could reunite them with their loved ones.

What would you take to a desert island and why?
I feel very millennial saying this but it would have to be my iPad – I use it to read books, watch films and catch up on the news. As an engineer, I'd fancy the challenge of creating a power source – most likely solar in this case. So if I'm relaxing on this desert island, it's my iPad that's got to be with me.

What's your favourite film (or book) and why?
It's a book about travel, history and adventure called In Patagonia by Bruce Chatwin. I've been very lucky to travel in my job. The book was the first of a new genre of travel writing, where instead of providing a description of what was going on, you get a real feel for the place in the narrative.

If you could perpetuate a myth about yourself, what would it be?
To be frank, I'm quite an open character. What you see is what you get with me.

What would your super power be and why?
Vision. Being able to accurately predict the future is an immense superpower. Whether you're running a business, or just trying to run your personal life, if you're able to see how things may turn out, you'll get rewarded.

What would you do with a million pounds?
Invest it in job creation schemes and work with charitable organisations to help eradicate social inequalities. There's no benefit to anyone if it's sitting in the bank, and I'm all for giving it back for a good cause.

What's your greatest extravagance?
I like space. At my family home we're fortunate enough to have a big garden, and plenty of living space. I'm not into posh wines, I don't drink expensive whiskies and you won't find me in the latest gourmet restaurants. But I think with the space my family and I have – that's a real luxury.

If you were blessed with any talent, what would your dream job be and why?
I'm a big sports fan so it would have to be some sort of athlete. In a perfect world I'd be hitting under par on the golf course with Rory McIlroy in the morning, then a key member of the back row playing rugby at Twickenham.

What is the best piece of advice you have ever been given?
Always search for balance in everything you do. People talk a lot about work-life balance but you've got to find balance in all aspects of your life.

What irritates you the most in life?
Wasting time simply getting from A to B – airports are the worst for this. If a flight takes two hours, we all know from door to door that ends up being five or six. We could all be so much more productive if we spent less time just travelling.

What's the best thing – work-wise – that you did recently?
Vertiv just celebrated the 10th anniversary of the Academy, a sort of corporate university that trains our staff and partners, building value and expertise. To see us all come together has been extremely rewarding. Not just for me but for everyone in the business.


A BRAND NEW ENERGY EVENT FOR TODAY’S BUSINESS MARKET

17-18 APRIL 2018 – National Motorcycle Museum, Birmingham
The Energyst Event will provide you with essential information to cater for the changing needs of the modern energy professional. It will equip users with fit-for-purpose knowledge, insight and solutions in a conference and exhibition that is focussed solely on energy.

Register for your free ticket at theenergystevent.com


