missioncriticalpower.uk
ISSUE 24: October 2019
8 Pressure to cut costs and skills shortages pose risk for data centres, warns Ed Ansett
20 Is energy storage and flexibility the key to resilience in uncertain times?
42 Wave goodbye to unsustainable IT: ocean-powered data centre planned for Scotland
3
IN THIS ISSUE
8 Power to change? Ed Ansett warns that pressure to cut costs, skills shortages and a culture of secrecy are increasing risk for data centres
20 Demand-side response: Are energy storage and flexibility the key to resilience in uncertain times?
14 Blackout: Who should be in the dock for the power outages that hit the UK in August? Ian Bitterlin gives his views
44 Skills shortage: CNet’s Andrew Stevens warns that the data centre sector is facing a crisis, but there is no simple fix
33
48 Sequential expansion: Kao Data’s expansion will support the life sciences sector
12 Hidden cost of cooling: How can operators reduce their opex and improve their green credentials?
Front Cover: Schneider Electric
Comment 4
News 6
Data Centre Optimisation 8
UPS 18
Standby Power 22
Battery Storage 26
Power Quality 30
Onsite Generation 36
Renewables 42
Power Distribution 50
Products 56
Q&A 58
To subscribe please contact: missioncriticalpower.uk/subscribe
4
COMMENT
Backup put to the test... In the wake of August’s power outages, Professor Dieter Helm, a government adviser on energy policy, commented: “The very idea that the electricity system could be brought to its knees just because a couple of power stations dropped off at short notice should send alarm bells ringing.” Likewise, the very idea that mission critical facilities could be caught out by such an eventuality also sends alarm bells ringing. There were numerous reports of the chaos that followed. By now, mission critical sites will have had the opportunity to reflect on whether their power systems were robust enough to withstand a major event on the grid and will no doubt be asking questions if their backup failed.
This edition of MCP focuses on the best practices required to ensure resilience in the face of uncertain grid stability, from the implementation of technologies such as uninterruptible power supplies, to regular maintenance for backup generators and robust testing using loadbanks. We look at some of the lessons learned and how to avoid being hit by outages in the future.
Ian Bitterlin blames privatisation for many of the problems experienced by the grid on 9 August, but the government’s role goes much deeper, in his view. He questions whether a lack of investment in key services, such as the NHS, has led to budget cuts to essential maintenance and testing of backup power systems. It is tempting
Editor Louise Frampton louise@energystmedia.com t: 020 3409 2043 m: 07824 317819 Managing Editor Tim McManan-Smith tim@energystmedia.com Design and production Paul Lindsell paul@energystmedia.com m: 07790 434813
Sales director Steve Swaine steve@energystmedia.com t: 020 3714 4451 m: 07818 574300
to slice and dice budgets in order to fund frontline care, but frontline care is unable to continue if the backup power systems fail to kick in. So, how much longer can we continue to rob ‘Peter to pay Paul’ before someone realises that ‘Peter’ is fundamental to the operation?
Others are calling for greater participation in frequency response in order to balance the grid. As mission critical sites already have the assets in place to take part in demand-side response, there has been a great deal of discussion around the need to encourage more facilities to engage with aggregators in these schemes. However, the 2019 DSR: Shifting value report, launched at the recent DSR Event in London, stated that it has been “a challenging year for demand-side response”. Uncertainty over charging reforms and removal of established cost avoidance and revenue streams has been compounded by the suspension of the Capacity Market. Though frequency response prices have stabilised, they remain depressed. The current state of the DSR market is unlikely to encourage risk-averse sectors to take the first steps towards participation.
Nevertheless, the majority (58%) of end users that do provide flexibility say they are satisfied with the outcome. So, what exactly can we learn from these end users? Speaking at the recent DSR Event, DSR manager Rob Wild revealed that Severn Trent Water had “dipped its toe in the water”. Payback was found to be around three years and it proved to be “one of the best business cases within the organisation”. Most importantly for our readers, the company experienced “no issues” from a risk perspective.
Louise Frampton, editor
Energyst Media Ltd, PO BOX 420, Reigate, Surrey RH2 2DU Registered in England & Wales – 8667229 Registered at Stationers Hall – ISSN 0964 8321 Printed by PCP
Commercial manager Daniel Coyne daniel@energystmedia.com t: 020 3751 7863 m: 07557 109476
No part of this publication may be reproduced without the written permission of the publishers. The opinions expressed in this publication are not necessarily those of the publishers. Mission Critical Power is a controlled circulation magazine available to selected professionals interested in energy, who fall within the publisher’s terms of control. For those outside of these terms, the annual subscription is £60 including postage in the UK. For all subscriptions outside the UK the annual subscription is £120 including postage.
Circulation enquiries circulation@energystmedia.com
Follow us for up-to-date news and information:
missioncriticalpower.uk
6
NEWS & COMMENT
DSR: market hit by uncertainty, but end users remain satisfied with outcome Firms helping to balance the power system via demand-side response (DSR) say they could provide significantly more flexibility – given sufficient reward and revenue certainty. Meanwhile, most companies that do not provide DSR would consider doing so – if it did not impact day-to-day operations. The findings, from a survey conducted by MCP’s sister title, The Energyst, come as National Grid ESO recommends reviewing security of supply standards to determine whether they require strengthening – and whether further reserves may be needed. The 2019 DSR: Shifting value report acknowledges that it has been a challenging year for demand-side response. Uncertainty over charging reforms and removal of established cost avoidance and revenue streams has been compounded by the suspension of the Capacity Market. Though frequency response prices have stabilised, they remain depressed, the effect of more batteries coming on stream. Aggregators are
The findings of the 2019 DSR report and key challenges were discussed in front of a full house at the DSR Event, held at One Moorgate Place in London
concerned, telling National Grid ESO’s Power Responsive forum at the start of the year that more providers could exit the market than enter and that “it’s hard to make a business case [for DSR] in the UK right now, compounded by Brexit”. Aggregators and large customers that provide flexibility want Ofgem to delay making decisions on its charging reforms until the future of the Capacity Market becomes clear and to better align the reviews on residual charges with forward-looking
aspects. They fear key pillars of the business case for DSR will be removed before rewards for behaviour that benefits the system are put in place. This year’s DSR survey reflects that sentiment. While the majority (58%) of end users that provide flexibility remain satisfied with the outcome, the trajectory is negative – from 86% in 2016, 77% in 2017 and 68% in 2018. Asked why they feel less positive towards DSR than 12 months ago, this answer from a water company is perhaps most succinct:
“Regulatory change (TCR/MCPD), FFR drop in value, CM suspension – the removal of value seems to be fast while access to new markets/removal of barriers is slow.” However, other providers cited wider access to the Balancing Mechanism as a reason for feeling more positive about DSR than 12 months ago. Key findings included: six in 10 DSR providers could offer more flex without affecting their business; Triad avoidance is the most popular activity, followed by frequency services and the Capacity Market; four in 10 are still using diesel backup for DSR/peak charge avoidance; and six in 10 said they were satisfied with DSR overall. In addition, almost nine in 10 businesses surveyed that are not currently participating in DSR would be interested… if there was no disruption. The report was launched at the 2019 DSR Event, held in London. Thanks to sponsors Power Responsive, EDF Energy, Enel X and UK Power Networks, the report is available as a free download at theenergyst.com/DSR/
Digital Realty expansion to support IoT boom Digital Realty has announced the official opening of Cloud House, the latest facility in its Digital Docklands campus of highly connected data centres in London’s Docklands area. The investment will support the growth of London’s technology ecosystem, with the city set to experience a multibillion-pound technology boom, according to a new study conducted by Development Economics. The Digital Realty-commissioned study, Digital Capitals Index: London, examines the value that innovative technologies will deliver to the city’s economy over the next decade. The report focuses on
four of the most widely discussed technology innovations: artificial intelligence (AI), the Internet of Things (IoT), 5G and Blockchain. These four innovative technologies combined will add £6.25bn to London’s economy in 2019, with IoT contributing the most at £3.09bn (49% of the total), primarily through improvements in operational efficiencies. By 2029, however, these new technologies are tipped to contribute an estimated £24.29bn to London’s economy, £18.04bn more than in 2019. The most spectacular growth is expected to come from 5G, with its economic contribution to London’s economy set to increase 3,000%
over the next decade, from £130m to £4.29bn, as 5G becomes the foundation for the deployment of many other innovative, data-led technologies. Digital Realty’s investment in the Digital Docklands is designed to underpin the growing importance of data-led technologies to London’s economy by ensuring the city’s businesses have the right digital infrastructure to adopt and deliver on complex technology, wherever and whenever they need it. The Digital Docklands campus is highly connected, offering the high-speed global connectivity required to deliver on the promise of AI, IoT, 5G and Blockchain.
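For readers who want to sanity-check the headline figures, the arithmetic is straightforward. A minimal sketch using only the numbers quoted above (the variable names are illustrative, not taken from the report):

```python
# Rough sanity check of the Digital Capitals Index figures quoted above (GBP billions).
total_2019 = 6.25        # combined contribution of AI, IoT, 5G and Blockchain in 2019
total_2029 = 24.29       # projected combined contribution in 2029
iot_2019 = 3.09          # IoT contribution in 2019

print(f"IoT share of 2019 total: {iot_2019 / total_2019:.0%}")       # ~49%
print(f"Growth over the decade: {total_2029 - total_2019:.2f}bn")    # ~18.04bn

five_g_2019 = 0.130      # 5G contribution in 2019 (130m)
five_g_2029 = 4.29       # projected 5G contribution in 2029
growth_pct = (five_g_2029 - five_g_2019) / five_g_2019 * 100
print(f"5G growth: {growth_pct:.0f}%")   # ~3,200%, reported as roughly 3,000%
```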
7
Bristol Airport switches to renewables Bristol Airport is switching to a 100% renewable electricity supply in a significant step towards reducing its carbon footprint. The announcement follows the recent publication of a carbon roadmap setting out how the airport will become carbon neutral by 2025 for emissions within its direct control. The new three-year agreement with global renewable energy supplier Ørsted will see the airport’s annual electricity use of 17 million kWh powered entirely by renewable sources. Electricity is the largest contributor to carbon emissions from onsite airport operations. In addition to the electricity used in the terminal and other buildings, a growing number of aircraft stands are equipped with fixed electrical ground power (FEGP), reducing the need to use diesel powered engines for essential pre-flight services.
Deal with Ørsted will see Bristol Airport powered entirely by renewable sources
Over the duration of the contract an estimated 14,000 tonnes of carbon will be saved across the airport site as a result of the move to renewables – equivalent to the emissions from driving 34 million miles in an average car. Simon Earles, planning and sustainability director at Bristol Airport, said: “From October our terminal and other facilities will be powered by renewable energy – a significant step on our journey to carbon neutrality. There is more to do but this is a clear statement of our intent to reduce our direct emissions.”
Ashley Phillips, managing director at Ørsted Sales (UK), said: “It’s exciting that an international airport like Bristol is placing such strong emphasis on sustainability. At Ørsted, we want to drive the transition to low-carbon energy systems in the UK, and support organisations like Bristol Airport that share this ambition of creating a greener energy future.”
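The equivalences quoted above can be checked with quick arithmetic. A minimal sketch using the article’s own figures (the implied emission factors are derived here, not stated in the article):

```python
# Back-of-envelope check of the Bristol Airport figures quoted above.
annual_kwh = 17_000_000      # annual electricity use (kWh)
contract_years = 3           # duration of the agreement with Orsted
carbon_saved_t = 14_000      # tonnes of CO2 saved over the contract
car_miles = 34_000_000       # equivalent car miles quoted

# Implied grid emission factor for the electricity displaced (kg CO2 per kWh)
factor_kwh = carbon_saved_t * 1000 / (annual_kwh * contract_years)
print(f"Implied factor: {factor_kwh:.2f} kg CO2/kWh")          # ~0.27

# Implied per-mile emissions behind the 'average car' comparison
factor_mile = carbon_saved_t * 1000 / car_miles
print(f"Implied car emissions: {factor_mile:.2f} kg CO2/mile")  # ~0.41
```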
Data Centres Ireland Data Centres Ireland will provide key insights into the Irish data centre sector, exploring the opportunities and challenges in this region. The data centre market in Ireland is experiencing significant growth: an additional 29 data centres have received planning permission and are under development, and capacity is forecast to double from 600MW to 1,200MW in the next five years. Data Centres Ireland is also going from strength to strength. Last year saw 16 new data halls come online, taking the total to 53. This year’s conference will feature more than 60 speakers, including Matt Pullen, CyrusOne; Gary Watson, Keppel Data Centres; and James Connaughton, Nautilus DataCentres. datacentres-Ireland.com
Visit us on stand 37 at DCD London
Monitor, Manage, Maximize with EkkoSoft Critical
24% average cooling energy savings with an ROI of 10 months
• Unlock cooling energy savings in house by following Cooling Advisor
• Remove thermal risk and compare key estate-wide site performance metrics
• Proactive management of your power, space and cooling capacity
• Simulation of future IT and cooling loads with scenario planning for ‘What If?’ analysis
Find out more at www.ekkosense.com/video or book a demo www.ekkosense.com/demo T. +44(0)115 823 2664 E. info@ekkosense.com W. www.ekkosense.com
8
DATA CENTRE OPTIMISATION
Powering change: how can data centres up their game? Are current power systems fit for purpose and what are some of the key issues impacting data centre performance? Ed Ansett, from i3 Solutions Group, discusses the changes that need to be made to drive improvement in the sector. Louise Frampton reports
T
he Uptime Institute’s latest annual survey confirmed there is a need for improvement in terms of data centre resilience. Just over a third (34%) of all respondents to the 2019 survey had experienced an outage or severe IT service degradation in the past year and power loss was the single biggest cause – accounting for one-third of incidents. So, are current power systems fit for purpose and what are some of the key issues affecting data centre performance? Ed Ansett, of specialist data centre consulting firm i3 Solutions Group, warns that fierce competition in the data centre market has increased pressure to cut costs and the consequence has been an increased risk of failures. He argues that, in terms of quality of service, resilience, and sustainability, there are significant challenges ahead.
Skills gap and resilience According to Ansett, one of the key issues affecting the resilience of data centres is the current skills gap: “While there are courses and training available, there is a lack of large-scale vocational education. You cannot attend university or college and study data centre power systems. People in the industry, at all levels, are having to source their information, on this specialised area, through word of mouth, magazines and by attending the small number of courses currently available. One of the problems that we have in this industry is training, especially at operations level.
“I have recently been assisting with some legal issues, working on behalf of clients that have had substantial failures. Usually the cause is an
engineer making an error, due to a lack of knowledge. This skills shortage is a fundamental issue – it is endemic in the whole of the data centre industry,” commented Ansett. He believes that vocational, practical training needs to be made a priority, but attracting talent into the profession is
34%
of all respondents to the Uptime Institute’s 2019 survey had experienced an outage or severe IT service degradation in the past year
also a challenge. In the US, the ‘Salute’ scheme is harnessing the skills of military veterans and channelling them into the data centre sector. Ansett believes that this approach could have significant potential for creating a pool of talent. Time for a change? To drive improvement in the resilience of data centre infrastructure, Ansett also believes that there needs to be further discussion around power system topologies. “There are four main topologies in the data centre sector. A 2N power system topology is fault tolerant, so there is an A and B system – if A system fails, the B system will take over. They are two entirely separate power
systems, but there is a price to pay for having one system completely in reserve. “Distributed redundancy and block redundant topologies are quasi fault tolerant – they will survive most types of events but not all. There is also a fourth, rarer, topology – iso-parallel. However, over the last 10 years, the most popular approach has become distributed redundant – many colocation providers choose not to have 2N topology due to the cost,” Ansett explains. He believes current strategies around power topology require a radical change: “A business may have an array of IT services that vary in terms of criticality, but they all go into the same data centre or data hall with a single power
system SLA [service level agreement]. The approach used is too blunt. Either the power system is catering for the highest SLA, in an environment requiring multiple IT service levels, in which case there is over provisioning and services are costing more than they should, or the power system service level is somewhere in the middle, in which case there are some IT services receiving less than the required SLA. Either way, this is a problem,” comments Ansett.
“If you owned three cars – a sports car with an open roof, a mini and a 4x4 – you wouldn’t take your sports car out in the middle of winter up 1:4 hills. Yet this is what we are doing. I believe the data centre power system needs to be more granular – it needs to be able to match the IT service level,” Ansett explains. He believes adaptable redundant power is the holy grail for the data centre industry.
“There is an old saying, in reliability engineering, that you are only as strong as the weakest link in the chain. We must ensure all the links in the chain are of equal strength – whether it is the IT networks, compute, storage, or the power and cooling,” he continues.
Ansett is keen to point out that the move to software failover for data centres is not a sustainable trend, in terms of resilience, in his view: “Because of commoditisation, there is a race to the bottom in terms of the standard of design of data centre construction and I foresee a significant issue… The trend towards lower levels of resilience and software failover is going to be interrupted at some point… We need to bear in mind that adding more and more software platforms and increased failover complexity, will mean less reliability.”
On the other hand, Ansett points out that data centre power systems are often over specified and under-utilised: “The average utilisation is 30-40%. This is a huge waste of money and energy,” he comments.
Fierce competition in the data centre market has increased pressures to cut costs and increased the risk of failures
If you owned three cars – a sports car with an open roof, a mini and a 4x4 – you wouldn’t take your sports car out in the middle of winter up 1:4 hills. Yet this is what we are doing. I believe the data centre power system needs to be more granular – it needs to be able to match the IT service level
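To put rough numbers on the over-provisioning Ansett describes, the sketch below compares how much plant each common topology keeps in reserve per megawatt of IT load. The configurations and figures are illustrative assumptions, not data from i3 Solutions Group:

```python
# Illustrative reserve capacity for common data centre power topologies
# (assumed configurations, not figures from the article).
it_load_mw = 1.0

topologies = {
    "2N (full A+B systems)":        2.0 * it_load_mw,      # an entire second system in reserve
    "Distributed redundant (3N/2)": 1.5 * it_load_mw,      # three paths, each sized for half the load
    "N+1 (four modules of N/3)":    (4 / 3) * it_load_mw,  # one spare module
}

for name, provisioned in topologies.items():
    idle = (provisioned - it_load_mw) / provisioned
    print(f"{name}: {provisioned:.2f}MW installed, {idle:.0%} idle at full IT load")
```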
Learning from incidents In addition to tackling the skills gap and redesigning current approaches to topology, there also needs to be a change to the culture of secrecy within the data centre sector, in Ansett’s view. The Data Centre Incident Reporting Network (dcirn.org) was set up as a not-for-profit organisation to manage an independent, voluntary confidential reporting programme for data centre operators and personnel working in the data centre industry, in order to share information and thereby improve the safety and reliability of data centres and the services they provide. As a member of the executive committee, Ansett believes passionately that incident reporting is vital to promoting learning and driving improvement in the sector: “As society becomes more and more dependent on technology, the likelihood of outages having a significant human impact is inevitable. At some point, governments will step in and mandate the sharing of this information. “The Data Centre Incident Reporting Network charity is aiming to share anonymised
10
DATA CENTRE OPTIMISATION
insights across the industry, so that everyone can learn from the incidents investigated. I want this to always remain non-profit making and free of charge for as long as possible.” He adds that the emphasis is moving away from reporting ‘failures’ to reporting near misses: “By shifting the emphasis to near misses, I believe we will see more traction,” he comments.
“Some years ago, I was asked to investigate an incident relating to the Stock Exchange. At a de-briefing at the Monetary Authority I was asked the question: ‘Have you seen this failure before?’ I said ‘yes’, but when they asked for details, I had to reply that I was under an NDA. They were shocked – this wasn’t ‘intellectual property’ that shouldn’t be shared; it shouldn’t be kept secret. Of course, I understand their reticence to share information – it is an admission of guilt and, in some cases, negligence, but there are many cases where the sharing of information has done nothing but good.”
Demand-side response Ansett believes that there will be some movement towards engaging with demand-side response in the future, but the sector is conservative and reticent about making changes, so adoption will take time. “Once some of the big players in the sector come on board, this will be the tipping point,” he comments. Providers won’t entertain DSR for altruistic reasons, he points out. “To encourage participation, there needs to be a simple metric that people can understand which measures the benefit in terms of carbon savings, as well as an attractive financial return.”
Energy storage The DSR market will also be fuelled by developments in energy storage, according to Ansett. “There is a lot of innovation in this area and it is unclear which technologies will ‘win’. However, I like the concept of the solid oxide fuel cell and fuel cells generally, from an environmental and reliability standpoint. Battery and kinetic storage are also attractive, provided both are evaluated in terms of their sustainability,” he comments.
The skills gap – and lack of training at operations level – remains an issue
There is an old saying, in reliability engineering, that you are only as strong as the weakest link in the chain. We must ensure all the links in the chain are of equal strength – whether it is the IT networks, compute, storage, or the power and cooling
Energy efficiency He points out that the sector’s performance in terms of sustainability and energy management is also under scrutiny at present. The Uptime Institute’s most recent survey shows that improvements in energy efficiency have flattened out and even deteriorated slightly in the past two years, with the average PUE reported to be 1.67 in 2019. “If you drop litter, you expect to be fined. Organisations must be compelled to tackle inefficiencies; it shouldn’t be something they are ‘asked to do’ as part of a ‘guideline’. I believe we will reach a point where organisations will be mandated to ensure basic levels of efficiency.
“The major data centre operators are very aware of the issue and are doing something about it. However, there is only so much you can do at the data centre building level,” Ansett continues.
Embodied energy He believes the next big issue to hit the sector will be ‘embodied energy’. Embodied energy is defined as “the sum of the energy requirements associated, directly or indirectly, with the delivery of a good or service” (Cleveland & Morris, 2009). “This is a big piece of the puzzle that the industry is waking up to and it needs consideration. The problem is calculating it. Just because it is difficult, doesn’t mean we shouldn’t do it. It is going to be a large percentage.
“We talk about the energy passing through the data centre, but we are not talking enough about the materials being used to put the facility together in the first place.” Ansett concludes: “In the future, people will talk about how wasteful our generation has been.” ●
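PUE is simply the ratio of total facility energy to the energy delivered to the IT equipment, so the flat industry average of 1.67 implies that for every kWh reaching the IT load, roughly another 0.67kWh goes on cooling and other overheads. A minimal sketch of the calculation, with illustrative figures rather than numbers from the Uptime survey:

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by IT equipment energy."""
    return total_facility_kwh / it_kwh

# Illustrative annual figures for a small data hall (not survey data)
it_load_kwh = 4_000_000
overhead_kwh = 2_680_000   # cooling, power conversion losses, lighting, etc.

print(f"PUE = {pue(it_load_kwh + overhead_kwh, it_load_kwh):.2f}")   # 1.67

# What different PUE targets would mean for total facility energy on the same IT load
for target in (1.67, 1.5, 1.3):
    print(f"PUE {target}: facility energy {it_load_kwh * target:,.0f} kWh/year")
```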
12
Powering the miracle of life
Resilient power is crucial when performing complex foetal surgery and genetic diagnostics. Care New England is protecting its most precious patients – and its reputation – through Schneider technology. Louise Frampton reports
Care New England’s Women & Infants Hospital, in Providence, Rhode Island, is a centre of excellence in women’s health, offering a wide range of services in infertility treatment, breast care, gynaecologic cancer and prenatal diagnosis. One of the largest obstetric facilities in the US, the hospital has a first-class reputation for leading-edge, in-utero foetal surgery and genetic diagnostics. Whether performing operations to correct spina bifida or cardiac defects in the womb, or performing genome analysis for chromosomal abnormalities, the site is reliant on highly sensitive technology, operating 24 hours per day. Secure backup power is essential to ensuring safety during extremely challenging procedures, often performed on some of the smallest and most vulnerable patients – before they are born.
In the past, the Providence area has experienced significant challenges to the grid supply. The proximity of the hospital to the bay puts the site at increased risk of water and wind damage during extreme weather events and the area has been hit by some severe hurricanes and blizzards, leading to power outages in the area. It is not just the weather that threatens the stability of the power supply, however. Care New England’s research buildings, on Elm Street, are also located on the oldest part of the grid in Rhode Island. Four years ago, a major electrical fire under the street blew manhole covers across distances of more than 30 metres.
Ensuring resilient IT Given these challenges and the criticality of the data being generated both at the bedside and in the research labs, resilient backup power is vital. If power was lost, due to an event with the grid, the hospital would be unable to access imaging, bedside patient monitoring systems, electronic patient records and other IT-based systems, via the network. Power quality and resilience
is crucial to protect the data produced by the huge number of prenatal tests and scans undertaken to diagnose foetal conditions. The vital work undertaken at the bedside and in the research labs produces large volumes of prenatal ultrasound images, video clips, genetic screening results and other data, stored alongside the many thousands of electronic patient records. Most departments across Care New England have ‘downtime machines’ – physically secured and encrypted workstations that contain time-limited local copies of the patient records. These machines are protected by Schneider Electric technology and emergency power. In the
event of a systemic power failure coupled with a network failure, the clinical staff will continue to have access to the patients’ electronic health records. There are also two feeds from the grid to ensure resilience – if one supply goes down, the site can switch to the second. The emergency power generation can provide a minimum of two days’ backup power and is tested on a monthly basis by shutting down the mains power. Care New England’s critical infrastructure – including the UPS – is monitored via Schneider’s EcoStruxure software, offering real-time recommendations to optimise infrastructure performance and mitigate risk. The software also
helps identify issues from the outside power supply and can offer an insight, as part of a ‘look back exercise’, in the event of any problems. At the bedside Stephen R Carr is the director of the Prenatal Diagnosis Center and Maternal–Fetal Medicine Diagnostic Imaging. He explains that power quality and resilience is essential to the work he carries out. “At the Prenatal Diagnosis Center, we perform tests such as amniocentesis, as well as chorionic villus sampling [a prenatal test in which a sample is removed from the placenta for testing]. If the foetus has an accumulation of fluid in the
COVER STORY
bladder, I can insert a catheter, or if there is an accumulation of fluid in the chest, I can insert a chest tube. The hospital also performs spina bifida repair within the uterus. “These prenatal tests and treatments require the use of high-resolution foetal ultrasound, down to the submillimeter level. Using this technology, I can image the lens in the baby’s eye or the flaps inside the valves of a heart the size of a thumb. “But for this technology to work I need smooth, reliable power. In the past, I was able to operate for three hours on the UPS, when the substation was taken out by a helium balloon. Despite the outage, I was able to finish the ultrasound procedure and give the information to the patient.”
The Prenatal Diagnosis Centre currently uses APC Smart-UPS XL 3000 units on the majority of its ultrasound machines across the system, protecting the safety of the most vulnerable patients. Highlighting the criticality of edge IT for the hospital, Dr Carr adds: “Latency and resilient power are everything. I perform 20,000 patient consultations per year. Each patient study contains between 80-100 still images, as well as video, and I perform 50 of these per day. I produce terabytes of data, which must be stored indefinitely. I need to be able to access this data at all times and I constantly want more bandwidth.”
Protecting research data Several million dollars’ worth of clinical and research lab equipment across the hospital system is also protected by Schneider Electric products, as well as emergency power. This is helping to protect the state-of-the-art genome sequencing equipment installed inside the research labs. This building is located in the Knowledge District in Providence, an area that has the oldest and least reliable power grid in the state. Part of the Care New England group, the Kilguss Research Institute is home to the Centre for Perinatal Biology (CPB). The centre started as a National Institutes of Health-funded Center of Biomedical Research
Excellence (COBRE). It is now a self-supported centre undertaking advanced research on foetal development and reproductive medicine. The genetic testing laboratory undertakes important screening tests using cutting-edge micro array machines to detect the expression of thousands of genes at the same time. DNA micro arrays are microscope slides that are printed with thousands of tiny spots, each containing a DNA sequence or gene. Often, these slides are referred to as gene chips or DNA chips. A thousand times more sensitive than a microscope, the machine is capable of detecting small changes in the genome, including deletions. This can help identify genetic disorders such as spinal muscular atrophy, for example. Dr John Pepperell, who oversees the DNA testing
procedures at Care New England, comments: “While the machine is running, it is important to have no power interruptions, so the machine is hooked up to the reserve power. It is a very sensitive piece of equipment and it takes three days to process the test. “However, if the process is half way through a chip, and a power interruption results in a pause to the operation, uneven photobleaching can occur and the data will be unusable. Repeating the process significantly increases the cost of micro array testing.”
Smart UPS Schneider Electric’s technology is also ensuring uninterrupted power for the Women & Infants Division of Genetics (part of the Department of Pathology and Laboratory Medicine). The department uses three state-of-the-art Vanadis machines to extract foetal DNA, to screen for genetic disorders such as Down syndrome (trisomy 21), Edwards’ syndrome (trisomy 18) and Patau’s syndrome (a rare genetic disorder caused by having an additional copy of chromosome 13). Schneider Electric’s Smart-UPS (2200) units provide high density, true double-conversion online power protection for the genetics laboratory. Installed next to the Vanadis machines, the Smart-UPS units act as a backup, while the units’ advanced electric relays ensure that the supply of the electric current is stable. The technology can alter the voltage levels and maintain a constant flow, in case of a voltage fluctuation, and protect the connected loads from surges, spikes and other power disturbances.
The connectivity of the UPS to the network or cloud offers additional peace of mind, enabling remote monitoring of the status of the units. This is especially useful as the Vanadis machines operate overnight, when there are no laboratory staff present. Ultimately, unplanned outages can threaten patient safety, delay important tests and cause damage to a provider’s reputation. Resilient backup power, connected UPS and EcoStruxure software is giving Care New England peace of mind – ensuring reliable uptime and protecting its most precious patients. ●
Latency and resilient power are everything… I produce terabytes of data, which must be stored indefinitely. I need to be able to access this data at all times and I constantly want more bandwidth
14
VIEWPOINT
Blackout: who should be in the dock? Questions remain over the events surrounding the major blackout on 9 August. Ian Bitterlin gives his views on where the blame lies
The partial blackout in the UK on 9 August that cut off one million consumers and triggered chaos on the railway network is now getting lots of folks excited and the government is petulantly organising an enquiry involving National Grid (the energy system operator) and all of the DNOs, as well as Ørsted and RWE, the owner/operators of the two power plants that appeared to have triggered the event – Hornsea, a North Sea wind farm; and Little Barford, Bedfordshire, a 740MW CCGT (combined cycle gas turbine) plant, respectively. It was a windy day in the North Sea with a relatively low (and steady) national load demand of 28.2GW from 09:00 leading up to the first event – Little Barford tripping off load – at 16:52 when it was 29.5GW. The second event – Hornsea unloading 737MW – occurred just after Little Barford. Ten minutes after these events the national average load had dropped by 100MW – very probably due to some of the,
now stationary, electrified train network. To be pedantic, there were three reported individual events that the initial ESO report linked as one overall event, the first being a lightning strike. On the face of it, with the events being separated by over 100 miles, I thought that linking them as ‘one’ was ridiculous,
6%
The combined loss of capacity on the grid, resulting in the ESO automatically instructing DNOs to cut off load and reduce demand
but I concluded that it was more of a ‘system’ failure than two individual power station failures, so one event is the best way of looking at it. The lightning strike, which some commentators have labelled a red herring, probably happened; it was a warm 23°C humid August evening with 200mm of rain that day and with
numerous lightning events as usual but, regardless, ‘something’ had to trigger Little Barford to trip and it is more important to focus on why that mattered and why it caused Hornsea to come out in sympathy. Lightning is always a useful excuse that helps God to take some blame but strokes happen all the time. Usually, they only cause a section of the inter-meshed grid to trip and a customer group to lose voltage for three seconds or so, until the auto-reclosure system kicks in – rather than causing a whole station to trip offline. More detail might be helpful to understand the Little Barford trip, therefore. That day the grid was being fed with a very creditable fuel mix of 46% renewables (31%, 8.7GW, of wind and 13%, 3.8GW, of solar-PV, although the PV was falling as the evening drew on), only 1.7% from coal, 22% from nuclear and 29% from natural gas. The sequence of contributory events is then a matter of some conjecture, supported by the preliminary ESO report of 20 August: Little Barford tripped
off 244MW and reduced the online generation capacity by 0.8%. This would have produced a rapid rise in voltage and dip in frequency associated with a fast rate-of-change of that frequency (ROCOF) and, I think importantly, increased the percentage of renewables feeding the grid. Sometime around here, another 500MW (1.8% of system load) of embedded generation tripped offline as the grid voltage and its frequency fluctuated. As the frequency dropped towards the safety limit of 48.8Hz, Hornsea – incapable of ‘increasing’ the wind strength – reduced generation capacity by 737MW (2.7%), leaving just 62MW generating. Little Barford’s steam generator then shed another 244MW (0.8%) as its systems automatically reacted to the system frequency alarms. In a matter of seconds, rather than minutes, the combined loss of capacity was about 6% and the 94% capacity that was still connected did not have the overload capability to maintain the system frequency within
safe limits, resulting in the ESO automatically instructing DNOs to cut off load and reduce demand. There is some evidence that DSR and emergency frequency support was also called for by the ESO but, by the time it tried to work, 6% of the load had been shed and the frequency bounced back to 50Hz, leaving the emergency systems with nothing to do. Some, allegedly, tried to take load rather than support it.
I haven’t used the term ‘spinning reserve’ yet but the problem, once 6% was lost, was that there was not enough safety margin in the system (of capacity versus demand) to cope with the dynamic behaviour. This was aggravated by the high proportion of wind and solar that was running at the initial event – they are likely to be run at full capacity all the time that the wind blows and the sky is bright; this is a very different form of ‘spinning reserve’ as we know it from steam turbines and an inflexible response to load change. It was also certainly aggravated by the fact that the grid is currently undergoing an upgrade to the protection settings to enable the usage of high levels of intermittent renewables that are expected in the future – witnessed by an odd press release a few weeks ago announcing that the grid would be capable of ‘100% zero-carbon’ distribution by 2020, which means, I inferred at the time, that the grid is not ready yet. Maybe 46% variable/intermittent sources are the safe limit? There was a Danish paper (they are very often >40% wind powered) a few years back that questioned if >50% wind was ever possible while maintaining grid stability.
When writing this piece on 30 August, I checked the UK grid at 12.30pm and found 48% renewables (32% wind, 16% solar), our usual 19% nuclear, 20% CCGT, 7% biomass, no coal and 3% being imported via the interconnects, mainly French nuclear; almost the same fuel mix as 9 August, with a heavy reliance on low inertia intermittent sources. We should note two interesting points about the power stations involved: Hornsea is only at the start of its life, coming on stream in February this year with 28 turbines installed out of a planned 74 and it is on track to become one of the largest wind installations in the world at 6GW peak – so this problem can only get more severe. On the other hand, on 9 August, Little Barford needed a helping hand like a grid-scale battery and, somewhat coincidentally, it nearly had one. A trial 12MWe polysulphide bromide flow battery had been installed but it failed to demonstrate that it could be scaled up and the project was abandoned, since 12MWe is no good to anyone in a 740MWe plant.
It is interesting to note that the 6% drop in capacity was matched by approximately 6% of consumer disconnections. I wonder how they chose the victims? But, when it comes down to the nitty-gritty, it is the government and Ofgem that should be the prime suspects and in the dock. Why? Well, the blackout was caused by the unusual event of two power stations tripping offline at nearly the same moment in time. The grid automatically shut off load to protect the system (and the consumer) from dangerous voltage swings and low frequency. The fact that the second station was an offshore wind farm when the wind was gusting strongly could have added to the stability problem. But why did it trip the grid? Because the system at that instant had (and always has?) insufficient spinning reserve, so that the private owner/operators can save costs and increase profits for their stakeholders.
The government (past, but the present bunch of any political persuasion would probably do the same thing) sold off the electrical utility because it was inefficient and costly to run; no doubt also to bolster a flagging economy at the time. However, it was undervalued and sold off very cheaply compared with the investment made with tax-payers’ money over many previous decades. It was ‘inefficient’ due to ‘too many’ staff, ‘too much’ (ie the right amount of) preventive maintenance and ‘too much’ spinning reserve. It was too conservative… Well, the maintenance investment has gone down and the spinning reserve cut to below the bone when the wind blows hard – and we all saw the result. It is a rare occurrence but is likely to occur a little more frequently as we increase the proportion of intermittent renewables. If we want a utility that doesn’t fail, then the management of that utility should be based on engineering principles, not cost reduction. To get the highest possible contribution from intermittent renewable sources and move to a zero-carbon grid then we need either massive storage facilities or >40% nuclear generation to replace the gas, which has largely displaced coal already, or some combination of the two.
There were knock-on events that deserve comment: Ipswich Hospital reported that one out of 11 generators failed to support them but let us immediately remind ourselves that hospitals are not dependent on generators (or UPS) in life-safety areas and, in this case, only outpatients, X-ray and pathology were affected. A later statement blamed one switchboard auxiliary battery. However, it is important that generator system testing (in hospitals, just as in data centres) is an essential feature of the planned maintenance routines. Unlike data centres, for hospitals the testing
When it comes down to the nitty-gritty, it is the government and Ofgem that should be the prime suspects and in the dock... The blackout was caused by the unusual event of two power stations tripping offline at nearly the same moment in time
16
VIEWPOINT
is mandated by regulations according to health technical memorandum HTM 06-01, where it says, in section 17.64, ‘…include tests on the protection relays, battery units (on load), auxiliary relays, timer relays, coils, terminations and linkages forming the open/close mechanism…’ The hospital claimed to have followed this lengthy and detailed HTM but does not appear to have sufficiently tested the changeover switchgear battery/charger ‘on-load’, since, according to its statement, it relied upon the ‘recommended life’ of the batteries being OK. In data centres it is highly recommended to test generators on-load, including the switchgear changeover, every month since utility ‘failure’ is regarded as a ‘normal event’ and we know, from bitter experience, that batteries are a plant item whose service life is only a proportion of their design life, usually 80% at best but much lower if neglected. So blaming the utility for the failure of the hospital plant is not a reasonable argument and, more likely, a cut in maintenance and testing is a direct consequence of government cuts in NHS funding. Again, is the government to blame?
Similarly, the electrified train network needs to review its own power strategy rather than rely entirely on the grid. It is a remarkably bad engineering design that results in 60 trains (Govia Thameslink Railway in the South East) needing an engineer sent to manually ‘reset’ each one after a power cut. Network Rail needs to understand that electric trains are much less autonomous than diesel-electric and that their control systems need backup power, just like a critical infrastructure. There were 371 service cancellations and 220 partially cancelled after a 15-minute blackout. Maybe it is the same problem of privatisation reducing quality, to reduce costs and increase profits?
The role of DSR (by its very nature off-line) was clearly of no use in this type of unplanned and ‘instant’ event. If the grid trips, DSR cannot connect to it. In fact, if the grid is not reasonably stable then DSR in the form of standby generators will trip off-line to protect itself. If I was Ofgem I would be asking one question at a time and the first one would be ‘where was the output from Dinorwig’ – our 1.8GW pumped storage system (coincidentally, roughly equal to the 6% of load lost on 9 August) that was designed to cover for such events of demand peaks or supply troughs?
Now a governmental review? What a waste of taxpayers’ money; getting experts to explain to a bunch of amateurs that Ofgem and National Grid should control (command) the DNOs to ensure that enough on-line (not DSR) dynamic reserve capacity is in place, even if it costs the DNOs money. It is the privatisation of the utility (and the split of responsibility into parts, each of which is driven by profit, with clearly insufficient system responsibility) that lies at the heart of this blackout, and the others to come. Renationalisation is probably out of the question, but we certainly need a safe system and we have that. Maybe the price of electrical safety in a low-carbon future is a utility system that trips offline from time to time… but that can produce other safety issues if backup is missing. ●
It is the privatisation of the utility (and the split of responsibility into parts, each of which is driven by profit, with clearly insufficient system responsibility) that lies at the heart of this blackout
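The percentages in the event sequence described earlier can be roughly reproduced from the quoted megawatt figures (the small differences suggest the article’s percentages are taken against connected generation rather than demand), and the same numbers give a feel for why ROCOF mattered. A minimal sketch; the inertia constant is an assumption for illustration, not a figure from the ESO report:

```python
# Reproducing the capacity-loss percentages quoted in the article.
demand_mw = 29_500                     # approximate GB demand at 16:52 on 9 August
losses_mw = {
    "Little Barford steam turbine": 244,   # initial trip
    "Embedded generation": 500,            # tripped as voltage/frequency fluctuated
    "Hornsea": 737,                         # 799MW down to 62MW
    "Little Barford (further)": 244,        # shed as frequency alarms reacted
}
for name, mw in losses_mw.items():
    print(f"{name}: {mw}MW = {mw / demand_mw:.1%} of demand")
print(f"Combined: {sum(losses_mw.values()) / demand_mw:.1%}")   # roughly 6%

# Indicative ROCOF estimate: df/dt = dP(pu) * f0 / (2 * H)
# H (system inertia constant, seconds) is an assumed value, not from the report.
f0, H = 50.0, 4.0
dp_pu = sum(losses_mw.values()) / demand_mw
print(f"Indicative ROCOF: -{dp_pu * f0 / (2 * H):.2f} Hz/s")
```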
The August outage and UK data centres Jack Bedell-Pearce, CEO of 4D Data Centres, says planning for resilience was the key to ensuring uptime
As investigations are under way by Ofgem, National Grid and the generating companies, this is a great time for us to reflect, as a data centre operator, on what happened. The simple answer is: nothing. Customers in our data centre did not see a change in their power supplied, or the operation of their services. Due to having resilient UPS systems and onsite diesel power generators, whenever we see something happen on the National Grid which could impact our power supply, our automated systems fire up the generators to ensure a consistent and continual supply. Once a month, we do a full building blackout test and run on these systems for at least an hour. This test gives us the confidence that everything in that automatic chain – detection systems, transfer switches, UPS systems and generators – performs exactly as expected, with no panic or cause for concern. We also have an extensive planned preventative maintenance programme to ensure all of the equipment is in the best possible condition when needed.
At precisely 16:53:39, we saw the frequency (normally ~50Hz) drop below the normal tolerance of +/- 0.5Hz from our main grid connection. This has been backed up by the reports seen nationally and from our peer operators, with whom we discuss and share key information relevant to operational effectiveness. While we did not experience any issues at our data centres due to our resilient backup systems, it did impact a number of our employees who were travelling home on Friday.
There is a designated protocol referred to as ‘demand control’ (or disconnecting higher power usage customers from the network). You may also have heard this referred to as load shedding. For large power users in the UK, there is a set response that automatically causes them to disconnect their usage from the electricity supply should the frequency fall below a set threshold, defined in a technical standard referred to as G59/2. I am sure this will be reviewed as to the relevancy and ‘impact’ on the nation as a whole. For protecting the overall electricity network, disconnecting part of the railway network makes a lot of sense. However, there is a heavy impact from that disconnection in comparison to, say, data centres which have the appropriate backup power systems in place.
Ultimately, all of this will be going through an extensive review by all of the interested parties, and I hope this will guide future policy decisions on protecting the network. These rare events do occur (the last major incident was 11 years ago, in 2008) and act as a huge learning point for all involved.
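At its simplest, the ‘demand control’ behaviour described above is a relay comparing measured frequency against thresholds. A minimal sketch of that logic; the thresholds shown are those quoted in this issue, and real settings come from a site’s own G59/G99 protection study, not from this sketch:

```python
# Simplified illustration of under-frequency demand-disconnection logic.
NOMINAL_HZ = 50.0
NORMAL_BAND_HZ = 0.5      # +/- 0.5Hz normal operating tolerance quoted above
LFDD_TRIP_HZ = 48.8       # low-frequency demand disconnection level quoted in this issue

def assess(frequency_hz: float) -> str:
    if frequency_hz <= LFDD_TRIP_HZ:
        return "TRIP: disconnect demand block / shed load"
    if abs(frequency_hz - NOMINAL_HZ) > NORMAL_BAND_HZ:
        return "ALARM: outside normal tolerance, start backup plant"
    return "OK: within normal operating band"

for f in (50.02, 49.45, 48.90, 48.79):
    print(f"{f:.2f} Hz -> {assess(f)}")
```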
18
UNINTERRUPTIBLE POWER SUPPLIES
How healthy is our power infrastructure? Power Control’s Robert Mather looks at the implications of the August blackout and highlights the importance of applying best practice within hospitals, and other critical facilities, to avoid risk
F
ollowing the blackout on 9 August, National Grid confirmed that due to the fall in frequency of the system, it followed its planned isolation process by disconnecting selected segregated areas. Although regarded as standard procedure by National Grid, it has raised some questions. How have these areas been chosen? Is it done at random? Can the businesses affected by the scheduled shutdown be compensated? And to what extent can this be the acceptable safeguard? Satisfying the latter question, National Grid has recently issued a statement confirming that it is accelerating its plans to increase its backup power provision. With the 1,000MW reserve completely exhausted from the latest power outage, which drew 1,400MW of power from the energy system, the company’s priority is to avoid another ‘system shock’ at all costs. The firm has also admitted that “a lot of lessons have been learnt” from the recent blackout.
Critics believe that National Grid’s conflict of interest in safeguarding the nation’s electrical framework will be detrimental to the successful implementation of its power protection strategy. The company owns the UK energy infrastructure and has overall responsibility for balancing electricity supply and demand. It is also facing widespread criticism for its lack of foresight. Despite endless warnings that a large-scale blackout could occur, nothing has been done to mitigate the impact. Not only does National Grid need to take responsibility for increasing the amount of storage available but our power plants do too. It is already
acknowledged that sudden drops in power are becoming increasingly common and, as recently discovered, there is not enough in reserve. While backup power is being positioned on a large scale, in power plants up and down the country, businesses of all sizes should also be adopting the same backup power protection principles. UPS systems Society continues to embrace technology, which only means an even greater strain on our electrical infrastructure. Backup power solutions such as uninterruptible power systems (UPS) are fast becoming commodities with more and more homes and
It is not enough to simply plug in a basic UPS system. Careful consideration must be given to the size, location, configuration and internal structure of the UPS to meet best practice and guarantee patient safety
organisations including them within their IT and electrical infrastructures. A UPS system provides emergency backup power not just in the event of a complete power loss but also a constant clean power source to safeguard against sags, spikes and surges – events that are becoming increasingly common. Power Control is helping businesses across the country to implement more resilient power protection policies. Ensuring the correct UPS system is specified is a key part of Power Control’s role. It does not just address the initial backup power necessity but looks at the complete critical power path. This ensures that UPS capacity is measured appropriately to accommodate growth and achieve efficient total cost of ownership (TCO). Future proofing and understanding TCO form an essential part of power management strategies. It is important to analyse the complete life span of equipment
Businesses ignore the possibility of a concurrent failure to mains power (primary) and the backup generator (secondary) only to be thrown into darkness when a power cut happens
from initial purchase through to management and maintenance. Heightened demands on the grid have made this close analysis more critical than ever as a measure towards safeguarding against the increased vulnerabilities in the UK’s power source. Hospitals: the importance of HTM compliance The power outage on 9 August also highlighted the importance of applying best practice within healthcare trusts. It was reported that a prominent hospital and a number of other critical care facilities were left without power after a backup diesel generator failed to start. It could be assumed that these sites had not recognised the best practice guidance within HTM 06-01 as they were left vulnerable to the power failure. Potential failures of secondary power supplies are clearly overlooked compared with primary power supply failures. Businesses ignore the possibility of a concurrent
failure to mains power (primary) and the backup generator (secondary) only to be thrown into darkness when a power cut happens. To be HTM compliant a tertiary power solution, such as UPS, must be installed. For those with tertiary backup power provisions, it is important that the internal components meet the design set out in the HTM. It is not enough to simply plug in a basic UPS system. Careful consideration must be given to the size, location, configuration and internal structure of the UPS to meet best practice and guarantee patient safety. However, the batteries control the reliability of the entire system. To adhere to the HTM guidelines, the batteries should have a 10-year life expectancy to ensure the long-term security of function. UPS batteries require a suitable environment, as detailed in the manufacturer’s operating manual, to fulfil their life expectancy. Typically, the ambient temperature around the UPS should be 20°C with adequate ventilation and cooling. At 30°C, the life expectancy of a typical valve-regulated lead-acid (VRLA) battery is reduced to 50%, and to 25% at 40°C. A VRLA battery is recognised as being a near-zero-gassing battery by the HTM and so presents a lower environmental hazard to the UPS and surrounding area. It is also important to note that the VRLA battery must comply with the BS EN 60896 (21 and 22) standards with threaded insert connection posts and flame retardant case materials. Another UPS component mentioned in the HTM guidelines is the bypass
switch. These should be rotary locking switches located on the input. Furthermore, external battery DC isolators are required in hospital environments. These are ideally situated on the front of the cabinet or an accessible wall. Although isolation (zero-phase shift) transformers do not feature inside the UPS, they are essential to the overall infrastructure to prevent problems occurring when the input neutral is switched or broken. These transformers can be placed on the output. However, it is more beneficial for them to be installed on the UPS input. Consideration needs to be given based on the electrical infrastructure design. The UPS system itself should conform to the following standards:
• BS EN 62040-1
• BS EN 60146-1-1
• BS EN 61439-6
• Energy Networks Association’s G5/4-1
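The battery-life figures quoted earlier follow the familiar rule of thumb that VRLA service life roughly halves for every 10°C above the 20°C design ambient. A minimal sketch of that relationship; the halving rule is an industry approximation used for illustration, not a figure taken from the HTM:

```python
def vrla_life_fraction(ambient_c: float, reference_c: float = 20.0) -> float:
    """Approximate fraction of VRLA design life remaining: halves per 10C above reference."""
    excess = max(0.0, ambient_c - reference_c)
    return 0.5 ** (excess / 10.0)

design_life_years = 10.0   # the HTM life expectancy quoted above
for t in (20, 30, 40):
    frac = vrla_life_fraction(t)
    print(f"{t}C: {frac:.0%} of design life -> ~{design_life_years * frac:.1f} years")
```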
Ingenio Plus UPS
To meet the minimum requirements of redundancy, an N+1 configuration must be in place. Furthermore, the HTM requires each UPS to be sized with enough capacity to individually be able to fully support the whole load. For example, where the critical load is 100kVA, two UPS systems would be needed, each sized for the full 100kVA and each carrying an absolute maximum of 50% load. Although the updated 2017 edition of HTM 06-01 suggests that modular UPS systems can be used, further consideration is required. Modular redundancy is not treated as true redundancy due to there being multiple points of failure. Regular maintenance All too often dutyholders become complacent because a UPS is already installed. However, without regular maintenance, how do you know whether it is still effective and fit for purpose? The HTM covers some of the components within a UPS, but there are many more delicate electrical parts that all need to be working in harmony to provide the power when it’s needed. A popular choice of UPS system for hospitals is the Borri B9000 FXS, as it is fully compliant with all international product standards, can run off two power sources and the batteries come with a 10-year design life as standard. Identifying the correct UPS system for hospital facilities requires careful planning and expertise. With more than 25 years of experience in providing backup power solutions, Power Control is ideally placed to offer guidance and support for ensuring the healthcare unit’s critical power complies with HTM guidelines. ●
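To make the N+1 sizing rule described above concrete, the short sketch below works through the arithmetic for a nominal critical load. It is an illustration of the principle only, not a substitute for a proper HTM 06-01 design exercise; the 100kVA figure is simply the worked example used in the text.

```python
def n_plus_1_sizing(critical_load_kva: float, n_units: int = 2) -> dict:
    """Size an N+1 UPS installation in which each unit must be able to
    carry the full critical load on its own (the HTM requirement
    described above)."""
    per_unit_rating = critical_load_kva            # each UPS sized for the whole load
    normal_share = critical_load_kva / n_units     # load carried per unit in normal operation
    loading_percent = 100 * normal_share / per_unit_rating
    return {
        "units": n_units,
        "rating_per_unit_kva": per_unit_rating,
        "normal_load_per_unit_kva": normal_share,
        "normal_loading_percent": loading_percent,  # 50% for a two-unit system
    }

print(n_plus_1_sizing(100))  # two 100kVA units, each running at a maximum of 50% load
```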
ENERGY STORAGE
Energy storage and flexibility: key to resilience in uncertain times? While a UPS is sufficient to deal with short-term power failures, the installation of a behind-the-meter battery could deliver far greater value to mission critical facilities, argues GridBeyond’s Michael Phelan. He tells MCP how sites can increase resilience and generate revenues
In the days following the August blackout, National Grid described the outage as a “rare and unusual event”. This has raised an important question about the resilience of our networks during the transition towards a fully decarbonised grid by 2025. GridBeyond chief executive Michael Phelan explains: “As our grid becomes progressively decarbonised and ever more decentralised, large energy users – and especially those for whom 100% uptime is a must – should take more responsibility for their energy efficiency and resilience. “By participating in grid balancing programmes, businesses support National Grid not only by taking control of their energy demand and sustainability, but helping to digitalise the network and ultimately mitigating future power cuts.”
Towards net zero The increased level of renewables and low carbon sources, and a reduction in coal-fired power stations, affects the inertia on the energy network. In the context of power production, inertia is the stored energy that allows the system to carry on long enough after a fault occurs for the imbalance to be rectified by increasing power output or reducing demand. Inertia, required to protect the increasingly decarbonised network, can come, for example, from fast-acting batteries. National Grid explained that “following the event, the other generators on the network responded to the loss by increasing their output as expected. However, due to the scale of large generation losses, this was not sufficient, and to protect the network and ensure restoration to normal operation could be completed as quickly
as possible, a backup protection system was triggered which disconnects selected demand across GB.” It is important to note that in the aftermath of the blackout, multiple energy experts stated that there is no reason to believe that wind farms or other renewable generators are in any way more likely than traditional energy sources to disconnect from the grid. Phelan believes that the recent power failure will help to shape future policy and programmes for frequency management: “The energy transition is the way forward but how to achieve the net-zero economy requires careful consideration. “Regardless of the exact cause of the outage, one thing is certain: the UK needs more investment in grid balancing and battery technologies if a
2050 net-zero target is to be achieved.” Boosting resilience It is widely understood that installation of additional grid-scale battery projects increases inertia and network resilience, and helps to mitigate blackouts. Grid-connected energy storage has proven to be an effective solution in Australia, which like the UK is an island network. After a major blackout in 2016, Tesla installed a record-breaking 100MW/129MWh of lithium-ion batteries to help protect the network in the event of any future power issues. “The purpose of this battery was specifically to defend the power grid from trips like this after outages in the summer of 2016,” wrote the news site, Electrek. “When the Victoria power plant tripped [in 2017], the power grid’s frequency
began to drop – from 50Hz to below 49.80Hz. The battery responded before the original power plant completed its disconnection from the grid.” The interest in energy storage projects is growing in the UK, as not only experts but now politicians start to see them as a necessity. In his first speech as prime minister, Boris Johnson praised the UK’s battery manufacturing industry, pledging his strong support for energy technology projects. However, as noted by experts, so far the promises are not being followed by adequate actions, and there is a strong need for policy changes and a significant increase in investment. As National Grid works out how best to manage generation loss situations in the future, and while we wait for more battery projects to take off, businesses – and in particular those whose core business requires 100% uptime – should take resilience into their own hands with behind-the-meter battery installations. Protecting power “Last month’s blackout tested the contingency strategy of critical power sites. The vast majority of them avoided disruptions, thanks to onsite generators and batteries that dispatched in response to the power cut,” says Phelan. In most cases, a UPS is sufficient to deal with short-term power failures. However, the installation of a behind-the-meter battery that works in harmony with onsite assets delivers far greater value to the organisation. GridBeyond, a leading demand response and energy services provider, has developed an intelligent energy platform called Point. The technology enables businesses to access the hybrid
battery and demand network – a solution that combines industrial energy consuming assets with commercial batteries. The network significantly boosts assets’ and sites’ flexibility, increasing robustness and ensuring resilience against any issues on the grid network. “The unlocked flexibility can be further used to participate in grid balancing services, generating revenues and savings, boosting businesses’ environmental credentials and supporting National Grid with decarbonisation and digitalisation,” explains Phelan. “At the same time, the Point platform monitors energy assets on the site for any inconsistencies and malfunctions. If an issue is detected, a predictive maintenance alert is triggered, helping to prolong equipment’s life cycle and securing businesses’ operational continuity.” GridBeyond works with a number of critical power sites, both in the UK and Ireland, including Irish Water, Northern Ireland Water and NHS Royal Devon and Exeter. Jane Mellor, head of operational procurement at Northern Ireland Water, says: “We consider sustainability and climate change mitigation through decarbonisation as priorities that inform our decisions on the future direction of the business. As such, we are committed to using innovative approaches to energy management and new technologies to deliver water and wastewater services for the lowest environmental cost. “By working with GridBeyond, NI Water is demonstrating a continuing commitment to delivering high-quality services, while simultaneously enhancing natural and social capital.” ●
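For readers who want to picture the control logic involved, the following is a minimal, hypothetical sketch of a frequency-triggered dispatch rule of the kind described above. The 49.8Hz trigger is taken from the Australian example quoted earlier; the function name, thresholds and asset interface are illustrative assumptions, not a description of any vendor’s (including GridBeyond’s) actual platform.

```python
NOMINAL_HZ = 50.0
LOW_FREQ_TRIGGER_HZ = 49.8   # threshold quoted in the Hornsdale example above
HIGH_FREQ_TRIGGER_HZ = 50.2  # symmetrical assumption for high-frequency events

def dispatch_decision(grid_frequency_hz: float, state_of_charge: float) -> str:
    """Very simplified behind-the-meter battery dispatch rule:
    discharge to support the grid when frequency is low, charge when it
    is high, otherwise hold. Real platforms layer market signals, site
    constraints and ramp limits on top of a rule like this."""
    if grid_frequency_hz <= LOW_FREQ_TRIGGER_HZ and state_of_charge > 0.1:
        return "discharge"   # inject power to arrest the frequency fall
    if grid_frequency_hz >= HIGH_FREQ_TRIGGER_HZ and state_of_charge < 0.9:
        return "charge"      # absorb power to pull frequency back down
    return "hold"

print(dispatch_decision(49.75, state_of_charge=0.8))  # -> "discharge"
```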
More frequency response required? The blackout prompted a flurry of calls for the system operator to rethink its approach to frequency response procurement. According to consultancy Aurora, the blackout could lead to a rethink on how much reserve National Grid ESO procures through ancillary services such as frequency response. Per a briefing note: “This event highlights the need to place sufficient value on being able to manage frequency through ancillary services such as frequency response. Being prepared for every possible eventuality may be expensive, but we have seen that even short outages cause high levels of disruption and associated cost if key infrastructure such as airports, hospitals and railways are taken out. Currently about £170m per year is spent on frequency response – doubling this would add £2 to an average annual household bill. As renewable penetration increases and the expected opening of Hinkley Point C later in the 2020s adds to the largest infeed loss, requirements are set to grow significantly. National Grid must address this if it is to meet its stated aim of operating a zero-carbon grid by 2025.”
David Middleton, head of commercial delivery at Origami Energy, took a similar view. “We need to replace the inertia on the grid that is being lost through the closure or mothballing of large generating units. While some new power stations are being constructed, progress is slow. In the meantime, National Grid Electricity System Operator (ESO) should consider purchasing more frequency response,” he said. “Increasing frequency response capacity can be achieved quickly. We also need more energy assets that can provide fast response from storage and quickly turning demand on or off to balance frequency, supported by real-time visibility and control.”
Steve Shine, executive chairman at Anesco, a battery storage developer and operator, commented that the incident could have been foreseen: “It would be easy for National Grid to write this incident off as a fluke event, but they have actually been aware of this potential issue for many years. Indeed, it can be seen in their System Operability Framework publications and was referenced in their System Needs and Product Strategy document,” said Shine. “What is needed is a greater volume of faster response services. This would have prevented the need to turn the power off.”
outages, negating the potential risk of costly commercial, reputational and legal issues. However, it is vital that this does not become a tick-box exercise. Implementing a testing regime which validates the reliability and performance of backup power must be done under the types of loads found in real operational conditions. Ideally, all generators should at the very least be tested annually for real-world emergency conditions using a resistive-reactive 0.8pf loadbank. Best practice dictates that all gensets (where there are multiple) should be run in a synchronised state, ideally for eight hours but for a minimum of three. Where a resistive-only loadbank is used (1.0pf), testing should be increased to two to four times per year at three hours per test minimum. In carrying out this testing and maintenance, fuel, exhaust and cooling systems and alternator insulation resistance are effectively tested, and system issues can be uncovered in a safe, controlled manner without the cost of major failure or unplanned downtime. Inadequate testing The reality is, in many instances, that those in charge of maintaining backup power have no regular testing schedule, making an assumption that occasionally powering the generator up, or testing for a minimal period, will suffice. By not testing the system adequately, the generator is put at risk of failure. In the event of a power outage, like the one in August, the impact on businesses such as data centres can be enormously costly, while for hospitals, failed backup power could mean a threat to life. Capable of testing both resistive and reactive loads, a resistive-reactive loadbank provides a much clearer picture of how well an entire system will withstand changes in load pattern while experiencing the level of power that would typically be encountered under real operational conditions. The inductive loads used in resistive/reactive testing will show how a system will cope with a voltage drop in its regulator. This is particularly important in any application which requires generators to be operated in parallel (prevalent in larger business infrastructures such as major telecoms or data centres), where a problem with one generator could prevent other system generators from working properly or even failing to operate entirely. This is something that is simply not achievable with resistive-only testing. The recent power cut was the first of its magnitude in more than a decade and unforeseen vulnerabilities were exposed, throwing into question the resilience of the UK’s power network. While the government, regulators and power companies are working closely together to mitigate the risk of power failure to the country’s infrastructure, businesses for whom power is critical would do well to consider taking a more localised approach. At the very least, by having backup power in place and adopting a proactive testing regime, businesses are taking preventative action towards mitigating the catastrophic risk associated with power loss. ●
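As a practical aside, the 0.8pf figure mentioned above translates into separate resistive and reactive settings on the loadbank. The short sketch below shows the standard power-triangle arithmetic for a given genset rating; it is a generic illustration, not guidance from any particular loadbank manufacturer.

```python
import math

def loadbank_settings(genset_kva: float, power_factor: float = 0.8) -> dict:
    """Split an apparent-power rating into the resistive (kW) and
    reactive (kVAr) components a resistive-reactive loadbank would need
    to apply to exercise the set at the stated power factor."""
    kw = genset_kva * power_factor
    kvar = genset_kva * math.sin(math.acos(power_factor))
    return {"resistive_kw": round(kw, 1), "reactive_kvar": round(kvar, 1)}

# A 500kVA standby set tested at 0.8pf needs roughly 400kW of resistive
# load plus 300kVAr of inductive load to see representative conditions.
print(loadbank_settings(500))
```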
STANDBY POWER
It is important to ensure standby generators are maintained and tested on a weekly basis
Will the backup stack up? Finning UK & Ireland’s Jason Harryman provides some valuable advice on how to ensure your backup generation kicks in when it is needed, and what to do if it does not
The major power outage on 9 August saw almost a million people across large areas of England and Wales affected in what was the country’s most severe blackout in more than a decade. With Ipswich Hospital investigating why a backup generator failed to kick in during the power cut, many critical infrastructure providers such as data centres will now want to check that their equipment is operating as expected, should an incident such as this occur in the future. Testing times Whether for public safety, national security or business continuity reasons, mission critical facilities must remain operational at all times. Yet, because backup generators are designed to operate from standby for much of their life, it is important to ensure they are regularly tested. A routine testing procedure of backup generators should be in place. Indeed, for mission critical facilities, it is recommended that testing should be undertaken on a weekly basis. Mechanical components within the backup generator containing moving parts must be used frequently in
order to make sure they do not become inoperative and faulty. One element that it is critical to test is battery voltage. A measurement of the battery voltage during start-up will reveal whether any problems are potentially on the way. For example, if battery voltage is too low, then a backup generator may not be able to start quickly enough in the case of a power outage, which could lead to serious repercussions. Inspecting any issues Do not overlook how important it is to recognise and act on any unexpected issues that may be identified by the backup generator’s controller. Check regularly that no reporting faults have been identified. If they have, then deal with these as a priority. It is critical that any potential issues that the system’s controller might identify around the standby temperature, for example, are investigated. A hot
engine for standby is needed, as it will then deliver load better than from a cold start should sites be faced with a power outage. Generators designed to operate from standby will only come online in the event of an emergency. Therefore, they are not in regular use and may not be subjected to the same stringent inspection regimes as other capital plant. When not in use, for instance, a backup generator’s fuel can become a common issue if preventative measures are not taken, as fuel can become contaminated by water condensation, dirt ingress or rust over time. This can lead to filter blockages, or premature wear of fuel injectors or pumps. As a result, it is crucial that the appropriate equipment inspections are taking place. Is the SLA fit for purpose? A service-level agreement (SLA) means critical infrastructure
providers can be confident that they can rely on repair and maintenance expertise from a trusted supplier, so backup generators will remain operational no matter what the circumstances are. This provides sites with assured peace of mind, as well as fixed budgeting costs. Nevertheless, it is critical that the SLA is fit for purpose, at an appropriate level to meet demand. Many believe that their SLA will automatically cover emergency call-outs, which is a common misconception. Many will also have been tempted to opt for a more cost-effective SLA, which might not provide the site with the repair and maintenance support needed. This will often be due to the belief that they might have the in-house skills and capabilities to deal with any potential generator issues, and the decision has been made as part of a cost-saving exercise. Therefore, it is always recommended that operators check the terms of their SLA and ensure it meets their site’s demands. Users should seek a trusted partner with a strong track record of delivering reliable backup systems, which considers each site’s individual requirements. Backup generators require regular maintenance and testing to ensure they are operating properly, and this should be supported by a suitable SLA. By taking these steps, critical infrastructure providers can be safe in the knowledge that they have taken every precaution and have the right provisions in place should a power outage occur – such as the one recently experienced in large areas of England and Wales. ●
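The two warning signs discussed above, low battery voltage during start-up and overdue weekly tests, lend themselves to simple screening in a monitoring system. The fragment below is a hypothetical illustration only; the 12V nominal system and the thresholds shown are assumptions for the example, not manufacturer limits.

```python
from datetime import date, timedelta

MIN_CRANKING_VOLTAGE = 9.6              # assumed alarm level for a nominal 12V starter battery
MAX_TEST_INTERVAL = timedelta(days=7)   # weekly testing, as recommended above

def review_generator_test(last_test, cranking_voltage, today=None):
    """Return a list of warnings for a standby generator based on its
    most recent test record (dates as datetime.date, voltage in volts)."""
    today = today or date.today()
    warnings = []
    if today - last_test > MAX_TEST_INTERVAL:
        warnings.append("Weekly test overdue - run the set on load and log the result.")
    if cranking_voltage < MIN_CRANKING_VOLTAGE:
        warnings.append("Battery voltage sagged during start-up - inspect or replace batteries.")
    return warnings

print(review_generator_test(date(2019, 9, 20), 9.2, today=date(2019, 10, 1)))
```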
BATTERY STORAGE
Importance of battery choice for UPS performance
Mark Coughlin, from EnerSys, discusses the demands being placed on UPS batteries within the data centre sector. He argues that operators should consider using thin plate pure lead as an alternative to traditional lead acid and lithium-ion to optimise UPS performance
Data centres today experience a rising incidence of power outages and grid fluctuations caused by increased urbanisation and demand. Meanwhile, their workload is expanding, with a move to multi-user hosting services and larger data storage capacity requirements. These factors increase pressure for ‘best in class’ technologies and reliable power. UPS batteries are also directly impacted by reduced autonomy times, now typically between 30 seconds and five minutes, compared with historical averages of about 15 minutes. This is because of the shorter times needed to start up generators and switch loads.
Fast recharge times are also desirable, allowing batteries to be recharged quickly in order to be able to support further power outages.
Energy efficiency has become an overarching concern for all data centres, not just because of the financial impact of large-scale operation
and rising energy costs but also due to pressure from stakeholders – and legislation – to pursue effective carbon footprint reduction policies. Concerns about energy costs and grid power availability are driving growing interest in using UPS battery assets for energy storage applications, as a way to generate further revenue. In firm frequency response applications, for example, UK-based data centres could provide battery energy back to the National Grid on demand. Alternatively, the batteries could be used for peak shaving, reducing data centre energy costs by supporting loads when the electricity cost is high and then recharging when
low-cost electricity supply is available. Such strategies can bring significant cost savings, and generate money when supplying energy back to the grid. However, they demand longer battery autonomies than the five minutes typically needed for UPS backup. Currently, there are relatively few active sites deploying this strategy. Nevertheless, manufacturers such as EnerSys have conducted trials with batteries that can support these applications. Technologies and trends Battery chemistries currently available for UPS backup
include lead-acid, lithium-ion (Li-ion) and nickel-cadmium. There are also non-battery technologies such as flywheels and super-capacitors. This article focuses on the two types that currently dominate the data centre industry: lead-acid, which represents more than 90% of the UPS market share; and Li-ion, which is attracting increasing interest due to its purported performance benefits and high visibility through its use in electric vehicles. Li-ion is attracting interest through being attributed with performance features superior to traditional valve-regulated lead-acid (VRLA) batteries, which are typically either gel or absorbent glass mat (AGM) designs. Compared to traditional VRLA equivalents, Li-ion offers a high cycle life, together with a significant size and weight reduction. Li-ion batteries also have high charge efficiency, with excellent partial state of charge tolerance – in fact, partial charge is preferred for long cycle life and operation in float conditions at full state of charge. The self-discharge rate of Li-ion is also low, which results in prolonged shelf life when in storage. Finally, it has good high and low temperature performance, and no gas emissions.
EnerSys batteries in a data centre UPS application
However, Li-ion’s comparison with traditional VRLA reveals some challenges along with its benefits. Accordingly, we show how thin plate pure lead (TPPL) technology, as an advanced form of lead-acid chemistry, offers a number of advantages over traditional VRLA batteries. Despite historical cost reductions, Li-ion pricing remains a barrier for many users. With pricing depending on many factors including purchase volumes and the exact chemistry used, Li-ion is currently significantly more expensive than lead-acid. Furthermore, although space-saving may be important within data centres, weight reduction, which Li-ion batteries offer, is seldom critical. Similarly, the high cycling capability of Li-ion is not a driving factor for selection within UPS applications, where batteries are mostly floating at near full state of charge. While considered a safe technology, any Li-ion solution, unlike lead-acid, must include a battery management system (BMS) to ensure safe charging and discharging. This increases complexity, and requires users to have a thorough understanding of Li-ion technology. However, the BMS provides built-in diagnostics, which identify most problems and allow minimal maintenance. Additionally, consideration
must be given to the mean time before failure (MTBF) of the electronic components factored into Li-ion calendar lifetime calculations. Lifetimes of 15 years are claimed, but service life is not proven in the field. By comparison, advanced TPPL, with 12-plus years’ design life, provides eight to 10 years’ service life, while traditional VRLA 10-year design life batteries typically provide five to six years’ service life. Charging is another important consideration. Firstly, to fast-charge Li-ion, higher charging capacity, with increased cost, may be required. Also, in many cases the charging architecture would need to be replaced or changed to support different Li-ion battery charger voltages, so two different UPS rectifier types would be required across a data centre attempting to deploy both Li-ion and lead-acid batteries. Other factors, while not immediately specific to the data centre environment, should also be considered when selecting a battery technology. During transportation, Li-ion faces legislative shipping restrictions, while lead-acid batteries, including AGM and TPPL, are classified as non-hazardous for all transportation modes. Then, at end-of-life, lead-acid has an inherent value and is about 95% recyclable by a very well-established network of smelters; this possibility, however, is not mature for Li-ion. Optimised performance We have seen why Li-ion, while attracting increasing attention, has been slow to penetrate the data centre market. Ongoing development driven by the powerful automotive sector may change this, but advanced TPPL technology offers data centre managers the best of both chemistries. As a lead-acid-based battery technology, TPPL is reliable, well-proven and easy to transport, handle and recycle. Crucially, advanced TPPL technology significantly improves energy efficiency, by providing up to 43% energy reduction compared with traditional VRLA batteries through reducing float current requirements. Further energy savings accrue as it can operate, within warranty, at elevated temperatures, reducing air-conditioning requirements. Meanwhile, advanced TPPL battery technology reduces data centre vulnerability to multiple mains blackouts, through very short recharge times and time to repeat duty. Battery replacement costs are also reduced through low internal corrosion rates, yielding a service life 25% longer than for traditional VRLA. Additionally, storage life is increased from six to 24 months due to low self-discharge rates. Advanced TPPL technology is used today in many demanding critical applications.
The DataSafe XE range of batteries is designed for UPS applications
Data centre users can access TPPL through DataSafe XE batteries, which are specifically designed for UPS applications. They support autonomies of under five minutes, while offering all the above TPPL features. What of the future? Lead-acid technology is expected to dominate the market for at least the next few years, although enquiries and niche projects suitable for Li-ion will continue to grow. In particular, applications requiring high cycling will be seeking advanced TPPL or Li-ion solutions. Depending on the application, Li-ion could be the preferred battery type. Nevertheless, before opting for Li-ion as the technology for a particular application, a full consideration of the requirements should be undertaken. The assessment should reflect the total cost of ownership, with the benefits and challenges of Li-ion compared against other available technologies, including TPPL. Irrespective of the technology chosen, battery monitoring systems will become increasingly popular, due to the battery condition visibility and opportunities for predictive maintenance that they provide. This will also bring UPS applications into the increasingly pervasive Internet of Things environment, making them visible as components of the larger data centre infrastructure. ●
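One simple way to put the service-life figures quoted above into a total-cost-of-ownership context is to count how many battery replacements each technology implies over the life of the UPS installation. The sketch below does only that arithmetic, using service lives taken from the ranges in the article; purchase prices, labour and disposal costs are deliberately left out because they vary by project, and the 15-year installation life is an assumption for illustration.

```python
import math

def replacements_needed(ups_life_years: float, battery_service_life_years: float) -> int:
    """Number of complete battery replacements needed over the UPS life,
    assuming the original string is installed at year zero."""
    return max(0, math.ceil(ups_life_years / battery_service_life_years) - 1)

ups_life = 15  # years, assumed installation life for the example
for name, service_life in [("Traditional VRLA", 5.5), ("Advanced TPPL", 9.0)]:
    # service lives taken as mid-points of the ranges quoted in the article
    count = replacements_needed(ups_life, service_life)
    print(f"{name}: {count} replacement(s) over {ups_life} years")
```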
POWER QUALITY
Understanding the UPS’s role in power quality management
Alex Emms, operations director at Kohler Uninterruptible Power, explains the UPS’s vital role during mains presence, as well as in blackouts, and shows the importance of selecting online systems
Most people in IT are aware that UPSs use batteries to seamlessly take over the critical load if the incoming mains supply fails. While this is certainly true, UPSs play an equally important role in managing power quality when the load is being supplied from the utility grid. This is because any mains supply is liable to many types of disturbance apart from total blackout – and these have the potential to damage or destroy any unprotected sensitive load. Data centre loads are typically described as critical, both in the sense of their availability to the application relying on them, and of their own dependence on high-quality power at all times. Online processing and e-commerce applications are good examples of this, because of their obvious requirement for uninterrupted availability. Other equally critical applications include data processing computers, precision manufacturing equipment, medical devices in applications such as life support and patient monitoring, telecommunications network equipment and point of sale (POS) terminals. Possible consequences A power supply transgression’s immediate effect could be to cause an equipment failure arising from component damage. However, even if the supply problem only halts the load rather than damaging it, the consequences can still be serious. An unexpected hardware stop will cause a software crash, leading to data loss or corruption. Business transactions will be interrupted and lost, exposing the enterprise to wider financial implications and loss of reputation. In April 2015, for example, a data centre in Oregon belonging to Legacy Health System, a nonprofit hospital system, suffered a power outage following a power surge; this caused a complete shutdown of the organisation’s servers, information network, and access to clinical and electronic management systems. The power outage was caused by a contractor drilling into a cable. In a manufacturing environment, the results could be equally, if not more, serious; control systems could be driven into inappropriate operation, for example. Both production equipment and product can be damaged, with time needed for cleaning up as well as repairs. So, how serious is the threat to your particular installation, and what can you do to mitigate it? The answer depends on three factors: the type of power disturbances that could appear
on your mains supply; the type of equipment to be protected and its susceptibility to these problems; and the steps you take to provide protection. Let’s look at these factors, see how they interact, and draw some conclusions on providing protection appropriate to your circumstances. Power problems Figure 1 shows the problems most commonly experienced. Spikes are short duration rapid voltage transitions superimposed on the mains
waveform. They can inflict both positive and negative voltage excursions, damage or destroy electrical and electronic components, and corrupt software. Software problems may be particularly difficult to track down and rectify, as they may not show until some time after the damage occurred. Spikes are typically caused by thermostats or other equipment switching high electrical currents, or load switching by power companies. Locally grounded lightning strikes are without doubt the most serious and dramatic cause of spikes, particularly when induced into telecommunications cables.
Figure 1: Power problem summary
Electrical noise caused by disturbances between the supply lines and earth is called common mode noise. Conversely, normal mode noise, which arises from disturbances between line-to-line and line-to-neutral, can originate from sources such as lightning strikes, load switching, cable faults and nearby radio frequency equipment. Electrical noise can cause computers to hang and corrupt data. Surges
are voltage increases above normal mains levels that exceed one cycle. They typically appear after a large load is switched off or following load switching at substations. With long duration, voltage surges can degrade a computer’s switched mode power supply components and lead to premature failure. Sags are drops in the mains supply that can last for several cycles. They are generated similarly to negative spikes but have a much longer duration. Sags are very common occurrences that are usually
the result of switching on large loads like air conditioning equipment, or starting rotating machinery. Sags can cause a computer reboot if the mains voltage falls so low that the computer believes it has been switched off. Harmonics are generally caused by non-linear loads which draw large peak currents from the mains supply. Loads containing controlled rectifiers, switched mode power supplies, or rotating machines are particularly noted for generating this type of interference. These include computers, photocopiers, laser printers and variable-speed motors. Harmonics cause a disproportionate rise in current, resulting in increased temperatures which can cause component failure and general equipment overheating. Brownouts are identical to sags but are of much longer duration and generally more serious. They arise when the mains supply cannot cope with the present load demand, so the generating company drops the overall network voltage. Brownouts can last for several hours in extreme circumstances. Blackouts are complete power losses, where the mains supply fails totally. Caused by supply line faults, accidents, thunderstorms and a range of other conditions, they have an obvious, sometimes devastating effect.
Equipment susceptibility Computers typically have specified upper and lower limits for steady state slow averaged rms line voltage variations of between +/-5% and +/-10%, depending on the manufacturer, but will tolerate short duration line voltage excursions outside those limits. The shorter the duration of the excursion, the greater the excursion which can be tolerated. Some computers have sufficient energy stored in their internal power supply reservoir capacitors to sustain the DC supply to logic circuits during line voltage sags and power line interruptions of up to a 1/2 cycle (10ms), although not all units have this much ride-through capability.
The PowerWAVE 9250DPA UPS from Kohler Uninterruptible Power
UPS protection From the above we can see that computer equipment resilience to mains disturbances is very limited; managing the mains supply power quality at all times is essential. The most important measure is to select a UPS with online double conversion topology, as shown in Figure 2. This provides the highest level of power protection because it positions the rectifier and inverter as barriers between the supply and the load; these remove mains-borne noise and transient voltages. In fact, the load is not connected to the mains supply at all – instead, it is driven by a pure, well-regulated sinusoidal output from the inverter. Importantly, the inverter maintains its level of supply regulation even when it is operating from the UPS battery during a power failure.
Figure 2: Online double conversion UPS topology
As well as filtering out events such as spikes, the UPS also protects against excursions beyond a preset voltage range caused by surges or sags. It does so by switching to battery power, as it would for a complete blackout. Externally connected components can complement the UPS’s protection role. Radio frequency noise interference and spikes can be substantially reduced by fitting suitable filters and some form of isolation transformer in the supply line. Surge suppression components can also be fitted. Above, we have seen how a critical load can be threatened as much by live utility mains supply problems as by blackouts. It also becomes clear that to provide true UPS protection against all potential supply contingencies, selecting a system with dual online topology is essential. Users then enjoy both protection from mains aberrations, and battery autonomy during blackouts. ●
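The susceptibility figures quoted above can be turned into a simple screening rule for logged supply events. The sketch below is a generic illustration using the ±10% steady-state band and the 10ms (half-cycle) ride-through figure from the text; real equipment should be assessed against its manufacturer’s published tolerance curve (for example an ITIC/CBEMA-style curve), not this simplification, and the 230V nominal is an assumption for the example.

```python
NOMINAL_V = 230.0               # nominal single-phase supply voltage (assumption for the example)
STEADY_STATE_TOLERANCE = 0.10   # +/-10% band quoted above (some equipment is only +/-5%)
RIDE_THROUGH_MS = 10.0          # half-cycle interruption ride-through quoted above

def event_is_tolerable(voltage: float, duration_ms: float) -> bool:
    """Crude screen for a logged supply event: within the steady-state band
    it is always tolerable; a complete interruption is tolerable only if it
    is shorter than the ride-through time; anything else is flagged."""
    deviation = abs(voltage - NOMINAL_V) / NOMINAL_V
    if deviation <= STEADY_STATE_TOLERANCE:
        return True
    if voltage < 0.1 * NOMINAL_V:            # effectively a complete interruption
        return duration_ms <= RIDE_THROUGH_MS
    return False                             # out-of-band sag or surge of significant duration

print(event_is_tolerable(195.0, duration_ms=200))  # deep sag lasting 200ms -> False
```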
COLOCATION DATA CENTRES
KAO sets its sights on life science market Kao Data has secured a 1.5MW contract with EMBL-EBI, a leader in genomic sequencing, and has started its second phase of campus technology deployment
Kao Data has signed a new customer contract with EMBL-EBI, a not-for-profit, international research infrastructure and global leader in the storage, analysis and dissemination of large biological datasets. This is a key development in Kao Data’s strategic goal to become the leading provider of compute capability to the UK’s life science community, based along the London-Stansted-Cambridge corridor. The EMBL-EBI contract will initially utilise 1.5MW of capacity across six technology cells within Kao Data London One’s first technology suite (TS01), offering the ability to scale quickly into the new development in TS02 as future demands require. EMBL-EBI provides some of the world’s most comprehensive open access biological data, used by millions of researchers in academia and industry globally. To date, its data centres store more than 270 petabytes (277,000 terabytes) of raw storage; an amount that continues to grow daily, as new information from life science research and genomic sequencing aids scientists in the quest to understand our world and cure mankind’s most life-threatening diseases.
Although a research organisation with global reach, EMBL-EBI requires local data centre resources and Kao Data’s London One facility is situated a short distance away along the M11. This provides easy access for the research institute’s data centre engineers, ensuring continuity of in-house support, uptime for data users and a significant saving in operating expenditure. Additionally, the location of the Harlow campus enabled decision makers to visit the site and build relationships with the Kao Data team, who demonstrated the facility’s state-of-the-art design and OCP-ready capabilities. Its ultra-low PUE, reduced carbon footprint and commitment to 100% renewable energy sources provide a highly energy efficient environment with a
low cost of operation, which was an equally important factor in the selection process. “The biological data we store and share through our data resources are used by life science researchers all over the world to power new discoveries,” says Steven Newhouse, head of technical services, EMBL-EBI. “As such, data centre space, physical security and infrastructure availability were critical in our decision-making. Kao Data provided a thorough, detailed and consultative approach, which satisfied our technical and operational requirements. Its team also demonstrated an in-depth understanding of the needs of the life sciences. Their offer gave us the ability
to quickly scale within a single campus, which is another key benefit in this data-driven environment.” Kao Data CTO Gerard Thibault adds: “EMBL-EBI is an incredibly important customer and marks a significant achievement in our strategic plans, demonstrating that Kao Data has a technically advanced, highly sustainable and energy efficient solution that provides key benefits to the life sciences community. “Our ability to exceed expectations and secure the contract against incumbent suppliers further illustrates that Kao Data’s technical abilities, our resource and processes surpass legacy data centres in the market.” ●
Advertorial
Put your motors to the test A modest investment in equipment for testing electric motors will ultimately save you a lot of time, money and inconvenience
Motors play a critical role in industrial processes, which means that if you are having problems with one of your motors, you need to be able to diagnose the trouble quickly and accurately to keep costly downtime to a minimum. Additionally, according to the Carbon Trust (carbontrust.com), motors and drives use almost two-thirds of the energy consumed by industry in the UK so it is good practice to test motors regularly to ensure that they are in good condition and operating reliably. If you make a modest investment in a good motor test set, it will pay for itself many times over. But what exactly is a good motor test set and which tests should it offer? Some of the most useful tests related to motors are those that check insulation resistance, as insulation degradation is one of the most common problems found in motors. Three types of test are used for checking
motor insulation: the standard insulation resistance test (IRT), the polarisation index (PI) test, and the dielectric absorption ratio (DAR) test. IRT is basically an instantaneous measurement where you apply the test voltage and the result is displayed immediately. This is very useful for a quick indication of insulation condition. The PI and DAR tests involve measuring the way the insulation resistance changes when the test voltage is applied for a period of time – 10 minutes for PI and 1 minute for DAR. These tests take a little longer to perform but the results are more informative, and you can compare them directly with results from ‘typical’ motors. Your test set should support all three of these options, with automatic timing and computation of results for the PI and DAR tests. For dependable results, the test voltage should be stabilised, and the test set should have a guard terminal
to eliminate the effects of surface leakage. The results should also be automatically temperature compensated. You will also want the facility to measure low resistances, so that you can check connections and bonding as high resistance connections often lead to heating and failures. To
ensure accurate results irrespective of the length of the test leads, the test set should use a four-wire Kelvin measuring technique. Other facilities you will need include a phase rotation check so that you can be sure motors will rotate in the correct direction, along with basic functions for measuring capacitance, inductance and continuity. And, of course, to ensure that the test set is safe to use even in demanding conditions, you will want a CAT III 600V safety rating. A test set that meets all these requirements is the new MTR105 from Megger. It provides all of the features discussed and many others in a single robust handheld instrument that is fast and easy to use. It has an IP54 ‘weatherproof’ ingress protection rating and a tough, shock-resistant over moulding, features that make it suitable for use in even the toughest conditions. As an added bonus, it can store 256 test results for later downloading so there is no need to scribble the results on that easily lost scrap of paper. If you want to keep your motors in tip-top condition and minimise costly downtime, a good motor test set is an investment well worth making, and you won’t find a better option than the new Megger MTR105. Simon Wood, Megger Distribution Manager
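For reference, the two timed ratios described earlier are straightforward to compute once the insulation-resistance readings have been taken. The sketch below uses the conventional definitions (PI as the 10-minute over 1-minute reading, DAR as the 1-minute over 30-second reading); the readings and pass thresholds shown are commonly quoted rules of thumb for illustration, not Megger-specific limits.

```python
def polarisation_index(ir_10min_mohm: float, ir_1min_mohm: float) -> float:
    """PI: ratio of the insulation resistance after 10 minutes to the
    reading after 1 minute of applied test voltage."""
    return ir_10min_mohm / ir_1min_mohm

def dielectric_absorption_ratio(ir_60s_mohm: float, ir_30s_mohm: float) -> float:
    """DAR: ratio of the 60-second reading to the 30-second reading."""
    return ir_60s_mohm / ir_30s_mohm

pi = polarisation_index(ir_10min_mohm=4200, ir_1min_mohm=1500)
dar = dielectric_absorption_ratio(ir_60s_mohm=1500, ir_30s_mohm=1100)
# A PI above about 2.0 and a DAR above about 1.25 are commonly treated as
# healthy insulation; values near 1.0 suggest moisture or contamination.
print(f"PI = {pi:.2f}, DAR = {dar:.2f}")
```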
For more information, visit uk.megger.com or email info@megger.com
ONSITE GENERATION
World in union: Powering the Rugby World Cup
Aggreko is providing reliable power for the Rugby World Cup in Japan, ensuring uninterrupted enjoyment of one of the biggest tournaments in history
The Rugby World Cup is being held in Asia for the first time in 2019, marking a historic year for both the host nation, Japan, and the sport. With 1.8 million tickets available for the event, 600,000 of which have been purchased by international fans, this year’s tournament is set to be one of the biggest ever. With spectators travelling to Japan from more than 170 countries to attend matches live, and millions more watching the action at home, the need for reliable power is a priority. Fans expect to be able to enjoy the tournament without interruption, whether they are in the stadium or watching in their living rooms. Drawing on its experience in delivering major events globally, Aggreko was appointed to provide power to the 12 match venues, as well as winning the contracts to power both the international broadcast centre – which will provide television and digital coverage to a record global audience – and the domestic broadcast centre. Aggreko is no stranger to the challenges associated with powering high-profile, global events. In the past year alone, it has deployed solutions to power Glastonbury, golf’s Open championship and Solheim Cup, and the Commonwealth Games, to name just a few. That said, each event presents its own
unique challenges, requiring careful planning and bespoke solutions. Aggreko has worked in close partnership with the Rugby World Cup organisers in order to deliver reliable and efficient
power that meets their needs and priorities. With the tournament taking place at 12 separate venues across the length and breadth of Japan over a six-week period, the event itself provides
a significant logistical test. The venues span a distance of more than 2,000km, with each having unique environmental challenges. Each individual site also needs to have the right infrastructure and requires engineers to be on hand 24/7 to manage any potential issues as and when they arise. Many of the stadia are also located in densely populated, urban areas. To address the concerns of the event organisers, Aggreko had to consider how it would reduce the noise of the equipment being deployed. Coupled with these challenges has been the need to identify solutions that meet strict Japanese regulatory standards. When scoping out the project, Aggreko also had to ensure that the equipment would abide by the specific permanent venue restrictions, which differed from stadium to stadium. Power solution Aggreko approaches each project with fresh thinking – the same solution is never deployed twice. Its team of skilled engineers recognise the priorities and differences for each project and create bespoke solutions to suit specific needs. Over the course of the tournament, Aggreko is deploying a total of 32.5 MVA across the 12 venues and broadcast centre. This involves the deployment of 71 generators, serviced by 615 distribution panels and 1,250 transformers. To support the generators, Aggreko has moved its specialist fleet of distribution cable from its Dubai hub and supplemented this with in-country transformers to meet local voltage requirements. Efficiency Aggreko has also been tasked with increasing overall fuel efficiency. Measures have been put in place to optimise the running hours of equipment, to optimise usage and improve consumption levels throughout the tournament. With noise reduction a priority for the tournament
organisers, Aggreko deployed exhaust attenuators and noise curtains at two of the stadia located in densely populated areas. Aggreko’s team has created tailored, flexible power solutions for the Rugby World Cup, taking into consideration the specific requirements of the site. Resilience As with any project, as much as you plan, there are always unexpected hurdles to overcome along the way. While the Rugby World Cup has been no exception, Aggreko’s team has ensured it has worked closely with the tournament organisers to deliver the most suitable and efficient power packages. The meticulous planning that has been undertaken by Aggreko during the past 12 months means that the Rugby World Cup will be enjoyed by fans across the world without any disruption. ●
Advertorial
Future proofing cooling in data centres… Every watt counts during round-the-clock operation in data centres
Nowadays, data centres are springing up wherever you look. So much waste heat is generated in each of these data centres that around half of the required electrical energy must be used for cooling the hardware alone. Yet the information age has only just ‘warmed up’: the larger the volume of data is, the higher the required cooling capacity and thus the energy consumption will also be. These facts are accompanied by the desire for higher computational capacity over the same floor space, which results in the fans needing to move more air without increasing in diameter. In today’s precision air conditioning units, for example, air-duct cross-sections are designed for larger quantities of air, which improves cooling and minimizes flow losses.
Optimised for new requirements
Precision air conditioning units are usually deployed in data centres. These units – also referred to as computer room air conditioning (CRAC) units – guarantee a constant temperature and humidity in data centres and network control centres. CRAC units with heat exchangers are used in this case (Figure 1). The design of the CRAC units significantly influences the choice of suitable fans. In light of the considerably lower back-pressure requirements, these are now also required to work at the optimum operating point to ensure that they can run energy-efficiently to save on operating costs. After all, every watt counts during round-the-clock operation in data centres. To this end, motor and fan specialist ebm-papst Mulfingen has expanded the tried-and-tested RadiCal model series (Figure 2).
Figure 1: A CRAC unit with a diagonal heat exchanger. The blue areas represent low-speed air flows, while the red areas accordingly represent high-speed air flows
In order to tailor the operating point of the RadiCal fans to the higher air flow desired by the market, the aerodynamics of the impeller were overhauled. Powerful simulation tool computational fluid dynamics (CFD) was a great help in doing so: in conjunction with numeric optimisation approaches, this resulted in a number of individual detail improvements that make a real difference overall. As such, the width of the impeller, the size of the intake area, the blade contour and blade thickness were all adapted to the higher air flow and lower head. Like its predecessor, the new impeller is made of glass fiber-reinforced polypropylene. The outer diameter and installation height have remained the same in spite of the optimisation. This enables the limited space available in CRAC units to be utilised to the best effect.
A higher air flow with high efficiency
For this particularly advanced development, the engineers in Mulfingen set their sights on the aerodynamics, and the results are plain to see: the new RadiCal not only delivers a higher air flow than its predecessor but also operates more efficiently (Fig. 3). Thanks to the computer-assisted optimisation methods used, the new RadiCal impeller is not only considerably better than the previous model but is currently also the best impeller in the world for this application, ie for this air performance range and degree of efficiency. Its predecessor achieves its maximum static overall efficiency of 61.5% at an air flow of 12,000 m³/h. The maximum of 68.5% for the RadiCal is achieved at an air flow of 13,000 m³/h. The new RadiCal impeller can lead to efficiency improvements in other applications, too. The efficiency advantage of the new RadiCal impeller also shows itself in an installed state. In a CRAC unit, three RadiCal fans running in parallel were compared using the previous and new solution. The result was a reduction in the fans’ electrical power consumption. Thanks to lower flow losses in the impeller, lower turbulence and less laminar separation, the fans also have more pleasant noise characteristics.
Figure 2: The new RadiCal has been tailored to the changed market requirements
Efficient GreenTech EC technology The impeller is perfectly tailored to the likewise optimised GreenTech motor and is bolted directly onto the motor’s rotor. Due to the aerodynamic shape of the fan impellers and the EC
Figure 3: The new RadiCal (green) not only supplies a higher air flow than its predecessor (blue), but also operates with higher efficiency
motor integrated into the impeller itself, the centrifugal fans comprise an efficient and space-saving unit (Fig. 4). The power electronics integrated into the GreenTech EC motors enables the speed to be adjusted to meet requirements by means of a 0–10 V control signal or via Modbus RTU. The high level of efficiency is even maintained in partial-load operation. When using the Modbus RTU interface, numerous operating parameters can also be queried and monitored in ongoing operation alongside the control signals. When needed, the operator can quickly modify operating parameters in order to promptly react to changing requirements. Simultaneously recording operating hours facilitates preventive maintenance for effective minimisation of servicing time. Should servicing still be needed, the affected fans are easily identified thanks to the Modbus RTU communication. Fail-safe functionality enables safe operation, even in the event of a bus communication failure; the fans simply continue to run at the currently set speed.
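To give a feel for what the Modbus RTU option described above involves, here is a hypothetical Python sketch using the widely available minimalmodbus library. The serial settings, slave address, register numbers and scaling are placeholders only; the actual register map and addressing must be taken from the ebm-papst fan documentation.

```python
import minimalmodbus  # third-party library: pip install minimalmodbus

# Placeholder serial parameters and register addresses - consult the fan's
# Modbus register map for the real values.
PORT = "/dev/ttyUSB0"
SLAVE_ADDRESS = 1
SPEED_SETPOINT_REGISTER = 0x0000   # hypothetical holding register
ACTUAL_SPEED_REGISTER = 0x0001     # hypothetical holding register

fan = minimalmodbus.Instrument(PORT, SLAVE_ADDRESS)
fan.serial.baudrate = 19200
fan.serial.timeout = 0.5

def set_speed_percent(percent: float) -> None:
    """Write a 0-100% speed demand, scaled here to a 0-64000 raw value
    (the scaling is an assumption for this example)."""
    raw = int(max(0.0, min(100.0, percent)) / 100.0 * 64000)
    fan.write_register(SPEED_SETPOINT_REGISTER, raw, functioncode=16)

def read_actual_speed() -> int:
    """Read back the raw actual-speed value for monitoring and logging."""
    return fan.read_register(ACTUAL_SPEED_REGISTER, functioncode=3)

set_speed_percent(60)
print("actual speed (raw):", read_actual_speed())
```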
Figure 4: The width of the impeller, the size of the intake area, the blade angle and blade thickness have all been adapted to the higher air flow
Practical and future-proof
RadiCal centrifugal fans are available for all common line voltages and frequencies. In addition, installation is simple and practical. As such, the installation position of the fans can be varied due to the optional fixing bracket: It is possible to install them with the motor shaft in either a horizontal or vertical orientation. The new RadiCal fans therefore represent a practical and future-proof solution for powerful, energy-efficient CRAC units. If you are interested in optimising the cooling for your data centre, email us at info@uk.ebmpapst.com or call us on 01245 468555.
DEMAND-SIDE RESPONSE
Tapping into a revenue stream for the water industry Severn Trent Water has engaged in DSR via two aggregators for the past three years. Now it plans to ramp up its activity – and harness the knowledge to inform battery storage investment
Demand-side response manager Rob Wild says Severn Trent Water has approximately 15MW of connected flexibility. About 10MW is generation-based, the remainder is load from its treatment processes. Wild thinks there is potential for up to 50MW of flexibility across the estate. Severn Trent’s involvement has largely been in STOR, the Capacity Market and FFR, but it is eyeing the wholesale market and Balancing Mechanism as value continues to shift.

Good payback
Overall, Wild says DSR has worked well. “Payback is around three years, which is currently one of the best business cases within the organisation,” he says. “From a technical risk perspective and operationally, we have not had any concerns,” says Wild, which has increased management confidence to invest further in flexibility. “Handing over control to a third party was quite a big deal,” he says, adding that the key to assuaging concern was engaging all stakeholders from the outset. “The first time we looked at DSR, we put a team together representing all stakeholder groups – particularly the tech and standards team, given we are a standards-heavy industry,” he explains. “They were involved all the way down to choosing which
aggregators to work with. We did a full procurement exercise, which may seem over the top, but it meant we could give stakeholders confidence,” says Wild. “If I was starting from scratch [in bringing DSR into a business], that would be a key message: Involve stakeholders all the way, and bring in the right resources – that can be expensive, but if you build it into the business case, you can do it.”
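Wild's three-year payback figure can be sanity-checked with simple arithmetic. The sketch below is purely illustrative: the capital and revenue numbers are invented for the example and are not Severn Trent figures.

```python
# Illustrative simple-payback calculation for a DSR enablement project.
# All figures are assumptions for the example, not Severn Trent data.
capex = 150_000              # one-off cost of controls, metering and integration (GBP)
annual_dsr_revenue = 70_000  # assumed income from DSR programmes (GBP/year)
annual_costs = 20_000        # assumed aggregator fees, maintenance and admin (GBP/year)

net_annual_benefit = annual_dsr_revenue - annual_costs
payback_years = capex / net_annual_benefit
print(f"Simple payback: {payback_years:.1f} years")  # 3.0 years with these assumptions
```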
50MW The potential for connected flexibility across the Severn Trent Water estate
Better data, lower bills
Wild says going through the DSR process and connecting up assets has led to a greater understanding of their performance: “It gives you more granular operational data, which has led us to realise that we have good amounts of headroom within processes. That is deliberate, but it has allowed us to get into the nitty gritty and
work out if processes are truly optimised from a performance and energy efficiency perspective, which, for a company like us that uses £100m of power, is always going to be worth more than DSR.”

Market Insight
Severn Trent will also use the knowledge it is building of flexibility markets to shape future investments. “It means we can have more informed conversations and it’s also applicable to other activities, such as storage,” says Wild. “It is unlikely that we are not going to be operating dedicated storage in the future, so it is really important to understand the economic case.”

Building confidence
Wild says there is still much to learn, but the knowledge acquired to date feeds into Severn Trent’s wider environmental programme. “This year, we’ve found top-down support on this. We have committed to the triple pledge of net zero carbon, 100% renewables by 2030 and 100% EVs where the vehicles exist, so the work we have done on DSR plays into that quite well,” says Wild.
Life after diesel The possible exception to that is diesel, which Severn Trent has been running in some DSR programmes via back-up generators. To comply with the Medium Combustion Plant Directive, the company is fitting abatement technology (SCR), though Wild says the biggest challenge is “interpreting the legislation ... there is not a huge amount of upfront guidance.” He says from a “practical perspective, the Capacity Market [contract] pays for MCPD compliance”, though the cost of abatement rules out smaller engines. Ultimately, the company is looking at technologies that could replace diesel for standby generation. “I am really interested in hydrogen as a storage vector, because we potentially have it available as part of the treatment process,” says Wild. To discover options for new storage, generation and flexibility opportunities, Severn Trent ran a ‘soft market test’ during the summer. It also asked for feedback on its procurement process with a view to enabling smaller companies to provide solutions. Wild says the plan is to use the feedback to go to market “in the near future for batteries, storage and aggregation services.” ●
To read more end-user experiences of participating in DSR and the views of industry, download the 2019 DSR report at theenergyst.com/DSR
UPS: a money making machine? MCP asks PPS Power’s Stephen Peal about the challenges and opportunities around power protection Q. What do sites often get wrong when it comes to ensuring resilience? A. Not appreciating the criticality of the load. To some people the load may just be ‘a bunch of computers’ but assessing worst case scenario and what the results of a complete power failure actually are often highlights just how important the load is. With that knowledge customers are better informed and more accepting of solutions involving redundancy. Bypass switches are also
another point – the importance of a maintenance bypass switch cannot be stressed enough. During fault or maintenance the bypass switch keeps things going, albeit on raw mains, but even that’s better than complete downtime. Q. What are the most important factors that need to be addressed to ensure a successful UPS installation? A. Quite often customers will not factor in environmental considerations. What may seem like a good environment to install a UPS can change over time. If other equipment is housed in the same room at a later date, is the room temperature going to be affected by the additional heat output? Is access to that room going to be impeded at a later date by other building works? This will affect vital maintenance down the line.
It is the job of the UPS provider to take into consideration any possible stumbling blocks that may arise during delivery and installation, and also advise the customer on how to minimise any disruption during this time but also in future. A thorough site survey prior to delivery and installation will normally highlight any issues. But ensuring that at time of delivery the delivery route and final location are clear, and then during installation any cable routes are also clear is vital to reduce delay. Often people will not think about high-level cable routes and the additional space for access via a suitable platform. Q. How is the UPS market evolving?
A. We are seeing an increase in modularity and also the use of UPS and batteries as energy storage and demand response devices. Demand-side response schemes for large installations have the potential to turn a UPS – an energy-sapping box – into a money-making machine and a device that could legitimately contribute to the bottom line of a business while also helping reduce CO2. The data centre market is the most logical beneficiary of these schemes, but there is understandably a lot of caution around using one of the most valuable assets a data centre has to feed back to the grid – it is going to take a large mind-set change for a lot of people. ●
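As a back-of-envelope illustration of what that revenue opportunity might look like, the sketch below estimates availability payments for UPS battery capacity offered into a frequency response scheme. The capacity, price and hours are assumptions for the example only, not current market rates.

```python
# Rough, illustrative estimate of annual availability revenue from offering part
# of a UPS/battery system into a frequency response scheme.
# Capacity, price and hours are assumptions for the example, not market data.
flexible_capacity_mw = 1.0          # share of UPS/battery capacity offered
availability_price_gbp_mw_h = 6.0   # assumed availability payment (GBP per MW per hour)
contracted_hours_per_year = 6_000   # assumed hours tendered and accepted

annual_revenue = flexible_capacity_mw * availability_price_gbp_mw_h * contracted_hours_per_year
print(f"Indicative availability revenue: GBP {annual_revenue:,.0f} per year")
```

Any real business case would also need to net off battery wear, metering and control costs, and the risk-appetite issues raised above.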
RENEWABLE ENERGY
Wave goodbye to unsustainable IT The world’s first ocean powered data centre will be built in Scotland, giving access to predictable renewable generation with grid backup, at a location which benefits from low temperatures
MeyGen will be powered by clean, renewable ocean energy
Simec Atlantis Energy (Atlantis) has announced plans for a tidal-powered data centre in the Caithness region of Scotland. The power supply for such a data centre would include electricity supplied via a private wire network from tidal turbines at the existing MeyGen project site. Described as “the world’s first ocean-powered data centre”, the facility will have the potential to attract a hyperscale data centre occupier to Scotland. It is expected that the data centre would be connected to multiple international subsea fibre optic cables, offering a fast and reliable connection to London, Europe and the US. Further connectivity to the central belt using domestic terrestrial networks could significantly improve Scottish data and connectivity resilience.
The MeyGen project has a seabed lease and consents secured for a further 80MW of tidal capacity, in addition to the 6MW operational array which has now generated more than 20,000MWh of electricity for export to the grid. The target operations date for the data centre is expected to be 2024, in line with the expansion plans for the tidal array. However, a smaller initial data centre module could be deployed sooner to draw on the output from the existing tidal array. Atlantis has been working with engineering firm Aecom to assess the feasibility of connecting to high-speed international fibre
optic connections and undertake the systems design for a data centre with access to predictable renewable generation with grid backup, at a location which benefits from low temperatures to assist cooling. The data centre could also alleviate constraints on other local renewable energy development, which is restricted by the current grid capacity and the closure of renewable energy subsidy mechanisms. Projects including MeyGen would be able to sell power directly to the data centre via a new private wire network and therefore are expected to benefit from a premium to the wholesale
We believe that Scotland can play a key role in the global data centre industry thanks to its ready access to clean energy
power prices which are achieved when dispatching output via the National Grid. The private wire connection is intended to provide an alternative pathway to construction of the next large phase of the MeyGen project without reliance on the UK government’s current limited support schemes for renewable energy. Atlantis is in discussions with leading data centre operators to progress plans for the data centre and facilitate the expansion of MeyGen using the Scottish supply chain. Tim Cornelius, CEO of Simec Atlantis, comments: “Data is being touted as the new oil. It is arguably becoming the world’s most valuable resource, and the amount of data requiring storage is increasing at a staggering pace. However, data centres are undeniably power hungry, and the clients of data centre operators are rightly demanding power be sourced from renewable and sustainable sources. This exciting project represents the marriage of a world-leading renewable energy project in MeyGen with a data centre operator that seeks to provide its clients with a large amount of computing power, powered from a sustainable and reliable source – the ocean. “At MeyGen we have many of the ingredients to provide clean power to the data centre, including a large grid connection agreement, proximity to international fibre optic connections and persistent cool weather. We also believe that Scotland can play a key role in the global data centre industry thanks to its ready access to clean energy and we are eager to play our part at Atlantis to turn this potential into reality.” ● missioncriticalpower.uk
TRAINING
The knowledge gap: is upskilling the answer? The data centre sector is facing a crisis around skills shortages. Andrew Stevens, CEO and president at CNet Training, looks at the key issues and potential strategies for addressing the problem. He warns there is no quick fix
It is no secret that the skills shortage is one of the greatest challenges facing the data centre industry. Relatively few young people are choosing to pursue a career in data centres, while a recent mid-point survey from Vertiv, Closer to the Edge, showed that 16% of employees who currently work globally in data centre roles plan to be retired by 2025. For employees from the US and Canada, this number jumps to an eye-watering 33%. The existing skills shortage is going to intensify at an ever-increasing rate unless new talent can be encouraged to enter the industry and existing team members upskilled to progress through the ranks in the next few years.

Ongoing struggle
Companies are continuously struggling to recruit and retain qualified workers. The growth of the industry is creating new positions but a large percentage still remain unfilled as there are not enough skilled people to fill the roles needed. Outages also remain a serious problem: the 2019 Uptime Institute Data Centre Survey reported that just over a third (34%) of all respondents to the survey had an outage or severe IT service degradation in the past year, while half (50%) had an outage or severe
IT service degradation in the past three years. This signifies to me the importance of having the correct team for the job. Mistakes are extremely costly to businesses. Unfortunately a lack of competent and well-trained/ educated and knowledgeable staff is only going to increase the number of mistakes and outages and, in turn, cost those businesses greatly.
16%
of employees who currently work globally in data centre roles plan to be retired by 2025 Businesses need to start reacting now – the industry has been talking about this situation for the past 10 years but unfortunately not a lot has changed. We need to learn from other industries that have been through similar skills shortages. The main issue we face is the lack of collaborative working: a lot of companies have gone off on their own, creating individual initiatives, but while everyone is working on their individual projects, the fundamental issue of the skills
shortage still remains, and nothing currently seems to be having a real impact. The industry needs to start working much closer together to create an industry initiative and united message that everyone is working towards. In marketing, the more consistent and the more visible the message is across all marketing channels, the more likely the campaign is to succeed. Instead of working on smaller, company-level projects, we need to be putting on a united front to help raise awareness of the industry. Businesses should be looking at a variety of different methods to help improve the situation. Upskilling existing team members is one possible way – looking at who they currently employ and what simple steps can be taken to increase team members’
knowledge, for example, internal mentoring, shadowing senior team members and working alongside specialist external education providers. We often see a lot of job adverts that require a minimum of two years’ prior experience. How can we bring in new talent to the industry if everyone needs a specific amount of experience and if other transferable skills or experience are not taken into consideration? However, there are still issues with seeing upskilling staff as the total resolution. It is likely to still cause a gap and other issues within the team. If there are no new recruits, then the work is still split between the same number of people who can only do so much. With increasing demand and added responsibility, there are bound to be some tasks that fall by missioncriticalpower.uk
The existing skills shortage is going to intensify unless new talent can be encouraged to enter the industry and existing team members upskilled
the wayside that are not even perceivable in a mission critical environment, or that will need to be completed by others. This is why the industry still needs more talent to be entering, alongside training up existing team members. Clear development plans Businesses need to put clear and detailed professional development plans in place as a priority in safeguarding the future of their companies. CNet Training is an education provider dedicated to the
digital infrastructure industry. The company is recognised throughout the world for working with businesses on their professional development needs. A significant part of CNet’s history is the development of ‘The Global Digital Infrastructure Education Framework’, which offers industry professionals an opportunity to map data centre and network infrastructure education, qualifications and certification to meet individual and
business needs. In May, we announced that we are working with integrated data centre operations service provider CBRE Data Centre Solutions on a two-year project to further enhance the certification of its entire technical workforce globally and ensure every data centre technician achieves the Certified Data Centre Technician Professional (CDCTP). This is something that I believe could help other companies. Research shows investing in all team members through professional development is more likely to create a happier workplace and therefore staff are more loyal and likely to stay within their role, progressing up the career ladder. The investment in teams has a positive impact on the quality of service. It is great to see that CBRE DCS has made the decision to certify its entire technical teams across the world and not just a select few, and I hope this encourages others to follow suit. This is a huge global commitment to education and professional development, leading the way for other businesses.
While everyone is working on their individual projects, the fundamental issue of the skills shortage still remains, and nothing currently seems to be having a real impact Andrew Stevens, CNet Training missioncriticalpower.uk
Growth industry The data centre industry needs to be better marketed to society as a growth industry with a wealth of career opportunities. Plans need to be put in place to promote the industry as a possible career path, as well as all the different job opportunities and specialisms it can provide. The announcement that up to a third of the current workforce is considering retirement highlights the importance of planning ahead for businesses to future proof the skills within the team. It also shows the need to budget for future professional development to ensure that there will be a new generation of professionals, who are competent and confident to take on the open positions, at the same time as managing the increasing demand the industry faces globally. I think businesses need to be open to considering candidates from a different career background that might have transferable skills that would fit within the data centre sector. For example, CNet has spent more than 20 years supporting military resettlement personnel, training forces leavers with skills to work in across the digital infrastructure industry. The electrical industry also seems to be a natural leap. » October 2019 MCP
TRAINING
We need to be looking to secure the next generation of talent
It is important to note that it is not just the data centre industry that is affected by a shortage in skilled teams. It is well reported that the engineering and industrial sectors – including IT, infrastructure, construction and power – are also struggling to find sufficient numbers of engineers and workers due to the significant shortfall within these specialist areas. We need to be looking to secure the next generation of talent and that work needs to start early in schools, inspiring young people, especially young girls, to view STEM subjects as possible career choices for later in life. That is why all CNet Training staff are STEM ambassadors. We are committed and dedicated to helping to encourage young people to pursue and enjoy education in science, technology
engineering and mathematics subjects. Children start making education choices as young as eight and so there needs to be a concerted effort from industry professionals, teachers and parents to make sure young people are aware of the industry and the benefit of learning STEM skills. Industry professionals and STEM ambassadors need to be going into schools to inspire them and then for teachers and parents to continue to encourage STEM learning, as well as discussing career opportunities and the benefits from careers in engineering and data centres, for example. This requires us to not only be inspiring and informing children, but also the teachers and parents, on what the industry is all about, its career potential and industry growth.
No quick fix This is not a quick fix and it will take a long time to resolve and change people’s way of thinking, which is why it is so important that, as an industry, we are all standing together, promoting the same unified message. If our approach to resolve the skills shortage is consistent, engaging and on a large scale across the industry, I hope we will start to see an improvement in safeguarding the future skillset of the industry. With an overwhelming male and ageing population in the industry, a lot more needs to be done to bring more diversity into the sector as a way of helping to alleviate the skills shortage. Although there are some fantastic female role models currently in the industry, with more people talking about the
More needs to be done to encourage women into the sector. We must become better at explaining to young career seekers what the data centre sector is and why they should want to be involved in a fast-paced industry that is rapidly growing MCP October 2019
issue of diversity than ever before, it still remains a very male-dominated industry. More needs to be done to encourage women into the sector. We must become better at explaining to young career seekers what the data centre sector is and why they should want to be involved in a fastpaced industry that is rapidly growing and that has such a vast variety of opportunities and career paths available. It is also not just gender that is underrepresented across the industry, more needs to be done to encourage people from different social classes and ethnic backgrounds into the industry. It is also important to promote the industry not just for technical people but as a sector that plays the most essential part in industry and everyday life today. There are so many specialist areas required to make the digital infrastructure industry work including, project management, sustainability, sales, HR, operations, finance, health and safety, design and marketing, just like any other business, so it is really important that the industry is known and considered more – it is an exciting, growing industry to be part of. Serious efforts now need to be taken and initiatives put in place to safeguard the future talent pipeline. Traditional mindsets need to be changed to be open to new ways of recruitment to broaden and open up opportunities to a wider demographic of people. Other sectors have proven the benefit of having a more inclusive approach and the digital infrastructure sector will need to do the same if it wants to progress forward in the same way. Despite the issues we currently have within the sector, it is still a great industry to work in. I have mentioned a number of the issues this sector faces but there are great opportunities for those individuals with the right ethics, attitude, appetite and skills to excel. ● missioncriticalpower.uk
THERMAL OPTIMISATION
Uncovering the hidden costs
Stulz UK’s Johnathan Attwood warns that environmental regulation is increasing the cost of conventional cooling approaches, widely used by the data centre sector. So how can operators reduce their opex and improve their green credentials?

There is a need for education in the data centre sector on the different cooling options available and their implications for operating expenditure (opex), energy efficiency and environmental impact. Many operators are familiar with direct expansion (DX) systems or chilled water type systems, but there is a need for improved awareness of new approaches that can help deliver on long-term sustainability goals. The market is advancing at a pace with new technologies such as hybrid cooling, which has the potential to significantly reduce energy consumption and the impact of data centres on the environment. With cooling accounting for up to 40% of the energy used in data centres, it is important to understand the hidden costs associated with the various technologies and how they compare.

DX cooling
During the past 15-20 years, DX has become the dominant cooling solution and, until recently, this was driven by the fact that it was considered cheap and simple to install. DX units offer a number of benefits, such as good levels of cooling, coupled with a low footprint. As these systems are based on indirect cooling, there is also no danger of introducing contaminants from outside into the data centre. However, DX systems are less desirable from an energy efficiency perspective, and dramatic instability in refrigerant pricing means that these units are no longer the low-cost option that they once were.

Chilled water cooling
Chilled water systems have also come to the fore in recent years. However, hybrid solutions offer the advantage of having a lower refrigerant charge per unit – despite the fact that these operate using a similar process to a chiller. Stulz UK’s largest hybrid solution, for example, uses 7.6kg of refrigerant per
circuit (with a twin circuit). An equivalent, conventional chiller system will use about 40% more refrigerant on a 1:1 basis. With a hybrid unit, the capacity is spread over multiple units, but the risk of losing large quantities of refrigerant in the event of a failure is significantly reduced. Hybrid cooling Hybrid cooling combines the reliability and control of a DX system, with the energy saving benefits of a free cooling system. In warmer months, when the external ambient temperature is above 20°C, the Stulz GE system operates as a water-cooled DX system and the refrigeration compressor rejects heat into
the water circuit via a plate heat exchange (PHX) condenser. The water is pumped to an air blast cooler where it is cooled, and the heat rejected into air. In cooler months, below 20°C external ambient temperature, the system automatically switches to partial free cooling mode known as ‘mix mode’. In this mode, the water is directed through both proportionally controlled valves and enables proportional free cooling and water-cooled DX cooling to work together, with the dry cooler fans being used to cool the water to the desired level to achieve the required cooling capacity. In the winter months, dependant on water
temperature and/or heat load demands, the water can be used in ‘free cooling mode’. Mix mode cooling Mix mode cooling is the point between free cooling and DX. This means the ambient air outside is cold enough to pre-cool the water provided to the hybrid unit but not quite cold enough to lower the temperature fully, to the required parameters (typically this is 3-6 degrees below the internal setpoint – dependant on system efficiency). This ‘sweet spot’ requires an element of mechanical cooling as a ‘top up’ to meet the load within a room. This mix mode
Payback depends on the size of the system and location, but the return on investment can be very quick at around 3-4 years for a medium sized data centre (circa 500kW). The larger hybrid systems offer the quickest payback periods missioncriticalpower.uk
CyberAir 3Pro DX air conditioning system The larger hybrid systems offer the quickest payback periods.
cooling typically makes up between 50-68% of the potential operation mode of a system. Data centre operators can extend the free cooling and mix mode potential of their hybrid equipment by raising the return air temperatures. Potentially, it is possible to increase the return air from 24 degrees to 27 degrees, resulting in additional cost savings of up to 40% per year. This hybrid approach drastically reduces power consumption over traditional DX systems and hence data centres can achieve significant cost savings. Data centre operators are starting to scrutinise the payback on their investments and this is where hybrid approaches can offer added benefits. Payback depends on the size of the system and location, but the return on investment can be very quick at around 3-4 years for a medium-sized data centre (circa 500kW).
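The mode-switching behaviour described above can be summed up in a few lines of control logic. The sketch below is a simplification for illustration: the 20°C changeover point comes from the article, the free-cooling approach temperature is assumed within the 3-6K range mentioned, and a real controller would also use water temperatures, load and hysteresis rather than ambient air alone.

```python
# Simplified mode selection for a hybrid (free cooling + water-cooled DX) system.
# Thresholds follow the description above; a real controller would add hysteresis
# and act on measured water temperatures and load, not just ambient air.

DX_CHANGEOVER_C = 20.0   # above this ambient, the unit runs as water-cooled DX
APPROACH_K = 4.0         # assumed free-cooling approach (the article quotes 3-6 K)

def cooling_mode(ambient_c: float, setpoint_c: float) -> str:
    """Return the operating mode for a given ambient and supply setpoint."""
    if ambient_c <= setpoint_c - APPROACH_K:
        return "free cooling"   # the dry cooler alone can reach the setpoint
    if ambient_c < DX_CHANGEOVER_C:
        return "mix mode"       # pre-cool with the dry cooler, top up with DX
    return "dx"                 # full mechanical (water-cooled DX) operation

if __name__ == "__main__":
    for ambient in (5, 14, 18, 26):
        print(f"{ambient:>4} degC ambient -> {cooling_mode(ambient, setpoint_c=18.0)}")
```

Raising the return air temperature, as suggested above, effectively raises the setpoint in this comparison, which is why it widens the free cooling and mix mode windows.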
Hidden cost of refrigerant The initial capex cost for a hybrid cooling system is higher than a traditional computer room air conditioning (CRAC) unit. But there are other factors that need to be considered, when calculating the total cost of ownership and opex. As part of the EU Energy Strategy, there is a need to reduce fluorinated greenhouse gas (F gas) by -60% by 2030. The latest regulation, which came into force in 2015, means that refrigerant availability and cost are being heavily affected with a revised target due in 2020 and yet further rises in refrigerant costs expected. As refrigerant is now an expensive ingredient, this can have a significant impact on the total cost of ownership. For example, in August 2017 the average purchase price of R407c and R410a refrigerants was £14.78 and £29.78 respectively per kilo. By the end of July 2018, the average purchase price of R407c and R410a both increased to £50.66 and £58.59 respectively per kilo, and further increases followed shortly after. If we look at a 50m (equivalent length) and a total heat rejection capacity of 80kW, the cost of refrigerant equates to a cost of just over £3,600 per system, rising from £1,800 – representing an increase of 98% in just one year. Going forward, there will be
an intensification of certification and record keeping for operating companies. Records will need to be kept for a minimum of five years and must include types of F gas used, quantity and how it is recycled when removed. It is therefore important to think about using sustainable solutions and to reduce the use of refrigerants where possible. Hybrid solutions require less quantities of refrigerant and, against a back-drop of rocketing refrigerant prices, this will be a significant advantage. In the future, data centre operators will be held responsible for preventing refrigerant leaks from their equipment, along with contractors that install, maintain
or dispose of equipment. However, for equipment that contains F gas above certain thresholds, there will also be a requirement to check for leaks at specific intervals (further information is available at gov.uk/guidance/f-gas-inrefrigeration-air-conditioningand-fire-protection-systems). Ultimately, a large, single chiller using 200kg of refrigerant poses a greater risk of leaks, than a hybrid system, comprising multiple small units – each using 18kg of refrigerant. By adopting hybrid technology, data centre managers are finding that they can save costs, maintain their green credentials and ensure a high level of resilience. ●
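The refrigerant price figures quoted above can be turned into a quick cost check. In the sketch below the system charge is inferred from the quoted totals (roughly 61kg of R410a for the 80kW, 50m example) rather than stated in the article, so treat it as illustrative.

```python
# Rough reconstruction of the refrigerant cost comparison described above.
# The charge size (~61 kg of R410a) is inferred from the quoted totals, not stated.
charge_kg = 61.0
price_aug_2017 = 29.78   # GBP per kg of R410a (quoted in the article)
price_jul_2018 = 58.59   # GBP per kg of R410a (quoted in the article)

cost_2017 = charge_kg * price_aug_2017
cost_2018 = charge_kg * price_jul_2018
increase_pct = (cost_2018 / cost_2017 - 1) * 100

print(f"2017 charge cost: GBP {cost_2017:,.0f}")  # ~GBP 1,800
print(f"2018 charge cost: GBP {cost_2018:,.0f}")  # ~GBP 3,600
print(f"Increase: {increase_pct:.0f}%")           # ~97%, in line with the ~98% quoted
```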
Stulz Air Cooled DX system
POWER DISTRIBUTION
Making an intelligent choice Intelligent PDUs can provide a better insight into your rack environment and provide an objective measurement of power usage, says Vicky Newton from Austin Hughes
There are major benefits to an organisation in choosing intelligent power distribution units (PDUs) for installations, including the knowledge surrounding the data which is captured. Not only is data displayed locally on the PDU or a connected device mounted to the outside of the rack, but it is also available remotely, including current (A), voltage (V), power (kW), energy consumption (kWh) and power factor of the entire PDU. Power usage data can then be assessed, having been collated and reported using a web-based graphic user interface (GUI) or integrated into an existing building management system. This data can also be used for inter-departmental billing or, in colocation data centres, as a revenue stream providing accurate billing data to clients, if meter reading accuracy of the PDU is within +/- 1%.

Resilience at the PDU level
Given the mission critical nature of the environment, the intelligent rack PDU must be designed, built and manufactured to provide extremely high levels of resilience. Areas that can be used to benchmark this include hot swappable digital local touchscreen displays, hot swappable DC power modules and latchable relays at the socket or receptacle level (that will always supply AC power, or are always on, in the event of component failure). Such features are usually found in metered and outlet switched/outlet switched with outlet metering PDU models.

Intelligence is becoming increasingly important as data centre sites are being located further afield and are no longer limited to being situated on land – for example, the Project Natick Data Centre, by Microsoft, is located on the bottom of the ocean; there is also talk of putting data centres into space. Data centres may also be located underground and in remote locations to take advantage of free cooling. Using intelligent power distribution units within the server racks enables data to be viewed remotely. Physical access is no longer required.

Rack PDUs are not limited to providing power usage information, however. Educational establishments, for example, are seeing greater storage demands driven by increased research and development, big data and IoT technologies. Intelligent PDUs allow for capacity planning, departmental/faculty cross-charging for power usage, and objective measurement of power usage effectiveness. In addition, many sectors including finance require third party software integration via Simple Network Management Protocol (SNMP), as well as hardware modifications within the power distribution unit itself. These can subsequently require global variations (voltage, input, socket types, etc) for each geographic location/office.

Increased scalability
It is hard to miss the fact that the scale of data centres has been increasing – 430 global hyperscale data centres were recorded by Synergy at the start of 2019, with a further 132 planned. Hyperscale is classed as more than 500 racks, which can equate to more than 1,000 PDUs in one location. Installing two PDUs in each rack allows the continuation of both primary and redundant (A & B) power feeds. Some organisations choose to have different coloured PDU chassis, allowing the clear visual identification of the PDUs for technicians and engineers to reduce human error while working in racks/cabinets. Another intelligent PDU option now available is dual feed PDUs, where the primary and redundant feed are in one PDU – saving valuable rack space and providing an overall cost saving to the end client.

Intelligent rack PDUs with daisy chain capability significantly reduce the quantity of network data ports and IP addresses required. One IP address per 16 PDUs, or per 32 dual feed PDUs, with no node licensing, IP or software charges equates to a significant saving for the client. The use of intelligent PDUs means the data is easily accessible remotely and without access to each individual rack by engineers. A centralised network operations centre enables employees with suitable access levels to use data from intelligent PDUs to improve energy efficiency within the data centre and make better informed decisions. Integrating environmental sensors with the PDUs allows parameters to be set to monitor temperature/humidity fluctuations as well as power.
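Where the SNMP integration mentioned above is needed, polling a PDU from monitoring or billing software can be as simple as the sketch below. It is illustrative only: the host address and OIDs are placeholders, and the real object identifiers must come from the PDU vendor's MIB.

```python
# Minimal SNMP poll of an intelligent rack PDU (illustrative only).
# The OIDs below are placeholders - substitute the OIDs from the PDU vendor's MIB.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

PDU_HOST = "192.0.2.10"                            # example address (documentation range)
OID_TOTAL_POWER_W = "1.3.6.1.4.1.99999.1.1.0"      # placeholder OID for total power
OID_TOTAL_ENERGY_KWH = "1.3.6.1.4.1.99999.1.2.0"   # placeholder OID for energy

def snmp_get(oid: str) -> str:
    """Fetch a single value from the PDU via SNMP v2c."""
    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData("public", mpModel=1),     # v2c community string
            UdpTransportTarget((PDU_HOST, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(oid)),
        )
    )
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status.prettyPrint()))
    return var_binds[0][1].prettyPrint()

if __name__ == "__main__":
    print("Power (W):", snmp_get(OID_TOTAL_POWER_W))
    print("Energy (kWh):", snmp_get(OID_TOTAL_ENERGY_KWH))
```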
Ultimately, intelligent PDUs can help increase resilience by providing a better insight into your rack environment. The data also enables cross charging between departments or the ability to charge power usage back to third party clients, in colocation facilities, providing revenue to the data centre operator. With rising energy costs and closer scrutiny of power usage, this information is becoming invaluable. ● missioncriticalpower.uk
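The network-side saving from daisy-chaining mentioned above is easy to quantify. The sketch below simply applies the one-IP-address-per-16-PDUs figure to an assumed deployment size.

```python
# How many IP addresses (and switch ports) does a deployment need with and
# without PDU daisy-chaining? The rack count is an assumption for the example;
# the 16-PDUs-per-IP figure is taken from the article.
import math

racks = 500                # assumed hyperscale-sized deployment
pdus_per_rack = 2          # A and B feeds
pdus = racks * pdus_per_rack

ips_without_chaining = pdus                  # one IP and data port per PDU
ips_with_chaining = math.ceil(pdus / 16)     # one IP per chain of 16 PDUs

print(f"{pdus} PDUs: {ips_without_chaining} IP addresses without daisy-chaining, "
      f"{ips_with_chaining} with it")        # 1000 vs 63 with these assumptions
```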
Residual Current Monitoring for maximum data centre availability
Continuous monitoring of earthed (TT & TN) systems Fast & early warning of developing earth faults End user notification before reaching shutdown threshold Reduce the chance of unplanned downtime and shutdown Periodic inspection & testing without shut off Enables predictive, planned maintenance
DATA CENTRE TECHNOLOGY
BENDER UK
www.bender-uk.com
Low Mill Business Park, Ulverston, Cumbria, LA12 9EE Tel: 44(0) 1229 480123 email: industrialsales@bender-uk.com
THERMAL OPTIMISATION
Risk to uptime driven by poor climate control
Karl Lycett, Rittal UK’s product manager for climate control, says that small things can make a big difference when it comes to ensuring uptime. All too often poor planning is increasing risk

Climate control is a powerful weapon in the battle to achieve uptime but, to employ it effectively, there must be an understanding of the impact of poor control, as well as detailed knowledge of all the options, including new technology, to take an installation to the next level. The IT industry is clearly responding rapidly to consumer demands. Manufacturers are increasing performance capabilities with each passing year, while also reducing footprint. This means that the heat density within a rack increases with each new generation of equipment.
If you fail to combat the predicted increase in heat load after an upgrade, you risk causing significant harm to your IT systems. Reduced operating life IT equipment is extremely sensitive and has to be kept in a strict temperature range to perform to its full potential. Straying from these limits means equipment will be in an environment it was not designed for, which will shorten its lifespan. Any reduction in the lifespan of equipment will increase costs. It will age quicker, and will need to be replaced at a faster rate.
This reduces available spend in other areas; potentially causing a wider impact on the business. Reduced performance and reliability Equipment that is exposed to higher temperatures will protect itself by reducing output, even shutting down completely if a high enough threshold is met. Any mission critical equipment that ceases operating can wreak havoc on a business. The loss of an e-mail system or a production line error is going to be expensive to correct and carries the risk of reputation damage if customers do not receive goods or services.
Increased energy costs
If you have added new drives, your existing cooling equipment may still cope with the demand – but only just. It will have to work hard to maintain the status quo, which will lead to a spike in your energy consumption and a reduction in the lifetime of your cooling system.

Small changes make a big difference
Even if your existing climate control is suited to your equipment, there are still small improvements you can make to increase the efficiency of your cooling and save money.
Blanking spare Us
Any unoccupied rack space may mean the resulting spare ‘U’ is left vacant. This then allows hot air to ‘short circuit’ the correct route and leak into the cold area of the rack, which, in turn, affects the overall efficiency of the cooling equipment as the dreaded ‘Delta T’ is reduced along with the overall cooling performance. A simple remedy is the use of blanking strips, which fill up the spare ‘U’ and ensure separation.

Brush/foam strips
The same principle applies anywhere that cables enter the rack and in the space either side of the 19” angles. These points allow both hot and cold air to mix and permit ambient air into the rack. The application of brush strips to the roof and base plate allows the installation of new cabling, while still ensuring an effective seal. Foam strips, which can be modified to suit a gap, will provide a solid barrier to either side of the angles and prevent an air short-circuit.

Aisle containment
If perforated doors are being used, installation of aisle containment should be high on the priority list. This is a system of door and wall pieces, which create a barrier between the warm air and the cool air. The layout is at the customer’s discretion, but a ‘cold aisle’ creates a pocket of cold air which can be utilised by all racks in the vicinity, and the ‘hot aisle’ is the opposite, in which the hot air from numerous racks is in one zone. This system is practical, it is modular and it suits existing installs. It can increase performance and reduce energy consumption of existing cooling equipment, which may prevent the need for an upgrade, saving money.

Considerations for new installs
1. Types of cooling?
There are many types of systems,
each suited to different applications. If the heat load is small, fans can draw air through the rack and perform the cooling. As heat density increases, there is a need for mechanical cooling which utilises either a direct expansion (refrigerant) circuit or a cold water product connected to a chiller. Both are known as ‘split systems’. They have their product in the white space, delivering cold air, but employ a condenser for DX or a chiller for CW. This approach has limitations dependent on manufacturer, as there are maximum distances that have to be adhered to in order to prevent any issues with pressure etc. The installation will require holes being drilled for pipework and electrical supplies for both parts of the system.

2. Future expansion?
Installing a cooling unit which is only slightly larger than your load means you will face more costs when you install new servers and their density increases. Some manufacturers offer scalable products via the addition of extra fans, which allow you to increase the output as needed to ensure that same temperature range is maintained. The same point rings true if you specify a chiller and leave insufficient room for growth. Then you will have two options – either replace the chiller with a larger one or ensure the chiller can master/slave with other chillers, allowing you to purchase another small chiller to work in tandem.

3. Redundancy
You should also plan for any scenario where your climate control shuts down, and the way to do this is by building in redundancy. Quite simply, add more units than you need to protect your equipment so that when a product breaks down or requires maintenance, the heat load can be managed through alternative systems.

All too often I see the results of poor initial planning and a lack of understanding of the need to future-proof installations from the get-go

Take installations to the next level
Optimal operation is not just about the right cooling; there are a range of additions which can take the functionality of your IT equipment to the next level. Data centre infrastructure management software (DCIM) is offered by many manufacturers and allows the
IT equipment is extremely sensitive and has to be kept in a strict temperature range to perform to its full potential
user to visualise their white space and equipment. If a DCIM is used in tandem with connected climate control or power, it gives remote access for the IT manager to live temperatures, energy usage and other variables. If anything changes, the DCIM will issue an alert so the problem can be resolved before it causes any harm to your equipment. Security If you have multiple users (for example, other businesses may rent your rack space), it can create security concerns. Nobody wants an unauthorised person pulling out wires so it’s worth investing in a DCIM in conjunction with lockable racks to prevent unauthorised entry and alert staff to any issue before your equipment is compromised. In summary, all too often I see the results of poor initial planning and a lack of understanding of the need to future-proof installations from the get-go. Taking heed of the above can prevent higher costs and further disruptions for your business down the line. ● October 2019 MCP
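The redundancy point made earlier in the article lends itself to a quick sanity check: how much cooling is left when one unit is down? The figures in the sketch below are assumptions for illustration.

```python
# Illustrative N+1 check: does the cooling plant still cover the heat load with
# one unit failed or in maintenance? The load and unit size are assumptions.
import math

heat_load_kw = 120.0      # assumed IT heat load to be rejected
unit_capacity_kw = 40.0   # assumed capacity of each cooling unit

n_required = math.ceil(heat_load_kw / unit_capacity_kw)  # units needed for the load
n_installed = n_required + 1                             # N+1 redundancy

capacity_one_down = (n_installed - 1) * unit_capacity_kw
print(f"Install {n_installed} units; with one unit down, "
      f"{capacity_one_down:.0f} kW of cooling remains for a {heat_load_kw:.0f} kW load")
```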
POWER DISTRIBUTION
Delivering Net Zero: Unlocking the opportunity At a major event, held at Silverstone, business and industry will come together to share tactics in the race to reach net zero
Delivering net zero is the challenge and the opportunity of our times. Businesses and public sector organisations will be tasked with much of the heavy lifting. How do we get there? The Energyst is bringing together business and industry to discuss how to deliver net zero on 28 and 29 April 2020 at Silverstone.

Two events in one
Delivering Net Zero: Heat/Power/Transport aligns the Energyst Event and EV Event in a single venue. The conferences and exhibitions will run side-by-side, providing delegates with a forum spanning convergent energy vectors.
An extensive two-day conference programme focusing on key aspects of decarbonisation is the cornerstone of both events
An extensive two-day conference programme focusing on key aspects of decarbonisation is the cornerstone of both events, arming delegates with expert insight and intelligence to map the journey ahead. Expert-led sessions will highlight proven, cost-effective opportunities for businesses to make both step change and incremental improvement – and insight on how to minimise risk and maximise effectiveness.

Energyst Event seminar sessions span energy efficiency, renewables, flexibility, storage, private wires and microgrids, green gas, heat networks, renewable heat, decarbonisation-as-a-service and financing decarbonisation.

EV Event seminars cover everything businesses need to know about electric vehicles and related infrastructure: from vehicle procurement and funding models to incentives and billing, through charging infrastructure and service models, to site considerations, capacity and bundled energy services, as well as smart charging, vehicle-to-building and vehicle-to-grid. ●

The event is free to attend. Register at theenergystevent.com

If you would like to submit a topic for discussion, or are interested in speaking, please email Brendan@energystmedia.com
the energyst event The Silverstone Wing 28TH - 29TH April 2020
Delivering net zero Heat | Power | Transport
Register now for your free ticket at: www.theenergystevent.com/register
the EV event
PRODUCTS
Transformerless UPS combines flexibility with efficiency Riello UPS has launched the Sentryum (S3T), its third generation of transformerless online UPS system. Available in 10, 15, and 20kVA models, the new range puts flexibility at the forefront by offering data centre operators, IT admins and other mission-critical applications the choice of three cabinet sizes, enabling them to maximise battery backup time to suit their needs. The Compact (CPT) frame is designed for the most space-restricted environments, incorporating a single internal battery string in a footprint of less than 0.25m2. Active (ACT) houses up to two internal battery strings in a footprint of 0.35m2. While even though it takes up less than 0.4m2 space, the Xtend (XTD) option provides enough room to fit three battery strings inside its single cabinet. Manufactured using components including a dual-core DSP processor and three-level inverter, the new range delivers full-rated unity power and is capable of exceptional operating efficiency of 96.5% in double conversion online mode, even at loads of 50-75%. The Sentryum features a control system that helps minimise harmonic voltage distortion, while it also has high overload and short circuit capacity. This ensures the UPS can deal with sudden peak loads without having to transfer to bypass. Up
to eight Sentryum units can be paralleled together to scale up capacity or provide redundancy. It features a revamped large touchscreen colour display panel along with an intuitive new LED status indicator that changes colour depending on the operating mode and condition of the UPS system.
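Efficiency figures such as 96.5% translate directly into running cost. The comparison sketched below sets that figure against an assumed older unit; the load, legacy efficiency and tariff are illustrative assumptions only.

```python
# Illustrative running-cost comparison between a 96.5% efficient UPS and an
# older unit. Load, legacy efficiency and electricity price are assumptions.
load_kw = 15.0         # assumed IT load carried by the UPS
eff_new = 0.965        # double-conversion efficiency quoted above
eff_old = 0.92         # assumed efficiency of an ageing legacy UPS
price_per_kwh = 0.13   # assumed electricity price (GBP/kWh)
hours_per_year = 8_760

def annual_loss_cost(efficiency: float) -> float:
    """Cost of the energy dissipated by the UPS itself over a year."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * hours_per_year * price_per_kwh

saving = annual_loss_cost(eff_old) - annual_loss_cost(eff_new)
print(f"Indicative annual saving: GBP {saving:,.0f}")  # several hundred pounds here
```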
Compare costs of prefab data centres Schneider Electric has introduced a Prefabricated vs Traditional Data Centre Cost Calculator, providing users with a cost analysis for considering the best approach to deploying new IT infrastructure capacity and delivering key insights into procurement decisions. Users can also examine the effects of a combined approach, using prefabricated modules for only one, or some of the elements. Additionally, the TradeOff Tool uses trending data to devise multipliers and estimate the effects of increasing rack density, capacity and redundancy. It calculates a summary of the capital expenditure incurred via user input options and demonstrates the percentage difference. Bar charts provide a graphical overview of the equipment, design/installation and space/building costs for each approach. “Depending on the customer requirements, there are a number of advantages and disadvantages to each approach,” said Wendy Torell, senior research analyst at Schneider Electric’s Data Centre Science Centre. “With its introduction of the Prefabricated vs Traditional Data Centre Cost Calculator, Schneider Electric has simplified the task, allowing IT Professionals to model the financial implications of various deployment strategies that are accurate to within 20% or so of the costs that might be expected by choosing prefab.” Visit tinyurl.com/y2s8rbst to start using the cost comparison TradeOff Tool (schneider-electric.com) MCP October 2019
Single module, Li-ion ready UPS Piller’s newly unveiled Uniblock UBT+ 2000 UPS will be delivered VRLA and lithium-ion ready. Condensing the multiple paralleled solutions of others into a single unit, cuts cost and increases reliability in large data centre and other mission critical environments. It is highly scalable, allowing for 16 units in parallel with a total capacity of up to 32MW in one system. Furthermore, the technology allows for a complete range of system design options, from conventional low-voltage configurations to medium-voltage designs, through to the optimised efficiency and availability of leading-edge IP-bus systems able to deliver Tier III and even Tier IV compliance with simple N+1 redundancy. The Uniblock UBT+ 2000 is a high power density UPS developed from proven technology deployed across many data centres. Online efficiency ratings of 97% promise significant cost savings for operators. The capital cost per kW is reduced significantly and maintenance is kept to a minimum with its capacitor-free and parallel-free internal layout.
Real-time load data provided EkkoSense has added Rentaload load bank functionality into its EkkoSoft Critical M&E SaaS software. Customers across Europe will now be able to view real-time load data in EkkoSoft Critical, providing them with immediate insight into the impact of their real or simulated data centre loads on cooling, power and capacity performance. The solution is already in place at several data centre sites across Europe. Pierre-Luc Barbe, Rentaload’s general manager, said: “From day one we recognised the power of EkkoSoft Critical – for both real-time and simulation/risk management uses. Our customers use load banks to simulate capacity increases, and they find EkkoSoft Critical’s ability to demonstrate loads as part of their broader data centre activities a great way of optimising their risk management strategy.” missioncriticalpower.uk
Q&A
Mike Elms Centiel UK’s managing director talks about his ‘extravagant’ love of shirts, Paul Merton’s genius and why you should always listen to your mother Who would you least like to share a lift with? Any politician. The UK is a green and pleasant land and it seems the politicians are doing their best to ruin this and make us a laughing stock in Europe. You’re God for the day. What’s the first thing you do? We have lived in our village for more than 30 years and the kids have grown up and gone to school here. We know lots of parents and, in the past three years, three mums have died of breast cancer. It is a terrible disease and it affects so many people, so I’d get rid of cancer. If you could travel back in time to a period in history, what would it be? Ah, to be 18 again. But this time around, it would be so much better, with all the knowledge I have now. I left home at 17 and joined the Air Force but, in retrospect, I wish I had gone to university. Apart from that, to be transported back to the deck of the Titanic. I’ve always been fascinated by the history of that huge ship. Who are you enjoying listening to? I spend a lot of time in the car so the news on Radio 4 or 5 Live starts the day. I have an eclectic taste in music so tend to station hop. I might be tuning in to Planet Rock one minute or Scala the next. I like to listen to Just a Minute on Radio 4. Paul Merton is a genius. What unsolved mystery would you like the answers to? JFK’s assassination leaves many unanswered questions. Lee Harvey Oswald was accused, but before he was tried, Jack Ruby shot him and so we may never know if it was an organised plot to assassinate the US president or not. MCP October 2019
What would you take to a desert island? Having too little to do would soon drive me nuts – a bit like the thought of retirement. I need a sense of purpose, so I’d take a DIY boat building manual. I’d have something to work towards and eventually be able to sail off the island and get back to real life. What’s your favourite book? My two favourite books are about horses. The first is Black Beauty; I’ve read it so many times, I even remember the first line. Sea Biscuit is the other one, which is a true story about a racehorse and his jockey and trainer. They had a hard life but came back against the odds. If you could perpetuate a myth about yourself, what would it be? When I was younger (quite a lot younger) I used to play a decent standard of football. I used to play for Slough Town school boys, and we got through to the English Schools’ Football Association final. We lost to Liverpool on aggregate. At the time it was a big thing for our local town. I probably thought I was better than I actually was. What would you do with a million pounds? Sadly, a million pounds does not seem a lot these days. My wife and I are fortunate to be able to lead a comfortable life, so I’d divide the amount between my two girls so they could pay off their mortgages. What’s your greatest extravagance? If you ask my wife she would say “shirts”. At the last count I had 52. They are all different colours and designs, but my wife would say many of them look exactly the same. (I might say similar for her shoe collection!) In my defence, I tend to be drawn in by a ‘buy three for the price of two’ offer! Who doesn’t like a bargain? If you were blessed with any talent, what would your dream job be? If you’d have asked me when I was 16, I would have said fighter pilot. Those Top Gun boys are clever guys with good coordination skills and it would have been an exciting career. These days, I would probably say maths teacher. What is the best piece of advice you have ever been given? Always listen to your mother! She used to say useful things like “never a borrower or a lender be”. She was a very strong woman. I was born in Canada as my dad was in the Air Force. We moved every two years. My mum had to bring up three children and pack up and move to another country every other year. I think it made her a pretty wise person.

Occasionally in business, or in your personal life, you do come across a ‘Brian’. I am always totally mystified – and irritated – at what actually drives them
What irritates you the most in life? Those who won’t help other people, those who won’t do something for their fellow man or seem to go out of their way to make life difficult. Brian was the store manager where I worked in the air force. You’d ask Brian for a nail and he’d look you right in the eye and say he hadn’t got any left in stock knowing full well there were boxes of them in the storeroom. Occasionally in business, or in your personal life, you do come across a “Brian”. I am always totally mystified – and irritated – at what actually drives them. What should energy users be doing to help themselves in the current climate? I believe if everyone does a little, then it can make a big difference. At Centiel we are focused on creating UPS solutions that reduce environmental impact. Long term, efficient systems reduce running costs – it’s a win-win. What’s the best thing – work wise – that you did recently? Joining Centiel 18 months ago. I have worked with our chairman, David Bond, and Filippo Marbach, founder of Centiel, in the past and understand and like their philosophy. They have a responsible approach to the environment and some fantastic kit. We are all keen to leave a legacy and this drives us to manufacture and supply UPS with the highest level of availability and efficiency to ensure we leave something for the next generation. ● missioncriticalpower.uk
NEW
PROTECTION AT HEART
2MW VRLA AND LI-ION READY UPS ADDED TO PILLER RANGE
Introducing the UNIBLOCK™ UBT+2000: Piller’s new VRLA and Lithium-ion ready battery-backed UPS. Cost cutting, space saving and scalable, to protect your business in a heartbeat.
Nothing protects quite like Piller STATIC, ROTARY AND DIESEL UPS | CONTAINERISED SOLUTIONS | AIRCRAFT GROUND POWER | FREQUENCY CONVERTERS | STATIC SWITCHES PILLER GROUP GMBH AUSTRALIA | CHINA | FRANCE | GERMANY | INDIA | ITALY | SINGAPORE | SPAIN | UK | USA | GLOBAL REPRESENTATION
piller.com