DCNN Summer 2024



DCNN is the total solution at the heart of the data centre industry

Environmental monitoring experts and the AKCP partner for the UK & Eire.

How hot is your Server Room?

Contact us for a FREE site survey or online demo to learn more about our industry leading environmental monitoring solutions with Ethernet and WiFi connectivity, over 20 sensor options for temperature, humidity, water leakage, airflow, AC and DC power, a 5 year warranty and automated email and SMS text alerts.

projects@serverroomenvironments.co.uk

DESPITE AI FLOURISHING, CAUTION IS URGED

Welcome to the Summer issue of DCNN – which is my very first as Editor of the title!

Having previously served as Editor of one of DCNN's sister titles, Electrical Contracting News, it's a pleasure to be back with the company, this time focusing on all things data centres, networks and beyond. In the three months since I took charge of DCNN, the level and speed of innovation, and the potential for advancements and greater efficiencies, have become abundantly clear. Nowhere has this been more apparent than with AI, a rapidly evolving technology that is having a major impact not only on the data centre sector but on nearly all areas of business. Despite the multitude of advantages it may offer (many of which are covered across the articles in this edition of the magazine), there are individuals within the sector urging caution too, perhaps understandably so. In particular, SELECT's new President, Mike Stark, is concerned that, with the build and development of new data centres, there's going to come a time when the National Grid will struggle to support the demand.

CONTACT US

EDITOR: SIMON ROWLEY

T: 01634 673163

E: simon@allthingsmedialtd.com

GROUP EDITOR: CARLY WELLER

T: 01634 673163

E: carly@allthingsmedialtd.com

GROUP ADVERTISEMENT MANAGER: KELLY BYNE

T: 01634 673163

E: kelly@allthingsmedialtd.com

SALES DIRECTOR: IAN KITCHENER

T: 01634 673163

E: ian@allthingsmedialtd.com


You can read his full views in our news story on page 8, but Mike is far from the only voice raising concern about the role of AI, as you'll see across many of the other features throughout the issue. As we're still at such an early stage of adoption, it seems prudent to take a planned and calculated approach rather than rushing in too hastily.

It's been a pleasure getting out and about and meeting many of you in person, perhaps most notably at the recent Infosecurity Europe event at ExCeL London at the beginning of June, and I'm excited to see many of you at the various dates in my diary throughout the summer months.

Enjoy the issue, and should you wish to get involved in our Autumn edition – either editorially, commercially or both – please feel free to get in touch!

STUDIO: MARK WELLER

T: 01634 673163

E: mark@allthingsmedialtd.com

MANAGING DIRECTOR: DAVID KITCHENER

T: 01634 673163

E: david@allthingsmedialtd.com

ACCOUNTS

T: 01634 673163

E: susan@allthingsmedialtd.com

22 Ed Haslett of Zumtobel Group discusses the importance of rethinking data centre emergency lighting and the impact this could have on life-critical systems

Andrew Skelton of Centiel explores the company’s new UPS hire solution and explains why a containerised offering of this nature can prove advantageous

Michael Akinla of Panduit explores why network information is essential for efficiency, safety and security in our digitalised world

DCNN speaks with J.J. Kardwell, CEO of Vultr, about the rise of GPU-as-a-Service, the role of data centres in the AI ecosystem, J.J.’s beginnings in the industry, and more

Andreas Rüsseler of R&M looks at the latest trends from the company’s most

26 Tate Cantrell of Verne discusses the impact of AI on the data centre industry while assessing the ways to balance innovation with sustainability

29 DCNN looks at the success Danfoss had in helping iXora in its development of a liquid cooling system specific to the requirement of typical data centre operations

32 Julien Neimard of Controlit Factory looks at the best maintenance strategies for the protection of data centre roofs and explains the role of electronic leak detection

35 Ajay Kareer of Harting discusses the importance of reliable and energy efficient connectivity solutions for data centres

38 David Gammie of iomart explores the colocation options currently available and assesses the emerging trends and opportunities

42 Nick Layzell of Telehouse Europe assesses why embracing colocation represents a pivotal strategic shift away from the ‘cloud-first’ mentality

45 Arpen Tucket of Vantage Data Centers explains why leveraging hybrid cloud could help provide a colocation solution that provides the best of both worlds

48 Paul Mellon at Stellium Datacenters explains why critical cable landing station infrastructure should be equally as secure as modern data centres

51 Cathy Chu of Dow explains how single-phase immersion cooling and silicone can enhance edge computing operations

56 Mark Lewis of Pulsant looks at the five best practices for combining edge and IoT into a cohesive strategy

59 Kevin Hilscher of DigiCert explores the elusive goal of IoT security and explains how organisations can securely manage their IoT devices

SWITCH DATACENTERS COMMENCES OPERATIONS OF AMS4

Switch Datacenters, a developer and operator of sustainable data centres, has recently unveiled its latest 15-18MW facility in Amsterdam, AMS4, which is expanding local capabilities through modular design, sustainability and robustness.

By aligning with the power capacity, density and sustainability needs of both clients and the local community, the company has shown considerable capacity to expand. On top of the newly opened facility, Switch Datacenters has over 200MW of new capacity in and around Amsterdam currently in development. By adding this capacity, it will become one of the largest wholesale players in Amsterdam, which is the third largest data centre market in EMEA.

AMS4 is built on the foundation of an existing logistics building at Amsterdam Science Park, a move which has allowed Switch Datacenters to achieve a construction timeline of just 22 months while reducing Scope 3 emissions by approximately 50%.

“Our new data centre sets a new standard for sustainability in the industry by redeveloping existing industrial buildings, while also being designed to deliver heat to the local community,” says Gregor Snip, CEO of Switch Datacenters.

“AMS4 exemplifies our dedication to making data centres more environmentally-friendly. We are contributing to local and national needs in terms of moving away from fossil fuels and delivering excess heat for use by domestic units.”

AMS4 is fully operational and services a diverse group of clients.

AGGREKO HIGHLIGHTS IMPORTANCE OF BRIDGING POWER

Following proposals from the Labour Party to build new data centres on greenbelt land, Aggreko is highlighting the importance of bridging power in the successful delivery of new projects.

Peter Kyle, the Shadow Secretary of State for Science, Innovation, and Technology, has proposed easing planning requirements for data centres in an effort to boost the nation’s capacity for cloud computing and AI.

According to industry experts, this will likely invite far more applications for data centres on the greenbelt. However, with the National Grid already under significant strain and less developed in these areas, Billy Durie, Global Sector Head of Data Centres at Aggreko, is calling attention to the role bridging power will have to play in ensuring the scheme’s success.

Billy says, “Plans to expand data centre construction out to the greenbelt are likely to encourage the development of the UK’s cloud computing and AI industries, but will not be without their challenges.

“The wait for a grid connection on a new data centre build can often be multiple years, stretching into decades in extreme cases. Exacerbating this further is the lack of developed power infrastructure on greenbelt land, which will require significant time and investment to support the requirements of a high-energy user such as a data centre.

“This is where bridging power comes into play. Decentralised energy solutions, totally independent of the grid, can support construction and commissioning before a mains connection is established, allowing the build to go ahead early.”

Aggreko, aggreko.com

Switch Datacenters, switchdatacenters.com

JOHNSON CONTROLS FORMS GLOBAL DATA CENTRE SOLUTIONS

Johnson Controls, a global provider of smart building technologies, has announced the creation of a dedicated Global Data Centre Solutions organisation which is designed to provide integrated solutions to data centre customers around the world in support of the company’s business segments.

Todd Grabowski, President, Global Data Centre Solutions, will lead the new organisation, reporting directly to Chairman and CEO, George Oliver.

“Over the last few years, we have been investing and building momentum in the data centre market to establish Johnson Controls’ leading position,” George comments. “It is clear our offering is resonating with customers, and we are now taking further steps to capture the growth opportunity ahead of us. Todd and his team will prioritise offering our full suite of smart building technologies – coupling our unique set of energy-efficient, sustainable and safe data centre solutions with unmatched service to meet increasing demand and drive Johnson Controls’ continued growth and value creation.”

Johnson Controls' products are already widely used in data centres across the world. The company's portfolio of integrated solutions helps to minimise costs, maximise efficiency and optimise timing for data centre owners. It is well-positioned to capitalise on rapidly increasing demand in the emerging data centre market due to its innovation efforts and inherent strategic advantages, the company notes.

Johnson Controls, johnsoncontrols.co.uk

EUROPEAN DATA CENTRE POWER CAPACITY TO RISE BY 2027

According to recent research by Savills, European data centre power capacity is projected to rise to approximately 13,100MW by 2027, reflecting a 21% increase. Similarly, total international bandwidth usage in Europe is predicted to grow at a 31% compound annual growth rate (CAGR) through 2030.

The growth of the data centre market is expected to be evenly spread across Europe, Savills says, primarily driven by the significant impact of AI on the industry. The European AI market is forecasted to grow at a robust annual rate of 15.9% (CAGR 2024-2030), serving as a key driver for the surge in data centre demand.

According to the international real estate advisor, 94 new European data centre projects are due to be delivered in the next four years, totalling approximately 2,800MW.

Scott Newcombe, EMEA Head of Data Centres at Savills, comments: “Despite the high number of new data centres anticipated to be built by 2027, the market is expected to remain largely undersupplied across Europe. Given the projected expansion of internet bandwidth usage, European data centre capacity needs to triple by 2027 and reach around 22,700MW in power, so there remains a significant supply/demand gap.

“We believe that prime yields, currently standing at 5-6% on the continent, will remain stable for most of the year, with a slight inward movement towards the end of the year as market dynamics evolve.”

Furthermore, between 2022 and 2023, construction costs for a data centre increased by 6.5%, reaching $9.1 million (£7.2m) per MW on average across Europe, according to Turner & Townsend. Zurich is Europe’s most expensive market for developing a new data centre and the second most expensive in the world, after Tokyo; while London is the second most expensive market in Europe, followed by Frankfurt.
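Taken together, the Savills and Turner & Townsend figures give a sense of the scale of investment implied. The back-of-envelope sketch below uses only the numbers quoted above and is illustrative rather than part of either firm's analysis.

```python
# Back-of-envelope sketch using the figures quoted above (illustrative only).
projected_capacity_2027_mw = 13_100   # Savills projection for 2027
required_capacity_2027_mw = 22_700    # capacity Savills says is needed by 2027
cost_per_mw_usd = 9.1e6               # Turner & Townsend average build cost per MW

gap_mw = required_capacity_2027_mw - projected_capacity_2027_mw
implied_build_cost_usd = gap_mw * cost_per_mw_usd

print(f"Supply/demand gap: {gap_mw:,} MW")                                 # 9,600 MW
print(f"Implied construction cost: ${implied_build_cost_usd / 1e9:.1f}bn")  # roughly $87bn
```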

Savills, savills.co.uk

SELECT WARNS ABOUT DEMAND FOR ELECTRICITY FROM POWER-HUNGRY AI

SELECT’s new President has warned that the demands on the electrical network to power AI may become unsustainable as it becomes a growing part of society. Mike Stark, who took over the association in June, says the UK’s National Grid could struggle to satisfy the voracious energy needs of AI and the systems it supports.

The 62-year-old, who is Director of Data Cabling and Networks at Member firm OCS M&E Services, joins a growing number of experts who have warned about the new technology's huge appetite for electricity, which is often greater than many small countries use in a year. Specifically, he questions whether the UK's current electrical infrastructure is fit for purpose in the face of the massive increase in predicted demand, not only from the power-hungry data centres supporting AI, but also from the continued rise in electric vehicle (EV) charging units.

Mike says, “AI is becoming more embedded in our everyday lives, from digital assistants and chatbots helping us on websites to navigation apps and autocorrect on our mobile phones. And it is going to become even more prevalent in the near future.

“Data centres, which have many servers as their main components, need electrical power to survive. It is therefore only natural that any talk about building a data centre should begin with figuring out the electrical needs and how to satisfy those power requirements.

“At present, the UK’s National Grid appears to be holding its own, with current increases being met with renewable energy systems. But as technology advances and systems such as AI are introduced, there will be a time when the grid will struggle to support the demand.”

It is estimated that there could be 1.5 million AI servers by 2027. Running at full capacity, these would consume between 85 and 134TWh per year – roughly equivalent to the current energy demands of countries like the Netherlands and Sweden.
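Those headline figures imply a plausible per-server power draw, which can be checked with simple arithmetic. The sketch below uses only the numbers quoted above and is illustrative, not part of SELECT's analysis.

```python
# Rough per-server power implied by the figures above (illustrative only).
servers = 1_500_000                     # estimated AI servers by 2027
annual_twh_low, annual_twh_high = 85, 134
hours_per_year = 8760

def avg_power_kw(annual_twh):
    """Average continuous draw per server, in kW, at full utilisation."""
    return annual_twh * 1e9 / (servers * hours_per_year)  # TWh -> kWh, then / server-hours

print(f"{avg_power_kw(annual_twh_low):.1f} kW to {avg_power_kw(annual_twh_high):.1f} kW per server")
# Roughly 6.5 kW to 10.2 kW per server - consistent with multi-GPU AI servers.
```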

SELECT, select.org.uk

VIAVI PUBLISHES LATEST STATE OF THE NETWORK STUDY

VIAVI Solutions has published its 2024/25 State of the Network study in partnership with Enterprise Strategy Group (ESG). The research, involving 754 respondents from 10 countries, focuses on the evolution of network performance and security tools over the past 16 years, assessing their impact on the observability and security posture of enterprise organisations.

According to the report, organisations with a formal observability strategy are 3.5 times more likely to detect disruptive incidents quickly compared to those without such a strategy. This approach not only shortens incident detection times but also brings additional benefits, such as enhanced security (83%), faster product/service advancements (82%), and improved compliance (78%).

Network observability provides deep insights into network behaviour, performance and health by collecting, analysing and presenting data, enabling administrators to understand and manage the network in real time. True network observability embraces and leverages all network data sets, including flow data, packet data and metrics.

The report also underscores the critical need for Continuous Threat Exposure Management (CTEM), with 88% of organisations highlighting an urgent need to improve their threat management capabilities.

“Organisations are increasingly recognising the transformative impact of observability on network management and security,” says Chris Labac, Vice President and General Manager, Network Performance and Threat Solutions, VIAVI. “This report demonstrates a clear trend toward network observability, not only as a way of enhancing security, achieving compliance objectives, and detecting incidents, but as a key driver of business.”

VIAVI Solutions, viavisolutions.com

“With all components accessible from the front and no requirement for rear/side access, our UPS systems take up less space, providing a high-power density in a small footprint.”

BOXING CLEVER

Andrew Skelton, Operations Director at Centiel, explores the company’s new UPS hire solution and explains why a containerised offering of this nature can prove hugely advantageous.

Shipping containers have been used to house UPS systems for many years. The military was quick to recognise the advantages of a self-contained, purpose-built, secure source of power protection that could easily be delivered onto the side of a mountain if necessary! These days, containerised solutions are increasingly used as a temporary option for data centres, available to hire on a short or longer-term basis for facilities needing instant power protection.

Centiel's new UPS hire solution can be arranged to suit specific needs at very short notice. Subject to availability, the company currently has two 600kW and one 300kW containerised, flexible UPS solutions ready to deploy within 48 hours, and it is adding to the fleet. Centiel can also parallel them together for sites requiring up to 1.5MW of backup power. The containers can be delivered onto suitably rated hard standing areas to suit disaster recovery situations. They are an ideal temporary solution for data centres, able to provide critical power at short notice, or an economical option to support more planned projects.

Centiel also offers containerised UPS solutions to purchase for permanent installations. They can be used both inside and outside buildings to rapidly deploy self-contained, bespoke, mini data centres which can be added to as required like ‘Lego blocks’, ideal where space is at a premium.

Containers also have the advantage of being secure structures which can easily be fenced off. They can be painted to blend in with the surroundings or, alternatively, wrapped with company logos and branding to stand out.

Even with modification, containers are significantly cheaper to purchase than developing a brand-new building to house the required equipment. They do need air conditioning due to the tight space; however, this requirement is often smaller than that of a heated building.

Siting the container close to the area which needs backup power is usually straightforward and reduces the cost of cabling. They can also be a great option to extend a facility in a remote or awkward location.

Centiel delivers its containerised UPS fully tested with batteries already charged. The company simply delivers the bespoke containerised UPS solution into position, installs the top row of batteries and connects the AC input and output cabling via Powerlock connections, so the system is typically up and running within six hours of delivery.

Centiel offers flexible deployment of its industry-leading UPS solutions from its standalone and modular ranges from 10kW to 1.5MW. The company also has containerised modular UPS solutions for larger projects between 50kW and 1.5MW. The containers are fully supplied with electrical installation, integral lighting, fire detection and suppression, cooling, batteries and 24/7/365 support contracts with guaranteed site attendance. This makes them suitable as a full 'plug and play' option, for facilities needing to back up their existing UPS while refurbishment or other works take place, or for permanent installations.

The flexible nature of Centiel's modular UPS systems takes full advantage of floor-to-ceiling space. With all components accessible from the front and no requirement for rear or side access, the company's UPS systems take up less space, providing a high power density in a small footprint. Centiel can provide a 600kW UPS with 10 minutes of battery run time in a 20ft container, and a 300kW UPS with the same autonomy in a 10ft container.
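As a rough indication of what that autonomy implies for battery sizing, the simplified sketch below converts load and run time into usable battery energy. It ignores inverter efficiency, ageing and end-of-discharge margins, and is not a Centiel specification.

```python
# Simplified battery energy estimate for the containerised figures quoted above.
# Ignores inverter efficiency, battery ageing and end-of-discharge margins.
def battery_energy_kwh(load_kw, autonomy_minutes):
    return load_kw * autonomy_minutes / 60

for load_kw, container in ((600, "20ft"), (300, "10ft")):
    energy = battery_energy_kwh(load_kw, autonomy_minutes=10)
    print(f"{load_kw}kW for 10 minutes in a {container} container -> ~{energy:.0f} kWh usable battery energy")
# 600kW -> ~100 kWh, 300kW -> ~50 kWh, before any design margins are applied.
```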

Because Centiel's UPS have front access, they can be situated close to the wall. Smart airflow management, coupled with forced air cooling, creates an enclosed 'hot aisle' from which hot air can be expelled from the container easily, reducing the need for air conditioning.

Centiel's containerised UPS solutions are available for its standalone UPS, PremiumTower, and the award-winning, three-phase true modular UPS, CumulusPower, which offers '9 nines' (99.9999999%) availability to effectively eliminate system downtime; class-leading 97.1% online efficiency to minimise running costs; and true 'hot swap' modules to eliminate human error in operation.

Containerised UPS now offer customisable, cost-effective and rapidly deployable power protection for data centres. They are an ideal temporary solution for facilities needing to hire critical power protection at short notice, or an economical option to purchase for more planned or permanent projects.

Centiel now protects critical loads for data centres and comms rooms in over 100 countries across five continents.

Centiel, centiel.co.uk

Future-proof your data centre and experience tomorrow’s power protection technology today

Centiel's StratusPower™ is the ultimate power protection solution for today's dynamic data centre environment, offering unmatched availability and reliability to support operations and business continuity and minimise the risk of downtime.

Our innovative DARA design delivers unparalleled scalability, eliminates single points of failure, and provides a fault-tolerant architecture. From compact 10kW modules to robust 625kW options, the UPS meets a range of power requirements with the ability to scale up to an impressive 3.75MW.

NETWORKING OUR WAY TO THE FUTURE

Michael Akinla, Business Manager, Panduit EMEA and Ireland, explores why network information is essential for efficiency, safety and security in our digitalised world.

AUTOMATED NETWORK MAPPING INCREASES THE BENEFIT OF IIM

Intelligent Infrastructure Management (IIM) is increasingly essential as digitalisation has driven the capability to track, trace and continuously monitor networks and their surroundings. Today, there is the opportunity to choose from comprehensive suites of hardware, modular software, and turnkey services from a range of suppliers – some providing single solutions, others a full suite of capabilities.

To meet customers' data requirements and ensure networks deliver the expected performance – high bandwidth, availability and uninterruptable communications – real-time, end-to-end network monitoring needs to be in place. To increase customer confidence in its capabilities, this should include at least some provision for automatic and autonomous digital intervention and reporting.

Increasing utilisation of hybrid digital infrastructure to meet this requirement is creating a massive growth in the application sector. According to BlueWeave Consulting, the global IIM/DCIM market will surpass $3 billion (£2.36bn) with a CAGR of 8.6% by 2028.

Most IIM suppliers are driven to improve operational and performance management through a range of functionality including automated device discovery, visualisation, analytics, and actionable intelligence. Today’s best solutions have been developed to ensure end users (administrators, engineering, management, and customers) receive real-time data to maintain optimum performance and generate alerts to flag possible imminent faults or future security risks.

Today's systems offer cloud-native connectivity and are highly scalable, which has added a further dimension to the resource monitoring previously available. They track infrastructure data, providing essential, relevant information to authorised users anywhere in the world with an internet connection. Comprehensive capabilities such as vendor-neutral, agentless auto-discovery of assets ease installation and have greatly reduced equipment set-up time. These capabilities offer customisable reports and dashboards, which allow engineers, facilities management, operators, and customers to gain live updates and pre-empt critical events.

All facilities, whether enterprise, colocation or wholesale, are built around the four pillars of power, space, cooling and connectivity. Therefore, current network solutions need to offer not only real-time, integrated and accurate data, but also enhanced data intelligence that concurrently supports IT infrastructure, operations and facilities management.

ON-SITE, THE LITTLE THINGS COUNT

Few network mapping tool suppliers are also innovators in the realm of physical infrastructure for data centres and enterprise. Fewer still undertake the research and development of network infrastructure, including copper and fibre cabling, connectors, cable pathways, cabinets, hot and cold enclosure systems, UPS and PDU, and other connectivity systems. Partnering with such an organisation guarantees that customers benefit from developments based on long-term, ground-up experience rather than theory or second-hand guidance.

Network Mapping is the process of compiling the location of devices and connections of an IT environment and generating a logical and visual representation of the network. It is essential that the presentation format is easy to compile and understand while providing a simple layout to follow the interconnections from point-to-point.
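In data terms, a network map of this kind is essentially a graph of devices (nodes) and cable links (edges). The minimal sketch below shows one way such a map might be represented and traced; the device, port and cable names are hypothetical examples, not Panduit's data model.

```python
# Minimal sketch of a network map as a graph of devices and cable links.
# Device names, ports and cable IDs are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    cable_id: str        # e.g. the barcode on a pre-labelled patch cord
    a_device: str
    a_port: str
    b_device: str
    b_port: str

links = [
    Link("PC-0001", "switch-01", "gi1/0/1", "patch-panel-A", "port-12"),
    Link("PC-0002", "patch-panel-A", "port-12", "server-42", "eth0"),
]

def trace(device: str, port: str):
    """Follow a point-to-point connection from a given device port."""
    for link in links:
        if (link.a_device, link.a_port) == (device, port):
            return link.b_device, link.b_port
        if (link.b_device, link.b_port) == (device, port):
            return link.a_device, link.a_port
    return None

print(trace("switch-01", "gi1/0/1"))   # ('patch-panel-A', 'port-12')
```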

For example, a recent development to support network mapping capability, and to automate the labour-intensive and error-prone cable documentation process, is the implementation of pre-labelled patch cords which, when used in conjunction with a handheld Bluetooth rapid ID scanner, allow network engineers to quickly, easily and more accurately place and trace cables. The data assigned to these connections is uploaded in real time directly into the system.

Accurate physical infrastructure documentation can drastically reduce downtime during an outage. However, documenting physical infrastructure can be extremely time-consuming, and over time moves, adds and changes (MACs) introduce possible errors in updates, so network maps can often be out of date. It is estimated that this new process will reduce the time and cost of patch cord documentation by up to 50%, while guaranteeing accurate connectivity mapping.

Network mapping is not a one-off activity. As the network evolves, MACs must be recorded and those changes reflected in the network plan. With increasing network densification, physical layer changes are resulting in much higher-density cable concentrations, which can mean thousands of additional fibre and copper connections that need to be managed across the enterprise or data centre. Each connection is a potential point of failure, so it is essential that each is discoverable on the network connectivity map and provides active, up-to-date data.

SPEED OF CHANGE

The speed of technological advancement and the growth of access to multi-venue data connections are making network management more complex and customer data increasingly interdependent. This pace of change makes it increasingly important to have access to tools and systems that provide the monitoring and oversight relied on for accurate network optimisation.

Understanding the data centre network infrastructure layout offers greater opportunities to address the challenges of how to migrate to higher data speeds and cable density whilst retaining control on costs.

Exponential bandwidth growth within enterprise and data centres, and the imminent explosion of edge data centre traffic, firmly place fibre densification at the forefront of network design, planning and implementation decision making. This only highlights the requirement for real-time infrastructure mapping to monitor the organisational and customer needs within the data centre.

As the value of each fibre circuit rises exponentially and networks continue to densify, data centre managers who can demonstrate greater scalability, agility and resilience will gain a business advantage.

Real-time network maps are an indispensable resource, especially when conducting performance monitoring. Visual diagrams are very useful in identifying performance chokepoints and can highlight opportunities for improvement. This is true for internal administrators and customers with access to granular level data now available with these solutions.

Panduit's IIM is a cloud-enabled suite of solutions providing a comprehensive range of instrumented hardware, modularised software and turnkey services. It delivers real-time, globally available operational and performance management through automated device discovery, analytics, visualisation and actionable intelligence, enabling capacity, event and change management for everything from small individual sites to multi-site global requirements.

Panduit, panduit.com

The RapidID™ Network Mapping System reduces the time and cost of patch cord documentation by up to 50%.

By using pre-labelled Panduit patch cords and the RapidID Bluetooth-enabled handheld scanner, network engineers can quickly, easily, and more accurately place and trace cables.

The Network Mapping System automates the labour intensive and error-prone cable documentation process to reduce the risk of a network outage. With RapidID, the painstaking labelling process is already done. Additionally, RapidID is a practical alternative to traditional manual approaches and is ideally suited for building a new telecom room, locating installed cabling, or replacing a network switch.

“RapidID is a game-changer for any network engineer,” stated Stuart McKay, Sr Business Development Manager EMEA. “We offer our customers an innovative way to eliminate the pain points around patch cord labelling and documentation for network systems.”

RapidID uses patch cords pre-labelled with unique barcodes and a Bluetooth®-enabled handheld scanner to automate labelling, tracing and troubleshooting in three easy steps.

INSTALL

Panduit cables pre-labelled with unique barcode labels.

DOWNLOAD

The mobile app from iOS or Android app stores to a tablet device.

SCAN

Barcodes using the Bluetooth-enabled handheld scanner.

A JOURNEY INTO THE CLOUD

In our latest interview, DCNN Editor, Simon Rowley, speaks with J.J. Kardwell, CEO of Vultr, a global cloud hosting provider founded in 2014. During our discussion, we focus on the rise of GPU-as-a-Service, the role of data centres in the AI ecosystem, J.J.’s beginnings in the industry, and more.

SR: Hi J.J. Can you tell us about yourself and how you got into this sector?

JK: After a couple of years of working at Walt Disney, I spent more than a decade working in venture capital and private equity. In my last few years as an investor, I ran the digital infrastructure team at a large growth equity firm. At the time, we were focused on data centres, fibre networks, wireless towers, and first-generation hosting and managed services companies.

Through the course of that work, I met David Aninowsky, the founder and executive chairman of Vultr, which had been building an independent Infrastructure-as-a-Service (IaaS) company since 2001. It was clear from our first meeting that David and the company were doing something very different from the rest of the market, with a focus on building the most user-focused and efficient IaaS platform in the industry. I was so impressed that we tried to buy the company from David, but David had a much larger vision for what the company could become and wisely chose to stay independent.

I later left private equity to become an operator, and in 2014 I co-founded an AI company called EverString, which automated machine learning modelling to help B2B companies prioritise every potential prospect, regardless of whether they were in that company’s pipeline. We ultimately used our capabilities in deep learning and NLP to build the most comprehensive and accurate company data platform, and the business was acquired by ZoomInfo in 2020.

Dave and I had been talking for a while about a way to work together, and I joined Vultr as CEO in the same month that we sold EverString. Vultr was the perfect platform to address the challenges I had experienced first-hand as an AWS customer at EverString, where I had learned three key things: the transformational business impact of GPU-accelerated infrastructure, the importance of providing a great customer experience, and the need for democratising access to cloud infrastructure.

SR: For those who may not be aware, can you give us an overview of the work that Vultr does?

JK: Vultr was founded in 2014 and was built by developers for developers. Vultr recognised the challenges faced by cloud users due to complex, overpriced cloud platforms that limit choice and slow innovation. Since day one, our mission has been to expand and democratise access to high-performance cloud infrastructure. Strategically located in 32 data centres around the world – including London and Manchester – we are the world’s largest independent cloud computing platform, offering composable, easily scalable access to the latest full-stack cloud computing, storage and management technologies.

Our simple, intuitive control panel and API-first automation enable users to spin up infrastructure on demand and build, test and deploy applications in a few clicks – without complex configurations or paying for services that don’t fit their needs or budget. We also deliver unrivalled price-to-performance, offering options for every workload and billing customers only for what they use. And we stand behind our service, guaranteeing 100% uptime across our network.
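As an illustration of what API-first automation looks like in practice, the sketch below creates a cloud instance through Vultr's public v2 REST API. The region, plan and OS values are examples only and should be checked against the current API documentation; this is not code taken from the interview.

```python
# Illustrative sketch of spinning up an instance via Vultr's public v2 API.
# The region, plan and os_id values are examples; confirm them against the
# current documentation (e.g. the /v2/plans and /v2/os listings) before use.
import os
import requests

API_KEY = os.environ["VULTR_API_KEY"]            # personal API key, kept out of source
headers = {"Authorization": f"Bearer {API_KEY}"}

payload = {
    "region": "lhr",          # example: London
    "plan": "vc2-1c-1gb",     # example: small general-purpose plan
    "os_id": 1743,            # example OS ID; look up the desired image via /v2/os
    "label": "dcnn-demo",
}

resp = requests.post("https://api.vultr.com/v2/instances", json=payload, headers=headers)
resp.raise_for_status()
print(resp.json())            # details of the newly created instance
```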

SR: What is your current role like, and what does the normal working day consist of?

JK: Vultr is an efficiency-focused company, and we are focused on customers and delivering outcomes. I spend a lot of time talking with customers, both large and small. Being connected to our users is the lifeblood of our business, since many of their needs are poorly met by the hyperscalers. This connectedness to our customers and the broader developer community is what makes Vultr different.

SR: What do you think about the rise of GPU-as-a-Service, and how do you feel it will affect the industry?

JK: The economics of cloud GPU did not work until the last couple of years, primarily because of the extremely steep ramp in performance between product generations, combined with the limited absolute capabilities of GPUs before 2020. Until very recently, essentially all GPUs were purchased outright by end users and deployed on premise or in colocation. With the massive performance gains and the price-to-performance tipping point of the most recent generations of GPUs, cloud maths became viable, and the cloud GPU market was born.

The availability of GPU-as-a-Service enables AI users and developers to access GPUs with the same flexibility as cloud computing. This means that businesses of all sizes and developers everywhere can access portions of these systems or large-scale clusters for hours, days, months or years. They benefit from the fundamental value of the cloud, which is to pay for what you use. The performance of the latest GPUs and their availability through a cloud delivery model is truly a transformational moment for businesses and developers worldwide.

SR: What is the role of data centres in the AI ecosystem, in your view?

JK: Supply chain constraints for GPUs have dominated a lot of the headlines, but access to space and power in data centres is going to be just as important a bottleneck for the deployment of AI infrastructure over the next few years. The 18-month lead times for generators, and the well-documented challenges with expanding power distribution in many parts of the world, make scaling this critical layer of infrastructure more of a challenge than most people realise.

The rapid growth in demand for AI infrastructure came on the heels of a time when the data centre market was barely recovering from the global supply chain shocks created by COVID-19. This is why Vultr is working so closely, and planning so far into the future, with the most reliable and forward-looking data centre operators globally. A key to enabling large-scale AI infrastructure deployments will continue to be close collaboration between cloud computing technology platforms, like Vultr, and data centre providers with the ability to consistently and reliably scale space and power.

The rapid growth in AI-infused applications will also elevate the importance of delivering inference at the edge globally, making those platforms with access to globally distributed cloud data centre capacity even more valuable. Just like in other industries where latency impacts user experience, real-time AI capabilities will need to be made available around the world where businesses and end users are located.

While companies are increasingly leveraging AI, some organisations still lack a strong edge strategy – much less an AI edge strategy. Organisations will need to embrace the concept of training centrally, inferring locally, and deploying globally. In this case, serving inference at the edge requires organisations to have a distributed GPU stack to train and fine-tune models against localised data sets. Once fine-tuned, the models are then deployed globally across data centres while complying with local data governance and privacy regulations.

SR: Do you have any advice for organisations on this upcoming change?

JK: Generative AI requires the power of GPUs, but GPUs are often difficult to access for many companies due to high costs and supply constraints. Much of the cost challenge is driven by the egregious pricing model of the Big Tech clouds.

When it comes to operationalising AI across the enterprise, organisations should look to GPU-focused independent cloud infrastructure companies rather than the hyperscalers to maximise flexibility and cost efficiency, and ensure access to the best-of-breed software stack to get the most out of the underlying hardware.

In taking this approach, companies can avoid unnecessarily complex services, inflexibility that limits customisation, and vendor lock-in that makes it difficult to move workloads to the most efficient and performant environments.

SR: What are some of the proudest achievements of your career so far?

JK: I’m most proud of what we have built at Vultr for users around the world, as we are fundamentally changing the availability of cloud infrastructure globally. We’ve brought together the most impactful capabilities and an innovative team to serve the global market at a level that no other independent cloud computing company is doing. My prior company was built using a Big Tech cloud, and as a customer and entrepreneur it was an awful experience. I came to Vultr because it delivers cloud infrastructure in a customer-first way and, given my prior experience, this is a very personal mission for me.

SR: Finally, what are some of your interests away from work?

JK: I spend most of my free time with my three daughters, and I enjoy seeing them pursuing their interests ranging from sports to art. I also work out and bike for exercise.

However, the majority of my time is focused on building Vultr’s capabilities for our customers in 185 countries around the world. Bringing Vultr’s capabilities to an ever-growing part of the world is not only my job – it’s my hobby.

Vultr, vultr.com


DATA CENTRES: LIGHTING THE WAY FORWARD

Ed Haslett, Divisional Director – Critical Facilities UK & Ireland at Zumtobel Group, discusses the importance of rethinking data centre emergency lighting and the impact this could potentially have on the reliability of life-critical systems.

Amidst the complexity of cooling systems and IT infrastructure, life-critical emergency lighting systems often remain an afterthought when fitting out a data centre. Yet strategic planning from the outset can profoundly impact reliability, operational efficiency and, most importantly, safety.

The biggest question is whether to opt for a centralised power system. With cooling accounting for approximately 40% of a data centre’s energy consumption, heat and overheating are major concerns. Efficiency is not only pivotal for profitability but also for mitigating the costs associated with downtime due to overheating.

Moreover, the higher ambient temperature in data centres can also adversely affect emergency lighting systems, compounding data centre operators’ challenges. But let’s take a step back and look at the role of emergency lighting to understand the gravity of this predicament.

THE ROLE OF EMERGENCY LIGHTING

When standard lighting fails, emergency lighting is the lifeline that provides sufficient illumination to give orientation and efficiently light an escape route. It prevents panic and ensures that other safety equipment can be found immediately. In a nutshell, it can be lifesaving. Emergency lights are designed to activate immediately when the electricity supply fails; the batteries take over and power luminaires for one, three or eight hours.

WHY TRADITIONAL EMERGENCY LIGHTING SYSTEMS ARE CHALLENGING

In the demanding environment of data centres, ambient temperatures in hot aisles often range from 35-45°C. Here, the operational limitations of integral battery emergency luminaires become evident. The specified operating temperatures for many of these conventional emergency luminaires typically cap at 25-30°C, highlighting a substantial performance gap.

“With data centres trying to optimise floor space and pack in more servers, temperatures are rising”

Manufacturers may sometimes indicate higher operating temperatures, but it is crucial to distinguish between the case temperature of the emergency kit itself and the tested operating temperature within a luminaire.

The elevated ambient temperatures pose a significant challenge for the integral batteries within emergency luminaires.

UNDERSTANDING AMBIENT TEMPERATURE

Ta, or ambient temperature, is a critical factor influencing the performance of emergency lighting systems. In the confined spaces of a luminaire, the Ta inside the fitting is affected by various heat sources, including the driver, emergency converter, charging batteries, and the LED board. With data centres trying to optimise floor space and pack in more servers, temperatures are rising; the Ta inside luminaires can escalate rapidly, surpassing the operational limits of integral battery systems.

THE CASE FOR CENTRALISED POWER SYSTEMS

Centralised emergency power systems offer a compelling solution to this challenge. These systems ensure consistent operation even in extreme (higher temperature) conditions by eliminating reliance on integral batteries. Moreover, they reduce maintenance demands and costs associated with frequent replacements. The white space, especially, is often a highly secure area within the data centre building, so limiting access – unless it’s absolutely necessary to interfere with it – is in the client’s best interest.

Having the busy data centre and facilities management teams repeatedly return to areas to maintain the life safety systems and change out emergency lighting components is not only a poor use of their time, but it also introduces more visits to these highly sensitive spaces. A central battery system centralises the components so that, through good design practice, they can be located in areas of lower temperature and away from the more sensitive spaces. A bonus is that these systems' battery lives can be guaranteed for longer than those of integral battery options.

BEST PRACTICES FOR DESIGN AND IMPLEMENTATION

A best practice design comprising LED lighting technology, an intelligent lighting control system and a central power system (CPS) to support a dedicated emergency lighting system will positively impact energy usage. In addition, it will reduce associated manual maintenance costs whilst creating a safer, more flexible lighting solution that can be quickly and easily adapted to suit changing requirements.

Emergency central battery systems using dedicated emergency luminaires can reduce emergency lighting loads and the number of emergency lighting circuits. This leads to a more cost-effective initial installation and an easier system to maintain long term. As a general rule of thumb, once more than 120 emergency luminaires are present on a project, the DC central battery system often pays for itself.

A tailored lighting strategy based on cooling topologies within data centres is advised. For hot aisles, a Ta 45-rated lighting system with dedicated emergency luminaires fed from a central battery system is recommended. In cold aisles and circulation areas, a high Ta-rated luminaire will not be required, but a coordinating solution can be applied.
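That 120-luminaire rule of thumb is essentially a break-even comparison between a central battery system's higher fixed cost and the higher per-fitting cost of integral-battery luminaires. The sketch below illustrates the calculation; the cost figures are placeholders chosen to be consistent with the rule of thumb, not Zumtobel pricing.

```python
# Simple break-even sketch for central battery vs integral-battery emergency
# lighting. All cost figures are illustrative placeholders, not vendor pricing.
def total_cost(n_luminaires, fixed_cost, per_luminaire_cost):
    return fixed_cost + n_luminaires * per_luminaire_cost

# Placeholder assumptions: a central battery system carries a large fixed cost
# but cheaper slave luminaires; integral-battery fittings cost more each over
# the system's life (fitting plus battery replacements).
central  = lambda n: total_cost(n, fixed_cost=18_000, per_luminaire_cost=90)
integral = lambda n: total_cost(n, fixed_cost=0,      per_luminaire_cost=240)

break_even = next(n for n in range(1, 1000) if central(n) <= integral(n))
print(break_even)  # with these placeholder figures, the crossover is at 120 luminaires
```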

PROACTIVE MONITORING FOR ENHANCED SAFETY

Beyond physical infrastructure, proactive monitoring of emergency luminaires is crucial. Intelligent systems continuously assess performance, alerting operators to potential issues before they escalate. This proactive approach minimises downtime and maximises system lifecycles, bolstering safety and operational efficiency.

SAFETY FIRST: AUTOMATING TESTING AND MAINTENANCE

To ensure fail-safe performance, a dedicated, addressable emergency lighting system with full automation of testing is required. Cutting-edge technology enables automatic system status reporting, reducing human error and maintenance costs. Remote supply systems further support higher ambient temperatures, ensuring system resilience.

CONCLUSION: ENHANCING RELIABILITY AND SAFETY

By embracing centralised supply systems and proactive monitoring, data centres can bolster emergency lighting systems’ reliability, safety and operational efficiency.

A considered approach to emergency lighting provides a critical safety net for personnel and infrastructure, ensuring uninterrupted operation even in challenging environments.

In the dynamic landscape of data centres, where every second counts, prioritising emergency lighting is not just prudent, it's essential. With innovative solutions and strategic planning, data centres can confidently navigate challenges, safeguarding people and technology.

Zumtobel, zumtobel.com

THE AI INFLUENCE ON NEXT-GEN DATA CENTRES

Tate Cantrell, Chief Technology Officer at sustainable data centre solutions provider, Verne, discusses the impact of AI on the data centre industry while assessing the ways to balance innovation with sustainability.

The rapid growth of AI has led to an increased demand for compute power, placing a strain on energy resources worldwide – and much of this demand lies at the feet of the data centre industry. The latest analysis by the International Energy Agency (IEA) forecasts that global power demand for data centres will double by 2026, and research by the IDC indicates that the growing use of AI technology will require storage capacity in data centres to reach 21.0ZB by 2027.

This growing demand for AI technologies is changing the very way data centres are designed and operated. At the forefront of these changes should be a consideration for energy efficiency. As an incredibly power-hungry technology, AI poses a significant threat to the environment. In order to enjoy the benefits and conveniences that AI can bring while safeguarding the planet, the data centre industry must ensure sustainability is central to all decisions when it comes to adapting to AI.

DATA CENTRE DESIGN

There is an increasing need for data centres that are designed to handle AI compute, with Savills predicting that as many as 3,000 new data centre facilities will be needed on the European continent by 2025. There are several design considerations, especially when it comes to cooling and rack density.

AI training workloads consistently operate at very high densities, ranging from 20-100kW per rack or more. Networking demands and cost drive these training racks to be clustered together. These clusters of extreme power density are fundamentally what challenges the power, cooling, racks, and software management design in data centres. With AI clusters, servers are deeper, power demands are greater, and cooling is more complex. This creates a requirement for racks of greater dimensions and weight capacity.
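To put those densities in context, the airflow needed to remove a given rack load with air cooling can be estimated from the sensible heat equation. The sketch below is a generic back-of-envelope calculation, not Verne's design data.

```python
# Airflow required to remove rack heat with air cooling (sensible heat equation):
#   airflow [m^3/s] = power [W] / (air density * specific heat * temperature rise)
AIR_DENSITY = 1.2   # kg/m^3, approximate at typical data centre conditions
AIR_CP = 1005       # J/(kg*K)

def required_airflow_m3s(rack_kw, delta_t_c=12):
    """Airflow needed to hold a given temperature rise across the rack."""
    return rack_kw * 1000 / (AIR_DENSITY * AIR_CP * delta_t_c)

for rack_kw in (20, 50, 100):
    print(f"{rack_kw} kW rack: ~{required_airflow_m3s(rack_kw):.1f} m^3/s of air at a 12°C rise")
# The steep increase at 50-100 kW per rack is one reason the industry is
# moving towards liquid cooling for AI clusters.
```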

“In order to support true sustainability and avoid greenwashing, the data centre industry must commit to transparent sustainability reporting”

The industry is also undergoing a challenging transition from air cooling to liquid cooling, the latter of which provides many benefits such as improved processor reliability and performance, space savings with higher rack densities, increased energy efficiency, improved power utilisation, and reduced water usage. Due to the growing demands of AI, machine learning and HPC workloads, Verne recently introduced liquid cooling to its Finnish data centre, POR-DC01.

These are some examples of the innovations and design elements needed to meet the demands of AI compute in the data centre. AI technology itself can also be used to improve data centre operations. An intelligent data centre is one that is optimised and automated using AI, machine learning and IoT devices to improve key aspects such as efficiency, security and resource management.

ENERGY EFFICIENCY

Global power consumption from data centres was roughly 460TWh in 2022 and could double by 2026 to more than 1,000TWh, approximately equal to Japan's total electricity use. As well as placing a strain on energy resources, this presents a huge environmental challenge, as power-hungry AI technologies produce equally large carbon emissions.

To overcome these challenges, the data centre industry needs to prioritise energy efficiency so that valuable power is not wasted, and any harm to the environment is kept to a minimum. Using renewable energy for data centre operations is one obvious solution. This is where location comes in.

Iceland, for example, has ready access to naturally occurring renewable energy resources, making it an ideal choice for locating compute-heavy applications such as those required for AI and machine learning. Many Nordic sites allow for highly efficient cooling solutions due to their naturally moderate climate. Additionally, these locations are typically equipped to integrate with data centres through heat recapture programmes, further minimising the environmental impact of the data centres.

In order to support true sustainability and avoid greenwashing, the data centre industry must also commit to transparent sustainability reporting. Standardised metrics would go a long way here. There are already some initiatives underway that are making a difference, such as the EU Energy Efficiency Directive (EED) which mandates that data centres proactively monitor and report energy consumption and emissions.

However, the burden of responsibility still lies with the industry itself. That’s why industry innovations such as new liquid cooling technologies and heat re-use initiatives are key to ensuring that data centres are truly green – and can continue to support AI applications without harming the earth’s natural capital.

BALANCING AI INNOVATION WITH SUSTAINABILITY

There are already businesses that show it’s possible to successfully balance the power needs of AI technology with sustainability – and with a positive societal impact to boot. Peptone is one such example. The company’s work in analysing protein dynamics is crucial for creating effective drugs and vaccines. Its AI-driven research identifies the most desirable protein variants, accelerating the drug development process and making it more cost-effective compared to traditional approaches.

Peptone chose to locate its compute with Verne at a green data centre in Iceland. This allows Peptone the freedom to focus on its groundbreaking research without worrying about the environmental impact of its AI applications. Installed in a specialised, highly optimised data centre environment, with 5 PFLOPS of AI performance in a 6U form factor and a single platform for every AI workload, Peptone has the flexibility to scale its AI-driven protein engineering system on demand, while maintaining full visibility of operations and always keeping the data in sight. All of this is powered by 100% renewable energy, which utilises the country's natural geothermal and hydroelectric energy resources.

THE FUTURE OF DATA CENTRES AND AI

As AI continues to evolve, its impact on the data centre industry will only grow. In order to balance this technological advancement with true sustainability, data centres must be designed and optimised to properly handle AI compute whilst ensuring maximum energy efficiency. Organisations must also consider where their AI compute is located, prioritising locations where it is powered by renewable energy sources, such as data centres in countries like the Nordics that have ready and reliable access to green power.

In this way, the data centre industry can ensure that AI technology has positive outcomes for both people and planet.

Verne, verneglobal.com

PLAYING IT COOL WITH IMMERSION COOLING

In this feature, DCNN looks at the success Danfoss Power Solutions had in helping audio technology start-up, iXora, in its development of a liquid cooling system specific to the requirements of typical data centre operations.

Although the Caribbean island of Curaçao enjoys a tropical climate which may be considered pleasant for many people, it can be disastrous for electronics. Heat and long hours of direct sunlight, combined with tropical rain and salty, moist air, can very quickly corrode electrical equipment and ruin components.

The founders of audio technology start-up, iXora, were well aware of these challenges when they set about developing their high-performance amplifier system. The solution needed to not only withstand the challenging Curaçao climate, but also offer users easy transport from event to event. Air cooling requires fans and other moving parts that are subject to wear and tear, so iXora looked at alternative solutions.

The company developed a closed-circuit immersion cooling system, which provided the necessary cooling at an extremely high efficiency. Because the system was closed, it was also easy to set up, take down and transport, and it effectively shielded electronic components from ambient conditions. Everything was going well for iXora until COVID-19 struck in 2020, which essentially wiped out the live event scene almost overnight. However, company founders, Vincent Beek and Vincent Houwert, decided to use the opportunity to relocate to the Netherlands to seek further investment and finesse the design of the amplifier system.

AN INTRODUCTION TO IMMERSION COOLING

In recent years, digital technologies such as cryptocurrency and artificial intelligence have emerged from relative obscurity to the mainstream, producing vast amounts of data. As a result, data centres have sought to increase capacity while simultaneously improving efficiency. Air cooling had long been the norm for data centre cooling, but many data centre facilities have transitioned to (or are exploring) liquid cooling as a more efficient and cost-effective alternative.

There are two main approaches to liquid cooling: direct liquid cooling and immersion cooling.

Direct liquid cooling involves circulating coolants to specific components. While more efficient than air cooling, it is often not suitable for high-intensity processing operations.

Immersion cooling typically involves submerging electronics in a bath of non-conductive liquid. It provides far greater efficiency and cooling density compared to air cooling and requires no fans or other active cooling components. However, designs are often complex, custom and expensive, requiring large baths in which to submerge server racks entirely.

PIVOTING TO A NEW MODEL

iXora realised that its closed system could provide the perfect solution for data centres. It could substantially improve the efficiency and capacity of server hardware without the need for significant infrastructure redesign, and at a much lower cost compared to the immersion cooling systems available at the time.

The company, now joined by data centre expert, Job Witteman, began designing a prototype closed immersion cooling system for the data centre market, based on its existing amplifier cooling solution. Rather than the traditional horizontal rack system, iXora instead sought to develop a chassis containing vertical cassettes, in which dielectric oil would cool the electronic components.

This required a vastly more sophisticated design compared to iXora’s previous solutions for audio equipment, and a deep understanding of the specific requirements of typical data centre operations. Furthermore, because the prototype would be the first of its kind, it also required the design and development of new systems and components unique to the application.

PROTOTYPE CHALLENGES

One example of this was the new system’s heat exchanger, required to transfer heat away from each cassette. The design required two couplings per cassette to connect them both to the chassis, and to the facility’s wider cooling equipment. Achieving a low pressure drop in these couplings was vital, as each small pressure drop could cause disruptions to the cooling performance. Cassettes also required easy connection and disconnection for maintenance, with zero leakage, as any liquid coming into contact with data centre equipment could pose a serious risk to operations.
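As a rough illustration of why coupling pressure drop matters (the relation below is standard fluid mechanics rather than a figure from iXora or Danfoss), the local pressure loss across a quick-disconnect coupling can be estimated as:

\Delta p = K \cdot \frac{\rho v^{2}}{2}

where K is the coupling’s loss coefficient, \rho the density of the dielectric coolant and v the flow velocity through the coupling. Every loss of this kind must be made up by the circulation pump, so keeping K low across many cassettes preserves both cooling performance and pumping energy.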

iXora approached Danfoss for help in the development of custom couplings for its system. As well as zero leakage, these also needed to provide precision alignment, a low connect force, and a compact size. Drawing on its extensive experience in coupling technology, Danfoss product engineers were able to calculate exactly what was required based on the flow rate, maximum coupling size, pressure requirements and heat exchange rate, alongside a range of other factors specific to data centre cooling applications. Based on these calculations, Danfoss concluded that aluminium dry break quick-disconnect couplings would be the most suitable solution. These were then manufactured to specification by Danfoss.

SUCCESS IN PARTNERSHIP

The prototype was a success, with the Danfoss couplings outperforming all other couplings tested. iXora’s Head of Operations and Sales, Vincent Beek, explains, “The Danfoss couplings achieved full alignment, easy connection and disconnection and, crucially, zero leakage. This is vital for maintenance. With a conventional full immersion system, it can be tricky getting servers out of the liquid bath. Ours is effectively plug and play, so you can just disconnect it and carry it straight to the workshop.

“We’re now onto the field-testing stage, with a global pilot to follow this year, and then next year we’ll be scaling up to mass production. None of this would have been possible without Danfoss. As a start-up, it can be difficult to get the attention of larger companies, particularly when it comes to help with R&D. They saw our solution, immediately bought into it, and we’ve benefited greatly from their expertise.”

Jeroen Veraart, Senior Sales Development Manager at Danfoss Power Solutions, adds, “This was a true partnership. iXora benefited from us for sure, but we’ve learned a lot from them as well. As a result of working with them we’ve identified new ways in which we can improve and refine our products further.

“iXora was a dream to work with. With a start-up there’s always an element of risk, but they were enthusiastic, willing to learn, and always coming up with creative ideas and solutions. At the end of it they’ve got a really impressive system. Immersion cooling is clearly the future for data centres and, in just a few years, I expect that market will grow considerably. This has been a really rewarding partnership for both parties, and I look forward to seeing it continue to flourish.”

Danfoss, danfoss.com

ELECTRONIC LEAK DETECTION REACHES NEW HEIGHTS

Julien Neimard, Business Development Manager at Controlit Factory, looks at the best maintenance strategies for the protection of data centre roofs and explains the key role of electronic leak detection.

A data centre’s roof is a fundamental component in protecting the facility’s functionality and integrity. This article explores the critical importance of roof maintenance in data centres, with a special focus on the role of Electronic Leak Detection (ELD) to prevent leaks and water damages.

THE ESSENTIAL ROLE OF ROOF MAINTENANCE IN DATA CENTRES

In a data centre, the roof is more than a simple cover; it’s a crucial barrier against external elements. Constantly facing UV exposure, temperature extremes, rain, wind, snow and human activities, the roof withstands significant stress throughout the building’s life. Maintaining it is integral to protecting the sensitive and valuable equipment inside. It ensures operational continuity and prevents significant risks of downtime and equipment failure that can result from neglected maintenance.

THE DANGER OF WATER LEAKS

Water leaks in data centres are not a minor inconvenience, but a serious threat. Even a small leak can quickly escalate, leading to extensive damage and operational downtime. This is why detecting damages early – before they turn into leaks – is so crucial. It’s not just about fixing a problem; it’s about preventing a chain reaction that could disrupt data centre operations.

SHORTCOMINGS OF TRADITIONAL METHODS

Traditional methods such as flood tests and visual inspections have limitations in their ability to detect roofing issues. Flood testing, usually performed after waterproofing installation, can be logistically challenging and carries risks (such as added weight to the structure, water infiltration, or imprecise location of the leak). Visual checks – the most common form of roof maintenance – may miss smaller but critical deteriorations.

“Visual checks, the most common form of roof maintenance, may miss smaller and critical deteriorations.”

There is a need for a more efficient, accurate, non-destructive testing method. ELD offers a more thorough and accurate approach, identifying issues that traditional methods might overlook.

ELECTRONIC LEAK DETECTION

ELD is a game-changer for integrity testing and roof maintenance. Using electrical currents, ELD can accurately locate damages in waterproofing membranes. It is precise, fast, non-invasive, and covered by ASTM standards D7877-14 and D8231-19.

Unlike other testing methods, ELD can detect the smallest breaches before they leak and escalate into costly problems.

ELD methods are often divided into two main categories:

• High-Voltage ELD: This method is suited for exposed waterproofing membranes. It uses a high-voltage pulse to detect breaches in the membrane. It is also called spark testing or the dry method.

• Low-Voltage ELD: Appropriate for both exposed and covered roofs, this technique involves creating an electrical potential with water on the roof surface.

Even though the testing method and equipment may vary, ELD always relies on electrical current and conductivity of the substrate (‘substrate’ refers to the surface directly under the waterproofing membrane).

THE NEED FOR CONDUCTIVE UNDERLAYS

ELD relies on electrical current and conductivity to accurately spot damages on the waterproofing layer. To work, it requires a conductive medium under the membrane being inspected.

A specially designed conductive underlay will provide a uniform conductive layer, allowing for accurate ELD testing not only after installation of the waterproofing, but also during the entire building lifespan.

DESIGNING FOR ELD: BEST PRACTICES

Effective ELD relies heavily on the proper design of the roofing system. This includes the integration of a conductive underlay. A well-designed roof can significantly enhance the accuracy and efficiency of ELD methods.

TIMING AND FREQUENCY OF ELD IMPLEMENTATION

ELD is most effective when conducted at strategic intervals:

• After waterproofing installation: Ensuring the integrity of the installation is crucial, and ELD provides an immediate assessment of the newly installed waterproofing.

• Post-equipment installation: Conducting ELD after installing major equipment on the roof ensures that the installation process has not compromised the membrane.

• Regular maintenance schedules: Bi-annual ELD checks, or more frequent ones following significant weather events or operations on the roof, help maintain the building’s integrity.

COMPREHENSIVE MAINTENANCE STRATEGIES

A well-rounded maintenance strategy for data centre roofs includes ELD as a central component. This should be complemented with regular checks of other critical roof elements, like flashings and drains, and ongoing visual inspections to ensure a complete assessment of the roof’s condition. Each of these activities plays a vital role in preserving the roof’s overall health and functionality.

IMPLEMENTATION AT GREENERGY DATA CENTERS

Since its establishment in 2020, Greenergy Data Centers in Estonia has been a paragon of innovation in data centres. Its 14,500m² facility utilises Controlit Factory’s technology and ELD testing as a key part of its roof maintenance programme. This approach allows the company to detect and address even minor roof issues early, preventing potential leaks and ensuring the longevity of its roof.

Its proactive maintenance, especially post-construction activities on the roof, has been crucial in protecting Greenergy Data Centers’ core assets. Regular ELD testing is a key part of the company’s strategy to maintain its facility’s integrity.

CONCLUSION

Adopting Electronic Leak Detection methods is more than a beneficial strategy for data centres – it’s a crucial step toward future-proofing their facilities. Incorporating ELD into a comprehensive roof maintenance plan is key to protecting the data centre’s physical infrastructure and ensuring seamless operations.

As the landscape of data centres continues to evolve with technological advancements, ensuring the integrity of the building envelope with methods like ELD is a commitment to excellence and innovation.

The Han-Eco connector reduces power wastage by up to 50% by using low-impedance contacts

POWERING OUR WAY TO A MORE RELIABLE FUTURE

Ajay Kareer, Data Centre Market Manager for HARTING, discusses the importance of reliable and energy efficient connectivity solutions for data centres.

The worldwide data centre market is experiencing explosive year-on-year growth as our reliance on remote working, AI and the Internet of Things increases at a staggering rate. In addition, the changes to our working lives caused by COVID-19 have meant that businesses and individuals need reliable access to data to allow them to embrace flexible ways of working. Therefore, as we become more reliant on remote or hybrid working models, it’s essential that data centres run as smoothly and efficiently as possible.

PROBLEMATIC POWER OUTAGES

Data centre power outages can happen for various reasons such as weather conditions, network failures, human error and software issues. However, they can also occur due to power infrastructure problems created inside the data centre from either generator, Uninterruptible Power Supply (UPS) or Power Distribution Unit (PDU) failures.

The International Data Corporation reports that energy consumption per server is growing by around 9% per year globally. Despite servers getting more compact to save installation space, their improved performance increases their energy requirement. As a result, energy consumption costs can be more than 50% of the total data centre operating expenses (OPEX). It’s therefore essential to invest in and manage each part of the critical infrastructure in the data centre to ensure energy efficiency and reliability.

CONNECTORS TO BOOST RELIABILITY

One method of improving energy reliability is by using ‘plug and play’ connectors and pre-assembled cable assemblies which can reduce maintenance downtimes by removing the element of human error. They also reduce the overall cost of ownership when compared to hardwired connections.

Cable assemblies distribute power from the data centre’s UPS to the PDUs. These assemblies consist of a cable terminated with one or two connector hoods. Inside the connector is an insert, or multiple inserts, where the conductors from the cable are terminated. The connector hoods then mate with a matching housing wired to the PDU and/or UPS.

When a cable assembly is designed and manufactured using automation, human error is massively reduced. If the same connections are handmade or field wired, the chance of error increases, potentially risking catastrophic issues either during the initial power up or during the operation of the data centre. This, in turn, can result in hours of expensive skilled labour spent troubleshooting as well as the downtime costs of the rack, PDU or entire data hall not functioning. If designers hardwire the conductors inside the cable, a skilled electrician is needed to disconnect and reconnect the hard-wired PDU. Using cable assemblies means there is no need to hire an electrician and, since everything is pre-wired and pre-tested, wiring errors are virtually eliminated. Cable assemblies also offer benefits during the design and prototype phase and make access for maintenance easier.

IMPROVING INFRASTRUCTURE EFFICIENCY

As energy costs can account for more than 50% of the total operating expenses of a data centre, one important ongoing challenge is to improve the energy efficiency of its infrastructure. To calculate the exact effect of power usage from connectors in data centres, HARTING has compared the power consumption of three different connector solutions in its independently accredited test laboratory. One of the connectors tested was the HARTING Han-Eco; with the other two being CEE (IEC 60309) plugs from different manufacturers.

The results showed that the Han-Eco connector reduced power wastage by up to 50%, compared to the other two brands of IEC connectors, by using low-impedance contacts. These contacts reduce the power lost in connections and significantly improve the Power Usage Efficiency (PUE) of data centres.
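For readers unfamiliar with the metric, PUE is the standard industry ratio (the definition below is generic, not a HARTING figure):

\text{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}

Losses in power distribution, including connector contact resistance, sit in the numerator but not the denominator, so reducing them moves PUE closer to the ideal value of 1.0.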

Depending on the electricity price, which differs regionally and worldwide, different monetary gains can be realised. As an example, one hyperscale data centre with 15,000 racks can achieve annual power consumption savings of around £90,000. These calculations are based on the average EU industrial prices from 2020, so potential savings will be even more dramatic when we consider how much energy prices have increased over the past four years.
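The article does not break down how the £90,000 figure is reached, but a back-of-the-envelope sketch shows the shape of such a calculation. Every input below is an illustrative assumption rather than HARTING data; with figures in this ballpark, the result lands close to the quoted saving.

# Illustrative only: all inputs are assumptions, not HARTING data.
RACKS = 15_000                 # hyperscale facility size quoted in the article
WATTS_SAVED_PER_RACK = 8.0     # assumed reduction in connector losses per rack (W)
PRICE_PER_KWH = 0.09           # assumed industrial electricity price (GBP/kWh, ~2020 EU level)
HOURS_PER_YEAR = 8_760

saved_kwh = RACKS * WATTS_SAVED_PER_RACK * HOURS_PER_YEAR / 1_000
annual_saving_gbp = saved_kwh * PRICE_PER_KWH
print(f"Estimated annual saving: £{annual_saving_gbp:,.0f}")   # roughly £95,000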

AN OPEN EXCHANGE OF IDEAS

The Open Compute Project (OCP) is focused on the redesign of hardware technologies for IT infrastructure. The goal of the working group is to make data centres more efficient, more flexible and more quickly scalable via an open exchange of ideas, specifications and other intellectual property to maximise innovation and reduce the complexity of technical components.

In a data centre, power shelves provide power to IT equipment. The Rack & Power Project Group within the OCP initiative is focused on standardising racks and making them easier to integrate into the data centre infrastructure. These designs, called the Open Rack, began worldwide installation at the beginning of 2023.

As a lead author and initial connector partner in the standardisation process, HARTING has now developed a connector for the third version of the Open Rack (ORV3), called the ORV3 OCP Input Power Connector. In line with the OCP’s goal of optimising efficiency in the construction and scaling of data centres, the Han ORV3 enables a more compact design for the entire infrastructure thanks to its shallower rack system.

CONCLUSION

As we have seen, reliability, ease of use and efficiency are key themes when it comes to data centre energy management. Connectivity technology is constantly being refined and developed, and new Smart Connectivity solutions are designed to improve safety, identify faults, and ensure systems within data centres are working efficiently.

One of the most important additional functions powered by Smart Connectivity is the signalling of the mating state. The mating state can indicate a range of different parameters, including if the connector is electrically connected and whether it is mechanically locked. It can also indicate if the connector is overloaded and monitor whether environmental parameters such as temperature and humidity are within the permitted range.

Smart Connectivity solutions are designed to improve safety and quickly identify faults

The plug-in status is indicated by means of a light. In its simplest case, a red or green display shows whether a fault is present. Modern full-colour LEDs can denote other states, such as the presence of voltage. A digital interface within the connector then transmits the information in much greater detail to a control centre.

Connectors are currently identified by using electrical contacts as coding pins, with the control system determining which attachment is plugged in. However, this method has its limits, especially with large, flexible systems.

The latest solution identifies the connector with the help of a bus system and microcontroller, or alternatively, via NFC (Near Field Communication). This gives each connector a unique ID which is assigned to the corresponding attachment or tool. As a result, even simple components such as lamps, door contacts or analogue sensors can be identified.
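To make the idea concrete, the short sketch below models the kind of status record such a smart connector might report to a control centre. It is purely illustrative: the field names, the red/green rule and the class itself are hypothetical, not part of any HARTING data model or API.

from dataclasses import dataclass

@dataclass
class ConnectorStatus:
    connector_id: str      # unique ID read via NFC or a bus system and microcontroller
    attachment: str        # attachment or tool assigned to this ID
    mated: bool            # electrically connected
    locked: bool           # mechanically locked
    overloaded: bool       # load above the connector's rating
    temperature_c: float   # temperature at the connector
    humidity_pct: float    # relative humidity at the connector

def indicator_colour(s: ConnectorStatus) -> str:
    """Mimics the simple red/green plug-in status light described above."""
    if s.overloaded or (s.mated and not s.locked):
        return "red"
    return "green" if s.mated else "off"

status = ConnectorStatus("HAN-0042", "PDU feed A", True, True, False, 31.5, 42.0)
print(indicator_colour(status))   # -> green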

HARTING, harting.com

NAVIGATING THE EXPANDING COLOCATION MARKET: WHAT’S NEXT?

David Gammie, CTO, iomart, explores the colocation options currently available and assesses the emerging trends and opportunities – including AI, which is already having a big impact on the market – before turning his attention to what the future may have in store.

With market growth from $54.82 billion (£43.12bn) in 2022 to $61.47 billion (£48.36bn) in 2023, the colocation market continues to expand without any signs of slowing down. In the UK alone, colocation facility revenue is projected to reach £2.7 billion by 2024, growing at a compound rate of 4.8%.

With the shift towards cloud services and a wide range of IT infrastructure options, colocation has become a reliable and scalable choice for organisations. It offers cloud benefits without the complexities and costs associated with full public cloud migration – especially for business applications that require careful architecture to maximise public cloud advantages – and its infrastructure offers better data sovereignty assurance.

WHAT FORM OF COLOCATION IS BEST?

Colocation can vary from base-level or managed colocation to high-security or high-density colocation, each with its own benefits and use cases.

Standard colocation tends to offer businesses access to standard 42U dual-powered cabinets or vendor cabinets, with full network connectivity and internet connectivity. However, with standard colocation options, the level of service provision is limited to the physical infrastructure, i.e. power and cooling.

On the other hand, managed colocation is fully managed and looked after by service provider teams. This includes monitoring, analysis, hardware replacement, or any other infrastructure service that a business needs. Although everything is managed by a service provider, with managed colocation an organisation will still have manual access to its cabinets when needed. Opting for a managed approach just means organisations have the option to leave the heavy lifting to their colocation service provider, giving them peace of mind.

With technologies like AI being used more regularly, high-density colocation is really seeing momentum. This form of colocation is designed for businesses that use power-hungry equipment and want to keep everything within a single footprint.

There is also the option for high-security colocation. Security is one of the biggest draws for companies looking for colocation options. Physical security in a colocation data centre is key and will typically involve constant 24/7 surveillance, fire detection and suppression systems. Most providers will also offer 24/7 on-the-ground security presence as well, meaning hardware is protected and able to run continuously. High-security colocation is the best option for organisations with specific compliance or security requirements. Businesses will ordinarily get access to caged enclosures with as many cabinets as they require.

EMERGING TRENDS ARE HAVING AN IMPACT

While security has become a major prerequisite for organisations looking for colocation providers, there are also other trends emerging and impacting what those organisations are seeking.

Sustainability continues to be a major focus for the sector, and rightly so. Data centres are estimated to be responsible for up to 3% of global electricity consumption today, with this figure expected to increase to 4% by 2030.

“With technologies like AI being used more regularly, high-density colocation is really seeing momentum”

With customers demanding better environmental credentials from the data centres they use, we are seeing the majority of colocation providers investing significant funds into improving sustainability. Many are doing this by transitioning to renewable energy such as wind to power them, or even generating their own energy through onsite solar panels.

Colocation supports sustainability by allowing organisations to reduce or close any internal data centres that aren’t being utilised to their full extent, or that are located in areas where renewable energy isn’t readily available. What’s more, the onus shifts to the colocation providers rather than individual organisations, removing the burden and cost of having to build or improve their own data centres to meet sustainability goals.

ANALYSING THE PROS AND CONS OF AI

Another emerging trend shaking things up across all industries is AI – and it’s certainly having its impact on the colocation market, both from a customer perspective as well as the data centre operators themselves.

It’s estimated that 35% of global companies are using AI in some way. On a positive note, AI is being used to completely transform data centre operations, allowing colocation providers to optimise resource utilisation, enhance energy efficiency, and bolster security. By leveraging AI-driven monitoring, predictive analytics and automation tools, providers can deliver unparalleled reliability, scalability and performance.

However, AI doesn’t always make things easier. The computational power needed to run AI applications is vast and is adding significant power demand on colocation providers. To combat this, data centre providers are retrofitting existing infrastructure where possible and also looking at new build options. What’s more, providers of hardware and rack equipment are building new generations of kit that can deal with the workload AI demands. Moving forward, the biggest challenge with this will be sourcing power in geographies where the grids are almost reaching capacity. Couple this with the sustainability goals organisations are laying out, and colocation providers have a big task on their hands.

NEXT STEPS – WHAT DOES THE FUTURE OF COLOCATION LOOK LIKE?

The future of colocation will be one of striking a balance as the market continues to build momentum.

On the one hand, providers and customers alike will be able to harness the benefits of AI, reaping rewards such as enhanced efficiency and improved security.

On the other hand, providers will have to balance those advantages with battling the extra power demanded by the utilisation of AI while maintaining sustainable data centres that support the environmentally conscious goals they and their customers both want to adhere to.

What is sure is that no matter the organisation’s needs, there will be a form of colocation that works for them and aligns with their wider business strategy.

iomart, iomart.com

THE MERITS OF CONTEMPORARY COLOCATION

As businesses move away from on-premises IT infrastructure and in-house data centres, colocation is increasingly coming into the spotlight. In fact, Telehouse’s Vision 2030 research has revealed that 54% of organisations now prefer colocation services over on-premises solutions, a significant rise from 33% in 2020. This shift towards colocation is driven by the need to accelerate digital transformation initiatives, demand for high-speed interconnection services and scalable environments. On-prem set-ups, though offering full ownership, come with managerial burdens and high costs related to equipment, cooling, power, compliance with security regulations, and the specialised skills required to maintain and operate mechanical and electrical systems. Additionally, when problems occur, downtime costs mount rapidly.

Cloud computing is another option, alongside colocation and on-premises data centres. However, according to the Flexera State of the Cloud Report, 84% of IT professionals indicate that managing cloud costs is a primary challenge, even more so than security issues. The move from a ‘cloud-first’ to a ‘cloud-selective’ strategy reflects a growing sentiment among organisations, recognising that cloud computing, while beneficial for its agility and innovation, may not suit every business. It is colocation that emerges as a strategic, configurable approach, retaining the advantages of cloud computing without the complete reliance on it.

Nick Layzell, Customer Success Director at Telehouse Europe, assesses why embracing colocation represents a pivotal strategic shift away from the ‘cloud-first’ mentality.
The Telehouse London Docklands data centre campus is the site of Europe’s first carrier-neutral colocation facility

COST AND LONG-TERM EFFICIENCY

With organisations looking to cut costs, colocation offers significant advantages by eliminating the need for extensive on-site data storage operations and reducing the need for in-house technical staff. It provides guaranteed service level agreements for uptime, thereby minimising the financial impact of downtime. Beyond just storage costs, colocation is seen as a cost-effective solution for companies with stable workloads, avoiding the high costs of expanding cloud infrastructure to accommodate growth. Colocation also mitigates energy costs through the use of forward hedged rates by data centre operators, which help manage price fluctuations and provide cost transparency.

ENHANCED PERFORMANCE, COMPLIANCE AND SECURITY

In sectors such as finance, where real-time data access is critical, colocation provides financial institutions with low-latency solutions and consistent performance. This is essential for high-reliability applications, including customer-facing digital services and AI-driven processes. On-site expert engineers at colocation facilities are equipped to swiftly address any downtime or other issues, offering a level of responsiveness that cloud services may struggle to match.

Colocation also presents a clear advantage in meeting stringent regulatory and data sovereignty requirements. Companies required to process data within specific geographical boundaries benefit from facilities managed by staff who are experts in navigating complex compliance regulations. This advantage reduces the need for businesses to seek external regulatory expertise.

Colocation allows for enhanced customisation and control over hardware and software configurations, which is vital for industries with stringent security demands. The presence of trained and dedicated personnel, including skilled engineers, at these facilities ensures strict adherence to security protocols and bridges any potential skills gaps. This combination of benefits makes colocation an increasingly preferred option for businesses prioritising performance, compliance and security.

BUILDING SCALABLE CONNECTIONS FOR TOMORROW’S NEEDS

Looking ahead, organisations need to reevaluate their digital infrastructure to stay compliant and competitive. The demand for digital connectivity is driving a shift towards hybrid infrastructures that combine the cloud’s scalability with the security and reliability of colocation services. Colocation data centres provide businesses with the flexibility to scale and the bandwidth to ensure reliable service distribution, which is essential for the successful deployment of AI and other new technologies. The increasing interconnectedness of AI, IoT devices and edge computing technologies translates to a substantial growth in the volume of data organisations have to manage. Colocation can help them meet the needs of the scalable, high-performance IT infrastructure required to harness the full potential of this data.

ACCESSIBILITY AND ECOSYSTEM INTEGRATION

A significant limitation of on-prem infrastructure is its lack of advanced connectivity options with networks and ecosystems. In contrast, colocation provides immediate access to bespoke infrastructure, which is vital for specific workloads. This setup fosters a highly connected digital ecosystem, incorporating major cloud providers. The presence of various internet service providers and interconnection services in colocation centres ensures high-bandwidth and low-latency connections. This infrastructure supports swift interactions with partners, suppliers and customers.

IT INFRASTRUCTURE FOR THE MODERN ENTERPRISE

Embracing colocation represents a pivotal strategic shift from the ‘cloud-first’ mentality, enabling businesses to tap into significant benefits such as cost efficiency, enhanced security, and superior scalability. This transition from on-premises data centres to colocation not only optimises resource management, but also aligns with contemporary needs for agility and enhanced connectivity.

Colocation equips businesses with the flexibility to navigate away from the limitations of legacy infrastructure, paving the way for a more integrated and dynamic future. This approach is ultimately redefining how companies structure their IT strategies. Prioritising adaptability and strategic connectivity over conventional solutions is helping to put IT teams back in the driving seat.

Telehouse Europe, telehouse.net

HYBRID CLOUD: THE BEST OF BOTH WORLDS

Arpen Tucker, Senior Business Development Manager, UK, Vantage Data Centers, explains why leveraging hybrid cloud could help deliver a colocation solution that provides the best of both worlds.

Fintechs are well aware of the criticality of their IT systems to deliver competitive advantage and growth. Getting it right means the agility to respond quickly to customer demands while reacting to new market opportunities with timely new products. The result is more business and greater customer loyalty through the delivery of a highly tailored customer experience in tune with their immediate and future needs.

However, the handling of large volumes of sensitive financial data requires specialised, highly compliant IT. Ongoing regulatory updates, such as those related to e-money, anti-money laundering, and capital requirements, require fintech firms to be agile in adapting their IT systems to comply with changing laws. There are other risks too. Fire, flood, power outages or the fallout from a physical or cyber attack can all take vital systems offline, resulting in lost business opportunities and customers, reputational damage and even the potential of hefty fines.

With so much at stake, fintech companies must regularly evaluate whether it is affordable or indeed best practice to keep all their IT systems on premises. An alternative is outsourcing IT servers and equipment into third party ‘colocation’ data centres. Backed by service level agreements, these can provide the critical infrastructure, security and around-the-clock support services necessary for keeping systems continuously available.

Leveraging the cloud as well – or instead – is a further option. But don’t forget that these services still depend on the reliability, connectivity and security of someone’s data centre(s) somewhere.

HYBRID CLOUD

The solution could be hybrid cloud, combining both public and private clouds under one umbrella, including, where required, legacy systems.

On a pay-as-you-go basis, this allows a fintech company to manage and scale variable workloads in the public cloud and leverage the latest AI and machine learning (ML) tools and technologies available to innovate and experiment with new product offerings. Apart from the time to market benefits, capital expenditure can be devoted to private cloud infrastructure for supporting workloads that can be seamlessly accessed from their data centre. Such an optimised approach allows continued control and ownership of sensitive information – a prerequisite for fintech firms – while also addressing the growing challenge of having sufficient compute resources consistently available.

There are further advantages to taking the hybrid route. Fintech businesses will be able to seamlessly integrate and analyse public data sets and private financial data to derive valuable insights and enhance their AI-driven financial services. Moreover, critical data and applications can be replicated across both on-prem and public cloud environments, reducing the risk of downtime due to hardware failures or unforeseen events.

MAXIMISING CUSTOMER EXPERIENCE

The hybrid cloud must meet customer expectations for application responsiveness and predictability, which means placing compute resources closer to end-users and vital data sources. This brings latency and the cost of connectivity into focus, both of which are critical in fintech. However, with the considerable amounts of data moving back and forth between the public and private cloud environments, few on-prem data centres will be able to afford to run the dedicated network links necessary for assuring the consistent performance of workloads that may have variable resource needs.

While Microsoft, for example, offers ExpressRoute as a low latency dedicated connection, it is only available as a direct ‘trunk’ connection to certain colocation, public cloud and connectivity operators. These can connect directly with ExpressRoute at core data centre speeds, and so largely eliminate latency issues and ensure bandwidth is optimised.

“The hybrid cloud must meet customer expectations for application responsiveness and predictability”

But for those on-prem or colocation data centres not directly connected, the only alternative is to find an equivalent fast and predictable connection from their facility to an ExpressRoute partner end point. As such, organisations using ExpressRoute – or AWS Direct Connect, for that matter – for their own private data centre will still have to deal with any latency and speed issues in the ‘last mile’ between their facility and their chosen ExpressRoute point of presence.

This is the case even where connectivity providers are offering ExpressRoute to a private or colocation facility, as they are layering their own connectivity from the edge of their network and the ExpressRoute core to the edge of the user network.

In addition, if an organisation is planning on using a colocation facility for hosting some or all of the hybrid cloud environment but keeping legacy workloads operating in its own data centre, the colocation facility must offer a range of diverse connectivity options. Multiple connections running in and out of the facility will assure maximum performance and resilience.

In summary, with fintech’s growing adoption of AI and ML based applications, the dilemma of where and how to maintain IT infrastructure is coming under increasing scrutiny. As they strive for greater agility without compromising the control and security of sensitive, stringently regulated financial information, many firms will turn to ‘best of both worlds’ hybrid cloud solutions.

In turn, the quality of on-prem or third-party colocation data centres at the heart of these complex environments will require careful evaluation in terms of their forward power availability and cooling solutions, on-site engineering expertise, and proximity to public cloud provider networks for ensuring predictable, low latency and seamless interconnection of public, private and legacy workloads.

Vantage Data Centers, vantage-dc.com

SECURING THE BEACHHEAD

Paul Mellon, Operations Director at Stellium Datacenters, explains why critical cable landing station infrastructure should be equally as secure as modern data centres.

By their very nature, data centres are highly controlled environments – high security, controlled temperatures and humidity, and resilient power platforms. We spend a lot of time ensuring they are free from potential single points of failure, which might weaken their resilience.

That said, they are all interconnected by subsea and terrestrial communications cables. At one time, the most important aspect of subsea/terrestrial communications was latency. While this remains very important, the most important factor now is security. In its original form, the cable landing station sits on a beach or close to it, which is a single point of failure. When offline for whatever reason, it impacts multiple Terabits of data traffic flowing globally.

SECURITY IS KEY

Subsea cables spanning the globe for over a century mostly arrive onshore at a cable landing station (CLS). Cable landing stations are crucial to implementing global communications networks and have made the globalisation of business and social communities possible. These are usually in remote locations and away from typical land-based communications hubs. They were and are usually unmanned and fitted with very sophisticated equipment to optimise the performance of fibre cable communications. In addition, cable landing stations need the same level of resilience that is built into data centres: security, temperature/humidity control, and resilient power.

However, providing these performance criteria outside a data centre’s highly controlled environment is a complex and different challenge. It is impossible to provide an unmanned cable landing station with the same level of security as a 24/7 manned data centre. A response time of four hours (or more) to the site for an event in a cable landing station is typical.

It is possible, however, to implement a technical platform to create resilience around temperature/humidity control and power. Oversight of these conditions can be monitored remotely, and change of state responses can, like a data centre, be swift – within minutes. But, where an event demands an onsite response, it will be four hours plus.

A NEW WAY OF THINKING

When the everyday business world came to a halt in March 2020 due to the rising COVID-19 pandemic, subsea and terrestrial communications became the new framework upon which businesses, governments and social communities depended for their very functioning and existence.

With all but blue light and essential services in lockdown, remote working was telescoped forward years to maintain minimum business and social functioning. The experience of COVID-19 quells any doubt about how much we depend on fibre communications globally.

This again amplifies the risk that is carried when these cables are laid in subsea landing environments where it is highly challenging to provide meaningful security to unmanned beach-located cable landing stations. In global terms, subsea fibre communications cables have been the target of attack by dictatorial governments responding to sanctions against them by developed Western first-world economies.

While it may not be possible to tackle all known risks satisfactorily, many risks that fall within our control can be addressed. This cannot be left to the discretion of providers whose motivations are more commercial than national. Fibre cable communications at the subsea/terrestrial level need to be recognised as part of the national infrastructure, like the national electric and gas grids.

THE WAY AHEAD

Cable landing stations need to comply with minimum security and operational criteria. Many can be migrated into secure environments – like data centres, control centres and selected highly secure environments. Such regulation needs to be owned by a government authority whose mission it is to:

• Review the macro perspective of subsea cables throughout the UK regarding their function and resilience from a national perspective.

• Implement cable landing station design, efficiency, construction and operation standards.

• Implement regulations that monitor measurable KPIs and operational criteria to ensure the subsea and terrestrial fibre cables and cable landing stations continue to build on their resilience.

There are some 123 cable landing stations throughout the UK. They vary in size and importance concerning their core functionality. Many of these link islands around the coast. There are quite a number, however, that would warrant high category status if even a modest risk analysis were undertaken to evaluate the impact of an event taking them offline.

Only two of these cable landing stations are based in a data centre environment (Stellium Datacenters in Newcastle, pictured on page 48, hosts both), providing the high security, resilience and regulation corresponding to their national, commercial and community importance.

We need a vision comparable to the national electric and gas grids in order to guide building a robust infrastructure within the existing and new cable landing stations. One that coincides with the national interest, as well as business and community requirements. To date, it has been left entirely to the industry to implement resilience as they deem fit to support their commercial requirements.

In summary, if our experience of COVID-19 has taught us anything, it should be that there are much better paths than leaving fundamental decisions on critical infrastructure to chance. The resilience and security of cable landing stations cannot be overlooked.

COOLING ON THE EDGE

Cathy Chu, Global Strategic Marketing Director, Consumer & Electronics, Dow, explains how single phase immersion cooling and silicone can enhance edge computing operations.

According to estimates by global market intelligence firm, IDC, by the year 2025, there will be 41.6 billion Internet of Things (IoT) devices capable of generating 79.4 zettabytes (ZB) of data. Furthermore, the International Energy Agency predicts that by 2026, the rapidly growing AI industry will consume at least 10 times more energy than it did in 2023. As these numbers continue to grow, the demand on centralised data centres will also escalate, making edge computing and edge AI an increasingly attractive alternative for enterprises.

The paradigm shift towards edge computing brings a number of advantages by placing computational resources in close proximity to the data source. This approach unlocks lower latency, optimised bandwidth utilisation, and enhanced reliability. Edge computing and edge AI enable real-time analysis of data, a highly coveted capability for industries that demand swift decision making based on real-time data. Harnessing the power of edge AI enables heightened privacy and security, as data processing occurs directly on the individual IoT device, eliminating the need for transmission to centralised locations. This decentralised architecture also provides scalability to seamlessly accommodate fluctuating data loads.

While edge computing, AI and IoT represent powerful tools driving the transformation of data, it is crucial to address the thermal challenges posed by these three digital powerhouses. Effectively managing the heat generated by these technologies is essential to ensure optimal performance and reliability, underscoring the need for innovative cooling solutions.

IMMERSION COOLING, POWERED BY SILICONE

Traditional cooling methods, like air cooling, face limitations in edge computing environments, such as limited space for equipment and demanding maintenance requirements. Immersion cooling, however, offers several advantages that make it well-suited for edge computing.

Immersion cooling is a technique in which electrical and electronic components, including servers and IoT devices, are fully submerged in a thermally conductive but electrically insulating liquid coolant. The three primary liquids used for immersion cooling to date are fluoro-carbon fluids, synthetic oil and silicone fluids. While all of these offer significant improvements over air-cooled systems, each comes with its own set of performance challenges:

• Silicone fluids provide significant cost advantages but are incompatible with other silicone components.

• Synthetic oils come with concerns over flammability and thermal instability compared to other alternatives.

• Fluoro-carbon fluids are relatively costly and present significant environmental, health and safety (EHS) concerns, especially in the event of accidental leakage.

An alternative to these fluids that can help alleviate some of these performance drawbacks comes in the form of a hybrid silicone-organic fluid. This type of technology not only provides excellent thermal conductivity for efficient and cost-effective heat dissipation, but also an exceptionally low Global Warming Potential (GWP) score.

As server load densities continue to grow, the hybrid silicone-organic liquid can penetrate small spaces in close proximity to the materials requiring cooling. It also enables increased computer performance, significantly reduced footprint requirements and power usage, as well as compatibility with other silicone components for data cooling operations.

ADVANTAGES OF SINGLE-PHASE IMMERSION COOLING FOR EDGE ENVIRONMENTS

In single-phase immersion cooling operations, heat is transferred to the coolant, like the aforementioned hybrid silicone-organic fluid, through direct contact with server components. The heated coolant then circulates to a heat exchanger, where it is cooled back down to a user-specified temperature without boiling off, and returns to the servers, remaining in its liquid phase throughout. By submerging the IT equipment directly into the coolant, heat dissipation is made more efficient, eliminating the need for traditional air-based cooling systems, which consume significant space and generate considerable noise through their air circulation pathways and air conditioning equipment. In edge computing environments, where noise and space are critical considerations due to their frequent location in dense urban areas with higher populations and limited, expensive real estate, the compact footprint offered by immersion cooling is a substantial advantage.
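As a point of reference (a standard heat-balance relation, not a Dow-specific formula), the coolant flow needed to carry away a given heat load follows from:

Q = \dot{m} \, c_{p} \, \Delta T

where Q is the heat removed, \dot{m} the coolant mass flow rate, c_{p} its specific heat capacity and \Delta T the temperature rise across the equipment. Rearranging gives \dot{m} = Q / (c_{p} \Delta T), which is why fluids with good thermal properties can remove edge-scale heat loads at modest, quiet flow rates.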

Edge AI applications demand immense computational power, and the efficient cooling facilitated by single-phase immersion cooling techniques opens up the potential for overclocking or operating at elevated performance levels without sacrificing reliability. Additionally, single-phase immersion cooling typically consumes less energy compared to traditional cooling methods. As AI and other high heat-generating applications continue to come to the edge, it becomes crucial to actively explore strategies to reduce energy consumption.

As the edge computing, IoT and AI ecosystem continues to expand, ensuring optimal performance and longevity of devices within this ecosystem becomes paramount.

To achieve this, it is imperative to explore reliable and more efficient cooling solutions. Single-phase immersion cooling, leveraging a hybrid silicone-organic fluid, presents a compelling approach to address the thermal challenges arising from the surge in data processing.

Not only does this technology offer enhanced performance and reliability, but it also aligns with sustainability goals, providing benefits for both operations and the planet.

By embracing innovative cooling solutions like this, we can unlock the full potential of edge computing, IoT and AI while minimising their environmental impact.

Dow, dow.com

Celebrate 40 years with APC.

Start leveraging four decades of uninterrupted protection, connectivity, and unparalleled reliability with the APC UPS family, a legacy marked by pioneering UPS technology and an unwavering commitment to innovation.

THE DATA ‘DOUBLE TAKE’

Mark Lewis, CMO, Pulsant, looks at the five best practices for combining edge and IoT into a cohesive strategy.

Amidst the intense, ubiquitous hype surrounding AI, business leaders could be forgiven for thinking that Internet of Things (IoT) projects have faded a little.

However, IoT remains a priority for businesses. In its State of IoT Spring 2024 report, market specialist IoT Analytics noted that investments in AI, cyber security, and a range of other technologies continue to contribute towards the IoT market’s compound annual growth rate (CAGR) of 17%. Furthermore, it predicts that this rate is set to continue until 2030.

One such contributing technology is edge infrastructure. By using smaller data centres, located closer to the sensors and devices collecting the data, businesses can yield insight and deliver change faster, across both consumer and industrial IoT projects. However, this success depends on balancing five concerns:

1. RELIABILITY AND PERFORMANCE

The initial deployment of any IoT project hinges on all devices, connectivity and applications functioning as expected. Continued success requires optimisation of the many links in the chain as a project expands. Creating a distinction between ‘edge’ and ‘core’ at an infrastructure level makes this task easier. Data collected by sensors and devices around the network edge can be sent to a regional data centre for processing. From here, only data that is required for large-scale analytics or cloud-based development work needs to be sent over carrier networks or the public internet. This tiered approach has several inherent benefits to performance and reliability. Regional edge data centres offer resiliency in their operations, diversity in their connectivity providers and the ability to optimise application performance.
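A minimal sketch of this tiered pattern is shown below, assuming a simple temperature-sensing workload; the function, names and window size are illustrative only, not a Pulsant reference design.

from statistics import mean

def summarise_at_edge(readings: list[float], window: int = 60) -> dict:
    # Aggregate raw sensor readings at a regional edge site so that only a
    # small summary, not every sample, is backhauled to a core facility or cloud.
    recent = readings[-window:]
    return {
        "count": len(recent),
        "mean": mean(recent),
        "max": max(recent),
        "min": min(recent),
    }

raw_samples = [21.4, 21.6, 22.0, 35.9, 21.8]   # e.g. one minute of readings at the edge
summary = summarise_at_edge(raw_samples)
print(summary)   # only this compact record crosses the carrier network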

2. SECURITY AND DATA PRIVACY

Historical figures predicting the growth of IoT have varied wildly. The most recent statistics suggest that there will be 39.9 billion connected devices by 2033. Europe will account for approximately 8.5 billion of these connections, of which approximately 3.26 billion will be either industrial or non-consumer.

With so many more devices collecting and transmitting richer data, security needs will increase as businesses address the risks of breach or unauthorised access. Locating compute in regional edge facilities means less data is exposed to long distance travel over the insecure internet, reducing risk and backhaul network costs.

However, it’s not just an issue of security, but also compliance and privacy. Edge infrastructure is inherently regional and local, making it easier to control where data resides and easier to comply with data sovereignty and privacy regulations.

3. SCALABILITY

Adapting swiftly to changing business trends is never easy, so it is important to be able to scale flexibly as markets and technology evolve. Regional edge infrastructure sits between central processing of large-scale data sets in compute-heavy, hyperscale facilities and local, real-time data generated by sensors and devices. Architecture that takes into account this separation in workloads is inherently more scalable, secure and cost effective. Edge data centres are better placed to handle the low latency requirements of real-time data workloads by reducing backhaul and improving application responsiveness. The availability of multiple connectivity options on site means bandwidth, speeds and resilience can scale as the business grows.

4. INTEROPERABILITY

So many applications, so much data and so many new business models. If the underlying architecture is sound, then interoperability between components can be improved with access to an ecosystem of suppliers, partners and providers that can be engaged rapidly and easily. Many ecosystems at the edge have become highly automated. Blockchain environments often feature the ability to engage or disengage different ecosystem components, seamlessly and at high speed. It’s likely that automation in the ecosystem will continue to proliferate as interoperability benefits increasingly support growth.

5. COST

Balancing the costs of investment in infrastructure with commercial returns from business operations is always tricky. Doing this with new or emerging technology such as IoT or AI can be risky, although the rewards are often substantial. Building a strategy around edge infrastructure can help minimise some of the costs in this journey in four ways:

1. Local data stays local. Cloud ingress/egress fees can be expensive, so minimising data transit to only essential data helps keep costs under control.

2. Minimising network costs. The cost of moving large quantities of data can mount up rapidly, especially if diverse routing or resilient links are required. Minimising the amount of data backhauled to large compute facilities or clouds will reduce costs.

3. Scale as you grow. Edge data centres provide highly scalable environments with supplier diversity; compute and connectivity costs can be aligned to growth.

4. Right workload, right place. Optimising performance and controlling costs can be problematic with public cloud. Edge facilities offer the ability to maintain control over infrastructure and tune application performance to meet business demands.

IoT may be an emerging technology, but the principles around successful project implementation are well established and still apply. Getting the basic infrastructure decisions right from the beginning will help accelerate projects towards generating successful returns.

Putting the right workload in the right place can make a significant difference to success. Edge facilities offer opportunities to scale in a controlled fashion, improve performance and reduce costs. Considering how edge and IoT work together from the outset will bring sustained benefits into the future.

Pulsant, pulsant.com

THE EDGE CUTS BOTH WAYS

IoT security is a constantly moving target, and despite new solutions emerging, so too do new problems. Here, Kevin Hilscher, Sr. Director Product Management – Device Trust, DigiCert, explores the elusive goal of IoT security and explains how organisations can securely manage their IoT devices.

IoT security feels like one long headache. Old problems recur and when solutions eventually come along, new problems emerge. The opportunities this technology offers are profound, but the risks which torment it are serious and stubborn.

When IoT devices first emerged they were eagerly connected by just about anyone who could get their hands on them - businesses, organisations and individual consumers alike. Demand skyrocketed and security professionals warned that this rapid and massive adoption of new devices and technologies could present dire risks.

INFILTRATING WELL-SECURED NETWORKS

Countless devices were shipped and connected with inbuilt vulnerabilities and design flaws. Many of them were shockingly obvious – hardcoded passwords, unencrypted data handling, firmware that couldn’t update, and default credentials that could be easily guessed or even found online in user guides.

It’s been a real coup for attackers, who have used that insecurity to their own profit. These flaws have provided easy access into otherwise well-secured networks and a seemingly infinite army of devices to recruit into botnets.

Many of these problems emanated from the fundamental immaturity of this technology and its supply chain. The simple fact is that device manufacturers had never before had to deal with information security and so designed components and devices without much consideration as to the threats they might face. Security measures – when they were present – were often bolted on afterwards, leaving the risks of these devices largely undealt with.

That has changed – at least partly. Organisations, consumers, regulators and industry bodies have woken up to the risks and are making great strides in attempting to secure the IoT. Standards have emerged – such as Matter – which aim to introduce secure communication for smart home devices, regulators have introduced legislation that could police risk within the IoT supply chain, and consumers have started demanding greater security for their devices. Many problems, unfortunately, remain.

THE THREAT OF DDOS ATTACKS

Take IoT botnets, for example. One of the biggest early examples of the profound spread of vulnerabilities within IoT was Mirai. This simple piece of malware infected IoT devices – including routers, smart doorbells and kettles – and enlisted them in what eventually became a huge botnet, which launched successful DDoS (Distributed Denial of Service) attacks against enormous hosting providers, telecoms giants and even the entire internet infrastructure of Liberia. The malware achieved this simply by guessing the default credentials of a device from a small library of commonly used passwords. It was that basic design fault that allowed Mirai to launch some of the most powerful DDoS attacks ever seen at the time.

Mirai has been shut down and many manufacturers have stopped using default passwords. That doesn’t mean the problem has gone away. In fact, it may have become worse. Nokia’s 2023 Threat Intelligence Report, for example, reveals that IoT botnet DDoS attacks increased fivefold over the previous few years. In fact, the report continues, 40% of all DDoS traffic is from IoT botnets.
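
Mirai needed nothing more sophisticated than a short list of factory defaults, which points to one cheap, practical countermeasure: auditing your own estate for them. The following is a minimal, hypothetical sketch in Python – the inventory format and the credential list are invented for illustration, not drawn from any real product – showing how an asset-register export could be checked against well-known default username/password pairs:

# Illustrative sketch only: flag devices in a hypothetical inventory export
# that still use well-known factory-default credentials.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
    ("user", "user"),
}

def audit_inventory(devices):
    """Return hostnames whose stored credentials match a known default pair."""
    flagged = []
    for device in devices:
        if (device.get("username"), device.get("password")) in KNOWN_DEFAULTS:
            flagged.append(device["hostname"])
    return flagged

if __name__ == "__main__":
    inventory = [  # hypothetical asset-register export
        {"hostname": "cam-lobby-01", "username": "admin", "password": "admin"},
        {"hostname": "sensor-dc-07", "username": "ops", "password": "Xk2pq9z"},
    ]
    for hostname in audit_inventory(inventory):
        print(f"{hostname}: factory-default credentials still in use")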

A CONSTANTLY MOVING TARGET

Again, IoT security is a constantly moving target. Old problems persist and, while new solutions emerge, so do new problems. ChatGPT serves as an interesting example. It is becoming an added feature of many IoT devices but unfortunately provides yet another vector into the device – and the broader networks to which it is attached. These problems can come from simple vulnerabilities in the applications themselves, or cunning attackers can use prompt injection to manipulate the device and the data it handles in sinister ways.

But for all the potential new threats which have arrived in recent years, problems still remain within the organisations that maintain these IoT deployments, according to DigiCert’s 2023 Digital Trust survey. Of the organisations surveyed, for example, nearly all of them transmit personally identifiable information across IoT deployments without encryption. These deployments can be made up of hundreds or even thousands of sensors and devices, creating ample opportunity for data to leak, be corrupted or stolen.
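
Encrypting telemetry in transit is one of the more tractable parts of that problem, and it does not require exotic tooling. Below is a minimal sketch, using only the Python standard library and a hypothetical collector endpoint, of sending a sensor reading over a TLS-wrapped socket rather than in the clear (a real deployment would also authenticate the device itself, for example with client certificates):

# Illustrative sketch only: send a sensor reading over TLS instead of plain
# text. The endpoint, port and payload are hypothetical examples.
import json
import socket
import ssl

COLLECTOR_HOST = "telemetry.example.com"   # hypothetical ingest endpoint
COLLECTOR_PORT = 8883

reading = {"device_id": "sensor-dc-07", "temp_c": 24.6, "operator": "j.smith"}

context = ssl.create_default_context()     # verifies the server certificate
with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=COLLECTOR_HOST) as tls_sock:
        tls_sock.sendall(json.dumps(reading).encode("utf-8"))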

Given that scale and complexity, organisations struggle to manage those deployments in a secure way. Only 24% of surveyed organisations can update those devices in the field, only 4% can update their algorithms, and only 3% can revoke device identities. Nearly all (93%) of respondents agreed that these issues resulted in data breaches, outages and exploits, while 84% said they had resulted in direct break-ins by malicious actors.
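
Closing those gaps starts with basics such as only installing firmware that can be proven to have come from the manufacturer. A minimal sketch of that one building block, assuming an RSA signing key and the widely used Python cryptography package (file paths and key handling are simplified for illustration; secure boot, rollback protection and revocation are out of scope):

# Illustrative sketch only: verify a detached RSA signature on a firmware
# image before installing it. Paths and names are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def firmware_is_authentic(image_path, sig_path, pubkey_path):
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    ok = firmware_is_authentic("update.bin", "update.bin.sig", "vendor_pub.pem")
    print("install" if ok else "reject: signature check failed")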

‘LEADERS AND LAGGARDS’

New IoT threats might not be something organisations can control, but securely managing their own IoT devices is well within their reach. The rewards of IoT adoption, perhaps predictably, redound to those who do so.

DigiCert’s Digital Trust survey distinguishes between ‘leaders’ and ‘laggards’ – the organisations that maintained high levels of trust maturity in their IoT deployments, and those that didn’t. Almost all of the trust-mature organisations – the leaders – reported acquiring new customers more successfully, compared with just 64% of the ‘laggards’.

Most leaders (70%) reported greater productivity while less than a quarter of laggards did. The problems laggards experienced were also worse. Half of the laggards reported compliance problems emanating from their IoT deployments, but not a single leader did.

IoT security must be a frustrating discipline. Many defenders will feel that IoT attack surfaces and device risks widen and proliferate faster than they can secure them. However, the extent to which they can mitigate the emergence of new threats is always going to be limited. Instead, organisations need to make sure they’re doing what they can to stop their IoT deployments unintentionally exposing them to unnecessary risks.

DigiCert, digicert.com

ONE EYE ON THE FUTURE

Andreas Rüsseler, CMO of Reichle & De-Massari (R&M) – a global cabling and connectivity solutions provider for high-end communication networks – looks at the latest trends from the company’s most recent research findings.

R&M would like to share some findings from its most recent research into technology and market developments in areas including data centres, FTTx, LAN and smart cities.

Increasing demand for internet services, especially in underserved areas, remains the key deployment driver for FTTx. Operators will keep upgrading network technologies, and networks may see greater integration with smart city infrastructure. The company is seeing greater uptake of smaller-diameter cables with 200/180µm fibres and blow-in technologies, for cost-effective, scalable, faster, future-proof rollouts.

Demand for resilient, extensive broadband is driving underground deployment. High fibre count micro blow-in cables can reduce costs and deployment time in urban regions. Ducts can accommodate additional fibres without new construction work. Equipping cables with sensing technology allows monitoring of infrastructure health, detecting breaks and providing data on environmental conditions, improving maintenance and response times.

Aerial deployment remains an attractive –often the only – option for realising fast, cost-effective remote rollouts. Aerial cables can be installed using existing pole infrastructure. Pre-terminated cable reduces staff training and experience requirements and investments in splicing and test equipment.

Smart city infrastructure, with countless IP-equipped low-latency devices, is an increasingly significant driver. Powerful fibre networks are needed to support IoT, 5G and anticipated 6G networks. Smart city functionalities require 4G and 5G small cell networks (indoor and outdoor) and macro cells.

An all IP-based ‘digital ceiling’ cabling approach, based on extending an RJ45-based data network throughout a smart building, supports all necessary protocols in a standardised way. Network switches, sensors, controls, WLAN access points and other distributed building services connect to building automation via pre-installed overhead connection point zones. Plugged-in devices are immediately powered and connected to the network. Next-generation wireless devices, delivering advanced connectivity technologies such as Wi-Fi 6 (IEEE 802.11ax), require 25G/40G connectivity.

In data centres, 40G (4 x 10G) and 100G (4 x 25G) links require eight fibres in parallel pairs, but as migration to 400G/800G (8 x 50G / 8 x 100G) continues, 16 or 32 pairs are needed, boosting cable density. Smart migration paths and monitoring/asset management are key. DC operators need to utilise (rack) space more efficiently as density increases. Very Small Form Factor (VSFF) connectors (SN/MDC), slim ribbon fibres, and high-density connectors and closures are also important. With thousands of cables running into the DC, fast, easy, splice-free connections are increasingly important too.
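
The fibre counts quoted above follow from simple lane arithmetic: as a rule of thumb, each electrical lane in a parallel-optics link occupies one transmit and one receive fibre, with breakout and higher-lane-count transceiver variants multiplying the total further. A minimal illustrative sketch of that calculation (the lane counts shown are commonly deployed ones, not a statement about any particular transceiver standard):

# Illustrative sketch only: fibres per parallel-optics link, assuming one
# transmit and one receive fibre per electrical lane.
LINKS = {
    "40G (4 x 10G)": 4,
    "100G (4 x 25G)": 4,
    "400G (8 x 50G)": 8,
    "800G (8 x 100G)": 8,
}

for name, lanes in LINKS.items():
    fibres = lanes * 2   # one fibre in each direction per lane
    print(f"{name}: {lanes} lanes -> {fibres} fibres")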

Solutions such as pre-terminated cabling and new push-pull fibre connector types significantly reduce handling and installation time, guarantee functionality and improve first-time installation quality. Preconfigured cabinets with power, cooling, security and connectivity offer a neat modular solution.

Digitisation is profoundly affecting DCs, with the availability of new tools and information and the need to accommodate fast-changing requirements. Decentralisation and hybrid/multi-cloud strategies are becoming prevalent as new applications draw on different types of DC elements – close to the data source (edge), on-premise DCs, and private and public cloud – all in one.

The need to handle rising data traffic efficiently and ensure low-latency connections for AI is driving Spine-Leaf architecture, which could also become part of managing the increasing complexity of DCs, especially in light of higher efficiency requirements set by updated norms and governmental guidance.

Planning for future capacity while keeping availability high and service interruptions low remains challenging. Equipment is continuously added, moved, or replaced, making accurate, real-time visibility into processes and assets difficult. Larger, more complex DCs increasingly rely on Automated Management Systems and Data Centre Infrastructure Management solutions.

Introducing IoT and asset and capacity management has made DCIM essential to operations. An ‘expert layer’ can present understandable, actionable KPI-related insights from across DC systems. DCIM can also support compliance with standards and anticipate issues before they result in non-compliance. Incorporating AI and AR into DC asset management can enhance resource utilisation and decision-making. DC design and building can be optimised using ‘digital twins’.

5G, low latency and high-speed connectivity are enabling edge computing growth. This requires a network of smaller, distributed data centres that process and store data locally, reducing latency and bandwidth use.

IoT-driven data production is pushing DCs to scale up storage capacity and develop more efficient data processing. To manage and derive insights from data, DCs increasingly rely on AI and ML algorithms, automation and orchestration. 5G networks use network slicing to provide virtualised, independent logical networks on physical network infrastructure.
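
As one familiar example of the kind of KPI such an ‘expert layer’ might surface – PUE is used here purely for illustration and is not attributed to any particular DCIM product – a minimal sketch:

# Illustrative sketch only: derive Power Usage Effectiveness (PUE) from
# facility and IT power readings; the readings are hypothetical samples.
def pue(total_facility_kw, it_load_kw):
    """PUE = total facility power / IT equipment power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

readings = {"total_facility_kw": 1326.0, "it_load_kw": 960.0}
print(f"PUE = {pue(**readings):.2f}")   # prints: PUE = 1.38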

We see a marked difference between local and national DC infrastructure. The former caters to local businesses, government agencies, and other entities requiring specialised services, data processing, and storage within a limited geographical area. Less space and power means these DCs can host fewer servers, handle less data and may have limited redundancy. They must comply with local and state regulations.

Nationwide DCs may consist of multiple data centres covering different regions. Larger scale, distributed architecture and typically greater capacity enable them to manage vast amounts of data and extensive traffic, though distribution can introduce latency. These DCs must adhere to national and international regulations.

In Local Area Networks, a need for faster, more reliable wireless connections will push adoption of Wi-Fi 6 and 6E. Centralised SDN control that can efficiently manage and orchestrate network resources in dynamic LAN environments should become more prevalent. Information Technology (IT) and Operational Technology (OT) network convergence is expected to accelerate, along with increased uptake of Power over Ethernet (PoE), IoT devices and edge facilities.

A ‘holistic fibre’ backbone, merging data and building control, is becoming widespread. LAN convergence is largely driven by the need to simplify, improve efficiency, and reduce costs, while enhancing uniformity, functionality, and flexibility. Centralising IT resource management provides enormous technical and business efficiency increases by consolidating systems, boosting resource utilisation, saving energy, lowering costs, and leveraging system intelligence.

An All-IP approach enables connection of devices to building automation via pre-installed overhead connecting points. Instead of separate networks for telephony, data, and video, there’s just one network to manage. This can reduce cost and complexity of physical cabling infrastructure and network management. All building technology and management devices can communicate in the same way, without barriers, over Ethernet/Internet Protocol (Ethernet/IP), with the LAN providing the basis for physical communication. Internet, cloud and smart grid can be integrated in the background. LAN-enabled IoT can help monitor and manage energy usage to reduce carbon output without impacting comfort or quality of living.

Uptake of Single Pair Ethernet (SPE) cabling based on xBASE-T1 using a single twisted pair for data transmission is expected to keep growing. SPE enables integration of field devices, sensors and actuators into an existing Ethernet environment, without extra gateways and interfaces. SPE is well-suited for connecting sensors and actuators in industrial environments due to its support for cable runs up to 1km and its ability to deliver power and data over a single wire pair, simplifying cabling and reducing installation costs.

SPE can transmit up to 50W along with data and control signals (Power over Data Lines, PoDL) – ideal for Industrial Internet of Things (IIoT) applications. It can support converged networks for data, voice and video over a single network infrastructure. As SPE technology matures, it may play a prominent role in industrial automation scenarios, as well as building automation and remote or centralised building management. However, as LAN bandwidth, power and length performance demands grow to support 10 to 40Gbps, PoE and comprehensive Ethernet/IP coverage, SPE can supplement existing cabling but can’t always replace RJ45 technology.

GENERAL TRENDS

Fibre and copper networks for data centres, Telcos, industrial applications, and LAN must become more energy-efficient, integrate renewable power, and consider environmental impact in design and operation.

R&M also expects that within a few years, data communication, mobile, video, and other networks will merge onto a single network. Previously, separate devices fulfilled specific functions, but now we need to define different functions and integrate hardware and software. That requires greater attention to interoperability, integration, standards, monitoring and optimisation.

R&M, rdm.com

R&M, a globally active developer and provider of high-end infrastructure solutions for data and communications networks, is now offering Release 5 of the DCIM software, inteliPhy net.

With Release 5, inteliPhy net is turning into a digital architect for data centres. Computer rooms can be flexibly designed according to the demand, application, size, and category of the data centre. Planners can position the infrastructure modules intuitively on an arbitrary floor plan using drag-and-drop, and inteliPhy net enables detailed 2D and 3D visualisations that are also suitable for project presentations.

With inteliPhy net, it is possible to insert, structure and move racks, rack rows and enclosures with just a few clicks. Patch panels, PDUs, cable ducts and pre-terminated trunk cables can be added, adapted and connected virtually just as quickly. The software finds optimal routes for the trunk cables and calculates the cable lengths.

inteliPhy net also contains an extensive library of templates for the entire infrastructure, such as racks, patch panels, cables and power supply.

R&M, rdm.com

EATON ANNOUNCES LAUNCH OF 5P GEN 2 UPS

Intelligent power management company, Eaton, has announced the launch of the Eaton 5P Gen 2 UPS, a compact and more efficient power solution for edge and IT needs.

Reportedly delivering more output, security and control than any other device in its class, this new product range also enables fleet management, remote UPS setting and remote firmware upgrades.

The 5P Gen 2 has enhanced power capability and provides up to 1350W, which is 22% more than its predecessor and 33% more than comparable models available on the market, Eaton says, making it ideal for protecting a wide range of applications. Its intelligent design ensures both stable performance and energy savings, while advanced load segment control prioritises critical equipment and optimises battery runtime.

This UPS model features the Eaton ABM+ Advanced Battery Management technology, which extends battery life by up to 50% and allows for accurate battery life prediction and timely replacement alerts powered by machine learning. It also comes with hot-swappable batteries and an intuitive battery replacement wizard via a built-in graphical LCD.

Eaton, eaton.com

SCHNEIDER REVEALS DATA CENTRE WHITE SPACE PORTFOLIO

Schneider Electric has unveiled its revamped data centre White Space portfolio.

The new portfolio includes the second generation of NetShelter SX Enclosures (NetShelter SX Gen2), new NetShelter Aisle Containment, and a future update to the NetShelter Rack PDU Advanced, designed to meet the evolving needs of modern data centres – particularly those handling high-density applications and AI workloads.

The NetShelter SX Gen2 Enclosures are specifically engineered to support the demands of contemporary data centres. These new racks can support up to 25% more weight than previous models, handling approximately 4,000lbs (1,814kg), which is essential for accommodating the heavier, denser equipment associated with AI and high-performance computing.

The latest NetShelter Aisle Containment can achieve up to 20% more cooling capacity. This is crucial for managing the heat generated by AI servers and other high-density applications. The system incorporates an air flow controller that automates fan speed, reducing fan energy consumption by up to 40% compared to traditional passive cooling systems.

Lastly, the NetShelter Rack PDU Advanced with Secure NMC3 is an updated power distribution unit equipped with advanced security features and enhanced management capabilities.

Schneider Electric, se.com

VERTIV UNVEILS NEW AI POWER AND COOLING INNOVATIONS

Vertiv, a global provider of critical digital infrastructure and continuity innovations, has announced a new portfolio of high-density data centre infrastructure solutions to support the higher power and cooling requirements of AI.

Now available across EMEA, Vertiv 360AI is designed to accelerate AI deployments of any scale, with designs ranging from rack solutions for test pilots and edge AI, to full data centres for AI model training.

AI and accelerated computing are driving unprecedented demand for power and cooling, with rack densities anticipated to reach up to 500kW per rack. As a result, power and cooling infrastructure design and deployment has become significantly more complicated.

Vertiv 360AI provides a simple way to power and cool AI, with a complete portfolio of power, cooling and service solutions that solve the complex challenges arising from the AI revolution. Vertiv 360AI solutions include validated designs and pre-engineered solutions to provide the benefit of Vertiv’s deep expertise while eliminating design cycles.

Vertiv, vertiv.com

OBJECT FIRST UNVEILS LARGER STORAGE CAPACITY FOR OOTBI

Object First, the provider of the ransomware-proof backup storage appliance purpose-built for Veeam, has announced increased storage capacity of up to 192TB on a single Ootbi node that unlocks up to 768TB of usable immutable backup storage per cluster.

The latest release allows customers to manage storage capacity more efficiently and back up data more securely without sacrificing performance. The 192TB version of Ootbi joins the existing 64TB and 128TB appliances, all of which are interoperable.

“This announcement reinforces Object First’s commitment to innovation and meeting customers’ needs for secure, adaptable storage capacity,” says Eric Schott, Chief Product Officer, Object First. “Support for more immutable storage allows customers to scale with the growing demands of modern environments while protecting data against the risks of threats like ransomware. The continuous integration with Veeam’s 12.1.2 release allows for even greater backup storage capacities beyond 3PB as part of a Veeam backup repository.”

Object First, objectfirst.com

ARISTA UNVEILS ETHERLINK AI NETWORKING PLATFORMS

Arista Networks, a provider of cloud and AI networking solutions, has announced the Arista Etherlink AI platforms, designed to deliver optimal network performance for the most demanding AI workloads, including training and inferencing.

Powered by new AI-optimised Arista EOS features, the Arista Etherlink AI portfolio supports AI cluster sizes ranging from thousands to 100,000s of XPUs with highly efficient one and two-tier network topologies that deliver superior application performance – while offering advanced monitoring capabilities including flow-level visibility.

The 7060X6 AI Leaf switch family employs Broadcom Tomahawk 5 silicon, with a capacity of 51.2Tbps and support for 64 800G or 128 400G Ethernet ports.

The 7800R4 AI Spine is the fourth generation of Arista’s flagship 7800 modular systems. It implements the latest Broadcom Jericho3-AI processors with an AI-optimised packet pipeline and offers non-blocking throughput with the proven virtual output queuing architecture.

The 7700R4 AI Distributed Etherlink Switch (DES) supports the largest AI clusters, offering customers massively parallel distributed scheduling and congestion-free traffic spraying based on the Jericho3-AI architecture.

Arista Networks, arista.com

STULZ LAUNCHES COOLANT MANAGEMENT AND DISTRIBUTION UNIT

STULZ, a global mission critical air conditioning specialist, has announced the launch of CyberCool CMU – an innovative new coolant management and distribution unit (CDU) that is designed to maximise heat exchange efficiency in liquid cooling solutions.

Launched at Data Centre World Frankfurt 2024, CyberCool CMU seeks to offer industry-leading levels of energy efficiency, flexibility and reliability within a small footprint, while providing precise control over an entire liquid cooling system.

CyberCool CMU has been developed to maximise heat exchange by isolating the facilities water system (FWS) and technology cooling system (TCS) elements of a liquid cooling system. This significantly reduces the risk of cross-contamination and leaks, thereby enhancing overall reliability.

It also provides precise control over each side of the cooling system, enabling better management of coolant flow rates, temperatures and pressure, which improves overall system efficiency. As it is precision engineered, CyberCool CMU accurately controls the supply temperature and flow rate of the coolant with minimal power consumption.

STULZ, stulz.com

INFINIDAT LAUNCHES NEW FAMILY OF CYBER SECURE STORAGE ARRAYS

Infinidat has announced the launch of the InfiniBox G4 family of next-generation storage arrays for all-flash and hybrid configurations, along with a series of significant enhancements and new capabilities that advance the company’s InfiniVerse infrastructure consumption services platform, seamless hybrid multi-cloud support, and cyber security capabilities.

Taking a platform-centric approach, Infinidat unveiled a strategic extension of its Storage-as-a-Service (STaaS) offerings with the advancement of its InfiniVerse platform.

The new Infinidat G4 storage arrays deliver up to twice the performance of the current generation of InfiniBox and InfiniBox SSA II solutions and include a new lifecycle management controller upgrade option called InfiniVerse Mobius.

This launch also includes support for Microsoft Azure public cloud with InfuzeOS Cloud Edition, and enhancements to Infinidat’s InfiniSafe enterprise cyber storage resilience and recovery solution. The InfiniSafe enhancements include new Automated Cyber Protection (ACP), InfiniSafe Cyber Detection capabilities for VMware environments, and the extension of InfiniSafe Cyber Detection to Infinidat’s InfiniGuard purpose-built backup appliance in the second half of 2024.

Infinidat, infinidat.com

DIGICERT UNVEILS DEVICE TRUST MANAGER FOR IOT SECURITY

DigiCert, a global provider of digital trust products, has announced the evolution of its IoT security platform with the launch of DigiCert Device Trust Manager, an innovation designed to safeguard IoT devices throughout the entire lifecycle.

The new Device Trust Manager addresses the critical needs of device manufacturers for an integrated and scalable solution to secure IoT devices, manage complex compliance requirements, and ensure operational and device integrity amidst growing threats targeting devices.

“With Device Trust Manager, DigiCert is reinforcing its commitment to digital trust in the rapidly expanding IoT landscape,” says Deepika Chauhan, Chief Product Officer, DigiCert. “We’re excited to introduce this integrated platform to new and existing customers, transforming IoT device security with comprehensive protection throughout the device lifecycle. Device Trust Manager checks all the IoT boxes, except the one labelled ‘Ship and Pray’.”

DigiCert Device Trust Manager offers unparalleled security for every stage of the IoT device lifecycle, from birth to decommission, ensuring compliance while improving operational efficiency.

DigiCert, digicert.com

