DCNN Autumn 2024



APT and Schneider Electric transform The Pirbright Institute’s data centre to fast-track advanced viral research

DCNN is the total solution at the heart of the data centre industry

Read the case study on page 10

Environmental monitoring experts and the AKCP partner for the UK & Eire.

How hot is your Server Room?

Contact us for a FREE site survey or online demo to learn more about our industry-leading environmental monitoring solutions, with Ethernet and WiFi connectivity, over 20 sensor options for temperature, humidity, water leakage, airflow, AC and DC power, a 5-year warranty, and automated email and SMS text alerts.

projects@serverroomenvironments.co.uk

A NEW DAWN FOR DATA CENTRES

Welcome to the Autumn issue of DCNN, and I trust that you all had an enjoyable summer.

It’s been a particularly busy time for the sector since our previous issue was published. Although we were initially focused on the outcome of the general election – and the promising news that the government quickly announced plans to tackle the skills gap and accelerate the transition to net zero – our attention subsequently turned to the news that data centres have been designated as ‘Critical National Infrastructure’ here in the UK.

The first such designation since 2015 (the previous being for the space sector), the announcement was met with an overwhelmingly positive reaction, with many praising it as a step forward for the UK’s digital infrastructure sector. However, caution is still being urged in some quarters, with Dr. Thomas King of DE-CIX noting that, “Data centres on their own are of little value. Interconnection between data centres – therefore robust networks and interconnection platforms – is essential to enable the data and applications housed in data centres to create value for society and business, and these also require recognition as critical infrastructure.”

CONTACT US

EDITOR: SIMON ROWLEY

T: 01634 673163

E: simon@allthingsmedialtd.com

GROUP EDITOR: CARLY WELLER

T: 01634 673163

E: carly@allthingsmedialtd.com

SALES DIRECTOR: KELLY BYNE

T: 01634 673163

E: kelly@allthingsmedialtd.com

MANAGING DIRECTOR: IAN KITCHENER

T: 01634 673163

E: ian@allthingsmedialtd.com

Further details on the CNI designation can be found in our news pages, and additional analysis can be found in our key issue feature on page 14.

In other news, I recently had the pleasure of touring the Wellcome Sanger Institute just outside of Cambridge as a guest of Schneider Electric – whose DCIM software and PDU systems have helped slash energy consumption in the Institute’s data centres by a third. Learning of the critically important research taking place at the institute, and seeing the data centres supporting its efforts, made a genuinely big impression, so stay tuned for additional coverage from my visit in our Winter issue.

In the meantime, there’s more from Schneider Electric courtesy of this month’s cover story on page 10, which details how The Pirbright Institute has transformed viral research computing thanks to a new data centre – which is another hugely impressive project, and one you’ll very much enjoy reading about.

Thank you as always for your ongoing support, and enjoy the issue!

STUDIO: MARK WELLER

T: 01634 673163

E: mark@allthingsmedialtd.com

CEO: DAVID KITCHENER

T: 01634 673163

E: david@allthingsmedialtd.com

ACCOUNTS

T: 01634 673163

E: susan@allthingsmedialtd.com

A look at how The Pirbright Institute has transformed its viral research computing courtesy of a new data centre from Schneider Electric

On 12 September 2024, data centres were given ‘Critical National Infrastructure’ status in the UK – but what does this mean for data centre providers?

Simon Rowley speaks with Alex Brew of Vertiv about the AI boom and the impact this will present for companies across the data centre sector

Louis McGarry of Centiel explains why investing in high-quality, true modular UPS systems will result in peace of mind and long-term cost savings

A special look ahead to the DataCentres Ireland conference and exhibition taking place at the RDS in Dublin on 20-21 November

Hans Obermillacher of Panduit looks at the benefits that can be derived from shrinking all aspects of the data centre

31 Sam Walker of ProLabs explores the benefits of third-party optical solutions for alternative networks

34 Dave Swadling of Eclipse Power explains how IDNOs can release data centre operators from the constraints that could hold back delivery of their potential

38 Rich Jensen of Huber+Suhner explains how data centres can meet the demand for processing power in AI applications

40 Alan Stewart-Brown of Opengear looks at five ways of enhancing enterprise networks amidst shrinking budgets

44 Carlos Mora of Corning Optical Communications assesses the best ways of maintaining a future-ready data centre while exploring Base-8 and Base-16 connectivity

48 MicroCare UK’s Liam Taylor describes the crucial role of clean fibre optic connections in advanced networks

50 Carsten Ludwig of R&M explains why data centre infrastructure needs to ensure integration between network connectivity, rack designs, cable management and DCIM

54 Jad Jebara of Hyperview looks at the ways DCIM can help modernise data centres in terms of both technology and talent

58 Kevin Brown of Schneider Electric looks at the future of DCIM and explains why it is the essential connection point for resilient, secure and sustainable IT

60 Russ Kennedy, Chief Evangelist, Nasuni, explains why data resilience is critical for success in the AI era

63 Rick Vanover of Veeam discusses the critical importance of data resilience and outlines the best strategies for success

66 Candida Valois of Scality explains why a cyber-resilient approach is the best way to safeguard critical data

68 DCNN looks at how Athos Therapeutics managed to scale drug discovery via AI analysis thanks to GPU-powered cloud infrastructure implemented by Vultr

UK DATA CENTRES DESIGNATED CRITICAL NATIONAL INFRASTRUCTURE

The government has now classed UK data centres – the buildings which store much of the data generated in the UK – as ‘Critical National Infrastructure’. It is the first Critical National Infrastructure (CNI) designation in almost a decade, since the Space and Defence sectors gained the same status in 2015.

It means the data housed and processed in UK data centres is less likely to be compromised during outages, cyber attacks and adverse weather events. Putting data centres on an equal footing with water, energy and emergency services systems means the data centres sector can now expect greater government support in anticipating and recovering from critical incidents, giving the industry greater reassurance when setting up business in the UK and helping generate economic growth for all.

CNI designation will, for example, see the setting up of a dedicated CNI data infrastructure team of senior government officials who will monitor and anticipate potential threats, provide prioritised access to security agencies including the National Cyber Security Centre, and coordinate access to emergency services should an incident occur.

UK Government, gov.uk

EXA INFRASTRUCTURE TO ACQUIRE BULGARIAN TELECOMS COMPANY

EXA Infrastructure has agreed to acquire Global Communication Net (GCN), a Bulgarian telecommunications company offering services on its own national fibre optic network of over 2,500km.

EXA Infrastructure provides critical modern infrastructure and engineering expertise that serves as the backbone for digital and economic growth, and the deal will see EXA Infrastructure acquire the full suite of GCN’s products and services, including dark fibre, wavelength and colocation services.

With the addition of GCN’s network, EXA Infrastructure will expand its extensive footprint into the strategically important region of South-Eastern Europe, providing key access to international interconnection points in Turkey, Greece, Romania, North Macedonia, Serbia and Georgia.

This region has grown in significance due to the traffic volumes entering Europe from Asia and the Middle East, and also due to the greater focus on network redundancy and alternative route provisioning given the lack of diversity around the Red Sea.

As a result of the deal, EXA Infrastructure now owns 155,000km of fibre network infrastructure across 37 countries.

EXA Infrastructure, exainfra.net

DATA CENTRE VACANCY RATE FALLS BELOW 10%

The colocation data centre vacancy rate dropped below 10% in Frankfurt, London, Amsterdam, Paris and Dublin (FLAPD) for the first time ever in Q2 2024, amid continued strong demand for space.

According to new research from CBRE, take-up (44MW) exceeded new supply (30MW) for the fourth straight quarter and the vacancy rate fell in Europe’s top five data centre markets as a result.

The vacancy rate of FLAPD stands at 9.8% and is expected to fall to 7.9% by year end, according to CBRE’s data. If that expectation comes to fruition, it will be the fifth consecutive year the vacancy rate has declined.

Hyperscalers’ interest in colocation data centres remains particularly high, driven by the need to deliver digital services, as well as by the desire to keep sought-after supply away from their competitors.

However, providers are finding it increasingly difficult to accommodate this demand, given a lack of available power and appropriate land in the primary markets of Europe. Data centre construction is increasingly difficult in markets such as Frankfurt and Amsterdam, where regulation features prominently in development plans.

As a result, take-up regularly exceeds the new supply delivered in most large European metro markets.

IMASONS ISSUES OPEN LETTER ON DECARBONISATION TARGETS

The Governing Body of the iMasons Climate Accord, a programme of Infrastructure Masons, is calling on all suppliers serving data centres to support greater transparency in Scope 3 emissions as part of broader efforts to reduce the industry’s carbon footprint.

Consisting of AWS, Digital Realty, Google, Meta, Microsoft and Schneider Electric, the Governing Body released an open letter that explains the importance of widespread adoption of Environmental Product Declarations (EPDs), which are standardised, third-party-verified documents reporting the embodied emissions of a product.

EPDs outline the greenhouse gas emissions of a product through its entire lifecycle, from the raw materials in the product, to manufacturing, transportation, product use, and product end-of-life.

While EPDs are common in some business sectors, there is not widespread adoption of EPDs in the data centre industry. The open letter demonstrates a significant push forward from the world’s largest hyperscalers and digital infrastructure companies to drive meaningful change across the industry, working in partnership with their trusted suppliers.

The signatories of the iMasons Governing Body’s open letter all have net zero carbon emissions commitments in place to address their responsibility in mitigating data centre carbon emissions, and this letter marks another milestone toward decarbonising operations.

iMasons Climate Accord, climateaccord.org

DDOS ATTACKS HAVE SURGED BY 106%, DATA REVEALS

Zayo Group, a global communications infrastructure provider, has released its latest bi-annual Distributed Denial of Service (DDoS) Insights Report, which includes details of a 106% increase in attack frequency from H2 2023.

The report also found that an average DDoS attack now lasts 45 minutes – an 18% increase from this time last year – costing unprotected organisations approximately $270,000 per attack, at an average rate of $6,000 per minute.
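As a quick sanity check, the quoted per-attack cost follows directly from the average duration and per-minute rate reported above; a minimal sketch in Python:

```python
# Averages as quoted in the Zayo report above.
avg_duration_min = 45     # average attack duration, minutes
cost_per_minute = 6_000   # average cost to an unprotected organisation, USD

cost_per_attack = avg_duration_min * cost_per_minute
print(f"~${cost_per_attack:,} per attack")  # ~$270,000, matching the report
```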

It takes very little time, expertise or investment to run a DDoS attack, and with the AI boom, bot-based attacks have made it even easier to attack more often, in a more sustained manner, and with more requests per second. Beyond intensifying frequency and duration, AI is also driving the increased pervasiveness of DDoS attacks across many industries.

Tema Hassan, Senior Product Manager at Zayo Europe, says, “Recent trends in DDoS attacks in Europe reveal a significant escalation in both frequency and sophistication. The number of attacks has surged, driven largely by geopolitical conflicts. This has led to an increase in attacks on critical sectors like financial services, telecommunications and internet service providers, which are vital to national infrastructure.”

Zayo Group, zayo.com

PROJECT TO DELIVER LOW LATENCY IN HOME BROADBAND NETWORKS

Broadband Forum has launched a new project to aid the delivery of low latency in home broadband networks and improve the user experience of interactive applications.

The global standards body will show operators and service providers how to implement Low Latency, Low Loss and Scalable Throughput (L4S) technology, which was specified by the Internet Engineering Task Force (IETF) last year in response to demands of new applications that require low and predictable latency for the best user experience.

L4S will enable service providers to offer services to subscribers that can support applications with latency and capacity demands at the same time and at the same network bottleneck. Offering network congestion control that was not previously available for latency sensitive applications, L4S can ensure better user experiences for cloud gaming, video conferencing, extended reality (XR) and more.

As L4S does not require implementation across the whole network to start delivering benefits, Broadband Forum’s project will support phased implementations of the technology that focus on the most beneficial parts of the network first, on the journey to full end-to-end support.
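To illustrate the mechanism for readers unfamiliar with it, the sketch below captures the core of the IETF’s L4S design (RFCs 9330-9332): packets carrying the ECT(1) ECN codepoint are steered into a shallow low-latency queue, and that queue’s marking probability is coupled to the classic queue’s congestion signal so both traffic classes back off to comparable rates. This is a simplified illustration of the published architecture, not Broadband Forum’s implementation guidance.

```python
# Simplified sketch of L4S dual-queue classification and coupled marking
# (after the DualQ Coupled AQM described in RFC 9332).
ECT1, CE = 0b01, 0b11  # ECN codepoints identifying L4S-capable packets

def classify(ecn_bits: int) -> str:
    """Steer L4S-marked packets into the shallow low-latency queue."""
    return "l4s_queue" if ecn_bits in (ECT1, CE) else "classic_queue"

def coupled_signals(p_base: float, k: float = 2.0) -> tuple[float, float]:
    """Classic traffic is dropped with probability p_base**2, while L4S
    traffic is ECN-marked with k * p_base, keeping the two classes fair."""
    return p_base ** 2, min(k * p_base, 1.0)

print(classify(ECT1))        # l4s_queue
print(coupled_signals(0.1))  # (~0.01 classic drop, 0.2 L4S mark)
```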

With unmatched availability, reliability and efficiency, StratusPower ensures seamless operations and business continuity, minimizing the risk of downtime, and provides a fault-tolerant architecture. From compact 10 kW modules to robust 62.5 kW options, the UPS meets a range of power requirements with the ability to scale up to an impressive 3.75 MW.

NEW MODULAR DATA CENTRE DEPLOYED AT PIRBRIGHT INSTITUTE

In this issue’s cover story, we look at how The Pirbright Institute has transformed viral research computing courtesy of a new data centre from Schneider Electric and APT.

Schneider Electric, the leader in digital transformation of energy management and automation, has deployed a new modular data centre at The Pirbright Institute, enabling it to stay abreast of new technological advancements – including artificial intelligence (AI) and high-performance computing (HPC) – and fast-track its vital scientific and viral research programmes.

Schneider Electric, together with its EcoXpert partner, Advanced Power Technology (APT), developed a new containerised data centre to meet the Institute’s requirement for a scalable, resilient, flexible and energy-efficient infrastructure that would ensure the highest levels of availability and continuity.

GLOBAL CENTRE OF EXCELLENCE

The Pirbright Institute is at the forefront of global viral research, operating as one of the UK’s leading virus diagnostics and surveillance centres. Pirbright is a world-leading centre of excellence for research into the control and surveillance of virus diseases of farm animals, and viruses that spread from animals to humans.

The Institute has undergone significant digital transformation and, as part of that process, has leveraged an advanced data centre solution from APT and Schneider Electric to enable new levels of scientific collaboration, while adhering to strict data sovereignty, storage, resiliency and security standards.

CONTAINERISED DATA CENTRE SOLUTION

Due to the mission-critical nature of its research, and the need to provide continuity of service during any modernisation projects, the Institute had to identify a new strategy to build out its infrastructure to support future technological requirements for HPC and AI.

The Institute first sought a strategy to modernise its legacy IT and comms rooms and deploy them away from its existing buildings; it then began to explore the benefits of a containerised data centre and engaged with APT to design and specify its new critical infrastructure system.

APT quickly identified a need for what became known as the CDC – the Institute’s new ‘Containerised Data Centre’. Using key components from Schneider Electric’s EcoStruxure for Data Centres solutions portfolio – including its EcoStruxure Row Data Center system, APC NetShelter racks, InRow cooling, Symmetra Uninterruptible Power Supplies (UPS) and APC NetBotz environmental monitoring – APT was able to pre-configure its design, enabling the solution to be pre-tested off-site for faster deployment.

The project was delivered in three phases and within a strict timeline of 12 weeks, ensuring minimal impact on the Institute’s business and critical applications. The first phase required detailed site preparation and connection of utilities, with new foundations laid for the data centre modules.

The second saw the data centre built, pre-configured and pre-integrated off-site, along with the migration of the existing applications and IT systems, which were to be retained by the Institute. The third and final phase was deployment; with tight physical access, the containerised modules were delivered to site via low-loader and craned into position in June 2023.

By the end of July 2023, the project was completed and commissioned ahead of schedule. The new data centre delivers 80kW of scalable, optimised and future-proofed capacity in an N+1 configuration, allowing the Institute to increase the resiliency and availability of its critical infrastructure systems.
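As a simple illustration of the N+1 arithmetic behind such a configuration, the sketch below computes module counts for a given load; the 20kW module rating is a hypothetical value for illustration only, as the module size used at Pirbright isn’t stated.

```python
import math

def modules_required(load_kw: float, module_kw: float, redundant: int = 1) -> int:
    """Modules needed to carry the load, plus `redundant` spare modules (N+R)."""
    return math.ceil(load_kw / module_kw) + redundant

# Hypothetical 20kW modules serving the 80kW load described above:
print(modules_required(80, 20))  # 5 modules: four carry the load, one is the '+1'
```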

“The unique set of challenges we encountered at The Pirbright Institute required a tailor-made data centre solution, meeting its requirements for fast deployment, increased security, availability and efficiency,” says John Thompson, Managing Director of APT. “The new containerised data centre will provide a long-term, flexible, collaborative and scalable solution, which enables its end-users to deliver the highest standards of research, while meeting future demands for security and sustainability.”

SECURE, SUSTAINABLE, SCALABLE

With the containerised data centre at its core, Pirbright’s infrastructure is future-proofed for new evolutions in high-tech research equipment, such as sequencers and diamond-light processes for virus analysis that can generate data sets of 700GB each. It also allows the Institute to leverage new advancements in HPC, AI and GPU-powered computing, identifying breakthroughs in viral research at a far faster rate.

Security is paramount, and the data centre includes complete monitoring and management systems, delivered via Schneider Electric’s EcoStruxure IT Expert data centre infrastructure management (DCIM) software. This is supported by NetBotz environmental monitoring, with over 60 data parameters measured and managed, including temperature, humidity, leak detection, and multiple cameras providing real-time information via one complete platform.
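To make the threshold-monitoring idea concrete, here is a generic sketch of the kind of limit-checking a DCIM platform performs across such parameters; the parameter names and limits below are hypothetical and are not EcoStruxure’s actual configuration.

```python
# Hypothetical environmental limits, for illustration only.
LIMITS = {"temperature_c": (18.0, 27.0), "humidity_pct": (40.0, 60.0)}

def check(readings: dict[str, float]) -> list[str]:
    """Return an alert for every reading outside its allowed band."""
    alerts = []
    for name, value in readings.items():
        low, high = LIMITS[name]
        if not low <= value <= high:
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts

print(check({"temperature_c": 29.5, "humidity_pct": 45.0}))
# ['temperature_c=29.5 outside [18.0, 27.0]']
```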

Heavy-duty enclosures are also utilised outside the facility to prevent physical intrusion, and the design has enabled Pirbright to extend the life cycle of the data centre, delivering a 50-year life span.

PROVISIONING FOR THE FUTURE

The new data centre now provides a dedicated, world-class IT function that allows Pirbright to compete for ground-breaking research projects on a global basis. Its scalable, modular architecture also provides the highest levels of availability, resiliency and efficiency.

The data centre has also allowed The Pirbright Institute to bring its IT infrastructure in line with its development master plans, ensuring it retains its place as the UK’s foremost centre of excellence in research and surveillance of viral diseases. Future plans include new laboratories, scientific and administrative facilities, three new centres of computing, and a control facility.

Schneider Electric, se.com

Make

greater with an industry-leading network

Celebrate 40 years with APC.

Start leveraging four decades of uninterrupted protection, connectivity, and unparalleled reliability with the APC UPS family, a legacy marked by pioneering UPS technology and an unwavering commitment to innovation.

A CRITICAL NEW ERA

On 12 September 2024, data centres were given ‘Critical National Infrastructure’ status in the UK – but what does this mean for data centre providers? David Varney, Partner at UK law firm, Burges Salmon, and Victoria McCarron, Solicitor at Burges Salmon, explain.

Earlier this month, the Technology Secretary, Peter Kyle, declared that UK data centres will now be classified as Critical National Infrastructure (UK CNI), marking the first new CNI designation since 2015. UK CNI comprises critical elements of infrastructure, the loss or compromise of which could result in a major detrimental impact on essential public services, emergency systems, national security, defence, or the functioning of the state.

This new designation places data centres on par with essential services, ensuring they receive prioritised support during critical incidents such as cyber attacks, environmental disasters and IT blackouts. This follows the Science and Technology Committee’s recent inquiry into the cyber resilience of the UK CNI sector, during which the importance of bolstering the digital infrastructure against potential cyber attack was emphasised.

KEY ASPECTS

Data centres are crucial to the UK’s digital economy, powering essential services like healthcare, finance and, increasingly, AI applications. Investment in data centres has surged recently, particularly within the UK; for example, Chancellor Rachel Reeves confirmed that Amazon Web Services plans to invest £8 billion in the UK over the next five years to build, operate and maintain data centres. Important aspects and implications of their designation as UK CNI include the following:

• Strengthening the UK’s digital infrastructure: The UK Government’s growing investment in the digital sector necessitates parallel enhancements in protections to ensure its resilience and security. A notable recent development is the proposed £3.75 billion investment, welcomed by the UK Government, in Europe’s largest data centre in Hertfordshire, which is anticipated to create nearly 14,000 jobs across the UK. As technological advancement and development become increasingly central to government policy and integral to the daily lives of UK citizens – such as in NHS records, financial information and personal data stored on smartphones – it is increasingly critical to ensure the digital infrastructure storing this data is secure.

• Recent cyber security incidents: The need for greater resilience in the UK’s digital infrastructure is highlighted by two significant incidents this year. The first was a ransomware attack affecting services provided by Synnovis, a pathology firm, causing severe disruptions at healthcare sites including Guy’s and St Thomas’ Hospital and King’s College Hospital, which resulted in the cancellation of operations and the diversion of emergency patients.

Additionally, the faulty CrowdStrike software update that caused a global computing outage was estimated to have caused approximately £7.8 billion in damages, indicating the potential financial impact arising from such incidents. The greater protection given to data centres by the new CNI classification will reduce and mitigate the impact of such incidents.

• NIS regulations: The UK Network and Information Systems Regulations 2018 (NIS) are a crucial cyber security framework applicable to ‘operators of essential services’ and ‘relevant digital service providers’, enhancing the security and resilience of network and information systems across sectors like energy, healthcare and finance.

The NIS2 Directive, which came into force across the European Union in January 2023, is aimed at CNI sectors and expands the original scope of the NIS Directive to include other critical sectors such as space, waste, water, food and manufacturing. Although the EU’s NIS2 Directive does not apply directly to the UK, the UK Government plans to align its NIS regime with the EU’s updated framework to strengthen cyber defences, particularly for digital service providers, and to future-proof the regulations.

Proposed reforms include expanding the scope to cover ‘managed services’ and implementing a flexible risk-based assessment regime regulated by the UK Information Commissioner. These measures aim to ensure high levels of cyber-resilience and safeguard essential services against cyber threats.

• Cyber Security and Resilience Bill: The government plans to introduce the Cyber Security and Resilience Bill to strengthen the country’s cyber defences, as announced in the King’s Speech in July. This legislation will mandate that providers of essential infrastructure (i.e. UK CNIs) protect their supply chains from cyber threats, as well as expanding the scope of the current NIS Regulations, safeguarding a wider range of digital services and supply chains than currently protected.

• Enhanced government support: The new classification means UK data centres will receive additional government support in anticipating and recovering from emergencies. This includes the creation of a dedicated CNI data infrastructure team of senior officials who will monitor potential threats and coordinate priority access to government security agencies (including the National Cyber Security Centre) and emergency services to ensure rapid response and recovery during critical incidents.

TAKEAWAYS

The classification of data centres as Critical National Infrastructure marks a pivotal moment for the UK’s digital economy. By providing enhanced protections and support, the UK government aims to ensure the resilience and security of data centres, fostering a secure environment for investment and growth. This move is intended not only to safeguard vital data but also to reinforce the UK’s position as a leader in data security and technological innovation.

Burges Salmon, burges-salmon.com

THE AI BOOM HAS ARRIVED AND IS HERE TO STAY!

In this issue’s interview, DCNN Editor, Simon Rowley, speaks with Alex Brew, Regional Director, Northern Europe at Vertiv, about the AI boom and the impact, opportunities and challenges this will present for companies across the data centre sector.

SR: Hi Alex! Could we start by asking when you first joined Vertiv, how you ended up in the role, and what your current position involves?

AB: I’ve always believed in the importance of being good at what you do, which has certainly guided my career. My journey into the data centre industry was influenced by early advice from a mentor. He suggested that aligning myself with a company associated with the IT industry would be a wise move, given the strong growth prospects and opportunities in that space. This advice led me to seek roles where I could contribute to, and benefit from, the expansion of IT infrastructure. I have now been at Vertiv for 10 years, having joined the company when it was Emerson.

At Vertiv, my current role involves leading the sales teams across Northern Europe. This includes overseeing our regional operations as well as our segment coverage, which spans across colocation, hyperscale, telco, edge, commercial, industrial and enterprise. Also within my remit is Vertiv’s channel business that complements these segments, enabling us to deliver comprehensive solutions to our clients.

SR: How do you assess the current state of the data centre sector, and are there any recurring themes or requests that Vertiv is receiving from customers?

AB: The data centre sector is incredibly buoyant, but also in a state of flux. We’ve seen significant growth in recent years, largely driven by conventional cloud applications and the rise of virtualised working environments.

However, the advent of AI has introduced a new, transformative tier of IT load that is highly disruptive. This shift is presenting major engineering challenges for data centres, marking one of the most significant changes in recent times.

The challenges are not just about transitioning to more advanced cooling technologies, like liquid cooling, but also about managing increased power demands, balancing white space and grey space allocations, and navigating the complexities of power availability and grid demands. AI’s arrival doesn’t mean that cloud growth will slow down; rather, it’s an additional layer that demands even more sophisticated infrastructure. Data centres will need to adopt new technologies and design principles, as the infrastructure that worked in the past won’t suffice for the future requirements.

SR: We recently had the pleasure of attending Vertiv’s AI Solutions Innovation Roadshow in London. What do you make of the AI revolution we’ve been experiencing in recent times, and what are some of the opportunities and challenges that this has created?

AB: The AI revolution is both revolutionary and evolutionary, and it’s fascinating to see how it’s impacting every market we operate in. Unlike previous technological booms that were closely tied to specific sectors, such as hyperscalers, AI is permeating across all industries. Every organisation we work with is looking to harness AI to gain operational and competitive advantages in their respective fields.

The growth trajectories in AI deployment and user uptake are extreme, which means a significant amount of IT compute load needs to be delivered in a responsible and sustainable way. This is where our expertise comes into play. As a leader in this field, we’re in a unique position. Our advanced work with key chip manufacturers gives us an in-depth understanding of the infrastructure requirements necessary to support AI projects successfully. This knowledge is invaluable to our customers, who are looking to us for guidance on how to best implement these technologies. This is why we launched our AI Hub.

When customers decide to pursue AI, it’s not as simple as just selecting a server and implementing it the next day. The IT hardware involved has very specific critical digital infrastructure requirements, which are a step change from traditional IT setups. This often necessitates the introduction of new technologies, such as liquid cooling, which might not be compatible with existing infrastructure platforms. Therefore, adjustments must be made to ensure these new technologies can be effectively integrated.

The challenges aren’t limited to cooling; they extend to power management, rack architecture, and more.

SR: Is the speed of deployment of AI products and innovations a challenge for Vertiv (and the industry in general), considering that companies are now reaching the stage of adoption – and perhaps more quickly than anticipated?

AB: The rapid adoption of AI technologies means companies are moving to deployment faster than anticipated. This acceleration creates a need for solutions that can be deployed quickly and efficiently. To address this, Vertiv has focused on developing our Vertiv 360AI platform, which includes a number of predefined, pre-engineered solutions that reduce the traditional design time required to marry together various components and systems to create a given infrastructure platform. By providing these predefined solutions, we help companies get to market faster and more efficiently, offering them a competitive advantage in the marketplace.

SR: In terms of supplying your customers with AI solutions, are solution-oriented offerings the biggest area of growth?

AB: Yes, solution-oriented offerings are indeed a major growth area for us in the context of AI. Our customers are navigating new paths and requirements as they adopt AI technologies, and traditional infrastructure may not always meet these new needs. The industry lacks extensive experience in designing infrastructure solutions specifically for AI equipment.

This gap has propelled Vertiv to the forefront of the industry, as we work closely with chip manufacturers and other partners to develop and provide the necessary infrastructure solutions. Positioning ourselves as leaders in this field allows us to effectively meet the evolving needs of our customers.

SR: How are Vertiv’s technologies able to assist companies in the data centre world with regards to AI?

AB: Vertiv’s technologies play a crucial role in supporting AI infrastructure. For AI deployments, which often involve high-density servers with specific power and cooling requirements, we offer a range of solutions:

• Liquid cooling: Our expanding liquid cooling portfolio addresses the high heat outputs of AI servers. This system is recognised by leading chip manufacturers as an effective cooling solution for their products, and we have a comprehensive range to cater for different server architectures.

• Rack architecture and integrated IT solutions: AI servers typically require specialised racks to cater for their size and weight, and for close-coupled infrastructure components such as fluid network manifolds and higher-density power distribution units (PDUs). Vertiv provides integrated rack solutions designed for these needs and high-density rack PDUs to manage power requirements effectively.

• UPS to support high capacity, high availability AI power demands: Largely driven by computing and cooling requirements from AI and high-performance computing (HPC), it is crucial to have a robust backup power solution to provide continuous availability of the GPUs and CPUs that run AI compute. Vertiv’s uninterruptible power supply (UPS) is engineered to handle the fluctuating load demands of data centres.

• High-density prefabricated modular (PFM) data centre solutions: With demand for AI-ready data centre capacity outstripping supply, developers and operators are focused on bringing new capacity online as quickly as possible. We offer a liquid cooling-equipped PFM data centre solution engineered to enable efficient and reliable AI computing. The solution can be configured to support the platforms of leading AI compute providers and scaled to customer requirements.

SR: You mentioned in your presentation that ultimately, “Every company is going to be an AI company.” Could you expand on this notion?

AB: The idea that “every company will be an AI company” reflects the growing necessity for AI across all sectors. AI offers a significant competitive edge by enabling companies to work smarter, optimise operations, and enhance productivity. Companies that do not embrace AI risk falling behind their competitors who do.

Just as many companies have evolved to see themselves as technology companies due to their reliance on technology, such as financial services, the same trend is occurring with AI. It is becoming integral to maintaining a competitive advantage and driving future growth. Embracing AI is not just about staying current; it’s about staying ahead in an increasingly AI-driven market.

SR: What’s your one biggest concern as it relates to the rise of AI? Should we still be cautious?

AB: One of the main challenges Vertiv is facing is meeting the rapidly growing demand in the AI market. The sheer scale and speed at which the AI market is expanding presents a significant challenge for any business. To address this, we are scaling our manufacturing capacity as quickly as possible.

For instance, we have recently expanded our facilities, such as the new thermal plant in Mexico, and increased the capacity of our switchgear, busway and integrated modular solutions (IMS) business by more than 100% to boost our production capabilities. This expansion helps us keep pace with the increasing demand and support our customers effectively.

SR: Finally, what does the future hold for Vertiv, and what most excites you about what’s to come over the next year or so?

AB: The future for Vertiv is filled with opportunities as we continue to innovate and adapt to the changing landscape of data centres and AI. Chip and server technologies are evolving at such a rate that companies like Vertiv need to maintain a huge focus on R&D to provide the necessary infrastructure innovations to underpin this evolution, and our partnerships with the likes of Nvidia and Intel mean we are really leading the charge.

As we look ahead, we are focused on advancing our technologies to support the next generation of data centre infrastructure and driving growth in the AI sector for our customers.

Vertiv, vertiv.com

SHOW ME THE MONEY!

Louis McGarry, Sales and Marketing Director, Centiel, explains why investing in high-quality, true modular UPS systems will result in peace of mind, drastic cost savings over the long term, and a whole host of other hidden benefits.

We all love a bargain! However, it’s only later that we often discover a good reason why our purchase was so cheap. It’s not a problem if you use eBay for your cheap and cheerful purchase, but when it comes to investing in uninterruptible power supplies (UPS) that protect millions of pounds worth of data, the potential consequences of buying an inferior system are truly terrifying.

It’s not just that poor quality components fail more often; the time taken to acquire replacement parts, which may not be kept in stock, can be significant too.

What if you could still purchase the highest quality UPS available, but be more intelligent with the set-up and infrastructure to reduce total cost of ownership? Let me show you how.

PAY AS YOU GROW

A high-quality UPS known for its market-leading performance, availability and efficiency – and fully supported by an experienced UK-based engineering team – will understandably cost more than a low-quality, off-the-shelf system where the supplier offers little support. However, the increased flexibility of today’s leading, true modular UPS means it is possible to implement the highest quality UPS and save money over the long term.

If you are building a 6-10MW data centre, fitting a 10MW UPS from the outset would not be a great financial decision: oversized systems cost more to buy and run, and that capacity simply isn’t necessary from day one. An alternative approach is to take time to understand the likely growth patterns of the facility. The type of data centre will also influence the initial size of the UPS purchase: are you building one giant data hall or a decentralised, multi-room site, each supported by localised UPS? If so, only buy what you actually need.

The planning phase is critical to save on Capital Expenditure (CAPEX) and Total Cost of Ownership (TCO). In short, invest in what you need for day one and choose a UPS system which will be flexible enough to enable you to pay as you grow. True modular UPS systems are perfect for this purpose, and Centiel often advises clients to fit the inexpensive infrastructure and racks, and to then add UPS modules and capacity as the data centre grows.
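The saving is easy to see in a rough sketch of the ‘pay as you grow’ arithmetic; the module rating, redundancy level and load forecast below are entirely hypothetical:

```python
import math

MODULE_KW = 50       # hypothetical module rating
SPARE_MODULES = 1    # N+1 redundancy

forecast_kw = {2025: 120, 2026: 250, 2027: 400}  # hypothetical load growth

installed = 0
for year, load_kw in sorted(forecast_kw.items()):
    needed = math.ceil(load_kw / MODULE_KW) + SPARE_MODULES
    bought = max(0, needed - installed)
    installed = max(installed, needed)
    print(f"{year}: {load_kw}kW load -> {installed} modules (+{bought} purchased)")
# Only 4 modules are bought on day one, rather than the 9 needed at full build-out.
```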

REDUCING MAINTENANCE COSTS

Frequently, Centiel is called to a site following a service alarm which may have simply reported a minor power disturbance. At the very minimum, if on-call engineers attend, they need to download the logs, analyse and reset alarms. This inevitably incurs a cost to the client, and a visit could be avoided if staff engineers were trained for first level response.

Centiel works with selected data centres which have staff engineers on-site to offer enhanced training so they can provide their own first level response. This can cover basic product management, remote monitoring and identification of certain alarms to know how to respond correctly. Rather than waiting four to six hours for an external engineer to attend, it means most notifications or alarms can be dealt with immediately, and the issue resolved while external preventative maintenance costs are reduced.

Furthermore, enhanced training can enable clients to monitor the performance of their UPS more closely, to ensure it is always operating at the sweet spot of its efficiency curve. This saves energy and ongoing running costs. Informed clients can also take advantage of energy management modes, putting modules into hibernation to reduce running costs further, as appropriate.

Centiel is led by clients on how much training they would like. However, best practice encourages more engagement with UPS equipment and more understanding through training, as it means clients can proactively work towards reducing TCO in so many ways. Eventually, it is likely that UPS will not just be used for power protection, but as energy hubs to support data centres in relation to more efficient energy management. Therefore, the more site teams get to know the equipment, the more options there are available to improve efficiency further in the future.

DYNAMIC REDEPLOYMENT

Centiel can also train clients to take advantage of the dynamic redeployment of UPS modules. Centiel’s true modular UPS have safe-hot-swap functionality, so clients with the correct training can remove modules safely within a running frame and redeploy them to other areas of the data centre within minutes. This can ensure systems are always optimised. It also means additional expenditure can be avoided as existing modules can be reused in areas of expansion.

PLANNING

The planning stage of any UPS installation is key to reducing TCO. Start conversations with manufacturers early and don’t be afraid to ask for advice. Centiel works as a trusted advisor and helps data centres and consultants plan optimal power protection strategies right from the conceptual stage. The company works to develop flexible future-proofed options, designed to reduce TCO over the long term. Centiel can then help with technical detailing, implementation plans and how its engineering team can work best with the data centre over the next few decades to offer training and experienced support.

In contrast, where organisations have bought inferior UPS, they have told Centiel that the information provided was misleading and that a lack of joined-up thinking led to implementation and ongoing issues.

Communication problems have also created challenges beyond just the poor quality of the technology. The client’s time needed to resolve such problems becomes a further hidden cost.

FUTURE-PROOFING

Centiel can make accurate TCO calculations on the hard costs of a UPS over time. The company can make like-for-like comparisons about efficiency and energy use and can provide accurate quotes for maintenance and remedial repairs for its UPS. Centiel can supply accurate and favourable Mean Time Between Failures (MTBF) statistics and Mean Time to Repair illustrations too, and this means clients have a clear view of the future and can plan accordingly.

This is not always the case at the lower end of the market. The availability (uptime) of the system will be drastically reduced when poor quality components fail and cannot be replaced quickly, leaving the critical load at risk. System efficiency is often significantly lower too, so ongoing running costs are increased. Communications issues and lack of UK support can compound the situation.

In today’s economic climate, cost reduction is always a key consideration, but only looking at the purchase price in isolation could prove to be an expensive mistake. Over a UPS’s working life, it is better to invest in the best technology with a high operating efficiency and an excellent, cost-effective maintenance and training plan supported by experienced UK-based engineers, rather than the alternatives that lack these benefits, resulting in high running costs while the critical load is also at more risk.

Centiel’s design team has led UPS technological development for decades, and it is a trusted advisor to some of the world’s leading facilities. The company’s award-winning, comprehensive range of UPS solutions can be tailored to suit growing data centres to offer industry-leading (99.9999999%) availability, which equates to just milliseconds of downtime per year. Meanwhile, the experienced team works hand-in-hand with clients and consultants to reduce TCO over the long term.
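That availability figure can be checked directly; nine nines of availability works out at roughly 31.5 milliseconds of downtime per year:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def downtime_per_year_s(availability: float) -> float:
    """Expected downtime per year, in seconds, for a given availability."""
    return (1 - availability) * SECONDS_PER_YEAR

print(f"{downtime_per_year_s(0.999999999) * 1000:.1f} ms")  # 31.5 ms per year
```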

Centiel, centiel.co.uk

DATACENTRES IRELAND 2024 TO OFFER MORE CHOICE, MORE IDEAS AND MORE SOLUTIONS!

A special look ahead to the DataCentres Ireland conference and exhibition taking place at the RDS in Dublin on 20-21 November.

Having positioned itself as the country of choice for data centre owners and operators, Ireland has a long history within the European data centre community.

Ireland’s success in attracting data centres has seen the sector grow significantly over the last 20 years, and given the size of the country and density of data centre capacity (which currently stands at 1,261MW operational, with a further 1,286MW with planning permission already granted), Ireland is at the forefront of addressing the needs and challenges which are affecting data centre communities both locally and internationally.

These include:

• Continuity of supply

• Sustainability

• Micro-grids and grid flexibility

• Standby generation

• Decarbonisation

• Green energy

• Hydrogen for data centres

• The circular economy

• Cooling and heat reuse

• Data centres assisting Ireland in the adoption of renewables

DataCentres Ireland, taking place in Dublin this November, consists of a world-renowned, multi-streamed conference programme featuring leading local and international speakers and industry leaders, integrated into the largest gathering of data centre infrastructure suppliers and service providers – making this a must-attend event for all involved in data centres and other mission critical environments.

This year’s exhibition features over 120 suppliers and solution providers, making it the largest event of its kind in the country. A full list of this year’s exhibitors can be found on the event’s website.

To facilitate and further the discussion and the dissemination of new ideas and information, DataCentres Ireland’s organisers have announced that the multi-streamed conference will again feature a Strategy Stream looking at the issues driving the sector, as well as an Operational Stream looking at the technology, products and practices which can make data centres run more effectively and efficiently whilst remaining safe, secure and resilient.

The conference programme is now live and features over 70 industry leaders and experts from across the data centre sector.

To view the full conference programme, visit datacentres-ireland.com.

Attendance at all aspects of DataCentres Ireland is free, and you can register for the event via the link below.

DataCentres Ireland, datacentres-ireland.com

SUPPORTING DATA CENTRE FIBRE DENSIFICATION

Hans Obermillacher, Senior Business Development Manager for Data Centres EMEA at Panduit, looks at the benefits that can be derived from shrinking all aspects of the data centre.

The increasing complexity of information-based services, no matter what type of business you are in, continues to drive infrastructure densification. Infrastructure, such as optical distribution frames (ODFs), is not a fixed asset – it needs to grow and evolve with the business’s requirements and accommodate increasingly prolific cabling options.

Digital transformation at data centres and enterprises must deliver reliable, scalable networks to meet these demands. Robust fibre network management has become crucial for today’s organisations, and their customers, to ensure continuous high speed data transport and end-to-end delivery of services.

Data centre build and use costs continue rising. As a result, data centre owners are looking to maximise the compute, network and storage resources across smaller physical footprints. This, in turn, drives the need for a higher density physical infrastructure, improved resource utilisation and workload mobility.

Diagram 1: Optimising ODF in the data centre

Another trend that is applying pressure for higher densities comes from increasingly dynamic hyperscalers provisioning space within locally situated multi-tenant data centres (MTDCs) as part of their edge strategy to provide a more local (lower latency) presence for their customers. In order to improve ROI, these MTDCs are focused on optimising their space utilisation in both the white space and grey space.

The market growth in digital devices and communication continues unabated. The global cloud computing market was valued at $545.8bn in 2022 and is projected to reach $1,240.9bn by the end of 2027, growing at a CAGR of 17.9%.
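Compounding the 2022 figure at the stated CAGR for five years reproduces the 2027 projection to within rounding of the published growth rate:

```python
start_bn, cagr, years = 545.8, 0.179, 5
projected_bn = start_bn * (1 + cagr) ** years
print(f"${projected_bn:.1f}bn")  # ~$1,243bn, in line with the quoted $1,240.9bn
```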

To meet this demand, the uptake of higher data rates has accelerated from 40Gb/s to 400Gb/s, with 800Gb/s devices available and the roadmap to 1.6Tb/s clearly planned. Each increase in data rate raises the density of compute power per square metre, but it also puts even more focus on having a reliable, high-density physical infrastructure.

As an example of the speed of data centre development, Panduit is currently supporting a top five hyperscaler across EMEA to deploy high-performance cabling infrastructure within MTDCs at a rate of one site per nine days. All of these builds utilise maximum fibre-density components, such as enclosures (144 fibres per RU), 2mm diameter Uniboot duplex LC fibre patch cords and the smallest diameter rollable ribbon trunk cabling available (12.5mm for 288 fibres and 17mm for 576 fibres).
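For a sense of what those trunk dimensions mean, the cross-sectional fibre density of the two rollable ribbon sizes quoted above works out as follows:

```python
import math

def fibres_per_mm2(fibres: int, diameter_mm: float) -> float:
    """Fibres per square millimetre of cable cross-section."""
    return fibres / (math.pi * (diameter_mm / 2) ** 2)

print(f"{fibres_per_mm2(288, 12.5):.2f}")  # ~2.35 fibres/mm² in the 12.5mm trunk
print(f"{fibres_per_mm2(576, 17.0):.2f}")  # ~2.54 fibres/mm² in the 17mm trunk
```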

The smaller diameter fibre cabling enables higher densities in the pathways and racks while also relieving congestion at the panels, making it less likely for adjacent fibres to be disturbed during Moves, Adds or Changes (MACs). The end result is a data centre that meets the requirements for efficient space utilisation.

Crucial to these and similar installations are optical distribution frames (ODFs), which are generally the areas of highest density fibre connections. These frames provide a passive environment to house, distribute and manage all of the fibre cabling coming into a data centre and are, therefore, normally access controlled.

VERSATILITY IS KEY

The latest ODF solutions provide modular building blocks that can be assembled to create the exact frame and cable manager configuration to optimise density and system availability as well as scalability.

Typical frame arrangements are wall mounted single or double racks, and middle of the room back-to-back configurations. Modular systems offer flexibility and reduce floor space by as much as 50% compared to standard four post racks.

For an MTDC, this reduction in the floorspace effectively increases the operational rental floorspace, which has a significant positive impact on the site’s overall income and profitability.

In Diagram 2, four traditional racks with 19in high density enclosures offer a capacity of 23,040 fibres and occupy the space of 12 floor tiles. By comparison, Diagram 3 shows the Panduit ODF front access solution featuring a compact frame system that can be placed back-to-back to achieve the same 23,040 fibre capacity, but within a floor space requirement of only 6.5 tiles.

Diagrams 2 and 3: Space saving, total front access
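Expressed as fibres per floor tile, the comparison in the two diagrams comes out as follows:

```python
fibres = 23_040
tiles_traditional, tiles_odf = 12, 6.5

print(fibres / tiles_traditional)  # 1920.0 fibres per tile (traditional racks)
print(round(fibres / tiles_odf))   # 3545 fibres per tile (back-to-back ODF)
```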

These innovative modular platforms are designed for data centres that require network patching in the white space and Meet-Me Rooms (MMR). The ODFs also work well in smaller areas, as the front access frames are designed for wall mounting. Therefore, previously non-revenue earning areas can now be utilised, ensuring customers receive the most efficient and effective cabling infrastructure layout. A high density ODF also enables data centre operators to meet their growing demands for new services, simplify day-to-day MACs and futureproof their investment.

MACS AND SECURITY

Managed network connectivity is essential, and ensuring MACs are processed correctly and recorded is vital to the operational efficiency of the IT space, whether white space or grey space. Intuitive cable management and easy access to the fibre connectivity in a high density ODF can increase the ease of use and speed of MACs by up to 40%. In addition, utilising the latest fibre connectivity with boots that can be used as pull tabs and small diameter cable greatly reduces the risk to adjacent circuits during the MACs.

In the latest ITIC Hourly Cost of Downtime survey, from 2022, human error (64%) is second only to security breaches (73%) as the issue that most negatively impacts reliability and causes downtime. Systems like a properly designed and maintained ODF can reduce or eliminate human error, which improves customer satisfaction and overall profitability.

There are various digital and paper-based systems for recording new installs and MACs, but it is vital that recording is done rigorously and dynamically so that the network information is accurate and changes are captured. As cabling infrastructure becomes more dense and complex, the recording and management of cable runs is migrating to digital formats, which is essential to reducing human error incidents.

It is important to choose an ODF that offers clearly defined and traceable cable routing pathways that reduce guesswork and help limit the need for expensive rip-and-replace cabling. Multiple-level lockable security on the ODF, at both door and enclosure, is important to MTDCs to reduce accidental human error whilst adding compartmental security between multiple tenants.

CONCLUSION

The continued drive to shrink all aspects of the data centre is being realised through the expansion of higher data rate electronics. To take advantage of this, high-density, high-performance fibre infrastructure is increasingly important. Innovative designs are now offering cost-effective solutions that are future-proofed and can scale as needed to defer capex spending.

High density ODFs offer well-engineered, expandable configurations with the capability of hosting the highest fibre counts in a highly organised, simple-to-use format. They also offer security in areas where human error is often the cause of costly and preventable downtime. Choosing an ODF that meets all of these requirements and is globally available is an important step for hyperscalers and MTDCs to standardise on a design that optimises density whilst simplifying installation.

Panduit, panduit.com

Next-generation cabinet with custom configurations to meet your network needs.

Maximum Cooling – 80% perforated doors for increasing power density in newer deployments

Scalability – Configuration platform to create a cabinet to meet your specification and requirements

Ease of Use – Easy to adjust E-rails, PDU installation and accessories to reduce installation time

Flexibility – Universal design for server or network applications

Robust – Solid construction with best-in-class load rating for secure installation

Security – Choice of key lock, 3-digit combo or HID electronic and/or keypad lock

Integrated Intelligence – Turnkey deployment with pre-configured PDU, access control and environmental monitoring

Enhanced Cable Management – Choice of tool-less fingers, vertical cable manager and front-to-back cable manager

www.panduit.com

https://mkt.panduit.com/UK-FlexFusion.html

THE ALTERNATIVE PATH FORWARD

Sam Walker, VP Sales EMEA & India, ProLabs, explores the benefits of third-party optical solutions for alternative networks.

When it comes to delivering digital transformation at a national level, most countries agree that a degree of competition is beneficial. In the UK, alternative networks (altnets) are the challengers in a market dominated by the likes of Openreach and Virgin Media. To remain a financially sustainable alternative to the incumbent service providers, these businesses must overcome competitive market behaviour and consider joining forces with other altnets if they are to survive and thrive.

The growing financial pressures and the need for economic efficiency have led some of the larger altnets to initiate multi-million-pound mergers and acquisitions. This wave of consolidation underscores the importance of interoperability within their networks. Seamless integration is vital: if altnets fold newly acquired networks into their existing infrastructure and the two are not compatible, it could spell financial disaster, undermining the very efficiencies they sought to achieve through consolidation.

One of the most effective ways to ensure smooth integration is by using compatible optical transceivers. These components offer a wider, cost-effective range of options for network design, which can significantly ease the transition process. Compatible optical transceivers not only facilitate smoother network integration but also provide the flexibility needed to adapt to various technological environments.

THE ALTNET BOOM

The altnet industry has seen significant growth, driven by substantial investments and a growing consumer base. European operators have invested €120bn (£101bn) in fibre-to-the-home (FTTH) rollout, with 57% of this investment coming from alternative operators. In 2023 alone, nearly half a million UK customers switched to altnets, resulting in two million live connections – a 33% year-on-year increase.

Altnets have proven themselves to be strong contenders in the telecom market, consistently surpassing major national providers in consumer satisfaction on major review sites. Despite some scaling back of network rollouts, these alternative networks are ambitiously working towards providing full fibre infrastructure to 16.7 million premises by the end of 2024 in the UK, with an expected three million live connections. This impressive growth trajectory could position altnets to capture over 10% of the residential broadband market, solidifying their role as dynamic challengers and key drivers of competition in the industry.

MERGERS AND CONSOLIDATION

While altnets have demonstrated impressive growth, financial sustainability remains a concern, prompting discussions about mergers and consolidations. The drive to scale and achieve economic efficiency underpins these consolidations.

For instance, Netomnia, one of Britain’s most promising competitive fibre access network operators, is at the centre of the UK broadband sector’s latest consolidation deal, having announced its plan to merge with fellow fibre network builder, Brsk. This trend is evident in recent multi-million-pound mergers and acquisitions within the altnet sector.

However, not all altnets see the need to merge, particularly those with strong regional brands, as it is important to strike a balance between maintaining brand identity and achieving financial stability.

THE CHALLENGE OF INTEGRATION

Merging altnets and integrating new networks with existing infrastructure present significant challenges, particularly in ensuring interoperability for seamless transitions. Common issues during integration include compatibility problems and the need for adaptable solutions. Third-party optical solution providers like ProLabs offer transceivers that support multiple Network Equipment Manufacturers (NEMs), facilitating smoother integration and enhancing network flexibility.

As these businesses challenge the established players in the telecom sector, they require cost-effective optical transceivers that align with their service agreements. However, they may lack the facilities and resources to test every component within their networks. Partnering with a provider that can conduct extensive testing helps altnets reduce overhead costs and allocate resources more effectively, ensuring their integration efforts are both efficient and economical.

THE ROLE OF THIRD-PARTY OPTICAL TRANSCEIVERS

Third-party optical transceivers play a pivotal role in telecommunications networks. They facilitate smoother integrations and offer design flexibility. Compared to traditional NEM solutions, third-party optical transceivers are cost-effective and provide greater adaptability. These advantages are crucial for altnets looking to maintain high-quality service while managing costs.

NEMs benefit from strong brand awareness and often inflate the prices of these optical components because they want to maintain their dominant market positions and lock network builders and service providers into solely using their solutions.

This is where suppliers like ProLabs come in, supplying compatible optical transceivers which deliver the same levels of performance and quality offered by NEM solutions, but at a fraction of the price. They can also provide multi-coded transceivers compatible with several different NEMs within the same network. By purchasing third-party products, service providers can benefit from greater interoperability and compatibility within their networks.

WARRANTY CONCERNS

Despite caution from NEMs, integrating third-party components into networks will not necessarily lead to technical issues or void warranties. Legislation such as the Treaty on the Functioning of the European Union (TFEU), the UK's Competition Act 1998, and the US Magnuson-Moss Warranty Act of 1975 all prohibit anti-competitive agreements between businesses. The Magnuson-Moss Act, in particular, ensures that NEMs cannot void warranties or service agreements if alternative components are used.

Providers like ProLabs offer a non-transferable lifetime warranty on their solutions when purchased from authorised resellers, giving customers the confidence needed to build flexible networks using a range of components.

TESTED AND VALIDATED

In addition to robust warranties, third-party providers offer top-tier testing services to ensure compatibility and optimal performance. This includes in-house programming and testing in advanced laboratory facilities.

As NEMs release new products, experts analyse these switches and components to confirm that third-party solutions remain compatible and suitable for customer networks. Testing is conducted within the customer’s own environment to verify that optical transceivers and components will function correctly once installed.

For service providers and altnets, which often face constraints on time and resources, this service is invaluable. Third-party optical solutions, such as those offered by ProLabs, are vital for ensuring seamless integration and cost-effective network design.

As altnets expand their presence in the UK telecom market, innovative solutions will be key to maintaining their competitive edge and ensuring long-term sustainability.

ProLabs, prolabs.com

ENERGISING THE DIGITAL ECONOMY

Independent Distribution Network Operators (IDNOs) have a vital role to play in energising data centre developments. Dave Swadling, Director of Customer Connection Development at Eclipse Power, explains how IDNOs can release hyperscalers and data centre operators from the constraints that threaten to hold back delivery of their potential for the whole UK economy.

The UK’s digital infrastructure plays a vital role in the country’s economic growth. According to Cloudscene, the digital economy contributes around 8% of the country’s GDP, generating over £65bn gross value added. No wonder that successive governments have made digital growth a core feature of the economy. As critical building blocks of the digital economy, data centres will need to keep pace with the explosion in power-intensive workloads being generated from cloud computing, IoT, machine learning, and artificial intelligence.

For some time, the expansion of data centres has been subject to a 'perfect storm' of converging events that present real challenges to growth – from planning barriers to demands for greater energy efficiency and sustainability. Business leaders are increasingly focused on the role data centres play in their supply chain emissions as they account for their upstream Scope 2 and 3 carbon footprints. But the overriding issue for hyperscalers and data centre operators is grid capacity.

Applications made to Distribution Network Operators (DNOs) or the National Energy System Operator (NESO) today are being given connection dates 10 to 15 years in the future. This delay puts projects – and investment – at risk. When your business plan is centred on having a certain number of megawatts or a volume of data centre capacity up and running within a given timeframe, being unable to connect to the grid for a decade is clearly a problem. It's more than a problem for the enterprise; it's a problem for the broader economy.

The UK is an attractive location for data centres because of its political, economic and energy stability. Without connectivity at the scale and pace needed by the digital sector, investment in new projects will be redirected into countries in Europe and further afield where it’s easier to find sites with suitable power and connection timescales.

Also, as we progress towards a zero carbon economy, we are connecting more renewables to the grid, adding to a connections queue which currently stands at 800GW. This is all taking place as data centres become more prevalent and larger.

In short, data centres are waiting longer than ever for larger power connections, at a time when we’ve effectively topped out the grid in terms of capacity – which is the perfect storm. The light on the horizon, however, is that there are solutions to overcome these challenges, if you know where to look and who to work with.

THE DIFFERENCE IS IN THE ‘I’ OF THE BEHOLDER

Independent Distribution Network Operators (IDNOs) are perfectly equipped to unlock opportunities for data centre operators to connect to the grid. Introduced in 2004 to increase competition in electricity distribution, IDNOs, like DNOs, design, own, operate and maintain electricity networks in the UK. Also, like DNOs, they are licensed by Ofgem.

The difference lies in the 'I' of IDNO. 'Independence' means that IDNOs are not restricted to a geographical part of the UK as DNOs are. They can operate nationwide and can be more flexible in how they interpret the standards set by DNOs, which vary from region to region. Because IDNOs operate in a competitive market, they must be more customer-focused than DNOs. This means they can adapt and adjust to market challenges in a way that DNOs aren't incentivised to do, suggesting innovative ways to overcome the challenges and simplify the complexities of getting connected to the grid.

Naturally, IDNOs with experience of helping data centre customers connect to the grid will understand HV connections. The effective ones will offer additional advantages, such as making a capital contribution, value engineering the grid connection, and providing the design expertise to match the substation footprint to the needs of a specific site. They can also apply learned experience from energising other sectors.

It's often the less technical aspects of power projects that have the biggest impact on whether a project is viable or not. Successful IDNOs must have the ability to build strong relationships with stakeholders across the whole power ecosystem: the transmission and service operators, DNOs and National Grid, grid consultants, renewable developers, and more. They typically build a deep understanding not only of who to deal with among the different organisations but, importantly, of how to deal with them.

The really effective IDNOs do all of this in a way that is designed to help the data centre customer by providing the right advice in an open and transparent way. Ultimately, the data centre operator needs to convince investment communities of the value and viability of their project. So, the IDNO needs to distil and crystallise both the problem and the solution in language that makes it easy for the client to sell internally.

The strength of these relationships means that they can negotiate with the DNO and National Grid to get the best solution for their customers and introduce innovative ways to solve connectivity challenges.

UNLOCKING THE QUEUE

Given the challenges of connecting to the grid, IDNOs like Eclipse can operate independently as 'power-brokers' to unlock the queue. While there is no silver bullet or any single solution to fit all situations, a customer-centric IDNO has the commercial drive to find solutions. IDNOs are ideally suited to finding the best solution and bringing the right people together. That can involve putting the investors together with the people who need to build data centres; the grid experts with the people who have the connections; the landowners who want to do something with their land options; the grid consultants and designers; or the renewable generation developers. They can bring all of this together to make as many projects viable as possible by taking advantage of what is already in the grid queue. There's no sense in joining the back of the queue; at 800GW and counting, there's too much in it.

An IDNO looks for the nuggets that are already in the queue that are suitable for a data centre project. Many of the renewables projects in the connections queue are for battery storage. Ofgem, government and NESO forecasts point to the UK needing 200GW of capacity to support the energy transition. That means only a fraction of the 800GW pipeline of projects will actually connect. Some of these will be looking to switch into data centre opportunities and use their place in the queue as an incentive. Additionally, data centres have large UPS battery facilities that can be used as export capacity to support the grid – essential as more variable renewable energy sources come online. So, the right power-broker in the form of an IDNO can help unblock the connections queue and increase the penetration of renewable energy in the grid.

THE COMMON DENOMINATOR

Data centres will continue to innovate to decarbonise and find new ways to work collaboratively with the grid. IDNOs are a common denominator within this highly complex equation, as they connect all of the constituent parts. It simply makes sense for any data centre developer and operator to have an IDNO in their camp.

Working with a skilled IDNO ensures that appropriate relationships are in place so that hyperscalers and data centre operators are released from the constraints that threaten to hold back delivery of their potential for the whole UK economy.

Opening the DNO market up to competition has given the sector a fresher, faster route to solving energy connectivity challenges. The transmission network is crying out for more competition to drive regulatory and industry change. This would enable customers to connect to the network faster and more efficiently and is the natural next step that would revolutionise energy in the UK.

Eclipse Power, eclipsepower.co.uk

THE POWER OF LIGHT

Rich Jensen, Vice President of POLATIS Program Management and Architecture at Huber+Suhner, explains how, by harnessing the benefits of optical circuit switches, today's data centres can meet the ever-growing demand for processing power in AI applications.

In today’s fast-paced digital landscape, all-optical circuit switching is revolutionising data centre architectures by enhancing efficiency, reducing costs, lowering power consumption, and enabling advanced AI and Machine Learning (ML) applications.

As businesses increasingly rely on AI to process and analyse massive volumes of data, the demand for more efficient and powerful data centre infrastructures has never been greater. From natural language recognition and cancer diagnosis to financial fraud detection and autonomous vehicles, AI and ML applications are transforming industries and delivering unprecedented insights and capabilities.

THE GROWING IMPACT OF AI/ML

It is projected that, in the near future, over 50% of data centre resources will be dedicated to supporting AI and ML applications. This surge presents significant challenges for hyperscalers and providers of cloud computing and high-performance computing (HPC) services, which must adapt their infrastructures to meet these evolving demands.

AI involves creating machines and computers capable of mimicking cognitive functions associated with human intelligence, such as visual perception, language understanding, data analysis, and decision-making. ML, a subset of AI, empowers systems to learn and improve from experience without explicit programming. ML algorithms analyse vast datasets, learn from the insights and make informed decisions, with predictive accuracy improving as more data is processed during training.

EVOLVING NETWORK TOPOLOGIES

To efficiently scale the computing power required for emerging AI and ML services, innovative network topologies centred around clusters of processing units – such as CPUs, GPUs and TPUs – are essential. By combining the flexibility of optical circuit switching (OCS) with advanced control plane orchestration, new data centre network designs are significantly accelerating computational efficiencies while simultaneously reducing costs and power consumption in these resource-intensive cluster platforms.

The principle of disaggregation: The required resources are bundled together in bespoke ratios to form flexibly proportioned ‘bare metal’ hardware hosts, composed on-the-fly, using a common pool of finely grained underlying resources

LIMITATIONS OF LEGACY ARCHITECTURES

Traditional data centre AI/ML architectures typically rely on power-intensive optical packet switches with fixed optical interconnects to transfer data between processing clusters. These packet switching fabrics convert data signals from optical to electrical, switch the packets electrically, and then convert them back to optical for retransmission – a process known as optical-electrical-optical (OEO) conversion. This method is inefficient for transferring entire data streams between fibres, as many AI and ML applications require. Fixed optical interconnections also present challenges: equipment failures can disrupt optimal processor topologies, limiting overall cluster efficiency.

Packet switches bring further drawbacks in ML platforms: high costs; data delays due to signal re-timing; excessive power consumption for OEO conversions and electrical switching of high-bitrate traffic; and significant heat dissipation from that elevated power usage. Increased data latency reduces cluster efficiency, because ML algorithms require rapid processing of large data volumes. Furthermore, packet switches are signal-format dependent, making upgrades and scalability expensive as data formats evolve and transmission speeds increase.

ADVANCED NETWORK TOPOLOGIES WITH OCS

To address these challenges, network architects are leveraging optical circuit switching to create reconfigurable network topologies that dynamically interconnect clusters of high-performance processors, optimising solutions for various AI and ML problems.

Optical circuit switches, such as the POLATIS patented DirectLight technology from Huber+Suhner, transparently switch data signals between fibres without necessitating OEO conversion. These reconfigurable optical connections are fault-tolerant, allowing for rerouting around equipment failures and optimising the topology of remaining equipment.

Optical circuit switching offers significantly lower latency and is signal-format independent, eliminating the need for OEO conversions or signal re-timing. This independence from specific signal formats and transmission speeds means that no upgrades are required as technologies advance.

Some optical circuit switches can establish and maintain connections even in the absence of light on the fibre, known as ‘dark fibre’ connections. This capability allows for the pre-provisioning of connection paths, significantly accelerating network reconfiguration times.
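To make pre-provisioning concrete, here is a minimal Python sketch of how a controller might set up dark-fibre cross-connects ahead of time and reroute around a failed port. The class, method names and port counts are hypothetical illustrations for this article, not the POLATIS or any vendor API.

```python
# Minimal sketch: pre-provisioning 'dark fibre' cross-connects on an
# optical circuit switch. All names are illustrative, not a vendor API.

class OpticalCircuitSwitch:
    def __init__(self, ports: int):
        self.ports = ports
        self.cross_connects = {}  # input port -> output port

    def connect(self, in_port: int, out_port: int) -> None:
        """Establish a path. No OEO conversion is involved and no light is
        required: the path can be set up even on a dark fibre."""
        if not (0 <= in_port < self.ports and 0 <= out_port < self.ports):
            raise ValueError("port out of range")
        self.cross_connects[in_port] = out_port

    def reroute_around_failure(self, failed_out: int, spare_out: int) -> None:
        """Fault tolerance: move every path using a failed port to a spare."""
        for in_port, out_port in self.cross_connects.items():
            if out_port == failed_out:
                self.cross_connects[in_port] = spare_out


# Pre-provision paths between two processor clusters before traffic flows;
# activating the topology later is then a lookup, not a re-cabling job.
ocs = OpticalCircuitSwitch(ports=384)
for fibre in range(8):
    ocs.connect(in_port=fibre, out_port=100 + fibre)

ocs.reroute_around_failure(failed_out=103, spare_out=200)
```

Because the paths exist before any light is present, reconfiguring a cluster becomes a table update rather than a physical re-cabling exercise, which is what makes the faster reconfiguration times possible.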

Various network topologies have been proposed, incorporating optical circuit switches with sizes ranging from a few dozen input and output ports to matrices with several hundred ports.

Additionally, optical circuit switches are more cost-effective and consume considerably less power than packet switches, contributing to reduced power loads in ML clusters. The resulting power savings enable more efficient and cost-effective resource utilisation, while the transparency to data formats and line rates lowers capital expenditures during network upgrades.

Various studies have demonstrated that deploying optical circuit switching can accelerate ML language-model training in clusters by a factor of three or more compared to conventional OEO packet switching. This approach also significantly reduces power usage and capital costs.

EMPOWERING THE FUTURE OF AI/ML

By harnessing the benefits of optical circuit switches, today’s data centres can adopt new reconfigurable topologies that cost-effectively scale ML processor clusters to meet the ever-growing demand for processing power in AI applications and services. This transformative approach not only enhances computational efficiency and reduces operational costs, but also positions organisations to effectively leverage AI and ML technologies now and in the future.

Huber+Suhner, hubersuhner.com

KEEPING NETWORKS AFLOAT IN FINANCIALLY TURBULENT TIMES

Alan Stewart-Brown, VP EMEA, Opengear, looks at five ways of enhancing enterprise networks amidst shrinking budgets.

At the beginning of the 1980s, Robert Metcalfe's formulation, known as Metcalfe's Law, put forward the proposition that a network's value grows in proportion to the square of the number of connected devices. Today, in an era with billions of devices and server cores, we're witnessing an unprecedented scale of connectivity that even Metcalfe himself might not have foreseen. The current wave of IT transformation is notably driven by AI, significantly impacting business efficiency and necessitating robust investment in existing network infrastructure.

However, today’s economic climate, characterised by high inflation rates, supply chain disruptions, interest rate increases and labour market challenges, increasingly compels CFOs to scrutinise budget allocations more closely, making it ever more difficult for enterprises to maintain and innovate network infrastructure.

As financial constraints intensify, organisations must carefully consider how to maintain networks that are effective and suited to their needs, but it’s also crucial to keep innovation at the forefront, despite budget limitations. In light of all this, here are five considerations for businesses.

1. Invest in a unified management framework:

Enterprise observability has traditionally been reactive to user-reported issues, tasking the ops team with diagnosing problems like application failures or internet outages. The shift towards virtualisation and management fragmentation in the enterprise makes it extremely difficult to achieve the observability needed not only to troubleshoot, but also to automate.

A management framework that integrates both physical and virtual infrastructure through a unified interface, along with management applications such as Splunk or Juniper's Apstra, establishes a comprehensive observability system. This system forms a closed loop that significantly speeds up the identification and resolution of issues. Such integration not only facilitates rapid problem-solving but also lays the groundwork for immediate automation from the start (day zero) and ongoing monitoring after deployment (day two).

This unified management approach allows organisations to streamline their operations and enhance system reliability through proactive monitoring and real-time data analysis. By connecting disparate elements of the network infrastructure, the framework ensures that all components communicate seamlessly, reducing the complexity and improving the efficiency of network management. This is crucial for maintaining high-performance networks that can adapt to evolving business needs and technological advancements.
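As a rough illustration of that closed loop – telemetry from physical and virtual layers alike feeding automated remediation – consider the Python sketch below. The event sources, metrics, thresholds and actions are invented stand-ins, not any particular product's interface.

```python
# Illustrative closed-loop observability: telemetry in, automated action out.

from dataclasses import dataclass

@dataclass
class Event:
    source: str    # a physical switch or a virtual machine, side by side
    metric: str
    value: float

def collect() -> list[Event]:
    # Stand-in for ingest from both physical and virtual infrastructure
    # through one unified interface.
    return [Event("leaf-sw-12", "crc_errors_per_min", 340.0),
            Event("vm-app-07", "http_5xx_rate", 0.001)]

# Rule table: metric -> (threshold, remediation action).
RULES = {
    "crc_errors_per_min": (100.0, "disable port and open ticket"),
    "http_5xx_rate":      (0.05,  "restart service"),
}

def remediate(event: Event, action: str) -> None:
    print(f"[auto] {action}: {event.source} ({event.metric}={event.value})")

# The loop: the same data that powers troubleshooting powers automation.
for ev in collect():
    threshold, action = RULES[ev.metric]
    if ev.value > threshold:
        remediate(ev, action)
```

The point of the unified framework is that one rule table can span both halves of the estate; without it, each remediation loop stops at the boundary of a single tool.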

2. Make use of automation and AI for more efficient operations: Automation, enhanced by AI, can significantly raise operational efficiency and reduce errors. These technologies automate many repetitive jobs, from checking configurations to authentication and logging, liberating engineers for other tasks.

It might sound beneficial to have an automation or AI tool in place, capable of performing repeatable processes like checking network device configurations, authentication, time protocols, device upgrades or logging. However, it's important to understand that without a unified management framework, the benefits of automation and AI will not be fully realised. While virtualisation and software-defined networking have effectively changed what is analogous to the product in industrial automation, the factory still needs to be redesigned around a unified management framework. Otherwise, the benefits will be limited to single tasks.
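As one small example of the repeatable tasks listed above, a configuration-drift check might look like the following Python sketch. The 'golden' baseline, device name and fetch function are invented for illustration; a real version would pull the running config over SSH or NETCONF.

```python
# Sketch of one repeatable task: configuration drift checking against a
# 'golden' baseline. Device and config contents are invented examples.

import difflib

GOLDEN = """ntp server 10.0.0.1
logging host 10.0.0.2
aaa authentication login default group tacacs+
"""

def fetch_running_config(device: str) -> str:
    # Stand-in for an SSH/NETCONF fetch from the live device.
    return """ntp server 10.0.0.1
logging host 10.9.9.9
aaa authentication login default group tacacs+
"""

def drift(device: str) -> list[str]:
    """Return a unified diff of golden vs running config (empty if clean)."""
    running = fetch_running_config(device)
    return list(difflib.unified_diff(
        GOLDEN.splitlines(), running.splitlines(),
        fromfile="golden", tofile=device, lineterm=""))

for line in drift("core-sw-01"):
    print(line)  # flags the changed logging host for review
```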

3. Multi-cloud and hybrid deployments offer significant opportunities: The need for local application hosting, driven by concerns over latency and privacy, poses challenges in managing private cloud infrastructures effectively. To address these challenges, an emerging model involves utilising public cloud infrastructure on premise. In this model, the public cloud provider manages the Infrastructure as a Service (IaaS) components, allowing organisations to dictate the extent of integration with public cloud resources. Typically, the network operator maintains control over the physical infrastructure and connectivity, while the public cloud provider handles the software platform.

This division of responsibilities enables organisations to use consistent tool chains across different environments, simplifying operations and reducing the burden of managing diverse technologies. It also facilitates a more streamlined approach to integrating public and private cloud services, optimising both performance and cost-efficiency while maintaining rigorous security standards, and allows organisations to leverage the strengths of both public and private clouds, ensuring flexibility and scalability.

4. Additional security layers on networks are required: With the volume of cyber security threats growing all the time, additional security layers are essential. A unified management system that integrates security applications, and that can ingest real-time telemetry data and use AI to spot suspicious behaviour and isolate or remediate it, gives organisations the ability to reduce the impact of attacks. Full visibility and control over the network is critically important to mitigate the risk of security breaches and to reduce compliance risk and insurance costs.
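A toy version of that telemetry-driven detection – assuming a simple rolling-baseline z-score as a stand-in rather than any particular AI model – might look like this in Python; the metric and thresholds are purely illustrative:

```python
# Toy anomaly check over security telemetry: flag readings far from a
# recent baseline, then trigger isolation. Numbers are illustrative.

from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, k: float = 3.0) -> bool:
    """Flag values more than k standard deviations from the baseline."""
    if len(history) < 10 or stdev(history) == 0:
        return False
    return abs(latest - mean(history)) > k * stdev(history)

baseline = [2, 1, 3, 2, 2, 1, 4, 2, 3, 2]  # failed logins/min, normal week
latest = 57.0                              # sudden spike

if is_anomalous(baseline, latest):
    print("isolate host, capture traffic, alert the security team")
```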

5. Embrace 5G and edge computing:

As 5G becomes mainstream, edge computing is stepping into the limelight, offering new applications and data processing closer to data collection points. 5G's rapid connections facilitate the shift as enterprises balance their network usage across data centres, the edge, and the cloud, benefiting high-response applications like VR/AR and autonomous vehicles and accelerating technology adoption.

Navigating network innovation during times of tightening budgets demands a strategic approach centred on secure, cost-effective management frameworks such as Smart Out-of-Band management.

These strategies underline the importance of a unified management framework for reducing costs, especially in economic downturns. They enhance efficiency from initial deployment through effective network monitoring, and help mitigate failures, leading to considerable cost savings.

In economically challenging times, organisations can secure their network infrastructure, while continuing to deliver impactful customer experiences and maintaining profitable operations.

Opengear, opengear.com

A NEW ERA OF CONNECTIVITY

Carlos Mora, Market Development Manager at Corning Optical Communications, assesses the best ways of maintaining a future-ready data centre and looks at whether Base-8 or Base-16 connectivity is the answer.

The range of options for active equipment in the data centre – and the optical connectivity footprint to match – continues to grow rapidly. Take the introduction of 400G transceivers, for example. This has led to new duplex connectors (very small form factor connectors like SN, MDC and CS) as well as multifibre connectors (such as MPO 12-DD using 16 fibres and MPO-16 APC) entering the market. With the increased range of choices and the shift towards higher data rates comes more complexity for operators, as well as important considerations for the choice of infrastructure itself. With an adaptive infrastructure, it is possible to upgrade from 100 to 400 to 800G and even 1.6T with surprisingly few changes.

Network operators, planners and designers are currently looking into what might be their best option, Base-8 or Base-16, and whether they will have to migrate to a Base-16 solution in the future along the upgrade path to 1.6T speed.

To quickly recap, Base-12 connectivity was introduced in the mid-90s and served the data centre industry well for almost two decades. Since then, as the industry has progressed from 40G towards higher rates, new types of transceivers based on 8 fibres have made the case for Base-8 connectivity much stronger.

So how does Base-16 fit into the picture, what are the advantages of both technologies and what do operators need to consider when making future upgrades?

A CLOSER LOOK AT BASE-16

Base-16 solutions, which utilise MPO-16 APC single-row connectors, rather than the more commonly used MPO-12, are considered to be the best route forward in some parts of the industry.

There is an opportunity to utilise and match new MPO-16 APC transceivers, which are multi-mode fibre, with MPO-16 APC backbones; however, there are also other 16-fibre transceiver options available, using either 2 x 8 fibre MTPs or 16 fibres in two rows.

The core benefits of a Base-16 solution include greater port density and fewer components, which helps to reduce signal loss. Better distribution of switch capacity, as well as a reduced number of testing and cleaning components, are also advantages – the latter point applies in applications where standardising on the MPO-16 APC connector across your transceivers and backbone is needed.

Using Base-16 for the backbone of the infrastructure, however, brings additional considerations when it comes to technology upgrades and adopting the next generation of transceivers – adding components means either losing the density advantage or needing to change over the backbone.

Some operators must also take into account the potential drawbacks of reduced flexibility in terms of polarity and pinning adjustments during the installation of an MPO-16 APC connector, as well as the increased risk of disconnecting multiple fibre pairs in case a connector fails. Granularity (2 and 8 fibres) is what most of Corning’s data centre customers are currently asking for to reduce risks.

It's also worth considering that MPO-16 was originally introduced for 400G-SR8 multimode applications, where most distances within the data centre do not exceed 100m. With a migration to higher speeds, this reach becomes shorter (50m to 80m, depending on the chosen transceiver speed).

Operators would therefore need to consider whether applying a multimode fibre backbone on a 16F MPO APC connector is the right solution for them. If there are no benefits in terms of cost, complexity or upgrade strategy, it may not be what they are looking for.

SO HOW ABOUT BASE-8?

Base-8 is widely considered to be the most flexible path to scale up and migrate to new technologies and higher speeds. So, what are the considerations and advantages here?

Significantly, with Base-8, network adjustments can be implemented with minimal to no changes in the infrastructure: for example, a single-mode infrastructure getting upgraded from QSFP-40G-PLR4 (MPO-8) with a 4xSFP-10G-LR (LC Duplex) breakout, to either a QSFP-100G-PSM4 (MPO-8) with 4xSFP-25G-LR (LC Duplex) breakout, or even an OSFP-400G-DR4 (MPO-8) with 4xQSFP-100G-DR (LC Duplex) could be done without changing a single component.

A Base-8 backbone can also support any application that requires an MPO-16 APC interface at the transceiver, with no change required in the fibre backbone; a single component can be exchanged or installed to make it compatible. For example, if the path of migration from a QSFP-100G-SR4 infrastructure to 400G leads to implementing an OSFP-400G-SR8 (MPO-16 APC), it only requires an MPO-16 APC to 2xMPO-8 harness instead of an MPO-8 patch cord.

The granularity provided by a Base-8 system for current and new technologies like 800G and 1.6T is key for port mapping and port breakout. In order to easily migrate up to 1.6T, a scalable use of the backbone or trunk cabling can be realised when the lowest common denominator or multiplier serves as the basis.

For duplex applications, this would typically correspond to ‘Factor 4’, i.e. Base-8 cabling, on the basis of which -R4 or -R8 transceiver models can be mapped. In other words, high speeds will typically be deployed in breakout mode, with common applications for LC Duplex and MPO-8, either in a single or dual interface. This means Base-8 can support both current technologies and future developments.
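The 'lowest common denominator' point is easy to verify with a few lines of arithmetic. In the Python sketch below (the transceiver entries are illustrative examples drawn from the interfaces discussed above), duplex, 8-fibre and 16-fibre ports all divide cleanly into Base-8 trunk groups, leaving no stranded fibres:

```python
# Fibre-count arithmetic behind the 'Factor 4' point. Entries are
# illustrative; the check tests divisibility into Base-8 trunk groups.

TRANSCEIVER_FIBRES = {
    "LC duplex (e.g. 100G-DR)":    2,   # one transmit + one receive fibre
    "MPO-8 (-R4, e.g. 400G-DR4)":  8,
    "MPO-16 (-R8, e.g. 400G-SR8)": 16,
}

BASE = 8  # fibres per Base-8 trunk group

for name, fibres in TRANSCEIVER_FIBRES.items():
    trunks = max(fibres // BASE, 1)      # trunk groups needed for one port
    ports = (trunks * BASE) // fibres    # ports served by those trunks
    stranded = (trunks * BASE) % fibres  # fibres left unused
    print(f"{name}: {trunks} trunk(s) -> {ports} port(s), "
          f"{stranded} stranded fibres")
```

Every row comes out with zero stranded fibres, which is the practical meaning of Base-8 serving as the common multiplier for -R4 and -R8 transceiver models.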

LOOKING AHEAD

Ultimately, any infrastructure choice needs to anticipate future customer requirements, business needs and adoption of new technologies.

Regardless of whether we are talking hyperscale, colocation or enterprise data centre, there are many shared drivers and requirements: cost, space utilisation, migration to higher data rates and new technologies.

Implementing the right operational infrastructure is key, and while Base-8 currently provides crucial flexibility and granularity – and likely will for the foreseeable future – there are still compelling arguments for Base-16, and active equipment developments could build a much stronger case for its adoption in future applications.

Corning Optical Communications, corning.com

KEEPING IT CLEAN

Liam Taylor, European Business Manager, Fibre Optics at MicroCare UK, explains the crucial role of clean fibre optic connections in advanced networks.

Clean fibre optic connections are essential for the reliability and performance of modern networks. With the ongoing deployment of 5G technology and the predicted shift towards 10G, maintaining clean fibre connections is more important than ever.

Contaminants, even those invisible to the naked eye, can significantly degrade network performance. Understanding the sources of contamination and employing effective, industry-approved cleaning techniques are crucial to safeguarding the integrity of advanced telecommunications infrastructure.

MODERN NETWORKS

The deployment of 5G technology has been transformative across various sectors, enabling advancements in IoT, AI, VR and AR. The ultra-high speeds, low latency and extensive device connectivity offered by 5G are essential for advanced applications ranging from remote video conferencing to smart factory automation and autonomous agricultural machinery.

Additionally, 5G supports critical infrastructure such as healthcare systems with telemedicine and remote surgery capabilities, smart cities with efficient traffic management and energy use, and enhanced public safety with real-time data for emergency services. To support increasing network traffic, higher device numbers and larger data volumes, a stable, high-speed all-fibre infrastructure is replacing traditional coaxial or copper cables.

For today’s 5G and tomorrow’s 10G networks to work effectively, ensuring fibre connections are perfectly clean is crucial. Even the smallest contaminant can disrupt network operations, leading to significant failures.

A THREAT TO NETWORK INTEGRITY

Fibre contamination is the leading cause of network failures. Contaminants can block light flow through the fibre, causing back-reflection (signal diverted back to its source) or insertion loss (weakened signal). In severe cases, the signal path can be completely obstructed, resulting in total network shutdowns. These issues are particularly problematic for 5G and 10G networks, which require every available milliwatt of power to support uninterrupted connectivity and top-speed performance. The higher frequency of light in these advanced networks makes them extremely sensitive to refractive angle changes, amplifying the impact of even minute contaminants.

Contaminants can originate from various sources, affecting fibre optic performance in several ways, and even brand-new patch cords are susceptible to contamination. Dust, outgassed plasticisers, mould-release agents, and other residues from the connector manufacturing process can be trapped inside protective dust plugs. When these plugs are removed, contaminants can transfer to the fibre end faces, potentially degrading optical performance immediately.

Static electricity compounds these issues by attracting and retaining dust on the fibre end faces. Generated from friction between materials, static charges pull dust particles to the contact zone of the connectors, where they become lodged and challenging to remove. Because fibre end faces are dielectric materials and function as electrical insulators, static charges are not easily dissipated, leading to persistent dust accumulation.

EFFECTIVE CLEANING TECHNIQUES

One of the best methods to combat contamination, including static build-up, is the ‘wet-to-dry’ cleaning technique. This involves using a high-purity cleaning fluid and a lint-free wipe, click-to-clean tool, or cleaning stick. Wet-to-dry cleaning is effective and follows strict industry standards like IEC 61300-3-35, a widely recognised international standard that specifies the requirements for the cleanliness of fibre optic connectors, helping to prevent performance issues caused by contaminants.

HIGH-PURITY CLEANING FLUID

Some installers use isopropyl alcohol (IPA) to clean end faces, but IPA is difficult to keep at high purity and slow to dry. It also absorbs water and minerals from the atmosphere, which can redeposit onto the fibre end faces. A superior alternative is specially engineered optical-grade cleaning fluids. These fluids clean consistently, are static dissipative, and come in hermetically sealed packaging to prevent spills and maintain high purity. They are fast drying, minimising the potential for re-contamination, and are non-flammable and non-hazardous, making them safe for personnel, storage and transport.

LINT-FREE CLEANING WIPES

High-grade fabric wipes that do not lint or generate static charge are ideal for cleaning fibre splices and end faces. These wipes are soft to prevent scratching the ceramic or composite ferrule end faces and are highly absorbent to wipe away contamination. Sealed packaging ensures the wipes stay pure and clean before use. When using the wet-to-dry method, lightly dampen a section of the wipe with cleaning fluid. Glide the end face over the wipe, moving from the dampened region to the dry section.

CLICK-TO-CLEAN TOOLS

Click-to-clean tools are excellent for connectors with lighter contamination levels or high volumes of connectors. They are quick and convenient, especially when time is of the essence. When using a click-to-clean tool with the wet-to-dry method, dampen a wipe with cleaning fluid, touch the tool end to the dampened area, then insert the tool into the end face and clean. Avoid spraying the cleaning fluid directly onto the end face or tool.

CLEANING STICKS

Cleaning sticks are best for low fibre counts, heavily contaminated end faces, close-pitched CS adapters, and hard-to-reach alignment sleeves. They should be non-linting and engineered to conform to the end face geometry, ensuring comprehensive cleaning without disassembling the connector or adapter. Using a static-dissipative cleaning fluid with these sticks enhances their effectiveness.

MicroCare UK, microcare.com

HOW DCIM SUPPORTS AI DEMANDS OF TODAY AND TOMORROW

Carsten Ludwig, Market Manager DC, Reichle & De-Massari, explains why data centre infrastructure needs to ensure integration between high-density network connectivity, rack designs, housing, power, cable management and Data Centre Infrastructure Management (DCIM) solutions.

Applications we use every day increasingly demand exceptionally high bandwidth and ultra-low latency for rapid data transfer and processing. More data needs to be moved in real time with lower latency. This is significantly affecting data centre design and operation. Data centres must consider technological advancements, evolving workloads, and greater performance, efficiency, scalability and sustainability requirements.

AI, in particular, is driving the need for advanced networking technologies, hardware accelerators and specialised processing to efficiently handle intense data throughput and latency requirements. Graphics Processing Units (GPUs) consist of hundreds or thousands of smaller cores optimised for handling multiple tasks simultaneously. Originally intended for graphics processing, GPUs now handle data analysis and machine learning, requiring more power, emitting more heat and occupying more space than traditional CPUs – all while space and emissions must be minimised.

ADVANCED NETWORKING TECHNOLOGIES

Ultra-high-density equipment is helping meet AI's computational and power requirements. High-density server, storage and networking solutions make better use of valuable space while offering greater scalability and optimising both real estate use and energy consumption.

Technology innovations have made it possible to pack more computing power into smaller devices and handle data growth and processing demands. Computing power can be added as demand grows without expanding physical footprint. High-density setups also enhance energy efficiency, reducing per-unit energy consumption of hardware, by consolidating resources.

This is significantly influencing data centre rack design and configuration. Racks are being equipped with high-bandwidth networking hardware, such as 100Gbps or even 400Gbps Ethernet switches, to facilitate fast data transfer rates essential for AI workloads.

Spine-Leaf architecture – a two-layer network topology composed of spine (backbone) switches and leaf (access) switches – is being increasingly adopted to enhance data centre scalability, performance and reliability to support AI workloads.
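As a quick structural sketch (the switch counts are arbitrary examples, not a recommended design), a spine-leaf fabric is simply a full mesh between the two layers, which is what gives it a uniform hop count:

```python
# Minimal spine-leaf sketch: every leaf uplinks to every spine, so any
# two servers are at most leaf -> spine -> leaf apart.

from itertools import product

spines = [f"spine-{i}" for i in range(4)]
leaves = [f"leaf-{i}" for i in range(16)]

# Full mesh between layers; no leaf-to-leaf or spine-to-spine links.
links = list(product(leaves, spines))

print(f"{len(links)} uplinks")  # 16 leaves x 4 spines = 64
```

That uniform path length is what keeps east-west latency predictable for AI traffic, and capacity scales by adding spines (more bandwidth per leaf) or leaves (more ports).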

The current and predicted uptake of AI significantly increases data centre requirements for network connectivity and bandwidth to support massive data transfer and real-time processing needs. AI workloads demand ultra-low latency and high-throughput connections, leading to adoption of advanced networking technologies such as 100Gbps Ethernet, InfiniBand, and terabit-speed networks. This is driving investments in scalable, robust, flexible network infrastructures to support seamless data flow, minimise bottlenecks and ensure rapid access to vast datasets.

OPTIMISING DCIM FOR AI

As data centres evolve to accommodate increasing demands for processing power and data storage, efficient resource management becomes paramount. Today's infrastructure needs to ensure integration between high-density network connectivity, rack designs, housing, power, cable management and DCIM. To ensure the best possible integration of DCIM with future-proof data centre hardware, R&M suggests taking the following factors into account.

DCIM tools can dynamically allocate resources such as power, cooling and compute based on the real-time demands of AI workloads, ensuring optimal performance and efficiency. Using DCIM to combine insights into nearly every aspect of data centre operation with automated analysis, modelling and simulations facilitates scalability and flexibility. This allows data centre managers to meet changing user demands, assess how future-proof their infrastructure is, and make informed decisions on infrastructure investments, operational changes and strategic planning based on data-driven insights.
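A deliberately simplified sketch of that kind of allocation decision follows: admit a workload to a rack only if live power draw and inlet temperature leave headroom. All figures and names are invented, and real DCIM tools work from live telemetry and far richer models.

```python
# Illustrative DCIM-style placement check using invented telemetry.

RACKS = {
    "rack-a1": {"power_kw": 17.5, "power_cap_kw": 30.0, "inlet_c": 24.1},
    "rack-a2": {"power_kw": 28.9, "power_cap_kw": 30.0, "inlet_c": 27.8},
}

MAX_INLET_C = 27.0  # upper bound of an ASHRAE-style recommended envelope

def placeable(rack: dict, job_kw: float) -> bool:
    """True if the rack has both power and thermal headroom for the job."""
    within_power = rack["power_kw"] + job_kw <= rack["power_cap_kw"]
    within_thermal = rack["inlet_c"] <= MAX_INLET_C
    return within_power and within_thermal

job_kw = 9.0  # e.g. one densely populated GPU node
for name, rack in RACKS.items():
    print(name, "OK" if placeable(rack, job_kw) else "skip")
```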

DCIM solutions can scale with and adapt to evolving data centre needs and technologies. As equipment is continuously added, moved or replaced, accurate, real-time visibility into processes and assets becomes difficult. DCIM can support compliance with standards and anticipate issues before they result in non-compliance.

DCIM systems can also facilitate the scaling of infrastructure to accommodate the growing and variable demands of AI applications, ensuring that resources are available as needed. Preconfigured cabinets with integrated DCIM, power, cooling, security and connectivity help data centres scale easily, incorporating modular and flexible architectures that can adapt to evolving AI technologies and increasing data loads. This streamlined, scalable solution enhances efficiency and reduces deployment time. These cabinets ensure optimal thermal management and power distribution. Additionally, built-in security features and seamless connectivity support robust data protection and quick integration into existing infrastructures, allowing data centres to scale rapidly and maintain high levels of performance and reliability.

It’s vital to ensure interoperability between DCIM software and hardware to facilitate seamless integration and access to all functionalities. What’s more, the demand for greater network connectivity drives higher port density in data centre racks, making traditional manual management approaches insufficient. Intelligent racks with advanced monitoring and management capabilities dynamically handle higher port densities, reducing human error and downtime. These systems offer real-time visibility into port utilisation, optimising resource allocation and ensuring seamless scalability.

Be sure to implement DCIM tools with robust analytics and real-time monitoring capabilities to optimise performance and resource utilisation. An ‘expert layer’ can present KPI-related actionable insights from across data centre systems, and AI and AR can be incorporated into data centre asset management to further enhance resource utilisation and decision-making. Data centre design and building can be optimised using ‘digital twins.’

As data centres pack more equipment into smaller spaces, the risk of overheating and equipment failure may increase. Intelligent racks and cabinets with sensors and monitoring systems track environmental conditions like temperature and humidity, providing alerts and automated responses to prevent overheating and ensure optimal conditions. Additionally, intelligent infrastructure aids asset management, ensuring efficient utilisation and correct placement of equipment to avoid hot spots and evenly distribute workloads.
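The alerting behaviour described above is typically implemented with hysteresis, so that a sensor hovering near its limit doesn't flip-flop between alarm states. A minimal Python sketch, with illustrative thresholds:

```python
# Temperature alarm with hysteresis: raise above `high`, clear only once
# the reading falls back below `clear`. Thresholds are illustrative.

def update_alarm(temp_c: float, alarm_on: bool,
                 high: float = 32.0, clear: float = 30.0) -> bool:
    if not alarm_on and temp_c >= high:
        return True   # raise the alarm
    if alarm_on and temp_c <= clear:
        return False  # clear the alarm
    return alarm_on   # otherwise hold the current state

alarm = False
for reading in [29.5, 31.9, 32.4, 31.2, 30.6, 29.8]:
    alarm = update_alarm(reading, alarm)
    print(f"{reading:5.1f} C  alarm={'ON' if alarm else 'off'}")
```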

AI isn't just driving performance demands; it can also support the operation of the data centre itself. Switches may incorporate AI algorithms for intelligent traffic optimisation, anomaly detection and network self-configuration. AI and machine learning will also play an increasing role in optimising data management, from tiering and data placement to capacity planning. Leveraging automation and AI-driven features within DCIM can further enhance operational efficiency and predictive maintenance.

DCIM can significantly improve Power Usage Effectiveness (PUE) in several ways, and a focus on energy-efficient hardware, supported by DCIM, advances sustainability goals and reduces operational costs.

DCIM tools continuously monitor energy consumption across the entire data centre, providing detailed insights into power usage patterns and helping in planning and optimising placement of IT equipment to balance power and cooling loads effectively. Sensors track temperature, humidity and airflow, helping to optimise cooling systems and reduce unnecessary energy expenditure. DCIM can also help identify underused equipment that can be decommissioned or consolidated, reducing overall power consumption.

With data on temperature and airflow, DCIM can help implement and maintain cooling optimisation as well as hot and cold aisle containment strategies. Predictive models can help forecast future energy needs, allowing for better planning and resource allocation to avoid over-provisioning and waste. Detailed reports on energy consumption, cooling efficiency and other key performance indicators allow for benchmarking against industry standards.
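PUE itself is a simple ratio – total facility energy divided by the energy delivered to IT equipment – and DCIM's contribution is computing and trending it continuously from the monitoring described above. A worked example in Python, with invented sample figures:

```python
# PUE as DCIM tooling computes it. Sample figures are invented.

it_energy_kwh = 42_000.0        # servers, storage and network over the period
facility_energy_kwh = 58_800.0  # IT load plus cooling, UPS losses, lighting

pue = facility_energy_kwh / it_energy_kwh
print(f"PUE = {pue:.2f}")       # 1.40: 0.4 kWh of overhead per IT kWh

# Trending this per day and per zone is what lets DCIM flag drift, e.g. a
# cooling set-point change that quietly pushes PUE upwards.
```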

High-density racks may produce more heat and require more power than traditional racks, requiring dedicated designs and components. Evolving requirements are driving adoption of more advanced cooling and power management solutions to support increased computational demands, as well as energy efficiency goals. Advances in cooling technologies and server design effectively manage increased heat output, even with higher densities, ensuring compliance with regulatory, reporting and auditing requirements, and environmental goals.

REGULAR RECERTIFICATION

An up-to-date, accurate database also helps determine a data centre's performance and class level, which is essential for recertification. Regular recertification helps maintain high performance and reliability, minimising downtime and operational risks, and ensuring the data centre meets industry standards and regulatory requirements.

During the initial review, a thorough examination of the data centre's current state, including infrastructure, security and operational processes, takes place. Areas in which the data centre does not meet the required standards or certification are then identified. Based on this 'gap analysis', necessary improvements or upgrades are implemented.

This might include updating hardware, improving cooling systems, enhancing security measures, or upgrading power supplies. Final certification requires extensive documentation and evidence of compliance to be submitted to the certification body. A well-implemented DCIM is essential to ensuring all necessary documentation is up to date and easily accessible for the recertification process.

Good record-keeping also makes comparisons possible, providing a solid basis for decision-making. You can compare multiple sites and installations within a data centre, or draw comparisons between different data centre sites all related to the same set of customers and applications. DCIM platforms consolidate data from various sources into a single dashboard, allowing managers to monitor multiple data centres or installations simultaneously. This unified view enables quick comparisons and real-time monitoring of performance metrics across different sites.

Furthermore, DCIM tools can be used to standardise data collection and reporting across all monitored data centres or locations. By using consistent metrics and KPIs, DCIM allows for direct comparisons.

A HOLISTIC APPROACH

A system becomes truly efficient when managed as an integrated whole. A holistic approach that considers every part of the data centre and its unique requirements is key to success. As the ‘non-compute’ and ‘compute’ worlds converge beyond traditional silos, we can significantly improve efficiency for both current systems and application coding. This improvement is based on deep data and simulations from real-world scenarios, providing unprecedented insights into resource utilisation for task execution.

Reichle & De-Massari, rdm.com

BRIDGING THE GAP BETWEEN TECHNOLOGY AND TALENT

Jad Jebara, President & CEO, Hyperview, looks at the ways Data Centre Infrastructure Management (DCIM) can help modernise data centres in terms of both technology and talent.

In the digital era, data centres stand as the titans powering our interconnected world. As cloud computing, big data and artificial intelligence reshape our industries, the growth of these sectors is unmatched. The global data centre market, set to surge from $187.35 billion in 2020 to a projected $517.17 billion by 2030, underscores their critical role in our technological future. However, beneath this rapid growth lies a dual challenge that threatens to undermine the industry’s potential. The first, often overlooked issue of ageing infrastructure is a silent impediment to efficiency and reliability. The second, equally crucial, is a growing talent shortage that could stifle innovation and growth.

These interlinked challenges present both risks and opportunities. As we navigate this complex landscape, the industry must not only modernise its technological foundation but also cultivate an environment that attracts and retains the brightest minds.

AN INVISIBLE ISSUE: AGEING INFRASTRUCTURE

The data centre industry faces a stark reality that is both critical and concerning.

IDC reports that the average data centre is around nine years old, with Gartner noting that any facility over seven years old is now considered outdated. Alarmingly, about one-third of data centres have facilities between six and 10 years old, and roughly 17% have been operational for a decade or more.

Ageing infrastructure is not just a matter of reduced performance; it’s a serious concern affecting several critical areas:

• Inefficient power utilisation: Older equipment often demands more power, increasing energy costs and potentially overloading electrical circuits. This not only increases the carbon footprint but also places additional stress on ageing electrical systems.

• Diminished performance: Over time, IT infrastructure degrades, leading to slower processing speeds, reduced storage capacity and increased latency. These performance issues impact employee productivity, customer satisfaction and businesses’ ability to meet the demands of modern operations.

• Increased risk of failures and downtime: Ageing infrastructure heightens the risk of equipment failures, leading to unplanned outages and downtime. This results in lost productivity, missed deadlines, revenue losses and potential long-term damage to a company’s reputation and customer trust.

• Safety hazards: Perhaps the most dangerous outcome of ageing infrastructure is the increased risk of arc flash incidents – violent electrical failures that pose serious safety risks.

While these technical challenges are significant, they are compounded by another critical issue facing the industry: the shortage of skilled talent.

THE TALENT PARADOX

The challenge of ageing infrastructure in data centres is inextricably linked to the industry’s talent shortage, creating a complex paradox. A recent survey by the Uptime Institute paints a stark picture: over half (53%) of data centre operators report significant difficulties in finding qualified personnel. This marks a troubling 15% increase since 2018, highlighting a growing talent gap. On one hand, the persistence of legacy systems and outdated management practices exacerbates the talent crisis:

• Skilled professionals are deterred by the prospect of maintaining obsolete technology

• The focus on legacy system upkeep diverts resources from innovation and growth

• Outdated environments fail to attract new talent, particularly tech-savvy younger generations

Additional research reveals that the number of staff needed to run the world’s data centres will grow from around two million to nearly 2.3 million by 2025. As the average data centre engineer is aged 60, it’s evident that new blood is essential for the industry’s longevity and success.

Addressing these infrastructure challenges can be a catalyst for talent attraction and retention:

• Modernised data centres with cutting-edge DCIM solutions appeal to top-tier talent

• Efficient management practices allow staff to focus on strategic, high-value tasks

• A reputation for innovation and sustainability makes the industry more attractive to potential recruits

By investing in modern infrastructure and management tools, data centre operators can create a virtuous cycle. Improved systems attract skilled professionals, who in turn drive further innovations and efficiencies. This approach not only solves immediate operational challenges but also cultivates a more appealing career path in data centre management, addressing the talent gap from both ends.

In essence, the journey to modernise data centre infrastructure is not just about technological upgrades – it’s about creating an ecosystem that nurtures and attracts the talent necessary to drive the industry forward in an increasingly digital world.

DCIM: A MULTI-FACETED SOLUTION

DCIM technology is proving to be an essential solution to these challenges. DCIM is revolutionising data centre management by incorporating advanced technologies that optimise operations and forecast and prevent electrical failures, all while tackling industry-wide issues.

In addition to the technical and operational benefits, incorporating DCIM into data centre operations also supports the current workforce by enhancing their capabilities while ensuring that the industry can continue to thrive despite the talent shortage. This approach is crucial for maintaining the efficiency, reliability and long-term success of data centres in an increasingly digital world.

Key benefits of DCIM include:

• Talent shortage mitigation: Automation of routine tasks and provision of insights help less experienced staff make informed decisions.

• Predictive maintenance: AI and machine learning can anticipate equipment failures before they occur, reducing the risk of arc flash incidents and other catastrophic failures (a simplified sketch of this idea follows the list below).

• Real-time monitoring: IoT sensors deliver continuous data on equipment performance, power usage, and environmental conditions, allowing prompt issue identification and resolution.

• Optimised resource allocation: Comprehensive insights enable data-driven decisions about resource distribution, ensuring efficient power and cooling management.

• Extended equipment lifespan: DCIM often reveals opportunities to optimise or repair existing equipment rather than replace it, lowering costs and reducing environmental impact.

• Enhanced safety protocols: Detailed equipment status insights facilitate more effective safety measures.

• Scalability for edge computing: Efficient management of distributed infrastructure supports evolving technological landscapes.
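By way of illustration, the toy Python sketch below flags anomalous power draw using a rolling mean and standard deviation. It is a drastically simplified stand-in for the machine-learning models referenced in the predictive maintenance bullet above; the readings and the 3-sigma rule are invented for the example.

```python
# Toy predictive-maintenance check: flag power readings that deviate sharply
# from recent history. Real DCIM/ML models are far more sophisticated; the
# readings and the 3-sigma rule here are illustrative assumptions.
from statistics import mean, stdev

def anomalies(readings: list[float], window: int = 10, sigmas: float = 3.0) -> list[int]:
    """Return indices of readings more than `sigmas` standard deviations
    from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sd = mean(history), stdev(history)
        if sd > 0 and abs(readings[i] - mu) > sigmas * sd:
            flagged.append(i)
    return flagged

# Simulated PDU power draw (kW): stable, then a spike worth investigating.
draw = [4.1, 4.0, 4.2, 4.1, 4.0, 4.1, 4.2, 4.0, 4.1, 4.1, 4.0, 7.9, 4.1]
print(anomalies(draw))  # -> [11]
```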

THE FUTURE IS TALENT-DRIVEN

As the data centre sector undergoes rapid expansion, it is vital to recognise and tackle the issues related to ageing infrastructure and talent shortages. Modern DCIM technology offers the insight and control needed to ensure safety, boost efficiency, and create an environment that attracts and retains skilled professionals.

By adopting DCIM, data centre operators can proactively address risks associated with outdated equipment, optimise resource use, enhance overall performance and reliability, and strengthen safety measures to protect personnel and assets. Furthermore, these solutions create innovative work environments that appeal to top talent, enable less experienced staff to make informed decisions, and free up skilled professionals to focus on strategic initiatives rather than managing legacy systems.

This approach not only safeguards current operations but also ensures that data centres can handle the growing demands of the digital era while nurturing the workforce needed to drive innovation.

As digital infrastructure becomes increasingly critical, investing in DCIM solutions goes beyond operational efficiency – it’s crucial for developing a resilient, sustainable and talent-rich digital future. Looking ahead, the integration of DCIM technology will be essential in shaping the future of data centres, enabling operators to manage the challenges of rapid growth while upholding the highest standards of safety, efficiency, reliability and workforce development.

By addressing both the technological and human aspects of data centre management, the industry can create a virtuous cycle of improvement, attracting the right talent to solve complex challenges and drive continuous improvement and innovation.

Hyperview, hyperviewhq.com

DCIM: THE ESSENTIAL CONNECTION POINT

Kevin Brown, SVP, EcoStruxure Solutions, Secure Power Division at Schneider Electric, looks at the future of DCIM and explains why it is the essential connection point for resilient, secure and sustainable IT.

As a software category, DCIM (Data Centre Infrastructure Management) is in an incredible market position when it comes to Green IT and sustainability. DCIM is the essential connection point between your IT infrastructure and the OT infrastructure supporting it. For those who are responsible for the overall IT infrastructure, whether on-prem, in colocation facilities or on the edge, this is where the magic happens. DCIM can help answer sustainability questions at a time when regulations are aimed at reducing the energy consumption of IT and promoting energy efficiency.

THE DCIM EVOLUTION

Historically, DCIM has focused on resiliency, not energy consumption and sustainability. The data was there, but it was difficult to extract and hard to organise. Times have changed. Modern DCIM is oriented around sustainability and making sure you have everything in place, such as sensors, compute and cooling, and figuring out how to optimise it. Schneider is constantly addressing operational challenges and working on better APIs and improved data organisation.

Schneider Electric had a vision of DCIM 3.0, which was introduced in June 2022, and it focused on the need for DCIM to evolve to address the sprawling hybrid IT infrastructure and the connection points between the user and applications. These all must be running 24/7 – there is no such thing as non-mission critical.

DCIM 3.0 represents the trends Schneider was seeing: that the hybrid IT environment in all its complexity was here to stay; resiliency challenges would remain a top priority; cyber security concerns would drive the need for better management tools; and sustainability would emerge as a priority for CIOs. Since DCIM 3.0 was introduced, these trends have only become more pronounced and urgent. Let’s take a closer look.

RESILIENCY

The best DCIM solutions are used to maximise the efficient use of power, cooling and space resources. In this way, DCIM improves the availability and resiliency of physical infrastructure systems and the IT workloads they support. Schneider is building in features like predictive failure algorithms and focusing on improved visibility and more robust reporting. Some DCIM systems enable customers to create a digital twin of their IT whitespace, which allows capacity management to minimise IT footprint, simulate layouts for cooling optimisation, and obtain energy efficiency figures down to a subsystem level.

SECURITY

From a cyber security standpoint, Schneider has invested heavily in this area to ensure it is meeting the best standards available. For example, it recently introduced the Secure Network Management Card System, which includes the independent cyber security certification for the NMC3 version 3.0 firmware (IEC 62443-4-2). Schneider has significantly increased the R&D resources dedicated to ensuring it is maintaining its code with regular and frequent updates. DCIM helps its customers adhere to their security policies through a reporting engine and makes firmware updates for the OT infrastructure extremely simple.

From a physical and environmental standpoint, Schneider has capabilities for customers to monitor environmental threats such as leaks, ambient temperature and humidity. The company can also provide remote cameras that capture video when an event is detected, saving those clips for later forensics. Many customers also use the DCIM solution to lock down their racks with permission-based entry and tracking. Robust tracking capabilities provide an audit trail of all this activity. Many of these capabilities have been used in some of the best-run data centres – with the increased focus on security, Schneider thinks these best practices should be applied across the entire infrastructure.

SUSTAINABILITY

Regulations aimed at reducing the energy consumption of IT and promoting energy efficiency are here, and Europe is leading the charge with the EU’s Energy Efficiency Directive and Fit for 55 with its goal of reducing EU emissions by 55% by 2030. Organisations will have to comply, and Schneider believes that regulations will be coming for smaller environments. In the US, Schneider thinks that the recently passed SEC rule will lead to a focus on IT energy consumption.

As the essential connection point between your IT infrastructure and the OT infrastructure supporting it, DCIM is in a unique position to help answer sustainability questions. Schneider recently unveiled new model-based, automated sustainability reporting features in its DCIM software that are unlike anything else available on the market. The new models offer customers a fast, intuitive and simple-to-use reporting engine to help meet regulatory requirements. And it can scale from the largest data centre to the smallest server room, providing unprecedented visibility.

THE FUTURE OF DCIM

When it comes to the future of DCIM, Schneider is continuing its investment in its DCIM portfolio and is optimistic about its offer and where the IT market is headed. The company is focused on both on-premise and cloud-based solutions because it recognises that many customers need a choice.

Long term, Schneider believes cloud-based solutions will provide an advantage with the use of AI and machine learning. AI is obviously powerful and fast moving. Schneider is investing to ensure that this tool will help customers reach ‘nirvana’ – with less data and more information about what they need to prioritise.

Schneider has many other ideas and technologies it is investing in, all of which will help its customers realise the DCIM 3.0 vision of a resilient, secure and sustainable IT infrastructure.

Schneider Electric, se.com

PLAYING IT SAFE

Russ Kennedy, Chief Evangelist, Nasuni, explains why data resilience is critical for success in the AI era.

While data security is far from a new topic to enterprises, there is still much to learn about building out a comprehensive and effective defence and response plan to secure critical business data. Today’s cyber attacks are becoming more sophisticated and precise, and thus harder to detect. We’re seeing cyber criminals using tools like AI to upgrade their phishing emails, fixing some of the grammatical errors that have in the past raised red flags. Enterprises are constantly challenged by having to catch up with their attackers and implement cyber security measures to match their level, especially as AI is making data assets more valuable for companies and cyber criminals. In fact, a recent report by Nasuni revealed that the biggest roadblock preventing organisations from either developing or implementing AI solutions is data privacy and security (42%).

WHY SECURING DATA IS MORE CRUCIAL THAN EVER

In today’s AI era, safeguarding the systems and models that have become critical to operational workflows is crucial. But there’s another element of AI security that businesses must also consider: ensuring that both the data needed to run AI systems and the data these systems generate are covered by enhanced protection.

As large organisations integrate advanced AI models as an essential part of operational workflows, it’s important to consider that business-critical workflows will collapse if the data fed into them is compromised. This makes data recovery the number one priority for firms when faced with a ransomware attack. An equally critical need is to ensure that models have access to secure, cleansed, organised and relevant data at all times. Once data is up to date and secure, enterprise leaders can innovate and find new ways to adapt to an uncertain and highly regulated landscape, ensuring compliance across different industries.

ROBUST SECURITY TO AVOID MANUFACTURING DISRUPTION

The manufacturing industry has already embraced automation. For example, a business with multiple plants producing complex products might have automated scanners that capture high-resolution, possibly even three-dimensional, images of components at each stage of production and assembly.

AI can help drive further value from this innovation by analysing those images to identify faults in the production line, helping to improve output and quality more efficiently and at a higher profit margin. Both old and new data will be vital to train this AI model, as it will need to learn what a correct product looks like at every stage in order to accurately flag flaws.

Imagine, then, that a cyber attack hits the organisation: the workflow is critically disrupted and a choice must be made between pausing manufacturing and risking the production of flawed products. The lack of a robust cyber response in this situation could result in huge losses to the business.

ENHANCING MEDIA AND MARKETING WITH SECURE DATA

Media and marketing are also ahead of the curve in AI adoption, with companies looking for new ways to efficiently produce, distribute and secure digital content. They are actively testing content creation from GenAI tools for video, audio and still imagery, which means that existing datasets need to be fully accessible to ensure optimal creativity and performance while safeguarding content against infringement of copyright and intellectual property (IP). This is crucial as content production processes become more complex, collaborative and global in scale.

The industry also frequently draws upon insights from focus groups and user research to drive decision-making. AI is being utilised here to analyse this data and draw out new and unique consumer behaviours. Without secure data, these models will not be able to deliver such valuable insights.

ENSURING RESILIENCY FOR CRITICAL HEALTHCARE DATA

Data in healthcare is critical, and AI is emerging as a key solution to accelerate diagnostics and support the development of new treatments and technologies. On a day-to-day basis, data is crucial to patient care, so having robust data resiliency and a comprehensive security plan in place is non-negotiable.

One hospital recently learned this the hard way. After being hit by a sophisticated ransomware attack that cut off staff access to critical data, the hospital had to use an old backup to restore operations – which took over a month. During this time, hospital staff had to go back to age-old pen and paper as the IT team worried about exposing new data to any follow-up attacks. Without access to data, AI models would be unable to drive efficiencies in patient care.

A ROBUST DATA SECURITY STRATEGY IS VITAL

Data resilience has long been a core tenet of enterprise IT, but is now at the forefront as companies are using data to train and feed their AI and ML solutions. At the same time, cyber threats are becoming more sophisticated and the potential for disruption is growing alongside the reliance on AI tools and platforms.

Securing data that drives enterprise AI must be a critical priority for businesses. The right infrastructure and tools with built-in security and data resilience can not only help to accelerate innovation, but also offer proactive ransomware protection and disaster recovery, should an attack occur – while enabling any organisation to boost data protection and compliance.

Nasuni, nasuni.com

BOLSTERING DATA RESILIENCE IN THE AI ERA

Rick Vanover, Vice President of Product Strategy, Veeam, discusses the critical importance of data resilience and outlines the best strategies for success.

Almost two decades ago, Clive Humby coined the now-famous phrase, “Data is the new oil”. With AI, we’ve got the new internal combustion engine. The discourse around AI has reached a fever pitch, but this ‘age of AI’ we have entered is just a chapter in a story that’s been going on for years – digital transformation.

The AI hype gripping every industry right now is understandable. The potential is big, exciting and revolutionary, but before we run off and start our engines, organisations need to put processes in place to ensure their data is available, accurate and protected. Look after your data, and it will look after you.

TAKE CONTROL BEFORE SHADOW SPRAWL DOES

When it comes to something as pervasive and ever-changing as a company’s data, it’s far easier to manage with training and controls early on. You don’t want to be left trying to ‘unbake the cake’. The time to start is now. The latest McKinsey Global Survey on AI found that 65% of respondents reported that their organisation regularly uses Gen AI (double the share from just 10 months before). But the stat that should give IT and security leaders pause is that nearly half of the respondents said they are ‘heavily customising’ or developing their own models.

This is a new wave of ‘shadow IT’ – the unsanctioned or unknown use of software or systems across an organisation. For a large enterprise, keeping track of the tools that teams across various business units might be using is already a challenge. Departments or even individuals building or adapting large language models (LLMs) will make it even harder to manage and track data movement and risk across the organisation.

The fact is, it’s almost impossible to have complete control over this, but putting processes and training in place around data stewardship, data privacy and IP will help. If nothing else, having these measures in place makes the company’s position far more defensible if anything goes wrong.

MANAGING THE RISK

It’s not about being the progress police. AI is a great tool that organisations and departments will get enormous value out of. But as it quickly becomes part of the tech stack, it’s vital to ensure these tools fall within the rest of the business’s data governance and protection principles. For most AI tools, it’s about mitigating the operational risk of the data that flows through them. Broadly speaking, there are three main risk factors: security (what if an outside party accesses or steals the data?), availability (what if we lose access to the data, even temporarily?) and accuracy (what if what we’re working from is wrong?).

This is where data resilience is crucial. As AI tools become integral to your tech stack, you need to ensure visibility, governance and protection across your entire ‘data landscape’. It comes back to the relatively old-school CIA triad – maintaining confidentiality, integrity and availability of your data. Rampant or uncontrolled use of AI models across a business could create gaps.

Data resilience is already a priority in most areas of an organisation, and LLMs and other AI tools need to be covered. Across the business, you need to understand your business-critical data and where it lives.

Companies might have good data governance and resilience now, but if adequate training isn’t put in place, uncontrolled use of AI could cause issues. What’s worse is that you might not even know about them.

BUILDING (AND MAINTAINING) DATA RESILIENCE

Ensuring data resilience is a big task – it covers the entire organisation, so the whole team needs to be responsible. It’s also not a ‘one-and-done’ task, as things are constantly moving and changing. The growth of AI is just one example of a development that needs to be reacted to and adapted to.

Data resilience is an all-encompassing mission that covers identity management, device and network security, and data protection principles like backup and recovery. It’s a massive de-risking project, but for it to be effective it requires two things above all else: the already-mentioned visibility, and senior buy-in.

Data resilience starts in the boardroom. Without it, projects fall flat, funding limits how much can be done, and protection/availability gaps appear. The fatal ‘NMP’ (‘not my problem’) can’t fly anymore.

Don’t let the size of the task stop you from starting. You can’t do everything, but you can do something, and that is infinitely better than doing nothing. Starting now will be much easier than starting in a year when LLMs have sprung up across the organisation. Many companies may fall into the same issues as they did with cloud migration all those years ago: going all-in on the new tech and ending up wishing they’d planned some things ahead, rather than having to work backwards.

Test your resilience by doing drills – the only way to learn how to swim is by swimming. When testing, make sure you have some realistic worst-case scenarios. Try doing it without your disaster lead (they’re allowed to go on vacation, after all). Have a plan B, C, and D. By doing these tests, it’s easy to see how prepped you are. The most important thing is to start.

IMMUTABILITY ISN’T ENOUGH

Candida Valois, Field CTO at Scality, explains why a cyber-resilient approach is the best way to safeguard critical data.

Cyber threats, including ransomware, are becoming more sophisticated, with a greater impact than at any time before. For example, in 2024 the UK’s NHS was hit by a ransomware cyber attack against pathology services provider Synnovis, causing widespread delays to outpatient appointments and forcing elective procedures to be postponed.

Organisations have to be on high alert to make sure their business-critical data is always protected, and that they remain operational without impacting customers – even in the event of an attack.

To stay future-proof, organisations are beginning to realise the value of adopting a new way of protecting data assets, known as a cyber-resilience approach.

RETHINK YOUR SECURITY

Three recent technology developments have turned standard cyber security measures on their head:

1. AI is empowering criminals. The UK’s National Cyber Security Centre has noted the increased effectiveness, speed and sophistication that AI will give attackers. In the year after ChatGPT was released, phishing activity increased by 1,265% and successful ransomware attacks rose by 95%.

2. ‘Immutability-washing’ leads to weak spots in cyber defences. In other words, just because something purports to be immutable doesn’t mean it really is. Most ‘immutable’ storage solutions do not offer truly ransomware-proof security. Some use periodic snapshots to make data immutable, but that creates periods of vulnerability. Others don’t offer immutability at the architecture level – just at the API level. But immutability at the software level isn’t enough; it opens the door for attackers to evade the system’s defences.

Attackers are getting better at exploiting the vulnerabilities of flawed immutable storage. To create a truly immutable system, organisations must deploy solutions that prevent deletion and overwriting of data at the foundational level.

3. Exfiltration attacks are a growing menace. Today’s ransomware attackers not only encrypt data; they also exfiltrate it, then threaten to publish or sell it unless you pay a ransom. Data exfiltration is now part of 91% of ransomware attacks.

Immutability alone can’t stop exfiltration attacks because they don’t rely on changing, deleting or encrypting data to demand a ransom. To defeat data exfiltration, you need a multi-layered approach that secures sensitive data everywhere it exists. Most providers have not hardened their offerings against common exfiltration techniques.

MOVING BEYOND IMMUTABILITY

Relying solely on immutable backups won’t protect data against all the current and emerging ransomware perils. It’s time for organisations to move beyond basic immutability and adopt a more holistic security paradigm of end-to-end cyber resilience. This paradigm includes the strongest type of true immutability. But it doesn’t stop there; it includes strong, multi-layer defences to defeat data exfiltration and other emergent threats such as AI-enhanced malware. This entails creating security measures at every level to shut down as many threat types as possible and achieve end-to-end cyber resilience. These levels include:

API – Amazon shook up the storage industry when it introduced its immutability API (AWS S3 Object Lock) six years ago. It offers the highest protection against encryption-based ransomware attacks and creates a default interface for common data security apps. In addition, the S3 API’s granular control over data immutability enables compliance with the strictest data retention requirements. For the modern storage system, these capabilities are must-haves.
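As a concrete illustration of API-level immutability, this hedged Python sketch writes a backup object under an S3 Object Lock retention period using boto3. It assumes credentials are configured and that the bucket was created with Object Lock enabled; the bucket name, key and retention window are placeholders.

```python
# Sketch: store a backup object under S3 Object Lock compliance-mode retention.
# Assumes boto3 is installed, AWS credentials are configured, and the bucket
# was created with Object Lock enabled. Names and dates are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

with open("backup.tar.gz", "rb") as backup:
    s3.put_object(
        Bucket="example-backup-bucket",       # placeholder bucket name
        Key="backups/2024-10-01.tar.gz",      # placeholder object key
        Body=backup,
        ObjectLockMode="COMPLIANCE",          # retention cannot be shortened or removed
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
    )
```

Until the retain-until date passes, that object version cannot be overwritten or deleted through the API – exactly the window of protection an encryption-based ransomware attack would need to break.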

Data – Stopping data exfiltration is the goal here. Anywhere sensitive data exists, organisations need to deploy strict data security measures. To make sure backup data can’t be accessed or intercepted by unauthorised parties, what’s needed is a hardened storage solution that has many layers of security at the data level. That includes broad cryptographic and identity and access management (IAM) features.

Storage – Should an advanced hacker get root access to a storage server, they can evade API-level protections and gain unfettered access to all the server’s data. Sophisticated AI-powered ways of defeating authentication make attacks like this harder to prevent. A storage system must make sure data is safe – even if a bad actor finds their way into the deepest level of an organisation’s storage system.

Next-gen solutions address this scenario with distributed erasure coding technology. It makes data at the storage level unintelligible to hackers and not worth exfiltrating. It also enables an IT team to completely reconstruct any data that was lost in an attack or corrupted – even if several drives or a whole server gets destroyed.
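Production erasure coding uses Reed-Solomon-style schemes that tolerate multiple simultaneous losses, but the toy Python sketch below shows the core idea with single-parity XOR: data is split into fragments, a parity fragment is derived, and any one lost fragment can be rebuilt from the survivors. The fragment contents are invented for the example.

```python
# Toy single-parity erasure coding via XOR: any one lost fragment can be
# rebuilt from the others. Production systems use Reed-Solomon-style codes
# that survive multiple losses; this is a minimal illustration only.
from functools import reduce

def xor_fragments(fragments: list[bytes]) -> bytes:
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), fragments)

data_fragments = [b"DATA-FRAG-1!", b"DATA-FRAG-2!", b"DATA-FRAG-3!"]
parity = xor_fragments(data_fragments)  # stored on a separate drive/server

# Simulate losing fragment 1, then rebuild it from the survivors plus parity.
survivors = [data_fragments[0], data_fragments[2], parity]
rebuilt = xor_fragments(survivors)
assert rebuilt == data_fragments[1]
print(rebuilt)  # b'DATA-FRAG-2!'
```

The unintelligibility the article describes comes from dispersal at scale: real systems transform and scatter fragments across many drives and servers, so a single compromised node holds too little to reconstruct anything worth exfiltrating.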

Geographic – When data is stored in one location, it’s especially susceptible to attack. Bad actors try to infiltrate several organisations at once by attacking data centres or other high-value targets, which raises the odds of actually collecting a ransom. Today’s storage recommendations therefore include keeping multiple geographically separate offsite backups to defend data from vulnerabilities at any one site.

Architecture – The security of storage architecture determines the security of the storage system. That’s why cyber resilience must focus on getting rid of vulnerabilities located in the core system architecture. When a ransomware attack is in process, one of the first things an attacker tries to do is to escalate their privileges. If they can do that, then they can deactivate or otherwise bypass immutability protections at the API level.

If a standard file system or another intrinsically mutable architecture is the foundation of an organisation’s storage system, its data is left out in the open. The risk of ransomware attacks at the architecture level increases if a storage system is founded on a vulnerable architecture, given the explosion of malware and hacking tools enhanced by AI.

AI-powered ransomware attacks are on the rise, rendering many traditional approaches to protect backup data ineffective. Immutability is a must, but it’s not enough to combat the increasing sophistication of cyber criminals – and not only that, but most so-called ‘immutable solutions’ really aren’t. What’s needed today is end-to-end cyber resilience that addresses five key levels to help organisations future-proof their data security strategy.

Scality, scality.com

ATHOS RECEIVES THE AI TREATMENT

In this exclusive case study, DCNN looks at how Athos Therapeutics managed to scale drug discovery via AI analysis thanks to GPU-powered cloud infrastructure implemented by Vultr.

Athos Therapeutics is a clinical-stage biotechnology company pioneering the use of AI-driven molecular analysis to revolutionise drug discovery for autoimmune diseases and cancer. Facing increasing data and computing demands, the company recently turned to Vultr’s Cloud GPU platform to scale its AI model training, powered by NVIDIA HGX H100. With this solution, Athos is accelerating the development of next-generation treatments by using cutting-edge AI and machine learning.

Based in the US, Athos uses a rich repository of patient samples and annotated clinical data from some of the leading medical centres both in its home country and abroad. These materials and data power the Athos AI/ML platforms to identify genetic hubs of disease, allowing it to create treatments tailored to the different molecular subtypes of autoimmune diseases and cancer. Its AI-driven platform has found essential drug targets, including ATH-063, a potential treatment for inflammatory bowel disease, and other promising options in its development pipeline.

A NEED FOR ENHANCED SOLUTIONS

As Athos advanced its research and development, it needed more powerful computing resources. Training AI models on its extensive datasets required reliable access to GPUs to maintain critical workloads and advance drug discovery. The company needed enhanced solutions to several challenges, including:

1. Cost efficiency by eliminating the need to build, operate and maintain its own data centres – saving significantly on hardware, utilities and maintenance expenses

2. Engineering support and maintenance for hardware failures, driver and software dependency issues

3. Scalable and agile solutions that can adapt to hardware upgrades and reduce the risk of limited technical support and increasing long-term costs

Athos thus sought an independent cloud computing provider to address these challenges and provide the scalability, support and cost-efficiency needed to accelerate its AI-driven breakthroughs.

THE SOLUTION

Athos partnered with the independent cloud computing platform, Vultr, and chose its GPU service powered by NVIDIA HGX H100. This setup helped handle the growing computing demands of the biotechnology company’s AI models. The partnership allowed for:

- Scalability and flexibility. In combination with Dell Technologies, Vultr’s infrastructure offered the flexibility needed to accommodate Athos’ growing datasets and hardware upgrades.

- Enhanced GPU performance. NVIDIA H100 Tensor Core GPUs allowed Athos to train large AI models with improved performance, faster iteration cycles and support for mixed-precision computations.

- Greater cost efficiency. Vultr’s solution removed the need for Athos to manage its own data centres. This strategy led to considerable savings in operational costs. By partnering with Vultr, Athos avoided the costs of hardware purchases, facility maintenance, IT staffing and engineering support. Vultr’s scalable pricing model meant that Athos only paid for the computing resources it needed and used.

- Continuous engineering support. Vultr’s engineering team provided Athos with critical support, minimising downtime and, in turn, maximising productivity for the company.

- Robust data security and disaster recovery. Vultr ensured the safety and confidentiality of Athos’ patient data and proprietary AI algorithms with its data sovereignty solutions and built-in disaster recovery mechanisms, which are designed to protect against hardware and software failures.

Vultr’s GPU-powered cloud infrastructure was implemented in close collaboration with Athos’s AI and computational biology teams. Vultr first worked with Athos to assess its existing cloud setup and design a customised solution that could scale with its growing computational needs. The migration of Athos’s AI models to Vultr’s platform was executed seamlessly, with Vultr’s engineering team minimising disruption to ongoing research efforts. Once the implementation was complete, Vultr and Athos continued working together to optimise their GPU utilisation, which improved model training speeds and overall system performance. With Vultr’s advanced monitoring tools, Athos maximised its hardware utility, reducing waste and enhancing efficiency across its platform.

Dimitrios Iliopoulos, PhD MBA, President and CEO, Athos, explains, “Athos is committed to providing novel precision therapeutics for patients with autoimmune diseases and cancer. The combination of Vultr Cloud GPU, powered by NVIDIA and Dell infrastructure, enables us to achieve our aims. Our AI computational teams are excited about collaborating with Vultr.”

June Guo, VP of Artificial Intelligence & Machine Learning at Athos, adds, “Because Athos’s datasets continue to grow every year, Vultr and Dell’s scalable and secure NVIDIA GPU infrastructure enables us to train such large datasets for precision medicine on autoimmune diseases and cancer.”

Vultr, vultr.com

CENTIEL INTRODUCES STRATUSPOWER UPS

As the demand for data centres grows, so does their energy consumption, making it increasingly important to improve efficiency and reduce environmental impact.

In response, Centiel has developed StratusPower, a highly efficient, scalable, three-phase true modular UPS, providing peace of mind when it comes to power availability and uptime for critical power protection. StratusPower improves energy efficiency and reduces carbon footprints to help data centres achieve net zero targets.

StratusPower offers ‘9 nines’ (99.9999999%) availability to effectively eliminate system downtime; class-leading 97.6% online efficiency to minimise running costs; true ‘hot swap’ modules to eliminate human error in operation; and long-life components to improve sustainability.

Like all of Centiel’s UPS systems, StratusPower is manufactured at its factory in Switzerland. Uniquely, however, it includes even higher-quality components: instead of replacing filter capacitors and cooling fans every four years, they now need replacing every 15 years, or just once during the unit’s entire 30-year design life. As a data centre typically has a design life of 25 to 30 years, StratusPower will last as long as the data centre itself. Furthermore, at the end of its life, StratusPower can be almost 100% recycled.

The three-phase modular UPS StratusPower now covers a power range from 50 to 1,500kW in one cabinet and can be paralleled to deliver up to 3,750kW of uninterrupted, clean power – which is perfect for data centres.

UPS cabinets are designed with scalability and flexibility in mind, and future load changes are easily accommodated by adding or removing UPS modules as required. A data centre will never outgrow a well-specified StratusPower UPS, and it can be continually rightsized to ensure it always operates at the optimal point on its efficiency curve.
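As a simple illustration of rightsizing, the hedged Python sketch below works out how many modules of a given rating are needed to carry a growing load with N+1 redundancy. The module rating and load figures are assumptions for the example, not Centiel specifications.

```python
# Illustrative N+1 rightsizing for a modular UPS. The module rating and
# load figures are assumptions for this sketch, not vendor specifications.
import math

def modules_needed(load_kw: float, module_kw: float, spares: int = 1) -> int:
    """Modules required to carry `load_kw`, plus `spares` redundant module(s)."""
    return math.ceil(load_kw / module_kw) + spares

MODULE_KW = 50.0  # assumed module rating for illustration

for load in (120.0, 240.0, 480.0):  # load grows as the data centre scales
    n = modules_needed(load, MODULE_KW)
    print(f"{load:.0f} kW load -> {n} x {MODULE_KW:.0f} kW modules (N+1)")
```

Because capacity tracks the actual load in module-sized steps, the UPS stays near the high-efficiency region of its curve rather than idling heavily oversized.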

StratusPower is already hardware-enabled and, with adaptations to the software/firmware, is future-ready to accept alternative energy sources. Configured correctly with LiFePO4 batteries, known for their cycling ability, StratusPower has the potential to become a micro-grid or energy hub, storing and delivering energy into the facility when required.

Centiel, centiel.com

SCHNEIDER ELECTRIC ANNOUNCES APC BACK-UPS PRO GAMING UPS

Schneider Electric, a leader in the digital transformation of energy management and automation, has announced the availability of its APC Back-UPS Pro Gaming uninterruptible power supplies (UPS) in Europe.

Celebrating 40 years of reliability and leadership in critical power protection during 2024, the new, stylish, and state-of-the-art UPS has been designed to protect gaming equipment from power outages and deliver a robust power connection, despite energy spikes and failures.

Back-UPS Pro Gaming has been specifically designed with gamers, streamers and influencers in mind. It delivers uninterruptible power protection – even in regions where the grid is unstable –keeping GPU-powered PCs, leading consoles, streamers and gamers connected, regardless of power disruptions.

To deliver robust protection, the APC Back-UPS Pro Gaming UPS features sine wave battery backup power, delivering the smooth electrical current required by sensitive electronics, together with AVR (Automatic Voltage Regulation), which helps protect against the power irregularities that can cause glitches and buffering during an outage and extends the lifespan of gaming equipment.

NEW 1MW COOLANT DISTRIBUTION UNIT LAUNCHED

Airedale by Modine, a critical cooling specialist, has announced the launch of a coolant distribution unit (CDU) in response to increasing demand for high performance, high efficiency liquid and hybrid (air and liquid) cooling solutions in the global data centre industry.

The Airedale by Modine CDU will be manufactured in the US and Europe and is suitable for colocation and hyperscale data centre providers seeking to manage higher-density IT heat loads. The increasing data processing power of next-generation central processing units (CPUs) and graphics processing units (GPUs), developed to support complex IT applications like AI, results in higher heat loads that are most efficiently served by liquid cooling solutions.

The CDU is the key component of any liquid cooling system, isolating facility water systems from the IT equipment and precisely distributing coolant fluid to where it is needed in the server/rack. Delivering up to 1MW of cooling capacity based on ASHRAE W2 or W3 facility water temperatures, Airedale’s CDU offers the same quality and high energy efficiency associated with other Airedale by Modine cooling solutions.
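For a feel of the numbers involved, the hedged Python sketch below uses the basic heat transfer relation Q = m·cp·ΔT to estimate the coolant flow needed to carry away 1MW of heat. The 10 K temperature rise is an illustrative assumption, not an Airedale specification.

```python
# Estimate coolant flow needed to remove a given heat load: Q = m_dot * cp * dT.
# The 10 K temperature rise is an illustrative assumption, not a vendor figure.
HEAT_LOAD_W = 1_000_000        # 1 MW of IT heat
CP_WATER = 4186.0              # specific heat of water, J/(kg*K)
DELTA_T_K = 10.0               # assumed coolant temperature rise across the IT load

mass_flow = HEAT_LOAD_W / (CP_WATER * DELTA_T_K)   # kg/s
volume_flow_lpm = mass_flow * 60                   # ~1 kg of water per litre

print(f"{mass_flow:.1f} kg/s  (~{volume_flow_lpm:.0f} l/min)")
```

At a 10 K rise, roughly 24 kg/s (around 1,400 l/min) of water carries away 1MW, which is why precise, balanced coolant distribution is the CDU's central job.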

Developed with complete, intelligent cooling systems in mind, the CDU’s integrated controls communicate with the site building management system (BMS) and system controls for optimal performance and reliability.

Airedale by Modine, airedale.com

Schneider Electric, se.com

R&M EXPANDS READY-TO-INSTALL CABLING SYSTEMS

R&M, a developer and provider of high-end infrastructure solutions for data and communications networks, is expanding its range of terminated fibre optic cables.

The VARIOline family now includes three plug-and-play solutions. The ready-to-assemble loose tube cabling system simplifies installation in buildings, data centres and outdoor areas as installers only have to feed the cables into shafts, ducts and racks and connect them in patch panel modules.

The overall solution, consisting of ready-to-use cables and fitted fan-out legs, saves field-mounting, splicing and measuring on the construction site. Installation work can be carried out in a time- and cost-saving manner with just a few specialists.

The VARIOline overall solutions are used for backbone, trunk and campus cabling with either single or multimode fibres.

VARIOline Easy is designed for protected environments, making it ideal for high-density packing in data centres and enterprise networks within commercial buildings and factories. This solution enables efficient space utilisation and easy management of fibre optic connections, providing a seamless and reliable network infrastructure for critical applications.

VARIOline Classic provides a mechanically stable solution suitable for medium-density packing in both indoor and certain outdoor installations, while VARIOline OP is available for outdoor environments.

R&M, rdm.com

KINGSTON DIGITAL LAUNCHES SSD FOR DATA CENTRE ENVIRONMENTS

Kingston Digital Europe, a flash memory affiliate of Kingston Technology Company – a provider of memory products and technology solutions – has announced its latest data centre Solid State Drive (SSD), DC2000B: a high-performance PCIe 4.0 NVMe M.2 SSD optimised for use in high-volume rack-mount servers as an internal boot drive.

Using the latest Gen 4×4 PCIe interface with 112-layer 3D TLC NAND, the DC2000B is ideally suited to internal server boot drive applications, as well as to purpose-built systems where higher performance and reliability are required. DC2000B includes on-board hardware-based power loss protection (PLP), a data protection feature not commonly found on M.2 SSDs. It also includes a new integrated aluminium heatsink that helps to ensure broad thermal compatibility across a wide variety of system designs.

“Whitebox server makers and Tier 1 server OEMs continue to equip their latest generation servers with M.2 sockets for boot purposes as well as internal data caching,” says Tony Hollingsbee, SSD Business Manager, Kingston EMEA. “DC2000B was designed to deliver the necessary performance and write endurance to handle a variety of high duty cycle server workloads. Bringing the boot drives internal to the server preserves the valuable front loading drive bays for data storage.”

DC2000B is available in 240GB, 480GB and 960GB capacities and is backed by a limited five-year warranty and free technical support.

Kingston Digital Europe, kingston.com
