FROM THE EDITOR’S DESK
“The scientific way of thinking,” says Carl Sagan, “is…an essential tool for a democracy in an age of change.”
Dear Readers,
I start this issue by remembering Carl Sagan, an astronomer and a popular public advocate of sceptical scientific inquiry and the scientific method. Carl reminds us of the potential of science as a tool for advancing society. And I think that is where we are in the industry now.
As I welcome you to read this issue, I wish for you to remember that you bear the torch of technological transformation. In the data centre industry, we have much change coming as we tackle many challenges.
Primarily, there is the widely discussed subject of Sustainability. On that, I want to pause and ask, how much of it is talk and how much of it is action? I hope this issue helps you to look at the discussions and question what is missing, what you can add, and what you are changing.
I will also confess that my interest in Cooling has expanded while reading about the various methods being utilised in the industry. It brings to mind my interview with Liz Cruz in the Autumn 2023 issue, in which she predicted a transition in liquid cooling that is highlighted further in this issue’s feature.
Also, Cloud Computing & Storage remains the most diverse topic, and I have tried to capture as many ideas as possible, from cloud repatriation to WAN acceleration and immutable object storage!
Finally, I want to share that this is my final issue of DCNN as the Editor. I have made a difficult decision to move on, but someone equally brilliant will be taking over. Be on the lookout for further updates, and I wish you a relaxing, yet riveting read!
Isha Jain, Editor

CONTACT US
EDITOR: ISHA JAIN
T: 01634 673163
E: isha@allthingsmedialtd.com
GROUP EDITOR: CARLY WELLER
T: 01634 673163
E: carly@allthingsmedialtd.com
GROUP ADVERTISEMENT MANAGER: KELLY BYNE
T: 01634 673163
E: kelly@allthingsmedialtd.com
SALES DIRECTOR: IAN KITCHENER
T: 01634 673163
E: ian@allthingsmedialtd.com
STUDIO: MARK WELLER
T: 01634 673163
E: mark@allthingsmedialtd.com
MANAGING DIRECTOR: DAVID KITCHENER
T: 01634 673163
E: david@allthingsmedialtd.com
ACCOUNTS
T: 01634 673163
E: susan@allthingsmedialtd.com
Isha Jain engages in a conversation with Louis McGarry, Sales and Marketing Director at Centiel, discovering his personal journey into the world of uninterruptible power supply.
22 What liquid cooling means for carbon reduction in data centres
24 Data centre cooling: an imperative for innovation
27 How to unleash the full potential of data centre liquid cooling
29 Why immersion cooling is not yet a viable option for many data centres
31 Unlocking cooling efficiencies with rear door heat exchangers
PULSANT EXPANDS WITH £4.5 MILLION DATA HALL IN MANCHESTER
Pulsant, a UK provider of edge infrastructure and data centres, has cut the ribbon on its new £4.5 million data hall in Manchester and welcomed IT services leader, Dacoll, as its first customer in the upgraded facility.
The expansion supports the Manchester Digital Strategy’s targets of £1 billion of investment in digital infrastructure and a 50% increase in digital sector businesses by 2026. This latest development to the Trafford Park site is part of platformEDGE, Pulsant’s national edge strategy to equip regional businesses with the infrastructure to capitalise on new technologies such as analytics, AI and IoT.
Recent research from LINX shows that bandwidth needs in Manchester have more than doubled, going from 135Gbps to 307Gbps in the last 12 months. Answering this demand, the Pulsant Manchester site spans four data halls with more than 400 racks, offering 1MW of power. Through Pulsant’s national data centre network and ecosystem of connectivity partners, Manchester businesses can also access global clouds, telcos and carriers via the LINX Manchester point of presence (PoP).
Pulsant, pulsant.com
SCHNEIDER ELECTRIC AND NVIDIA PIONEER DESIGNS FOR AI DATA CENTRES
Schneider Electric has announced a collaboration with NVIDIA to optimise data centre infrastructure and pave the way for ground-breaking advancements in edge artificial intelligence (AI) and digital twin technologies.
Schneider Electric will leverage its expertise in data centre infrastructure and NVIDIA’s advanced AI technologies to introduce the first publicly available AI data centre reference designs. These designs are set to redefine the benchmarks for AI deployment and operation within data centre ecosystems, marking a significant milestone in the industry’s evolution.
With AI applications gaining traction across industries and demanding more resources than traditional computing, the need for processing power has surged. The rise of AI has spurred notable transformations and complexities in data centre design and operation, with data centre operators working to swiftly construct and operate facilities that are stable, energy-efficient and scalable.
In collaboration with NVIDIA, Schneider Electric plans to explore new use cases and applications across industries and further its vision of driving positive change and shaping the future of technology.
Schneider Electric, se.com
STELIA TRANSFORMS CONNECTIVITY WITH TELEHOUSE EUROPE PARTNERSHIP
Stelia and Telehouse Europe have announced a partnership to launch Stelia IX, a high-capacity Layer 3 internet exchange service. Stelia IX revolutionises global connectivity and data exchange efficiency, with Telehouse Europe as a key anchor location.
The new partnership comes at a critical juncture, as IP traffic volumes continue to surge at a 22% compound annual growth rate (CAGR) in western Europe. This trend is largely driven by the proliferation of content-rich applications such as video streaming, social media and gaming, which together account for around 80% of all data traffic and are leading to a significant shift toward more localised traffic patterns.
By facilitating efficient, high-capacity data exchange and reducing reliance on traditional transit services, Stelia IX enables Telehouse Europe clients to optimise their network costs and improve overall profitability. At the same time, it empowers them to dramatically extend their reach into distributed ecosystems, as the service is accessible to a wide range of organisations, from small-scale enterprises to established ISPs and content providers.
Stelia, stelia.io; Telehouse Europe, telehouse.net
ADVANIA EXPANDS WITH ATNORTH SITE IN ICELAND
atNorth, a leading Nordic colocation, high-performance computing and artificial intelligence service provider, has announced an expansion of its partnership with Nordic IT services corporation, Advania, providing additional capacity at its ICE03 site in Iceland, which opened last year.
Advania is a long-standing customer of atNorth at its ICE01 site in Reykjavík, in addition to some of atNorth’s other data centres in Sweden and Finland. The expansion to the ICE03 campus allows for further geographical separation of its infrastructure and highlights the business’ focus on data security, not to mention the benefits of redundancy and performance optimisation.
atNorth’s ICE03 data centre is located in the town of Akureyri, which is in the north of Iceland and, therefore, benefits from the country’s cool climate and renewable
energy sources. Akureyri is becoming increasingly attractive as a thriving technology hub as a result of investment in better and more resilient connectivity in the region.
atNorth, atnorth.com
TECHUK REPORT EXPLORES DATA CENTRE HEAT EXPORT IN THE UK
As the global community intensifies its focus on sustainability and energy efficiency, the intersection between technology and environmental responsibility becomes increasingly significant. As a result, techUK has released a new report shedding light on the potential benefits and challenges associated with integrating data centre heat into district heating networks in the UK.
The report, Warming Up to Efficiency: Understanding the Potential Benefits and Pitfalls of Data Centre Heat Export in the UK, delves into the opportunities, barriers, and successes of reusing data centre residual heat, offering insights for stakeholders across industries. District heating networks, renowned for their low-carbon heating solutions, stand as a platform to harness surplus heat generated by data centres. However, effective implementation hinges on addressing several critical factors.
The report underscores the importance of sustainability considerations and regulatory frameworks in advancing towards net zero objectives. It calls for government clarity on participation criteria, heat availability, quality standards, and infrastructure guidelines to establish a standardised and scalable approach.
techUK, techuk.org
YONDR GROUP POWERS UP ITS FIRST DATA CENTRE CAMPUS IN MALAYSIA
Yondr Group has energised its first campus in Malaysia, marking a significant milestone in delivering the site’s power infrastructure.
Located in Johor’s Sedenak Tech Park, the campus is set to deliver 300MW of critical IT capacity when fully complete. It will see the development of multiple
phases, with access to dark fibre connectivity, scalable utilities, and infrastructure.
The milestone reached in Johor brings the company a step closer to its aim of positively contributing towards Malaysia’s digital infrastructure. The Sedenak Tech Park, formerly known as Kulai Iskandar Data Exchange (KIDEX), is a flagship data centre complex, which spans 700 acres and is nestled in the heart of the larger 7,290-acre Sedenak Technology Valley.
This puts Yondr’s data centre development geographically close to a wide variety of technology-driven developments. This will ensure that its clients in Malaysia benefit from long-term scalability potential, in terms of both power and land requirements.
Yondr Group, yondrgroup.com
RMD AND SCHNEIDER ELECTRIC ADD AN EDGE TO EDUCATION AT UNIVERSITY OF LINCOLN
University of Lincoln sets sail on an edge data centre modernisation journey in collaboration with Schneider Electric and RMD.
Established around 25 years ago, the University of Lincoln is one of the newest centres of academia in the UK. Charged with enriching the city’s economic, social and cultural life, and listed in the world’s top 130 in the Times Higher Education’s (THE) Young University Rankings 2022, today it is also one of the top universities for student satisfaction.
The main university campus is situated in one of the great historic cities, in the heart of the city of Lincoln. Today, the city is a winning combination of old meets new, where remnants of Roman Britain, a Norman castle and the Cathedral Quarter lay alongside a vibrant city
square and the contemporary architecture of the university’s campus buildings.
To date, the university has constructed or acquired 25 buildings at a rate of approximately one per year, recently opening a substantial new student village. In terms of its significance to the local economy, roughly one in every five or six people you might stop in the streets of Lincoln is likely to be studying at the university, which offers just under 200 different courses (independent figures suggest 18,000 students in a total urban population of 103,000).
As an academic institution that has more or less been conceived and grown up in the internet age, its student population is tech-literate, and the university depends heavily on IT to support the many faces of college life. For example, the campus has become largely cashless in recent years. “You can’t buy a cup of coffee or a sandwich if the IT isn’t working,” says Darran Coy, Senior Infrastructure Analyst and Team Leader for Compute and Storage at the university. “Everything has to work 24/7.”
With IT and network uptime critical for the function of the university, its IT team supports a variety of services, some of which require large amounts of data storage and processing. For instance, Lincoln Agri-Robotics (LAR), established at the university as the world’s first global centre of excellence in agricultural robotics, sends lightweight robotic vehicles into fields for a variety of tasks. Using image recognition, their applications range from identifying and eradicating pests and diseases in real time without synthetic pesticides to monitoring, weeding and harvesting crops.
Elsewhere, Darran says many of the standard applications used by students and the university itself have moved to a Software-as-a-Service (SaaS) or cloud-based delivery model. Accordingly, downtime is a luxury the university simply cannot afford. “In times past we could arrange to shut down IT systems on, say, a Thursday morning to carry out essential maintenance and upgrades, and of course our weekends were completely free,” he says. “But today, many of our buildings are open all day and every day. So we have to make sure that everything is up and running all the time.”
THE CHALLENGE OF RELIABILITY AT THE EDGE
“We open a new building nearly every year, and each one needs its own comms room. Despite the fact that we operate a central data centre, each comms room is populated with IT racks, including servers and networking equipment, together with all the necessary supporting
infrastructure, including cooling, structured cabling, power distribution (PDUs) and power protection. It is the epitome of edge computing.”
These edge environments, distributed across the city centre campus and satellite campuses at Riseholme and Holbeach, provide Wi-Fi connectivity, enabling access to SaaS applications required by students and staff. These edge facilities are, therefore, mission-critical to academic and back-office operations. Each person has a unique IP address, allowing them, for example, to print documents and materials. Even those studying traditional subjects like geography and music use as much technology as the computer scientists, according to Darran.
“We have something like 1,000 teaching groups that rely on AV, for example, they’ve got big screens, sound systems and digital projectors, all kinds of cool stuff to enliven lectures and make information more consumable.”
The university is also a major user of Power over Ethernet (PoE). “All of our access points use PoE,” continues Darran. “And it’s also used to power other assets, such as the Raspberry Pi-operated digital information displays widely used around the campus, and security cameras. PoE requirements increase the need for reliable power in all situations.”
Like many universities, Lincoln works with outside companies on research projects as well as providing incubation services for innovations which may have wider market appeal. These sorts of activities are income-generating for Lincoln, and the IT which supports them needs to be robust and demonstrably resilient.
Power reliability is, therefore, a major challenge for the university. Given its location in the city centre, the utility is generally dependable, and since prolonged power blackouts are not seen as a major threat, there is no provision for secondary power generation to any of the university facilities. However, intermittent disruptions do occur to the main power supply, and there are occasional ‘brownouts’. Taken together, these are recurring problems which could present a threat to continuous uptime.
Consequently, the university depends heavily on uninterruptible power supply (UPS) systems to build resilience into its network. UPS systems provide battery backup in the event of a disruption to mains power, so that essential functions can continue operating as normal until mains power is restored. Given the distributed nature of the edge IT infrastructure around the college, a wide variety of UPS systems has been in place. Currently, there are 110 APC Smart-UPS systems from Schneider Electric providing backup to essential assets.
Given the lack of power-generating equipment at the university, each UPS is specified with battery systems to deliver one hour’s runtime for the attached load. It had been the custom to add UPS support on an ad hoc basis as new buildings were built and fitted out with IT; in the early days there was no systematic or coordinated approach to deploying UPS systems, and in fact it was only the loss of expensive IT equipment that made their use standard.
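As a rough illustration of what that one-hour runtime specification implies, the sketch below estimates the battery energy needed for a given load. It is a simplified constant-load model with assumed inverter efficiency and depth-of-discharge figures – not the university’s or Schneider Electric’s sizing method, which would use manufacturer runtime curves.

```python
# Back-of-envelope UPS battery sizing: a simplified constant-load model.
# The efficiency and depth-of-discharge figures are assumptions for
# illustration; real sizing uses manufacturer runtime curves.

def required_battery_wh(load_w: float,
                        runtime_h: float = 1.0,
                        inverter_eff: float = 0.95,
                        max_depth_of_discharge: float = 0.8) -> float:
    """Battery energy (Wh) needed to carry load_w watts for runtime_h hours,
    allowing for inverter losses and the usable fraction of capacity."""
    return load_w * runtime_h / (inverter_eff * max_depth_of_discharge)

# Example: a 3kW comms-room load backed for one hour needs ~3.9kWh of battery
print(f"{required_battery_wh(3000):.0f} Wh")
```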
“The distributed edge nature of the university’s IT infrastructure and the ongoing expansion with new buildings, together with the growth in dependence upon SaaS and cloud services, has sometimes meant that infrastructure has not always kept up with demand. We faced two tasks – the need to maintain and upgrade existing UPS systems to ensure they could deliver the required runtime, and the need to provide new Schneider Electric UPS and installation services in new construction projects. To help us, we partnered with RMD.”
RMD AND SCHNEIDER ELECTRIC: THE SOLUTION FOR A RELIABLE EDGE
Darran and the team began their relationship with RMD over a decade ago, when the Schneider Electric Elite partner won a tender for the replacement of some ageing APC Smart-UPS On-Line SRT units on site. Soon after, the university took the step of implementing a programme to ensure regular inspection and maintenance of the UPS devices on which it is so dependent. “In many respects, Schneider Electric is a victim of its own success – the UPS were so reliable and worked so well we hadn’t really realised that many of them were well past their use-by date!”
The university opted for a systematic approach to securing power by contracting with a specialist UPS service provider, selecting RMD on the basis of an open tender. With various single and three-phase UPS systems from Schneider Electric under contract, the approach to maintenance has since become much more proactive. RMD’s Scot Docherty says, “Our start point was to understand the condition of the UPS under contract using a simple traffic light scheme – there were a lot of red lights!”
Together, Darran and RMD started to renew the UPS and bring them up to spec. This ongoing programme covers the UPS installed in buildings, as well as adding UPS protection to some of the older campus buildings, which had never had the benefit of protection. In addition to the maintenance and modernisation services, RMD was also tasked to work with construction contractors to support them with sourcing and the installation of UPS to ensure power protection of edge server rooms in the new buildings.
The expertise of RMD has yielded benefits to the university, from procurement of UPS systems to maintenance and replacement, allowing the university to match new UPS systems to the exact requirements needed in each location.
“We’ve found it useful to involve RMD at the construction phase of each new building,” says Darran. “Sometimes a main contractor might recommend a UPS system that is wholly excessive to what we really need. Whereas, RMD, which has specialist expertise in the field, is much better placed to recommend what sort of UPS system we need and how many battery packs should be installed. So it’s great to have a relationship which allows us to ‘right-size’ our UPS requirements and, therefore, keep an eye on the efficiency and effectiveness of the proposed solution.”
The RMD relationship has made for a more systematic and regular approach to maintenance. “RMD knows us and our requirements and how we work,” says Darran. “Now, instead of waiting until something dies before replacing it, we have an ongoing system of regular maintenance and of replacing batteries and UPS units in accordance with their condition rather than their age.”
Two other important measures have been implemented as a result of the relationship. Firstly, following the installation of monitoring software – Data Centre Expert, part of Schneider Electric’s EcoStruxure IT data centre infrastructure management solution – Darran is now able to manage and monitor all elements of the data centre infrastructure, including UPS and cooling, centrally to ensure maximum efficiency and reliability.
Data Centre Expert provides a scalable monitoring software solution that collects, organises and distributes critical device information to provide a comprehensive view of equipment. Importantly, the application provides instant fault notifications for quick assessment and resolution of critical infrastructure events that could adversely affect IT system availability.
The software gives Darran’s small team of six full visibility of infrastructure equipment spread widely across the campus in different edge locations, with the ability to prioritise remedial tasks in the event of unforeseen circumstances and respond more quickly to events.
Secondly, and further demonstrating how RMD’s expertise has benefitted the university, bypass panels are now being installed as standard in the electrical design for infrastructure supporting the edge server rooms, as an aid to maintenance and replacement activities. “They’re not the cheapest things to put in, but they have saved us a lot of downtime. If a battery fails and needs to be replaced, for example, you just flick a switch to bypass the UPS and that allows you to keep IT services operating while you swap out any parts that need to be replaced.”
RESULTS
Immediate results from the university working with RMD and Schneider Electric include improvements to power availability as well as the serviceability of its infrastructure. By increasing temperature setpoints, the university is saving energy as a first step to moving towards becoming net zero carbon for IT services.
The improved monitoring and maintenance has resulted in a more efficient and reliable power-security environment that provides peace of mind to the IT staff and also presents opportunities for improvements in the area of sustainability. The insights from Schneider Electric APC UPS systems, APC PDUs and APC NetBotz sensors, made available using Data Centre Expert software, have enabled Darran and the IT team to collaborate more effectively with the university’s sustainability team, which is tasked with improving the overall carbon footprint of the campus.
The IT team has been slowly raising temperatures in its comms rooms, which naturally means using less power on air conditioning, using insights provided by Data Centre Expert and custom software written by the IT team. “I can query the data to generate helpful graphs that provide an overview of whether the temperature is right in a room and where it can be appropriate to raise the operating temperature for better overall efficiency,” says Darran. “Being able to mine the data allows us to only use the power that we need.”
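A minimal sketch of the kind of analysis Darran describes, assuming temperature readings have been exported to CSV. The file and column names here are hypothetical; this is not the team’s actual software or a Data Centre Expert API.

```python
# Hypothetical analysis of exported comms-room temperature readings.
# Column names ("timestamp", "room", "temp_c") are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("comms_room_temps.csv", parse_dates=["timestamp"])

# Daily mean temperature per room: rooms running well below the upper
# bound may be candidates for a higher setpoint.
daily = (df.set_index("timestamp")
           .groupby("room")["temp_c"]
           .resample("D").mean()
           .unstack(level=0))

ax = daily.plot(ylabel="Mean temperature (°C)")
ax.axhline(27, linestyle="--", color="red",
           label="ASHRAE recommended upper bound (27°C)")
ax.legend()
plt.show()
```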
In addition, monitoring using Data Centre Expert software together with NetBotz sensors ensures that servers, as well as the UPS batteries, are kept within recommended temperatures. This ensures that warranty requirements are maintained and the batteries are in an environment that maximises their useful lifecycle. Another benefit is that equipment changes can be planned according to their condition rather than their age.
On the recommendation of RMD, physical infrastructure in edge locations is now being deployed in new builds with bypass switches as standard and upgraded in older installations, improving the efficiency of maintenance operations with no break in IT services.
“We enjoy working with RMD – over the years, their site engineers have given us straight advice, which we’ve found to be trustworthy. This is backed up by the quality of Schneider’s products and solutions. They not only help us deliver a first-class student experience, but also help us to achieve our efficiency and reliability goals whilst working towards greater sustainability. Together, we’re giving an edge to the education of all those choosing to enrich their lives by studying at the University of Lincoln,” concludes Darran.
THE SKILLS GAP IS WORSENING, BUT THERE IS A WAY OUT
Jad Jebara, President and CEO at Hyperview, illuminates the path for companies to bridge the widening skills gap, safeguarding the flourishing data centre industry’s future.
In an age ruled by data, the data centre industry has become the backbone of modern society, supporting the infrastructure that underpins our interconnected world. However, a critical challenge now threatens the stability and growth of the industry: The widening skills gap.
According to the Uptime Institute, 58% of companies reported difficulty sourcing qualified candidates to fill vacancies in 2023, up from 53% in 2022. This shortage, if not addressed promptly, poses a major existential threat to the data centre industry, and to every modern business that relies on digital services and storage. Not only should this challenge be taken seriously, but it should be addressed as the number one threat to the industry.
EVOLVING TECHNOLOGY DEMANDS SPECIALISED SKILLS
The rapidly evolving technology landscape is looked at by so many of us with excitement, and rightly so. However, it also presents a range of new challenges for data centre operators to grapple with. Workloads are shifting beyond traditional data centres, leading to a more complex and distributed infrastructure.
Gartner predicts that by 2025, 85% of infrastructure strategies will integrate core data centres, edge data centres, colocation, and cloud services. To navigate this complexity, personnel now require a specialised set of skills spanning various disciplines.
The talent shortage in the data centre industry not only jeopardises its ability to meet growing technological demands, but also poses a significant threat to the industry’s transition toward a greener, low-carbon future. The shift towards sustainable practices and the development of environmentally friendly data centres demand expertise in energy-efficient technologies, renewable energy integration, and innovative cooling solutions.
Finding qualified candidates with the required skill set has become a daunting task. Challenges arise not only due to the rapid evolution of technology, but also because established professionals in the field are in high demand. Poaching is nothing new, but it is certainly on the rise. The Uptime Institute found that 42% of operators reported issues with staff being hired away, primarily to competitors. This represents a considerable jump from 17% in 2018.
The industry is also confronting the impending retirement of many experienced professionals. The workforce within the industry tends to be on the more mature side, as indicated by a Data Centre Knowledge survey, which found that only 13% of respondents were below the age of 44. The Uptime Institute reported that by 2025, at least 2.3 million personnel will be required globally to keep data centres operating. Yet, by this time, half of the world’s engineering staff are projected to retire, creating an enormous void of talent to fill.
BRIDGING THE SKILLS GAP
As the data centre landscape continues to evolve, operators must prepare for emerging technologies such as artificial intelligence and machine learning. A recent Capgemini survey highlighted the growing concern, with 59% of respondents considering the gap between talent availability and open roles a top business risk.
To stay ahead of the curve, data centres must start to look for other ways to fill the skills gap, such as proactively implementing training and development programmes for graduates entering the workforce. Efforts must be intensified to establish trade school and graduate talent pipelines into the data centre industry that prevent other sectors from poaching such valuable talent. Additionally, the focus should also be directed within, to retain current staff. By providing thorough training on emerging technologies and relevant regulations, these initiatives can help existing employees gain the necessary skills to continue a long, successful career in the modern data centre environment.
To broaden their talent pool, operators should start looking internationally. Companies such as Deel.com provide opportunities to hire across the globe, breaking down geographical barriers. Improving recruitment strategies is equally vital, focusing on networking opportunities, role-model presentations, and career progression. By emphasising the importance of skills in project management, technical networks, data management, cyber security, and multi-cloud environments, the industry can attract and retain top talent.
Another important step towards bridging the skills gap is to implement next-generation data centre infrastructure management (DCIM) applications into data centre operations. These technologies streamline management, automate tasks, and leverage AI and machine learning to enhance efficiency, resiliency and sustainability. The importance of evolving process maturity levels to achieve the four R’s - Recruit, Retain, Retrain and Reward - cannot be overstated. Next-gen DCIM technologies operate at maturity level five, in contrast to traditional processes, which typically sit at level one or two.
Data centres are currently transitioning towards a greener future, and DCIM software is central to this. Its integration provides real-time insight into energy consumption, forecasts energy needs, optimises temperature settings, benchmarks performance, and measures progress. This level of automation becomes especially invaluable during talent shortages, offering substantial support to ensure operational efficiency and environmental responsibility.
Hyperview, hyperviewhq.com
STEPPING TOWARDS A POWERFUL FUTURE
Isha Jain speaks to Louis McGarry, Sales and Marketing Director at Centiel, to learn more about his personal journey into the world of uninterruptible power supply (UPS).
IJ: Tell us about yourself and how you got into the sector.
LM: I left Bournemouth University with a degree in Design Engineering, and at the time, I didn’t even know UPS systems existed! I wanted to design aircraft for a living, but hours spent at a CAD machine turned out to be uninteresting. I applied for a sales engineering job and Centiel’s now Chairman, David Bond, was on the interview panel. Later, I received a copy of The UPS Handbook - of which David is the author - in the post, with a note asking me to read it and
come back for a second interview. That was 18 years ago and I’ve never looked back.
IJ: For those who may not know, can you give us an overview of the work that Centiel does?
LM: Centiel is a Swiss-based technology company designing, manufacturing and delivering industry-leading quality power protection solutions for critical facilities. The individual members of its R&D team developed and brought to market the first
three-phase transformerless UPS and the first, second, third and fourth generations of true modular, hot-swappable UPS. Centiel’s products protect critical power loads in locations across the world.
IJ: What is your role like, and what has been your greatest achievement?
LM: I wear several hats. I’m busy with client meetings, I work to ensure my team is aligned, I’m involved in marketing and there is a significant strategic side to my role too. My greatest achievement is growing a team of individuals who can plan projects and support clients on a much bigger scale than I did when I started out. I try to impart knowledge early, so they can achieve great things. Centiel is all about quality - from our Swiss manufacturing base to our industry-leading solutions, to our client relationships. This all begins with strong foundations and the training of our team of trusted advisors.
IJ: Are there any exciting projects that you are working on at the moment?
LM: We are currently working on projects across different industries, from transport to banking and finance to healthcare. We have recently launched an IP54 UPS, making our products suitable for use in semi-industrial settings, opening another market sector for Centiel.
IJ: Where does UPS fit in the overall sustainability of data centres?
LM: Sustainability means making informed decisions today which will have a positive impact on tomorrow. However, up until recently, the most sustainable options have often been overlooked in place of lowest purchase price. Now, due to the dramatic increase in energy costs, customers are seeking solutions which save energy and have a low total cost of ownership (TCO). The introduction of StratusPower, with its 30-year design life, 9-nines availability and almost 98% efficiency, can help organisations to move away from a ‘throw away’ culture with a genuinely sustainable offering that also helps them reduce their TCO.
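As a rough, hypothetical illustration of how UPS efficiency feeds into TCO – the load, efficiencies and tariff below are invented for the example, not Centiel figures:

```python
# Invented figures: annual energy cost of UPS losses at two efficiencies.
load_kw = 500      # protected load
tariff  = 0.25     # GBP per kWh (assumed)
hours   = 8760     # hours per year

def annual_loss_cost(efficiency: float) -> float:
    input_kw = load_kw / efficiency      # power drawn to serve the load
    return (input_kw - load_kw) * hours * tariff

for eff in (0.94, 0.98):
    print(f"{eff:.0%} efficient UPS: £{annual_loss_cost(eff):,.0f}/year in losses")
# ~£69,900 at 94% vs ~£22,300 at 98% - a gap that compounds over a
# 30-year design life.
```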
IJ: What does the future hold for Centiel products within the data centre industry?
LM: Centiel is a technology company which uses technology to solve challenges – these just happen to be in the UPS industry. We were initially focused on maximising availability and energy efficiency, but those technology challenges are now solved and we are focused on sustainability and how energy can best be managed in the future, especially for data centres. StratusPower is already hardware-enabled and is, therefore, future-ready to accept and harness alternative energy sources.
IJ: Is there a topic you believe needs more attention and discussion in the industry?
LM: Sustainability must be addressed. The world is already behind the curve, and boardroom conversations have simply come too late to achieve the UN’s 2030 sustainability goals. In the UPS industry, we can make a difference today to ensure organisations are more energy efficient, and there are longer-term measures which can be brought in too. People are often worried about change, but we have no choice.
IJ: What are some challenges you face as a UPS manufacturer?
LM: Although Centiel’s team has led development in the UPS industry for many years, the company is relatively young. Established competitors have deep marketing pockets, yet their technology is not as good or as sustainable as ours. Our main challenge, therefore, is an ongoing one around education. However, our specialist team offers free training and acts as a trusted advisor to help organisations make informed decisions about different UPS solutions and help them work out what the best long-term options are.
IJ: What’s next for you and your career?
LM: To continue to grow and develop along with Centiel. I want to be the leader of a team that helps to revolutionise critical power protection in data centres and delivers excellent advice and customer service. I hope to leave a legacy of people who can carry on supporting clients with our solutions for many years to come.
IJ: What are your interests away from work?
LM: I love spending time with my family. I have three young children and act as transport to all manner of activity clubs, from Cubs and Squirrels to karate, rugby and swimming – not necessarily in that order!
Centiel, centiel.com
WHAT LIQUID COOLING MEANS FOR CARBON REDUCTION IN DATA CENTRES
As data centre facilities produce a significant carbon footprint, with high-power densities pushing air cooling to its limits, they will soon require specialised cooling solutions. Junji Zhu and Maria-Anna Chatzapoulou, Principal Engineers – Mechanical at Cundall, explain further.
The journey towards net zero carbon practices is approaching its culmination: action. Regulatory bodies are now focusing on the effects of embodied carbon and Scope 3 emissions.
Embodied carbon refers to the total emissions from producing and transporting the materials of a building: the emissions released in processing raw materials, transportation, construction, and the end-of-life disposal of materials. This is relevant for companies that want to invest in data centre development and retrofitting, as European companies must report indirect emissions through their value chains from 2025.
One of the challenges is to develop net zero carbon data centres while covering the demand for high-performance computing (HPC). To do this, companies must look to
systems that consider all of their stakeholders across the whole supply chain. Additionally, they are likely to deploy liquid cooling technologies to cope with the associated steep increase in IT density.
UNDERSTANDING THE MEASUREMENT OF EMBODIED CARBON
There is a need for a standardised approach to measuring net zero carbon data centres, but there is currently no global agreement on what this should entail. However, some initiatives seek to achieve this, for example, the iMasons Climate Accord Group. The lack of standardisation has made it difficult for companies to accurately report their emissions, leading to concerns about greenwashing.
Whole Life Carbon Assessments (WLCA), starting from material extraction, are becoming more comprehensive and require a standard metric for embodied carbon impact assessment. Whilst the sector-wide metric of kgCO2e per m2 is used, the metric of kgCO2e per kW of IT capacity might be better suited when evaluating operating efficiencies, particularly considering the imminent growth of high-density HPC servers. This will provide a more accurate representation, improving audit processes.
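To make the difference between the two metrics concrete, here is a hypothetical comparison (all figures invented): a densification retrofit that adds IT capacity within the same hall improves the per-kW figure in a way the per-m² figure cannot show.

```python
# Invented figures comparing the two embodied-carbon metrics.
embodied_kgco2e = 12_000_000   # whole-life embodied carbon of the build
floor_area_m2   = 10_000       # gross floor area
it_capacity_kw  = 6_000        # design IT load

print(embodied_kgco2e / floor_area_m2)    # 1,200 kgCO2e per m2
print(embodied_kgco2e / it_capacity_kw)   # 2,000 kgCO2e per kW

# Doubling IT capacity in the same hall (assume +5% embodied carbon for
# liquid cooling plant) nearly halves the per-kW figure; per-m2 hides this.
print(embodied_kgco2e * 1.05 / (it_capacity_kw * 2))  # 1,050 kgCO2e per kW
```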
THE POWER OF LIQUID COOLING
The technologies of the future require HPC. Many organisations building their data centres will have to invest in campuses that increase their IT capacity in MW and rack density. However, transitioning is expensive, and by current requirements and predictions, unnecessary at an entire data centre scale. Instead, organisations should look to construct their data centres to be HPC-ready, utilising sustainable, targeted cooling systems such as liquid cooling. By being HPC-ready, companies can take advantage of technologies such as AI, while the data centre itself continues to operate with a low PUE.
HPC pushes traditional air-based cooling to its limits, potentially increasing costs and carbon emissions and impacting sustainability. In some instances, air-based adiabatic cooling systems, particularly at scale, may see a high demand for water consumption when trying to cool HPC racks, as the racks need to operate at a lower temperature with higher air speeds, making liquid cooling more efficient in these use cases. As the number of HPC racks and their densities increase, liquid cooling deployment will also increase. These systems can manage the increased energy consumption of HPC racks in a targeted way that doesn’t affect the standard rack systems around them.
There are two main groups of liquid cooling technology suitable for HPC: direct-to-chip and immersion/precision-immersion cooling. Another technology often discussed as liquid cooling - though strictly speaking it is not - that can be suitable for HPC is the Rear Door Heat Exchanger (RDHx). Each has different benefits, but the main advantage of using them is that they can circulate liquid precisely to cool HPC environments without affecting the standard air-cooled racks often situated close to them.
With direct-to-chip or immersion liquid cooling, it is possible to achieve the same IT processing capacity using less power input to the rack, due to the decreased demand for server fan power. The saving can be in the region of 10%, but it is not reflected in a lower PUE, as it sits on the IT power side of the equation. In addition, liquid has a far greater heat absorption capacity than air, so it can transport the heat from the racks more efficiently and at high liquid temperatures, potentially eliminating the need for compressors to reject the heat.
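A hypothetical worked example (invented numbers) of why that fan-power saving is real yet invisible in PUE, which divides total facility power by IT power:

```python
# Invented numbers: PUE = total facility power / IT power.
it_kw, overhead_kw = 1000, 300          # IT load includes ~100 kW of server fans
print((it_kw + overhead_kw) / it_kw)    # PUE = 1.30

# Liquid cooling removes ~10% of rack input (the fan power); assume the
# cooling overhead falls in proportion to the heat it must reject.
it_kw, overhead_kw = 900, 270
print((it_kw + overhead_kw) / it_kw)    # PUE still 1.30, yet total demand
                                        # fell from 1300 kW to 1170 kW
```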
TURNING STEPS INTO INDUSTRY STRIDES
Organisations that use HPC technologies must understand their carbon footprint and the technologies they use in order to comply with sustainability regulations. For most, a hybrid approach will be essential in the future, rather than a complete transition. It will be crucial for legacy facilities, as it can help operators move to more efficient systems while increasing IT capacity.
Cundall is already taking steps to address the need for a net zero carbon data centre. Its team is setting out what a data centre of this kind would look like in practice and what operators need to consider when designing one. It has developed tools to measure the embodied carbon affecting different areas of project development. This approach to definition, measurement and reporting is significant.
As AI and HPC continue to increase IT capacity demand in data centres, liquid cooling will become far more relevant. However, only with a sustainability-led mindset can we realise its actual impacts and utilise the technology to the fullest.
Cundall, cundall.com
DATA CENTRE COOLING: AN IMPERATIVE FOR INNOVATION
The rise of immense computing power presents unique cooling challenges. Nikolai Chakinski, Product Manager, Colocation at Neterra, shares that liquid cooling solutions are well-positioned to address these challenges.
The data centre services industry is experiencing a period of unprecedented growth, fuelled by the ever-increasing demand for computing power. This surge is particularly pronounced in the realm of high-performance computing (HPC). The raw computing power of data centres comes from two workhorses: Central Processing Units (CPUs) and Graphics Processing Units (GPUs). While each excels in specific tasks, both share a key characteristic: they’re becoming increasingly powerful, cramming more processing muscle into smaller physical footprints. This miniaturisation, while impressive, comes at the cost of more heat generation.
This translates to a rapid rise in data centre power consumption, pushing the boundaries of traditional cooling methods like air conditioning. These systems, while effective for a time,
are struggling to keep pace with the thermal demands of modern data centres.
THE LIMITATIONS OF AIR CONDITIONING
Air conditioning systems were not designed for the intense heat loads generated by today’s high-powered servers. They’re becoming increasingly inefficient and expensive to operate, struggling to meet the cooling requirements of these ever-more-powerful machines.
Traditional air conditioning systems have a practical upper limit of around 6-7kW of cooling capacity per rack on average. This falls short for the growing number of racks demanding 12kW or more, highlighting the need for innovative solutions.
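A rough sanity check, using standard air properties and an assumed temperature rise, makes the limit tangible: the airflow a rack needs scales linearly with its heat load.

```python
# Airflow needed to carry a rack's heat load: Q = load / (rho * cp * dT).
cp_air  = 1005   # J/(kg*K), specific heat of air
rho_air = 1.2    # kg/m^3 at room conditions
delta_t = 12     # K, assumed server inlet-to-exhaust temperature rise

def airflow_m3_per_s(load_w: float) -> float:
    return load_w / (rho_air * cp_air * delta_t)

for kw in (7, 12, 20):
    q = airflow_m3_per_s(kw * 1000)
    print(f"{kw} kW rack: ~{q:.2f} m^3/s (~{q * 2119:.0f} CFM)")
# Water carries roughly 3,500x more heat per unit volume per kelvin than
# air, which is the case for liquid cooling at these densities.
```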
LIQUID COOLING SOLUTIONS: A STEP FORWARD
Liquid cooling solutions offer a significant leap forward in cooling capabilities compared to traditional air conditioning. They boast superior heat dissipation, improved energy efficiency, and the potential to reduce operating costs. There are two primary types of cooling solutions: immersion cooling and liquid cooling with heat exchangers.
IMMERSION COOLING: THE PINNACLE OF EFFICIENCY
Immersion cooling stands as the most efficient liquid cooling technology available today. In these systems, servers are submerged in a special dielectric fluid, which directly absorbs the heat they generate.
This method offers the highest cooling capacity and the lowest energy consumption of all liquid cooling solutions, making it ideal for the most demanding applications such as AI and HPC.
DIRECT-TO-CHIP LIQUID COOLING: A VIABLE ALTERNATIVE
Direct-to-chip or cold plate liquid cooling provides another effective liquid cooling option. This approach brings the liquid closer to the heat source, circulating a coolant through a cold plate heat exchanger mounted on the chip. The heat dissipated by the computer chip is absorbed into the coolant loop.
Compared to air conditioning, cold plate liquid cooling offers increased cooling capacity and improved energy efficiency. This makes it a suitable solution for applications requiring moderate cooling capabilities, such as general-purpose computing.
A SHARED BENEFIT
Liquid cooling solutions hold the potential to significantly reduce the environmental impact of data centres by optimising energy consumption and water usage.
Immersion cooling, in particular, shines in terms of energy efficiency, requiring less energy to remove the same amount of heat compared to other methods. Additionally, liquid cooling systems can be repurposed to capture waste heat from data centres, which can then be used for building heating, for example. The sustainable approach here demands reusing the energy one way or another.
THE FUTURE OF DATA CENTRE COOLING
As the data centre industry continues to evolve, innovative cooling solutions will play a central role in driving progress. Liquid cooling, with its superior cooling capabilities and energy efficiency, represents the path forward for building data centres that are truly future-proof.
Neterra’s latest data centre, SDC 2, exemplifies this commitment by utilising cooling technologies to cater to the growing demand for high-performance computing while also prioritising sustainability. The facility leverages a combination of advanced cooling and structural engineering techniques to minimise energy consumption and environmental impact.
As the data centre industry continues its journey of transformation, liquid cooling solutions stand ready to play a pivotal role in driving innovation.
Neterra, neterra.net
HOW TO UNLEASH THE FULL POTENTIAL OF DATA CENTRE LIQUID COOLING
Mark Seymour, Distinguished Engineer at Cadence, invites readers to see how data centre digital twins can ensure liquid cooling reaches its full potential.
Reducing emissions is a top priority for leaders across the globe, pushed by the demands from both governments and customers. The data centre industry is no different. Luckily, there is good news. Those looking to drive both sustainability and efficiency can consider liquid cooling as part of the solution.
However, facility leaders and operators need an in-depth understanding of the technology to introduce it effectively. That includes knowing not just the pros, but also the limitations, and how tools such as data centre digital twins can help to ensure liquid cooling reaches its full potential.
LEGACY INFRASTRUCTURE CREATES COMPLEXITIES
The first challenge stakeholders must navigate is the integration of liquid cooling with legacy air cooling infrastructure in traditional data centres. Coordinating the intricate flow networks of both systems can be complex, and liquid cooling has the potential to disrupt established air cooling patterns, so existing infrastructure must be retrofitted carefully. Moreover, new liquid cooling technology that can work harmoniously with older systems could be costly for legacy data centres that have already invested heavily in air cooling.
MATERIAL LIMITATIONS
Operational complexity can arise even in new facilities that incorporate liquid cooling from the get-go. After all, introducing fluid connections is an entirely different setup from what many data centre professionals are used to.
Immersion cooling, where servers are submerged in a mineral oil (or equivalent) bath, or cold plate technology, where a metallic plate with coolant inside removes heat directly from the components, introduces a completely new operational paradigm. It requires specialised training and expertise to navigate this unfamiliar territory and ensure efficiency and safety. In short, the transition from air to liquid cooling demands new skills.
Even with a willingness to embrace new methods, practical limitations exist. Theoretically, immersion liquid cooling should remove 100% of the heat from chips into the liquid. However, material incompatibilities and the systems’ reliance on buoyancy-driven flow create limitations. For example, in the immersion model, the insulating plasticisers in wires can react with the coolant, causing degradation and brittleness and impacting the equipment’s longevity. Furthermore, as chips become more densely packed to facilitate higher power densities, immersion systems may find it difficult to remove heat through a buoyancy-driven flow mechanism.
Cold plate technology offers an alternative approach but comes with its own dilemmas. Subpar coolant quality can clog and corrode the plate, diminishing the efficiency of heat removal. Because the hot chips and the cold plate cannot be effectively insulated from the air, and cold plates are generally only applied to critical chips (for example, CPUs, GPUs and DIMMs), a portion of the heat will escape into the surrounding environment. Therefore, while offering advantages in certain scenarios, cold plate technology still requires careful consideration and will not be a blanket solution for all data centre cooling needs.
In light of the inherent challenges associated with cold plate and immersion cooling technologies and the absence of a clear
frontrunner for widespread adoption, facility owners and operators are tasked with critical evaluation.
THE DIGITAL TWIN ADVANTAGE
Data centre digital twins, virtual replicas of physical data centre facilities, can offer the insight needed to support liquid cooling decision-making. These valuable tools enable different cooling scenarios and technologies to be trialled in the digital realm, so operators can make informed decisions about implementing changes in the real world. This helps maximise performance and minimise downtime, both key measures of whether the introduction of liquid cooling has been successful.
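As a toy illustration of the kind of what-if question a digital twin answers at far higher fidelity – this is not Cadence’s solver, and every figure below is invented – consider how hall temperature shifts when part of the IT load moves from air to direct-to-chip liquid:

```python
# Toy steady-state heat balance; all figures are invented for illustration.
ua_w_per_k = 8000    # W/K, effective air-cooling conductance of the hall
t_supply_c = 20.0    # supply air temperature, deg C

def hall_temp(heat_to_air_w: float) -> float:
    """Steady-state hall temperature for a given heat load rejected to air."""
    return t_supply_c + heat_to_air_w / ua_w_per_k

print(f"all-air, 100 kW to air:          {hall_temp(100_000):.1f} C")  # 32.5
print(f"30% moved to liquid, 70 kW left: {hall_temp(70_000):.1f} C")   # 28.8
```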
Turning to digital twins to implement liquid cooling effectively will also put data centres in a stronger position to meet the regulations looming on the horizon, including the Energy Efficiency Directive (EU/2023/1791) and the Corporate Sustainability Reporting Directive (CSRD), both of which mean larger companies must disclose their carbon usage figures, both direct and indirect. Digital twins help because they enable the design, implementation and operation of liquid cooling to achieve carbon and energy reduction goals, and they can also provide customised dashboards and reports to help facilities oversee the information these regulations require.
AN INGREDIENT FOR SUCCESS
Clearly, despite the hurdles, the potential of liquid cooling remains significant - it is a promising pathway to improved efficiency for data centres. However, this is not a solution that data centre managers can simply plug and play. Implementation cannot be rushed and requires intelligent insight. To manage the infrastructural and operational complexities of liquid cooling, operators need a safe test bed for trialling its introduction. Digital twins can offer this and help drive success.
Cadence, cadence.com
WHY IMMERSION COOLING IS NOT YET A VIABLE OPTION FOR MANY DATA CENTRES
Considering a wide range of existing data centre cooling environments, Paul Mellon, Operations Director, Stellium Datacenters, offers his viewpoint on immersion cooling.
To say that immersion cooling is not yet a viable option in many data centres is quite a bold statement in an environment where immersion cooling can achieve the highest level of cooling efficiency. There are many in the data centre and communications industry who view immersion cooling as a panacea to energy inefficiency. In many ways, it has the capability to bring rack power density and efficiency to new levels.
However, this comes at a price in terms of the immersion environment, which is usually a tank filled with dielectric fluid. Both the tanks and the dielectric fluid bring a new range of issues for the data centre environment to embrace:
• Weight: Fully loaded, tanks can vary from 500kg to several thousand kilograms, requiring 20kN+ floors to support them.
• Physical size: Tanks can vary from 1,000mm x 800mm x 1,500mm high to 6,000mm x 2,000mm x 2,000mm high.
• Dielectric fluid handling: This is a significant issue. Depending on the specific dielectric, there can be health and safety issues, as well as the practical issue of dealing with 230/400V in a dielectric environment.
• Power density: HPC power densities can be very challenging for many of the mature data centres.
• Power and communication interfaces with the tank.
• Removal/reinstatement of IT kit from the immersion tank.
• Specialist training for staff: We are already challenged by a significant deficit of trained talent in the data centre industry, so the additional training will add to an already challenged situation.
While none of these elements is particularly challenging for a greenfield development, many of the 3,000 existing data centres in North America, plus another 3,000 in Europe, will struggle to accommodate the access, weight, space and staff training requirements. Currently, there are only a dozen or so data centres in the world which are OCP (Open Compute Project) certified and can deliver on these elements, Stellium Datacenters being one of them.
In time, the immersion solution will evolve into a product that will be more favourable to deployment in a broader range of data centre environments. Currently, it is primarily used by clients who have purpose-built facilities specifically designed for this cooling method. There are also the few within the existing data centre pool that, with significant investment, can be adapted for immersion cooling.
This may all sound too negative about immersion cooling. As an engineer, Paul has been trained to design the most efficient system while ensuring the design works in an ‘existing world’ environment. His view is that the world will take some time to evolve to a point where immersion cooling becomes an off-the-shelf solution that fits into a wide range of existing data centre cooling environments.
There are some 1.3bn internal combustion engine cars in the world today, alongside 26 million fully electric cars. In the last three years, the shift to fully electric vehicles has gathered pace as the product has become more refined and practical. It has taken some 20 years for the motor industry to evolve to this position. HPC immersion cooling as a design, like the electric car, will continue to evolve and mature into a product that is readily deployable to a wide range of data centre facilities, rather than only to the current purpose-designed immersion cooling facilities.
A BRIDGE TOO SOON?
So, the expression ‘a bridge too far’ might be better rendered as ‘a bridge too soon’, certainly for the very many existing data centres that are not OCP certified and cannot meet the fundamental requirements of immersion cooling. Therefore, in the interim, the existing range of HPC cooling solutions will continue to deliver robust, efficient solutions for clients:
• In-row cooling for applications up to 40kW per rack
• Rear door cooling up to 50kW
• Direct-to-chip/cold plate up to 100kW
These are all tried and tested robust solutions that can freely operate within the existing data centre environment. They have been deployed in many existing data centres. All can be configured to deliver a PUE of sub-1.2. In many ways these non-immersive options maintain the flexibility of the traditional data centre to evolve over time.
Flexibility really is the key term here. Taking an existing rack in one of the 6,000 data centres in Europe and North America and fitting it with a rear door cooler offers an immediate solution to support HPC demands. This route to HPC also has the real value of extending the life of existing data centre facilities without creating the significant construction carbon burden of building new ones.
Stellium Datacenters, stelliumdc.com
UNLOCKING COOLING EFFICIENCIES WITH REAR DOOR HEAT EXCHANGERS
Data centre organisations are striving to balance innovation with sustainability, especially when it comes to finding efficient cooling solutions for the data centres powering AI infrastructure. John Hall, Managing Director, nLighten UK, says that rear door heat exchangers can be a promising solution.
ENHANCED COOLING EFFICIENCY
Rear door heat exchangers, also known as rear door coolers or heat exchanger doors, provide a targeted cooling solution for high-density server racks. By directly cooling the exhaust air expelled from servers, these heat exchangers optimise airflow management, ensuring that hot spots are mitigated effectively. This targeted cooling approach enhances the overall efficiency of the cooling system, enabling data centres to maintain optimal operating temperatures even under heavy computational loads.
ENERGY SAVINGS
Traditional cooling methods, such as raised floor air conditioning or overhead cooling units, often result in significant energy wastage due to inefficient airflow distribution and cooling redundancy. Rear door heat exchangers eliminate these inefficiencies by leveraging the hot air generated by servers to facilitate the cooling process. By harnessing this waste heat, data centres can reduce their reliance on mechanical cooling systems, resulting in substantial energy savings and lower operational costs, particularly in AI environments characterised by continuous high-power computing.
SCALABILITY AND ADAPTABILITY
One of the key advantages of rear door heat exchangers is their scalability and adaptability to evolving AI workloads. As organisations scale their AI infrastructure to meet growing computational demands, traditional cooling solutions may struggle to keep pace. Rear door heat exchangers, however, can be seamlessly integrated into existing server racks or retrofitted onto new installations, providing a flexible cooling solution that can easily accommodate changes in rack density or configuration. This scalability ensures that data centres remain agile and responsive to the dynamic requirements of AI applications without compromising on cooling efficiency.
IMPROVED RELIABILITY
Maintaining optimal operating temperatures is crucial for the reliability and performance of AI hardware. Excessive heat can degrade component lifespan, increase the risk of system failures, and compromise the accuracy of AI algorithms. Rear door heat exchangers can play a pivotal role in ensuring the reliability of AI infrastructure by actively dissipating heat from servers, thereby extending equipment lifespan and reducing the likelihood of costly downtime. By maintaining consistent temperatures, these heat exchangers contribute to the overall resilience and uptime of data centre operations, essential for mission-critical AI workloads.
ENVIRONMENTAL SUSTAINABILITY
In an era where environmental sustainability is a top priority for businesses worldwide, rear door heat exchangers offer a compelling solution to reduce the carbon footprint of data centres. By optimising energy efficiency and minimising reliance on traditional cooling
methods, these heat exchangers help data centres achieve higher levels of energy efficiency and reduce the greenhouse gas emissions associated with cooling operations. Additionally, the reuse of waste heat for heating applications or district heating further enhances the environmental benefits of rear door heat exchangers, transforming data centres into energy-positive assets that contribute to local sustainability initiatives.
Rear door heat exchangers, therefore, represent a game-changing technology for data centres powering energy-hungry AI applications. There are, however, a number of practical points to weigh before installing them. If the data centre uses raised floors, can the additional weight be supported, given that the 'wet weight' of a rear door heat exchanger, including the frame, can be anything between 120 and 190kg? What existing infrastructure can be reused? Can the existing air cooling equipment support the new hybrid cooling infrastructure? Is the site water supply suitable? And is there capacity for growth as cooling requirements increase?
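As a rough illustration of the water supply question, the sketch below estimates the flow a 50kW rear door needs, using the standard steady-state heat balance; the figures are assumptions for illustration, not nLighten specifications:

```python
# Rough sizing sketch for a rear door heat exchanger's water loop.
# All figures are assumptions for illustration, not vendor specs.
# Heat removed Q = m_dot * cp * delta_T (steady state).

rack_heat_kw = 50.0        # assumed rack load absorbed by the door
cp_water = 4186.0          # specific heat of water, J/(kg*K)
delta_t = 10.0             # assumed water temperature rise across door, K

m_dot = rack_heat_kw * 1000.0 / (cp_water * delta_t)   # kg/s
litres_per_min = m_dot * 60.0                          # ~1 kg per litre
print(f"required flow: {m_dot:.2f} kg/s (~{litres_per_min:.0f} l/min)")
# ~1.19 kg/s, roughly 72 l/min per 50kW rack: a useful first check
# against site water capacity before committing to a retrofit.
```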
Having evaluated these factors carefully and achieved the optimum balance, data centre operators and owners can expect significant improvements in cooling efficiency, reduced energy consumption, enhanced scalability, increased reliability, and environmental sustainability.
In summary, these innovative cooling solutions are poised to drive the next wave of efficiency and performance in AI infrastructure. As organisations continue to embrace AI technologies to gain a competitive edge, investing in rear door heat exchangers is a strategic decision that promises long-term benefits for both business operations and the planet.
SUSTAINABILITY FEATURE
SUPPORTED BY
UPS FOR TOMORROW’S WORLD
Sustainability means making informed decisions today, which will have a positive impact on tomorrow. So, when replacing an uninterruptible power supply (UPS), Louis McGarry, Sales and Marketing Director at Centiel, says that we have a duty to research and do our due diligence to select solutions which will offer a better, greener future.
Until recently, the most sustainable options were often passed over in favour of options that saved on upfront CapEx. Money was the driver. Now, due to the dramatic increase in energy costs, we are for the first time seeing that same driver push customers towards solutions which also save energy. Sustainability now goes hand-in-hand with cost savings.
IMPROVING EFFICIENCY
The positive news is that most modern UPS already offer high levels of efficiency. For example, both of Centiel's modular UPS solutions, CumulusPower and StratusPower, have online efficiencies of over 97%.
Choosing the most efficient UPS available, and right-sizing it to the load so that it works at the optimal point on its efficiency curve, is essential.
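To see why right-sizing matters, the short sketch below compares annual losses at two efficiency points; the load and efficiency figures are assumptions for illustration, not Centiel measurements:

```python
# Illustrative comparison of annual UPS losses at two online
# efficiencies; figures are assumptions, not vendor measurements.

load_kw = 400.0
hours_per_year = 8760

def annual_loss_kwh(efficiency: float) -> float:
    # Input power = load / efficiency, so losses = load * (1/eff - 1).
    return load_kw * (1.0 / efficiency - 1.0) * hours_per_year

for eff in (0.97, 0.94):
    print(f"{eff:.0%} efficient: {annual_loss_kwh(eff):,.0f} kWh lost/year")
# ~108,000 kWh vs ~224,000 kWh: a ~115 MWh/year gap from three
# percentage points of efficiency, which is why right-sizing matters.
```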
Selecting a UPS with variable load management can help too. In a situation where the load can vary, UPS modules can be put into a ‘sleep mode’. While not switching power, their monitoring circuitry is fully operational, so they are instantaneously ready to switch power if needed. Because it is the switching of power that causes the greatest energy losses, system efficiency is significantly increased.
Peak shaving is also becoming more common. It is a way for facilities to actively use their own energy storage to save costs during peak times of demand on the national grid. Peak shaving can help customers avoid the higher electricity prices or fees applied when they exceed their maximum peak load. It can be achieved either by reducing usage, switching off non-essential equipment, or by drawing on other energy sources such as battery storage or UPS systems. With a UPS, peak shaving works by taking less energy from the grid while the batteries discharge simultaneously during periods of high-rate demand.
For peak shaving to work successfully, the necessary technology must be included in the UPS, so product selection from the outset needs to be considered carefully. The type of UPS battery used is also critical.
Using LiFePO4 batteries, it is possible to reduce costs by taking some energy from the batteries instead of the national grid during peak times of the day, and recharging them at times of lower demand when electricity is cheaper, such as at night. It would not make sense to discharge the batteries completely or quickly, so to preserve battery life, small amounts of battery energy are drawn alongside the grid supply, shaving pence off the bill in a way that adds up to significant savings over time. Centiel's Intelligent Flexible Battery Charging functionality is customer-enabled, works to site-specific parameters and operates automatically.
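As a simplified illustration of the control logic involved (hypothetical thresholds and names, not Centiel's actual firmware), a peak-shaving decision for one metering interval might look like this:

```python
# Minimal peak-shaving logic sketch; limits are assumed placeholders.

PEAK_LIMIT_KW = 500.0       # assumed contracted maximum demand
MAX_SHAVE_KW = 60.0         # cap on battery contribution, to preserve life
MIN_SOC = 0.80              # assumed floor on state of charge

def grid_draw(site_demand_kw: float, soc: float) -> tuple[float, float]:
    """Return (power from grid, power from battery) for one interval."""
    excess = site_demand_kw - PEAK_LIMIT_KW
    if excess <= 0 or soc <= MIN_SOC:
        return site_demand_kw, 0.0          # no shaving needed or allowed
    shave = min(excess, MAX_SHAVE_KW)       # small, life-preserving slice
    return site_demand_kw - shave, shave

print(grid_draw(540.0, soc=0.95))   # (500.0, 40.0): peak clipped
print(grid_draw(480.0, soc=0.95))   # (480.0, 0.0): below the threshold
```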
STRATUSPOWER
However, Centiel has gone even further with the recent introduction of its latest sustainable UPS innovation, StratusPower. StratusPower shares all the benefits of its award-winning three-phase, true modular UPS, CumulusPower, including '9 nines' (99.9999999%) availability to effectively eliminate system downtime, class-leading 97.6% online efficiency to minimise running costs, and true 'hot swap' modules to eliminate human error in operation. It now also includes long-life components to improve sustainability.
Uniquely, StratusPower offers a 30-year design life, so facilities will not be hit with component lifecycle replacements five years down the line. The system is fully scalable and can be enhanced over the years to ensure the latest technology is always on board. It's a distinct step away from our 'throw away culture' to one where the UPS can be upgraded or repaired rather than replaced, extending its useful working life.
HARNESSING RENEWABLES
What if StratusPower could also harness renewable energy? Well, StratusPower is already hardware-enabled and, with adaptations to the software/firmware, is future-ready to accept alternative energy sources.
To save energy, become more sustainable and lower carbon footprints, organisations will need to plan many years ahead. However, with the proper research, there are now options available to contribute more positively to tomorrow’s world, where managing and reducing future energy use will become second nature.
Let’s see what changes we can make today to reduce energy, save costs and become more sustainable for the future.
Centiel, centiel.com
SHARING SUSTAINABILITY CHALLENGES AND BEST PRACTICES
R&M considers a sustainable approach to be a prerequisite for longevity. As a 60-year-old corporation dealing with sustainability from the outset, Markus Stieger, Chief Operating Officer, shares some of R&M’s findings and best practices.
The Corporate Sustainability Reporting Directive (CSRD) and other measures related to the European Green Deal are impacting the data centre market for fibre and copper cables, just like every other industry. The CSRD considerably extends the scope of the current Non-Financial Reporting Directive (NFRD), and the data centre sector is working to get to grips with these regulations and understand what they mean for operations, processes and supply chains.
It’s never too early to start documenting and collecting data continuously. Like other businesses, R&M had to start collecting the data it had never tracked in detail before.
CO2 monitoring, for example, is particularly difficult. Everybody is talking about CO2 reduction, but nobody seems to know exactly how much CO2 they're emitting! Setting up a process takes time. Two years after starting data collection, Markus says that R&M has a report for Scope 1 and 2 emissions, whereas Scope 3 is still a work in progress.
People are just beginning to understand what the CSRD requires, particularly when it comes to estimating Scope 3. How does a company get the information from its partners? How does it ensure it's audit-ready?
THE ‘CONNECTING THE PLANET’ SUSTAINABILITY STRATEGY
Built around R&M's core business of providing connectivity, the strategy categorises 34 main goals into four areas:
• Connecting people encompasses all topics relating to social commitment.
• Connecting nature addresses all environmental and climate protection issues.
• Connecting ethics addresses the ethical and compliance principles the company acts on.
• Connecting circularity encompasses all topics of the R&M value chain.
“We want to seize new opportunities, retain our financial independence, and sustainably invest our profits, as we have done in the past,” says Martin Reichle.
“R&M is a family-owned company with a long-term focus. We are proud to be celebrating our 60th anniversary in 2024. This alone is proof of how solid and sustainable our business model is,” says Peter Reichle.
By the end of this year, R&M plans to have a first Scope 3 estimation in place, guiding its goal of reducing greenhouse gas emissions by 50% by 2030. Ongoing data collection means employees need training, or new people must be hired. For firms big and small, it is a very specific, time-consuming job requiring a great deal of expert knowledge and teamwork.
Getting your documentation right is very important. Until now, assessors have often simply checked whether companies had policies in place; if so, a certain number of points was awarded. However, just because policies have been introduced doesn't necessarily mean that people are acting on them. Now, auditors are beginning to ask questions such as: do people work with the policies? What is the specific result? It is also important to document in a way that's clear to others. The burden of proof lies with oneself!
R&M spent a long time documenting what it had already been doing for a considerable period. Once a larger team was involved, data was collected more professionally and the practices evolved - even though there are still gaps to fill. Markus imagines that smaller businesses may struggle, as they cannot conclusively demonstrate they are doing things right if they simply haven't kept records.
R&M has been focusing on gathering data and having documentation in place to prove that it complies with everything from environmental standards to human rights and labour laws, but this is hard to organise without adjusting resources. It can place a lot of pressure on individuals who suddenly need to take it on in addition to their 'regular' jobs. Although implementing new systems takes manpower, it will, in fact, reduce the workload in coming years. Of course, it's also vital to make sure all information is provided in a way that is immediately understandable and useable by everyone involved, including the auditor.
EcoVadis has been assessing R&M’s business since 2016. Through EcoVadis, the company has learned which data it had to start tracking to improve its scores. Hence, the company did its homework and started documenting policies that had been ‘unwritten rules’ for many years. These topics are transforming from ‘voluntary’ to ‘mandatory’ quickly. The sooner one starts to work on documentation and data collection to comply with regulations, the better.
Transformation takes time, so it is important to engage, empower and involve people to create motivation. By making people part of the process, Markus believes that R&M empowers them to take initiative. Last year, it started a global sustainability ambassador network to foster exchange. R&M’s motto is, ‘Every action, no matter how small, has an impact’.
The company is also starting to rethink processes and products that have remained unchanged for many years, including the materials it utilises. Technology is advancing, and so are the recyclable or recycled materials that can go into its products. Markus shares that the company started out as a component provider and evolved into a full solution provider, which has stimulated an alignment between different business lines and the value chain. The only constant is change, and he believes they are just starting to ask questions such as: What happens with a cable when it has been in the ground for 25 years? Which parts can we reuse? What happens when a product reaches the end of its lifecycle? Can we prolong its operational lifetime? Give it a second life?
R&M's first Corporate Social and Environmental Responsibility (CSER) report was released in 2010 - quite early, especially as the company is not obliged to report figures to shareholders. Initially, it released a report every two years, but since 2021 it has published an edition annually as part of management's strategic agenda.
Putting out a CSER report can significantly boost reputation and enhance brand value by making a commitment to sustainability and social responsibility tangible. This matters now that investors consider environmental, social and governance (ESG) factors as part of their investment criteria. CSER reporting also helps
identify and manage environmental and social risks at the earliest stages, while identifying inefficiencies and waste, helping bring about operational improvements and cost savings.
Last year, the company carried out its first in-house double materiality assessment. This determines whether a sustainability topic is critical for a business or not. The company's main learning was that it should have done this type of analysis earlier!
It is very important to evaluate the most critical sustainability topics for the business, to narrow them down and focus on the most relevant ones. As topics change, a continuous materiality assessment is important. Current focus areas include CO2, packaging and waste, transportation, human rights, equality, diversity and circularity.
“On to the next 60 years of doing business sustainably!” says Markus.
He believes that we are all on a learning path together and need to embrace this societal shift and turn it into business opportunities that benefit business, environment and society while complying with the law. None of us is going to save the planet alone, even the tiniest action has an impact. It adds up and it’s scalable.
Reichle & De-Massari, rdm.com
WORKING EFFICIENTLY WITH POWER ELECTRONICS IN DATA CENTRES
By leveraging power electronics, data centres can improve reliability, efficiency, flexibility and sustainability. Jorlan Peeters, Managing Director at HyTEPS, explains how we are seeing more and more power electronics in data centres worldwide – which can bring significant advantages, as well as complications.
Power electronics uses electronic (semiconductor) components such as diodes, thyristors, MOSFETs and IGBTs to switch, control and convert significant electrical power. They might be used to convert direct current to alternating current or change voltage level or frequency. Power electronics enable more efficient energy conversion and provide flexibility in managing and controlling electrical energy.
In data centres, power electronics are used in a variety of ways.
Some examples are:
• Inverters and converters used to bring mains voltage to the levels required by equipment, or to convert UPS direct current to AC power.
• Adapt energy supply to specific equipment and loads in the data centre. This can help optimise energy consumption and deal with peak loads.
• Convert and distribute energy more efficiently, leading to lower operational costs and a reduced carbon footprint.
• Integrate variable renewable energy sources more easily and efficiently.
• Reduce generated heat and optimise cooling systems, for example with Variable Frequency Drives (VFDs) for fans and pumps (see the sketch after this list).
• Power Factor Correction (PFC) of (non-linear) loads.
• Power electronics make smaller equipment possible, which can lead to significant space savings.
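To illustrate the VFD point from the list above, the sketch below applies the standard fan affinity law, under which power scales with the cube of speed; the rated power is an assumed figure, not a HyTEPS measurement:

```python
# Fan affinity-law sketch showing why VFDs cut cooling energy.
# For a fan, flow scales with speed and power scales with speed cubed.

rated_power_kw = 15.0        # assumed rated fan power at 100% speed

for speed_fraction in (1.0, 0.9, 0.8, 0.7):
    power = rated_power_kw * speed_fraction ** 3
    print(f"{speed_fraction:.0%} speed -> {power:5.2f} kW")
# 80% speed draws ~7.7kW, about half the rated power, which is why
# modest speed reductions on fans and pumps yield outsized savings.
```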
A FEW CHALLENGES
Power electronics place higher demands on power quality and on the design of the energy infrastructure. If these higher requirements are not met, the result can be electricity dropouts, which in turn lead to production loss, damage to the installation and malfunctions. Installation components may also age more rapidly and need to be replaced sooner than necessary.
Power electronics in a data centre can cause several problems, mainly due to their critical role in managing and distributing electrical power across various systems:
• Power quality issues such as voltage spikes, sags and electrical noise can be exacerbated. Poor power quality can lead to data corruption, equipment malfunctions, and reduced equipment life.
• Installation, maintenance, repair and detecting and resolving potential vulnerabilities require specialised knowledge. Failure to maintain these systems can lead to unexpected downtime, which is highly detrimental in a data centre environment.
• Switching power supplies, for example, in servers and UPS systems, can generate harmonic currents. These can interfere with other equipment and reduce efficiency of the electrical installation.
• They can cause Electromagnetic Interference (EMI) and voltage fluctuations, which may interfere with the operation of other devices.
• Overheating due to inadequate cooling can shorten the operational lifetime of components and lead to failure.
• Switching power electronics on and off can cause transient voltages and currents that can be harmful to other devices.
• An improperly executed grounding system can lead to undesirable currents and potential differences.
A PRACTICAL CASE STUDY: LED LIGHTING IN A DATA CENTRE
LED lighting offers many practical advantages in data centres. The longer lifespan of LEDs (often thousands of hours more than traditional fluorescent or incandescent bulbs) means less frequent replacements, lower maintenance costs and less downtime. LED lighting consumes significantly less energy and is often dimmable. Given the scale of data centres, this can bring significant energy and cost savings. However, LED relies on power electronics such as dedicated drivers, switching power supplies, dimmers and current control circuits. It is therefore smart to properly map out the possible consequences before rolling out LED on a large scale in a data centre.
HyTEPS conducted research for a client on the differences between a standard fluorescent fixture with an HF starter and an LED fixture. A report provided insight into the impact of both options on the electricity grid and on power quality (the quality of voltage and current).
Analysis showed that the LED luminaires' inrush current was higher, but their nominal power consumption was considerably lower.
The company then extrapolated the results to an installation of 500 luminaires and concluded that third harmonic currents could generate heat in the neutral conductor. Without adequate modifications to the system, a fire might break out. Based on this information, plus continuous monitoring and optimisation, HyTEPS was able to present changes to the design, and the client was able to switch to LED with peace of mind.
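The underlying arithmetic is worth spelling out. Third harmonics are 'triplen' harmonics: in a balanced three-phase system they arrive in phase on all three phases and add in the neutral rather than cancelling. Below is a simplified estimate, with assumed per-luminaire figures rather than the study's actual data:

```python
# Why triplen harmonics load the neutral: a simplified estimate.
# Fundamental currents cancel in the neutral of a balanced three-phase
# system, but third-harmonic (150Hz) currents are in phase across all
# three phases and sum instead.

luminaires = 500
i3_per_luminaire = 0.03      # assumed third-harmonic current per LED, A

per_phase = luminaires / 3 * i3_per_luminaire     # ~5 A per phase
neutral_i3 = 3 * per_phase                        # triplens add, not cancel
print(f"third-harmonic neutral current ~ {neutral_i3:.1f} A")
# ~15 A in a conductor often sized on the assumption of near-zero
# neutral current: enough to cause heating if not designed for.
```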
This illustrates how modelling, maintenance, monitoring and adherence to best practices can help identify and address (potential) issues with power electronics in the data centre. Furthermore, by paying attention to power quality, downtime can be minimised or prevented. That’s why HyTEPS recommends continuous monitoring and analysis to identify and address issues such as harmonics, voltage dips or voltage spikes.
Good power quality reduces stress on electronic components, prolonging their lifespan. It minimises the risk of damage due to power disturbances such as voltage spikes, sags or electrical noise, thereby enhancing the reliability of the data centre’s operations. By maintaining high power quality, the chances of equipment failures and associated downtime are significantly reduced, and data centres can ensure better integrity and accuracy of data processing and storage. Equipment operating under optimal power conditions consumes less energy, requires less maintenance and is less likely to experience premature failure. Furthermore, data centres can comply with strict regulations regarding data integrity and uptime.
HyTEPS, hyteps.nl
ADDRESSING CHALLENGES IN THE ERA OF SURGING AI WORKLOADS
In the fast-paced world of technology, the rise of AI and high-density computing has become a game-changer, redefining what data centres need to handle.
Sam Bainborough, Sales Director EMEA-Strategic Segment Colocation and Hyperscale at Vertiv, invites data centre experts to review their strategies to ensure a holistic approach.
The increasing use of AI applications has created huge demand for computing and power, putting significant pressure on data centres and pushing them to adapt quickly to evolving needs.
According to Statista, the AI market is projected to reach US$305.90bn in 2024 and have an annual growth rate (CAGR 2024-2030) of 15.83%, resulting in a market volume of US$738.80bn by 2030.
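The projection is easy to sanity-check by compounding the 2024 figure at the stated rate over the six years to 2030:

```python
# Quick sanity check of the Statista projection cited above.

market_2024_bn = 305.90
cagr = 0.1583

market_2030_bn = market_2024_bn * (1 + cagr) ** 6
print(f"US${market_2030_bn:.1f}bn")   # ~US$738.8bn, matching the figure
```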
As AI workloads continue to soar and the need for computing power becomes more intense, data centre architects find themselves
at a crossroads. They must rethink their design strategies, making significant changes to network architecture, power systems and thermal management if they are to continue to be fit for purpose.
TWO CRITICAL AREAS OF EVOLUTION
This paradigm shift emphasises two pivotal areas that demand simultaneous evolution to effectively accommodate the escalating demands of AI workloads.
1. Power efficiency
A primary challenge is the substantial increase in power requirements, driven by the deployment of specialised processors crucial for handling intricate AI workloads. Navigating this challenge requires data centres to explore and implement innovative power-efficient solutions. Prioritising energy-efficient hardware and leveraging advancements in processor technology becomes paramount. This strategic approach helps minimise carbon footprint, while meeting the ever-growing power needs of AI-centric operations.
2. Thermal management and cooling solutions
The second critical challenge stems from the heightened heat generated by these specialised processors. To maintain optimal performance and prevent hardware failures due to overheating, data centre architects must invest in innovative cooling technologies such as liquid cooling and advanced heating, ventilation and air conditioning (HVAC) systems. Striking a delicate balance between processing power and thermal management is crucial in safeguarding the longevity and reliability of the entire data centre infrastructure.
ENSURING A HOLISTIC APPROACH
In the face of the rapidly changing landscape of data centre infrastructure, a holistic design approach emerges as the cornerstone for operators aiming to future-proof their operations and enable compatibility with new technologies and emerging waves of demand.
The key to success lies in involving all stakeholders, recognising the importance of collaboration and communication across diverse disciplines. Engaging not only power and cooling specialists, but also those
responsible for facility management, storage and technology deployment fosters a comprehensive understanding of the data centre’s intricate requirements.
As data centres embrace high-density configurations and rapidly evolving technology, the holistic approach extends to decision-making timelines. While operators may be inclined to defer decisions to the final stages of design, a balance must be struck to avoid the risks associated with delayed investment and potential loss of market share. Holistic design, therefore, involves streamlining decision-making processes while considering lead times and involving stakeholders at every stage.
In a dialogue with industry experts, the importance of technology interchangeability surfaces as a critical consideration for clients. In some areas, there has been a slowdown in direct deployments by hyperscalers, which may reflect a strategic pause to understand what technology changes and specifications are required. Challenges arise in finding the optimal operating conditions for CPUs and GPUs, with manufacturers defining specifications and clients striving to plan for a diverse technology landscape over the next five to 10 years.
In this pursuit of future-ready, AI-enabled design principles, clients encounter design pitfalls and challenges. The balance between CPU and GPU environments, coupled with defining optimal operating conditions, requires a meticulous approach to allow adaptability over an extended operational lifespan. As the industry grapples with these complexities, a holistic design ethos remains the compass guiding operators through the dynamic terrain of data centre evolution.
HOLISTIC DESIGN IS THE WAY FORWARD
In this era dominated by AI, mobile and cloud technologies, with hybrid computing the new norm, the importance of holistic design in data centres has never been more apparent. The growing fungibility of AI workloads signals a paradigm shift: workloads are no longer static, but dynamic, ever-changing entities.
Navigating this dynamic landscape requires a holistic approach. Data centre architects, faced with the challenges of climate change, surging power requirements, heightened heat generation, and the crucial need for a robust connectivity infrastructure, are at the forefront of this transformative journey.
In embracing a holistic design philosophy, data centres can position themselves to not only meet, but thrive in the face of the burgeoning demands of the AI-driven era. Sustainability and efficiency become the bedrock of operations, ensuring that data centres lead the charge in an era defined by growth and technological innovation.
CREATING A SUSTAINABLE DATA CENTRE ECOSYSTEM
Giorgio Girelli, General Manager of Aruba Enterprise, looks at five essential steps to ensure that your data centre operations are energy-efficient.
Data centres are highly energy-intensive facilities. According to the International Energy Agency, their global electricity consumption equates to around 1% of total global demand, or 220-320TWh per year. Given that they house huge computing power that must be constantly in operation, within highly sophisticated systems with multiple redundancies, this is no surprise.
However, with global priorities shifting towards sustainable business practices, the emerging challenge for data centre operators has been to strike a balance between operational reliability and sustainability.
DATA CENTRE DESIGN
Energy efficiency should be at the forefront of the design process - meaning location, construction materials and systems need to be carefully considered to ensure the data centre consumes energy intelligently.
Choosing technology partners that share similar sustainability goals is also important. This includes ensuring that partners use technologies in line with the overall targets as a part of a technological ecosystem.
EMBRACING RENEWABLE SOURCES
While conserving energy is crucial, it is equally important that data centres power their activities with energy from renewable sources. Data centres should aim to produce more 'clean' energy than they consume, cancelling out their operational impact.
Hydroelectric power plants, for example, can be used to produce clean energy for data centres. Investing in photovoltaic systems that cover large parts of the walls and roofs of data centres with sufficient sun exposure is another way to achieve this.
ENVIRONMENTAL CERTIFICATIONS
Certifications offer a solid guide and can help with the implementation of the energy tactics necessary to reduce emissions, as well as maintain new strategies employed. One of the most significant certifications is ISO 50001, which ensures an organisation has a healthy energy management system (EMS) that continuously improves performance.
Similarly, ISO 14001 certification specifies and standardises the requirements of an environmental management system, requiring a third party to certify a company’s ability to monitor its environmental impacts.
The ISO 22237 certification route offers customers an international certification that analyses and verifies all the components that determine an infrastructure’s energy efficiency.
A SUSTAINABLE SUPPLY CHAIN
Businesses need to consider the impact of their entire supply chain to ensure their total environment is working towards similar goals. The total environmental footprint of any data centre is determined by the combination of the impact caused during construction, the cumulative effects of consumption and operational maintenance throughout its lifespan, and the impact generated during decommissioning. Because of this, customers and suppliers should be chosen after considering their values and sustainability practices in place.
EUROPEAN COLLABORATION
With the publication of the European Energy Efficiency Directive, sustainability has become a priority on the EU's political agenda, driving many initiatives from political and industry voices aimed at identifying best practices.
The Climate Neutral Data Centre Pact spearheaded this movement as a self-regulatory initiative agreed upon by the main European providers to proactively guide the transition towards a climate-neutral economy. More than 80 companies have signed up to it, setting ambitious and measurable targets to eventually make European data centres climate neutral by 2030.
Other examples include the European Green Digital Coalition, which aims to invest in the development of sustainable and efficient digital services, and tools to measure the impact of technologies on the environment. The EU Code of Conduct for Data Centre Energy Efficiency was also established to encourage operators to reduce energy consumption cost-effectively, without compromising business continuity.
Collaboration at an international level is crucial for setting standards and guidance for data centre operators. Whilst the targets set won’t be easy to meet, these efforts are an impressive sign of the industry taking action.
LOOKING FORWARD
Data centres are the backbone of the digital age, but their vast energy consumption and environmental impact have raised concerns about their sustainability. Operators also have a fundamental role to play in this challenge. This can be achieved by adopting efficient systems of energy production and use, working with third-party certification organisations, and relying on suppliers who are equally sensitive to the issue. By addressing these five areas, data centres can play a vital role in creating a more sustainable digital future. The transition towards greener data centres may pose challenges, but the benefits are undeniable.
Aruba, aruba.it
Powered by Centiel's Distributed Active-Redundant Architecture (DARA) for minimising downtime. 9-nines availability (99.9999999%).
StratusPower UPS goes beyond power with a commitment to peace of mind and operational excellence.
Experience the future of data centres today!
30 years design life
Built on proven semiconductor technology for increased reliability
www.centiel.com
GRAEPELS: ELEVATING DATA CENTRE SECURITY WITH EXPERT SOLUTIONS
Amid the rising demand for robust physical security in the data centre industry, Fred Graepel, Managing Director at Graepels, unveils how Graepels’ cutting-edge security mesh emerges as the definitive solution.
In the modern digital landscape, data centres play a critical role in storing and processing vast amounts of information. With the rising importance of data security, businesses seek robust solutions to safeguard their infrastructure and sensitive data.
In this labyrinth of data centre security, Entropic Ecology Indoors has successfully leveraged Graepels’ woven wire products to address its security demands.
NAVIGATING THE CHALLENGE
Prior to engaging with Graepels, Entropic Ecology Indoors - a supplier of low-energy, long-life HVAC systems for indoor environments - encountered challenges in finding a security solution that aligned with its requirements.
The company needed a solution that not only provided robust security, but also offered ventilation and visibility to its exact specification for its customer’s needs in the data centre industry.
GRAEPELS SAVES THE DAY
As a one-stop shop for data centre security, Graepels manufactures perforated metal and woven wire mesh security solutions for data centres across the UK and into Europe. With over six decades of expertise, it understands the intricate balance between security and operational efficiency and supplies products of exceptional strength and durability, using only the highest quality materials. Perforated metal and woven mesh offer a high percentage of open area where ventilation is required for cooling and do not inhibit fire control systems. Graepels also offers a choice of colours for powder coating, patterns, hole shapes and sizes, with the option to incorporate your logo.
Woven wire and perforated metal applications in data centres:
• IT security cages and partitions
• Servers and colocation cages
• Hot and cold aisle containment
• Ventilated servers
• Secure storage cages
GRAEPELS’ SOLUTION FOR ENTROPIC ECOLOGY INDOORS
Graepels offered Entropic Ecology Indoors an industry-leading solution that met the company's requirements, providing the ideal balance of security, ventilation and visibility essential for its customers' data centre environments.
Graepels’ capability to manufacture to exact specifications proved invaluable to the company. A spokesperson at Entropic Ecology Indoors says, “Opting for Graepels over other potential manufacturers and suppliers brought
about significant advantages for us. By choosing Graepels, we gained the ability to provide a solution to our customers where others couldn’t.
“I would recommend Graepels to anyone seeking similar solutions. They are a pleasure to deal with. They are also very helpful on follow up queries and provide a prompt response. Once we placed our orders, they maintained constant communication, providing updates every step of the way, solidifying our trust in Graepels.”
ELEVATE YOUR DATA CENTRE SECURITY TODAY!
Graepels’ woven wire and perforated metal solutions are proving instrumental in addressing security needs within the data centre environment by delivering tailored solutions that prioritise security, ventilation and visibility. This ongoing success underscores Graepels’ expertise in providing innovative and effective solutions for data centre security challenges.
Contact Graepels via email at enquiry@graepels.com or call +44 (0) 1925 295609 for more information.
Graepels, graepels.com
CLOUD MIGRATION: NO LONGER A SCARY PROSPECT WITH SEVEN KEY STEPS
Russ Kennedy, Chief Product Officer at Nasuni, explores how data stewards can take on the intimidating task of cloud migration and prove that it doesn’t have to be as scary as it seems with seven actionable steps.
There are few tasks more daunting than moving business data to the cloud. After all, data is one of the most, if not the most, important assets a business has.
It's small wonder that cloud migration is seen as such a nail-biting job - the technology equivalent of moving house. But since today's cloud storage platforms are known to deliver a safer, more secure method for storing, managing and protecting data, cloud migration is as inevitable as that dreaded house move.
There is good news amid the worry, though: thanks to continual advances in cloud connectivity and service capabilities, you can complete this intimidating task and contain the potential risks by following seven actionable steps. These comprise:
Step 1: Find the right partner
Above all else, you will want to work with an integrator or vendor with a strong track record in this area. You want to be sure they have knowledge, extensive practice with organisations like yours and plenty of references from satisfied customers. You should make the time for honest conversations with satisfied clients, so you can ask them about their own experience and how the vendor managed any bumps in the road during their migration project.
Step 2: Understand your exact needs
Second, the vendor and the specific team dedicated to your account should fully understand all your needs and requirements. You don’t want to simply turn a massive migration project entirely over to them; a cloud migration needs to be a partnership to be successful. You should explain exactly what you’re looking for and work together to understand how you’re going to collectively achieve the organisation’s goals.
Step 3: Which cloud is right for you?
Third, remember that your data isn't going to move to some mysterious, intangible place, but to an interconnected network of strategically sited physical infrastructure. You need to know exactly where your data will reside, whether that region is right for you, your users and your regulators, how your company will consume those cloud resources, and what they will cost.
And while hyperscale clouds from Amazon, Google and Microsoft in essence give you global reach and performance, each one, nevertheless, has subtle operational and financial differences that you need to understand and plan for, well in advance. When reviewing your cloud options, you need to build
a complete understanding of your logistics and your different options for deployment and consumption. Cloud migration concerns can even be helpful, in that they force you to plan the process, confront difficult questions and identify the true benefits of moving data to the cloud.
Step 4: Know your TCO
Closely allied to logistics is a longstanding migration fear: are the costs visible up front or hidden in the small print? Too many organisations have proceeded with their migrations expecting their costs to be x, only to find that egress fees or other unexpected charges double that figure, or worse, because they never established their true cost of ownership. IT teams can avoid such difficulties, or outright shocks, at the planning stage by evaluating different vendors' solutions that preserve your optionality. That way, if a vendor or cloud changes its service economics, you will be able to switch to a more favourable one.
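A toy calculation shows how quickly the small print can overturn a headline price; every rate below is a made-up placeholder, not real cloud pricing:

```python
# Toy TCO comparison illustrating how egress fees erode an attractive
# headline price; all rates are assumed placeholders.

storage_tb = 200
monthly_storage_per_tb = 20.0     # assumed $/TB/month headline rate
egress_tb_per_month = 50          # assumed data pulled back out monthly
egress_per_tb = 90.0              # assumed $/TB egress fee

headline = storage_tb * monthly_storage_per_tb
with_egress = headline + egress_tb_per_month * egress_per_tb
print(f"headline: ${headline:,.0f}/mo, actual: ${with_egress:,.0f}/mo")
# $4,000/mo on paper becomes $8,500/mo once egress is counted: the
# 'small print' doubling the article warns about.
```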
Step 5: Harness your data even as it’s moved
Many data professionals are troubled by the thought of downtime and a nagging lack of certainty over when exactly they will start benefiting from their new cloud platform.
The physics of moving data cannot be denied: it takes time to copy all those bits and bytes from one place to another, and to organise them for optimal use once they have been moved. But you and your colleagues shouldn't have to wait until your migration is complete to start enjoying the benefits. Today's methodologies let you leverage your data even as it is being migrated, and you should seriously consider this as you evaluate your migration options.
Step 6: Retain control
In the world of apps and emergent AI, any customer-oriented or innovative company’s priorities will change from one month or quarter to the next. So, even though you want to detail your needs at the outset, you should also make sure that you preserve optionality in case those expectations shift. If you choose a particular cloud, will you be able to switch clouds in the
future? How difficult will that process be? These are all critical questions that you will need to settle at the outset.
Step 7: Agree your project timing
Establishing a clear timeline with agreed project milestones is crucial. The migration programme must fit within your business cycles and align with both your internal and external corporate objectives. You should understand what you will and won’t be able to do while the migration is underway and when it will be completed.
Fail to prepare, prepare to fail, as the old saying goes. By breaking down, fully understanding and then mitigating the elements that make cloud migrations challenging, businesses can set themselves up for better customer response, faster collaboration and accelerated innovation.
Nasuni, nasuni.com
ASCENDING THE CLOUD: UNDERSTANDING CDN INTEGRATION
Kevin Cochrane, Chief Marketing Officer at Vultr, discusses how by adding advanced, intelligent content distribution capabilities to existing cloud infrastructure, a new generation of CDN can push content closer to the edge without compromising security.
With digital infrastructure constantly evolving, the lines between cloud computing architectures and Content Delivery Networks (CDNs) have become blurred and need to be made clearer. Both facilitate data access, but they differ in speed and efficiency.
A global survey of 600 senior IT decision-makers (ITDMs) found that the enterprise sector plans to put 31% of its IT spend into the public cloud by 2026. Web developers and enterprises must understand the nuances of cloud computing and CDNs, and how the next generation of CDNs bridges the gap between the two technologies, to maximise their IT investment.
CDN VS CLOUD COMPUTING
Cloud computing and CDNs operate on similar principles, but differ in their data delivery methods. Cloud computing focuses on processing and storing data in centralised servers, which offers scalability and flexibility when needed. On the other hand, CDNs optimise content delivery by caching data in edge servers positioned closer geographically to the end-user. A CDN functions like a network of servers dedicated to content delivery, playing a crucial role in enhancing the speed and responsiveness of web content delivery.
Internet users demand smooth browsing experiences, regardless of their location or device. Slow web pages frustrate users, causing them to leave and costing businesses opportunities. CDNs help by keeping content on an origin server and distributing it to edge cache servers as needed. When a user requests content, a unique CDN URL is resolved to an IP address through the domain name service, and the content is retrieved from a nearby cache server. This reduces latency and lightens the load on the origin server by spreading it across multiple global edge servers.
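The flow described above amounts to a cache-aside pattern. A deliberately minimal sketch (real CDNs add TTLs, invalidation and tiered caching) captures the essential behaviour:

```python
# Minimal cache-aside sketch of the edge-server behaviour described
# above; illustrative only, not any CDN vendor's implementation.

ORIGIN = {"/index.html": "<html>...</html>"}   # stand-in origin server
edge_cache: dict[str, str] = {}                # one edge PoP's cache

def serve(path: str) -> str:
    if path in edge_cache:                     # cache hit: low latency
        return edge_cache[path]
    content = ORIGIN[path]                     # cache miss: fetch origin
    edge_cache[path] = content                 # store for the next user
    return content

serve("/index.html")   # first request travels to the origin
serve("/index.html")   # subsequent requests are served from the edge
```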
Though traditional CDNs have revolutionised content delivery, they are complex and have left enterprises and web developers grappling with the intricacies of infrastructure management. Configuring, managing and optimising traditional CDNs can be daunting, requiring specialised expertise and significant time investment. However, as digital content delivery becomes increasingly critical for global businesses, especially with the emergence of AI-generated content, there is a growing demand for simplified solutions that offer cost-effective and timely infrastructure management.
THE RISE OF NEXT-GENERATION CDNS
The emergence of next-generation CDNs aims to address the growing digital ecosystem needs by integrating advanced content distribution capabilities with existing global cloud infrastructure. By harnessing the scalability and reliability of cloud computing, next-generation CDNs can push content closer to the edge on a global scale. This positioning minimises latency and optimises user experience across diverse geographic locations. Moreover, these CDNs uphold strict security measures and data sovereignty standards, safeguarding sensitive information from unauthorised access and ensuring compliance with regulatory requirements. This helps to maintain user trust, protects data, and mitigates legal liability across different countries.
As businesses undergo digital transformation, next-gen CDNs become vital for faster content delivery, global expansion, and enhanced
customer satisfaction. Integrated CDNs adapt distribution strategies to user needs and network conditions, ensuring optimal performance and seamless scalability. This approach outperforms traditional CDN offerings, making it more cost-effective. The convergence of cloud computing and CDNs marks a significant shift, providing a holistic solution for accelerated data delivery and improved customer experience.
THE FUTURE OF DIGITAL CONTENT DISTRIBUTION
As businesses navigate the complexities of digital transformation, the convergence of cloud computing and CDN heralds a new era of efficiency and innovation. By embracing integrated CDNs with global cloud computing, enterprises can streamline infrastructure management, optimise resources, and deliver seamless digital experiences to their customers. The journey towards a unified approach to digital content distribution requires innovation and a company commitment to harnessing the full potential of cloud technologies.
By integrating advanced content delivery capabilities with cloud infrastructure, enterprises can unlock new opportunities for growth, agility and customer engagement. As we move towards a seamless digital future, the convergence of cloud computing and CDNs will push the boundaries of customer engagement to the edge.
Vultr, vultr.com
HOW IMMUTABLE OBJECT STORAGE CAN STOP THE UNSTOPPABLE
As attackers continue to wreak havoc on businesses, immutable backup storage is an absolute must, claims Anthony Cusimano, Chief Evangelist and Director of Technical Marketing at Object First.
It’s no secret that ransomware threats are a growing problem. Despite considerable efforts to secure businesses and catch cyber criminals, the ransomware business was booming in 2023, with attackers collecting over one billion dollars in extortion money from victims - the highest figure ever.
The potential to reap high profits from holding valuable data hostage, and the increasing sophistication of attackers, make it likely that ransomware incidents will continue to increase. So what can we do about it?
IMMUTABILITY: A MUST HAVE
In an era where business continuity is crucial, and ransomware payments are growing, the need for secure, simple and powerful data protection is non-negotiable.
Immutable backup storage solutions are the missing puzzle piece in any organisation's security strategy.
Immutable storage is a type of data repository where information cannot be modified, deleted or overwritten for a set period after it is written - or, in certain instances, ever. Most immutable storage targets are object storage and utilise an 'object lock' mechanism to prevent unintentional or deliberate alterations or deletions.
Immutability can also come in multiple forms, governance and compliance mode being two examples. Governance mode allows specific administrators to disable immutability, whereas compliance mode ensures files remain readable and unalterable for the period set. This makes it ideal for storing critical business data such as legal documents, financial records or personal information.
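As a concrete illustration of the object lock mechanism, the hedged sketch below writes a backup object in compliance mode through the S3 Object Lock API via boto3, assuming an S3-compatible bucket created with Object Lock enabled; the bucket and file names are placeholders:

```python
# Sketch: writing a backup in compliance mode via S3 Object Lock
# (boto3). Assumes a bucket created with Object Lock enabled; all
# names below are hypothetical placeholders.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")  # or any S3-compatible immutable storage target

s3.put_object(
    Bucket="backups",                      # hypothetical bucket name
    Key="db/2024-06-01.bak",               # hypothetical backup object
    Body=open("db-2024-06-01.bak", "rb"),  # assumed local backup file
    ObjectLockMode="COMPLIANCE",           # nobody can shorten or remove it
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
)
# Until the retain-until date passes, delete and overwrite attempts
# fail, even for administrators: the guarantee compliance mode makes.
```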
THE IMMUTABLE DIFFERENCE
When considering solutions that offer immutability, make sure you have considered or implemented the following concepts, and understand that not all immutability is created equal.
• Write Once Read Many (WORM)
Using functionality like object lock and WORM configurations, immutable backup and storage are read-only solutions, ensuring that once data is written, it cannot be modified or destroyed. In today’s digital world, where sensitive business data and information are constantly shared, this is crucial to ensuring malicious acts cannot alter it.
• Cannot be encrypted by ransomware
One of the main aims of ransomware attacks is to hold data hostage and coerce large amounts of money from businesses. However, with immutable backup storage that can’t be encrypted by ransomware, threats are less effective.
• Integrating immutability with zero trust
Whilst immutable backup and storage are robust solutions, businesses can’t be too careful. Alongside immutable solutions, businesses should employ zero trust throughout IT systems, assume a breached state, and implement least-privileged access.
• Improves data availability
As data can’t be deleted if it’s protected using immutable solutions, it is always available when needed. This is particularly important in the event of hardware and software failures. If other business systems are hit with outages or cyber attacks, business continuity is still guaranteed, as there’s no downtime or rebooting of systems required to access data. Unlike other cyber security solutions, businesses won’t suffer huge financial losses by disrupting activity or needing to take solutions offline, streamlining time management across the organisation. Encrypted immutable data is another option that can further heighten security. It means that even if an attacker accesses data, it’s useless to anyone without the key.
• Maintains data authenticity
Immutable storage uses cryptographic hashes to verify whether data has been tampered with. With traditional cyber security solutions, it is much harder to prove that data hasn't been interfered with. With immutable storage, in the case of a ransomware attack, businesses can verify beyond doubt that their data is authentic and hasn't been compromised. Zero trust models impose short-interval, least-privilege authentication on every actor within and outside an organisation. With the rise of sophisticated ransomware attacks, businesses must view every person as a potential threat and follow the mantra, 'assume nothing, verify everything'.
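The principle is straightforward to demonstrate. The sketch below records a SHA-256 digest when a backup is written and recomputes it before restore; it illustrates the idea only, not any vendor's implementation, and the file name is a placeholder:

```python
# Simple integrity-verification sketch using a cryptographic hash.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest when the backup is written...
recorded = sha256_of("db-2024-06-01.bak")

# ...and recompute it before any restore: a mismatch means tampering
# or corruption, a match means the data is exactly as written.
assert sha256_of("db-2024-06-01.bak") == recorded
```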
Importantly, one challenge many businesses face is possessing what is claimed to be immutable architecture but isn't actually immutable in practice. This could be for several reasons, including storage left in governance mode, mis-marketed immutability features that lack technical validity, or a lack of understanding of the underlying technology and how to secure it properly.
When appropriately configured, immutable backup storage guarantees that ransomware attacks cannot modify or overwrite data and hold it for ransom.
CONCLUSION
Ransomware is a threat that isn’t going anywhere. It’s imperative that business leaders work with IT teams to invest in simple and reliable anti-ransomware solutions now, not when disaster hits. Ransomware protections should be proactive and preventative, not solely reactive.
The rise in ransomware attacks might seem persistent, but immutable backup storage can stop unknown actors from gaining access to precious resources and ensure watertight security to prevent data leaks.
Object First, objectfirst.com
FROM CLOUD TO COLO: WHY REPATRIATION ISN'T THE WHOLE STORY
Rod Faul, Senior Client Director at Kao Data, unpacks cloud repatriation by looking at the complexities of a cloud-native approach, the hidden costs of data, and why performance considerations are vital as AI adoption gathers pace.
Cloud computing is now more than 20 years old, and like many of us, its potential is not matched by reality. Where organisations once wanted reduced dependence on the CapEx of owning or operating data centres, many senior decision-makers were persuaded that outsourcing their compute offered attractive cost-savings.
More recently, however, considerations such as location, the need for greater control of data, the complexity of managing cloud instances and the growing size of datasets, especially given the rise of Artificial Intelligence (AI), have led organisations to repatriate their compute.
In fact, "Cloud repatriation hasn't gone away, but it may have gone mainstream," says Basecamp's CTO, David Heinemeier Hansson, following his company's announcement that it would get off the cloud and bring its infrastructure in-house, primarily due to spiralling costs as the business grew. David, whose email service, HEY, once ran entirely in the public cloud, decided that enough was enough.
More recently, David reported that X, formerly known as Twitter, has seen major success from its plan to exit the cloud. In a social media post, its engineering team said the shift had reduced its monthly cloud costs by 60%, its cloud data storage by 60% and its cloud data processing costs by 75%. Additionally, it built an on-premises GPU-powered supercomputer to support its new clusters.
Away from this, other businesses hit by shock operating costs are also questioning their original reasons for moving to the cloud - especially when lower-cost technology, industry-leading technical teams and hosted space are as easily accessible as the cloud and offer significant benefits.
HYBRID APPROACH
A strategy identified by many is the hybrid approach; combining the power, flexibility, and scalability of the cloud with the precision, performance, and tangibility of on-premises infrastructure, including colocation data centre services, to save money, increase security and improve compliance and sustainability.
The savings are, in some cases, up to 70%, when compared with a cloud-first approach, especially if the balance of cloud, colo and/or on-premises is perfectly designed for your workload.
Basecamp is not the first to ditch cloud infrastructure for on-premises tech. Dropbox famously bucked the trend when it moved away from AWS and onto custom-built infrastructure, and claimed reduced operating costs of nearly $75 million over two years. Others, including Amazon, Uber and Netflix, have found success by going cloud-native, but most enterprise businesses are very different from these blue chip companies.
Small businesses don’t have the option or the budget to build and maintain their own IT and many owe their growth to the cloud, fuelled initially by incentives such as free credits to on-board immediately and, therefore, liberate their developers to innovate.
CLOUD COST CRISIS IN NUMBERS
Mass repatriation hasn’t yet materialised, although cloud costs are an existential threat to SMEs, and many have cut back on their cloud investment.
Many SMEs are often tempted into pay-as-you-go cloud contracts that start cheap, or even free in many cases, but soon become expensive as business grows, and keeping track of sessions across multiple platforms, access points, time zones, and providers becomes difficult. Like mobile phone contracts, it becomes challenging to work out exactly what you’re using, what you’re not, and where you’re over-spending.
Google, however, has recently changed tack, confirming it will no longer charge customers fees for transferring their data out of its cloud when switching to a new provider. This is something of a first for a hyperscaler, and it's likely that AWS and others will follow suit.
A 2022 report by Nutanix found that cost is also the biggest worry for companies using the cloud - more than security or data migration. Furthermore, Gartner predicts that 60% of infrastructure and operations leaders will encounter public cloud cost overruns during 2024.
Enterprises and start-ups running machine learning (ML), generative AI and deep learning programs requiring high performance processors (GPUs) are also finding that paying for HPC infrastructure across virtualised servers in the cloud is suboptimal. Most genuine HPC applications, such as those supporting genome sequencing or GPU-powered research, are best optimised using parallel processing in one location.
Many cloud vendors sell their infrastructure as developer-centric, but some teams have found cloud services to be creatively limiting. When all the hardware and software is handled by your cloud provider, there’s little you can do to customise it, or even control where it’s located.
Moreover, security is a concern. The best cloud-based cyber security options are expensive, and some security consultants prefer to keep backups and sensitive data away from public servers. Customisation, accessibility and compliance are other reasons cited by companies wanting to leave the cloud. It’s a misconception that maintaining compliance best practices is the cloud provider’s responsibility: meeting legal and industry standards always falls to the user or data owner.
AI AND THE CLOUD – A NEW EVOLUTION?
Another key consideration is the advent of GPU-powered cloud computing, where services such as NVIDIA’s DGX Cloud now enable start-ups, scale-ups and established enterprises to access AI Training-as-a-Service at the touch of a button.
NVIDIA and other new entrants to the market, such as GreenAI Cloud, Taiga Cloud, Nexgen Cloud, Omniva and Hypertec, are disrupting the traditional cloud model by providing serverless AI capabilities at almost limitless scale. This surge has created a new challenge for the industry, with more cloud providers now seeking specialist colocation capacity engineered for AI in which to deploy NVIDIA’s GPU technology at scale.
As we move forward, we’re likely to see the AI goldrush gather pace, with companies hoovering up capacity at a record rate. One thing to be mindful of, however, is that only the strong will survive, so choose your provider carefully, as migrating your workloads is likely to be far from easy.
DIGITAL INFRASTRUCTURE: HYBRID, MULTI-CLOUD AND ADVANCED COLOCATION
In a world of SMEs, AI start-ups and enterprise organisations, decision-makers are exploring hybrid solutions that more closely match the route taken by Tapjoy, which moved away from a cloud-based strategy and is now placing workloads on private servers.
A cost-effective model for many growing organisations is multi-cloud, which combines multiple public or private clouds with on-site IT. Hosted Software-as-a-Service (SaaS) products can be part of a multi-cloud model too; this approach offers optimal visibility, flexibility and customer service, as well as financial efficiency.
The most cost-efficient strategy for many businesses, however, is a combination of hybrid cloud and high performance colocation, with the customer’s private cloud situated in the same colocation facility. Here, the colo facility provides power, connectivity, hardware cooling and maintenance, creating an environment that offers a balance of physical support, security, scalability and freedom.
HOW TO CHOOSE YOUR COLO?
When choosing a colocation provider, data-intensive businesses should look for flexible contracts that allow for scalability, the freedom to customise their solution, access to 100% renewable power and industry-leading levels of energy efficiency. This offers the perfect balance of resiliency and low operating costs, which is essential in today’s climate.
Sustainability is also a vital consideration for many businesses, and operators who can deliver a low PUE, powered by renewables, and are committed to meeting net zero are chart toppers. At Kao Data, for example, sustainability has been central to the business since inception, and it continues to ensure that its operations have a low environmental impact.
For a hybrid approach to be successful, the colocation partner must have dedicated, high performance infrastructure, diverse connectivity and high-speed cloud access to support its customers’ data requirements. Businesses need access to public cloud platforms through services such as MegaPort and Console Connect, which enable the creation of custom hybrid and multi-cloud setups directly from the data centre.
Dropbox has invested millions in dedicated on-ramps within custom-built colocation facilities whilst retaining its cloud capability, which demonstrates the hybrid model in action. Moreover, the cloud has not burst: the hyperscale infrastructure of AWS, Google and other public cloud providers can certainly meet the dynamic workloads, scale and service requirements of multiple enterprises and industries.
For businesses wary of cost spikes, data egress charges, long wait times and security challenges, access to colocation data centres engineered for AI and the cloud presents a viable and cost-effective solution, offering a balance of performance, efficiency and scalability.
Kao Data, kaodata.com
CLOUD-FIRST: WHY ORGANISATIONS NEED WAN ACCELERATION
Cloud computing is one of the drivers for SD-WAN, NaaS, and Data Centre-as-a-Service (DaaS) growth. However, David Trossell, CEO and CTO of Bridgeworks and Graham Jarvis, Freelance Business and Technology Journalist at Trudy Darwin Communications, believe that these technologies could benefit from WAN acceleration.
SD-WANs (Software-Defined Wide Area Networks) are increasingly the solution of choice for addressing WAN issues. They emerged in the early 2010s to improve application performance at a lower cost than MPLS (Multiprotocol Label Switching), and became particularly popular during COVID-19, when enterprises needed to connect a mixture of office and remote workers to company resources.
There is now a new paradigm shift towards Network-as-a-Service (NaaS). This is a subscription-based cloud model that moves away from individual offerings to a more comprehensive one. Enterprises rent network infrastructure components, from hardware to software and management tools, from a provider. This can include compute, storage, remote management and monitoring, and networking. NaaS can also span hybrid clouds, on-premises cloud and edge environments.
COMPLETING THE CLOUD PICTURE
David says that in recent times, there has been an “if in doubt, put it in the cloud” attitude. This mentality saved hassle in areas such as specifying new equipment, purchasing, RFPs and installation. NaaS and, more recently, secure access service edge (SASE) complete the cloud picture.
Deanna Darah at TechTarget explains, “According to the 2023 State of Network Edge Survey, conducted by Eleven Research, enterprises are looking to move from SD-WAN to NaaS, to support the evolving needs of the network edge. While SD-WAN enables enterprises to manage WAN performance more efficiently, it can’t always handle modern network needs in terms of agility, scalability and cost. Of the 200 network architects and administrators surveyed, almost 90% said they’re interested in implementing NaaS.”
This doesn’t mean that SD-WANs are dead. Far from it: NaaS will lead to more managed SD-WAN services, and most NaaS offerings include them anyway. Adding to this is the fact that VMware wants to erase the lines between data centre, cloud and edge.
VMware’s Mark Lohmeyer, Senior Vice President and General Manager of Cloud Infrastructure, says that customers are looking to adopt multi-cloud across their data centre, public cloud and edge environments. The key driver is the need for flexibility to build and run the right application in the right location.
The basis for this is customers’ technical and business requirements. To address them, it’s essential to enable core enterprise-class storage, networking and management capabilities that support the most demanding enterprise-grade applications around the world. To this end, VMware believes it’s enabling a consistent infrastructure service across its customers’ existing data centres and private cloud environments, as well as across major public clouds, edge and distributed environments.
CLOUD DRIVING SD-WANS
Cloud computing is also one of the drivers for SD-WAN, NaaS and DaaS growth. However, these technologies could benefit from WAN acceleration. It can give organisations the flexibility to do more with their data across longer distances, while improving backup and recovery; the encryption it uses, combined with the speed and efficiency of its data transfer, also helps to frustrate cyber attacks. Moreover, with latency and packet loss often causing mischief, organisations can improve their cloud application performance and bandwidth utilisation.
Like an increasing array of technologies these days, WAN acceleration uses artificial intelligence and machine learning. It also uses data parallelisation. And it is complementary: if an organisation has SD-WANs, perhaps as part of a NaaS strategy, it could bolster them by deploying a WAN acceleration overlay on top of them.
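To illustrate what data parallelisation means in practice, here is a minimal, hypothetical sketch in Python: one large payload is split into chunks and pushed over several TCP connections at once, so that no single latency-bound connection caps the transfer. The wire format and endpoint are invented for illustration; commercial WAN acceleration products use their own optimised transports.

```python
# A minimal sketch of the data-parallelisation idea behind WAN acceleration:
# split one large transfer into chunks and send them over several TCP
# connections concurrently, so no single connection's latency-bound window
# limits the whole transfer. The receiver and wire format are hypothetical.
import socket
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4 * 1024 * 1024  # 4 MiB per chunk

def send_chunk(host, port, offset, data):
    with socket.create_connection((host, port)) as s:
        # Hypothetical wire format: an 8-byte offset header before the
        # payload, so the receiver can reassemble chunks in order.
        s.sendall(offset.to_bytes(8, "big") + data)

def parallel_send(host, port, payload, streams=8):
    chunks = [(i, payload[i:i + CHUNK]) for i in range(0, len(payload), CHUNK)]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        for offset, data in chunks:
            pool.submit(send_chunk, host, port, offset, data)
```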
David explains that for the user who wants to shy away from the cloud and traditional MPLS circuits, SD-WAN has massive cost and performance benefits. However, the idea that it can reduce latency and packet loss is a bit of a misnomer: latency can only be reduced by moving the two end points closer together or selecting a more direct route with fewer hops. Where once the connection was between sites and/or the cloud, it is now cloud-direct to the user, and with the cloud’s global presence, we no longer need SD-WANs for the more remote users, since the point of presence is much closer than ever before.
He adds that WAN acceleration overlays are essential because SD-WANs can’t change the impact of latency and packet loss by themselves. For large data set transmissions over long distances, SD-WANs will never reach the full performance of the underlying WAN. For critical data transfers, such as backups to air-gapped offsite repositories, an overlay can therefore be a major advantage.
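The back-of-envelope arithmetic behind that claim: a single TCP stream’s throughput is roughly capped by its window size divided by the round-trip time, however fast the underlying link. A quick illustration (the figures are generic, not Bridgeworks measurements):

```python
# A single TCP stream's throughput is capped at roughly
# window_size / round_trip_time, regardless of link capacity.
def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

# A 256 KiB window over a 150 ms transatlantic-style RTT:
print(max_tcp_throughput_mbps(256 * 1024, 150))  # ~14 Mbps
# The same stream on a 2 ms metro link:
print(max_tcp_throughput_mbps(256 * 1024, 2))    # ~1,049 Mbps
```

On the long-distance path, a 1 Gbps WAN would sit more than 98% idle under a single stream, which is the gap that parallelisation and acceleration overlays set out to close.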
ACCELERATING TCP/IP TRAFFIC
So, given that many organisations are looking to maintain and adopt a cloud-first strategy, how can WAN acceleration fit into the NaaS model, and how different is it to Infrastructure-as-a-Service?
David says that SD-WANs are regarded as bookend solutions, because one is required at both, or all, sites. The same applies to WAN acceleration insofar as it relates to SD-WANs. He explains that WAN acceleration speeds up the TCP/IP traffic over the SD-WAN by using AI to mitigate latency and packet loss. Thus, it makes full use of the SD-WAN’s capabilities and maximises the bandwidth of the underlying WAN.
WAN acceleration could fit into a large NaaS deployment with SD-WANs where large remote data sets are ingested, giving organisations faster data and network performance.
UNDERSTANDING NETWORKS
To ensure that organisations deploy the right technologies for cloud-first strategies, they should first take time to understand their network infrastructure. Integrated Research explains, “Network managers need to familiarise themselves with the Open Systems Interconnection (OSI) model, which provides a framework for understanding how data flows through a network.”
To do this, it’s wise to undertake a proof of concept before buying into any technology, and to compare it against other solutions that claim to reduce or mitigate latency. This will arm IT teams with knowledge of what works best for their organisation.
It’s also essential to understand security and compliance requirements. Does data need to be encrypted in transit between clouds? If it does, then WAN optimisation is not the right solution; WAN acceleration, in contrast, can work with encrypted data and enable compliance with data protection regulations. WAN acceleration can also free up manpower to monitor network performance, and it can form part of a disaster recovery plan, as it significantly improves recovery time objectives (RTO) and recovery point objectives (RPO), offering substantial benefits to any cloud-first strategy.
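As a rough illustration of the RPO point: replication time scales with the volume of changed data divided by the effective throughput of the link, so raising link efficiency directly shrinks the window of data at risk. The figures below are invented for illustration only.

```python
# Illustrative arithmetic only: the shortest achievable RPO is bounded by
# how fast changed data can be replicated offsite. All figures are invented.
def replication_time_hours(data_gb, link_mbps, efficiency=1.0):
    bits = data_gb * 8 * 1e9
    return bits / (link_mbps * 1e6 * efficiency) / 3600

daily_change_gb = 500
link_mbps = 1000  # a 1 Gbps WAN link

# A latency-bound link running at 20% efficiency vs. an accelerated one at 90%:
print(replication_time_hours(daily_change_gb, link_mbps, 0.2))  # ~5.6 hours
print(replication_time_hours(daily_change_gb, link_mbps, 0.9))  # ~1.2 hours
```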
PRECISION LIQUID COOLING CAN SIGNIFICANTLY REDUCE TELCO OPERATORS’ TOTAL ENERGY COSTS
Iceotope has announced the launch of its second-generation KUL RAN, a 2U chassis in a 19-inch, short-depth form factor featuring Hewlett Packard Enterprise (HPE) ProLiant DL110 Gen11 servers with 4th Gen Intel Xeon Scalable processors. The plug-and-play enterprise grade solution is optimised for high-density, low latency radio access network (RAN) and edge computing services.
The combination of Iceotope’s precision liquid cooling technology, HPE’s leadership in the global server market, and Intel’s world leading expertise in x86 silicon, offers a consolidated solution to reduce Scope 1, 2 and 3 emissions in the telecoms market. The KUL RAN (2U) solution reduces server power consumption by up to 20%. It also reduces component failure rates by up to 30%, extending the operational lifetime of sensitive IT equipment and greatly reducing truck rolls.
KUL RAN is highly scalable, allowing seamless expansion from one unit to many and adapting effortlessly to telco operators’ evolving needs. Engineered for fast and accurate on-site exchange, it simplifies equipment updates and return-to-base servicing, reducing both time and cost.
Iceotope, iceotope.com
OWN COMPANY EMPOWERS CUSTOMERS TO CAPTURE VALUE FROM THEIR DATA
Own Company, a SaaS data platform, has announced a new product, Own Discover, that reflects the company’s commitment to empower every company operating in the cloud to own their own data.
With Own Discover, the company expands its product portfolio beyond backup and recovery, data archiving, seeding, and security solutions to help customers activate their data and amplify their business. Businesses will be able to use their historical SaaS data to unlock insights, accelerate AI innovation, and more, in an easy and intuitive way.
Own Discover is part of the Own Data Platform, giving customers quick and easy access to all of their backed up data in a time-series format so they can:
• Analyse their historical SaaS data to identify trends and uncover hidden insights
• Train machine learning models faster, enabling AI-driven decisions and actions
• Integrate SaaS data with external systems while maintaining security and governance
Own Company, owndata.com
MICROCHIP RELEASES BACKPLANE MANAGEMENT PROCESSORS FOR DATA CENTRES
To help provide system versatility, standards-based operation and cost savings in data centre and storage applications, Microchip Technology has launched the EEC1005-UB2 Universal Backplane Management (UBM) controller family.
The generic, easily configurable UBM devices can be used on hard drive backplanes to provide storage enclosure management and reporting to computing host systems using industry standard communication protocols. EEC1005-UB2 devices are compliant with the latest SFF-TA-1005 version 1.4 specifications and are downward compatible with systems currently using the EEC1005-UB1 UBM, enabling easy migration and quick time to market.
Easily configurable to support different drive types, EEC1005 UBM controllers are interchangeable across a variety of backplanes supporting NVM Express (NVMe), Serial-Attached SCSI (SAS) and Serial ATA (SATA) drives. The controllers support tri-mode operation, enabling them to work with all three drive types simultaneously. Using a standard connector across NVMe, SAS and SATA drives can help reduce cable and connector pin requirements, lowering overall Bill of Materials (BOM) costs.
Microchip Technology, microchip.com
CADENCE INTRODUCES AI-DRIVEN DIGITAL TWIN SOLUTION FOR DATA CENTRES
Cadence Design Systems has introduced the industry’s first comprehensive AI-driven digital twin solution to facilitate sustainable data centre design and modernisation, marking a significant leap forward in optimising data centre energy efficiency and operational capacity.
The Cadence Reality Digital Twin Platform virtualises the entire data centre and uses AI, high-performance computing (HPC) and physics-based simulation to improve data centre energy efficiency by up to 30%.
The platform will benefit data centre designers and operators navigating the complexities of modern data centre systems, particularly in addressing issues that arise from stranded capacity due to inefficient use of data centre compute and cooling resources, and in handling AI-driven workloads and their environmental impact in an age of increasing electricity scarcity.
Cadence’s transformative design platform will accelerate the development of next-generation data centres and AI factories across every industry. Integrated with the NVIDIA Omniverse development platform, it enables up to 30x faster design and simulation workflows.
Cadence Design Systems, cadence.com
GCORE LAUNCHES FASTEDGE, A SERVERLESS EDGE COMPUTING PRODUCT
Gcore has announced the launch of FastEdge, a serverless product revolutionising application deployment and performance. Designed for cloud-native development, FastEdge is a low-latency, high-performance solution for creating responsive and personalised applications without the complexities of server management.
FastEdge offers serverless edge execution, leveraging Gcore’s expertise in cloud technology, AI, and security. The service enables developers to deploy decentralised apps globally, bypassing the need for server configuration or infrastructure maintenance. This innovation is built on Gcore’s robust content delivery network (CDN), distributing custom code across over 160 edge nodes worldwide. This ensures near-immediate response to user interactions for exceptional app responsiveness.
The high-speed performance at the heart of FastEdge derives from the WebAssembly (Wasm) runtime environment. WebAssembly boasts an ultra-fast startup time, launching applications multiple times quicker than traditional container-based solutions. The isolated sandbox environment of FastEdge provides enhanced security, protecting against malware and ensuring a consistent, high-performance experience.
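As a simple illustration of how lightweight Wasm instantiation is, here is a minimal sketch using the wasmtime Python bindings to compile, sandbox and call a module. This is a generic WebAssembly example, not Gcore’s FastEdge runtime or API.

```python
# Generic WebAssembly example using the wasmtime Python bindings
# (pip install wasmtime); not Gcore's FastEdge runtime or API.
import time
from wasmtime import Engine, Store, Module, Instance

engine = Engine()
store = Store(engine)

# A tiny module in WebAssembly text format exporting one function.
wat = ('(module (func (export "add") (param i32 i32) (result i32) '
       'local.get 0 local.get 1 i32.add))')

start = time.perf_counter()
module = Module(engine, wat)            # compile
instance = Instance(store, module, [])  # instantiate in an isolated sandbox
print(f"cold start: {(time.perf_counter() - start) * 1000:.2f} ms")

add = instance.exports(store)["add"]
print(add(store, 2, 3))  # 5
```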
Gcore, gcore.com
ACRONIS UNVEILS CYBER PROTECT 16: A NEW ERA IN CYBER SECURITY
Acronis, a provider of cyber protection, has introduced the latest release of its flagship product – Acronis Cyber Protect 16. Acronis Cyber Protect delivers robust protection against cyber threats and unparalleled backup and recovery capabilities. This latest version establishes a new benchmark in easy and fast recovery after cyber attacks or data loss, especially for modern multi-site organisations.
As technology advances, the necessity for an integrated cyber security and data protection solution fit for distributed organisations has become increasingly evident. Factors including the rise of remote work and a rapidly changing threat landscape have increased attack surfaces and raised data access and privacy concerns. The product introduces a new centralised dashboard that simplifies management through a single pane of glass, providing visibility across the entire environment.
Acronis Cyber Protect 16 provides a unique integration of backup, disaster recovery, cyber security, and remote endpoint management delivered via a single, cost-effective, efficient platform.
Acronis, acronis.com
VULTR REVOLUTIONISES GLOBAL AI DEPLOYMENT WITH INFERENCE
Vultr has announced the launch of Vultr Cloud Inference. This new serverless platform revolutionises AI scalability and reach by offering global AI model deployment and AI inference capabilities.
Today’s rapidly evolving digital landscape has challenged businesses across sectors to deploy and manage AI models efficiently and effectively. This has created a growing need for more inference-optimised cloud infrastructure platforms with both global reach and scalability, to ensure consistent high performance. This is driving a shift in priorities as organisations increasingly focus on inference spending as they move their models into production.
But with bigger models comes increased complexity.
Developers are being challenged to optimise AI models for different regions, manage distributed server infrastructure, and ensure high availability and low latency.
With that in mind, Vultr created Vultr Cloud Inference, which will accelerate the time-to-market of AI-driven features, such as predictive and real-time decision-making, while delivering a compelling user experience across diverse regions.
Vultr, vultr.com
SIEMON LIGHTVERSE COPPER/FIBER COMBO PATCH PANEL WINS 2024 BIG INNOVATION AWARD
The Siemon Company, a global leader in network infrastructure solutions, has announced that its LightVerse Copper/Fiber Combo Patch Panel has been named a winner in the 2024 BIG Innovation Awards presented by the Business Intelligence Group.
The LightVerse Copper/Fiber Combo Patch Panel breaks new ground in space constrained data centres and intelligent buildings by seamlessly integrating high performance fibre optic and copper connectivity within a single 1U rack space. This innovative solution eliminates the need for separate panels, saving valuable space and simplifying network deployments.
“We are proud to receive this prestigious recognition from the Business Intelligence Group,” says Henry Siemon, President and CEO of Siemon. “The LightVerse Copper/Fiber Combo Patch Panel embodies our commitment to continuous innovation, providing our customers with the agility and efficiency they need to thrive in today’s hyper-connected world.”
“Innovation is driving our society,” says Maria Jimenez, Chief Nominations Officer, Business Intelligence Group. “We are thrilled to be honouring Siemon as they are leading by example and improving the lives of so many.”
Organisations from across the globe submitted their recent innovations for consideration in the BIG Innovation Awards. Nominations were then judged by a select group of business leaders and executives who volunteered their time and expertise to score submissions.
The LightVerse Copper/Fiber Combo Patch Panel is just one example of Siemon’s unwavering commitment to innovation. The company invests heavily in research and development, continuously pushing the boundaries of what’s possible in the network infrastructure landscape. For more information, please visit www.siemon.com/LVCombo.
Siemon, siemon.com