Building a Digital Society eBook: Now Meets Next



Building a digital society

Now meets next

More than fiber, we’re fiber solutions

With more than four decades of fiber experience and a holistic network approach, nobody understands and solves network complexity like CommScope. Our advances in fiber-optic technologies have been the catalyst for many of the industry’s significant milestones.

Today, we’re proud to partner with some of the world’s most forward-thinking building and campus owners and data center managers, helping them navigate the challenges of advancing architectures and topologies, higher-speed migration, fiber density, climate change and more.

Globally positioned, locally focused and with clear insight of what’s next, CommScope stands ready to put it all on the line to help you evaluate, evolve and advance.

Let’s get started

Trusted fiber partner for your building, data center and campus networks

Demands on your network have never been higher or more varied, and your network infrastructure has to be up to the task. CommScope enterprise fiber solutions can help. Our fiber solutions for the enterprise cover every connection and cable, core to edge – from powered fiber cabling that extends your traditional structured cabling network to the backbone fiber and splice enclosures that support ever higher demands for throughput speed, latency and capacity.

With 40+ years of networking expertise and a comprehensive fiber portfolio, CommScope provides fiber solutions for building, data center and campus networks.

[Diagram: CommScope fiber products across data center networks, enterprise buildings and campus environments – network management and analytics, Constellation controller, small cells, automated infrastructure management, FiberGuide, Propel fiber connectivity panels, switching and powered fiber cable solutions.]

Scan here to download the fiber solutions brochure

Contents

Chapter one: What’s now?
What’s the deal with Industry 4.0?
The arrival of the 5G Edge
Why smart cities are both incredible and inevitable
The revolution will be optimized

Chapter two: What’s next?
Connectivity in 2023: Emerging tech to drive industry forward
Interplanetary Internet, digital zebras and the disconnected Edge
Standards updates for optical fiber: What you need to know
Data centers and Industry 4.0: The next manufacturing revolution

Chapter three: Data centers – A digital transformation
Burgeoning data center demands lead to more resilient fiber platforms
Data center interconnection: Could we see a significant shift in the status quo?
Data center evolution in 2023: Efficiency is the name of the game
Q&A with Ken Hall, CommScope

Introduction:

From telehealth and smart cities to remote working and keeping in touch with loved ones, it’s safe to say we are living in an increasingly digitalized world.

Connectivity is king, but what we’re experiencing today is just the tip of an ever-growing iceberg. As our digital society continues to develop, so too does the complex digital infrastructure making it all possible.

But when the sky is the limit, how do we ensure what’s now translates to what’s next? In this eBook, we examine the applications driving innovation, as well as the digital transformation of not only our day-to-day lives, but the data centers and networks that form the backbone of today’s – and tomorrow’s – digital world.


Chapter one: What’s now?

Data centers have come a long way since the monolithic blocks of the past. Rewind five-plus years, and terms such as artificial intelligence (AI), machine learning, "Industry 4.0," "the Edge" and "5G" were relatively unheard of. Now, they’re the buzzwords dominating the industry. But what do they actually mean?

In this chapter, we de-mystify the jargon to find out where we currently sit as an industry and examine some of the challenges that come with an increasingly connected world.


What’s the deal with Industry 4.0?

And how do we deal with it?

As industrial revolutions go, Industry 4.0 is decidedly different. Whereas the three preceding industrial disruptions focused on making the production process faster and more efficient, the fourth Industrial Revolution is about connecting people, information and processes. As such, it has the potential to radically alter not just the business of manufacturing, but how enterprises of all kinds operate.

Let’s get to it: How and why did Industry 4.0 develop, and what are its implications for your network infrastructure?

How we got here

The hype around Industry 4.0 predates the term itself. The concept first appeared in a 2011 strategic document generated by the German government outlining a plan for the computerization of manufacturing.

The beginnings of Industry 4.0, however, date to the early 2000s with the birth of the internet of things (IoT), advances in power over Ethernet standards, and the arrival of faster 4G wireless, which enabled deployment of millions of sensors.

The first Industrial Revolution, in the late 1700s, was triggered by the development of steam power and hydropower to drive greater production volumes. At the beginning of the 20th century, widespread industrial electrification enabled the first assembly-line, mass production factories, which quickly supplanted manual production processes and signaled the second Industrial Revolution.


The first and second revolutions were all about how production was organized and powered, with the goal of increasing productivity and reducing labor costs. The third revolution, which began circa 1969, featured the use of digitalization and computer technology. While the primary effect was the continued automation of the factory, the digital age opened the door to computer-driven advancements that have gone well beyond the realm of productivity. Which brings us to Industry 4.0.

Why Industry 4.0 is different

Whereas the third Industrial Revolution was defined by widespread digitalization (the rise of computers, process logic controllers, etc.), the fourth Industrial Revolution is all about fusing digital, physical and virtual resources to create intelligent processes that think, do and respond faster and more accurately than humans alone can.

As CommScope’s Kamlesh Patel puts it, the fourth Industrial Revolution is a way of describing the blurring of boundaries between the physical, digital, and biological worlds. It’s a fusion of advances in artificial intelligence (AI), robotics, IoT, 3D printing, genetic engineering, quantum computing, and other technologies.

As Henrik von Scheel, one of the fathers of Industry 4.0, argued, “In essence, the centerpiece of Industry 4.0 is the people – not technology.” The goal is to use cyber-physical technologies to enable autonomous and real-time decision making, monitoring and processes to create a hyperconnected, intelligent and proactive environment.

Industry 4.0 has the potential to empower business owners to better control and understand every aspect of their operation, allowing them to leverage instant data to boost productivity, improve processes, and drive growth. In that respect, it is fundamentally different from any of the preceding revolutions.

And while the concept of Industry 4.0 has traditionally been translated to mean “smart manufacturing,” it is rapidly being adopted in industries such as utilities, logistics, energy, healthcare, insurance and others.

Drivers and enablers

To be sure, this next industrial revolution hasn’t developed of its own accord; a variety of market forces have been pushing companies in this direction for years.

Perhaps the most influential trend has been the mainstreaming of digital technologies into people’s everyday lives. An estimated 83 percent of the world’s population now owns a smartphone, with all the real-time convenience of “anywhere, anytime” connectivity.

Our expectations regarding how we interact and transact with businesses mirror this new reality. This is forcing organizations to become more agile, responsive and cost-efficient by automating processes, making decisions off real-time data and leveraging deeper insights to become leaner and more productive.

In addition, industries across the board are facing a significant de-skilling of the workforce as baby boomers retire and companies have a hard time finding younger workers who are willing to acquire the training necessary to replace them. This is especially affecting manufacturing and, to a lesser extent, IT-related professions. Unless workplace dynamics unexpectedly change, enterprises must pivot to solutions that enable more standardized, automated and intelligent processes.

The technologies helping to drive these processes represent a mix of process-oriented solutions (such as advanced robotics, IIoT and additive manufacturing) plus more powerful data analytics (big data, AI, augmented reality, data simulation and more).

While these state-of-the-art developments garner many of the headlines, they would not be possible without the wired and wireless network connectivity needed to bring it all together. Developing the network infrastructures that can ably support the ubiquitous connectivity, bandwidth and power demands of Industry 4.0 looms as one of the toughest challenges.

Network infrastructure challenges

The technologies fueling these changes will rely on an evolved network infrastructure. To support the sheer volume of connected and powered devices and data traffic, the infrastructure must meet a few basic requirements:


• Simplify network design to enable faster deployment and provisioning

• Reliably power and connect a vast number of new network devices and systems at the Edge

• Easily scale and reconfigure to support converged, segmented and hybrid networks.

The network infrastructure challenges can be separated into three large buckets:

Performance

The number of connected devices, a surge in data traffic and demand for real-time response converge to create a perfect storm of latency, reliability and bandwidth issues. Of these, latency may be the most critical: 83 percent of global IT leaders say network latency is the biggest determinant of application performance.

Industry 4.0 networks will likely rely on high-speed wired and wireless connections and an array of communication interfaces (private LTE and 5G, DAS, Bluetooth, etc.). Multi-gig network capabilities will likely need to extend throughout the facility, with multiple failover points to support ultra-reliable, ultra-low latency (UR-ULL) throughput.

Architecture

Given the growth in decentralized Edge-based connectivity, network managers will likely extend the reach of structured cabling copper networks beyond the existing standards-based 100-meter distance limitation. Choosing the right cloud deployment option (private, public, hybrid) is also a big part of the design, as it determines which on-premises components are necessary.

Manageability

As network architectures become more distributed and complex, the time and cost of deploying and managing the infrastructure continue to grow. This is particularly the case with regards to supporting remote, Edge-based devices and systems. The need to reliably and quickly add network capabilities, whenever and wherever, suggests modular infrastructure solutions with distributed control.

What you should be thinking about

In any of its manifestations, Industry 4.0 will create far-reaching network changes in manufacturing, as well as data center and other enterprise environments. The following considerations are offered to help IT and network managers prepare for the disruptions.

Faster network speeds and lower latency performance could play an increased role as new, more resource-intensive and time-sensitive applications (like digital twinning) emerge.

In digital twinning, each physical aspect of the manufacturing process is virtually represented by its “digital twin.” Using detailed CAD modeling, the digital twin simulates real-world outcomes – enabling the data analysis and system monitoring needed to improve planning and prevent problems before they occur. These capabilities will rely on new lower latency data center deployments running at faster network speeds.
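To make the pattern concrete, here is a minimal Python sketch of a digital twin – the class name, thresholds and telemetry values are invented for illustration, and a real deployment would feed the twin from live sensor streams and far richer CAD-based models:

```python
# Minimal digital-twin sketch (illustrative only): a virtual object mirrors
# telemetry from its physical counterpart and flags drift before failure.
from dataclasses import dataclass, field

@dataclass
class SpindleTwin:
    """Hypothetical digital twin of one machine spindle."""
    nominal_temp_c: float = 60.0   # expected steady-state temperature
    alert_band_c: float = 8.0      # tolerated deviation before alerting
    history: list = field(default_factory=list)

    def ingest(self, reading_c: float) -> None:
        """Mirror a telemetry sample from the physical asset."""
        self.history.append(reading_c)

    def needs_maintenance(self) -> bool:
        """Reason about the mirrored state, not the machine itself."""
        recent = self.history[-5:]
        if len(recent) < 5:
            return False
        avg = sum(recent) / len(recent)
        return abs(avg - self.nominal_temp_c) > self.alert_band_c

twin = SpindleTwin()
for sample in (61, 63, 66, 70, 75, 79):   # simulated sensor feed
    twin.ingest(sample)
    if twin.needs_maintenance():
        print(f"Schedule maintenance: spindle trending hot at {sample} °C")
```

The point of the pattern is that the maintenance decision is made against the mirrored state, so it can run anywhere – which is exactly why low-latency data center capacity matters.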

Ubiquitous network connectivity is another critical enabler as more factories are upgraded with a mix of private 5G wireless networks. Two important requirements network managers should consider are: having the right connectivity and cabling in place to support the mobile edge compute (MEC) hardware, and ensuring the infrastructure to the WAPs can support multiple generations of network architecture.

Network security is among the most talked-about issues regarding Industry 4.0. The trend toward IIoT, machine learning, big data and IT/OT/IP network convergence is giving rise to new and wideranging security concerns. To guard against the various security attacks, CommScope recommends network managers take the following steps:

• Identify and fix outdated systems, unpatched vulnerabilities and poorly secured files

• Ensure device and technology partners provide regular software updates and security patches

• Monitor all OT assets, in real time, to identify and quarantine vulnerabilities, if necessary.

In the coming years, the emergence of Industry 4.0 will create a vast new set of opportunities across the manufacturing ecosystem and beyond. The implications regarding network performance demands, architectures and component designs will help redefine the physical layer infrastructure as we know it.

To keep abreast of the developments and understand how they affect your network’s technology roadmap, stay connected with CommScope. In our capacity as a global leader in network infrastructure, our finger is on the pulse of the industry and the emerging technology trends and issues driving it. Stay in touch and be prepared for what’s next.


The arrival of the 5G Edge

5G promised great things, but it has hit many hurdles on its way to reality. What are the prospects now?

5G has been heralded as the future of technology, with its quicker speeds, shorter latency, and greater efficiency.

Sure, there were 3G and 4G before it, but the telecoms world has made a big noise about 5G and why it promises to be a game-changer.

Since the launch of the next-generation connectivity in 2019, markets worldwide have adopted 5G, with most smartphones released in the last two years being 5G-compatible.

5G mobile connections could surpass one billion by the end of this year, according to analyst firm CCS Insight. This number is expected to explode to 4.5 billion by 2026, with China tipped to lead the way.

Another analyst firm, Counterpoint Research, recently revealed that one tipping point has been reached: in the second quarter of 2022, more 5G smartphones were sold than 4G mobiles.

Relationship with Edge

So 5G is already having a big impact on the telecom industry and reaching consumers. At the same time, it’s obvious that 5G will tie in closely with the Edge, which places fast compute close to data sources and connects directly to devices.

Finnish vendor Nokia says that ‘cloud-native Edge infrastructure will be essential to enable the successful implementation of 5G’, noting that it will support new, advanced use cases powered by network slicing capabilities.

Meanwhile, STL Partners, a telecoms consultancy firm, has been pretty vocal about why 5G needs the Edge.


In a report, Tilly Gilbert, principal consultant and Edge practice lead at STL Partners, notes that Edge computing will reduce latency on the networks.

To achieve ultra-low latency, necessary for use cases like autonomous drones or remote telesurgery, the combination of 5G and Edge computing will be necessary, especially in the long term.

Another key benefit of 5G for Edge computing is that it can enable operators to change their backhaul business models, adds the report. It will allow data to be filtered at a local edge site, with whatever data is necessary being stored in the centralized cloud after being analyzed and rationalized (an architecture also referred to as multi-access Edge computing, or MEC).
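As a rough sketch of that backhaul pattern – the function names, thresholds and data below are our own illustrative assumptions, not anything from the STL Partners report – an edge site might reduce a raw sensor stream to a small summary before anything crosses the backhaul:

```python
# Toy MEC-style filtering sketch (illustrative): analyze raw readings at a
# local edge site, then forward only the rationalized summary to the cloud.
from statistics import mean

def edge_site_filter(raw_samples: list[float], threshold: float) -> dict:
    """Runs at the edge: keep anomalies, reduce the rest to a summary."""
    anomalies = [s for s in raw_samples if s > threshold]
    return {
        "count": len(raw_samples),
        "mean": round(mean(raw_samples), 2),
        "anomalies": anomalies,        # the only raw data worth keeping
    }

def send_to_cloud(summary: dict) -> None:
    """Stand-in for the backhaul link to the centralized cloud."""
    print(f"Uploading {summary} instead of the full raw stream")

# 1,000 readings stay at the edge; a tiny summary crosses the backhaul.
readings = [20.0 + (i % 7) * 0.1 for i in range(1000)]
readings[500] = 35.2                   # one genuine anomaly
send_to_cloud(edge_site_filter(readings, threshold=30.0))
```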

5G has been a slow burner

But where are we with 5G currently?

Nokia senior director of sales engineering Fayyaz Patwa told DCD’s Stephen Worn that “the complexities around 5G mean it will take time” for 5G to take off fully. He made the comments during a fireside chat, titled "Why has the rollout of 5G not been as easy as expected?"

It’s also the case that proponents in the telecoms industry may have raised hopes too high: “Part of the reason we haven’t met expectations with 5G is because the team and I have been beating the drums on 5G since about 2015, and we’ve set high expectations,” said Patwa.

“1G through to 4G was primarily user-centric and data-centric, but 5G is very different, not just about consuming large data and speed; it’s about latency and reliability. It’s a game-changer and achieving these advanced 5G features requires the complete architecture to be redesigned. 5G requires a complete redesign.”

His comments were echoed by Viavi Solutions CTO Sameh Yamany, who adds that the industry needs to be patient.

“It’s a great time to be in this era with 5G coming in, but there’s a lot of expectation with 5G. And we need to understand that it’s complex technology.

“It’s promised a lot and people were expecting a big change, but we need to look at the reality. There’s been a lot of complications such as spectrum availability, device availability, and how 5G is different from 4G architecturally, and 5G requires a lot of different phases to go in.”

Yamany adds that while 5G will deliver greater speeds through its lower latency and greater bandwidth, this isn’t the thing that people should be excited about.

He sees a bigger picture: 5G will drive the next industrial revolution, he says.

“Here is the reality, 5G will bring speed and that’s what people will love, but that’s not what 5G is about, it’s about the connecting of machine-to-machine (M2M) and driving the next industrial revolution.”

He explains further that 5G will be crucial to the use cases that are expected to explode in the coming years, including private 5G networks, utilities, railways, aviation, Industry 5.0, and the education sector.

Hurdles around 5G

Delving a bit deeper into the topic of 5G, Worn asked the duo about the key hurdles 5G has faced to date. Both pointed to challenges around spectrum, notably in the United States with the Federal Communications Commission (FCC), plus conflicts over 5G around airports.

“In the race for 5G, the US is behind,” he observes, but, to offer a dose of reality, notes the vast size of the country plus the different types of terrain within the States.

However, he does believe the US has been slightly behind with the rollout, despite US operators spending over $80 billion on spectrum in the C-band.

Patwa thinks that, in some areas, the FCC is part of the problem: “The other challenge is the FCC. We feel the FCC has fallen behind in allocating C-Band spectrum, which is referred to as beachfront spectrum [i.e. attractive property] in the race for 5G. It was auctioned about a year ago and the carriers spent billions.


And then we ran into this airport and FCC issue, which slowed down the rollout even further.”

He’s referring to the FCC’s skirmish with the Federal Aviation Administration (FAA) late last year, which concerned spectrum in the C-Band. The FAA warned that 5G transmission within this band might interfere with flight safety.

The fallout from this was enough to delay AT&T and Verizon’s planned rollout of 5G services in the C-Band, near airports. The FAA noted that its radar altimeters use spectrum in the 4.2GHz to 4.4GHz band, while US operators were anticipated to operate uncomfortably close to that in C-Band n77 spanning 3.3GHz to 4.2GHz.

Subsequently, AT&T and Verizon agreed to delay the full rollout of their 5G networks until July 2023 in order to allow airlines more time to mitigate fears of interference.

The Americans aren’t the only ones that have had issues with the 5G rollout, with German Open RAN newcomer 1&1 recently delaying its own 5G rollout by six months.

And in the UK, Ericsson’s UK head of public affairs Patricia Dooley told a panel at this year’s Connected Britain event that the nation needs to show the same enthusiasm for 5G as it does for fiber.

“I’d like to see the same level of enthusiasm for 5G rollout as there is with fiber roll-out,” she said.

“I’d love to see our Prime Minister talking about 5G connectivity and fiber connectivity. I think it’s really important that there is support there for the operator community, both large and small, to deploy this network everywhere and quickly.”

Different architectures

Yamany agrees that the spectrum issues haven’t helped, but adds that 5G represents a complete "architectural change" to those before it.

“5G is bringing a lot of architectural change, and maybe (in the future) 6G will be the continuation of 5G Advanced, focusing on non-terrestrial networks. There have been complicated political and environmental barriers around the world to 5G deployments. 5G is moving from being for consumers to being for machines.

“The blessing of 5G is also its curse. It’s moving to industries, but these industries are moving at their own pace and have their own standards to meet,” says Yamany.

Speaking to DCD, Nokia head of wireless networks Jane Rygaard had a similar opinion, noting that the architectural change to 5G is something that cannot be overlooked, with the move from a centralized network to a network edge that is highly decentralized.

She notes that the industry spent 15 years centralizing all the existing networks but is now looking to decentralize these networks, as they are easier to scale and provide better system reliability and security.

Use cases

As for the use cases, there’s a range of sectors anticipated to grow in the coming years, from remote surgery to things we’re already seeing, such as cloud gaming.

Another industry is agriculture, which Dell Technologies UK networking director Lee Larter tells DCD can be pivotal for sustainability.

“There is an abundance of opportunities around 5G and Edge. Some sectors have many options, for example, farming and agriculture. When we use Edge solutions to understand farming better, we can reduce waste and optimize more sustainable methods.”

Larter adds that 5G will transform many industries once it's fully available, and should be a key focus for the UK government in its approach to "Levelling Up" the UK’s economy.

“Once 5G is available everywhere, it will give us reliable and dedicated connectivity that could help to transform many industries. It would also help to address some of the imbalance that the government is trying to even out with its leveling up agenda.”

Another use case that Dell boasts about is its partnership with the Formula 1 racing team McLaren. The company is able to “leverage its data which is turned into new innovations that help McLaren improve their performance,” says Larter.

“Modernized connectivity is essential for getting the best out of Edge solutions,” adds Larter.


“All these use cases require exceptional levels of information flowing, which requires very low latency without delays. 5G is an expanded, low-latency, highperformance network that captures all this real-time data for business advantage.”

Metaverse is closer to reality

The Metaverse will also become more of a reality thanks to advancements in 5G and Edge, says Rygaard. Or Metaverses, because it’s plural and won’t just be one, she said.

This virtual world that exists online will be made up of different websites, social media platforms, and games to create cyberspace. It’s something that Facebook founder Mark Zuckerberg is looking to push.

But 5G and the Edge will play a key part in bringing the Metaverse to life, Rygaard says, as virtual reality (VR) and related technology will rely on the lower latency of 5G, which is a critical enabler to bring this virtual futuristic world to life.

Rygaard: “I think the most significant part is how do we, from a security and privacy perspective, make sure that we have data available in the right places for the right reasons?

“The more we build physical systems in applications, the more we can talk about the Metaverse going forward. The Edge plays a massive role because we bring applications closer to where they make sense.”

Need to modernize

Without the Edge, organizations will fall behind, adds Larter, who warns that businesses will need to keep up to date.

He argues it’s impossible to ignore, with many smart household products using data. Data is critical for the future, and how we use it will be even more important. Again, 5G will be important in this.

“Organizations are going to need Edge to remain competitive. If businesses don’t start modernizing and using these new technologies to their advantage, they will be left behind.

“It will become imperative for them to modernize and make better use of the data they’re generating. But to do this well, 5G is needed.”

On the subject of smart home devices such as Alexa, Larter warns businesses to figure out the best way to get vast amounts of valuable information from these devices while ensuring they’re protected and connected to other data sources.

“When you have more data, you have a more extensive threat surface area, which needs protecting. With Edge, hackers can look for vulnerabilities across various devices rather than within the data center, which means we must create new security solutions.”

The advances in Edge and 5G will bring opportunities for businesses, says Larter, who adds that with the new technology capabilities will come a new generation of digital skills.

“With 5G and Edge come new digital skills and opportunities for future generations. Even when leveraging this new technology and collecting valuable datasets, we need human input to structure the information and create the right business outcome.”

Quite what these digital skills turn out to be is not clear yet, but a burgeoning community is working on it.

In any case, there is little doubt that the future could be very exciting if all the promises made for 5G, Edge, and related technologies turn into reality for businesses.


Why smart cities are both incredible and inevitable

And your data center needs to be prepared

When we look into the future, we have a terrible habit of underestimating it. We look at the last 10 years, how far technology has come in that time, and expect this development to be mirrored in the decade coming.

This couldn’t be further from the truth. In reality, progress follows the law of accelerating returns, and the last 10 years saw significantly more growth than the decade before.

Explained as simply as possible, the "law of accelerating returns" is based on the principle that, as we develop more technology, further growth becomes easier to achieve. We learn from experience, and we learn faster thanks to the resources available to us.

When we think about this velocity, smart cities seem just around the corner, and as a concept they are heavily reliant upon data centers for their success.

“The world population presently stands at approximately 7.7 billion people, with nearly four billion or 54 percent living in cities today. By 2050, it's projected that more than two-thirds of the world population will live in urban areas, with seven billion of the expected 9.7 billion people occupying cities globally.”

Marc Cram, the director of new market development for Server Technology, a brand from Legrand, sees these population densities as directly related to the development of smart cities. This is, in part, due to necessity. As the population expands, we need to find a way to manage it effectively in the highly dense areas. Resources will be stretched and become reliant on efficiency.


“A successful smart city will depend on solutions across six major domains: Economy, environment and energy, government and education, living and health, safety and security, and lastly, mobility. The common understanding for a smart city in 2021 is one that provides for the real-time monitoring and control of the infrastructure and services that are operated by the city, thereby reducing energy use and pollution while improving health, public safety, and the quality of life of the citizens and visitors.”

It should not go unacknowledged that this could trigger some anxiety. The concept of real-time monitoring can feel a little too Orwellian for many people’s comfort but, frankly, is already a part of our daily lives.

We are already monitored by GPS on our mobile phones – our preferences are tracked on the internet, and microphones listen to our every conversation. The question that smart cities answer is how this integration of technology can help our daily lives.

This process is already beginning in New York.

“Smart, in this case, means being efficient in the use of both human and financial capital to minimize energy usage while ensuring the quality of service for public utilities such as water, electricity, and transportation, and to provide for the day-to-day safety of people and resilience of infrastructure.

“Smart means taking advantage of automation and remote management capabilities for lighting, power, transportation, and other mission-critical applications to keep the city running. With thousands of cameras and millions of sensors already in use around New York City, they are already well on their way to being a smart city that processes over 900 terabytes of video information every single day.

“Recently, the city of New York committed grant monies for the development of a couple of new IoT-based applications that will likely require the use of smart street lighting that is available through the New York Power Authority.

“First is a real-time flood monitoring pilot project led by the City University of New York and New York University to help understand flood events, protect local communities, and improve resiliency. Two testbed sensor sites – one in Brooklyn and the other in Hamilton Beach, a part of the Queens area with a history of nuisance flooding. The software solution that is being tested must act as an online data dashboard for residents and researchers to access the collected flood data.

"Secondly, the city is testing computer vision technologies that automatically collect and process transportation mobility data through either a live video feed, recorded video, or through some sort of site-mounted sensors. Currently, street activity data is collected through time and personintensive methods that limit the location, duration, accuracy, and number of metrics available for analysis. By incorporating computer vision-based automated counting technology, the city hopes to overcome many of these limitations with flexible solutions that can be deployed as permanent count stations or short duration counters with minimal setup costs and calibration requirements.”

At some point, this technology could develop even further, interacting with citizens on a personal level by utilizing Edge computing. As Cram pointed out, “for example, the light fixture can be a WiFi device. A desk itself could be outfitted with sensors to detect how warm or cool you are, and measure movement to suggest getting up and walking around for a period of time.”


The revolution will be optimized

Deus ex machina facta reali – a god from the machine, made real

Chris Merriman, DCD

Whether billed as the savior of humanity, the fifth industrial revolution (overlapping with the fourth, currently in progress), or the end of humanity, everyone seems to have a take on AI – and if you don’t, you should. AI – artificial intelligence – has matured at an alarming speed and is set to touch every aspect of our lives in the coming years, and data centers are certainly no exception.

Indeed, for the data center operator, it’s a multi-headed beast. In the first instance, the compute power required for AI functions to be viable will require more data centers, with higher capacities, and more advanced equipment, such as GPU-enabled servers – all consuming more high-voltage power, and requiring bigger communication “pipes” at higher speed and lower latency.


Meanwhile, that same AI will also change the way in which data centers are designed and operated, allowing facilities to work with speeds and efficiencies hitherto unseen, and identifying maintenance issues before they even occur. DataCenter Dynamics spoke to four experts in the field about how they see the rise of AI changing the data center landscape.

“AI provides greater levels of predictability and the opportunity to improve efficiency and effectiveness of data centers, including offering better insights into resource management, as well as the optimization of both workload management and energy management within the data center. Asset management and predictive maintenance program management are just some of the potential benefits,” explains Com Shorten, senior director of Data Centers at JLL.

Max Smolaks, research analyst at the Uptime Institute, adds, “This should result in improved efficiency and, in theory, free up staff to focus on more important tasks. AI will also play an important part in the management of ‘lights out’ data centers that don’t have any human staff nearby.”

For some vendors, the race is already on, as Matthew Farrant, global COO for Data Center Solutions at CBRE, points out: “We believe AI has the potential to be part of the future success of the data centers industry. To reap the benefits of AI, operators will need to shift their operations to be AI-led. This isn’t a quick shift, but it’s already starting to take place with hyperscalers.”

David Liggit, founder/CEO of datacenterHawk, echoes this point: “Hyperscale data center users are already implementing high-density servers and configuring hall deployments to optimize the latency between AI racks. Vendors will have continued business with more opportunities to sell new products. Architects and technicians are already working on how to construct new sites and adapt to fit the needs of data center users who are equipping AI.”

The change won’t happen overnight, though at times it feels like that’s exactly what’s occurring. It’s not enough to wake up one morning and decide, “We’re going all in on AI.” The change will come as new data centers are commissioned and old ones are refurbished, often in conjunction with bringing them into line with modern sustainability standards. Indeed, the very location of your next data center could be dictated by designing with an AI lens. Shorten summarizes it like this:

“There is an ongoing study of the “site selection strategy” regarding the location and proximity of AI-dedicated data centers. The current cloud data centers are predominantly located close to large metros and the architecture configuration is based on “Zonal” (availability zones and regional zones) to provide high availability and low latency. The AI data center location strategy may be different, allowing for greater latitude and also the need to “follow the power” outside of main metros, which traditionally have significant challenges delivering large 100MVA grid connections.”

Liggit points out, “Many of the early leases that use AI are taking up just a quarter of the space leased while using the total amount of allocated power. With AI’s power usage being so concentrated, cooling is paramount for server effectiveness and longevity.”

Whilst a real-life “Skynet,” as predicted by the Terminator movie franchise, remains far removed from the truth, the march of AI does present additional considerations for the data centers using it, particularly surrounding security. A vulnerable AI has the potential to flood consumers with fake or corrupted data, as Smolaks tells us:

“AI-based systems depend on data sent through the network, which means cybersecurity should be a major concern for any organization deploying such systems. In addition, new types of attacks aimed specifically at AI models are emerging. One example is data poisoning – which can force models to produce incorrect results. It remains to be seen what effect such attacks would have on critical infrastructure.”

Conversely, however, Farrant points to a strong argument for greater collaboration between vendors to ensure interoperability between different AIs, and the benefits of pooling training data: “We believe it’s good for the industry to apply some focus to standards around data so that any investment in supplementary sensors to ‘feed the AI’ doesn’t negatively impact the customer if they decide to move from one AI solution to another.

“We also need to remember that the more data an AI engine has available, the better the insights it can generate. Everyone wants to take advantage of ‘big data’ but this can’t be created in isolation – to help itself, the industry needs to find a way of being comfortable about contributing operational data to a ‘shared pool;’ otherwise, no matter how good their AI engine, its effectiveness will be constrained by only having data from one site.”

With so much to consider, it’s easy to question the cost benefit, but our experts are certain that this isn’t an either/or situation – AI is the way of the future and it has benefits that are already being felt, even though, at this early stage, it’s hard to plot a timeline for a return on investment for a technology whose full potential is yet to be realized. Liggit explains:

“In the business world, companies are already making employees' work hours more efficient by delegating tasks to AI. Data center providers have a unique opportunity to capitalize on this opportunity. All the significant providers or competitive, growing providers are planning facilities with AI in mind, currently bidding on AI opportunities, or already implementing AI deployments.”

Meanwhile, outside the data center bubble, it’s also clear that migrating the world’s data centers to AI control doesn’t just benefit those of us in the industry – its advantages are far-reaching and societal.

Shorten tells us: “The potential for AI to transform businesses, industries and society has been mounting for decades. But recent advancements have moved the science from niche to mainstream. The technology’s proficiency in writing, drawing, coding and composing has compelled corporate leaders to consider both the opportunities and threats that AI presents for their future. For commercial real estate, it’s clear that strategically embracing AI could transform the sector.”

While Smolaks adds an upside, with a caveat:

“Greater use of AI in data center operations will result in more efficient, more cost-effective facilities, and that should drive down the prices of digital services. At the same time, this technology will introduce new risks to uptime, and new points of failure. The first data center outage caused by AI is probably not far away.”

So what have we learned?

Fundamentally, the data center industry is staring down the barrel of one of the biggest existential changes in its history, and being a spectator is not an option. We’ve learned that, while the efficiencies that AI brings to the data center environment are huge, and potentially an investment that will pay dividends, there are inherent risks, particularly around site security and training data that we will all need to be conscious of.

Over the next decade, we will continue to see exponential growth in the demand for and construction of data centers. The differences will come in the way they are designed, located, built, and maintained. The results for the world at large, on the other hand, are almost too myriad and too exciting to contemplate. It’s going to be quite a ride. 


Chapter two: What’s next?

In this chapter we examine the emerging technologies driving the industry forward, life after 5G and the ever-changing standards when it comes to connectivity. We also sit down with CommScope’s Alastair Waite to find out what Industry 4.0 will look like for the data center industry as it continues to progress, the people needed to make it all happen and how big of a role AI will play in fueling our digital destiny.


Connectivity in 2023: Emerging tech to drive industry forward

After a challenging 12 months, there is growing optimism that demand for broadband services will surge again in 2023

Trevor Smith, CommScope

Widespread geopolitical trends, along with the cost of energy and low-growth “slowflation,” had an outsized effect on the growth of broadband networks. Added challenges from supply chain issues, skilled labor shortages and logistical constraints limited the speed of network deployments. Economists across the world are predicting recessions, but with variances about how long and deep they will be, with some countries impacted more than others. Despite this outlook, the reality is that connectivity is now vital; it is no longer seen as discretionary but as essential – for economic recovery, government initiatives, business infrastructure and for household personal use.

A post-pandemic understanding of the importance of broadband and closing the digital divide, along with unprecedented government funding, an accommodating financial environment, and a mature crop of network technologies, has yielded a steep demand curve for building next-generation networks, setting the stage for unparalleled capacity expansion. At the same time, enterprises are re-evaluating their own operational cost structures as a result of the business disruptions of the last couple of years, with a keen interest in realizing greater efficiency across their organizations, including their networks.

After being thrown into remote work, remote learning, telemedicine and all the other new ways we’ve had to lean on our connectivity, people see ubiquitous connectivity not as a convenience or luxury, but a basic necessity of modern living. The connectivity haves and have-nots became more clearly defined during the pandemic, and we all know a person or business who was left behind in this sudden, sometimes awkward advance in the digital revolution.

In addition, everything from shopfloor automation to social media has raised universal awareness of the need for fast, reliable connectivity, especially for Internet of things (IoT) devices and artificial intelligence/ machine learning (AI/ML)-powered services underpinning Industry 4.0 potential. Whether you are planning your business’ next infrastructure overhaul or shopping for broadband capable of 4K streaming to every room in the house, chances are you’re keeping a keen eye on what we can expect from network technology manufacturers, ISPs and other sources.

This combination of accommodating environment (in both commercial and residential segments) and the never-ending craving for capacity is the reason why those of us deeply involved in the connectivity industry are so excited about 2023 – and all the possibilities that are emerging as the world eagerly embraces developing technologies.

Connectivity and the digital divide

The digital divide is not a new concept, but after the pandemic forced us to work in new ways, its impact was thrown into sharp relief. The United Nations revealed that there are still more than 2.9 billion unconnected people around the world who are in jeopardy of being left behind. Such staggering numbers prove that the industry still has a lot of work to do if we are to overcome the stubborn challenges posed by the digital divide. Closing this divide will mean facilitating the delivery of a wide range of services and applications to enhance these people’s lives – as well as improving business efficiency and productivity, which have their own downstream benefits to underserved communities.

That said, as more people get online and use more cloud-based applications, more pressures will be felt across the telecommunications industry. From network operators to data centers, challenges remain not only in getting connected, but also in ensuring reliable, ubiquitous connectivity.

Throughout 2023, governments and network operators will be working together to close the digital divide faster than we have seen in the past, including that part of the divide which runs through rural communities where high-quality broadband access may be limited by economics. This year is likely to see a significant advance in bringing rural communities online and getting people, places, and devices connected no matter where they are.

Investing in digital infrastructure has huge advantages for communities across the globe, both economically and socially. The task is monumental and we aren’t there yet, but current efforts make the future look promising. There now exists a critical mass of determination, resolve and funding in the industry to make universal broadband connectivity a plausible reality.

Reducing environmental impact to help with global sustainability

In 2023, sustainability will continue to be a key focus area for the industry as external pressures continue to rise from consumers, governments, and environmental groups. Governmental regulations are being introduced across the globe on environmental, social and governance (ESG) reporting, bringing these issues into sharper focus for the industry and the commercial customers we serve, many of which have their own sustainability goals to fulfill. Investment decisions are increasingly considering these regulations. Organizations will continue to use ESG measures as a way of attracting those investors.

To remain aligned with the goals of the Paris Agreement, the industry must reduce emissions by 45 percent before 2030, or risk contributing to the irreversible effects of climate change. The good news here is that industry leaders have already set ambitious internal targets to reduce power consumption and incorporate green initiatives into their organizations’ day-to-day activity, tackling the problem from multiple angles at once: materials sourcing, lighter designs, greater recyclability, improved packaging and logistics, and many other aspects combine to create lasting green impact. Further, by demanding transparency into their partners’ footprints, business leaders will be able to work together more effectively to tackle the issue of climate change.

It takes real results and quantifiable improvements to resonate with investors, employees and customers. This means a greater focus on the impacts of the whole supply chain is now on the table, with the industry looking at how it can move away from single-use materials or reduce the number of components needed for products.

All parts of the industry are keenly aware of the need to increase the efficiency of service delivery while, at the same time, reducing energy use. It is here we will start to see the innovations needed, whether through advancements in fiber and edge-based infrastructure technologies, or through machine learning and artificial intelligence.

Convergence: Networks and security

The continued growth and expansion of cellular networks, particularly the ongoing deployment of 5G, may have been touted by some as the beginning of the end for WiFi, but 2023 will likely prove to be a year of convergence rather than conquest between the two technologies.

The convergence of networks holds great possibilities for bandwidth, efficiency, security, and flexibility. Both WiFi and cellular technologies have expanded capacity with new bands, thanks to unlicensed shared spectrum. In WiFi’s case, this is the 6 GHz band accessible to WiFi 6E and WiFi 7, which basically quadruples throughput.

Likewise, in private LTE and 5G cellular networks in the United States, the addition of the Citizens Broadband Radio Service (CBRS) 3.5 GHz band adds 150 MHz of spectrum. This band is lightly licensed and far more easily available to enterprises than traditional 3GPP-licensed bandwidth. Outside of the US, similar concepts are being adopted to offer “industrial spectrum” for enterprise private mobile networks. In both cases, greater bandwidth means greater capacity – and greater potential for what the network can do.

As both WiFi and cellular continue to develop increasingly comparable capabilities, we can expect to see each of them aid the other through their complementary strengths, until eventually they become a converged, user-transparent unified platform that shifts seamlessly between technologies as needed. It is at this point that IoT can really start to take off, whether it is in one’s home or in a vast manufacturing facility.

Of course, as the trajectory of smart devices joining the IoT goes up, so too does the risk of these devices being exploited and hacked. Ransomware will continue to evolve as a threat, not just for large organizations, but for residential buildings too. In 2023, we will continue to see vendors consolidating security features into a single platform, complete with pricing and licensing options to make their solutions more attractive to more people.

Tools are currently available to lock down IoT devices, but, for many, those tools remain inaccessible. Enterprise IT departments have the skills and knowledge to do this, but, for households, it remains a major security deficit. A converged private network provides the foundation for software-based credentials management systems that can protect all IoT devices, for everyone. In a sense, such a credentials management system is the “IoT of IoTs,” in that it automates access and control to secure the entire IoT environment. This platform-level approach can also provide extra assurance as global microprocessor chip supply chains realign and manufacturing becomes increasingly domestically sourced.
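To picture what such a credentials management system might look like in software – the class and method names here are purely hypothetical assumptions, not a description of any shipping product – a minimal sketch could revolve around per-device, revocable credentials checked on every connection:

```python
# Hypothetical sketch of an "IoT of IoTs" credentials registry: each device
# gets its own revocable credential, checked whenever it joins the network.
import secrets

class CredentialRegistry:
    def __init__(self) -> None:
        self._tokens: dict[str, str] = {}

    def enroll(self, device_id: str) -> str:
        """Issue a unique credential for one IoT device."""
        token = secrets.token_hex(16)
        self._tokens[device_id] = token
        return token

    def revoke(self, device_id: str) -> None:
        """Lock a compromised or retired device out of the network."""
        self._tokens.pop(device_id, None)

    def admit(self, device_id: str, token: str) -> bool:
        """Gate every connection attempt on a valid credential."""
        return self._tokens.get(device_id) == token

registry = CredentialRegistry()
cam_token = registry.enroll("doorbell-cam-01")
print(registry.admit("doorbell-cam-01", cam_token))   # True: valid credential
registry.revoke("doorbell-cam-01")                    # device compromised
print(registry.admit("doorbell-cam-01", cam_token))   # False: locked out
```

The appeal of the platform-level approach is that the same enroll/revoke/admit loop can be automated for every device on a converged network, rather than left to each household to configure.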

Value over raw speed

The speed and volume of data being generated, processed and transported will continue to grow at breakneck speeds in 2023. From placing grocery orders online today to using 5G-connected driving assist systems in the future as the technology matures further, users expect full efficacy in data transmission, and downtime cannot be afforded. Significant latency, or lag, in a network is not an option.

Low latency is the key to unlocking the bandwidth that makes all of the applications the modern world has come to rely on work as intended. Data centers must be ready for the continual and growing pressure of increased network traffic, so an ongoing migration of these data centers to the edge of the network – in a quest to shave a few milliseconds off network latency – is virtually certain.

A few years ago, the world was content with 4G’s 50-millisecond latency. One-twentieth of a second was fast enough for what it did. However, 5G’s latency can be as low as 10 milliseconds, making it viable for applications that require near-instantaneous response times.

5G also supports a wide range of bands that each provide customized strengths that can be suited to a particular place or application. Its substantial sub-6 GHz bands (including C-band) are ideal for broader coverage of larger areas, as well as mixed indoor-outdoor areas. At the other end are mmWave bands (26 GHz and up) that provide incredible throughput speeds but can’t cover much distance and can’t easily travel through walls or other solid obstacles. Together, these available bands can be employed as needed to provide the most efficient and effective network to suit a particular deployment environment or goal.

For consumers, the growth of low-latency 5G means taking the next step into virtual reality and augmented reality – whether this is viewing 3D videos of the cheering crowd, as seen from the performer’s perspective, watching your favorite show or movie in 4K resolution while waiting for your flight, or calling up 4K resolution sports replays, curated statistics, and player profiles on demand. The applications range far beyond simple entertainment, however, as 5G’s low latency also empowers many other services that are increasingly important to modern living, such as smart home systems that reduce utility costs and improve safety, telemedicine, medical telemetry and other advanced healthcare applications such as remote robotic surgeries. Yes, those can run on 5G networks.

When it comes to home customer premises equipment (CPE) infrastructure, key advances are set to be made. The latest fiber-to-the-home (FTTH) infrastructure can economically deliver 10Gbps broadband right away. The emergence of new DOCSIS 4.0-compliant devices in the second half of 2023 will mean a tripling of the upstream speeds of residential connections.

For service providers and streaming services, 2023 will be all about customer retention and growth. Low latency is also becoming more important than raw speed, especially for gamers. In 2023, we expect gaming to become one of the prominent services to use 6GHz WiFi and, as we move toward more immersive experiences in the home with AR, VR and MR, lower latency will be required to maintain proper immersion quality.

Conclusion

While the development of technology is always evolutionary, we often feel the power of the societal revolutions it enables, from television to the computer to the internet to the cloud. In 2023, much of the planet will be introduced to a world where many of the visionary capabilities we've talked about for the last several years – from smart buildings to immersive metaverse reality – will start dramatically changing how we live.

As an industry, we will be working hard to close the digital divide in the most environmentally responsible way possible. This drive will see more innovations being developed and coming to market, whether on a macro level, such as network convergence, or on a micro level, with products redesigned with fewer components, for example.

In all aspects of telecommunications, 2023 should shape up to be a big year, one that has those of us involved in it extremely optimistic and excited about the opportunities on the horizon.


Interplanetary Internet, digital zebras and the disconnected Edge

Seb Moss, DCD

Were you to have traveled through central Kenya in the early 2000s, you may have come across something highly unusual: a dazzle of zebras sporting strange collars.

The animals were not part of a bizarre fashion show, but rather early pioneers of a technology that could one day span the Solar System, connecting other planets to one giant network.

The connected world we inhabit today is based on instant gratification. "The big issue is that the Internet protocols that are on the TCP/IP stack were designed with a paradigm that 'I can guarantee that I send the information, and that information will be received, and I will get an acknowledgment during an amount of time that is quite small,'" Professor Vasco N. G. J. Soares explained, over a choppy video call that repeatedly reminded us what happens when that paradigm breaks down.

Fifty years on from its invention, the Transmission Control Protocol (TCP) still serves as the de facto backbone of how our connected age operates.

But there are many places where such a setup is not economically or physically possible.

The plains of Africa are one such locale, especially in 2001, when Kenya had virtually no rural cellular connectivity and satellite connectivity required bulky, power-hungry and expensive equipment.

Zebras care not for connectivity; they don't plan their movements around where to find the best WiFi signal. And that was a problem for an international group of zoologists and technologists who wanted to track them.

Delay-tolerant networks ask you to imagine an Internet where connectivity isn't guaranteed.

Faced with a landscape devoid of connection, the team had to come up with a way to study, track, and collect data on zebras - and get that data back from the field.

To pull this off, the group turned to a technology first conceived in the 1990s – delay- or disruption-tolerant networking (DTN). At its core is the idea of 'store and forward,' where information is passed from node to node and stored when connectivity falls apart, before being sent on to the next link in the chain. Instead of an end-to-end network, it is a careful hop-by-hop approach enabling asynchronous delivery.
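As a rough sketch of the idea (a toy model, not the actual ZebraNet firmware), a store-and-forward node is little more than a buffer that holds data until a contact appears:

from collections import deque

# Toy 'store and forward' DTN node: bundles wait in local storage
# until another node (or a base station) comes into radio range.
class DTNNode:
    def __init__(self, name, capacity=100):
        self.name = name
        self.buffer = deque(maxlen=capacity)  # oldest bundles drop first

    def collect(self, reading):
        """Store locally generated data (e.g., a GPS fix)."""
        self.buffer.append(reading)

    def contact(self, peer):
        """On an encounter, hand everything we carry to the peer."""
        while self.buffer:
            peer.buffer.append(self.buffer.popleft())

zebra_a, zebra_b = DTNNode("zebra-a"), DTNNode("zebra-b")
base = DTNNode("base-station", capacity=10_000)
zebra_a.collect(("gps", 0.45, 36.9))
zebra_a.contact(zebra_b)   # data hops zebra to zebra...
zebra_b.contact(base)      # ...and is uploaded at the base station
assert len(base.buffer) == 1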

In the case of ZebraNet, each equine served as a node, equipped with a collar featuring solar panels, GPS, a small CPU, flash memory, and radio connectivity.

Instead of communicating with satellites or telecoms infrastructure, the migratory habits of each zebra are stored on the collar. Then, when the animal is near another electronic equine, it shares the data. This continues until one of the zebras passes a mobile base station – perhaps attached to a Range Rover – and it uploads all that it has collected.

"It was one base station for about 10-12 collars," project member and Princeton University Professor Margaret Martonosi told DCD. "The main limit on storage capacity had to do with the physical design of what would fit on the circuit board and inside the collar module. Our early papers did some simulation-based estimates regarding storage requirements and likely delivery rates."

It's an idea that sounds simple on the face of it, but one that requires a surprisingly complex and thought-out approach to DTN, especially with more ambitious deployments.

"How much information you need to store depends on the application," Soares explained. "So this means that you need to study the application that you're going to enable using this type of connection, and then the amount of storage, and also the technologies that are going to be used to exchange information between the devices."

You also need to decide how to get data from A, the initial collection point, to Z, the end-user or the wider network. How do you ensure that it travels an efficient route between moving and disconnecting nodes, without sending it down dead ends or causing a bottleneck somewhere in the middle?

This remains an area of vigorous debate, with multiple competing approaches to operating a DTN currently being pitched.

The most basic approach is single-copy routing protocols, where each node carries the bundle forward to the next node it encounters, until it reaches its final destination. Adding geographic routing could mean that it only sends it forward when it meets a node that is physically closer to the end state, or is heading in the right direction.
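That forwarding rule is compact enough to sketch directly (illustrative only; real protocols add timers, acknowledgments and custody transfer):

import math

def closer(a, b, dest):
    """True if b is geographically closer to the destination than a."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return dist(b, dest) < dist(a, dest)

def forward_single_copy(my_pos, peer_pos, dest_pos):
    # Hand over our only copy only when the encounter makes progress.
    return closer(my_pos, peer_pos, dest_pos)

print(forward_single_copy((0, 0), (3, 4), (10, 0)))  # True: peer is closer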

Then there are multiple-copy routing protocols that see each node send it to a bunch of others. Versions of this approach like the ‘epidemic protocol’ would spread data across a network rapidly, but risk flooding all the nodes.

“On a scenario that has infinite resources, this will be the best protocol," Soares said. “But in reality, it's not a good choice because it will exhaust the bandwidth and the storage on all the nodes." ‘Spray and Wait’ tries to build on this by adding limits to control the flooding.
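The copy-limiting idea behind the binary variant of Spray and Wait can be sketched in a few lines (an illustration, not a full implementation):

# Binary Spray and Wait (sketch): each bundle carries a copy budget L.
# On an encounter, a node holding L > 1 copies hands half to the peer;
# with L == 1 it waits until it meets the destination itself.
def on_encounter(my_copies: int, peer_is_destination: bool):
    if peer_is_destination:
        return my_copies, "deliver"
    if my_copies > 1:
        give = my_copies // 2
        return my_copies - give, ("spray", give)
    return my_copies, "wait"

print(on_encounter(8, False))  # (4, ('spray', 4)) - spray phase
print(on_encounter(1, False))  # (1, 'wait')       - wait phase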

Another approach, 'PRoPHET,' applies probabilistic routing to nodes when they move in non-random patterns. For example, after enough study, it would be possible to predict the general movement patterns of zebras and build a routing protocol based upon them.

Each time data travels through the network, it is used to update the probabilistic routing – although this can make it more brittle to sudden, unexpected changes.
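The bookkeeping behind PRoPHET's delivery predictabilities can be sketched as follows; the constants are typical values from the DTN literature, and a real implementation would also age predictabilities on every exchange:

GAMMA, BETA, P_INIT = 0.98, 0.25, 0.75  # typical values from the literature

def on_meet(P, a, b):
    """Node a meets node b: b becomes a more likely relay for a."""
    old = P.get((a, b), 0.0)
    P[(a, b)] = old + (1 - old) * P_INIT

def age(P, elapsed_units):
    """Predictabilities decay while nodes are out of contact."""
    for k in P:
        P[k] *= GAMMA ** elapsed_units

def transitive(P, a, b, c):
    """If a often meets b, and b often meets c, a can reach c via b."""
    old = P.get((a, c), 0.0)
    P[(a, c)] = old + (1 - old) * P.get((a, b), 0) * P.get((b, c), 0) * BETA

P = {}
on_meet(P, "a", "b"); on_meet(P, "b", "c")
transitive(P, "a", "b", "c")
print(round(P[("a", "c")], 3))  # small but nonzero chance of relaying via b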

For his work at the Instituto Politécnico de Castelo Branco, Soares combined geographic routing with Spray and Wait to form the routing protocol ‘GeoSpray.’

“My scenario was assuming vehicles moving and data traveling between them, and so I would need the geographic information,” he said. “A single copy is the best option if you can guarantee connection to the destination, but sometimes you have to use multiples to ensure that you will find someone that will get it there for you eventually.”

The choice of approach, the amount of storage, and how long nodes store data before deleting it all have to be crafted for the application.

In South Africa, a DTN deployment was used to connect rural areas. Communities used e-kiosks to send emails, but the data was just stored on the system. When a school bus passed, it transferred the data to the bus, brought it to the city, and connected it to the wider net. When it returned, it brought any replies with it.

But as we connect every inch of the globe, such spaces for DTN are shrinking. “The spread of cell connectivity across so much of the world has certainly been helpful for overall connectivity and does supplant DTN to a degree,” Martonosi admitted.

“On the other hand, the cost of cell connectivity is still high (often prohibitively so) for many people. From a cost perspective, collaborative dynamic DTNs and mesh networks seem like a very helpful technology direction.”

Following ZebraNet, Martonosi worked on C-Link, a DTN system to connect rural parts of Nicaragua, and SignalGuru, which shares vehicle data. Due to increasing connectivity, such efforts "have not caught on widely," she said.

"But you can see aspects of these techniques still around – for example, the Bluetooth-based contact tracing apps for Covid-19 are not dissimilar from some aspects of ZebraNet and C-Link's design."

Terrestrial DTN proponents now primarily focus on low-power IoT deployments, or situations where networks have been impacted - such as natural disasters, or battlefields.

Indeed, the US Defense Advanced Research Projects Agency (DARPA) is one of the largest funders of DTN, fearing that the connectivity-reliant US military could be easily disrupted.

“DTN represents a fundamental shift in networking protocols that will result in military networks that function reliably and securely, even in the changing conditions and challenging environments where our troops must succeed now and in the future," BBN Technologies CEO Tad Elmer said after his company received $8.9 million from DARPA to explore battlefield DTN.

The agency has published much of its work, but whether all of its research is out in the open remains to be seen. DARPA was, however, also instrumental in funding the development of the TCP/IP-based Internet, which was carried out in public.

"The irony is that, when Bob [Kahn] and I started to work on the Internet, we published our documentation in 1974," TCP/IP co-creator Vint Cerf told DCD. "Right in the middle of the Cold War, we laid out how it all works.

“And then all of the subsequent work, of course, was done in the open as well. That was based on the belief that if the Defense Department actually wanted to use this technology, it would need to have its allies use it as well, otherwise you wouldn't have interoperability for this command and control infrastructure.

Then, as the technology developed, "I also came to the conclusion that the general public should have access to this," Cerf recalled. "And so we opened it up in 1989, and the first commercial services started. The same argument can be made for the Bundle Protocol."

With the DTN Bundle Protocol (published as RFC 5050), Cerf is not content with ushering in the connected planet. He eyes other worlds entirely.

"In order to effectively support manned and robotic space exploration, you need communications, both for command of the spacecraft and to get the data back," he said. "And if you can't get the data back, why the hell are we going out there? So my view has always been 'let's build up a richer capability for communication than point-to-point radio links and/or bent-pipe relays.'

“That‘s what‘s driven me since 1998.”

DTN is perfect for space, where delay is inevitable. Planets, satellites, and spacecraft are far apart, always in motion, and their relative distances are constantly in flux.

“When two things are far enough apart, and they are in motion, you have to aim ahead of where it is, it’s like shooting a moving target,” Cerf said. “It has to arrive when the spacecraft actually gets to where the signal is propagating.”

Across such vast distances, "the notion of 'now' is very broken in these kinds of large delay environments," he noted, adding that the harsh conditions of space also meant that disruptions were possible.

What we use now to connect our few assets across the Solar System relies primarily on line-of-sight communication and a fragile network of overstretched ground stations.

With the Bundle Protocol, Cerf and the InterPlanetary Internet Special Interest Group (IPNSIG) of the Internet Society hope to make a larger and more ambitious network possible in space.

An earlier version, CFDP, has already been successfully trialed by Martian rovers Spirit and Opportunity, while the International Space Station tested out the Bundle Protocol in 2016. “We had onboard experiments going on, and we were able to use the interplanetary protocol to move data back and forth – commands up to the experiments, and data back down again,” Cerf said.

With the Artemis Moon program, the Bundle Protocol may prove crucial to connecting the far side of the Moon, as well as nodes blocked from line-of-sight by craters.

“Artemis may be the critical turning point for the interplanetary system, because I believe that will end up being a requirement in order to successfully prosecute that mission.”

DTN could form the backbone of Artemis, LunaNet, and the European Space Agency’s Project Moonlight. As humanity heads into space once again, this time it will expect sufficient communication capabilities.

"We can smell the success of all this; we can see how we can make it work," Cerf said. "And as we overcome various and sundry barriers, the biggest one right now, in my view, is just getting commercial implementations in place so that there are off-the-shelf implementations available to anyone who wants to design and build a spacecraft."

There’s still a lot to work out when operating at astronomical distances, of course.

"Because of the variable delay and the very large potential delay, the domain name system (DNS) doesn't work for this kind of operation," Cerf said. "So we've ended up with kind of a two-step resolution for identifiers. First you have to figure out which planet you are going to, and then after that you can do the mapping from the identifier to an address at that locale, where you can actually send the data.

"In the [terrestrial] Internet protocols, you do a one-step workout – you take the domain name, you do a lookup in the DNS, you get an IP address back and then you open a TCP connection to that target. Here, we do two steps before we can figure out where the actual target is."
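The shape of that two-step lookup can be caricatured in a few lines (the registry names and addresses here are purely hypothetical):

# Step 1: resolve which planet (or network region) hosts the node;
# step 2: resolve the node's address *within* that region. Only the
# first step needs to be answerable from Earth; the second can wait
# until the bundle reaches the destination region.
REGION_OF = {"rover-42": "mars"}                # hypothetical registry
LOCAL_DNS = {"mars": {"rover-42": "10.4.0.7"}}  # resolved at the far end

def resolve(identifier: str):
    region = REGION_OF[identifier]           # step 1: coarse, Earth-side
    address = LOCAL_DNS[region][identifier]  # step 2: deferred, local
    return region, address

print(resolve("rover-42"))  # ('mars', '10.4.0.7')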

Again, as with the zebras, cars, and other DTN deployments, understanding how much storage each space node should have will be crucial to its effective operation.

But working that out is still an open question. “If I know where the nodes are, and I know the physics, and I know what the data rates could be, how do I know I have a network which is capable of supporting the demand?” Cerf asked.

“So I went to the best possible source for this question, Leonard Kleinrock at UCLA." Kleinrock is the father of queuing theory and packet switching, and one of the key people behind ARPANET.

"He's still very, very active – he's 87, but still blasting on," said Cerf.

"I sent him a note saying, 'Look, here's the problem, I've got this collection of nodes, and I've got a traffic matrix, and I have this DTN environment; how do I calculate the capacity of the system so that I know I'm not gonna overwhelm it?'"

Two days later, Kleinrock replied with "two pages of dense math saying, 'okay, here's how you formulate this problem,'" Cerf laughed.

Kleinrock shared with DCD the October 2020 email exchange in which the two Internet pioneers debate what Kleinrock described as an "interesting and reasonably unorthodox question.”

"Here's our situation," Cerf said in the email, outlining the immense difficulty of system design in a network where just the distance from Earth to Mars can vary from 34 million to 249 million miles. "The discrete nature of this problem vs continuous and statistical seems to make it much harder."

Kleinrock provided some calculations and referenced earlier work with Mario Gerla and Luigi Fratta on a Flow Deviation algorithm. He told DCD: “It suggests the algorithm could be used where the capacities are changing, which means that you constantly run this algorithm as the capacity is either predictably changing or dynamically changing.”
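A toy calculation hints at why this is hard: with time-varying contacts, a link's deliverable volume is the sum of rate times duration over its scheduled windows, and the end-to-end bound moves whenever the geometry does (figures below are invented for illustration):

# Toy capacity estimate for a DTN link whose connectivity comes and
# goes: total deliverable volume = sum of (data rate x window length)
# over the scheduled contact windows. Illustrative only - the real
# problem couples this with queueing, storage limits and routing.
def deliverable_bits(windows):
    return sum(rate_bps * (end - start) for start, end, rate_bps in windows)

# (start_s, end_s, rate_bps) for one orbit's worth of passes
earth_to_relay = [(0, 600, 2e6), (5400, 6000, 2e6)]
relay_to_mars = [(1200, 2400, 5e5)]

# An upper bound on end-to-end delivery is the bottleneck hop.
print(min(deliverable_bits(earth_to_relay),
          deliverable_bits(relay_to_mars)) / 8e6, "MB per orbit")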

Cerf said that Kleinrock proved immensely helpful. “Now, I didn‘t get the whole answer. I still don‘t have the whole answer,” he said. “But, I know I have one of the best minds in the business looking at the problem.”

As with many other aspects of the Interplanetary Internet, “this is not a solved problem,” Cerf said.

"But we're on it."


Standards updates for optical fiber: What you need to know

As the industry continues to evolve, ongoing reviews from standards bodies aiming to improve the accuracy, usability and alignment of various fiber standards can be overwhelming. Here's what you need to know about fiber optic cabling infrastructure and application standards.

As a global leader in fiber structured cabling and an active participant in major standards organizations around the world, CommScope is committed to helping our customers stay abreast of standards activities that impact their fiber network design, planning and operations. These include the following recent updates and developments within key international standards development organizations such as IEC TC 86, ITU-T SG15 and the North American TIA fiber infrastructure standards, as well as IEEE 802.3 Ethernet and Fibre Channel application standards.

Activities on the international front

IEC TC 86 – which prepares standards for fiber optic systems, modules, devices and components – includes three main subcommittees: SC 86A (fibers and cables), SC 86B (interconnecting devices and passive components) and SC 86C (systems and active devices).

In SC 86A/WG3, the restructuring project for the IEC 60794-1-2 series on cable test procedures is ongoing. All the test methods from the previous IEC 60794-1-21 (mechanical), -1-22 (environmental), -1-23 (cable elements) and -1-24 (electrical) documents are being separated into individual documents for ease of revision and management. For example, test methods E1 through E34 in the original IEC 60794-1-21 document will be split into standalone documents with the 60794-1-1xx numbering format. IEC 60794-1-22 will use the 60794-1-2xx numbering format, and so forth.


The first edition of IEC TR 63431 on microduct technology is under development. This technical report provides guidance for the design, implementation, testing and repair of microducts, and introduces a new benchmark for microduct sizes and color codes. Other technical requirements, such as cable weight for aerial microducts and minimum bend radius, are being considered. It is anticipated to be published in Q1 2024.

In SC 86B, extensive debates surrounding tuned versus untuned single-mode cylindrical-ferrule (LC, SC, etc.) connectors occurred over many meeting cycles. The two active projects are IEC 61755-3-1 (SM PC) and IEC 61755-3-2 (SM APC); both are anticipated to be published in Q3 2024. A significant portion of the existing SM connector deployments are based on the pre-existing tuned variant. Two variants have been introduced: a new tuned variant and an untuned variant, for both attenuation Grade B and Grade C. The newly adopted variants are compatible with each other; however, they have different dimensional requirements compared to the pre-existing variant. This can result in non-compliant attenuation performance between the new untuned connectors and the already installed base. For any changes or additions to existing infrastructure, it is highly recommended to continue using connectors that comply with the pre-existing tuned requirements.

Another active project is the visual inspection of connector end faces. Over the past decade, there have been many reports of poor inspection reproducibility and correlation between different field inspection systems. This was confirmed in a round-robin study between various test equipment, where large variations in scratch counts were found (see IEC TR 63367). Contaminated end faces remain the primary field-failure mode. For now, the committee has concluded that visual inspection is not suitable as a substitute for optical qualification such as attenuation and return loss measurements. A visual inspection task force (VITF) is actively working to further improve inspection capability.

On the emerging technology front, there have been active discussions surrounding the performance, suitability and requirements for new fiber technologies, including contactless expanded-beam connectors, high-power laser applications such as onboard and co-packaged optics, multicore fiber (MCF) connector interfaces, hollow-core fiber, and optical-electrical hybrid connectors.

SC 86C/WG1, which defines requirements for fiber-optic systems and active devices, published the IEC 61280-4-3 (PON attenuation and return loss measurement methods) document in mid-2022. It defines three methods to measure the loss performance of a PON system: method A for use with a light source and power meter (LSPM), method B for use with an OTDR, and informative-only method C for estimating attenuation on activated systems using a filtered OTDR that blocks specific wavelengths in the upstream direction.
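These measurement methods exist to verify a link's loss budget. As a back-of-the-envelope illustration – the per-component losses below are typical catalog figures, not values from the standard – the arithmetic looks like this:

# Rough PON loss-budget arithmetic of the kind methods A and B verify.
# The per-component losses below are illustrative placeholders only.
def link_loss_db(km, connectors, splices, splitter_ratio):
    fiber_db_per_km = 0.35  # single-mode at 1310 nm (typical)
    connector_db = 0.3
    splice_db = 0.1
    splitter_db = {2: 3.5, 4: 7.0, 8: 10.5, 16: 13.5, 32: 17.0}
    return (km * fiber_db_per_km + connectors * connector_db
            + splices * splice_db + splitter_db[splitter_ratio])

loss = link_loss_db(km=18, connectors=4, splices=3, splitter_ratio=32)
print(f"{loss:.1f} dB")  # compare against the PON class budget, e.g. 28 dB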

IEC 61280-4-2 is currently in revision and is anticipated to be published in Q3 2024. The document defines the measurement requirements for attenuation and return loss of installed single-mode cable plant. The cable plant can include single-mode optical fiber, connectors, adapters, splices and other passive devices. The applicable installation environments include residential, commercial, industrial, data center premises and outside plant. The principles of the procedures and considerations can also be applied to cable plants containing branching devices (splitters) and WDM devices. One important aspect of the document is the measurement methods addressing different cabling configurations – patch panels at both ends, connectors at both ends or a mixed configuration. The existing one-cord, three-cord and two-cord reference methods address these three configurations. These three methods are used to measure what is typically referred to as a permanent link, where the equipment cords are not yet installed. A new "equipment cord reference method" is being introduced to measure the end-to-end channel, including the equipment cords that have been installed.

ITU-T Study Group (SG) 15

The international standards developed by ITU-T SG15 detail technical specifications giving shape to global communication infrastructure. SG15 is structured in three working parties (WPs) – WP1, WP2 and WP3 – and each WP is further divided into various questions, or subjects to be studied. WP1 defines home networks connecting in-premises devices and interfacing with the outside world; WP2 defines fiber- or copper-based access networks through which subscribers connect; and WP3 defines technologies and architectures of optical transport networks enabling long-haul global information exchange.

In WP2 (Q7/15), a new supplement to L.250 (Topologies for optical access network) on national experiences with FTTx network architectures is being proposed. Since FTTx technology has substantially matured over the past 20 years, this provides an opportunity to share guidance and best practices with developing countries. Experiences expected to be included are those of Japan, South Korea, China, India, the Netherlands, Switzerland, Spain and Brazil.

A new recommendation describing requirements for pre-connectorized cabling architectures is also being developed. Housings, closures, cable assemblies and terminals with factory-terminated connectors will be defined. Since the most common hardened connectors have not been standardized, the recommendation will focus on performance criteria and generic definitions rather than specific interface types.

TIA TR-42

Within TIA, the TR-42 Engineering Committee develops and maintains standards for cabling infrastructure, including the TIA-568 series of cabling standards. The TR-42.11 subcommittee on fiber optic systems published the TIA-568.3-E Optical Fiber Cabling and Component Standard in September 2022. In the new publication, the color green is designated for multimode angled physical contact (APC) MPO connectors, identical to the color used for single-mode APC MPO connectors. This makes cable color the primary differentiator between multimode and single-mode connectivity.

The pin/unpin configurations of MPO cable assemblies – trunk cables, fan-out cables, patch cords and cassettes/modules – have been modified and updated to reflect the existing installed base. Trunk cables are now pinned, while MPO-to-LC cassettes/modules, fan-out cables and MPO patch cords will be unpinned to facilitate end-to-end MPO-based duplex applications.

In addition, two new component types and polarity methods have been defined – Type-U1 and Type-U2 fiber transitions for MPO-to-LC breakout cables and modules, along with Method U1 and Method U2 polarity for MPO-based duplex applications, both of which use Type-B trunks and A-B duplex patch cords. The Type-U2 fiber transition is based on CommScope's SYSTIMAX® Ultra-Low Loss and Propel solutions.

The primary difference between U2 and U1 is that the LC pairing is flipped; however, U2 offers additional benefits. When upgrading from legacy Method B to Method U2, trunks, adapters and patch cords remain unchanged – only the breakout cable or module changes – resulting in minimal material and labor cost during the migration. On existing universal systems with Type-A trunks and an A-A duplex patch cord on one end, upgrading to Method U1 will require replacement of the trunk, module and patch cords, essentially amounting to a completely new deployment. In addition, CommScope's Type-U2 fan-out cables and cassettes offer the additional benefit of supporting direct connections in parallel-to-serial (QSFP-to-SFP) breakout applications. Type-U1 fan-out cables or cassettes cannot support direct breakout connections and require an additional A-A duplex patch cord for these applications.

Ethernet Applications (IEEE 802.3)

Within the IEEE 802.3 Ethernet Working Group, which develops media access control and physical layer standards for Ethernet applications, the work of the P802.3db task force on 100 Gbps, 200 Gbps and 400 Gbps short-reach multimode applications was finalized, with the standard approved in September 2022. Based on a 100 Gbps signaling rate per lane, these applications include duplex 100 Gbps applications up to 50 meters (100GBASE-VR1) and 100 meters (100GBASE-SR1), 200 Gbps over two pairs of multimode fiber up to 50 and 100 meters (200GBASE-VR2 and 200GBASE-SR2), and 400 Gbps over four pairs of multimode fiber up to 50 and 100 meters (400GBASE-VR4 and 400GBASE-SR4).

In December 2022, the standards development efforts of the IEEE P802.3df task force were split into two separate projects – one based on the current 100 Gbps signaling rate per lane (802.3df) and the other on a future 200 Gbps signaling rate per lane (802.3dj). This change was necessary given the number of variants and the expected timeline for the development of a 200 Gbps signaling rate. It's important to note that these projects are still in the early phases of development and could include additional objectives in the future.

Expected for completion in mid-2024, current objectives for P802.3df include the following:

• 400G over four pairs of singlemode fiber up to 2km

• 800G over eight pairs of multimode fiber up to 50m and up to 100m

• 800G over eight pairs of singlemode fiber up to 500m and up to 2km.

Expected for completion in Q2 2026, current objectives for P802.3dj include the following:

• 200G over one pair of singlemode fiber up to 500m and 2km

• 400G over two pairs of singlemode fiber up to 500m

• 800G over four pairs of singlemode fiber up to 500m and up to 2km

• 800G over one pair of singlemode fiber with four wavelengths up to 2km

• 800G over one pair of singlemode fiber with four wavelengths up to 10km and 40km

• 1.6T over eight pairs of singlemode fiber up to 500m and up to 2km

Storage Applications (Fibre Channel, INCITS T11)

Fibre Channel T11 standardizes high-speed data transfer technology designed to connect general-purpose computers, mainframes and supercomputers to storage devices. Fibre Channel SANs are often deployed for low-latency applications best suited to block-based storage, such as the databases used for high-speed online transaction processing (OLTP) in banking, online ticketing and virtual environments.

The FC-PI-8 (128GFC) project is near completion and is anticipated to be published in Q3-Q4 2023. 128GFC is based on a single-lane rate of 112.2Gbps (PAM4), which is faster than the comparable 100Gb Ethernet single-lane variant. It provides backward compatibility with the previous two generations, 64GFC and 32GFC. While four-lane 128Gbps Fibre Channel (FC-PI-6P) has been around since 2016, this new single-lane standard includes short-reach backplane applications, singlemode fiber up to two and 10 kilometers, and multimode fiber up to 100 meters – all on existing cabling infrastructure.

The FC-PI-9 (256GFC) project was approved, and development began in December 2022. FC-PI-9 will be based on a single-lane rate of 200G (PAM4); the exact speed is still under development. Currently there are two singlemode variants, up to two and 10 kilometers, and one multimode variant. The multimode variant is still under discussion; both 200G VCSEL and BiDi approaches are being considered.

While these updates are just a snapshot of recent noteworthy standards activity for fiber, CommScope's Standards Advisor is your ideal source for all the latest on fiber and copper standards relevant to the structured cabling industry. Issued quarterly, the Standards Advisor provides detailed updates on cabling standards (ANSI/TIA, ISO/IEC, IEC, ITU-T and CENELEC), application standards (IEEE 802.3 and Fibre Channel T11), and even developments in the world of multi-source agreements (MSAs).


Data centers and Industry 4.0: The next manufacturing revolution

Data centers’ role in the next evolution of industry

Over the course of the past decade, few global industries have experienced a demand surge to rival that of data centers. And one of the main catalysts has been Industry 4.0.

Data-driven technologies are increasingly defining the way the world operates, with global manufacturing and supply chains being amongst the most heavily (and permanently) impacted spheres.

So, how is the emerging Industry 4.0 shaping the global data center industry?

Data centers and the social media era

Although the data center industry was already sizable in the 2000s, at that point it was propelled predominantly by government spending and financial institutions.

According to Alastair Waite, data center market development at CommScope, the recent proliferation of data centers – and indeed, the entire global data center market – can largely be attributed to the rise of hyperscale and cloud companies.

These organizations transcended national borders, which proved a critical differentiator from the primary data center users that came before.

When data centers were primarily being used by banks and governments, although there was a huge amount of capital available to spend, it was typically limited to within their borders, their immediate company domain, or centered around the primary international hubs, like New York, London, and Hong Kong.

“It was when the hyperscale and cloud companies really started to push their footprint globally, into Europe and into Asia, that things really started to explode. That’s when you started seeing data centers getting close to ‘a location near you,’” Waite explains.

Laying the groundwork for Industry 4.0

Although the industry had been successful for many years, it was during this period that growth truly flourished.

From 2010 to 2015, large hyperscale companies adjusted their focus towards global expansion. “But they realized that that was really stretching the industry, their own people and supply chains,” Waite explains.

“So, they brought in new concepts. For example, Meta brought in the Open Compute Project, which pulled back the curtain and showed everybody what was going on inside the data center.”

It was during this time that providers started releasing plans detailing how to build servers and switches. This quickly enabled global data center supply chains and even encouraged competition.

Two of the key impacts of this were, firstly, that it enabled the industry at large to learn from its leaders and sharpen its approach. Secondly, the level of connectivity available increased significantly. In turn, data centers made more and more services available, and people enjoyed the benefits of these, meaning it became a "self-fulfilling prophecy," says Waite.

Meanwhile, the requirements of social media meant that data was being pushed closer to the Edge and brought to more global locations.

As data centers increasingly moved away from the central hubs into more diverse locations, lower latency communication between end users and devices could be achieved, and far more applications became possible.

Now, building on developments that were first fueled by social media, the data center boom is being propelled even further, by the emergence of Industry 4.0.

Defining Industry 4.0 and AI’s role within it

The world has undergone four industrial revolutions. The predecessors of Industry 4.0 have been characterized by traits like water-powered factories and machines, electrification, and digital technologies.

For Industry 4.0, its defining trademark is the increasing dependence of industry on robots.

Almost all manufacturing operations plan to incorporate robots and connectivity solutions – including IoT, 6G, AI and VR – to operate as efficiently as possible.

Now, in the early dawn of Industry 4.0, we are already seeing a considerable blurring of the boundaries between the physical and digital worlds.

New software and automation are changing activities that would have previously been carried out by humans, most of these being heavy lifting (like on automotive assembly lines, for example) or repetitive manual tasks (like in food manufacturing processes). As a result, these technologies are now completing an increasing proportion of industry’s physical tasks.

But, are we beginning to see Industry 4.0 blur the boundaries between the digital and biological spheres, too?

A key milestone for this boundary-blurring will be when machines go beyond simply taking instructions, to performing critical thinking independently, and applying that knowledge to solve complex problems in real time.

And although Industry 4.0 has not reached this stage (yet), we’re all aware of how rapidly AI is advancing. ChatGPT’s dominance in the news (from relative obscurity a year ago) is a testament to that.

"Taking automotive manufacturing as an example, the machines will be jam-packed with sensors, which will have to make decisions that, probably, humans would have made previously. And I think AI is the only way to support that development."

“Industry 4.0 will make it complete, and I think, with this, the physical, digital and biological worlds will come together,” Waite asserts.

What does Industry 4.0 mean for data centers?

Data centers’ role in supporting Industry 4.0 is absolutely critical.

“Having the ability to gather and manipulate data – to come to a sensible decision about what the next activity is going to be – is key. Machine-to-machine communication between servers is just growing unabated,” says Waite.

With the ever-increasing introduction of machines – which will be working 24 hours a day, seven days a week – higher and higher bandwidths will be required, and data centers will prove crucial to supporting this shift.

As a result, data centers are going to have to be built with greater resiliency, higher bandwidth capabilities and, in the majority of cases, on-premises at the user's location.

Then, alongside the new requirements for the data centers themselves, the medium to deliver said information is also going to be imperative.

"You need to have a superfast medium that's going to be low latency – like 5G, private 5G, LTE or optical fiber – which is connecting the data center to the manufacturing areas. That, in combination with data centers, is what's required to really have a world-class Industry 4.0 operation," Waite adds.

Collaboration, transparency, and sustainability

Although the industry remains comparatively secretive and fiercely competitive, the last 15 years have seen a considerable shift in favor of increased collaboration and transparency.

Consumer calls for sustainability mean data centers are having to share more strategic directions with their industry partners.

Collaboration around sustainability is now a hugely important topic to the industry at large. As a result, not only do data centers need to communicate their efforts better, but they also need to choose partners with the same aspirations, and the ability to support them on that path.

"A lot of operators who are serious about having a global footprint are having to operate in a more collaborative way, and they're having to share more information with their partners," Waite asserts.

"I think there's an acceptance that you can't do this on your own. You have to bring other people along with you – who have different skill sets and an ability to think and operate globally – and be open with them.

"At CommScope, we take sustainability extremely seriously, not just at a corporate level. We look at it from a business unit level, and we also look at how our products impact our customers and their architectures. We're trying to design products that will help our customers achieve their sustainability targets."

And this isn't a case of greenwashing; CommScope has the action – and the stats – to back up its serious sustainability stance. In 2022 alone, by sourcing renewable electricity, the company kept 11,375 metric tons of CO2 from entering our atmosphere.

It is this kind of mindful business practice that will be absolutely essential as we progress through Industry 4.0 and beyond.

Today, operators need to not only manage the ever-surging demand that Industry 4.0 is creating, but also strive to consistently improve their sustainability standards, in line with governments’ targets and customers’ expectations. Again, this only serves to exemplify how invaluable improving collaboration and transparency really is. Not just amongst data center leaders, but also between them, their suppliers, and their partners.

To find out more about CommScope's sustainability in action, you can check out CommScope's 2023 sustainability report here.

Chapter three: Data centers – A digital transformation

As we continue to vacate our analog lives, digital transformation is a term we often hear to describe that journey. But it’s not only organizations and individuals experiencing this sea change.

As technology becomes ever more advanced and our insatiable appetite for data continues to snowball, data centers need to be able to keep up with the speed, resilience and scale our new world demands. In this chapter, we examine the shift in the status quo and ask the question, "How do you future-proof a data center?"


Burgeoning data center demands lead to more resilient fiber platforms

Ken Hall, CommScope

The data center environment is constantly changing, which should surprise absolutely nobody. But some changes are more profound than others, and their long-term effects more disruptive.

To be clear, data centers – whether hyperscale, global-scale, multi-tenant or enterprise – aren't the only ones affected by such fundamental changes. Everyone in the ecosystem must adapt, from designers, integrators and installers to OEM and infrastructure partners.

We are witnessing the next great migration in speed, with the largest operators now transitioning to 400G applications and already planning the jump to 800G. So, what makes this latest leap significant?

For one thing, the move to 400G then 800G and eventually 1.6T and 3.2T officially marks the beginning of the octal era, which brings with it some fundamental changes that will affect everyone.

But first, a bit of context.

What's driving changes in data center infrastructure

Increases in global data consumption and resource-intensive applications like big data, IoT, AI and machine learning are driving the need for more capacity and reduced latency within the data center.

At the switch level, faster, higher-capacity ASICs make this possible. The challenge for data center managers is how to provision more ports at higher data rates and higher optical lane counts.

Among other things, this requires thoughtful scaling with more flexible deployment options. Of course, all of this is happening in the context of a new reality that is forcing data centers to accomplish more with fewer resources (both physical and fiscal).

While data center network managers are ultimately responsible for ensuring their infrastructure is up to the task, their partners (installers, integrators, system designers and OEMs) all have a substantial amount of skin in the game. The value of the physical layer infrastructure depends in large part on how easy it is to deploy, reconfigure, manage and scale.

Identifying the criteria for a flexible, future-ready fiber platform

Several years ago, shortly after the launch of CommScope's high-speed migration platform, we began focusing on the next-generation fiber platform.

So, we asked our customers and partners: "Knowing what you know now – about network design, migration and installation challenges and application requirements – how would you design your next-generation fiber platform?"

Their answers echoed the same themes: easier, more efficient migration to higher speeds, ultra-low-loss optical performance, faster deployment and more flexible design options.

In synthesizing the input, and adding lessons learned from 40+ years of network design experience, we identified several critical design requirements necessary for addressing the changes affecting both our data center customers and their design, installation and integration partners:

1. The need for application-based building blocks

2. Flexibility in distributing increased switch capacity

3. Faster, simpler deployment and change management.

Application-based building blocks

As a rule, application support is limited by the maximum number of I/O ports on the front of the switch. For a 1RU switch, capacity is currently limited to 32 QSFP/QSFP-DD/OSFP ports. The key to maximizing port efficiency lies in your ability to make the best use of the switch capacity.

Traditional four-lane quad designs provided steady migration to 50G, 100G and 200G. But at 400G and above, the 12- and 24-fiber configurations used to support quad-based applications become less efficient, leaving significant capacity stranded at the switch port. This is where octal technology comes into play.

Beginning with 400G, eight-lane octal technology and 16-fiber MPO breakouts become the most efficient multi-pair building block for trunk applications. Moving from quad-based deployments to octal configurations doubles the number of breakouts, enabling network managers to eliminate some switch layers.
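The port arithmetic behind that doubling is easy to check; here is a quick sketch using the 32-port, 1RU figure cited above:

# Breakout endpoints from a 1RU switch with 32 QSFP/QSFP-DD/OSFP ports:
# quad optics expose 4 lanes per port, octal optics expose 8.
PORTS = 32

def breakout_endpoints(lanes_per_port: int) -> int:
    return PORTS * lanes_per_port

print(breakout_endpoints(4))  # 128 endpoints with quad optics
print(breakout_endpoints(8))  # 256 endpoints with octal optics

# An octal port carries 8 lanes, each on its own fiber pair,
# which is exactly what a 16-fiber MPO connector provides.
print(8 * 2, "fibers per octal port")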

Moreover, today's applications are being designed for eight- and 16-fiber cabling. Supporting 400G and higher applications with 16-fiber technology allows data centers to maximize switch capacity while providing the flexibility to adapt to higher fiber-count needs in the future.

This 16f design – including matching transceivers, trunk/array cables and distribution modules – becomes the common building block enabling data centers to progress from 400G to 800G, 1.6T and beyond.

Yet not every data center is ready to move away from its legacy 12- and 24-fiber deployments. These facilities must also be able to support and manage applications without wasting fibers or losing port counts. Therefore, efficient application-based building blocks for 8f, 12f and 24f configurations are needed as well.

Design flexibility

Another key requirement is a more flexible design that enables data center managers and their design partners to quickly redistribute fiber capacity at the patch panel and adapt their networks to support changes in resource allocation.

One way to achieve this is to develop built-in modularity in the panel components that enables alignment between point-of-delivery (POD) and network design architectures.

In a traditional fiber platform design, components such as modules, cassettes and adapter packs are panel-specific. As a result, changing components that have different configurations also involves swapping out the panel.

The most obvious impact of this limitation is the extra time and cost to deploy both new components and new panels. At the same time, data center customers must also contend with additional product ordering and inventory costs.

In contrast, a design in which all panel components are essentially interchangeable and designed to fit in a single, common panel would enable designers and installers to quickly reconfigure and deploy fiber capacity in the least possible time and with the lowest cost. So too, it would enable data center customers to streamline their infrastructure inventory and its associated costs.

Simplifying and accelerating fiber deployment and management

The final key criterion defined by CommScope's research and design efforts is the need to simplify and accelerate the routine tasks involved in deploying, upgrading and managing the fiber infrastructure. While panel and blade designs have offered incremental advances in functionality and design over the years, there is room for significant improvement.

Additionally, the issue of polarity management also deserves mention. As fiber deployments grow more complex, ensuring the transmit and receive paths remain aligned throughout the link becomes more difficult.

In the worst case, ensuring polarity requires installers to flip modules or cable assemblies. Mistakes may not be identified until the link has been deployed, and resolving the issue adds time.
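Conceptually, duplex polarity comes down to guaranteeing that transmit on one end lands on receive at the other. A toy model – not the TIA method definitions – shows why flips can silently cancel out:

# Toy polarity check: model each component in a duplex channel as
# either 'straight' or a 'cross' (an A-to-B flip). The link works when
# the channel contains an odd number of crossovers, so that transmit
# on one end always lands on receive at the other. Illustrative only.
def polarity_ok(components):
    crossovers = sum(1 for c in components if c == "cross")
    return crossovers % 2 == 1

print(polarity_ok(["straight", "cross", "straight"]))  # True: one flip
print(polarity_ok(["cross", "cross"]))  # False: the flips cancel out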

Enter the Propel™ solution

The result of CommScope's intelligence-gathering and subsequent design and engineering efforts is the Propel solution, a new end-to-end, high-speed, modular fiber platform. The Propel platform is rigorously designed around the key criteria of application-based building blocks, design flexibility and deployment speed.

The Propel solution is the first global fiber platform to incorporate native 16-fiber technology while also supporting eight-fiber, 12-fiber and 24-fiber applications, with four fiber-count-aligned module sizes.

As a result, it provides a single platform that supports multiple network generations. It is also optimized with ultra-low-loss optical performance and is designed to be a greener, more sustainable solution.

Collaborating to connect now and next

Data centers are rapidly evolving as data speeds and infrastructure complexity increase. This is especially true within hyperscale environments, where lane speeds are accelerating to 400G, 800G and beyond, and fiber counts across all layers of the network multiply.

It is therefore important that network managers, designers, integration professionals and installers continue to collaborate closely to help data center operators maximize their existing infrastructure investments while preparing for future applications.

Data center interconnection: Could we see a significant shift in the status quo?

Data center evolution in 2023: Efficiency is the name of the game

Examining the new realities and obligations for cloud services that come with the ever-evolving data center landscape

The last few years have introduced unprecedented business conditions for every industry, but among the most heavily affected are the cloud-based services run by the global network of data centers. The business model has changed to accept new realities and fulfill new obligations – and extrapolating this recent history into the near future is an uncertain exercise at best.

Nevertheless, it is of vital interest that we do gain as clear a perspective as possible – because more of the world depends on cloud services – and, by extension, data center operations – than ever before. If there’s one thing we know the future holds, it’s that our dependence on them is going to increase.

An unprecedented one-two-three punch

The challenge is that, over recent years, the baseline has continued to move. First, the world was rocked by global Covid-19 lockdowns and the overnight reality of hundreds of millions of people working and learning from home. This shift threw immense pressure onto data centers to handle high-bandwidth video and other cloud-based applications over a much more widely distributed area.

Then came the worldwide supply chain disruptions and labor shortages, making it hard for data centers to build out additional capacity because they couldn’t find critical components or the skilled people to install and run them.

And most recently, global inflation and spiking energy prices, exacerbated by the conflict in Ukraine, have forced companies and nations alike to further rearrange their supply chains and make adjustments to continue operating under persistently elevated energy costs. Note that these are just world events, not even exclusive to the business of data centers. In addition, the growing social and commercial role of back-end data center processing and storage has presented just as many challenges.

Doing more, in more places, with less margin for error

Consider all the new applications that rely on capable, reliable data center support to operate. For instance, there is the mobile app ordering at your local restaurant, the high-speed robots in a warehouse picking your online order just minutes after you hit "check out" and even the driving assist-equipped vehicle in the next lane.

The speed and volume of data being generated, processed and transported by these applications and countless others is growing exponentially. The world cannot afford downtime, no matter if the consequence is a delayed lunch order or the compromised efficacy of a 5G-connected driving-assist system.

Low-latency 5G is unlocking the bandwidth – and, just as important, the low latency – that many of these new and amazing applications require to work. All that gets piped to data centers, which are increasingly being moved to the edge of the network to shave those last few precious milliseconds off the response time.

Energy efficiency will drive data center evolution in 2023

For all data center environments, efficiency is not so much a metric for profitability as it is a metric for survival. Whether a small to mid-sized multi-tenant data center or a vast cloud or hyperscale deployment, the intense, simultaneous pressures of demand and expenses – particularly energy expenses – will determine its future.

The bottom line is that data centers must increase the efficiency of their delivery of services, using fiber and Edge-based infrastructure, as well as machine learning (ML) and artificial intelligence (AI). And at the same time, they must increase the efficiency of operations – and that means reducing energy use per unit of compute power.
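One common yardstick for that operational efficiency is power usage effectiveness (PUE) – total facility energy divided by IT equipment energy. A minimal sketch, with invented figures:

# Power usage effectiveness: total facility energy / IT equipment
# energy. 1.0 is the theoretical ideal; the gap is cooling, power
# conversion and other overhead. Figures below are illustrative only.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

before = pue(total_facility_kwh=1_800_000, it_equipment_kwh=1_000_000)
after = pue(total_facility_kwh=1_300_000, it_equipment_kwh=1_000_000)
print(f"PUE {before:.2f} -> {after:.2f}")  # 1.80 -> 1.30 after upgrades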

Certainly, cost is the most obvious factor when weighing energy efficiency, but it's by no means the only one. Consider how customers and investors are growing more attuned to how their corporate partners source and use their electricity. Some progressive metropolitan areas are telling data centers that, in addition to concerns about their appearance, noise and water use, their energy-hungry business is not wanted.

And in some cases, the area simply lacks the available electrical grid capacity to host them.

Now, in 2023, when we are dreading headlines from Europe and elsewhere about rolling blackouts and insufficient heating, both regulatory and social opinion will only tilt further away from data center developers. That is why it is so urgent that energy efficiency takes top priority and data centers make critical upgrades such as:

• Convert storage to the most efficient media, based on access time

• Use detailed analytics to identify storage, compute, and power consolidation opportunities

• Deploy ultra-efficient UPS systems

• Re-evaluate the thermal limits of the center itself

• Consider colocation to share electrical and communications overhead

• Account for stress on the existing electrical grid and move to more sustainable power, localized to the data center.

On a more strategic level, moving data centers to the Edge of the network, connected by high-speed fiber, can improve energy efficiency as well as latency. Also, consider locations where there is access to renewable energy sources like wind, solar, hydro and nuclear.

For the largest cloud and hyperscale data centers, there is an opportunity to take advantage of localized power generation in various forms, to both power the data center and, if excess power is generated, provide back to the grid.

Efficiency flows downstream

While many may never appreciate the broader social and commercial impact a data center has on the world, it's worth remembering how fast, robust data storage and processing can improve all of the most vital parts of our days – and indeed, our lives.

For instance, every day, the cloud-based services that data centers enable help:

• Employees connect with each other and work efficiently from their homes, office, or while traveling

• Farmers plan, plant and harvest healthier crops while reducing wasted water and chemical applications

• Factories build, stock, manage and ship products with robotic labor that prevents countless workplace accidents and injuries

• Ordinary people create expressive user-generated content that connects individuals across a school or around the planet through gaming, social media and the metaverse

• Service providers stream all kinds of entertainment and information content to homes, laptops and mobile devices in a seamless mesh of connectivity

All of these examples, and countless others, demonstrate how much the efficiency of our daily lives depends on data centers – and, in turn, how much energy efficiency will matter to those data centers in 2023 and beyond.

Cabling considerations of AI data centers – Dr. Earl Parsons, Director, Data Center/Intelligent Building Architecture Evolution. Click here to find out more.

Q&A: Ken Hall, CommScope

Propelling the industry forward, with CommScope’s Ken Hall

With a veritable tsunami of new technologies – the likes of AI, AR, VR and Edge computing – flooding today's market, capacity demands are soaring. How do data center operators go about adapting their infrastructure both to what's now and to the unknown of what's next?

There truly is an ever-increasing demand for capacity and for efficient methods to provide access on demand. Regardless of the application, there are some common network infrastructure elements that impact critical concerns like latency, redundancy, power consumption and space. We now have multi-generation visibility into parallel fiber counts, data rates and performance requirements that we did not have a few years back. Data center operators can adapt to some degree, but from a greenfield perspective, they can now plan their building blocks more efficiently with the visibility we have.

Human beings generally fear change, particularly those operating mission-critical environments. Does CommScope partake in a lot of client education in terms of the changes needed to future-proof their infrastructure?

We do, but it is a two-way street. We have direct global visibility to trends, applications and connectivity development with our industry standards participation and footprint, as well as relationships with industry drivers. Our clients have their own background, experience, and business models for their operations. We expand our best practice knowledge from their input. We provide tools, support and options guidance to simplify their Day 1 and Day 2 migration planning. We also work with and encourage clients to collaborate with the networking teams to optimize their shared solution.

In terms of that education, how important is the physical layer of the data center in enabling customers to quickly and cost-effectively implement changes?

The greater the need for capacity and lower latency, the more critical the physical layer is for flexibility to change. Ultra-low loss singlemode or multimode performance with application-optimized fiber counts can bring a variety of efficiencies to simplify migrations. In a duplex fiber port environment, pre-terminated trunk fiber counts just need to match at both ends with planned polarity.

Today we continue to see parallel applications becoming the norm due to their ability to deliver higher data rates. More importantly for many data centers, switch ports are being broken out to four or eight other devices – leaf switches or, directly, servers. Fiber counts are increasing by at least a factor of four or eight. Planning layouts for that and utilizing mesh architectures can flatten the network and improve response time as a result, while the cabling installed can support multiple network generations.

With budgets and deployment schedules shrinking, do you find that speed to market now trumps cost in terms of priority?

I believe so. Speed to market rose in priority for data center operators during the pandemic, with supply chain issues and the corresponding impact on provisioning materials and labor. Data center architectures lend themselves to pre-terminated options. Pre-provisioning as much as possible off-site in controlled factory settings significantly improves on-site installation efficiency. There is a cost trade-off, but the time to go live on site is much faster.

Let’s touch on CommScope’s fiber platform, Propel. Why ‘Propel,’ what’s in a name?

Propel is about moving forward at an accelerated pace. By default, that requires a strong base. CommScope has decades of fiber experience, technology leadership and a best-in-class portfolio.

Propel is built on that foundation, engineered with innovative and flexible capabilities to enable next-generation options and simplify the user experience.

Why do your customers need it and what challenges does it help them mitigate?

Traditional fiber panel options have historically been based on 12- or 24-fiber building blocks, set by cable construction, and then terminated and patched to network electronics. With the growth of eight- and 16-fiber applications, those building blocks no longer align with the simplest needs. Array cables are needed to bridge infrastructure and application.

We cleaned the slate when designing Propel. It is a single panel platform that accommodates modules directly aligned with ultra low-loss MPO8, MPO12, MPO16 or MPO24, providing forward and backward compatibility with applications; a doubled LC duplex density option using SN connectivity; cable management innovations that simplify change; front and rear module installation; QR codes on components for instant access to factory data; and packaging changes for waste reduction and sustainability. These benefits tailor the solution to customers' needs now and in the future.

Some might argue complex challenges require complex solutions. Are innovations around simplification and user-friendliness as important as those that improve, say, speed, latency or capacity?

I’m a fan of simplicity. That said, I do not see those innovation types as mutually exclusive. Each of those considerations is critical to the user experience. Complexity should be designed in behind the scenes so that ongoing operations are easier and faster on-site. We know we can reduce latency using higher data rate, higher capacity switches with breakouts, flattening the network. The customer interface can then be simple, manageable equipment cords rather than unwieldy arrays or spaghetti cabling. That path can also provide ongoing energy benefits.

A bold statement, but would you say a fiber platform such as Propel is helping facilitate the survival of data centers in terms of future-proofing their infrastructure for whatever comes next?

Propel was designed around on-site flexibility, with logical alignment of trunk cables, module sizes to match, interchangeability of all within the same panel, and cable management innovations. We accommodate all MPO options as well as supporting VSFF connections for higher density. The intent is to support multiple network generations and growth without the need to replace panels. Today we support all duplex and known parallel applications, with the ability to adapt in the same panel should new needs arise.

CommScope provides “global support” for its customers. How is CommScope in a position to provide support at an international scale? Is this support both pre- and post-sale?

CommScope has direct sales and technical resources in regions around the globe, as well as over 10,000 partners supporting customers in over 150 countries with pre- and post-sale support. We also have regional Technical Assistance Center support. From the product perspective, we have manufacturing and distribution in-region, so we are able to cover customer needs globally, regionally or nationally.

Finally, is a strong, resilient fiber infrastructure a key component in successfully navigating the “Metaverse”, whatever that brings?

Absolutely. Providing capacity and responsiveness is critical to delivering the experiences we expect today – and those yet to be developed. High-speed distributed fiber transport is a critical foundation for that. Data centers delivering that capacity at a local geographic level will provide the necessary on- and off-ramps!

To find out more about CommScope's Propel solution, visit www.commscope.com/propel

The solution of choice for data centers

Power forward with the speed and agility of Propel, the high-speed fiber platform

• Propel is a completely modular structured cabling and connectivity solution with all the bandwidth, headroom, design flexibility and ease of installation to keep pace with your evolving data center or building network.

• High-density fiber panels, modules, adapters, fiber assemblies and interchangeable components ensure fast and repeatable installation across multiple upgrades.

• Ultra low-loss, 8/12/16/24-fiber solution enables seamless migration to more efficient 400/800G deployments.

• Maximize design options and manageability while reducing deployment time, cost and complexity.

Award-Winning Solution

Social media provider simplifies infrastructure to address growth.

Review Case Study

© 2023 CommScope, Inc. All rights reserved. AD-117907.1-EN (09/23)