> eBook
Networking at scale: The architecture we need to build bigger distributed computing networks
Command the network. Successfully deploy networks, applications, and services. Consistently optimize performance across the ecosystem. Always provide the highest quality customer experience. Every day, you fight to successfully transition to next-generation technologies and implement major transformations with greater confidence. VIAVI provides the multi-dimensional visibility, intelligence, and insight you need to efficiently manage physical and virtual environments and profitably deliver optimum service levels, transition to new technologies, and launch innovative services.
Learn more at viavisolutions.com/hyperscale
Introduction

Over the past couple of years, our digital lives have accelerated exponentially, catalyzed by a global pandemic that seems never-ending. With more and more people relying on digital services for work, play, and communication, we need networks that can cope with this growing demand. Enter hyperscale networking. This is networking at scale, capable of providing the architecture we need to add more resources to the system and essentially form a bigger distributed computing network. There are many popular use cases for hyperscale, with 5G very much stealing the spotlight. 5G is the technology that will ultimately enable other use cases, such as smart cities, autonomous vehicles, and telemedicine, applications that promise to make our everyday lives that little bit easier. But of course, with larger networks comes the scope for larger problems. In this eBook we dive into the pain points of hyperscale infrastructure, and some of the solutions available to help combat these challenges.

Contents
Chapter one: What is hyperscale?
  What is a hyperscale data center?
  DCD>Talks Networks with Sameh Yamany, VIAVI Solutions

Chapter two: Hyperscale use cases
  The current state of 5G
  Why smart cities are both incredible and inevitable
  5G powered smart cities: How far are we really?
  How is 5G changing the way we work?
  Driving into an autonomous future

Chapter three: Hyperscale challenges and solutions
  Managing the pain points of the hyperscale ecosystem
  Putting our networks to the test
  Testing to the Edge
  Panel: Testing times. Is your data center network ready?

Further reading
DCD eBook | Networking
Chapter 1: What is hyperscale? In this chapter we ask the question, what exactly is a hyperscale data center? We explore the benefits, drawbacks, and considerations when it comes to establishing a super-sized facility. This chapter also features a DCD>Talks session on all things networking with VIAVI CTO, Sameh Yamany, as well as an informative video where Yamany explains why hyperscale ecosystems are not only a key facet in our lives today, but will ultimately help underpin our digital future.
4 | DCD eBook • datacenterdynamics.com
What is a hyperscale data center? The benefits, the drawbacks and what to consider
In layman's terms, a hyperscale data center is a large-scale distributed computing center that supports high-volume data processing, computing, and storage services. The term "hyperscale" refers to data center size as well as the ability to scale up capacity in response to demand. According to IDC, to be classed as 'hyperscale' a data center must utilize at least 5,000 servers and 10,000 square feet of floor space, although many centers are significantly larger. A single hyperscale data center may include hundreds of miles
of fiber optic cable to connect the servers used for data storage, networking, and computing. Data center interconnects (DCIs) link hyperscale data centers to one another. Proactive DCI testing during installation is essential for identifying sources of latency and other performance issues. Software defined networking (SDN) is often incorporated, along with the hyperscaler’s own unique software and hardware technologies. Horizontal scaling (scaling out) is accomplished by expanding or adding more hardware to the data
center. Vertical scaling (scaling up) increases the power, speed, or bandwidth of existing hardware. There are over 600 hyperscale data centers in operation worldwide today, and this number continues to grow.

Benefits of hyperscale data centers

Hyperscale data centers allow internet content providers (ICPs), public cloud deployments, and big data storage solutions to deploy new services or scale up quickly, making them highly responsive to customer demand. The immense data center size also leads to improved cooling efficiency, balanced workloads among servers, and reduced staffing requirements. Additional benefits include:

• Reduced downtime: The built-in redundancies and continuous monitoring practices employed by hyperscale data center companies minimize interruptions in service and accelerate issue resolution.

• Advanced technology: Best-in-class server and virtual networking technologies along with 400G-800G Ethernet DCI connections lead to ultra-fast computing and data transport speeds, high reliability levels, and automated self-healing capabilities.

• Lower CAPEX: Hyperscale customers benefit from lease or subscription models that eliminate upfront hardware and infrastructure costs and enable the computing needs of their business to be flexibly scaled up or down.
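The horizontal versus vertical scaling distinction described earlier can be made concrete with a toy capacity model. This is a minimal Python sketch with purely illustrative numbers (the 5,000-server baseline echoes the IDC threshold quoted above; the capacity units and growth figures are assumptions, not vendor data):

```python
# Toy capacity model contrasting the two scaling strategies.
# Numbers are illustrative only, not vendor figures.

def scale_out(servers: int, per_server_capacity: float, added: int) -> float:
    """Horizontal scaling: add more servers of the same size."""
    return (servers + added) * per_server_capacity

def scale_up(servers: int, per_server_capacity: float, factor: float) -> float:
    """Vertical scaling: increase each server's power, speed, or bandwidth."""
    return servers * per_server_capacity * factor

# Start from 5,000 servers at 1 unit of capacity each.
print(scale_out(5000, 1.0, 1000))  # add 1,000 servers
print(scale_up(5000, 1.0, 1.2))    # upgrade every server by 20%
```

Either path reaches the same aggregate capacity here; in practice hyperscalers favor scaling out, because commodity hardware can be added incrementally without downtime on existing nodes.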
Drawbacks of hyperscale data centers

With hyperscale data center architecture relying on size and scalability, resource shortages including land, materials, labor, and equipment can quickly derail their growth. These challenges can be more severe in underdeveloped or remote regions with fewer available workers, utilities, and roads. Hyperscale construction schedules are compressed by the increased demand for internet content, big data storage, and telecom applications. The addition of 5G, the IoT, and intelligent Edge computing centers adds to this burden. These pressures can lead to minimized or omitted pre-deployment performance and fiber testing, and more problems or issues discovered after commissioning. Supply chain issues for hyperscale data centers are complicated by customization and the early adoption of new hardware and software technologies. High volume production coupled with short lead
times creates challenges for many suppliers. Rapid evolution of technology to enhance performance can also become a drawback. The speed of advancement predicted by Moore's Law forces hyperscale data center companies to refresh hardware and software infrastructure almost continuously to avoid obsolescence.

Key factors to consider

Configuration options for hyperscale data centers continue to multiply in the age of disaggregated, virtual networks and Edge computing. The fundamental constraints common to all hyperscale data centers influence their long-term efficiency, reliability, and performance. Site location is among the most important factors when it comes to planning. With new hyperscale installations topping two million square feet, the cost and availability of real estate must be balanced against the desirability of the location and the availability of resources. Improved automation, machine learning, and virtual monitoring are allowing remote global locations with inherent cooling efficiencies to become viable options. Energy source availability and cost are primary considerations,
with some data center peak power loads exceeding 100 megawatts. Cooling alone can account for up to half of this budget. Redundant power sources and standby generators ensure the "five nines" (99.999%) reliability sought by hyperscale data center companies. On-premise or nearby renewable energy sources are also being pursued to reduce CO2 emissions from hyperscale data center power consumption.

Security concerns are magnified by the size of the hyperscale data center. Although proactive security systems are an essential part of cloud computing, a single breach can expose huge amounts of sensitive customer data. Improving visibility both within and between hyperscale data centers to ward off potential security threats is a vital objective for network managers and IT professionals.

Hyperscale data center architecture

Hyperscale data center architecture differs from traditional data centers beyond the sheer size and capacity. Modular, configurable servers with centralized (UPS) power sources improve efficiency and reduce maintenance.
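The "five nines" target mentioned above translates into a surprisingly small downtime budget. A quick back-of-the-envelope check in plain Python (using the Julian-year length of 365.25 days as an assumption):

```python
# Downtime budget implied by an availability target ("N nines").
MINUTES_PER_YEAR = 365.25 * 24 * 60  # Julian year, in minutes

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of allowed downtime per year at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% -> {downtime_minutes_per_year(pct):.2f} min/year")
```

Five nines leaves roughly five minutes of unplanned downtime per year, which is why redundant power feeds and standby generation are treated as non-negotiable rather than optional.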
Cooling systems are also centralized, with large fans or blowers used to optimize temperature levels throughout the facility. At the software level, virtualization processes like containerization enable applications to move quickly between servers or data centers. Disaggregation and Edge computing in hyperscale data center architecture have made high-speed 400G and 800G Ethernet transport testing essential.
Enterprise vs hyperscale data centers

Enterprise data centers are owned and operated by the companies they support. These centers originated as small on-premises server rooms to support a specific company location. Consistent growth in traffic, storage, and computing requirements has now driven many enterprise data centers into hyperscale territory. This change applies to the increased size of enterprise data center deployments, as well as dispersed locations, energy-efficient designs, and increased use of self-healing automation. Despite this convergence, higher fiber density across the network is typically found in hyperscale data centers.

Colocation vs hyperscale data centers

As hyperscale data center solutions evolve to meet growing customer requirements for capacity and performance, colocation has become an increasingly popular approach. The colocation model allows data center owners to lease their available space, power, and cooling capacity to other organizations. In some cases, design services, IT support, and hardware are also procured by the tenant company. This allows smaller companies to realize the benefits of a hyperscale data center while avoiding the ground-up investment and time.

Hyperscale data center power consumption

With internet traffic increasing ten-fold in less than a decade and data centers already consuming approximately three percent of the world's electricity, more focus has been placed on hyperscale data center power consumption. But progress is being made:

• Network function virtualization is set to unseat active electronics, and artificial intelligence used to intelligently control server and optical power levels has already created significant efficiency improvements.

• The advent of lights out (unmanned) data centers assisted by 5G enabled IoT monitoring processes will enable more hyperscale data centers to be deployed in colder, remote locations (such as Iceland) with built-in cooling benefits.

• As energy consumption increases, a shift from fossil fuels to renewable sources such as solar, wind, and hydroelectric power will reduce the overall environmental impact of hyperscale data centers.

• Many leading cloud computing companies and data center owners, including Google, Microsoft, and Amazon, have pledged climate neutrality by 2030, while others have already accomplished this goal.

Hyperscale data center design

The size and complexity of hyperscale architecture drives a top-down approach to hyperscale data center design. Short and long-term requirements for memory, storage, and computing power ripple through hardware specification and configuration, software design, facility planning, and utilities. The role of the data center within a campus setting and other interconnection requirements must also be carefully considered. Test planning during the design phase can prevent construction delays and reduce instances of service degradation post-deployment. Important considerations include MPO-native fiber link testing and certification, high-speed optical transport network (OTN) testing, and Ethernet service activation. Early incorporation of observability tools and network traffic emulation can further safeguard ongoing performance.
Further reading

• Faster deployment and monetization of hyperscale and Edge data centers
• Hyperscale ecosystems, the future is now: A video with VIAVI CTO Sameh Yamany exploring the key drivers and challenges for hyperscale ecosystems and where VIAVI fits in
• What is hyperscale?
DCD>Talks: Networks with Sameh Yamany, VIAVI Solutions
CLICK TO WATCH
Chapter 2: Hyperscale use cases

In this chapter we examine some hyperscale use cases, with a focus on 5G, a technology that brings with it some big promises. From smart cities to the way we work and travel, how much impact will 5G actually have on our everyday lives?
The current state of 5G As 5G competition heats up, why availability is no longer enough
As of January 2020, commercial 5G networks had been deployed in 378 cities across 34 countries, according to the VIAVI report 'The State of 5G Deployments.' The country with the most cities with 5G availability is South Korea with 85 cities, followed by China with 57, the United States with 50, and the UK with 31. In terms of regional coverage, EMEA leads the way with 168 cities where 5G networks have been deployed, Asia is second with 156 cities, and 54 cities are covered by 5G across the Americas. Deployments include both mobile and fixed wireless 5G networks. As the battle for 5G supremacy heats up, VIAVI findings indicate that a number of operators are blanketing the largest population
centers, with as many as five communications service providers (CSPs) deploying 5G in cities such as Los Angeles and New York. “For 5G operators there is a heady mixture of optimism and fear,” said Sameh Yamany, chief technology officer, VIAVI. “The optimism is related to a plethora of new commercial applications that could change operator economics for the better, even though they may not feel the commercial impact for some time. “The immediate fear is that they will get left behind in the short-term marketing battle by rival operators if they’re not fast enough in their landgrab.” Sameh continued, “Nonetheless, very quickly, the overarching driver will change from simply having 5G network availability to having the
best 5G networks. “Even as operators continue their 5G build-out, they will simultaneously have to shift gears from network validation and verification through to advanced analytics and automated network troubleshooting. The race for the best 5G network has only just begun.”
Very quickly, the overarching driver will change from simply having 5G network availability to having the best 5G networks > Sameh Yamany, Chief technology officer, VIAVI
Further reading

• State of 5G infographic
• Whitepaper: Hyperscale and 5G, the future is now
• eBook: Practical guide to owning and operating 5G networks
• eBook: Tools and techniques for successful implementation, maintenance and monetization
• VIAVI, The State of 5G Deployments
Why smart cities are both incredible & inevitable And your data center needs to be prepared
Georgia Butler DCD
When we look into the future, we have a terrible habit of underestimating it. We look at the last 10 years, how far technology has come in that time, and expect this development to be mirrored in the decade coming. This couldn't be further from the truth. In reality, progress follows the law of accelerating returns, and the last 10 years saw significantly more growth than the decade before. Explained as simply as possible, the 'law of accelerating returns' is based on the principle that as we develop more technology, further growth becomes easier to conceive. We learn from experience, and we learn faster thanks to the resources available to us.
A successful smart city will depend on solutions across six major domains. Economy, environment and energy, government and education, living and health, safety and security, and lastly, mobility >Marc Cram, Server Technology, Legrand
When we think about this velocity, smart cities seem just around the corner, and as a concept they are heavily reliant upon data centers for their success. "The world population presently stands at approximately 7.7 billion people, with nearly four billion, or 54 percent, living in cities today. By 2050, it's projected that more than two-thirds of the world population will live in urban areas, with seven billion of the expected 9.7 billion people occupying cities globally," says Marc Cram, director of new market development for Server Technology, a brand from Legrand.

Cram sees these population densities as directly related to the development of smart cities. This is, in part, due to necessity. As the population expands, we need to find a way to manage it effectively in these highly dense areas. Resources will be stretched and become reliant on efficiency.

"A successful smart city will depend on solutions across six major domains: economy, environment and energy, government and education, living and health, safety and security, and lastly, mobility.

"The common understanding of a smart city today is one that provides for the real-time monitoring and control of the infrastructure and services that are operated by the city, thereby reducing energy use and
pollution, while improving health, public safety, and the quality of life of citizens and visitors."

It should not go unacknowledged that this could trigger some anxiety. The concept of real-time monitoring can feel a little too Orwellian for many people's comfort, but this is already a part of our daily lives. We are already monitored by GPS on our mobile phones, our preferences are tracked on the internet, and microphones listen to our every conversation. The question that smart cities answer is: how can this integration of technology help our daily lives? This process is already beginning in New York.

"Smart, in this case, means being efficient in the use of both human and financial capital to minimize energy usage, while ensuring the quality of service for public utilities such as water, electricity, and transportation, and to provide for the day-to-day safety of people and resilience of infrastructure.

"Smart means taking advantage of automation and remote management capabilities for lighting, power, transportation, and other mission-critical applications to keep the city running.

"With thousands of cameras and millions of sensors already in use around New York City, they are already well on their way to being a
smart city that processes over 900 terabytes of video information every single day.

"Recently, the city of New York committed grant monies for the development of a couple of new IoT-based applications that will likely require the use of the smart street lighting available through the New York Power Authority.

"First is a real-time flood monitoring pilot project led by the City University of New York and New York University to help understand flood events, protect local communities, and improve resiliency. There are two testbed sensor sites: one in Brooklyn and the other in Hamilton Beach, in the Queens area, which has a history of nuisance flooding. The software solution being tested must act as an online data dashboard for residents and researchers to access the collected flood data.

"Secondly, the city is testing computer vision technologies that automatically collect and process transportation mobility data through either a live video feed, recorded video, or some sort of site-mounted sensors.

"Currently, street activity data is collected through time- and person-intensive methods that limit the location, duration, accuracy, and number of metrics available for analysis. By incorporating computer vision-based automated counting technology, the city hopes to overcome many of these limitations with flexible solutions that can be deployed as permanent count stations, or short-duration counters with minimal setup costs and calibration requirements."

At some point, this technology could develop even further, interacting with citizens on a personal level by utilizing Edge computing.
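The 900 terabytes per day quoted above implies a substantial sustained ingest rate. A quick sanity check (assuming decimal terabytes and an even spread across the day, both simplifications):

```python
# Sustained ingest rate implied by ~900 TB of video per day.
TB = 10**12  # decimal terabyte, in bytes
bytes_per_day = 900 * TB
seconds_per_day = 24 * 60 * 60

gbits_per_s = bytes_per_day * 8 / seconds_per_day / 10**9
print(f"~{gbits_per_s:.0f} Gbit/s sustained")  # roughly 83 Gbit/s
```

That is a continuous load on the order of a single modern data center interconnect link, which helps explain why city-scale video analytics leans on Edge processing rather than backhauling everything raw.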
5G powered smart cities: How far are we really? Can 5G-powered cities become a reality, and if so, how far are we from achieving it?
With the current rate of 5G implementation across every possible industry and sector, transforming a city into a smart city is very likely to happen, and might not be as far away as we think. Simple applications that require connectivity can be served by a fiber network and a Wi-Fi router, with minimal special features required in the network. Telecommunications companies, serving as the primary connectivity providers for the city, tend to position smart city solutions as squarely within their capabilities. As smart city applications begin to mature, they become more demanding of resilient, low-latency connectivity, which 5G is well placed to provide.
Shirly Lim VIAVI
Smart city

First, we must know what defines a smart city and all it entails. The European Commission defines a smart city as "a place where traditional networks and services are made more efficient with the use of digital and telecommunication technologies for the benefit of its inhabitants and business."

The widespread availability of new technologies is required to transform a normal city into a smart one, to reach high levels of sustainable urban development and improve the quality of life for its people.

A smart city utilizes the Internet of Things (IoT) to collect real-time data, which is used to better understand how demand patterns are constantly changing, and thus to respond with faster and lower-cost solutions. Generally, the ecosystem of a digital city is designed to run on ICT frameworks that connect networks of a multitude of devices, such as mobile devices, sensors, connected cars, home appliances, and data centers. IoT trends also suggest that by 2025, the number of connected devices worldwide will rise to an astonishing 75 billion. This goes to show that the fast rate of 5G integration will make a proper smart city achievable sooner rather than later.

Immersive connectivity

5G is the most immersive network technology yet, surpassing all previous wide-area wireless and mobile networks. 5G networks are denser than 4G networks; while they retain a similar distribution, 5G is equipped with intermediate antennas that persist and boost signals significantly, resulting in an extraordinary yield. One of the most powerful features 4G brought was support for up to 2,000 devices per square kilometer. However, 5G can support much
higher numbers, with up to a million devices per square kilometer. This alone has made it clear that 4G is not up to the task and 5G is the network that can transform a city into a smart city. The most exciting and promising factor of the 5G network is its ability to provide ground-breaking innovations. Cities all over the world are eager to team up with service providers for enterprise incubation and new technology implementation geared towards better city operations and living. To make this a reality, there are three crucial aspects to note: connectivity provided by the network, a computing environment provided by cloud partners, and data services and tools. The environment clearly demands interconnected, high-performance data systems that understand the pulse of the city and translate it into
real-time insights, improved processes, and exceptional citizen experiences.

Digital solutions

In essence, 5G moves the construction of smart cities from theory into practice, and it paves the way for the development and deployment of new applications ranging from air quality monitoring, energy usage, and traffic patterns to street lighting, smart parking, crowd management, and even emergency services. Furthermore, a smart city utilizes digital solutions, such as technology and data, to significantly improve several key quality-of-life indicators. This will lead to improved traffic and commute times, accelerated emergency response times, lower healthcare costs, decreased water consumption, less unrecycled waste and harmful emissions, and ultimately the potential for huge savings.
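The 4G-versus-5G connection-density figures cited earlier (2,000 versus one million devices per square kilometer) are easier to appreciate at city scale. A small sketch; the densities are the commonly cited 4G figure and the IMT-2020 target for 5G, and the city area is my own illustrative assumption:

```python
# Rough connection-density comparison, per square kilometer.
DENSITY = {"4G": 2_000, "5G": 1_000_000}  # devices per km^2

city_area_km2 = 778  # roughly the land area of New York City (assumption)

for gen, d in DENSITY.items():
    print(f"{gen}: up to {d * city_area_km2:,} simultaneous devices citywide")

ratio = DENSITY["5G"] // DENSITY["4G"]
print(f"5G supports {ratio}x more devices per square kilometer")
```

A 500-fold jump in supported density is what makes blanketing a city with sensors, cameras, and connected vehicles plausible at all.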
All of this will bring about new business opportunities for companies providing services and applications to manage ever-more-complex IoT ecosystems and convert data into smart insights.

In conclusion

A smart city that once seemed like an impossible dream has become the project of the future. With 5G technology, it will be sooner rather than later that smart cities all over the world emerge.

Further reading

• DCD broadcast: What infrastructure investments need to be made to support ubiquitous computing in a smart city?
• DCD broadcast: How will the world of telecommunications and data infrastructure converge to support the 'smart city'?
How is 5G changing the way we work? Tackling the pain points of remote working
The endless potential of 5G networks was precisely what was needed to help get us through the difficult times brought about by the Covid-19 pandemic. Nearly two years on, it seems working remotely, something that for many was supposed to be a temporary arrangement, has become the new normal. That said, to say the transition from the office to remote working has been smooth and seamless would be an exaggeration, as many
still face issues when working from home, especially regarding connectivity. However, 5G has the capacity to diminish these issues and create a complete remote working experience with sustainable reliability, accessibility, and security. The rise of 5G will produce a long-lasting impact on our working behaviors and create a new model of work, where work can be done virtually anytime and anywhere.
Shirly Lim VIAVI
Enhanced accessibility

Working from home has been a necessity given the circumstances, and many people have relished the opportunity to work in the comfort and safety of their homes. Although it seems like working from home means having total control over when and where you work, the reality is a different matter. People still need to connect to and be tied down by their Wi-Fi routers, as mobile networks aren't sufficient to give a stable and fast connection with high bandwidth. This issue somewhat defeats the purpose of working remotely, as it only trades working from the office for working in certain spots at home. The solution is simple: 5G's high-bandwidth, low-latency networks with fast, reliable connections give people the opportunity and freedom to log in to their work wherever they may be. Making this a reality is not difficult either; businesses and companies need only embrace what 5G has to offer and continue transitioning to cloud servers and communications. Using 5G networks will ensure a productive and efficient way of working and future-proof employee collaboration.
Break past limitations

Working remotely means integrating your work into life itself. The key difference between working on-site and remotely is the considerable amount of time wasted commuting between home and the workplace, which can eventually lead to burnout and demotivation. Working from home, people tend to have more time and energy on their hands to be more productive and efficient without the burnout.

However, there is an important yet easily forgotten issue with this arrangement: the strain on bandwidth and connection. Working from home means using a shared internet connection with everybody else in the house, and more often than not, when somebody starts streaming or downloading, your work connection will be compromised. This is the number one challenge when it comes to combining work and living in one digital ecosystem. Despite this predicament, all hope is not lost, as the 5G network is the perfect technology to overcome this challenge. The 5G network boasts reliable and faster connectivity, with greater bandwidth and near-zero latency.
Furthermore, 5G is tailored for every individual, meaning you don't need to share your connection with other people, and everybody can use it for themselves, as the abundance of access points will ensure safe and fast connectivity for everyone.

This will lead to an increased scope of work that can be done remotely without the risk of slowing down or a system error. Utilizing the capabilities of the 5G network will enable staff to work smoothly and uninterrupted, from wherever they are.

Last but not least, to make this scenario a reality, workers and employees alike would need to learn and polish their skills in handling technology, as these ground-breaking technologies tend to be complex and very detailed.

Thus, organizations and companies must train their employees alongside the integration of 5G to create a competent and exceptional workforce. Embrace the 5G network to develop yourself and work with maximum efficiency and comfort.
Digital transformation

As we begin to step into the world of infinite connectivity, our work tools will also transform. From AI-powered bots to blockchain programs, or even communication through virtual and augmented reality technologies, workers will be able to finish tasks or projects much more rapidly and minimize human error. Moreover, with 5G, workers can rely on data as part of the day-to-day work experience, from research and merchandising to software development and customer experience, which will make them more informed while redefining what productivity and efficiency look like.
5G’s staggering prowess has the capacity to create a completely remote working experience with sustainable reliability

The way people work shouldn't stay the same as time goes by, and the implementation of 5G networks will bring people to a new way of working, where convenience and productivity go hand in hand.
Driving into an autonomous future

Sameh Yamany VIAVI
Could Advanced Driver-Assistance Systems (ADAS) be the push needed toward a fully autonomous future?
An exciting ultra-reliable, low-latency communication (URLLC) 5G use case that is not always associated with the hyperscale ecosystem is Advanced Driver-Assistance Systems (ADAS). This oversight is not surprising, with self-driving vehicles grabbing headlines and exceeding expectations. The hyperscale data centers anchoring 5G services from distant locations are not as visible, but they still provide the essential artificial intelligence (AI) rules, software updates, and big data storage that make ADAS possible.
DCD eBook | Networking

Moving towards level five automation
What is ADAS?
The Society of Automotive Engineers (SAE) has defined six levels of automation, ranging from level zero, where the driver performs all functions manually, to level five, full automation with no driver needed. The most advanced vehicles available today are usually classified as level two, where the driver must remain engaged while the car performs computer-guided functions. Truly autonomous transportation will require a combination of high data capacity, latency as low as 1 ms, and 99.9999 percent reliability. 5G and Edge computing, anchored by hyperscale data centers, have the power to turn this utopian vision into reality.
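To put the 99.9999 percent ("six nines") reliability figure in perspective, a quick back-of-the-envelope calculation, our own illustration rather than anything from the text, shows the downtime budget such a target implies:

```python
# Downtime budget implied by a given availability target.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year

def downtime_per_year(availability: float) -> float:
    """Seconds of permitted downtime per year at the given availability."""
    return (1.0 - availability) * SECONDS_PER_YEAR

print(downtime_per_year(0.999999))  # six nines: ~31.6 seconds per year
print(downtime_per_year(0.999))     # three nines: ~8.8 hours per year
```

In other words, a six-nines transport network may be unavailable for barely half a minute across an entire year, which is why both the radio path and the data center interconnect have to be engineered and tested to that standard.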
Levels of driving automation

Level 0 (No automation): Manual control. The human performs all driving tasks (steering, acceleration, braking etc.)
Level 1 (Driver assistance): The vehicle features a single automated system (e.g. it monitors speed through cruise control).
Level 2 (Partial automation): ADAS. The vehicle can perform steering and acceleration. The human still monitors all tasks and can take control at any time.
Level 3 (Conditional automation): Environmental detection capabilities. The vehicle can perform most driving tasks, but human override is still required.
Level 4 (High automation): The vehicle performs all driving tasks under specific circumstances. Geofencing is required. Human override is still an option.
Level 5 (Full automation): The vehicle performs all driving tasks under all conditions. Zero human interaction or attention is required.
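The six levels above can be captured in a small lookup table. The sketch below is purely illustrative; the names and the helper function are our own, not an SAE artifact:

```python
# SAE J3016 driving-automation levels, as summarized above.
SAE_LEVELS = {
    0: "No automation: the human performs all driving tasks.",
    1: "Driver assistance: a single automated system (e.g. cruise control).",
    2: "Partial automation: steering and acceleration; human monitors all tasks.",
    3: "Conditional automation: most driving tasks; human override still required.",
    4: "High automation: all tasks in specific, geofenced circumstances.",
    5: "Full automation: all tasks under all conditions, no human attention.",
}

def requires_driver_attention(level: int) -> bool:
    """Levels 0-3 still need an engaged human; levels 4-5 do not
    (within their operating domain)."""
    return level <= 3

print(requires_driver_attention(2))  # True
print(requires_driver_attention(5))  # False
```

Today's "most advanced vehicles", classified as level two, still sit firmly on the attention-required side of that boundary.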
ADAS has its origins in the 1950s, and many advanced driver-assistance systems for individual vehicles have been available for decades. Early ADAS features like anti-lock brakes, adaptive cruise control, and back-up cameras were designed with safety in mind. Built-in navigation systems and handheld devices equipped with GPS have changed driving habits forever. As LiDAR, cameras, and pattern recognition technologies advance, instant communication and data transfer between vehicles, the cloud, and other objects are the missing ingredients needed to transform ADAS into a global transportation network.
ADAS and hyperscale If fully autonomous transportation is the Holy Grail of ADAS, hyperscale computing may be the unlikely enabler that makes it possible. With high mobility and low latency pointing to Edge computing as an obvious solution, hyperscale data centers, with a minimum of five thousand servers on a ten thousand square foot or larger footprint, might seem as outdated as paper road maps and compasses – big data changes this equation. ADAS is poised to become the largest IoT use case and automotive data is expected to reach zettascale proportions by 2028.
While many decisions can and will be made by onboard computers, and many more functions will be performed at the Edge, there is still an enormous volume of data to be offloaded and analyzed. Infrastructure for AI, data analysis for traffic optimization, and content storage are obvious non-latency-dependent functions that can be housed in hyperscale data centers.

ADAS features

Vehicle to Everything (V2E)
V2E and V2X are common acronyms for “vehicle to everything”. While not quite “everything”, 5G technology does extend communication in many different directions. Vehicle to Network (V2N) refers to direct vehicle access to cloud-based services. Vehicle to Infrastructure (V2I) typically includes communication with equipment installed on or near the roadside, such as traffic signs and toll booths. Vehicle to Vehicle (V2V) allows the sensory information from surrounding vehicles to be shared for safety and navigation intelligence, while Vehicle to Pedestrian (V2P) communication can be used to warn both drivers and pedestrians of potential obstacles, including each other.

3D location
When complemented by artificial intelligence, augmented reality, and GPS data, the information provided by V2E can move geolocation services from 2D to 3D. This improves situational awareness, giving drivers detailed 3D information on the surrounding terrain and potential obstacles. Hyperscale data centers will support the essential AI rules and long-term storage needed to maintain this virtual 3D map of the world.

Telematics
By combining informatics and telecommunications, telematics provides a means to send vehicle information directly to the cloud for storage and analysis. This technology has found a ready-made application in fleet management for trucking companies, taxi operators, and emergency services. 5G is expected to move telematics further into the consumer domain as driving behavior is communicated to insurance companies, auto dealerships, and individuals.

Infotainment
As ADAS moves closer to level five, more travel time will be available for communication and entertainment. Some aspects of the infotainment experience will depend on the vehicle’s computer hardware and graphics. 5G will also play a role by supplying streaming content and connected gaming services to meet consumer demands. Many of these applications will be bandwidth intensive, although not as exacting for latency and reliability. This presents an opportunity for infotainment content and user preference data storage in the hyperscale cloud.

The future of ADAS and hyperscale

Although predictions for ADAS adoption and level five transformation vary, it is virtually certain that the data storage and computing demands will be unprecedented. This will lead to an ongoing trade-off between onboard vehicle computers, Edge computing locations, and hyperscale data centers to prioritize and balance storage, analysis, and latency. As data centers become more disaggregated and interoperable, their importance to ADAS advancement will remain unchanged. The Automotive Edge Computing Consortium (AECC) was founded in 2018 to help vehicle manufacturers, OEMs, and suppliers evolve network architecture and computing infrastructure to meet the challenges of ADAS.
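As a compact summary, the V2X directions described above can be modeled as a small taxonomy. This is an illustrative sketch only; the names and the latency classification reflect our own reading of the text, not any automotive standard library:

```python
# The V2X message directions described above, as a small taxonomy.
# Class and function names are illustrative, not from an automotive standard.
from enum import Enum

class V2XMode(Enum):
    V2N = "vehicle to network"         # direct vehicle access to cloud services
    V2I = "vehicle to infrastructure"  # roadside equipment: signs, toll booths
    V2V = "vehicle to vehicle"         # shared sensor data between vehicles
    V2P = "vehicle to pedestrian"      # mutual warnings of potential obstacles

def latency_sensitive(mode: V2XMode) -> bool:
    """V2V and V2P carry collision-avoidance traffic, so they need the URLLC
    treatment; V2N cloud traffic can ride to a distant hyperscale site."""
    return mode in (V2XMode.V2V, V2XMode.V2P)

print(latency_sensitive(V2XMode.V2V))  # True
print(latency_sensitive(V2XMode.V2N))  # False
```

The split mirrors the storage-versus-latency trade-off running through this chapter: safety traffic stays at the Edge, while the bulk data lands in the hyperscale cloud.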
While questions remain as to who will finance, build, and operate the new infrastructure, consumer demand could be the ultimate driver of data center expansion. This will make proactive use case emulation, pre-deployment DCI fiber characterization, and high-speed transport testing invaluable.

Further reading

Panel: What level of intelligence and automation do data center networks require?
DCD broadcast: Exploring data center network infrastructure dynamics
Chapter three: Hyperscale challenges and solutions

In this chapter, we look at some of the pain points brought about by a hyperscale ecosystem. We dive into further use cases, alongside some of the solutions available today to help manage, monitor and test for issues, ideally before they arise.
Managing the pain points of the hyperscale ecosystem

A larger ecosystem brings with it the potential for larger problems
With a minimum of five thousand servers on a ten thousand square foot or larger footprint, hyperscale data centers are continually challenged by increasing bandwidth, storage, computing power, and speed requirements. The rapid scalability that defines hyperscale computing can only be accomplished through a combination of new hardware (horizontal scaling) and improved performance of existing data centers (vertical scaling). Finding the talent and resources to build or expand hyperscale data centers is a pain point that only grows stronger as the scale increases, especially with aggressive installation timelines. This can lead to reduced or omitted fiber and system verification testing, exposing data centers to downstream failures and rework. Given the massive scale and energy requirements, Internet content providers (ICPs), big data storage, and public cloud operators face growing pressure to improve efficiency and reduce emissions.
Sameh Yamany, VIAVI
With data centers already consuming approximately three percent of the world’s electricity and emitting a volume of CO2 comparable to the airline industry, clean energy conversion and net-zero carbon footprint commitments are on the rise.
5G changes the hyperscale definition

Just as demand and sustainability compete for the hyperscalers’ attention, 5G has entered the picture with a new blueprint for hyperscale computing. Core functions in the cloud continue to anchor the network architecture, but distributed Edge computing and disaggregation to support ultra-low latency 5G use cases move hyperscalers out of their proverbial box. In other words, 5G big data remains centralized while instant data moves closer to the Edge.

Hyper-intelligent hyperscale

5G verticals open up “Jetson-esque” possibilities for more efficient communities and industries. If properly leveraged, 5G data test and assurance solutions can have a similar impact on hyperscale pain points. Intelligence and automation are needed to successfully create, test, and assure 5G network slices deployed from end to end. A successful union between 5G and hyperscale will require AI, machine learning, and network function virtualization (NFV) to achieve new levels of performance.

5G hyperscale use cases

Advanced driver-assistance systems (ADAS)
As we looked at in more depth in the last chapter, a truly transformational 5G use case supported by hyperscale computing is Advanced Driver-Assistance Systems (ADAS). Based on emerging “vehicle to everything” (V2E) technology, ADAS establishes a new transportation model with 5G providing the requisite ultra-reliable, low-latency communication (URLLC). Edge computing power is the key to meeting ADAS latency requirements in the 1-2 ms range. Parameters including vehicle spacing, traffic signal timing, pedestrian avoidance, and augmented signage can be fully automated and optimized.

Connected health
Breakthrough innovations like remote surgeries have rightfully attracted public attention, although it may be the everyday healthcare improvements made possible by 5G that ultimately impact and save more lives. Telemedicine can provide a path to routine care for isolated, immobile, or symptomatic patients. The IoT wearable market is set to explode with the capacity boost and latency reduction of 5G. Hyperscale data centers in perfect sync with Edge computing locations are the key to supporting these virtual healthcare applications securely and reliably.

Unmanned data centers
Hyperscale data centers themselves will become primary beneficiaries of the 5G services they enable. Using the IoT to monitor and control temperature, power, and surveillance functions in real time is in line with a shift towards lights-out (unmanned) data center operation, particularly at the Edge. Physically removing humans also opens new possibilities for hyperscale data center locations, including frigid, inhospitable regions where land and natural cooling sources are inexpensive and plentiful.

Factory automation
The fourth Industrial Revolution (Industry 4.0) brings the internet of things (IoT), machine learning, and real-time autonomous decision-making to warehouse and factory floors. The benefits of factory automation backed by high-bandwidth, low-latency private 5G networks are seemingly limitless. Robots, vehicles, facilities, and tools become smarter, safer, and more efficient, while maintenance and calibration can be scheduled based on feedback from millions of embedded sensors utilizing hyperscale cloud computing.

Resolving the hyperscale pain points

Unmanned data centers embracing the IoT give us an important clue to addressing the pain points threatening hyperscale expansion. As data centers get larger, they will also become more intelligent, flexible, and distributed. Harnessing and embracing new technology (including 5G) to test, monitor, and streamline data center operations is the best way to turn the current challenges into opportunities.

For energy consumption and carbon footprint reduction, this means innovations like intelligent renewable energy grids on or near premises, factory automation to support clean concrete production, and remote operation to scale back the human presence and infrastructure. The use cases being rolled out today are just the tip of the iceberg. Social, environmental, and healthcare issues that have eluded solutions for decades now look to 5G as a beacon of progress and hope.

Despite the emphasis on 5G RAN and device innovations, comprehensive testing of hyperscale data centers is also necessary to ensure the promise of 5G. A proactive approach to pre-deployment fiber, RAN and Xhaul testing recognizes a new standard of automated cloud-based test and diagnostics tools that help rather than hinder construction timelines. This progressive approach to testing also includes live network traffic emulation and AI-powered “self-healing” capabilities to prevent outages, repairs, and unplanned updates. Industry-leading and automated MPO, high-speed transport, fiber certification, emulation, and observability tools demystify hyperscale testing to support the data center of the future.

Further reading

State of 5G infographic
Whitepaper: Hyperscale and 5G, the future is now (Viavi whitepaper)
eBook: Practical guide to owning and operating 5G networks (Viavi eBook from Viavi site)
Viavi webinar: Managing the data center lifecycle end-to-end
eBook: Tools and techniques for successful implementation, maintenance and monetization (Viavi eBook from Viavi site)
Network cloudification: Distributed, disaggregated, native cloud-based, and fully automated (DCD)
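The 1-2 ms ADAS latency requirement cited earlier in this chapter is also a matter of physics: propagation delay in optical fiber alone bounds how far away the serving compute can sit, which is why Edge capacity matters alongside the hyperscale core. A rough sketch of that bound (our own illustration, not from the text):

```python
# Why ADAS needs the Edge: fiber propagation alone bounds server distance.
C_FIBER_KM_S = 200_000  # ~2/3 of c: approximate speed of light in optical fiber

def max_server_distance_km(rtt_budget_ms: float) -> float:
    """Farthest a server can be if the entire round-trip latency budget
    were spent on fiber propagation (ignoring switching and processing)."""
    one_way_s = (rtt_budget_ms / 1000.0) / 2.0
    return C_FIBER_KM_S * one_way_s

print(max_server_distance_km(1.0))  # 100.0 km at a 1 ms round-trip budget
print(max_server_distance_km(2.0))  # 200.0 km at a 2 ms round-trip budget
```

Since real budgets must also cover radio, switching, and compute time, the practical radius is far smaller still, pushing latency-critical functions to the Edge while bulk storage and AI training remain in distant hyperscale facilities.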
Putting our networks to the test

Surprisingly, perhaps, the root causes of many network problems point towards errors during installation. David Zambrano of VIAVI tells us more
For a business that will, in just one year’s time, celebrate its 100th birthday, one might assume that the VIAVI team is well accustomed to change. Yet the world around us has transformed dramatically in recent years as we welcome the increasingly close embrace of technology and communications into our everyday lives. Our own roots as an organization are firmly planted within the bits and bytes of testing and enablement of every form of high-performance network. And we still relish rolling our sleeves up with network teams as they drive their networks faster and further.
However, as we have gained a deeper understanding of project workflow demands, our focus has shifted towards taking time, complexity, and risk out of the equation.

Qualifying installation is key

This is, perhaps, more important than ever, as renewed data center expansion around the world applies increased pressure onto supply chain and service enablement teams. Accelerated delivery targets may force installation companies to accept increased risk to ensure on-time delivery.

We acknowledge the reality that network testing can be very time-consuming, and by its nature tends to occur towards the latter stages of a project, neither of which is ideal as time pressure increases. It can therefore be a temptation to assume, after all the development and quality work done by cabling vendors, that your brand-new fiber infrastructure will deliver on expectations fresh out of the box. We see this occasionally factoring into teams being instructed to do a minimum of testing, just enough to get a feel for the network, but not enough to impact delivery dates.

With disaggregated networking becoming more commonplace and software defining the deployment and subsequent management, one might question whether it is still necessary to test fibre network deployments at all. After all, if there is a problem, then the network can probably work around it on the fly or spin up resources at another site. While there is a degree of resilience that may be built into networks, both physically and through intelligent software intervention, this represents a workaround rather than a prevention. There will still be a cost associated with identifying and resolving the root cause, the complexity of which may be significantly higher on a live network.

What do we need to test?

Time and again, we have observed that the root cause of network problems often points towards errors during installation, ineffective or incomplete testing, or poor maintenance of the fiber network. In comparison to most active network elements, there can be some significant physical work involved in deploying cabling infrastructure. A recent industry study undertaken by VIAVI found that installers can spend as much as 20 percent of their working week troubleshooting physical network issues. Network connectivity issues with the fiber connections between data centers are among some of the costliest operational problems due to their high ‘mean time to repair’ (MTTR). Pulling many kilometres of fibre through congested ducts, unpacking and dressing-in new cable assemblies, even something as simple as leaving dust caps off connectors, may result in issues that can be extremely disruptive and expensive to correct – something the hard-working operations teams whose job is to manage networks will no doubt attest to. Consequently, failing to test properly represents somewhat of a false economy, focusing excessively on the short-term wins without considering the potential longer-term losses.

But how can this be made easier and faster, without sacrificing network assurance on the altar of speed?

Collaboration and planning: Simplify and accelerate high-speed network tests

Centralized planning and testing automation are one part of the equation, but so too is early and close collaboration between infrastructure vendors, primary and secondary (and sometimes tertiary) contractor teams and, of course, the network owners themselves.

Recent efforts of note here have led to dramatically simplified MPO-based (multifibre) network testing and troubleshooting workflows, as well as the definition of client-specific best practices to help them overcome persistent and disruptive network performance issues.

Fostering this kind of deeper engagement and understanding is a big focus today for VIAVI and for Blue Helix, a leading distribution partner of ours based in the UK but servicing throughout EMEA. By engaging on a global level and then leveraging their unique strengths to deliver together in partnership on a local one, our combined efforts are already proving their worth for data center build and operational teams throughout the region.

In 2022’s fast-moving data center market, choices made today must deliver immediately on revenue and business values, including enhanced process efficiency, maximized resource availability and more effective network monetization. The decision to test is a critical one: to maintain installation quality, resolve challenges before they become crises, and to maximize a network’s return on investment. Despite perceptions that the process will consume vast quantities of time, this is no longer the case thanks to the optimized tools and efficient automated workflows which are at our disposal. Success, however, is not simply down to clever technology; it also requires talking, teaching, trust and teamwork, a combination that will stand up to any test you care to throw at it.

Further reading

Maintaining robust DAC, AOC and transceiver connectivity in hyperscale data centers
Phase two of white box networking: Disaggregation of DWDM optical networks
Monitoring private network connections from enterprise to cloud
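The fiber certification the article argues for ultimately comes down to comparing measured loss against a link budget. The sketch below uses typical single-mode attenuation figures for illustration; the constants are assumptions of ours, not values from the article or any specific standard:

```python
# Rough optical loss budget for a fiber link, as checked during certification.
# Typical values assumed for illustration; real budgets vary by standard.
FIBER_DB_PER_KM = 0.35   # single-mode attenuation around 1310 nm, dB/km
CONNECTOR_DB = 0.5       # allowed loss per mated connector pair, dB
SPLICE_DB = 0.1          # allowed loss per fusion splice, dB

def loss_budget(km: float, connectors: int, splices: int) -> float:
    """Maximum acceptable end-to-end loss in dB for the link."""
    return km * FIBER_DB_PER_KM + connectors * CONNECTOR_DB + splices * SPLICE_DB

def link_passes(measured_db: float, km: float, connectors: int, splices: int) -> bool:
    """Certification check: measured loss must stay within the budget."""
    return measured_db <= loss_budget(km, connectors, splices)

budget = loss_budget(km=10, connectors=2, splices=3)  # 3.5 + 1.0 + 0.3 = 4.8 dB
print(round(budget, 1))
print(link_passes(5.2, km=10, connectors=2, splices=3))  # over budget: fails
```

A link that fails such a check often points back to exactly the installation errors described above, a dirty connector or a stressed splice, which is far cheaper to find at turn-up than on a live network.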
Testing to the Edge

How can organizations ensure Edge deployments deliver to their full potential?
Today, keeping customers satisfied is an increasingly complex task, as a burgeoning need for speed, and an ever-growing array of content delivery channels and digital connection points, continue to reshape the business landscape as we know it. Mobility has gone mainstream, and customers expect a positive experience, no matter where they are. The pressure to provide this seamless ‘anytime, anywhere’ customer experience has businesses completely rethinking the way they design their IT networks, build out IT infrastructure and ultimately interact with customers.
At the center of this shift, we find Edge computing, moving computation, storage and other tasks outside the core network and nearer to the collection source, such as an IoT (Internet of Things) device. The end result? A faster, more streamlined customer experience can then be realized because the compute process and associated decision making happens closer to where the data is physically generated. But despite the benefits, implementing the Edge doesn’t come without its network challenges. “Because there’s a lot of variability in technology and approach out there, there may be a lot of different challenges encountered in terms of getting an installation to happen cleanly and smoothly,” says VIAVI’s David Zambrano.
> David Zambrano Global account executive, VIAVI
Claire Fletcher, DCD
“At the end of the day, customers want to get maximum ROI from the network, so VIAVI’s goal is to try to help them remove as many performance roadblocks as we can.”

Whether developing easy-to-use yet powerful automated tools, or creating simplified but effective client certification workflows, VIAVI has been involved extensively within the network testing arena for a significant period of time. As the company has evolved and entered new markets, today VIAVI is positioned as a key player across all main aspects of networking.

“From service provider fiber to the home (FTTH), to wireless 5G, enterprise headquarter buildings or data centers, the types of things that we are able to do are primarily around optical testing, and ensuring that the networks which service providers or enterprises deploy will fully deliver to their expectations,” says Zambrano.

In addition to the data center space, VIAVI has also been engaging with many of the teams deploying fiber broadband networks on a global scale. VIAVI equipment not only verifies the backhaul networks, ensuring they are delivering as they should, but confirms that there is a clean signal from the head ends too.

“Whether these network deployments are simple or complex, we can offer the right testing solutions and workflows to not only confirm a successful deployment, but to also verify how well your network is actually performing,” explains Zambrano.
Defining the undefinable

We’re already seeing many colocation and hyperscale data center operators building their networks increasingly closer to the demand centers. However, there is a limit to how many medium to large data centers can be built within metropolitan areas and, in reality, one might question how effective traditional sites can be in servicing increasingly mobile applications. Therefore, developing a definite Edge-oriented approach is the next logical step in the extension of existing strategies, bringing services closer to where they’re needed most. Yet the problem here is, the concept of Edge is still evolving. In fact, what constitutes an Edge deployment is still so broad it makes having these conversations somewhat difficult.
When asked to define the Edge, Zambrano said, “I think for the number of conversations that have been had about Edge over the years, there is probably an equal number of definitions.

“Edge could be a small cabinet which is in your office, and that’s duplicating what a cloud service provider is offering you. At the same time, it could be a containerized solution at the bottom of a tower, or, in reality, still be a full-sized data center. It really depends on the location and the localized requirement.

“I think there will be a lot of different configurations, sizes, shapes, and methods by which Edge is deployed and managed, it’s still evolving,” says Zambrano. “But then you get some people that say, ‘Well, we’ve been doing Edge for years, it’s just what we call pop sites’. So it’s not really anything different, but it is certainly a term that has captured the imagination.”

What’s driving the Edge?

Lately, 5G has unquestionably been the star of the low-latency show, but there are many other low-latency applications that make incredibly strong use cases for the Edge. “Traffic management systems where we wouldn’t want there to be any delays strike me as a very good application for this,” says Zambrano. “Many of the cities that are implementing these systems are also moving toward smart energy grids.”

Then you have the less mission-critical applications, such as gaming. That said, try telling a serious gamer their mission isn’t critical. In this instance, the time taken between thumb and server will define how happy they are as a customer.

There has also been a lot of attention given to what people are referring to as the “Metaverse”. As an immersive gaming or entertainment platform, a great deal of local data capacity is going to be needed, but there is also interest in it from a business-focused standpoint as well, perhaps as an evolution of the virtual meeting places which have been so important in recent years. In both scenarios, the user experience is going to be extremely important to the different providers all vying for dominance in what could be a very competitive market.

And it’s not just about speedy communications; we should not forget that data privacy is also becoming increasingly important, particularly from a medical perspective. “A lot of hospitals and clinics would not want their patients’ data to be shared somewhere in the cloud that they have no control over. So that’s a use case for Edge where they’ll set up their own cloud application data center. So it’s running cloud applications, but it isn’t going anywhere else,” says Zambrano.

Testing, testing

There might be some distinct nuances when it comes to Edge deployments and the way they are tested. Ideally, however, a highly distributed Edge environment could benefit from a proactive approach in order to satisfy the fundamental service-level criteria clients are looking to achieve.

Zambrano explains, “I think speed and efficiency of delivery is going to be a priority for most because it’s about being agile in how we deploy resources. Not just getting something deployed quickly, but also being sure that it’s going to work when it’s in place is key, as you’re potentially dealing with a lot of locations.”

He continues, “There is going to be a team that needs to manage this deployment. Therefore, in an unmanned data center – which Edge facilities often are – automating monitoring and management is likely to be beneficial. It’s much more efficient to confirm if something is wrong before teams are deployed to try and resolve it, but more importantly, root cause analysis will help them to be far more efficient when it comes to problem resolution.”

Up until recently, this kind of visibility has been the missing piece of the puzzle. For uptime-sensitive or security-conscious customers, the ability to quickly and efficiently find the precise location of a problem at the Edge could be crucial.

And automated testing isn’t only advantageous when it comes to proactive problem solving and efficiency improvements; it also comes into play in measuring the key KPIs that not only appeal to customers, but ensure the data center is compliant with current regulations. Measures such as these may help simplify the management of a site, providing invaluable visibility and control to help ensure the longevity and reliability of an operation, which ultimately affects the bottom line.

According to Gartner, by 2025 there is set to be a 75 percent rise in Edge deployments, and assurance strategies like those from VIAVI could be what sets you apart from the competition.
Panel
Testing times: Is your data center network ready?
This panel discussion deep-dives into network deployment operational testing and automation to:

• Reduce deployment, testing times and costs
• Optimize optical networks at the foundation of this change
• Reduce latency and ensure reliability in line with SLAs
#DCDEdgeVirtual
Further reading

White paper: The data center of tomorrow
Viavi 800G Pluggable Optics Poster
Hyperscale technology and solutions
Command the network. Successfully deploy networks, applications, and services. Consistently optimize performance across the ecosystem. Always provide the highest quality customer experience. Every day, you fight to successfully transition to next-generation technologies and implement major transformations with greater confidence. VIAVI provides the multi-dimensional visibility, intelligence, and insight you need to efficiently manage physical and virtual environments and profitably deliver optimum service levels, transition to new technologies, and launch innovative services.
Learn more at viavisolutions.com/hyperscale