The Future of Data Center Connectivity Supplement


Sponsored by Schneider Electric


INSIDE

Understanding the data journey

The DPU dilemma
> There's a major shift happening, and it's coming from the network card

Data centers with dishes
> Earthbound infrastructure is merging with space tech, as a new cloud business model emerges

Wiring up the Edge
> Processing the data where it's created is a good idea. But what are the consequences?


Data center experts deploy faster with less risk.

EcoStruxure™ for Data Center delivers efficiency, performance, and predictability.
• Rules-based design accelerates the deployment of your micro, row, pod, or modular data centers.
• Lifecycle services drive continuous performance.
• Cloud-based management and services help maintain uptime and manage alarms.

#WhatsYourBoldIdea se.com/datacenter

©2020 Schneider Electric. All Rights Reserved. Schneider Electric | Life Is On and EcoStruxure are trademarks and the property of Schneider Electric SE, its subsidiaries, and affiliated companies. 998-21069523_GMA

EcoStruxure Pod Data Center EcoStruxure IT



Contents

The DPU dilemma
There's a major shift happening, and it's coming from the network card

Data centers with dishes
Earthbound infrastructure is merging with space tech, as a whole new cloud business model emerges

Wiring up the Edge
Processing the data where it's created is a good idea. But what are the consequences?

Network frontiers

Networks never stand still for long. Every new generation of hardware has implications for the wires and switches that connect and empower it.

Right now, we see plenty of new networking frontiers. This supplement presents three of them, ranging from inside the backplane of your servers right to the rim of space.

SmartNICs change everything


It seemed such a simple idea: make network cards more intelligent, so they could offload the grunt work in server communications. But it had much bigger implications than anyone expected.

There's a precedent, of course. The GPU started out as a way to speed up graphics for gamers. Now those coprocessors are the backbone of all the world's fastest supercomputers. By the same token, processors designed to speed up networking are enabling changes in architecture, so the way data centers are networked is going to influence the way they are built.

Right now, there's no agreed definition of what to call this revolution - SmartNICs, DPUs, or something else. We spoke to a pioneer who believes that smart networking will create a market for data processing units (DPUs) which can restructure server farms, and an operator who's eager to put those DPUs to work in bare metal services. We also found an analyst who sees a potential gold rush.

Ground stations to grow
With thousands of satellites appearing in new constellations, ground stations are no longer a big-ticket investment. Satellite operators are coming to the conclusion that ground station infrastructure is better if it's shared, and available as a service.

That sounds very familiar to anyone who's met the cloud. And sure enough, the hyperscalers are getting involved - adding processing power to ground stations, and dishes to their data centers. The next step is to package it all into a neat cloud service.

But can the cloud players adapt to this new market, or will the incumbents be able to slim down their infrastructure to address the new flexibility that is coming in the current space boom?

Wiring up the Edge
The need for Edge facilities is well-rehearsed. There's a class of applications, like the Internet of Things, where data is generated locally, and must be processed quickly. That's led to an explosion of players offering small increments of data center capacity, to be installed at the Edge, close to where data is generated and used.

But what about the networks? Latency was the driving force which created the Edge, so the most important factor about the Edge will surely be the architecture of the networks that actually deliver it where it is needed. As is so often the case, the important part of the infrastructure is coming into existence on the fly. Yet another new frontier.



The DPU dilemma: life beyond SmartNICs

There's a major shift happening in server hardware, and it's emerged from a surprising direction: the humble network card

"We are near the start of the next major architectural shift in IT infrastructure," says Paul Turner, vice president, product management vSphere at VMware, in a blog post. He's talking about new servers which are built with maximum programmability in a single cost-effective form factor. And what made this revolution possible is a simple thing: making network cards smarter.

The process began with the smart network interface card, or SmartNIC - and has led to a specialized chip: the DPU or data processing unit - an ambiguously named device with a wide range of applications. "As these DPUs become more common we can expect to see functions like encryption/decryption, firewalling, packet inspection, routing, storage networking and more being handled by the DPU," predicts Turner.

The birth of SmartNICs
Specialized chips exist because x86 family processors are great at general purpose tasks, but for specific jobs, they can be much slower than a purpose-built system. That's why graphics processing units (GPUs) have boomed, first in games consoles, then in AI systems.

"The GPU was really designed to be the best at doing the math to draw triangles," explains Eric Hayes, CEO of Fungible, one of the leading exponents of the new network chips. "Jensen Huang at Nvidia was brilliant enough to apply that technology to machine learning, and realize that architecture is very well suited to that type of workload."

Like GPUs, SmartNICs began with a small job: offloading some network functions from the CPU, so network traffic could flow faster. And, like GPUs, they've eventually found themselves with a wide portfolio of uses.
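To make "offloading network functions" a little more concrete: checksum calculation is one of the basic per-packet jobs that even standard NICs take over from the CPU in hardware. The sketch below is purely illustrative (it is not from any vendor mentioned here); it computes an Internet-style ones'-complement checksum in software, the kind of arithmetic a busy host would otherwise repeat for every packet at ever-higher line rates.

    # Illustrative sketch: the per-packet work a NIC's checksum offload removes from the CPU.
    import time

    def ones_complement_checksum(data: bytes) -> int:
        """Internet-style 16-bit ones'-complement checksum (RFC 1071 approach)."""
        if len(data) % 2:
            data += b"\x00"                           # pad odd-length payloads
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
        return ~total & 0xFFFF

    frame = bytes(1500)                               # one MTU-sized payload
    start = time.perf_counter()
    for _ in range(10_000):                           # pretend 10,000 packets arrive
        ones_complement_checksum(frame)
    elapsed = time.perf_counter() - start
    print(f"10,000 software checksums took {elapsed:.3f}s on this CPU")

Multiply that by millions of packets per second on a 100Gbps link and the appeal of doing it in silicon on the card is obvious.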


But the SmartNIC is not a uniform, one-size-fits-all category. They started to appear as networks got faster, and had to carry more users' traffic, explains Baron Fung, an analyst at Dell'Oro Group. "10Gbps is now more of a legacy technology," Fung explained, in a DCD webinar. "Over the last few years we've seen general cloud providers shift towards 25 Gig, many of them are today undergoing the transition to 400 Gig." At the same time, "cloud providers need to consolidate workloads from thousands and thousands of end users. SmartNICs became one of the solutions to manage all that data traffic."

Servers can get by with standard or "foundational" NICs up to around 200Gbps, says Fung. "Today, most of the servers in the market have standard NICs." Above that, network vendors have created "performance" NICs, using specialized ASICs to offload network functions, but SmartNICs are different. "SmartNICs add another layer of performance over Performance NICs," says Fung. "Essentially, these devices boil down to being fully programmable devices with their own processor, operating system, integrated memory and network fabric. It's like a server within a server, offering a different range of offload services from the host CPU."

It's a growth area: "SmartNICs are relatively small in numbers now, but in the next few years, we see increasing adoption of these devices." SmartNICs are moving from a specialist market to more general use: "Today, most of the smart devices are exclusive to cloud hyperscalers like Amazon and Microsoft, who are building their own SmartNICs for their own data centers," says Fung. "But as vendors are releasing more innovative products, and better software development frameworks for end users to optimize their devices, we can see more adoption by the rest of the market, as well."

Peter Judge, Global Editor

SmartNICs will see three percent annual growth over the next few years, but will remain a small part of the overall market, because they are pricey: "Today, they are a three to five times premium over a standard NIC. And that high cost needs to be justified."

In a general network application, SmartNICs can justify their cost by making networks more efficient. "They also prolong the life of infrastructure, because these smart devices can be optimized through software. It's really a balance, whether or not a higher price point for SmartNICs is justifiable."

But there is potential for confusion, as different vendors pitch them with different names, and different functions. Alongside SmartNICs and DPUs, Intel has pitched in with the broadly similar infrastructure processing unit (IPU). "There's many different acronyms from different vendors, and we've seen vendors trying to differentiate with features that are unique to the target applications they're addressing," says Fung.





Enter Fungible
One of those vendors is Fungible. The company is the brainchild of Pradeep Sindhu, a formidable network re-builder and former Xerox PARC scientist, who founded Juniper Networks in 1996. Juniper was based on the idea of using special purpose silicon for network routers, instead of using software running on general purpose network switches. It rapidly took market share from Cisco.

In 2015, Pradeep founded Fungible, to make special purpose devices again - only this time making network accelerators that he dubbed "data processing units" or DPUs. He's now CTO, and the CEO role has been picked up by long-time silicon executive Eric Hayes.

Hayes says the Fungible vision is based on the need to move more data from place to place: "There's data everywhere, and everybody's collecting data and storing data. And the question really comes down to how do you process all that data?"

Equinix architect Kaladhar Voruganti gives a concrete example: "An aeroplane generates about 4.5 terabytes of data per day per plane. And if you're trying to create models or digital twins, you can imagine the amount of data one has to move," says Voruganti, who serves in the office of the CTO at Equinix.

CPUs and GPUs aren't designed to help with the tasks of moving and handling data, says Hayes: "When you start running those types of workloads on general purpose CPUs or GPUs, you end up being very inefficient, getting the equivalent of an instruction per clock. You're burning a lot of cores, and you're not getting a lot of work done for the amount of power that you're burning."

Hayes reckons there's a clear distinction between SmartNICs and DPUs, which go beyond rigid network tasks: "DPUs were designed for data processing. They're designed to do the processing of data that x86 and GPUs can't do efficiently."

He says the total cost of ownership benefit is clear: "It really comes down to what is the incremental cost of adding the DPU to do those workloads, versus the amount of general purpose processing you'd have to burn otherwise."

According to Hayes, the early generations of SmartNICs are "just different combinations of Arm or x86 CPUs, with FPGAs and hardwired, configurable pipelines. They have a limited amount of performance trade-off for flexibility."

By contrast, Fungible's DPU has "a custom designed CPU that allows a custom instruction set with tightly coupled hardware acceleration. So the architecture enables flexibility and performance at the same time."

The Fungible chip has a MIPS 64-bit RISC processor with tightly coupled hardware accelerators: "Tightly coupled hardware accelerators in a data path CPU: this is the definition of a DPU." The DPU can hold "a very, very efficient implementation of a TCP stack, with the highest level of instructions per clock available, relative to a general purpose CPU."



What do DPUs do?
DPUs make networked processing go faster, but Fungible is looking at three specific applications which shake up other parts of the IT stack.

The first one is the most obvious: speeding up networks. Networks are increasingly implemented in software, thanks to the software defined networking (SDN) movement. "This goes all the way back to the days of Nicira [an SDN pioneer bought by VMware]," says Hayes. SDN networks make the system more flexible by handling their functions in software. But when that software runs on general purpose processors, it is, says Hayes, "extremely inefficient." SmartNICs take some steps towards improving SDN functionality, Hayes says, but "not at the level of performance of a DPU." Beyond simple SDN, SmartNICs will be essential in more intelligent network ecosystems, such as the OpenRAN (open radio access network) systems emerging to get 5G delivered.


Rewriting storage
The next application is much more ambitious. DPUs can potentially rebuild storage for the data-centric age, says Hayes, by running memory access protocols over TCP/IP and offloading them, creating "inline computational storage."

NVMe, or non-volatile memory express, is an interface designed to access flash memory, usually attached by the PCI Express bus. Running NVMe over TCP/IP, and putting that whole stack on a DPU, offloads the whole memory access job from the CPU, and means that flash memory no longer has to be directly connected to the CPU.

"The point of doing NVMe over TCP is to be able to take all of your flash storage out of your server," says Hayes. "You can define a very simple server with a general purpose x86 for general purpose processing, then drop in a DPU to do all the rest of the storage work for you."

As far as the CPU is concerned, "the DPU looks like a storage device, it acts like a storage device and offloads all of the drivers that typically have to run on the general purpose processor. This is a tremendous amount of work that the x86 or Arm would have to do - and it gets offloaded to the DPU, freeing up all of those cycles to do the purpose of why you want the server in the first place."

Flash devices accessed over TCP/IP can go from being local disks to becoming a centralized pooled storage device, says Hayes. "That becomes very efficient. It's inline computational storage, and that means we can actually process the data coming into storage or going back out. In addition to that, we can process it while it's at rest. You don't need to move that data around, you can process it locally with the DPU."
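For a sense of what NVMe over TCP looks like from the host side, the hedged sketch below drives the standard Linux nvme-cli tool from Python to discover and attach a remote flash pool, which then shows up as an ordinary block device. The address, port and subsystem NQN are made-up placeholders, and in a DPU-based deployment the vendor's own tooling, not a script like this, would normally set this up.

    # Hedged sketch: attach a remote NVMe/TCP namespace using nvme-cli (Linux).
    # The address, port and subsystem NQN below are illustrative placeholders.
    import subprocess

    TARGET_ADDR = "192.0.2.10"        # documentation address, not a real target
    TARGET_PORT = "4420"              # conventional NVMe/TCP port
    SUBSYS_NQN = "nqn.2024-01.io.example:flash-pool"

    # Ask the target which subsystems it exports.
    subprocess.run(
        ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )

    # Connect; the namespace then appears as an ordinary /dev/nvmeXnY block device,
    # even though the flash physically lives outside the server.
    subprocess.run(
        ["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT,
         "-n", SUBSYS_NQN],
        check=True,
    )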




Speeding GPUs
In a third application, DPUs meet the other great offload workhorse, the GPU, and help to harness it better - because after all, there's a lot of communication between CPUs and GPUs. "In most cases today, you have a basic x86 processor that's there to babysit a lot of GPUs," says Hayes. "It becomes a bottleneck as the data has to get in and out from the GPUs, across the PCI interface, and into the general purpose CPU memory."

Handing that communication task over to a DPU lets you "disaggregate those GPUs," says Hayes, making them into separate modules which can be dealt with at arm's length. "It can reduce the reliance on the GPU-PCI interface, and gives you the ability to mix and match whatever number of GPUs you want, and even thin-slice them across multiple CPUs." This is much more efficient, and more affordable in a multi-user environment, than dedicating sets of GPUs to specific x86 processors, he says.

A final use case for DPUs is security. They can be given the ability to speed up encryption and decryption, says Hayes, and network providers welcome this. "We want to ensure that the fabric that we have is secure," says Voruganti.

Easier bare metal for service providers and enterprises
Equinix is keen to use DPUs, and it has a pretty solid application for them: Metal, the bare metal compute-on-demand service it implemented using technology from its recent Packet acquisition. In Metal, Equinix offers its customers access to physical hardware in its facilities, but it wants to offer them flexibility. With DPUs, it could potentially allow the same hardware to perform radically different tasks, without physical rewiring.

"What I like about Fungible's solution is the ability to use the DPU in different form factors in different solutions," says Voruganti. "I think in a software-defined composable model, there will be new software to configure hardware, for instance as AI servers, or storage controller heads, or other devices.

"Instead of configuring different servers with different cards and having many different SKUs of servers, I think it will make our life a lot easier if we can use software to basically compose the servers based on the user requirements."

That may sound like a fairly specialized application, but there are many enterprises with a similar need to bare-metal service providers like Equinix. There's a big movement right now under the banner of "cloud repatriation," where disillusioned early cloud customers have found they have little control of their costs when they put everything into the cloud. So they are moving resources back into colos or their own data center spaces.

But they have a problem, says Hayes. "You've moved away from the uncontrolled costs of all-in cloud, but you still want it to look like what you've been used to in the cloud." These new enterprise implementations are "hybrid," but they want flexibility. "A lot of these who started in the cloud haven't necessarily got the networking infrastructure and IT talent of a company that started with a private network," says Hayes. DPU-based systems, he says, "make it easy for them to build, operate, deploy these types of networks."

Standards needed
But it's still early days, and Voruganti would like the vendors to sort out one or two things: "We're still in the initial stages of this, so the public cloud vendors have different flavors of quote-unquote SmartNICs," he says. "One of the things that operators find challenging is we would like some standardization of the industry, so that there is some ability for the operator to switch between vendors for supply chain reasons, and to have a multi-vendor strategy."

Right now, however, with DPU and SmartNIC vendors offering different architectures, "it is an apples-to-oranges comparison among SmartNIC vendors." With some definitions in place, the industry could have an ecosystem, and DPUs could even become a more-or-less standard item.

Power hungry DPUs?
He's got another beef: "We're also concerned about power consumption. While vendors like Fungible work to keep within a power envelope, we believe that the overall hardware design has to be much more seamlessly integrated with the data center design."

Racks loaded with SmartNICs are "increasing the power envelope," he says. "We might historically have 7.5kW per rack, in some cases up to 15 kilowatts. But we're finding with the new compute and storage applications the demand for power is now going between 30 to 40kW per rack."

It's no good just adding another power-hungry chip type into a data center designed to keep a previous generation of hardware cool: "I think the cooling strategies that are being used by these hardware vendors have to be more seamlessly integrated to get a better cooling solution."

Equinix is aiming to bring special processing units under some control: "We're looking at the Open19 standards, and we're starting to engage with different vendors and industry to see whether we can standardize so that it's easy to come up with cooling solutions."

Standards - or performance?
Hayes takes those points, but he isn't that keen on commoditizing his special product, and says you'll need specialist hardware to avoid that overheating: "It's all about software. In our view, long term, the winner in this market will be the one that can build up all those services in the most efficient infrastructure possible. The more efficient your infrastructure is, the lower the power, the more users you can get per CPU and per bit of flash memory, so the more margin dollars you're gonna make."

Fung, the analyst, can see the difficulties of standardization: "It would be nice if there can be multi-vendor solutions. But I don't really see that happening, as each vendor has its own kind of solution that's different."

But he believes a more standardized ecosystem will have to emerge, if DPUs are to reach more customers: "I'm forecasting that about a third of the DPU market will be in smaller providers and private data centers. There must be software development kits that enable these smaller companies to bring products to market, because they don't have thousands of engineers like AWS or Microsoft."



Low Earth Orbiting Satellites (LEO) will be the Future of Delivering a Seamless 5G Experience

Schneider Electric | Advertorial

If you listen to what major telco service providers are saying, review their coverage maps, and check the connection status on your phone, it seems that 5G has hit the ground running and almost everyone in developed countries already has it. But what about sparsely populated areas that are difficult to access? These areas do not have sufficient Return on Investment (ROI) for telcos to build out terrestrial (land-based) coverage.

Have we reached the point in evolution where futuristic science fiction in space is a reality? Remember the Star Trek TV show opening, "Space, the final frontier"? Well it seems we are getting there, and pretty quickly. So much so that I am forced to make a distinction between infrastructure based in space and infrastructure based on the ground (terrestrial). Yes, I just used the word terrestrial.

Signals are indeed beaming down from space to support our 'terrestrial' 5G infrastructure on Earth. The end result is that 5G networks will leverage LEO (Low Earth Orbiting Satellites) in their architecture, which greatly simplifies 5G deployments. With the combined space and terrestrial infrastructure, a seamless 5G experience can be delivered across the entire globe. LEO satellite constellations will supplement terrestrial 5G infrastructure to increase network coverage and provide a backup in the event of natural disasters like earthquakes, floods, and hurricanes. Moreover, they will assist with the delivery of enhanced mobile broadband and next-generation IoT devices by providing higher data rates and lower latency across a constellation of satellites spanning the sky. To help make this happen, Schneider Electric is providing physical infrastructure solutions for MEC data centers and ground stations with our EcoStruxure Micro Data Centers and EcoStruxure Modular Data Centers.

How will this 5G architecture work?
Let's examine how this 5G architecture using LEO will be laid out. Many people are familiar with GEO satellites – these are geostationary and orbit 22,300 miles (35,800 kilometers) directly above the equator. They travel in the same direction as the rotation of the Earth. This allows ground-based antennas the ability to point directly at the satellite in a fixed position.

In contrast, LEO satellites are miniaturized, orbiting versions that operate between 500 and 2,000 kilometers above Earth's surface. Latency is significantly reduced as the satellite, due to its low orbit, is better positioned to quickly receive and transmit data. The low orbit also creates a smaller coverage area, so LEO satellites continuously hand off communication signals and traffic across a constellation of satellites and ground stations. This ensures seamless, wide-scale coverage over a pre-defined geographical area.
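To put rough numbers on that latency difference, the short calculation below counts only straight up-and-down, speed-of-light propagation, ignoring processing, coding and routing delays; 550 km is used as an illustrative altitude within the 500 to 2,000 km LEO range mentioned above.

    # Rough propagation-delay comparison; real links add processing, coding and routing delays.
    C_KM_S = 299_792        # speed of light in km/s
    ALTITUDES_KM = {"GEO": 35_786, "LEO (550 km, illustrative)": 550}

    for name, h in ALTITUDES_KM.items():
        one_way_ms = h / C_KM_S * 1000
        round_trip_ms = 2 * one_way_ms          # ground -> satellite -> ground
        print(f"{name}: ~{one_way_ms:.1f} ms one way, ~{round_trip_ms:.1f} ms up and back")

    # Prints roughly 119 ms / 239 ms for GEO versus about 1.8 ms / 3.7 ms for a 550 km LEO.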

In LEO-enabled 5G, ground stations collect and transmit data to a LEO constellation (uplink), and different ground stations receive the data (downlink) and pass it on to the intended receiver. These surface-based ground stations have two purposes: 1) provide real-time communication with satellites, and 2) serve as command and control centers for the satellite constellation. The LEO constellations require significant network management and benefit from advances in analytics and artificial-intelligence algorithms to reduce response times and operating costs.

LEO satellites do not replace 5G clusters for URLL (ultra reliable low latency) applications used in stadiums, seaports, airports, manufacturing sites, etc. 5G RAN (radio access network) and MEC (mobile edge cloud) data centers are still required. LEO satellite constellations and ground stations take the place of fiber optic networks and the hops, transfer stations, and the control and routing in metro data centers.

A Starlink satellite constellation to provide broadband coverage to remote areas
For years, satellite communication has remained stand-alone technology, independent of mobile networking. But that is all changing. The next generation of satellites are built to support 5G networks to manage connectivity to cars, vessels, airplanes, and other IoT devices in remote and rural areas. But one challenge is that a significant number of areas lack sufficient ROI for telcos to build out terrestrial or land-based coverage.

About 3.4 billion people worldwide lack internet, according to the GSMA. For many, broadband is unreliable or inaccessible where they live. SpaceX is looking to tackle that problem. In his keynote at Mobile World Congress Barcelona last month, SpaceX CEO Elon Musk shared that the aerospace company is pivoting its resources to create the Starlink satellite constellation, which will provide broadband coverage focusing on remote parts of the globe.

Musk told attendees to "think of Starlink as filling in the gaps of 5G and fiber" for the hard-to-reach 3-5% of world population. The satellite constellation can be helpful to Communications Service Providers (CSP) trying to acquire 5G licenses in countries where they must provide a certain amount of rural coverage.

"It's very difficult to make the economic case for rural coverage," said Musk. Starlink can either provide rural network coverage or, for CSPs with existing rural towers, provide backhaul.

Starlink is not alone. Some of the world's best entrepreneurial technology companies are also in the game. Jeff Bezos and Amazon are planning to launch and operate a constellation called the Kuiper System. It will deliver high-throughput, low-latency broadband service to millions of underserved customers, airplanes, boats, and other vessels.

Expanding cellular 5G networks to air, sea, and remote areas
The first implementation of LEO-enabled 5G will most certainly play a key part in extending cellular 5G networks to air, sea, and other remote areas not covered by terrestrial networks. They could offer a seamless extension of 5G services from the city to airplanes, cruise liners, and other vehicles in remote locations. IoT sensors and M2M connections on farms and remote worksites like mines can also capitalize on the wide coverage areas offered by LEO-enabled 5G. In addition, in the event of a natural or man-made disaster where terrestrial 5G infrastructure is damaged, satellite networks can take over and keep the network going, especially critical and life-saving communication services.

To make this space age transformation possible, Schneider Electric is providing physical infrastructure solutions for MEC data centers and ground stations with EcoStruxure Micro Data Centers and EcoStruxure Modular Data Centers. Feel free to comment and let me know: are you ready for Space, the final frontier?


Data centers with dishes

Hyperscalers are installing ground stations at data centers, as the cloud drives a merger between Earth and space infrastructure, and a move to a more cloud-like, virtualized, as-a-service way of thinking

With the rise of hyperscale-space offerings such as Microsoft's Azure Orbital and Amazon's AWS Ground Station, satellite operators and customers increasingly have the option to connect their satellites directly to the cloud. It might be odd to see companies historically focused on servers sprouting antennas, but the goal is the same: to collect, store, and process as much data as possible.

"When you see an Amazon or an Azure investing in this, what they're really focusing on are the terabytes of data that get periodically dropped from the satellites down to Earth as they pass over a ground station," says Robert Bell, executive director, World Teleport Association.

The same trends affecting data centers and mobile operators are now coming for ground stations, leading to changes in technology and business models, and growing levels of cooperation and cohabitation with data centers. Where once ground stations were entirely separate and largely siloed pieces of infrastructure, they are now becoming increasingly tightly connected to and colocated at data centers as the amount of satellite data increases. At the same time, the industry is undergoing its own transformation towards virtualized infrastructure offered via an as-a-service model.


Dan Swinhoe, News Editor
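Some of the economics discussed below follow from simple orbital geometry: a LEO satellite is only visible from any single dish for a few minutes per pass, which is part of why shared, as-a-service antennas appeal. The estimate below is a back-of-the-envelope sketch (circular orbit, horizon-to-horizon visibility, no minimum-elevation mask, 550 km chosen as an illustrative altitude), not a figure from the article.

    # Back-of-the-envelope LEO pass time; simplifications noted above.
    import math

    MU = 398_600.4418        # Earth's gravitational parameter, km^3/s^2
    R_EARTH = 6_371.0        # mean Earth radius, km
    ALT = 550.0              # illustrative LEO altitude, km

    a = R_EARTH + ALT                               # orbit radius for a circular orbit
    period_min = 2 * math.pi * math.sqrt(a**3 / MU) / 60
    half_angle = math.acos(R_EARTH / a)             # Earth-central angle to the horizon
    max_pass_min = (2 * half_angle / (2 * math.pi)) * period_min

    print(f"Orbital period: ~{period_min:.0f} minutes")                     # ~96 minutes
    print(f"Best-case overhead pass: ~{max_pass_min:.0f} minutes visible")  # ~12 minutes

With only a dozen minutes of contact on a perfect pass, and far less on most passes, a dish spends most of its day idle unless its time is shared.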

Data centers with dishes
Ground stations are largely "data centers with dishes," according to WTA's Bell. Beyond housing large amounts of compute, operating ground stations can be a capital intensive business; facilities need large footprints, good connections, and a suitable location.

Like data centers, there are different business models. Governments and telcos may have the resources to operate their own ground stations 'on-premise;' others will colocate their antennas alongside other companies' infrastructure at a third party facility.


These "teleports" are analogous to Internet Exchanges or carrier hotels, hosting ground stations and providing connections to fiber networks. Increasingly, companies are sharing time on antennas via self-service portals in a model adopted from the world of cloud computing (aka Ground Station as a Service, or GSaaS).

Beyond government, TV and broadcast has traditionally been the dominant customer of satellites. Now, though broadcast TV is losing market share to Internet-based solutions, the space sector and ground station market are seeing a boom period. "The growth in the ground station segment – though certainly not to the size of the cloud services industry – has been driven by the amazing growth in the overall space segment in the last ten years," says John Williams, vice president, Real-Time Earth at Viasat.

In the old days, satellites could cost $100 million, so a few million more on a dedicated antenna network was only a fraction of the total budget. Today, smallsats can be built for less than $100,000, and put into orbit cheaply. More satellites can be built, and the ground station can become one of the largest costs of a project.

The growth, says Williams, is not tied to any one niche. Segments such as remote sensing, global broadband, environmental monitoring, SAR imagery, weather tracking, and IoT are all growing and building out new satellite capacity. The ground segment has had to evolve and grow to support these new economics; both by building more ground stations, and also sharing antenna usage across many customers via the GSaaS model to bring down 'the cost of ground.' "Smaller constellations and start-ups do not waste precious funds and resources on building out their own network and are fully bought into GSaaS from day one," says Williams.

Virtualizing the ground station
While the customer base around satellites is becoming increasingly cloud-first and hardware agnostic, those same trends are also changing how ground station operators do business. "Everything is now moving into virtualization at an incredibly fast pace," says Bell. "There was a CEO with a great line; if you're having 10 services right now on

an antenna, in the very near future you're gonna have to deal with putting 10,000 services on an antenna.” “There's a very small number of innovators in this space and they're all working incredibly hard to try to produce antennas that can cost-effectively connect to all these different services. It's kind of like wave division multiplexing, we're gonna just pack more and more service onto the antennas we've got, because the technology is evolving for us to do that.” At the same time, as telecoms hardware heads towards more commodity ‘plug and play’ hardware that relies on virtualization, the same trend is occurring in the ground station segment. “It's going to look massively different in five years; every system except the actual physical antenna and the amplifier is going to be virtualized, it's all gonna be running on commodity hardware,” says Bell. Sergy Mummert, SVP of Global Cloud & Strategic Partnerships at SES Networks, said within five to 10 years we should hopefully be seeing multi-band, multi-orbit antenna systems that allow for even more infrastructure sharing than is currently possible; a single antenna might be able to send and receive information to and from a large broadcast satellite in GEO, a communications satellite in LEO, and a small Earth Observation cubesat. “That enables a shared infrastructure; virtualized modems and platforms. Then the most popular manufacturers are becoming much more software based, and they're differentiating more on the capabilities of their software instead of the features on their box," he said. The switch is already happening: new business models allow for more shared infrastructure and dish time can be resold under an ‘as a Service’ offering. It’s no longer necessary to place your own dish at a teleport (though many are also developing their own dishes as well). “New entrants have come into the market to look at how to advance ground technology, in both virtualization technology and new business models that use shared infrastructure to bring the cloud model to teleports,” says Mummert. “That’s a game-changer because it further reduces

"Ten years ago, there were two primary providers for GSaaS in the entire world. Now there are at least ten; each one brings something different to the table, but the overall effect is to bring down the cost"

the barrier for new entrants and new applications.

"The original teleport providers would just offer you the land and power, and you invest in your own equipment. But now they're starting to see the benefits of providing their own ground station model.

"The more you can share ground station infrastructure makes it a much better investment for third parties who want to be in that business or even for the operator."

Williams notes that while enterprises are adopting a 'hybrid/multi-cloud' approach to their IT infrastructure, satellite firms are also adopting a multi-provider approach to the ground segment. "With more providers, they get increased geographic coverage to reduce latency and an overall more resilient and reliable ground network. Ten years ago, there were two primary providers for GSaaS in the entire world. Now there are at least ten; each one brings something different to the table, but the overall effect on the free market is to bring down the total cost of doing business."

The cloud providers come for ground stations
The new wave of satellites creating huge amounts of data means the ground segment has grown in importance as the gateway to get space-created information back onto terra firma. Incumbents are adopting as-a-service offerings and GSaaS-native players such as Leaf.Space are appearing, but the market has also drawn interest from the cloud hyperscalers.

AWS has launched ground station services in Oregon; Ohio; Bahrain; Seoul, South Korea; Stockholm, Sweden; Sydney, Australia; Ireland; and South Africa's Cape Town. It is working with the likes of Lockheed Martin, Telespazio, and others to offer time on its dishes to operators. Microsoft has installed its own ground station infrastructure at a number of its data centers, and is also hosting antennas for GEO and MEO satellite firm SES at Microsoft locations. Incumbents such as KSAT and Viasat are working with the company to incorporate their offerings into Azure Orbital. Google is hosting ground station infrastructure for SpaceX's Starlink constellation, and SpaceX is also integrating its ground stations with Azure's networking capabilities.

"The cloud providers are focused on trying to transform infrastructure; and when they saw all the advancements on the space side, they said your ground infrastructure hasn't advanced at all, and so this is where they wanted to invest," explains Mummert.



While communications satellites – including Amazon's own Kuiper Project – are obviously part of the picture, the original drive to connect the cloud to space was largely driven by the rise of Earth Observation players and their cloud-first mentalities.

"Cloud service providers offer two primary services, storage and compute. Satellite operators are voracious customers for both of those services," says Viasat's Williams. "Once downlinked, satellite imagery must be demodulated and processed into raw imagery and stored. There is a very significant market for geospatial intelligence analysis; taking imagery and transforming it into meaningful, actionable information."

Mummert notes that adding new capacity through the hyperscalers offers lower latency and a way to process and analyze data in a far quicker and more direct way than previously possible. "For the Earth observation players, most of their ground stations are in the poles, and so their latency in getting data to the end user was 30 to 45 minutes and some even longer. Ground stations away from the poles quickly improves things for customers who want much more real-time information."

"Of course then, if it's provided by a cloud provider, then the processing of that data is improving and more and more analytics partners on their clouds bring special ways to look at the data."

Mummert said that the shared antenna infrastructure is mostly for Earth Observation satellites at the moment. Communications satellites don't usually share dishes in the same way currently, but will follow suit in the future. "The issue right now is most of the communications broadband operators take so much of the antenna time so it's hard to even share it," he says. "Over time the plan is to build enough of them that they can be shared."

Deploying ground stations at data centers
While Microsoft's Azure Orbital and AWS' Ground Station service are providing an 'as a Service' model to a number of operators to communicate with their satellites, Microsoft and Google are also providing space for third parties to host their own infrastructure at the hyperscalers' data centers.

"There is a very significant market for geospatial intelligence analysis; taking imagery and transforming it into meaningful, actionable information" 12 DCD Supplement • datacenterdynamics.com

Mummert said SES was looking to deploy ground infrastructure for its new O3b mPOWER communications satellite system network around the time Microsoft was exploring space as a business opportunity. "We were in the middle of our planning cycle for mPower, and Microsoft was looking to enter the space business. It was the right timing."

At first, SES was limited to placing infrastructure in places where Microsoft was already developing data centers – away from interference-heavy metros, for example. In future, the company can plan further ahead based on where Microsoft's future development plans align with SES customers' needs.

"We have telco or MNO customers that also want their own gateway when they buy capacity. Turns out some of those telcos are already partnered with a cloud provider, so there's some alignment there that we can look at.

"There'll be more data gateways, but they don't have to be the size of what our initial gateways are."

While the hyperscalers have been keen to embrace the ground station and space sectors, so far there's been little movement from other providers to follow suit, but that may change. "I would be stunned if a whole bunch of other cloud or data center providers didn't [look to colocate more ground station infrastructure at facilities]," says WTA's Bell. "When somebody comes and says they will pay to put a dish on your roof and they'll manage all the mystery for you and you'll get it as a port on your router, why would you ever say no?"

Earlier this year, Equinix said that the company is often approached by satellite firms about installing ground station infrastructure. "It's primarily an issue of managing space, weight, and interference. A clear sky view is always necessary," said Jim Poole, vice president of business development at Equinix.

Mummert says colocation of ground stations and data centers is a driver in emerging markets where data centers "have more opportunity," but avoiding retrofits is key for making the economics more attractive. "What we've learned from our work with Microsoft is your best bet is to develop the design for your ground station, in with the data center, before the concrete is poured.

"Any type of update to a data center is very expensive, but if you're part of the original, adding the ground station is a rounding error."



Customers and competition
The arrival of cloud players is changing the ground station segment's thinking around potential partners and competition, and also customer relationships. "The industry is working very carefully with customers to make decisions about what data needs to be on-premise, and what data they prefer to have on the cloud," notes WTA's Bell.

Mummert says the addition of SES infrastructure at Microsoft locations offers more capabilities to customers, enabling them to discuss how they approach data and computing. It also allows SES to send customer data directly through the Azure backbone, which avoids a connectivity step such as a VPN or purchasing an additional port.

"Suddenly, you're getting into how the cloud actually works for your end customer. That's a whole different conversation than going 'how do I get you from point A to point B?'

"These are whole new conversations

with our customers that, in the past, were almost the customer's problem and we wouldn't step in to see how to optimize it.” Mummert acknowledges that SES customers might not be Azure customers; ground station providers have to be as ‘cloud neutral’ as possible, and so the company is a direct connect partner with AWS and is in the process of doing the same with Google. “We understand that our customers drive the cloud decisions, so the way that we talked about optimizing the network has to be the same.” On the subject of whether these companies become competition when offering similar services, SES’ Mummert says that these are customer-driven decisions, and many customers are seeking to use multiple orbits and multiple systems in a complementary way to provide greater resiliency and different services for different use cases. Viasat’s Williams notes that while the company is focused on developing cloud-

based solutions he says are “ideal” for antennas colocated at data centers, there will always be a demand for ground stations elsewhere. “Antennas at data centers is not a panacea as most data centers are located in the mid latitudes,” he says. “While accessibility to a data center is great, geography is key when it comes to the ground segment. Data centers are not typically sited with customer downlink considerations in mind and they are not often near the 'strategic' geographic locations. “Locating data centers is a compromise adopted by the cloud providers to get data into the cloud quickly, which is where the cloud providers make their money. But it is not an optimal decision for the satellite customers.” WTA’s Bell notes as well that teleport and ground station operators can be a neutral connection partner; the operators can deal with the connectivity and relationships with the cloud providers, or in some cases even be the cloud provider. “SES has done a number of deals with broadcast customers in developing nations, where it's the cloud service provider,” he says. “Of course it's not really; it's reselling its Direct Connect relationships. But the broadcasters trust SES. And that's been a driver for a lot of business.”

LEO satellites, in-orbit compute, and the rise of the ground station Edge
Currently, there are around 4,000 operational satellites in orbit. By the end of the decade, that number could be closer to 100,000, meaning the ground station industry is going to have to adapt fast to keep up.

Mummert says the 'densification' of networks – primarily driven by the LEO satellite constellations – will drive the need for more ground stations. "Some of them don't have inter-satellite links, and so most of them need many more ground stations to land their traffic."

As well as deploying ground stations at Google facilities, SpaceX is building a ground station on the Isle of Man off of the UK's west coast and likely elsewhere. UK firm Arqiva – which operates teleports in Crawley, the Isle of Man, Bedford, Martlesham, and Morn Hill – won a contract in early 2021 to provide ground station gateways in the UK for Starlink. OneWeb hasn't revealed how many ground stations it has in total or their locations, but does have facilities across

Kazakhstan, Norway, and Portugal alongside US stations in Alaska, Connecticut, and Florida. Reports indicate its network of 648 satellites could require up to 44 ground stations in total, with around 22 thought to be in development last year at the time of its bankruptcy. Because of that proliferation of ground stations, Mummert predicts that ‘what is a ground station' will have to be redefined to have a much smaller, thinner interface in many cases; a more Edge-like facility to service the core locations, possibly even located at Edge data centers. “It will change in terms of a standard ground station where we usually see three to five-meter antenna systems, to smaller ones, but many of them,” he says. “I see a mixture of alignment with data centers and thinner Edge nodes; you need more spatial diversity for frequency reuse and I think that's going to benefit from a mixture of both data center and Edge deployments. At the same time, a number of companies are looking at in-orbit computing; turning

satellites into computing nodes to create a new kind of Edge node, sometimes referred to as a 'data center in space.'

While the Teleport Association's Bell says he's yet to hear a convincing business case for in-orbit processing, he says intersatellite communications – for example from a satellite on the other side of the planet to a space station – offer the chance to send data without the need for a 'double hop' of sending signals to Earth, to another ground station via terrestrial fiber, and back up into orbit.

SES' Mummert also sees opportunity for satellites as a new type of computing Edge for content providers. "It's definitely the objective of most of the hyperscalers; they all want their fabric, if you will, in every major Edge," he says. "Even for our GEO architecture, we're working on cloud providers to use them as CDNs. Satellite has been a long-time broadcast platform, but these organizations are realizing that the original properties of GEO make a lot of sense to them."



Connecting the low-latency Edge

Shifting data and processing to the Edge will impact how we design and build networks

In a world drowning in data, managing network traffic is already a complex task. But the rise of Edge computing will complicate matters all the more, requiring bigger and smarter networks than ever before.

"There's a lot of content that's being generated outside the walls of the data center," Commscope's hyperscale and cloud solutions architect Alastair Waite explains. "The Internet of Things, smart cities, etc. - that's all generating data in a distributed fashion."

While there has always been local data creation, the sheer quantity being produced at the network edge is a new - and rapidly growing - phenomenon. "The network is having to adapt to be able to manage these huge pools of data that are now being pushed around the network," Waite says. "Before, we saw more centralized data being generated centrally and in large data centers."

This data is not just being created, but requires a back-and-forth response from the Internet. Workloads like artificial intelligence (AI) or augmented reality (AR) will send data off into the network, but also expect a response - and fast.

That's a problem for two reasons. Current bandwidth constraints mean that it is often not technically or financially feasible to send all that data back to a central facility and, even if you do, it may mean too much latency to be useful. Here the Edge is presented as a way of killing two birds with one stone. Not only can it process data for a low-latency response, but it can also filter and compress the data that needs to be sent back to the larger data center.

"With AR and virtual reality (VR), latency can make a person sick," Tilly Gilbert, a senior consultant at telecoms advisory STL Partners, says. "So that application really is reliant on that low latency under 50 milliseconds or so. And then there are those really high bandwidth use cases where processing at the Edge can make them cheaper or more efficient by filtering information out rather than streaming all raw data to the centralized cloud."
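A rough distance budget shows why a figure like 50 ms pushes compute towards the Edge. The sketch below counts only light-speed propagation in fiber (roughly two thirds of c); the round-trip distances are illustrative assumptions, and real paths add radio access, switching, queuing and processing delays on top.

    # Illustrative propagation-only latency budget against a ~50 ms AR/VR target.
    FIBER_KM_PER_MS = 200          # light travels ~200 km per millisecond in fiber
    BUDGET_MS = 50                 # latency ceiling quoted for AR/VR-style workloads

    for label, round_trip_km in [("metro Edge site", 100),
                                 ("regional data center", 1_000),
                                 ("distant cloud region", 8_000)]:
        prop_ms = round_trip_km / FIBER_KM_PER_MS
        print(f"{label}: ~{prop_ms:.1f} ms of the {BUDGET_MS} ms budget spent just on fiber")

The physics alone leaves little headroom once the path stretches to a far-off region, before any of the other delays are counted.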


Sebastian Moss, Deputy Editor

This is still mostly a dream of tomorrow. "Today's networks are not reasonably accessible," Yuval Bachar, the former principal hardware architect of the Microsoft Azure platform, says. "If you try to send data from point A to point B, which are not on the same carrier, you're going to be exchanged somewhere that can be two miles away, but also could be 1,000 miles away. Your latency is completely unpredictable."

He adds: "So the current network does not give us a predictable latency that we need for the future applications of the Edge. And the current network also cannot handle the very, very large datasets which are being generated at the endpoints. So there will [...]"


[...] very great experience of extremely fast load. But in some domains, like areas of Europe and Asia Pacific, the experience was not sufficient."

The problem is that every time the homepage is loaded, it is unique, requiring specific processing for every person, every time they visit the page. "It requires touching the data center constantly," he says.

"We decided to actually build an Edge platform. We built a micro data center that we're actually placing in strategic areas, enabling faster response to what the data center can actually provide to the end user. And by that enable a low latency environment, even though the data center is much further away."

He continues: "That's created a dramatic improvement in the experience that the end users had, specifically in Europe." It also, he claimed, allowed the company to roll out richer features they would have otherwise not felt comfortable deploying. "But this is an early-stage development."

Bachar is convinced that this Edge case is not just an edge case, but rather a hint of what is to come. That is, he admits, if people can make the numbers work. "On paper, we understand what needs to be done," he says. "But it's all tied to a business model - if we don't have a way to monetize it, then the big players will not jump in there, and there are a few very large companies in the world that can actually make this investment."

If and when they do make that investment, "whoever is going to take the first step is going to be dominating this market, just like what happened with the cloud," he predicts.

"On paper, we understand what needs to be done. But it's all tied to a business model - if we don't have a way to monetize it, then the big players will not jump in there" Here, again, the network demands will be crucial in defining the business model. The vast scale of the network overhaul means that cloud providers or other data center companies will not be able to go it alone, and will likely have to partner with network operators, argues Caroline Puygrenier, director of strategy and business development, connectivity, at Digital Realty's Interxion. "With 5G, Edge, new network architectures, satellite constellations, and so on, we need there to be a greater collaboration between the network operators and the actual cloud service providers," she says. "We all benefit from that implementation of new technology, it's not just one segment of the verticals, that's going to develop or pay for the implementation." Her company appears to be hoping to cash in on this potential collaboration, investing in AtlasEdge, and installing a former Digital Realty exec as CEO. AtlasEdge is a joint venture between DigitalBridge and telco conglomerate Liberty Global to turn thousands of sites at telco locations into Edge data centers. There are issues. "Some of the cloud providers are much more interested in getting access to telco networks so they

There are issues. "Some of the cloud providers are much more interested in getting access to telco networks so they can get access to telco customers, more so than partnering long term," Mark Thiele, CEO of data center procurement company Edgevana, says. "Many of the initial solutions have huge gaps in opportunity - from a cloud provider standpoint, they're too expensive, and they are not autonomous from a centralized network.

"But people are working on it."

When they do solve this challenge, it will have a profound impact on the network of tomorrow, bringing high bandwidth to Edge locations, and offloading processing to those sites. That doesn't mean that's it for the centralized data center, though.

"As this data is being created at the Edge locations, a lot of it is going to have to come back to somewhere," Commscope's Waite says. "So it's going to be extremely important to make sure that your cloud data center, whether that's in a multi-tenant data center or within your own premises, has the correct level of bandwidth being provisioned."

In conversations with cloud and hyperscale providers, it's "all about 400 gigabits and beyond," he says. "They're asking 'what's next?' because they want to be able to deliver that seamless experience that's really driving the bandwidth at the moment."
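To connect the filtering argument with the provisioning question, the quick calculation below estimates how long it takes to move one day's worth of Edge-generated data back to a core site at different link speeds. The 4.5 TB figure borrows the per-aircraft example quoted earlier in this supplement; the link speeds are illustrative assumptions.

    # Illustrative transfer times for 4.5 TB/day of Edge-generated data.
    DATA_TB = 4.5
    DATA_BITS = DATA_TB * 1e12 * 8            # terabytes -> bits (decimal units)

    for label, gbps in [("1 Gbps uplink", 1), ("10 Gbps uplink", 10),
                        ("100 Gbps uplink", 100), ("400 Gbps uplink", 400)]:
        minutes = DATA_BITS / (gbps * 1e9) / 60
        print(f"{label}: ~{minutes:.1f} minutes to move one day's data")

At 1 Gbps the backhaul never catches up with a handful of such sources, which is exactly why Edge filtering and 400 Gig core links end up in the same conversation.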



IT professionals manage with […] at the edge using EcoStruxure™ IT.

Gain the flexibility you need to optimize uptime at the edge.
• Gain visibility and actionable insights into the health of your IT sites to assure continuity of your operations.
• Instead of reacting to IT issues, take advantage of analytics and data-driven recommendations for proactive management.
• Keep costs under control by managing your IT sites more efficiently. Choose to outsource to Schneider Electric's experts, address issues yourself, or leverage a mix of both approaches.

ecostruxureit.com

©2021 Schneider Electric. All Rights Reserved. Schneider Electric | Life Is On and EcoStruxure are trademarks and the property of Schneider Electric SE, its subsidiaries, and affiliated companies. 998-21556505_GMA-US

EcoStruxure IT Expert

