The Enterprise Edge Supplement


How Edge adapts to every industry

Shipping to the Edge

> The marine industry finds ways to deliver services over satellite links

Gamers get on the cloud

> Riot Games uses cloud services to get lower latency

Towards dark factories

> Manufacturing pioneered the use of Edge. But can it go all the way?


The infinite variety of Edge

Edge infrastructure is slowly moving from a buzzword to the norm within many different sectors. With the low latency it provides, companies are finding ways to digitize, automate, and improve their operations by bringing compute closer to where their data is generated.

But while progress is being made, the development and utility of the Edge is limited by a variety of factors, including its financial viability and task complexity.

The concept of the ‘Edge’ is also blurred. There is no one-size-fits-all approach, and the version of the Edge for a specific application can look entirely different.

This supplement picks three very different sectors, each in its own way a pioneer exploring a distinct vision of the Edge.

The Shipping Edge

It is easy to forget that ships need computational power. But with sustainability regulations and restrictions increasing, ships need more and more sensors to gather information, and they need a way to process this information when connectivity is tenuous at best.


The shipping industry is also looking towards reducing the amount of manpower needed on board, and with this comes an increased reliance on technology to provide key insights into performance and safety for those crew who do remain.

While much of this data is sent to the cloud, some of it needs to be processed immediately and, for this, ships need compute at the Edge - both on board and on the shore.

So what would this look like for a completely unmanned ship? And could a more automated vessel, backed with on-board Edge resources, have avoided an incident like 2021's Suez blockage (p4)?


Manufacturing in the dark

The manufacturing industry is, in many ways, pioneering the use of Edge computing and IoT technologies.

The application of this varies depending on what the company manufactures - from Mars chocolate bars to electric vehicles. But what doesn’t change is the need for security and low latency.

As more companies explore the concept of dark factories, and robots take over the roles of people, this is becoming increasingly apparent. But what do the economics look like, and is it worth updating equipment to cut the operating costs of a human workforce (p13)?

Online Gaming

The online gaming industry is explosive, and as it gains popularity the expectations for a high-quality experience are only growing.

Riot Games needed to develop its network as games like League of Legends gained large numbers of players, all wanting the kind of low-latency experience that normally takes a business user in the direction of Edge computing. But the vision of Edge adopted by Riot came from the cloud.

Riot adopted AWS Outposts in an attempt to reduce latency for all of its users. We talked to Riot, and to AWS, about how this works, and how Riot has developed its reach over the years (p10).

This supplement could only cover a tiny sample of the possible sectors where Edge is making a distinctive contribution, and we are bound to be back on this subject regularly.

If you know of exciting Edge case studies, get in touch with us!

Contents

4. Shipping: At the Edge of the world - The digital infrastructure that will be necessary to fully digitize and automate the maritime industry
10. In gaming, cloud provides an Edge - How Riot Games moved its workloads to AWS, but augments it with the Edge
13. What Edge brings to manufacturing - Are dark factories really feasible?

Shipping: At the Edge of the world

The digital infrastructure that will be necessary to fully digitize and automate the maritime industry

If you wait at the London Gateway port, countless ships will pass you by, behemoths groaning under their weight as they cut through the water.

Once upon a time, those ships would have been little more than empty husks for transport, reliant on the wind and the people on board to keep the vessel charging forward. But as technology has developed, so has the complexity of the ships. While not yet ‘manless,’ the maritime industry is guided by the data it gathers on board and ashore, no longer relying on human judgment alone.

As we move towards a globally digitized fleet, those ships will need a complex system of digital infrastructure to keep them connected to the shore and to each other, and to process information on board.


Digital ships

Ships are now, if not mobile cities, certainly small mobile towns. The 2022 cruise ship ‘Wonder of the Seas’ can host almost 7,000 people, all of whom expect Internet connectivity throughout. But the number of people demanding connectivity does not even begin to compare with the number of sensors gathering data - and those sensors are demanding to be heard.

Data is constantly being collected on board the ship. Sensors monitor the engine, fuel consumption, ship speed, temperature, and external data like weather patterns and currents.

According to Marine Digital, the modern ship generates over 20 gigabytes of data every day (though this is, of course, wildly variable depending on the size and purpose of the ship). The important takeaway is that this is not a simple undertaking, and there is no one-size-fits-all approach.

For ship management company Thome, there is less IT on board than on shore. “We treat all ships as a small office,” said Say Toon Foo, vice president of IT at Thome Group. “On most of our ships, we have at least one server.”

As a management company, Thome doesn’t own the ships it works with. Instead of attempting to process the data onboard, Thome processes the majority ashore, communicating with the crews via a very small aperture terminal (VSAT).

VSATs connect the ships by fixing on a geostationary satellite, and can offer download rates from 256kbps to 90Mbps, and upload rates usually from 100bps to 512kbps. This pales in comparison to 5G’s theoretical 20Gbps download and 10Gbps upload, but there are no 5G masts mid-ocean.
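
To put those figures in perspective, here is a rough back-of-the-envelope calculation in Python, using the article's own numbers (Marine Digital's 20GB per ship per day, and the top of the quoted VSAT upload range):

```python
# Rough sanity check: how long would it take to push one day's worth of
# ship data ashore over a best-case VSAT uplink?
data_per_day_gb = 20          # Marine Digital: ~20GB generated per ship per day
uplink_bps = 512_000          # top of the quoted VSAT upload range (512kbps)

bits_to_send = data_per_day_gb * 8 * 10**9   # gigabytes -> bits
hours = bits_to_send / uplink_bps / 3600
print(f"{hours:.0f} hours to upload one day of data")  # ~87 hours
```

At roughly 87 hours per day's worth of data, the uplink can never keep up, which is why most of the data has to be processed, filtered, or summarized before it ever leaves the ship.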

“It's a good speed we have [with the VSAT], but not everything can run from the satellite, so we do need that server. But the VSAT means that if we do have a complication, we can share that with the staff on shore,” explained Toon Foo.

Good is, of course, relative. But, happily for Thome, the shipping management company doesn’t really need to process the data in real time.

Instead, the company relies mostly on daily or hourly data updates transmitted via the not entirely reliable VSAT, which are processed in its on-site server room, or in the majority of cases, sent to the cloud.

As an approach, sending most of the data to the shore to be processed seems to be the norm.

Columbia Shipmanagement uses a unique Performance and Optimization Control Room (POCR, also known as the performance center) as part of its offering. The POCR enables clients to optimize navigational, operational, and commercial performance by analyzing data collected both on board the ship and ashore.

“The ships are directly in touch with the Performance Center,” said Pankaj Sharma, Columbia Group’s director of digital performance optimization. “We proactively check the routes before departure, looking for efficiency, safety, and security. Then, as the vessel is moving, the system is monitoring and creates alerts which our team is reacting to 24/7.”

With over 300 ships to manage, much of this is automated into a traffic light system (green means good, and alerts only light up when the color changes).
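
As a minimal sketch of how such a traffic-light system might work (the metric names and thresholds below are invented for illustration, not Columbia's actual values):

```python
# Hypothetical per-metric thresholds; a real system monitors far more signals.
THRESHOLDS = {                          # metric: (amber_limit, red_limit)
    "fuel_consumption_t_per_day": (42.0, 48.0),
    "route_deviation_nm": (5.0, 15.0),
}
last_color = {}

def color_of(metric, value):
    amber, red = THRESHOLDS[metric]
    return "green" if value < amber else ("amber" if value < red else "red")

def update(metric, value):
    color = color_of(metric, value)
    if last_color.get(metric, "green") != color:   # alert only on a color change
        print(f"ALERT: {metric} is now {color} ({value})")
    last_color[metric] = color

update("route_deviation_nm", 3.0)   # stays green: no alert
update("route_deviation_nm", 8.2)   # green -> amber: alert fires
```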

Some of this is then processed on-site, but the vast majority is cloud-driven. “Right now we are on Azure, but we have also used AWS and we have a private instance where we have our own cloud hosting space,” added Sharma.

Edge on board

Having computational power on board the ship is entirely possible, but it has challenges. There is limited room on a ship, there are weight limitations, and IT engineers or specialists are rarely part of the crew.

Edge system vendor Scale Computing designed a server to get around these issues, one that has been used by Northern Marine shipping.

“Looking at Northern Marine, initially they worked with traditional 19-inch rack servers on board the ships - two HPE servers and a separate storage box,” said Johan Pellicaan, VP and managing director at Scale Computing.

“Just over a year ago, they started to use an Intel-based enterprise Edge compute system, the Scale Computing HE150. This is a nano-sized computer (under five square inches).”


Scale’s offerings are based around tightly integrated micro-computers. The HE150 and HE151 are based on Intel’s NUC (next unit of computing) barebone systems, running Scale’s HC3 cluster software.

They use significantly less power than a traditional 19-inch server, and take a tiny fraction of the space.

Traditional servers, said Pellicaan, “need about 12 cores and six to nine gigabytes of storage in a three-node cluster as an absolute minimum. In our case, we need a maximum of four gigs per server and less than a core.”

This means that the Scale software has a lower overhead: “In the same size memory, we can run many more virtual machines than others can,” claimed Pellicaan.

Edge is really defined by the kind of work done rather than the location, so it is fair to say the shipping industry is using the Edge - be it on board the ship, or on the shore.

Automation - could we have unmanned ships?

In many industries, the next step for digitization is automation. In the maritime sector, this raises the prospect of unmanned ships - but Columbia’s Sharma explained that this would be complex to deliver, given the latency imposed by ship-to-shore communications.

“When we talk about control rooms, which would actively have an intervention on vessel operations, then latency is very important,” he said.

“When you think of autonomous vehicles, the latency with 5G is good enough to do that. But with ships, the latency is much worse. We're talking about satellite communication. We're talking about a very slow Internet with lost connection and blind spots.”

The fact is that satellite connectivity is simply not fast enough to allow ships to take the step towards autonomous working and full automation.

“There is sufficient bandwidth for having data exchanged from sensors and from machinery, and eventually being sent to shore. But latency is a big issue and it's a barrier to moving into autonomous or semi-autonomous shipping.”

Much of this makes it seem like ships are at the end of the world, rather than at the Edge. But ships do not travel at dramatically fast speeds like other vehicles, so latency can be less of a problem than one might expect.

A relatively fast container ship might reach 20 knots (37km per hour), compared to an airplane which could reach 575mph (925km per hour), meaning that most of the time, hourly updates would be sufficient. But not always: there are plenty of incidents where fast responses are essential, and even then things can still go wrong.

For instance, in a highly reported incident in 2021, a container ship blocked the Suez Canal for six days. It’s worth exploring the incident to ask whether having more compute on board (even if it is only one server) might have helped avoid the problem. Could on-board IT have helped prevent the Suez Canal blockage?

In March 2021, the ‘Ever Given,’ a ship owned by Shoei Kisen Kaisha, leased to Evergreen Marine, and managed by Bernhard Schulte Shipmanagement, ran aground in the Suez Canal in Egypt, with its bow and stern wedged in opposite banks of the 200m-wide canal.

Blocking the major trade route prevented 369 ships from passing, representing an estimated $9.6 billion in trade per day. The crash was put down to strong winds (around 74km per hour) pushing the 400 meter (1,300ft) ship off course, and the Egyptian authorities speculated that technical or human errors may have played a role, although this was denied by the companies involved.

Weather is not something that is taken for granted in the maritime industry. “Weather-based data was the first machine learning project we did in the POCR,” said Sharma. While this research was not focused on incidents like the Suez Canal blockage, Columbia did explore the impact of wind on efficiency.

“Weather is a really important factor,” explained Sharma. “A badly planned weather voyage can increase the fuel consumption by 10 to 15 percent, while a well-planned voyage might save five percent.”

The company “did a project where we got high-frequency data from the vessel AIS position, and every 15 minutes we layered that with speed data, consumption data, and weather data. We then put this into a machine learning algorithm, and we got some exceptional results,” he said.

Instead of working at a resolution of 20 or 30 degrees, the company was able to operate at five degrees. “It became a heat map rather than a generic formula and we could then predict the speed loss very effectively,” he said.
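
The binning idea Sharma describes can be sketched in a few lines of Python. This is illustrative only, not Columbia's model; the sample values are invented, and the real system layers AIS position, speed, consumption, and weather data every 15 minutes:

```python
from collections import defaultdict

# Each tuple is one 15-minute observation: (relative wind angle in degrees,
# wind speed in knots, observed speed loss in knots). Values are made up.
samples = [(12, 25, 1.8), (14, 30, 2.2), (93, 20, 1.1), (178, 25, 0.3)]

BIN = 5   # degrees - versus the 20-30 degree resolution used previously

bins = defaultdict(list)
for angle, wind, loss in samples:
    bins[(angle // BIN) * BIN].append(loss)

# One slice of the "heat map": mean speed loss per five-degree wind-angle bin.
heatmap = {b: sum(v) / len(v) for b, v in bins.items()}
print(heatmap)   # {10: 2.0, 90: 1.1, 175: 0.3}
```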

Simulating ships

Evert Lataire, head of the Maritime Technology division at Ghent University in Belgium, conducted data analysis using ship tracking websites to find out what happened in the Suez Canal incident, putting much of it down to the ‘bank effect,’ a hydrodynamic phenomenon in shallow waters.

DCD reached out to Lataire to find out whether he thinks that having more compute on board could potentially prevent disasters like the Suez Canal blockage.

While Lataire’s research doesn’t require intensive compute power, real-time data analysis can have a big impact on control. When a ship is out at sea, data can be gathered on its position, but not on the forces acting on the ship.

“The surrounding water has an enormous impact on the ship and how it behaves. A ship sailing in shallow water will have a totally different turning circle compared to deep water, to a magnitude of five.

"So the diameter you need to make a

circle with your ship will be five times bigger in shallow water compared to deep water.”

This is where the bank effect hypothesis comes in for the Suez Canal incident. According to Lataire, the crew manning the ship would have been aware that something was going wrong, but by then it would have been too late.

“Once you are there, it’s lost. You have to not get into that situation in the first place,” said Lataire.

On-board Edge computing could be enough to alert the crew, and the ship management company, that an issue was going to arise, but it is not yet able to predict, nor prevent, the outcome.

Lataire’s research generates calculations that can then be used to create simulations - but this isn’t currently possible in real-time on the ship. Lataire believes that autonomous ships will come to fruition, but will be limited to small city boats, or to simple journeys like those taken by ferries, in the near future. In the distant future, this could expand further.

The ‘manless’ ship is still a work in progress, but the digitized and ‘smart’ ship is widely in practice. By combining on-board Edge computing, on-shore and on-premise computing, and the cloud, with VSAT connectivity over geostationary satellites, the ships themselves and those controlling them can make data-driven decisions.

Until we can find a solution to the latency problem for ships, automation will remain a pipe dream, and sailors will keep their jobs. But with technological advances, it is only a matter of time.


Modern DCIM’s Role in Helping to Ensure CIO Success

Patrick Donovan, Schneider Electric

It is widely reported [1], [2] that the role of a Chief Information Officer (CIO) is experiencing a sea change. IT is now at the center of business strategy as digital technologies power and sustain the global economy. The criticality of IT in every aspect of business has driven CIOs from only filling the tactical role of deploying, operating, and maintaining IT to also focusing on business strategy. CIOs increasingly have a leading role in driving business innovation, aligning IT projects with business goals, digitalizing business operations, and leading corporate organization change programs, for example. This role expansion has made their job more critical and complex.

What has not been as widely reported, however, is that the traditional CIO role of IT service delivery has become more critical and complex as well. After all, a CIO’s impact on business strategy and execution depends on continuous IT service delivery. The success of a CIO is ultimately rooted in a solid foundation of resilient, secure, and sustainable IT operations. But, in an environment of highly distributed hybrid IT, this becomes harder to do.

Modern data center infrastructure management (DCIM) software, optimized for distributed environments, plays an important role in maintaining this foundation for hybrid data center environments with distributed IT infrastructure. Schneider Electric White Paper 281, “How Modern DCIM Addresses CIO Management Challenges within Distributed, Hybrid IT Environments,” explains in some detail, using real-world examples, how DCIM can make the electrical and mechanical infrastructure systems powering and cooling your distributed and edge IT installations more resilient, physically and cyber secure, as well as more sustainable.

Traditional DCIM was fundamentally designed and used for device monitoring and IT space resource planning for larger, single data centers. But the days of managing a single enterprise data center are over. Business requirements are forcing CIOs to hybridize their data center and IT portfolio architecture by placing IT capacity in colocation facilities and building out capacity at the local edge - sometimes in a big way. In addition to managing and maintaining resilient and secure operations at all these sites, CIOs are now being asked to report on the sustainability of their IT operations. DCIM software tools are evolving so CIOs and their IT operations teams can do their jobs more effectively.

Modern DCIM offers have simplified procurement and deployment, making it easier to get started and use the tool across your distributed IT portfolio. A single log-in will provide a view of all your sites and assets in aggregate or individually from any location. Software and device firmware maintenance can be automated and done from afar. These newer offers not only make it easier to have remote visibility to power and cooling infrastructure to maintain availability, but they also address security and sustainability challenges.

How DCIM improves security

Data center environmental monitoring appliances can be used to not just detect and track temperature, humidity, fluid leaks, smoke, and vibration, but they also typically integrate with security cameras, door sensors, and access cards to provide physical security for remote IT installations. Monitored and controlled through DCIM software, these appliances help remote operations teams monitor and track human activity around critical IT as well as environmental conditions that could also threaten the resiliency of business operations. In the case of cyber security, modern DCIM solutions provide tools to help ensure network-connected power and cooling infrastructure devices do not become a successful target for a cyberattack.

All these devices, as well as the DCIM server and gateway, must always be kept up to date with the latest firmware or software patches. Cyber criminals are constantly working to find vulnerabilities in existing code to hijack devices to steal data, control devices, cause outages, etc. New firmware and software patches not only fix bugs and provide additional performance enhancements, but they often address known security vulnerabilities. These code updates should be installed or applied as soon as they become available from the vendor. Without an effective DCIM solution, this process requires on-going discipline and action from the operations team.

The security features and settings that were enabled and configured during the initial setup and installation also need to be maintained throughout the life of the infrastructure device, network appliance, or management server/gateway. By minimizing the number of users with the ability to change these settings, you reduce the chances of unintended or non-permitted changes being made. Beyond that, these settings must be checked regularly to ensure they remain set properly over time. This requires additional, ongoing discipline and regular action by the ops team.

However, DCIM tools with a security assessment feature can significantly simplify all the work described above, at least for power and cooling infrastructure devices. These assessments will scan all connected devices across the entire IT portfolio to provide a report highlighting out-of-date firmware and compromised security settings. Some DCIM tools will also automate the updating of firmware and provide a means to perform mass configuration of security settings across multiple devices at once, greatly simplifying the process.

How DCIM helps achieve sustainability goals

DCIM can be used to reduce your IT operation’s energy use and greenhouse gas (GHG) emissions, as well as give you basic information to start tracking and reporting sustainability metrics. Energy reductions can be accomplished using DCIM planning & modeling functions. These tools work to better match power consumption to the IT load by turning down or turning off idle infrastructure resources. Or the software can make you aware of where to consolidate the IT load to reduce both IT energy consumption as well as the power losses from the supporting infrastructure. The new white paper describes several specific use cases of how DCIM planning & modeling tools can help reduce energy consumption.

Modern DCIM can also help CIOs and their teams to begin tracking and reporting basic sustainability metrics for their portfolios of on-premise data centers, edge computing sites, and colocation assets. Some DCIM offers will, out of the box, collect data and report the following for individual sites and in aggregate:

• PUE: current and historical

• Energy consumption: usage at sub-system level, showing both real-time and historical trends of total consumption, IT consumption, and power losses

• Carbon footprint (Scope 2 emissions): based on local carbon emission factors, in total and by subsystem including IT, power, and cooling

For these metrics to be meaningful, of course, it is important for the DCIM software to be able to communicate with and normalize data from all power and cooling infrastructure devices, regardless of make or model. This ensures a complete picture of environmental impact. So DCIM tools and infrastructure devices that embrace common, open protocols (e.g., SNMPv3) and accommodate the use of APIs/web services should be used.
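
The arithmetic behind these metrics is straightforward once device data has been normalized. A sketch in Python, using invented readings and a placeholder grid emission factor:

```python
# Invented example readings; a DCIM tool would aggregate these per site.
total_facility_kwh = 1_450.0   # metered input to the site
it_load_kwh = 1_000.0          # measured at UPS outputs / rack PDUs
grid_factor_kgco2e_per_kwh = 0.233   # local grid emission factor (varies by grid)

pue = total_facility_kwh / it_load_kwh              # Power Usage Effectiveness
losses_kwh = total_facility_kwh - it_load_kwh       # power and cooling overhead
scope2_kgco2e = total_facility_kwh * grid_factor_kgco2e_per_kwh

print(f"PUE {pue:.2f}, losses {losses_kwh:.0f}kWh, Scope 2 {scope2_kgco2e:.0f}kgCO2e")
# PUE 1.45, losses 450kWh, Scope 2 338kgCO2e
```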

Note, DCIM is in the early phase of its evolution towards becoming an environmental sustainability reporting management tool for data center white space and edge computing installations, in addition to being a tool for improving resiliency and security. The white paper explores a bit how DCIM will likely evolve in this direction in the near term. But, again, for most enterprise businesses that are just getting started with sustainability, modern DCIM tools can be used today to track and report the basics.

In summary

As the role of enterprise CIOs expands to driving business strategy, digitalization, and innovation, their traditional role of IT service delivery remains critical. However, this role has become much more challenging as IT portfolios have become more distributed geographically and spread among cloud, colocation, and the edge. IT resiliency and security must be constantly monitored and maintained across an entire portfolio of IT assets. At the same time, urgency and pressure are growing to track, report on, and improve environmental sustainability. Our new white paper describes in detail how DCIM monitoring & alarming as well as planning & modeling functions address these challenges and serve to make distributed, hybrid IT more resilient, secure, and sustainable. 

Schneider Electric | Advertorial

Riot Games’ ‘valorant’ use of Edge computing

How Riot Games moved its workloads to the cloud and the Edge

Georgia Butler, Reporter

When you enter into a game's universe, you are temporarily transported out of your current reality.

The gaming industry has successfully commercialized escapism and with the advent of cloud and Edge computing, players can escape into the same world no matter their actual location.

It is no surprise then that, according to Statista, the video gaming industry is expected to reach a revenue of $372 billion in 2023 - around $30 billion more than projected for the data center industry.

But the two sectors are interlinked, with the former heavily reliant on the latter.

Hitting the network

Initially, gaming developments needed only the user's computer or device to be powerful enough to run the current state of the art. In 1989, the 8-bit Game Boy could support a simple game like Tetris; but a more recent arrival, the more complicated Sims 4 (launched in 2014), needed a 64-bit operating system and a four-core AMD Ryzen 3 1200 3.1GHz processor as a bare minimum.

But alongside their increasing local demands, games have gone online, enabling players in different locations to compete or collaborate with one another in increasingly complex games. This means gaming now has increasing networking, bandwidth, and digital infrastructure requirements, the nuances of which vary on a case-by-case basis.

Riot Games is one of the major game developers in the field. The company is particularly well known for its 2009 multiplayer online battle game League of Legends (LoL), and the 2020 first-person shooter, Valorant.

The company also runs almost entirely on Amazon Web Services (AWS).

“Games are composed of a few different workloads, and the compute infrastructure for those tends to meet different requirements as well,” David Press, senior principal engineer at Riot Games, told DCD.

Online games are many-layered: from the website where the game can be downloaded or where players can access additional information, to the platform where Riot collects data about the players and uses it to make data-informed decisions, to the platform service which supports all the around-the-game features.

All of these, for the most part, do not require any different infrastructure than a digitized company in a different sector. The workloads may be different, but as Press explained, they aren’t ‘special.’

Where video games cross over into the unique is in the game servers themselves, which host the simulation.

“If it's a first-person shooter game like Valorant, where you're in a map running around and using your weapons to try to defeat the other team, it’s a very high-frequency simulation,” explained Press.

Speedy protocol

That high frequency presents a different type of workload. The simulation tends to be very CPU-heavy, and Riot is running thousands or tens of thousands of these matches, all at once and across the globe.

“It's generally a very large homogenous fleet of compute,” Press said. “Then, from a network perspective, it's also a bit different. These machines are sending a lot of very small User Datagram Protocol (UDP) packets.”

The simulation is creating a 3D world for the game player. The server has to generate, in real time, things like the character’s movements and the different plays - in League of Legends, this could be casting a spell that moves through space and hits another player.

That simulation has to run incredibly fast.

“We're running the simulation very quickly, and we need to be able to distinguish between a spell or bullet hitting another player, or missing them, and then we need to broadcast all those changes to all the players that are in that world,” Press said.

“All these little UDP packets carry the information that this person moved here or this person cast this spell, and that is happening maybe 30 times a second or 120 times a second depending on the game.”

UDP works for gaming because it runs on top of the Internet Protocol to transmit datagrams over the network. Unlike TCP, the protocol used in most other applications, UDP keeps packets small and avoids the overhead of connection setup, error checking, and retransmission of missing data, meaning packets can move very quickly - and a lost update is simply superseded by the next one.
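
As a minimal sketch of the pattern Press describes (this is illustrative Python, not Riot's netcode; the addresses and packet layout are invented):

```python
import socket
import struct
import time

TICK_HZ = 30   # the article cites 30 to 120 updates per second
players = [("203.0.113.10", 7777), ("203.0.113.11", 7777)]  # placeholder addresses

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)     # UDP: no handshake

def broadcast(entity_id, x, y):
    # One tiny datagram (12 bytes): no acks, no retransmission of lost data.
    packet = struct.pack("!Iff", entity_id, x, y)
    for addr in players:
        sock.sendto(packet, addr)

for tick in range(3):                  # a real server loops for the whole match
    broadcast(entity_id=42, x=101.5 + tick, y=88.0)   # "this person moved here"
    time.sleep(1 / TICK_HZ)            # a dropped update is superseded next tick
```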

But the speed at which this can transfer - the latency - is also dependent on the digital infrastructure available, and for the cloud to support the gaming industry, it has to come to the Edge.

League of Legends was launched in 2009, and at the time it ran on on-premise infrastructure. But as the number of people playing grew, first from the hundreds to the thousands, and all the way up to the hundreds of millions, hosting this solely on-premise became impossible.

Not only did the sheer quantity of game players create issues, but the global spread of the game also introduced new challenges. Latency grows with the distance between the player and the game server, so distributed IT became a requirement to enable a reasonable quality of gameplay.


Moving to the cloud

Riot Games began moving its workloads onto AWS in a slow process - and one that is still ongoing.

“We started that migration workload by workload,” said Press. “Websites were probably the first thing we moved to AWS and then, following that, our data workloads moved to AWS pretty early on. And that is being followed by the game platform and the game server. We're still in the process of migrating those.”

The company did this by building an abstraction layer which is, internally, called R-Cluster.

“[R-Cluster] abstracts compute - including networking, load balancers, databases, all the things that the game platform and game servers need. That abstraction can run on both AWS and our on-prem infrastructure, so our strategy was to first create that layer and migrate League onto that abstraction.

"It was still mostly running on-prem initially. But then once most of League was migrated to that abstraction, then we could more easily start moving the workloads to AWS and nothing had to change with LoL itself once it was targeting that abstraction.”

That process is being done region by region, and instead of Riot relying on having enterprise data centers in every region, the gameplay instead runs on AWS - be it an AWS Region, a Local Zone, or Outposts deployed in Riot’s on-premise data centers.

The decision of which AWS service to use depends on its availability in the region, and on which service reduces latency for the customer. According to Press, Riot aims for latency under 80 milliseconds for League of Legends, and under 35 milliseconds for Valorant.
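
The placement logic can be caricatured as working down a list of ever-closer deployment options until one fits the latency budget. The numbers below are invented, and Riot's real decision weighs far more than a single median figure:

```python
BUDGET_MS = {"lol": 80, "valorant": 35}   # targets cited by Press

# Hypothetical median player latency to each candidate deployment in a region,
# ordered from the largest AWS footprint down to on-prem Outposts hardware.
candidates = [("aws_region", 95), ("local_zone", 35), ("outpost", 18)]

def pick_deployment(game):
    for name, latency_ms in candidates:
        if latency_ms < BUDGET_MS[game]:
            return name
    return "outpost"   # last resort: AWS hardware in Riot's own data center

print(pick_deployment("lol"))        # local_zone - 35ms clears the 80ms budget
print(pick_deployment("valorant"))   # outpost - only 18ms clears the 35ms budget
```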

But according to Press, there is a balance to be found in this. If you were to put a game server in every city, the service risks becoming too segmented.

“If we put a game server location in every big city in the United States, that would actually be too much because it would carve up the player base and you'd have far fewer people to match up against,” said Press. “It’s a balance between better latency, and making the match-making time too long, or matching players with those at different skill levels.”

Dan Carpenter, director of the North American games segment at AWS, agreed: “You want the server itself to be close to all the players but also to work in a network capacity, so people from all over the world, whether it's someone in Korea playing against someone in Dallas, can still have a similar experience within the game itself.

“That's represented from a hardware perspective, where of course you have the game server that is both presented to the end user but also needs to scale very quickly in the back end, especially for big events that occur in games.”

Massive tournaments

For games like LoL, which fall under ‘esports,’ players can take part in massive multiplayer tournaments. Those games occur simultaneously, and the infrastructure needs to be close to every end user.

“You need that hardware close to the end user, but also with high-performance networking, storage, and a variety of other facets within the infrastructure ecosystem that are required to attach to that.”

AWS currently offers 31 cloud regions globally, and 32 Local Zones (smaller cloud regions). When neither of these options provides low enough latency, Riot can then turn to AWS Outposts.

“In certain cases, an Outpost, which is a piece of AWS hardware that we would install in a physical data center, could be used to become closer to customers and enable more computing opportunities at the Edge.”

Outposts put Edge computing, game servers, and storage closer to the customer, backhauling to AWS’ global backbone via the high-capacity fiber that connects the regions.

It’s not perfect

There will, of course, always be some locations where latency simply can’t go quite as low - as an example, David Press offered Hawaii. But for the most part, with Edge computing working alongside the cloud, the infrastructure needed for online games like those offered by Riot is solid enough to provide a strong gaming experience.

This of course changes as we explore next-generation gaming technologies, like those using virtual reality, and those entirely streamed via the cloud. But that is for another article.


What Edge computing brings to the manufacturing sector

Are dark factories really feasible?

The world of manufacturing is extremely competitive, with massive conglomerates all vying for that competitive edge, while smaller outfits look to survive long enough to become a conglomerate.

Increasingly, as we move to a more digitized version of factories, it has become apparent that what can give the manufacturing industry that competitive edge is, well, the Edge.

The digital revolution brings with it the essential need for digital infrastructure, and as we progress towards smart and even dark factories, that infrastructure will need to change its form.

Welcome ‘Industry 4.0’

‘Smart factories’ is something of a buzzphrase, like ‘Edge’ itself. But, despite the danger of overuse devaluing the concept, more and more manufacturers are adopting principles in their facilities that deserve the description.

The smart factory collects data via sensors on the machines and around the factory floor, analyzes that data, and learns from experience. To do this effectively, the processing has to happen where the action is.

“There will potentially be thousands of sensors, and they’re all collecting data, which then needs to be analyzed,” explains Matt Bamforth, senior consultant at STL Partners.

“This is a huge amount of data, and it would be extremely expensive to send all of this back to a central server in the cloud. So in analyzing and storing this data at the Edge, you can reduce the backhaul.”

According to Bamforth, manufacturers will leverage Edge computing for four key use cases: advanced predictive maintenance, precision monitoring and control, automated guided vehicles (AGVs), and real-time asset tracking and inventory management. Another key application of Edge computing is digital twins.

A good example of how this is implemented can be seen in the factories of confectionery company Mars. While Mars prefers ‘factory of the future’ as opposed to ‘smart factories,’ the premise remains the same.

Scott Gregg, global digital plants director for Mars, said in a podcast that prior to using this kind of technology, “the plant floor was a bit of a black box. Data really wasn't readily available, plant and business networks were not necessarily connected, and engineering was at the forefront to solve some of those traditional challenges.”

Mars, along with the introduction of sensors on the factory lines, has also implemented digital twin technology.

Digital twins

“The twin allows us to use and see data in real-time to help us reduce non-quality costs and increase capacity. As for innovation, it's now pushed our plant floor associates to look at solving problems in a very different way, providing them with a toolset that they've never had before and with a different way of thinking,” added Gregg.

Digital twins work by simulating the real equipment and technology, and can be used to test operational experiments without having to run the equipment and risk wasting or damaging it. As can be imagined, however, this technology needs a complex IT setup behind it.

“With the introduction of the twin, we've had to go across engineering, traditional IT functions, networks, servers, and cloud hosting. So all these different groups are now coming together to solve problems on the plant floor, which we've never done before,” explained Gregg.

The task of actually digitizing the Mars factory was taken on by Accenture, and uses the Microsoft Azure platform along with Accenture Edge accelerators.

Starting in 2020, Accenture introduced on-site Edge processing as well as implementing sensors directly onto the factory lines.

“The sensor literally screws into the line itself and uses the data from the machine to understand what's happening,” Simon Osborne, Accenture’s technology lead on the Mars project, told DCD.

One example of this is on the Skittles factory line. According to Osborne, the sensor would be doing things like counting the number of Skittles candies going into each bag, and using that data to measure performance.

“The twin would be, firstly, just monitoring. But then over time, predicting and making the machine more accurate. They're trying to reduce waste, reduce energy, to help the sustainability agenda, and save money,” Osborne explained.
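
In spirit, the monitoring half of that is very simple; a toy sketch in Python (the target count and tolerance below are invented, not Mars figures):

```python
TARGET = 56       # hypothetical Skittles per bag
TOLERANCE = 2     # acceptable drift before flagging the line

counts = [56, 57, 55, 54, 53, 52]   # per-bag counts from the line sensor

window = counts[-3:]                 # simple rolling window over recent bags
avg = sum(window) / len(window)

if abs(avg - TARGET) > TOLERANCE:
    # Drifting low means short-filled bags; drifting high means giveaway waste.
    print(f"Drift detected: rolling average {avg:.1f} vs target {TARGET}")
```

The predictive half - anticipating that drift before it happens - is where the twin and its supporting IT earn their keep.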

Osborne went on to admit, however, that he couldn’t confirm whether the sustainability benefits of the system outweighed the energy used to run the digital twin and its supporting IT, as Accenture didn’t have access to that information.

While some of the data will be collected and processed at the Edge, it is mostly sent to the cloud for processing.

“They have a lot of servers on each site, but most of the heavy processing would be on the cloud,” said Osborne.

“But where you have, for example, a camera, there would be Edge probes sitting right near it because they need to make the decision in real-time. From a latency perspective, they want the decision-making done immediately, they don't want to send it off and wait.

“So we use Edge as often as we can to get the processing as near to the production line as possible.”

Richard Weng, managing director at Accenture, also pointed out that it isn’t just about latency, but the complexity and quantity of the data that needs to be processed.

“Some of the simple use cases may use around 2,000 data points per minute. But some of the more sophisticated ones, especially when you're talking about videos and things like that, can be enough to crash the system, which is why we do that processing and run the camera as close to the Edge as possible.”

In factories, there is also a security concern. In the case of Mars, it doesn’t want its closely guarded recipes to become public knowledge - after all, the Mars bar is a sacred treat.

Keeping data close to your chest

Mars worked around this by having ‘two sides’ to its Edge server setup. One side communicates with the operations technology, which then passes that information through to the IT side via a demilitarized zone.

Weng explained that the IT side communicates with the outside world, while the internal one is “basically responsible for aggregating, computing, and providing that one single proxy or one single thread from the factory to the outside.”
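
Conceptually, the pattern looks something like the sketch below. The names and transport are invented stand-ins; Mars' actual architecture is not public:

```python
import json
import queue
import threading

dmz = queue.Queue()   # stands in for the demilitarized zone between OT and IT

def ot_side():
    # OT side: aggregate raw readings locally; raw data never crosses over.
    readings = [("line1.bag_count", 56), ("line1.temp_c", 21.4)]
    summary = {name: value for name, value in readings}
    dmz.put(json.dumps(summary))      # one aggregated message crosses the DMZ

def it_side():
    # IT side: the single proxy that talks to the outside world.
    payload = dmz.get()
    print("forwarding to cloud:", payload)

producer = threading.Thread(target=ot_side)
producer.start()
producer.join()
it_side()
```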

While Mars is running what it calls a ‘factory of the future,’ it is still a long way from a dark factory, or lights-out factory.

The dark factory is so automated that it renders human input redundant - at least in theory. In reality, most dark factories will need some human workers monitoring the equipment or carrying out repairs and maintenance.

But what is undisputed is the need for additional intelligence in the technology to bring about the benefits available from this hands-off approach to manufacturing.

‘Industry 5.0’ and lights-out factories

For now, the sheer quantity of data points that would need to be analyzed, along with the energy needed to power such an operation and the capex and opex costs, leaves the manufacturing industry unable to make the dark factory a widespread phenomenon.

Those who are actively pursuing a reduced human workforce are thus exploring futuristic technologies, like the Boston Dynamics Spot robot dogs.

Spot, while dancing uncomfortably close to the uncanny valley, is able to patrol the factory and monitor the goings-on through its array of sensors. The technology has already been deployed in factories and facilities.

One user is GlobalFoundries, a semiconductor manufacturer, which uses Spot to gather data on the thermal condition and analog gauge readings of pumps, motors, and compressed gas systems.

While Spot dogs are a useful technology, they have not yet rendered human interaction unnecessary.

Some manufacturers have successfully implemented the dark factory but it is by no means the norm and is limited by a variety of factors.

Mark Howell, technical specialist and manager of IT facilities at Ford, told DCD that, despite increased automation, we are still a while away from the lights-out manufacturing of vehicles.

“Manufacturers work on a ‘just in time’ basis,” he said. “The amount of parts on the production line is only going to get you through the next few minutes, possibly half an hour. But things are constantly coming to the workstations on the line, regardless of whether that's people standing there putting those things together or it's robots that are assembling components on those production lines.

“Using information technology to communicate, we are slowly seeing fewer and fewer people in the factories. Will they ever be completely lights out? I think it depends on the product that you're manufacturing, but I think that we are still quite a way off from assembling a vehicle that way.”

It isn’t only down to the complexity of the product, though this is a significant consideration. For many products the technology is readily available; the hesitation comes from economics, politics, and local regulations.

“If you're going to build a lights-out factory, you pretty much build it in a country like the UK,” explained Howell.

Manufacturers need to balance the cost of labor with the cost of machinery. In the UK, labor costs are high, and the machinery tends to be manufactured in similarly high-cost places. Once that machinery is installed, the labor savings mean it can begin to show a return on investment.

But in those countries where there are limited wage regulations or even no minimum wage at all, it no longer makes sense. As with anything, it is a cost-benefit analysis, and lights-out factories need to make sense to the businesses before they will invest the upfront money on the equipment needed.
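
Howell's cost-benefit analysis boils down to a payback calculation. A sketch with invented numbers:

```python
automation_capex = 5_000_000      # robots, sensors, Edge infrastructure (invented)
machine_opex_per_year = 400_000   # maintenance, energy, support (invented)

def payback_years(annual_labor_cost_replaced):
    saving = annual_labor_cost_replaced - machine_opex_per_year
    return automation_capex / saving if saving > 0 else float("inf")

print(payback_years(2_000_000))   # high-wage country: ~3.1 years - worth it
print(payback_years(450_000))     # low-wage country: 100 years - no business case
```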

For the time being, dark factories are more a demonstration of what technology can do than a practical global solution. The futuristic robots we see monitoring a factory floor in publicity stunts, while very exciting, would need to be a cheaper solution than simply hiring someone to do the job.

In many sectors and areas of the world, that cost-benefit analysis is not yet producing a close call. But when it does, the dark factories of the future will be heavily dependent on Edge computing, and on the low-latency 5G networks needed to support it - even more so than the smart factories of today.


