DECENTER


Artificial Intelligence at the Edge

With edge computing, computation takes place closer to where data originated, avoiding some of the drawbacks of sending that data to the cloud. We spoke to Dr Domenico Siracusa about the work of the DECENTER project in developing a new edge computing platform designed to support AI application developers and ensure computational resources are managed effectively.

The field of artificial intelligence (AI) is developing rapidly, with new applications emerging that are filtering down into many areas of everyday life, yet many of these applications are quite resource-intensive. Edge computing promises significant benefits here, reducing latency and saving network bandwidth, and so potentially extending the reach of AI. “With edge computing, computation takes place closer to where the data originated,” says Dr Domenico Siracusa, head of the RiSING research unit at the Fondazione Bruno Kessler in the northern Italian city of Trento. This avoids some of the drawbacks associated with sending data to the cloud for computation. “First of all, sending everything to the cloud takes time,” says Dr Siracusa. “The data gets to the cloud, then more time is taken to compute the data, for example to decide whether an individual should be allowed to enter a branch office. Sending the data to the cloud may lead to unacceptable delays.”

DECENTER project

A second important benefit is that edge computing helps to save network bandwidth, while it also addresses concerns around safety and data privacy. As part of the DECENTER project, Dr Siracusa is now working to develop an edge, or fog, computing platform designed to support application developers. “We want to help application developers to create applications in a way that they can then be distributed into what we call a continuum between the cloud and the edge,” he outlines. There are two key groups of stakeholders here: application developers and infrastructure operators. “The developer has to think about how powerful their AI method is. Does it recognise n people out of 20, or 20 out of 20? How accurate is it? Does it complete its job quickly?” continues Dr Siracusa. “On the other hand, someone also has to manage the infrastructure, meaning that they have to understand how the applications are running. If you think for instance about the concept of a smart city, you may have nodes, or computational capacity, at every crossing and bus stop.”

“The first tools we have proposed in DECENTER are designed to help AI developers to divide applications into sets of small modules, containing both optimised AI models and all the other components that are necessary for the application to run, like, for instance, a database or a graphical interface.” Thanks to DECENTER, these components can be re-used, and the results of the predictions or decisions made by one application can be shared in a smart way among other AI applications. “By doing so, we are basically offering the possibility to break a toy down into small bricks that can be easily assembled and interconnected later on, and finally put into play by infrastructure operators,” explains Dr Siracusa.
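As a rough illustration of this "bricks" idea (the descriptor format, field names and values below are hypothetical, not the project's actual specification), a module-based AI application might be declared along these lines:

# Hypothetical sketch: an AI application declared as small, interconnected
# modules that an orchestrator could place on cloud or edge nodes.
# All names and fields here are illustrative, not DECENTER's real format.
pedestrian_safety_app = {
    "modules": [
        {"name": "camera-ingest",   "needs": {"cpu": "0.5", "memory": "256Mi"},
         "prefer": "edge"},                     # read IoT/camera data at the crossing
        {"name": "object-detector", "needs": {"cpu": "2", "memory": "2Gi", "gpu": True},
         "max_latency_ms": 150},                # AI model with a strict response budget
        {"name": "alert-service",   "needs": {"cpu": "0.2", "memory": "128Mi"},
         "prefer": "edge"},                     # notify the pedestrian
        {"name": "stats-db",        "needs": {"cpu": "1", "memory": "1Gi"},
         "prefer": "cloud"},                    # long-term storage can stay in the cloud
    ],
    # how the "bricks" are interconnected
    "links": [("camera-ingest", "object-detector"),
              ("object-detector", "alert-service"),
              ("object-detector", "stats-db")],
}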

Once such applications are ready, the project helps infrastructure operators to deploy them while ensuring that computational resources are used efficiently. Dr Siracusa and his colleagues are essentially applying cloud computing concepts, but outside an actual cloud environment. “We aim to send less data to the cloud. But our technologies are closely related to cloud technologies,” he explains. One important aim in the project is to enable the automatic movement of pieces of software in a dynamic, efficient and scalable way. “You can move different pieces of software from the cloud to the edge, from the edge to the cloud, or you can decide to store them only on the edge. But when you want to update it, this can be dynamically managed,” continues Dr Siracusa. “We are providing technologies to support AI solutions in four different use cases. One is about detecting dangers to pedestrians and alerting them.” This specific use case is located in Trento, where the local municipality is looking into commissioning an AI application to alert pedestrians to danger at road crossings. This is increasingly necessary today, with many people distracted by digital devices when they’re out and about. “This application recognises when cars are coming, and whether they are approaching too fast. It also recognises at the same time when someone is crossing the street,” says Dr Siracusa. The crossing is equipped with Internet of Things (IoT) sensors, which gather the relevant data before it is processed, and the pedestrian is then notified if there is any danger.

Dr Siracusa says latency is the main consideration here. “If you send those videos to the cloud, it may take a few seconds to compute,” he stresses. “We have found that when we put microservices in the cloud, it still takes more than 150 milliseconds to alert a pedestrian to danger, when that is our limit.” The latency requirement is met, however, when the microservice is moved to the edge, which is crucially important in the pedestrian crossing use case.
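The placement logic can be pictured with a small sketch. The 150 millisecond figure is the alerting budget quoted above; everything else (function and tier names, measured values) is a hypothetical illustration rather than the project's actual mechanism:

# Illustrative sketch of a latency-driven placement decision.
LATENCY_BUDGET_MS = 150

def choose_placement(measured_latency_ms: dict[str, float],
                     budget_ms: float = LATENCY_BUDGET_MS) -> str:
    """Use the cloud when it meets the alerting budget, otherwise fall back to the edge.

    measured_latency_ms maps a tier name ("cloud", "edge") to the observed
    end-to-end time from sensor input to pedestrian alert.
    """
    for tier in ("cloud", "edge"):
        if measured_latency_ms.get(tier, float("inf")) <= budget_ms:
            return tier
    raise RuntimeError("no tier can meet the alerting deadline")

# Example: the cloud path exceeds 150 ms, so the microservice goes to the edge.
print(choose_placement({"cloud": 180.0, "edge": 40.0}))   # -> "edge"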

A further use case in the project centres around robotic logistics. “This is a very interesting industrial case. These are battery-powered robots which can move around a shop floor,” explains Dr Siracusa. These robots are effectively small computers, which can navigate their way around a shop floor, yet Dr Siracusa says they are not very powerful in terms of computational capacity. “The processing capacity of these robots is almost immediately exhausted. Firstly because they consume a lot of battery, and secondly because it’s difficult to add anything,” he says. “Using AI could be beneficial for these robots. It could help them to recognise if an object or obstacle in their path is a human being. If it’s a human then they can sound an alarm, while if it’s another robot one of them will have to move. If it’s a static object, then the robot will have to replan its route.” An AI application would enable the robot to identify what is in front of it, but it does not have enough capacity to run everything on a continuous basis.

Dr Siracusa and his colleagues in the project are developing an orchestration system which will help address this issue. “With our orchestration system, we can put this AI application on the robot. Or we can put it into a small server, installed on the customers’ premises. This effectively provides another level of computation,” he explains. The orchestration system will also help to ensure that services are deployed as efficiently as possible. “The robots consume a lot more battery than they actually need, as all the applications always run. They can be active, idle, or charging,” continues Dr Siracusa.
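As a rough sketch of this idle/active behaviour (the class, module names and print statements are hypothetical, not the project's actual interface), an orchestrator reacting to robot state changes might look like this:

# Hypothetical sketch of an orchestrator that removes AI modules while a
# robot is idle or charging, and restores them when it becomes active again.
AI_MODULES = ["obstacle-classifier", "route-replanner"]

class Orchestrator:
    def __init__(self) -> None:
        self.deployed: dict[str, set[str]] = {}   # robot id -> modules currently deployed

    def deploy(self, robot: str, module: str) -> None:
        print(f"deploying {module} on {robot}")
        self.deployed.setdefault(robot, set()).add(module)

    def remove(self, robot: str, module: str) -> None:
        print(f"removing {module} from {robot}")
        self.deployed.get(robot, set()).discard(module)

    def on_state_change(self, robot: str, state: str) -> None:
        # "active": the robot needs its AI modules; "idle"/"charging": free the resources.
        if state == "active":
            for m in AI_MODULES:
                self.deploy(robot, m)
        else:
            for m in list(self.deployed.get(robot, set())):
                self.remove(robot, m)

orc = Orchestrator()
orc.on_state_change("robot-7", "active")   # modules are (re)deployed
orc.on_state_change("robot-7", "idle")     # modules are removed to save battery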

“With our orchestration, we can remove all the pieces of software dynamically when the robots are idle. When they are required again, so when the robot moves from idle to active, we can then restore them.”


