
INFORMATION PROCESSING AND MACHINE LEARNING

Information processing and machine intelligence techniques have an essential role to play in making sense of the enormous quantities of data generated by the IoT. Our researchers are finding new ways to extract useful information from this data pool and developing intelligent software to enable IoT systems to learn, identify patterns and make smarter decisions for a range of applications.


Encompassing both IoT software and hardware, their work considers a range of challenges and trade-offs. For example, how can resource-hungry information processing and artificial intelligence (AI) techniques be optimised for low-power devices? At the device level, what’s the optimal trade-off between the speed and efficiency of intelligent processes and their accuracy, and how can hardware design contribute to better results? And how can the privacy and security benefits of distributed systems be realised while enabling the holistic view of data that machine learning and AI require?

A long-established discipline at Southampton, machine intelligence has been a focus of research activity here for more than 20 years. Our work in this area crosses the spectrum of computer science and electronics, and our researchers collaborate with industry and academic partners around the world, and with Southampton’s Centre for AI and Machine Intelligence.8

Putting more intelligence at the edge

Currently almost all machine learning and AI happens on cloud-based servers. For example, if you ask your virtual assistant a question, the device itself will detect your voice and wake up, but after that the audio is recorded and sent to the cloud for interpretation and to generate a response.

This approach provides access to plenty of computational resource but raises issues about privacy – data might be intercepted during communication or the central store of data compromised. There are also issues around the power cost of communicating data and latency of the response. For these reasons, it would be useful to do some AI at the edge – that is, on or close to the device level rather than centrally. However, most IoT devices don’t have the processing power to run complex AI algorithms.

Our researchers are therefore investigating ways to bring intelligence to the edge that are both quick and energy efficient. Southampton is a partner in the International Centre for Spatial Computational Learning,9 a programme funded by the Engineering and Physical Sciences Research Council (EPSRC). Collaborating with colleagues at Imperial College London and the universities of Toronto, California Los Angeles and Sydney, our researchers are exploring ways to adapt and scale machine learning algorithms so that they either use less energy or return a response more quickly, while making more efficient use of the power resources of existing hardware.

For applications where less precision can be tolerated, this might mean a trade-off between accuracy and speed or power use. For example, it’s essential for the AI that runs a smart vehicle’s safety-critical systems to be both fast and accurate, and these systems may need additional power at certain times – for example, for processing video when travelling at higher speeds. However, it’s possible for the vehicle’s less critical AI elements, such as the infotainment system, to work at a lower power and output without any adverse effects.
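One common way to trade accuracy for power and memory is to quantise a model's weights to fewer bits. The sketch below is purely illustrative (the toy linear model, weight values and bit widths are invented, not taken from the research described above); it shows how reducing precision shrinks a model's storage cost while the prediction may or may not change.

```python
# Illustrative sketch: trading numeric precision for memory/energy by
# quantising model weights to n-bit signed fixed point.

def quantise(weights, bits):
    """Map floats in [-1, 1] onto an n-bit signed grid and back."""
    levels = 2 ** (bits - 1) - 1
    return [round(w * levels) / levels for w in weights]

def predict(weights, features):
    """A toy linear 'model': the sign of the dot product."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score >= 0 else -1

weights = [0.82, -0.41, 0.13, -0.77]
sample = [1.0, 0.5, -0.25, 0.9]

for bits in (8, 4, 2):
    q = quantise(weights, bits)
    print(f"{bits}-bit weights: {q} -> prediction {predict(q, sample)}")
```

At 8 bits the quantised weights are nearly indistinguishable from the originals; at 2 bits each weight collapses to -1, 0 or 1, which is far cheaper to store and multiply but can flip borderline decisions.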

8 https://www.cmi.ecs.soton.ac.uk

9 https://spatialml.net/

“Modern AI techniques are very power hungry, using massive server farms to churn through vast amounts of data. Our aim is to develop learning models using purpose-built hardware that can be embedded into small-footprint, low-power devices.”

Researchers within Southampton’s Centre for Doctoral Training for Machine Intelligence for Nano-electronic Devices and Systems10 (MINDS CDT) are also looking at how to run machine learning on very low-cost, low-power devices. Cutting across both computer science and electronics, their work includes the design of novel hardware tailored to specific algorithms. They are also exploring how to enable machine learning across distributed networks, whether through a ‘super node’ where all of the network’s computation happens, or through peer-to-peer communication between the nodes in a network.
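The ‘super node’ pattern can be sketched with federated averaging, where each node fits a small model locally and a central node combines the parameters. This is a generic illustration under invented data, not the MINDS CDT’s actual design: the linear model, the function names and the size-weighted average are all assumptions.

```python
# Hedged sketch of the 'super node' pattern: each node fits a tiny local
# model on its own data, and a central node averages the parameters,
# weighted by how much data each node holds (federated averaging).

def local_fit(data):
    """Least-squares slope through the origin for (x, y) pairs."""
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, _ in data)
    return num / den

def super_node_aggregate(node_params, node_sizes):
    """Combine node parameters, weighting each by its dataset size."""
    total = sum(node_sizes)
    return sum(p * n for p, n in zip(node_params, node_sizes)) / total

nodes = [
    [(1.0, 2.1), (2.0, 3.9)],               # node A: slope ~2
    [(1.0, 1.9), (2.0, 4.1), (3.0, 6.0)],   # node B: slope ~2
]
params = [local_fit(d) for d in nodes]
sizes = [len(d) for d in nodes]
print(super_node_aggregate(params, sizes))  # close to 2.0
```

Only the fitted parameters travel to the super node; the raw sensor data stays on each device, which is the privacy argument for distributed learning mentioned earlier.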

Optimising machine learning for health and care

One area that will benefit from more efficient machine learning methods is human activity recognition (HAR) based on sensor data. Optimising HAR algorithms will help to realise the potential of innovations in personalised health and continuous health monitoring.

HAR data is expensive to collect and tends to be available only in small datasets. Our researchers have tested new ways to interpret it without using recurrent neural networks, which are the current state of the art but which require lots of data and are difficult to train. They have also found that transfer learning (where software learns a model using one dataset and is then able to apply it to different datasets) using a technique called cascade learning outperforms

“We’ve come up with an interesting model that uses Bluetooth as a proxy of the networks of human interaction over which infections spread. This novel modelling approach incorporates both digital and manual contact tracing and looks at how this might affect the R number.”

Dr Kate Farrahi, Lecturer in Vision,

the commonly used end-to-end deep learning approach in terms of computational and memory resource.

This stream of work aims to contribute to the development of smaller, embedded deep learning models that can work quickly using less data than current methods, with multiple potential applications in the health and social care field.
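The general shape of transfer learning can be illustrated as follows. This is a generic sketch, not the cascade-learning method itself: the hand-built feature extractor, the threshold-based classifier head and the synthetic sensor windows are all invented for illustration. The point is that only the small head is fitted to the new dataset, which is why little data is needed.

```python
# Illustrative transfer-learning sketch: a fixed feature extractor
# (standing in for layers pretrained on a large dataset) is reused,
# and only a tiny classifier head is fitted to a new, small dataset.

def extract_features(window):
    """'Pretrained' extractor: mean and peak magnitude of a sensor window."""
    return (sum(window) / len(window), max(abs(v) for v in window))

def fit_head(examples):
    """Fit a trivial head: a threshold on peak magnitude, placed midway
    between the per-class means of the new dataset."""
    peaks = {0: [], 1: []}
    for window, label in examples:
        peaks[label].append(extract_features(window)[1])
    means = [sum(v) / len(v) for v in (peaks[0], peaks[1])]
    return sum(means) / 2  # decision threshold

def predict(threshold, window):
    return 1 if extract_features(window)[1] > threshold else 0

# A new, tiny labelled dataset: 0 = resting, 1 = walking (synthetic)
new_data = [([0.1, -0.2, 0.1], 0), ([0.9, -1.1, 1.0], 1)]
thr = fit_head(new_data)
print(predict(thr, [1.2, -0.8, 0.9]))  # high peak -> class 1
```

Retraining only a head over frozen features is orders of magnitude cheaper than end-to-end training, which is the kind of saving that makes embedded deployment plausible.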

Also in the sphere of healthcare, our researchers are using information processing and machine learning techniques to predict and prevent potential health problems. By analysing existing health datasets for trends and patterns, they have designed algorithms that can be applied to real-time data streams from remote patient monitoring systems, giving doctors additional information with which to make treatment decisions. Working with clinicians, they have designed systems with the potential to predict and prevent heart attacks and epileptic seizures, and they are looking to apply the same principles to a system that will predict the likelihood of neurological disorders in babies and young children in high-risk groups.
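A minimal sketch of applying an offline-learned pattern to a live stream is a rolling-baseline anomaly detector. This is not the clinical system described above; the z-score rule, window length, threshold and heart-rate values are all invented to show the streaming shape of the problem.

```python
# Hedged sketch: flag readings in a live stream that deviate sharply
# from the recent rolling baseline (a simple z-score rule). Real
# predictive systems would use models learned from historical datasets.
from collections import deque
from statistics import mean, stdev

def stream_alerts(readings, window=5, z_threshold=3.0):
    """Yield (index, value) for readings far outside the rolling baseline."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                yield i, value
        recent.append(value)

heart_rate = [72, 74, 71, 73, 72, 74, 73, 120, 72, 71]
print(list(stream_alerts(heart_rate)))  # flags the reading of 120
```

A real deployment would feed such alerts to clinicians as additional context rather than acting on them automatically, in line with the decision-support role described above.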

Modelling COVID-19 contact tracing

One aspect of our researchers’ work on machine learning came to the fore at the beginning of the COVID-19 pandemic, when a 2014 study that modelled a Bluetooth contact tracing system using mobile phone data suddenly took on new relevance.

The Singapore government got in touch to find out more about the study and subsequently launched a Bluetooth-based system, and the paper attracted significant attention from other government agencies and the media.

Since then, our researchers have updated state-of-the-art COVID-19 models to incorporate contact tracing in order to gain a realistic understanding of how factors such as user uptake can affect infection numbers.
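One often-cited property of digital contact tracing can be shown with a toy calculation: both parties in a contact must run the app, so uptake enters the model quadratically. The function below is a back-of-the-envelope branching-process sketch, not the researchers’ model; every parameter value is invented for illustration.

```python
# Toy sketch of how app uptake might affect spread. Digital tracing
# requires BOTH parties to run the app (uptake ** 2), while manual
# tracing covers some fraction of the remaining transmission chains.

def effective_r(r0, uptake, digital_efficacy, manual_coverage, manual_efficacy):
    """Reproduction number after tracing breaks a share of transmission."""
    p_digital = uptake ** 2 * digital_efficacy
    p_manual = (1 - p_digital) * manual_coverage * manual_efficacy
    return r0 * (1 - p_digital - p_manual)

for uptake in (0.2, 0.4, 0.6, 0.8):
    r = effective_r(r0=3.0, uptake=uptake,
                    digital_efficacy=0.9,
                    manual_coverage=0.3, manual_efficacy=0.5)
    print(f"uptake={uptake:.1f} -> effective R={r:.2f}")
```

Even this crude model reproduces the qualitative finding that low uptake sharply limits the benefit of digital tracing, since the quadratic term dominates.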

Using AI in dynamic transport scenarios

Connected and autonomous vehicles are set to revolutionise travel and transport in the future, making journeys more convenient and efficient through platooning (vehicles travelling as a group) and smart ride-sharing schemes, for example. The concept of an internet of connected vehicles is underpinned by AI, and our researchers are tackling some of the inherent challenges. For example, automated car users will be required to share data from their vehicle with others in order to join a platoon, so our researchers are designing intelligent agents to help people negotiate data use agreements to preserve their privacy. They are also modelling and developing AI algorithms for complex and dynamic scenarios, such as platoon formation, routing decisions and intersection control.

Another stream of this research is investigating ways to incentivise people to use an automated transport system in a way that is compatible with its aims, rather than ‘gaming’ it for their own ends – for example by offering financial incentives to share a ride or take a longer route if this benefits the efficiency of the system as a whole.

This work is being undertaken as part of AutoTrust11, a programme funded by the EPSRC.
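The incentive idea can be sketched as a simple pricing rule: subsidise a detour only when the system-wide saving exceeds what the driver must be paid. This is a hypothetical illustration, not AutoTrust’s mechanism; the cost figures, the 10 per cent margin and the function itself are invented.

```python
# Hypothetical sketch of the incentive idea: pay a driver to accept a
# detour only when the detour's system-wide saving exceeds the payment
# needed to cover the driver's private cost (plus a small margin).

def incentive_offer(private_detour_cost, system_saving, margin=0.1):
    """Return the payment to offer, or None if not worth subsidising."""
    payment = private_detour_cost * (1 + margin)
    if system_saving > payment:
        return round(payment, 2)
    return None

print(incentive_offer(private_detour_cost=2.0, system_saving=5.0))  # 2.2
print(incentive_offer(private_detour_cost=4.0, system_saving=3.0))  # None
```

Making the payment cover the driver’s private cost is what aligns individual behaviour with system efficiency, the ‘gaming’ problem described above.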

“Once you have autonomous, connected vehicles that can communicate with each other, you can make more efficient use of the road network. This involves lots of aspects of AI, such as using limited data to infer what other road users might do or where congestion might occur, or optimising communal ridesharing routes based on levels of traffic and user preferences.”

Professor of

11 https://autotrust.org.uk/
