Touch in Robots: A Neuromorphic Approach
by Ella Janotte (Italian Institute of Technology), Michele Mastella, Elisabetta Chicca (University of Groningen) and Chiara Bartolozzi (Italian Institute of Technology)
In nature, touch is a fundamental sense. This should also be true for robots and prosthetic devices. In this project we aim to emulate the biological principles of tactile sensing and to apply them to artificial autonomous systems.
Babies are born with a grasping reflex, triggered when something touches their palms. Thanks to this reflex, they are able to hold on to fingers and, later on, manually explore objects and their surroundings. This simple fact shows the importance of biological touch for understanding the environment. However, artificial touch is far less developed than vision: even tasks such as manipulation, which require tactile information for slip detection, grip-strength modulation and active exploration, are widely dominated by vision-based algorithms. There are many reasons for the under-representation of tactile sensing, starting from the challenges posed by the physical integration of robust tactile sensing technologies in robots. Here, we focus on the large amount of data generated by e-skin systems, which strongly limits their application on autonomous agents that require low-power and data-efficient sensors. A promising solution is the use of event-driven e-skin and on-chip spiking neural networks for local pre-processing of the tactile signal [1].

Motivation
E-skins must cover large surfaces while achieving high spatial resolution and enabling the detection of wide-bandwidth stimuli, resulting in the generation of a large data stream. In the H2020 NeuTouch [L1] project, we draw inspiration from the solutions adopted by human skin.
Tactile information can be sampled in an event-driven way, i.e., upon contact or upon the detection of a change in contact, and coupled with non-uniform spatial sampling (denser at the fingertips, sparser on the body). This reduces the amount of data to be processed and, if merged with on-chip spiking neural networks for processing, supports the development of efficient tactile systems for robotics and prosthetics.
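To get a feel for the potential savings, the following back-of-the-envelope sketch in Python compares the bandwidth of a conventionally sampled e-skin with an event-driven one. Every number in it (taxel count, sampling rate, active fraction, event rate, packet size) is an illustrative assumption, not a NeuTouch specification.

```python
# Back-of-the-envelope comparison of fixed-rate vs. event-driven readout.
# All numbers below are illustrative assumptions.

taxels = 4000            # assumed number of sensing elements on a large e-skin
sample_rate_hz = 1000    # assumed fixed per-taxel sampling rate
bits_per_sample = 12     # assumed ADC resolution

fixed_rate_bps = taxels * sample_rate_hz * bits_per_sample
print(f"Fixed-rate readout:   {fixed_rate_bps / 1e6:.1f} Mbit/s")

# Event-driven: only taxels experiencing (changing) contact emit events.
active_fraction = 0.02   # assumed fraction of taxels currently in contact
events_per_second = 100  # assumed event rate of an active taxel
bits_per_event = 32      # assumed address-event packet size (address + timestamp)

event_bps = taxels * active_fraction * events_per_second * bits_per_event
print(f"Event-driven readout: {event_bps / 1e3:.1f} kbit/s")
```

Under these assumptions the event-driven stream is orders of magnitude smaller, because idle taxels contribute no data at all.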
Neuromorphic sensors
Mechanoreceptors of hairless human skin can be roughly divided into two groups: slowly and rapidly adapting. Slowly adapting afferents encode stimulus intensity, while rapidly adapting ones respond to changes in intensity. In both cases, tactile afferents generate a series of digital pulses (action potentials, or spikes) upon contact. This principle can be applied to an artificial e-skin by implementing a neuromorphic, or event-driven, sensor readout.
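As a rough illustration of these two response types, the sketch below models slowly adapting (SA) and rapidly adapting (RA) afferents as Poisson spike generators whose rates track the stimulus intensity and its rate of change, respectively. The gains, time step and press-hold-release stimulus are arbitrary assumptions chosen for readability, not a model used in the project.

```python
import numpy as np

def afferent_spikes(pressure, dt=1e-3, sa_gain=200.0, ra_gain=50.0):
    """Toy model of slowly (SA) and rapidly (RA) adapting afferents.

    SA afferents fire at a rate proportional to stimulus intensity;
    RA afferents fire at a rate proportional to the magnitude of its change.
    Gains and the Poisson spike generation are illustrative assumptions.
    """
    rng = np.random.default_rng(0)
    sa_rate = sa_gain * pressure                           # Hz, tracks intensity
    ra_rate = ra_gain * np.abs(np.gradient(pressure, dt))  # Hz, tracks change
    sa_spikes = rng.random(len(pressure)) < sa_rate * dt   # Poisson approximation
    ra_spikes = rng.random(len(pressure)) < ra_rate * dt
    return sa_spikes, ra_spikes

# A press-hold-release stimulus: RA spikes cluster at onset and offset,
# while SA spikes persist throughout the hold phase.
t = np.arange(0, 1.0, 1e-3)
pressure = np.clip(np.minimum(t - 0.2, 0.8 - t) * 10, 0, 1)
sa, ra = afferent_spikes(pressure)
print(f"SA spikes: {sa.sum()}, RA spikes: {ra.sum()}")
```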
As in neuromorphic vision sensors, the signal of each sensing element is sampled individually and asynchronously, upon detection of a change in its analogue value. Initially, the encoding strategy can be based on emitting an event (a digital voltage pulse) when the measured signal changes by a given amount with respect to its value at the previous event. We will then study more sophisticated encoding based on local circuits that emulate the slowly and rapidly adapting afferents. Events are transmitted off-chip asynchronously via the address-event representation (AER) protocol, identifying the sensing element that observed the change. In this representation, time represents itself and the temporal event pattern contains the stimulus information. Thus, the sensor remains idle in periods of no change, avoiding the production of redundant data, while not being limited by a fixed sampling rate when changes happen fast.
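A minimal sketch of this initial encoding strategy, assuming a simple send-on-delta rule and a (timestamp, address, polarity) event format, could look as follows. The threshold value, the frame-based input and the packet layout are illustrative choices, not the project's actual circuit or protocol.

```python
def send_on_delta(samples, delta=0.05):
    """Emit an address-event whenever a taxel's value moves by more than
    `delta` from its value at the previous event (a sketch of the initial
    encoding strategy; threshold and event format are assumptions).

    `samples` is an iterable of (timestamp, {taxel_address: value}) readings;
    returns a list of (timestamp, taxel_address, polarity) events.
    """
    last_event_value = {}
    events = []
    for t, frame in samples:
        for addr, value in frame.items():
            ref = last_event_value.setdefault(addr, value)
            if abs(value - ref) >= delta:
                polarity = 1 if value > ref else -1
                events.append((t, addr, polarity))
                last_event_value[addr] = value  # update the reference value
    return events

# One taxel pressed and released: events appear only while the signal changes;
# the hold phase and idle periods produce no data at all.
readings = [(ms, {7: min(ms, 20, 60 - ms) / 20}) for ms in range(0, 61, 2)]
print(send_on_delta(readings, delta=0.2))
```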
We aim to exploit the advantages of event-driven sensing to create a neuromorphic tactile system for autonomous agents, as illustrated in Figure 1.
Figure 1: A graphical representation of the desired outcome of our project. The realised architecture takes information from biologically inspired sensors interfacing with the environment. The resulting data are translated into spikes by event-driven circuits and provide input to different parts of the electronic chip. These parts are responsible for analysing the incoming spikes and delivering information about the environmental properties of objects. The responses are then used to build an approximation of what is happening in the surroundings and to shape the reaction of the autonomous agent.