Special Theme
Introduction to the Special Theme
Brain-inspired Computing – by Robert Haas (IBM Research Europe) and Michael Pfeiffer (Bosch Center for Artificial Intelligence)
The origins of artificial intelligence (AI) can be traced back to the desire to build thinking machines, or electronic brains. A defining moment occurred in 1958, when Frank Rosenblatt created the first artificial neuron that could learn, by iteratively strengthening the weights of the most relevant inputs and weakening the others until a desired output was achieved. The IBM 704, a computer the size of a room, was fed a series of punch cards, and after 50 trials it learnt to distinguish cards marked on the left from cards marked on the right. This was the first demonstration of the single-layer perceptron or, in the words of its creator, "the first machine capable of having an original idea" [1].
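To make the learning rule concrete, here is a minimal sketch of the perceptron update Rosenblatt demonstrated, in Python rather than on an IBM 704; the toy "punch card" data and all parameter choices are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for Rosenblatt's punch-card task (invented data):
# each "card" is a row of 20 binary pixels, labelled +1 if the mark
# falls in the left half and -1 if it falls in the right half.
def make_card(left: bool) -> np.ndarray:
    card = np.zeros(20)
    pos = rng.integers(0, 10) if left else rng.integers(10, 20)
    card[pos] = 1.0
    return card

cards = [(make_card(left), 1.0 if left else -1.0)
         for left in rng.random(100) < 0.5]

w = np.zeros(20)   # one weight per input pixel
b = 0.0            # bias

# Classic perceptron rule: on each error, strengthen the weights of
# inputs that should have fired and weaken those that pushed the
# output the wrong way.
for trial in range(50):
    errors = 0
    for x, target in cards:
        output = 1.0 if w @ x + b > 0 else -1.0
        if output != target:
            w += target * x      # move the decision boundary toward the target
            b += target
            errors += 1
    if errors == 0:
        break

print(f"converged after {trial + 1} passes over the cards")
```

Because the two classes are linearly separable, the rule converges after a handful of passes, mirroring the 50-trial demonstration described above.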
Computation in brains and the creation of intelligent systems have been studied in a symbiotic fashion for many decades. This special theme highlights this enduring relationship, with contributions from some of the region's leading academic and industrial research laboratories. Ongoing collaboration between neuroscientists, cognitive scientists, AI researchers, and experts in disruptive computing hardware technologies and materials has made Europe a hotspot of brain-inspired computing research. Progress has been accelerated by EU-funded activities in the Future and Emerging Technologies (FET) programme and, more recently, in the Human Brain Project. Over the last decade, similar large-scale interdisciplinary projects have started in the rest of the world, such as the BRAIN Initiative in the US and national initiatives in China, Canada and Australia. Brain-inspired computing projects from large industrial players such as IBM, Intel, Samsung and Huawei have focused on disruptive hardware technologies and systems. In technology roadmaps, brain-inspired computing is commonly seen as a future key enabler for AI at the edge.

Artificial neural networks (ANNs) bear only a vague resemblance to biological neurons, which communicate sparingly using binary spikes at a low firing rate. Spiking neural networks (SNNs) model this behaviour more faithfully, but compared with conventional ANNs they require further innovation to classify data accurately. Wozniak et al. (page 8) present the spiking neural unit (SNU), a model that can be introduced into deep learning architectures and trained to high accuracy over a broad range of applications, including music and weather prediction. Yin et al. (page 9) show another application of backpropagation through time for tasks such as ECG analysis and speech recognition, with a novel type of adaptive spiking recurrent neural network (SRNN) that achieves matching accuracy and an estimated 30- to 150-fold energy improvement over conventional RNNs. Masquelier (page 11) also exploits backpropagation through time, but additionally accounts for the timing of spikes, achieving classification accuracy on speech commands comparable to that of conventional deep neural networks.
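The sparse, binary communication that distinguishes SNNs from conventional ANNs can be illustrated with a minimal discrete-time leaky integrate-and-fire unit, sketched below. This is the generic dynamics underlying units such as the SNU, not the exact formulation of any of the models above; the decay, threshold and reset choices are illustrative assumptions.

```python
import numpy as np

def lif_step(x, state, w, decay=0.9, threshold=1.0):
    """One discrete-time step of a leaky integrate-and-fire unit.

    x: input vector at this time step
    state: membrane potential carried over from the previous step
    Returns (binary spike, new state).
    """
    state = decay * state + w @ x        # leaky integration of the input
    spike = 1.0 if state >= threshold else 0.0
    state = state * (1.0 - spike)        # reset the potential after a spike
    return spike, state

# Drive one unit with a random input stream; note how sparsely it
# fires compared with an ANN unit that emits a value at every step.
rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.3, size=8)
state, spikes = 0.0, []
for t in range(50):
    s, state = lif_step(rng.random(8), state, w)
    spikes.append(int(s))
print("spike train:", "".join(map(str, spikes)))
```

The internal state and binary output make such a unit recurrent even in a feed-forward network, which is why training methods like backpropagation through time, used by several of the contributions above, apply naturally.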
SpiNNaker, described by Furber (page 12), and BrainScaleS, outlined by Schemmel (page 14), are the two neuromorphic computing systems being developed within the Human Brain Project. With a million cores representing neurons, interconnected by a novel fabric optimised to transmit spikes, SpiNNaker allowed the first real-time simulation of a model of the early sensory cortex, estimated to be two to three times faster than on GPUs or HPC systems. BrainScaleS introduced novel chips in which analogue hardware circuits represent each neuron; configured into spiking neural networks, they run over 1,000 times faster than real time, and general-purpose cores placed close to the analogue cores control the plasticity of each neuron's synapses. This enables experimentation with various learning methods: for instance, Baumbach et al. (page 15) describe use cases that exploit these cores to simulate the environment of a bee looking for food, or to perform reinforcement learning on the game of Pong. Göltz et al. (page 17) exploit the timing of spikes, with an encoding that represents more prominent features with earlier spikes; synaptic plasticity is trained by error backpropagation through first-spike times.
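The latency code that Göltz et al. build on can be sketched in a few lines; the linear intensity-to-latency mapping and the time horizon below are placeholder assumptions, and the analogue hardware implementation and the backpropagation through first-spike times are considerably more involved.

```python
import numpy as np

def first_spike_times(features, t_max=100.0):
    """Encode feature intensities as spike times: stronger features
    spike earlier, the strongest almost immediately, weak ones late.
    The linear mapping and the t_max horizon are illustrative choices.
    """
    features = np.clip(np.asarray(features, dtype=float), 0.0, 1.0)
    return (1.0 - features) * t_max

intensities = [0.95, 0.10, 0.60]       # e.g. normalised pixel values
print(first_spike_times(intensities))  # -> approximately [ 5. 90. 40.]
```

Because each neuron fires at most once per input, such a code is extremely sparse, which is what makes it attractive for fast, energy-efficient inference on neuromorphic hardware.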
How can modern AI benefit from neuroscience and cognitive science research? Alexandre et al. (page 19) present their interdisciplinary approach to transferring neuroscientific findings into new models of AI. Two articles demonstrate how both hardware- and data-efficiency can be increased by following brain-inspired self-organisation principles. The SOMA project, presented by Girau et al. (page 20), applies both structural and synaptic neural plasticity principles to 3D cellular FPGA platforms. The multi-modal Reentrant Self-Organizing Map (ReSOM) model