hardware for the accelerated simulation of neuronal network models. To this end, we collaborate closely with hardware experts at the Universities of Heidelberg, Manchester and Sussex. In summary, our E2L approach represents a powerful addition to the neuroscientist's toolbox. By accelerating the design of mechanistic models of synaptic plasticity, it will contribute not only new and computationally powerful learning rules but, importantly, also experimentally testable hypotheses about synaptic plasticity in biological neuronal networks. This effective loop between theory and experiment will hopefully go a long way towards unlocking the mysteries of learning and memory in healthy and diseased brains.
This research has received funding from the European Union's Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreement No. 945539 (Human Brain Project SGA3). We would like to thank Maximilian Schmidt for the continuous collaboration and the Manfred Stärk foundation for ongoing support.

Links:
[L1] https://kwz.me/h5f
[L2] https://kwz.me/h5S
[L3] (Graphics) www.irasutoya.com

References:
[1] J. Jordan et al.: "Evolving to learn: discovering interpretable plasticity rules for spiking networks", 2020, arXiv:q-bio.NC/2005.14149.
[2] H.D. Mettler et al.: "Evolving Neuronal Plasticity Rules using Cartesian Genetic Programming", 2021, arXiv:cs.NE/2102.04312.
[3] J. Jordan et al.: "Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers", Frontiers in Neuroinformatics, 12, 2018, doi:10.3389/fninf.2018.00002.

Please contact:
Henrik D. Mettler, NeuroTMA group, Department of Physiology, University of Bern
henrik.mettler@unibe.ch
ERCIM NEWS 125 April 2021

What Neurons do – and don't do

by Martin Nilsson (RISE Research Institutes of Sweden) and Henrik Jörntell (Lund University, Department of Experimental Medical Science)

Biology-inspired computing is often based on spiking networks, but can we improve efficiency by moving to higher levels of abstraction? To do so, we need to explain the precise meaning of the spike trains that biological neurons use for mutual communication. In a cooperation between RISE and Lund University, we found a spectacular match between a mechanistic, theoretical model with only three parameters on the one hand and in vivo neuron recordings on the other, providing a clear picture of exactly what biological neurons "do", i.e., communicate to each other.

Most neuron models are empirical or phenomenological, because this allows a comfortable match with experimental data: if nothing else works, additional parameters can be added to the model until it fits the data sufficiently well. The disadvantage of this approach is that an empirical model cannot explain the neuron – we cannot escape the uneasiness of perhaps having missed some important hidden feature of the neuron. A proper explanation requires a mechanistic model, which is instead based on the neuron's underlying biophysical mechanisms. However, it is hard to find a mechanistic model at an appropriate level of detail that also matches data well. What we ultimately want, of course, is a simple mechanistic model that matches the data as well as any empirical model.

We struggled for a considerable time to find such a mechanistic model of the cerebellar Purkinje neuron (Figure 1a), which we use as a model neuron system. Biological experiments revealed, at an
early stage, that the neuron low-pass filters its input heavily; clearly, then, the cause of the high-frequency component of the interspike-interval variability could not be the input, but had to be found locally.

The breakthrough came with the mathematical solution of the long-standing first-passage time problem for stochastic processes with moving boundaries [1]. This method enabled us to solve the model equations in an instant and allowed us to correct and perfect the model using a large amount of experimental data. We eventually found that the neuron's spiking can be accurately characterised by a simple mechanistic model using only three free parameters. Crucially, we found that the model necessarily requires three compartments and must be stochastic. The division into three compartments is shown in Figure 1b: a distal compartment consisting of the dendrite portions far from the soma; a proximal compartment consisting of the soma and nearby portions of
the dendrites; and an axon initial segment compartment consisting of the initial part of the axon. One way to describe the model is as trisecting the classical Hodgkin-Huxley model by inserting two axial resistors and observing the stochastic behaviour of the ion channels in the proximal compartment. From the theoretical model we could compute the theoretical probability distribution of the interspike intervals (ISIs). Comparing this with the ISI histograms (or, better, the kernel density estimates) of long in vivo recordings (1,000–100,000 ISIs), we found the match to be consistently and surprisingly accurate (Figure 1c). Treating the computation of model parameters from this match as an inverse problem, we found that the estimation error was within a factor of two of the Cramér-Rao lower bound for all recordings. It seems that the distal compartment is responsible for integrating the input; the proximal compartment generates a
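The first-passage-time view of spiking sketched above can be made concrete with a minimal simulation. The sketch below generates ISIs as first-passage times of a drift-diffusion (Wiener) process to a fixed threshold; this is a deliberate simplification of the model in the article, which uses moving boundaries and three compartments, and all parameter values here are illustrative rather than taken from the paper.

```python
import math
import random

def first_passage_time(mu, sigma, theta, dt=1e-4, rng=random):
    """Euler-Maruyama simulation of dV = mu*dt + sigma*dW, run until V
    first crosses the threshold theta; returns the crossing time.
    A fixed threshold is a simplification: the model discussed in the
    article uses moving boundaries."""
    v, t = 0.0, 0.0
    noise = sigma * math.sqrt(dt)  # standard deviation of one Euler step
    while v < theta:
        v += mu * dt + noise * rng.gauss(0.0, 1.0)
        t += dt
    return t

# Illustrative parameters (not from the article): drift mu, noise level
# sigma, threshold theta. For drifted Brownian motion the first-passage
# times follow an inverse Gaussian distribution with mean theta/mu.
random.seed(1)
isis = [first_passage_time(mu=1.0, sigma=0.1, theta=0.02) for _ in range(200)]
mean_isi = sum(isis) / len(isis)  # close to theta/mu = 0.02 s
```

A histogram of `isis` from such a run is the simulated counterpart of the ISI histograms and kernel density estimates used for the comparison with in vivo recordings described above.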