Latent Equilibrium: A unified learning theory for arbitrarily fast
computation with arbitrarily slow neurons
- URL: http://arxiv.org/abs/2110.14549v1
- Date: Wed, 27 Oct 2021 16:15:55 GMT
- Title: Latent Equilibrium: A unified learning theory for arbitrarily fast
computation with arbitrarily slow neurons
- Authors: Paul Haider, Benjamin Ellenberger, Laura Kriener, Jakob Jordan, Walter
Senn, Mihai A. Petrovici
- Abstract summary: We introduce Latent Equilibrium, a new framework for inference and learning in networks of slow components.
We derive disentangled neuron and synapse dynamics from a prospective energy function.
We show how our principle can be applied to detailed models of cortical microcircuitry.
- Score: 0.7340017786387767
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The response time of physical computational elements is finite, and neurons
are no exception. In hierarchical models of cortical networks, each layer thus
introduces a response lag. This inherent property of physical dynamical systems
results in delayed processing of stimuli and causes a timing mismatch between
network output and instructive signals, thus afflicting not only inference, but
also learning. We introduce Latent Equilibrium, a new framework for inference
and learning in networks of slow components which avoids these issues by
harnessing the ability of biological neurons to phase-advance their output with
respect to their membrane potential. This principle enables quasi-instantaneous
inference independent of network depth and avoids the need for phased
plasticity or computationally expensive network relaxation phases. We jointly
derive disentangled neuron and synapse dynamics from a prospective energy
function that depends on a network's generalized position and momentum. The
resulting model can be interpreted as a biologically plausible approximation of
error backpropagation in deep cortical networks with continuous-time, leaky
neuronal dynamics and continuously active, local plasticity. We demonstrate
successful learning of standard benchmark datasets, achieving competitive
performance using both fully-connected and convolutional architectures, and
show how our principle can be applied to detailed models of cortical
microcircuitry. Furthermore, we study the robustness of our model to
spatio-temporal substrate imperfections to demonstrate its feasibility for
physical realization, be it in vivo or in silico.
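The phase-advance principle the abstract describes can be illustrated numerically: a leaky membrane with dynamics τ·du/dt = -u + x low-pass filters its input x(t) and therefore lags it, but the prospective potential u + τ·du/dt recovers x(t) quasi-instantaneously. The sketch below is illustrative only; the time constant, input signal, and discretization are assumptions chosen for the demo, not values from the paper.

```python
import numpy as np

tau, dt = 10e-3, 1e-4                  # membrane time constant, Euler step (illustrative)
t = np.arange(0, 0.5, dt)
x = np.sin(2 * np.pi * 5 * t)          # slowly varying input signal

u = np.zeros_like(t)                   # membrane potential
for k in range(1, len(t)):
    du = (-u[k - 1] + x[k - 1]) / tau  # leaky integration: tau*du/dt = -u + x
    u[k] = u[k - 1] + dt * du

du_dt = np.gradient(u, dt)             # numerical estimate of du/dt
u_hat = u + tau * du_dt                # prospective ("phase-advanced") readout

lag_error = np.abs(u - x).max()               # instantaneous readout lags the input
prospective_error = np.abs(u_hat[1:-1] - x[1:-1]).max()  # prospective readout tracks it
print(lag_error, prospective_error)
assert prospective_error < lag_error
```

In continuous time the identity is exact, since u + τ·(-u + x)/τ = x; the small residual here comes only from the finite-difference derivative estimate, which is why the comparison excludes the one-sided boundary points of `np.gradient`.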
Related papers
- Contribute to balance, wire in accordance: Emergence of backpropagation from a simple, bio-plausible neuroplasticity rule [0.0]
We introduce a novel neuroplasticity rule that offers a potential mechanism for implementing BP in the brain.
We demonstrate mathematically that our learning rule precisely replicates BP in layered neural networks without any approximations.
arXiv Detail & Related papers (2024-05-23T03:28:52Z)
- Backpropagation through space, time, and the brain [2.10686639478348]
We introduce General Latent Equilibrium, a computational framework for fully local-temporal credit assignment in physical, dynamical networks of neurons.
In particular, GLE exploits the morphology of dendritic trees to enable more complex information storage and processing in single neurons.
arXiv Detail & Related papers (2024-03-25T16:57:02Z)
- Disentangling the Causes of Plasticity Loss in Neural Networks [55.23250269007988]
We show that loss of plasticity can be decomposed into multiple independent mechanisms.
We show that a combination of layer normalization and weight decay is highly effective at maintaining plasticity in a variety of synthetic nonstationary learning tasks.
arXiv Detail & Related papers (2024-02-29T00:02:33Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match a cortical neuron's input-output relationship with fewer than ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Contrastive-Signal-Dependent Plasticity: Forward-Forward Learning of Spiking Neural Systems [73.18020682258606]
We develop a neuro-mimetic architecture, composed of spiking neuronal units, where individual layers of neurons operate in parallel.
We propose an event-based generalization of forward-forward learning, which we call contrastive-signal-dependent plasticity (CSDP).
Our experimental results on several pattern datasets demonstrate that the CSDP process works well for training a dynamic recurrent spiking network capable of both classification and reconstruction.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Theory of coupled neuronal-synaptic dynamics [3.626013617212667]
In neural circuits, synaptic strengths influence neuronal activity by shaping network dynamics.
We study a recurrent-network model in which neuronal units and synaptic couplings are interacting dynamic variables.
We show that adding Hebbian plasticity slows activity in chaotic networks and can induce chaos.
arXiv Detail & Related papers (2023-02-17T16:42:59Z)
- Impact of spiking neurons leakages and network recurrences on event-based spatio-temporal pattern recognition [0.0]
Spiking neural networks coupled with neuromorphic hardware and event-based sensors are getting increased interest for low-latency and low-power inference at the edge.
We explore the impact of synaptic and membrane leakages in spiking neurons.
arXiv Detail & Related papers (2022-11-14T21:34:02Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Neuromorphic Algorithm-hardware Codesign for Temporal Pattern Learning [11.781094547718595]
We derive an efficient training algorithm for Leaky Integrate-and-Fire neurons, which is capable of training an SNN to learn complex spatio-temporal patterns.
We have developed a CMOS circuit implementation for a memristor-based network of neurons and synapses which retains critical neural dynamics with reduced complexity.
arXiv Detail & Related papers (2021-04-21T18:23:31Z)
- Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.