Dual Propagation: Accelerating Contrastive Hebbian Learning with Dyadic
Neurons
- URL: http://arxiv.org/abs/2302.01228v3
- Date: Wed, 7 Jun 2023 08:48:04 GMT
- Title: Dual Propagation: Accelerating Contrastive Hebbian Learning with Dyadic
Neurons
- Authors: Rasmus Høier, D. Staudt, Christopher Zach
- Abstract summary: We propose a simple energy-based compartmental neuron model, termed dual propagation, in which each neuron is a dyad with two intrinsic states.
The advantage of this method is that only a single inference phase is needed and that inference can be solved in layerwise closed form.
Experimentally we show on common computer vision datasets, including Imagenet32x32, that dual propagation performs equivalently to back-propagation.
- Score: 15.147172044848798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Activity-difference-based learning algorithms, such as contrastive
Hebbian learning and equilibrium propagation, have been proposed as biologically
plausible alternatives to error back-propagation. However, on traditional
digital chips these algorithms suffer from having to solve a costly inference
problem twice, making them more than two orders of magnitude slower than
back-propagation. In the analog realm, equilibrium propagation may be promising
for fast and energy-efficient learning, but states still need to be inferred
and stored twice. Inspired by lifted neural networks and compartmental neuron
models, we propose a simple energy-based compartmental neuron model, termed
dual propagation, in which each neuron is a dyad with two intrinsic states.
At inference time these intrinsic states encode the error/activity duality
through their difference and their mean, respectively. The advantage of this
method is that only a single inference phase is needed and that inference can
be solved in layerwise closed form. Experimentally, we show on common computer
vision datasets, including Imagenet32x32, that dual propagation performs
equivalently to back-propagation in terms of both accuracy and runtime.
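As a rough, hedged sketch of the mechanism the abstract describes (not the authors' reference implementation), the snippet below models a single dyadic layer: the two internal states are nudged in opposite directions by a top-down feedback signal, their mean plays the role of the forward activity, and their scaled difference plays the role of the locally available error. The ReLU nonlinearity, the nudging strength `beta`, and the exact form of the feedback are illustrative assumptions.

```python
# Hedged sketch of a dyadic-neuron layer in the spirit of dual propagation.
# Each neuron keeps two internal states, s_plus and s_minus; their mean acts
# as the forward activity and their (scaled) difference as the error signal.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dyadic_layer(x, W, b, feedback, beta=0.5):
    """One layerwise closed-form update of a dyadic neuron layer.

    x        : input activity from the layer below, shape (batch, n_in)
    W, b     : weights (n_out, n_in) and biases (n_out,) of this layer
    feedback : top-down signal from the layer above, shape (batch, n_out);
               zero feedback recovers an ordinary forward pass
    beta     : assumed nudging strength splitting the two internal states
    """
    pre = x @ W.T + b                        # bottom-up drive
    s_plus = relu(pre + beta * feedback)     # positively nudged state
    s_minus = relu(pre - beta * feedback)    # negatively nudged state
    activity = 0.5 * (s_plus + s_minus)      # mean       -> forward activity
    error = (s_plus - s_minus) / (2 * beta)  # difference -> error signal
    return activity, error

# A Hebbian-style local weight update would then be proportional to the outer
# product of the layer's error signal and its input activity, e.g. dW ~ error.T @ x.
```

In this simplified picture, both states are obtained from the same bottom-up drive in a single pass, mirroring the abstract's claim that only one inference phase is needed and that it can be solved layer by layer in closed form.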
Related papers
- Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, still fall short in terms of speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z) - Two Tales of Single-Phase Contrastive Hebbian Learning [9.84489449520821]
We show that it is possible for a fully local learning algorithm named "dual propagation" to bridge the performance gap to backpropagation.
The algorithm has the drawback that its numerical stability relies on symmetric nudging, which may be restrictive in biological and analog implementations.
arXiv Detail & Related papers (2024-02-13T16:21:18Z) - Hybrid Predictive Coding: Inferring, Fast and Slow [62.997667081978825]
We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs with minimal computational expense.
arXiv Detail & Related papers (2022-04-05T12:52:45Z) - Characterizing and overcoming the greedy nature of learning in
multi-modal deep neural networks [62.48782506095565]
We show that due to the greedy nature of learning in deep neural networks, models tend to rely on just one modality while under-fitting the other modalities.
We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning.
arXiv Detail & Related papers (2022-02-10T20:11:21Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Formation of cell assemblies with iterative winners-take-all computation
and excitation-inhibition balance [0.0]
We present an intermediate model that shares the computational ease of kWTA and has more flexible and richer dynamics.
We investigate Hebbian-like learning rules and propose a new learning rule for binary weights with multiple stabilization mechanisms.
arXiv Detail & Related papers (2021-08-02T08:20:01Z) - BiSNN: Training Spiking Neural Networks with Binary Weights via Bayesian
Learning [37.376989855065545]
Spiking Neural Networks (SNNs) are biologically inspired, dynamic, event-driven models that enhance energy efficiency.
An SNN model is introduced that combines the benefits of temporally sparse binary activations and of binary weights.
Experiments quantify the performance loss with respect to full-precision implementations.
arXiv Detail & Related papers (2020-12-15T14:06:36Z) - NeuroDiff: Scalable Differential Verification of Neural Networks using
Fine-Grained Approximation [18.653663583989122]
NeuroDiff is a symbolic and fine-grained approximation technique that drastically increases the accuracy of differential verification.
Our results show that NeuroDiff is up to 1000X faster and 5X more accurate than the state-of-the-art tool.
arXiv Detail & Related papers (2020-09-21T15:00:25Z) - Neural network quantum state tomography in a two-qubit experiment [52.77024349608834]
Machine learning inspired variational methods provide a promising route towards scalable state characterization for quantum simulators.
We benchmark and compare several such approaches by applying them to measured data from an experiment producing two-qubit entangled states.
We find that in the presence of experimental imperfections and noise, confining the variational manifold to physical states greatly improves the quality of the reconstructed states.
arXiv Detail & Related papers (2020-07-31T17:25:12Z) - Equilibrium Propagation with Continual Weight Updates [69.87491240509485]
We propose a learning algorithm that bridges Machine Learning and Neuroscience by computing gradients closely matching those of Backpropagation Through Time (BPTT); a hedged sketch of the equilibrium propagation update this builds on appears after this list.
We prove theoretically that, provided the learning rates are sufficiently small, at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss given by BPTT.
These results bring EP a step closer to biology by better complying with hardware constraints while maintaining its intimate link with backpropagation.
arXiv Detail & Related papers (2020-04-29T14:54:30Z)
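The last entry above contrasts standard two-phase equilibrium propagation (EP) with continual weight updates applied during the nudged phase. As a hedged, generic illustration of that distinction (assuming a Hopfield-style network with a hard-sigmoid activation; `relax_step`, `beta`, the learning rate, and the step count are placeholders rather than the paper's actual implementation):

```python
# Hedged illustration of equilibrium propagation (EP) weight updates.
# ep_update applies the classic two-phase rule from the two fixed points;
# continual_ep_phase instead updates the weights at every step of the
# nudged (second) phase from consecutive states.
import numpy as np

def rho(s):
    return np.clip(s, 0.0, 1.0)  # hard-sigmoid activation commonly used in EP

def ep_update(s_free, s_nudged, W, beta, lr):
    """Standard EP: one weight update from the free and nudged fixed points."""
    dW = (np.outer(rho(s_nudged), rho(s_nudged))
          - np.outer(rho(s_free), rho(s_free))) / beta
    return W + lr * dW

def continual_ep_phase(s, relax_step, W, beta, lr, n_steps=20):
    """Continual variant: neurons and synapses evolve together during the
    nudged phase, using an instantaneous EP estimate at every time step.

    relax_step : placeholder for one step of the nudged neural dynamics,
                 returning the next state given (state, weights, beta).
    """
    for _ in range(n_steps):
        s_next = relax_step(s, W, beta)
        dW = (np.outer(rho(s_next), rho(s_next))
              - np.outer(rho(s), rho(s))) / beta
        W = W + lr * dW
        s = s_next
    return s, W
```

The point of the continual variant, as the summary states, is that with sufficiently small learning rates these per-step updates track the gradients that backpropagation through time would compute.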