Predictive Coding Can Do Exact Backpropagation on Any Neural Network
- URL: http://arxiv.org/abs/2103.04689v1
- Date: Mon, 8 Mar 2021 11:52:51 GMT
- Title: Predictive Coding Can Do Exact Backpropagation on Any Neural Network
- Authors: Tommaso Salvatori, Yuhang Song, Thomas Lukasiewicz, Rafal Bogacz,
Zhenghua Xu
- Abstract summary: We generalize (IL and) Z-IL by directly defining them on computational graphs.
This is the first biologically plausible algorithm shown to perform exactly the same parameter updates as BP on any neural network.
- Score: 40.51949948934705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intersecting neuroscience and deep learning has brought benefits and
developments to both fields for several decades, helping both to understand how
learning works in the brain and to achieve state-of-the-art performance on
different AI benchmarks. Backpropagation (BP) is the most widely adopted method
for training artificial neural networks, but it is often criticized for its
biological implausibility (e.g., the lack of local update rules for the
parameters). Therefore, biologically plausible
learning methods (e.g., inference learning (IL)) that rely on predictive coding
(a framework for describing information processing in the brain) are
increasingly studied. Recent works prove that IL can approximate BP up to a
certain margin on multilayer perceptrons (MLPs), and asymptotically on any
other complex model, and that zero-divergence inference learning (Z-IL), a
variant of IL, is able to exactly implement BP on MLPs. However, the recent
literature also shows that no biologically plausible method yet exactly
replicates the weight updates of BP on complex models. To fill this
gap, in this paper, we generalize (IL and) Z-IL by directly defining them on
computational graphs. To our knowledge, this is the first biologically
plausible algorithm shown to perform exactly the same parameter updates as BP
on any neural network, and it is thus a great breakthrough for the
interdisciplinary research of neuroscience and deep learning.
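To make the IL/Z-IL recipe described above concrete, below is a minimal NumPy sketch on a small multilayer perceptron. It follows the Z-IL conditions reported for MLPs: value nodes initialized to the feedforward pass, output nodes clamped to the target, inference rate of 1, and the weights feeding the layer at distance t from the output updated exactly at inference step t. The layer sizes, tanh nonlinearity, squared-error loss, and all variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP: input -> W[0] -> tanh -> W[1] -> tanh -> W[2] -> linear output.
sizes = [4, 5, 6, 3]                               # illustrative layer widths
W = [rng.standard_normal((sizes[l + 1], sizes[l])) * 0.5 for l in range(3)]
f, df = np.tanh, lambda v: 1.0 - np.tanh(v) ** 2

x_in = rng.standard_normal(sizes[0])
y = rng.standard_normal(sizes[-1])                 # target, loss = 0.5 * ||z[3] - y||^2

# --- Backpropagation --------------------------------------------------------
z, h = [x_in], [x_in]                              # z[l]: pre-activations, h[l]: layer outputs
for l in range(3):
    z.append(W[l] @ h[l])
    h.append(f(z[l + 1]))
delta = [None] * 4
delta[3] = z[3] - y                                # dLoss/dz at the output layer
for l in (2, 1):
    delta[l] = df(z[l]) * (W[l].T @ delta[l + 1])
grad_bp = [np.outer(delta[l + 1], h[l]) for l in range(3)]

# --- Z-IL: predictive coding on the same graph ------------------------------
x = [v.copy() for v in z]                          # value nodes start at the feedforward values
x[3] = y.copy()                                    # clamp the output nodes to the target

def eps(l):
    """Prediction error at layer l: value node minus local prediction."""
    h_prev = x[0] if l == 1 else f(x[l - 1])
    return x[l] - W[l - 1] @ h_prev

gamma = 1.0                                        # inference rate required by Z-IL
grad_zil = [None] * 3
for t in range(3):
    # Z-IL schedule: weights feeding the layer at distance t from the output
    # are updated exactly at inference step t.
    l = 3 - t
    h_prev = x[0] if l == 1 else f(x[l - 1])
    grad_zil[l - 1] = -np.outer(eps(l), h_prev)    # dF/dW for the energy F = 0.5 * sum ||eps||^2
    # One relaxation step on the hidden value nodes (input and output stay clamped).
    new_x = [v.copy() for v in x]
    for k in (1, 2):
        new_x[k] = x[k] + gamma * (-eps(k) + df(x[k]) * (W[k].T @ eps(k + 1)))
    x = new_x

for l in range(3):
    assert np.allclose(grad_zil[l], grad_bp[l])
print("Z-IL weight updates match BP exactly on this graph.")
```

The passing assertions illustrate the zero-divergence property on a chain-structured graph; the paper's contribution is that the same construction, defined directly on computational graphs, extends this exact equivalence to any neural network.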
Related papers
- Predictive Coding Networks and Inference Learning: Tutorial and Survey [0.7510165488300368]
Predictive coding networks (PCNs) are based on the neuroscientific framework of predictive coding.
Unlike traditional neural networks trained with backpropagation (BP), PCNs utilize inference learning (IL), a more biologically plausible algorithm.
As inherently probabilistic (graphical) latent variable models, PCNs provide a versatile framework for both supervised learning and unsupervised (generative) modeling.
arXiv Detail & Related papers (2024-07-04T18:39:20Z)
- Emerging NeoHebbian Dynamics in Forward-Forward Learning: Implications for Neuromorphic Computing [7.345136916791223]
The Forward-Forward Algorithm (FFA) employs local learning rules for each layer.
We show that when a squared Euclidean norm is employed as the goodness function driving the local learning, the resulting FFA is equivalent to a neo-Hebbian learning rule.
arXiv Detail & Related papers (2024-06-24T09:33:56Z)
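For the Forward-Forward entry above, here is a minimal single-layer sketch of the mechanism it refers to: layer-local training driven by a squared-Euclidean-norm goodness. The ReLU layer, the logistic loss on goodness minus a threshold, and all sizes and constants are assumptions for illustration, not the cited paper's exact formulation; the point is only that the resulting local update factors into a scalar modulator times post-synaptic times pre-synaptic activity, i.e. a (neo-)Hebbian form.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, theta = 8, 16, 2.0                   # illustrative sizes and goodness threshold

def ff_layer_step(W, x, positive, lr=0.01):
    """One local Forward-Forward update for a single ReLU layer."""
    h = np.maximum(W @ x, 0.0)                    # post-synaptic activity
    goodness = np.sum(h ** 2)                     # squared Euclidean norm of the activity
    # Logistic loss on (goodness - theta): push goodness up for positive data,
    # down for negative data.
    p = 1.0 / (1.0 + np.exp(-(goodness - theta)))
    modulator = (p - 1.0) if positive else p      # dLoss/dGoodness
    # dGoodness/dW = 2 * h x^T (the ReLU mask is absorbed because h is already
    # zero wherever the unit is inactive), so the whole update is local and
    # Hebbian-like: scalar modulator * post-synaptic * pre-synaptic activity.
    grad_W = 2.0 * modulator * np.outer(h, x)
    return W - lr * grad_W, goodness

W = rng.standard_normal((d_out, d_in)) * 0.1
x_pos = rng.standard_normal(d_in)                 # stand-ins for positive / negative samples
x_neg = rng.standard_normal(d_in)
W, g_pos = ff_layer_step(W, x_pos, positive=True)
W, g_neg = ff_layer_step(W, x_neg, positive=False)
print("goodness (pos, neg):", g_pos, g_neg)
```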
- Evolutionary algorithms as an alternative to backpropagation for supervised training of Biophysical Neural Networks and Neural ODEs [12.357635939839696]
We investigate the use of "gradient-estimating" evolutionary algorithms for training biophysically based neural networks.
We find that EAs have several advantages that make them desirable alternatives to direct BP.
Our findings suggest that biophysical neurons could provide useful benchmarks for testing the limits of BP methods.
arXiv Detail & Related papers (2023-11-17T20:59:57Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like FF, does not rely on BP optimization.
Unlike FF, our framework directly outputs a label distribution at each cascaded block and does not require the generation of additional negative samples.
In our framework, each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
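The Cascaded Forward entry above lends itself to a small sketch of the general scheme it describes: every block emits its own label distribution through a local softmax head trained with a block-local cross-entropy loss, and no gradient crosses block boundaries, so blocks can be trained independently. Everything here (fixed random block transforms, training only the heads, the sizes and learning rate) is a simplifying assumption for illustration, not the authors' architecture or training recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, d, n_blocks = 3, 16, 3                  # illustrative sizes

# Each block: a feature transform plus its own softmax head. Only local,
# per-block cross-entropy gradients are used; nothing flows between blocks.
feat_W = [rng.standard_normal((d, d)) * 0.3 for _ in range(n_blocks)]
head_W = [np.zeros((n_classes, d)) for _ in range(n_blocks)]

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def train_example(x, label, lr=0.1):
    """Update every block's head on one labelled example, with no cross-block gradients."""
    h = x
    for b in range(n_blocks):
        h = np.maximum(feat_W[b] @ h, 0.0)         # block features (later blocks treat them
                                                   # as constants: no backward pass through them)
        p = softmax(head_W[b] @ h)                 # this block's label distribution
        grad = np.outer(p - np.eye(n_classes)[label], h)   # local cross-entropy gradient
        head_W[b] -= lr * grad
    return p                                       # prediction of the last block

x = rng.standard_normal(d)
for _ in range(50):
    p_last = train_example(x, label=1)
print("last block's distribution after local training:", np.round(p_last, 3))
```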
- Towards Scaling Difference Target Propagation by Learning Backprop Targets [64.90165892557776]
Difference Target Propagation (DTP) is a biologically plausible learning algorithm closely related to Gauss-Newton (GN) optimization.
We propose a novel feedback weight training scheme that ensures both that DTP approximates BP and that layer-wise feedback weight training can be restored.
We report the best performance ever achieved by DTP on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2022-01-31T18:20:43Z)
- BioLeaF: A Bio-plausible Learning Framework for Training of Spiking Neural Networks [4.698975219970009]
We propose a new bio-plausible learning framework consisting of two components: a new architecture, and its supporting learning rules.
Under our microcircuit architecture, we employ the Spike-Timing-Dependent-Plasticity (STDP) rule operating in local compartments to update synaptic weights.
Our experiments show that the proposed framework demonstrates learning accuracy comparable to BP-based rules.
arXiv Detail & Related papers (2021-11-14T10:32:22Z)
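The BioLeaF entry above builds on Spike-Timing-Dependent Plasticity; below is a minimal sketch of the standard pair-based STDP kernel with exponential traces on which such local rules are built. The time constants, amplitudes, Poisson-like spike trains, and weight bounds are illustrative assumptions, and this is not the paper's microcircuit rule, only the underlying plasticity primitive.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt = 200, 1.0                     # number of steps and step size in ms (illustrative)
tau_pre, tau_post = 20.0, 20.0       # trace time constants (ms)
A_plus, A_minus = 0.01, 0.012        # potentiation / depression amplitudes

w = 0.5                              # a single synaptic weight
trace_pre = trace_post = 0.0         # exponential eligibility traces

pre_spikes = rng.random(T) < 0.05    # Poisson-like pre- and post-synaptic spike trains
post_spikes = rng.random(T) < 0.05

for t in range(T):
    trace_pre *= np.exp(-dt / tau_pre)
    trace_post *= np.exp(-dt / tau_post)
    if pre_spikes[t]:
        trace_pre += 1.0
        w -= A_minus * trace_post    # pre spike after recent post spikes -> depression
    if post_spikes[t]:
        trace_post += 1.0
        w += A_plus * trace_pre      # post spike after recent pre spikes -> potentiation
    w = min(max(w, 0.0), 1.0)        # keep the weight in [0, 1]

print("final weight:", w)
```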
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Predictive Coding Can Do Exact Backpropagation on Convolutional and Recurrent Neural Networks [40.51949948934705]
Predictive coding networks (PCNs) are an influential model for information processing in the brain.
BP is commonly regarded as the most successful learning method in modern machine learning.
We show that a biologically plausible algorithm is able to exactly replicate the accuracy of BP on complex architectures.
arXiv Detail & Related papers (2021-03-05T14:57:01Z)
- A Theoretical Framework for Target Propagation [75.52598682467817]
We analyze target propagation (TP), a popular but not yet fully understood alternative to backpropagation (BP).
Our theory shows that TP is closely related to Gauss-Newton optimization and thus substantially differs from BP.
We provide a first solution to this problem through a novel reconstruction loss that improves feedback weight training.
arXiv Detail & Related papers (2020-06-25T12:07:06Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.