Tightening the Biological Constraints on Gradient-Based Predictive
Coding
- URL: http://arxiv.org/abs/2104.15137v2
- Date: Wed, 8 Dec 2021 17:28:50 GMT
- Title: Tightening the Biological Constraints on Gradient-Based Predictive
Coding
- Authors: Nick Alonso and Emre Neftci
- Abstract summary: Predictive coding (PC) is a theory of cortical function.
In this paper, we modify a gradient-based PC model to better fit biological constraints.
We show that modified PC networks perform as well or nearly as well on MNIST data as the unmodified PC model and networks trained with backpropagation.
- Score: 1.90365714903665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictive coding (PC) is a general theory of cortical function. The local,
gradient-based learning rules found in one kind of PC model have recently been
shown to closely approximate backpropagation. This finding suggests that this
gradient-based PC model may be useful for understanding how the brain solves
the credit assignment problem. The model may also be useful for developing
local learning algorithms that are compatible with neuromorphic hardware. In
this paper, we modify this PC model so that it better fits biological
constraints, including the constraint that neurons can only have positive
firing rates and the constraint that signals travel across synapses in only one direction. We
also compute the gradient-based weight and activity updates given the modified
activity values. We show that, under certain conditions, these modified PC
networks perform as well or nearly as well on MNIST data as the unmodified PC
model and networks trained with backpropagation.
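
To make the setup concrete, below is a minimal numpy sketch of one gradient-based PC training step on a small network, with activities rectified to stay non-negative in the spirit of the positive-firing-rate constraint. The layer sizes, step sizes, and the specific rectification scheme are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

# Energy: F = sum_l ||x_{l+1} - W_l f(x_l)||^2 / 2, minimized first over
# activities (inference), then over weights (learning). Hyperparameters and
# the ReLU rectification of activities are illustrative assumptions.
rng = np.random.default_rng(0)
f = lambda x: np.maximum(x, 0.0)          # ReLU keeps rates non-negative
df = lambda x: (x > 0).astype(float)

sizes = [784, 128, 10]
W = [rng.normal(0, 0.05, (sizes[i + 1], sizes[i])) for i in range(2)]

def pc_step(x_in, y, W, n_infer=20, lr_x=0.1, lr_w=1e-3):
    # Initialize activities with a feedforward sweep; clamp input and label.
    x = [x_in]
    for Wl in W:
        x.append(Wl @ f(x[-1]))
    x[-1] = y                                  # clamp output to target
    for _ in range(n_infer):                   # inference: relax hidden layer
        eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(2)]  # prediction errors
        grad = eps[0] - df(x[1]) * (W[1].T @ eps[1])         # dF/dx_1
        x[1] = f(x[1] - lr_x * grad)           # rectify: rates stay positive
    eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(2)]
    for l in range(2):                         # learning: local outer products
        W[l] += lr_w * np.outer(eps[l], f(x[l]))
    return float(sum((e ** 2).sum() for e in eps))

E = pc_step(rng.random(784), np.eye(10)[3], W)  # one step on a dummy pair
```

Both the activity and the weight updates above depend only on quantities local to the layer, which is what makes this family of models candidates for neuromorphic implementation.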
Related papers
- Gradient Rewiring for Editable Graph Neural Network Training [84.77778876113099]
We propose a simple yet effective Gradient Rewiring method for Editable graph neural network training, named GRE.
arXiv Detail & Related papers (2024-10-21T01:01:50Z)
- Predictive Coding Networks and Inference Learning: Tutorial and Survey [0.7510165488300368]
Predictive coding networks (PCNs) are based on the neuroscientific framework of predictive coding.
Unlike traditional neural networks trained with backpropagation (BP), PCNs utilize inference learning (IL), a more biologically plausible algorithm.
As inherently probabilistic (graphical) latent variable models, PCNs provide a versatile framework for both supervised learning and unsupervised (generative) modeling.
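
The generative reading of a PCN can be illustrated with a top-down sweep: clamping a label at the top layer and propagating predictions downward synthesizes a data-layer pattern. A minimal sketch, assuming tanh units and an untrained two-layer hierarchy (the layout is an assumption, not taken from the survey):

```python
import numpy as np

# In the generative view, higher layers predict lower ones, so a clamped
# top-layer label drives a synthesized pattern at the data layer.
rng = np.random.default_rng(1)
f = np.tanh
sizes = [10, 128, 784]                      # top (label) -> bottom (data)
W = [rng.normal(0, 0.1, (sizes[i + 1], sizes[i])) for i in range(2)]

def generate(label_vec, W):
    x = label_vec
    for Wl in W:                            # top-down prediction sweep
        x = f(Wl @ x)
    return x                                # synthesized data-layer activity

sample = generate(np.eye(10)[3], W)         # "generate" from class 3 (untrained)
```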
arXiv Detail & Related papers (2024-07-04T18:39:20Z)
- Online Training of Hopfield Networks using Predictive Coding [0.1843404256219181]
Predictive coding (PC) has been shown to approximate error backpropagation in a biologically relevant manner.
PC models mimic the brain more accurately by passing information bidirectionally.
This is the first time PC learning has been applied directly to train a Hopfield network.
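
As a rough illustration of error-driven online training of an associative memory, the sketch below replaces one-shot Hebbian storage in a classical binary Hopfield network with an online prediction-error update. This conveys the flavor only; it is not the paper's PC formulation.

```python
import numpy as np

# Weights are adjusted online from the mismatch between each unit's state and
# the network's prediction of it (an assumed delta-rule-style stand-in).
rng = np.random.default_rng(2)
N = 64
W = np.zeros((N, N))

def train_online(pattern, W, lr=0.1, steps=50):
    for _ in range(steps):
        pred = np.sign(W @ pattern)          # network's prediction of each unit
        err = pattern - pred                 # prediction error per unit
        W += lr * np.outer(err, pattern) / N
        np.fill_diagonal(W, 0.0)             # no self-connections
    return W

def recall(probe, W, iters=20):
    s = probe.copy()
    for _ in range(iters):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

p = rng.choice([-1.0, 1.0], size=N)
W = train_online(p, W)
noisy = p * rng.choice([1.0, 1.0, 1.0, -1.0], size=N)   # flip ~25% of bits
print((recall(noisy, W) == p).mean())        # fraction of bits recovered
```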
arXiv Detail & Related papers (2024-06-20T20:38:22Z)
- A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show that simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
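
The contrast in scheduling can be sketched as follows. The exact interleaving shown is an assumption illustrating the idea, not necessarily the paper's algorithm; `infer_step` and `weight_step` are caller-supplied callables, e.g. the activity and weight updates from the sketch after the main abstract above.

```python
def train_standard(state, infer_step, weight_step, n_infer=20):
    for _ in range(n_infer):          # 1) relax activities to equilibrium...
        infer_step(state)
    weight_step(state)                # 2) ...then apply one weight update

def train_interleaved(state, infer_step, weight_step, n_infer=20):
    for _ in range(n_infer):          # activities and weights are updated
        infer_step(state)             # together at every step: no separate
        weight_step(state)            # phases to schedule between
```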
arXiv Detail & Related papers (2022-11-16T00:11:04Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
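
To illustrate implicit differentiation at an equilibrium, the sketch below uses a generic recurrent rate model rather than a spiking one: the gradient is obtained from the fixed point alone, with no stored forward trajectory. The model, sizes, and iteration counts are illustrative assumptions.

```python
import numpy as np

# For a fixed point z* = f(W z* + U x), the implicit function theorem gives
# dL/dW without backpropagating through the forward iterations.
rng = np.random.default_rng(3)
n, m = 8, 4
W = rng.normal(0, 0.2, (n, n))
U = rng.normal(0, 0.2, (n, m))
x = rng.normal(size=m)

# Forward: iterate to the equilibrium state.
z = np.zeros(n)
for _ in range(200):
    z = np.tanh(W @ z + U @ x)

target = np.ones(n)
dL_dz = z - target                         # grad of L = ||z - target||^2 / 2

# Implicit backward: solve v = dL/dz + (J)^T v with J = diag(f') W, the
# Jacobian of the update at the fixed point, again by fixed-point iteration.
fprime = 1.0 - z ** 2                      # tanh'
v = dL_dz.copy()
for _ in range(200):
    v = dL_dz + W.T @ (fprime * v)

dL_dW = np.outer(fprime * v, z)            # gradient through the W z term
```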
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- A simple normative network approximates local non-Hebbian learning in the cortex [12.940770779756482]
Neuroscience experiments demonstrate that the processing of sensory inputs by cortical neurons is modulated by instructive signals.
Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.
Online algorithms can be implemented by neural networks whose synaptic learning rules resemble calcium plateau potential dependent plasticity observed in the cortex.
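
A minimal sketch of what such a supervisory, non-Hebbian update can look like, assuming a delta-rule-like gating by an instructive "plateau" signal (an illustrative form, not the paper's derived rule):

```python
import numpy as np

# The weight change is gated by a top-down supervisory signal rather than by
# the product of pre- and post-synaptic activity alone (assumed form).
rng = np.random.default_rng(4)
n_in, n_out = 20, 5
W = rng.normal(0, 0.1, (n_out, n_in))

def update(x_pre, supervisor, W, lr=0.01):
    y = W @ x_pre                             # feedforward projection
    plateau = supervisor - y                  # instructive/plateau signal
    return W + lr * np.outer(plateau, x_pre)  # gated by plateau, not by y
```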
arXiv Detail & Related papers (2020-10-23T20:49:44Z)
- Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, nonlinear backward derivatives, and one-to-one error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
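
For instance, the weight-symmetry constraint can be dropped by learning a separate backward matrix in place of the transposed forward weights. A sketch, with an assumed Hebbian-plus-decay rule for the backward weights:

```python
import numpy as np

# Replace W.T in the backward error pass with a separate matrix B, learned
# with a Hebbian-style rule (the specific rule below is an assumed example).
rng = np.random.default_rng(5)
n_lo, n_hi = 32, 16
W = rng.normal(0, 0.1, (n_hi, n_lo))         # forward weights
B = rng.normal(0, 0.1, (n_lo, n_hi))         # learned backward weights

def backward_error(eps_hi, B):
    return B @ eps_hi                         # uses B, not W.T

def hebbian_update_B(x_lo, eps_hi, B, lr=1e-3, decay=1e-4):
    # Hebbian product of the pre/post activities available at B's synapse,
    # plus weight decay; over training this tends to align B with W.T.
    return B + lr * np.outer(x_lo, eps_hi) - decay * B
```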
arXiv Detail & Related papers (2020-10-02T15:21:37Z)
- Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
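
The approximation can be checked numerically on a toy network. The sketch below compares the converged PC prediction error against the backprop delta on a two-layer linear model, using a fixed-prediction inference scheme in the spirit of this line of work; the tiny setup and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(2, 3))
x0 = rng.normal(size=4)
y = rng.normal(size=2)

# Backprop gradient for L = ||W2 W1 x0 - y||^2 / 2.
h = W1 @ x0
d_out = W2 @ h - y                            # output-layer delta
g_W1 = np.outer(W2.T @ d_out, x0)

# PC inference with predictions fixed at the feedforward values: the hidden
# error relaxes to the backprop delta, so the local weight update matches.
eps2 = d_out                                  # error fixed at the output
x1 = h.copy()
for _ in range(2000):
    eps1 = x1 - h                             # layer-1 prediction held fixed
    x1 += 0.05 * (-eps1 + W2.T @ eps2)        # relax hidden activity
eps1 = x1 - h
pc_W1 = np.outer(eps1, x0)                    # local PC quantity for W1
print(np.allclose(pc_W1, g_W1, atol=1e-6))    # True: matches backprop
```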
arXiv Detail & Related papers (2020-06-07T15:35:47Z)