Forward Signal Propagation Learning
- URL: http://arxiv.org/abs/2204.01723v1
- Date: Mon, 4 Apr 2022 04:41:59 GMT
- Title: Forward Signal Propagation Learning
- Authors: Adam Kohan, Edward A. Rietman, Hava T. Siegelmann
- Abstract summary: We propose a new learning algorithm for propagating a learning signal and updating neural network parameters via a forward pass.
In biology, this explains how neurons without feedback connections can still receive a global learning signal.
Sigprop enables global supervised learning with only a forward path.
- Score: 2.578242050187029
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a new learning algorithm for propagating a learning signal and
updating neural network parameters via a forward pass, as an alternative to
backpropagation. In forward signal propagation learning (sigprop), there is
only the forward path for learning and inference, so there are no additional
structural or computational constraints on learning, such as feedback
connectivity, weight transport, or a backward pass, which exist under
backpropagation. Sigprop enables global supervised learning with only a forward
path. This is ideal for parallel training of layers or modules. In biology,
this explains how neurons without feedback connections can still receive a
global learning signal. In hardware, this provides an approach for global
supervised learning without backward connectivity. Sigprop by design has better
compatibility with models of learning in the brain and in hardware than
backpropagation and alternative approaches to relaxing learning constraints. We
also demonstrate that sigprop is more efficient in time and memory than these
alternatives. To further explain the behavior of sigprop, we provide evidence that
sigprop provides useful learning signals in the context of backpropagation. To
further support relevance to biological and hardware learning, we use sigprop
to train continuous time neural networks with Hebbian updates and train spiking
neural networks without surrogate functions.
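To make the forward-only idea concrete, the following is a minimal sketch of layer-local learning in the spirit of sigprop: the label is embedded so the learning signal can travel the same forward path as the input, and each block is updated from a loss available during its own forward pass. The ForwardOnlyNet and train_step names, the fixed random label embedding, the per-block L2 alignment loss, and the use of PyTorch are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch of forward-only, layer-local learning; see the lead-in for the
# assumptions made here (this is not the paper's exact algorithm).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForwardOnlyNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        # Purely feedforward blocks; no feedback connectivity is defined anywhere.
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()),
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),
        ])
        self.readout = nn.Linear(hidden, num_classes)
        # Fixed random label embedding: lets the learning signal enter at the
        # input and be propagated forward (assumed stand-in for sigprop's context).
        self.register_buffer("label_embed", torch.randn(num_classes, in_dim))

def train_step(net, x, y, block_opts, readout_opt):
    """One forward-only update: no gradient ever crosses a block boundary."""
    h_x, h_c = x, net.label_embed[y]          # input path and target-context path
    for block, opt in zip(net.blocks, block_opts):
        h_x = block(h_x.detach())             # detach: learning stays local
        h_c = block(h_c.detach())
        # Local loss: align the block's features for the input with its features
        # for the target context (illustrative; the paper's local loss differs).
        local_loss = F.mse_loss(h_x, h_c)
        opt.zero_grad()
        local_loss.backward()                 # updates only this block's weights
        opt.step()
    logits = net.readout(h_x.detach())        # classifier is also trained locally
    ce = F.cross_entropy(logits, y)
    readout_opt.zero_grad()
    ce.backward()
    readout_opt.step()
    return ce.item()

# Usage on random data, just to show the shape of the loop.
net = ForwardOnlyNet()
block_opts = [torch.optim.SGD(b.parameters(), lr=0.01) for b in net.blocks]
readout_opt = torch.optim.SGD(net.readout.parameters(), lr=0.01)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
print(train_step(net, x, y, block_opts, readout_opt))
```

Because no gradient crosses block boundaries in this sketch, the blocks could in principle be updated in parallel, which matches the abstract's claim that sigprop is suited to parallel training of layers or modules.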
Related papers
- NoProp: Training Neural Networks without Back-propagation or Forward-propagation [47.978316065775246]
We introduce a new learning method named NoProp, which does not rely on either forward or backward propagation.
NoProp takes inspiration from diffusion and flow matching methods, where each layer independently learns to denoise a noisy target; a hedged sketch of this per-layer denoising idea appears after this list.
We demonstrate the effectiveness of our method on MNIST, CIFAR-10, and CIFAR-100 image classification benchmarks.
arXiv Detail & Related papers (2025-03-31T17:08:57Z)
- Online Training of Hopfield Networks using Predictive Coding [0.1843404256219181]
Predictive coding (PC) has been shown to approximate error backpropagation in a biologically relevant manner.
PC models mimic the brain more accurately by passing information bidirectionally.
This is the first time PC learning has been applied directly to train a neural network.
arXiv Detail & Related papers (2024-06-20T20:38:22Z)
- The Predictive Forward-Forward Algorithm [79.07468367923619]
We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
We design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit.
PFF efficiently learns to propagate learning signals and updates synapses with forward passes only.
arXiv Detail & Related papers (2023-01-04T05:34:48Z)
- Neural networks trained with SGD learn distributions of increasing complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics.
They exploit higher-order statistics only later in training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z)
- aSTDP: A More Biologically Plausible Learning [0.0]
We introduce approximate STDP (aSTDP), a new neural network learning framework.
It uses only STDP rules for supervised and unsupervised learning.
It can make predictions or generate patterns in one model without additional configuration.
arXiv Detail & Related papers (2022-05-22T08:12:50Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- MAP Propagation Algorithm: Faster Learning with a Team of Reinforcement Learning Agents [0.0]
An alternative way of training an artificial neural network is to treat each unit in the network as a reinforcement learning agent, but the resulting updates have high variance.
We propose a novel algorithm called MAP propagation to reduce this variance significantly.
Our work thus allows for the broader application of teams of agents in deep reinforcement learning.
arXiv Detail & Related papers (2020-10-15T17:17:39Z)
- RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr [60.07531696857743]
Fine-tuning a deep convolutional neural network (CNN) using a pre-trained model helps transfer knowledge learned from larger datasets to the target task.
We propose RIFLE - a strategy that deepens backpropagation in transfer learning settings.
RIFLE brings meaningful updates to the weights of deep CNN layers and improves low-level feature learning.
arXiv Detail & Related papers (2020-07-07T11:27:43Z)
- Teaching Recurrent Neural Networks to Modify Chaotic Memories by Example [14.91507266777207]
We show that a recurrent neural network can learn to modify its representation of complex information using only examples.
We provide a mechanism for how these computations are learned, and demonstrate that a single network can simultaneously learn multiple computations.
arXiv Detail & Related papers (2020-05-03T20:51:46Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
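As noted in the NoProp entry above, the following is a minimal, hedged sketch of a per-layer denoising scheme in the spirit of NoProp: each block independently learns to map an input and a noisy label embedding to the clean label embedding, so no learning signal has to propagate through the other blocks. The DenoisingBlock module, the fixed random label embedding, the linear noise schedule, the MSE objective, and the nearest-embedding readout are illustrative assumptions, not the NoProp authors' exact formulation.

```python
# Hedged sketch of per-layer denoising of a noisy target (NoProp-style idea);
# all design choices here are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingBlock(nn.Module):
    """Predicts a clean label embedding from the input and a noisy embedding."""
    def __init__(self, in_dim=784, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + emb_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim)
        )

    def forward(self, x, z_noisy):
        return self.net(torch.cat([x, z_noisy], dim=1))

num_classes, emb_dim, num_blocks = 10, 16, 4
label_emb = torch.randn(num_classes, emb_dim)        # fixed target embedding (assumed)
blocks = [DenoisingBlock(emb_dim=emb_dim) for _ in range(num_blocks)]
opts = [torch.optim.Adam(b.parameters(), lr=1e-3) for b in blocks]
noise_levels = torch.linspace(0.9, 0.1, num_blocks)  # illustrative noise schedule

x, y = torch.randn(32, 784), torch.randint(0, num_classes, (32,))
z_clean = label_emb[y]

# Training: each block learns its own denoising task, independently of the others,
# so there is no end-to-end forward or backward pass through the stack.
for block, opt, sigma in zip(blocks, opts, noise_levels):
    z_noisy = z_clean + sigma * torch.randn_like(z_clean)
    loss = F.mse_loss(block(x, z_noisy), z_clean)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference (assumed): start from noise, let the blocks successively refine the
# label embedding, then classify by the nearest class embedding.
z = torch.randn(32, emb_dim)
for block in blocks:
    z = block(x, z)
pred_class = torch.cdist(z, label_emb).argmin(dim=1)
print(pred_class[:5])
```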
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.