The Integrated Forward-Forward Algorithm: Integrating Forward-Forward
and Shallow Backpropagation With Local Losses
- URL: http://arxiv.org/abs/2305.12960v1
- Date: Mon, 22 May 2023 12:10:47 GMT
- Title: The Integrated Forward-Forward Algorithm: Integrating Forward-Forward
and Shallow Backpropagation With Local Losses
- Authors: Desmond Y.M. Tang
- Abstract summary: We propose an integrated method that combines the strengths of both FFA and shallow backpropagation.
We show that training neural networks with the Integrated Forward-Forward Algorithm has the potential to produce neural networks with advantageous features such as robustness.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The backpropagation algorithm, despite its widespread use in neural network
learning, may not accurately emulate the human cortex's learning process.
Alternative strategies, such as the Forward-Forward Algorithm (FFA), offer a
closer match to the human cortex's learning characteristics. However, the
original FFA paper and related works cover only a limited range of neural
network mechanisms, which may limit the algorithm's applicability and
effectiveness. In response to these challenges, we propose an
integrated method that combines the strengths of both FFA and shallow
backpropagation, yielding a biologically plausible neural network training
algorithm which can also be applied to various network structures. We applied
this integrated approach to the classification of the Modified National
Institute of Standards and Technology (MNIST) database, where it outperformed
FFA and demonstrated superior resilience to noise compared to backpropagation.
We show that training neural networks with the Integrated Forward-Forward
Algorithm has the potential to produce neural networks with advantageous
features such as robustness.
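The abstract builds on the Forward-Forward idea of training each layer with a purely local objective rather than end-to-end backpropagation. As a rough illustration only, here is a minimal NumPy sketch of one layer trained with the "goodness" objective from Hinton's original FF paper (sum of squared activations, pushed above a threshold for positive samples and below it for negative ones). This is a generic FF sketch under assumed hyperparameters, not the paper's integrated method; the class name, threshold, and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def goodness(h):
    # FF "goodness": sum of squared activations per sample
    return (h ** 2).sum(axis=1)

class FFLayer:
    """One layer trained with a local Forward-Forward objective (sketch)."""

    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        self.lr, self.threshold = lr, threshold

    def forward(self, x):
        # Length-normalize the input so the previous layer's goodness
        # cannot leak through (as in the original FF paper), then ReLU.
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return np.maximum(x @ self.W, 0.0)

    def train_step(self, x_pos, x_neg):
        # Positive samples should exceed the goodness threshold (+1),
        # negative samples should fall below it (-1).
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
            pre = xn @ self.W
            h = np.maximum(pre, 0.0)
            z = np.clip(sign * (goodness(h) - self.threshold), -30.0, 30.0)
            p = 1.0 / (1.0 + np.exp(-z))  # logistic prob of correct side
            # Gradient of -log p w.r.t. W, chained through the ReLU;
            # this is local to the layer: no signal crosses layer boundaries.
            dg = (1.0 - p)[:, None] * sign * 2.0 * h * (pre > 0)
            self.W += self.lr * xn.T @ dg / len(x)
```

Stacking several such layers (each consuming the normalized output of the one below) gives the layer-local training regime FFA is built on; the integrated variant described in the abstract would additionally allow short backpropagation paths within small blocks, each driven by its own local loss.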
Related papers
- Deep Learning Meets Adaptive Filtering: A Stein's Unbiased Risk
Estimator Approach [13.887632153924512]
We introduce task-based deep learning frameworks, denoted as Deep RLS and Deep EASI.
These architectures transform the iterations of the original algorithms into layers of a deep neural network, enabling efficient source signal estimation.
To further enhance performance, we propose training these deep unrolled networks utilizing a surrogate loss function grounded on Stein's unbiased risk estimator (SURE)
arXiv Detail & Related papers (2023-07-31T14:26:41Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like FF, does not rely on BP optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z) - Training neural networks with structured noise improves classification and generalization [0.0]
We show how adding structure to noisy training data can substantially improve the algorithm performance.
We also prove that the so-called Hebbian Unlearning rule coincides with the training-with-noise algorithm when noise is maximal.
arXiv Detail & Related papers (2023-02-26T22:10:23Z) - Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z) - Analytically Tractable Inference in Deep Neural Networks [0.0]
The Tractable Approximate Gaussian Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully-connected neural networks.
We demonstrate how TAGI matches or exceeds the performance of backpropagation for training classic deep neural network architectures.
arXiv Detail & Related papers (2021-03-09T14:51:34Z) - Learning for Integer-Constrained Optimization through Neural Networks
with Limited Training [28.588195947764188]
We introduce a symmetric and decomposed neural network structure, which is fully interpretable in terms of the functionality of its constituent components.
By taking advantage of the underlying pattern of the integer constraint, the introduced neural network offers superior generalization performance with limited training.
We show that the introduced decomposed approach can be further extended to semi-decomposed frameworks.
arXiv Detail & Related papers (2020-11-10T21:17:07Z) - Stochastic Markov Gradient Descent and Training Low-Bit Neural Networks [77.34726150561087]
We introduce Stochastic Markov Gradient Descent (SMGD), a discrete optimization method applicable to training quantized neural networks.
We provide theoretical guarantees of algorithm performance as well as encouraging numerical results.
arXiv Detail & Related papers (2020-08-25T15:48:15Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - A Hybrid Method for Training Convolutional Neural Networks [3.172761915061083]
We propose a hybrid method that uses both backpropagation and evolutionary strategies to train Convolutional Neural Networks.
We show that the proposed hybrid method is capable of improving upon regular training in the task of image classification.
arXiv Detail & Related papers (2020-04-15T17:52:48Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.