SymBa: Symmetric Backpropagation-Free Contrastive Learning with
Forward-Forward Algorithm for Optimizing Convergence
- URL: http://arxiv.org/abs/2303.08418v1
- Date: Wed, 15 Mar 2023 07:39:23 GMT
- Title: SymBa: Symmetric Backpropagation-Free Contrastive Learning with
Forward-Forward Algorithm for Optimizing Convergence
- Authors: Heung-Chang Lee, Jeonggeun Song
- Abstract summary: The paper proposes a new algorithm called SymBa that aims to achieve more biologically plausible learning.
It is based on the Forward-Forward (FF) algorithm, which is a BP-free method for training neural networks.
The proposed algorithm has the potential to improve our understanding of how the brain learns and processes information.
- Score: 1.6244541005112747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper proposes a new algorithm called SymBa that aims to achieve more
biologically plausible learning than Back-Propagation (BP). The algorithm is
based on the Forward-Forward (FF) algorithm, which is a BP-free method for
training neural networks. SymBa improves the FF algorithm's convergence
behavior by addressing the problem of asymmetric gradients caused by
conflicting converging directions for positive and negative samples. The
algorithm balances positive and negative losses to enhance performance and
convergence speed. Furthermore, it modifies the FF algorithm by adding
Intrinsic Class Pattern (ICP) containing class information to prevent the loss
of class information during training. The proposed algorithm has the potential
to improve our understanding of how the brain learns and processes information
and to develop more effective and efficient artificial intelligence systems.
The paper presents experimental results that demonstrate the effectiveness of
the SymBa algorithm compared to the FF algorithm and BP.
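The core idea in the abstract, replacing FF's two separate threshold losses with a single symmetric loss on the gap between positive and negative goodness, can be sketched as follows. This is an illustrative reconstruction, not the authors' reference implementation: the function names, the threshold `theta`, and the scaling factor `alpha` are assumptions for the sketch.

```python
import numpy as np

def goodness(activations):
    # FF-style "goodness": sum of squared activations of a layer.
    return np.sum(activations ** 2, axis=-1)

def ff_loss(g_pos, g_neg, theta=2.0):
    # Original FF objective: two separate terms push positive goodness
    # above a threshold and negative goodness below it. Because the two
    # terms converge in conflicting directions, their gradients can be
    # asymmetric.
    return (np.log1p(np.exp(-(g_pos - theta))) +
            np.log1p(np.exp(g_neg - theta))).mean()

def symba_style_loss(g_pos, g_neg, alpha=4.0):
    # Symmetric loss in the spirit of SymBa: a single contrastive term
    # on the gap (g_pos - g_neg), so positive and negative samples
    # contribute to one shared objective rather than two competing ones.
    return np.log1p(np.exp(-alpha * (g_pos - g_neg))).mean()
```

When positive and negative goodness are equal, the symmetric loss sits at log 2 and decreases monotonically as the gap widens, which is the balanced convergence behavior the abstract attributes to SymBa.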
Related papers
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely Cascaded Forward (CaFo) algorithm, which does not rely on BP optimization as that in FF.
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- The Predictive Forward-Forward Algorithm [79.07468367923619]
We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
We design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit.
PFF efficiently learns to propagate learning signals and updates synapses with forward passes only.
arXiv Detail & Related papers (2023-01-04T05:34:48Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on momentum-based variance reduced technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics by minimizing the population loss that are more suitable in active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- An Improved Reinforcement Learning Algorithm for Learning to Branch [12.27934038849211]
Branch-and-bound (B&B) is a general and widely used method for optimization.
In this paper, we propose a novel reinforcement learning-based B&B algorithm.
We evaluate the performance of the proposed algorithm over three public research benchmarks.
arXiv Detail & Related papers (2022-01-17T04:50:11Z)
- Neural Network Adversarial Attack Method Based on Improved Genetic Algorithm [0.0]
We propose a neural network adversarial attack method based on an improved genetic algorithm.
The method does not need the internal structure and parameter information of the neural network model.
arXiv Detail & Related papers (2021-10-05T04:46:16Z)
- A bi-level encoding scheme for the clustered shortest-path tree problem in multifactorial optimization [1.471992435706872]
The Clustered Shortest-Path Tree Problem (CluSPT) plays an important role in various types of optimization problems in real-life.
Recently, some Multifactorial Evolutionary Algorithms (MFEAs) have been introduced to deal with the CluSPT.
This paper describes a MFEA-based approach to solve the CluSPT.
arXiv Detail & Related papers (2021-02-12T13:36:07Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.