A Study of Forward-Forward Algorithm for Self-Supervised Learning
- URL: http://arxiv.org/abs/2309.11955v2
- Date: Fri, 8 Dec 2023 13:46:42 GMT
- Title: A Study of Forward-Forward Algorithm for Self-Supervised Learning
- Authors: Jonas Brenig, Radu Timofte
- Abstract summary: We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, the transfer performance is significantly lagging behind in all the studied settings.
- Score: 65.268245109828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised representation learning has seen remarkable progress in the
last few years, with some of the recent methods being able to learn useful
image representations without labels. These methods are trained using
backpropagation, the de facto standard. Recently, Geoffrey Hinton proposed the
forward-forward algorithm as an alternative training method. It utilizes two
forward passes and a separate loss function for each layer to train the network
without backpropagation.
In this study, for the first time, we study the performance of
forward-forward vs. backpropagation for self-supervised representation learning
and provide insights into the learned representation spaces. Our benchmark
employs four standard datasets, namely MNIST, F-MNIST, SVHN and CIFAR-10, and
three commonly used self-supervised representation learning techniques, namely
rotation, flip and jigsaw.
Our main finding is that while the forward-forward algorithm performs
comparably to backpropagation during (self-)supervised training, the transfer
performance is significantly lagging behind in all the studied settings. This
may be caused by a combination of factors, including having a loss function for
each layer and the way the supervised training is realized in the
forward-forward paradigm. In comparison to backpropagation, the forward-forward
algorithm focuses more on the boundaries and drops part of the information
unnecessary for making decisions which harms the representation learning goal.
Further investigation and research are necessary to stabilize the
forward-forward strategy for self-supervised learning, to work beyond the
datasets and configurations demonstrated by Geoffrey Hinton.
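For readers unfamiliar with the forward-forward algorithm referenced in the abstract, below is a minimal sketch of the per-layer, two-pass training it describes. It assumes the commonly used sum-of-squares "goodness" and a softplus margin loss; the positive/negative pairing shown in the usage (original inputs vs. inputs distorted by a pretext transform such as a wrong rotation) is an illustrative assumption, not the exact protocol evaluated in this study.

```python
# Sketch of per-layer forward-forward training (illustrative, not the paper's code).
# Each layer has its own optimizer and loss, so no gradients flow between layers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    def __init__(self, in_dim, out_dim, threshold=2.0, lr=1e-3):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize so only the direction, not the previous layer's goodness,
        # is passed on (as in Hinton's formulation).
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Two forward passes: push goodness (sum of squared activations) of
        # positive data above the threshold and of negative data below it.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        loss = F.softplus(torch.cat([
            self.threshold - g_pos,   # penalize low goodness on positives
            g_neg - self.threshold,   # penalize high goodness on negatives
        ])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach outputs so no gradient flows to the next layer.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach(), loss.item()

# Hypothetical usage: x_pos could be flattened images and x_neg the same images
# altered by a pretext transform (e.g. an incorrect rotation); this pairing is an
# assumption made for illustration only.
layers = [FFLayer(784, 500), FFLayer(500, 500)]
x_pos, x_neg = torch.rand(32, 784), torch.rand(32, 784)
for layer in layers:
    x_pos, x_neg, loss = layer.train_step(x_pos, x_neg)
```

Each FFLayer owns its optimizer and loss, so gradients never cross layer boundaries; the detached outputs of one layer simply become the inputs of the next, which is what "training without backpropagation" means here.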
Related papers
- SPOT: Scalable 3D Pre-training via Occupancy Prediction for Learning Transferable 3D Representations [76.45009891152178]
The pretraining-finetuning approach can alleviate the labeling burden by fine-tuning a pre-trained backbone on various downstream datasets and tasks.
We show, for the first time, that general representation learning can be achieved through the task of occupancy prediction.
Our findings will facilitate the understanding of LiDAR points and pave the way for future advancements in LiDAR pre-training.
arXiv Detail & Related papers (2023-09-19T11:13:01Z) - Deepfake Detection via Joint Unsupervised Reconstruction and Supervised Classification [25.84902508816679]
We introduce a novel approach for deepfake detection, which considers the reconstruction and classification tasks simultaneously.
This method shares the information learned by one task with the other, focusing on an aspect that existing works rarely consider.
Our method achieves state-of-the-art performance on three commonly-used datasets.
arXiv Detail & Related papers (2022-11-24T05:44:26Z) - On minimal variations for unsupervised representation learning [19.055611167696238]
Unsupervised representation learning aims at describing raw data efficiently to solve various downstream tasks.
Revealing minimal variations as a guiding principle behind unsupervised representation learning paves the way to better practical guidelines for self-supervised learning algorithms.
arXiv Detail & Related papers (2022-11-07T18:57:20Z) - Visual processing in context of reinforcement learning [0.0]
This thesis introduces three different representation learning algorithms that have access to different subsets of the data sources that traditional RL algorithms use.
We conclude that including unsupervised representation learning in RL problem-solving pipelines can speed up learning.
arXiv Detail & Related papers (2022-08-26T09:30:51Z) - Continual Contrastive Self-supervised Learning for Image Classification [10.070132585425938]
Self-supervised learning methods show tremendous potential for learning visual representations at scale without any labeled data.
To improve the visual representation of self-supervised learning, larger and more varied data is needed.
In this paper, we make the first attempt to implement continual contrastive self-supervised learning by proposing a rehearsal method.
arXiv Detail & Related papers (2021-07-05T03:53:42Z) - Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z) - Reducing Representation Drift in Online Continual Learning [87.71558506591937]
We study the online continual learning paradigm, where agents must learn from a changing distribution with constrained memory and compute.
In this work we instead focus on the change in representations of previously observed data due to the introduction of previously unobserved class samples in the incoming data stream.
arXiv Detail & Related papers (2021-04-11T15:19:30Z) - Representation Learning by Ranking under multiple tasks [0.0]
The representation learning problem under multiple tasks is solved by optimizing the approximate NDCG loss.
Experiments on different learning tasks such as classification, retrieval, multi-label learning, regression, and self-supervised learning demonstrate the superiority of the approximate NDCG loss.
arXiv Detail & Related papers (2021-03-28T09:36:36Z) - Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z) - Evolving Losses for Unsupervised Video Representation Learning [91.2683362199263]
We present a new method to learn video representations from large-scale unlabeled video data.
The proposed unsupervised representation learning results in a single RGB network and outperforms previous methods.
arXiv Detail & Related papers (2020-02-26T16:56:07Z)