Continual Contrastive Self-supervised Learning for Image Classification
- URL: http://arxiv.org/abs/2107.01776v2
- Date: Tue, 6 Jul 2021 03:00:14 GMT
- Title: Continual Contrastive Self-supervised Learning for Image Classification
- Authors: Zhiwei Lin, Yongtao Wang and Hongxiang Lin
- Abstract summary: Self-supervised learning methods show tremendous potential for learning visual representations from unlabeled data at scale.
Improving these visual representations requires larger and more varied data.
In this paper, we make the first attempt to implement continual contrastive self-supervised learning by proposing a rehearsal method.
- Score: 10.070132585425938
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For artificial learning systems, continual learning over time from a stream
of data is essential. The burgeoning studies on supervised continual learning
have achieved great progress, while catastrophic forgetting in unsupervised
learning remains largely unexplored. Among unsupervised learning methods,
self-supervised learning shows tremendous potential for learning visual
representations from unlabeled data at scale. To improve these visual
representations, larger and more varied data are needed. In the real world,
unlabeled data is generated at all times, which is a huge advantage for
self-supervised learning. However, in the current paradigm, packing previous
and current data together and retraining from scratch wastes time and
resources. Thus, a continual self-supervised learning method is badly needed.
In this paper, we make the first attempt to implement continual contrastive
self-supervised learning by proposing a rehearsal method, which keeps a few
exemplars from the previous data. Instead of directly combining the saved
exemplars with the current data set for training, we leverage self-supervised
knowledge distillation to transfer the contrastive information of previous
data to the current network by mimicking the similarity score distribution
inferred by the old network over the set of saved exemplars. Moreover, we
build an extra sample queue to help the network distinguish between previous
and current data and prevent mutual interference while learning their own
feature representations. Experimental results show that our method performs
well on CIFAR100 and ImageNet-Sub. Compared with baselines that learn the
tasks without any continual learning technique, our method improves image
classification top-1 accuracy by 1.60% on CIFAR100, 2.86% on ImageNet-Sub,
and 1.29% on ImageNet-Full under the 10-incremental-step setting.
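The distillation component described in the abstract can be pictured with a short sketch. The code below is a minimal, hedged illustration (in PyTorch) of a similarity-distribution distillation loss over saved exemplars: the function name, the temperature value, the use of pairwise cosine similarities within the exemplar batch as the anchor set, and the KL-divergence objective are assumptions made for illustration, not the paper's exact formulation, and the extra sample queue is omitted.

    # Minimal sketch of similarity-distribution distillation over saved exemplars.
    # Assumptions (not taken from the paper): function name, temperature value, and
    # the use of pairwise cosine similarities within the exemplar batch itself.
    import torch
    import torch.nn.functional as F

    def similarity_distillation_loss(z_old, z_new, temperature=0.1):
        """KL divergence between the similarity distributions induced over a batch
        of saved exemplars by the frozen old encoder and the current encoder.

        z_old: (N, D) exemplar embeddings from the frozen old network (teacher).
        z_new: (N, D) embeddings of the same exemplars from the current network.
        """
        z_old = F.normalize(z_old, dim=1)
        z_new = F.normalize(z_new, dim=1)

        # Pairwise cosine similarities among exemplars, excluding self-similarity.
        n = z_old.size(0)
        mask = ~torch.eye(n, dtype=torch.bool, device=z_old.device)
        sim_old = (z_old @ z_old.t() / temperature)[mask].view(n, n - 1)
        sim_new = (z_new @ z_new.t() / temperature)[mask].view(n, n - 1)

        # The old network's distribution is a fixed target; the new network mimics it.
        p_old = F.softmax(sim_old, dim=1).detach()
        log_p_new = F.log_softmax(sim_new, dim=1)
        return F.kl_div(log_p_new, p_old, reduction="batchmean")

In use, z_old would come from a frozen copy of the encoder under torch.no_grad(), z_new from the current encoder, and this loss would be added to the ordinary contrastive loss with a weighting coefficient; the sample queue mentioned in the abstract would additionally keep previous and current negatives separated, which this sketch does not model.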
Related papers
- EfficientTrain++: Generalized Curriculum Learning for Efficient Visual Backbone Training [79.96741042766524]
We reformulate the training curriculum as a soft-selection function.
We show that the exposure of natural image contents can be readily modulated by the intensity of data augmentation.
The resulting method, EfficientTrain++, is simple, general, yet surprisingly effective.
arXiv Detail & Related papers (2024-05-14T17:00:43Z) - From Pretext to Purpose: Batch-Adaptive Self-Supervised Learning [32.18543787821028]
This paper proposes an adaptive technique of batch fusion for self-supervised contrastive learning.
It achieves state-of-the-art performance under equitable comparisons.
We suggest that the proposed method may contribute to the advancement of data-driven self-supervised learning research.
arXiv Detail & Related papers (2023-11-16T15:47:49Z) - A Study of Forward-Forward Algorithm for Self-Supervised Learning [65.268245109828]
We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, the transfer performance is significantly lagging behind in all the studied settings.
arXiv Detail & Related papers (2023-09-21T10:14:53Z) - Harnessing the Power of Text-image Contrastive Models for Automatic
Detection of Online Misinformation [50.46219766161111]
We develop a self-learning model to explore contrastive learning in the domain of misinformation identification.
Our model shows superior performance on non-matched image-text pair detection when the training data is insufficient.
arXiv Detail & Related papers (2023-04-19T02:53:59Z) - PRSNet: A Masked Self-Supervised Learning Pedestrian Re-Identification
Method [2.0411082897313984]
This paper designs a pretext task of mask reconstruction to obtain a pre-training model with strong robustness.
The network is then optimized with an improved, centroid-based triplet loss.
This method achieves about 5% higher mAP on Market-1501 and CUHK03 than existing self-supervised learning pedestrian re-identification methods.
arXiv Detail & Related papers (2023-03-11T07:20:32Z) - EfficientTrain: Exploring Generalized Curriculum Learning for Training
Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers).
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z) - Online Continual Learning with Natural Distribution Shifts: An Empirical
Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online (a minimal sketch of this test-then-train loop appears after this list).
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z) - Investigating a Baseline Of Self Supervised Learning Towards Reducing
Labeling Costs For Image Classification [0.0]
The study uses the kaggle.com cats-vs-dogs dataset, MNIST, and Fashion-MNIST to investigate the self-supervised learning task.
Results show that the pretext task in self-supervised learning improves accuracy by around 15% on the downstream classification task.
arXiv Detail & Related papers (2021-08-17T06:43:05Z) - Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote
Sensing Data [64.40187171234838]
Seasonal Contrast (SeCo) is an effective pipeline to leverage unlabeled data for in-domain pre-training of remote sensing representations.
SeCo will be made public to facilitate transfer learning and enable rapid progress in remote sensing applications.
arXiv Detail & Related papers (2021-03-30T18:26:39Z) - Self-Supervised Training Enhances Online Continual Learning [37.91734641808391]
In continual learning, a system must incrementally learn from a non-stationary data stream without catastrophic forgetting.
Self-supervised pre-training could yield features that generalize better than supervised learning.
Our best system achieves a 14.95% relative increase in top-1 accuracy on class incremental ImageNet over the prior state of the art for online continual learning.
arXiv Detail & Related papers (2021-03-25T17:45:27Z)
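As a side note on the test-then-train protocol mentioned in the "Online Continual Learning with Natural Distribution Shifts" entry above, the loop below is a minimal sketch of that evaluate-then-update idea; the model, optimizer, criterion, and stream names are illustrative placeholders, and "adding the batch to the training set" is simplified here to a single gradient step rather than that paper's exact benchmark procedure.

    # Minimal sketch of the test-then-train ("online") protocol: each incoming
    # batch is evaluated first, then used for a single training update.
    # The model, optimizer, criterion, and stream are illustrative placeholders.
    import torch

    def online_continual_loop(model, optimizer, criterion, stream):
        correct, total = 0, 0
        for images, labels in stream:      # non-stationary stream of small batches
            model.eval()
            with torch.no_grad():          # score the batch before learning from it
                preds = model(images).argmax(dim=1)
                correct += (preds == labels).sum().item()
                total += labels.numel()

            model.train()                  # then learn from the batch:
            optimizer.zero_grad()          # here, one gradient step on it
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        return correct / max(total, 1)     # online (prequential) accuracy

The returned value is the online (prequential) accuracy: every batch is scored strictly before the model has trained on it, which is what makes the protocol measure both retention and online learning efficacy.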