CUCL: Codebook for Unsupervised Continual Learning
- URL: http://arxiv.org/abs/2311.14911v1
- Date: Sat, 25 Nov 2023 03:08:50 GMT
- Title: CUCL: Codebook for Unsupervised Continual Learning
- Authors: Cheng Chen, Jingkuan Song, Xiaosu Zhu, Junchen Zhu, Lianli Gao, Hengtao Shen
- Abstract summary: The focus of this study is on Unsupervised Continual Learning (UCL), as it presents an alternative to Supervised Continual Learning.
We propose a method named Codebook for Unsupervised Continual Learning (CUCL), which encourages the model to learn discriminative features that complete the class boundary.
Our method significantly boosts the performance of both supervised and unsupervised methods.
- Score: 129.91731617718781
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The focus of this study is on Unsupervised Continual Learning (UCL), as it
presents an alternative to Supervised Continual Learning, which requires
high-quality manually labeled data. Experiments under the UCL paradigm reveal
a phenomenon in which results on the first few tasks are suboptimal, rendering
the model unsuitable for practical applications. After analyzing this
phenomenon and identifying a lack of diversity as a vital factor, we propose a
method named Codebook for Unsupervised Continual Learning (CUCL), which
encourages the model to learn discriminative features that complete the class
boundary. Specifically, we first introduce Product Quantization to inject
diversity into the representation and apply a cross-quantized contrastive loss
between the original representation and the quantized one to capture
discriminative information. Then, based on the quantizer, we propose an
effective Codebook Rehearsal to address catastrophic forgetting. We conduct
extensive experiments on the CIFAR100, TinyImageNet, and MiniImageNet
benchmark datasets. Our method significantly boosts the performance of both
supervised and unsupervised methods. For instance, on TinyImageNet, our method
yields relative improvements of 12.76% and 7% over SimSiam and BYOL,
respectively.
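To make the mechanism concrete, below is a minimal PyTorch sketch of the three ingredients the abstract names: product quantization over learnable codebooks, a cross-quantized contrastive loss between the original and quantized representations, and a codebook-rehearsal decode step. All names, shapes, and hyperparameters here (ProductQuantizer, num_subspaces=4, temperature=0.2, and so on) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

class ProductQuantizer(torch.nn.Module):
    """Product quantization: split a D-dim feature into M sub-vectors and
    snap each to the nearest codeword in its own learnable codebook."""

    def __init__(self, dim=128, num_subspaces=4, num_codewords=256):
        super().__init__()
        assert dim % num_subspaces == 0
        self.m, self.sub_dim = num_subspaces, dim // num_subspaces
        # One codebook per subspace: (M, K, D/M).
        self.codebooks = torch.nn.Parameter(
            torch.randn(num_subspaces, num_codewords, self.sub_dim))

    def forward(self, z):
        zs = z.view(z.size(0), self.m, self.sub_dim)        # (B, M, D/M)
        # Distance from every sub-vector to every codeword: (B, M, K).
        dists = torch.cdist(zs.transpose(0, 1), self.codebooks).transpose(0, 1)
        idx = dists.argmin(dim=-1)                          # (B, M) integer codes
        # Reassemble the quantized vector from the selected codewords.
        q = torch.stack([self.codebooks[j][idx[:, j]]
                         for j in range(self.m)], dim=1).reshape(z.size(0), -1)
        # argmin passes no gradient, but q stays differentiable w.r.t. the
        # codebooks, and z enters the loss directly, so both sides train.
        return q, idx

def cross_quantized_contrastive_loss(z, q, temperature=0.2):
    """InfoNCE-style loss: each original representation z_i is paired with
    its own quantized version q_i as the positive; the other q_j in the
    batch act as negatives."""
    z, q = F.normalize(z, dim=1), F.normalize(q, dim=1)
    logits = z @ q.t() / temperature                        # (B, B) similarities
    targets = torch.arange(z.size(0), device=z.device)      # diagonal positives
    return F.cross_entropy(logits, targets)

def rehearse(codebooks, stored_idx):
    """Codebook Rehearsal (sketch): past samples are stored only as their
    (M,) integer codes; decoding them through the codebooks reconstructs
    approximate features for replay, far cheaper than storing raw images."""
    b, m = stored_idx.shape
    return torch.stack([codebooks[j][stored_idx[:, j]]
                        for j in range(m)], dim=1).reshape(b, -1)
```

In a full training loop, z would come from the backbone's projection head, the contrastive term would presumably be added to the base self-supervised objective (e.g. SimSiam's), and the integer codes idx would be banked per task for rehearsal.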
Related papers
- LeOCLR: Leveraging Original Images for Contrastive Learning of Visual Representations [4.680881326162484]
Contrastive instance discrimination methods outperform supervised learning in downstream tasks such as image classification and object detection.
A common augmentation technique in contrastive learning is random cropping followed by resizing (see the sketch after this list).
We introduce LeOCLR, a framework that employs a novel instance discrimination approach and an adapted loss function.
arXiv Detail & Related papers (2024-03-11T15:33:32Z)
- From Pretext to Purpose: Batch-Adaptive Self-Supervised Learning [32.18543787821028]
This paper proposes an adaptive batch-fusion technique for self-supervised contrastive learning.
It achieves state-of-the-art performance under equitable comparisons.
We suggest that the proposed method may contribute to the advancement of data-driven self-supervised learning research.
arXiv Detail & Related papers (2023-11-16T15:47:49Z)
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can mitigate the data-hungry requirements of Vision Transformer networks.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning [56.20123080771364]
We develop a model-agnostic Contrastive-Curiosity-Driven Learning Framework (CCLF) for reinforcement learning.
CCLF fully exploits sample importance and improves learning efficiency in a self-supervised manner.
We evaluate this approach on the DeepMind Control Suite, Atari, and MiniGrid benchmarks.
arXiv Detail & Related papers (2022-05-02T14:42:05Z)
- A Practical Contrastive Learning Framework for Single-Image Super-Resolution [51.422185656787285]
We investigate contrastive learning-based single image super-resolution from two perspectives.
We propose a practical contrastive learning framework for SISR, named PCL-SR.
We re-train existing benchmark methods under the proposed PCL-SR framework and achieve superior performance.
arXiv Detail & Related papers (2021-11-27T15:42:12Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL).
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images [79.34600869202373]
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24%.
arXiv Detail & Related papers (2021-05-25T12:22:11Z)
- Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases [34.02639091680309]
Recent gains in performance come from training instance classification models, treating each image and its augmented versions as samples of a single class (see the sketch after this list).
First, we demonstrate that approaches like MoCo and PIRL learn occlusion-invariant representations.
Second, we demonstrate that these approaches obtain further gains from access to a clean object-centric training dataset like ImageNet.
arXiv Detail & Related papers (2020-07-28T00:11:31Z)
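Several of the entries above (LeOCLR, and the occlusion-invariance analysis of MoCo/PIRL) build on the same instance-discrimination recipe: two randomly cropped and resized views of an image are treated as samples of one "class" and pulled together by an InfoNCE loss. The following is a hypothetical minimal PyTorch/torchvision sketch; parameter values are illustrative only.

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

# Random cropping followed by resizing: the common augmentation noted above.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

def instance_discrimination_loss(feats1, feats2, temperature=0.1):
    """InfoNCE over a batch: row i of feats1 (view 1 of image i) has exactly
    one positive, row i of feats2 (view 2 of the same image); every other row
    is a negative, so each image effectively forms its own class."""
    f1 = F.normalize(feats1, dim=1)
    f2 = F.normalize(feats2, dim=1)
    logits = f1 @ f2.t() / temperature               # (B, B) similarity matrix
    targets = torch.arange(f1.size(0), device=f1.device)
    return F.cross_entropy(logits, targets)

# Usage sketch, for a batch of PIL images and any encoder network:
# v1 = encoder(torch.stack([augment(im) for im in images]))
# v2 = encoder(torch.stack([augment(im) for im in images]))
# loss = instance_discrimination_loss(v1, v2)
```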