Co$^2$L: Contrastive Continual Learning
- URL: http://arxiv.org/abs/2106.14413v1
- Date: Mon, 28 Jun 2021 06:14:38 GMT
- Title: Co$^2$L: Contrastive Continual Learning
- Authors: Hyuntak Cha, Jaeho Lee, Jinwoo Shin
- Abstract summary: Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
- Score: 69.46643497220586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent breakthroughs in self-supervised learning show that such algorithms
learn visual representations that can be transferred better to unseen tasks
than joint-training methods relying on task-specific supervision. In this
paper, we find that a similar phenomenon holds in the continual learning
context: contrastively learned representations are more robust against
catastrophic forgetting than jointly trained representations. Based on this novel
observation, we propose a rehearsal-based continual learning algorithm that
focuses on continually learning and maintaining transferable representations.
More specifically, the proposed scheme (1) learns representations using the
contrastive learning objective, and (2) preserves learned representations using
a self-supervised distillation step. We conduct extensive experimental
validation on popular benchmark image classification datasets, where our
method sets a new state of the art.
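The two components above (a contrastive learning objective plus a self-supervised distillation step) can be sketched as follows. This is a minimal NumPy illustration under common contrastive-learning conventions, not the authors' implementation; the function names, temperature values, and the exact distillation formulation (matching pairwise-similarity distributions between the current and a frozen past model) are assumptions for illustration.

```python
import numpy as np

def _log_softmax(x):
    # Row-wise log-softmax, stable against the -inf used to mask the diagonal.
    m = x.max(axis=1, keepdims=True)
    return x - (m + np.log(np.exp(x - m).sum(axis=1, keepdims=True)))

def contrastive_loss(feats, labels, temperature=0.1):
    """Supervised contrastive loss over L2-normalized embeddings.

    feats: (N, D) rows with unit norm; labels: (N,) integer class ids.
    Each anchor pulls together samples sharing its label and pushes
    away all others in the batch.
    """
    n = len(feats)
    eye = np.eye(n, dtype=bool)
    sim = feats @ feats.T / temperature
    sim[eye] = -np.inf                       # exclude self-pairs
    log_prob = _log_softmax(sim)
    pos = (labels[:, None] == labels[None, :]) & ~eye
    per_anchor = -np.where(pos, log_prob, 0.0).sum(1) / np.maximum(pos.sum(1), 1)
    return per_anchor[pos.sum(1) > 0].mean()

def distillation_loss(feats_new, feats_old, temperature=0.5):
    """Self-supervised distillation: keep the current model's pairwise
    similarity distribution close to the frozen past model's."""
    def sim_dist(f):
        s = f @ f.T / temperature
        s[np.eye(len(f), dtype=bool)] = -np.inf
        return np.exp(_log_softmax(s))
    p_old, p_new = sim_dist(feats_old), sim_dist(feats_new)
    # Cross-entropy of the new distribution under the old one.
    return -(p_old * np.log(p_new + 1e-12)).sum(1).mean()
```

In a rehearsal setting, both losses would be computed on mini-batches mixing current-task samples with replayed buffer samples, and the total objective would be the contrastive term plus a weighted distillation term.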
Related papers
- Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z)
- Learning Representations for New Sound Classes With Continual Self-Supervised Learning [30.35061954854764]
We present a self-supervised learning framework for continually learning representations for new sound classes.
We show that representations learned with the proposed method generalize better and are less susceptible to catastrophic forgetting.
arXiv Detail & Related papers (2022-05-15T22:15:21Z)
- Generative or Contrastive? Phrase Reconstruction for Better Sentence Representation Learning [86.01683892956144]
We propose a novel generative self-supervised learning objective based on phrase reconstruction.
Our generative learning objective may yield sufficiently powerful sentence representations, achieving performance on Semantic Textual Similarity tasks on par with contrastive learning.
arXiv Detail & Related papers (2022-04-20T10:00:46Z)
- Contrastive Learning from Demonstrations [0.0]
We show that these representations are applicable for imitating several robotic tasks, including pick and place.
We optimize a recently proposed self-supervised learning algorithm by applying contrastive learning to enhance task-relevant information.
arXiv Detail & Related papers (2022-01-30T13:36:07Z)
- Contrastive Continual Learning with Feature Propagation [32.70482982044965]
Continual learners are designed to learn a stream of tasks with domain and class shifts across tasks.
We propose a general feature-propagation based contrastive continual learning method which is capable of handling multiple continual learning scenarios.
arXiv Detail & Related papers (2021-12-03T04:55:28Z) - Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection for self-supervised contrastive learning.
During contrastive learning, we discuss two strategies to explicitly remove the detected false negatives.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks under a limited compute budget.
arXiv Detail & Related papers (2021-06-07T15:29:14Z)
- Learning Invariant Representation for Continual Learning [5.979373021392084]
A key challenge in continual learning is catastrophic forgetting of previously learned tasks when the agent faces a new one.
We propose a new pseudo-rehearsal-based method, named Learning Invariant Representation for Continual Learning (IRCL).
Disentangling the shared invariant representation helps to learn continually a sequence of tasks, while being more robust to forgetting and having better knowledge transfer.
arXiv Detail & Related papers (2021-01-15T15:12:51Z)
- Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z)
- Self-supervised Co-training for Video Representation Learning [103.69904379356413]
We investigate the benefit of adding semantic-class positives to instance-based InfoNCE (Info Noise Contrastive Estimation) training.
We propose a novel self-supervised co-training scheme to improve the popular InfoNCE loss.
We evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval.
arXiv Detail & Related papers (2020-10-19T17:59:01Z)
- Contrastive learning, multi-view redundancy, and linear models [38.80336134485453]
A popular self-supervised approach to representation learning is contrastive learning.
This work provides a theoretical analysis of contrastive learning in the multi-view setting.
arXiv Detail & Related papers (2020-08-24T01:31:47Z)
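Several entries above (the co-training scheme, the false-negative work, and the multi-view analysis) build on the InfoNCE contrastive objective. A minimal sketch of that loss, assuming paired augmented views of each instance; this is the generic formulation, not any single paper's implementation:

```python
import numpy as np

def info_nce(queries, keys, temperature=0.07):
    """InfoNCE loss for paired views: queries[i] and keys[i] are two
    augmented views of the same instance (the positive pair); all other
    keys in the batch serve as negatives.

    Both inputs are (N, D) arrays of L2-normalized embeddings.
    """
    logits = queries @ keys.T / temperature        # (N, N) similarity matrix
    # Row i should put its probability mass on column i (its positive key).
    m = logits.max(axis=1, keepdims=True)          # stabilize the softmax
    log_prob = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    return -np.diag(log_prob).mean()
```

With N instances, this is the cross-entropy of an N-way classification problem: identify each query's own key within the batch. Semantic-class positives, as in the co-training entry, would extend the numerator to also include keys sharing the query's class label.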
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.