Contrastive Continual Learning with Importance Sampling and
Prototype-Instance Relation Distillation
- URL: http://arxiv.org/abs/2403.04599v1
- Date: Thu, 7 Mar 2024 15:47:52 GMT
- Title: Contrastive Continual Learning with Importance Sampling and
Prototype-Instance Relation Distillation
- Authors: Jiyong Li, Dilshod Azizov, Yang Li, Shangsong Liang
- Abstract summary: We propose Contrastive Continual Learning via Importance Sampling (CCLIS) to preserve knowledge by recovering previous data distributions.
We also present the Prototype-instance Relation Distillation (PRD) loss, a technique designed to maintain the relationship between prototypes and sample representations.
- Score: 14.25441464051506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, because contrastive learning methods yield high-quality
representations, rehearsal-based contrastive continual learning has been
proposed to explore how to continually learn transferable representation
embeddings and avoid the catastrophic forgetting issue of traditional
continual settings. Based on this framework, we propose Contrastive Continual
Learning via Importance Sampling (CCLIS), which preserves knowledge by
recovering previous data distributions with a new strategy for Replay Buffer
Selection (RBS) that minimizes the estimated variance in order to retain hard
negative samples for high-quality representation learning. Furthermore, we
present the Prototype-instance Relation Distillation (PRD) loss, a technique
designed to maintain the relationship between prototypes and sample
representations through a self-distillation process. Experiments on standard
continual learning benchmarks show that our method notably outperforms
existing baselines in knowledge preservation and thereby effectively
counteracts catastrophic forgetting in online settings. The code is available
at https://github.com/lijy373/CCLIS.
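A minimal sketch of the importance-sampling idea behind replay buffer
selection follows, assuming that sample hardness is scored by the per-sample
contrastive loss and that the buffer is filled by weighted sampling without
replacement; the scoring function, buffer size, and helper names are
illustrative placeholders, not the authors' exact variance estimator.

```python
# Hedged sketch: importance-sampling-style replay buffer selection.
# Assumption: harder samples (higher contrastive loss) receive larger
# importance weights, so weighted sampling without replacement tends to
# keep hard negatives while still covering the previous distribution.
import torch


def select_replay_buffer(per_sample_loss: torch.Tensor, buffer_size: int):
    """Return buffer indices sampled with probability proportional to hardness."""
    weights = per_sample_loss.clamp(min=1e-8)   # avoid zero probabilities
    probs = weights / weights.sum()             # normalize to a distribution
    idx = torch.multinomial(probs, num_samples=buffer_size, replacement=False)
    return idx, probs[idx]                      # keep weights for reweighted replay
```

In this reading, one would score a finished task's samples with the current
model's contrastive loss and store the selected indices (plus their sampling
probabilities, for importance-weighted replay) in the memory buffer.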
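The PRD loss can likewise be sketched as a self-distillation term between
prototype-instance similarity distributions; the temperature, cosine
normalization, and function names below are assumptions for illustration
rather than the paper's exact formulation.

```python
# Hedged sketch: a prototype-instance relation distillation (PRD) style loss.
# Each instance's relation to the prototypes is a softmax over cosine
# similarities; the current (student) model is pulled toward the relation
# distribution produced by a frozen copy of the previous (teacher) model.
import torch
import torch.nn.functional as F


def relation_distribution(features, prototypes, temperature=0.1):
    """Softmax distribution over prototype similarities for each instance."""
    features = F.normalize(features, dim=1)      # (batch, dim)
    prototypes = F.normalize(prototypes, dim=1)  # (num_protos, dim)
    logits = features @ prototypes.t() / temperature
    return F.softmax(logits, dim=1)


def prd_loss(student_feats, student_protos, teacher_feats, teacher_protos,
             temperature=0.1):
    """KL divergence from the teacher's relations to the student's."""
    with torch.no_grad():
        p_teacher = relation_distribution(teacher_feats, teacher_protos, temperature)
    p_student = relation_distribution(student_feats, student_protos, temperature)
    return F.kl_div(p_student.log(), p_teacher, reduction="batchmean")
```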
Related papers
- Bayesian Learning-driven Prototypical Contrastive Loss for Class-Incremental Learning [42.14439854721613]
We propose a prototypical network with a Bayesian learning-driven contrastive loss (BLCL) tailored specifically for class-incremental learning scenarios.
Our approach dynamically adapts the balance between the cross-entropy and contrastive loss functions with a Bayesian learning technique.
arXiv Detail & Related papers (2024-05-17T19:49:02Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay the data of experienced tasks when learning new tasks.
However, this is often impractical given memory constraints or data privacy issues.
As an alternative, data-free data replay methods are proposed that invert samples from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to build in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Continual Contrastive Spoken Language Understanding [33.09005399967931]
COCONUT is a class-incremental learning (CIL) method that relies on the combination of experience replay and contrastive learning.
We show that COCONUT can be combined with methods that operate on the decoder side of the model, resulting in further improvements in the metrics.
arXiv Detail & Related papers (2023-10-04T10:09:12Z)
- Continual Learning with Strong Experience Replay [32.154995019080594]
We propose a CL method with Strong Experience Replay (SER).
In addition to distilling past experience from the memory buffer, SER utilizes future experiences mimicked on the current training data.
Experimental results on multiple image classification datasets show that our SER method surpasses the state-of-the-art methods by a noticeable margin.
arXiv Detail & Related papers (2023-05-23T02:42:54Z)
- Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion [50.03041373044267]
We propose a Supervised Contrastive learning framework with adaptive classification criterion for Continual Learning.
Experiments show that CFL achieves state-of-the-art performance and has a stronger ability to overcome forgetting compared with the classification baselines.
arXiv Detail & Related papers (2023-05-20T19:22:40Z)
- Consistent Representation Learning for Continual Relation Extraction [18.694012937149495]
A consistent representation learning method is proposed, which maintains the stability of the relation embedding.
Our method significantly outperforms state-of-the-art baselines and yields strong robustness on the imbalanced dataset.
arXiv Detail & Related papers (2022-03-05T12:16:34Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Semi-Discriminative Representation Loss for Online Continual Learning [16.414031859647874]
Gradient-based approaches have been developed to make more efficient use of compact episodic memory.
We propose a simple method -- Semi-Discriminative Representation Loss (SDRL) -- for continual learning.
arXiv Detail & Related papers (2020-06-19T17:13:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.