Lifelong Learning and Selective Forgetting via Contrastive Strategy
- URL: http://arxiv.org/abs/2405.18663v1
- Date: Tue, 28 May 2024 23:57:48 GMT
- Title: Lifelong Learning and Selective Forgetting via Contrastive Strategy
- Authors: Lianlei Shan, Wenzhang Zhou, Wei Li, Xingyu Ding
- Abstract summary: Lifelong learning aims to train a model that performs well on new tasks while retaining its capacity on previous tasks.
Some practical scenarios require the system to forget undesirable knowledge due to privacy issues, which is called selective forgetting.
We propose a new framework based on a contrastive strategy for Learning with Selective Forgetting (LSF).
- Score: 7.570798966278471
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lifelong learning aims to train a model that performs well on new tasks while retaining its capacity on previous tasks. However, some practical scenarios require the system to forget undesirable knowledge due to privacy issues, which is called selective forgetting. The joint task of the two is dubbed Learning with Selective Forgetting (LSF). In this paper, we propose a new framework based on a contrastive strategy for LSF. Specifically, for the preserved classes (tasks), we make the features extracted from different samples within the same class compact. For the deleted classes, we make the features from different samples of the same class dispersed and irregular, i.e., the network shows no regular response to samples from a specific deleted class, as if it had never been trained at all. By maintaining or disturbing the feature distribution, the forgetting and memory of different classes can be made independent of each other. Experiments are conducted on four benchmark datasets, and our method achieves new state-of-the-art results.
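The abstract states the mechanism only in words, so here is a minimal PyTorch sketch of what a compact-versus-disperse contrastive objective of this kind could look like. The function name `selective_forgetting_loss`, the hinge `margin`, and the pairwise-distance formulation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def selective_forgetting_loss(features, labels, deleted_classes, margin=1.0):
    """Sketch of a compact-vs-disperse objective: pull same-class features
    together for preserved classes, push them apart for deleted classes."""
    features = F.normalize(features, dim=1)  # work with unit-length embeddings
    loss_preserve = features.new_zeros(())
    loss_forget = features.new_zeros(())
    for c in labels.unique():
        feats_c = features[labels == c]
        if feats_c.size(0) < 2:
            continue  # need at least two samples to form intra-class pairs
        dist = torch.cdist(feats_c, feats_c)  # pairwise intra-class distances
        mask = ~torch.eye(dist.size(0), dtype=torch.bool, device=dist.device)
        off_diag = dist[mask]
        if int(c) in deleted_classes:
            # deleted class: reward dispersion until pairs are at least `margin` apart
            loss_forget = loss_forget + F.relu(margin - off_diag).mean()
        else:
            # preserved class: reward compactness (small intra-class distances)
            loss_preserve = loss_preserve + off_diag.pow(2).mean()
    return loss_preserve + loss_forget
```

In this reading, preserved classes collapse into tight clusters while deleted classes lose any regular structure, which mirrors the stated goal of making memory and forgetting independent across classes; the exact loss used in the paper may differ.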
Related papers
- I2CANSAY: Inter-Class Analogical Augmentation and Intra-Class Significance Analysis for Non-Exemplar Online Task-Free Continual Learning [42.608860809847236]
Online task-free continual learning (OTFCL) is a more challenging variant of continual learning.
Existing methods rely on a memory buffer composed of old samples to prevent forgetting.
We propose a novel framework called I2CANSAY that removes the dependence on memory buffers and efficiently learns the knowledge of new data from one-shot samples.
arXiv Detail & Related papers (2024-04-21T08:28:52Z) - Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning [65.57123249246358]
We propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL.
We train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces.
Our prototype complement strategy synthesizes new features for old classes without using any old-class instances.
arXiv Detail & Related papers (2024-03-18T17:58:13Z) - Constructing Sample-to-Class Graph for Few-Shot Class-Incremental Learning [10.111587226277647]
Few-shot class-incremental learning (FSCIL) aims to build a machine learning model that can continually learn new concepts from a few data samples.
In this paper, we propose a Sample-to-Class (S2C) graph learning method for FSCIL.
arXiv Detail & Related papers (2023-10-31T08:38:14Z) - Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in terms of accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z) - Class-Incremental Learning via Knowledge Amalgamation [14.513858688486701]
Catastrophic forgetting has been a significant problem hindering the deployment of deep learning algorithms in the continual learning setting.
We put forward an alternative strategy to handle catastrophic forgetting with knowledge amalgamation (CFA).
CFA learns a student network from multiple heterogeneous teacher models specializing in previous tasks and can be applied to current offline methods.
arXiv Detail & Related papers (2022-09-05T19:49:01Z) - vCLIMB: A Novel Video Class Incremental Learning Benchmark [53.90485760679411]
We introduce vCLIMB, a novel video continual learning benchmark.
vCLIMB is a standardized test-bed to analyze catastrophic forgetting of deep models in video continual learning.
We propose a temporal consistency regularization that can be applied on top of memory-based continual learning methods.
arXiv Detail & Related papers (2022-01-23T22:14:17Z) - Online Continual Learning Via Candidates Voting [7.704949298975352]
We introduce an effective and memory-efficient method for online continual learning under the class-incremental setting.
Our proposed method achieves the best results on several benchmark datasets for online continual learning, including CIFAR-10, CIFAR-100 and CORE-50.
arXiv Detail & Related papers (2021-10-17T15:45:32Z) - Compositional Fine-Grained Low-Shot Learning [58.53111180904687]
We develop a novel compositional generative model for zero- and few-shot learning to recognize fine-grained classes with a few or no training samples.
We propose a feature composition framework that learns to extract attribute features from training samples and combines them to construct fine-grained features for rare and unseen classes.
arXiv Detail & Related papers (2021-05-21T16:18:24Z) - Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning [76.98364915566292]
A common practice is to train a model on the base set first and then transfer to novel classes through fine-tuning.
We propose to transfer partial knowledge by freezing or fine-tuning particular layer(s) in the base model; a sketch of this layer-freezing idea appears after the related papers list.
We conduct extensive experiments on CUB and mini-ImageNet to demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2021-02-08T03:27:05Z) - Fine-grained Angular Contrastive Learning with Coarse Labels [72.80126601230447]
We introduce a novel 'Angular normalization' module that allows us to effectively combine supervised and self-supervised contrastive pre-training.
This work will help to pave the way for future research on this new, challenging, and very practical topic of C2FS classification.
arXiv Detail & Related papers (2020-12-07T08:09:02Z) - Adversarial Continual Learning [99.56738010842301]
We propose a hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features.
Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills.
arXiv Detail & Related papers (2020-03-21T02:08:17Z)
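As a single illustration of the mechanisms summarized above, here is a minimal PyTorch sketch of the layer-freezing idea referenced in the 'Partial Is Better Than All' entry. The ResNet-18 backbone, the `layer4` block, and the helper name `build_partially_frozen_model` are assumptions made for the example, not the paper's actual setup.

```python
import torch
import torchvision

def build_partially_frozen_model(num_novel_classes: int, trainable_blocks=("layer4",)):
    """Sketch: transfer partial knowledge by freezing early layers of a base model
    and fine-tuning only the chosen block(s) plus a new head for novel classes."""
    model = torchvision.models.resnet18(weights=None)  # in practice, load base-set pre-trained weights
    # replace the classification head for the novel classes
    model.fc = torch.nn.Linear(model.fc.in_features, num_novel_classes)
    for name, param in model.named_parameters():
        # keep the head and the chosen block(s) trainable; freeze everything else
        param.requires_grad = name.startswith("fc") or any(
            name.startswith(block) for block in trainable_blocks
        )
    return model

model = build_partially_frozen_model(num_novel_classes=5)
# only the still-trainable parameters are handed to the optimizer
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-2, momentum=0.9
)
```

The frozen layers keep their base-set knowledge while the selected block and the new head adapt to the novel classes; which layers to leave trainable is exactly the design choice the paper investigates.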
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.