SRIL: Selective Regularization for Class-Incremental Learning
- URL: http://arxiv.org/abs/2305.05175v1
- Date: Tue, 9 May 2023 05:04:35 GMT
- Title: SRIL: Selective Regularization for Class-Incremental Learning
- Authors: Jisu Han, Jaemin Na, Wonjun Hwang
- Abstract summary: Class-Incremental Learning aims to create an integrated model that balances plasticity and stability to overcome this challenge.
We propose a selective regularization method that accepts new knowledge while maintaining previous knowledge.
We validate the effectiveness of the proposed method through extensive experimental protocols using CIFAR-100, ImageNet-Subset, and ImageNet-Full.
- Score: 5.810252620242912
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Human intelligence gradually accepts new information and accumulates
knowledge throughout the lifespan. However, deep learning models suffer from a
catastrophic forgetting phenomenon, where they forget previous knowledge when
acquiring new information. Class-Incremental Learning aims to create an
integrated model that balances plasticity and stability to overcome this
challenge. In this paper, we propose a selective regularization method that
accepts new knowledge while maintaining previous knowledge. We first introduce
an asymmetric feature distillation method for old and new classes inspired by
cognitive science, using the gradient of classification and knowledge
distillation losses to determine whether to perform pattern completion or
pattern separation. We also propose a method to selectively interpolate the
weight of the previous model for a balance between stability and plasticity,
and we adjust whether to transfer through model confidence to ensure the
performance of the previous class and enable exploratory learning. We validate
the effectiveness of the proposed method, which surpasses the performance of
existing methods through extensive experimental protocols using CIFAR-100,
ImageNet-Subset, and ImageNet-Full.
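The abstract describes two mechanisms: an asymmetric feature distillation whose application is decided by comparing the gradients of the classification and distillation losses (pattern completion vs. pattern separation), and a confidence-gated interpolation of the previous model's weights. The PyTorch sketch below illustrates one way these ideas could be realized; the `features`/`classifier` split, the cosine-similarity gate, the exemplar loader, the threshold `tau`, and the direction of the confidence check are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of SRIL-style selective regularization (not the authors' code).
import copy
import torch
import torch.nn.functional as F


def asymmetric_distillation_loss(model, old_model, x, y):
    """Gate feature distillation by the agreement between the gradients of the
    classification loss and the distillation loss w.r.t. the shared features."""
    feats = model.features(x)               # assumes the model exposes a feature extractor
    with torch.no_grad():
        old_feats = old_model.features(x)

    logits = model.classifier(feats)
    ce = F.cross_entropy(logits, y)
    kd = F.mse_loss(feats, old_feats)

    # Gradients of both losses w.r.t. the shared features (used only for gating).
    g_ce = torch.autograd.grad(ce, feats, retain_graph=True)[0]
    g_kd = torch.autograd.grad(kd, feats, retain_graph=True)[0]

    # If the gradients agree (cosine > 0), distillation also helps the new task
    # (pattern completion); otherwise drop it for that sample (pattern separation).
    agree = F.cosine_similarity(g_ce.flatten(1), g_kd.flatten(1), dim=1) > 0
    kd_per_sample = (feats - old_feats).pow(2).flatten(1).mean(dim=1)
    selective_kd = (kd_per_sample * agree.float()).mean()

    return ce + selective_kd


@torch.no_grad()
def selective_interpolation(model, old_model, old_class_loader, alpha=0.5, tau=0.9):
    """Pull the weights back toward the previous model only when the current model
    is no longer confident on old-class data; otherwise keep exploring."""
    device = next(model.parameters()).device
    confidences = []
    for x, _ in old_class_loader:           # exemplars / validation data of old classes
        probs = F.softmax(model(x.to(device)), dim=1)
        confidences.append(probs.max(dim=1).values.mean().item())
    if sum(confidences) / len(confidences) >= tau:
        return model                        # confident enough: skip the transfer

    merged = copy.deepcopy(model)
    old_params = dict(old_model.named_parameters())
    for name, p_new in merged.named_parameters():
        p_old = old_params.get(name)
        if p_old is not None and p_old.shape == p_new.shape:  # skip the expanded classifier head
            p_new.mul_(1 - alpha).add_(alpha * p_old.to(p_new.device))
    return merged
```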
Related papers
- Continual Learning with Weight Interpolation [4.689826327213979]
Continual learning requires models to adapt to new tasks while retaining knowledge from previous ones.
This paper proposes a novel approach to continual learning utilizing the weight consolidation method.
arXiv Detail & Related papers (2024-04-05T10:25:40Z)
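As a point of reference for the weight-interpolation idea in the entry above, here is a minimal, generic sketch of merging a previous and a current checkpoint in parameter space; the equal-shape filter and the single `alpha` coefficient are simplifying assumptions, not that paper's exact consolidation rule.

```python
# Generic parameter-space interpolation between two checkpoints (illustrative only).
import torch


def interpolate_state_dicts(old_state, new_state, alpha=0.5):
    """Return alpha * old + (1 - alpha) * new for every tensor present in both
    checkpoints with the same shape; everything else is taken from new_state."""
    merged = {}
    for name, p_new in new_state.items():
        p_old = old_state.get(name)
        if p_old is not None and p_old.shape == p_new.shape and p_new.is_floating_point():
            merged[name] = alpha * p_old + (1.0 - alpha) * p_new
        else:
            # e.g. a newly added classifier row has no old counterpart and is kept as-is
            merged[name] = p_new.clone()
    return merged


# Usage: model.load_state_dict(interpolate_state_dicts(old_model.state_dict(),
#                                                      model.state_dict(), alpha=0.3))
```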
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
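For context on what a prior-based constraint on parameter growth looks like, the sketch below shows a generic quadratic (EWC-style) penalty that anchors parameters to their previous-task values; BAdam's actual Bayesian update is more involved, so treat this purely as an illustration of the family of methods, with `prev_params` and `importances` as assumed inputs.

```python
# Generic prior-based penalty (EWC-style), shown to illustrate the family of
# methods the entry above belongs to; it is not BAdam itself.
import torch


def prior_penalty(model, prev_params, importances, lam=1.0):
    """Quadratic penalty pulling each parameter toward its value after the
    previous task, weighted by a per-parameter importance estimate."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if name in prev_params:
            penalty = penalty + (importances[name] * (p - prev_params[name]).pow(2)).sum()
    return lam * penalty


# Training step: loss = task_loss + prior_penalty(model, prev_params, importances)
```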
- Adaptively Integrated Knowledge Distillation and Prediction Uncertainty for Continual Learning [71.43841235954453]
Current deep learning models often suffer from catastrophic forgetting of old knowledge when continually learning new knowledge.
Existing strategies to alleviate this issue often fix the trade-off between keeping old knowledge (stability) and learning new knowledge (plasticity).
arXiv Detail & Related papers (2023-01-18T05:36:06Z)
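The entry above argues against a fixed stability-plasticity trade-off; one simple way to make the distillation strength adaptive is to scale it by how uncertain the old model is about each sample, as sketched below. The entropy-based weighting is an assumption chosen for illustration and is not necessarily the scheme used in that paper.

```python
# Illustrative adaptive weighting of knowledge distillation by prediction
# uncertainty; the entropy-based weight is an assumption, not the paper's rule.
import math
import torch
import torch.nn.functional as F


def adaptive_kd_loss(new_logits, old_logits, targets, T=2.0):
    """Cross-entropy plus a KD term whose per-sample weight shrinks when the
    old model is uncertain (high-entropy) about the sample."""
    ce = F.cross_entropy(new_logits, targets)

    old_probs = F.softmax(old_logits / T, dim=1)
    entropy = -(old_probs * old_probs.clamp_min(1e-8).log()).sum(dim=1)
    max_entropy = math.log(old_logits.size(1))
    weight = 1.0 - entropy / max_entropy          # confident old model -> strong distillation

    # Distill only over the old-class slice of the new logits.
    log_new = F.log_softmax(new_logits[:, : old_logits.size(1)] / T, dim=1)
    kd_per_sample = F.kl_div(log_new, old_probs, reduction="none").sum(dim=1)
    return ce + (weight * kd_per_sample).mean() * T * T
```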
- Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations [22.289830907729705]
Online continual learning (OCL) aims to enable model learning from a non-stationary data stream to continuously acquire new knowledge as well as retain the learnt one.
The main challenge comes from the "catastrophic forgetting" issue -- the inability to retain previously learnt knowledge while learning new tasks.
arXiv Detail & Related papers (2022-11-10T05:29:43Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- FOSTER: Feature Boosting and Compression for Class-Incremental Learning [52.603520403933985]
Deep neural networks suffer from catastrophic forgetting when learning new categories.
We propose a novel two-stage learning paradigm FOSTER, empowering the model to learn new categories adaptively.
arXiv Detail & Related papers (2022-04-10T11:38:33Z)
- Self-Supervised Learning Aided Class-Incremental Lifelong Learning [17.151579393716958]
We study the issue of catastrophic forgetting in class-incremental learning (Class-IL).
During training in Class-IL, the model has no knowledge of subsequent tasks, so it extracts only the features needed for the tasks learned so far, which are insufficient for joint classification.
We propose to combine self-supervised learning, which can provide effective representations without requiring labels, with Class-IL to partly get around this problem.
arXiv Detail & Related papers (2020-06-10T15:15:27Z)
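A common way to realize the combination described in the entry above is to add a label-free auxiliary objective, e.g. rotation prediction, alongside the incremental classification loss; the rotation pretext task, the `rot_head` module, and the weighting `beta` below are illustrative assumptions, not necessarily that paper's exact setup.

```python
# Illustrative self-supervised auxiliary objective (rotation prediction) added to
# the incremental classification loss; the specific pretext task is an assumption.
import torch
import torch.nn.functional as F


def ssl_aided_loss(model, rot_head, x, y, beta=0.5):
    """Classification loss plus a rotation-prediction loss that shapes features
    without needing class labels."""
    # Build a batch of 0/90/180/270-degree rotations with their rotation labels.
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    x_rot = torch.cat(rotations, dim=0)
    rot_labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))

    feats = model.features(x_rot)                  # assumes a pooled feature extractor
    logits = model.classifier(feats[: x.size(0)])  # class logits from the unrotated images
    ce = F.cross_entropy(logits, y)

    rot_logits = rot_head(feats)                   # small linear head over the features
    rot_loss = F.cross_entropy(rot_logits, rot_labels)
    return ce + beta * rot_loss
```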
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new classes, and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.